
Get started in three steps

Get PromptMetrics running and log your first LLM request with full EU AI Act compliance tracking.

Step 1: Set up your account

Sign up for a free account at app.promptmetrics.ai and create your first workspace. All your data is automatically stored in our eu-central-1 (Frankfurt) region with guaranteed EU data residency. Your workspace will include:
  1. 5 days of log retention
  2. Up to 5,000 requests/month
  3. EU data residency
  4. Basic compliance reporting
Navigate to Settings > API Keys and generate your PromptMetrics API key. This key enables automatic request logging while keeping your LLM provider API keys secure; they never leave your environment.
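Rather than hardcoding the key in source, you can load it from an environment variable at startup. This is a minimal sketch; the variable name PROMPTMETRICS_API_KEY is our suggestion here, not an SDK convention:

```python
import os

def load_api_key(env_var: str = "PROMPTMETRICS_API_KEY") -> str:
    """Return the API key from the environment, failing fast if it is unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before starting the application")
    return key
```

You would then pass `load_api_key()` as the `api_key` argument when constructing the client in the next step.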

Step 2: Install and configure

Install the PromptMetrics SDK for your language:

Python

pip install promptmetrics

JavaScript/TypeScript

npm install promptmetrics
# or
yarn add promptmetrics

LangChain Integration

pip install promptmetrics langchain
Replace your existing OpenAI or Anthropic client with PromptMetrics:
Python with OpenAI

from promptmetrics import PromptMetrics

# Initialize PromptMetrics with EU region
client = PromptMetrics(
    api_key="your_promptmetrics_api_key",
    region="eu-central-1"  # Enforces EU data residency
)

# Use exactly like OpenAI SDK - drop-in replacement
response = client.openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ],
    pl_tags=["quickstart", "demo"],
    pl_compliance_fields={
        "risk_level": "minimal-risk",
        "use_case": "general_qa",
        "human_oversight": False
    }
)

print(response.choices[0].message.content)
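The `pl_compliance_fields` values are free-form strings, so a typo such as `"minimal"` instead of `"minimal-risk"` would silently pollute your compliance reports. A simple client-side guard can catch this before the request is sent. This is an illustrative sketch, not an SDK feature; the accepted strings below mirror the values used in this guide, extended to the EU AI Act's four risk tiers as an assumption:

```python
# Risk-tier strings assumed from this guide's examples plus the EU AI Act's
# four-tier model; adjust to whatever values your workspace standardizes on.
ALLOWED_RISK_LEVELS = {
    "minimal-risk",
    "limited-risk",
    "high-risk",
    "unacceptable-risk",
}

def validate_compliance_fields(fields: dict) -> dict:
    """Reject unknown risk_level values before a request is logged."""
    risk = fields.get("risk_level")
    if risk not in ALLOWED_RISK_LEVELS:
        raise ValueError(f"Unknown risk_level: {risk!r}")
    return fields
```

Running every `pl_compliance_fields` dict through a validator like this keeps the dashboard's risk-level filters reliable.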

JavaScript with OpenAI
import { PromptMetrics } from 'promptmetrics';

// Initialize PromptMetrics with EU region
const client = new PromptMetrics({
  apiKey: 'your_promptmetrics_api_key',
  region: 'eu-central-1'
});

// Use exactly like OpenAI SDK
const response = await client.openai.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of France?' }
  ],
  pl_tags: ['quickstart', 'demo'],
  pl_compliance_fields: {
    risk_level: 'minimal-risk',
    use_case: 'general_qa',
    human_oversight: false
  }
});

console.log(response.choices[0].message.content);

LangChain Integration
from promptmetrics import PromptMetricsCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Initialize callback handler
pm_handler = PromptMetricsCallbackHandler(
    api_key="your_promptmetrics_api_key",
    region="eu-central-1",
    tags=["langchain", "quickstart"],
    compliance_fields={
        "risk_level": "minimal-risk",
        "use_case": "general_qa"
    }
)

# Use with any LangChain model
chat = ChatOpenAI(callbacks=[pm_handler])
response = chat([HumanMessage(content="What is the capital of France?")])
print(response.content)

Step 3: Go live

Navigate to your PromptMetrics dashboard at app.promptmetrics.ai/requests to see your logged request with:
  1. Complete prompt and response
  2. Token usage and cost breakdown
  3. Latency metrics (total duration, time to first token)
  4. EU AI Act compliance fields (risk level, use case, human oversight)
  5. Unique request ID for tracking
  6. Automatic EU data residency verification
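The cost breakdown is derived from the token counts in each logged request. As a rough illustration of how such a calculation works (the per-1K-token prices below are placeholders, not current provider pricing):

```python
# Placeholder per-1K-token prices; real prices vary by model and change over time.
PRICES_PER_1K = {
    "gpt-4": {"prompt": 0.03, "completion": 0.06},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate request cost in USD from logged token counts."""
    p = PRICES_PER_1K[model]
    return (prompt_tokens * p["prompt"] + completion_tokens * p["completion"]) / 1000

# e.g. estimate_cost("gpt-4", 500, 200) -> 0.027
```

The dashboard performs this aggregation for you across all logged requests, so you never need to track prices manually.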
Move your prompts out of code and into the Prompt Registry for version control and compliance tracking:

Python
# Create and publish a template
client.templates.publish(
    prompt_name="customer_support_v1",
    prompt_template=[
        {"role": "system", "content": "You are a helpful customer support assistant."},
        {"role": "user", "content": "{{customer_query}}"}
    ],
    tags=["customer_support", "production"],
    metadata={
        "model": {"provider": "openai", "name": "gpt-4"},
        "parameters": {"temperature": 0.7, "max_tokens": 150},
        "category": "support",
        "team": "customer_service"
    },
    release_labels=["prod"],
    compliance_fields={
        "risk_level": "limited-risk",
        "use_case": "customer_support",
        "requires_approval": False
    },
    commit_message="Initial customer support template"
)

# Use the template in production
response = client.run(
    prompt_name="customer_support_v1",
    input_variables={"customer_query": "How do I reset my password?"},
    compliance_fields={
        "risk_level": "limited-risk",
        "use_case": "customer_support",
        "human_oversight": True
    }
)

Next steps

Now that you have PromptMetrics running, explore the core capabilities covered in the rest of this documentation.
Need help? See our full documentation or join our community.