PromptMetrics provides three ways to analyze your LLM requests: the Global Request Dashboard, the Prompt Library, and the Playground. This guide explains how to interpret the data, audit logs, and risk classifications recorded for every interaction.
Global Request Dashboard
The primary way to view traffic is through the Requests tab in the left-hand navigation menu. This view aggregates all requests, regardless of whether they were initiated via the User Interface (UI) or the Backend.
Key Metrics
For every request logged, the dashboard provides the following high-level data points (see the sketch after this list):
- Timestamp: When the request occurred.
- Prompt Info: The specific template used and the version number.
- LLM Model: The specific model invoked (e.g., GPT-4, Claude 3).
- Status: The success or failure status of the API call.
- Risk Classification: The risk category assigned to the request, based on the EU AI Act risk classifications.
- Performance: Latency (speed) and Cost.
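To make these fields concrete, here is a minimal sketch of how a single logged request might be represented. The field names, and the exact set of risk tiers, are illustrative assumptions rather than PromptMetrics' actual schema.

```typescript
// Hypothetical shape of one logged request row; field names are assumptions,
// not the actual PromptMetrics schema.
type RiskClassification = "minimal" | "limited" | "high" | "unacceptable"; // EU AI Act risk tiers

interface RequestRecord {
  timestamp: string;           // ISO 8601 time the request occurred
  promptTemplate: string;      // template name from the Prompt Library
  promptVersion: number;       // version of that template
  model: string;               // e.g. "gpt-4" or "claude-3-opus"
  status: "success" | "error"; // outcome of the API call
  risk: RiskClassification;    // risk category assigned to the request
  latencyMs: number;           // end-to-end latency in milliseconds
  costUsd: number;             // cost attributed to this single call
}
```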
Deep-Dive Analysis
For granular details on a specific interaction, click the “i” icon next to any request row.
Request Details
This view provides a complete audit trail for the interaction:
- Token Usage: Breakdown of Input vs. Output tokens.
- Payload: The full text of the request sent and the response received.
- Performance Metrics: Detailed latency, token counts, and the specific cost for that single call (a worked cost example follows this list).
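To illustrate how the per-call cost relates to the token breakdown, the sketch below multiplies input and output token counts by per-token prices. The rates shown are placeholder values, not actual model pricing.

```typescript
// Illustrative cost calculation from a token breakdown.
// Prices are placeholders; substitute your model's actual per-token rates.
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

function estimateCostUsd(
  usage: TokenUsage,
  inputPricePer1k: number,  // USD per 1,000 input tokens (assumed rate)
  outputPricePer1k: number, // USD per 1,000 output tokens (assumed rate)
): number {
  return (
    (usage.inputTokens / 1000) * inputPricePer1k +
    (usage.outputTokens / 1000) * outputPricePer1k
  );
}

// Example: 1,200 input tokens and 350 output tokens at placeholder rates.
const cost = estimateCostUsd({ inputTokens: 1200, outputTokens: 350 }, 0.01, 0.03);
// (1200 / 1000) * 0.01 + (350 / 1000) * 0.03 = 0.012 + 0.0105 = 0.0225 USD
console.log(cost.toFixed(4)); // "0.0225"
```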
Debugging & Auditing: You can click the Note icon on any request detail view to attach comments. This is useful for flagging anomalies to your engineering team or marking specific interactions for compliance audits.
Context-Specific Analytics
In addition to the global view, you can analyze requests within the specific context of your templates or experiments.
1. Prompt Library Analytics
To view performance data for a specific prompt:
- Navigate to your Prompt Library.
- Open a specific Folder and Template.
- Click the Analytics tab.
Unlike the global view, this filters data to show only requests made for that specific version of the template within the current workspace.
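Conceptually, the Analytics tab applies a filter like the one sketched below to the same request records shown on the global dashboard. The field names are assumptions, matching the hypothetical record sketch earlier in this guide.

```typescript
// Conceptual filter behind template-scoped analytics: keep only the requests
// that used a given template and version. Field names are assumptions.
function filterByTemplate<T extends { promptTemplate: string; promptVersion: number }>(
  requests: T[],
  template: string,
  version: number,
): T[] {
  return requests.filter(
    (r) => r.promptTemplate === template && r.promptVersion === version,
  );
}
```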
2. Playground Analytics
When testing in the Playground:
- Click the Analytics tab within the playground interface.
- This displays a history of requests made during your current playground session, letting you track iteration cost and latency in real time, as sketched below.
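The rollup behind those session totals could look something like the sketch below: summing cost and averaging latency across the session's requests. The field names are assumptions, matching the hypothetical record sketch above.

```typescript
// Roll up a playground session's requests into total cost and average latency.
// Field names are assumptions, not the actual PromptMetrics schema.
function summarizeSession(
  requests: { costUsd: number; latencyMs: number }[],
): { totalCostUsd: number; avgLatencyMs: number } {
  const totalCostUsd = requests.reduce((sum, r) => sum + r.costUsd, 0);
  const avgLatencyMs =
    requests.length === 0
      ? 0
      : requests.reduce((sum, r) => sum + r.latencyMs, 0) / requests.length;
  return { totalCostUsd, avgLatencyMs };
}
```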