The Playground serves as your interactive testing environment. This module allows you to test prompt templates, manage versions, configure LLM providers, and conduct A/B testing to optimize performance.
Follow the steps below to navigate and configure the Playground.
Accessing the Playground
To begin, navigate to the Playground tab located in the left-hand navigation menu.
Upon entering the Playground, you will see a list of all your available Prompt Templates. Clicking on a specific template will expand the view to show all versions associated with that template.
Configuration & Editing
Once a prompt template is selected, the Editor pane opens. Here, you can make quick edits and configure the parameters for your test run.
1. Select LLM Provider & Model
You must define which model you wish to test your prompt against. The Playground supports various providers, including:
- OpenAI
- Anthropic
- OpenRouter
After selecting the provider, choose the specific LLM Model from the dropdown menu.
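The Playground handles these calls for you, but the same provider and model choice maps directly onto each vendor's SDK if you want to reproduce a run in code. A minimal sketch, assuming the official `openai` and `anthropic` Python SDKs with API keys set in the environment (the model name is illustrative, not a PromptMetrics API):

```python
import os
from openai import OpenAI
import anthropic

# Each Playground provider maps to a different SDK client.
openai_client = OpenAI()                  # reads OPENAI_API_KEY from the environment
anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# OpenRouter exposes an OpenAI-compatible API, so the same client type
# works with a different base URL and key.
openrouter_client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

# The model dropdown maps to the identifier passed on each request.
OPENAI_MODEL = "gpt-4o-mini"  # illustrative model name
```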
2. Set Request Parameters
Customize the model's behavior by adjusting the LLM Request Parameters (such as temperature or token limits). These settings mirror the options available in your Prompt Library.
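These parameters correspond to keyword arguments on the underlying completion call. A hedged sketch of one parameter set, using argument names from the OpenAI chat completions API (the values are examples, not recommendations):

```python
# Request parameters as they would appear on the underlying API call.
request_params = {
    "temperature": 0.7,  # sampling randomness: 0 = near-deterministic, higher = more varied
    "max_tokens": 512,   # hard cap on the number of generated tokens
    "top_p": 1.0,        # nucleus-sampling cutoff; 1.0 disables the filter
}
```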
3. Run Test
Once configured, click the Run Test button to execute the prompt. The system will display:
- The generated output.
- Analytics regarding the specific test run.
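Outside the UI, the equivalent of a test run is one completion call plus whatever analytics you capture around it. A minimal sketch reusing the client, model, and parameters defined above; `run_test` is an illustrative helper, not a PromptMetrics API:

```python
import time

def run_test(client, model, prompt, **params):
    """Execute one prompt and return the output plus simple run analytics."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        **params,
    )
    latency = time.perf_counter() - start
    return {
        "output": response.choices[0].message.content,
        "latency_s": round(latency, 3),
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
    }

result = run_test(openai_client, OPENAI_MODEL, "Summarize our refund policy.", **request_params)
print(result["output"])
```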
Tag Management
To ensure your prompts are easy to find and report on, you can manage tags directly within the Playground.
Note: Proper tagging is essential for keeping your workspace organized and ensures accurate reporting downstream.
To manage tags:
- Locate the tagging section within the template view.
- Add a new tag manually, or
- Choose an existing tag from the pre-populated list.
A/B Testing
The Playground allows you to run A/B tests to compare the performance of two different prompt versions against one another.
Setting up an A/B Test
- Click on the A/B Testing section.
- Select Create New A/B Test.
- Choose the Prompt Template you wish to evaluate.
- Select the two distinct Versions of that template you want to compare.
Viewing Results
Once the test is complete, you can view a side-by-side comparison of both versions. The results page will display the output and analytics for both variants, allowing you to make data-driven decisions on which version to deploy.
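Under the hood, an A/B test amounts to running the same input through both versions and collecting the results side by side. A minimal sketch reusing the hypothetical `run_test` helper from the Run Test step (the two prompt strings stand in for the two template versions you selected):

```python
version_a = "You are a support agent. Answer briefly: {question}"
version_b = "You are a support agent. Answer in a numbered list: {question}"

question = "How do I reset my password?"

# Run the same question through both versions with identical settings.
results = {
    label: run_test(openai_client, OPENAI_MODEL, prompt.format(question=question), **request_params)
    for label, prompt in [("A", version_a), ("B", version_b)]
}

# Side-by-side comparison of output and analytics for both variants.
for label, r in results.items():
    print(f"Version {label}: {r['latency_s']}s, {r['completion_tokens']} tokens")
    print(r["output"], "\n")
```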