
Running Single Inputs
- Preview: The main panel shows a preview of the messages that will be sent to the model, based on your prompt template and current parameter values.
- Parameters: If your prompt uses input parameters (like `{{ topic }}`), they appear in the “Parameters” section. Fill in values here.
- Run: Click “Run prompt”. Latitude sends the request to the configured provider and model.
- Chat Mode: The response appears, and the Playground enters Chat mode. You can continue the conversation turn by turn.
- Reset: Click “New Chat” to clear the conversation and run the prompt again from the beginning, potentially with new parameter values.
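The workflow above assumes a prompt template that declares parameters. A minimal sketch of such a prompt (the frontmatter fields and provider/model names are illustrative; adapt them to your project's configuration):

```
---
provider: OpenAI
model: gpt-4o
---
Write a short, engaging introduction about {{ topic }}.
```

When this prompt is opened in the Playground, `topic` appears as an input field in the “Parameters” section, and each run interpolates the value you provide.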
Parameter Input Methods
You can populate parameters in several ways:
- Manual: Type values directly into the fields.
- Dataset: Load inputs from a Dataset. Each row becomes a separate test case. This is great for batch testing.
- History: Reuse parameter values from previous runs.
Parameter Types
Parameters can accept different input types, configured either in the prompt’s settings or directly in the Playground:
- Text: Standard text input (default).
Advanced users: this field also accepts lists, specified in the format [a1, a2, ...].
- Image: Upload an image file. It is passed to the model as content (requires a vision-capable model such as GPT-4V or Claude 3). Use the `<content-image>` tag in your prompt.
- File: Upload any file type. It is passed as content (requires model support). Use the `<content-file>` tag.
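To illustrate how these types combine, here is a hedged sketch of a prompt that takes both a list-valued text parameter and an image parameter (the parameter names `aspects` and `photo`, and the frontmatter values, are hypothetical):

```
---
provider: OpenAI
model: gpt-4o
---
Describe this image, focusing on {{ aspects }}:
<content-image>{{ photo }}</content-image>
```

In the Playground, `aspects` could be filled with a list such as [color, composition, lighting], while `photo` is set to the Image type and populated by uploading a file.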
Testing Tool Responses
If your prompt uses Tools, the Playground allows you to simulate their responses:
- Run the prompt: Initiate the prompt run as usual.
- Tool Call Request: If the model decides to call a tool, the Playground will pause and display the requested tool call and its arguments.
- Mock Response: Enter the JSON response you want the tool to pretend to return.
- Continue: Click “Send tool response”. Latitude sends the mocked tool response back to the model, which then continues its generation process based on that simulated information.
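As a concrete illustration of the mock-response step: suppose the model requests a hypothetical `get_weather` tool with arguments `{"city": "Madrid"}`. The JSON you enter as the mocked return value might look like this (the field names are invented for the example; use whatever shape your real tool would return):

```json
{
  "temperature_c": 24,
  "condition": "sunny",
  "humidity_pct": 40
}
```

After you click “Send tool response”, the model continues as if the tool had really returned this data, letting you test downstream prompt behavior without a live integration.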
Viewing Logs in the Playground
Every run in the Playground generates a log entry. You can quickly access the detailed log for the current run:
- Click the “Logs” icon or link within the Playground interface (its location may vary slightly).
- This opens the detailed log view, showing inputs, outputs, metadata, timings, and any evaluation results associated with that specific run.
Next Steps
- Learn about Prompt Configuration
- Manage changes using Version Control
- Explore how to use Tools in your prompts