Overview
PromptL enables you to craft a controlled interaction history so that the model can be conditioned on arbitrary prior outputs. In advanced usage, there are two principal mechanisms of interest:

- Mocking Roles: Pretend that the assistant has already replied with specified content.
- Mocking Tool Calls: Emulate a tool invocation and its response so that PromptL treats it as though a real tool had been executed.
1. Mocking Roles
Mocking a role means inserting an assistant response directly into the prompt. The model will behave as though this response actually occurred. This is useful when you want to:

- Seed the conversation with example assistant behavior
- Use few-shot prompting techniques
- Test how the model continues from a predetermined reply
Syntax
Wrap the desired assistant reply between `<assistant>` tags:
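A minimal sketch, in which the surrounding system and user turns and the reply text are illustrative placeholders:

```
<system>
  You answer questions about European geography.
</system>

<user>
  What is the capital of France?
</user>

<assistant>
  The capital of France is Paris.
</assistant>

<user>
  And what about Spain?
</user>
```

The model continues from the mocked reply as if it had produced it, which is how few-shot examples are seeded.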
Note that some models do not allow an assistant message to be the last item in the conversation.
2. Mocking Tool Calls
Mocking a tool call allows you to simulate both the invocation of an external function and its response, as if the model had called a real tool and received structured output. This gives you fine-grained control over how the model interprets prior tool usage and is especially useful in logic-heavy prompt flows that rely on external data.

When to Use
- Testing prompt branches that depend on tool results
- Simulating APIs without triggering real network requests
- Validating error-handling logic by crafting edge-case tool responses
Syntax
A mocked tool call consists of two components:

- A `<tool-call />` element inside an `<assistant>` block, which mimics the model making a function call.
- A corresponding `<tool>` block with the same `id` and `name`, which contains the tool's response.
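A minimal sketch, pairing a call with its response via matching `id` and `name`. The `get_weather` tool, the `call_1` identifier, the response payload, and the `arguments` attribute are illustrative assumptions; only the matching `id` and `name` are required by the pairing described above.

```
<assistant>
  <tool-call id="call_1" name="get_weather" arguments={{ { location: "Barcelona" } }} />
</assistant>

<tool id="call_1" name="get_weather">
  { "temperature": 17, "unit": "celsius" }
</tool>
```

Because the `id` and `name` match, the model sees the payload as the result of its own call and continues from there. The same pattern covers edge cases: place an error payload inside the `<tool>` block to exercise error-handling branches without touching a real API.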