Implementation: Arize AI Phoenix PromptVersion Format
| Knowledge Sources | |
|---|---|
| Domains | Prompt Engineering, Template Processing, Python API |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
The PromptVersion.format() method of the arize-phoenix-client Python package is a concrete tool for rendering parameterized prompt templates, performing variable substitution and SDK-specific formatting in one call.
Description
The PromptVersion.format() method takes a stored prompt version, substitutes its variable placeholders with provided runtime values, and returns an SDK-specific prompt object that can be passed directly to an LLM provider's API. The method performs two operations in sequence:
- Variable substitution -- uses a template formatter (Mustache, f-string, or no-op) to replace placeholders in the template's message content with concrete values from the variables mapping.
- SDK adaptation -- transforms the substituted messages and model configuration into the data structures expected by the target SDK (OpenAI, Anthropic, or Google Generative AI).
The method returns a frozen dataclass that implements Mapping[str, Any], enabling it to be unpacked directly into an SDK client call with the ** operator.
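To make the ** unpacking concrete, here is a minimal sketch of how a frozen dataclass can implement Mapping[str, Any] so that it unpacks directly into a client call. This is an illustrative stand-in, not the actual _FormattedPrompt implementation; the class name FormattedPromptSketch and its fields are assumptions for demonstration.

```python
from dataclasses import dataclass
from typing import Any, Iterator, Mapping, Sequence


# Hypothetical sketch of a frozen dataclass that satisfies Mapping[str, Any],
# so ** unpacking flattens "messages" plus each kwarg into one call payload.
@dataclass(frozen=True)
class FormattedPromptSketch(Mapping[str, Any]):
    messages: Sequence[dict]
    kwargs: Mapping[str, Any]

    def __iter__(self) -> Iterator[str]:
        # Keys seen by ** unpacking: "messages", then each model kwarg.
        yield "messages"
        yield from self.kwargs

    def __len__(self) -> int:
        return 1 + len(self.kwargs)

    def __getitem__(self, key: str) -> Any:
        if key == "messages":
            return self.messages
        return self.kwargs[key]


prompt = FormattedPromptSketch(
    messages=[{"role": "user", "content": "hi"}],
    kwargs={"model": "gpt-4o", "temperature": 0.0},
)
# dict(**prompt) merges messages and kwargs, exactly the shape an
# SDK call like client.chat.completions.create(**prompt) expects.
payload = dict(**prompt)
```

Because keyword unpacking only needs keys() and __getitem__, any Mapping subclass works here; freezing the dataclass keeps the formatted prompt immutable once rendered.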
Usage
Use PromptVersion.format() when:
- Preparing a prompt for an LLM API call by injecting runtime variables (user input, context documents, etc.).
- Converting a prompt between SDK formats -- for example, rendering an OpenAI-authored prompt for the Anthropic API.
- Building type-safe LLM call pipelines where the formatted output provides IDE autocompletion and type checking.
- Testing prompt rendering with known variable values to validate template correctness.
Code Reference
Source Location
- Repository: Phoenix
- File (format method): packages/phoenix-client/src/phoenix/client/types/prompts.py (lines 164-212)
- File (template formatters): packages/phoenix-client/src/phoenix/client/utils/template_formatters.py
  - MustacheBaseTemplateFormatter: lines 119-151
  - FStringBaseTemplateFormatter: lines 92-116
  - NoOpFormatterBase: lines 72-89
  - to_formatter(): lines 169-189
Signature
def format(
self,
*,
variables: Mapping[str, str] = MappingProxyType({}),
formatter: Optional[TemplateFormatter] = None,
sdk: Optional[Literal["openai", "anthropic", "google_generativeai", "boto3"]] = None,
) -> Union[OpenAIPrompt, AnthropicPrompt, GoogleGenerativeaiPrompt]
Import and Quick Start
from phoenix.client import Client
client = Client()
prompt_version = client.prompts.get(prompt_identifier="my-prompt")
formatted = prompt_version.format(variables={"question": "What is AI?"}, sdk="openai")
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| variables | Mapping[str, str] | No | A mapping of variable names to their string values. Defaults to an empty mapping. All variables referenced in the template must be present; missing variables raise a TemplateFormatterError. |
| formatter | Optional[TemplateFormatter] | No | A custom template formatter that overrides the default formatter determined by the prompt's template_format setting. Must implement the TemplateFormatter protocol (a format(template, *, variables) method). Defaults to None, which uses the built-in formatter. |
| sdk | Optional[Literal["openai", "anthropic", "google_generativeai", "boto3"]] | No | The target SDK format. Defaults to None, which auto-detects the SDK from the prompt's model_provider. The provider-to-SDK mapping is: OPENAI/AZURE_OPENAI/DEEPSEEK/XAI/OLLAMA -> "openai", ANTHROPIC -> "anthropic", GOOGLE -> "google_generativeai", AWS -> "boto3" (not yet implemented). |
Outputs
| Name | Type | Description |
|---|---|---|
| return value (OpenAI) | OpenAIPrompt | A frozen dataclass with messages: Sequence[ChatCompletionMessageParam] and kwargs: OpenAIChatCompletionModelKwargs. Contains model name, tools, response_format, and invocation parameters. Implements Mapping[str, Any] for ** unpacking. |
| return value (Anthropic) | AnthropicPrompt | A frozen dataclass with messages: Sequence[MessageParam] and kwargs: AnthropicMessageModelKwargs. Contains model name, system prompt, tools, max_tokens, and invocation parameters. Implements Mapping[str, Any] for ** unpacking. |
| return value (Google) | GoogleGenerativeaiPrompt | A frozen dataclass with messages: Sequence[protos.Content] and kwargs: GoogleModelKwargs. Contains model name, tools, and invocation parameters. Implements Mapping[str, Any] for ** unpacking. |
Template Formatters
| Format | Syntax | Class | Behavior |
|---|---|---|---|
| MUSTACHE (default) | {{variable}}, {{ variable }} | MustacheBaseTemplateFormatter | Matches (?<!\\){{\s*(\w+)\s*}}. Supports escaped braces (\{{). Safe for JSON-embedded templates. |
| F_STRING | {variable} | FStringBaseTemplateFormatter | Uses Python's string.Formatter. Literal braces must be escaped as {{ / }}. |
| NONE | N/A | NoOpFormatterBase | No substitution; the template is returned as-is. For static prompts without variables. |
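The Mustache-style behavior in the table above can be sketched with the documented regex. This is an illustrative re-implementation for understanding the matching rules (whitespace-tolerant placeholders, backslash-escaped literal braces); the actual MustacheBaseTemplateFormatter in Phoenix may differ in details, and the ValueError here stands in for the library's TemplateFormatterError.

```python
import re

# The pattern the documentation gives for MUSTACHE templates:
# a double-brace placeholder, optional inner whitespace, and a negative
# lookbehind so backslash-escaped braces (\{{) are left untouched.
PATTERN = re.compile(r"(?<!\\)\{\{\s*(\w+)\s*\}\}")


def mustache_format(template: str, *, variables: dict) -> str:
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            # Stand-in for TemplateFormatterError (assumption).
            raise ValueError(f"Missing template variable(s): {name}")
        return variables[name]

    return PATTERN.sub(replace, template)


rendered = mustache_format(
    "Hello {{ name }}, escaped \\{{literal}} stays.",
    variables={"name": "world"},
)
```

Note how both {{name}} and {{ name }} would be substituted, while the escaped \{{literal}} sequence survives unchanged, which is what makes this format safe inside JSON-embedded templates.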
Custom Formatter Protocol
class TemplateFormatter(Protocol):
def format(
self,
template: str,
/,
*,
variables: Mapping[str, str],
) -> str: ...
Any object that implements this protocol can be passed as the formatter parameter to override the default formatting behavior.
SDK-Specific Output Details
OpenAIPrompt
Produced when sdk="openai" or when the model provider is OPENAI, AZURE_OPENAI, DEEPSEEK, XAI, or OLLAMA.
@dataclass(frozen=True)
class OpenAIPrompt(_FormattedPrompt):
messages: Sequence[ChatCompletionMessageParam]
kwargs: OpenAIChatCompletionModelKwargs
# kwargs may contain: model, tools, response_format, temperature,
# top_p, max_tokens, stop, frequency_penalty, presence_penalty, etc.
AnthropicPrompt
Produced when sdk="anthropic" or when the model provider is ANTHROPIC.
@dataclass(frozen=True)
class AnthropicPrompt(_FormattedPrompt):
messages: Sequence[MessageParam]
kwargs: AnthropicMessageModelKwargs
# kwargs may contain: model, system, tools, max_tokens,
# temperature, top_p, top_k, stop_sequences, etc.
GoogleGenerativeaiPrompt
Produced when sdk="google_generativeai" or when the model provider is GOOGLE.
@dataclass(frozen=True)
class GoogleGenerativeaiPrompt(_FormattedPrompt):
messages: Sequence[protos.Content]
kwargs: GoogleModelKwargs
# kwargs may contain: model, tools, generation_config, etc.
Usage Examples
Basic Mustache Template Rendering
from phoenix.client import Client
client = Client()
# Retrieve the prompt
prompt_version = client.prompts.get(
prompt_identifier="text-summarizer",
tag="production",
)
# Render with variables for OpenAI
formatted = prompt_version.format(
variables={"text": "The quick brown fox jumps over the lazy dog."},
sdk="openai",
)
# Use directly with the OpenAI SDK
import openai
openai_client = openai.OpenAI()
response = openai_client.chat.completions.create(**formatted)
print(response.choices[0].message.content)
Rendering for Anthropic
from phoenix.client import Client
client = Client()
prompt_version = client.prompts.get(prompt_identifier="qa-assistant")
# Render for Anthropic, regardless of the original model provider
formatted = prompt_version.format(
variables={"question": "What causes the seasons on Earth?"},
sdk="anthropic",
)
# Use directly with the Anthropic SDK
import anthropic
anthropic_client = anthropic.Anthropic()
response = anthropic_client.messages.create(**formatted)
print(response.content[0].text)
Rendering for Google Generative AI
from phoenix.client import Client
client = Client()
prompt_version = client.prompts.get(prompt_identifier="code-explainer")
formatted = prompt_version.format(
variables={"concept": "recursion"},
sdk="google_generativeai",
)
# The formatted object contains messages and kwargs for the Google SDK
print(f"Model: {formatted.kwargs.get('model')}")
print(f"Number of content parts: {len(formatted.messages)}")
Auto-Detected SDK from Model Provider
from phoenix.client import Client
client = Client()
# If the prompt was created with model_provider="ANTHROPIC",
# the SDK is auto-detected -- no need to specify sdk parameter
prompt_version = client.prompts.get(prompt_identifier="anthropic-prompt")
formatted = prompt_version.format(
variables={"input": "Tell me about quantum computing."},
)
# formatted is an AnthropicPrompt because model_provider is ANTHROPIC
import anthropic
anthropic_client = anthropic.Anthropic()
response = anthropic_client.messages.create(**formatted)
Rendering Without Variables (Static Prompt)
from phoenix.client import Client
client = Client()
# A prompt with template_format="NONE" or no variable placeholders
prompt_version = client.prompts.get(prompt_identifier="static-greeting")
# No variables needed
formatted = prompt_version.format(sdk="openai")
import openai
openai_client = openai.OpenAI()
response = openai_client.chat.completions.create(**formatted)
Using a Custom Formatter
from typing import Mapping
from phoenix.client import Client
class UpperCaseFormatter:
"""Custom formatter that uppercases all variable values."""
def format(self, template: str, /, *, variables: Mapping[str, str]) -> str:
import re
for name, value in variables.items():
pattern = rf"(?<!\\){{{{\s*{re.escape(name)}\s*}}}}"
template = re.sub(pattern, value.upper(), template)
return template
client = Client()
prompt_version = client.prompts.get(prompt_identifier="my-prompt")
formatted = prompt_version.format(
variables={"name": "world"},
formatter=UpperCaseFormatter(),
sdk="openai",
)
# The variable "name" will be substituted as "WORLD"
Error Handling
- TemplateFormatterError -- raised when required template variables are missing from the variables mapping. The error message lists the missing variable names: "Missing template variable(s): variable1, variable2".
- NotImplementedError -- raised when sdk="boto3" is specified, as AWS Bedrock support is not yet implemented.
- AssertionError -- raised for internal inconsistencies, such as unexpected template types.
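A self-contained sketch of the missing-variable check and the error message shape described above. The TemplateFormatterError class here is a hypothetical stand-in defined locally so the snippet runs anywhere; in real code you would catch the exception exported by phoenix.client.utils.template_formatters instead.

```python
import re


# Hypothetical stand-in for Phoenix's TemplateFormatterError (assumption);
# defined locally so this sketch has no external dependencies.
class TemplateFormatterError(Exception):
    pass


def check_variables(template: str, variables: dict) -> None:
    # Collect every Mustache-style placeholder referenced by the template.
    referenced = set(re.findall(r"(?<!\\)\{\{\s*(\w+)\s*\}\}", template))
    missing = sorted(referenced - set(variables))
    if missing:
        # Message shape matches the documented error text.
        raise TemplateFormatterError(
            "Missing template variable(s): " + ", ".join(missing)
        )


try:
    check_variables("{{question}} about {{topic}}", {"question": "Why?"})
except TemplateFormatterError as err:
    message = str(err)
```

Catching the formatter error at the call site lets you surface which variables the caller forgot, rather than failing deep inside the LLM request path.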
Provider-to-SDK Mapping
| Model Provider | Default SDK | Notes |
|---|---|---|
| OPENAI | "openai" | Standard OpenAI API |
| AZURE_OPENAI | "openai" | Uses OpenAI-compatible format |
| DEEPSEEK | "openai" | Uses OpenAI-compatible format |
| XAI | "openai" | Uses OpenAI-compatible format |
| OLLAMA | "openai" | Uses OpenAI-compatible format |
| ANTHROPIC | "anthropic" | Native Anthropic format |
| GOOGLE | "google_generativeai" | Google Generative AI format |
| AWS | "boto3" | Not yet implemented |
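The mapping above can be expressed as a small dispatch table. This is a sketch of the auto-detection logic as documented (an explicit sdk argument overrides the provider default, and "boto3" raises NotImplementedError); the function name resolve_sdk is an assumption, not the actual internal helper in prompts.py.

```python
from typing import Optional

# Provider-to-SDK defaults, transcribed from the table above.
PROVIDER_TO_SDK = {
    "OPENAI": "openai",
    "AZURE_OPENAI": "openai",
    "DEEPSEEK": "openai",
    "XAI": "openai",
    "OLLAMA": "openai",
    "ANTHROPIC": "anthropic",
    "GOOGLE": "google_generativeai",
    "AWS": "boto3",
}


def resolve_sdk(model_provider: str, sdk: Optional[str] = None) -> str:
    # An explicit sdk argument takes precedence over auto-detection.
    chosen = sdk or PROVIDER_TO_SDK[model_provider]
    if chosen == "boto3":
        # Mirrors the documented NotImplementedError for AWS Bedrock.
        raise NotImplementedError("AWS Bedrock (boto3) is not yet supported")
    return chosen
```

This shape explains why an OpenAI-authored prompt can be rendered for Anthropic simply by passing sdk="anthropic": the override short-circuits the provider lookup entirely.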