Implementation: Anthropics Anthropic sdk python BetaMessage
| Knowledge Sources | |
|---|---|
| Domains | API_Types, Beta_Features |
| Last Updated | 2026-02-15 12:00 GMT |
Overview
BetaMessage is the complete response model for the beta Messages API. It extends BaseModel and represents the full structure of a message returned by the Anthropic beta API, including content blocks, usage information, stop reason, and metadata about the model, container, and context management.
Description
BetaMessage is a Pydantic model (via BaseModel) that mirrors the JSON response from the beta /v1/messages endpoint. It provides typed access to all fields of the API response:
- id -- A unique object identifier for the message
- content -- A list of `BetaContentBlock` items containing the model's generated output (text, tool use, thinking, etc.)
- model -- The model identifier that generated the response
- role -- Always `"assistant"` for responses
- stop_reason -- Why the model stopped generating (e.g., `"end_turn"`, `"max_tokens"`, `"tool_use"`, `"stop_sequence"`, `"pause_turn"`, `"refusal"`)
- stop_sequence -- Which custom stop sequence was matched, if any
- type -- Always `"message"`
- usage -- Token counts for billing and rate limiting
- container -- Optional container metadata for the code execution tool
- context_management -- Optional context management strategy information
Usage
BetaMessage is the return type when calling the beta Messages API via client.beta.messages.create(). You interact with it to extract the model's response text, tool use calls, thinking blocks, and usage statistics. It is the beta-specific equivalent of the stable Message type.
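As a quick illustration of pulling the response text out of the `content` list, here is a small helper. The `SimpleNamespace` objects below are stand-ins for a real `BetaMessage` and its content blocks, which expose the same attributes (`type`, `text`) described on this page; no API call is made.

```python
from types import SimpleNamespace


def collect_text(message) -> str:
    """Concatenate the text of all text blocks in a message's content list."""
    return "".join(
        block.text for block in message.content if block.type == "text"
    )


# Illustrative stand-in for a BetaMessage returned by
# client.beta.messages.create(); a real response exposes the same attributes.
message = SimpleNamespace(
    content=[
        SimpleNamespace(type="text", text="Hello"),
        SimpleNamespace(type="tool_use", name="get_weather", input={}),
        SimpleNamespace(type="text", text=", world"),
    ]
)

print(collect_text(message))  # Hello, world
```

The same helper works unchanged on an actual `BetaMessage`, since attribute access goes through the Pydantic model fields.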
Code Reference
Source Location
- Repository: Anthropic SDK Python
- File:
src/anthropic/types/beta/beta_message.py
Signature
```python
class BetaMessage(BaseModel):
    id: str
    container: Optional[BetaContainer] = None
    content: List[BetaContentBlock]
    context_management: Optional[BetaContextManagementResponse] = None
    model: Model
    role: Literal["assistant"]
    stop_reason: Optional[BetaStopReason] = None
    stop_sequence: Optional[str] = None
    type: Literal["message"]
    usage: BetaUsage
```
Import
```python
from anthropic.types.beta import BetaMessage
```
I/O Contract
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `id` | `str` | Yes | -- | Unique object identifier |
| `container` | `Optional[BetaContainer]` | No | `None` | Container info for the code execution tool |
| `content` | `List[BetaContentBlock]` | Yes | -- | List of content blocks generated by the model |
| `context_management` | `Optional[BetaContextManagementResponse]` | No | `None` | Context management strategy information |
| `model` | `Model` | Yes | -- | The model that generated the response |
| `role` | `Literal["assistant"]` | Yes | -- | Always `"assistant"` |
| `stop_reason` | `Optional[BetaStopReason]` | No | `None` | Reason the model stopped generating |
| `stop_sequence` | `Optional[str]` | No | `None` | Custom stop sequence that was matched |
| `type` | `Literal["message"]` | Yes | -- | Always `"message"` |
| `usage` | `BetaUsage` | Yes | -- | Token usage for billing and rate limiting |
Stop Reason Values
| Value | Description |
|---|---|
| `"end_turn"` | Model reached a natural stopping point |
| `"max_tokens"` | Exceeded `max_tokens` or the model's maximum |
| `"stop_sequence"` | A custom stop sequence was generated |
| `"tool_use"` | The model invoked one or more tools |
| `"pause_turn"` | A long-running turn was paused (can be continued) |
| `"refusal"` | Streaming classifiers intervened for policy enforcement |
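The table above can drive simple branching after each response. A minimal sketch of such a dispatch; the action labels are illustrative, not part of the SDK:

```python
def next_action(stop_reason):
    """Map a BetaMessage.stop_reason value to a follow-up action label.

    The action names here are illustrative, not part of the SDK.
    """
    actions = {
        "end_turn": "done",                # natural stopping point
        "max_tokens": "raise_max_tokens",  # response was likely truncated
        "stop_sequence": "done",           # a custom stop sequence matched
        "tool_use": "run_tools",           # execute tools, send results back
        "pause_turn": "resume_turn",       # re-send the turn to continue
        "refusal": "handle_refusal",       # policy enforcement intervened
    }
    return actions.get(stop_reason, "unknown")


print(next_action("tool_use"))    # run_tools
print(next_action("pause_turn"))  # resume_turn
```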
Usage Examples
Basic Message Response Handling
```python
import anthropic

client = anthropic.Anthropic()

message = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude!"}],
    betas=["beta-feature-flag"],
)

# Access basic fields
print(f"Message ID: {message.id}")
print(f"Model: {message.model}")
print(f"Stop reason: {message.stop_reason}")

# Extract text content
for block in message.content:
    if block.type == "text":
        print(block.text)
```
Checking Usage Statistics
```python
message = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize quantum computing."}],
    betas=["beta-feature-flag"],
)

print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")
```
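Token counts from `usage` can also be turned into a rough cost estimate. A hedged sketch; the per-million-token rates below are placeholders for illustration, not actual Anthropic pricing:

```python
def estimate_cost(input_tokens, output_tokens,
                  input_price_per_mtok, output_price_per_mtok):
    """Estimate request cost in USD from token counts.

    Prices are given per million tokens; the rates used in the call
    below are placeholders, not actual Anthropic pricing.
    """
    return (input_tokens * input_price_per_mtok
            + output_tokens * output_price_per_mtok) / 1_000_000


# Placeholder rates for illustration only.
cost = estimate_cost(1_200, 350,
                     input_price_per_mtok=3.0, output_price_per_mtok=15.0)
print(f"${cost:.6f}")  # $0.008850
```

In practice the token counts would come from `message.usage.input_tokens` and `message.usage.output_tokens`.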
Handling Tool Use Responses
```python
message = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=[{"name": "get_weather", "description": "Get weather", "input_schema": {...}}],
    messages=[{"role": "user", "content": "What's the weather in NYC?"}],
    betas=["beta-feature-flag"],
)

if message.stop_reason == "tool_use":
    for block in message.content:
        if block.type == "tool_use":
            print(f"Tool: {block.name}, Input: {block.input}")
```
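After a `"tool_use"` stop, the typical next step is to run the requested tool and return its output in a follow-up user message containing a `tool_result` block that echoes the `tool_use` block's `id`. A minimal sketch of that payload in dict form (no client call; the id and result values are made up for illustration):

```python
def build_tool_result_message(tool_use_id: str, result: str) -> dict:
    """Build the user-turn message that returns a tool result to the model.

    The tool_result block must echo the id of the tool_use block it answers.
    """
    return {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": tool_use_id,
                "content": result,
            }
        ],
    }


# Hypothetical id and result, for illustration only.
msg = build_tool_result_message(
    "toolu_01ABC", '{"temp_f": 58, "conditions": "cloudy"}'
)
print(msg["content"][0]["type"])  # tool_result
```

The returned dict would be appended to the `messages` list in the next `client.beta.messages.create()` call so the model can use the tool's output.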
Related Pages
- Anthropics_Anthropic_sdk_python_BetaContentBlock -- The union type for content blocks in the `content` field
- Anthropics_Anthropic_sdk_python_BetaRawMessageDeltaEvent -- Streaming delta events for beta messages
- Anthropics_Anthropic_sdk_python_Message_Response -- The stable (non-beta) counterpart
- Anthropics_Anthropic_sdk_python_Messages_Create -- The API method that returns this type
- Anthropics_Anthropic_sdk_python_BetaToolParam -- Tool definitions used with beta messages