Implementation: anthropic-sdk-python Message Response
| Knowledge Sources | |
|---|---|
| Domains | API_Client, LLM |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
This page documents the Message, ContentBlock, TextBlock, and Usage Pydantic models that represent the parsed response from the Anthropic Messages API. These models provide typed, validated access to all fields returned by the API.
API Signature
```python
class Message(BaseModel):
    id: str
    content: List[ContentBlock]
    model: Model
    role: Literal["assistant"]
    stop_reason: Optional[StopReason] = None
    stop_sequence: Optional[str] = None
    type: Literal["message"]
    usage: Usage
```
Source Location
- Message: src/anthropic/types/message.py, lines 14-118
- TextBlock: src/anthropic/types/text_block.py, lines 12-23
- ContentBlock: src/anthropic/types/content_block.py, lines 16-19
- Usage: src/anthropic/types/usage.py, lines 13-37
- StopReason: src/anthropic/types/stop_reason.py, line 7
Import
```python
from anthropic.types import Message, TextBlock, Usage
from anthropic.types.content_block import ContentBlock
from anthropic.types.stop_reason import StopReason
```
Message Fields
| Field | Type | Description |
|---|---|---|
| `id` | `str` | Unique object identifier. Format and length may change over time. |
| `content` | `List[ContentBlock]` | Array of content blocks generated by the model. Each block has a `type` discriminator. |
| `model` | `Model` | The model that generated the response. |
| `role` | `Literal["assistant"]` | Always `"assistant"`. |
| `stop_reason` | `Optional[StopReason]` | Why generation stopped. `None` in the initial streaming `message_start` event. |
| `stop_sequence` | `Optional[str]` | The matched custom stop sequence, if any. |
| `type` | `Literal["message"]` | Always `"message"`. |
| `usage` | `Usage` | Token usage for billing and rate limiting. |
StopReason Values
```python
StopReason: TypeAlias = Literal[
    "end_turn",       # Natural completion
    "max_tokens",     # Token limit reached
    "stop_sequence",  # Custom stop sequence matched
    "tool_use",       # Model invoked a tool
    "pause_turn",     # Long-running turn paused
    "refusal",        # Streaming classifier intervention
]
```
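In practice, several of these values signal that the conversation is not finished and the caller should act. As a minimal sketch (the helper name and the exact follow-up policy are illustrative, not part of the SDK):

```python
def needs_follow_up(stop_reason):
    """Decide whether the caller should issue another request.

    "tool_use" means the model is waiting for a tool result,
    "max_tokens" means the reply was truncated mid-generation,
    and "pause_turn" means a long-running turn can be resumed.
    """
    return stop_reason in ("tool_use", "max_tokens", "pause_turn")
```

A caller might check `needs_follow_up(message.stop_reason)` after each response before deciding to loop.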
ContentBlock Union
```python
ContentBlock: TypeAlias = Annotated[
    Union[
        TextBlock,                 # type = "text"
        ThinkingBlock,             # type = "thinking"
        RedactedThinkingBlock,     # type = "redacted_thinking"
        ToolUseBlock,              # type = "tool_use"
        ServerToolUseBlock,        # type = "server_tool_use"
        WebSearchToolResultBlock,  # type = "web_search_tool_result"
    ],
    PropertyInfo(discriminator="type"),
]
```
Discrimination is performed on the `type` field of each content block.
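Because every member of the union carries a `type` discriminator, consumer code typically dispatches on that field. A sketch of such a dispatcher (`render_block` is a hypothetical helper, not part of the SDK; `SimpleNamespace` objects stand in for the real block models so the example runs without an API call):

```python
from types import SimpleNamespace

def render_block(block):
    """Render one content block to a display string by
    dispatching on its `type` discriminator."""
    if block.type == "text":
        return block.text
    if block.type == "tool_use":
        return f"[tool call: {block.name}]"
    if block.type in ("thinking", "redacted_thinking"):
        return "[thinking]"
    return f"[{block.type} block]"

# Stand-ins for SDK block models
blocks = [
    SimpleNamespace(type="text", text="Hello"),
    SimpleNamespace(type="tool_use", name="get_weather"),
]
print([render_block(b) for b in blocks])  # ['Hello', '[tool call: get_weather]']
```

The final `return` gives the dispatcher a safe default for block types added in future SDK versions.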
TextBlock
```python
class TextBlock(BaseModel):
    citations: Optional[List[TextCitation]] = None
    text: str
    type: Literal["text"]
```
The primary content block type. The `text` field contains the model's generated text; `citations` is populated when the request included document sources.
Usage
```python
class Usage(BaseModel):
    cache_creation: Optional[CacheCreation] = None
    cache_creation_input_tokens: Optional[int] = None
    cache_read_input_tokens: Optional[int] = None
    inference_geo: Optional[str] = None
    input_tokens: int
    output_tokens: int
    server_tool_use: Optional[ServerToolUsage] = None
    service_tier: Optional[Literal["standard", "priority", "batch"]] = None
```
| Field | Type | Description |
|---|---|---|
| `input_tokens` | `int` | Number of input tokens consumed. |
| `output_tokens` | `int` | Number of output tokens generated. Non-zero even for empty responses. |
| `cache_creation_input_tokens` | `Optional[int]` | Tokens used to create a prompt cache entry. |
| `cache_read_input_tokens` | `Optional[int]` | Tokens read from an existing prompt cache. |
| `cache_creation` | `Optional[CacheCreation]` | Breakdown of cached tokens by TTL. |
| `inference_geo` | `Optional[str]` | Geographic region where inference was performed. |
| `server_tool_use` | `Optional[ServerToolUsage]` | Number of server tool requests. |
| `service_tier` | `Optional[Literal["standard", "priority", "batch"]]` | Tier used for the request. |
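Note that the cache-related counts are reported separately from `input_tokens`, so totalling input-side usage means summing the optional fields while treating `None` as zero. A small illustrative helper (not an SDK function, and not a billing calculation, since cache tokens are priced differently from regular input tokens):

```python
from types import SimpleNamespace

def total_input_tokens(usage):
    """Sum all input-side token counts, treating the optional
    cache fields as zero when absent."""
    return (
        usage.input_tokens
        + (usage.cache_creation_input_tokens or 0)
        + (usage.cache_read_input_tokens or 0)
    )

# Stand-in for a Usage object, so the sketch runs without an API call
usage = SimpleNamespace(
    input_tokens=10,
    cache_creation_input_tokens=None,
    cache_read_input_tokens=2048,
)
print(total_input_tokens(usage))  # 2058
```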
Usage Examples
```python
from anthropic import Anthropic

client = Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude!"}],
)

# Access the response text
for block in message.content:
    if block.type == "text":
        print(block.text)

# Access metadata
print(f"Model: {message.model}")
print(f"Stop reason: {message.stop_reason}")
print(f"Message ID: {message.id}")

# Access token usage
print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")
if message.usage.cache_read_input_tokens:
    print(f"Cache read tokens: {message.usage.cache_read_input_tokens}")

# Shorthand for simple text responses
# (assumes the first block is a TextBlock)
text = message.content[0].text
print(text)
```
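The `message.content[0].text` shorthand raises `AttributeError` when the first block is not a `TextBlock` (for example, a `tool_use` or `thinking` block). A defensive variant that scans for the first text block (`first_text` is an illustrative helper, not part of the SDK; a `SimpleNamespace` stands in for a real `Message` so the sketch runs offline):

```python
from types import SimpleNamespace

def first_text(message):
    """Return the text of the first TextBlock, or None if the
    response contains no text blocks."""
    for block in message.content:
        if block.type == "text":
            return block.text
    return None

# Stand-in for a Message whose first block is a tool call
msg = SimpleNamespace(content=[
    SimpleNamespace(type="tool_use", name="lookup"),
    SimpleNamespace(type="text", text="Done."),
])
print(first_text(msg))  # Done.
```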
Dependencies
- pydantic -- `BaseModel` for model definitions and validation
- typing -- `List`, `Optional`, `Union`
- typing_extensions -- `Literal`, `Annotated`, `TypeAlias`