Implementation: Anthropic Python SDK MessageCountTokensParams
| Knowledge Sources | |
|---|---|
| Domains | API Types, Token Counting |
| Last Updated | 2026-02-15 12:00 GMT |
Overview
MessageCountTokensParams is a TypedDict that defines the request parameters for the stable (non-beta) token counting endpoint. It allows callers to estimate the number of tokens a Messages API request would consume before sending it, supporting tools, system prompts, extended thinking, and output configuration.
Description
The MessageCountTokensParams class in anthropic.types is a typed dictionary for constructing requests to the count_tokens endpoint. It requires two fields: messages (an iterable of MessageParam) and model (a ModelParam).
Optional fields include:
- output_config -- Configuration options for the model's output format via OutputConfigParam.
- system -- A system prompt, either as a string or an iterable of TextBlockParam.
- thinking -- Extended thinking configuration via ThinkingConfigParam.
- tool_choice -- How the model should select from available tools via ToolChoiceParam.
- tools -- Tool definitions the model may use, typed as Iterable[MessageCountTokensToolParam].
Compared to the beta version (anthropic.types.beta.MessageCountTokensParams), this stable version has a simpler tool type (not a broad union of beta tool params), and does not include beta-specific options like mcp_servers, context_management, speed, output_format, or the betas header.
Usage
Use this parameter type when calling client.messages.count_tokens() to estimate token consumption for a stable Messages API request. This is useful for:
- Pre-validating that a request fits within model context limits.
- Estimating costs before committing to a request.
- Debugging token usage in conversations that include tools and system prompts.
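For pre-validation, the count returned by count_tokens can be compared against the model's context window. A minimal sketch, assuming a hypothetical fits_in_context helper and a 200,000-token context window (check the documented limit for the specific model you use):

```python
# Hypothetical helper: decide whether a request plus its planned output
# fits a model's context window. The limit below is an assumption; check
# the documented context window for the model you are using.
CONTEXT_LIMIT = 200_000

def fits_in_context(input_tokens: int, max_tokens: int, limit: int = CONTEXT_LIMIT) -> bool:
    # In practice, input_tokens comes from
    # client.messages.count_tokens(...).input_tokens
    return input_tokens + max_tokens <= limit

print(fits_in_context(150_000, 8_192))  # → True
```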
Code Reference
Source Location
- Repository: Anthropic SDK Python
- File:
src/anthropic/types/message_count_tokens_params.py
Signature
class MessageCountTokensParams(TypedDict, total=False):
messages: Required[Iterable[MessageParam]]
model: Required[ModelParam]
output_config: OutputConfigParam
system: Union[str, Iterable[TextBlockParam]]
thinking: ThinkingConfigParam
tool_choice: ToolChoiceParam
tools: Iterable[MessageCountTokensToolParam]
Import
from anthropic.types import MessageCountTokensParams
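Because a TypedDict is an ordinary dict at runtime, the request can be assembled incrementally and unpacked into the call with **. A sketch (annotating the dict as MessageCountTokensParams gives static type checking when the SDK is installed; the dict itself carries no runtime validation, and the API call is shown commented out):

```python
from typing import Any

# Build the request incrementally; with the SDK installed, annotating as
# MessageCountTokensParams instead of dict[str, Any] enables static checks.
params: dict[str, Any] = {
    "model": "claude-sonnet-4-20250514",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Optional fields can be added conditionally.
params["system"] = "You are a concise assistant."

# token_count = client.messages.count_tokens(**params)
print(sorted(params))  # → ['messages', 'model', 'system']
```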
I/O Contract
Fields
| Field | Type | Required | Description |
|---|---|---|---|
| messages | Iterable[MessageParam] | Yes | Input messages in alternating user/assistant conversational turns. Limit of 100,000 messages. |
| model | ModelParam | Yes | The model to use for token counting. |
| output_config | OutputConfigParam | No | Configuration for the model's output format. |
| system | Union[str, Iterable[TextBlockParam]] | No | System prompt providing context and instructions. |
| thinking | ThinkingConfigParam | No | Extended thinking configuration. |
| tool_choice | ToolChoiceParam | No | How the model should select from available tools. |
| tools | Iterable[MessageCountTokensToolParam] | No | Tool definitions the model may use. |
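The optional fields accept plain dicts matching the param shapes above. As an illustration, a request with extended thinking and a tool-choice setting (the thinking and tool_choice literals reflect the shapes these params are understood to take, but verify them against your SDK version; the API call is commented out):

```python
request = {
    "model": "claude-sonnet-4-20250514",
    "messages": [{"role": "user", "content": "Plan a three-day trip."}],
    # Extended thinking with an explicit token budget.
    "thinking": {"type": "enabled", "budget_tokens": 10_000},
    # tool_choice is typically paired with a tools list in a real request.
    "tool_choice": {"type": "auto"},
}

# token_count = client.messages.count_tokens(**request)
print(request["thinking"]["type"])  # → enabled
```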
Usage Examples
import anthropic
client = anthropic.Anthropic()
# Count tokens for a simple request
token_count = client.messages.count_tokens(
model="claude-sonnet-4-20250514",
messages=[
{"role": "user", "content": "What is the meaning of life?"}
],
)
print(f"Input tokens: {token_count.input_tokens}")
# Count tokens with tools and system prompt
token_count = client.messages.count_tokens(
model="claude-sonnet-4-20250514",
messages=[
{"role": "user", "content": "What's the weather in London?"}
],
system="You are a helpful weather assistant.",
tools=[
{
"name": "get_weather",
"description": "Get current weather for a city.",
"input_schema": {
"type": "object",
"properties": {
"city": {"type": "string"}
},
"required": ["city"],
},
}
],
)
print(f"Input tokens with tools: {token_count.input_tokens}")
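Since count_tokens reports input tokens before any request is committed, a cost estimate reduces to one multiplication. A sketch with a hypothetical per-million-token price (real prices vary by model; check current pricing):

```python
def estimate_input_cost(input_tokens: int, usd_per_million_tokens: float) -> float:
    # Linear cost model: price quoted per million input tokens.
    return input_tokens / 1_000_000 * usd_per_million_tokens

# input_tokens would come from a prior count_tokens call,
# e.g. token_count.input_tokens; the price here is illustrative only.
print(estimate_input_cost(500_000, 3.0))  # → 1.5
```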
Related Pages
- Beta MessageCountTokensParams -- Beta version with expanded tool support and additional parameters.
- BetaUsage -- Usage tracking model showing actual token consumption in responses.