
Implementation: anthropics/anthropic-sdk-python Provider Model Mapping

From Leeroopedia
Knowledge Sources
Domains Cloud_Deployment, LLM, Infrastructure
Last Updated 2026-02-15 00:00 GMT

Overview

This page documents how model identifiers are specified and mapped across cloud providers in the Anthropic Python SDK. The ModelParam type provides IDE autocompletion for known models while accepting arbitrary strings for provider-specific identifiers. The Bedrock and Vertex clients internally rewrite request URLs based on the model identifier via the _prepare_options() method, while the Foundry client leaves request URLs unchanged.

Type: Pattern Doc

Source: src/anthropic/types/model_param.py:L10-35

Import: from anthropic.types import ModelParam

ModelParam Type Definition

from typing import Union
from typing_extensions import Literal, TypeAlias

ModelParam: TypeAlias = Union[
    Literal[
        "claude-opus-4-6",
        "claude-opus-4-5-20251101",
        "claude-opus-4-5",
        "claude-3-7-sonnet-latest",
        "claude-3-7-sonnet-20250219",
        "claude-3-5-haiku-latest",
        "claude-3-5-haiku-20241022",
        "claude-haiku-4-5",
        "claude-haiku-4-5-20251001",
        "claude-sonnet-4-20250514",
        "claude-sonnet-4-0",
        "claude-4-sonnet-20250514",
        "claude-sonnet-4-5",
        "claude-sonnet-4-5-20250929",
        "claude-opus-4-0",
        "claude-opus-4-20250514",
        "claude-4-opus-20250514",
        "claude-opus-4-1-20250805",
        "claude-3-opus-latest",
        "claude-3-opus-20240229",
        "claude-3-haiku-20240307",
    ],
    str,
]

The Union[Literal[...], str] pattern allows type checkers and IDEs to suggest known model names while accepting any string value, enabling provider-specific model identifiers without type errors.
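The same pattern can be reproduced in miniature. The alias and function below are hypothetical illustrations, not part of the SDK:

```python
from typing import Literal, Union

# Miniature version of the Union[Literal[...], str] pattern (hypothetical alias)
ModelId = Union[Literal["claude-sonnet-4-5", "claude-opus-4-5"], str]

def select_model(model: ModelId) -> str:
    """Accepts known literals (with IDE autocompletion) and arbitrary strings."""
    return model

# Both calls pass static type checking:
assert select_model("claude-sonnet-4-5") == "claude-sonnet-4-5"
assert select_model("us.anthropic.claude-sonnet-4-20250514-v1:0").startswith("us.")
```

At runtime both branches of the union are ordinary strings; the Literal arm exists purely for type checkers and editor tooling.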

Provider-Specific Model Identifiers

AWS Bedrock

Bedrock uses cross-region inference profile IDs or full ARNs:

# Cross-region inference profile ID
model = "us.anthropic.claude-sonnet-4-20250514-v1:0"

# Full ARN
model = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-sonnet-4-20250514-v1:0"
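An ARN of this shape can be split into its components with plain string handling. This parser is an illustrative sketch, not an SDK helper:

```python
def parse_bedrock_arn(arn: str) -> dict:
    """Illustrative parser (not part of the SDK) for the ARN shape shown above:
    arn:aws:bedrock:<region>:<account>:foundation-model/<model-id>"""
    prefix, model_id = arn.split("/", 1)
    parts = prefix.split(":")
    return {"region": parts[3], "account": parts[4], "model_id": model_id}

info = parse_bedrock_arn(
    "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-sonnet-4-20250514-v1:0"
)
assert info["region"] == "us-east-1"
assert info["model_id"] == "anthropic.claude-sonnet-4-20250514-v1:0"
```

Note the account segment can be empty (as in the example above), which is why the parser indexes colon-separated fields rather than filtering empty strings.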

Google Cloud Vertex AI

Vertex uses the publisher model format, with @ separating the model name from its version date:

model = "claude-sonnet-4@20250514"
model = "claude-opus-4@20250514"
model = "claude-haiku-4-5@20251001"
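Converting a date-suffixed Anthropic model name into this @ form is a mechanical rewrite of the trailing date. The helper below is an illustration only (the SDK does not perform this conversion; callers pass Vertex identifiers directly):

```python
import re

def to_vertex_id(model: str) -> str:
    """Illustrative helper (not part of the SDK): rewrite a trailing
    -YYYYMMDD date suffix into Vertex's @YYYYMMDD form."""
    return re.sub(r"-(\d{8})$", r"@\1", model)

assert to_vertex_id("claude-sonnet-4-20250514") == "claude-sonnet-4@20250514"
# Names without an 8-digit date suffix pass through unchanged:
assert to_vertex_id("claude-sonnet-4-5") == "claude-sonnet-4-5"
```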

Azure AI Foundry

Foundry uses standard Anthropic model names:

model = "claude-sonnet-4-20250514"
model = "claude-opus-4-20250514"

URL Rewriting via _prepare_options()

Bedrock URL Rewriting

Source: src/anthropic/lib/bedrock/_client.py:L38-67

def _prepare_options(input_options: FinalRequestOptions) -> FinalRequestOptions:
    options = model_copy(input_options, deep=True)

    if is_dict(options.json_data):
        options.json_data.setdefault("anthropic_version", DEFAULT_VERSION)
        # Propagate anthropic-beta header to body
        if is_given(options.headers):
            betas = options.headers.get("anthropic-beta")
            if betas:
                options.json_data.setdefault("anthropic_beta", betas.split(","))

    if options.url in {"/v1/complete", "/v1/messages", "/v1/messages?beta=true"} and options.method == "post":
        model = options.json_data.pop("model", None)
        model = urllib.parse.quote(str(model), safe=":")
        stream = options.json_data.pop("stream", False)
        if stream:
            options.url = f"/model/{model}/invoke-with-response-stream"
        else:
            options.url = f"/model/{model}/invoke"

    return options

Resulting URL pattern:

  • Non-streaming: /model/us.anthropic.claude-sonnet-4-20250514-v1:0/invoke
  • Streaming: /model/us.anthropic.claude-sonnet-4-20250514-v1:0/invoke-with-response-stream
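The URL construction above can be sketched with just the standard library. The `safe=":"` argument to `urllib.parse.quote` is what preserves the `v1:0` version suffix in the path:

```python
import urllib.parse

def bedrock_invoke_url(model: str, stream: bool) -> str:
    """Standalone sketch of the Bedrock URL rewrite shown above."""
    # Quote the model identifier but keep ':' literal so the
    # version suffix (e.g. v1:0) survives in the path
    quoted = urllib.parse.quote(model, safe=":")
    suffix = "invoke-with-response-stream" if stream else "invoke"
    return f"/model/{quoted}/{suffix}"

assert bedrock_invoke_url("us.anthropic.claude-sonnet-4-20250514-v1:0", stream=False) == \
    "/model/us.anthropic.claude-sonnet-4-20250514-v1:0/invoke"
```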

Vertex URL Rewriting

Source: src/anthropic/lib/vertex/_client.py:L380-412

def _prepare_options(
    input_options: FinalRequestOptions,
    *,
    project_id: str | None,
    region: str,
) -> FinalRequestOptions:
    options = model_copy(input_options, deep=True)

    if is_dict(options.json_data):
        options.json_data.setdefault("anthropic_version", DEFAULT_VERSION)

    if options.url in {"/v1/messages", "/v1/messages?beta=true"} and options.method == "post":
        model = options.json_data.pop("model")
        stream = options.json_data.get("stream", False)
        specifier = "streamRawPredict" if stream else "rawPredict"
        options.url = (
            f"/projects/{project_id}/locations/{region}"
            f"/publishers/anthropic/models/{model}:{specifier}"
        )

    return options

Resulting URL pattern:

  • Non-streaming: /projects/my-project/locations/us-east5/publishers/anthropic/models/claude-sonnet-4@20250514:rawPredict
  • Streaming: /projects/my-project/locations/us-east5/publishers/anthropic/models/claude-sonnet-4@20250514:streamRawPredict
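The Vertex path template can likewise be sketched as a standalone function mirroring the rewrite above:

```python
def vertex_predict_url(project_id: str, region: str, model: str, stream: bool) -> str:
    """Standalone sketch of the Vertex URL rewrite shown above."""
    specifier = "streamRawPredict" if stream else "rawPredict"
    return (
        f"/projects/{project_id}/locations/{region}"
        f"/publishers/anthropic/models/{model}:{specifier}"
    )

assert vertex_predict_url("my-project", "us-east5", "claude-sonnet-4@20250514", stream=True) == (
    "/projects/my-project/locations/us-east5"
    "/publishers/anthropic/models/claude-sonnet-4@20250514:streamRawPredict"
)
```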

Foundry URL Handling

The Foundry client does not rewrite URLs based on model identifiers. It inherits the base Anthropic client's behavior, posting to /v1/messages with the model name in the JSON body. Azure's infrastructure handles model routing.

Version Headers

Bedrock and Vertex inject a provider-specific anthropic_version value into the request body, while Foundry inherits the base client's behavior:

Provider   anthropic_version Value
Bedrock    bedrock-2023-05-31
Vertex     vertex-2023-10-16
Foundry    Inherits from base client

Related Pages

Implements Principle

Uses Heuristic
