Workflow: Anthropic Python SDK Cloud Provider Deployment
| Knowledge Sources | |
|---|---|
| Domains | LLMs, Cloud_Deployment, Infrastructure |
| Last Updated | 2026-02-15 12:00 GMT |
Overview
End-to-end process for deploying Claude through cloud provider platforms (AWS Bedrock, Google Vertex AI, Azure AI Foundry) using the Anthropic Python SDK's provider-specific client classes.
Description
This workflow demonstrates how to use Claude through major cloud platforms instead of the direct Anthropic API. The SDK provides dedicated client classes for each provider (AnthropicBedrock, AnthropicVertex, AnthropicFoundry) that handle platform-specific authentication, endpoint routing, and request transformation while maintaining the same Messages API interface. This allows applications to leverage existing cloud infrastructure, comply with data residency requirements, and use cloud-native billing and access controls.
Usage
Execute this workflow when your organization requires Claude access through an existing cloud provider agreement, when data residency or compliance requirements mandate using a specific cloud platform, or when you want to leverage cloud-native features like VPC networking, IAM policies, or consolidated billing.
Execution Steps
Step 1: Provider Selection and Dependencies
Choose the target cloud provider and install the required additional dependencies. Each provider has specific authentication mechanisms and may require extra Python packages beyond the base anthropic SDK.
Key considerations:
- AWS Bedrock requires boto3 for AWS credential management
- Google Vertex AI requires google-auth for GCP authentication
- Azure AI Foundry uses API key or Azure AD token authentication
- Install provider extras via pip (e.g., anthropic[bedrock], anthropic[vertex])
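The dependency check above can be sketched in Python. This is a hypothetical helper (the function name and the per-provider package list are assumptions, not part of the SDK); it only verifies that the extra packages the workflow names are importable.

```python
import importlib.util


def _importable(module_name: str) -> bool:
    """Return True if the module can be found without importing it."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:
        # find_spec raises if a parent package (e.g. "google") is missing.
        return False


def provider_deps_available(provider: str) -> bool:
    """Check whether the extra dependencies for a given provider are installed."""
    required = {
        "bedrock": ["boto3"],       # installed via: pip install 'anthropic[bedrock]'
        "vertex": ["google.auth"],  # installed via: pip install 'anthropic[vertex]'
        "foundry": [],              # API key / Azure AD token only; no extra package
    }
    return all(_importable(mod) for mod in required[provider])
```

Running this before client initialization gives a clearer failure message than an ImportError deep inside the SDK.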
Step 2: Authentication Configuration
Configure authentication credentials for the selected cloud provider. Each provider uses its native authentication mechanism rather than an Anthropic API key.
Key considerations:
- Bedrock: Uses AWS credentials (environment variables, ~/.aws/credentials, IAM roles, or boto3 session)
- Vertex AI: Uses Google Cloud credentials (service account, application default credentials, or explicit access token)
- Azure Foundry: Uses either an API key or an Azure AD token provider function (mutually exclusive)
- Region/project configuration is provider-specific and typically set via environment variables or constructor parameters
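A sketch of the environment-based configuration described above. The AWS variable names are the standard ones boto3 reads, and CLOUD_ML_REGION / ANTHROPIC_VERTEX_PROJECT_ID are the variables the SDK's Vertex client consults; the placeholder values are assumptions to replace with your own.

```python
import os

# Bedrock: region plus AWS credentials (a named profile here; access keys,
# IAM roles, or an explicit boto3 session work equally well).
os.environ.setdefault("AWS_REGION", "us-east-1")
os.environ.setdefault("AWS_PROFILE", "default")

# Vertex AI: region and project, read by the client when not passed
# explicitly to the constructor.
os.environ.setdefault("CLOUD_ML_REGION", "us-east5")
os.environ.setdefault("ANTHROPIC_VERTEX_PROJECT_ID", "my-gcp-project")
```

setdefault keeps any values already set in the shell, so the snippet only fills in gaps rather than overriding real configuration.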
Step 3: Provider Client Initialization
Create the provider-specific client instance instead of the standard Anthropic client. The provider clients inherit the same interface but add provider-specific URL routing, header transformation, and authentication injection.
Key considerations:
- AnthropicBedrock auto-discovers AWS region and credentials from the environment
- AnthropicVertex requires region and project_id (from environment or explicit parameters)
- AnthropicFoundry requires a resource name and authentication credential
- All provider clients offer both synchronous and asynchronous variants
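A minimal initialization sketch for the Bedrock and Vertex clients (the Foundry client's constructor parameters are not shown here, so it is omitted). The guard lets the snippet degrade gracefully where the SDK or a provider extra is not installed; the region and project values are illustrative assumptions.

```python
try:
    from anthropic import AnthropicBedrock, AnthropicVertex
    # Async variants (AsyncAnthropicBedrock, AsyncAnthropicVertex) follow
    # the same constructor signatures.

    # Explicit region shown here; omit it to let the client auto-discover
    # region and credentials from the environment.
    bedrock_client = AnthropicBedrock(aws_region="us-east-1")
    vertex_client = AnthropicVertex(region="us-east5", project_id="my-gcp-project")
except Exception:
    # SDK or provider extras not installed in this environment.
    bedrock_client = vertex_client = None
```

Credentials are resolved per request, so constructing a client does not by itself validate that authentication is configured correctly.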
Step 4: Model Identifier Mapping
Use the provider-specific model naming convention when specifying the model parameter. Each cloud platform has its own model identifier format that differs from the direct Anthropic API.
Key considerations:
- Bedrock: Uses format like "anthropic.claude-sonnet-4-5-20250929-v1:0"
- Vertex AI: Uses format like "claude-sonnet-4@20250514"
- Azure Foundry: Uses the standard Anthropic model names
- Model availability varies by provider and region
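The naming differences above can be captured in a small lookup table. The Bedrock and Vertex identifiers below are the examples from this workflow, and the Foundry entry assumes a standard Anthropic model name; check your provider's catalog for the models and versions actually available in your region.

```python
# Illustrative mapping from provider to that provider's model identifier.
MODEL_IDS = {
    "bedrock": "anthropic.claude-sonnet-4-5-20250929-v1:0",
    "vertex": "claude-sonnet-4@20250514",
    "foundry": "claude-sonnet-4-5",  # standard Anthropic model name
}


def model_for(provider: str) -> str:
    """Return the model identifier in the given provider's naming convention."""
    return MODEL_IDS[provider]
```

Centralizing the mapping keeps the rest of the application provider-agnostic: only the client object and the model string change per platform.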
Step 5: API Interaction
Use the same Messages API methods (create, stream) as the standard Anthropic client. The provider client transparently handles request transformation, endpoint routing, and response parsing so that application code remains provider-agnostic.
Key considerations:
- The messages.create() and messages.stream() methods work identically across all providers
- Some features may not be available on all providers (e.g., batch processing is not supported on Bedrock or Azure)
- Token counting may not be available on all providers
- Streaming is supported on all three providers
- Error types are mapped from provider-specific HTTP errors to standard Anthropic exception classes
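Because the Messages API surface is identical across providers, a single helper can serve all of them; only the client and model identifier passed in differ. This is a sketch (the function name and parameters are assumptions), and no network call happens until it is invoked with a real client.

```python
def ask_claude(client, model: str, prompt: str) -> str:
    """Send a single-turn prompt through any provider client and return the text.

    Works unchanged with AnthropicBedrock, AnthropicVertex, or the Azure
    Foundry client, since all expose the same messages.create() method.
    (messages.stream() is interchangeable in the same way for streaming.)
    """
    message = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```

Provider-specific HTTP failures surface as the SDK's standard exception classes, so error handling around this helper also stays provider-agnostic.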