
Implementation:BerriAI Litellm Litellm Global Configuration

From Leeroopedia
Knowledge Sources BerriAI/litellm - litellm/__init__.py
Domains LLM Integration, Configuration Management, Credential Management
Last Updated 2026-02-15

Overview

Concrete tool for configuring LLM provider credentials and global runtime settings, provided by the litellm Python package through module-level global variables defined in litellm/__init__.py.

Description

The litellm module exposes a flat namespace of global variables that govern authentication, behavioral defaults, and provider-specific settings for all subsequent API calls. On import, the module loads environment variables via dotenv.load_dotenv() (in DEV mode) and initializes defaults for API keys, base URLs, token limits, retry policies, callback hooks, and more. Users configure LiteLLM by assigning values to these module-level attributes (e.g., litellm.api_key = "sk-...") or by setting the corresponding environment variables before import.

Usage

Import and assign global configuration before making any litellm.completion() or litellm.acompletion() calls. This is typically done at application startup.
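Applications often centralize these assignments in a single startup helper so configuration happens exactly once. A minimal sketch (assuming the litellm package is installed; `configure_litellm` is a hypothetical helper name and the key value is a placeholder):

```python
import litellm

def configure_litellm() -> None:
    """Apply global LiteLLM settings once at application startup.

    The key below is a placeholder; real credentials should come from
    environment variables or a secrets store.
    """
    litellm.api_key = "sk-fallback-placeholder"  # generic fallback key
    litellm.drop_params = True                   # drop unsupported provider params
    litellm.max_tokens = 512                     # override the module default

configure_litellm()
```

Because these are plain module attributes, every `litellm.completion()` call made after `configure_litellm()` runs inherits the settings.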

Code Reference

Source Location: litellm/__init__.py, lines 1-1738

Key Attributes (Signature):

# Authentication globals
litellm.api_key: Optional[str] = None
litellm.openai_key: Optional[str] = None
litellm.azure_key: Optional[str] = None
litellm.anthropic_key: Optional[str] = None
litellm.cohere_key: Optional[str] = None
litellm.replicate_key: Optional[str] = None
litellm.huggingface_key: Optional[str] = None
litellm.togetherai_api_key: Optional[str] = None
litellm.vertex_project: Optional[str] = None
litellm.vertex_location: Optional[str] = None

# Behavioral globals
litellm.max_tokens: int = 256  # DEFAULT_MAX_TOKENS
litellm.drop_params: bool = False
litellm.modify_params: bool = False
litellm.retry: bool = True
litellm.request_timeout: float = 6000  # default timeout in seconds
litellm.ssl_verify: Union[str, bool] = True
litellm.telemetry: bool = True

# Callback globals
litellm.success_callback: List[Union[str, Callable]] = []
litellm.failure_callback: List[Union[str, Callable]] = []
litellm.callbacks: List[Union[Callable, str]] = []

# Logging globals
litellm.turn_off_message_logging: Optional[bool] = False
litellm.log_raw_request_response: bool = False

Import:

import litellm

I/O Contract

Inputs

Parameter | Type | Description
litellm.api_key | Optional[str] | Generic API key used as a fallback when no provider-specific key is set.
litellm.openai_key | Optional[str] | OpenAI-specific API key.
litellm.azure_key | Optional[str] | Azure OpenAI-specific API key.
litellm.anthropic_key | Optional[str] | Anthropic-specific API key.
litellm.max_tokens | int | Default maximum tokens for completions (defaults to DEFAULT_MAX_TOKENS).
litellm.drop_params | bool | When True, silently drops parameters unsupported by the target provider instead of raising errors.
litellm.modify_params | bool | When True, allows LiteLLM to modify parameters for provider compatibility.
litellm.retry | bool | When True, enables automatic retry on transient failures.
litellm.success_callback | List | List of callback functions or integration names invoked after successful completions.
litellm.failure_callback | List | List of callback functions or integration names invoked after failed completions.
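The provider-specific-over-generic key precedence described above can be illustrated with a small self-contained sketch. Note that `resolve_api_key` is a hypothetical helper written for this page, not part of litellm, and litellm's real resolution also consults environment variables:

```python
from typing import Optional

def resolve_api_key(provider_key: Optional[str],
                    generic_key: Optional[str]) -> Optional[str]:
    """Illustrative fallback rule: a provider-specific key
    (e.g. litellm.openai_key) wins; otherwise the generic
    litellm.api_key is used."""
    return provider_key if provider_key is not None else generic_key

# Provider key set: it takes precedence over the generic fallback.
print(resolve_api_key("sk-openai-xxx", "sk-fallback"))  # sk-openai-xxx
# No provider key: fall back to the generic key.
print(resolve_api_key(None, "sk-fallback"))             # sk-fallback
```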

Outputs

Output | Type | Description
Module namespace | Module attributes | All globals are available as litellm.<attribute> for downstream consumption by completion(), acompletion(), and provider handlers.

Usage Examples

Setting provider keys via module-level assignment:

import litellm

# Set credentials for multiple providers
litellm.api_key = "sk-fallback-key"
litellm.openai_key = "sk-openai-xxx"
litellm.anthropic_key = "sk-ant-xxx"
litellm.azure_key = "azure-key-xxx"

# Configure behavioral defaults
litellm.drop_params = True
litellm.max_tokens = 1024
litellm.retry = True

# Now all completion calls inherit these settings
response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)

Setting credentials via environment variables:

import os
os.environ["OPENAI_API_KEY"] = "sk-openai-xxx"
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-xxx"

import litellm  # dotenv.load_dotenv() is called automatically in DEV mode

response = litellm.completion(
    model="claude-3-opus-20240229",
    messages=[{"role": "user", "content": "Hello!"}]
)

Configuring callbacks for observability:

import litellm

litellm.success_callback = ["langfuse"]
litellm.failure_callback = ["langfuse"]

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain callbacks."}]
)
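Besides integration names such as "langfuse", the callback lists also accept plain Python callables. A sketch, assuming litellm is installed; the four-argument signature follows litellm's documented custom-callback interface, while the function name and the logged fields are illustrative choices:

```python
import litellm

def log_success(kwargs, completion_response, start_time, end_time):
    """Custom success hook: receives the call kwargs, the provider
    response, and start/end timestamps (datetime objects)."""
    duration = (end_time - start_time).total_seconds()
    print(f"model={kwargs.get('model')} finished in {duration:.2f}s")

# Register the callable alongside or instead of integration names.
litellm.success_callback = [log_success]
```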
