Implementation: BerriAI LiteLLM Proxy Config Loader
| Knowledge Sources | Domains | Last Updated |
|---|---|---|
| BerriAI/litellm repository | LLM Gateway, Configuration Management, Proxy Infrastructure | 2026-02-15 |
Overview
Concrete tool, provided by the LiteLLM ProxyConfig class, for loading and applying YAML-based proxy configuration.
Description
The ProxyConfig class is the central abstraction for all configuration loading and updating logic in the LiteLLM proxy server. It reads a YAML configuration file, resolves environment variables, applies litellm module-level settings (such as caching, callbacks, and parameter behavior), constructs or reconfigures the litellm.Router with declared model deployments, and applies general server settings. The load_config method returns a tuple of the initialized router, the model list, and the general settings dictionary, which are then assigned to the proxy server's global state.
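A minimal sketch of how the returned tuple can be applied to module-level state. The variable and function names below are illustrative, not the proxy server's actual globals:

```python
# Illustrative sketch: applying load_config() results to module-level state.
# llm_router, llm_model_list, general_settings, and apply_config are
# illustrative names, not the proxy server's actual globals.
from typing import Optional

import litellm
from litellm.proxy.proxy_server import ProxyConfig

llm_router: Optional[litellm.Router] = None
llm_model_list: list = []
general_settings: dict = {}

async def apply_config(config_file_path: str) -> None:
    global llm_router, llm_model_list, general_settings
    proxy_config = ProxyConfig()
    # load_config() builds (or updates) the Router and returns server-level settings
    llm_router, llm_model_list, general_settings = await proxy_config.load_config(
        router=llm_router,
        config_file_path=config_file_path,
    )
```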
The class also supports:
- Validating whether a file is a YAML file via is_yaml().
- Loading raw YAML content from disk via _load_yaml_file().
- Retrieving the merged config from file or database via get_config().
- Maintaining internal state of the loaded config for subsequent reads.
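A hedged sketch of how these helpers might be combined when validating and reading a config file. The parameter names are assumptions; consult the class definition for the exact signatures:

```python
# Hypothetical usage of the helper methods listed above.
# The config_file_path parameter name is an assumption, not a verified signature.
from litellm.proxy.proxy_server import ProxyConfig

proxy_config = ProxyConfig()
config_path = "/app/config.yaml"

if proxy_config.is_yaml(config_file_path=config_path):
    # get_config() returns the merged configuration from file or database
    config = await proxy_config.get_config(config_file_path=config_path)
    print(config.get("general_settings", {}))
```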
Usage
Import and use ProxyConfig when initializing or reconfiguring the LiteLLM proxy server. It is instantiated as a module-level singleton (proxy_config) and called during the proxy_startup_event lifespan handler or when a configuration reload is triggered.
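A hedged sketch of wiring load_config into a FastAPI lifespan handler, in the spirit of the proxy_startup_event described above. The handler body and the CONFIG_FILE_PATH environment variable are illustrative, not the proxy server's actual startup code:

```python
# Illustrative lifespan wiring; not the proxy server's actual startup code.
import os
from contextlib import asynccontextmanager

from fastapi import FastAPI
from litellm.proxy.proxy_server import ProxyConfig

proxy_config = ProxyConfig()  # module-level singleton, mirroring the proxy server

@asynccontextmanager
async def proxy_startup_event(app: FastAPI):
    # Load the YAML config once at startup; CONFIG_FILE_PATH is an illustrative env var
    router, model_list, general_settings = await proxy_config.load_config(
        router=None,
        config_file_path=os.environ.get("CONFIG_FILE_PATH", "/app/config.yaml"),
    )
    app.state.llm_router = router
    app.state.llm_model_list = model_list
    app.state.general_settings = general_settings
    yield  # the server handles requests while the context is open

app = FastAPI(lifespan=proxy_startup_event)
```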
Code Reference
| Attribute | Value |
|---|---|
| Source Location | litellm/proxy/proxy_server.py, class defined at line 2021 |
| Method | ProxyConfig.load_config(), defined at line 2459 |
| Signature | async def load_config(self, router: Optional[litellm.Router], config_file_path: str) -> Tuple[Router, list, dict] |
| Import | from litellm.proxy.proxy_server import ProxyConfig |
I/O Contract
Inputs
| Parameter | Type | Description |
|---|---|---|
| router | Optional[litellm.Router] | An existing Router instance to reconfigure, or None if a new Router should be created. |
| config_file_path | str | Filesystem path to the YAML configuration file to load. |
Outputs
| Return Element | Type | Description |
|---|---|---|
| router | litellm.Router | The initialized or updated Router instance containing all model deployments and routing settings. |
| model_list | list | A list of model deployment dictionaries, each containing model_name, litellm_params, and model_info. |
| general_settings | dict | A dictionary of server-level settings extracted from the configuration (e.g., master key, database URL, auth modes). |
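For illustration, the returned values can be inspected as follows; the keys used are those described in the table above:

```python
# Inspect the outputs returned by load_config()
router, model_list, general_settings = await proxy_config.load_config(
    router=None,
    config_file_path="/app/config.yaml",
)

for deployment in model_list:
    # Each deployment dictionary carries model_name, litellm_params, and model_info
    print(deployment["model_name"], deployment["litellm_params"].get("model"))

# Server-level settings such as the master key or database URL
print(sorted(general_settings.keys()))
```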
Usage Examples
Basic configuration loading during server startup:
```python
from litellm.proxy.proxy_server import ProxyConfig

proxy_config = ProxyConfig()

# During the server startup lifespan event
llm_router, llm_model_list, general_settings = await proxy_config.load_config(
    router=None,
    config_file_path="/app/config.yaml",
)
```
Example YAML configuration file (config.yaml):
```yaml
environment_variables:
  OPENAI_API_KEY: "os.environ/OPENAI_API_KEY"

model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/gpt-4
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-3
    litellm_params:
      model: anthropic/claude-3-opus-20240229
      api_key: os.environ/ANTHROPIC_API_KEY

litellm_settings:
  drop_params: true
  cache: true
  cache_params:
    type: redis
    host: localhost
    port: 6379

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
  database_url: os.environ/DATABASE_URL

router_settings:
  routing_strategy: least-busy
  num_retries: 3
```
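The os.environ/VARNAME values in this file are resolved from the process environment when the config is loaded. A minimal, illustrative sketch of that resolution pattern (not the library's actual implementation):

```python
import os

def resolve_env_reference(value: str) -> str:
    # Illustrative only: values like "os.environ/OPENAI_API_KEY" are replaced
    # with the corresponding environment variable's value.
    prefix = "os.environ/"
    if isinstance(value, str) and value.startswith(prefix):
        return os.environ[value[len(prefix):]]
    return value

# Example: resolve the api_key declared for the gpt-4 deployment
api_key = resolve_env_reference("os.environ/OPENAI_API_KEY")
```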
Reloading configuration at runtime:
```python
# Reconfigure the existing router with the updated config
llm_router, llm_model_list, general_settings = await proxy_config.load_config(
    router=existing_router,
    config_file_path="/app/config.yaml",
)
```