
Principle:BerriAI Litellm Environment Configuration

From Leeroopedia
Knowledge Sources: BerriAI/litellm repository
Domains: LLM Integration, Configuration Management, Credential Management
Last Updated: 2026-02-15

Overview

Environment configuration is the practice of externalizing provider credentials and runtime settings so that application code remains decoupled from infrastructure-specific details.

Description

When an application must communicate with multiple LLM providers (OpenAI, Anthropic, Azure, Cohere, and others), each provider requires its own authentication credentials and endpoint settings. Environment configuration solves this problem by establishing a single, well-known location -- either operating system environment variables or a shared, in-process configuration namespace -- where all provider keys, base URLs, API versions, and behavioral flags are declared before any API call is made.

This principle separates what the application does (sending completion requests) from how it authenticates and connects. It enables the same codebase to target different providers, regions, or accounts simply by changing the environment rather than the source code.
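As a minimal sketch, credentials and endpoint settings for several providers can be declared once in the environment before any completion call is made. The variable names below follow the common `<PROVIDER>_API_KEY` / `<PROVIDER>_API_BASE` convention; the specific values are placeholders, and the commented-out `litellm.completion` call is illustrative:

```python
import os

# Declare provider credentials and endpoints in one well-known place
# (environment variables) before any API call is made.
os.environ["OPENAI_API_KEY"] = "sk-..."         # placeholder secret
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder secret
os.environ["AZURE_API_BASE"] = "https://my-deployment.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2024-02-01"

# Application code stays provider-agnostic: it only names a model, and
# authentication details are resolved from the environment at call time.
# e.g. litellm.completion(model="gpt-4o", messages=[{"role": "user", ...}])
```

Switching the deployment from one account or region to another then requires changing only these variables, not the source code.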

Usage

Apply environment configuration whenever:

  • Multiple LLM providers are used within the same application or deployment.
  • Credentials must rotate without code changes (secrets management, CI/CD pipelines).
  • Global behavioral defaults (timeouts, retry policies, token limits, parameter dropping) need to be set once and inherited by every subsequent API call.
  • A development environment must point to different endpoints than production.
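The last point can be sketched as an endpoint selected from the environment with a production fallback. The variable name `LLM_API_BASE` is an assumption for illustration, not a litellm built-in:

```python
import os

# Read the target endpoint from the environment; fall back to the
# production URL when the variable is unset. A development or staging
# deployment overrides this without any code change.
api_base = os.environ.get("LLM_API_BASE", "https://api.openai.com/v1")

def describe_target():
    # Illustrative helper: report where requests would be sent.
    return f"requests will be sent to {api_base}"
```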

Theoretical Basis

Environment configuration follows the Twelve-Factor App methodology (Factor III: "Store config in the environment") and the broader Separation of Concerns design principle. The key ideas are:

1. Externalized Secrets

Credentials are never hard-coded. Instead, they are read from environment variables at process startup or assigned to a well-known module-level namespace.

# Credential resolution order: the module-level namespace takes
# precedence; otherwise fall back to the process environment, and
# fail loudly if neither is set.
import os

class MissingCredentialError(KeyError):
    pass

def resolve_credential(provider, module_namespace):
    key = module_namespace.get(f"{provider}_key")
    if key is not None:
        return key
    env_value = os.environ.get(f"{provider.upper()}_API_KEY")
    if env_value is not None:
        return env_value
    raise MissingCredentialError(provider)

2. Layered Defaults

Settings form a hierarchy: explicit per-call parameters override module-level globals, which override environment variables. This layered approach provides both convenience (set once) and flexibility (override when needed).

# Parameter resolution: an explicit per-call value overrides the
# module-level global, which overrides the environment variable,
# which overrides the built-in default.
def resolve_param(call_value, module_value, env_value, default):
    for value in (call_value, module_value, env_value):
        if value is not None:
            return value
    return default

3. Provider-Agnostic Namespace

A single configuration namespace maps logical names (api_key, api_base, api_version) to provider-specific values. This abstraction allows dispatch logic to query one location regardless of the target provider.
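One way to sketch such a namespace is a plain mapping from logical names to per-provider values; the keys and values below are illustrative, not litellm's actual internal structure:

```python
# Provider-agnostic configuration namespace: dispatch logic asks for
# logical names (api_key, api_base, api_version) and never needs to
# know which provider it is querying.
CONFIG = {
    "openai": {"api_key": "sk-...",
               "api_base": "https://api.openai.com/v1"},
    "azure":  {"api_key": "az-...",
               "api_base": "https://my-deployment.openai.azure.com",
               "api_version": "2024-02-01"},
}

def lookup(provider, name, default=None):
    # Single query point regardless of the target provider.
    return CONFIG.get(provider, {}).get(name, default)
```

Dispatch code can then call `lookup(provider, "api_base")` uniformly, and adding a new provider means adding one entry to the mapping rather than touching the dispatch logic.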

4. Behavioral Flags

Beyond credentials, the configuration namespace houses flags that alter runtime behavior globally -- such as whether to drop unsupported parameters, modify parameters for compatibility, enable retries, or set maximum token limits.
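A flag such as parameter dropping can be sketched as a module-level global consulted on every call. The flag name mirrors litellm's `drop_params` setting, but the implementation and the `SUPPORTED` set here are illustrative assumptions:

```python
# Module-level behavioral flag: when True, silently drop parameters the
# target provider does not support instead of raising an error.
drop_params = True

# Hypothetical set of parameters the target provider accepts.
SUPPORTED = {"model", "messages", "temperature", "max_tokens"}

def prepare_request(**params):
    unknown = set(params) - SUPPORTED
    if unknown and not drop_params:
        raise ValueError(f"unsupported parameters: {sorted(unknown)}")
    # Keep only what the provider would accept.
    return {k: v for k, v in params.items() if k in SUPPORTED}
```

Because the flag is set once at module level, every subsequent call inherits the behavior without passing it explicitly.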
