
Environment: BerriAI LiteLLM Docker Deployment

From Leeroopedia
Knowledge Sources
Domains Infrastructure, Deployment
Last Updated 2026-02-15 16:00 GMT

Overview

Docker container environment using Chainguard Wolfi base image for deploying the LiteLLM proxy server with PostgreSQL and optional Redis.

Description

This environment defines the containerized deployment setup for the LiteLLM proxy server. The production Dockerfile uses Chainguard Wolfi as a security-hardened base image. A non-root variant runs as UID 101:101 with read-only filesystem, dropped capabilities, and tmpfs mounts for cache/migrations. The Docker Compose stack includes PostgreSQL 16, the LiteLLM proxy, and optionally Prometheus for monitoring.

Usage

Use this environment for production deployment of the LiteLLM proxy server. It bundles the proxy server, database migrations, and the admin UI dashboard. Required for Kubernetes deployments, cloud container services, and any Docker-based deployment.

System Requirements

Category | Requirement | Notes
---------|-------------|------
Software | Docker >= 20.10 | Docker Compose v2 recommended
Hardware | >= 2 CPU cores, 2 GB RAM | For proxy + PostgreSQL; scale based on traffic
Disk | >= 5 GB | For Docker images and the PostgreSQL data volume
Network | Ports 4000 (proxy), 5432 (PostgreSQL), 9090 (Prometheus) | Configurable via environment
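
The listed ports are defaults, not fixed values. A minimal sketch of resolving them from the environment with fallbacks (the variable names `PROXY_PORT`, `POSTGRES_PORT`, and `PROMETHEUS_PORT` are illustrative assumptions, not official LiteLLM settings):

```python
import os

# Defaults from the table above; override via environment.
DEFAULT_PORTS = {"proxy": 4000, "postgres": 5432, "prometheus": 9090}

def resolve_port(service: str) -> int:
    """Return the port for a service, preferring an env override.

    Hypothetical helper: reads e.g. PROXY_PORT for the "proxy" service
    and falls back to the documented default when unset.
    """
    raw = os.getenv(f"{service.upper()}_PORT")
    return int(raw) if raw else DEFAULT_PORTS[service]
```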

Dependencies

Docker Images

  • `cgr.dev/chainguard/wolfi-base` (production Dockerfile)
  • `python:3.11-slim` (development Dockerfile)
  • `python:3.11-alpine` (Alpine Dockerfile)
  • `postgres:16` (database)
  • `prom/prometheus` (optional monitoring)

System Packages (in container)

  • `bash`, `openssl`, `tzdata` (runtime)
  • `nodejs`, `npm` (for Prisma engine and admin UI)
  • `libsndfile` (for audio processing)
  • `supervisor` (process management)
  • `gcc`, `python3-dev`, `openssl-dev` (build stage only)

Credentials and Environment Variables

  • `DATABASE_URL`: PostgreSQL connection string.
  • `LITELLM_MASTER_KEY`: Master admin API key for the proxy.
  • `LITELLM_LICENSE`: Enterprise license key (optional).
  • `STORE_MODEL_IN_DB`: Set to "true" to persist model config in database.
  • `LITELLM_NON_ROOT`: Enable non-root mode for security.
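
Missing `DATABASE_URL` or `LITELLM_MASTER_KEY` typically surfaces only after the container starts. A sketch of a fail-fast startup check (`missing_env` is a hypothetical helper, not part of LiteLLM):

```python
import os

# Variables the proxy cannot run without.
REQUIRED = ("DATABASE_URL", "LITELLM_MASTER_KEY")

def missing_env(required=REQUIRED) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.getenv(name)]
```

Calling this before launch and aborting when the list is non-empty gives a clearer error than a failed Prisma migration later on.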

Quick Install

# Clone and deploy with Docker Compose
git clone https://github.com/BerriAI/litellm
cd litellm

# Set required environment variables
export DATABASE_URL="postgresql://llmproxy:dbpassword16@db:5432/litellm"
export LITELLM_MASTER_KEY="sk-1234"

# Start the stack
docker compose up -d
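
Once the stack is up, you can probe the proxy before sending traffic. A sketch of building an authenticated health-check request; LiteLLM exposes health endpoints, but the exact path (`/health/liveliness` here) should be verified against your version:

```python
import urllib.request

def build_health_request(base_url: str, master_key: str) -> urllib.request.Request:
    """Build a GET request for the proxy's liveness endpoint,
    authenticated with the master key as a bearer token."""
    return urllib.request.Request(
        f"{base_url}/health/liveliness",
        headers={"Authorization": f"Bearer {master_key}"},
    )

req = build_health_request("http://localhost:4000", "sk-1234")
# urllib.request.urlopen(req) should return HTTP 200 once the proxy is ready.
```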

Code Evidence

Docker Compose service definition from `docker-compose.yml`:

litellm:
  build:
    context: .
    args:
      target: runtime
  image: ghcr.io/berriai/litellm:main-latest
  ports:
    - "4000:4000"
  volumes:
    - ./litellm-config.yaml:/app/config.yaml
  environment:
    DATABASE_URL: "postgresql://llmproxy:dbpassword16@db:5432/litellm"
    STORE_MODEL_IN_DB: "True"
    LITELLM_MASTER_KEY: "sk-1234"
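
A common misconfiguration is pointing `DATABASE_URL` at `localhost` instead of the Compose service name `db`. A quick sanity check of the URL's components, using only the standard library:

```python
from urllib.parse import urlparse

# Parse the DATABASE_URL from the compose file above.
url = urlparse("postgresql://llmproxy:dbpassword16@db:5432/litellm")

assert url.scheme == "postgresql"
assert url.hostname == "db"               # Compose service name, not localhost
assert url.port == 5432
assert url.path.lstrip("/") == "litellm"  # database name
```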

Security hardening from `docker-compose.hardened.yml`:

litellm:
  security_opt:
    - no-new-privileges:true
  cap_drop:
    - ALL
  read_only: true
  user: "101:101"
  tmpfs:
    - /app/cache:size=128m
    - /app/migrations:size=64m
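
With `read_only: true`, the only writable paths are the tmpfs mounts. A sketch of verifying a path is writable at startup, as the proxy needs for `/app/cache` and `/app/migrations` (this helper is illustrative, not part of LiteLLM):

```python
import os
import tempfile

def is_writable(path: str) -> bool:
    """Return True if a real write to `path` succeeds.

    Attempting an actual temporary file is more reliable than
    os.access() on read-only or tmpfs-backed filesystems.
    """
    if not os.path.isdir(path):
        return False
    try:
        with tempfile.TemporaryFile(dir=path):
            return True
    except OSError:
        return False
```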

Non-root mode detection from `litellm/proxy/proxy_server.py:1158`:

if os.getenv("LITELLM_NON_ROOT"):
    # Skip UI serving, use minimal routes
    ...
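
Note that a bare `os.getenv()` check is truthy for any non-empty string, including `"false"`. If stricter parsing is wanted, a normalizing helper (hypothetical, not LiteLLM's actual behavior) looks like:

```python
import os

def env_flag(name: str) -> bool:
    # Only a small set of affirmative values count as "on";
    # "false", "0", and "" all disable the flag.
    return os.getenv(name, "").strip().lower() in ("1", "true", "yes", "on")
```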

Common Errors

Error Message | Cause | Solution
--------------|-------|---------
`pg_isready: could not connect to server` | PostgreSQL not ready | Wait for the database health check to pass before starting the proxy
`PermissionError: [Errno 13]` in non-root mode | Read-only filesystem | Ensure tmpfs mounts at `/app/cache` and `/app/migrations`
`prisma migrate deploy failed` | Missing database or migrations | Verify `DATABASE_URL` and run `prisma migrate deploy` manually
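
The first error above is a startup ordering problem: the proxy races the database. A sketch of a wait loop that mirrors what a `pg_isready` healthcheck does, using only a TCP connect (a full check would also authenticate):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until `host:port` accepts TCP connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)  # not up yet; retry
    return False
```

Running `wait_for_port("db", 5432)` before launching the proxy avoids the race; in Compose, the equivalent is a `depends_on` with `condition: service_healthy`.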

Compatibility Notes

  • Chainguard Wolfi: Production image uses Wolfi for minimal attack surface. Dev images use `python:3.11-slim`.
  • Non-Root: The `Dockerfile.non_root` variant runs as UID 101:101 with all capabilities dropped and read-only root filesystem.
  • Helm Chart: A Helm chart index is available in `index.yaml` for Kubernetes deployment.
  • AWS Lambda: A minimal Lambda handler exists at `litellm/proxy/lambda.py` for serverless deployment.
