
Environment: MLflow Docker Container Environment

From Leeroopedia
Knowledge Sources
Domains: Infrastructure, Deployment
Last Updated: 2026-02-13 20:00 GMT

Overview

Docker environment for building and deploying MLflow model containers with nginx reverse proxy and uvicorn ASGI server.

Description

This environment provides the Docker runtime required for containerizing MLflow models using `mlflow models build-docker`. The generated Docker image uses Ubuntu 22.04 as a base, includes an nginx reverse proxy, and serves models via uvicorn on port 8080. The environment supports both conda and virtualenv-based dependency management within the container, and can optionally include Java/JDK for Spark ML models.
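
The conda/virtualenv choice described above is selected at build time. A minimal sketch of assembling the CLI invocation (the `--env-manager` flag belongs to `mlflow models build-docker`; the model URI and image name here are placeholders):

```python
# Sketch: build the `mlflow models build-docker` command line for a given
# dependency manager ("virtualenv" or "conda", per the description above).
def build_docker_cmd(model_uri: str, image_name: str, env_manager: str = "virtualenv"):
    return [
        "mlflow", "models", "build-docker",
        "--model-uri", model_uri,
        "--name", image_name,
        "--env-manager", env_manager,
    ]

# Placeholder run ID and image name for illustration only.
cmd = build_docker_cmd("runs:/RUN_ID/model", "my-model-image", env_manager="conda")
```

The resulting list can be passed to `subprocess.run` on a host with MLflow and Docker installed.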

Usage

Use this environment when you need to containerize MLflow models for deployment using `mlflow models build-docker` or `mlflow models generate-dockerfile`. It is also required when running the Docker Compose reference stack for MLflow with PostgreSQL and MinIO.

System Requirements

Category  Requirement                         Notes
OS        Linux, macOS, or Windows            Docker Desktop required on macOS/Windows
Docker    Docker Engine (any recent release)  Docker Python SDK >= 4.0.0, < 8 required on the host
Disk      2 GB+                               For base image layers and model artifacts
Network   Port 8080 available                 Default container serving port

Dependencies

System Packages (Inside Container)

  • `nginx` (reverse proxy)
  • `wget`, `curl`, `ca-certificates`
  • `bzip2`, `build-essential`, `cmake`, `git-core`
  • `openjdk-{version}-jdk` (optional, for Java/Spark models)
  • `maven` (optional, for Java models)

Python Packages

  • `docker` >= 4.0.0, < 8 (on the host, for building images)
  • `mlflow` (inside the container)
  • `uvicorn` (inside the container, ASGI server)

Container Configuration

  • Base image: `ubuntu:22.04` or `python:{version}-slim`
  • Nginx upstream: `127.0.0.1:8000` (uvicorn)
  • Client max body size: 100MB
  • Keepalive timeout: 75s

Runtime Environment Variables

The following environment variables can be set at container runtime:

  • `DISABLE_NGINX`: Set to disable nginx (for platforms like Google Cloud Run that handle proxying)
  • `MLFLOW_MODELS_WORKERS`: Override the default uvicorn worker count (default: CPU count)
  • `MLFLOW_DISABLE_ENV_CREATION`: Disable creating new conda/virtualenv inside container
  • `SERVING_MODEL_CONFIG`: Path to model configuration file
  • `MLFLOW_DOCKER_OPENJDK_VERSION`: OpenJDK version for the image (default: "11")
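
The worker-count override can be illustrated with a small sketch. This mirrors the documented behavior (`MLFLOW_MODELS_WORKERS`, when set, overrides a default of one uvicorn worker per CPU); it is not MLflow's own implementation:

```python
import os

def resolve_worker_count(env=None):
    # MLFLOW_MODELS_WORKERS, when set, wins; otherwise fall back to CPU count.
    env = os.environ if env is None else env
    raw = env.get("MLFLOW_MODELS_WORKERS")
    if raw is not None:
        return int(raw)
    return os.cpu_count() or 1
```

For example, `resolve_worker_count({"MLFLOW_MODELS_WORKERS": "4"})` returns 4.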

Quick Install

# Build a Docker image for a logged model
mlflow models build-docker -m "runs:/RUN_ID/model" -n "my-model-image"

# Generate a Dockerfile without building
mlflow models generate-dockerfile -m "runs:/RUN_ID/model" -d ./output

# Run the container
docker run -p 8080:8080 my-model-image

# Use Docker Compose for full MLflow stack
cd docker-compose && docker compose up
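
Once the container is running, predictions go to the standard MLflow scoring endpoint. A sketch of a scoring request (the `/invocations` path and the `dataframe_split` payload shape follow the MLflow scoring-server contract; the column names are made up):

```python
import json
import urllib.request

# dataframe_split payload: column names plus row-major data.
payload = {
    "dataframe_split": {
        "columns": ["feature_a", "feature_b"],
        "data": [[1.0, 2.0], [3.0, 4.0]],
    }
}
body = json.dumps(payload).encode()

# POST to the container's nginx-fronted port (8080 by default).
req = urllib.request.Request(
    "http://127.0.0.1:8080/invocations",
    data=body,  # providing data makes this a POST request
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:  # uncomment with the container running
#     print(resp.read())
```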

Code Evidence

Docker image building from `mlflow/models/cli.py:251-311`:

@commands.command("build-docker")
@click.option("--model-uri", "-m")
@click.option("--name", "-n", default="mlflow-pyfunc-servable")
def build_docker(model_uri, name, ...):
    # Builds Docker image for model serving

Nginx configuration from `mlflow/models/container/scoring_server/nginx.conf`:

upstream gunicorn {
    server 127.0.0.1:8000;
}
server {
    listen 8080;
    client_max_body_size 100m;
}

Container worker control from `mlflow/models/container/__init__.py`:

# DISABLE_NGINX - Disable nginx for platforms like Google Cloud Run
# MLFLOW_MODELS_WORKERS - Custom uvicorn worker count (default: CPU count)

Common Errors

  • `docker.errors.DockerException`: Docker daemon not running. Start Docker Desktop or the Docker service.
  • `No such image`: base image not pulled. Ensure internet access so Docker can pull the base image.
  • `Port 8080 already in use`: port conflict on the host. Map to a different host port: `-p 9090:8080`.
  • `Permission denied`: Docker socket permissions. Add your user to the docker group: `sudo usermod -aG docker $USER`.

Compatibility Notes

  • Google Cloud Run: Set `DISABLE_NGINX=true` as Cloud Run provides its own proxy layer.
  • Docker Compose stack: Includes PostgreSQL 15 and MinIO for S3-compatible artifact storage. Requires `psycopg2-binary` and `boto3` installed in the MLflow container.
  • Java/Spark models: Automatically includes OpenJDK and Maven when Java dependencies are detected.
  • ARM64: Docker images can be built for ARM architectures but may need platform-specific base images.
