Implementation: MLflow build_docker
| Knowledge Sources | |
|---|---|
| Domains | ML_Ops, Model_Serving |
| Last Updated | 2026-02-13 20:00 GMT |
Overview
Concrete tool for building Docker container images that serve MLflow models via HTTP, provided by the MLflow library as both a CLI command and a Python API.
Description
The build_docker function and its corresponding mlflow models build-docker CLI command produce a Docker image whose default entrypoint serves an MLflow model at port 8080 using the python_function flavor. The image includes nginx as a reverse proxy and uvicorn as the application server. When a model_uri is provided, the model artifacts are embedded directly in the image. When omitted, the image expects a model to be volume-mounted at /opt/ml/model at runtime.
The build process delegates to the flavor backend system via get_flavor_backend(), which generates a Dockerfile and executes the Docker build. Since MLflow 2.10.1, Java is not installed by default (to reduce image size and build time) unless the model flavor requires it (e.g., Spark, H2O, John Snow Labs). Users can force Java installation with the install_java parameter. The default base image is ubuntu:22.04 or python:{version}-slim, but custom base images are supported.
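The Java decision described above can be sketched as a small helper. This is an illustrative sketch only, not MLflow's internal code; the function name and flavor set are assumptions for demonstration:

```python
# Illustrative sketch of the Java-install decision, NOT MLflow internals.
# Flavor names here mirror the examples in the text (Spark, H2O, John Snow Labs).
JAVA_FLAVORS = {"spark", "h2o", "johnsnowlabs"}

def needs_java(model_flavor: str, install_java: bool = False) -> bool:
    """Return True if the generated Dockerfile should install Java:
    either the user forced it via install_java, or the flavor requires it."""
    return install_java or model_flavor.lower() in JAVA_FLAVORS

print(needs_java("sklearn"))                      # plain Python flavor: no Java
print(needs_java("spark"))                        # Spark flavor: auto-enabled
print(needs_java("sklearn", install_java=True))   # forced by the user
```

Keeping Java out by default is what shrinks the image for pure-Python flavors; the install_java parameter exists as an escape hatch when auto-detection is not enough.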
Usage
Use this tool when you need to package an MLflow model into a portable Docker image for deployment to container orchestration platforms such as Kubernetes, Amazon ECS, or Google Cloud Run. It is also useful for creating standardized model-serving images in CI/CD pipelines.
Code Reference
Source Location
- Repository: mlflow
- File: mlflow/models/cli.py (CLI), mlflow/models/python_api.py (Python API)
- Lines: 251-311 (CLI), 20-93 (Python API)
Signature
def build_docker(
    model_uri=None,
    name="mlflow-pyfunc",
    env_manager=VIRTUALENV,
    mlflow_home=None,
    install_java=False,
    install_mlflow=False,
    enable_mlserver=False,
    base_image=None,
):
Import
from mlflow.models.python_api import build_docker
# Or via CLI:
# mlflow models build-docker -m <model_uri> --name <image_name>
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| model_uri | str | No | URI to the model (e.g., runs:/<run_id>/model). If None, the image expects a model volume-mounted at /opt/ml/model |
| name | str | No | Name for the Docker image (default: mlflow-pyfunc) |
| env_manager | str | No | Environment manager: virtualenv (default), conda, or local |
| mlflow_home | str | No | Path to local MLflow clone (development use only) |
| install_java | bool | No | If True, install Java in the image (default: False; auto-enabled for Spark/H2O flavors) |
| install_mlflow | bool | No | If True, install MLflow into the model's environment |
| enable_mlserver | bool | No | If True, build with Seldon MLServer as the serving backend |
| base_image | str | No | Custom base Docker image (default: ubuntu:22.04 or python:{version}-slim) |
Outputs
| Name | Type | Description |
|---|---|---|
| Docker image | Docker image | A built Docker image tagged with the specified name, serving at port 8080 with /invocations and /ping endpoints |
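The /invocations endpoint accepts JSON scoring payloads such as MLflow's dataframe_split format (column names plus row-wise data). A minimal client-side sketch, assuming a container is running and mapped to localhost:5001 as in the usage examples (host, port, and feature names are hypothetical):

```python
import json

# Build a scoring payload in MLflow's "dataframe_split" format.
# Column names and values are hypothetical placeholders.
payload = {
    "dataframe_split": {
        "columns": ["feature_a", "feature_b"],
        "data": [[1.0, 2.0], [3.0, 4.0]],
    }
}
body = json.dumps(payload)

# Against a running container (hypothetical host/port):
#   curl http://localhost:5001/ping          # health check
#   curl -X POST http://localhost:5001/invocations \
#        -H "Content-Type: application/json" \
#        -d '<the JSON body above>'
print(body)
```

The /ping endpoint returns a success status when the model server is healthy, which makes it suitable as a container liveness probe.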
Usage Examples
Basic Usage
from mlflow.models.python_api import build_docker
# Build a Docker image with a specific model baked in
build_docker(
model_uri="runs:/abc123/my-model",
name="my-model-server",
)
# Run the container:
# docker run -p 5001:8080 my-model-server
CLI Usage
# Build an image with an embedded model
mlflow models build-docker \
--model-uri "runs:/abc123/my-model" \
--name "my-model-image"
# Run the image
docker run -p 5001:8080 "my-model-image"
# Build a generic image and mount a model at runtime
mlflow models build-docker --name "generic-server"
docker run -p 5001:8080 \
-v /path/to/model:/opt/ml/model \
"generic-server"
# Disable nginx for platforms like Google Cloud Run
docker run -p 5001:8080 -e DISABLE_NGINX=true "my-model-image"
# Set custom worker count
docker run -p 5001:8080 -e MLFLOW_MODELS_WORKERS=4 "my-model-image"