Implementation: MLflow Server CLI
| Knowledge Sources | |
|---|---|
| Domains | ML_Ops, Experiment_Tracking |
| Last Updated | 2026-02-13 20:00 GMT |
Overview
Concrete tool for launching the MLflow tracking server and web UI, provided by the MLflow library as a CLI command.
Description
mlflow server is the command-line entry point for starting the MLflow tracking server. It launches a web application that serves both the tracking API (for programmatic access) and the tracking UI (for visual experiment exploration). The server reads from a configurable backend store (local filesystem or SQL database) and serves artifacts from a configurable artifact root. By default, it uses uvicorn as the application server, with options for gunicorn and waitress as alternatives. The server includes built-in security middleware for host validation and CORS control.
Internally, the CLI command delegates to mlflow.server._run_server, which configures environment variables, initializes backend stores, and spawns the application server process.
Usage
Run mlflow server from the command line to start the tracking UI and API server. Specify --backend-store-uri to point to a SQL database for production use, or leave it unset for local filesystem storage. Set --host 0.0.0.0 and configure --allowed-hosts to accept connections from other machines. Use --serve-artifacts to enable artifact proxying through the server. Access the web UI in a browser at the configured host and port (default http://localhost:5000).
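When launching the server from a wrapper script or CI job, the CLI options above are often assembled programmatically and passed to a subprocess. The helper below is an illustrative sketch (it is not part of MLflow's API) that builds the `mlflow server` argument list from the options documented in this section:

```python
def build_server_command(backend_store_uri=None, default_artifact_root=None,
                         serve_artifacts=False, host=None, port=None):
    """Assemble an `mlflow server` argument list.

    Illustrative helper, not part of MLflow itself; only options that are
    actually set are appended to the command.
    """
    cmd = ["mlflow", "server"]
    if backend_store_uri:
        cmd += ["--backend-store-uri", backend_store_uri]
    if default_artifact_root:
        cmd += ["--default-artifact-root", default_artifact_root]
    if serve_artifacts:
        cmd.append("--serve-artifacts")
    if host:
        cmd += ["--host", host]
    if port:
        cmd += ["--port", str(port)]
    return cmd

cmd = build_server_command(backend_store_uri="sqlite:///mlflow.db",
                           host="0.0.0.0", port=5000)
print(" ".join(cmd))
# → mlflow server --backend-store-uri sqlite:///mlflow.db --host 0.0.0.0 --port 5000
```

The resulting list can be handed to `subprocess.Popen(cmd)` to run the server as a child process.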
Code Reference
Source Location
- Repository: mlflow
- File (CLI command): mlflow/cli/__init__.py, lines 365-709
- File (_run_server): mlflow/server/__init__.py, lines 318-499
Signature
# CLI invocation
# mlflow server [OPTIONS]
# Internal Python function
def _run_server(
    *,
    file_store_path,
    registry_store_uri,
    default_artifact_root,
    serve_artifacts,
    artifacts_only,
    artifacts_destination,
    host,
    port,
    static_prefix=None,
    workers=None,
    gunicorn_opts=None,
    waitress_opts=None,
    expose_prometheus=None,
    app_name=None,
    uvicorn_opts=None,
    env_file=None,
    secrets_cache_ttl=None,
    secrets_cache_max_size=None,
): ...
Import
# Typically used via CLI, not imported directly:
# mlflow server --backend-store-uri sqlite:///mlflow.db --host 0.0.0.0 --port 5000
# For programmatic use (advanced):
from mlflow.server import _run_server
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| --backend-store-uri | str (CLI option) | No | URI for the backend store where experiments, runs, params, and metrics are persisted. Accepts SQLAlchemy connection strings (e.g. sqlite:///mlflow.db) or local filesystem URIs. Defaults to ./mlruns. |
| --host | str (CLI option) | No | Network interface to bind to. Defaults to localhost (local connections only). Use 0.0.0.0 to accept remote connections. |
| --port | int (CLI option) | No | Port number to listen on. Defaults to 5000. |
| --default-artifact-root | str (CLI option) | No | Default root directory for storing artifacts for new experiments. Required when using SQL backend stores. |
| --serve-artifacts | flag (CLI option) | No | If set, the server proxies artifact upload and download requests. Enables artifact access without direct storage credentials. |
| --workers | int (CLI option) | No | Number of server worker processes. Defaults to 4 for uvicorn. |
| --allowed-hosts | str (CLI option) | No | Comma-separated list of allowed host headers for security middleware. |
| --cors-allowed-origins | str (CLI option) | No | Comma-separated list of origins allowed for CORS requests. |
| --dev | flag (CLI option) | No | Enables debug logging and auto-reload for development. Not supported on Windows. |
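As the description notes, the server picks a backend store implementation based on the scheme of --backend-store-uri: SQL database URIs go to the SQLAlchemy-backed store, while everything else is treated as filesystem storage. The sketch below illustrates that dispatch in simplified form; it is not MLflow's actual code, and the scheme set is an assumption based on the databases MLflow documents support for:

```python
from urllib.parse import urlparse

# SQL dialects commonly used for the SQLAlchemy-backed store (assumed set,
# simplified for illustration); anything else falls back to file storage.
SQL_SCHEMES = {"sqlite", "mysql", "postgresql", "mssql"}

def store_kind(backend_store_uri: str) -> str:
    """Classify a backend store URI as 'sql' or 'file' (illustrative sketch)."""
    # Dialect URIs like mysql+pymysql://... carry the driver after '+'
    scheme = urlparse(backend_store_uri).scheme.split("+")[0]
    return "sql" if scheme in SQL_SCHEMES else "file"

print(store_kind("sqlite:///mlflow.db"))        # → sql
print(store_kind("./mlruns"))                   # → file
print(store_kind("postgresql://u:p@h/mlflow"))  # → sql
```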
Outputs
| Name | Type | Description |
|---|---|---|
| Web UI | HTTP service | Interactive web application at http://host:port for browsing experiments, comparing runs, viewing metrics charts, and downloading artifacts. |
| REST API | HTTP service | Tracking API endpoints under http://host:port/api/ for programmatic logging and querying of experiments, runs, metrics, parameters, and artifacts. |
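Besides the tracking endpoints under /api/, the server exposes a /health endpoint that returns the body "OK", which is convenient for load-balancer and readiness checks. The helper below (an illustrative sketch, not part of MLflow) builds endpoint URLs from the server's base address:

```python
from urllib.parse import urljoin
from urllib.request import urlopen  # only needed for the live check below

BASE = "http://localhost:5000"  # assumes a locally running tracking server

def api_url(base: str, path: str) -> str:
    """Join a tracking-server base URL with an endpoint path (illustrative helper)."""
    return urljoin(base.rstrip("/") + "/", path.lstrip("/"))

print(api_url(BASE, "/health"))  # → http://localhost:5000/health

# With the server running, the health endpoint answers with "OK":
# with urlopen(api_url(BASE, "/health")) as resp:
#     print(resp.read().decode())
```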
Usage Examples
Basic Usage
# Start the server with default local file storage
mlflow server
# Server will be available at http://localhost:5000
Production Setup with SQL Backend
# Start server with PostgreSQL backend and artifact proxying
mlflow server \
--backend-store-uri postgresql://user:pass@db-host:5432/mlflow \
--default-artifact-root s3://my-bucket/mlflow-artifacts \
--serve-artifacts \
--host 0.0.0.0 \
--port 5000 \
--allowed-hosts "mlflow.example.com"
Local Development
# Start server in development mode with auto-reload and debug logging
mlflow server \
--backend-store-uri sqlite:///mlflow.db \
--dev
SQLite Backend with Custom Artifact Root
# Start server with SQLite backend and local artifact storage
mlflow server \
--backend-store-uri sqlite:///mlflow.db \
--default-artifact-root /data/mlflow-artifacts \
--host 127.0.0.1 \
--port 8080
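Startup Readiness Check
When the server is started from a script or CI pipeline, clients should wait until it is actually accepting requests before logging runs. A minimal sketch, assuming the server's /health endpoint and a default address of http://localhost:5000 (the polling helper itself is hypothetical, not part of MLflow):

```python
import time
from urllib.error import URLError
from urllib.request import urlopen

def wait_until_ready(url="http://localhost:5000/health",
                     timeout=30.0, interval=0.5):
    """Poll the tracking server's health endpoint until it responds with 200,
    or raise TimeoutError. Illustrative helper, not part of MLflow."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except (URLError, OSError):
            pass  # server not up yet; retry until the deadline
        time.sleep(interval)
    raise TimeoutError(f"server at {url} not ready after {timeout}s")
```

A typical pattern is to spawn `mlflow server` with subprocess, call wait_until_ready(), and only then point clients at the tracking URI.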