Implementation:Mlflow Mlflow Start Run
| Knowledge Sources | |
|---|---|
| Domains | ML_Ops, Experiment_Tracking |
| Last Updated | 2026-02-13 20:00 GMT |
Overview
Concrete tool for creating or resuming an MLflow experiment run, provided by the MLflow library.
Description
mlflow.start_run is the primary entry point in the MLflow fluent API for initiating a tracked experiment run. It creates a new run (or resumes an existing one by ID) and returns an ActiveRun context manager. While the run is active, all calls to logging functions (log_param, log_metric, log_artifact, etc.) target this run. The function pushes the run onto a thread-local _active_run_stack, enabling nested run support. When used as a context manager, the run is automatically ended (with appropriate status) when the with block exits.
Usage
Call start_run at the beginning of any training or evaluation task. Prefer using it as a context manager (with mlflow.start_run() as run:) to guarantee clean run termination. Pass run_id to resume a previously started run. Set nested=True or provide parent_run_id when creating child runs within a hyperparameter search or cross-validation loop. Enable log_system_metrics=True when CPU/GPU monitoring is needed.
Code Reference
Source Location
- Repository: mlflow
- File: mlflow/tracking/fluent.py
- Lines: L325-591
Signature
```python
def start_run(
    run_id: str | None = None,
    experiment_id: str | None = None,
    run_name: str | None = None,
    nested: bool = False,
    parent_run_id: str | None = None,
    tags: dict[str, Any] | None = None,
    description: str | None = None,
    log_system_metrics: bool | None = None,
) -> ActiveRun: ...
```
Import
```python
import mlflow
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| run_id | str or None | No | UUID of an existing run to resume. When provided, the run status is set back to RUNNING and other creation parameters are ignored. |
| experiment_id | str or None | No | ID of the experiment under which to create the run. If unspecified, uses the active experiment from set_experiment, MLFLOW_EXPERIMENT_NAME, MLFLOW_EXPERIMENT_ID, or the default experiment. |
| run_name | str or None | No | Human-readable name for the run. A random name is generated if not provided. |
| nested | bool | No | If True, allows creating a nested child run when a parent run is already active. Defaults to False. |
| parent_run_id | str or None | No | UUID of a parent run under which to nest this run. The parent must be in ACTIVE state. |
| tags | dict[str, Any] or None | No | Dictionary of tags to set on the run at creation or resumption time. |
| description | str or None | No | Free-text description for the run, stored as the mlflow.note.content tag. |
| log_system_metrics | bool or None | No | If True, logs CPU/GPU/memory metrics via a background monitor. If None, reads from MLFLOW_ENABLE_SYSTEM_METRICS_LOGGING. |
Outputs
| Name | Type | Description |
|---|---|---|
| return value | mlflow.ActiveRun | A context manager wrapping the mlflow.entities.Run object. Provides .info (run metadata including run_id) and .data (params, metrics, tags). The run is pushed onto the thread-local _active_run_stack. |
Usage Examples
Basic Usage
```python
import mlflow

mlflow.set_experiment("my-experiment")

with mlflow.start_run(run_name="training-v1") as run:
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.95)
    print(f"Run ID: {run.info.run_id}")
```
Nested Runs
```python
import mlflow

with mlflow.start_run(run_name="PARENT_RUN") as parent:
    mlflow.log_param("search_strategy", "grid")
    for lr in [0.01, 0.001, 0.0001]:
        with mlflow.start_run(run_name=f"child-lr-{lr}", nested=True) as child:
            mlflow.log_param("learning_rate", lr)
            mlflow.log_metric("val_loss", 1.0 / lr)
```
Resuming an Existing Run
```python
import mlflow

# Resume a previous run to add late-arriving metrics
with mlflow.start_run(run_id="abc123def456") as run:
    mlflow.log_metric("test_accuracy", 0.93)
```
Related Pages
Implements Principle
Requires Environment
- Environment:Mlflow_Mlflow_Python_Runtime_Environment
- Environment:Mlflow_Mlflow_GPU_System_Metrics_Environment