
Implementation:Mlflow Mlflow Log Metric

From Leeroopedia
Knowledge Sources
Domains ML_Ops, Experiment_Tracking
Last Updated 2026-02-13 20:00 GMT

Overview

Concrete tool for recording numeric performance metrics within an MLflow run, provided by the MLflow library.

Description

mlflow.log_metric and mlflow.log_metrics are fluent API functions for recording numeric performance measurements under the currently active MLflow run. log_metric records a single metric value at an optional step and timestamp, while log_metrics records a dictionary of metric key-value pairs in a single batch call. Both functions support optional association with a model ID and dataset for fine-grained tracking. If no run is active, a new run is automatically created.

Usage

Use log_metric inside a training loop to record per-step or per-epoch metrics such as loss, accuracy, or learning rate. Use log_metrics to log multiple evaluation metrics at once after a validation pass. Provide the step parameter to build training curves that can be visualized in the MLflow UI. Set synchronous=False in tight training loops where blocking on I/O is undesirable.

Code Reference

Source Location

  • Repository: mlflow
  • File: mlflow/tracking/fluent.py
  • Lines (log_metric): L1007-1100
  • Lines (log_metrics): L1163-1252

Signature

def log_metric(
    key: str,
    value: float,
    step: int | None = None,
    synchronous: bool | None = None,
    timestamp: int | None = None,
    run_id: str | None = None,
    model_id: str | None = None,
    dataset: Dataset | DatasetEntity | None = None,
) -> RunOperations | None: ...

def log_metrics(
    metrics: dict[str, float],
    step: int | None = None,
    synchronous: bool | None = None,
    run_id: str | None = None,
    timestamp: int | None = None,
    model_id: str | None = None,
    dataset: Dataset | DatasetEntity | None = None,
) -> RunOperations | None: ...

Import

import mlflow

I/O Contract

Inputs

  • key (str, required for log_metric): Metric name. May contain alphanumerics, underscores, dashes, periods, spaces, and slashes. Maximum length 250 characters.
  • value (float, required for log_metric): Metric value. Special values like +/- Infinity may be replaced by store-specific bounds (e.g., max/min float for SQL stores).
  • metrics (dict[str, float], required for log_metrics): Dictionary mapping metric names to float values for batch logging.
  • step (int or None, optional): Training step or epoch number associated with the metric. Defaults to 0 if unspecified.
  • synchronous (bool or None, optional): If True, blocks until the metric is logged. If False, returns a future. If None, the behavior is read from MLFLOW_ENABLE_ASYNC_LOGGING.
  • timestamp (int or None, optional): Unix timestamp in milliseconds when the metric was computed. Defaults to the current system time.
  • run_id (str or None, optional): If specified, logs the metric to the given run instead of the currently active run.
  • model_id (str or None, optional): ID of the model associated with this metric. Falls back to the active model ID if not specified.
  • dataset (Dataset or DatasetEntity or None, optional): The dataset associated with the metric, enabling dataset-level performance tracking.

Outputs

  • log_metric return (RunOperations or None): Returns None when logging synchronously. When synchronous=False, returns a RunOperations future representing the pending logging operation.
  • log_metrics return (RunOperations or None): Returns None when logging synchronously. When synchronous=False, returns a RunOperations future for the batch operation.

Usage Examples

Basic Usage

import mlflow

with mlflow.start_run():
    # Log a single metric
    mlflow.log_metric("mse", 2500.00)

    # Log a metric with a step for training curves
    for epoch in range(10):
        loss = 1.0 / (epoch + 1)
        mlflow.log_metric("train_loss", loss, step=epoch)

Batch Metric Logging

import mlflow

with mlflow.start_run():
    # Log multiple evaluation metrics at once
    eval_metrics = {
        "accuracy": 0.95,
        "precision": 0.93,
        "recall": 0.91,
        "f1_score": 0.92,
    }
    mlflow.log_metrics(eval_metrics)

Asynchronous Logging

import mlflow

with mlflow.start_run():
    # Log metrics asynchronously in a tight training loop
    for step in range(1000):
        loss = compute_loss(step)  # placeholder: substitute your own loss computation
        mlflow.log_metric("loss", loss, step=step, synchronous=False)

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
