
Implementation:Evidentlyai Evidently Report Run

From Leeroopedia
Knowledge Sources
Domains ML_Monitoring, Evaluation
Last Updated 2026-02-14 12:00 GMT

Overview

A concrete method from the Evidently library for executing a configured Report's evaluation pipeline over one or two provided datasets.

Description

Report.run() executes all configured metrics against the provided datasets and returns a Snapshot with computed results. It accepts current data (required) and optional reference data for comparison-based metrics like drift detection. The method handles DataFrame-to-Dataset conversion automatically.

Usage

Call this method after configuring a Report and preparing your datasets. It is the central execution point in every Evidently workflow.

Code Reference

Source Location

  • Repository: evidently
  • File: src/evidently/core/report.py
  • Lines: L903-938

Signature

class Report:
    def run(
        self,
        current_data: PossibleDatasetTypes,
        reference_data: Optional[PossibleDatasetTypes] = None,
        additional_data: Optional[Dict[str, PossibleDatasetTypes]] = None,
        timestamp: Optional[datetime] = None,
        metadata: Dict[str, MetadataValueType] = None,
        tags: List[str] = None,
        name: Optional[str] = None,
    ) -> Snapshot:
        """
        Args:
            current_data: Current dataset (DataFrame or Dataset).
            reference_data: Optional reference dataset for comparison/drift.
            additional_data: Optional dict of additional datasets by name.
            timestamp: Optional timestamp for the snapshot (defaults to now).
            metadata: Optional metadata to merge with report metadata.
            tags: Optional tags to merge with report tags.
            name: Optional name for the snapshot.
        Returns:
            Snapshot with computed metric results, tests, and visualizations.
        """

Import

from evidently import Report

I/O Contract

Inputs

Name Type Required Description
current_data PossibleDatasetTypes Yes Current dataset to evaluate (DataFrame or Dataset)
reference_data Optional[PossibleDatasetTypes] No Reference baseline dataset for comparison
additional_data Optional[Dict[str, PossibleDatasetTypes]] No Named additional datasets
timestamp Optional[datetime] No Snapshot timestamp (defaults to now)
metadata Dict[str, MetadataValueType] No Additional metadata
tags List[str] No Additional tags
name Optional[str] No Snapshot name

Outputs

Name Type Description
return value Snapshot Object with computed metrics, tests, and visualizations

Usage Examples

Basic Execution

import pandas as pd

from evidently import Report
from evidently.metrics import ValueDrift

# run() accepts pandas DataFrames directly; Dataset conversion is automatic
current = pd.DataFrame({"age": [25, 32, 47, 51, 38]})
reference = pd.DataFrame({"age": [26, 30, 45, 49, 40]})

report = Report([ValueDrift(column="age")])
snapshot = report.run(current, reference)

Batch Monitoring with Timestamps

from datetime import datetime, timedelta

for i, batch in enumerate(data_batches):
    current = Dataset.from_pandas(batch, data_definition=data_def)
    snapshot = report.run(
        current_data=current,
        reference_data=reference,
        # timedelta arithmetic avoids overflowing the day-of-month
        # that datetime(2024, 1, 1 + i) would hit once i >= 30
        timestamp=datetime(2024, 1, 1) + timedelta(days=i),
        name=f"batch_{i}",
    )
    # Process or store the snapshot

Related Pages

Implements Principle

Requires Environment
