Principle: Evidently Report Execution
| Knowledge Sources | |
|---|---|
| Domains | ML_Monitoring, Evaluation, Data_Quality |
| Last Updated | 2026-02-14 12:00 GMT |
Overview
An evaluation execution mechanism that runs configured metrics over datasets and produces a snapshot of results.
Description
Report Execution is the step where a configured Report is run against one or two datasets (current and optional reference) to produce a Snapshot containing all computed metric results, test outcomes, and visualizations. This is the core computation step in every Evidently workflow.
The execution pipeline:
- Converts raw DataFrames to Dataset objects if needed
- Creates an execution Context with dataset metadata
- Iterates through all configured metrics and presets
- Expands presets into individual metrics
- Computes each metric, resolving inter-metric dependencies
- Collects test results from metrics with auto-tests
- Produces a Snapshot with all results
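The steps above can be sketched in plain Python. Everything here (`Metric`, `Preset`, `run_report`, the dict-based context and snapshot) is a simplified stand-in for illustration, not the actual Evidently classes or API:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for Evidently's metric and preset classes.
@dataclass
class Metric:
    name: str

    def compute(self, context):
        # A real metric would inspect current/reference data in the context.
        return f"computed {self.name} over {len(context['current'])} rows"

@dataclass
class Preset:
    metrics: list

def expand_presets(items):
    """Flatten presets into the individual metrics they bundle."""
    expanded = []
    for item in items:
        if isinstance(item, Preset):
            expanded.extend(item.metrics)
        else:
            expanded.append(item)
    return expanded

def run_report(metrics, current, reference=None):
    # The execution Context carries dataset metadata to every metric.
    context = {"current": current, "reference": reference}
    results = {m.name: m.compute(context) for m in expand_presets(metrics)}
    return {"results": results}  # stands in for a Snapshot

snapshot = run_report(
    [Metric("row_count"), Preset([Metric("missing_values"), Metric("drift")])],
    current=[{"a": 1}, {"a": 2}],
)
print(sorted(snapshot["results"]))  # ['drift', 'missing_values', 'row_count']
```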
Usage
Use this principle after configuring a Report with metrics and preparing your datasets. For monitoring dashboards, execution is repeated with different data batches and timestamps, producing one snapshot per batch.
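The monitoring pattern can be sketched as a loop that runs the same configured report over successive batches and tags each result with a timestamp. The `run_report` and `monitor` helpers below are hypothetical stand-ins, not Evidently functions:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical stand-in for running a configured report over one batch;
# a real run would produce a full snapshot of metric results.
def run_report(batch):
    return {"row_count": len(batch)}

def monitor(batches, start, period=timedelta(hours=1)):
    """Run the same report over successive batches, tagging each snapshot
    with its batch timestamp (the dashboard's time axis)."""
    snapshots = []
    for i, batch in enumerate(batches):
        snapshots.append({
            "timestamp": start + i * period,
            "results": run_report(batch),
        })
    return snapshots

start = datetime(2026, 2, 14, tzinfo=timezone.utc)
snaps = monitor([[1, 2, 3], [4, 5]], start)
print([s["results"]["row_count"] for s in snaps])  # [3, 2]
```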
Theoretical Basis
Report execution follows the pipeline pattern with dependency resolution:
```python
# Pseudocode: Execution pipeline
context = create_context(current_data, reference_data)
results = {}
for metric in expand_presets(report.metrics):
    deps = metric.get_dependencies()
    for dep in deps:
        if dep not in results:
            results[dep] = compute(dep, context)
    results[metric] = compute(metric, context, dependencies=results)
snapshot = Snapshot(results, tests, widgets)
```
The dependency resolution ensures metrics that depend on other metrics (e.g., drift metrics needing column statistics) are computed in the correct order.
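A minimal, self-contained sketch of this resolution with memoization (all metric names and the registry layout are hypothetical): a drift metric declares a dependency on column statistics, which are computed once before it and then reused:

```python
# Hypothetical metric registry: each metric names its dependencies and a
# compute function that receives the already-computed dependency results.
METRICS = {
    "column_stats": {"deps": [], "fn": lambda deps: {"mean": 2.0}},
    "drift": {
        "deps": ["column_stats"],
        "fn": lambda deps: {"drifted": deps["column_stats"]["mean"] > 1.0},
    },
}

def resolve(name, results, order):
    """Compute `name`, recursively computing its dependencies first.
    Memoizes into `results` so shared dependencies run only once."""
    if name in results:
        return results[name]
    for dep in METRICS[name]["deps"]:
        resolve(dep, results, order)
    dep_results = {d: results[d] for d in METRICS[name]["deps"]}
    results[name] = METRICS[name]["fn"](dep_results)
    order.append(name)
    return results[name]

results, order = {}, []
resolve("drift", results, order)
print(order)  # ['column_stats', 'drift'] -- dependency computed first
```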