Implementation:Confident AI DeepEval EvaluationDataset Push

From Leeroopedia
Sources: DeepEval
Domains: Synthetic_Data, LLM_Evaluation, Data_Management
Last Updated: 2026-02-14 09:00 GMT

Overview

The push method on the EvaluationDataset class publishes evaluation data to the Confident AI cloud platform, enabling collaborative review and centralized dataset management.

Description

push uploads the contents of an EvaluationDataset (its goldens) to the Confident AI platform under a specified alias. The alias parameter serves as the dataset identifier on the platform, and the finalized flag indicates whether the dataset is ready for evaluation use or still under review. Once pushed, the dataset is accessible through the Confident AI web dashboard for review, annotation, and programmatic retrieval.

Usage

Call this method after populating an EvaluationDataset with goldens to publish the data to the Confident AI platform for team-wide access and review.

Code Reference

Source Location: Repository: confident-ai/deepeval, File: deepeval/dataset/dataset.py (L717-751)

Signature:

def push(
    self,
    alias: str,
    finalized: bool = True,
):
    ...

Import:

from deepeval.dataset import EvaluationDataset

I/O Contract

Inputs:

alias (str, required): Dataset name/identifier on the Confident AI platform
finalized (bool, optional, default True): Whether the dataset is finalized and ready for evaluation use

Outputs:

  • Dataset is uploaded and published on the Confident AI platform under the specified alias

Usage Examples

from deepeval.dataset import EvaluationDataset

dataset = EvaluationDataset(goldens=goldens)
dataset.push(alias="my-eval-dataset-v2", finalized=True)
