Principle:Dotnet Machinelearning Trial Result Extraction

From Leeroopedia


Knowledge Sources
  Domains: Machine_Learning, AutoML
  Last Updated: 2026-02-09 00:00 GMT

Overview

After an AutoML experiment completes, the best trial result encapsulates the winning trained model, its evaluation metric, the tuner's optimization loss, and resource-usage metadata, providing everything needed to deploy or further analyze that model.

Description

The trial result is the output artifact of an AutoML experiment. It bundles together all information about the best-performing trial:

  • Trained model: The fully fitted model (an ITransformer) that can be used directly for predictions or saved for later deployment. This model was trained with the specific algorithm and hyperparameters that produced the best evaluation score.
  • Metric value: The evaluation score (e.g., accuracy, AUC, F1) achieved by this model on the validation set. This is the primary criterion by which the experiment selected this trial as the best.
  • Loss value: The internal loss used by the tuner for optimization. For maximization metrics like accuracy, the loss is typically 1 - metric. Lower loss is always better from the tuner's perspective.
  • Training duration: The wall-clock time in milliseconds that this particular trial took to train and evaluate. This is useful for understanding the computational cost of the selected model.
  • Trial settings: The complete hyperparameter configuration (algorithm choice, learning rate, regularization, tree depth, etc.) that produced this result. This metadata is essential for reproducibility and for understanding why this particular configuration succeeded.
  • Resource usage: Optional peak CPU utilization and peak memory consumption during the trial, useful for capacity planning in production deployments.

The trial result serves as the bridge between AutoML search and production deployment. The trained model can be saved to disk for serving, and the trial settings can be logged for experiment tracking and reproducibility.
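
A minimal sketch of that bridge, assuming ML.NET's MLContext.Model.Save/Load API; here mlContext, result, and trainData are illustrative placeholders from the surrounding experiment code:

  // Persist the winning model for serving; the schema must match the
  // data the model will see at prediction time.
  mlContext.Model.Save(result.Model, trainData.Schema, "best-model.zip");

  // Later, reload it for predictions without re-running the search.
  ITransformer servingModel = mlContext.Model.Load("best-model.zip", out DataViewSchema schema);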

Usage

Extract the trial result immediately after Run() or RunAsync() completes. Use the Model property to obtain the trained transformer for predictions or model saving. Inspect the Metric and Loss properties to assess model quality. Log the TrialSettings for reproducibility. Check DurationInMilliseconds, PeakCpu, and PeakMemoryInMegaByte when evaluating deployment feasibility in resource-constrained environments.
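
The following is a minimal end-to-end sketch assuming ML.NET's AutoML v2 API (Microsoft.ML.AutoML); the pipeline, datasets, and label column are illustrative placeholders, and the exact fluent setters may vary by version:

  using Microsoft.ML;
  using Microsoft.ML.AutoML;

  var mlContext = new MLContext(seed: 1);

  // Experiment setup; pipeline, trainData, and validationData are assumed
  // to be defined elsewhere for the problem at hand.
  var experiment = mlContext.Auto().CreateExperiment()
      .SetPipeline(pipeline)
      .SetDataset(trainData, validationData)
      .SetBinaryClassificationMetric(BinaryClassificationMetric.Accuracy, labelColumn: "Label")
      .SetTrainingTimeInSeconds(120);

  TrialResult best = await experiment.RunAsync();

  // The trained transformer, ready for predictions or saving.
  ITransformer model = best.Model;

  // Human-facing score and tuner-facing loss.
  Console.WriteLine($"Metric: {best.Metric:F4}, Loss: {best.Loss:F4}");

  // Winning configuration (via the Parameter member, assumed present),
  // logged for reproducibility.
  Console.WriteLine(best.TrialSettings.Parameter);

  // Cost of the winning trial; the Peak* values may be null if not tracked.
  Console.WriteLine($"Duration: {best.DurationInMilliseconds} ms, " +
      $"CPU: {best.PeakCpu}, Memory: {best.PeakMemoryInMegaByte} MB");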

Theoretical Basis

The trial result represents a single point in the evaluated search space:

TrialResult = (model, metric, loss, config, resources)

where:
  model   = Fit(Pipeline(config), D_train)
  metric  = Evaluate(model, D_validation)
  loss    = ToLoss(metric)
  config  = (algorithm_choice, hyperparameters)
  resources = (duration_ms, peak_cpu, peak_memory_mb)

The "best" trial is selected by:

best = argmin over all completed trials T_i of loss(T_i)
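
As a concrete instance of these two formulas, a sketch in C# using System.Linq's MinBy (.NET 6+); completedTrials and isMaximizing are hypothetical placeholders, not ML.NET API:

  // ToLoss: for maximization metrics (accuracy, AUC, F1) lower loss must
  // mean better, so loss = 1 - metric; minimization metrics are already losses.
  static double ToLoss(double metric, bool isMaximizing) =>
      isMaximizing ? 1 - metric : metric;

  // best = argmin over all completed trials T_i of loss(T_i)
  TrialResult best = completedTrials.MinBy(t => t.Loss);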

In experiment tracking, the trial result also captures the information needed for reproducibility. Given the same data and the same trial settings (algorithm + hyperparameters + random seed), the model should be reproducible. The trial settings therefore serve as both an explanation of the search result and a recipe for recreating it.
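
One lightweight way to capture that recipe (an assumption about tracking style, not a prescribed ML.NET mechanism) is to serialize the settings alongside the scores with System.Text.Json:

  using System.IO;
  using System.Text.Json;

  // Record everything needed to explain and recreate the winning trial.
  // The JSON shape of TrialSettings is not guaranteed to be stable across versions.
  var record = new { best.Metric, best.Loss, Settings = best.TrialSettings };
  File.WriteAllText("best-trial.json",
      JsonSerializer.Serialize(record, new JsonSerializerOptions { WriteIndented = true }));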

The distinction between metric and loss is important: the metric is human-interpretable (e.g., "93.5% accuracy") while the loss is tuner-interpretable (e.g., 0.065). The experiment always reports both, ensuring that humans and the optimization algorithm each have the representation they need.

Related Pages

Implemented By
