
Principle: Snorkel Multitask Training Execution

From Leeroopedia
Knowledge Sources
Domains Training, Multi_Task_Learning, Optimization
Last Updated 2026-02-14 20:00 GMT

Overview

A training orchestration procedure that manages the multi-task optimization loop with configurable scheduling, logging, and checkpointing.

Description

Multitask Training Execution handles the complexity of training models with multiple tasks and dataloaders. Key considerations include:

  • Batch scheduling: How to interleave batches from different task dataloaders (shuffled across tasks vs sequential per task)
  • Loss aggregation: Computing and combining per-task losses
  • Learning rate scheduling: Adjusting learning rates over training
  • Gradient management: Gradient clipping to prevent exploding gradients
  • Monitoring: Logging metrics and saving checkpoints
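The first of these considerations, batch scheduling, can be sketched in plain Python. The function names and the list-of-batches representation below are illustrative assumptions, not Snorkel's API; they only show how "shuffled across tasks" and "sequential per task" differ in the order batches reach the training loop:

```python
import random

def sequential_schedule(loaders):
    # Exhaust each task's dataloader in turn before moving to the next.
    return [(task, batch) for task, batches in loaders.items() for batch in batches]

def shuffled_schedule(loaders, seed=0):
    # Same batches, but interleaved at random so consecutive steps
    # may come from different tasks.
    order = sequential_schedule(loaders)
    random.Random(seed).shuffle(order)
    return order

loaders = {"task_a": [[1, 2], [3, 4]], "task_b": [[5, 6]]}
seq = sequential_schedule(loaders)  # task_a's batches first, then task_b's
mix = shuffled_schedule(loaders)    # same batches, random interleaving
```

Shuffled scheduling is usually preferred for multitask training because it prevents later tasks' gradients from overwriting what earlier tasks learned within an epoch.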

This is the same Trainer used for both general multi-task classification and slice-aware training.

Usage

Use this principle when training MultitaskClassifier models. The Trainer handles all training logistics; configure it through TrainerConfig parameters.
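As an illustration of the kinds of knobs such a configuration exposes, the dictionary below mirrors the considerations listed above. Every key name and default here is an assumption chosen for this sketch, not a verified TrainerConfig field; consult the installed Snorkel version for the actual parameter names:

```python
# Hypothetical trainer configuration (key names are illustrative
# assumptions modeled on the considerations above, not a verified API).
trainer_config = {
    "n_epochs": 10,                  # passes over all task dataloaders
    "lr": 1e-3,                      # base learning rate
    "grad_clip": 1.0,                # max gradient norm; None disables clipping
    "batch_scheduler": "shuffled",   # "shuffled" or "sequential"
    "log_freq": 20,                  # steps between metric logs
    "checkpointing": True,           # save checkpoints during training
}
```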

Theoretical Basis

The training loop follows:

For each epoch:

  1. Draw batches according to the configured batch_scheduler
  2. For each batch (x_t, y_t) from task t:
    1. Forward pass: ŷ_t = model(x_t, t)
    2. Compute loss: ℓ_t = loss_t(ŷ_t, y_t)
    3. Backward pass: compute gradients ∇_θ ℓ_t
    4. Gradient clipping: rescale gradients whose norm exceeds the clip threshold
    5. Optimizer step: update parameters θ
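The loop above can be made concrete with a toy, self-contained sketch: one scalar weight per task, squared-error loss, an analytic gradient in place of autograd, max-norm clipping, and a plain SGD step. This is an illustration of the loop structure under those stated simplifications, not Snorkel's implementation:

```python
import random

def train(tasks, n_epochs=5, lr=0.1, clip=1.0, seed=0):
    """Toy multitask loop: model(x, t) = w_t * x, MSE loss,
    analytic gradients, max-norm clipping, SGD updates."""
    rng = random.Random(seed)
    weights = {t: 0.0 for t in tasks}
    for _ in range(n_epochs):
        # 1. Draw (task, batch) pairs via a shuffled scheduler.
        batches = [(t, b) for t, data in tasks.items() for b in data]
        rng.shuffle(batches)
        for t, (x, y) in batches:
            w = weights[t]
            y_hat = w * x                   # 2a. forward pass
            loss = (y_hat - y) ** 2         # 2b. per-task loss
            grad = 2.0 * (y_hat - y) * x    # 2c. backward pass (analytic)
            if abs(grad) > clip:            # 2d. gradient clipping
                grad = clip if grad > 0 else -clip
            weights[t] = w - lr * grad      # 2e. optimizer step
    return weights

# Each task fits its own scalar target: task "a" → 2.0, task "b" → -1.0.
tasks = {"a": [(1.0, 2.0)], "b": [(1.0, -1.0)]}
weights = train(tasks, n_epochs=50)
```

Because the scheduler interleaves both tasks every epoch, each per-task weight converges toward its own target rather than one task dominating the updates.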

Related Pages

Implemented By
