
Principle:Eric Mitchell Direct preference optimization Checkpoint Saving

From Leeroopedia


Knowledge Sources
Domains Checkpointing, Training, Deep_Learning
Last Updated 2026-02-08 02:00 GMT

Overview

A model persistence technique that saves policy weights, optimizer state, and learning rate scheduler state to disk for training resumption and downstream use.

Description

Checkpoint saving serializes the complete training state to disk at regular intervals and at the end of training. This serves two purposes:

  • Training resumption: Saving optimizer and scheduler state alongside model weights allows training to be resumed from the exact point it was interrupted.
  • Downstream use: The saved policy weights (policy.pt) from SFT training are used as initialization for DPO training, forming the critical link between the two training stages.

Each checkpoint is a dictionary containing the step index, state dict, and evaluation metrics, enabling tracking of training progress across checkpoints.

Usage

Use this principle at evaluation intervals during training and at the end of training. The SFT checkpoint (policy.pt) is a required input for DPO training.
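As an illustrative sketch of the SFT-to-DPO handoff, the snippet below saves a checkpoint in the documented format (step index, state dict, evaluation metrics) and reloads it to initialize a second model. A toy `nn.Linear` stands in for the actual language-model policy, and the filename follows the `policy.pt` convention; both are assumptions for illustration, not the real codebase.

```python
import torch
import torch.nn as nn

# Toy stand-in for the SFT policy (the real model is a language model).
sft_policy = nn.Linear(4, 4)

# Save in the documented checkpoint format: step index, state dict, metrics.
checkpoint = {
    'step_idx': 1000,
    'state': sft_policy.state_dict(),
    'metrics': {'eval_loss': 1.23},
}
torch.save(checkpoint, 'policy.pt')

# DPO training later reloads policy.pt to initialize its policy weights.
loaded = torch.load('policy.pt', weights_only=True)
dpo_policy = nn.Linear(4, 4)
dpo_policy.load_state_dict(loaded['state'])
```

Because the checkpoint also carries `step_idx` and `metrics`, downstream code can report which training step and evaluation results the initialization weights came from.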

Theoretical Basis

Checkpoint saving preserves the complete training state (θ, ω, η_t), where θ represents the model parameters, ω represents the optimizer state (e.g., running averages for RMSprop), and η_t represents the scheduler state.

Pseudo-code:

# Abstract checkpointing (NOT the actual implementation)
torch.save({
    'step_idx': current_step,
    'state': policy.state_dict(),
    'metrics': eval_metrics,
}, os.path.join(output_dir, 'policy.pt'))
# Repeat for the optimizer and scheduler, saving optimizer.state_dict()
# and scheduler.state_dict() so training can resume from the exact step.
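The full (θ, ω, η_t) round trip can be sketched as follows. This is a minimal, self-contained example under stated assumptions: a toy linear model in place of the policy, RMSprop (as mentioned above), and a simple warmup `LambdaLR` scheduler chosen purely for illustration.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / 10))

# Take a few steps so the optimizer (running averages) and
# scheduler hold non-trivial state worth checkpointing.
for _ in range(5):
    loss = model(torch.randn(16, 8)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()

# Save the complete training state: theta, omega, eta_t.
torch.save({'step_idx': 5, 'state': model.state_dict()}, 'policy.pt')
torch.save({'step_idx': 5, 'state': optimizer.state_dict()}, 'optimizer.pt')
torch.save({'step_idx': 5, 'state': scheduler.state_dict()}, 'scheduler.pt')

# Resume: rebuild the objects, then restore each component's state.
model2 = nn.Linear(8, 2)
optimizer2 = torch.optim.RMSprop(model2.parameters(), lr=1e-3)
scheduler2 = torch.optim.lr_scheduler.LambdaLR(
    optimizer2, lambda step: min(1.0, (step + 1) / 10))
model2.load_state_dict(torch.load('policy.pt', weights_only=True)['state'])
optimizer2.load_state_dict(
    torch.load('optimizer.pt', weights_only=True)['state'])
scheduler2.load_state_dict(
    torch.load('scheduler.pt', weights_only=True)['state'])
```

Restoring all three components means the resumed run continues with the same parameters, the same RMSprop running averages, and the same learning-rate schedule position as the interrupted run; restoring only the model weights would silently reset the warmup and optimizer statistics.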

Related Pages

Implemented By
