Principle:Volcengine Verl SFT Checkpointing

From Leeroopedia


Knowledge Sources
Domains: Training_Infrastructure, Model_Management, Distributed_Systems
Last Updated: 2026-02-07 14:00 GMT

Overview

The process of gathering model weights sharded by FSDP (Fully Sharded Data Parallel), consolidating them on a single rank, and saving them in HuggingFace format for deployment or continued training.

Description

SFT checkpointing saves model checkpoints during supervised fine-tuning with FSDP. Because model parameters are sharded across GPUs during training, checkpoint saving requires gathering all shards onto a single rank before serialization.

The process:

  1. Gather sharded parameters to a single process (rank 0)
  2. Save model weights in HuggingFace format (compatible with from_pretrained)
  3. Save tokenizer and configuration files alongside weights
  4. Optionally copy checkpoints to HDFS for distributed storage (see the sketch after this list)

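Step 4 is typically a plain filesystem copy. Below is a minimal sketch using the standard hdfs dfs CLI via subprocess; the helper name and paths are illustrative, not verl's actual API:

import subprocess

def copy_to_hdfs(local_dir: str, hdfs_dir: str) -> None:
    # Illustrative helper: mirror a local checkpoint directory to HDFS
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir], check=True)
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_dir, hdfs_dir], check=True)
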
For LoRA models, only the adapter weights are saved (much smaller than full model checkpoints).
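
Saving only the adapter is the behavior of save_pretrained on a PEFT-wrapped model. A minimal sketch, assuming the model was wrapped with peft.get_peft_model (adapter_checkpoint_path is illustrative):

import torch.distributed as dist

# `model` is assumed to be a peft.PeftModel; for a PeftModel,
# save_pretrained writes only the adapter weights and adapter_config.json,
# not the base model.
if dist.get_rank() == 0:
    model.save_pretrained(adapter_checkpoint_path)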

Usage

SFT checkpointing is triggered at intervals configured by trainer.save_freq. Checkpoints are saved to trainer.default_local_dir.
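
A minimal sketch of this gating inside the training loop; the config keys are the ones named above, while save_checkpoint and the step-directory naming are illustrative:

import os

# trainer.save_freq and trainer.default_local_dir are the config keys
# named above; save_checkpoint and the directory naming are illustrative.
if config.trainer.save_freq > 0 and global_step % config.trainer.save_freq == 0:
    ckpt_dir = os.path.join(config.trainer.default_local_dir,
                            f"global_step_{global_step}")
    save_checkpoint(ckpt_dir)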

Theoretical Basis

FSDP checkpoint saving follows a gather-then-save pattern:

# FSDP checkpoint saving: gather sharded weights, then save on rank 0.
# `model` is assumed to be FSDP-wrapped around a HuggingFace module.
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import FullStateDictConfig, StateDictType

# 1. Gather sharded weights (offloaded to CPU and materialized only on
#    rank 0, to avoid exhausting GPU memory with the full state dict)
cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, cfg):
    full_state_dict = model.state_dict()
# 2. Save on rank 0 only, passing the gathered (unsharded) state dict
if dist.get_rank() == 0:
    model.module.save_pretrained(checkpoint_path, state_dict=full_state_dict)
    tokenizer.save_pretrained(checkpoint_path)
# 3. Synchronize so no rank proceeds before the save completes
dist.barrier()
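
Because the checkpoint is written in HuggingFace format, it can be reloaded with from_pretrained for deployment or continued training. A minimal sketch (the checkpoint path is illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a saved SFT checkpoint for inference or further training
model = AutoModelForCausalLM.from_pretrained("checkpoints/global_step_1000")
tokenizer = AutoTokenizer.from_pretrained("checkpoints/global_step_1000")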
