Implementation:Hpcaitech ColossalAI HybridAdam CosineScheduler
| Knowledge Sources | |
|---|---|
| Domains | Optimization, Deep_Learning |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
Concrete optimization tools provided by ColossalAI: a heterogeneous (CPU+GPU) Adam optimizer and a cosine learning-rate scheduler with linear warmup.
Description
HybridAdam extends CPUAdam to update parameters resident on both GPU and CPU, using fused CUDA kernels for GPU parameters and optimized C++ kernels for CPU parameters; optimizer states can optionally be offloaded to NVMe via nvme_offload_fraction. CosineAnnealingWarmupLR provides cosine annealing preceded by a linear warmup phase.
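With adamw_mode=True (the default), HybridAdam applies decoupled weight decay, i.e. the AdamW update rule. A minimal scalar sketch of that update, written here purely for illustration (this is not ColossalAI's fused kernel code):

```python
import math

def adamw_step(p, grad, m, v, step, lr=1e-3, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=0.0):
    """One scalar AdamW update (adamw_mode=True): decoupled weight decay."""
    beta1, beta2 = betas
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad * grad   # second moment
    m_hat = m / (1 - beta1 ** step)             # bias correction
    v_hat = v / (1 - beta2 ** step)
    p = p - lr * weight_decay * p               # decoupled decay (AdamW)
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v
```

With adamw_mode=False the decay would instead be folded into the gradient (classic L2-regularized Adam).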
Usage
Use HybridAdam as the optimizer for any ColossalAI training workflow, especially with ZeRO or Gemini plugins. Use CosineAnnealingWarmupLR as the standard scheduler for LLM training.
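The scheduler ramps the LR linearly for warmup_steps, then cosine-anneals from the base LR down to eta_min over the remaining steps. A self-contained sketch of that shape (the exact boundary/off-by-one behavior may differ slightly from ColossalAI's WarmupScheduler):

```python
import math

def warmup_cosine_lr(step, total_steps, warmup_steps, base_lr, eta_min=0.0):
    """LR at a given optimizer step: linear warmup, then cosine annealing."""
    if step < warmup_steps:
        return base_lr * (step + 1) / (warmup_steps + 1)  # linear ramp
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * progress))
```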
Code Reference
Source Location
- Repository: ColossalAI
- File (HybridAdam): colossalai/nn/optimizer/hybrid_adam.py
- Lines (HybridAdam): 11-192
- File (CosineAnnealingWarmupLR): colossalai/nn/lr_scheduler/cosine.py
- Lines (CosineAnnealingWarmupLR): 49-65
Signature
class HybridAdam(CPUAdam):
    def __init__(
        self,
        model_params,
        lr: float = 1e-3,
        bias_correction: bool = True,
        betas: Tuple[float, float] = (0.9, 0.999),
        eps: float = 1e-8,
        weight_decay: float = 0,
        adamw_mode: bool = True,
        nvme_offload_fraction: float = 0.0,
        nvme_offload_dir: Optional[str] = None,
    ):
        """
        Hybrid Adam optimizer for CPU+GPU parameter updates.
        """

class CosineAnnealingWarmupLR(WarmupScheduler):
    def __init__(
        self,
        optimizer,
        total_steps: int,
        warmup_steps: int = 0,
        eta_min: float = 0.0,
        last_epoch: int = -1,
    ):
        """
        Cosine annealing LR scheduler with linear warmup.
        """
Import
from colossalai.nn.optimizer import HybridAdam
from colossalai.nn.lr_scheduler import CosineAnnealingWarmupLR
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| model_params | iterable | Yes | Model parameters to optimize |
| lr | float | No | Learning rate (default: 1e-3, typically 5e-6 for LLM SFT) |
| betas | Tuple[float, float] | No | Adam momentum coefficients (default: (0.9, 0.999)) |
| eps | float | No | Numerical stability term added to the denominator (default: 1e-8) |
| weight_decay | float | No | Weight decay coefficient (default: 0, typically 0.1) |
| adamw_mode | bool | No | Apply decoupled AdamW-style weight decay when True (default: True) |
| nvme_offload_fraction | float | No | Fraction of optimizer states to offload to NVMe (default: 0.0) |
| total_steps | int | Yes | Total optimizer steps for the scheduler |
| warmup_steps | int | No | Linear warmup steps (default: 0) |
| eta_min | float | No | Minimum learning rate reached at the end of annealing (default: 0.0) |
Outputs
| Name | Type | Description |
|---|---|---|
| optimizer | HybridAdam | Configured optimizer instance |
| lr_scheduler | CosineAnnealingWarmupLR | Configured LR scheduler instance |
Usage Examples
Standard SFT Configuration
from colossalai.nn.optimizer import HybridAdam
from colossalai.nn.lr_scheduler import CosineAnnealingWarmupLR

# Create optimizer
optimizer = HybridAdam(
    model.parameters(),
    lr=5e-6,
    betas=(0.9, 0.95),
    weight_decay=0.1,
)

# Create scheduler with warmup (total_steps counts optimizer steps)
total_steps = num_epochs * len(train_dataloader) // accumulation_steps
warmup_steps = int(total_steps * 0.03)  # 3% warmup
lr_scheduler = CosineAnnealingWarmupLR(
    optimizer=optimizer,
    total_steps=total_steps,
    warmup_steps=warmup_steps,
    eta_min=0.1 * 5e-6,  # min LR = 10% of base LR
)
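Because total_steps counts optimizer steps rather than micro-batches, the scheduler should be stepped once per optimizer.step(), i.e. at each gradient-accumulation boundary. A quick sanity check of that accounting, using hypothetical numbers in place of the real dataloader:

```python
# Hypothetical numbers standing in for the real training setup above.
num_epochs = 3
batches_per_epoch = 100   # stand-in for len(train_dataloader)
accumulation_steps = 4

total_steps = num_epochs * batches_per_epoch // accumulation_steps
warmup_steps = int(total_steps * 0.03)  # 3% warmup

optimizer_steps = 0
for epoch in range(num_epochs):
    for batch_idx in range(batches_per_epoch):
        # backward() runs on every micro-batch; step only at the boundary,
        # where optimizer.step() and lr_scheduler.step() would be called.
        if (batch_idx + 1) % accumulation_steps == 0:
            optimizer_steps += 1

assert optimizer_steps == total_steps  # scheduler and loop agree
```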
Related Pages
Implements Principle
Requires Environment
Uses Heuristic