Principle:Alibaba ROLL LoRA Checkpoint Merging
| Knowledge Sources | |
|---|---|
| Domains | Model_Management, Diffusion_Models |
| Last Updated | 2026-02-07 20:00 GMT |
Overview
A model management principle for merging trained LoRA adapter weights back into the base diffusion model for deployment.
Description
After training, LoRA adapter weights are stored separately from the base model. For deployment, the adapters must be merged back into the base weights. The merge computes: merged_weight = base_weight + alpha * (W_up @ W_down), where W_up and W_down are the low-rank LoRA matrices and alpha is the LoRA scaling factor (many LoRA implementations scale by alpha / rank rather than alpha alone).
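The formula above can be sketched as follows. This is an illustrative example, not ROLL's actual merging code; the function name and the optional alpha / rank scaling convention are assumptions.

```python
import numpy as np

def merge_lora(base_weight, w_up, w_down, alpha, rank=None):
    """Fold a LoRA adapter into a base weight matrix.

    Hypothetical helper: scaling defaults to alpha, but when rank is
    given it uses alpha / rank, the convention many LoRA implementations
    follow. The real ROLL merge code may differ.
    """
    scaling = alpha / rank if rank is not None else alpha
    return base_weight + scaling * (w_up @ w_down)

# Tiny example with d = 4, k = 4, rank r = 2.
rng = np.random.default_rng(0)
base = rng.standard_normal((4, 4))
w_up = rng.standard_normal((4, 2))    # d x r
w_down = rng.standard_normal((2, 4))  # r x k
merged = merge_lora(base, w_up, w_down, alpha=1.0)
assert merged.shape == base.shape  # merged weight replaces base in-place shape-wise
```

After merging, the adapter matrices can be discarded and the model served exactly like the original architecture, with no extra inference cost.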
This principle also covers shard consolidation when the base model is stored in multiple safetensor shard files.
Usage
Use after reward flow training is complete, before deploying the fine-tuned diffusion model for inference.
Theoretical Basis
LoRA merge:

W_merged = W_base + alpha * (W_up @ W_down)

where W_up ∈ R^{d×r}, W_down ∈ R^{r×k}, and the rank r ≪ min(d, k), so the adapter holds far fewer parameters than the full weight matrix.
Related Pages
Implemented By
Related Heuristics
No specific heuristics inform this principle.