Principle: LoRA Training Execution (source: LLMBook-zh.github.io)
| Knowledge Sources | |
|---|---|
| Domains | Deep_Learning, Training, Parameter_Efficient_Finetuning |
| Last Updated | 2026-02-08 00:00 GMT |
Overview
The training loop that fine-tunes only the LoRA adapter parameters while keeping the base model weights frozen.
Description
LoRA Training Execution uses the standard HuggingFace Trainer to train a PeftModel. Only the LoRA adapter parameters (the low-rank A and B matrices) are updated during training, so gradients and optimizer states are kept for only a small fraction of the parameters, giving much lower memory usage and faster training than full fine-tuning. Saved checkpoints contain only the adapter weights, not the full model.
Usage
Use this after applying LoRA configuration via get_peft_model. The training process is identical to standard Trainer usage, but only adapter weights are updated and saved.
Theoretical Basis
LoRA training follows the same optimization loop as standard full-parameter training, with three key differences:
- Only LoRA parameters have requires_grad=True.
- Gradient computation and optimizer steps only affect the small adapter matrices.
- Gradient and optimizer-state memory shrink roughly in proportion to the ratio of adapter parameters to total parameters; activation memory is largely unchanged.
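As a rough arithmetic check of that ratio (the 4096-wide square layer and rank 8 are illustrative values, not from the source):

```python
# One d x d weight matrix adapted at rank r: LoRA adds A (r x d) and B (d x r).
d, r = 4096, 8
base_params = d * d          # frozen: no gradients, no optimizer state
adapter_params = 2 * r * d   # trainable A and B matrices
ratio = adapter_params / base_params
print(f"{adapter_params} trainable vs {base_params} frozen "
      f"({ratio:.2%} of the layer)")  # 65536 vs 16777216 (0.39%)
```

Since optimizers like Adam keep extra state per trainable parameter, gradient and optimizer memory for this layer shrink by roughly the same factor.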