
Implementation: Trainer Train LoRA (LLMBook-zh, llmbook-zh.github.io)

From Leeroopedia


Knowledge Sources
Domains: Deep_Learning, Training, Parameter_Efficient_Finetuning
Last Updated: 2026-02-08 00:00 GMT

Overview

Concrete tool for training LoRA adapters with the HuggingFace Trainer class from the Transformers library.

Description

In the LoRA context, Trainer.train() trains only the adapter parameters of a PeftModel. The repository uses the same Trainer infrastructure as pre-training and SFT, but the model passed is a PeftModel with frozen base weights. Saved checkpoints contain only the small adapter files.
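The effect of freezing the base weights can be made concrete with a torch-only sketch. Note that peft's get_peft_model performs the real adapter injection; the LoRALinear class and the dimensions below are illustrative, not the repository's code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA layer: frozen base weight plus a trainable low-rank update."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze base weights
        self.base.bias.requires_grad_(False)
        # Only these two low-rank factors remain trainable.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(64, 64, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # 1024 5184: the adapter is a small fraction of the layer
```

This is the same split peft reports via PeftModel.print_trainable_parameters(): the optimizer only ever sees the small adapter factors.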

This is a Wrapper Doc documenting how the LLMBook repository uses HuggingFace Trainer for LoRA training.

Usage

Set up a Trainer with a PeftModel returned by get_peft_model, then call trainer.train().

Code Reference

Source Location

  • Repository: LLMBook-zh
  • File: code/7.4 LoRA实践.py
  • Lines: 43

Signature

trainer = Trainer(
    model=model,          # PeftModel with LoRA adapters
    args=args,            # Arguments (a TrainingArguments subclass) with LoRA fields
    tokenizer=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()  # Trains only LoRA adapter weights
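"Trains only LoRA adapter weights" follows from the optimizer being built over parameters that still require gradients. A torch-only sketch of that mechanism (the names and sizes here are illustrative stand-ins, not the repository's code):

```python
import torch
import torch.nn as nn

# Base layer frozen, low-rank factors trainable: a stand-in for a PeftModel.
base = nn.Linear(16, 16)
base.weight.requires_grad_(False)
base.bias.requires_grad_(False)
lora_A = nn.Parameter(torch.randn(4, 16))
lora_B = nn.Parameter(torch.randn(16, 4))

# Like Trainer, build the optimizer only over parameters that require grad.
params = [p for p in [base.weight, base.bias, lora_A, lora_B] if p.requires_grad]
opt = torch.optim.AdamW(params, lr=1e-2)

x = torch.randn(2, 16)
before = base.weight.clone()
out = base(x) + x @ lora_A.T @ lora_B.T
out.sum().backward()  # no grad accumulates on the frozen base weight
opt.step()

print(torch.equal(base.weight, before))  # True: the base weight is untouched
```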

Import

from transformers import Trainer

I/O Contract

Inputs

Name          | Type              | Required | Description
model         | PeftModel         | Yes      | Model with LoRA adapters from get_peft_model
args          | TrainingArguments | Yes      | Training hyperparameters
tokenizer     | AutoTokenizer     | Yes      | Tokenizer
train_dataset | Dataset           | Yes      | Training data

Outputs

Name                 | Type        | Description
train() return value | TrainOutput | Training metrics
checkpoints          | Files       | LoRA adapter weight files only
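Checkpoints stay small because only the parameters that still require gradients (the adapter matrices) need to be serialized; PEFT's save_pretrained performs the real adapter-only serialization. A torch-only sketch of the size difference (names and shapes below are illustrative):

```python
import io
import torch
import torch.nn as nn

# Stand-in for a PeftModel's parameter split: frozen base, trainable adapter.
model = nn.Linear(512, 512)
model.weight.requires_grad_(False)
model.bias.requires_grad_(False)
lora_A = nn.Parameter(torch.zeros(8, 512))
lora_B = nn.Parameter(torch.zeros(512, 8))

full_sd = {"weight": model.weight, "bias": model.bias,
           "lora_A": lora_A, "lora_B": lora_B}
# Keep only trainable parameters, as an adapter checkpoint does.
adapter_sd = {k: v for k, v in full_sd.items() if v.requires_grad}

def nbytes(sd):
    buf = io.BytesIO()
    torch.save({k: v.detach() for k, v in sd.items()}, buf)
    return buf.getbuffer().nbytes

print(sorted(adapter_sd))                         # ['lora_A', 'lora_B']
print(nbytes(adapter_sd) < nbytes(full_sd) / 10)  # True: adapter file is far smaller
```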

Usage Examples

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=16, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(model, peft_config)  # freezes base weights, injects LoRA adapters

args = TrainingArguments(output_dir="output")
trainer = Trainer(model=model, args=args, tokenizer=tokenizer, train_dataset=dataset)  # dataset: a tokenized Dataset
trainer.train()

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
