Implementation: LoraConfig / get_peft_model (LLMBook-zh, llmbook-zh.github.io)
| Knowledge Sources | |
|---|---|
| Domains | Deep_Learning, Parameter_Efficient_Finetuning |
| Last Updated | 2026-02-08 00:00 GMT |
Overview
Concrete tool for configuring and applying LoRA adapters to pre-trained models using the PEFT library, as used in the LLMBook repository.
Description
LoraConfig defines LoRA hyperparameters, and get_peft_model applies them to a pre-trained model. In this repository, the default configuration uses rank 16, alpha 16, and dropout 0.05 for causal language modeling.
This is a Wrapper Doc documenting how the LLMBook repository uses the external PEFT library.
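The rank and alpha hyperparameters interact: PEFT scales the low-rank update by lora_alpha / r, so with r = 16 and alpha = 16 the adapter update is applied at scale 1.0. The following is a minimal pure-Python sketch of the LoRA forward pass with illustrative toy dimensions, not the PEFT implementation:

```python
# Sketch of the LoRA forward pass: h = W x + (alpha/r) * B (A x).
# Toy dimensions for illustration; W is frozen, A and B are trainable.
r, alpha = 16, 16
scaling = alpha / r  # with r == alpha, the update is applied at scale 1.0

def matvec(M, x):
    """Plain matrix-vector product over nested lists."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

d_out, d_in = 4, 3
W = [[0.1] * d_in for _ in range(d_out)]  # frozen base weight (d_out x d_in)
A = [[0.01] * d_in for _ in range(r)]     # trainable down-projection (r x d_in)
B = [[0.0] * r for _ in range(d_out)]     # trainable up-projection (d_out x r), zero-init

x = [1.0, 2.0, 3.0]
base = matvec(W, x)
update = matvec(B, matvec(A, x))
h = [b + scaling * u for b, u in zip(base, update)]
# Because B starts at zero, the adapted model initially matches the base model.
```

Zero-initializing B is what makes LoRA a safe wrapper: training starts from the base model's behavior and only gradually deviates as B receives gradient updates.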
Usage
Use LoraConfig + get_peft_model after loading the base model and before training. The resulting PeftModel can be passed directly to HuggingFace Trainer.
Code Reference
Source Location
- Repository: LLMBook-zh
- File: code/7.4 LoRA实践.py
- Lines: 36-42
Signature
LoraConfig(
    task_type: TaskType,
    r: int = 16,
    lora_alpha: int = 16,
    lora_dropout: float = 0.05,
)
get_peft_model(model: PreTrainedModel, peft_config: LoraConfig) -> PeftModel
Import
from peft import LoraConfig, TaskType, get_peft_model
External Reference
- PEFT library: https://github.com/huggingface/peft
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| task_type | TaskType | Yes | TaskType.CAUSAL_LM for language modeling |
| r | int | No | LoRA rank (this repository uses 16) |
| lora_alpha | int | No | Scaling factor; the adapter update is scaled by lora_alpha / r (this repository uses 16) |
| lora_dropout | float | No | Dropout applied to the LoRA layers (this repository uses 0.05) |
| model | PreTrainedModel | Yes | Base model to wrap with LoRA adapters |
Outputs
| Name | Type | Description |
|---|---|---|
| return | PeftModel | Model with LoRA adapters injected, base weights frozen |
Usage Examples
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
peft_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
r=16,
lora_alpha=16,
lora_dropout=0.05,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
# prints trainable vs. total parameter counts; only the injected adapter weights are trainable
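The counts reported by print_trainable_parameters can be sanity-checked by hand. The sketch below is a back-of-envelope estimate under stated assumptions: LoRA applied only to the q_proj and v_proj attention projections of a 7B LLaMA-style model with hidden size 4096 and 32 layers. The exact figures depend on which target modules PEFT selects for the model:

```python
# Back-of-envelope trainable-parameter estimate (hypothetical dimensions).
hidden, layers, r = 4096, 32, 16
targets_per_layer = 2                        # q_proj and v_proj (assumption)
params_per_module = hidden * r + r * hidden  # A (r x d) plus B (d x r)
trainable = params_per_module * targets_per_layer * layers
total = 7_000_000_000                        # rough base-model parameter count
print(f"trainable: {trainable:,} (~{100 * trainable / total:.2f}% of total)")
```

Under these assumptions the adapters add roughly 8.4M trainable parameters, about 0.12% of the base model, which is why LoRA fine-tuning fits on hardware that full fine-tuning does not.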
Related Pages
Requires Environment
- Environment:LLMBook_zh_LLMBook_zh_github_io_PyTorch_CUDA_GPU_Environment
- Environment:LLMBook_zh_LLMBook_zh_github_io_HuggingFace_Transformers_Stack