Implementation: Hugging Face TRL get_peft_config (SFT)
| Knowledge Sources | |
|---|---|
| Domains | NLP, Training |
| Last Updated | 2026-02-06 17:00 GMT |
Overview
Concrete wrapper, provided by the TRL library, that translates TRL's ModelConfig LoRA settings into a PEFT LoraConfig object.
Description
The get_peft_config() function is a thin adapter between TRL's configuration system and the PEFT library. It reads LoRA-related fields from a ModelConfig dataclass and constructs a LoraConfig object that the SFTTrainer uses to wrap the base model with trainable LoRA adapters via peft.get_peft_model().
If model_args.use_peft is False, the function returns None, indicating that full fine-tuning should be used instead. If use_peft is True but the PEFT library is not installed, it raises a ValueError with installation instructions.
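For illustration, the sketch below replays the wrapping step that SFTTrainer performs internally; the model name and the use of ModelConfig's default LoRA settings are illustrative, not prescribed by TRL:
from peft import get_peft_model
from transformers import AutoModelForCausalLM
from trl import ModelConfig
from trl.trainer.utils import get_peft_config

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")  # illustrative base model
peft_config = get_peft_config(ModelConfig(use_peft=True))  # LoraConfig built from the ModelConfig defaults
peft_model = get_peft_model(model, peft_config)  # the same wrapping SFTTrainer applies
peft_model.print_trainable_parameters()  # reports trainable vs. total parameter counts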
Usage
Use this function in any TRL training script where you want to optionally enable LoRA-based parameter-efficient fine-tuning based on command-line or YAML configuration.
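In a typical TRL example script, the wiring looks like the following minimal sketch (using TRL's TrlParser; which argument classes you parse is up to your script):
from trl import ModelConfig, SFTConfig, TrlParser
from trl.trainer.utils import get_peft_config

parser = TrlParser((SFTConfig, ModelConfig))
training_args, model_args = parser.parse_args_and_config()  # reads CLI flags and/or a YAML config file
peft_config = get_peft_config(model_args)  # None unless --use_peft was passed
Running such a script with, e.g., --use_peft --lora_r 32 --lora_alpha 16 then enables LoRA without any code changes.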
Code Reference
Source Location
- Repository: TRL
- File: trl/trainer/utils.py (lines 309-332)
- File: trl/trainer/model_config.py (lines 18-189, ModelConfig dataclass)
Signature and Implementation
def get_peft_config(model_args: ModelConfig) -> "PeftConfig | None":
    # PEFT explicitly disabled: return None so the caller falls back to full fine-tuning
    if model_args.use_peft is False:
        return None

    # PEFT requested but the library is missing: fail fast with install instructions
    if not is_peft_available():
        raise ValueError(
            "You need to have PEFT library installed in your environment, "
            "make sure to install `peft`. Make sure to run `pip install -U peft`."
        )

    # Map the LoRA-related ModelConfig fields onto a PEFT LoraConfig
    peft_config = LoraConfig(
        task_type=model_args.lora_task_type,
        r=model_args.lora_r,
        target_modules=model_args.lora_target_modules,
        target_parameters=model_args.lora_target_parameters,
        lora_alpha=model_args.lora_alpha,
        lora_dropout=model_args.lora_dropout,
        bias="none",
        use_rslora=model_args.use_rslora,
        use_dora=model_args.use_dora,
        modules_to_save=model_args.lora_modules_to_save,
    )

    return peft_config
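Note that bias="none" is hardcoded in the construction above: ModelConfig does not expose a bias field, so adapters created through this path never train bias terms.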
Import
from trl.trainer.utils import get_peft_config
from trl import ModelConfig
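In recent TRL versions the helper is also re-exported from the package root, so the shorter form below works as well (check your installed version):
from trl import ModelConfig, get_peft_config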
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| model_args | ModelConfig | Yes | Configuration dataclass containing all model and LoRA settings |
| model_args.use_peft | bool | Yes | Master switch: if False, the function returns None (no PEFT) |
| model_args.lora_r | int | No | Rank of the LoRA decomposition; default: 16 |
| model_args.lora_alpha | int | No | LoRA scaling factor; default: 32 |
| model_args.lora_dropout | float | No | Dropout on adapter inputs; default: 0.05 |
| model_args.lora_target_modules | list[str] \| None | No | Layers to inject adapters into; None uses PEFT's defaults for the architecture |
| model_args.lora_target_parameters | list[str] \| None | No | Specific parameter names to target for LoRA |
| model_args.lora_task_type | str | No | PEFT task type; default: "CAUSAL_LM" |
| model_args.use_rslora | bool | No | Use rank-stabilized scaling (alpha/sqrt(r)); default: False |
| model_args.use_dora | bool | No | Use Weight-Decomposed LoRA (DoRA); default: False |
| model_args.lora_modules_to_save | list[str] \| None | No | Additional modules to fully unfreeze and train alongside the adapters |
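To make the use_rslora flag concrete, the following sketch (plain arithmetic, not TRL code) compares the effective adapter scaling under both schemes for the default r=16, alpha=32:
import math

r, lora_alpha = 16, 32
print(lora_alpha / r)             # 2.0 -> standard LoRA scaling (alpha / r)
print(lora_alpha / math.sqrt(r))  # 8.0 -> rank-stabilized scaling (alpha / sqrt(r))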
Outputs
| Name | Type | Description |
|---|---|---|
| peft_config | LoraConfig \| None | A PEFT LoraConfig ready to pass to SFTTrainer(peft_config=...), or None if PEFT is disabled |
Usage Examples
Basic Usage
from trl import ModelConfig
from trl.trainer.utils import get_peft_config
model_args = ModelConfig(
    model_name_or_path="Qwen/Qwen2-0.5B",
    use_peft=True,
    lora_r=32,
    lora_alpha=16,
    lora_dropout=0.1,
)
peft_config = get_peft_config(model_args)
print(peft_config)
# LoraConfig(task_type='CAUSAL_LM', r=32, lora_alpha=16, lora_dropout=0.1, bias='none', ...)  (abridged repr)
Passing to SFTTrainer
from trl import SFTTrainer, SFTConfig, ModelConfig
from trl.trainer.utils import get_peft_config
from transformers import AutoModelForCausalLM
from datasets import load_dataset

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")
train_dataset = load_dataset("trl-lib/Capybara", split="train")  # any SFT-formatted dataset works here

model_args = ModelConfig(use_peft=True, lora_r=16, lora_alpha=32)
trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="./output"),
    train_dataset=train_dataset,
    peft_config=get_peft_config(model_args),
)
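Training then proceeds as usual. As a behavior sketch (exact output files depend on your PEFT version), saving the trainer's model persists only the adapter weights when PEFT is active:
trainer.train()
trainer.save_model("./output")  # with PEFT active, writes the LoRA adapter (e.g. adapter_model.safetensors)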
Full Fine-Tuning (No PEFT)
from trl import ModelConfig
from trl.trainer.utils import get_peft_config
model_args = ModelConfig(
    model_name_or_path="Qwen/Qwen2-0.5B",
    use_peft=False,
)
peft_config = get_peft_config(model_args)
assert peft_config is None  # No adapter will be applied