# Implementation: Huggingface Diffusers Dual LoRA Add Adapter

| Knowledge Sources | |
|---|---|
| Domains | |
| Last Updated | 2026-02-13 00:00 GMT |
## Overview

The concrete API for injecting LoRA adapters into both the UNet and the text encoder using `PeftAdapterMixin.add_adapter()`. This implementation configures two separate `LoraConfig` objects with distinct target modules and calls `add_adapter()` on each model component.
## Description

The dual LoRA injection follows two steps:

- **UNet adapter** -- A `LoraConfig` targeting the attention projection layers (`to_k`, `to_q`, `to_v`, `to_out.0`, `add_k_proj`, `add_v_proj`) is created and injected via `unet.add_adapter(unet_lora_config)`.
- **Text encoder adapter (optional)** -- When `--train_text_encoder` is enabled, a separate `LoraConfig` targeting the text encoder's attention layers (`q_proj`, `k_proj`, `v_proj`, `out_proj`) is created and injected via `text_encoder.add_adapter(text_lora_config)`.
The `add_adapter()` method is provided by the `PeftAdapterMixin` class in Diffusers, which delegates to PEFT's `inject_adapter_in_model()` function. This function traverses the model's module tree, identifies modules matching the `target_modules` patterns, and wraps them with LoRA layers.
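The matching step can be sketched as follows. This is a simplification, not PEFT's actual code (PEFT also supports regex patterns and layer-index filters), and the module names below are hypothetical examples in the UNet's naming style:

```python
# Simplified sketch of how PEFT matches target_modules entries against
# fully qualified module names: a module matches when its name equals a
# target or ends with "." + target.
def matches_target(module_name: str, target_modules: list[str]) -> bool:
    return any(
        module_name == t or module_name.endswith("." + t)
        for t in target_modules
    )

# Hypothetical UNet-style module names, for illustration only
names = [
    "down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q",
    "down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_out.0",
    "down_blocks.0.resnets.0.conv1",
]
targets = ["to_k", "to_q", "to_v", "to_out.0", "add_k_proj", "add_v_proj"]

matched = [n for n in names if matches_target(n, targets)]
print(matched)  # the two attention projections match; the conv layer does not
```

Only the matched modules are wrapped with LoRA layers; everything else is left untouched.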
Both configs use:

- `init_lora_weights="gaussian"` -- Gaussian random initialization for the LoRA matrices.
- `lora_alpha=rank` -- Scaling factor equals the rank, giving an effective scale of 1.0.
- `lora_dropout` -- Optional dropout on LoRA outputs (default 0.0).
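The "effective scale of 1.0" follows from how LoRA scales its update, by `lora_alpha / r` (in the standard, non-rsLoRA case):

```python
# LoRA multiplies its low-rank update by lora_alpha / r, so setting
# lora_alpha equal to the rank gives an effective scale of exactly 1.0.
def lora_scale(r: int, lora_alpha: int) -> float:
    return lora_alpha / r

print(lora_scale(4, 4))  # 1.0 -- the DreamBooth default (alpha == rank)
print(lora_scale(4, 8))  # 2.0 -- raising alpha strengthens the LoRA update
```

This is why the training script ties `lora_alpha` to `r`: changing the rank does not silently change the update magnitude.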
## Usage

```python
from peft import LoraConfig

# UNet LoRA configuration
unet_lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    lora_dropout=0.0,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0", "add_k_proj", "add_v_proj"],
)
unet.add_adapter(unet_lora_config)

# Text encoder LoRA configuration (optional)
if train_text_encoder:
    text_lora_config = LoraConfig(
        r=4,
        lora_alpha=4,
        lora_dropout=0.0,
        init_lora_weights="gaussian",
        target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
    )
    text_encoder.add_adapter(text_lora_config)
```
## Code Reference

### Source Location

- Repository: `huggingface/diffusers`
- File: `examples/dreambooth/train_dreambooth_lora.py` (lines 936--954)
- File: `src/diffusers/loaders/peft.py` (lines 526--562, `PeftAdapterMixin.add_adapter` definition)
### Signature

```python
# PeftAdapterMixin.add_adapter (from src/diffusers/loaders/peft.py)
def add_adapter(self, adapter_config, adapter_name: str = "default") -> None:
    """
    Adds a new adapter to the current model for training.

    Args:
        adapter_config (`[~peft.PeftConfig]`):
            The configuration of the adapter to add; supported adapters are
            non-prefix tuning and adaption prompt methods.
        adapter_name (`str`, *optional*, defaults to `"default"`):
            The name of the adapter to add. If no name is passed, a default
            name is assigned to the adapter.
    """
    check_peft_version(min_version=MIN_PEFT_VERSION)

    from peft import PeftConfig, inject_adapter_in_model

    if not isinstance(adapter_config, PeftConfig):
        raise ValueError(
            f"adapter_config should be an instance of PeftConfig. Got {type(adapter_config)} instead."
        )

    adapter_config.base_model_name_or_path = None
    inject_adapter_in_model(adapter_config, self, adapter_name)
    self.set_adapter(adapter_name)
```
### Import

```python
from peft import LoraConfig
from diffusers.loaders.peft import PeftAdapterMixin
```
## I/O Contract

### Inputs

| Name | Type | Description |
|---|---|---|
| `adapter_config` | `peft.PeftConfig` | A `LoraConfig` instance specifying rank, alpha, dropout, initialization, and target modules. |
| `adapter_name` | `str` | Name for the adapter (default `"default"`). Must be unique per model. |
| `r` (in `LoraConfig`) | `int` | Rank of the LoRA decomposition matrices. Default 4 in DreamBooth. |
| `lora_alpha` (in `LoraConfig`) | `int` | Scaling factor for LoRA output. Set equal to `r` in DreamBooth. |
| `target_modules` (in `LoraConfig`) | `List[str]` | Module name patterns to target for adapter injection. |
| `init_lora_weights` (in `LoraConfig`) | `str` | Initialization method: `"gaussian"` for DreamBooth. |
| `lora_dropout` (in `LoraConfig`) | `float` | Dropout probability for LoRA layers. Default 0.0. |
### Outputs

| Name | Type | Description |
|---|---|---|
| Side effect | `None` | The model is modified in-place: target modules are wrapped with LoRA layers whose parameters have `requires_grad=True`. |
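Conceptually, "wrapping" means the target module keeps producing its frozen base output while a trainable low-rank correction is added on top. The sketch below illustrates this with scalars standing in for weight matrices; it is an illustration of the idea, not PEFT's implementation (which wraps `torch.nn` modules):

```python
# Conceptual sketch of wrapping a module with a LoRA layer. Scalars stand
# in for the base weight W and the low-rank factors A and B.
class Linear:
    def __init__(self, w: float):
        self.w = w                    # frozen base weight

    def forward(self, x: float) -> float:
        return self.w * x

class LoRALinear:
    def __init__(self, base: Linear, r: int = 4, lora_alpha: int = 4):
        self.base = base
        self.scale = lora_alpha / r   # effective scale 1.0 when alpha == r
        self.lora_a = 0.75            # stand-in for A (Gaussian-initialized)
        self.lora_b = 0.0             # B starts at zero, so at injection time
                                      # the wrapped module matches the base

    def forward(self, x: float) -> float:
        return self.base.forward(x) + self.scale * self.lora_b * self.lora_a * x

wrapped = LoRALinear(Linear(2.0))
print(wrapped.forward(3.0))  # 6.0 -- identical to the base output at init
wrapped.lora_b = 0.5         # training updates only the LoRA factors
print(wrapped.forward(3.0))  # 7.125 -- base 6.0 plus 1.0 * 0.5 * 0.75 * 3.0
```

Because `B` is initialized to zero, injection never changes the model's behavior; only training does.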
## Usage Examples

### Example 1: UNet-Only LoRA for DreamBooth

```python
from peft import LoraConfig
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet.requires_grad_(False)

unet_lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    lora_dropout=0.0,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0", "add_k_proj", "add_v_proj"],
)
unet.add_adapter(unet_lora_config)

# Verify: only LoRA parameters are trainable
trainable = [n for n, p in unet.named_parameters() if p.requires_grad]
print(f"Trainable LoRA parameters: {len(trainable)}")
```
### Example 2: Dual LoRA on UNet and Text Encoder

This continues from Example 1, where `unet` has already been loaded and given its LoRA adapter:

```python
from peft import LoraConfig
from transformers import CLIPTextModel

text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="text_encoder"
)
text_encoder.requires_grad_(False)

text_lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    lora_dropout=0.0,
    init_lora_weights="gaussian",
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
)
text_encoder.add_adapter(text_lora_config)

# Both UNet and text encoder now have trainable LoRA layers
unet_trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
te_trainable = sum(p.numel() for p in text_encoder.parameters() if p.requires_grad)
print(f"UNet LoRA params: {unet_trainable:,}, Text encoder LoRA params: {te_trainable:,}")
```