Implementation: Hugging Face Diffusers Save LoRA Weights
| Knowledge Sources | |
|---|---|
| Domains | Diffusion_Models, LoRA, Model_Serialization |
| Last Updated | 2026-02-13 21:00 GMT |
Overview
Concrete tool for saving trained LoRA adapter weights in diffusers-compatible format, provided by StableDiffusionLoraLoaderMixin.save_lora_weights.
Description
save_lora_weights is a class method on StableDiffusionPipeline (inherited from StableDiffusionLoraLoaderMixin) that serializes LoRA adapter state dictionaries to disk. It accepts separate state dicts for the UNet and text encoder LoRA layers, combines them with appropriate component prefixes, and saves them using safetensors (by default) or PyTorch's pickle format.
In the LoRA training script, the saving workflow involves: (1) casting the UNet back to float32 (from mixed precision), (2) unwrapping the model from DDP, (3) extracting the PEFT state dict via get_peft_model_state_dict, (4) converting to diffusers format via convert_state_dict_to_diffusers, and (5) calling save_lora_weights with the converted state dict.
The method delegates to _save_lora_weights, which handles the actual file I/O. The is_main_process parameter ensures that only one process writes to disk during distributed training, preventing file corruption from concurrent writes.
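The component prefixing described above can be illustrated with a plain-dict sketch. The `unet.` and `text_encoder.` key prefixes below match the diffusers serialization convention, but `pack_lora_state_dicts` itself is a hypothetical helper for illustration, not the library's internal code:

```python
def pack_lora_state_dicts(unet_lora_layers=None, text_encoder_lora_layers=None):
    """Illustrative sketch: merge per-component LoRA state dicts into one
    flat dict, namespacing each key by its component."""
    if unet_lora_layers is None and text_encoder_lora_layers is None:
        raise ValueError("At least one LoRA state dict must be provided.")
    state_dict = {}
    for prefix, layers in (
        ("unet", unet_lora_layers),
        ("text_encoder", text_encoder_lora_layers),
    ):
        if layers is not None:
            # e.g. "down_blocks.0.lora.up.weight" -> "unet.down_blocks.0.lora.up.weight"
            state_dict.update({f"{prefix}.{k}": v for k, v in layers.items()})
    return state_dict

# Keys from each component land in a distinct namespace (strings stand in
# for tensors here):
packed = pack_lora_state_dicts(
    unet_lora_layers={"down_blocks.0.lora.up.weight": "tensor-a"},
    text_encoder_lora_layers={"layers.0.lora.down.weight": "tensor-b"},
)
```

Because keys are namespaced this way, load_lora_weights can later route each entry back to the correct component.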
Usage
Use save_lora_weights when:
- Saving final LoRA weights after training completion
- Saving intermediate checkpoints via save/load hooks
- You need to export LoRA weights that are loadable via pipeline.load_lora_weights()
- Saving adapters for sharing on the Hugging Face Hub
Code Reference
Source Location
- Repository: diffusers
- File: src/diffusers/loaders/lora_pipeline.py
- Lines: 473-534
Signature
@classmethod
def save_lora_weights(
    cls,
    save_directory: str | os.PathLike,
    unet_lora_layers: dict[str, torch.nn.Module | torch.Tensor] = None,
    text_encoder_lora_layers: dict[str, torch.nn.Module] = None,
    is_main_process: bool = True,
    weight_name: str = None,
    save_function: Callable = None,
    safe_serialization: bool = True,
    unet_lora_adapter_metadata=None,
    text_encoder_lora_adapter_metadata=None,
):
Import
from diffusers import StableDiffusionPipeline
from peft.utils import get_peft_model_state_dict
from diffusers.utils import convert_state_dict_to_diffusers
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| save_directory | str or os.PathLike | Yes | Directory to save the LoRA weight file. Will be created if it does not exist. |
| unet_lora_layers | dict[str, torch.nn.Module or torch.Tensor] | No | State dict of LoRA layers for the UNet. At least one of unet_lora_layers or text_encoder_lora_layers must be provided. |
| text_encoder_lora_layers | dict[str, torch.nn.Module] | No | State dict of LoRA layers for the text encoder. |
| is_main_process | bool | No | Whether this is the main process. Set to True only on rank 0 during distributed training. Default: True. |
| weight_name | str | No | Custom filename for the saved weights. Default: pytorch_lora_weights.safetensors. |
| save_function | Callable | No | Custom save function (e.g., for distributed file systems). Default: safetensors.torch.save_file. |
| safe_serialization | bool | No | Use safetensors format (recommended). If False, uses torch.save. Default: True. |
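The default-filename behavior in the table above can be sketched as follows. `default_weight_name` is a hypothetical helper written for illustration; the actual method applies the equivalent rule internally: an explicit weight_name wins, otherwise the extension follows the serialization format.

```python
def default_weight_name(weight_name=None, safe_serialization=True):
    """Sketch of the default-filename rule (illustrative helper, not
    diffusers' internal code)."""
    if weight_name is not None:
        # A caller-supplied filename is used verbatim
        return weight_name
    # Otherwise the extension tracks the serialization backend
    if safe_serialization:
        return "pytorch_lora_weights.safetensors"
    return "pytorch_lora_weights.bin"
```

For example, `default_weight_name(safe_serialization=False)` yields the legacy pickle filename pytorch_lora_weights.bin.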
Outputs
| Name | Type | Description |
|---|---|---|
| (saves to disk) | None | Saves a safetensors (or .bin) file to save_directory/pytorch_lora_weights.safetensors. |
Usage Examples
Basic Usage
import torch

from diffusers import StableDiffusionPipeline
from diffusers.utils import convert_state_dict_to_diffusers
from peft.utils import get_peft_model_state_dict

# After training, cast the UNet back to float32 (from mixed precision)
# and unwrap it from DDP; `unet` and `accelerator` come from the training loop
unet = unet.to(torch.float32)
unwrapped_unet = accelerator.unwrap_model(unet)

# Extract the PEFT state dict and convert it to diffusers format
unet_lora_state_dict = convert_state_dict_to_diffusers(
    get_peft_model_state_dict(unwrapped_unet)
)

# Save the LoRA weights
StableDiffusionPipeline.save_lora_weights(
    save_directory="./output",
    unet_lora_layers=unet_lora_state_dict,
    safe_serialization=True,
)
Checkpoint Saving Hook
def save_model_hook(models, weights, output_dir):
    """Hook registered with the accelerator for checkpoint saving."""
    if accelerator.is_main_process:
        unet_lora_layers_to_save = None
        for model in models:
            if isinstance(model, type(accelerator.unwrap_model(unet))):
                # Extract the PEFT state dict and convert it to diffusers format
                unet_lora_layers_to_save = convert_state_dict_to_diffusers(
                    get_peft_model_state_dict(model)
                )
            # Pop the weight so accelerate does not save it a second time
            weights.pop()
        StableDiffusionPipeline.save_lora_weights(
            save_directory=output_dir,
            unet_lora_layers=unet_lora_layers_to_save,
            safe_serialization=True,
        )

accelerator.register_save_state_pre_hook(save_model_hook)
Loading Saved LoRA Weights
import torch

from diffusers import DiffusionPipeline

# Load the base pipeline
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

# Load the saved LoRA weights
pipeline.load_lora_weights("./output")

# Generate with the fine-tuned model
image = pipeline("a photo in the trained style").images[0]