
Workflow:AUTOMATIC1111 Stable Diffusion web UI LoRA network application

From Leeroopedia


Knowledge Sources
Domains Model_Adaptation, Stable_Diffusion, LoRA, LyCORIS
Last Updated 2026-02-08 08:00 GMT

Overview

End-to-end process for loading and applying LoRA and LyCORIS network adapters to modify Stable Diffusion model behavior during image generation.

Description

This workflow covers the application of Low-Rank Adaptation (LoRA) and LyCORIS (an umbrella of LoRA-derived methods including LoHa, LoKr, and others) network files during image generation. These adapter networks are small weight modifications that, when applied on top of a base checkpoint, alter the model's output to incorporate trained concepts, characters, or styles. The built-in Lora extension supports nine network types: standard LoRA, LoHa (Hadamard product), LoKr (Kronecker product), IA3, GLoRA, OFT (Orthogonal Fine-Tuning), BOFT (Butterfly OFT), full weight diff, and Norm layers. Multiple networks can be stacked simultaneously with independent weight controls.

Usage

Execute this workflow when you want to apply pre-trained LoRA or LyCORIS adapter files to modify a base Stable Diffusion checkpoint's output. This is the most common method for applying fine-tuned concepts (characters, styles, objects) without replacing the entire checkpoint. LoRA files are typically 10-200 MB and can be combined freely.

Execution Steps

Step 1: Network file discovery and browsing

Browse available LoRA and LyCORIS network files through the Extra Networks panel in the UI. The system scans the configured LoRA directories (default: models/Lora) for .safetensors and .pt files, building a catalog with previews, metadata, and search/filter capabilities. Networks are displayed with thumbnails, training metadata, and hash identifiers. The panel supports sorting, filtering by name or tag, and organizing by subdirectories.

Key considerations:

  • Network files are discovered from the --lora-dir path (default: models/Lora)
  • Each network file can have a companion preview image and description file
  • The Extra Networks panel shows network metadata including training parameters
  • Networks are identified by hash for reproducibility in generation info
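The discovery step can be sketched as a recursive scan of the LoRA directory. This is a minimal illustration, not the extension's actual implementation; the function name `discover_networks` and the catalog layout are assumptions, and only the same-stem .png preview convention from the list above is modeled.

```python
from pathlib import Path

def discover_networks(lora_dir="models/Lora"):
    """Scan a LoRA directory (recursively) for .safetensors and .pt files.

    Hypothetical sketch: each entry records the file path and, if one
    exists alongside it, a companion preview image with the same stem.
    """
    catalog = {}
    root = Path(lora_dir)
    for path in sorted(root.rglob("*")):
        if path.suffix.lower() not in (".safetensors", ".pt"):
            continue  # skip previews, descriptions, and unrelated files
        preview = path.with_suffix(".png")
        catalog[path.stem] = {
            "path": str(path),
            "preview": str(preview) if preview.exists() else None,
        }
    return catalog
```

Networks are keyed by filename stem here because that is the name later referenced in the prompt activation tag.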

Step 2: Network activation via prompt syntax

Activate one or more networks by inserting activation tags into the prompt using the syntax <lora:name:weight> or <lyco:name:weight>. The name is the filename without its extension, and the weight controls the influence strength (typically 0.0 to 1.5; if omitted, it defaults to 1.0). Multiple networks can be activated simultaneously by including multiple tags. Clicking a network in the Extra Networks panel automatically inserts its activation tag.

Key considerations:

  • Weight of 1.0 applies the network at full trained strength
  • Weights above 1.0 amplify the effect but may cause artifacts
  • Negative weights can partially invert the network's effect
  • Multiple LoRAs are applied additively and can interact
  • The activation tag is stripped from the prompt before CLIP encoding
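The tag parsing and stripping described above can be sketched with a regular expression. This is an illustrative approximation, not the webui's parser; the function name `parse_networks` and the exact pattern are assumptions.

```python
import re

# Matches <lora:name:weight> and <lyco:name:weight>; the weight part is optional.
TAG_RE = re.compile(r"<(lora|lyco):([^:>]+)(?::([\d.+-]+))?>")

def parse_networks(prompt):
    """Return (cleaned_prompt, networks) where networks is [(name, weight), ...].

    The activation tags are removed from the prompt, mirroring how they
    are stripped before CLIP encoding.
    """
    networks = []
    def collect(match):
        weight = float(match.group(3)) if match.group(3) else 1.0  # default weight
        networks.append((match.group(2), weight))
        return ""  # strip the tag from the prompt text
    cleaned = TAG_RE.sub(collect, prompt).strip()
    return cleaned, networks
```

Negative weights parse naturally with this pattern, matching the note above about partial inversion of a network's effect.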

Step 3: Network loading and weight calculation

When generation begins, the extra networks system parses the prompt for activation tags and loads the referenced network files. Each network's state dict is loaded and its type is auto-detected (LoRA, LoHa, LoKr, IA3, GLoRA, OFT, BOFT, Full, or Norm). The system matches network layers to corresponding model layers by name. Weight deltas are calculated for each matched layer using the network-type-specific algorithm (e.g., low-rank decomposition for LoRA, Hadamard product for LoHa, Kronecker product for LoKr).

Key considerations:

  • Network type is auto-detected from the structure of saved tensors
  • Layer matching uses name-based mapping between network and model weights
  • Networks can target different layer types: Linear, Conv2d, GroupNorm, LayerNorm
  • Unmatched layers are skipped with a warning
  • Network files are cached in memory for fast reuse across generations
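For the standard LoRA case, the weight delta for a matched layer follows the usual low-rank convention: delta_W = (alpha / rank) * up @ down, where `down` projects into the low-rank space and `up` projects back out. The sketch below uses NumPy for brevity (the webui operates on PyTorch tensors); LoHa and LoKr would substitute Hadamard- and Kronecker-product constructions at this point.

```python
import numpy as np

def lora_delta(up, down, alpha):
    """Compute the additive weight delta for one layer from a LoRA pair.

    up:   (out_features, rank)
    down: (rank, in_features)
    alpha: scaling constant stored with the network; effect scales as alpha/rank.
    """
    rank = down.shape[0]
    return (alpha / rank) * (up @ down)
```

Because the delta is a product of two thin matrices, its rank is at most `rank`, which is why LoRA files stay small relative to full checkpoints.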

Step 4: Weight patching and generation

Apply the calculated weight modifications to the model. The system monkey-patches PyTorch's Linear, Conv2d, GroupNorm, and LayerNorm forward methods to intercept computations and add the LoRA contribution. During the forward pass, each patched layer computes: output = original_output + lora_contribution * weight. The generation then proceeds through the normal txt2img or img2img pipeline with the modified model behavior. After generation completes, the patches are deactivated.

Key considerations:

  • Weight application uses on-the-fly computation rather than permanent weight modification
  • This allows instant switching between different LoRA combinations
  • The UNet and text encoder layers can be independently targeted
  • OFT and BOFT use orthogonal transformations rather than additive modifications
  • Performance overhead per LoRA is minimal due to the low-rank nature of the computations
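The on-the-fly patching idea can be illustrated with a toy layer: the forward method is wrapped so each call adds the LoRA contribution scaled by the activation weight, while the stored weights stay untouched. This is a schematic sketch, not the webui's monkey-patching code; the `Linear` class and `patch_with_lora` helper are assumptions for illustration.

```python
import numpy as np

class Linear:
    """Toy stand-in for a model layer with weight matrix W."""
    def __init__(self, W):
        self.W = W
    def forward(self, x):
        return x @ self.W.T

def patch_with_lora(layer, delta, weight):
    """Wrap layer.forward so it adds the LoRA contribution on the fly.

    Returns the original forward so the patch can be deactivated
    (restored) after generation, enabling instant LoRA switching.
    """
    original = layer.forward
    def patched(x):
        # output = original_output + lora_contribution * weight
        return original(x) + (x @ delta.T) * weight
    layer.forward = patched
    return original
```

Restoring the saved `original` function removes the patch without ever having modified `layer.W`, which is the property that makes switching LoRA combinations between generations cheap.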

Step 5: Output with network metadata

The generated images include LoRA activation information in their PNG metadata. The generation info records which networks were active, their weights, and their file hashes. This enables exact reproduction of the generation including the specific LoRA configuration. The Lora extension also provides a metadata editor for adding user notes, trigger words, and preferred settings to network files.

Key considerations:

  • Network hashes in metadata ensure the exact same network file is identified
  • Pasting generation parameters from an image re-activates the same LoRA configuration
  • The metadata editor supports custom descriptions and suggested trigger words
  • Network usage is logged for tracking which networks are active during generation
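The recorded generation info can be sketched as a text block combining the prompt tags with a hashes field, so pasting the parameters back re-activates the same configuration. The exact field layout the webui writes may differ; the function name and the `(name, weight, short_hash)` tuple format are assumptions, and the hash values are placeholders.

```python
def format_generation_info(prompt, networks):
    """Build an approximate generation-info string for PNG metadata.

    networks: list of (name, weight, short_hash) tuples for active LoRAs.
    """
    tags = " ".join(f"<lora:{name}:{weight}>" for name, weight, _ in networks)
    hashes = ", ".join(f"{name}: {h}" for name, _, h in networks)
    return f'{prompt} {tags}\nLora hashes: "{hashes}"'
```

Embedding this string in the PNG means the image alone carries enough information to identify both which networks were active and which exact files (by hash) produced it.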

Execution Diagram

GitHub URL

Workflow Repository