
Implementation: LLMBook-zh/LLMBook-zh.github.io LoRALinear

From Leeroopedia


Knowledge Sources
Domains Deep_Learning, Parameter_Efficient_Finetuning
Last Updated 2026-02-08 00:00 GMT

Overview

A concrete implementation of a LoRA-augmented linear layer, using a low-rank A/B matrix decomposition, provided by the LLMBook repository.

Description

The LoRALinear class extends nn.Linear to add a parallel low-rank path. It maintains the original linear layer's frozen weights and adds two small trainable matrices: A (down-projection from input to rank r) and B (up-projection from rank r to output). The forward pass sums the original linear output with the LoRA path output, applying dropout before the A projection.

Key design:

  • A initialized with normal distribution (std=0.02)
  • B initialized to zero (LoRA starts as no-op)
  • Dropout applied before A for regularization
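The class described above can be sketched as follows. This is a minimal sketch based on the signature and forward rule documented on this page; the exact repository code may differ in details, and `config` is assumed to expose `lora_r` and `lora_dropout` attributes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Linear):
    """Sketch of an nn.Linear with a parallel low-rank (LoRA) path."""

    def __init__(self, in_features: int, out_features: int, config, bias: bool = True):
        super().__init__(in_features, out_features, bias=bias)
        r = config.lora_r
        # Down-projection A: normal init (std=0.02) so the path carries signal.
        self.A = nn.Linear(in_features, r, bias=False)
        nn.init.normal_(self.A.weight, std=0.02)
        # Up-projection B: zero init, so the LoRA path is a no-op at the start.
        self.B = nn.Linear(r, out_features, bias=False)
        nn.init.zeros_(self.B.weight)
        # Dropout is applied before A for regularization.
        self.dropout = nn.Dropout(p=config.lora_dropout)

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        # Frozen base path plus trainable low-rank path.
        return F.linear(input, self.weight, self.bias) + self.B(self.A(self.dropout(input)))
```

Because B starts at zero, the layer's output at initialization is identical to that of the underlying linear layer.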

Usage

Use this class as a drop-in replacement for nn.Linear when implementing LoRA from scratch. In practice, the PEFT library automates this injection.
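One way to use it as a drop-in replacement is to walk a model and swap selected nn.Linear submodules. The `inject_lora` helper and the `target_names` defaults below are hypothetical illustrations, not part of the repository; a compact restatement of the LoRALinear class specified on this page is included so the snippet runs standalone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Compact restatement of the LoRALinear class specified on this page.
class LoRALinear(nn.Linear):
    def __init__(self, in_features, out_features, config, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.A = nn.Linear(in_features, config.lora_r, bias=False)
        nn.init.normal_(self.A.weight, std=0.02)
        self.B = nn.Linear(config.lora_r, out_features, bias=False)
        nn.init.zeros_(self.B.weight)
        self.dropout = nn.Dropout(p=config.lora_dropout)

    def forward(self, input):
        return F.linear(input, self.weight, self.bias) + self.B(self.A(self.dropout(input)))

def inject_lora(model, config, target_names=("q_proj", "v_proj")):
    """Hypothetical helper: replace matching nn.Linear submodules with
    LoRALinear, reusing and freezing the pretrained weights so only
    A and B train."""
    for parent in list(model.modules()):
        for name, child in list(parent.named_children()):
            # Exact type check avoids re-wrapping LoRALinear (a subclass).
            if name in target_names and type(child) is nn.Linear:
                lora = LoRALinear(child.in_features, child.out_features,
                                  config, bias=child.bias is not None)
                lora.weight = child.weight          # reuse the pretrained weight
                if child.bias is not None:
                    lora.bias = child.bias
                lora.weight.requires_grad_(False)   # freeze the base path
                if lora.bias is not None:
                    lora.bias.requires_grad_(False)
                setattr(parent, name, lora)
    return model
```

In practice, the PEFT library automates this injection.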

Code Reference

Source Location

  • Repository: LLMBook-zh
  • File: code/7.3 LoRA基础.py
  • Lines: 2-30

Signature

class LoRALinear(nn.Linear):
    def __init__(self, in_features: int, out_features: int, config, bias: bool = True):
        """
        Args:
            in_features: Input dimension.
            out_features: Output dimension.
            config: Object with lora_r (rank) and lora_dropout attributes.
            bias: Whether to include bias (default True).

        Attributes:
            A: nn.Linear(in_features, r, bias=False) — down-projection
            B: nn.Linear(r, out_features, bias=False) — up-projection
            dropout: nn.Dropout(p=config.lora_dropout)
        """

    def forward(self, input: Tensor) -> Tensor:
        """
        Returns: F.linear(input, self.weight, self.bias) + self.B(self.A(self.dropout(input)))
        """

Import

from lora_basic import LoRALinear

I/O Contract

Inputs

Name          Type    Required  Description
in_features   int     Yes       Input dimension
out_features  int     Yes       Output dimension
config        object  Yes       Config with lora_r and lora_dropout attributes
bias          bool    No        Include bias (default True)

Outputs

Name     Type    Description
forward  Tensor  original_linear_output + B(A(dropout(input)))

Usage Examples

import torch

from lora_basic import LoRALinear

class Config:
    lora_r = 16
    lora_dropout = 0.05

config = Config()
lora_layer = LoRALinear(in_features=4096, out_features=4096, config=config)

x = torch.randn(1, 128, 4096)
output = lora_layer(x)  # Shape: [1, 128, 4096]
# output = F.linear(x, W, bias) + B(A(dropout(x)))
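To see why this configuration is parameter-efficient, compare the trainable A/B path against the frozen weight for the 4096-dimensional example above with rank r=16:

```python
# Parameter counts for a 4096x4096 layer with LoRA rank r=16.
in_features = out_features = 4096
r = 16

full = in_features * out_features           # weights in the frozen W: 16,777,216
lora = in_features * r + r * out_features   # trainable weights in A and B: 131,072

print(full, lora, lora / full)  # LoRA trains 0.78% as many parameters
```

With r much smaller than the layer dimensions, the trainable parameter count grows linearly in r rather than quadratically in the layer width.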

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
