
Principle:Huggingface Optimum Sequential Block Quantization

From Leeroopedia

Overview

Algorithm for quantizing transformer blocks one at a time, propagating updated activations through the network to minimize cumulative error.

Description

Rather than quantizing all layers independently, GPTQ processes transformer blocks sequentially. For each block, the algorithm:

  1. Creates GPTQ solver instances for each linear layer in the block (or a subset if modules_in_block_to_quantize is specified).
  2. Registers forward hooks on the linear layers to accumulate Hessian statistics via add_batch() as calibration data flows through.
  3. Runs calibration data through the block, building the Hessian matrix H = 2 * X^T * X for each layer.
  4. Calls fasterquant() to solve for optimal quantized weights, producing quantized weights along with scale, zero-point, and activation order index (g_idx) parameters.
  5. Updates the block's weights with the quantized values and captures the block's output as input for the next block.
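The five steps above can be sketched end-to-end in numpy. This is a toy single-linear-layer "block" with a round-to-nearest stand-in for `fasterquant()`; all names here are illustrative, not the Optimum implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_weight(w, n_bits=4):
    # Symmetric round-to-nearest: a stand-in for the fasterquant() solve.
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    return np.clip(np.round(w / scale),
                   -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1) * scale

# Three toy "blocks", each reduced to a single linear layer.
blocks = [rng.normal(size=(8, 8)) for _ in range(3)]
x = rng.normal(size=(16, 8))          # captured calibration inputs

for i, w in enumerate(blocks):
    # Step 3: accumulate Hessian statistics from the current inputs
    # (consumed by the real fasterquant(); unused by this stand-in).
    H = 2.0 * x.T @ x
    # Step 4: solve for the quantized weights.
    w_q = quantize_weight(w)
    # Step 5: update the block and propagate its *quantized* output
    # as the calibration input for the next block.
    blocks[i] = w_q
    x = x @ w_q
```

Because `x` is refreshed after every block, later blocks are calibrated against the activations they will actually see at inference time.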

When true_sequential=True (the default), layers within a block are quantized one at a time, so each layer's Hessian is accumulated from inputs that have already passed through the previously quantized layers of the block. When true_sequential=False, all layers in the block accumulate their Hessians in a single pass over the block's original, unquantized inputs.
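The difference between the two settings can be illustrated with a small numpy sketch (toy shapes and a round-to-nearest stand-in for the full GPTQ solve; not the Optimum code):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_weight(w, n_bits=4):
    # Round-to-nearest stand-in for the full GPTQ solve.
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    return np.clip(np.round(w / scale),
                   -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1) * scale

w1 = rng.normal(size=(8, 8))          # first linear layer in the block
x = rng.normal(size=(16, 8))          # calibration inputs to the block

# true_sequential=True: the next layer's calibration inputs come from the
# already-quantized w1 ...
x_next_true = x @ quantize_weight(w1)

# true_sequential=False: ... while here every layer's statistics are
# accumulated from the block's original full-precision activations.
x_next_false = x @ w1

# The next layer's Hessian therefore differs between the two settings:
H_true = 2.0 * x_next_true.T @ x_next_true
H_false = 2.0 * x_next_false.T @ x_next_false
```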

Usage

This is the core quantization loop, applied after model conversion and input capture. It is invoked automatically by GPTQQuantizer.quantize_model().

Theoretical Basis

The key GPTQ equation per column is:

q* = argmin_q (w - q)^T H_F (w - q)

where w holds the full-precision weights, q their quantized values, and H_F is the Hessian restricted to the set F of not-yet-quantized columns.

This is solved greedily column-by-column. The fasterquant algorithm processes columns in groups, using Cholesky decomposition of the Hessian for efficiency. For a group of columns, the algorithm:

  1. Computes the Cholesky factorization of the relevant Hessian block.
  2. Quantizes each column by rounding and computing the quantization error.
  3. Distributes the error across remaining columns using the Hessian information (error compensation).
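A simplified per-column version of this loop can be written in numpy. This sketch omits the group blocking and activation reordering (g_idx) of the real fasterquant, and uses a single hypothetical scale for the whole tensor:

```python
import numpy as np

def gptq_quantize(W, X, n_bits=4, percdamp=0.01):
    """Greedy column-by-column GPTQ solve for one linear layer.

    W: (out_features, in_features) weights; X: (n_samples, in_features)
    calibration activations. Simplified sketch, not the Optimum code.
    """
    W = W.astype(float).copy()
    H = 2.0 * X.T @ X                                   # Hessian statistics
    damp = percdamp * np.mean(np.diag(H))
    H += damp * np.eye(H.shape[0])                      # dampened diagonal
    Hinv = np.linalg.cholesky(np.linalg.inv(H)).T       # upper Cholesky of H^-1
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(W).max() / qmax
    Q = np.zeros_like(W)
    for i in range(W.shape[1]):
        w = W[:, i]
        q = np.clip(np.round(w / scale), -qmax - 1, qmax) * scale
        Q[:, i] = q                                     # quantize column i
        err = (w - q) / Hinv[i, i]                      # quantization error
        W[:, i + 1:] -= np.outer(err, Hinv[i, i + 1:])  # error compensation
    return Q
```

Compared with plain round-to-nearest, the compensation step shifts each column's error onto the not-yet-quantized columns, where subsequent rounding can absorb it.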

Sequential block processing ensures that quantization error in earlier blocks is accounted for when quantizing later blocks. After each block is quantized, the updated block outputs are used as inputs for the next block. This propagation of updated activations through the network reduces the cumulative quantization error compared to independent block quantization.

The percdamp parameter adds dampening to the Hessian diagonal:

H_damped = H + λI, where λ = percdamp × mean(diag(H))

This ensures numerical stability during the Cholesky decomposition, particularly for ill-conditioned Hessians.
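A tiny numpy demonstration (toy values, not from the source) of why the damping matters: rank-deficient calibration activations make H singular, and the Cholesky factorization fails until the diagonal is dampened.

```python
import numpy as np

X = np.ones((4, 3))                # rank-1 activations -> singular Hessian
H = 2.0 * X.T @ X

# np.linalg.cholesky(H) raises LinAlgError: H is not positive definite.
lam = 0.01 * np.mean(np.diag(H))   # lambda = percdamp * mean(diag(H))
H_damped = H + lam * np.eye(3)
L = np.linalg.cholesky(H_damped)   # succeeds: H_damped is positive definite
```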

Metadata

Key | Value
source | Paper: GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers
