
Principle:LaurentMazare tch-rs Batch Normalization



Knowledge Sources
Domains: Deep_Learning, Optimization
Last Updated: 2026-02-08 14:00 GMT

Overview

A normalization technique that standardizes layer inputs across the batch dimension, stabilizing training and enabling higher learning rates.

Description

Batch Normalization (BatchNorm) normalizes the activations of each channel across the mini-batch by subtracting the mean and dividing by the standard deviation. Learnable affine parameters (gamma and beta) allow the network to undo the normalization when needed. During training, running statistics (mean and variance) are maintained via exponential moving average for use during inference, where batch statistics may not be available or meaningful.
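
As a concrete sketch of this computation, the following plain-Rust function normalizes one channel of a mini-batch and updates the running statistics via exponential moving average. The function name, momentum value, and all variable names are illustrative assumptions, not part of any library API; for brevity it uses the biased variance both for normalization and for the running estimate (PyTorch, by contrast, tracks the unbiased variance in its running estimate).

  // Minimal sketch of batch normalization for a single channel.
  // All names here are illustrative, not a library API.
  fn batch_norm_1ch(
      batch: &[f64],          // activations of one channel across the mini-batch
      gamma: f64,             // learnable scale (gamma)
      beta: f64,              // learnable shift (beta)
      running_mean: &mut f64, // EMA statistics kept for inference
      running_var: &mut f64,
      momentum: f64,          // EMA weight, e.g. 0.1
      eps: f64,               // numerical-stability constant, e.g. 1e-5
  ) -> Vec<f64> {
      let n = batch.len() as f64;
      // Batch statistics: mean and (biased) variance over the mini-batch.
      let mean = batch.iter().sum::<f64>() / n;
      let var = batch.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
      // Exponential moving average update of the running statistics.
      *running_mean = (1.0 - momentum) * *running_mean + momentum * mean;
      *running_var = (1.0 - momentum) * *running_var + momentum * var;
      // Normalize, then apply the learnable affine transform y = gamma * x_hat + beta.
      batch
          .iter()
          .map(|x| gamma * (x - mean) / (var + eps).sqrt() + beta)
          .collect()
  }

Calling this with batch = [1.0, 2.0, 3.0], gamma = 1, and beta = 0 yields approximately [-1.2247, 0.0, 1.2247].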

Usage

Use batch normalization after convolutional or linear layers and before activation functions to stabilize training, reduce internal covariate shift, and allow higher learning rates. Use the ModuleT trait variant (whose forward_t method takes an explicit train flag) rather than plain Module, since BatchNorm behaves differently during training and evaluation.
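
A minimal sketch of that pattern using the tch crate (the tch-rs bindings); the layer names, dimensions, and the Conv -> BatchNorm -> ReLU ordering are assumptions for illustration:

  use tch::nn::{self, ModuleT};
  use tch::{Device, Kind, Tensor};

  fn main() {
      let vs = nn::VarStore::new(Device::Cpu);
      let root = vs.root();

      // Convolution followed by BatchNorm over its 16 output channels;
      // the activation is applied after normalization.
      let conv = nn::conv2d(&root / "conv", 3, 16, 3, Default::default());
      let bn = nn::batch_norm2d(&root / "bn", 16, Default::default());

      let xs = Tensor::randn(&[8, 3, 32, 32], (Kind::Float, Device::Cpu));

      // apply_t forwards through a ModuleT with an explicit train flag:
      // true  -> normalize with batch statistics and update the running stats;
      // false -> normalize with the stored running statistics (inference).
      let train_out = xs.apply(&conv).apply_t(&bn, true).relu();
      let eval_out = xs.apply(&conv).apply_t(&bn, false).relu();
      println!("{:?} {:?}", train_out.size(), eval_out.size());
  }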

Theoretical Basis

\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}

y_i = \gamma \hat{x}_i + \beta

Where:

  • \mu_B, \sigma_B^2 are the batch mean and variance
  • \gamma, \beta are the learnable scale and shift parameters
  • \epsilon is a small constant for numerical stability (default 1e-5)
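
For example, with an assumed mini-batch x = (1, 2, 3) in a single channel and \gamma = 1, \beta = 0:

  \mu_B = \tfrac{1}{3}(1 + 2 + 3) = 2
  \sigma_B^2 = \tfrac{1}{3}\big((1-2)^2 + (2-2)^2 + (3-2)^2\big) = \tfrac{2}{3}
  \hat{x} = \frac{x - 2}{\sqrt{2/3 + 10^{-5}}} \approx (-1.2247,\ 0,\ 1.2247)
  y = 1 \cdot \hat{x} + 0 = \hat{x}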

Related Pages

Implemented By
