Implementation: facebookresearch/habitat-lab RunningMeanAndVar
| Knowledge Sources | Details |
|---|---|
| Domains | Embodied_AI, Observation_Normalization |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
RunningMeanAndVar is a PyTorch nn.Module that computes and applies online running mean and variance normalization to observation tensors, with support for distributed training via torch.distributed.
Description
RunningMeanAndVar maintains registered buffers for the running mean, variance, and sample count across channels. During training, for each forward pass it computes per-channel mean and variance of the input batch. When distributed training is active, these statistics are synchronized across all workers using all_reduce. The running statistics are updated using Welford's online algorithm for numerically stable variance computation. During both training and evaluation, the module normalizes the input by subtracting the running mean and multiplying by the inverse standard deviation (with a minimum variance clamp of 1e-2). The normalization is implemented using torch.addcmul for efficiency and numerical stability in fp16.
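The running-statistics update described above can be sketched in plain NumPy. This is a simplified, single-process illustration of merging per-batch statistics into running statistics with the parallel-variance (Welford/Chan-style) formula, plus the clamped normalization step; function names and the single-process structure here are illustrative, not the habitat-lab API, though the 1e-2 variance floor matches the description.

```python
import numpy as np

def update_running_stats(mean, var, count, batch):
    # Merge per-batch statistics into running statistics using the
    # parallel-variance combination formula (Chan et al.).
    # `batch` has shape (N, C); statistics are per-channel, shape (C,).
    batch_count = batch.shape[0]
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)

    delta = batch_mean - mean
    total = count + batch_count

    new_mean = mean + delta * (batch_count / total)
    # Combine sums of squared deviations from the two sets of samples.
    m2 = var * count + batch_var * batch_count \
        + delta**2 * count * batch_count / total
    new_var = m2 / total
    return new_mean, new_var, total

def normalize(x, mean, var, eps=1e-2):
    # Clamp the variance to a 1e-2 floor before dividing, as described
    # above, to avoid amplifying near-constant channels.
    return (x - mean) / np.sqrt(np.maximum(var, eps))
```

After processing two batches this way, the running mean and variance match the statistics of the concatenated data exactly, which is why the update is numerically safe to apply batch by batch.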
Usage
Insert RunningMeanAndVar as a preprocessing layer in a visual encoder to normalize observation channels (e.g., depth, RGB) before feeding them to convolutional layers. This is particularly useful in DD-PPO distributed training.
Code Reference
Source Location
- Repository: facebookresearch/habitat-lab
- File: habitat-baselines/habitat_baselines/rl/ddppo/policy/running_mean_and_var.py
- Lines: 13-78
Signature
class RunningMeanAndVar(nn.Module):
    def __init__(self, n_channels: int) -> None: ...
    def forward(self, x: Tensor) -> Tensor: ...
Import
from habitat_baselines.rl.ddppo.policy.running_mean_and_var import RunningMeanAndVar
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| n_channels | int | Yes | Number of input channels (must be > 0) |
| x | Tensor | Yes | Input tensor of shape (batch, n_channels, height, width) passed to forward() |
Outputs
| Name | Type | Description |
|---|---|---|
| normalized_x | Tensor | Normalized tensor of the same shape as the input; approximately zero mean and unit variance per channel, based on the running statistics |
Usage Examples
Basic Usage
import torch
from habitat_baselines.rl.ddppo.policy.running_mean_and_var import RunningMeanAndVar
# Normalize 1-channel depth observations
normalizer = RunningMeanAndVar(n_channels=1)
normalizer.train()
# Simulate a batch of depth observations (B, C, H, W)
depth_obs = torch.randn(16, 1, 256, 256)
normalized_depth = normalizer(depth_obs)
# During evaluation
normalizer.eval()
test_obs = torch.randn(4, 1, 256, 256)
normalized_test = normalizer(test_obs)
Integration in Visual Encoder
import torch.nn as nn
from habitat_baselines.rl.ddppo.policy.running_mean_and_var import RunningMeanAndVar

class VisualEncoder(nn.Module):
    def __init__(self, n_input_channels):
        super().__init__()
        # Normalize observations before the convolutional backbone
        self.running_mean_and_var = RunningMeanAndVar(n_input_channels)
        self.backbone = nn.Sequential(
            nn.Conv2d(n_input_channels, 32, 8, stride=4),
            nn.ReLU(True),
            nn.Conv2d(32, 64, 4, stride=2),
            nn.ReLU(True),
        )

    def forward(self, x):
        x = self.running_mean_and_var(x)
        return self.backbone(x)