
Implementation:Sktime Pytorch forecasting GroupNormalizer

From Leeroopedia


Knowledge Sources
Domains Time_Series, Data_Engineering, Preprocessing
Last Updated 2026-02-08 07:00 GMT

Overview

GroupNormalizer is a concrete tool, provided by the pytorch-forecasting library, for per-group target normalization in time series datasets.

Description

The GroupNormalizer class extends TorchNormalizer to compute normalization statistics per group rather than globally. For each unique combination of values in the group columns, it computes a center (mean or median) and a scale (std or IQR) statistic. It supports standard scaling (mean/std) and robust scaling (median/IQR with configurable quantiles). Optional transformations (log, logit, softplus, relu, count) can be applied before normalization. The scale_by_group option instead computes the geometric mean of the scales across the individual group levels. Normalization parameters are stored per sample as target_scale features in the dataset so that models can denormalize their outputs.
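To make the per-group statistics concrete, here is a minimal, self-contained sketch (not the library's code) of what method="standard" computes: one mean/std pair per group, then each sample normalized with its own group's statistics. Data and group names are illustrative.

```python
# Illustrative sketch of per-group standard normalization,
# mirroring GroupNormalizer(method="standard") conceptually.
from statistics import mean, stdev

# toy data: (group, value) pairs
data = [("a", 10.0), ("a", 12.0), ("a", 14.0),
        ("b", 100.0), ("b", 110.0), ("b", 120.0)]

# 1. collect values per group
by_group: dict[str, list[float]] = {}
for g, v in data:
    by_group.setdefault(g, []).append(v)

# 2. compute center (mean) and scale (std) per group
stats = {g: (mean(vs), stdev(vs)) for g, vs in by_group.items()}

# 3. normalize each sample with its own group's statistics
normalized = [(g, (v - stats[g][0]) / stats[g][1]) for g, v in data]
```

Because each group is scaled by its own statistics, series on very different magnitudes (10s vs 100s above) end up on a comparable scale; the library additionally guards against zero scales, which this sketch omits.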

Usage

Pass as the target_normalizer parameter when constructing a TimeSeriesDataSet. The groups parameter should typically match the dataset's group_ids. Used in TFT Demand Forecasting (with softplus transformation) and DeepAR Probabilistic Forecasting (with standard scaling).

Code Reference

Source Location

Signature

class GroupNormalizer(TorchNormalizer):
    def __init__(
        self,
        method: str = "standard",
        groups: list[str] | None = None,
        center: bool = True,
        scale_by_group: bool = False,
        transformation: str | tuple[Callable, Callable] | None = None,
        method_kwargs: dict[str, Any] | None = None,
    ):
        """
        Group normalizer to normalize a given entry by groups.

        Parameters
        ----------
        method : str, optional, default="standard"
            "standard" (mean/std) or "robust" (quantile-based).
        groups : list[str], optional, default=None
            Group names to normalize by.
        center : bool, optional, default=True
            If to center the output to zero.
        scale_by_group : bool, optional, default=False
            If to scale by geometric mean of group norms.
        transformation : str or tuple, optional, default=None
            "log", "log1p", "logit", "count", "softplus", "relu", or custom.
        method_kwargs : dict, optional
            For "robust": "upper", "lower", "center" quantiles.
        """

Import

from pytorch_forecasting import GroupNormalizer

I/O Contract

Inputs

Name Type Required Description
method str No Scaling method: "standard" or "robust" (default: "standard")
groups list[str] No Column names defining normalization groups (default: None)
center bool No Whether to center output to zero (default: True)
scale_by_group bool No Whether to scale by the geometric mean of group-level scales (default: False)
transformation str or tuple No Pre-normalization transform: "log", "logit", "softplus", etc. (default: None)
method_kwargs dict No Extra options for the method, e.g. "lower", "upper", "center" quantiles for "robust"
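When scale_by_group=True and several group columns are used, the source describes the effective scale as the geometric mean of the scales computed at each group level. A minimal sketch with hypothetical per-level scales:

```python
# Sketch of scale_by_group=True: combine per-level scales
# via their geometric mean (numbers are hypothetical).
import math

# e.g. one scale from the "agency" level, one from the "sku" level
level_scales = [4.0, 9.0]

# geometric mean: nth root of the product of n scales
effective_scale = math.prod(level_scales) ** (1.0 / len(level_scales))
# → 6.0
```

The geometric mean keeps the combined scale multiplicative, so one very large group-level scale does not dominate the way an arithmetic mean would.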

Outputs

Name Type Description
(used internally) — Normalizer is consumed by TimeSeriesDataSet; produces target_scale features per sample

Usage Examples

TFT with Softplus Transformation

from pytorch_forecasting import GroupNormalizer

training = TimeSeriesDataSet(
    data,
    target="volume",
    group_ids=["agency", "sku"],
    target_normalizer=GroupNormalizer(
        groups=["agency", "sku"],
        transformation="softplus",
    ),
    # ... other params
)

DeepAR with Standard Scaling

from pytorch_forecasting import GroupNormalizer

training = TimeSeriesDataSet(
    data,
    target="value",
    group_ids=["series"],
    target_normalizer=GroupNormalizer(groups=["series"]),
    add_target_scales=True,
    # ... other params
)
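The target_scale features stored per sample are what a model uses to map predictions back to the original units. A simplified sketch of that denormalization step (the affine inverse, plus undoing a pre-normalization transform, shown here for "log"; the library handles the other transforms and their gradients internally):

```python
# Simplified sketch of denormalizing a prediction with the
# per-sample (center, scale) stored as target_scale.
import math

def denormalize(y_norm, center, scale, transformation=None):
    y = y_norm * scale + center          # undo (x - center) / scale
    if transformation == "log":
        y = math.exp(y)                  # undo log applied before scaling
    return y

value = denormalize(0.5, center=2.0, scale=4.0)  # plain standard scaling
```

Setting add_target_scales=True, as in the DeepAR example above, additionally exposes these (center, scale) values to the model as input features.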
