
Implementation:Sktime Pytorch forecasting Lightning Trainer

Knowledge Sources

  • Domains: Deep_Learning, Training, MLOps
  • Last Updated: 2026-02-08 07:00 GMT

Overview

Wrapper documentation for the PyTorch Lightning Trainer as used to orchestrate training in pytorch-forecasting workflows.

Description

The lightning.pytorch.Trainer class manages the complete training lifecycle for pytorch-forecasting models: distributed training, mixed precision, gradient accumulation, gradient clipping, logging, checkpointing, and callback execution. pytorch-forecasting workflows typically configure it with gradient_clip_val=0.1 (essential for transformer stability), EarlyStopping on validation loss, a LearningRateMonitor for debugging, and optionally limit_train_batches for fast iteration during development.
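
Each of those lifecycle features maps to a Trainer constructor argument. A minimal illustrative sketch (argument values here are examples, not pytorch-forecasting defaults):

import lightning.pytorch as pl

trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,                  # distributed training across two GPUs
    precision="16-mixed",       # mixed-precision training
    accumulate_grad_batches=4,  # gradient accumulation
    gradient_clip_val=0.1,      # gradient clipping
    max_epochs=50,
)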

Usage

Import and configure the Trainer after creating the DataLoaders and before instantiating the model. The Trainer instance is first passed to Tuner.lr_find() to find a suitable learning rate, then used via Trainer.fit() to execute training, as sketched below.
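
A minimal sketch of that sequence, assuming a pytorch-forecasting model (a hypothetical tft) plus train_dataloader and val_dataloader created in earlier steps:

import lightning.pytorch as pl
from lightning.pytorch.tuner import Tuner

trainer = pl.Trainer(max_epochs=50, gradient_clip_val=0.1)

# Learning rate finding before the full run
# (`tft`, `train_dataloader`, `val_dataloader` are assumed to exist)
res = Tuner(trainer).lr_find(
    tft,
    train_dataloaders=train_dataloader,
    val_dataloaders=val_dataloader,
)
print(f"suggested learning rate: {res.suggestion()}")

# Execute training
trainer.fit(tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)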

Code Reference

Source Location

  • Repository: External — pytorch-lightning
  • File: lightning/pytorch/trainer/trainer.py

Signature

class Trainer:
    def __init__(
        self,
        *,
        accelerator: str = "auto",
        strategy: str = "auto",
        devices: int | str | list[int] = "auto",
        num_nodes: int = 1,
        precision: str | int = "32-true",
        logger: Logger | bool = True,
        callbacks: list[Callback] | None = None,
        fast_dev_run: bool | int = False,
        max_epochs: int | None = None,
        min_epochs: int | None = None,
        max_steps: int = -1,
        min_steps: int | None = None,
        max_time: str | timedelta | dict | None = None,
        limit_train_batches: int | float | None = None,
        limit_val_batches: int | float | None = None,
        limit_test_batches: int | float | None = None,
        val_check_interval: int | float | None = None,
        log_every_n_steps: int | None = 50,
        enable_checkpointing: bool | None = None,
        enable_progress_bar: bool | None = None,
        enable_model_summary: bool | None = None,
        accumulate_grad_batches: int = 1,
        gradient_clip_val: int | float | None = None,
        gradient_clip_algorithm: str | None = None,
        default_root_dir: str | None = None,
        # ... additional parameters omitted ...
    ):

Import

import lightning.pytorch as pl
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor
from lightning.pytorch.loggers import TensorBoardLogger

I/O Contract

Inputs

Name                 Type            Required  Description
max_epochs           int             No        Maximum training epochs (typical: 50-100)
accelerator          str             No        Device type: "auto", "gpu", "cpu" (default: "auto")
gradient_clip_val    float           No        Max gradient norm (default: None; use 0.1 for forecasting)
callbacks            list[Callback]  No        EarlyStopping, LearningRateMonitor, etc.
logger               Logger          No        TensorBoardLogger or other logging backend
limit_train_batches  int or float    No        Limit batches per epoch for fast prototyping

Outputs

Name     Type        Description
Trainer  pl.Trainer  Configured training orchestrator ready for .fit() or Tuner usage
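
For quick pipeline validation before a full run, fast_dev_run or the limit_*_batches caps keep iterations short. A quick sketch (standard Lightning options, nothing pytorch-forecasting-specific):

import lightning.pytorch as pl

# Run a single train/val batch to smoke-test the pipeline end to end
smoke_trainer = pl.Trainer(fast_dev_run=True)

# Or cap the work per epoch while keeping real logging and checkpointing
proto_trainer = pl.Trainer(max_epochs=3, limit_train_batches=30, limit_val_batches=3)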

Usage Examples

TFT Training Configuration

import lightning.pytorch as pl
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor
from lightning.pytorch.loggers import TensorBoardLogger

trainer = pl.Trainer(
    max_epochs=50,
    accelerator="auto",
    gradient_clip_val=0.1,   # clip gradient norm; key for transformer stability
    limit_train_batches=30,  # cap batches per epoch for faster iteration
    callbacks=[
        LearningRateMonitor(),  # log the learning rate during training
        EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=10),
    ],
    logger=TensorBoardLogger("lightning_logs"),
)
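
After trainer.fit(...) completes, the best checkpoint can be recovered through the ModelCheckpoint callback that Lightning enables by default. A possible continuation, assuming the trained model is a pytorch-forecasting TemporalFusionTransformer:

from pytorch_forecasting import TemporalFusionTransformer

# Path tracked by the default ModelCheckpoint callback
best_model_path = trainer.checkpoint_callback.best_model_path
best_tft = TemporalFusionTransformer.load_from_checkpoint(best_model_path)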

DeepAR Minimal Configuration

import lightning.pytorch as pl
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor

trainer = pl.Trainer(
    max_epochs=10,
    accelerator="gpu",
    devices="auto",
    gradient_clip_val=0.1,   # same clipping default as the TFT setup
    limit_train_batches=30,  # small caps keep development runs fast
    limit_val_batches=3,
    callbacks=[
        LearningRateMonitor(),
        EarlyStopping(monitor="val_loss", patience=5),
    ],
)
