
Workflow:Sktime Pytorch forecasting NBeats Univariate Forecasting

From Leeroopedia


Knowledge Sources
Domains Time_Series, Univariate_Forecasting, Deep_Learning
Last Updated 2026-02-08 07:00 GMT

Overview

End-to-end process for univariate time series forecasting using N-BEATS (Neural Basis Expansion Analysis for Interpretable Time Series Forecasting) with trend-seasonal decomposition.

Description

This workflow trains an N-BEATS model for pure univariate time series forecasting without exogenous covariates. N-BEATS uses a deep stack of fully-connected layers organized into blocks that learn basis expansion coefficients for both backcast (reconstruction of input) and forecast (prediction of future). The interpretable variant decomposes the signal into trend and seasonal components using structured basis functions, while the generic variant uses learnable bases. The model achieved state-of-the-art results on the M4 competition benchmark. This workflow covers data preparation with minimal metadata, model construction, and training.

Key capabilities:

  • Pure univariate forecasting without covariates
  • Interpretable trend-seasonal decomposition
  • Stack-based residual learning architecture
  • Strong performance on competition benchmarks (M4, M3, TOURISM)

Usage

Execute this workflow when you have univariate time series data (no exogenous covariates) and want a strong baseline or production model for point forecasting. N-BEATS is well-suited for benchmarking, competition settings, and scenarios where the target series contains sufficient information for forecasting without external features.

Execution Steps

Step 1: Data Preparation

Prepare the time series data as a pandas DataFrame with columns for series identifier, integer time index, and the target value. For experimentation, use generate_ar_data to create synthetic autoregressive data with configurable properties. Determine the context length (encoder) and forecast horizon (prediction length).

Key considerations:

  • N-BEATS is designed for univariate forecasting; only the target value column is used
  • Context length should be long enough to capture seasonality (e.g., 150 for daily data with weekly patterns)
  • Set the prediction length to match the desired forecast horizon
  • Split data by time cutoff: training on earlier data, validation on the last prediction_length steps

Step 2: TimeSeriesDataSet Construction

Create a minimal TimeSeriesDataSet with only the target, time index, and group identifiers. Disable features that N-BEATS does not use: set add_relative_time_idx=False, add_target_scales=False, and specify only time_varying_unknown_reals with the target column. Use NaNLabelEncoder for the series identifier. Fix encoder and prediction lengths to exact values with no randomization.

Key considerations:

  • N-BEATS does not use covariates; only list the target in time_varying_unknown_reals
  • Fixed-length windows are required: set min_encoder_length = max_encoder_length
  • Set randomize_length=None to disable variable-length subsampling
  • No normalizer is needed if the model handles raw values (default behavior)

Step 3: Validation Dataset and DataLoader Creation

Create the validation dataset using from_dataset with min_prediction_idx set to the training cutoff, ensuring validation windows start where training ends. Convert both datasets to DataLoaders with appropriate batch size.

Key considerations:

  • Use min_prediction_idx parameter (not predict=True) for time-based validation splits
  • Larger batch sizes (e.g., 128) are efficient since N-BEATS requires less per-sample computation than attention-based models

Step 4: Trainer and Model Configuration

Configure the PyTorch Lightning Trainer with early stopping on validation loss, gradient clipping, and automatic accelerator selection. Instantiate the N-BEATS model using from_dataset with the learning rate, weight decay, and logging parameters.

Key considerations:

  • N-BEATS benefits from weight decay (1e-2) to regularize the fully-connected layers
  • Gradient clipping (0.1) stabilizes training
  • The model auto-configures backcast and forecast lengths from the dataset
  • Default stack types are interpretable (trend + seasonal); use generic for maximum flexibility

Step 5: Model Training

Train the model using trainer.fit() with the training and validation dataloaders. The residual stacking architecture trains efficiently as each block learns to predict the residual from previous blocks.

Key considerations:

  • Monitor val_loss for convergence; N-BEATS typically converges faster than attention models
  • Early stopping with patience 10 provides a good balance
  • The backcast loss acts as a regularizer, improving forecast quality
  • For competition submissions, ensemble multiple N-BEATS models with different random seeds

Execution Diagram

GitHub URL

Workflow Repository