Principle: sktime PyTorch Forecasting DeepAR Model Instantiation
| Knowledge Sources | |
|---|---|
| Domains | Time_Series, Deep_Learning, Probabilistic_Forecasting |
| Last Updated | 2026-02-08 07:00 GMT |
Overview
Technique for instantiating an autoregressive recurrent neural network model that produces probabilistic forecasts by learning the parameters of a probability distribution.
Description
DeepAR is an autoregressive RNN-based model designed for probabilistic time series forecasting. Unlike point forecast models, DeepAR learns the parameters of a probability distribution (e.g., Normal, NegativeBinomial) at each time step, enabling uncertainty quantification through sampling. The model uses an encoder-decoder architecture where the encoder processes historical context through recurrent layers, and the decoder generates future predictions autoregressively — each predicted sample is fed back as input for the next step. Model instantiation via from_dataset configures the architecture from dataset metadata and validates that targets are continuous (categorical targets are not supported).
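The autoregressive sampling loop described above can be sketched in plain Python. This is an illustrative toy, not the library implementation: a linear update stands in for the LSTM/GRU cell, and the "projection" to distribution parameters is a simple mean plus a softplus-constrained scale. The key mechanic it demonstrates is real, though: each drawn sample is fed back as the next step's input.

```python
import math
import random

# Toy sketch of DeepAR-style autoregressive decoding (illustrative only:
# a real DeepAR uses an LSTM/GRU cell; a linear update stands in here).
# At each step the hidden state is projected to distribution parameters
# (mu, sigma), a sample z_t is drawn, and z_t is fed back as input for
# the next step -- the autoregressive loop that makes sampling possible.

def decode(history, horizon, rng):
    """Sample one future trajectory of length `horizon` given `history`."""
    h = sum(history) / len(history)          # stand-in for the encoder's final hidden state
    z = history[-1]                          # last observed value seeds the loop
    path = []
    for _ in range(horizon):
        h = 0.9 * h + 0.1 * z                # toy recurrent update h_t = f(h_{t-1}, z_{t-1})
        mu = h                               # projection to the mean parameter
        sigma = math.log1p(math.exp(0.1 * h))  # softplus keeps the scale positive
        z = rng.gauss(mu, sigma)             # draw z_t ~ N(mu_t, sigma_t)
        path.append(z)                       # the sample is fed back next iteration
    return path

rng = random.Random(0)
# Repeating the decode yields sample paths that approximate the predictive distribution
paths = [decode([1.0, 1.2, 1.1, 1.3], horizon=6, rng=rng) for _ in range(100)]
```

Because each trajectory is a joint sample over the horizon, quantities such as quantiles or probabilities of threshold exceedance can be estimated directly from the collection of paths.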
Usage
Use DeepAR when probabilistic forecasts with calibrated uncertainty intervals are needed. It is particularly suited for: (1) datasets with many related time series that share statistical patterns, (2) scenarios requiring distributional output (e.g., inventory optimization where quantile estimates matter), and (3) univariate or low-covariate settings. Use MultivariateNormalDistributionLoss to convert DeepAR into a DeepVAR network for correlated multi-target forecasting.
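For the inventory-style use case in point (2), the distributional output is consumed by reading quantiles off the sampled trajectories. A minimal stdlib sketch, using synthetic Gaussian paths as stand-ins for DeepAR samples:

```python
import random
import statistics

# Hypothetical illustration: per-step P10/P50/P90 quantiles computed from
# sampled forecast trajectories (synthetic stand-ins for DeepAR samples).
rng = random.Random(42)

# 500 simulated demand paths over a 4-step horizon, truncated at zero
paths = [[max(0.0, rng.gauss(100, 15)) for _ in range(4)] for _ in range(500)]

quantile_forecast = []
for t in range(4):
    step_samples = sorted(p[t] for p in paths)       # samples at step t
    q = statistics.quantiles(step_samples, n=10)     # decile cut points
    quantile_forecast.append({"p10": q[0], "p50": q[4], "p90": q[8]})
```

A safety-stock decision would then order against the P90 estimate rather than the median, which is exactly the information a point forecast cannot provide.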
Theoretical Basis
DeepAR models the conditional distribution of future values autoregressively, factorizing the joint predictive distribution over the forecast horizon:

P(z_{t0:T} | z_{1:t0-1}, x_{1:T}) = \prod_{t=t0}^{T} p(z_t | \theta(h_t))

At each step, the RNN hidden state encodes history and projects to distribution parameters:

h_t = RNN(h_{t-1}, z_{t-1}, x_t), \qquad \theta_t = projection(h_t)

where z denotes the target, x the covariates, and \theta the parameters of the chosen likelihood (e.g., the mean and scale of a Normal distribution).
Key design choices:
- cell_type — LSTM or GRU recurrent cell
- hidden_size — RNN hidden dimension (main capacity control)
- rnn_layers — depth of RNN stack
- loss — DistributionLoss determining output distribution family