
Implementation:Microsoft Onnxruntime CPU LSTM IOUtils

From Leeroopedia


Knowledge Sources
Domains Training, CPU_Kernels
Last Updated 2026-02-10 04:00 GMT

Overview

Concrete utilities for LSTM input/output tensor construction, validation, and attribute parsing in the CPU training kernels of the ONNX Runtime training framework.

Description

This file provides I/O utility structures and validation logic for the LSTM training and gradient kernels. It implements:

LSTMAttributes — parses kernel attributes such as direction, activations, clip, and hidden_size.
LSTMInputs — loads and validates forward-pass inputs from the OpKernelContext.
LSTMOutputs — allocates output tensors for the forward pass.
LSTMGradInputs — loads gradient-pass inputs, including all hidden/cell states and the iofc gate values.
LSTMGradOutputs — allocates gradient output tensors for dX, dW, dR, dB, dH0, dC0, and dP.

Validation functions enforce shape constraints such as X having 3 dimensions, W having shape [directions, 4*H, I], and R having shape [directions, 4*H, H]. Only the forward direction is supported. The code requires the default sigmoid/tanh/tanh activations and that clip and input_forget are left at their default values.
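The shape constraints above can be sketched as a standalone check. This is illustrative only: the actual kernel validates ORT TensorShape objects and reports failures through a Status, and the function name here is hypothetical.

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Illustrative sketch of the documented shape checks (not the ORT code):
// X must be 3-D, W must be [directions, 4*H, input_size],
// R must be [directions, 4*H, H].
void ValidateLstmShapes(const std::vector<int64_t>& x_dims,
                        const std::vector<int64_t>& w_dims,
                        const std::vector<int64_t>& r_dims,
                        int64_t directions, int64_t hidden_size) {
  if (x_dims.size() != 3)
    throw std::invalid_argument("X must be [seq_length, batch_size, input_size]");
  const int64_t input_size = x_dims[2];
  const std::vector<int64_t> expected_w = {directions, 4 * hidden_size, input_size};
  const std::vector<int64_t> expected_r = {directions, 4 * hidden_size, hidden_size};
  if (w_dims != expected_w)
    throw std::invalid_argument("W must be [directions, 4*H, input_size]");
  if (r_dims != expected_r)
    throw std::invalid_argument("R must be [directions, 4*H, H]");
}
```

With hidden_size H = 4 and input_size I = 3, a valid single-direction call is ValidateLstmShapes({5, 2, 3}, {1, 16, 3}, {1, 16, 4}, 1, 4); mismatched W or R dimensions raise an exception.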

Usage

These utilities are used by both the LSTMTraining (forward) and LSTMGrad (backward) operators to prepare input spans and allocate output tensors before invoking the actual compute kernels.

Code Reference

Source Location

Signature

struct LSTMAttributes {
    LSTMAttributes(const OpKernelInfo& info);
};

template <typename T>
struct LSTMInputs {
    LSTMInputs(OpKernelContext* context, const int directions, const int hidden_size);
};

template <typename T>
struct LSTMOutputs {
    LSTMOutputs(OpKernelContext* context, const int directions, const int sequence_length,
                const int batch_size, const int hidden_size);
};

template <typename T>
struct LSTMGradInputs {
    LSTMGradInputs(OpKernelContext* context, const int directions, const int hidden_size);
};

template <typename T>
struct LSTMGradOutputs {
    LSTMGradOutputs(OpKernelContext* context, const int directions, const int sequence_length,
                    const int batch_size, const int hidden_size, const int input_size);
};

Import

#include "orttraining/orttraining/training_ops/cpu/rnn/lstm_io_utils.h"

I/O Contract

Inputs (LSTMInputs - Forward)

Name Type Required Description
X Tensor(float) Yes Input sequence [seq_length, batch_size, input_size]
W Tensor(float) Yes Weights [directions, 4*H, input_size]
R Tensor(float) Yes Recurrence weights [directions, 4*H, H]
B Tensor(float) No Bias [directions, 8*H]
SL Tensor(int32) No Sequence lengths (not supported)
H0 Tensor(float) No Initial hidden state [directions, batch, H]
C0 Tensor(float) No Initial cell state [directions, batch, H]
P Tensor(float) No Peephole weights [directions, 3*H]
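The [directions, 8*H] bias B packs two 4*H blocks per direction: the input-projection biases Wb followed by the recurrence biases Rb, each covering the four gates in ONNX's iofc order. A minimal sketch of that split (the helper name is hypothetical, not part of the ORT API):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative helper: splits one direction's 8*H bias row into
// Wb (first 4*H values) and Rb (last 4*H values), each in iofc order.
std::pair<std::vector<float>, std::vector<float>>
SplitLstmBias(const std::vector<float>& b, int64_t hidden_size) {
  const int64_t gate_width = 4 * hidden_size;
  std::vector<float> wb(b.begin(), b.begin() + gate_width);
  std::vector<float> rb(b.begin() + gate_width, b.end());
  return {wb, rb};
}
```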

Outputs (LSTMGradOutputs)

Name Type Description
dX Tensor(float) Gradient w.r.t. input [seq_length, batch, input_size]
dW Tensor(float) Gradient w.r.t. weights [directions, 4*H, input_size]
dR Tensor(float) Gradient w.r.t. recurrence weights [directions, 4*H, H]
dB Tensor(float) Gradient w.r.t. bias [directions, 8*H]
dH0 Tensor(float) Gradient w.r.t. initial hidden state
dC0 Tensor(float) Gradient w.r.t. initial cell state
dP Tensor(float) Gradient w.r.t. peephole weights [directions, 3*H]
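The element counts implied by the shapes above can be tabulated from the forward-pass dimensions. A minimal sketch, assuming the shapes listed in the table (the struct and function names are hypothetical):

```cpp
#include <cstdint>

// Element counts for each gradient output, derived from the table above.
struct LstmGradSizes {
  int64_t dX, dW, dR, dB, dH0, dC0, dP;
};

LstmGradSizes ComputeGradSizes(int64_t dirs, int64_t seq, int64_t batch,
                               int64_t hidden, int64_t input) {
  return {seq * batch * input,         // dX:  [seq_length, batch, input_size]
          dirs * 4 * hidden * input,   // dW:  [directions, 4*H, input_size]
          dirs * 4 * hidden * hidden,  // dR:  [directions, 4*H, H]
          dirs * 8 * hidden,           // dB:  [directions, 8*H]
          dirs * batch * hidden,       // dH0: [directions, batch, H]
          dirs * batch * hidden,       // dC0: [directions, batch, H]
          dirs * 3 * hidden};          // dP:  [directions, 3*H]
}
```

For example, with one direction, seq_length 5, batch 2, H = 4, and input_size 3, dW holds 1 * 4 * 4 * 3 = 48 elements and dB holds 8 * 4 = 32.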

Usage Examples

// Using LSTMInputs in the forward pass
const auto lstm_inputs = lstm::LSTMInputs<float>(context, num_directions, hidden_size);
auto lstm_outputs = lstm::LSTMOutputs<float>(context, num_directions,
    lstm_inputs.shape.sequence_length, lstm_inputs.shape.batch_size, hidden_size);
