Implementation: Microsoft Onnxruntime CPU GatherGrad
| Knowledge Sources | |
|---|---|
| Domains | Training, CPU_Kernels |
| Last Updated | 2026-02-10 04:00 GMT |
Overview
Concrete tool for computing the gradient of the ONNX Gather operation on CPU in the ONNX Runtime training framework.
Description
This file implements the GatherGrad kernel, which computes the gradient of the ONNX Gather operation. Given the upstream gradient and the indices from the forward Gather, it scatters gradient values back into a zero-initialized output tensor with the original data shape. The kernel wraps negative indices, validates that all indices are within bounds, and parallelizes the work via the thread pool. Because multiple output elements of the forward Gather can map to the same input position (index collisions), a mutex serializes the accumulation into the output. The kernel supports float and double data types with int32 and int64 index types.
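The scatter-add at the heart of the kernel can be sketched as follows. This is a simplified, hypothetical helper for axis 0 only, not the actual ORT kernel (which templates over data and index types and splits the work across the thread pool); it shows the negative-index wrapping, bounds check, and mutex-guarded accumulation described above.

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <vector>

// Minimal sketch of GatherGrad's scatter-add on axis 0 (hypothetical helper,
// not the real ORT API). `grad` has shape [num_indices, row_size]; `output`
// is resized to [num_rows, row_size] and zero-initialized. Returns false on
// an out-of-range index.
bool GatherGradAxis0(const std::vector<float>& grad,
                     const std::vector<int64_t>& indices,
                     int64_t num_rows, int64_t row_size,
                     std::vector<float>& output) {
  output.assign(static_cast<size_t>(num_rows * row_size), 0.0f);
  std::mutex mtx;  // in ORT this guards concurrent accumulation from thread-pool workers
  for (size_t i = 0; i < indices.size(); ++i) {
    int64_t idx = indices[i];
    if (idx < 0) idx += num_rows;                   // wrap negative indices
    if (idx < 0 || idx >= num_rows) return false;   // bounds check
    std::lock_guard<std::mutex> lock(mtx);
    // Accumulate: several gathered rows may target the same source row.
    for (int64_t j = 0; j < row_size; ++j)
      output[static_cast<size_t>(idx * row_size + j)] +=
          grad[i * static_cast<size_t>(row_size) + static_cast<size_t>(j)];
  }
  return true;
}
```

Note how gathering row 1 twice doubles its accumulated gradient, which is exactly the collision case the mutex exists for.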
Usage
This kernel is invoked during the backward pass when a Gather operation (e.g., for embedding lookups) was used in the forward pass. It accumulates gradient contributions from all positions that gathered from the same source index.
Code Reference
Source Location
- Repository: Microsoft_Onnxruntime
- File: orttraining/orttraining/training_ops/cpu/tensor/gather_grad.cc
- Lines: 1-104
Signature
```cpp
Status GatherGrad::Compute(OpKernelContext* context) const;

template <typename T, typename Tind>
Status GatherGrad::ComputeImpl(const TensorShape& data_shape,
                               const Tensor& indices, const Tensor& grad,
                               Tensor& output, concurrency::ThreadPool* tp) const;
```
Import
```cpp
#include "orttraining/orttraining/training_ops/cpu/tensor/gather_grad.h"
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| shape | Tensor(int64) | Yes | Shape of the original data tensor |
| indices | Tensor(Tind) | Yes | Indices used in the forward Gather |
| grad | Tensor(T) | Yes | Upstream gradient (same shape as Gather output) |
Outputs
| Name | Type | Description |
|---|---|---|
| output | Tensor(T) | Gradient w.r.t. original data (zero-initialized, then accumulated) |
Usage Examples
```cpp
ONNX_OPERATOR_KERNEL_EX(
    GatherGrad, kMSDomain, 1, kCpuExecutionProvider,
    KernelDefBuilder()
        .TypeConstraint("I", DataTypeImpl::GetTensorType<int64_t>())
        .TypeConstraint("T", {DataTypeImpl::GetTensorType<float>(),
                              DataTypeImpl::GetTensorType<double>()})
        .TypeConstraint("Tind", std::vector<MLDataType>{
            DataTypeImpl::GetTensorType<int32_t>(),
            DataTypeImpl::GetTensorType<int64_t>()}),
    GatherGrad);
```