Implementation:NVIDIA TransformerEngine Softmax C API
| Field | Value |
|---|---|
| Sources | TransformerEngine |
| Domains | Deep_Learning, Optimization |
| Last Updated | 2026-02-07 14:00 GMT |
Overview
Declares the C API for scaled softmax operations with various masking strategies used in attention computation, providing both forward and backward passes.
Description
softmax.h exposes forward/backward pairs for four softmax variants:
- nvte_scaled_softmax_forward/backward: Unmasked scaled softmax with a scale_factor applied to the input before softmax.
- nvte_scaled_masked_softmax_forward/backward: Softmax with an explicit mask tensor. Masked positions are set to negative infinity before softmax.
- nvte_scaled_upper_triang_masked_softmax_forward/backward: Uses an implicit upper-triangular causal mask aligned top-left. Avoids materializing the mask tensor.
- nvte_scaled_aligned_causal_masked_softmax_forward/backward: Uses an implicit causal mask aligned bottom-right. Also avoids materializing the mask tensor.
The mask-specialized variants are critical for long-sequence models where attention matrices can be very large and materializing full mask tensors would waste significant memory.
Usage
Use for standalone attention softmax computation when not using the fully fused attention path.
Code Reference
Source Location
- Repository: NVIDIA/TransformerEngine
- File: transformer_engine/common/include/transformer_engine/softmax.h
- Lines: 1–132
Signature
void nvte_scaled_softmax_forward(const NVTETensor input,
NVTETensor softmax_results,
float scale_factor, cudaStream_t stream);
void nvte_scaled_softmax_backward(const NVTETensor incoming_grads,
const NVTETensor softmax_results,
NVTETensor output_grads,
float scale_factor, cudaStream_t stream);
void nvte_scaled_masked_softmax_forward(const NVTETensor input,
const NVTETensor mask,
NVTETensor softmax_results,
float scale_factor, cudaStream_t stream);
void nvte_scaled_upper_triang_masked_softmax_forward(
const NVTETensor input, NVTETensor softmax_results,
float scale_factor, cudaStream_t stream);
void nvte_scaled_aligned_causal_masked_softmax_forward(
const NVTETensor input, NVTETensor softmax_results,
float scale_factor, cudaStream_t stream);
Import
#include "transformer_engine/softmax.h"
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| input | NVTETensor | Yes | Input attention scores |
| mask | NVTETensor | No | Explicit mask tensor (masked variant only) |
| scale_factor | float | Yes | Scalar applied before softmax (typically 1/sqrt(d)) |
| stream | cudaStream_t | Yes | CUDA stream |
Outputs
| Name | Type | Description |
|---|---|---|
| softmax_results | NVTETensor | Softmax output probabilities |
Usage Examples
#include "transformer_engine/softmax.h"
// Causal attention softmax (no explicit mask needed)
nvte_scaled_aligned_causal_masked_softmax_forward(
attn_scores, softmax_output, 1.0f / sqrtf(head_dim), stream);
// Backward pass (use the backward matching the forward variant)
nvte_scaled_aligned_causal_masked_softmax_backward(
    grad_output, softmax_output, grad_input,
    1.0f / sqrtf(head_dim), stream);