Implementation: Kornia AugmentationSequential Forward
| Knowledge Sources | |
|---|---|
| Domains | Vision, Training, Augmentation |
| Last Updated | 2026-02-09 15:00 GMT |
Overview
A concrete Kornia tool for executing differentiable augmentation inside PyTorch training loops.
Description
The AugmentationSequential.forward() method applies the composed augmentation pipeline during training. It accepts batched image tensors and optional annotation data (masks, boxes, keypoints).
Key capabilities:
- Pre-computed params can be passed via the `params=` argument for reproducibility
- Gradient preservation through geometric and photometric transforms enables end-to-end differentiable training
- Dynamic `data_keys` override allows changing which data types are transformed at call time
- Supports `Dict` input for named data access
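The `data_keys` dispatch idea can be sketched in plain Python (a simplified illustration of the mechanism, not Kornia's actual implementation): geometric transforms must be applied to every data key so annotations stay aligned with the image, while photometric transforms touch only the image.

```python
# Simplified sketch of data_keys dispatch (illustration only, not Kornia's code).
# Geometric ops hit every annotation type; photometric ops only the "input" key.

def apply_pipeline(data, data_keys, transforms):
    """data: list of arrays aligned with data_keys, e.g. ["input", "mask"]."""
    out = list(data)
    for kind, fn in transforms:  # kind is "geometric" or "photometric"
        for i, key in enumerate(data_keys):
            if kind == "geometric" or key == "input":
                out[i] = fn(out[i])
    return out

# Toy transforms: horizontal flip (geometric) and brightness shift (photometric)
flip = ("geometric", lambda x: x[::-1])
brighten = ("photometric", lambda x: [v + 10 for v in x])

image, mask = [1, 2, 3], [0, 1, 0]
aug_image, aug_mask = apply_pipeline([image, mask], ["input", "mask"], [flip, brighten])
# The mask is flipped alongside the image but is never brightness-shifted.
```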
Usage
Call as part of the training loop:
```python
augmented = aug_pipeline(images, masks, data_keys=["input", "mask"])
```

Use `params=` for reproducible augmentation across multiple calls with identical random state.
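The replay mechanism behind `params=` can be sketched in plain Python (an illustration of the pattern, not Kornia internals): parameter sampling is separated from parameter application, so a sampled parameter set can be stored and passed back in to reproduce the exact same transform.

```python
import random

# Sketch of the params-replay pattern (illustration, not Kornia internals):
# sampling is decoupled from application, so sampled parameters can be
# stored and passed back in to replay the identical augmentation.

def sample_params(rng):
    return {"shift": rng.uniform(-30, 30), "apply": rng.random() < 0.8}

def augment(x, params=None, rng=None):
    if params is None:
        params = sample_params(rng)  # fresh random draw
    # Stand-in for a real transform: apply the sampled shift (or identity).
    y = x + params["shift"] if params["apply"] else x
    return y, params  # return params so the caller can replay them later

rng = random.Random(0)
out1, saved = augment(10.0, rng=rng)
out2, _ = augment(10.0, params=saved)  # replay: identical result
```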
Code Reference
Source Location
- Repository: kornia
- File: kornia/augmentation/container/augment.py
- Lines: L428-433
Signature
```python
def forward(
    self,
    *args: Union[DataType, Dict[str, DataType]],
    params: Optional[List[ParamItem]] = None,
    data_keys: Optional[Union[List[str], List[int], List[DataKey]]] = None,
) -> Union[DataType, List[DataType], Dict[str, DataType]]
```
Import
```python
from kornia.augmentation import AugmentationSequential
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| *args | Union[DataType, Dict[str, DataType]] | Yes | Batched tensors for each data key (images, masks, boxes, etc.) |
| params | Optional[List[ParamItem]] | No | Pre-computed augmentation parameters for reproducibility |
| data_keys | Optional[Union[List[str], List[int], List[DataKey]]] | No | Override the data keys specified at construction time |
Outputs
| Name | Type | Description |
|---|---|---|
| return | Union[DataType, List[DataType], Dict[str, DataType]] | Augmented data with gradients preserved; format matches the input format |
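The "format matches input format" contract can be sketched in plain Python (an illustration of the convention, not Kornia's code): a single positional input comes back as a single value, multiple inputs come back as a list, and a dict input comes back as a dict with the same keys.

```python
# Sketch of the output-format contract (illustration only): the return
# shape mirrors the call shape -- single value, list, or dict.

def forward(*args):
    transform = lambda x: [v * 2 for v in x]  # stand-in augmentation
    if len(args) == 1 and isinstance(args[0], dict):
        # Dict input -> dict output with the same keys
        return {k: transform(v) for k, v in args[0].items()}
    outs = [transform(a) for a in args]
    # Single input -> single value; multiple inputs -> list
    return outs[0] if len(outs) == 1 else outs

single = forward([1, 2])                            # single value back
pair = forward([1, 2], [0, 1])                      # list back
named = forward({"input": [1, 2], "mask": [0, 1]})  # dict back
```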
Usage Examples
Training Loop Integration
```python
import torch
import torch.nn as nn
from kornia.augmentation import AugmentationSequential, RandomAffine, ColorJiggle

# Define augmentation pipeline
aug = AugmentationSequential(
    RandomAffine(degrees=(-15, 15), scale=(0.9, 1.1), p=0.8),
    ColorJiggle(brightness=0.2, contrast=0.2, p=0.5),
    data_keys=["input", "mask"],
)

model = nn.Sequential(...)  # your model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for images, masks, labels in dataloader:
    images = images.cuda()
    masks = masks.cuda()
    labels = labels.cuda()

    # Differentiable augmentation on GPU
    aug_images, aug_masks = aug(images, masks)

    # Forward pass through model (gradients flow through augmentation)
    predictions = model(aug_images)
    loss = criterion(predictions, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
Reproducible Augmentation with Params
```python
import torch
from kornia.augmentation import AugmentationSequential, RandomAffine

aug = AugmentationSequential(
    RandomAffine(degrees=(-30, 30), p=1.0),
    data_keys=["input"],
)

images = torch.randn(4, 3, 256, 256).cuda()

# First forward pass -- augmentation parameters are generated and stored
augmented = aug(images)

# Retrieve the generated parameters
saved_params = aug._params

# Second forward pass with the same parameters -- identical augmentation
augmented_replay = aug(images, params=saved_params)

# Both outputs are identical
assert torch.allclose(augmented, augmented_replay)
```
Related Pages
Implements Principle
Requires Environment