

Heuristic:Kornia Avoid Inplace Ops Compile





Knowledge Sources
Domains: Optimization, Deep_Learning
Last Updated: 2026-02-09 15:00 GMT

Overview

Avoid in-place tensor operations in code paths that must be compatible with `torch.compile()` and gradient checkpointing.

Description

PyTorch's `torch.compile()` (Dynamo + Inductor) and autograd have known issues with in-place operations. In-place ops modify tensors that may have been saved for the backward pass, breaking gradient computation or causing Dynamo graph breaks. Kornia has adopted a pattern of replacing in-place operations with their out-of-place equivalents in critical paths. This includes rewriting `tensor -= value` as `tensor = tensor - value` and avoiding in-place updates of bounding-box coordinates when flip augmentations are involved.
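
For illustration, the two variants below differ only in whether the subtraction mutates its input; the function names are made up for this sketch and are not Kornia APIs:

import torch

def shift_inplace(x: torch.Tensor, offset: float) -> torch.Tensor:
    # In-place subtraction mutates `x`; if autograd saved `x` for backward,
    # or Dynamo is tracing through this code, that mutation can break things.
    x -= offset
    return x

def shift_out_of_place(x: torch.Tensor, offset: float) -> torch.Tensor:
    # Out-of-place subtraction leaves `x` untouched and returns a new tensor.
    return x - offset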

Usage

Apply this heuristic when writing any Kornia code that should support `torch.compile()` or be used inside training loops with autograd. Never use `+=`, `-=`, `*=`, `/=` on tensors that participate in gradient computation. Also avoid `tensor[mask] = value` indexing assignments when possible.
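
As a sketch of the second point (illustrative names, not Kornia APIs), a masked assignment can usually be rewritten with `torch.where`:

import torch

def zero_out_negatives_inplace(x: torch.Tensor) -> torch.Tensor:
    # Indexed assignment writes into `x` in place.
    x[x < 0] = 0.0
    return x

def zero_out_negatives_functional(x: torch.Tensor) -> torch.Tensor:
    # torch.where builds a new tensor instead of writing into `x`.
    return torch.where(x < 0, torch.zeros_like(x), x)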

The Insight (Rule of Thumb)

  • Action: Replace in-place operations (`tensor -= x`) with out-of-place equivalents (`tensor = tensor - x`).
  • Value: Critical for `torch.compile()` compatibility and correct gradient computation.
  • Trade-off: Out-of-place operations allocate a new tensor for the result. Under `torch.compile()` the Inductor backend can usually plan and reuse buffers, so the extra allocation is rarely a real cost in compiled code.

Reasoning

In-place operations on tensors that autograd has saved for the backward pass cause a RuntimeError at backward time, because the saved tensor's version counter no longer matches the value recorded in the graph. With `torch.compile()`, in-place ops can cause graph breaks where the compiler falls back to eager mode, negating the compilation benefits. The Kornia team has documented these cases with explicit comments when converting in-place code to out-of-place code.
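
A minimal, self-contained reproduction of the autograd failure (not Kornia code): `exp` saves its output for the backward pass, so mutating that output in place makes `backward()` raise.

import torch

x = torch.randn(4, requires_grad=True)
y = torch.exp(x)    # exp saves its output for the backward pass
y.mul_(2)           # in-place edit of that saved output
y.sum().backward()  # RuntimeError: ... has been modified by an inplace operation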

Code Evidence

Avoiding in-place for torch.compile from `kornia/models/rt_detr/post_processor.py:110`:

# bboxes[..., :2] -= bboxes[..., 2:] * 0.5  # in-place operation is not torch.compile()-friendly
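
One way such an update can be written out-of-place is to rebuild the tensor rather than assign into a slice; this is a sketch, not necessarily the exact replacement used in that file:

import torch

def shift_centers_to_corners(bboxes: torch.Tensor) -> torch.Tensor:
    # Compute the shifted xy from the original tensor, then reassemble,
    # instead of subtracting into bboxes[..., :2] in place.
    xy = bboxes[..., :2] - bboxes[..., 2:] * 0.5
    return torch.cat([xy, bboxes[..., 2:]], dim=-1)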

Avoiding in-place ops in loss computation from `kornia/losses/hausdorff.py:83`:

# NOTE: avoid in-place ops like below, which will not pass gradcheck:
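
`torch.autograd.gradcheck` is the check the note refers to; the toy function below (illustrative, not the Hausdorff loss) passes because it uses only out-of-place arithmetic:

import torch
from torch.autograd import gradcheck

def scaled_diff(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Out-of-place arithmetic keeps saved tensors intact for gradcheck's
    # comparison of numerical and analytical gradients.
    return (x - y) * 0.5

x = torch.randn(3, dtype=torch.float64, requires_grad=True)
y = torch.randn(3, dtype=torch.float64, requires_grad=True)
assert gradcheck(scaled_diff, (x, y))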

Bbox coordinate warning about in-place flip from `kornia/geometry/bbox.py:501-503`:

warnings.warn(
    "Previous behaviour produces incorrect box coordinates if a flip transformation performed on boxes."
)
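
Why flips are sensitive: after a horizontal flip the new x_min must be computed from the old x_max and vice versa, so overwriting one column in place destroys a value the other column still needs. A hypothetical out-of-place helper (not the Kornia API) avoids the hazard by reading only the original tensor:

import torch

def hflip_boxes_xyxy(boxes: torch.Tensor, image_width: int) -> torch.Tensor:
    # New x_min comes from old x_max and new x_max from old x_min; both are
    # computed from the untouched input before the result is assembled.
    new_x_min = image_width - boxes[..., 2]
    new_x_max = image_width - boxes[..., 0]
    return torch.stack([new_x_min, boxes[..., 1], new_x_max, boxes[..., 3]], dim=-1)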
