Principle: LaurentMazare tch-rs Frozen Feature Computation
| Knowledge Sources | |
|---|---|
| Domains | Deep_Learning, Transfer_Learning |
| Last Updated | 2026-02-08 14:00 GMT |
Overview
Pattern for computing model outputs with gradient tracking disabled, so features can be extracted efficiently without the memory overhead of autograd bookkeeping.
Description
Frozen feature computation wraps the forward pass in a no_grad context, which disables gradient tracking for all tensor operations within the scope. This eliminates the memory cost of storing intermediate activations for backpropagation, making it suitable for inference and for feature extraction from frozen models. The RAII-based NoGradGuard restores the previous gradient-tracking mode when it is dropped at scope exit.
Usage
Use when extracting features from frozen pretrained models or during evaluation. Always wrap feature extraction in no_grad to avoid unnecessary memory consumption.
Theoretical Basis
With gradients (training):
Memory: O(N * layers * activation_size), N = batch size — all intermediate activations are retained for backpropagation
Computation: forward pass plus autograd graph construction
Without gradients (frozen):
Memory: O(N * output_size) — intermediates are freed as soon as they are consumed
Computation: forward pass only; no graph construction
tch::no_grad(|| { ... }) — Closure-based API
tch::no_grad_guard() — RAII guard (dropped at scope exit)