
Principle:LaurentMazare Tch rs Frozen Feature Computation

From Leeroopedia


Knowledge Sources
Domains Deep_Learning, Transfer_Learning
Last Updated 2026-02-08 14:00 GMT

Overview

Pattern for computing model outputs with gradient tracking disabled, so features can be extracted efficiently without the memory overhead of autograd.

Description

Frozen feature computation wraps forward pass execution in a no_grad context, which disables gradient tracking for all tensor operations within the scope. This eliminates the memory overhead of storing intermediate activations needed for backpropagation, making it suitable for inference and feature extraction from frozen models. The RAII-based NoGradGuard automatically restores gradient tracking when it goes out of scope.
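The restore-on-drop behavior of the RAII guard can be sketched in plain Rust, without the tch crate. The thread-local `GRAD_ENABLED` flag below is a hypothetical stand-in for libtorch's real per-thread grad-mode state; only the guard's lifecycle matches what the page describes:

```rust
use std::cell::Cell;

thread_local! {
    // Hypothetical stand-in for libtorch's per-thread grad-mode flag.
    static GRAD_ENABLED: Cell<bool> = Cell::new(true);
}

fn grad_enabled() -> bool {
    GRAD_ENABLED.with(|g| g.get())
}

/// Models tch's NoGradGuard: disables gradient tracking on creation
/// and restores the previous mode when dropped (RAII).
struct NoGradGuard {
    prev: bool,
}

impl NoGradGuard {
    fn new() -> Self {
        let prev = GRAD_ENABLED.with(|g| g.replace(false));
        NoGradGuard { prev }
    }
}

impl Drop for NoGradGuard {
    fn drop(&mut self) {
        // Runs automatically at scope exit, even on early return.
        GRAD_ENABLED.with(|g| g.set(self.prev));
    }
}

fn main() {
    assert!(grad_enabled()); // tracking on by default
    {
        let _guard = NoGradGuard::new();
        assert!(!grad_enabled()); // disabled inside the guarded scope
    } // guard dropped here
    assert!(grad_enabled()); // previous mode restored
    println!("grad mode restored: {}", grad_enabled());
}
```

Because restoration lives in `Drop`, the guard cannot be forgotten on any exit path, which is the property that makes the RAII form safer than manually toggling a flag.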

Usage

Use when extracting features from frozen pretrained models or during evaluation. Always wrap feature extraction in no_grad to avoid unnecessary memory consumption.

Theoretical Basis

With gradients (training):
  Memory: O(N * layers * activations) — stores all intermediate values
  Computation: Forward + builds autograd graph

Without gradients (frozen):
  Memory: O(N * output_size) — no intermediate storage
  Computation: Forward only, no graph construction

tch::no_grad(|| { ... }) — Closure-based API
tch::no_grad_guard()     — RAII guard (dropped at scope exit)
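The closure-based form can likewise be modeled in std-only Rust. As above, the thread-local flag is a hypothetical substitute for libtorch's grad-mode state, and the `vec!` stands in for a real forward pass:

```rust
use std::cell::Cell;

thread_local! {
    // Hypothetical per-thread grad-mode flag.
    static GRAD_ENABLED: Cell<bool> = Cell::new(true);
}

fn grad_enabled() -> bool {
    GRAD_ENABLED.with(|g| g.get())
}

/// Sketch of the closure-based API: run `f` with gradient tracking
/// disabled, then restore the previous mode and return `f`'s result.
fn no_grad<T>(f: impl FnOnce() -> T) -> T {
    let prev = GRAD_ENABLED.with(|g| g.replace(false));
    let out = f();
    GRAD_ENABLED.with(|g| g.set(prev));
    out
}

fn main() {
    let features = no_grad(|| {
        // A real forward pass would run here with tracking off.
        assert!(!grad_enabled());
        vec![0.1_f32, 0.2, 0.3] // stand-in for extracted features
    });
    assert!(grad_enabled()); // mode restored after the closure returns
    println!("extracted {} features", features.len());
}
```

The closure form scopes the disabled region to a single expression and hands back its value, while the guard form suits longer stretches of straight-line code; in real tch both ultimately toggle the same underlying grad-mode state.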

Related Pages

Implemented By
