Implementation: LaurentMazare/tch-rs no_grad + apply_t
| Knowledge Sources | |
|---|---|
| Domains | Deep_Learning, Transfer_Learning |
| Last Updated | 2026-02-08 14:00 GMT |
Overview
Concrete pattern for executing gradient-free forward passes using tch's no_grad context combined with the apply_t helper.
Description
This pattern combines tch::no_grad (a function that runs a closure with gradient tracking disabled) with Tensor::apply_t (a convenience method that calls ModuleT::forward_t on the receiver). Together they enable efficient feature extraction from frozen models. The no_grad_guard() variant instead returns an RAII guard that keeps gradient tracking disabled until the guard is dropped.
Usage
Use this to pre-compute features for transfer learning: wrap the entire feature extraction in tch::no_grad and call apply_t with train set to false so layers such as dropout and batch norm run in evaluation mode.
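When the gradient-free region spans several statements, the RAII variant mentioned in the Description can be more convenient than nesting everything in a closure. A minimal sketch; the `backbone` and `images` bindings are assumptions for illustration:

```rust
use tch::nn::ModuleT;

fn extract(images: &tch::Tensor, backbone: &impl ModuleT) -> tch::Tensor {
    // RAII style: gradient tracking stays disabled until `_guard` is dropped.
    let _guard = tch::no_grad_guard();
    // train = false puts dropout/batch-norm layers in evaluation mode.
    images.apply_t(backbone, false)
    // `_guard` is dropped here; gradient tracking is restored for the caller.
}
```

The closure form (`tch::no_grad(|| ...)`) and the guard form are interchangeable; the guard simply scopes the no-grad region to a lexical block instead of a closure body.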
Code Reference
Source Location
- Repository: tch-rs
- File: src/nn/module.rs
- Lines: 50-52 (apply_t)
Signature
```rust
// Closure-based API
pub fn no_grad<F: FnOnce() -> T, T>(f: F) -> T

// RAII guard API
pub fn no_grad_guard() -> NoGradGuard

// Tensor helper
impl Tensor {
    pub fn apply_t<M: ModuleT>(&self, m: &M, train: bool) -> Tensor
}
```
Import
```rust
use tch::nn::ModuleT; // trait bound required by apply_t's model argument
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| self (Tensor) | &Tensor | Yes | Input data tensor (e.g., all training images) |
| m | &M: ModuleT | Yes | Frozen model to apply |
| train | bool | Yes | Set to false for feature extraction |
Outputs
| Name | Type | Description |
|---|---|---|
| Tensor | Tensor | Feature tensor with no gradient tracking (e.g., [N, 512]) |
Usage Examples
```rust
use tch::nn::ModuleT;

// `backbone` is a frozen model implementing ModuleT (e.g. a pretrained
// ResNet trunk) and `dataset` holds the stacked image tensors.
// Pre-compute features for the entire dataset with gradients disabled.
let train_features = tch::no_grad(|| dataset.train_images.apply_t(&backbone, false));
let test_features = tch::no_grad(|| dataset.test_images.apply_t(&backbone, false));
```
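Once features are cached this way, a small trainable head can be fit on top of them. A hedged sketch, not part of the source: the feature width (512), class count (10), and the `train_features`/`train_labels` bindings are assumptions for illustration.

```rust
use tch::{nn, nn::ModuleT, nn::OptimizerConfig, Device};

// Only the linear head lives in this VarStore, so only it receives gradients;
// the cached features are plain tensors with no graph attached.
let vs = nn::VarStore::new(Device::cuda_if_available());
let head = nn::linear(vs.root(), 512, 10, Default::default());
let mut opt = nn::Adam::default().build(&vs, 1e-3).unwrap();

for _epoch in 0..10 {
    // apply_t works for the head too: Module implementors get ModuleT for free.
    let logits = train_features.apply_t(&head, true);
    let loss = logits.cross_entropy_for_logits(&train_labels);
    opt.backward_step(&loss);
}
```

Because the expensive backbone forward pass happens once up front, each epoch here only touches the cheap linear layer, which is the main payoff of the pattern.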
Related Pages
Implements Principle