Implementation: LaurentMazare/tch-rs SGD Build
| Knowledge Sources | |
|---|---|
| Domains | Deep_Learning, Optimization |
| Last Updated | 2026-02-08 14:00 GMT |
Overview
Concrete tool from the tch nn module for constructing the SGD optimizer used in gradient-based training.
Description
nn::Sgd::default().build(&vs, lr) creates an Optimizer that wraps a C++ SGD optimizer over all trainable variables in the VarStore. The default configuration has zero momentum, dampening, and weight decay, with Nesterov momentum disabled. Since Sgd is a plain struct with public fields (see Signature below), configure momentum and the other parameters with struct update syntax, as sketched below.
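A minimal sketch of a non-default configuration; the 0.9 momentum, 5e-4 weight decay, and layer sizes are illustrative values, not recommendations:

use tch::nn::{self, OptimizerConfig};
let vs = nn::VarStore::new(tch::Device::Cpu);
let _net = nn::linear(vs.root(), 512, 10, Default::default());
// Momentum SGD with L2 weight decay, set via struct update syntax.
let mut opt = nn::Sgd { momentum: 0.9, wd: 5e-4, ..Default::default() }.build(&vs, 1e-2)?;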
Usage
Use for transfer-learning tasks where a small classifier head is trained on frozen features. Build the optimizer from a VarStore containing only the trainable layers, not the frozen backbone, so the backbone never receives updates; see the sketch below.
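A minimal sketch of that split, assuming the backbone and head are kept in separate VarStores; the linear layers and their sizes are illustrative stand-ins for a real feature extractor and classifier:

use tch::nn::{self, OptimizerConfig};
use tch::Device;
// Backbone variables live in their own store; freeze() marks them non-trainable.
let mut backbone_vs = nn::VarStore::new(Device::Cpu);
let _backbone = nn::linear(backbone_vs.root(), 784, 512, Default::default());
backbone_vs.freeze();
// Head variables live in a separate store; only this store is handed to build.
let head_vs = nn::VarStore::new(Device::Cpu);
let _head = nn::linear(head_vs.root(), 512, 10, Default::default());
let mut opt = nn::Sgd::default().build(&head_vs, 1e-3)?;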
Code Reference
Source Location
- Repository: tch-rs
- File: src/nn/optimizer.rs
- Lines: 46-50 (Sgd::default), 57-61 (build_copt)
Signature
impl Default for Sgd {
    fn default() -> Self {
        Sgd { momentum: 0., dampening: 0., wd: 0., nesterov: false }
    }
}

// Inherited from the OptimizerConfig trait
fn build(self, vs: &VarStore, lr: f64) -> Result<Optimizer, TchError>
Import
use tch::nn::{self, OptimizerConfig};
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| vs | &VarStore | Yes | Variable store with trainable parameters |
| lr | f64 | Yes | Learning rate |
Outputs
| Name | Type | Description |
|---|---|---|
| (return) | Result<Optimizer, TchError> | SGD optimizer wrapping the C++ implementation |
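Because build returns a Result, construction failures surface as a TchError value rather than a panic. A minimal sketch of explicit handling for contexts where ? is unavailable, assuming a VarStore vs is already in scope:

let opt = match nn::Sgd::default().build(&vs, 1e-3) {
    Ok(opt) => opt,
    Err(err) => panic!("failed to build SGD optimizer: {err}"),
};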
Usage Examples
use tch::nn::{self, Module, OptimizerConfig};
use tch::{Device, Kind, Tensor};

fn main() -> Result<(), tch::TchError> {
    let vs = nn::VarStore::new(Device::Cpu);
    let linear = nn::linear(vs.root(), 512, 10, Default::default());
    let mut opt = nn::Sgd::default().build(&vs, 1e-3)?;
    // Random placeholder data standing in for real features and labels.
    let train_features = Tensor::randn(&[100, 512], (Kind::Float, Device::Cpu));
    let train_labels = Tensor::randint(10, &[100], (Kind::Int64, Device::Cpu));
    for _epoch in 1..=1000 {
        let logits = train_features.apply(&linear);
        let loss = logits.cross_entropy_for_logits(&train_labels);
        opt.backward_step(&loss); // zero grads, backward, update
    }
    Ok(())
}
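Note that backward_step combines zeroing gradients, backpropagation, and the parameter update in one call, so no explicit zero_grad is needed in the loop; call opt.zero_grad(), loss.backward(), and opt.step() separately if you need to accumulate gradients over several batches.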
Related Pages
Implements Principle