Implementation: LaurentMazare/tch-rs TrainableCModule::load
| Knowledge Sources | |
|---|---|
| Domains | Model_Training, Interoperability |
| Last Updated | 2026-02-08 14:00 GMT |
Overview
Concrete utility, provided by the tch `wrappers::jit` module, for loading TorchScript models whose parameters are registered as optimizer-trainable.
Description
TrainableCModule::load loads a TorchScript .pt file via CModule::load_on_device, then iterates over the module's named parameters, registering each one under the provided VarStore Path. Each parameter's requires_grad flag is preserved, which enables selective training. The resulting TrainableCModule delegates forward passes to the inner CModule.
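Because the parameters land in a VarStore, the usual VarStore facilities apply after loading. A minimal sketch (the `"model.pt"` filename is illustrative) of loading a module with all gradients disabled, e.g. for feature extraction:

```rust
use tch::{nn, Device, TrainableCModule};

// Load a TorchScript module and freeze every registered parameter.
// VarStore::freeze sets requires_grad = false on all variables;
// vs.unfreeze() re-enables gradients later if fine-tuning resumes.
fn load_frozen(path: &str) -> Result<(nn::VarStore, TrainableCModule), tch::TchError> {
    let mut vs = nn::VarStore::new(Device::Cpu);
    let model = TrainableCModule::load(path, vs.root())?;
    vs.freeze();
    Ok((vs, model))
}
```

Building an optimizer on a partially frozen VarStore only updates the parameters that still require gradients.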
Usage
Use when you need to fine-tune a Python-exported model in Rust. Create a VarStore, load the model with TrainableCModule, then build an optimizer on the VarStore.
Code Reference
Source Location
- Repository: tch-rs
- File: src/wrappers/jit.rs
- Lines: 647-654
Signature
```rust
impl TrainableCModule {
    pub fn load<T: AsRef<std::path::Path>>(
        module_path: T,
        path: Path,
    ) -> Result<Self, TchError>
}
```
Import
```rust
use tch::TrainableCModule;
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| module_path | T: AsRef<Path> | Yes | Path to TorchScript .pt file |
| path | Path | Yes | VarStore path for parameter registration |
Outputs
| Name | Type | Description |
|---|---|---|
| return value | Result<TrainableCModule, TchError> | JIT module with parameters registered in the VarStore for training |
Usage Examples
```rust
use tch::{nn, nn::OptimizerConfig, Device, TrainableCModule};

fn main() -> Result<(), tch::TchError> {
    let vs = nn::VarStore::new(Device::Cpu);
    let model = TrainableCModule::load("resnet18.pt", vs.root())?;
    let mut opt = nn::Adam::default().build(&vs, 1e-4)?;
    // Training loop; `dataset` is assumed to be a tch::vision::dataset::Dataset.
    for (images, labels) in dataset.train_iter(32) {
        let output = model.forward_ts(&[&images])?;
        let loss = output.cross_entropy_for_logits(&labels);
        opt.backward_step(&loss);
    }
    Ok(())
}
```
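After fine-tuning, the updated weights can be persisted through the VarStore rather than the module itself. A short sketch (the checkpoint filename is illustrative):

```rust
use tch::nn;

// Persist the fine-tuned parameters; VarStore::save writes all registered
// tensors, and they can later be restored into a matching VarStore with
// vs.load(path) before reloading the TorchScript module on top of it.
fn save_checkpoint(vs: &nn::VarStore, path: &str) -> Result<(), tch::TchError> {
    vs.save(path)
}
```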