# Heuristic: LaurentMazare tch-rs Device Fallback Pattern
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, Deep_Learning |
| Last Updated | 2026-02-08 13:00 GMT |
## Overview
Use `Device::cuda_if_available()` to automatically select GPU when CUDA is present, falling back to CPU otherwise, ensuring code runs on any hardware configuration.
## Description
tch-rs provides `Device::cuda_if_available()`, an associated function that checks for CUDA availability at runtime and returns `Device::Cuda(0)` if a GPU is found, or `Device::Cpu` otherwise. This is the recommended way to write hardware-portable code: all model parameters, input tensors, and training operations should use the device it returns. The pattern appears throughout the tch-rs examples and is the standard idiom for `VarStore` initialization.
## Usage
Use this heuristic in any tch-rs application that should work on both GPU and CPU hardware. Apply it at the start of the program when creating the `VarStore`. Also use `tensor.to_device(device)` to ensure input data is on the same device as the model. This is critical for deployment portability where the target hardware may or may not have a GPU.
## The Insight (Rule of Thumb)
- Action: Initialize VarStore with `nn::VarStore::new(Device::cuda_if_available())` instead of hardcoding `Device::Cpu` or `Device::Cuda(0)`.
- Value: N/A (pattern-based).
- Trade-off: Minimal. `cuda_if_available()` always selects device 0, so multi-GPU setups that need a specific device must still choose one explicitly; for single-device code it strictly dominates hardcoding, since it degrades gracefully.
- Compatibility: Works on all platforms. CUDA detection is a runtime check via `atc_cuda_is_available()`.
## Reasoning
Hardcoding `Device::Cuda(0)` causes a panic or error on machines without NVIDIA GPUs. Hardcoding `Device::Cpu` wastes available GPU resources. The fallback pattern provides the best of both worlds: automatic GPU utilization when available, seamless CPU execution otherwise. This is especially important for open-source libraries and examples that run on diverse hardware.
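The detect-then-fall-back shape generalizes beyond tch-rs. The sketch below reproduces it with no dependencies; the CUDA probe is faked with an environment-variable check purely for illustration (in tch-rs the real probe is `Cuda::is_available()`):

```rust
/// Dependency-free sketch of the fallback idiom: probe for an
/// accelerator at runtime, degrade to a baseline otherwise.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Device {
    Cpu,
    Cuda(usize),
}

/// Stand-in probe; a real build would call into torch_sys.
fn cuda_is_available() -> bool {
    std::env::var("CUDA_VISIBLE_DEVICES")
        .map(|v| !v.is_empty())
        .unwrap_or(false)
}

fn cuda_if_available() -> Device {
    if cuda_is_available() {
        Device::Cuda(0)
    } else {
        Device::Cpu
    }
}

fn main() {
    // On a machine without the probe's marker set, this prints `Cpu`.
    let device = cuda_if_available();
    println!("selected {device:?}");
}
```

The key property is that callers never branch on hardware themselves; they receive a single `Device` value and pass it everywhere, so the program is correct on both configurations.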
## Code Evidence
Device fallback implementation from `src/wrappers/device.rs:106-112`:
```rust
/// Returns a GPU device if available, else default to CPU.
pub fn cuda_if_available() -> Device {
    if Cuda::is_available() {
        Device::Cuda(0)
    } else {
        Device::Cpu
    }
}
```
CUDA availability check from `src/wrappers/device.rs:26-28`:
```rust
/// Returns true if at least one CUDA device is available.
pub fn is_available() -> bool {
    unsafe_torch!(torch_sys::cuda::atc_cuda_is_available()) != 0
}
```
Usage in README example:
```rust
let mut vs = VarStore::new(Device::cuda_if_available());
```