Environment:ARISE Initiative Robomimic PyTorch CUDA Environment

From Leeroopedia
Domains: Infrastructure, Deep_Learning, Robot_Learning
Last Updated: 2026-02-15 07:30 GMT

Overview

Python 3.8+ environment with PyTorch 2.0+, optional CUDA GPU support, and cuDNN benchmark acceleration for CNN-based observation encoders.

Description

This environment provides the core compute context for all robomimic training and evaluation workflows. It supports both CPU and GPU execution. When a CUDA-capable GPU is available, the framework automatically enables cuDNN benchmark mode to optimize the performance of CNNs such as the ResNet-18 image observation encoders. Device selection is controlled by the `config.train.cuda` flag (default `True`), which triggers automatic GPU detection via `torch.cuda.is_available()`. Checkpoint loading also adapts: when no GPU is available, weights are mapped to CPU storage automatically.
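As an illustrative sketch of the behavior described above (plain PyTorch, not robomimic code), selecting the device and moving a CNN encoder and an observation batch onto it looks like this:

```python
import torch
import torch.nn as nn

# Mirror robomimic's selection logic: prefer GPU when requested and available.
use_cuda = True  # corresponds to config.train.cuda
if use_cuda and torch.cuda.is_available():
    # Benchmark mode lets cuDNN pick the fastest convolution algorithms
    # for fixed-size image observation batches.
    torch.backends.cudnn.benchmark = True
    device = torch.device("cuda:0")
else:
    device = torch.device("cpu")

# Toy stand-in for an image observation encoder, moved to the chosen device.
encoder = nn.Sequential(nn.Conv2d(3, 16, kernel_size=3), nn.ReLU()).to(device)
obs = torch.randn(4, 3, 84, 84, device=device)  # batch of 84x84 RGB observations
features = encoder(obs)
```

On a CPU-only machine the same code runs unchanged; only the selected device differs.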

Usage

Use this environment for all robomimic workflows: training policies from demonstrations, evaluating trained agents, and running hyperparameter sweeps. GPU acceleration is strongly recommended for image-based observation training but not strictly required for low-dimensional observation experiments.

System Requirements

| Category | Requirement | Notes |
|---|---|---|
| OS | Mac OS X or Linux | Windows not officially supported |
| Python | >= 3.6 (3.8.0 recommended) | Conda environment recommended |
| Hardware (GPU) | NVIDIA GPU with CUDA support | Optional; CPU fallback available |
| Hardware (CPU) | Any modern CPU | Sufficient for low-dim experiments |
| Disk | 10 GB+ | Model checkpoints and datasets require storage |

Dependencies

System Packages

  • `conda` (or `virtualenv`) for environment management

Python Packages

  • `torch` (recommended == 2.0.0)
  • `torchvision` (recommended == 0.15.1)
  • `numpy` >= 1.13.3
  • `six` (Python 2/3 metaclass compatibility)

Credentials

No credentials required for the base PyTorch/CUDA environment.

Quick Install

# Create conda environment
conda create -n robomimic_venv python=3.8.0
conda activate robomimic_venv

# Install PyTorch (Linux with CUDA)
conda install pytorch==2.0.0 torchvision==0.15.1 -c pytorch

# Install robomimic (includes torch as dependency)
pip install robomimic
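After installation, a quick sanity check (a minimal sketch, independent of robomimic) confirms that PyTorch imports and reports whether a CUDA device was detected:

```python
import torch

# Report the installed PyTorch version and whether a CUDA GPU was found.
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```

If `CUDA available` prints `False` on a GPU machine, the PyTorch build likely lacks CUDA support and should be reinstalled.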

Code Evidence

Device selection from `robomimic/utils/torch_utils.py:38-54`:

def get_torch_device(try_to_use_cuda):
    if try_to_use_cuda and torch.cuda.is_available():
        torch.backends.cudnn.benchmark = True
        device = torch.device("cuda:0")
    else:
        device = torch.device("cpu")
    return device

CPU fallback for checkpoint loading from `robomimic/utils/file_utils.py:199-202`:

if not torch.cuda.is_available():
    ckpt_dict = torch.load(ckpt_path, map_location=lambda storage, loc: storage, weights_only=False)
else:
    ckpt_dict = torch.load(ckpt_path, weights_only=False)
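The same fallback pattern can be exercised end-to-end with a self-contained sketch (a toy model and a temporary path, not robomimic's actual checkpoint format):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Toy model standing in for a trained policy.
model = nn.Linear(4, 2)
ckpt_path = os.path.join(tempfile.mkdtemp(), "demo_ckpt.pth")
torch.save({"model": model.state_dict()}, ckpt_path)

# Mirror the CPU fallback: map any GPU-saved storages onto CPU when no GPU exists.
if not torch.cuda.is_available():
    ckpt_dict = torch.load(ckpt_path, map_location=lambda storage, loc: storage, weights_only=False)
else:
    ckpt_dict = torch.load(ckpt_path, weights_only=False)

model.load_state_dict(ckpt_dict["model"])
```

The `map_location` lambda returns each storage unchanged, which places deserialized tensors on CPU regardless of the device they were saved from.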

Default CUDA flag from `robomimic/config/base_config.py:234`:

self.train.cuda = True          # use GPU or not

Common Errors

| Error Message | Cause | Solution |
|---|---|---|
| `RuntimeError: CUDA out of memory` | Insufficient GPU VRAM for batch size/model | Reduce `config.train.batch_size` or use `hdf5_cache_mode="low_dim"` instead of `"all"` |
| `RuntimeError: No CUDA GPUs are available` | No GPU detected but `config.train.cuda=True` | Set `config.train.cuda = False` to use CPU |
| Models run very slowly on GPU | cuDNN benchmark not enabled | Ensure `config.train.cuda = True` (enables `cudnn.benchmark` automatically) |
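For the out-of-memory case, one generic mitigation is to retry a forward pass with a smaller batch. This is not robomimic API; it is a hypothetical sketch (the helper name `safe_forward` is invented here), mirroring the advice to reduce the batch size:

```python
import torch
import torch.nn as nn

def safe_forward(model, batch, device):
    """Hypothetical helper: retry with a halved batch if CUDA runs out of memory."""
    try:
        return model(batch.to(device))
    except torch.cuda.OutOfMemoryError:
        # Coarse mitigation: free cached blocks and shrink the batch.
        torch.cuda.empty_cache()
        half = batch[: max(1, batch.shape[0] // 2)]
        return model(half.to(device))

policy = nn.Linear(4, 2)           # toy stand-in for a policy network
obs_batch = torch.randn(8, 4)
out = safe_forward(policy, obs_batch, torch.device("cpu"))
```

In practice, lowering `config.train.batch_size` up front is preferable to retry logic, since repeated OOM exceptions fragment GPU memory.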

Compatibility Notes

  • Mac OS: CUDA not available; all training runs on CPU. Do not install `cudatoolkit`.
  • Linux with GPU: Recommended platform. cuDNN benchmark mode auto-enabled for CNN optimization.
  • CPU-only mode: Fully supported. Set `config.train.cuda = False`. Sufficient for low-dimensional observation experiments.
