Environment: Isaac Sim / IsaacGymEnvs Python CUDA Runtime
| Knowledge Sources | Details |
|---|---|
| Domains | Infrastructure, Deep_Learning |
| Last Updated | 2026-02-15 09:00 GMT |
Overview
A Python 3.6+ runtime with a CUDA-enabled PyTorch build and an NVIDIA GPU, used for GPU physics simulation and RL training.
Description
IsaacGymEnvs requires a Python environment with CUDA-enabled PyTorch for GPU-accelerated physics simulation and reinforcement learning. The default configuration assumes `cuda:0` for both simulation and RL devices. All tensor operations, observation buffers, reward computations, and policy networks execute on GPU. A conda environment is strongly recommended for managing the CUDA toolkit and Python version compatibility.
Usage
Use this environment as the base runtime for all IsaacGymEnvs workflows. The CUDA runtime is required for GPU simulation (PhysX on GPU), the GPU data pipeline, and PyTorch-based RL training. CPU-only mode is available but significantly slower and not recommended for production training.
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| Python | >= 3.6 | 3.7 and 3.8 also listed in classifiers; conda recommended |
| PyTorch | torch (latest compatible) | CUDA-enabled build required |
| CUDA Toolkit | Compatible with PyTorch build | Typically CUDA 11.x |
| GPU | NVIDIA GPU with CUDA support | Default device is `cuda:0` |
| OS | Ubuntu 18.04 / 20.04 LTS | Linux only |
Dependencies
System Packages
- NVIDIA CUDA Toolkit (version matching PyTorch build)
- `conda` (strongly recommended for environment management)
Python Packages
- `torch` (CUDA-enabled build)
- `numpy`
- `gym` == 0.23.1
Environment Variables
The following environment variables are read or set at runtime:
- `RANK`: Global rank for multi-GPU distributed training (default: `"0"`)
- `LOCAL_RANK`: Local GPU rank for multi-GPU training (default: `"0"`)
- `WORLD_SIZE`: Total number of GPU processes (default: `"1"`)
- `PYTHONHASHSEED`: Set automatically when `torch_deterministic=True`
- `CUBLAS_WORKSPACE_CONFIG`: Set to `':4096:8'` when `torch_deterministic=True`
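The two deterministic-mode variables can be sketched as a small helper. This is a minimal sketch of the environment-variable side of `torch_deterministic=True` only; the actual IsaacGymEnvs seeding utility also calls the PyTorch seeding and deterministic-algorithm APIs, which are omitted here so the sketch runs without a GPU.

```python
import os
import random


def set_deterministic_env(seed: int) -> None:
    """Set the env vars that IsaacGymEnvs uses for deterministic runs.

    Sketch only: torch.manual_seed / torch.use_deterministic_algorithms
    would also be called in a real setup.
    """
    # Makes Python's string hashing reproducible across runs.
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Required by cuBLAS for reproducible GEMM kernels on CUDA >= 10.2.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    random.seed(seed)
```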
Quick Install
```bash
# Create conda environment (recommended)
conda create -n isaacgym python=3.8
conda activate isaacgym

# Install PyTorch with CUDA support
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

# Install Isaac Gym Preview 4 (see IsaacGym_Preview_4 environment page)
cd isaacgym/python && pip install -e .

# Install IsaacGymEnvs
cd IsaacGymEnvs && pip install -e .
```
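After installing, it is worth confirming that PyTorch can actually see the CUDA runtime before launching a training run. A minimal sketch of such a check, written defensively so it also reports a missing or CPU-only install rather than crashing:

```python
def cuda_runtime_report() -> dict:
    """Return a small report on the PyTorch/CUDA runtime, if present."""
    report = {"torch_installed": False, "cuda_available": False}
    try:
        import torch
    except ImportError:
        # PyTorch not installed in this environment.
        return report
    report["torch_installed"] = True
    report["torch_version"] = torch.__version__
    report["cuda_available"] = torch.cuda.is_available()
    if report["cuda_available"]:
        report["device_name"] = torch.cuda.get_device_name(0)
    return report


if __name__ == "__main__":
    print(cuda_runtime_report())
```

If `cuda_available` is `False`, training will fall back to the much slower CPU path (or fail, depending on configuration), so fix the driver/toolkit mismatch first.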
Code Evidence
Default CUDA device from `isaacgymenvs/utils/torch_jit_utils.py:37`:
```python
def to_torch(x, dtype=torch.float, device='cuda:0', requires_grad=False):
```
GPU device configuration from `isaacgymenvs/cfg/config.yaml:24-27`:
```yaml
sim_device: 'cuda:0'
rl_device: 'cuda:0'
graphics_device_id: 0
```
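Device strings like `'cuda:0'` or `'cpu'` are split into a device type and an index when the simulation is created. A hypothetical helper illustrating that decomposition (the function name here is illustrative, not a quoted API):

```python
from typing import Tuple


def parse_device_str(device_str: str) -> Tuple[str, int]:
    """Split a device string like 'cuda:0' or 'cpu' into (type, index).

    Illustrative sketch of how sim_device/rl_device strings map to a
    device type plus compute index; a bare 'cpu' gets index 0.
    """
    if ":" in device_str:
        device_type, idx = device_str.split(":", 1)
        return device_type, int(idx)
    return device_str, 0
```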
Multi-GPU rank assignment from `isaacgymenvs/utils/rlgames_utils.py:91-100`:
```python
local_rank = int(os.getenv("LOCAL_RANK", "0"))
global_rank = int(os.getenv("RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
_sim_device = f'cuda:{local_rank}'
_rl_device = f'cuda:{local_rank}'
```
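The rank-to-device mapping can be expressed as a pure function, which makes it easy to see (and test) that each `torchrun` worker pins both the simulation and the RL policy to `cuda:{LOCAL_RANK}`. A sketch over an injected mapping rather than `os.environ` directly; the real logic lives in `isaacgymenvs/utils/rlgames_utils.py`:

```python
import os
from typing import Mapping, Optional, Tuple


def resolve_devices(env: Optional[Mapping[str, str]] = None) -> Tuple[str, str]:
    """Map LOCAL_RANK to (sim_device, rl_device) for one worker process.

    Defaults to os.environ; pass a dict for testing. With no LOCAL_RANK
    set, both devices fall back to cuda:0, matching the single-GPU default.
    """
    env = os.environ if env is None else env
    local_rank = int(env.get("LOCAL_RANK", "0"))
    sim_device = f"cuda:{local_rank}"
    rl_device = f"cuda:{local_rank}"
    return sim_device, rl_device
```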
GPU pipeline enforcement from `isaacgymenvs/tasks/base/vec_task.py:83-88`:
```python
if config["sim"]["use_gpu_pipeline"]:
    if self.device_type.lower() == "cuda" or self.device_type.lower() == "gpu":
        self.device = "cuda" + ":" + str(self.device_id)
    else:
        print("GPU Pipeline can only be used with GPU simulation. Forcing CPU Pipeline.")
        config["sim"]["use_gpu_pipeline"] = False
```
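The fallback above can be restated as a standalone function: the GPU pipeline is only valid when the simulation itself runs on CUDA, otherwise both collapse to CPU. A sketch of that decision logic (not the `VecTask` class itself):

```python
from typing import Tuple


def select_pipeline(use_gpu_pipeline: bool, device_type: str,
                    device_id: int) -> Tuple[str, bool]:
    """Return (torch device string, effective use_gpu_pipeline).

    Sketch of the vec_task.py fallback: tensors live on the GPU only
    when both the pipeline flag is set and simulation runs on CUDA.
    """
    if use_gpu_pipeline and device_type.lower() in ("cuda", "gpu"):
        return f"cuda:{device_id}", True
    if use_gpu_pipeline:
        # Mirrors the warning printed by VecTask before forcing CPU.
        print("GPU Pipeline can only be used with GPU simulation. "
              "Forcing CPU Pipeline.")
    return "cpu", False
```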
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `CUDA out of memory` | Too many parallel environments for available VRAM | Reduce `task.env.numEnvs` in config |
| `GPU Pipeline can only be used with GPU simulation. Forcing CPU Pipeline.` | `sim_device` set to `'cpu'` with `pipeline: 'gpu'` | Set `sim_device: 'cuda:0'` or use `pipeline: 'cpu'` |
| `RuntimeError: CUDA error: device-side assert triggered` | Numerical instability in simulation | Reduce velocity limits or check physics parameters |
Compatibility Notes
- Multi-GPU: Supported via `torchrun` with `multi_gpu=True` flag. Each process maps to `cuda:{LOCAL_RANK}`.
- AMP Training: Multi-GPU training is not supported for AMP (Adversarial Motion Priors) tasks.
- CPU Simulation: Available via `sim_device: 'cpu'` but GPU pipeline will be forcibly disabled.
- Mixed Precision: Supported via `torch.cuda.amp.autocast` in both standard and AMP agents.
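The mixed-precision note above can be sketched as a small wrapper: run the forward pass under `torch.cuda.amp.autocast` when CUDA is available, and fall back to full precision otherwise. A hedged sketch, assuming `model` is any callable (e.g. a `torch.nn.Module`) and `obs` its input; this is not the rl_games agent code itself:

```python
def amp_forward(model, obs):
    """Forward pass under autocast when CUDA is available.

    Sketch: degrades gracefully to a plain full-precision call when
    PyTorch is missing or no CUDA device is present.
    """
    try:
        import torch
    except ImportError:
        return model(obs)
    if torch.cuda.is_available():
        # Ops inside this context run in float16/bfloat16 where safe.
        with torch.cuda.amp.autocast():
            return model(obs)
    return model(obs)
```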