
Heuristic: Isaac Sim IsaacGymEnvs GPU Pipeline Selection

From Leeroopedia
Knowledge Sources
Domains Optimization, Infrastructure
Last Updated 2026-02-15 09:00 GMT

Overview

Performance optimization heuristic: use `pipeline=gpu` to keep all simulation and training data on GPU, avoiding costly CPU-GPU data transfers every step.

Description

IsaacGymEnvs supports two data pipelines: `gpu` and `cpu`. The GPU pipeline keeps observation, reward, action, and reset tensors entirely on the GPU, eliminating per-step host-device memory copies. The CPU pipeline copies data from GPU to CPU at every simulation step, which introduces significant overhead. The pipeline is configured independently of the physics simulation device (`sim_device`), but the GPU pipeline can only run when simulation itself is on the GPU.

Usage

Use this heuristic when configuring training runs for maximum throughput. Always prefer `pipeline=gpu` with `sim_device='cuda:0'` unless you specifically need CPU-side data access (e.g., for debugging or custom CPU-based post-processing). If you set `sim_device='cpu'`, the system will automatically force `pipeline=cpu` regardless of your setting.
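As a sketch, a typical launch following this heuristic (assuming the standard IsaacGymEnvs `train.py` entry point; the `Ant` task name is illustrative) might look like:

```shell
# Keep simulation, observations, and the policy all on GPU
python train.py task=Ant sim_device='cuda:0' rl_device='cuda:0' pipeline='gpu'

# Debugging fallback: direct CPU-side data access, at the cost of per-step copies
python train.py task=Ant sim_device='cpu' pipeline='cpu'
```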

The Insight (Rule of Thumb)

  • Action: Set `pipeline: 'gpu'` and `sim_device: 'cuda:0'` in config or via CLI.
  • Value: Default in `config.yaml` is already `pipeline: 'gpu'`.
  • Trade-off: GPU pipeline requires all data to remain on GPU. CPU-side inspection of per-step data requires explicit `.cpu()` calls. CPU pipeline is slower but allows direct CPU access.
  • Constraint: If `sim_device` is `'cpu'`, the GPU pipeline is automatically disabled with a warning message.

Reasoning

The GPU pipeline avoids the per-step `gym.fetch_results()` data copy. In the CPU pipeline path (see `vec_task.py:384-386`), `fetch_results` is called every step when `self.device == 'cpu'`. With thousands of parallel environments, this per-step host-device copy becomes a significant bottleneck. The GPU pipeline keeps the entire simulation → observation → policy → action loop on the GPU.

From `vec_task.py:384-386`:

# to fix!
if self.device == 'cpu':
    self.gym.fetch_results(self.sim, True)

The `# to fix!` comment indicates this CPU fallback path is considered suboptimal by the developers.

GPU pipeline enforcement logic from `vec_task.py:83-88`:

if config["sim"]["use_gpu_pipeline"]:
    if self.device_type.lower() == "cuda" or self.device_type.lower() == "gpu":
        self.device = "cuda" + ":" + str(self.device_id)
    else:
        print("GPU Pipeline can only be used with GPU simulation. Forcing CPU Pipeline.")
        config["sim"]["use_gpu_pipeline"] = False
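The enforcement rule above can be sketched as a standalone function (a minimal reconstruction for illustration; the function name is ours, and the config dictionary mirrors the shape used in the snippet):

```python
def resolve_pipeline_device(config: dict, device_type: str, device_id: int) -> str:
    """Mirror the GPU-pipeline enforcement from vec_task.py: the GPU pipeline
    is honored only when simulation itself runs on a CUDA/GPU device."""
    if config["sim"]["use_gpu_pipeline"]:
        if device_type.lower() in ("cuda", "gpu"):
            # All per-step tensors stay on this device; no host copies needed.
            return f"cuda:{device_id}"
        # CPU simulation cannot feed a GPU pipeline: fall back with a warning.
        print("GPU Pipeline can only be used with GPU simulation. Forcing CPU Pipeline.")
        config["sim"]["use_gpu_pipeline"] = False
    # With the GPU pipeline off, per-step data lives on the host.
    return "cpu"

# Requesting the GPU pipeline with CPU simulation falls back, mutating the config.
cfg = {"sim": {"use_gpu_pipeline": True}}
assert resolve_pipeline_device(cfg, "cpu", 0) == "cpu"
assert cfg["sim"]["use_gpu_pipeline"] is False
```

Note that, as in the original, the fallback mutates the config in place, so downstream code reading `use_gpu_pipeline` sees the forced CPU setting.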
