
Principle:Haosulab ManiSkill CPU GPU Dual Backend

From Leeroopedia
Knowledge Sources
Domains: Robotics, Simulation, High_Performance_Computing
Last Updated: 2026-02-15 08:00 GMT

Overview

Core algorithms (inverse kinematics, physics stepping, observation computation) have both CPU and GPU implementations; the appropriate backend is selected automatically based on the simulation configuration.

Description

The CPU GPU Dual Backend principle ensures that computationally intensive operations work correctly on both single-environment CPU simulation and massively parallel GPU simulation. The key challenge is that CPU backends operate on single instances with NumPy arrays and SAPIEN's analytical solvers, while GPU backends operate on batched tensors with PyTorch and batched Jacobian solvers. The same logical operation (e.g., computing inverse kinematics) must produce equivalent results on both backends.

ManiSkill implements this through backend-aware code paths. The Kinematics class uses SAPIEN's PinocchioModel on CPU and pytorch_kinematics on GPU. The BaseEnv class auto-selects the PhysX CPU or CUDA backend based on num_envs. Controllers adapt their tensor operations to work with both scalar and batched inputs. This dual-backend design enables the same environment and agent code to run in interactive debugging mode (CPU, 1 env) and high-throughput training mode (GPU, 1000+ envs).
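The dispatch pattern described above can be sketched in a few lines. This is a hypothetical illustration, not ManiSkill's actual classes: the names `SimpleKinematics`, `_ik_cpu`, and `_ik_gpu` are invented stand-ins for the real PinocchioModel and pytorch_kinematics code paths.

```python
class SimpleKinematics:
    """Hypothetical sketch of backend-aware dispatch for an IK solver."""

    def __init__(self, backend: str):
        # "physx_cpu"  -> single-instance analytical path
        #                 (SAPIEN's PinocchioModel in ManiSkill)
        # "physx_cuda" -> batched Jacobian path
        #                 (pytorch_kinematics in ManiSkill)
        self.backend = backend

    def compute_ik(self, target_pose):
        # Same logical operation, two code paths selected once at setup.
        if self.backend == "physx_cpu":
            return self._ik_cpu(target_pose)
        return self._ik_gpu(target_pose)

    def _ik_cpu(self, target_pose):
        # Operates on one unbatched pose for a single environment.
        return ("cpu_solver", target_pose)

    def _ik_gpu(self, target_pose):
        # Operates on a batch of poses, one per parallel environment.
        return ("gpu_solver", target_pose)
```

Because the branch lives inside the class, environment and agent code calls `compute_ik` identically in both modes, which is what lets one codebase serve interactive debugging and large-scale training.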

Usage

This principle applies whenever:

  • An algorithm must work identically in single-env CPU mode and batched GPU mode.
  • A new IK solver, controller, or observation function is added that performs tensor operations.
  • Sim2real pipelines require CPU-only mode (real robot hardware cannot use GPU simulation).

Theoretical Basis

Backend Resolution: At environment creation, the sim_backend parameter ("auto", "physx_cpu", "physx_cuda") determines whether CPU or GPU paths are used. "auto" selects GPU when num_envs > 1.
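The "auto" rule stated above is simple enough to write down directly. The helper below is a hypothetical sketch of that rule, not ManiSkill's internal resolution function:

```python
def resolve_backend(sim_backend: str, num_envs: int) -> str:
    """Sketch of the 'auto' resolution rule described above (hypothetical helper)."""
    if sim_backend != "auto":
        # Explicit choices ("physx_cpu", "physx_cuda") are honored as-is.
        return sim_backend
    # "auto" picks GPU simulation only when environments are batched.
    return "physx_cuda" if num_envs > 1 else "physx_cpu"
```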

Tensor Abstraction: Operations are written using PyTorch tensors that work on both CPU and CUDA devices. Shape conventions use (batch, ...) where batch=1 for CPU mode.
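The `(batch, ...)` convention means CPU-mode data just carries batch=1 through the same code path. A minimal sketch of that normalization, using NumPy as a stand-in for PyTorch (the shape logic is identical; `to_batched` is a hypothetical helper):

```python
import numpy as np

def to_batched(x: np.ndarray) -> np.ndarray:
    """Ensure a leading batch dimension so CPU (batch=1) and GPU
    (batch=num_envs) data flow through one code path."""
    if x.ndim == 1:
        return x[None, :]  # (D,) -> (1, D)
    return x

single = np.zeros(7)         # one environment's joint positions
batched = np.zeros((64, 7))  # 64 parallel environments

assert to_batched(single).shape == (1, 7)
assert to_batched(batched).shape == (64, 7)
```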

Solver Selection: IK uses SAPIEN's PinocchioModel on the CPU backend (analytical, single-instance) and pytorch_kinematics on the GPU backend (batched Jacobian, parallel). Both produce joint positions from end-effector poses, but through different computational paths.
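To make the batched-Jacobian path concrete, here is one damped least-squares IK update vectorized over environments. This is a generic illustration of the technique, using NumPy rather than PyTorch; it is not pytorch_kinematics' actual implementation, and `jacobian_ik_step` is a hypothetical function.

```python
import numpy as np

def jacobian_ik_step(q, jac, pose_err, damping=0.05):
    """One damped least-squares IK update, batched over B environments.

    q:        (B, n) joint positions
    jac:      (B, 6, n) end-effector Jacobians
    pose_err: (B, 6) twist error toward the target pose
    """
    JT = np.swapaxes(jac, 1, 2)                  # (B, n, 6)
    # Damping keeps JJ^T invertible near singular configurations.
    JJT = jac @ JT + damping**2 * np.eye(6)      # (B, 6, 6)
    dq = JT @ np.linalg.solve(JJT, pose_err[..., None])  # (B, n, 1)
    return q + dq[..., 0]
```

Every matrix operation here maps over the leading batch dimension, so the same update serves one environment (B=1) or thousands; swapping NumPy for CUDA-resident PyTorch tensors is what makes it a GPU solver.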
