Principle: Google DeepMind MuJoCo (MJX) Vectorized Simulation
| Knowledge Sources | |
|---|---|
| Domains | GPU_Computing, Reinforcement_Learning, JAX |
| Last Updated | 2026-02-15 06:00 GMT |
Overview
A technique for running many independent simulation instances in parallel on a GPU by vectorizing the physics step across a batch dimension.
Description
Vectorized Simulation uses JAX's vmap transformation to automatically vectorize mjx.step across a batch of simulation states. This enables running thousands of independent simulations in parallel on a single GPU — essential for reinforcement learning where many environment rollouts are needed simultaneously. The model is shared (not batched) while the data is batched along axis 0.
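A minimal sketch of the batching pattern described above. The `step` function, `model` dict, and state shapes here are toy stand-ins (in real MJX the call is `mjx.step(mjx_model, mjx_data)` on `mjx.Model`/`mjx.Data` objects); what carries over is the `in_axes=(None, 0)` split, sharing the model while batching the data along axis 0:

```python
import jax
import jax.numpy as jnp

# Stand-in for a physics step: model parameters are shared across the batch,
# state is per-instance. This is NOT the MJX API, just the same signature shape.
def step(model, state):
    pos, vel = state
    vel = vel + model["gravity"] * model["dt"]  # integrate velocity
    pos = pos + vel * model["dt"]               # integrate position
    return pos, vel

model = {"gravity": jnp.array(-9.81), "dt": jnp.array(0.01)}

# A batch of 4096 independent simulation states along axis 0.
batch = 4096
pos0 = jnp.zeros((batch, 3))
vel0 = jnp.zeros((batch, 3))

# Model is shared (in_axes=None), data is batched (in_axes=0).
batched_step = jax.vmap(step, in_axes=(None, 0))
pos1, vel1 = batched_step(model, (pos0, vel0))
print(pos1.shape)  # (4096, 3)
```

One call advances all 4096 instances; each extra instance adds GPU work, not Python overhead.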
Usage
Use for reinforcement learning training loops, population-based optimization, or any scenario requiring many parallel simulations. Combine with jax.jit for compiled batched execution and jax.lax.scan for multi-step rollouts.
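The jit + scan combination mentioned above can be sketched as follows. Again the `step` function is an assumed toy placeholder rather than `mjx.step`; the pattern shown is wrapping the vmapped step in `jax.lax.scan` for a multi-step rollout and compiling the whole loop with `jax.jit`:

```python
from functools import partial

import jax
import jax.numpy as jnp

# Toy stand-in for mjx.step: decay each state toward zero.
def step(model, state):
    return state * model["decay"]

model = {"decay": jnp.array(0.99)}
batched_step = jax.vmap(step, in_axes=(None, 0))

# num_steps must be static because it fixes the scan length.
@partial(jax.jit, static_argnums=2)
def rollout(model, init_state, num_steps):
    def body(state, _):
        new_state = batched_step(model, state)
        return new_state, new_state  # carry forward + record trajectory
    final, traj = jax.lax.scan(body, init_state, None, length=num_steps)
    return final, traj

init = jnp.ones((1024, 7))  # 1024 parallel instances, 7-dim state each
final, traj = rollout(model, init, 100)
print(final.shape, traj.shape)  # (1024, 7) (100, 1024, 7)
```

The first call pays XLA compilation cost; subsequent calls with the same shapes reuse the compiled rollout, which is what makes this pattern practical for RL training loops.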
Theoretical Basis
Vectorization maps a function f over a batch dimension: vmap(f)(x_1, ..., x_N) = (f(x_1), ..., f(x_N)), with all N applications executed together rather than sequentially.
For MJX:
- Model: Shared across batch (in_axes=None)
- Data: Batched along first axis (in_axes=0)
This is equivalent to running N independent simulations, but executes as batched GPU kernels exploiting SIMD-style data parallelism.
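The equivalence claim can be checked directly: a vmapped call produces the same result as stacking N separate applications of the function. The step function below is an assumed toy example, not MJX:

```python
import jax
import jax.numpy as jnp

# Assumed toy step with a shared scalar "model" parameter.
def step(model, state):
    return jnp.sin(state) + model

model = jnp.array(0.5)
states = jax.random.normal(jax.random.PRNGKey(0), (8, 4))  # 8 instances

# One vectorized call vs. 8 independent calls stacked along axis 0.
vmapped = jax.vmap(step, in_axes=(None, 0))(model, states)
looped = jnp.stack([step(model, s) for s in states])
print(bool(jnp.allclose(vmapped, looped)))  # True
```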