Implementation:Haosulab ManiSkill Gym Make BaseEnv
| Field | Value |
|---|---|
| implementation_name | Haosulab_ManiSkill_Gym_Make_BaseEnv |
| overview | Concrete tool for creating GPU-parallelized ManiSkill environments via gym.make and the BaseEnv constructor |
| type | Library API |
| domains | Simulation, Reinforcement_Learning, Robotics |
| last_updated | 2026-02-15 |
| related_pages | Principle:Haosulab_ManiSkill_Environment_Configuration |
Overview
Description
The gym.make() function, combined with ManiSkill's environment registration system, is the primary entry point for creating ManiSkill simulation environments. When import mani_skill.envs is executed, all ManiSkill environments are registered with Gymnasium's registry. Calling gym.make(env_id, **kwargs) then instantiates the corresponding BaseEnv subclass with the specified configuration.
The BaseEnv.__init__ constructor (defined in mani_skill/envs/sapien_env.py, lines 192-259) handles:
- Selecting the simulation backend based on the num_envs and sim_backend parameters
- Initializing GPU physics via PhysX CUDA when parallel simulation is requested
- Configuring observation modes, control modes, and sensor/camera settings
- Setting up the reconfiguration frequency for domain randomization
ManiSkill also provides its own make() function in mani_skill/utils/registration.py (lines 171-183) that looks up environments in its internal registry (REGISTERED_ENVS) and delegates to the environment spec's make() method.
Usage
Use gym.make() with ManiSkill environment IDs as the first step of any RL training or evaluation pipeline. This must be called before any wrapping or policy instantiation.
Code Reference
| Field | Value |
|---|---|
| Repository | https://github.com/haosulab/ManiSkill |
| File (BaseEnv.__init__) | mani_skill/envs/sapien_env.py (lines 192-259) |
| File (registration) | mani_skill/utils/registration.py (lines 171-183) |
Function Signature (gym.make with ManiSkill parameters):
gym.make(
    env_id: str,                                    # registered environment name, e.g. "PickCube-v1"
    num_envs: int = 1,                              # number of parallel envs (>1 triggers GPU sim)
    obs_mode: Optional[str] = None,                 # "state", "rgbd", "pointcloud", "sensor_data"
    reward_mode: Optional[str] = None,              # "normalized_dense", "dense", "sparse", etc.
    control_mode: Optional[str] = None,             # "pd_joint_delta_pos", "pd_ee_delta_pos", etc.
    render_mode: Optional[str] = None,              # "rgb_array", "human", "sensors"
    sim_backend: str = "auto",                      # "auto", "physx_cpu", "physx_cuda"
    render_backend: str = "gpu",                    # "gpu", "cpu"
    reconfiguration_freq: Optional[int] = None,     # how often to randomize assets (0 = never)
    robot_uids: Union[str, BaseAgent, list] = None, # override default robot
    sensor_configs: Optional[dict] = dict(),        # camera/sensor configuration overrides
    sim_config: Union[SimConfig, dict] = dict(),    # physics simulation parameters
    enhanced_determinism: bool = False,             # stricter determinism guarantees
    **kwargs
) -> BaseEnv
Registration make function:
# mani_skill/utils/registration.py
def make(env_id, **kwargs):
    """Instantiate a ManiSkill environment."""
    if env_id not in REGISTERED_ENVS:
        raise KeyError("Env {} not found in registry".format(env_id))
    env_spec = REGISTERED_ENVS[env_id]
    env = env_spec.make(**kwargs)
    return env
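The lookup-and-delegate pattern above can be sketched with a toy registry in plain Python (no ManiSkill dependency; EnvSpec and DummyEnv here are illustrative stand-ins, not ManiSkill's actual classes):

```python
# Minimal stand-in for the registry pattern used by mani_skill's make():
# a dict maps env IDs to spec objects, and the spec's make() builds the env.
# All names here (EnvSpec, DummyEnv) are hypothetical, for illustration only.
class EnvSpec:
    def __init__(self, cls, **default_kwargs):
        self.cls = cls
        self.default_kwargs = default_kwargs

    def make(self, **kwargs):
        # Caller kwargs override registration-time defaults.
        merged = {**self.default_kwargs, **kwargs}
        return self.cls(**merged)

class DummyEnv:
    def __init__(self, num_envs=1, obs_mode="state"):
        self.num_envs = num_envs
        self.obs_mode = obs_mode

REGISTERED_ENVS = {"DummyPick-v0": EnvSpec(DummyEnv, obs_mode="state")}

def make(env_id, **kwargs):
    # Same lookup-then-delegate shape as mani_skill/utils/registration.py.
    if env_id not in REGISTERED_ENVS:
        raise KeyError("Env {} not found in registry".format(env_id))
    return REGISTERED_ENVS[env_id].make(**kwargs)

env = make("DummyPick-v0", num_envs=8)
```

This mirrors why unknown IDs fail fast with a KeyError, while per-call kwargs (such as num_envs) flow through the spec to the environment constructor.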
Import:
import gymnasium as gym
import mani_skill.envs # registers all ManiSkill environments with Gymnasium
I/O Contract
| Direction | Name | Type | Description |
|---|---|---|---|
| Input | env_id | str | Registered environment name (e.g., "PickCube-v1", "PegInsertionSide-v1") |
| Input | num_envs | int | Number of parallel environments. Values >1 select GPU simulation when sim_backend is "auto" |
| Input | obs_mode | Optional[str] | Observation format: "state", "rgbd", "pointcloud", "sensor_data" |
| Input | control_mode | Optional[str] | Robot control parameterization (e.g., "pd_joint_delta_pos") |
| Input | sim_backend | str | Physics backend selection. "auto" picks physx_cuda when num_envs > 1 |
| Output | env | BaseEnv | Gymnasium-compatible environment instance with .reset(), .step(), .observation_space, .action_space |
Key properties of the returned BaseEnv instance:
- env.observation_space -- Gymnasium space describing observation shape and bounds
- env.action_space -- Gymnasium space describing action shape and bounds
- env.single_observation_space -- Observation space for a single environment (unbatched)
- env.single_action_space -- Action space for a single environment (unbatched)
- env.device -- The torch device where tensors reside (cuda or cpu)
- env.num_envs -- Number of parallel environments
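The relationship between the batched and single-env spaces can be illustrated in plain Python (no ManiSkill or torch dependency; the 42-dim observation and 8-dim action below are made-up sizes, not values from any real task):

```python
# Illustrative sketch: env.observation_space / env.action_space prepend the
# num_envs batch dimension to the per-env single_* spaces. The dimensions
# here (42-dim obs, 8-dim action) are invented for the example.
num_envs = 512
single_obs_shape = (42,)   # hypothetical env.single_observation_space shape
single_act_shape = (8,)    # hypothetical env.single_action_space shape

# Batched spaces add the leading batch axis:
batched_obs_shape = (num_envs,) + single_obs_shape
batched_act_shape = (num_envs,) + single_act_shape
```

This is why RL code typically sizes network inputs/outputs from the single_* spaces while allocating rollout buffers from the batched ones.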
Backend auto-selection logic (from BaseEnv.__init__):
if sim_backend == "auto":
    if num_envs > 1:
        sim_backend = "physx_cuda"
    else:
        sim_backend = "physx_cpu"
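That branch can be lifted into a small standalone helper for testing and reuse (a sketch that mirrors the logic above; resolve_sim_backend is not a function ManiSkill provides):

```python
def resolve_sim_backend(sim_backend: str, num_envs: int) -> str:
    """Mirror of the auto-selection branch in BaseEnv.__init__:
    "auto" maps to GPU PhysX for parallel simulation and CPU PhysX
    for a single environment; explicit backends pass through unchanged."""
    if sim_backend == "auto":
        return "physx_cuda" if num_envs > 1 else "physx_cpu"
    return sim_backend
```

Note that an explicitly requested backend always wins: passing sim_backend="physx_cpu" with num_envs > 1 keeps the CPU backend rather than switching to CUDA.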
Usage Examples
Example 1: Create a GPU-parallelized training environment
import gymnasium as gym
import mani_skill.envs
# Create 512 parallel PickCube environments on GPU
envs = gym.make(
    "PickCube-v1",
    num_envs=512,
    obs_mode="state",
    control_mode="pd_joint_delta_pos",
    render_mode="rgb_array",
    sim_backend="physx_cuda",
)
# Observations and actions are batched PyTorch tensors on GPU
obs, info = envs.reset(seed=42)
print(obs.shape) # torch.Size([512, obs_dim])
print(obs.device) # cuda:0
Example 2: Create a single-env CPU environment for debugging
import gymnasium as gym
import mani_skill.envs
# Single CPU environment for debugging
env = gym.make(
    "StackCube-v1",
    num_envs=1,
    obs_mode="state",
    control_mode="pd_joint_delta_pos",
    sim_backend="physx_cpu",
)
obs, info = env.reset()
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)
Example 3: Create training and evaluation environments (from PPO baseline)
import gymnasium as gym
import mani_skill.envs
env_kwargs = dict(obs_mode="state", render_mode="rgb_array", sim_backend="physx_cuda")
# Training envs: 512 parallel, no reconfiguration
envs = gym.make(
    "PickCube-v1",
    num_envs=512,
    reconfiguration_freq=None,
    control_mode="pd_joint_delta_pos",
    **env_kwargs,
)
# Eval envs: 8 parallel, reconfigure each reset for object randomization
eval_envs = gym.make(
    "PickCube-v1",
    num_envs=8,
    reconfiguration_freq=1,
    control_mode="pd_joint_delta_pos",
    **env_kwargs,
)
Related Pages
- Principle:Haosulab_ManiSkill_Environment_Configuration -- The principle this implementation realizes
- Implementation:Haosulab_ManiSkill_ManiSkillVectorEnv -- Wrapping the created environment for RL training
- Implementation:Haosulab_ManiSkill_BaseEnv_Step_Reset -- The step/reset interface of the returned environment
- Environment:Haosulab_ManiSkill_Python_SAPIEN_Core
- Heuristic:Haosulab_ManiSkill_Num_Envs_Backend_Selection