Implementation:Haosulab ManiSkill Demo Random Action CLI
| Field | Value |
|---|---|
| Page Type | Implementation (External Tool Doc) |
| Title | ManiSkill Demo Random Action CLI |
| Domain | Simulation, Robotics, Environment_Design, Quality_Assurance |
| Related Principle | Principle:Haosulab_ManiSkill_Environment_Testing |
| Source File | mani_skill/examples/demo_random_action.py |
| Date | 2026-02-15 |
| Repository | Haosulab/ManiSkill |
Overview
Description
The demo_random_action.py script is a CLI tool for testing ManiSkill environments by running random actions and optionally rendering the results. It creates the specified environment via gym.make(), resets it, and then steps through the environment in a loop, sampling random actions from the action space. When the render mode is set to "human", a GUI window opens showing the simulation. The tool prints observation spaces, action spaces, reward values, termination flags, and info dictionaries to the console for inspection.
This tool serves as the primary smoke test for custom task environments and is also useful for generating demonstration videos.
Usage
Run from the command line as a Python module:
```shell
python -m mani_skill.examples.demo_random_action -e <env_id> [options]
```
Code Reference
File
mani_skill/examples/demo_random_action.py
Entry Point
```python
# CLI entry point using tyro argument parser
if __name__ == "__main__":
    parsed_args = tyro.cli(Args)
    main(parsed_args)
```
Core Loop
```python
def main(args: Args):
    # Create environment via gym.make with specified options
    env: BaseEnv = gym.make(args.env_id, **env_kwargs)
    # Optionally wrap with RecordEpisode for video recording
    if record_dir:
        env = RecordEpisode(env, record_dir, ...)
    # Reset and run random actions
    obs, _ = env.reset(seed=args.seed, options=dict(reconfigure=True))
    while True:
        action = env.action_space.sample() if env.action_space is not None else None
        obs, reward, terminated, truncated, info = env.step(action)
        if verbose:
            print("reward", reward)
            print("terminated", terminated)
            print("truncated", truncated)
            print("info", info)
        if args.render_mode == "human":
            env.render()
        if args.render_mode is None or args.render_mode != "human":
            if (terminated | truncated).any():
                break
    env.close()
```
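The loop above follows the standard Gymnasium step contract: `reset()` returns `(obs, info)`, and `step()` returns `(obs, reward, terminated, truncated, info)`. As a dependency-free sketch of the same control flow, assuming a hypothetical `StubEnv` stand-in (not a ManiSkill API) in place of the real environment:

```python
import random


class StubEnv:
    """Minimal stand-in for a Gymnasium-style environment (illustrative only)."""

    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self, seed=None):
        random.seed(seed)
        self.t = 0
        return {"state": 0.0}, {}  # (observation, info)

    def sample_action(self):
        # Stands in for env.action_space.sample()
        return random.uniform(-1.0, 1.0)

    def step(self, action):
        self.t += 1
        obs = {"state": float(self.t)}
        reward = -abs(action)
        terminated = False                   # no task success in this stub
        truncated = self.t >= self.horizon   # time-limit truncation
        return obs, reward, terminated, truncated, {}


def run_random_actions(env, seed=None):
    """Same control flow as the demo script: reset, then step until done."""
    env.reset(seed=seed)
    steps = 0
    while True:
        obs, reward, terminated, truncated, info = env.step(env.sample_action())
        steps += 1
        if terminated or truncated:
            break
    return steps


print(run_random_actions(StubEnv(horizon=5), seed=0))  # → 5
```

The real script differs only in that it keeps looping in human render mode and checks `(terminated | truncated).any()` because those flags are batched over parallel environments.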
I/O Contract
CLI Arguments
| Argument | Short | Type | Default | Description |
|---|---|---|---|---|
| `--env-id` | `-e` | str | `"PushCube-v1"` | Environment ID of the task to test |
| `--obs-mode` | `-o` | str | `"none"` | Observation mode: `"none"`, `"state"`, `"state_dict"`, `"rgb"`, `"rgbd"`, etc. |
| `--robot-uids` | `-r` | str | `None` | Robot UID(s). Comma-separated for multiple robots. `None` uses the env default. |
| `--sim-backend` | `-b` | str | `"auto"` | Simulation backend: `"auto"`, `"cpu"`, `"gpu"` |
| `--render-backend` | `-rb` | str | `"gpu"` | Render backend: `"gpu"`, `"cpu"`, `"none"` |
| `--reward-mode` | -- | str | `None` | Reward mode: `"dense"`, `"sparse"`, `"normalized_dense"`, `"none"`. `None` uses the env default. |
| `--num-envs` | `-n` | int | `1` | Number of parallel environments |
| `--control-mode` | `-c` | str | `None` | Control mode. `None` uses the env default. |
| `--render-mode` | -- | str | `"rgb_array"` | Render mode: `"human"` (GUI), `"rgb_array"`, `"sensors"`, `"none"` |
| `--shader` | -- | str | `"default"` | Shader for cameras: `"minimal"`, `"default"`, `"rt"`, `"rt-fast"` |
| `--record-dir` | -- | str | `None` | Directory to save video recordings |
| `--pause` | `-p` | bool | `False` | Pause simulation upon loading (human render mode only) |
| `--quiet` | -- | bool | `False` | Suppress verbose output |
| `--seed` | `-s` | int or list[int] | `None` | Random seed(s) for reproducibility |
Console Output (when not quiet)
The tool prints:
- Observation space shape/structure
- Action space shape
- Control mode
- Reward mode
- Per-step: reward values, terminated flags, truncated flags, info dict
Behavior
- In `human` render mode: runs indefinitely until the user closes the GUI window. The `--pause` flag starts the simulation paused for inspection.
- In non-human render modes: runs until any environment terminates or is truncated, then exits.
- With `--record-dir`: wraps the env in `RecordEpisode` to save MP4 videos.
- Multiple environments (`--num-envs > 1`) with `human` render mode: renders all environments in a single viewer window using `parallel_in_single_scene=True`.
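With multiple parallel environments, `terminated` and `truncated` are per-environment boolean arrays, so the exit check uses an element-wise OR followed by `.any()`. A NumPy sketch of that check (ManiSkill returns torch tensors; NumPy is used here only to keep the example dependency-free):

```python
import numpy as np

# One flag per parallel environment (num_envs == 4 here)
terminated = np.array([False, False, True, False])
truncated = np.array([False, False, False, False])

# The script stops as soon as ANY environment finishes
done = (terminated | truncated).any()
print(bool(done))  # → True
```

Note that `|` is the element-wise OR for arrays/tensors; Python's `or` would raise an error on multi-element arrays.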
Usage Examples
Basic Visual Test
```shell
# Open GUI to visually inspect custom task
python -m mani_skill.examples.demo_random_action \
  -e "MyTask-v1" \
  --render-mode="human"
```
Test With Dense Reward Output
```shell
# Run without GUI and print reward values
python -m mani_skill.examples.demo_random_action \
  -e "MyTask-v1" \
  --reward-mode="dense" \
  --render-mode="none"
```
GPU Multi-Environment Test
```shell
# Test with 4 parallel GPU environments
python -m mani_skill.examples.demo_random_action \
  -e "MyTask-v1" \
  --num-envs 4 \
  --render-mode="human" \
  -b "gpu"
```
Record a Video
```shell
# Record a video of the environment
python -m mani_skill.examples.demo_random_action \
  -e "MyTask-v1" \
  --render-mode="rgb_array" \
  --record-dir="./videos/{env_id}"
```
Test With Specific Robot and Seed
```shell
# Test with Fetch robot and fixed seed for reproducibility
python -m mani_skill.examples.demo_random_action \
  -e "MyTask-v1" \
  -r "fetch" \
  -s 42 \
  --render-mode="human"
```
Test Visual Observations
```shell
# Verify sensor camera configuration produces correct images
python -m mani_skill.examples.demo_random_action \
  -e "MyTask-v1" \
  -o "rgbd" \
  --render-mode="sensors"
```
Test With Ray Tracing Shader
```shell
# Generate photorealistic renders
python -m mani_skill.examples.demo_random_action \
  -e "MyTask-v1" \
  --render-mode="rgb_array" \
  --shader="rt" \
  --record-dir="./videos_rt/{env_id}"
```
Common Issues and Diagnostics
| Symptom | Likely Cause | Fix |
|---|---|---|
| `KeyError: "MyTask-v1" not found in registry` | Environment not registered | Ensure the `@register_env` decorator is applied and the module is imported before `gym.make()` |
| Objects at origin (0,0,0) | `_initialize_episode()` not setting poses | Verify `set_pose()` calls use the correct batch size |
| `AssertionError` on actor name | Duplicate actor name in `_load_scene()` | Ensure all actors have unique names |
| NaN rewards | Division by zero in reward computation | Add an epsilon to distance calculations |
| Physics explosion (objects flying) | No initial pose set on `ActorBuilder` | Set `builder.initial_pose` before building |
| GPU crash with large `num_envs` | Insufficient GPU memory config | Increase `GPUMemoryConfig` values in `_default_sim_config` |
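For the NaN-reward row, the usual fix is to add a small epsilon to any denominator that can reach zero when two points coincide. A sketch with a hypothetical reward term (`reaching_reward` is illustrative, not a ManiSkill function):

```python
import math


def reaching_reward(dist, eps=1e-6):
    # Hypothetical inverse-distance shaping term. Without eps, dist == 0
    # (gripper exactly at the goal) would divide by zero and poison the
    # reward with inf/NaN; the epsilon keeps the value finite.
    return 1.0 / (dist + eps)


print(math.isfinite(reaching_reward(0.0)))  # → True
```

The random-action smoke test surfaces this quickly: with verbose output enabled, NaN or inf values show up directly in the printed per-step rewards.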
Related Pages
- Principle:Haosulab_ManiSkill_Environment_Testing -- The principle this implements
- Implementation:Haosulab_ManiSkill_Register_Env_Decorator -- The environment must be registered first
- Implementation:Haosulab_ManiSkill_Evaluate_Dense_Reward -- Reward values inspected during testing
- Implementation:Haosulab_ManiSkill_Get_Obs_Extra_CameraConfig -- Camera configs verified through visual testing
- Environment:Haosulab_ManiSkill_Python_SAPIEN_Core