Implementation:Haosulab ManiSkill Replay Trajectory CLI

From Leeroopedia
Source Repository: haosulab/ManiSkill
Type: API Doc
Domains: Imitation_Learning, Robotics, Data_Processing, Simulation
Last Updated: 2026-02-15

Overview

Description

The Replay Trajectory CLI is a command-line tool for replaying ManiSkill trajectory HDF5 files with different observation modes, control modes, and simulation backends. It reads an existing .h5 trajectory file along with its companion .json metadata file, creates a simulation environment with the desired configuration, and re-executes each episode's actions (or restores environment states) step by step. The resulting trajectories, with new observations and converted actions, are saved to a new HDF5 file; videos of each episode can optionally be recorded alongside it.

The tool supports two primary execution paths: CPU-based sequential replay (with optional Python multiprocessing parallelization) and GPU-based parallelized replay (using the NVIDIA PhysX GPU backend). It also supports control mode conversion between certain controller types (primarily for Panda robot arms), environment state replay for guaranteed visual fidelity, and filtering options to discard failed or timed-out episodes.

Usage

This tool is used after downloading demonstrations and before loading data for training. It converts raw trajectory files into the observation mode and control mode required by the downstream learning algorithm.
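When this conversion step is driven from a script rather than typed by hand, the invocation can be assembled programmatically. The sketch below is ours, not part of ManiSkill: the helper name `build_replay_cmd` is hypothetical, and it only builds the argv list (run the result with `subprocess.run(cmd, check=True)` once ManiSkill is installed).

```python
import sys

def build_replay_cmd(traj_path, obs_mode=None, control_mode=None,
                     save_traj=True, extra=()):
    """Assemble the replay CLI invocation as an argv list.

    Flag names mirror the CLI documented on this page; `extra` lets a
    caller append further options such as "--save-video" or "-n", "4".
    """
    cmd = [sys.executable, "-m", "mani_skill.trajectory.replay_trajectory",
           "--traj-path", traj_path]
    if obs_mode is not None:
        cmd += ["-o", obs_mode]
    if control_mode is not None:
        cmd += ["-c", control_mode]
    if save_traj:
        cmd.append("--save-traj")
    cmd += list(extra)
    return cmd

cmd = build_replay_cmd("demos/PickCube-v1/trajectory.h5",
                       obs_mode="state",
                       control_mode="pd_joint_delta_pos")
print(" ".join(cmd[1:]))
```

Building the list first (rather than a shell string) avoids quoting issues when paths contain spaces.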

Code Reference

Source Location

Repository: haosulab/ManiSkill
File: mani_skill/trajectory/replay_trajectory.py
Lines (Args): L34-87
Lines (main): L478-626

Signature

CLI invocation:

python -m mani_skill.trajectory.replay_trajectory \
    --traj-path <path_to_h5> \
    -o <obs_mode> \
    -c <control_mode> \
    -b <sim_backend> \
    -n <num_envs> \
    --save-traj \
    --save-video

Args dataclass (L34-87):

# Imports required to run this excerpt standalone
from dataclasses import dataclass
from typing import Annotated, Optional

import tyro

@dataclass
class Args:
    traj_path: str
    """Path to the trajectory .h5 file to replay"""

    sim_backend: Annotated[Optional[str], tyro.conf.arg(aliases=["-b"])] = None
    """Which simulation backend to use. Can be 'physx_cpu', 'physx_gpu'.
    If not specified the backend used is the same as the one used to
    collect the trajectory data."""

    obs_mode: Annotated[Optional[str], tyro.conf.arg(aliases=["-o"])] = None
    """Target observation mode to record in the trajectory."""

    target_control_mode: Annotated[Optional[str], tyro.conf.arg(aliases=["-c"])] = None
    """Target control mode to convert the demonstration actions to."""

    verbose: bool = False
    """Whether to print verbose information during trajectory replays"""

    save_traj: bool = False
    """Whether to save trajectories to disk."""

    save_video: bool = False
    """Whether to save videos"""

    max_retry: int = 0
    """Maximum number of times to try and replay a trajectory until success."""

    discard_timeout: bool = False
    """Whether to discard episodes that timeout and are truncated"""

    allow_failure: bool = False
    """Whether to include episodes that fail"""

    vis: bool = False
    """Whether to visualize the trajectory replay via the GUI."""

    use_env_states: bool = False
    """Whether to replay by environment states instead of actions."""

    use_first_env_state: bool = False
    """Use the first env state in the trajectory to set initial state."""

    count: Optional[int] = None
    """Number of demonstrations to replay before exiting."""

    reward_mode: Optional[str] = None
    """Specifies the reward type that the env should use."""

    record_rewards: bool = False
    """Whether the replayed trajectory should include rewards"""

    shader: Optional[str] = None
    """Change shader used for rendering. Can be 'rt' or 'rt-fast'."""

    video_fps: Optional[int] = None
    """The FPS of saved videos. Defaults to the control frequency."""

    render_mode: str = "rgb_array"
    """The render mode used for saving videos."""

    num_envs: Annotated[int, tyro.conf.arg(aliases=["-n"])] = 1
    """Number of environments to run to replay trajectories."""
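tyro derives kebab-case long flags from the snake_case dataclass field names (e.g. traj_path becomes --traj-path, as the usage examples below show), while the Annotated aliases add the short forms like -o and -n. A toy illustration of that name mapping; the helper is ours, not part of tyro:

```python
def field_to_flag(field_name: str) -> str:
    """Mimic how snake_case dataclass fields map to kebab-case CLI flags."""
    return "--" + field_name.replace("_", "-")

for name in ("traj_path", "save_traj", "use_env_states", "max_retry"):
    print(f"{name:16} -> {field_to_flag(name)}")
```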

Key parameters:

Parameter            Alias  Type           Default      Description
traj_path            --     str            required     Path to the input .h5 trajectory file
obs_mode             -o     Optional[str]  None         Target observation mode: state, rgbd, pointcloud, etc.
target_control_mode  -c     Optional[str]  None         Target control mode: pd_joint_pos, pd_joint_delta_pos, pd_ee_delta_pos, etc.
sim_backend          -b     Optional[str]  None (auto)  Simulation backend: physx_cpu or physx_gpu
num_envs             -n     int            1            Number of parallel environments for replay
save_traj            --     bool           False        Save the converted trajectory to a new .h5 file
save_video           --     bool           False        Save visualization videos of the replay
use_env_states       --     bool           False        Replay by restoring environment states instead of actions
max_retry            --     int            0            Maximum retry attempts for failed replays
count                --     Optional[int]  None (all)   Number of episodes to replay

Import

This tool is primarily used from the command line:

python -m mani_skill.trajectory.replay_trajectory --traj-path trajectory.h5 -o state -c pd_joint_delta_pos --save-traj

For programmatic use:

from mani_skill.trajectory.replay_trajectory import main, parse_args

args = parse_args([
    "--traj-path", "trajectory.h5",
    "-o", "state",
    "-c", "pd_joint_delta_pos",
    "--save-traj"
])
main(args)

I/O Contract

Inputs:

traj_path (str, file path): Path to a ManiSkill .h5 trajectory file. A companion metadata file with the same name but a .json extension must exist alongside it.
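Since the replay fails without the sibling metadata file, it can help to validate both paths before launching a long replay. A minimal sketch: the sibling-.json rule follows the contract above, but the helper names `companion_json` and `check_inputs` are ours.

```python
from pathlib import Path

def companion_json(traj_path: str) -> Path:
    """Return the metadata path expected next to the .h5 trajectory file."""
    return Path(traj_path).with_suffix(".json")

def check_inputs(traj_path: str) -> None:
    """Raise FileNotFoundError unless both the .h5 and its .json exist."""
    h5 = Path(traj_path)
    meta = companion_json(traj_path)
    missing = [p for p in (h5, meta) if not p.is_file()]
    if missing:
        raise FileNotFoundError("missing: " + ", ".join(map(str, missing)))

print(companion_json("demos/PickCube-v1/trajectory.h5"))
```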

Outputs:

New .h5 file (HDF5): Converted trajectory named {original_name}.{obs_mode}.{control_mode}.{sim_backend}.h5, containing per-episode obs, actions, terminated, truncated, and optionally rewards, env_states, success, fail.
New .json file (JSON): Updated metadata for the converted trajectory, with episode info and environment kwargs reflecting the new obs_mode and control_mode.
Video files (optional, MP4): Visualization videos of each replayed episode (when --save-video is set).

Output naming convention:

trajectory.state.pd_joint_delta_pos.physx_cpu.h5
trajectory.rgbd.pd_joint_delta_pos.physx_gpu.h5
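The output name can be derived mechanically from the replay settings. A small sketch implementing the convention above; the function name `converted_name` is ours, so verify against the file the CLI actually writes:

```python
from pathlib import Path

def converted_name(traj_path: str, obs_mode: str,
                   control_mode: str, sim_backend: str) -> str:
    """Build {original_name}.{obs_mode}.{control_mode}.{sim_backend}.h5."""
    stem = Path(traj_path).stem  # drops the trailing .h5
    return f"{stem}.{obs_mode}.{control_mode}.{sim_backend}.h5"

print(converted_name("trajectory.h5", "state",
                     "pd_joint_delta_pos", "physx_cpu"))
# trajectory.state.pd_joint_delta_pos.physx_cpu.h5
```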

Usage Examples

Example 1: Convert raw demos to state observations with delta position control

python -m mani_skill.trajectory.replay_trajectory \
    --traj-path ~/.maniskill/demos/PickCube-v1/trajectory.h5 \
    -o state \
    -c pd_joint_delta_pos \
    --save-traj

Example 2: Convert to RGBD observations (keeping original control mode)

python -m mani_skill.trajectory.replay_trajectory \
    --traj-path ~/.maniskill/demos/StackCube-v1/trajectory.h5 \
    -o rgbd \
    --save-traj

Example 3: GPU-parallelized replay with environment state restoration

python -m mani_skill.trajectory.replay_trajectory \
    --traj-path ~/.maniskill/demos/PushCube-v1/trajectory.h5 \
    -b physx_gpu \
    -n 16 \
    -o state \
    --use-env-states \
    --save-traj

Example 4: Replay only first 10 episodes with video output, discarding failures

python -m mani_skill.trajectory.replay_trajectory \
    --traj-path ~/.maniskill/demos/PegInsertionSide-v1/trajectory.h5 \
    -o state \
    --save-traj \
    --save-video \
    --count 10 \
    --discard-timeout

Example 5: CPU-parallelized replay with 4 worker processes

python -m mani_skill.trajectory.replay_trajectory \
    --traj-path ~/.maniskill/demos/PlugCharger-v1/trajectory.h5 \
    -o state \
    -c pd_joint_delta_pos \
    -n 4 \
    --save-traj \
    --max-retry 3
