
Implementation:Haosulab ManiSkill RecordEpisode Wrapper

From Leeroopedia
Implementation Name: RecordEpisode Wrapper
Type: API Doc
Domain: Motion_Planning
Source File: mani_skill/utils/wrappers/record.py (L216-307)
Date: 2026-02-15
Repository: Haosulab/ManiSkill

Overview

The RecordEpisode class is a Gymnasium wrapper that captures complete trajectory data (observations, actions, environment states, rewards, termination signals) during environment execution and persists them to HDF5 and JSON files. It optionally records rendered video frames as MP4 files. This wrapper is the primary mechanism for creating demonstration datasets in ManiSkill.

Description

RecordEpisode intercepts reset() and step() calls, accumulating data into an internal buffer (Step dataclass). On episode completion (or manual flush), the buffered data is written to an HDF5 file as a new trajectory group. The wrapper supports both single-environment (CPU) and multi-environment (GPU) modes, handling partial resets in the GPU case via per-environment episode pointers.
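The accumulate-then-flush pattern described above can be sketched in plain Python. This is a simplified stand-in for the wrapper's internal Step dataclass; the field names here are illustrative, not the actual ManiSkill implementation:

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class EpisodeBuffer:
    """Accumulates one episode's transitions until flushed (illustrative)."""
    observations: List[Any] = field(default_factory=list)  # length T+1
    actions: List[Any] = field(default_factory=list)       # length T
    rewards: List[float] = field(default_factory=list)     # length T
    terminated: List[bool] = field(default_factory=list)
    truncated: List[bool] = field(default_factory=list)

    def record_reset(self, obs):
        # reset() contributes only the initial observation
        self.observations = [obs]
        self.actions, self.rewards = [], []
        self.terminated, self.truncated = [], []

    def record_step(self, obs, action, reward, term, trunc):
        self.observations.append(obs)
        self.actions.append(action)
        self.rewards.append(reward)
        self.terminated.append(term)
        self.truncated.append(trunc)

buf = EpisodeBuffer()
buf.record_reset(obs=0)
for t in range(3):
    buf.record_step(obs=t + 1, action=t, reward=1.0, term=False, trunc=t == 2)

# T steps produce T actions but T+1 observations
assert len(buf.observations) == len(buf.actions) + 1
```

The same invariant (T actions vs. T+1 observations/env_states) appears in the HDF5 output shapes documented under I/O Contract below.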

The wrapper should generally be applied as the innermost wrapper (closest to the raw environment), so it captures the true observations before any transformation wrappers modify them.
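Why the recorder's position matters can be illustrated with stub classes (these are not the ManiSkill or Gymnasium APIs, just a minimal sketch of wrapper ordering):

```python
class BaseEnvStub:
    def step(self):
        return {"pos": [0.1, 0.2]}  # raw dict observation

class Recorder:
    """Records whatever observation its wrapped env returns."""
    def __init__(self, env):
        self.env, self.seen = env, []
    def step(self):
        obs = self.env.step()
        self.seen.append(obs)
        return obs

class Flatten:
    """A transformation wrapper that flattens dict observations."""
    def __init__(self, env):
        self.env = env
    def step(self):
        return self.env.step()["pos"]

# Recorder closest to the raw env: captures the untransformed dict
rec = Recorder(BaseEnvStub())
env = Flatten(rec)
env.step()
assert isinstance(rec.seen[0], dict)

# Recorder outermost: it would only see the flattened observation
rec2 = Recorder(Flatten(BaseEnvStub()))
rec2.step()
assert rec2.seen[0] == [0.1, 0.2]
```

The first arrangement is what you want for demonstration datasets: downstream consumers can re-apply any observation transformation to the raw recorded data, but the reverse is generally impossible.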

Usage

from mani_skill.utils.wrappers.record import RecordEpisode

env = RecordEpisode(
    env,
    output_dir="demos/PickCube-v1/motionplanning",
    trajectory_name="demo_run",
    save_video=True,
    video_fps=30,
    source_type="motionplanning",
    source_desc="official motion planning solution from ManiSkill contributors",
    record_reward=False,
    save_on_reset=False,
)

Code Reference

Constructor Signature

class RecordEpisode(gym.Wrapper):
    def __init__(
        self,
        env: BaseEnv,
        output_dir: str,
        save_trajectory: bool = True,
        trajectory_name: Optional[str] = None,
        save_video: bool = True,
        info_on_video: bool = False,
        save_on_reset: bool = True,
        save_video_trigger: Optional[Callable[[int], bool]] = None,
        max_steps_per_video: Optional[int] = None,
        clean_on_close: bool = True,
        record_reward: bool = True,
        record_env_state: bool = True,
        video_fps: int = 30,
        render_substeps: bool = False,
        avoid_overwriting_video: bool = False,
        source_type: Optional[str] = None,
        source_desc: Optional[str] = None,
    ) -> None:

Constructor Parameters

env (BaseEnv, required): The environment to record.
output_dir (str, required): Directory for output files.
save_trajectory (bool, default True): Whether to save trajectory data to HDF5.
trajectory_name (Optional[str], default None): Name for the .h5 file (uses a timestamp if None).
save_video (bool, default True): Whether to save video files.
info_on_video (bool, default False): Overlay reward/action/info text on video frames.
save_on_reset (bool, default True): Auto-save the previous trajectory on reset().
save_video_trigger (Optional[Callable], default None): Function that takes elapsed steps and returns a bool to enable/disable video recording.
max_steps_per_video (Optional[int], default None): Max steps per video file before flushing. Required for GPU multi-env recording.
clean_on_close (bool, default True): Rename and prune trajectories on close so IDs are consecutive.
record_reward (bool, default True): Whether to record reward values.
record_env_state (bool, default True): Whether to record environment state dictionaries.
video_fps (int, default 30): Frames per second for output video files.
render_substeps (bool, default False): Capture images at every physics substep (slower but smoother).
avoid_overwriting_video (bool, default False): Increment the video ID to avoid overwriting existing files.
source_type (Optional[str], default None): Category of data source (e.g., "motionplanning", "rl", "human").
source_desc (Optional[str], default None): Longer description of how the data was generated.
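The timestamp fallback for trajectory_name can be sketched with the stdlib. The exact format string is an implementation detail of ManiSkill; the one used here is illustrative only:

```python
import time

def default_trajectory_name(name=None):
    # Fall back to a timestamp-based name when none is given
    # (format is illustrative, not necessarily ManiSkill's exact choice)
    return name if name is not None else time.strftime("%Y%m%d_%H%M%S")

assert default_trajectory_name("demo_run") == "demo_run"
assert len(default_trajectory_name()) == 15  # e.g. "20260215_120000"
```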

Key Methods

def reset(self, *args, seed=None, options=None, save=True, **kwargs):
    """Reset environment; optionally flush previous trajectory and video."""

def step(self, action):
    """Step environment; accumulate trajectory data and video frames."""

def flush_trajectory(self, verbose=False, ignore_empty_transition=True,
                     env_idxs_to_flush=None, save=True):
    """Write buffered trajectory data to HDF5. Use save=False to discard."""

def flush_video(self, name=None, suffix="", verbose=False,
                ignore_empty_transition=True, save=True):
    """Write buffered video frames to MP4. Use save=False to discard."""

def close(self):
    """Flush remaining data, clean trajectories, and close HDF5 file."""

I/O Contract

Outputs

{trajectory_name}.h5 (HDF5): Per-episode groups (traj_0, traj_1, ...), each containing actions [T, A], obs [T+1, ...], env_states [T+1, D], terminated [T], truncated [T], and optionally success [T], fail [T], and rewards [T].
{trajectory_name}.json (JSON): env_info (env_id, env_kwargs, max_episode_steps), episodes (list of per-episode metadata: episode_id, reset_kwargs, control_mode, elapsed_steps, success/fail), source_type, source_desc, commit_info.
{video_id}.mp4 (MP4): Rendered video at the specified FPS.
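A metadata document matching the JSON schema above can be built and round-tripped with the stdlib. The values here are placeholders, not output from an actual run:

```python
import json

metadata = {
    "env_info": {
        "env_id": "PickCube-v1",
        "env_kwargs": {"obs_mode": "state"},
        "max_episode_steps": 50,
    },
    "episodes": [
        {
            "episode_id": 0,
            "reset_kwargs": {"seed": 42},
            "control_mode": "pd_joint_pos",
            "elapsed_steps": 37,
            "success": True,
        }
    ],
    "source_type": "motionplanning",
    "source_desc": "placeholder example, not real run output",
}

# Round-trips cleanly, so downstream tools can index trajectories
# in the .h5 file by the episode_id recorded here.
decoded = json.loads(json.dumps(metadata))
assert decoded["episodes"][0]["episode_id"] == 0
```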

HDF5 Data Compression

Image data (rgb, depth, seg) is stored with gzip compression (level 5) to reduce file size while maintaining random-access capability.
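The size benefit is easy to see with the stdlib gzip module at the same level 5 (synthetic data; HDF5 additionally compresses per chunk, which is what preserves random access):

```python
import gzip

# Synthetic "image" data: a large, repetitive buffer like a mostly-uniform frame
frame = bytes([128]) * (64 * 64 * 3)
compressed = gzip.compress(frame, compresslevel=5)

assert len(compressed) < len(frame)          # big win on redundant pixels
assert gzip.decompress(compressed) == frame  # lossless round trip
```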

Usage Examples

import gymnasium as gym
from mani_skill.utils.wrappers.record import RecordEpisode

# Basic recording with auto-save on reset
env = gym.make("PickCube-v1", obs_mode="state", control_mode="pd_joint_pos")
env = RecordEpisode(env, output_dir="demos/PickCube-v1")
obs, info = env.reset(seed=42)
for _ in range(100):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()  # Flushes remaining data

# Manual flush control (for motion planning)
env = RecordEpisode(env, output_dir="demos", save_on_reset=False)
obs, info = env.reset(seed=0)
# ... run solver ...
if success:
    env.flush_trajectory(save=True)
else:
    env.flush_trajectory(save=False)  # Discard failed trajectory
