Implementation:ARISE Initiative Robomimic Rollout
| Knowledge Sources | |
|---|---|
| Domains | Robotics, Evaluation, Simulation |
| Last Updated | 2026-02-15 08:00 GMT |
Overview
A concrete tool for executing policy rollouts with full trajectory capture and video recording, provided by the robomimic run_trained_agent script.
Description
The rollout function executes a single evaluation episode by deploying a RolloutPolicy in an EnvBase environment. It captures the full trajectory (actions, rewards, dones, states, initial_state_dict, and optionally obs/next_obs) while supporting on-screen rendering, multi-camera off-screen video recording, and graceful handling of rollout exceptions.
Unlike the training rollout (run_rollout in train_utils.py), this function returns the full trajectory data and supports multi-camera video concatenation for richer qualitative evaluation.
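The episode loop described above can be sketched in a few lines. This is a simplified illustration with hypothetical stub classes, not the robomimic implementation: a real RolloutPolicy maps observations to actions and a real EnvBase steps the simulator, but the control flow (query policy, step env, accumulate trajectory, check success, emit stats) follows the same shape.

```python
import numpy as np

class StubPolicy:
    """Stand-in for a RolloutPolicy: maps an observation to an action."""
    def __call__(self, ob):
        return np.zeros(2)  # constant 2-D action for illustration

class StubEnv:
    """Stand-in for an EnvBase: succeeds after a fixed number of steps."""
    def __init__(self, success_at=3):
        self.t, self.success_at = 0, success_at
    def reset(self):
        self.t = 0
        return {"state": np.zeros(1)}
    def step(self, action):
        self.t += 1
        reward = float(self.t >= self.success_at)
        return {"state": np.full(1, self.t)}, reward, False, {}
    def is_success(self):
        return {"task": self.t >= self.success_at}

def rollout_sketch(policy, env, horizon):
    """Minimal sketch of the rollout loop: not the robomimic source."""
    obs = env.reset()
    traj = {"actions": [], "rewards": [], "dones": []}
    total_reward, success = 0.0, False
    for step in range(horizon):
        act = policy(obs)                  # query policy for next action
        obs, r, done, _ = env.step(act)    # advance the simulation
        total_reward += r
        success = success or env.is_success()["task"]
        traj["actions"].append(act)
        traj["rewards"].append(r)
        traj["dones"].append(done)
        if done or success:                # stop early on termination/success
            break
    stats = {"Return": total_reward, "Horizon": step + 1,
             "Success_Rate": float(success)}
    return stats, {k: np.array(v) for k, v in traj.items()}

stats, traj = rollout_sketch(StubPolicy(), StubEnv(), horizon=10)
```

With the stub succeeding at step 3, the sketch terminates early with Horizon 3 rather than running the full horizon of 10, mirroring how the real function reports the actual episode length.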
Usage
Called by the run_trained_agent orchestrator function for each evaluation episode. Used in post-training evaluation scripts.
Code Reference
Source Location
- Repository: robomimic
- File: robomimic/scripts/run_trained_agent.py
- Lines: L73-L175
Signature
def rollout(policy, env, horizon, render=False, video_writer=None, video_skip=5,
return_obs=False, camera_names=None):
"""
Helper function to carry out rollouts with trajectory capture.
Args:
policy (RolloutPolicy): policy loaded from checkpoint
env (EnvBase): environment loaded from checkpoint
horizon (int): maximum horizon for the rollout
render (bool): whether to render on-screen
video_writer (imageio writer): if provided, write rollout video
video_skip (int): frame skip for video recording
return_obs (bool): if True, include observations in trajectory output
camera_names (list): cameras for off-screen rendering
Returns:
stats (dict): rollout statistics (Return, Horizon, Success_Rate)
traj (dict): trajectory data (actions, rewards, dones, states, initial_state_dict,
optionally obs, next_obs)
"""
Import
from robomimic.scripts.run_trained_agent import rollout
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| policy | RolloutPolicy | Yes | Policy loaded from checkpoint |
| env | EnvBase | Yes | Simulation environment |
| horizon | int | Yes | Maximum timesteps per episode |
| render | bool | No | On-screen rendering. Default: False |
| video_writer | imageio.Writer | No | Video writer for recording. Default: None |
| video_skip | int | No | Record every N frames. Default: 5 |
| return_obs | bool | No | Include observations in output. Default: False |
| camera_names | list | No | Camera names for off-screen video recording. Default: None |
Outputs
| Name | Type | Description |
|---|---|---|
| stats | dict | Keys: Return (float), Horizon (int), Success_Rate (float) |
| traj | dict | Keys: actions (np.array), rewards (np.array), dones (np.array), states (np.array), initial_state_dict (dict); optionally obs (dict of np.arrays), next_obs (dict of np.arrays) |
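When return_obs is False, traj is a flat dict of per-step arrays (plus the nested initial_state_dict), so the array keys can be persisted directly. A minimal sketch using NumPy's npz format with synthetic data, not a real rollout; the shapes (7-D actions, 32-D states) are illustrative assumptions:

```python
import os
import tempfile
import numpy as np

# Synthetic trajectory matching the output contract (10 steps).
T = 10
traj = {
    "actions": np.random.randn(T, 7),            # per-step actions
    "rewards": np.ones(T),                       # per-step rewards
    "dones": np.zeros(T, dtype=bool),            # per-step done flags
    "states": np.random.randn(T, 32),            # flattened sim states
}

# The scalar stats are derivable from the trajectory itself.
stats = {
    "Return": float(traj["rewards"].sum()),
    "Horizon": len(traj["actions"]),
    "Success_Rate": float(traj["rewards"][-1] > 0),
}

# Persist the array-valued keys and reload them. initial_state_dict is a
# nested dict and would need a different container (e.g. HDF5 or pickle).
path = os.path.join(tempfile.gettempdir(), "rollout_traj.npz")
np.savez(path, **traj)
loaded = dict(np.load(path))
```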
Usage Examples
CLI Evaluation
# Evaluate with video recording
python robomimic/scripts/run_trained_agent.py \
--agent /path/to/model.pth \
--n_rollouts 50 \
--horizon 400 \
--video_path /path/to/rollout_video.mp4 \
--camera_names agentview robot0_eye_in_hand
Programmatic Usage
from robomimic.scripts.run_trained_agent import rollout
import robomimic.utils.file_utils as FileUtils
# Load policy and env
policy, ckpt_dict = FileUtils.policy_from_checkpoint(ckpt_path="model.pth")
env, _ = FileUtils.env_from_checkpoint(ckpt_dict=ckpt_dict)
# Run single rollout
stats, traj = rollout(policy=policy, env=env, horizon=400)
print(f"Return: {stats['Return']:.2f}, Success: {stats['Success_Rate']:.0f}")
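The multi-camera recording path renders each camera off-screen and tiles the per-camera frames into one wide video frame before writing. A minimal sketch of that tiling with synthetic frames; the frame size and the imageio writer calls in the comments are illustrative assumptions (an mp4 writer additionally needs ffmpeg support):

```python
import numpy as np

# Two synthetic 120x160 RGB frames standing in for off-screen renders
# from cameras such as "agentview" and "robot0_eye_in_hand".
frames = [np.zeros((120, 160, 3), dtype=np.uint8) for _ in range(2)]

# Concatenate along the width axis to produce one side-by-side frame
# per timestep, combining all requested cameras into a single video.
video_frame = np.concatenate(frames, axis=1)

# With an imageio writer, a frame would be recorded every video_skip steps:
# video_writer = imageio.get_writer("/path/to/rollout_video.mp4", fps=20)
# if step % video_skip == 0:
#     video_writer.append_data(video_frame)
# video_writer.close()  # finalize the file after the rollout
```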
Related Pages
Implements Principle
Requires Environment
- Environment:ARISE_Initiative_Robomimic_PyTorch_CUDA_Environment
- Environment:ARISE_Initiative_Robomimic_Robosuite_Simulation_Backend