
Workflow:ARISE Initiative Robosuite Teleoperation

From Leeroopedia
Domains: Robotics, Teleoperation, Human-Robot Interaction
Last Updated: 2026-02-15 06:00 GMT

Overview

End-to-end process for real-time teleoperation of a simulated robot using human input devices such as a keyboard, SpaceMouse, DualSense controller, or the MuJoCo GUI.

Description

This workflow enables a human operator to control a simulated robot in real time through a supported input device. The device provides 6-DoF end-effector commands (3 translational + 3 rotational degrees of freedom), which are mapped to robot joint actions through either an inverse kinematics (IK) solver or an operational space controller (OSC). The operator can control the gripper, switch between arms in bimanual setups, and toggle camera views. The teleoperation loop reads device input at each timestep, converts it to environment actions, and advances the simulation.
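The core of this input-to-action mapping can be sketched as a small pure-Python helper. The parameter names (`pos_sensitivity`, `rot_sensitivity`) and the `right_delta`/`right_gripper` action keys mirror conventions used in robosuite's device classes, but the function itself is illustrative, not part of the library:

```python
def device_reading_to_action(dpos, drot, pos_sensitivity=1.0, rot_sensitivity=1.0):
    """Scale raw 6-DoF device deltas (3 translational + 3 rotational)
    into an action dictionary for the active arm's controller."""
    return {
        "right_delta": [p * pos_sensitivity for p in dpos]
                       + [r * rot_sensitivity for r in drot],
        "right_gripper": [0.0],  # gripper held open in this sketch
    }
```

Doubling both sensitivities, for example, doubles every component of the resulting 6-DoF command while leaving the gripper untouched.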

Usage

Execute this workflow when you need to interactively control a robot in simulation for testing task feasibility, debugging controller behavior, evaluating workspace layouts, or as a precursor to collecting human demonstrations. This is particularly useful for validating new environments or robot configurations before running automated policies.

Execution Steps

Step 1: Select Task And Robot Configuration

Choose the manipulation environment, robot(s), and controller type. Specify whether to use an operational space controller (OSC) or inverse kinematics (IK) controller. For bimanual tasks, configure the arm layout (parallel or opposed) and which arm to control actively.

Key considerations:

  • IK controller: rotations are relative to end-effector frame
  • OSC controller: rotations are relative to global (camera) frame
  • Whole-body IK is available for humanoid robots (GR1) via the mink library
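The choices in this step can be collected in one configuration block. The structure below is hypothetical and for illustration only; the key names echo robosuite argument conventions (task name, robot list, controller type string, bimanual layout) but do not constitute a library API:

```python
# Hypothetical Step 1 selection for a bimanual task (illustrative only).
TELEOP_CONFIG = {
    "environment": "TwoArmLift",   # manipulation task to load
    "robots": ["Panda", "Panda"],  # one entry per arm
    "controller": "BASIC",         # "BASIC" (OSC) or "WHOLE_BODY_MINK_IK" (IK)
    "config": "parallel",          # bimanual layout: "parallel" or "opposed"
    "arm": "right",                # arm controlled actively at startup
}
```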

Step 2: Load Controller Configuration

Load the composite controller configuration for the selected robot using `load_composite_controller_config`. This resolves the controller type string (e.g., "BASIC", "WHOLE_BODY_MINK_IK") to a full JSON configuration specifying part controller assignments for arms, grippers, and mobile base.

Key considerations:

  • Controller configs can be specified as generic type names or paths to custom JSON files
  • Robot-specific defaults are stored in `controllers/config/robots/`
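A minimal sketch of this step, assuming a robosuite version where `load_composite_controller_config` accepts a controller type name (or a path to a custom JSON file) plus a robot name. The import is deferred so the snippet stands alone; the helper `is_custom_config` is our own illustrative addition:

```python
def is_custom_config(controller: str) -> bool:
    """A .json path selects a custom config; anything else is a generic type name."""
    return controller.endswith(".json")

def load_teleop_controller_config(controller="BASIC", robot="Panda"):
    # Deferred import: requires robosuite to be installed.
    from robosuite.controllers import load_composite_controller_config

    # Resolves e.g. "BASIC" or "WHOLE_BODY_MINK_IK" to a full composite
    # controller configuration, drawing robot-specific defaults from
    # controllers/config/robots/ when available.
    return load_composite_controller_config(controller=controller, robot=robot)
```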

Step 3: Create And Wrap Environment

Instantiate the robosuite environment with on-screen rendering enabled and wrap it with the `VisualizationWrapper` to add visual indicator sites (e.g., gripper markers, target indicators) that aid the human operator during teleoperation.

Key considerations:

  • `has_renderer=True` is required for interactive visualization
  • `hard_reset=False` improves performance by reusing the MuJoCo model between resets
  • The VisualizationWrapper adds non-physical visual markers to the scene
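Step 3 might look like the following sketch, assuming robosuite's `suite.make` keyword arguments and the `VisualizationWrapper` from `robosuite.wrappers`; the task name ("Lift") and robot ("Panda") are placeholders:

```python
def make_teleop_env(controller_config):
    """Create an on-screen-renderable robosuite env and wrap it for teleoperation."""
    # Deferred imports: requires robosuite to be installed.
    import robosuite as suite
    from robosuite.wrappers import VisualizationWrapper

    env = suite.make(
        "Lift",                      # placeholder task
        robots="Panda",              # placeholder robot
        controller_configs=controller_config,
        has_renderer=True,           # required for interactive visualization
        has_offscreen_renderer=False,
        render_camera="agentview",
        ignore_done=True,
        use_camera_obs=False,
        hard_reset=False,            # reuse the MuJoCo model between resets
    )
    # Adds non-physical indicator sites (e.g. gripper markers) to the scene.
    return VisualizationWrapper(env, indicator_configs=None)
```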

Step 4: Initialize Input Device

Create an instance of the selected input device (Keyboard, SpaceMouse, DualSense, or MJGUI). Configure sensitivity parameters for position and rotation scaling. For keyboard control, register the keypress callback with the viewer.

Key considerations:

  • Keyboard provides discrete 6-DoF control via key mappings
  • SpaceMouse provides continuous 6-DoF analog control
  • DualSense uses joystick axes with optional XY reversal
  • MJGUI uses mouse drag-and-drop within the MuJoCo viewer (requires mjviewer renderer)
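A sketch of device selection, assuming the device classes live in `robosuite.devices` and that the keyboard registers its keypress callback with the viewer (as in robosuite's device-control demo). Imports are deferred so the snippet stands alone:

```python
SUPPORTED_DEVICES = ("keyboard", "spacemouse", "dualsense", "mjgui")

def make_device(name, env, pos_sensitivity=1.0, rot_sensitivity=1.0):
    """Instantiate the selected input device (robosuite imports deferred to runtime)."""
    if name == "keyboard":
        from robosuite.devices import Keyboard
        device = Keyboard(env=env, pos_sensitivity=pos_sensitivity,
                          rot_sensitivity=rot_sensitivity)
        # Keyboard input arrives through the viewer's keypress callback.
        env.viewer.add_keypress_callback(device.on_press)
    elif name == "spacemouse":
        from robosuite.devices import SpaceMouse
        device = SpaceMouse(env=env, pos_sensitivity=pos_sensitivity,
                            rot_sensitivity=rot_sensitivity)
    elif name == "dualsense":
        from robosuite.devices import DualSense
        device = DualSense(env=env, pos_sensitivity=pos_sensitivity,
                           rot_sensitivity=rot_sensitivity)
    elif name == "mjgui":
        from robosuite.devices import MJGUI
        device = MJGUI(env=env)  # mouse drag control; requires the mjviewer renderer
    else:
        raise ValueError(f"unsupported device: {name}")
    return device
```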

Step 5: Run Teleoperation Loop

Execute the main control loop: reset the environment, start device control, then repeatedly read device input, convert to action dictionary, construct the full environment action vector, step the simulation, and render. The loop handles arm switching for bimanual setups, maintains gripper state across arm switches, and respects frame rate limits for real-time operation.

Key considerations:

  • `device.input2action()` returns None to signal an episode reset
  • The action vector is assembled per-robot using `robot.create_action_vector()`
  • Gripper actions are position-based and must be maintained when switching active arms
  • For WholeBody controllers, the input type comes from `joint_action_policy.input_type`
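Putting the pieces together, the loop might be sketched as follows, assuming `device.input2action()` returns None on a reset request and `robot.create_action_vector()` assembles each robot's slice of the action vector. Arm switching and gripper-state carryover are omitted for brevity:

```python
def teleop_loop(env, device, max_fr=20):
    """Run teleoperation episodes; max_fr caps the control frequency (Hz)."""
    # Deferred imports: numpy is needed only when the loop actually runs.
    import time
    import numpy as np

    while True:
        env.reset()
        env.render()
        device.start_control()

        while True:
            start = time.time()

            # None signals that the operator requested an episode reset.
            input_ac_dict = device.input2action()
            if input_ac_dict is None:
                break

            # Assemble the full environment action across all robots.
            env_action = np.concatenate([
                robot.create_action_vector(input_ac_dict)
                for robot in env.robots
            ])
            env.step(env_action)
            env.render()

            # Frame-rate limiting for real-time operation.
            elapsed = time.time() - start
            if elapsed < 1.0 / max_fr:
                time.sleep(1.0 / max_fr - elapsed)
```

Sleeping off the remainder of each `1 / max_fr` interval keeps the simulation advancing at roughly wall-clock rate, which matters when a human is in the loop.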

Execution Diagram

GitHub URL

Workflow Repository