Implementation: ARISE Initiative robosuite Device.input2action()
Metadata:
- robosuite
- Teleoperation
- Human_Robot_Interaction
- last_updated: 2026-02-15 12:00 GMT
Overview
A concrete method on the robosuite Device base class that converts human input device signals into robot action dictionaries.
Description
The Device.input2action() method (defined in robosuite/devices/device.py) converts device-specific controller state into an action dictionary. It handles mirroring actions for behind-robot viewing, goal update modes ('target' vs. 'achieved'), multi-arm routing, and coordinate transforms, and it returns None when a reset is triggered.
Available device implementations:
- Keyboard - Uses pynput library for keyboard event capture
- SpaceMouse - 3Dconnexion devices via USB HID interface
- DualSense - PlayStation 5 controller via USB HID interface
- MJGUI - MuJoCo viewer mouse-based interaction
Usage
Called in the teleoperation loop after device.start_control(). The returned dictionary is passed to robot.create_action_vector(). The method is invoked on every control-loop iteration to translate real-time human input into robot commands.
Code Reference
Source: robosuite
File: robosuite/devices/device.py
Lines: L90-202
Method Signature:
def input2action(self, mirror_actions=False, goal_update_mode="target") -> Optional[Dict]:
"""
Converts input from active device into valid action sequence for env.step().
Args:
mirror_actions (bool): If True, mirrors actions for viewing robot from behind
goal_update_mode (str): Either 'target' (updates target goals) or 'achieved' (updates achieved goals)
Returns:
Optional[Dict]: Action dictionary with part names as keys and action arrays as values,
or None when reset is triggered
"""
Import Statement:
from robosuite.devices import Keyboard, SpaceMouse, DualSense, MJGUI
Device Constructors
Keyboard Device:
Keyboard(env, pos_sensitivity=1.0, rot_sensitivity=1.0)
- env: The robosuite environment instance
- pos_sensitivity: Scaling factor for positional movements (default 1.0)
- rot_sensitivity: Scaling factor for rotational movements (default 1.0)
SpaceMouse Device:
SpaceMouse(env, vendor_id=..., product_id=..., device_path=None,
pos_sensitivity=1.0, rot_sensitivity=1.0)
- vendor_id: USB vendor ID for 3Dconnexion device
- product_id: USB product ID for specific SpaceMouse model
- device_path: Optional explicit HID device path
- pos_sensitivity: Scaling factor for positional movements (default 1.0)
- rot_sensitivity: Scaling factor for rotational movements (default 1.0)
DualSense Device:
DualSense(env, vendor_id=..., product_id=...,
pos_sensitivity=1.0, rot_sensitivity=1.0, reverse_xy=False)
- vendor_id: USB vendor ID for Sony DualSense controller
- product_id: USB product ID for DualSense
- pos_sensitivity: Scaling factor for positional movements (default 1.0)
- rot_sensitivity: Scaling factor for rotational movements (default 1.0)
- reverse_xy: If True, reverses X and Y axis mapping (default False)
MJGUI Device:
MJGUI(env, active_end_effector='right')
- env: The robosuite environment instance
- active_end_effector: Which end effector to control ('right' or 'left')
I/O Contract
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| mirror_actions | bool | No | If True, flips left/right actions for operators viewing the robot from behind (default False) |
| goal_update_mode | str | No | Either 'target' (updates target goals) or 'achieved' (updates from achieved state). Default 'target' |
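Conceptually, mirroring negates selected components of the commanded delta so that the operator's left/right sense stays consistent when standing behind the robot. The helper below is a hypothetical illustration of that sign-flip pattern, not robosuite's actual implementation; the axes robosuite negates are defined inside Device.input2action().

```python
import numpy as np

def mirror_delta(delta, axes=(0, 1, 3, 5)):
    """Hypothetical mirroring helper: negate selected components of a
    6-DOF delta [dx, dy, dz, droll, dpitch, dyaw]. The choice of axes
    here is illustrative only; robosuite's real mirroring logic lives
    in Device.input2action()."""
    out = np.asarray(delta, dtype=float).copy()
    out[list(axes)] *= -1.0
    return out

delta = np.array([0.1, -0.2, 0.05, 0.0, 0.3, -0.1])
mirrored = mirror_delta(delta)
```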
Outputs
Return Type: Optional[Dict]
Success Case: Dictionary with keys mapping robot part names (e.g., 'arm', 'gripper', 'base') to action arrays (numpy arrays). The structure depends on the robot configuration and controller type.
Reset Case: Returns None when the reset trigger is activated on the input device, signaling that the episode should be reset.
Action Dictionary Structure:
{
'arm_right': np.array([dx, dy, dz, droll, dpitch, dyaw]), # 6-DOF delta for right arm
'gripper_right': np.array([grasp]), # Gripper command (-1 to 1)
'arm_left': np.array([dx, dy, dz, droll, dpitch, dyaw]), # 6-DOF delta for left arm (if multi-arm)
'gripper_left': np.array([grasp]), # Left gripper (if multi-arm)
'base': np.array([dx, dy, dtheta]) # Mobile base (if mobile robot)
}
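robot.create_action_vector() assembles these per-part arrays into the flat action vector that env.step() expects. A minimal numpy sketch of that flattening, assuming a fixed part ordering (the real method maps each part into the robot's own action indices), might look like:

```python
import numpy as np

# Example action dictionary shaped like input2action() output
action_dict = {
    "arm_right": np.array([0.1, 0.0, -0.05, 0.0, 0.0, 0.2]),
    "gripper_right": np.array([1.0]),
}

def flatten_action(action_dict, part_order=("arm_right", "gripper_right")):
    """Conceptual stand-in for create_action_vector(): concatenate
    per-part action arrays in a fixed order. The real method uses the
    robot's actual action indexing, so treat this as a sketch only."""
    return np.concatenate([action_dict[p] for p in part_order])

action = flatten_action(action_dict)  # 6-DOF arm delta + 1-DOF gripper
```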
Usage Examples
Basic Teleoperation Loop with Keyboard
import numpy as np
import robosuite as suite
from robosuite.devices import Keyboard
# Create environment
env = suite.make(
env_name="Lift",
robots="Panda",
has_renderer=True,
has_offscreen_renderer=False,
use_camera_obs=False,
control_freq=20,
)
# Initialize keyboard device
device = Keyboard(env=env, pos_sensitivity=1.0, rot_sensitivity=1.0)
# Reset environment
obs = env.reset()
# Start device control
device.start_control()
# Teleoperation control loop
while True:
# Get action from device input
action_dict = device.input2action(
mirror_actions=False,
goal_update_mode="target"
)
# Check for reset trigger
if action_dict is None:
print("Reset triggered by device")
obs = env.reset()
continue
# Convert action dict to action vector
action = env.robots[0].create_action_vector(action_dict)
# Execute action in environment
obs, reward, done, info = env.step(action)
# Render
env.render()
# Check for episode termination
if done:
obs = env.reset()
Multi-Arm Teleoperation with Mirror Mode
import numpy as np
import robosuite as suite
from robosuite.devices import SpaceMouse
# Create dual-arm environment
env = suite.make(
env_name="TwoArmLift",
robots=["Panda", "Panda"],
has_renderer=True,
control_freq=20,
)
# Initialize SpaceMouse with custom sensitivity
device = SpaceMouse(
env=env,
pos_sensitivity=2.5,
rot_sensitivity=2.5
)
obs = env.reset()
device.start_control()
# Operator viewing from behind robot - enable mirroring
mirror_view = True
while True:
# Get mirrored actions for behind-robot viewing
action_dict = device.input2action(
mirror_actions=mirror_view,
goal_update_mode="target"
)
if action_dict is None:
obs = env.reset()
continue
# Actions are automatically routed to appropriate arms
action_robot0 = env.robots[0].create_action_vector(action_dict)
action_robot1 = env.robots[1].create_action_vector(action_dict)
# Combine actions for multi-robot environment
action = np.concatenate([action_robot0, action_robot1])
obs, reward, done, info = env.step(action)
env.render()
if done:
obs = env.reset()
Achieved Goal Update Mode
import robosuite as suite
from robosuite.devices import DualSense
# Create environment to teleoperate
env = suite.make(
    env_name="Lift",
    robots="Panda",
    has_renderer=True,
    control_freq=20,
)
# Initialize DualSense controller
device = DualSense(
    env=env,
    pos_sensitivity=1.5,
    rot_sensitivity=1.5,
    reverse_xy=False
)
obs = env.reset()
device.start_control()
while True:
# Use 'achieved' mode for more direct control feedback
action_dict = device.input2action(
mirror_actions=False,
goal_update_mode="achieved" # Updates from current achieved state
)
if action_dict is None:
obs = env.reset()
continue
action = env.robots[0].create_action_vector(action_dict)
obs, reward, done, info = env.step(action)
env.render()