
Principle:ARISE Initiative Robosuite Input Device Abstraction

From Leeroopedia

Overview

Abstraction layer that converts heterogeneous human input devices into a unified action dictionary for robot teleoperation control.

Description

Robosuite supports multiple input devices (Keyboard, SpaceMouse, DualSense, MuJoCo GUI) through an abstract Device base class. Each device implements get_controller_state() returning a standardized dict with keys: dpos (position delta), rotation (rotation matrix), raw_drotation (raw rotation delta), grasp (gripper command), reset (episode reset trigger), base_mode (mobile base mode). The input2action() method on the Device base class converts this state into action dicts suitable for Robot.create_action_vector(). This abstraction allows swapping input devices without changing the control loop.
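As an illustrative, runnable sketch (a mock, not the actual robosuite source), the standardized dictionary that a device's get_controller_state() returns might look like:

```python
def mock_controller_state():
    """Illustrative stand-in for get_controller_state(): every device,
    whatever its hardware backend, returns this same schema."""
    return {
        "dpos": [0.0, 0.0, 0.0],           # position delta (x, y, z)
        "rotation": [[1.0, 0.0, 0.0],      # rotation as a 3x3 matrix
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]],
        "raw_drotation": [0.0, 0.0, 0.0],  # raw rotation delta (roll, pitch, yaw)
        "grasp": 0,                        # gripper command
        "reset": False,                    # episode reset trigger
        "base_mode": False,                # mobile base mode flag
    }

state = mock_controller_state()
```

Because every device emits this same schema, input2action() and the downstream control loop never need to know which hardware produced the state.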

Usage

Use when implementing teleoperation interfaces. Choose a device based on available hardware:

  • Keyboard - Standard keyboard input using pynput library
  • SpaceMouse - 3D input device for precise 6-DOF control
  • DualSense - PlayStation 5 controller for gamepad-style control
  • MJGUI - Mouse-based control in MuJoCo viewer for simulation-only scenarios
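Whichever device is chosen, the surrounding teleoperation loop keeps the same shape. A hedged pseudocode sketch (the constructor arguments, start_control(), and the env/robot calls are assumptions in the style of this page, not verified robosuite signatures):

```
# Any Device subclass can be substituted here without touching the loop.
device = SpaceMouse(pos_sensitivity=1.0, rot_sensitivity=1.0)
device.start_control()

while True:
    action_dict = device.input2action()
    if action_dict is None:        # reset trigger fired
        env.reset()
        continue
    env.step(robot.create_action_vector(action_dict))
```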

Theoretical Basis

The input device abstraction follows the Strategy Pattern from software design, enabling polymorphic dispatch of device-specific input handling while maintaining a uniform interface. Each device class inherits from the abstract Device base class and implements device-specific methods for reading hardware state.

Key Design Elements

Unified Controller State: All devices produce a standardized controller state dictionary regardless of the underlying hardware interface (USB HID, keyboard events, MuJoCo mouse interaction). This normalization happens in each device's get_controller_state() implementation.

Action Transformation: The input2action() method handles several critical transformations:

  • Coordinate frame conversions between device space and robot space
  • Sensitivity scaling for position and rotation inputs
  • Multi-arm routing for dual-arm manipulation scenarios
  • Mirror mode for operators viewing the robot from behind
  • Goal update mode selection (target-based vs achievement-based)
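Two of these transformations, sensitivity scaling and mirror mode, can be sketched in a minimal runnable form. The function names and the sign convention for mirroring (lateral axis = y; roll and yaw flip) are illustrative assumptions, not the robosuite implementation:

```python
def scale_sensitivity(dpos, drot, pos_sensitivity=1.0, rot_sensitivity=1.0):
    """Scale raw device deltas before they become robot commands."""
    return ([d * pos_sensitivity for d in dpos],
            [r * rot_sensitivity for r in drot])

def mirror_left_right(dpos, drot):
    """Flip left/right for an operator viewing the robot from behind.
    Assumes y is the lateral axis: negate lateral translation, and
    negate the roll and yaw components that a mirror would reverse."""
    dx, dy, dz = dpos
    droll, dpitch, dyaw = drot
    return [dx, -dy, dz], [-droll, dpitch, -dyaw]

dpos, drot = scale_sensitivity([0.1, 0.2, 0.0], [0.0, 0.0, 0.5],
                               pos_sensitivity=2.0, rot_sensitivity=0.5)
mpos, mrot = mirror_left_right(dpos, drot)
```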

Pseudocode

from abc import ABC, abstractmethod
from typing import Dict, Optional


class Device(ABC):
    """Abstract base class for input devices"""

    @abstractmethod
    def get_controller_state(self) -> Dict:
        """Device-specific state reading - must implement"""
        pass

    def input2action(self, mirror_actions=False, goal_update_mode="target") -> Optional[Dict]:
        """Convert device state to robot actions - shared implementation.

        goal_update_mode selects target-based vs. achievement-based goal
        updates (that transformation is not shown in this sketch).
        """
        state = self.get_controller_state()

        if state['reset']:
            return None

        # Apply coordinate transforms
        action = transform_coordinates(state['dpos'], state['rotation'])

        # Apply sensitivity scaling
        action = scale_sensitivity(action, self.pos_sensitivity, self.rot_sensitivity)

        # Route to appropriate arms if multi-arm
        action_dict = route_to_arms(action, state['grasp'])

        # Apply mirroring if viewing from behind
        if mirror_actions:
            action_dict = mirror_left_right(action_dict)

        return action_dict


class Keyboard(Device):
    def get_controller_state(self) -> Dict:
        # Read keyboard state via pynput
        return parse_keyboard_state()


class SpaceMouse(Device):
    def get_controller_state(self) -> Dict:
        # Read 3D mouse state via HID
        return parse_spacemouse_hid()
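To make the device-swap claim concrete, here is a self-contained, runnable reduction of the pattern above. The transform helpers are omitted and the controller states are hard-coded; only the structure (shared input2action() over a polymorphic get_controller_state()) mirrors the pseudocode:

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional


class Device(ABC):
    @abstractmethod
    def get_controller_state(self) -> Dict:
        """Device-specific state reading."""

    def input2action(self) -> Optional[Dict]:
        # Shared conversion, identical for every device subclass.
        state = self.get_controller_state()
        if state["reset"]:
            return None
        return {"dpos": state["dpos"], "grasp": state["grasp"]}


class Keyboard(Device):
    def get_controller_state(self) -> Dict:
        # Hard-coded stand-in for a pynput-driven reading.
        return {"dpos": [0.0, 0.1, 0.0], "grasp": 0, "reset": False}


class SpaceMouse(Device):
    def get_controller_state(self) -> Dict:
        # Hard-coded stand-in for an HID reading.
        return {"dpos": [0.05, 0.0, 0.0], "grasp": 1, "reset": False}


def control_step(device: Device) -> Optional[Dict]:
    # The control loop never names a concrete device type.
    return device.input2action()


for dev in (Keyboard(), SpaceMouse()):
    action = control_step(dev)
```

Swapping Keyboard for SpaceMouse changes nothing in control_step(), which is the point of the abstraction.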
