Principle: ARISE Initiative Robosuite Joint Torque Control
| Knowledge Sources | |
|---|---|
| Domains | Robotics, Control Theory, Actuator Control |
| Last Updated | 2026-02-15 07:00 GMT |
Overview
Joint torque control directly passes commanded torque values to the robot's actuators with optional gravity compensation, providing the lowest-level and most direct form of robot joint control.
Description
Joint torque control is the most direct interface to a robot's actuators. Rather than computing torques internally based on position or velocity errors, this controller accepts explicit torque commands from the policy or planner and forwards them (after scaling and clamping) to the simulator's actuators. This makes it the lowest level of the control hierarchy and the most flexible, as any higher-level control law can be implemented externally and expressed through torque commands.
The controller's primary responsibility is input conditioning: raw actions from the policy are clipped to a configured input range, linearly scaled to an output range, and then further clamped to torque limits (which default to the actuator limits if not explicitly specified). Gravity compensation is optionally added by reading the simulator's bias force vector, which accounts for gravitational and Coriolis forces acting on the joints. This means the policy can either output torques that include gravity effects (compensation disabled) or purely task-related torques (compensation enabled), depending on the application.
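The input-conditioning chain described above (clip, scale, clamp) can be sketched as a standalone function. This is an illustrative sketch, not robosuite's actual implementation; the parameter names (`input_min`, `torque_limits`, etc.) are assumptions chosen to mirror the text:

```python
import numpy as np

def condition_action(action, input_min, input_max,
                     output_min, output_max, torque_limits):
    """Clip a raw policy action to the input range, scale it linearly
    to the output range, then clamp to the configured torque limits."""
    action = np.clip(action, input_min, input_max)
    # Affine map from [input_min, input_max] to [output_min, output_max]
    scale = (output_max - output_min) / (input_max - input_min)
    torques = output_min + (action - input_min) * scale
    # Final clamp: torque limits default to actuator limits if unset
    return np.clip(torques, torque_limits[0], torque_limits[1])
```

For example, with an input range of [-1, 1], an output range of [-80, 80] N·m, and torque limits of ±60 N·m, an out-of-range action of 2.0 is first clipped to 1.0, scaled to 80, and finally clamped to 60.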
An interpolator may optionally smooth the transition between consecutive torque setpoints when the policy runs at a lower frequency than the simulation. This prevents abrupt torque changes that could cause instabilities in the simulation or produce unrealistic behavior.
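A minimal version of such an interpolator might step linearly from the previous setpoint to the new one over a fixed number of simulation substeps. This is a sketch of the general idea, not robosuite's interpolator class; the class and method names are assumptions:

```python
import numpy as np

class LinearInterpolator:
    """Steps linearly from the last torque setpoint to a new goal over a
    fixed number of simulation substeps (a sketch of setpoint smoothing)."""

    def __init__(self, num_steps):
        self.num_steps = num_steps
        self.start = None
        self.goal = None
        self.step = 0

    def set_goal(self, goal, current):
        # Called once per policy step; `current` is the last applied setpoint
        self.start = np.asarray(current, dtype=float)
        self.goal = np.asarray(goal, dtype=float)
        self.step = 0

    def get_interpolated_goal(self):
        # Called once per simulation substep; saturates at the goal
        self.step = min(self.step + 1, self.num_steps)
        frac = self.step / self.num_steps
        return self.start + frac * (self.goal - self.start)
```

With `num_steps=4`, a jump from 0 to 4 N·m is applied as 1, 2, 3, 4 over four substeps rather than as a single discontinuity.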
Usage
Apply joint torque control when the policy or planner is itself a torque-level controller (e.g., a model-based controller, a reinforcement learning policy trained with torque actions, or a force-control scheme). Torque control provides maximum flexibility and bandwidth but places the full burden of stability and safety on the commanding agent. It is the preferred mode for research in low-level robot learning where the policy must learn to handle dynamics directly.
Theoretical Basis
Joint Torque Passthrough Control Law:
Given:
- Desired torques: tau_des (N-dimensional, from policy)
- Torque limits: [tau_min, tau_max]
- Gravity compensation: g(q) (optional)
- Input scaling: [input_min, input_max] -> [output_min, output_max]
1. Scale the input action:
tau_scaled = scale(tau_des, input_range, output_range)
2. Clip to torque limits:
tau_goal = clip(tau_scaled, tau_min, tau_max)
3. (Optional) Interpolate between previous and current goal:
tau_current = interpolator.get_interpolated_goal()
(if no interpolator is configured, tau_current = tau_goal)
4. Add gravity compensation if enabled:
tau_output = tau_current + g(q)
5. Return tau_output to the simulator
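The steps above can be combined into a minimal single-step sketch. The `bias_forces` argument stands in for the simulator's bias force vector (e.g. MuJoCo's `qfrc_bias`); interpolation is omitted here, and all names are illustrative rather than robosuite's exact API:

```python
import numpy as np

def torque_control_step(tau_des, input_range, output_range,
                        torque_limits, bias_forces=None):
    """One step of the joint torque passthrough control law."""
    in_min, in_max = input_range
    out_min, out_max = output_range
    # 1. Scale the (clipped) input action to the output range
    a = np.clip(tau_des, in_min, in_max)
    tau_scaled = out_min + (a - in_min) * (out_max - out_min) / (in_max - in_min)
    # 2. Clip to torque limits
    tau_goal = np.clip(tau_scaled, torque_limits[0], torque_limits[1])
    # 3. Interpolation between setpoints is omitted in this sketch
    tau_output = tau_goal
    # 4. Add gravity compensation if a bias-force vector is supplied
    if bias_forces is not None:
        tau_output = tau_output + bias_forces
    # 5. Return the torque command to the simulator
    return tau_output
```

For instance, an action of 0.5 with input range [-1, 1], output range [-10, 10] N·m, limits of ±8 N·m, and a bias force of 1 N·m yields a scaled torque of 5, which passes the clamp and becomes 6 after compensation.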
Key design decisions:
- Passthrough architecture: The controller performs minimal computation, preserving the bandwidth and flexibility needed for external control laws.
- Optional gravity compensation: When enabled, the policy can focus on task-relevant torques without modeling gravitational loads. When disabled, the policy has full control over the total torque applied.
- Torque limits: Default to actuator limits, but can be set to tighter bounds for safety. This provides a hardware-like safety layer even when the policy has direct torque access.
- Interpolation: Smoothing between setpoints mitigates the discontinuity that arises when the policy frequency is much lower than the simulation frequency.