
Implementation:Isaac Sim IsaacGymEnvs Apply Randomizations

Last Updated 2026-02-15 00:00 GMT

Overview

Core API methods that apply domain randomization to Isaac Gym environments. The base VecTask.apply_randomizations() handles standard manual DR, while ADRVecTask.apply_randomizations() extends it with ADR-specific logic including per-environment DR parameter dictionaries, boundary worker management, and ADR tensor sampling.

Description

The apply_randomizations() method is the central dispatch point for all domain randomization in IsaacGymEnvs. It processes the structured dr_params dictionary and applies randomized values to the simulation via Isaac Gym's property getter/setter API. The method handles two distinct randomization paths:

  • Non-environment randomizations (observations, actions, sim_params): Applied globally once enough steps have passed since the last randomization.
  • Per-environment randomizations (actor_params): Applied per-environment on resets, iterating over actors, properties, and attributes.

The ADR override adds per-environment DR parameter dictionaries via get_dr_params_by_env_id(), which patches boundary worker environments with collapsed ranges for ADR evaluation.
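The structure that apply_randomizations() expects mirrors the task's randomization_params YAML. The sketch below illustrates the typical shape of that dictionary; the specific actors, attributes, and ranges are illustrative placeholders, not values from any shipped config.

```python
# Illustrative shape of dr_params (mirrors task.randomization_params in the
# task YAML). Keys follow the IsaacGymEnvs convention; the actor name "hand"
# and the concrete ranges below are placeholders.
dr_params = {
    "frequency": 720,                  # steps between non-env randomizations
    "observations": {
        "range": [0.0, 0.002],         # [mean, std] for gaussian noise
        "operation": "additive",
        "distribution": "gaussian",
    },
    "actions": {
        "range": [0.0, 0.05],
        "operation": "additive",
        "distribution": "gaussian",
    },
    "sim_params": {
        "gravity": {
            "range": [0.0, 0.4],
            "operation": "additive",
            "distribution": "gaussian",
        },
    },
    "actor_params": {
        "hand": {
            "dof_properties": {
                "stiffness": {
                    "range": [0.75, 1.5],  # multiplicative range
                    "operation": "scaling",
                    "distribution": "uniform",
                },
            },
        },
    },
}
```

The observations/actions entries become noise lambdas, sim_params entries are written back globally, and actor_params entries are applied per environment on reset.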

Usage

Called from pre_physics_step() in task implementations.

# Manual DR (from AllegroHandDextreme.pre_physics_step)
if self.randomize and not self.use_adr:
    self.apply_randomizations(
        dr_params=self.randomization_params,
        randomisation_callback=self.randomisation_callback
    )

# ADR (from AllegroHandDextreme.pre_physics_step)
elif self.randomize and self.use_adr:
    env_mask_randomize = (self.reset_buf & ~self.apply_reset_buf).bool()
    self.apply_randomizations(
        dr_params=self.randomization_params,
        randomize_buf=env_mask_randomize,
        adr_objective=self.successes,
        randomisation_callback=self.randomisation_callback
    )

Code Reference

Source Location

  • File: isaacgymenvs/tasks/base/vec_task.py (lines 610--840) -- VecTask.apply_randomizations()
  • File: isaacgymenvs/tasks/dextreme/adr_vec_task.py (lines 920--1048+) -- ADRVecTask.apply_randomizations()

Signature

# VecTask base implementation
class VecTask:
    def apply_randomizations(self, dr_params):
        """Apply domain randomizations to the environment.

        Note that currently randomizations can be applied only on resets,
        due to current PhysX limitations.

        Args:
            dr_params: parameters for domain randomization to use.
        """

# ADRVecTask override
class ADRVecTask(VecTaskDextreme):
    def apply_randomizations(self, dr_params, randomize_buf,
                             adr_objective=None,
                             randomisation_callback=None):
        """Apply domain randomizations to the environment.

        Args:
            dr_params: parameters for domain randomization to use.
            randomize_buf: selective randomisation of environments
            adr_objective: consecutive successes scalar
            randomisation_callback: callbacks from the environment class
        """

Import

from isaacgymenvs.tasks.base.vec_task import VecTask
from isaacgymenvs.tasks.dextreme.adr_vec_task import ADRVecTask

I/O Contract

Inputs

  • dr_params (dict) -- Structured randomization parameter dictionary from YAML config (task.randomization_params).
  • randomize_buf (torch.Tensor, bool; ADR only) -- Per-environment mask indicating which environments to re-randomize.
  • adr_objective (torch.Tensor, float; ADR only) -- Per-environment performance metric (consecutive successes) used for ADR range updates.
  • randomisation_callback (callable; ADR only) -- Callback for environment-specific side effects (e.g., storing the randomized gravity vector).
  • self.first_randomization (bool) -- Flag indicating the first call; triggers original-property caching and bucket checking.
  • self.envs (list) -- List of Isaac Gym environment handles.
  • self.sim (handle) -- Isaac Gym simulation handle.

Outputs

  • Modified sim/actor properties (physics state) -- Randomized physics properties applied via gym.set_actor_*_properties() and gym.set_sim_params().
  • self.dr_randomizations / self.obs_randomizations (dict) -- Noise lambda closures for observations and actions, keyed by parameter name.
  • self.action_randomizations (dict) -- Action noise lambda closure.
  • self.original_props (dict) -- Cached original property values (populated on the first call).
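The stored noise closures are plain functions applied to observation or action tensors on every step. The sketch below shows how such an additive-gaussian closure might be built and applied; the function name and structure are illustrative, not the library's internal code.

```python
import torch


def build_noise_lambda(params):
    """Sketch of an additive-gaussian noise closure of the kind that
    apply_randomizations() stores in self.dr_randomizations.
    Illustrative only; names do not match the library internals."""
    mu, std = params["range"]  # [mean, std] under the gaussian convention

    def noise_lambda(tensor):
        # Fresh noise on every call, same shape/device as the input.
        return tensor + torch.randn_like(tensor) * std + mu

    return noise_lambda


# E.g. in step(): observations pass through the stored closure.
obs = torch.zeros(4, 3)
noisy_obs = build_noise_lambda({"range": [0.0, 0.02]})(obs)
```

Because the closure captures the sampled parameters, re-randomization simply replaces the stored lambda rather than mutating shared state.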

Key Behavior

VecTask Base Flow

# Pseudocode for VecTask.apply_randomizations
def apply_randomizations(self, dr_params):
    rand_freq = dr_params.get("frequency", 1)

    if self.first_randomization:
        env_ids = list(range(self.num_envs))
        check_buckets(self.gym, self.envs, dr_params)
    else:
        env_ids = environments_past_frequency_and_resetting()

    # Non-physical params: build noise lambdas
    for param in ["observations", "actions"]:
        if param in dr_params:
            build_noise_lambda(dr_params[param])

    # Sim params: fetch current params, randomize gravity etc., write back
    if "sim_params" in dr_params:
        prop = gym.get_sim_params(sim)
        for attr in sim_param_attrs:
            apply_random_samples(prop, original, attr, params, step)
        gym.set_sim_params(sim, prop)

    # Actor params: loop over actors, envs, properties
    for actor, properties in dr_params["actor_params"].items():
        for env_id in env_ids:
            for prop_name, prop_attrs in properties.items():
                prop = getter(env, handle)
                for attr, params in prop_attrs.items():
                    apply_random_samples(prop, original, attr, params, step)
                setter(env, handle, prop)

    self.first_randomization = False
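The apply_random_samples() step in the pseudocode above draws a sample from the configured distribution and combines it with the original property value. A minimal, self-contained sketch of that per-attribute logic (illustrative, not the library implementation) is:

```python
import random


def apply_random_sample(value, params):
    """Minimal sketch of per-attribute domain randomization:
    sample from the configured distribution, then apply it
    additively or multiplicatively. Illustrative only."""
    lo, hi = params["range"]
    if params["distribution"] == "gaussian":
        sample = random.gauss(lo, hi)      # range = [mean, std]
    else:  # "uniform"
        sample = random.uniform(lo, hi)    # range = [lower, upper]

    if params["operation"] == "additive":
        return value + sample
    else:  # "scaling"
        return value * sample


random.seed(0)
# Scale a nominal stiffness of 100.0 by a uniform factor in [0.75, 1.5].
stiffness = apply_random_sample(100.0, {
    "range": [0.75, 1.5],
    "operation": "scaling",
    "distribution": "uniform",
})
```

The real implementation additionally caches the original value (self.original_props) so that "scaling" always multiplies the unrandomized baseline rather than compounding across resets.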

ADR Extension

The ADR override adds these additional steps:

  1. Calls self.adr_update() to update ADR ranges based on boundary worker performance.
  2. Computes current_adr_params by patching current ADR ranges into the DR params dictionary.
  3. For each environment, calls get_dr_params_by_env_id() to obtain the appropriate DR dictionary (boundary workers get collapsed ranges; rollout workers get current ADR ranges).
  4. Iterates over environments in the outer loop (rather than actors) to support per-environment customization.
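Step 3 hinges on boundary workers sampling a fixed value at one end of an ADR range while rollout workers sample the full range. A hedged sketch of that range-collapsing patch (the helper name here is hypothetical; the real logic lives inside ADRVecTask.get_dr_params_by_env_id) is:

```python
import copy


def collapse_range(adr_params, param_name, boundary_value):
    """Hypothetical helper: return a copy of the DR params in which
    one parameter's range is collapsed to a single boundary value,
    as done for ADR boundary-worker environments."""
    patched = copy.deepcopy(adr_params)
    patched[param_name]["range"] = [boundary_value, boundary_value]
    return patched


params = {"hand_damping": {"range": [0.5, 2.0]}}
# A boundary worker evaluating the upper bound always samples 2.0.
patched = collapse_range(params, "hand_damping", 2.0)
```

Collapsing the range makes the boundary worker's performance attributable to that single boundary value, which is what adr_update() compares against its expand/contract thresholds.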
