Implementation:Isaac sim IsaacGymEnvs Launch Rlg Hydra

From Leeroopedia
Sources: IsaacGymEnvs
Domains: Hydra Configuration, Training
Last Updated: 2026-02-15 00:00 GMT

Overview

The Hydra-decorated entry-point function in train.py that composes the hierarchical YAML configuration, preprocesses training parameters, and launches the rl-games training pipeline.

Description

launch_rlg_hydra is the @hydra.main-decorated function in train.py that serves as the single entry point for all IsaacGymEnvs training and evaluation runs. Hydra automatically composes a DictConfig object from the YAML files in cfg/ based on command-line arguments. The function then calls preprocess_train_config to merge task-level parameters (observation/action dimensions, device assignments) into the training configuration dictionary before passing it to the rl-games runner.
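Schematically, the flow from composed config to runner launch can be sketched as follows. This is an illustrative stand-in, not the actual implementation: plain dicts replace Hydra's DictConfig, the rl-games Runner is omitted, and while the field names (num_actors, rl_device, numEnvs) mirror the source layout, the wiring here is simplified:

```python
# Schematic sketch of the train.py control flow. Plain dicts stand in
# for Hydra's composed DictConfig; the rl-games Runner call is stubbed out.

def preprocess_train_config(cfg, config_dict):
    """Copy task-level fields from the composed config into the flat
    training dict expected by rl-games (simplified sketch)."""
    train_cfg = config_dict["params"]["config"]
    train_cfg["num_actors"] = cfg["task"]["env"]["numEnvs"]  # env count
    train_cfg["device"] = cfg["rl_device"]                   # device assignment
    return config_dict

def launch_rlg_hydra(cfg):
    """Entry point. In the real code this function is decorated with
    @hydra.main(config_name="config", config_path="./cfg")."""
    # Extract the rl-games training section from the composed config.
    rlg_config_dict = {"params": {"config": dict(cfg["train"]["params"]["config"])}}
    # Merge task-level parameters into the flat training dict.
    rlg_config_dict = preprocess_train_config(cfg, rlg_config_dict)
    # Real code: runner.load(rlg_config_dict); runner.run({...})
    return rlg_config_dict

# Example composed config (values are illustrative only).
cfg = {
    "rl_device": "cuda:0",
    "task": {"env": {"numEnvs": 256}},
    "train": {"params": {"config": {"learning_rate": 3e-4}}},
}
result = launch_rlg_hydra(cfg)
```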

Usage

Invoked implicitly when running python train.py with task and parameter arguments.

Code Reference

Source Location: Repository: NVIDIA-Omniverse/IsaacGymEnvs, File: isaacgymenvs/train.py (L71-72, L38-68)

Signature:

@hydra.main(version_base="1.1", config_name="config", config_path="./cfg")
def launch_rlg_hydra(cfg: DictConfig):

Also covers preprocess_train_config:

def preprocess_train_config(cfg, config_dict):
    """
    Merges task-level parameters into the rl-games training config dict.

    Sets observation/action dimensions, device assignments, and
    environment count from the composed Hydra config into the
    flat training config dictionary expected by rl-games.
    """

Import:

from omegaconf import DictConfig, OmegaConf
import hydra

I/O Contract

Inputs:

  • CLI arguments (str, Hydra overrides; required): task selection and parameter overrides (e.g., task=Cartpole num_envs=256)
  • cfg/config.yaml (YAML file; required): top-level Hydra config with the defaults list
  • cfg/task/*.yaml (YAML files; required): per-task environment configuration (selected by the task= argument)
  • cfg/train/*.yaml (YAML files; required): per-task training configuration (auto-selected based on the task name)
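The auto-selection of the training config from the task name is driven by Hydra's defaults list in cfg/config.yaml, where the train entry interpolates the task name. A simplified sketch (keys follow the IsaacGymEnvs layout, but the values shown here are illustrative):

```yaml
# cfg/config.yaml (simplified, illustrative sketch)
defaults:
  - task: Cartpole       # selects cfg/task/Cartpole.yaml
  - train: ${task}PPO    # interpolation: selects cfg/train/CartpolePPO.yaml

seed: 42
rl_device: cuda:0
headless: False
test: False
checkpoint: ''
```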

Outputs:

  • Merged DictConfig object containing all training, task, and global parameters
  • Launches rl-games training runner with the preprocessed configuration
  • Training artifacts written to Hydra's output directory (checkpoints, logs, TensorBoard events)

Usage Examples

Example 1 -- Train Cartpole with default settings:

python train.py task=Cartpole

Example 2 -- Train with overridden parameters:

python train.py task=Cartpole num_envs=512 headless=True seed=42

Example 3 -- Train Ant with custom learning rate:

python train.py task=Ant num_envs=4096 \
    train.params.config.learning_rate=3e-4 \
    train.params.config.mini_epochs=8

Example 4 -- Evaluate a trained checkpoint:

python train.py task=Ant test=True num_envs=64 \
    checkpoint=runs/Ant/nn/Ant.pth

Example 5 -- Multi-GPU training:

torchrun --nnodes=1 --nproc_per_node=2 train.py \
    task=ShadowHand num_envs=8192 multi_gpu=True
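The dotted overrides in the examples above (e.g., train.params.config.learning_rate=3e-4) are resolved by Hydra into nested config entries. A stdlib-only sketch of that resolution, using a hypothetical helper that is not Hydra's actual override parser:

```python
def apply_override(cfg: dict, override: str) -> None:
    """Apply a single 'a.b.c=value' override to a nested dict
    (crude stand-in for Hydra's override grammar)."""
    key, _, raw = override.partition("=")
    node = cfg
    parts = key.split(".")
    for part in parts[:-1]:
        node = node.setdefault(part, {})  # walk/create intermediate nodes
    # Minimal literal parsing; Hydra handles many more value types.
    try:
        value = int(raw)
    except ValueError:
        try:
            value = float(raw)
        except ValueError:
            value = {"True": True, "False": False}.get(raw, raw)
    node[parts[-1]] = value

# Illustrative starting config and overrides from the examples above.
cfg = {"train": {"params": {"config": {"learning_rate": 1e-3}}}}
for ov in ["train.params.config.learning_rate=3e-4",
           "train.params.config.mini_epochs=8",
           "headless=True"]:
    apply_override(cfg, ov)
```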
