Implementation:Google DeepMind dm_control suite.load

From Leeroopedia
Metadata        Value
Implementation  suite.load
Domain          Reinforcement_Learning, Physics_Simulation
Source          dm_control
Workflow        Control_Suite_RL_Training
Last Updated    2026-02-15 00:00 GMT

Overview

A concrete entry point for loading a benchmark RL environment from the dm_control Control Suite by specifying a domain name and a task name.

Description

The suite.load function is the primary user-facing entry point for obtaining a Control Suite environment. It delegates to suite.build_environment, which:

  1. Checks that domain_name exists in the internal _DOMAINS registry (populated at import time by introspecting all imported domain modules that expose a SUITE attribute).
  2. Checks that task_name exists within the domain's SUITE dictionary.
  3. Merges task_kwargs and environment_kwargs into a single keyword-argument dictionary.
  4. Calls the task constructor to produce an Environment instance.
  5. Sets the visualize_reward flag on the task.
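
The dispatch steps above can be sketched as follows. This is a minimal stand-in for exposition, not the real dm_control source: the registry contents and the _FakeEnv class are hypothetical, and the real function operates on domain modules with SUITE dictionaries rather than plain nested dicts.

```python
# Illustrative sketch of the five dispatch steps, assuming a
# {domain: {task: constructor}} registry. Names here are stand-ins.
class _FakeEnv:
    """Hypothetical stand-in for dm_control.rl.control.Environment."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs
        self.visualize_reward = False

_DOMAINS = {"cartpole": {"balance": _FakeEnv}}  # stand-in registry

def build_environment(domain_name, task_name, task_kwargs=None,
                      environment_kwargs=None, visualize_reward=False):
    if domain_name not in _DOMAINS:                        # step 1
        raise ValueError(f"Unknown domain {domain_name!r}.")
    if task_name not in _DOMAINS[domain_name]:             # step 2
        raise ValueError(f"Unknown task {task_name!r}.")
    task_kwargs = dict(task_kwargs or {})                  # step 3
    if environment_kwargs is not None:
        task_kwargs["environment_kwargs"] = environment_kwargs
    env = _DOMAINS[domain_name][task_name](**task_kwargs)  # step 4
    env.visualize_reward = visualize_reward                # step 5
    return env
```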

The module also pre-computes several useful collections: ALL_TASKS, BENCHMARKING, EASY, HARD, EXTRA, and TASKS_BY_DOMAIN, which allow scripts to enumerate available environments.
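
The way such collections can be derived from a domain registry is sketched below. The registry contents are made up for illustration; only the derivation pattern (flatten to (domain, task) pairs, then group by domain) reflects what the text describes.

```python
# Sketch: deriving ALL_TASKS and TASKS_BY_DOMAIN from a domain registry.
# The registry below is a hypothetical stand-in, not the real suite.
import collections

_DOMAINS = {
    "cartpole": {"balance": None, "swingup": None},
    "cheetah": {"run": None},
}

# Every (domain, task) pair, in deterministic order.
ALL_TASKS = tuple(
    (domain, task)
    for domain in sorted(_DOMAINS)
    for task in sorted(_DOMAINS[domain])
)

# Group the task names by domain for per-domain enumeration.
TASKS_BY_DOMAIN = collections.OrderedDict()
for domain, task in ALL_TASKS:
    TASKS_BY_DOMAIN.setdefault(domain, []).append(task)
```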

Usage

Use this implementation when:

  • You need a single function call to obtain a ready-to-use RL environment.
  • You want to iterate over all benchmark environments for evaluation.
  • You want to pass custom time limits or random seeds via task_kwargs, or flat-observation flags via environment_kwargs.

Code Reference

Attribute Detail
Source Location dm_control/suite/__init__.py:L93-150
Signature suite.load(domain_name, task_name, task_kwargs=None, environment_kwargs=None, visualize_reward=False)
Import from dm_control import suite

I/O Contract

Inputs

Name Type Required Description
domain_name str Yes Name of the domain (e.g. "cartpole", "cheetah", "humanoid").
task_name str Yes Name of the task within the domain (e.g. "balance", "run", "walk").
task_kwargs dict or None No Optional keyword arguments forwarded to the task constructor (e.g. {"random": 42}).
environment_kwargs dict or None No Optional keyword arguments forwarded to the Environment constructor (e.g. {"flat_observation": True}).
visualize_reward bool No If True, object colours in rendered frames are scaled in proportion to the reward. Default False.

Outputs

Name Type Description
return dm_control.rl.control.Environment A fully initialised environment conforming to the dm_env.Environment interface.
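
The dm_env.Environment interface the return value conforms to is driven through reset()/step() calls yielding TimeStep records. The sketch below uses a minimal hypothetical stand-in (FakeEnvironment, with string step types instead of dm_env.StepType) to show the standard interaction loop; it is not dm_control itself.

```python
# Sketch of the reset()/step() interaction loop supported by the
# returned environment. FakeEnvironment and TimeStep are stand-ins.
from typing import NamedTuple, Optional

class TimeStep(NamedTuple):
    step_type: str           # 'FIRST', 'MID', or 'LAST'
    reward: Optional[float]  # None on the first step of an episode
    observation: float

    def last(self):
        return self.step_type == "LAST"

class FakeEnvironment:
    """Stand-in with the reset()/step() shape of dm_env.Environment."""
    def __init__(self, horizon=3):
        self._horizon, self._t = horizon, 0

    def reset(self):
        self._t = 0
        return TimeStep("FIRST", None, 0.0)

    def step(self, action):
        self._t += 1
        step_type = "LAST" if self._t >= self._horizon else "MID"
        return TimeStep(step_type, 1.0, float(self._t))

# Standard episode loop: reset once, step until the final TimeStep.
env = FakeEnvironment()
timestep = env.reset()
total_reward = 0.0
while not timestep.last():
    timestep = env.step(action=0.0)
    total_reward += timestep.reward
```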

Exceptions

Exception Condition
ValueError domain_name is not a recognised domain.
ValueError task_name is not a recognised task within the given domain.

Usage Examples

Load a single environment:

from dm_control import suite

env = suite.load('cartpole', 'balance')

Load with a fixed random seed and flat observations:

from dm_control import suite

env = suite.load(
    domain_name='cheetah',
    task_name='run',
    task_kwargs={'random': 42},
    environment_kwargs={'flat_observation': True},
)

Enable reward visualisation for debugging:

from dm_control import suite

env = suite.load('walker', 'walk', visualize_reward=True)

Iterate over all benchmarking tasks:

from dm_control import suite

for domain_name, task_name in suite.BENCHMARKING:
    env = suite.load(domain_name, task_name)
    print(f"{domain_name}/{task_name}: action_spec={env.action_spec()}")
