Implementation: Google DeepMind dm_control Suite Load
| Metadata | Value |
|---|---|
| Implementation | Suite Load |
| Domain | Reinforcement_Learning, Physics_Simulation |
| Source | dm_control |
| Workflow | Control_Suite_RL_Training |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
Concrete tool for loading a benchmark RL environment from the dm_control Control Suite by specifying a domain name and a task name.
Description
The `suite.load` function is the primary user-facing entry point for obtaining a Control Suite environment. It delegates to `suite.build_environment`, which:
- Checks that `domain_name` exists in the internal `_DOMAINS` registry (populated at import time by introspecting all imported domain modules that expose a `SUITE` attribute).
- Checks that `task_name` exists within the domain's `SUITE` dictionary.
- Merges `task_kwargs` and `environment_kwargs` into a single keyword-argument dictionary.
- Calls the task constructor to produce an `Environment` instance.
- Sets the `visualize_reward` flag on the task.
The module also pre-computes several useful collections: `ALL_TASKS`, `BENCHMARKING`, `EASY`, `HARD`, `EXTRA`, and `TASKS_BY_DOMAIN`, which allow scripts to enumerate available environments.
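The delegation steps above can be illustrated with a toy registry. This is a simplified sketch, not the actual dm_control source: the registry contents, the stand-in task constructor, and the error-message wording are all illustrative; only the overall dispatch shape (validate domain, validate task, merge kwargs, call the constructor) mirrors the description.

```python
# Sketch of the build_environment dispatch described above.
# Registry contents are illustrative, not the real dm_control suite.

def make_cartpole_balance(random=None, flat_observation=False):
    """Stand-in task constructor; real constructors return an Environment."""
    return {'task': 'cartpole/balance', 'random': random,
            'flat_observation': flat_observation}

# domain name -> SUITE dict mapping task names to constructors
_DOMAINS = {'cartpole': {'balance': make_cartpole_balance}}

def load(domain_name, task_name, task_kwargs=None, environment_kwargs=None):
    if domain_name not in _DOMAINS:
        raise ValueError(f'Unknown domain: {domain_name!r}')
    suite_dict = _DOMAINS[domain_name]
    if task_name not in suite_dict:
        raise ValueError(f'Unknown task: {task_name!r}')
    kwargs = dict(task_kwargs or {})        # merge the two kwarg dicts
    kwargs.update(environment_kwargs or {})
    return suite_dict[task_name](**kwargs)

env = load('cartpole', 'balance', task_kwargs={'random': 42})
```

The merge step explains why a key accepted by neither the task nor the `Environment` constructor surfaces as a `TypeError` from the constructor call rather than being silently dropped.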
Usage
Use this implementation when:
- You need a single function call to obtain a ready-to-use RL environment.
- You want to iterate over all benchmark environments for evaluation.
- You want to pass custom time limits or random seeds via `task_kwargs`, or a flat-observation flag via `environment_kwargs`.
Code Reference
| Attribute | Detail |
|---|---|
| Source Location | `dm_control/suite/__init__.py:L93-150` |
| Signature | `suite.load(domain_name, task_name, task_kwargs=None, environment_kwargs=None, visualize_reward=False)` |
| Import | `from dm_control import suite` |
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| `domain_name` | `str` | Yes | Name of the domain (e.g. `"cartpole"`, `"cheetah"`, `"humanoid"`). |
| `task_name` | `str` | Yes | Name of the task within the domain (e.g. `"balance"`, `"run"`, `"walk"`). |
| `task_kwargs` | `dict` or `None` | No | Optional keyword arguments forwarded to the task constructor (e.g. `{"random": 42}`). |
| `environment_kwargs` | `dict` or `None` | No | Optional keyword arguments forwarded to the `Environment` constructor (e.g. `{"flat_observation": True}`). |
| `visualize_reward` | `bool` | No | If `True`, rendered frames colour objects proportionally to the reward. Default `False`. |
Outputs
| Name | Type | Description |
|---|---|---|
| return | `dm_control.rl.control.Environment` | A fully initialised environment conforming to the `dm_env.Environment` interface. |
Exceptions
| Exception | Condition |
|---|---|
| `ValueError` | `domain_name` is not a recognised domain. |
| `ValueError` | `task_name` is not a recognised task within the given domain. |
Usage Examples
Load a single environment:
```python
from dm_control import suite
env = suite.load('cartpole', 'balance')
```
Load with a fixed random seed and flat observations:
```python
from dm_control import suite
env = suite.load(
    domain_name='cheetah',
    task_name='run',
    task_kwargs={'random': 42},
    environment_kwargs={'flat_observation': True},
)
```
Enable reward visualisation for debugging:
```python
from dm_control import suite
env = suite.load('walker', 'walk', visualize_reward=True)
```
Iterate over all benchmarking tasks:
```python
from dm_control import suite
for domain_name, task_name in suite.BENCHMARKING:
    env = suite.load(domain_name, task_name)
    print(f"{domain_name}/{task_name}: action_spec={env.action_spec()}")
```