Implementation: Google DeepMind dm_control Soccer Load
| Metadata | |
|---|---|
| Knowledge Sources | dm_control |
| Domains | Multi-Agent Reinforcement Learning, Environment Construction |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
Concrete tool for constructing a complete N-versus-N multi-agent MuJoCo soccer environment with a single function call, handling player creation, pitch sizing, ball selection, and task wiring.
Description
The soccer.load function is the primary public entry point for the dm_control soccer environment. It performs the following steps:
- Validates `team_size` (must be 1--11).
- Sets default pitch size limits (`min_size=(32, 24)`, `max_size=(48, 36)` for non-humanoid walkers; computed from per-player area constants for humanoid walkers).
- Selects the ball type: `SoccerBall()` for boxhead/ant walkers, `regulation_soccer_ball()` for humanoid walkers.
- Optionally adjusts the goal size to `MINI_FOOTBALL_GOAL_SIZE` for humanoid walkers.
- Selects `Task` or `MultiturnTask` based on `terminate_on_goal`.
- Builds players via `_make_players(team_size, walker_type)`.
- Constructs a `RandomizedPitch` with the computed size bounds.
- Wraps everything in a `composer.Environment` with the given `time_limit` and `random_state`.
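The decision logic in these steps can be sketched as plain Python. This is a hypothetical simplification for illustration only, not the dm_control source: `resolve_load_config` and its string-valued return dict do not exist in the library, which assembles actual walker, ball, pitch, and task objects instead.

```python
# Hypothetical sketch of the decisions soccer.load makes, following the
# documented steps. Not the real implementation.

NON_HUMANOID_MIN_SIZE = (32, 24)
NON_HUMANOID_MAX_SIZE = (48, 36)

def resolve_load_config(team_size, walker_type="BOXHEAD", terminate_on_goal=True):
    # Step 1: validate team_size (must be 1--11).
    if not 1 <= team_size <= 11:
        raise ValueError(f"team_size must be in [1, 11], got {team_size}")

    # Steps 2-4: pitch bounds, ball type, and goal size depend on morphology.
    if walker_type == "HUMANOID":
        ball = "regulation_soccer_ball()"
        goal_size = "MINI_FOOTBALL_GOAL_SIZE"
        min_size = max_size = None  # computed from per-player area constants
    else:
        ball = "SoccerBall()"
        goal_size = "default"
        min_size, max_size = NON_HUMANOID_MIN_SIZE, NON_HUMANOID_MAX_SIZE

    # Step 5: episode-termination behaviour selects the task class.
    task = "Task" if terminate_on_goal else "MultiturnTask"

    # Step 6: _make_players builds 2 * team_size walkers (two teams).
    n_players = 2 * team_size

    return {"ball": ball, "goal_size": goal_size, "min_size": min_size,
            "max_size": max_size, "task": task, "n_players": n_players}
```

For example, `resolve_load_config(2)` yields a `Task` configuration with 4 players and the non-humanoid pitch bounds.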
Usage
This is the recommended way to create a soccer environment. All other components (walkers, pitch, task) are assembled automatically.
Code Reference
| Attribute | Value |
|---|---|
| Source Location | dm_control/locomotion/soccer/__init__.py, lines 92--152 |
| Signature | def load(team_size, time_limit=45.0, random_state=None, disable_walker_contacts=False, enable_field_box=False, keep_aspect_ratio=False, terminate_on_goal=True, walker_type=WalkerType.BOXHEAD) |
| Import | from dm_control.locomotion import soccer |
I/O Contract
Inputs:
| Parameter | Type | Description |
|---|---|---|
| team_size | int | Number of players per team (1--11). |
| time_limit | float | Maximum episode duration in seconds. Default 45.0. |
| random_state | int, np.random.RandomState, or None | Random seed or RNG instance. None uses a platform-dependent seed. |
| disable_walker_contacts | bool | Disable physical contacts between walkers. Default False. |
| enable_field_box | bool | Enable a physical bounding box for the ball. Default False. |
| keep_aspect_ratio | bool | Maintain a constant pitch aspect ratio across randomisations. Default False. |
| terminate_on_goal | bool | If True, use Task (episode ends on goal). If False, use MultiturnTask. Default True. |
| walker_type | WalkerType | Walker morphology for all players. Default WalkerType.BOXHEAD. |
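The random_state parameter accepts either a seed or a ready-made RNG. A small helper illustrates that contract; note this normalizer is not part of dm_control, and the exact handling inside load is not shown in this page.

```python
import numpy as np

def as_random_state(random_state):
    """Illustrative helper (not part of dm_control): normalize the
    random_state argument to an np.random.RandomState instance."""
    if isinstance(random_state, np.random.RandomState):
        return random_state  # pass an existing RNG through unchanged
    # An int seeds a new RNG; None yields a platform-dependent seed.
    return np.random.RandomState(random_state)
```

Passing the same integer seed twice produces identical draws, which is why `random_state=42` in the examples below gives reproducible episodes.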
Outputs:
| Return | Type | Description |
|---|---|---|
| environment | composer.Environment | A fully configured multi-agent RL environment that implements the dm_env.Environment interface. |
Usage Examples
```python
from dm_control.locomotion import soccer

# 2v2 boxhead environment with default settings.
env = soccer.load(team_size=2)

# 3v3 humanoid environment, continuous play, 90-second episodes.
env = soccer.load(
    team_size=3,
    time_limit=90.0,
    walker_type=soccer.WalkerType.HUMANOID,
    terminate_on_goal=False,
)

# 1v1 with a deterministic seed and no walker contacts.
env = soccer.load(
    team_size=1,
    random_state=42,
    disable_walker_contacts=True,
)

# Inspect the environment.
timestep = env.reset()
print(type(timestep))             # <class 'dm_env._environment.TimeStep'>
print(len(timestep.observation))  # 2 * team_size observation dicts
action_specs = env.action_spec()
print(len(action_specs))          # 2 * team_size action specs
```
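To actually step the environment, each of the 2 * team_size players needs an action matching its bounded action spec. The helper below is hypothetical (not part of dm_control); it only assumes each spec exposes minimum, maximum, shape, and dtype, as the dm_env BoundedArray specs returned by env.action_spec() do.

```python
import numpy as np

def uniform_random_actions(action_specs, rng):
    """Sample one uniform random action per player, within each spec's bounds."""
    return [
        rng.uniform(spec.minimum, spec.maximum, size=spec.shape).astype(spec.dtype)
        for spec in action_specs
    ]
```

With an environment from soccer.load, a rollout loop would repeatedly call env.step(uniform_random_actions(env.action_spec(), rng)) until timestep.last().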