Principle:Farama Foundation Gymnasium Discrete Action Space

From Leeroopedia
Knowledge Sources
Domains Reinforcement_Learning, Space_Definition
Last Updated 2026-02-15 03:00 GMT

Overview

A mathematical representation of a finite set of integers used to define discrete action or observation spaces in reinforcement learning environments.

Description

A Discrete space represents a finite set of consecutive integers {a, a+1, …, a+n−1}, where n is the number of elements and a is the starting value (default 0). This is the most common action space type for environments with a finite number of distinct actions (e.g., left/right, hit/stand, up/down/left/right).

Key properties:

  • n: The number of elements in the space
  • start: The smallest element (default 0)
  • sample(): Uniform random sampling with optional masking or probability weighting
  • contains(): Membership testing

The Discrete space supports action masking (mask entries set to 0 mark actions as invalid and exclude them from sampling) and probability-weighted sampling for advanced exploration strategies.

Usage

Use this space when the action or observation is one of a finite, enumerable set of choices. This is the standard choice for tabular RL (Q-learning, SARSA), grid-world navigation, and classification-style decisions. For continuous actions, use Box instead.
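For the tabular case mentioned above, a Discrete space maps directly onto the rows and columns of a Q-table. A minimal Q-learning sketch, assuming a hypothetical grid world whose observation space is Discrete(16) and whose action space is Discrete(4) (both sizes are illustrative, not from any particular environment):

```python
import numpy as np

# Hypothetical sizes: observation_space = Discrete(16), action_space = Discrete(4)
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))  # tabular Q: one row per state

alpha, gamma, eps = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

def select_action(s: int) -> int:
    """Epsilon-greedy over the Discrete action set {0, ..., n_actions - 1}."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_update(s: int, a: int, r: float, s_next: int) -> None:
    """One-step Q-learning update toward the bootstrapped target."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```

Because the space is a finite set of integers, the sampled action can index the Q-table directly with no encoding step.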

Theoretical Basis

A discrete space S = {a, a+1, …, a+n−1} has uniform sampling probability:

P(x) = 1/n,  ∀x ∈ S

With an action mask m ∈ {0, 1}ⁿ, sampling is restricted to valid actions:

P(x | m) = m_x / ∑ᵢ mᵢ,  if ∑ᵢ mᵢ > 0
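This masked distribution can be checked empirically. A pure-NumPy sketch (not the Gymnasium implementation itself) that draws from the valid indices and compares the empirical frequencies against m_x / ∑ᵢ mᵢ:

```python
import numpy as np

n = 4
mask = np.array([0, 1, 0, 1], dtype=np.int8)  # only actions 1 and 3 are valid

# Expected distribution under the formula: P(x | m) = m_x / sum(m)
expected = mask / mask.sum()                  # [0.0, 0.5, 0.0, 0.5]

rng = np.random.default_rng(0)
valid = np.flatnonzero(mask)                  # indices with m_x = 1
draws = rng.choice(valid, size=10_000)        # uniform over the valid set
empirical = np.bincount(draws, minlength=n) / draws.size
assert np.allclose(empirical, expected, atol=0.02)
```

Masked-out actions are never drawn, and probability mass is split evenly among the remaining valid actions, matching the formula above.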
