

Environment:Guardrails_ai_Guardrails_Python_3.10_Runtime

From Leeroopedia
Knowledge Sources
Domains: Infrastructure, NLP, LLM_Validation
Last Updated: 2026-02-14 12:00 GMT

Overview

Python 3.10+ runtime environment with core dependencies for the Guardrails AI validation framework.

Description

This environment defines the base Python runtime and core package dependencies required to run the Guardrails AI framework. The framework requires Python 3.10 or higher (but less than 4.0) due to its use of modern Python features including inspect.get_annotations(), dataclass kw_only parameter, and the built-in anext() function. The core dependency stack includes LiteLLM for unified LLM provider access, Pydantic v2 for schema validation, OpenAI SDK for structured output, and OpenTelemetry for observability.

Usage

Use this environment for any Guardrails AI workflow: LLM output validation, structured data generation, streaming validation, custom validator development, or server deployment. This is the mandatory base prerequisite for all Guardrails functionality.

System Requirements

Category | Requirement | Notes
OS | Linux, macOS, or Windows (via WSL2) | No OS-specific restrictions in source
Python | >= 3.10, < 4.0 | Uses `inspect.get_annotations()`, dataclass `kw_only`, built-in `anext()`
Disk | ~500MB | For core packages and the LiteLLM model registry

Dependencies

Core Python Packages

  • `guardrails-ai` (the framework itself)
  • `openai` >= 1.30.1, < 3.0.0
  • `litellm` >= 1.37.14, < 2.0.0
  • `pydantic` >= 2.0.0, < 3.0
  • `langchain-core` >= 1.0.0, < 2.0
  • `lxml` >= 4.9.3, < 7.0.0
  • `rich` >= 13.6.0, < 15.0.0
  • `typer` >= 0.9.0, < 0.20
  • `click` <= 8.2.0
  • `tenacity` >= 8.1.0, < 10.0.0
  • `tiktoken` >= 0.5.1, < 1.0.0
  • `requests` >= 2.31.0, < 3.0.0
  • `jsonschema[format-nongpl]` >= 4.22.0, < 5.0.0
  • `jsonref` >= 1.1.0, < 2.0.0
  • `typing-extensions` >= 4.8.0, < 5.0.0
  • `python-dateutil` >= 2.8.2, < 3.0.0
  • `pydash` >= 7.0.6, < 9.0.0
  • `rstr` >= 3.2.2, < 4.0.0
  • `faker` >= 25.2.0, < 38.0.0
  • `pyjwt` >= 2.8.0, < 3.0.0
  • `semver` >= 3.0.2, < 4.0.0
  • `diff-match-patch` >= 20230430, < 20241101
  • `guardrails-hub-types` >= 0.0.4, < 0.1.0
  • `guardrails-api-client` >= 0.4.0, < 0.5.0
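Version pins like those above can be sanity-checked at runtime with only the standard library. A rough sketch (the `PINS` subset is illustrative; production code should use `packaging.version` rather than this crude numeric parse):

```python
from importlib import metadata

# Illustrative subset of the pins listed above: (package, min, max-exclusive)
PINS = [
    ("pydantic", (2, 0, 0), (3, 0)),
    ("requests", (2, 31, 0), (3, 0, 0)),
]

def parse(version: str) -> tuple:
    # Crude numeric parse -- drops pre-release tags like "0.4.0a1";
    # use packaging.version for real comparisons.
    return tuple(int(p) for p in version.split(".") if p.isdigit())

def in_range(version: tuple, lo: tuple, hi: tuple) -> bool:
    return lo <= version < hi

for pkg, lo, hi in PINS:
    try:
        v = parse(metadata.version(pkg))
        status = "ok" if in_range(v, lo, hi) else "out of range"
    except metadata.PackageNotFoundError:
        status = "not installed"
    print(f"{pkg}: {status}")
```

This is what `pip check` does more rigorously; the sketch is only useful for a quick in-process assertion.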

Optional Extras

  • [api] — Server deployment: `guardrails-api` >= 0.1.0a1, `boto3` > 1
  • [sql] — SQL validation: `sqlvalidator`, `sqlalchemy` >= 2.0.9, `sqlglot` >= 19.0.3
  • [vectordb] — Vector search: `faiss-cpu` >= 1.7.4, `numpy` >= 1.25
  • [huggingface] — HuggingFace: `transformers` >= 4.38.0, `torch` >= 2.1.1
  • [databricks] — MLflow: `mlflow` >= 2.0.1
  • [llama] — LlamaIndex: `llama-index` >= 0.11.0
  • [uv] — Event loop: `uvloop` >= 0.20.0

Credentials

The following environment variables are used for core authentication:

  • `GUARDRAILS_API_KEY`: API key for authenticating with a remote Guardrails server (optional for local use).
  • `OPENAI_API_KEY`: OpenAI API key, used when the LLM provider is OpenAI or when using remote validation.

See also: Environment:Guardrails_ai_Guardrails_LLM_Provider_API_Keys for full credentials reference.
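A minimal pattern for surfacing a missing key early rather than at first LLM call (the `require_env` helper is illustrative, not part of the Guardrails API):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable or fail loudly."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; export it before running, "
            f"e.g. export {name}=..."
        )
    return value

# GUARDRAILS_API_KEY is optional for local use, so a plain .get() suffices:
guardrails_key = os.environ.get("GUARDRAILS_API_KEY")
```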

Quick Install

# Install core framework
pip install guardrails-ai

# Install with server support
pip install "guardrails-ai[api]"

# Install with all optional ML dependencies
pip install "guardrails-ai[huggingface,vectordb,sql,databricks,llama]"

Code Evidence

Python version gate from `guardrails/api_client.py:42-45`:

_api_key = (
    self.api_key
    if sys.version_info.minor < 10
    else {"ApiKeyAuth": self.api_key}
)

Python 3.10+ polyfill from `guardrails/telemetry/guard_tracing.py:30-31`:

if sys.version_info.minor < 10:
    from guardrails.utils.polyfills import anext
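For reference, such a polyfill can be as small as the following sketch (named `anext_polyfill` here to avoid shadowing the built-in; the actual `guardrails.utils.polyfills` implementation may differ):

```python
import asyncio

async def anext_polyfill(iterator):
    # Equivalent to the built-in anext() that ships with Python 3.10+:
    # advance an async iterator by one step.
    return await iterator.__anext__()

async def demo():
    async def gen():
        yield 42
    return await anext_polyfill(gen())

assert asyncio.run(demo()) == 42
```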

Dataclass kw_only gate from `guardrails/classes/generic/serializeable.py:26-28`:

encoder_kwargs = {}
if sys.version_info.minor >= 10:
    encoder_kwargs["kw_only"] = True
    encoder_kwargs["default"] = SerializeableJSONEncoder

Python version constraint from `pyproject.toml`:

[tool.poetry.dependencies]
python = ">=3.10,<4.0"

Common Errors

Error Message | Cause | Solution
`ImportError: cannot import name 'anext'` | Python < 3.10 | Upgrade to Python 3.10+
`TypeError: __init__() got an unexpected keyword argument 'kw_only'` | Python < 3.10 | Upgrade to Python 3.10+
`ModuleNotFoundError: No module named 'guardrails'` | Package not installed | `pip install guardrails-ai`
`ImportError: faiss not found` | Optional dep missing | `pip install "guardrails-ai[vectordb]"`
`ImportError: llama_index is not installed` | Optional dep missing | `pip install "guardrails-ai[llama]"`
`ImportError: Please install mlflow` | Optional dep missing | `pip install "guardrails-ai[databricks]"`
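The optional-dependency errors above typically come from an import guard around the extra. A generic version of the pattern (the `require_extra` helper is illustrative, not the framework's exact code):

```python
import importlib

def require_extra(module_name: str, extra: str):
    """Import a module or explain which pip extra provides it."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"{module_name} not found. Install with: "
            f'pip install "guardrails-ai[{extra}]"'
        ) from exc

json_mod = require_extra("json", "api")  # stdlib module: always succeeds
```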

Compatibility Notes

  • Python 3.9: Not supported. The codebase still carries `sys.version_info.minor < 10` compatibility guards, but the declared constraint (`>=3.10,<4.0`) requires 3.10+.
  • Pydantic v1: Not supported. The codebase uses Pydantic v2 features (>=2.0.0). A legacy `@dataclass` decorator on the Validator class is a remnant of Pydantic v1 support marked for removal.
  • guardrails-api-client: Uses version 0.4.0a1 for Python < 3.10 and >= 0.4.0 for Python >= 3.10 (different API key format handling).
  • uvloop: Optional performance optimization for async validation. Only available on Linux/macOS (not Windows).
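The uvloop note translates to a guarded setup like the following sketch, which silently falls back to the default event loop when the [uv] extra is absent or the platform is Windows (function name is illustrative):

```python
import asyncio
import sys

def install_uvloop_if_available() -> bool:
    """Use uvloop's event loop policy when possible; report whether it took."""
    if sys.platform == "win32":
        return False  # uvloop does not support Windows
    try:
        import uvloop
    except ImportError:
        return False  # [uv] extra not installed; default loop is fine
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    return True

print("uvloop active:", install_uvloop_if_available())
```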
