
Environment: Guardrails AI LLM Provider API Keys

From Leeroopedia
Knowledge Sources
Domains: Infrastructure, LLM_Validation
Last Updated: 2026-02-14 12:00 GMT

Overview

API keys and service credentials required for LLM providers, Guardrails Hub, and runtime configuration.

Description

This page documents all environment variables and credentials used across the Guardrails AI framework. The framework uses LiteLLM as its unified LLM provider abstraction, which means it inherits LiteLLM's environment variable conventions for various providers (OpenAI, Anthropic, etc.). Additionally, Guardrails has its own set of configuration variables for the Hub service, API server, logging, and runtime behavior.

Usage

Use this environment reference when setting up any Guardrails deployment that requires LLM calls, Hub validator installation, server connectivity, or telemetry. Different variables are required depending on which workflow is active.

System Requirements

  • Network: Outbound HTTPS. Needed to reach LLM APIs, the Hub API, and OTLP collectors.

Dependencies

No additional packages beyond the core `guardrails-ai` installation.

Credentials

LLM Provider Keys

  • `OPENAI_API_KEY`: OpenAI API key. Used directly in `guardrails/utils/openai_utils/v1.py` and forwarded via `x-openai-api-key` header in remote server mode.

Note: Additional LLM provider keys (e.g., `ANTHROPIC_API_KEY`, `COHERE_API_KEY`) are handled by LiteLLM. Refer to LiteLLM documentation for provider-specific variables.
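Since a missing provider key typically surfaces only later as an opaque authentication error from the provider, it can help to fail fast at startup. A minimal sketch (the `require_env` helper is ours, not part of Guardrails):

```python
import os

def require_env(*names: str) -> dict:
    """Return the named environment variables, raising if any is unset or empty."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise EnvironmentError(f"Missing required environment variables: {missing}")
    return {n: os.environ[n] for n in names}
```

For example, `require_env("OPENAI_API_KEY")` before constructing a Guard makes the failure mode explicit rather than deferred to the first LLM call.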

Guardrails Hub
  • `GR_VALIDATOR_HUB_SERVICE`: Validator Hub service endpoint (default: `https://hub.api.guardrailsai.com`).

Note: The Hub token itself is stored in the local RC file (`~/.guardrailsrc`) and managed via `guardrails configure`. It is a JWT that is checked for expiration on each use.
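For debugging, it can be useful to inspect the stored token directly. A sketch only, ASSUMING the RC file uses simple `key=value` lines with a `token` key; the actual file layout may differ, and the supported way to manage it remains `guardrails configure`:

```python
from pathlib import Path
from typing import Optional

def read_rc_token(rc_path: Path = Path.home() / ".guardrailsrc") -> Optional[str]:
    """Read the `token` entry from an RC file, assuming `key=value` lines."""
    if not rc_path.exists():
        return None
    for line in rc_path.read_text().splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "token":
            return value.strip() or None
    return None
```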

Guardrails Server

  • `GUARDRAILS_BASE_URL`: Remote server URL (default: `http://localhost:8000`). Used by `Guard.load()` to connect to a remote Guardrails API server.
  • `GUARDRAILS_API_KEY`: API key for server authentication (default: empty string).
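The resolution of these two variables with their documented defaults can be sketched as follows (the `server_settings` helper name is ours; only the variable names and defaults come from the documentation above):

```python
import os

def server_settings() -> tuple:
    """Resolve remote server connection settings using the documented defaults."""
    base_url = os.environ.get("GUARDRAILS_BASE_URL", "http://localhost:8000")
    api_key = os.environ.get("GUARDRAILS_API_KEY", "")
    return base_url, api_key
```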

Runtime Configuration

  • `GUARDRAILS_RUN_SYNC`: Force synchronous validation (default: `false`). When set to `true`, bypasses the async event loop and uses the `SequentialValidatorService`. Valid values: `true`, `false`.
  • `GUARD_HISTORY_ENABLED`: Enable/disable guard call history tracking (default: `true`). Set to `false` to reduce memory usage and API calls in server mode.
  • `GUARDRAILS_LOG_FILE_PATH`: Path for the SQLite trace log file (default: system temp directory). Used by the `TraceHandler` for call tracing.
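Where the trace log lands can be computed as below. The fallback to the system temp directory matches the documented default; the helper name is ours, and the exact file name `TraceHandler` writes inside that directory is not specified here:

```python
import os
import tempfile

def trace_log_location() -> str:
    """Return GUARDRAILS_LOG_FILE_PATH, falling back to the system temp directory."""
    return os.environ.get("GUARDRAILS_LOG_FILE_PATH", tempfile.gettempdir())
```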

OpenTelemetry

  • `OTEL_EXPORTER_OTLP_PROTOCOL`: OTLP exporter protocol (`http/protobuf` or `grpc`).
  • `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`: Specific traces endpoint URL.
  • `OTEL_EXPORTER_OTLP_ENDPOINT`: Generic OTLP endpoint URL.
  • `OTEL_EXPORTER_OTLP_HEADERS`: Headers for OTLP requests.
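Per the OpenTelemetry SDK environment variable specification, `OTEL_EXPORTER_OTLP_HEADERS` carries comma-separated `key=value` pairs. A minimal parser sketch showing the expected shape:

```python
def parse_otlp_headers(raw: str) -> dict:
    """Parse the OTLP headers format: comma-separated `key=value` pairs."""
    headers = {}
    for pair in raw.split(","):
        if not pair.strip():
            continue
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers
```

For example, `OTEL_EXPORTER_OTLP_HEADERS="api-key=abc,tenant=t1"` yields two headers on each OTLP request.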

Quick Install

# Configure Hub token interactively
guardrails configure

# Set LLM provider key
export OPENAI_API_KEY="sk-..."

# Set server connection (for remote mode)
export GUARDRAILS_BASE_URL="https://my-guardrails-server:8000"
export GUARDRAILS_API_KEY="my-api-key"

# Optional runtime tuning
export GUARDRAILS_RUN_SYNC="false"
export GUARD_HISTORY_ENABLED="true"

Code Evidence

OpenAI key retrieval from `guardrails/utils/openai_utils/v1.py:20`:

api_key = os.environ.get("OPENAI_API_KEY")

Hub service endpoint from `guardrails/hub_token/token.py:35-37`:

VALIDATOR_HUB_SERVICE = os.getenv(
    "GR_VALIDATOR_HUB_SERVICE", "https://hub.api.guardrailsai.com"
)

Sync/async mode selection from `guardrails/validator_service/__init__.py:30-37`:

def should_run_sync():
    run_sync = os.environ.get("GUARDRAILS_RUN_SYNC", "false")
    bool_values = ["true", "false"]
    if run_sync.lower() not in bool_values:
        warnings.warn(
            f"GUARDRAILS_RUN_SYNC must be one of {bool_values}! Defaulting to 'false'."
        )
    return run_sync.lower() == "true"
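To see the documented behavior end to end, the helper above can be reproduced standalone: unset defaults to the async path, `true` in any case forces sync, and any other value warns and stays async.

```python
import os
import warnings

# Standalone reproduction of the selection logic above, for experimentation.
def should_run_sync() -> bool:
    run_sync = os.environ.get("GUARDRAILS_RUN_SYNC", "false")
    bool_values = ["true", "false"]
    if run_sync.lower() not in bool_values:
        warnings.warn(
            f"GUARDRAILS_RUN_SYNC must be one of {bool_values}! Defaulting to 'false'."
        )
    return run_sync.lower() == "true"
```

Note that an invalid value falls through to `False` simply because it fails the `== "true"` comparison, matching the "Defaulting to 'false'" warning text.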

Guard history toggle from `guardrails/guard.py:1017`:

if os.environ.get("GUARD_HISTORY_ENABLED", "true").lower() == "true":
    guard_history = self._api_client.get_history(
        self.name, validation_output.call_id
    )

JWT token validation from `guardrails/hub_token/token.py:40-51`:

def get_jwt_token(rc: RC) -> Optional[str]:
    token = rc.token
    if token:
        try:
            jwt.decode(token, options={"verify_signature": False, "verify_exp": True})
        except ExpiredSignatureError:
            raise ExpiredTokenError(TOKEN_EXPIRED_MESSAGE)
        except DecodeError:
            raise InvalidTokenError(TOKEN_INVALID_MESSAGE)
    return token
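What `jwt.decode(..., verify_signature=False, verify_exp=True)` checks can be illustrated with the standard library alone: decode the payload segment and compare its `exp` claim against the current time. This is a sketch for understanding, not a substitute for the PyJWT call above:

```python
import base64
import json
import time

def token_expired(token: str) -> bool:
    """Decode a JWT payload without signature verification and check `exp`."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get("exp", float("inf")) < time.time()
```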

Common Errors

  • `Your token has expired`: The Hub JWT has expired. Run `guardrails configure` to refresh the token.
  • `Your token is invalid`: The token is corrupted or wrong. Run `guardrails configure` with a valid token from https://hub.guardrailsai.com/keys.
  • `GUARDRAILS_RUN_SYNC must be one of ['true', 'false']`: Invalid environment variable value. Set it to exactly `true` or `false` (case-insensitive).
  • `Could not obtain an event loop. Falling back to synchronous validation.`: Nested event loops. Set `GUARDRAILS_RUN_SYNC=true` or restructure the async code.
  • Empty `OPENAI_API_KEY`: The key is not set. Run `export OPENAI_API_KEY="sk-..."`.

Compatibility Notes

  • LiteLLM: All LLM provider environment variables supported by LiteLLM are implicitly supported by Guardrails. The framework delegates provider selection to LiteLLM.
  • JWT Tokens: The Hub token is validated for expiration but not for signature (using `verify_signature=False`). This is by design for offline validation.
  • Event Loop: When an event loop is already running (e.g., inside Jupyter notebooks), the async validator service falls back to synchronous mode with a warning.
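The running-loop condition behind that fallback can be detected with `asyncio.get_running_loop()`, which raises `RuntimeError` when no loop is active. A sketch of how a library can decide whether inline synchronous validation is safe:

```python
import asyncio

def loop_already_running() -> bool:
    """Return True if called from inside a running asyncio event loop."""
    try:
        asyncio.get_running_loop()
        return True
    except RuntimeError:
        return False
```

From a plain script this returns `False`; from inside a coroutine (or a Jupyter cell, which runs its own loop) it returns `True`, which is the situation that triggers the synchronous fallback.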
