
Principle: Guardrails AI Observability Configuration

From Leeroopedia
Knowledge Sources
Domains: Observability, Monitoring
Last Updated: 2026-02-14 00:00 GMT

Overview

An observability principle for configuring distributed tracing and monitoring of Guard execution in production deployments.

Description

Observability Configuration enables monitoring of Guardrails execution through OpenTelemetry integration. The framework instruments Guard calls, validation steps, LLM interactions, and individual validator executions with trace spans. Traces can be exported to any OTLP-compatible backend (Jaeger, Datadog, Grafana Tempo, etc.) for visualization and alerting.

The Settings singleton provides central configuration: use_server controls client-server routing, and disable_tracing controls telemetry collection. Environment variables configure the OTLP exporter endpoint and protocol.
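A minimal sketch of what such a central singleton might look like, assuming the two flags named above; this illustrates the singleton pattern, not Guardrails' actual source.

```python
class Settings:
    """Illustrative central-configuration singleton (hypothetical shape)."""

    _instance = None

    def __new__(cls):
        # Always hand back the one shared instance so that configuration
        # set anywhere in the process is visible everywhere else.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.use_server = False       # client-server routing
            cls._instance.disable_tracing = False  # telemetry collection
        return cls._instance


settings = Settings()
```

Because every call to `Settings()` returns the same object, toggling `settings.disable_tracing` in one module suppresses tracing for the whole process.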

Usage

Configure tracing by setting environment variables for the OTLP exporter. Optionally set settings.disable_tracing = True to suppress tracing. Use the Settings singleton for global configuration of server mode and tracing behavior.
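For example, the standard OTLP exporter environment variables (these names come from the OpenTelemetry specification; the endpoint value is a placeholder) can be set before the process starts, or programmatically:

```python
import os

# Where the OTLP exporter sends traces (example: a local collector's
# HTTP endpoint; adjust to your backend).
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"

# Wire protocol for the exporter: "http/protobuf" or "grpc".
os.environ["OTEL_EXPORTER_OTLP_PROTOCOL"] = "http/protobuf"
```

With these set, any OTLP-compatible backend listening at the endpoint will receive the exported spans.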

Theoretical Basis

The observability stack:

  1. Instrumentation: @trace decorators on Guard, Runner, and Validator methods create spans
  2. Collection: OpenTelemetry SDK collects spans into traces
  3. Export: OTLP exporter sends traces to configured backend
  4. Visualization: Backend (Jaeger, Datadog) provides dashboards and alerting
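The first three steps can be sketched in plain Python. The `trace` decorator, the `collected_spans` list, and `export` below are simplified stand-ins for the real decorator, the OpenTelemetry SDK's span processor, and the OTLP exporter respectively; the function names are hypothetical.

```python
import functools

collected_spans = []  # stand-in for the SDK's span collection (step 2)


def trace(name):
    """Minimal stand-in for an @trace decorator (step 1): records one
    span, named after the instrumented operation, per call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            collected_spans.append(name)  # span ends when the call returns
            return result
        return wrapper
    return decorator


@trace("Guard.__call__")
def run_guard(text):
    return validate(text)


@trace("Validator._validate")
def validate(text):
    return len(text) > 0


def export():
    # Stand-in for the OTLP exporter (step 3): ships collected spans
    # to the configured backend.
    return list(collected_spans)
```

Calling `run_guard("hi")` records the inner validator span first (it finishes first), then the enclosing guard span, which is the order a real exporter would see them complete.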

Traced operations include:

  • Guard.__call__ - Full guard execution
  • Runner.step - Each validation/reask step
  • Validator._validate - Individual validator execution
  • LLM calls - External LLM API invocations
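The nesting of these operations can be illustrated with a toy tracer; the span names mirror the list above, while the `span` context manager and `llm_call` label are illustrative, not Guardrails APIs.

```python
from contextlib import contextmanager

spans = []   # finished spans as (name, parent_name), in completion order
_stack = []  # currently open spans, innermost last


@contextmanager
def span(name):
    # Parent is whatever span is open when this one starts.
    parent = _stack[-1] if _stack else None
    _stack.append(name)
    try:
        yield
    finally:
        _stack.pop()
        spans.append((name, parent))


# Hypothetical call pattern: one guard run with one step that makes an
# LLM call and then runs a validator.
with span("Guard.__call__"):
    with span("Runner.step"):
        with span("llm_call"):
            pass
        with span("Validator._validate"):
            pass
```

The resulting parent links reproduce the trace tree a backend would render: the LLM call and validator hang off the step, and the step hangs off the guard.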

Related Pages

Implemented By
