Principle: Guardrails AI Observability Configuration
| Knowledge Sources | |
|---|---|
| Domains | Observability, Monitoring |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
An observability principle for configuring distributed tracing and monitoring of Guard execution in production deployments.
Description
Observability Configuration enables monitoring of Guardrails execution through OpenTelemetry integration. The framework instruments Guard calls, validation steps, LLM interactions, and individual validator executions with trace spans. Traces can be exported to any OTLP-compatible backend (Jaeger, Datadog, Grafana Tempo, etc.) for visualization and alerting.
The Settings singleton provides central configuration: use_server controls client-server routing, and disable_tracing controls telemetry collection. Environment variables configure the OTLP exporter endpoint and protocol.
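As a rough illustration of the singleton pattern described above (a simplified stand-in, not the real Guardrails Settings class; only the use_server and disable_tracing attribute names come from the text):

```python
# Simplified sketch of a settings-singleton pattern (hypothetical stand-in
# for the Guardrails Settings class; attribute names taken from the text above).
class Settings:
    _instance = None

    def __new__(cls):
        # Always return the same process-wide instance.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.use_server = False       # client-server routing
            cls._instance.disable_tracing = False  # telemetry collection
        return cls._instance

settings = Settings()
settings.disable_tracing = True  # suppress tracing globally

# Every import site sees the same configuration object.
assert Settings() is settings
```

Because every call to Settings() yields the same instance, a flag flipped anywhere in the process takes effect everywhere.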
Usage
Configure tracing by setting environment variables for the OTLP exporter. Optionally set settings.disable_tracing = True to suppress tracing. Use the Settings singleton for global configuration of server mode and tracing behavior.
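For example, the standard OpenTelemetry exporter environment variables can point traces at a local collector (the endpoint and service name here are illustrative values, not Guardrails defaults):

```shell
# Standard OpenTelemetry OTLP exporter settings (values are examples).
export OTEL_SERVICE_NAME="guardrails-app"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"   # OTLP/HTTP collector
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"           # or "grpc"
export OTEL_TRACES_EXPORTER="otlp"
```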
Theoretical Basis
The observability stack:
- Instrumentation: @trace decorators on Guard, Runner, and Validator methods create spans
- Collection: OpenTelemetry SDK collects spans into traces
- Export: OTLP exporter sends traces to configured backend
- Visualization: Backend (Jaeger, Datadog) provides dashboards and alerting
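The collection and export stages above can be wired with the OpenTelemetry Python SDK roughly as follows (a configuration sketch, assuming the opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed; the endpoint is an example value):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Collection: an SDK TracerProvider gathers spans into traces.
provider = TracerProvider()

# Export: batch spans and ship them to an OTLP/HTTP backend
# (Jaeger, Datadog, and Grafana Tempo all accept OTLP).
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

# Instrumentation: spans created via this tracer flow through the pipeline above.
tracer = trace.get_tracer("guardrails-demo")
with tracer.start_as_current_span("Guard.__call__"):
    pass  # traced work happens here
```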
Traced operations include:
- Guard.__call__ - Full guard execution
- Runner.step - Each validation/reask step
- Validator._validate - Individual validator execution
- LLM calls - External LLM API invocations