Principle: Guardrails AI Validation Outcome Handling
| Knowledge Sources | |
|---|---|
| Domains | Validation, Data_Structure |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
A result encapsulation principle that provides a structured container for LLM validation outcomes including both success data and failure diagnostics.
Description
Validation Outcome Handling defines how the results of a Guard execution are packaged and consumed. Rather than returning raw LLM output, the framework wraps results in a ValidationOutcome object that carries: the original raw output, the validated (and potentially fixed) output, a boolean pass/fail indicator, any remaining re-ask information if the re-ask budget was exhausted, error messages, and per-validator summaries. This enables callers to make informed decisions about how to use the output.
This principle follows the Result Object pattern (similar to Rust's Result type), where both success and failure paths produce structured, inspectable objects rather than using exceptions for control flow.
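The field list above can be sketched as a small dataclass. This is a hypothetical illustration of the result-object shape, not the exact Guardrails API: the field names follow the prose (raw output, validated output, pass/fail flag, re-ask info, error, per-validator summaries) but are assumptions.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

# Hypothetical sketch of the ValidationOutcome container described above.
# Field names mirror the text, not necessarily the real library.
@dataclass
class OutcomeSketch:
    raw_llm_output: str                      # original, unvalidated LLM text
    validated_output: Optional[Any] = None   # validated (possibly fixed) output
    validation_passed: bool = False          # overall pass/fail indicator
    reask: Optional[Any] = None              # remaining re-ask info if budget exhausted
    error: Optional[str] = None              # fatal execution error, if any
    validator_summaries: list = field(default_factory=list)  # per-validator details

# Both paths yield a structured, inspectable object -- no exceptions for control flow.
ok = OutcomeSketch(raw_llm_output='{"x": 1}',
                   validated_output={"x": 1},
                   validation_passed=True)
bad = OutcomeSketch(raw_llm_output="oops", error="provider timeout")
```

Because failure is a value rather than a raised exception, callers can pass the object around, log it, or branch on it without try/except scaffolding.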
Usage
Apply this principle when consuming the result of any Guard execution. Always check `validation_passed` before using `validated_output`. If validation failed, inspect `reask` for details about what went wrong, or `error` for fatal errors.
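The consumption order described above (pass flag first, then re-ask diagnostics, then fatal errors) can be written as one small helper. A hedged sketch: `consume` is a hypothetical function name, and it only assumes an object exposing the attributes named in the text.

```python
from types import SimpleNamespace

def consume(outcome):
    """Branch on a validation outcome in the order recommended above."""
    if outcome.validation_passed:
        return ("ok", outcome.validated_output)   # safe to use downstream
    if getattr(outcome, "reask", None):
        return ("reask", outcome.reask)           # re-ask budget exhausted; inspect failures
    if getattr(outcome, "error", None):
        return ("error", outcome.error)           # fatal error during execution
    return ("unknown", outcome.raw_llm_output)    # nothing diagnostic; fall back to raw text

# Stand-in outcome object for illustration (not produced by a real Guard run).
demo = SimpleNamespace(validation_passed=False, validated_output=None,
                       reask="length check failed", error=None,
                       raw_llm_output="...")
```

Checking `validation_passed` before anything else matters because `validated_output` may be populated even on partial failure; the flag is the authoritative signal.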
Theoretical Basis
The outcome structure provides a decision tree for consumers:
```python
# Pseudocode for consuming a ValidationOutcome
outcome = guard(model=..., messages=...)
if outcome.validation_passed:
    use(outcome.validated_output)  # safe to use
elif outcome.reask:
    # Validation failed and the re-ask budget was exhausted; inspect what failed
    log(outcome.reask)
elif outcome.error:
    # Fatal error during execution
    handle_error(outcome.error)
```