Implementation: Guardrails AI ValidationOutcome
| Knowledge Sources | |
|---|---|
| Domains | Validation, Data_Structure |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
Concrete data class, provided by the guardrails package, that encapsulates the results of a Guard validation execution.
Description
The ValidationOutcome class is a Pydantic model (generic over output type OT) that inherits from IValidationOutcome (the API client interface) and ArbitraryModel. It contains the raw LLM output, the validated/fixed output, a pass/fail boolean, optional reask data, error messages, and per-validator summaries. It supports iteration as a tuple and can be created from a Guard history Call object via the from_guard_history classmethod.
Usage
Consume this object as the return value of Guard.__call__() or Guard.parse(). Access .validated_output for the result, .validation_passed for the status, and .raw_llm_output for the original LLM text.
Code Reference
Source Location
- Repository: guardrails
- File: guardrails/classes/validation_outcome.py
- Lines: L19-135
Signature
class ValidationOutcome(IValidationOutcome, ArbitraryModel, Generic[OT]):
"""The final output from a Guard execution.
Attributes:
call_id: The id of the Call that produced this ValidationOutcome.
raw_llm_output: The raw, unchanged output from the LLM call.
validated_output: The validated, and potentially fixed, output.
reask: If validation fails and reasks are exhausted, the final reask.
validation_passed: Whether the LLM output passed validation.
error: Error message if validation failed.
"""
validation_summaries: Optional[List[ValidationSummary]] = Field(default=[])
raw_llm_output: Optional[str] = Field(default=None)
validated_output: Optional[OT] = Field(default=None)
reask: Optional[ReAsk] = Field(default=None)
validation_passed: bool = Field(...)
error: Optional[str] = Field(default=None)
Import
from guardrails.classes.validation_outcome import ValidationOutcome
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| call_id | str | Yes | The ID of the Call that produced this outcome |
| raw_llm_output | Optional[str] | No | The raw, unchanged output from the LLM call |
| validated_output | Optional[OT] | No | The validated/fixed output (typed by Guard's generic) |
| reask | Optional[ReAsk] | No | Final reask if budget exhausted |
| validation_passed | bool | Yes | Whether all validators passed |
| error | Optional[str] | No | Error message if execution failed |
| validation_summaries | Optional[List[ValidationSummary]] | No | Per-validator result summaries |
Outputs
| Name | Type | Description |
|---|---|---|
| .validated_output | Optional[OT] | Access the validated result |
| .validation_passed | bool | Check if validation succeeded |
| .raw_llm_output | Optional[str] | Access original LLM text |
| .reask | Optional[ReAsk] | Inspect the final reask (populated when validation fails and the reask budget is exhausted) |
| .error | Optional[str] | Check for fatal errors |
| iter(outcome) | tuple | Iterate as (raw_llm_output, validated_output, reask, validation_passed, error) |
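The contract above can be illustrated with a minimal, self-contained sketch. Note this is a plain-dataclass stand-in, not the real class (which is a Pydantic model defined in guardrails/classes/validation_outcome.py); the field names and tuple order mirror the tables in this section, but OutcomeSketch itself is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Iterator, List, Optional

@dataclass
class OutcomeSketch:
    """Illustrative stand-in mirroring ValidationOutcome's fields (not the real class)."""
    call_id: str
    validation_passed: bool
    raw_llm_output: Optional[str] = None
    validated_output: Optional[Any] = None
    reask: Optional[Any] = None
    error: Optional[str] = None
    validation_summaries: List[Any] = field(default_factory=list)

    def __iter__(self) -> Iterator[Any]:
        # Mirrors the documented tuple order:
        # (raw_llm_output, validated_output, reask, validation_passed, error)
        return iter(
            (self.raw_llm_output, self.validated_output, self.reask,
             self.validation_passed, self.error)
        )

outcome = OutcomeSketch(call_id="abc123", validation_passed=True,
                        raw_llm_output="42", validated_output="42")
raw, validated, reask, passed, error = outcome
print(passed, validated)  # True 42
```

The iteration support is what makes the five-element tuple unpacking shown in the usage examples possible.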
Usage Examples
Basic Outcome Handling
from guardrails import Guard
from guardrails.hub import RegexMatch
guard = Guard().use(RegexMatch(regex=r"\d+"))
result = guard(model="gpt-4o-mini", messages=[{"role": "user", "content": "A number please."}])
if result.validation_passed:
print(f"Valid output: {result.validated_output}")
else:
print(f"Validation failed: {result.error}")
if result.reask:
print(f"Last reask: {result.reask}")
Tuple Unpacking
raw, validated, reask, passed, error = result
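Whichever access style you use, a sensible check order when consuming an outcome is: fatal error first, then validation status (with the reask for diagnostics), then the validated value. The SimpleNamespace below is a hypothetical stand-in for a real outcome, which would come from guard(...); only the attribute names are taken from this page.

```python
from types import SimpleNamespace

# Hypothetical stand-in for a ValidationOutcome (a real one comes from guard(...)).
result = SimpleNamespace(
    raw_llm_output="42",
    validated_output="42",
    reask=None,
    validation_passed=True,
    error=None,
)

def summarize(outcome):
    """Check order: fatal error first, then pass/fail (with reask), then the value."""
    if outcome.error:
        return f"error: {outcome.error}"
    if not outcome.validation_passed:
        return f"failed; final reask: {outcome.reask}"
    return f"ok: {outcome.validated_output}"

print(summarize(result))  # ok: 42
```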