
Implementation:Guardrails ai Guardrails Guard Call

From Leeroopedia
Knowledge Sources
Domains Validation, LLM_Integration
Last Updated 2026-02-14 00:00 GMT

Overview

A concrete method for calling an LLM with automatic output validation, provided by the guardrails package.

Description

The Guard.__call__ method is the primary entry point for validated LLM interaction. It accepts an LLM API callable (or a model string for LiteLLM), chat messages, and validation parameters. Internally, it creates a Runner (or StreamRunner if stream=True) that manages the LLM call, output validation, and optional re-ask loop. The result is a ValidationOutcome containing the raw output, validated output, and validation metadata.
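The validate-and-reask loop that the Runner manages can be sketched in plain Python. Everything below (`fake_llm`, `is_valid`, the loop itself) is a simplified stand-in for illustration, not the library's actual Runner code:

```python
# Simplified sketch of the validate-and-reask loop Runner manages.
# fake_llm and is_valid are stand-ins for the real LLM call and validators.

def fake_llm(messages):
    # Pretend the model answers badly once, then correctly on the re-ask.
    last = messages[-1]["content"]
    return "42" if "re-ask" in last else "not a number"

def is_valid(output):
    return output.isdigit()

def call_with_reasks(messages, num_reasks=1):
    """Call the LLM, validate, and re-ask up to num_reasks times on failure."""
    raw = fake_llm(messages)
    for _ in range(num_reasks):
        if is_valid(raw):
            break
        # On validation failure, append a corrective message and try again.
        messages = messages + [{"role": "user", "content": "re-ask: give only digits"}]
        raw = fake_llm(messages)
    return {"raw_llm_output": raw, "validation_passed": is_valid(raw)}

outcome = call_with_reasks(
    [{"role": "user", "content": "Give me a number."}], num_reasks=2
)
```

In the real method this loop also handles schema re-asks (see full_schema_reask below) and returns a full ValidationOutcome rather than a dict.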

Usage

Use this method when you want Guardrails to both call the LLM and validate the output. For pre-existing LLM output that just needs validation, use Guard.parse() instead.

Code Reference

Source Location

  • Repository: guardrails
  • File: guardrails/guard.py
  • Lines: L745-795

Signature

@trace(name="/guard_call", origin="Guard.__call__")
def __call__(
    self,
    llm_api: Optional[Callable] = None,
    *args,
    prompt_params: Optional[Dict] = None,
    num_reasks: Optional[int] = 1,
    messages: Optional[List[Dict]] = None,
    metadata: Optional[Dict] = None,
    full_schema_reask: Optional[bool] = None,
    **kwargs,
) -> Union[ValidationOutcome[OT], Iterator[ValidationOutcome[OT]]]:
    """Call the LLM and validate the output.

    Args:
        llm_api: The LLM API to call (e.g. openai.completions.create)
        prompt_params: The parameters to pass to the prompt.format() method.
        num_reasks: The max times to re-ask the LLM for invalid output.
        messages: The message history to pass to the LLM.
        metadata: Metadata to pass to the validators.
        full_schema_reask: When reasking, whether to regenerate the full schema
                           or just the incorrect values.

    Returns:
        ValidationOutcome
    """

Import

from guardrails import Guard

I/O Contract

Inputs

Name Type Required Description
llm_api Optional[Callable] No LLM callable (e.g. litellm.completion); or pass model= as kwarg
messages Optional[List[Dict]] Yes Chat message history (required either here or in Guard constructor)
num_reasks Optional[int] No Max re-ask attempts on validation failure (default 1)
metadata Optional[Dict] No Metadata passed to validators during execution
prompt_params Optional[Dict] No Parameters for prompt template formatting
full_schema_reask Optional[bool] No Regenerate full schema or just incorrect values on reask
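The llm_api row above notes two ways to specify the model: pass a callable, or pass model= as a keyword (routed through LiteLLM). A minimal dispatch sketch of that convention, using a hypothetical `resolve_llm` helper rather than the library's internals:

```python
def resolve_llm(llm_api=None, model=None):
    """Hypothetical dispatch: prefer an explicit callable, else build one from model=."""
    if llm_api is not None:
        return llm_api
    if model is not None:
        # The real library routes a model string through litellm.completion;
        # this lambda is a placeholder to show the shape of the dispatch.
        return lambda messages: f"[{model}] would answer: {messages[-1]['content']}"
    raise ValueError("Provide either llm_api or model")

fn = resolve_llm(model="gpt-4o-mini")
```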

Outputs

Name Type Description
result ValidationOutcome[OT] Contains raw_llm_output, validated_output, validation_passed, reask, error
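For orientation, the fields listed above can be mirrored with a plain dataclass. This is an illustrative stand-in for what callers typically inspect, not the real ValidationOutcome class:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class OutcomeSketch:
    """Illustrative stand-in for the fields of ValidationOutcome."""
    raw_llm_output: Optional[str]      # exactly what the LLM returned
    validated_output: Optional[Any]    # output after validators ran (possibly fixed)
    validation_passed: bool            # True if all validators passed
    reask: Optional[Any] = None        # populated when a re-ask was issued
    error: Optional[str] = None        # error message, if the call failed

outcome = OutcomeSketch(
    raw_llm_output="The answer is 42",
    validated_output="42",
    validation_passed=True,
)
```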

Usage Examples

Basic Validated LLM Call

from guardrails import Guard
from guardrails.hub import RegexMatch

guard = Guard().use(RegexMatch(regex=r"\d+"))

result = guard(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Give me a number."}],
    num_reasks=2,
)

print(result.validated_output)
print(result.validation_passed)

With Metadata

result = guard(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this text."}],
    metadata={"original_text": "The text to check against..."},
)
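As noted in the description, passing stream=True switches to a StreamRunner and the call returns an iterator of ValidationOutcome objects instead of a single result. The consumption pattern can be simulated with a plain generator; `fake_stream` and its dict chunks are stand-ins, not the library's streaming API:

```python
# Simulated streaming consumption: a generator standing in for the
# Iterator[ValidationOutcome] that guard(..., stream=True) would return.

def fake_stream():
    # Each chunk stands in for a ValidationOutcome produced mid-stream.
    for partial in ["4", "42"]:
        yield {"validated_output": partial, "validation_passed": True}

chunks = [c["validated_output"] for c in fake_stream()]
final = chunks[-1]  # the last chunk carries the final validated output
```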
