Implementation: Guardrails Guard Call (Stream)
| Knowledge Sources | |
|---|---|
| Domains | Streaming, LLM_Integration |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
Concrete method, provided by the guardrails package, for invoking a Guard in streaming mode.
Description
This is Guard.__call__ with stream=True passed as a keyword argument. When stream is True, the _exec method creates a StreamRunner instead of a regular Runner; the StreamRunner yields a ValidationOutcome object for each validated chunk. Note that num_reasks should be 0 for streaming, as re-asking is not supported.
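The dispatch described above can be illustrated with a minimal, self-contained sketch. The Outcome class and the two runner functions below are hypothetical stand-ins for guardrails' ValidationOutcome, Runner, and StreamRunner, used only to show the shape of the stream=True branch, not the library's actual implementation:

```python
from dataclasses import dataclass
from typing import Iterator, List, Union

# Hypothetical stand-in for guardrails' ValidationOutcome.
@dataclass
class Outcome:
    validated_output: str
    validation_passed: bool

def _run_full(chunks: List[str]) -> Outcome:
    # Regular-Runner analogue: validate the whole completion at once.
    return Outcome("".join(chunks), True)

def _run_stream(chunks: List[str]) -> Iterator[Outcome]:
    # StreamRunner analogue: yield one outcome per validated chunk.
    for chunk in chunks:
        yield Outcome(chunk, True)

def call(chunks: List[str], stream: bool = False) -> Union[Outcome, Iterator[Outcome]]:
    # When stream=True, return a generator of per-chunk outcomes
    # instead of a single outcome for the full completion.
    return _run_stream(chunks) if stream else _run_full(chunks)

parts = ["Hello, ", "world"]
assert call(parts).validated_output == "Hello, world"
assert [o.validated_output for o in call(parts, stream=True)] == parts
```

The key design point is that the return type changes with the stream flag, which is why the real signature's return annotation is a Union of a single ValidationOutcome and an Iterator of them.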
Usage
Call the Guard with stream=True and iterate over the returned generator to receive validated chunks in real time.
Code Reference
Source Location
- Repository: guardrails
- File: guardrails/guard.py
- Lines: L700-721 (StreamRunner creation), L745-795 (__call__ dispatch)
Signature
```python
@trace(name="/guard_call", origin="Guard.__call__")
def __call__(
    self,
    llm_api: Optional[Callable] = None,
    *args,
    prompt_params: Optional[Dict] = None,
    num_reasks: Optional[int] = 1,
    messages: Optional[List[Dict]] = None,
    metadata: Optional[Dict] = None,
    full_schema_reask: Optional[bool] = None,
    **kwargs,  # stream=True passed here
) -> Union[ValidationOutcome[OT], Iterator[ValidationOutcome[OT]]]:
```
Import
```python
from guardrails import Guard
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| llm_api | Optional[Callable] | No | LLM callable with streaming support, or model= kwarg |
| messages | Optional[List[Dict]] | Yes | Chat messages |
| stream | bool | Yes | Must be True (passed via kwargs) |
| num_reasks | Optional[int] | No | Should be 0 for streaming (reasks not supported) |
Outputs
| Name | Type | Description |
|---|---|---|
| generator | Iterator[ValidationOutcome[OT]] | Generator yielding ValidationOutcome for each validated chunk |
Usage Examples
Streaming Validation
```python
from guardrails import Guard
from guardrails.hub import ToxicLanguage

guard = Guard().use(ToxicLanguage(on_fail="fix"))

stream = guard(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a poem about nature."}],
    stream=True,
)
for chunk in stream:
    if chunk.validation_passed:
        print(chunk.validated_output, end="", flush=True)
```
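A common follow-on pattern is to accumulate the validated text into a single string while skipping chunks that failed validation. The sketch below is self-contained: Chunk is a hypothetical stand-in for ValidationOutcome (using the same two attributes shown above), and demo replaces a real streaming guard call:

```python
from dataclasses import dataclass
from typing import Iterable

# Hypothetical stand-in for guardrails' ValidationOutcome,
# exposing the two attributes used in the example above.
@dataclass
class Chunk:
    validated_output: str
    validation_passed: bool

def collect(stream: Iterable[Chunk]) -> str:
    # Keep only chunks that passed validation and join their text.
    return "".join(c.validated_output for c in stream if c.validation_passed)

# Stand-in for the generator returned by guard(..., stream=True).
demo = [Chunk("Leaves ", True), Chunk("<rejected>", False), Chunk("fall.", True)]
assert collect(demo) == "Leaves fall."
```

With a real guard, the same collect function can consume the generator returned by the streaming call directly.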
Related Pages
Implements Principle
Uses Heuristic