Principle: ProtectAI LLM Guard Output Scanning
| Knowledge Sources | |
|---|---|
| Domains | NLP, Security, Output_Validation |
| Last Updated | 2026-02-14 12:00 GMT |
Overview
A sequential pipeline pattern that validates and sanitizes LLM-generated outputs by applying a configurable chain of security scanners to detect policy violations, hallucinations, and sensitive data leakage.
Description
Output scanning is the process of running LLM-generated text through one or more security scanners to detect risks such as PII leakage, toxic content, refusal patterns, irrelevant responses, and biased language. Unlike input scanners, output scanners receive both the original prompt and the model output, enabling context-aware validation (e.g., checking relevance between prompt and response).
Each scanner in the pipeline implements a dual-argument interface: scan(prompt, output) -> (str, bool, float), returning the sanitized output, a validity flag, and a risk score. The pipeline follows the same sequential pattern as prompt scanning, with optional fail-fast behavior.
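As an illustration, a minimal custom scanner conforming to this interface might look like the sketch below. The class name and redaction logic are hypothetical, not part of LLM Guard; only the scan(prompt, output) signature is taken from the interface described above.

# Illustrative scanner implementing the dual-argument interface.
# EmailLeakScanner and its regex are assumptions for this sketch.
import re

class EmailLeakScanner:
    """Flags and redacts email addresses leaked into model output."""

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def scan(self, prompt: str, output: str) -> tuple[str, bool, float]:
        matches = self.EMAIL_RE.findall(output)
        sanitized = self.EMAIL_RE.sub("[REDACTED_EMAIL]", output)
        is_valid = not matches          # valid only if nothing leaked
        risk_score = 1.0 if matches else 0.0
        return sanitized, is_valid, risk_score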
Usage
Use this principle after receiving a response from an LLM, before returning the result to the end user. Output scanning is essential for preventing sensitive data disclosure, detecting model hallucinations, and ensuring response quality and safety compliance.
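With the llm-guard package, wiring this into a response path looks roughly like the following sketch. The scanner selection is illustrative, and the prompt/response variables are placeholders; consult the library's documentation for exact constructor parameters.

# Sketch of post-response output scanning with llm-guard.
from llm_guard import scan_output
from llm_guard.output_scanners import NoRefusal, Relevance, Sensitive

prompt = "Summarize our refund policy."          # placeholder
response_text = "Our refund policy states ..."   # placeholder LLM output

scanners = [NoRefusal(), Relevance(), Sensitive()]

sanitized_response, results_valid, results_score = scan_output(
    scanners, prompt, response_text
)

# Block the response before it reaches the end user if any scanner failed.
if not all(results_valid.values()):
    raise ValueError(f"Output failed scanning, scores: {results_score}")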
Theoretical Basis
The core algorithm mirrors prompt scanning but with dual inputs:
# Pseudocode for the output scanning pipeline
def scan_output_pipeline(prompt, output, scanners, fail_fast=False):
    sanitized = output
    results = {}
    for scanner in scanners:
        # Each scanner sees the original prompt plus the output as
        # sanitized by any earlier scanners in the chain.
        sanitized, is_valid, risk_score = scanner.scan(prompt, sanitized)
        results[type(scanner).__name__] = (is_valid, risk_score)
        if fail_fast and not is_valid:
            break
    return sanitized, results
The key difference from prompt scanning is that output scanners receive the original prompt as context, enabling relationship-aware checks (e.g., relevance scoring via embedding similarity, deanonymization using vault state).
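For instance, a relevance check can be approximated by embedding the prompt and the response and comparing cosine similarity. The sketch below uses sentence-transformers with an arbitrary threshold; it is a minimal approximation, not LLM Guard's actual Relevance scanner, which may use different models and scoring.

# Sketch of a relevance scanner via embedding similarity.
# SimpleRelevanceScanner, the model choice, and the 0.5 threshold
# are assumptions for illustration only.
from sentence_transformers import SentenceTransformer, util

class SimpleRelevanceScanner:
    def __init__(self, threshold: float = 0.5):
        self._model = SentenceTransformer("all-MiniLM-L6-v2")
        self._threshold = threshold

    def scan(self, prompt: str, output: str) -> tuple[str, bool, float]:
        emb = self._model.encode([prompt, output], convert_to_tensor=True)
        similarity = float(util.cos_sim(emb[0], emb[1]))
        risk_score = max(0.0, 1.0 - similarity)  # low similarity -> high risk
        return output, similarity >= self._threshold, risk_score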