
Implementation:Protectai Llm guard Scan prompt

From Leeroopedia
Knowledge Sources
Domains: NLP, Security, Input_Validation
Last Updated: 2026-02-14 12:00 GMT

Overview

A concrete function that sequentially scans a user prompt through a list of input scanners provided by the LLM Guard library.

Description

The scan_prompt function is the primary entry point for input scanning in LLM Guard. It iterates over a list of InputScanner instances, calling each scanner's scan method with the progressively sanitized prompt. It collects per-scanner validity flags and risk scores, and optionally stops early if fail_fast is enabled and a scanner fails.
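The control flow described above can be sketched as a simplified, hypothetical reimplementation. This is not the library's actual code (the real function also handles logging and timing), and `UppercaseBlocker` is a toy scanner invented for illustration, not part of LLM Guard:

```python
# Simplified sketch of scan_prompt's control flow (illustrative, not the library's code).

def scan_prompt_sketch(scanners, prompt, fail_fast=False):
    sanitized = prompt
    results_valid, results_score = {}, {}
    for scanner in scanners:
        name = type(scanner).__name__
        # Each scanner receives the output of the previous one (progressive sanitization).
        sanitized, is_valid, risk_score = scanner.scan(sanitized)
        results_valid[name] = is_valid
        results_score[name] = risk_score
        if fail_fast and not is_valid:
            break  # skip remaining scanners on the first failure
    return sanitized, results_valid, results_score


class UppercaseBlocker:
    """Toy scanner: flags prompts written entirely in uppercase."""

    def scan(self, prompt):
        shouting = prompt.isupper()
        return prompt, not shouting, 1.0 if shouting else 0.0


sanitized, valid, scores = scan_prompt_sketch([UppercaseBlocker()], "hello")
```

Note how the sanitized prompt, not the original, is what each subsequent scanner sees; this matters when an early scanner such as an anonymizer rewrites the text.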

Usage

Import this function when you need to run a prompt through a pipeline of configured input scanners before passing it to an LLM. This is the standard way to scan prompts in LLM Guard.

Code Reference

Source Location

  • Repository: llm-guard
  • File: llm_guard/evaluate.py
  • Lines: L23-73

Signature

def scan_prompt(
    scanners: list[InputScanner],
    prompt: str,
    fail_fast: bool = False,
) -> tuple[str, dict[str, bool], dict[str, float]]:
    """
    Scans a given prompt using the provided scanners.

    Args:
        scanners: A list of scanner objects inheriting from InputScanner.
        prompt: The input prompt string to be scanned.
        fail_fast: Stop scanning after the first scanner fails. Default False.

    Returns:
        A tuple containing:
            - The processed prompt string after applying all scanners.
            - A dictionary mapping scanner names to boolean validity flags.
            - A dictionary mapping scanner names to float risk scores (0=no risk, 1=high risk).
    """

Import

from llm_guard import scan_prompt

I/O Contract

Inputs

Name Type Required Description
scanners list[InputScanner] Yes List of configured input scanner instances
prompt str Yes Raw user prompt string to scan
fail_fast bool No Stop at first failing scanner (default: False)

Outputs

Name Type Description
sanitized_prompt str Cleaned prompt after all scanners applied
results_valid dict[str, bool] Scanner name to pass/fail mapping
results_score dict[str, float] Scanner name to risk score mapping (0.0-1.0)
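A common way to act on the two result dictionaries is to reject the prompt when any scanner fails and report the highest-scoring scanner. The dictionary contents below are hypothetical values of the shape described in the table, not real scanner output:

```python
# Hypothetical outputs matching the contract above (names and values illustrative).
results_valid = {"Anonymize": True, "Toxicity": True, "PromptInjection": False}
results_score = {"Anonymize": 0.0, "Toxicity": 0.1, "PromptInjection": 0.95}

riskiest = None
if not all(results_valid.values()):
    # Report the scanner with the highest risk score (0.0 = no risk, 1.0 = high risk).
    riskiest = max(results_score, key=results_score.get)
    print(f"Blocked: {riskiest} scored {results_score[riskiest]:.2f}")
```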

Usage Examples

Basic Prompt Scanning

from llm_guard import scan_prompt
from llm_guard.input_scanners import Anonymize, Toxicity, PromptInjection
from llm_guard.vault import Vault

# 1. Configure scanners
vault = Vault()
input_scanners = [
    Anonymize(vault),
    Toxicity(threshold=0.5),
    PromptInjection(threshold=0.92),
]

# 2. Scan the prompt
prompt = "My name is John Smith and my email is john@example.com"
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)

# 3. Check results
if all(results_valid.values()):
    print("Prompt is safe:", sanitized_prompt)
else:
    print("Prompt failed scanners:", {k: v for k, v in results_valid.items() if not v})

Fail-Fast Mode

from llm_guard import scan_prompt

# Reuses input_scanners and prompt from the Basic Prompt Scanning example above.
# Stop scanning at the first failure:
sanitized_prompt, results_valid, results_score = scan_prompt(
    input_scanners, prompt, fail_fast=True
)
# results_valid will only contain entries for scanners run up to and including
# the first failure; later scanners are skipped entirely.
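Because fail-fast truncates the results at the first failure, the failing scanner can be identified as the single invalid entry. The dictionary below is a hypothetical fail-fast result, not real scanner output:

```python
# Hypothetical fail-fast result: only scanners up to the first failure appear.
results_valid = {"Anonymize": True, "Toxicity": False}

# The only failing entry is the scanner that stopped the pipeline.
failed = [name for name, ok in results_valid.items() if not ok]
first_failure = failed[0] if failed else None
print("Stopped by:", first_failure)
```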

