
Implementation:Protectai Llm guard Input Scanner Base

From Leeroopedia
Knowledge Sources
Domains Software_Design, Security, Plugin_Architecture
Last Updated 2026-02-14 12:00 GMT

Overview

Abstract base class defining the input scanner interface that all LLM Guard input scanners must implement.

Description

The Scanner base class in llm_guard.input_scanners.base defines the interface contract for all input scanners. Subclasses must implement a single scan method that takes a prompt string and returns a tuple of (sanitized_prompt, is_valid, risk_score).

Usage

Subclass this base class when creating custom input scanners for LLM Guard.

Code Reference

Source Location

  • Repository: llm-guard
  • File: llm_guard/input_scanners/base.py

Signature

class Scanner:
    def scan(self, prompt: str) -> tuple[str, bool, float]:
        """
        Scan a prompt string.

        Args:
            prompt: The input text to scan.

        Returns:
            tuple: (sanitized_prompt, is_valid, risk_score)
                - sanitized_prompt: The processed prompt
                - is_valid: True if prompt passes the scanner
                - risk_score: Float in [0,1] or -1 if not applicable
        """
        raise NotImplementedError

Import

from llm_guard.input_scanners.base import Scanner

I/O Contract

Inputs

Name Type Required Description
prompt str Yes Input text to scan

Outputs

Name Type Description
sanitized_prompt str Processed prompt (may be modified by scanner)
is_valid bool True if prompt passes, False if fails
risk_score float Risk score [0,1] or -1 if not applicable
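To make the contract concrete, here is a minimal pass-through scanner written as a standalone sketch. It deliberately does not import llm_guard; in real code you would subclass Scanner from llm_guard.input_scanners.base instead, but the returned tuple has the same shape either way.

```python
class PassThroughScanner:
    """Illustrative scanner that always accepts the prompt unchanged."""

    def scan(self, prompt: str) -> tuple[str, bool, float]:
        # No transformation and no risk detected; -1.0 signals "not applicable".
        return prompt, True, -1.0


scanner = PassThroughScanner()
sanitized, is_valid, risk = scanner.scan("Hello, world!")
```

Because the tuple shape is fixed, callers can unpack any scanner's result the same way regardless of which concrete scanner produced it.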

Usage Examples

Custom Scanner Implementation

from llm_guard.input_scanners.base import Scanner

class CustomKeywordScanner(Scanner):
    def __init__(self, banned_words: list[str]):
        self._banned_words = banned_words

    def scan(self, prompt: str) -> tuple[str, bool, float]:
        prompt_lower = prompt.lower()
        for word in self._banned_words:
            if word.lower() in prompt_lower:
                return prompt, False, 1.0
        return prompt, True, -1.0

# Use in a pipeline: scan_prompt returns the sanitized prompt plus
# per-scanner dicts of validity flags and risk scores, keyed by scanner name
from llm_guard import scan_prompt
scanner = CustomKeywordScanner(["classified", "secret"])
sanitized, results_valid, results_score = scan_prompt([scanner], "This is classified info")
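When composing several scanners by hand, the tuple contract is all you need: feed each scanner's sanitized output into the next and combine the validity flags. The sketch below is a hypothetical stand-in for the pipeline (scan_prompt in llm_guard plays this role in practice); run_scanners and KeywordScanner are illustrative names, not part of the library.

```python
class KeywordScanner:
    """Illustrative scanner that rejects prompts containing banned words."""

    def __init__(self, banned_words):
        self._banned_words = [w.lower() for w in banned_words]

    def scan(self, prompt: str) -> tuple[str, bool, float]:
        lowered = prompt.lower()
        if any(word in lowered for word in self._banned_words):
            return prompt, False, 1.0
        return prompt, True, -1.0


def run_scanners(scanners, prompt):
    """Apply scanners in sequence, threading the sanitized prompt through."""
    valid = True
    scores = {}
    for scanner in scanners:
        prompt, ok, score = scanner.scan(prompt)
        scores[type(scanner).__name__] = score
        valid = valid and ok
    return prompt, valid, scores


sanitized, valid, scores = run_scanners(
    [KeywordScanner(["classified"])], "This is classified info"
)
```

Threading the sanitized prompt through each scanner matters because a scanner earlier in the chain may rewrite the text that later scanners inspect.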
