
Implementation:Protectai Llm guard Benchmark run

From Leeroopedia
Knowledge Sources
Domains Performance_Testing, DevOps, Quality_Assurance
Last Updated 2026-02-14 12:00 GMT

Overview

A concrete tool for benchmarking the performance of individual LLM Guard scanners, using timeit-based repetition and NumPy statistics.

Description

The benchmarks/run.py module provides CLI-based scanner benchmarking. It builds a specified scanner instance, loads test data from JSON files, runs the scanner repeatedly using timeit.repeat, and reports latency statistics (average, variance, percentiles, QPS) in JSON format.
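The measurement approach described above can be sketched as follows. This is a minimal illustration, not the actual code from benchmarks/run.py: the function name measure_latency, the choice of percentile, and the QPS definition (assumed here to be 1 / mean latency) are all assumptions for illustration.

```python
import timeit

import numpy as np


def measure_latency(scan_fn, prompt: str, repeat_times: int = 5) -> dict:
    # timeit.repeat calls the scanner `repeat_times` times (number=1 per run)
    # and returns a list of wall-clock durations in seconds.
    latencies = timeit.repeat(lambda: scan_fn(prompt), number=1, repeat=repeat_times)
    arr = np.array(latencies)
    return {
        "input_length": len(prompt),
        "avg_s": float(arr.mean()),
        "variance": float(arr.var()),
        "p90_s": float(np.percentile(arr, 90)),
        # QPS assumed as 1 / mean latency; the real script may define it differently.
        "qps": float(1.0 / arr.mean()),
    }
```

Using number=1 with repeat keeps each sample an independent single invocation, so percentiles reflect per-call latency rather than an average over a batch.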

Usage

Run it from the command line to benchmark a specific scanner. Useful for comparing PyTorch vs. ONNX performance and for establishing latency baselines.

Code Reference

Source Location

  • Repository: llm-guard
  • File: benchmarks/run.py
  • Lines: L177-252

Signature

def benchmark_input_scanner(
    scanner_name: str,
    repeat_times: int,
    use_onnx: bool,
) -> tuple[list, int]:
    """Benchmark an input scanner. Returns (latency_list, input_length)."""

def benchmark_output_scanner(
    scanner_name: str,
    repeat_times: int,
    use_onnx: bool,
) -> tuple[list, int]:
    """Benchmark an output scanner. Returns (latency_list, output_length)."""

def main():
    """CLI entry point with argparse for type, scanner, --repeat, --use-onnx."""

Import

# CLI usage (not typically imported)
# python benchmarks/run.py input PromptInjection --repeat 10 --use-onnx True

I/O Contract

Inputs

Name        Type  Required  Description
type        str   Yes       "input" or "output" scanner type
scanner     str   Yes       Scanner class name (e.g., "PromptInjection")
--repeat    int   No        Number of repetitions (default: 5)
--use-onnx  bool  No        Use ONNX runtime (default: False)

Outputs

Name         Type  Description
JSON output  dict  Contains scanner, type, input_length, latency stats, QPS
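Assembling the JSON report from raw latencies can be sketched as below. The field names in this dict are illustrative; the exact keys emitted by benchmarks/run.py may differ, and QPS is again assumed to be 1 / mean latency.

```python
import json

import numpy as np


def format_report(scanner: str, scanner_type: str,
                  latencies: list[float], input_length: int) -> str:
    # Summarize a list of per-call latencies (seconds) as a JSON report.
    arr = np.array(latencies)
    report = {
        "scanner": scanner,            # e.g. "PromptInjection"
        "type": scanner_type,          # "input" or "output"
        "input_length": input_length,  # characters in the test prompt/output
        "latency_avg_s": float(arr.mean()),
        "latency_variance": float(arr.var()),
        "latency_p95_s": float(np.percentile(arr, 95)),
        "qps": float(1.0 / arr.mean()),  # assumed definition
    }
    return json.dumps(report, indent=2)
```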

Usage Examples

CLI Benchmark

# Benchmark PromptInjection scanner
python benchmarks/run.py input PromptInjection --repeat 10

# Benchmark with ONNX
python benchmarks/run.py input PromptInjection --repeat 10 --use-onnx True

# Benchmark output scanner
python benchmarks/run.py output Relevance --repeat 5

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
