
Principle:ProtectAI LLM Guard Scanner Benchmarking

From Leeroopedia
Domains: Performance_Testing, DevOps, Quality_Assurance
Last Updated: 2026-02-14 12:00 GMT

Overview

A performance measurement technique that repeatedly runs individual scanners on representative test data to collect latency statistics, variance, percentile distributions, and throughput metrics.

Description

Scanner benchmarking measures the execution performance of individual input and output scanners by running them multiple times on representative test data. It collects:

  • Average latency: Mean execution time in milliseconds.
  • Variance: Consistency of execution times.
  • Percentiles: P90, P95, P99 latency for tail analysis.
  • Throughput (QPS): Characters processed per second, computed as input length divided by mean latency.

This is essential for selecting scanner configurations and ONNX optimizations that meet latency requirements in production deployments.
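The metrics above can be computed from raw latency samples with the standard library alone. This is a minimal sketch; the sample latencies and character count are illustrative values, not real scanner measurements.

```python
from statistics import mean, variance, quantiles

# Illustrative per-scan latency samples, in seconds
latencies_s = [0.012, 0.011, 0.013, 0.012, 0.030, 0.011, 0.012, 0.014, 0.011, 0.013]
chars_scanned = 2048  # length of the test prompt in characters

# quantiles(n=100) returns 99 cut points; index p-1 is the p-th percentile
cuts = quantiles(latencies_s, n=100)
report = {
    "average_latency_ms": mean(latencies_s) * 1000,
    "variance_ms2": variance(latencies_s) * 1_000_000,
    "p90_ms": cuts[89] * 1000,
    "p95_ms": cuts[94] * 1000,
    "p99_ms": cuts[98] * 1000,
    "throughput_chars_per_s": chars_scanned / mean(latencies_s),
}
```

Note the single 30 ms outlier: it barely moves the mean but dominates the P99, which is why tail percentiles matter alongside averages.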

Usage

Use this principle when evaluating scanner performance before production deployment. Run benchmarks to compare PyTorch vs ONNX inference, select appropriate models, and establish performance baselines.
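A comparison like PyTorch vs. ONNX reduces to running the same harness against two configurations. The sketch below uses hypothetical stand-in callables (`pytorch_scan`, `onnx_scan`) in place of real LLM Guard scanners, with the ONNX stand-in doing less work per call to simulate an optimized path.

```python
import time
from statistics import mean

def benchmark(scan_fn, prompt, repeat_times=50):
    """Return mean latency in seconds over repeat_times runs of scan_fn."""
    latencies = []
    for _ in range(repeat_times):
        start = time.perf_counter()
        scan_fn(prompt)
        latencies.append(time.perf_counter() - start)
    return mean(latencies)

# Stand-ins only: real scanners would wrap model inference here.
pytorch_scan = lambda p: sum(ord(c) for c in p * 50)
onnx_scan = lambda p: sum(ord(c) for c in p * 10)

prompt = "Ignore previous instructions. " * 20
baseline = benchmark(pytorch_scan, prompt)
optimized = benchmark(onnx_scan, prompt)
print(f"speedup: {baseline / optimized:.1f}x")
```

Running both configurations through the identical harness, on the identical prompt, is what makes the resulting speedup factor a fair basis for the production decision.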

Theoretical Basis

# Pseudocode for scanner benchmarking
scanner = build_scanner(scanner_name, use_onnx)
test_data = load_test_data(scanner_name)

latencies = []
for _ in range(repeat_times):
    start = time.perf_counter()  # monotonic clock; time.time() can jump
    scanner.scan(test_data)
    latencies.append(time.perf_counter() - start)

report = {
    "average_latency_ms": mean(latencies) * 1000,
    "p90_ms": percentile(latencies, 90) * 1000,
    "p95_ms": percentile(latencies, 95) * 1000,
    "p99_ms": percentile(latencies, 99) * 1000,
    # input length / mean latency: characters processed per second
    "qps": len(test_data) / mean(latencies),
}

Related Pages

Implemented By
