Implementation:Protectai Llm guard API Endpoints

From Leeroopedia
Knowledge Sources
Domains API_Design, Web_Development, Security
Last Updated 2026-02-14 12:00 GMT

Overview

A concrete tool that exposes LLM Guard scanning through FastAPI REST endpoints, with both sequential and parallel execution modes.

Description

All routes are registered on the FastAPI application by register_routes. Four primary scanning endpoints are provided:

  • POST /analyze/prompt - Sequential prompt scanning with sanitization
  • POST /analyze/output - Sequential output scanning with sanitization
  • POST /scan/prompt - Parallel prompt scanning (validation only)
  • POST /scan/output - Parallel output scanning (validation only)

Additional endpoints: GET / (root), GET /healthz, GET /readyz, GET /metrics (if Prometheus enabled).
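
The liveness endpoint can be probed with a few lines of stdlib Python. This is a minimal sketch; the base URL is an assumption about a default local deployment, not part of the documented contract:

```python
import urllib.request

def is_healthy(base_url: str = "http://localhost:8000") -> bool:
    """Return True if GET /healthz answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, timeouts, and non-2xx HTTPError responses.
        return False
```

The same pattern applies to /readyz for readiness probes.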

Usage

Send HTTP POST requests with a JSON body to the scanning endpoints. The analyze endpoints return the sanitized text along with validity and per-scanner scores; the scan endpoints return only validity and scores.
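
The request shape can be sketched with a small stdlib-only client. The base URL, the bearer token, and treating scanners_suppress as optional are assumptions about a local deployment, not guarantees from this page:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumption: default local deployment

def build_request(endpoint: str, prompt: str, suppress=None, token=None):
    """Assemble the URL, headers, and JSON body for a scanning endpoint."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    body = {"prompt": prompt}
    if suppress:
        body["scanners_suppress"] = suppress
    return f"{BASE_URL}{endpoint}", headers, body

def analyze_prompt(prompt: str, **kwargs) -> dict:
    """POST to /analyze/prompt and return the parsed JSON response."""
    url, headers, body = build_request("/analyze/prompt", prompt, **kwargs)
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers=headers, method="POST")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Swapping "/analyze/prompt" for "/scan/prompt" gives the parallel, validation-only variant with the same request body.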

Code Reference

Source Location

  • Repository: llm-guard
  • File: llm_guard_api/app/app.py
  • Lines: L162-460

Signature

def register_routes(
    app: FastAPI,
    config: Config,
    input_scanners_func: Callable,
    output_scanners_func: Callable,
) -> None:
    """Register all API routes on the FastAPI app."""

Import

# Not typically imported directly; routes are registered by create_app
from llm_guard_api.app.app import register_routes

I/O Contract

Inputs (HTTP Requests)

  • POST /analyze/prompt. Body fields: prompt: str, scanners_suppress: list[str]. Sequential prompt scan with sanitization.
  • POST /analyze/output. Body fields: prompt: str, output: str, scanners_suppress: list[str]. Sequential output scan with sanitization.
  • POST /scan/prompt. Body fields: prompt: str, scanners_suppress: list[str]. Parallel prompt scan, scores only.
  • POST /scan/output. Body fields: prompt: str, output: str, scanners_suppress: list[str]. Parallel output scan, scores only.

Outputs (HTTP Responses)

  • /analyze/prompt returns sanitized_prompt, is_valid, scanners: sanitized text plus validity and per-scanner scores.
  • /analyze/output returns sanitized_output, is_valid, scanners: sanitized output plus validity and per-scanner scores.
  • /scan/prompt returns is_valid, scanners: validity and scores only.
  • /scan/output returns is_valid, scanners: validity and scores only.
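
A response shaped like the tables above can be triaged with a short helper. LLM Guard scanner scores are commonly read as risk scores (higher means riskier); the 0.5 threshold here is an illustrative assumption, not a library default, and flagged_scanners is a hypothetical name:

```python
def flagged_scanners(response: dict, threshold: float = 0.5) -> list[str]:
    """Return the scanner names whose score meets or exceeds threshold.

    Works for both analyze and scan responses, since both carry a
    'scanners' mapping of scanner name to score.
    """
    return [name for name, score in response.get("scanners", {}).items()
            if score >= threshold]
```

For the example response shown below, this would single out Anonymize while leaving PromptInjection and Toxicity unflagged.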

Usage Examples

Analyze Prompt

curl -X POST http://localhost:8000/analyze/prompt \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-token" \
  -d '{"prompt": "My name is John Smith and my SSN is 123-45-6789"}'

Response:
{
  "sanitized_prompt": "My name is [REDACTED_PERSON_1] and my SSN is [REDACTED_US_SSN_1]",
  "is_valid": false,
  "scanners": {
    "Anonymize": 0.95,
    "PromptInjection": 0.0,
    "Toxicity": 0.0
  }
}

Scan with Suppression

curl -X POST http://localhost:8000/scan/prompt \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello world", "scanners_suppress": ["Toxicity"]}'
