Implementation:Protectai Llm guard Relevance
| Knowledge Sources | |
|---|---|
| Domains | NLP, Quality_Assurance, Semantic_Similarity |
| Last Updated | 2026-02-14 12:00 GMT |
Overview
An output scanner from the LLM Guard library that checks whether an LLM's output is relevant to its prompt by computing cosine similarity between BGE embeddings.
Description
The Relevance class is an output scanner that encodes the prompt and output into dense vector embeddings using a BGE model (default: BAAI/bge-base-en-v1.5), then computes cosine similarity. Outputs with similarity below the threshold are flagged as irrelevant. Supports PyTorch and ONNX backends.
Usage
Import this scanner to verify LLM outputs are relevant to their prompts. Useful in pipelines where off-topic or hallucinated responses must be caught.
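The underlying mechanism can be illustrated without the library: encode prompt and output as vectors, then compare them with cosine similarity and flag low-similarity pairs. A minimal sketch using toy vectors in place of real BGE sentence embeddings (the helper names here are illustrative, not part of the llm-guard API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_relevant(prompt_vec: list[float], output_vec: list[float],
                threshold: float = 0.5) -> bool:
    # Flag the output as irrelevant when similarity falls below the threshold.
    return cosine_similarity(prompt_vec, output_vec) >= threshold

# Toy vectors standing in for BGE embeddings.
print(is_relevant([1.0, 0.0, 1.0], [0.9, 0.1, 0.8]))  # near-parallel -> True
print(is_relevant([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # orthogonal -> False
```

The real scanner does the same comparison, but on dense embeddings produced by the configured BGE model.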
Code Reference
Source Location
- Repository: llm-guard
- File: llm_guard/output_scanners/relevance.py
- Lines: L44-167
Signature
```python
class Relevance(Scanner):
    def __init__(
        self,
        *,
        threshold: float = 0.5,
        model: Model | None = None,
        use_onnx: bool = False,
    ) -> None:
        """
        Args:
            threshold: Minimum similarity score. Default: 0.5.
            model: Embedding model. Default: BAAI/bge-base-en-v1.5.
            use_onnx: Use ONNX runtime. Default: False.
        """

    def scan(self, prompt: str, output: str) -> tuple[str, bool, float]:
        """
        Check output relevance to prompt via embedding similarity.

        Returns:
            - Original output (unmodified)
            - False if similarity below threshold, True otherwise
            - Risk score based on 1 - similarity
        """
```
Import
```python
from llm_guard.output_scanners import Relevance
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| threshold | float | No | Minimum similarity score (default: 0.5) |
| model | Model | No | Embedding model (default: BAAI/bge-base-en-v1.5) |
| use_onnx | bool | No | Use ONNX runtime (default: False) |
| prompt | str | Yes (scan) | Original prompt |
| output | str | Yes (scan) | LLM output to check |
Outputs
| Name | Type | Description |
|---|---|---|
| output | str | Original output (unmodified) |
| is_valid | bool | False if similarity below threshold |
| risk_score | float | Risk score based on 1 - cosine similarity |
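The risk score is described as "based on 1 - cosine similarity". A plausible sketch of that mapping (this is an illustration, not the library's exact formula): relevant outputs score 0.0, and otherwise risk grows as similarity drops, clamped to [0, 1].

```python
def risk_score(similarity: float, threshold: float = 0.5) -> float:
    """Hypothetical sketch of mapping similarity to a risk score.

    Outputs at or above the threshold are treated as safe (risk 0.0);
    below it, risk is 1 - similarity, clamped to [0, 1].
    """
    if similarity >= threshold:
        return 0.0
    return round(min(1.0, max(0.0, 1.0 - similarity)), 2)

print(risk_score(0.95))  # relevant output -> 0.0
print(risk_score(0.2))   # irrelevant output -> 0.8
```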
Usage Examples
Basic Relevance Check
```python
from llm_guard.output_scanners import Relevance

scanner = Relevance(threshold=0.5)
prompt = "What is the capital of France?"

relevant_output = "The capital of France is Paris."
_, is_valid, _ = scanner.scan(prompt, relevant_output)
# is_valid: True

irrelevant_output = "The best pizza recipe uses mozzarella cheese."
_, is_valid, score = scanner.scan(prompt, irrelevant_output)
# is_valid: False
```
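In a pipeline, the three-tuple contract (output, is_valid, risk_score) is typically used to gate responses. The sketch below uses a hypothetical `StubScanner` with keyword overlap instead of embeddings so it runs without model downloads; the gating logic is the point, not the stub:

```python
class StubScanner:
    """Stand-in for Relevance with the same scan() contract (illustrative only)."""

    def __init__(self, threshold: float = 0.5) -> None:
        self.threshold = threshold

    def scan(self, prompt: str, output: str) -> tuple[str, bool, float]:
        # Jaccard word overlap as a crude similarity proxy.
        p, o = set(prompt.lower().split()), set(output.lower().split())
        similarity = len(p & o) / max(len(p | o), 1)
        return output, similarity >= self.threshold, round(1 - similarity, 2)

def guarded_response(scanner, prompt: str, output: str) -> str:
    sanitized, is_valid, risk = scanner.scan(prompt, output)
    if not is_valid:
        # Reject (or regenerate) off-topic answers instead of returning them.
        return f"[rejected: off-topic, risk={risk}]"
    return sanitized

scanner = StubScanner(threshold=0.2)
print(guarded_response(scanner, "capital of France", "The capital of France is Paris."))
```

Swapping `StubScanner` for `Relevance` keeps the same call sites, since both return the (output, is_valid, risk_score) tuple.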
Related Pages
Requires Environment
- Environment:Protectai_Llm_guard_Python_Runtime_Dependencies
- Environment:Protectai_Llm_guard_ONNX_Runtime_Acceleration