Environment: Protectai LLM Guard Python Runtime Dependencies
| Knowledge Sources | |
|---|---|
| Domains | NLP, Security, Infrastructure |
| Last Updated | 2026-02-14 12:00 GMT |
Overview
Python 3.10+ environment with PyTorch, HuggingFace Transformers, Presidio NLP, and tiktoken for running LLM Guard scanner pipelines.
Description
This environment provides the core runtime for LLM Guard, a security framework for Large Language Models. It requires Python 3.10-3.12 with PyTorch for deep learning inference, HuggingFace Transformers for model loading and text classification pipelines, Presidio for PII entity recognition and anonymization, tiktoken for OpenAI-compatible token counting, and several supporting libraries for text processing. The framework auto-detects CUDA, MPS, or CPU devices at startup via torch.cuda.is_available() and torch.backends.mps.is_available().
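The CUDA → MPS → CPU preference order described above can be sketched as a plain-Python decision chain. This is an illustrative stub, not the library's implementation: the `cuda_ok`/`mps_ok` flags stand in for the real `torch.cuda.is_available()` and `torch.backends.mps.is_available()` calls.

```python
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Mirror LLM Guard's device preference order: CUDA first, then MPS, then CPU.

    cuda_ok/mps_ok are hypothetical stand-ins for torch's availability checks.
    """
    if cuda_ok:
        return "cuda:0"
    if mps_ok:
        return "mps"
    return "cpu"
```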
Usage
Use this environment for any LLM Guard scanner operation, including the core library (llm_guard package), benchmarking, and integration examples. This is the mandatory prerequisite for all scanner implementations including Anonymize, PromptInjection, Toxicity, TokenLimit, Relevance, and others.
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | Linux, macOS, Windows (WSL recommended) | Tested on Ubuntu in CI |
| Python | 3.10, 3.11, or 3.12 | `requires-python = ">=3.10,<3.13"` |
| Hardware | CPU (minimum), NVIDIA GPU (recommended) | CUDA auto-detected; MPS supported on Apple Silicon |
| RAM | 4GB minimum, 8GB+ recommended | Transformer models require significant memory |
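The Python constraint in the table can be verified at runtime with a small stdlib check. This is a sketch; `supported` is a helper name introduced here and is not part of LLM Guard.

```python
import sys

def supported(version_info=sys.version_info) -> bool:
    """Return True if the interpreter satisfies requires-python = ">=3.10,<3.13"."""
    major, minor = version_info[0], version_info[1]
    return (3, 10) <= (major, minor) < (3, 13)
```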
Dependencies
System Packages
- Python 3.10+ runtime
- git (for cloning and model downloads)
Python Packages
- torch>=2.4.0
- transformers==4.51.3
- presidio-analyzer==2.2.358
- presidio-anonymizer==2.2.358
- tiktoken>=0.9,<1.0
- nltk>=3.9.1,<4
- faker>=37,<38
- fuzzysearch>=0.7,<0.9
- bc-detect-secrets==1.5.43
- json-repair==0.44.1
- regex==2024.11.6
- structlog>=24
Credentials
No credentials are required for the core library. However, certain gated HuggingFace models (such as `deberta-v3-small-prompt-injection-v2`) require:
- `HF_TOKEN`: HuggingFace API token with read access, used for gated model downloads.
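A common pattern is to read the token from the environment and fail early with a clear message. This is a hedged sketch; `require_hf_token` is a hypothetical helper, not part of LLM Guard.

```python
import os

def require_hf_token() -> str:
    """Fetch HF_TOKEN from the environment, raising a descriptive error if absent."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set; gated HuggingFace models cannot be downloaded. "
            "Export a read-scoped token before running."
        )
    return token
```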
Quick Install
# Install core package
pip install llm-guard
# If torch install fails, install separately first
pip install wheel
pip install "torch>=2.4.0"
pip install llm-guard --no-build-isolation
# For development
pip install "llm-guard[dev]"
Code Evidence
Device auto-detection from llm_guard/util.py:103-111:
@lru_cache(maxsize=None)  # Unbounded cache
def device():
    torch = cast("torch", lazy_load_dep("torch"))
    if torch.cuda.is_available():
        return torch.device("cuda:0")
    elif torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
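The `@lru_cache(maxsize=None)` decorator means the device is probed only once per process; subsequent calls return the cached result. A stdlib-only illustration of that caching behavior (the `calls` counter is ours, not LLM Guard's, and the probe body is a stand-in for the real torch query):

```python
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=None)
def cached_device() -> str:
    """Simulate an expensive device probe; real code would query torch here."""
    calls["n"] += 1
    return "cpu"

cached_device()
cached_device()  # Second call hits the cache; the probe body does not run again.
```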
Lazy dependency loading with warning from llm_guard/util.py:114-131:
def lazy_load_dep(import_name: str, package_name: str | None = None):
    if package_name is None:
        package_name = import_name
    spec = importlib.util.find_spec(import_name)
    if spec is None:
        LOGGER.warning(
            f"Optional feature dependent on missing package: {import_name} was initialized.\n"
            f"Use `pip install {package_name}` to install the package if running locally."
        )
    return importlib.import_module(import_name)
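`importlib.util.find_spec` returns `None` when a package cannot be resolved, which is how `lazy_load_dep` decides whether to warn before importing. A minimal stdlib demonstration:

```python
import importlib.util

# A stdlib module that is always present resolves to a ModuleSpec object.
present = importlib.util.find_spec("json")

# An unresolvable top-level name yields None instead of raising.
missing = importlib.util.find_spec("definitely_not_a_real_package_xyz")
```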
Python version constraint from pyproject.toml:19:
requires-python = ">=3.10,<3.13"
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `ModuleNotFoundError: No module named 'torch'` | PyTorch not installed | `pip install "torch>=2.4.0"` |
| `LookupError: punkt_tab` | NLTK tokenizer data missing | Auto-downloaded on first use; ensure internet access |
| `Optional feature dependent on missing package: tiktoken` | tiktoken not installed | `pip install tiktoken` |
| `TOKENIZERS_PARALLELISM` warning | HuggingFace tokenizer threading conflict | Automatically set to `false` by the Anonymize scanner |
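The `TOKENIZERS_PARALLELISM` warning can also be silenced manually by setting the environment variable before the HuggingFace tokenizers library is first imported, which is a sketch of the same workaround the Anonymize scanner applies automatically:

```python
import os

# Must be set before `transformers`/`tokenizers` is imported for the
# first time, otherwise the fork-safety warning may already have fired.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```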
Compatibility Notes
- CUDA GPUs: Auto-detected via `torch.cuda.is_available()`. Uses `cuda:0` by default.
- Apple Silicon (MPS): Supported via `torch.backends.mps.is_available()`.
- CPU-only: Full functionality available; ML-based scanners will run slower.
- Python 3.9: Listed in some documentation, but `pyproject.toml` requires `>=3.10`.
Related Pages
- Implementation:Protectai_Llm_guard_Scan_prompt
- Implementation:Protectai_Llm_guard_Scan_output
- Implementation:Protectai_Llm_guard_Anonymize
- Implementation:Protectai_Llm_guard_Deanonymize
- Implementation:Protectai_Llm_guard_Vault
- Implementation:Protectai_Llm_guard_PromptInjection
- Implementation:Protectai_Llm_guard_Toxicity
- Implementation:Protectai_Llm_guard_TokenLimit
- Implementation:Protectai_Llm_guard_NoRefusal
- Implementation:Protectai_Llm_guard_Relevance
- Implementation:Protectai_Llm_guard_Sensitive
- Implementation:Protectai_Llm_guard_Input_Scanner_Base
- Implementation:Protectai_Llm_guard_Benchmark_run