
Environment:Liu00222 Open Prompt Injection Python Dependencies

From Leeroopedia
Domains: Infrastructure, NLP, Security
Last Updated: 2026-02-14 15:30 GMT

Overview

Conda environment with Python 3.9, PyTorch 2.3, HuggingFace Transformers 4.42, OpenAI SDK, Google Generative AI SDK, spaCy, and NLP evaluation libraries.

Description

This environment defines the complete software stack for the Open-Prompt-Injection toolkit. It is specified in the `environment.yml` file and includes: the core ML stack (PyTorch, Transformers, PEFT, bitsandbytes, accelerate), API client libraries (OpenAI, Google Generative AI), NLP tools (spaCy, tiktoken, sentencepiece), evaluation metrics (rouge, scipy), data handling (datasets, pandas, numpy), and the fastchat library for Vicuna model loading and PPL defense. The environment is Conda-managed and targets Python 3.9 on linux-64.
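The stack described above maps onto a Conda spec roughly like the following. This is an illustrative sketch, not the repository's actual file: the real `environment.yml` ships with the toolkit and is authoritative, and the environment name and channel list here are assumptions. The version pins are the ones listed under Dependencies.

```yaml
# Illustrative sketch of the environment spec (subset of pins)
name: open-prompt-injection
channels:
  - defaults
dependencies:
  - python=3.9.19
  - pip
  - pip:
      - torch==2.3.1
      - transformers==4.42.0
      - peft==0.11.1
      - bitsandbytes==0.43.1
      - accelerate==0.32.0
      - openai==1.33.0
      - google-generativeai==0.6.0
      - fschat==0.2.36
```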

Usage

Use this environment for all workflows in the Open-Prompt-Injection toolkit. Every workflow (Prompt Injection Experiment, DataSentinel Detection, Detection+Localization Defense) requires this base Python environment. Install once using the provided `environment.yml` before running any experiments.

System Requirements

  • OS: Linux (linux-64). The environment spec is built for the linux-64 architecture.
  • Runtime: Conda (Miniconda or Anaconda), required for environment creation.
  • Python: 3.9.19, pinned in `environment.yml`.
  • Disk: 10 GB+ for Python packages and cached model tokenizers.

Dependencies

Core ML Stack

  • `torch` == 2.3.1
  • `transformers` == 4.42.0
  • `peft` == 0.11.1
  • `bitsandbytes` == 0.43.1
  • `accelerate` == 0.32.0
  • `safetensors` == 0.4.3
  • `tokenizers` == 0.19.1
  • `triton` == 2.3.1

API Client Libraries

  • `openai` == 1.33.0 (for GPT/GPT-Azure models)
  • `google-generativeai` == 0.6.0 (for PaLM2 models)
  • `google-ai-generativelanguage` == 0.6.4
  • `tiktoken` == 0.7.0 (for OpenAI tokenization)

NLP and Evaluation

  • `rouge` == 1.0.1 (for Gigaword summarization evaluation)
  • `scipy` == 1.13.1
  • `sentencepiece` == 0.2.0
  • `datasets` == 2.19.2 (HuggingFace datasets)
  • `fschat` == 0.2.36 (fastchat for Vicuna loading and PPL defense)

Data Handling

  • `numpy` == 1.26.4
  • `pandas` == 2.2.2

spaCy (for PromptLocate)

  • spaCy with `en_core_web_sm` model (loaded in `PromptLocate.initialize_spacy()`)

Credentials

The following credentials must be configured in the JSON model config files:

  • OpenAI API Key: Required for GPT models. Set in `configs/model_configs/gpt_config.json` under `api_key_info.api_keys`.
  • Azure OpenAI: Required for GPT-Azure. Needs `api_keys`, `deployment_name`, `api_version`, and `endpoint` in config.
  • Google PaLM2 API Key: Required for PaLM2 models. Set in `configs/model_configs/palm2_config.json` under `api_key_info.api_keys`.
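A minimal sketch of the expected shape of `configs/model_configs/gpt_config.json`, inferred from the key-loading code in `models/GPT.py`. The placeholder key is illustrative, and `api_key_use` is the zero-based index of the key to use:

```json
{
  "api_key_info": {
    "api_keys": ["sk-your-key-here"],
    "api_key_use": 0
  }
}
```

For GPT-Azure, the config additionally needs `deployment_name`, `api_version`, and `endpoint`; `configs/model_configs/palm2_config.json` follows the same `api_key_info` pattern.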

Quick Install

# Install from the provided environment spec
conda env create -f environment.yml --name my_custom_env
conda activate my_custom_env

# Download spaCy model (required for PromptLocate)
python -m spacy download en_core_web_sm
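After activation, a quick stdlib-only sanity check can confirm that the pinned versions resolved correctly. `check_pins` and the subset of pins below are illustrative helpers, not part of the toolkit:

```python
from importlib import metadata

# Subset of the pins listed under Dependencies (illustrative)
PINNED = {
    "torch": "2.3.1",
    "transformers": "4.42.0",
    "fschat": "0.2.36",
}

def check_pins(pins):
    """Return {package: (expected, installed_or_None)} for any mismatch or missing package."""
    problems = {}
    for pkg, expected in pins.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != expected:
            problems[pkg] = (expected, installed)
    return problems

if __name__ == "__main__":
    for pkg, (want, have) in check_pins(PINNED).items():
        print(f"{pkg}: expected {want}, found {have or 'not installed'}")
```

An empty report means the environment matches the pins; any line printed names a package to reinstall at the pinned version.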

Code Evidence

Conda environment creation from README.md:

conda env create -f environment.yml --name my_custom_env
conda activate my_custom_env

OpenAI API key loaded from config in `models/GPT.py:60-63`:

api_keys = config["api_key_info"]["api_keys"]
api_pos = int(config["api_key_info"]["api_key_use"])
assert (0 <= api_pos < len(api_keys)), "Please enter a valid API key to use"
self.api_key = api_keys[api_pos]

Google PaLM2 API key loaded from config in `models/PaLM2.py:10-14`:

api_keys = config["api_key_info"]["api_keys"]
api_pos = int(config["api_key_info"]["api_key_use"])
assert (0 <= api_pos < len(api_keys)), "Please enter a valid API key to use"
self.api_key = api_keys[api_pos]

spaCy model loading in `apps/PromptLocate.py:258`:

nlp = spacy.load("en_core_web_sm", disable=["parser", "senter"])

fastchat model loading for Vicuna/PPL in `apps/Application.py:84-93`:

self.surrogate_backbone, self.surrogate_tokenizer = load_model(
    'lmsys/vicuna-7b-v1.3',  # surrogate model: Vicuna-7B from the HuggingFace Hub
    "cuda",                  # device
    8,                       # number of GPUs
    "9GiB",                  # max GPU memory
    False,
    False,
    revision="main",
    debug=False,
)

Common Errors

  • `AssertionError: Please enter a valid API key to use`: the API key is not configured in the model config JSON. Add valid API key(s) to the appropriate config file under `api_key_info.api_keys`.
  • `OSError: Can't find model 'en_core_web_sm'`: the spaCy English model is not downloaded. Run `python -m spacy download en_core_web_sm`.
  • `ImportError: No module named 'fastchat'`: the `fschat` package is not installed. Run `pip install fschat==0.2.36`.
  • `ModuleNotFoundError: No module named 'rouge'`: the `rouge` package is not installed. Run `pip install rouge==1.0.1`.

Compatibility Notes

  • API-only usage: if you only use GPT or PaLM2 (API models), the CUDA-related packages are not needed, but `environment.yml` installs them regardless.
  • spaCy model: The `en_core_web_sm` model must be downloaded separately after installing the Conda environment. It is not included in `environment.yml`.
  • fastchat version: The `fschat==0.2.36` package is required specifically for Vicuna model loading and the PPL defense surrogate model.
