
Implementation:Liu00222 Open Prompt Injection Application query

From Leeroopedia
Knowledge Sources
Domains Prompt_Injection, LLM
Last Updated 2026-02-14 15:00 GMT

Overview

The concrete query method of the `Application` class, which processes a data prompt through the defense pipeline and the LLM.

Description

The Application.query method is the central execution path that processes a data prompt through four internal stages: `__prehand_detection` (PPL/response-based filtering), `__preprocess_data_prompt` (retokenization/paraphrasing), `__construct_prompt` (system instruction + defense wrapping + data), and `__process_response` (sandwich answer extraction). It delegates model inference to `self.model.query()`.
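The four stages above can be sketched as a minimal, self-contained pipeline. This is an illustrative reimplementation, not the project's actual code: the class name `PipelineSketch`, the stand-in detection rule, and the last-line answer extraction are all assumptions used to show the control flow, including the early return of the blocked sentinel.

```python
# Hypothetical sketch of the four-stage query pipeline described above.
# Method names mirror the stages; the real Application internals differ.

BLOCKED = "[Potentially harmful. Request blocked.]"

class PipelineSketch:
    def __init__(self, instruction, defense="sandwich"):
        self.instruction = instruction
        self.defense = defense

    def _prehand_detection(self, data_prompt):
        # Stand-in for PPL / response-based filtering: flag a known
        # injection marker (purely illustrative, not the real detector).
        return "ignore previous instructions" in data_prompt.lower()

    def _preprocess_data_prompt(self, data_prompt):
        # Stand-in for retokenization / paraphrasing defenses.
        return data_prompt.strip()

    def _construct_prompt(self, data_prompt):
        # System instruction + defense wrapping + data.
        prompt = f"{self.instruction}\nText: {data_prompt}"
        if self.defense == "sandwich":
            prompt += "\nRemember, your task is: " + self.instruction
        return prompt

    def _process_response(self, response):
        # Stand-in for sandwich answer extraction: keep the last line.
        return response.strip().splitlines()[-1]

    def query(self, data_prompt, model_fn):
        if self._prehand_detection(data_prompt):
            return BLOCKED  # pre-hand detection short-circuits the pipeline
        data_prompt = self._preprocess_data_prompt(data_prompt)
        prompt = self._construct_prompt(data_prompt)
        return self._process_response(model_fn(prompt))
```

The key design point carried over from the description is ordering: detection runs before any preprocessing or prompt construction, so a blocked prompt never reaches the model.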

Usage

Call this method on an Application instance to process a data prompt (clean or attacked) through the full defense pipeline and obtain the model response.

Code Reference

Source Location

Signature

def query(self, data_prompt, verbose=1, idx=-1, total=-1):
    """
    Process a data prompt through the defense pipeline and query the model.

    Args:
        data_prompt (str): Input text to process.
        verbose (int): Print verbosity level (default 1).
        idx (int): Current sample index for logging (default -1).
        total (int): Total samples for logging (default -1).
    Returns:
        str: Model response after defense pipeline processing.
             Returns "[Potentially harmful. Request blocked.]" if blocked by pre-hand detection.
    """

Import

from OpenPromptInjection.apps import Application
# Typically accessed via create_app result:
# app = PI.create_app(task, model, defense)
# app.query(data_prompt)

I/O Contract

Inputs

| Name | Type | Required | Description |
|---|---|---|---|
| data_prompt | str | Yes | Input text (clean or attacked) |
| verbose | int | No | Print verbosity (default 1) |
| idx | int | No | Current sample index for progress logging |
| total | int | No | Total sample count for progress logging |

Outputs

| Name | Type | Description |
|---|---|---|
| response | str | Model response after pipeline processing, or blocked message if rejected by defense |
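Because the blocked case is signaled in-band as a fixed string rather than an exception, callers can distinguish it by comparing against the sentinel verbatim. A minimal sketch (the helper name `is_blocked` is hypothetical):

```python
# The blocked sentinel is returned verbatim, so string equality suffices.
BLOCKED = "[Potentially harmful. Request blocked.]"

def is_blocked(response: str) -> bool:
    """Return True if the defense pipeline rejected the prompt."""
    return response == BLOCKED
```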

Usage Examples

Querying with Clean Data

import OpenPromptInjection as PI
from OpenPromptInjection.utils import open_config

target_task = PI.create_task(open_config("configs/task_configs/sst2_config.json"), 100)
model = PI.create_model(open_config("configs/model_configs/gpt_config.json"))
app = PI.create_app(target_task, model, defense='sandwich')

for i, (data_prompt, label) in enumerate(app):
    response = app.query(data_prompt, verbose=1, idx=i, total=len(app))
    print(f"Response: {response}")
    break
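The loop above processes a single sample. A full evaluation run would iterate over the whole dataset and tally blocked responses alongside labels. The harness below is a hypothetical sketch using only the iteration, `len`, and `query` APIs shown above (`run_all` is not part of the library):

```python
# Hypothetical evaluation harness over an Application-like object.
BLOCKED = "[Potentially harmful. Request blocked.]"

def run_all(app):
    """Query every (data_prompt, label) pair; return responses and a blocked count."""
    responses, n_blocked = [], 0
    for i, (data_prompt, label) in enumerate(app):
        response = app.query(data_prompt, verbose=0, idx=i, total=len(app))
        responses.append((response, label))
        n_blocked += (response == BLOCKED)
    return responses, n_blocked
```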

Related Pages

Implements Principle

Uses Heuristic
