
Principle:Liu00222 Open Prompt Injection Application Query Pipeline

From Leeroopedia
Knowledge Sources
Domains Prompt_Injection, LLM, Security
Last Updated 2026-02-14 15:00 GMT

Overview

A multi-stage processing pipeline that routes user data through defense mechanisms, prompt construction, model querying, and response post-processing within an LLM application.

Description

The Application Query Pipeline is the core execution path of an LLM-integrated application. When a data prompt is queried, it passes through up to four stages:

1. Pre-hand detection, which may block suspicious inputs before they reach the model (PPL filtering, response-based filtering).
2. Preprocessing, which transforms the data (retokenization, paraphrasing).
3. Prompt construction, which assembles the system instruction with the data using defense-specific formatting (sandwich wrapping, delimiter insertion, XML tagging).
4. Response processing, which extracts the relevant answer from the model output (sandwich answer extraction).
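Stage 3 is the easiest to make concrete. The sketch below illustrates the three defense-specific formats named above; the function name, defense labels, and exact wrapper strings are illustrative assumptions, not the page's actual implementation.

```python
def defense_formatting(data_prompt: str, defense: str, instruction: str) -> str:
    """Hypothetical sketch of prompt construction for three defenses."""
    if defense == "sandwich":
        # Sandwich wrapping: repeat the instruction after the data so a
        # late-injected instruction does not get the final word.
        return f"{instruction}\n{data_prompt}\nRemember, your task is: {instruction}"
    if defense == "delimiter":
        # Delimiter insertion: mark the data's boundaries so the model
        # can treat everything inside as inert content.
        return f"{instruction}\n```\n{data_prompt}\n```"
    if defense == "xml":
        # XML tagging: wrap the untrusted data in explicit tags.
        return f"{instruction}\n<data>{data_prompt}</data>"
    # No defense: plain concatenation.
    return f"{instruction}\n{data_prompt}"
```

Each variant changes only how the untrusted data is framed relative to the trusted instruction; the data itself is passed through unmodified.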

Usage

Use this principle to understand how the Application processes queries during both baseline evaluation (clean data) and attack evaluation (injected data). The query pipeline is the main attack surface; each defense operates by modifying behavior at one or more pipeline stages.

Theoretical Basis

The pipeline implements a Chain of Responsibility pattern where each stage can either pass data through or short-circuit with a rejection:

Pseudo-code Logic:

# Multi-stage query pipeline
def query(data_prompt):
    # Stage 1: Pre-hand detection (may reject)
    if defense in ['ppl', 'response-based']:
        if is_suspicious(data_prompt):
            return blocked_message

    # Stage 2: Preprocessing (transforms data)
    if defense == 'retokenization':
        data_prompt = retokenize(data_prompt)
    elif defense == 'paraphrasing':
        data_prompt = paraphrase(data_prompt)

    # Stage 3: Prompt construction (wraps with instructions)
    prompt = system_instruction + defense_formatting(data_prompt)

    # Stage 4: Model query + response processing
    response = model.query(prompt)
    return extract_answer(response)
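The pseudo-code above can be fleshed out into a minimal runnable sketch. The detector, transforms, and model below are stand-in stubs (assumptions for illustration), but the control flow mirrors the four stages: each one either passes the data along or short-circuits with a rejection, as the Chain of Responsibility pattern prescribes.

```python
BLOCKED = "Request blocked by pre-hand detection."

def is_suspicious(text: str) -> bool:
    # Stub detector: a real PPL or response-based check would go here.
    return "ignore previous instructions" in text.lower()

def retokenize(text: str) -> str:
    # Stub: a real implementation would re-segment tokens (e.g. BPE-dropout).
    return text

def paraphrase(text: str) -> str:
    # Stub: a real implementation would call an LLM to paraphrase the data.
    return text

def query(data_prompt: str, system_instruction: str, defense: str, model) -> str:
    # Stage 1: pre-hand detection (may short-circuit with a rejection)
    if defense in ("ppl", "response-based") and is_suspicious(data_prompt):
        return BLOCKED
    # Stage 2: preprocessing (transforms the data)
    if defense == "retokenization":
        data_prompt = retokenize(data_prompt)
    elif defense == "paraphrasing":
        data_prompt = paraphrase(data_prompt)
    # Stage 3: prompt construction (wraps data with the trusted instruction)
    prompt = f"{system_instruction}\n{data_prompt}"
    # Stage 4: model query + response processing
    return model(prompt).strip()

# Usage with a fake model standing in for the real LLM:
fake_model = lambda prompt: " positive "
print(query("great movie", "Classify sentiment:", "ppl", fake_model))
```

Note that only stages 1 and 2 depend on the configured defense here; a fuller version would also thread the defense through prompt construction and answer extraction.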

Related Pages

Implemented By

Uses Heuristic
