Principle:Liu00222 Open Prompt Injection Attacker Creation
| Knowledge Sources | |
|---|---|
| Domains | Prompt_Injection, Adversarial_ML, Security |
| Last Updated | 2026-02-14 15:00 GMT |
Overview
A factory pattern for instantiating prompt injection attacker objects that implement different strategies for injecting malicious instructions into LLM application inputs.
Description
Attacker Creation enables systematic evaluation of prompt injection attacks by providing a factory that creates attacker objects for five distinct strategies: Naive (direct injection), Escape Character (newline prepending), Ignore ("ignore previous instructions" prefix), Fake Completion (simulated task completion before injection), and Combine (all strategies combined). Each attacker takes clean data and produces an attacked prompt containing the injected task instruction.
Usage
Use this principle when setting up the adversarial component of a prompt injection experiment. The attacker is created after the injected task is instantiated and is used during the attack execution phase to modify clean data prompts.
Theoretical Basis
Prompt injection attacks exploit the inability of LLMs to distinguish between trusted instructions and untrusted data. The five attack strategies represent an escalating sophistication hierarchy:
- Naive: Direct concatenation — `clean_data + inject_instruction + inject_data`
- Escape: Boundary exploitation — `clean_data + "\n" + inject_instruction + inject_data`
- Ignore: Instruction override — `clean_data + "Ignore previous instructions." + inject_instruction + inject_data`
- Fake Completion: Context manipulation — `clean_data + fake_output + inject_instruction + inject_data`
- Combine: Maximum pressure — All of the above combined