Principle:Liu00222 Open Prompt Injection Model Creation

From Leeroopedia
Knowledge Sources
Domains NLP, LLM, Model_Loading
Last Updated 2026-02-14 15:00 GMT

Overview

A factory pattern for instantiating LLM wrapper objects with a unified query interface across diverse providers (OpenAI, Google, local HuggingFace models).

Description

Model Creation provides a single entry point to instantiate any supported language model behind a common Model.query(prompt) -> str interface. This abstraction is essential for prompt injection research because it enables testing attacks and defenses across multiple model architectures (GPT-3.5/4, PaLM 2, Flan-T5, Llama 2/3, Vicuna, DeepSeek, InternLM) without changing experiment code. Each wrapper handles provider-specific API calls, tokenization, and response formatting internally.
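The shared contract can be sketched as an abstract base class. This is an illustrative sketch, not the repository's actual code: the `config["model_info"]["provider"]` key appears in this page's dispatch logic, while the `name` field and the class names here are assumptions.

```python
from abc import ABC, abstractmethod


class Model(ABC):
    """Illustrative base class: every provider wrapper exposes query()."""

    def __init__(self, config: dict):
        # provider-agnostic settings read from the shared config schema
        # ("name" is an assumed field; "provider" appears in the dispatch logic)
        self.provider = config["model_info"]["provider"]
        self.name = config["model_info"]["name"]

    @abstractmethod
    def query(self, prompt: str) -> str:
        """Send one prompt and return the model's text response."""


class EchoModel(Model):
    """Toy wrapper standing in for a real provider wrapper (GPT, PaLM2, ...)."""

    def query(self, prompt: str) -> str:
        # a real wrapper would call the provider API and format the response
        return f"[{self.name}] {prompt}"
```

Because experiment code depends only on `Model.query`, swapping model architectures reduces to swapping which concrete wrapper the factory returns.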

Usage

Use this principle when you need to instantiate an LLM for querying in an experiment. The model creation step occurs after configuration loading and before application assembly.

Theoretical Basis

The pattern follows Abstract Factory with runtime dispatch based on the provider string in configuration:

Pseudo-code Logic:

# Factory dispatch on the provider string in configuration
def create_model(config: dict):
    provider = config["model_info"]["provider"]
    if provider == "openai":
        return GPT(config)
    elif provider == "google":
        return PaLM2(config)
    elif provider == "flan":
        return Flan(config)
    # ... etc. for each supported provider
    raise ValueError(f"Unsupported provider: {provider}")

All returned models satisfy the interface: `model.query(prompt: str) -> str`
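A dict-based registry is a common way to implement the same dispatch without a growing if/elif chain. The sketch below is a minimal stand-in, not the repository's implementation: `StubGPT` and `MODEL_REGISTRY` are hypothetical names, and a real registry would map `"google"` to `PaLM2`, `"flan"` to `Flan`, and so on.

```python
class StubGPT:
    """Stand-in for a real provider wrapper such as GPT(config)."""

    def __init__(self, config: dict):
        self.name = config["model_info"]["name"]

    def query(self, prompt: str) -> str:
        # a real wrapper would call the provider API here
        return f"{self.name} answered"


# provider string -> wrapper class (real code would list every provider)
MODEL_REGISTRY = {"openai": StubGPT}


def create_model(config: dict):
    provider = config["model_info"]["provider"]
    try:
        return MODEL_REGISTRY[provider](config)
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider!r}") from None


model = create_model({"model_info": {"provider": "openai", "name": "gpt-3.5-turbo"}})
print(model.query("test"))  # prints "gpt-3.5-turbo answered"
```

Registering a new provider then means adding one entry to the registry rather than editing the dispatch function, which keeps experiment code untouched as the set of supported models grows.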
