
Principle:Liu00222 Open Prompt Injection Model Query Interface

From Leeroopedia
Knowledge Sources
Domains NLP, LLM, Software_Design
Last Updated 2026-02-14 15:00 GMT

Overview

An abstract interface pattern that defines a uniform query contract for all language model wrappers, enabling provider-agnostic prompt processing across diverse LLM backends.

Description

The Model Query Interface establishes a common contract where any LLM wrapper must implement a `query(prompt: str) -> str` method. This abstraction decouples the experiment pipeline from specific LLM providers. Concrete implementations handle provider-specific details: API authentication (OpenAI, Google), local model loading and inference (HuggingFace Transformers), chat template formatting (Llama 3, Vicuna), response post-processing (DeepSeek think-block removal), and error handling with retries.
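One of the provider-specific responsibilities listed above, DeepSeek think-block removal, can be sketched as a concrete wrapper. The class and backend names below are hypothetical illustrations, not the repository's actual identifiers; the backend is stubbed as any `str -> str` callable.

```python
import re


class Model:
    """Abstract query contract: every wrapper maps a prompt string to a response string."""

    def query(self, prompt: str) -> str:
        raise NotImplementedError


class DeepSeekWrapper(Model):
    """Hypothetical wrapper showing response post-processing: strips the
    <think>...</think> block that reasoning models emit before the answer."""

    def __init__(self, backend):
        self.backend = backend  # any callable str -> str, e.g. an API client

    def query(self, prompt: str) -> str:
        raw = self.backend(prompt)
        # Remove think blocks so downstream evaluation sees only the final answer.
        return re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()


# Usage with a stubbed backend:
stub = lambda p: "<think>reasoning steps...</think>Paris"
answer = DeepSeekWrapper(stub).query("Capital of France?")  # "Paris"
```

Because post-processing lives inside the wrapper, callers of `query` never see provider-specific artifacts.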

Usage

Use this interface when implementing a new model wrapper or when querying a model directly (without the Application defense pipeline). The interface is consumed by `Application.query` for target task evaluation and directly by `main.py` for injected task baseline evaluation.
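A minimal sketch of the second consumption path, querying a model directly for a baseline run. The helper name and stub model are assumptions for illustration; the real `main.py` iterates over its own dataset of injected prompts.

```python
class Model:
    """The shared query contract consumed by both callers."""

    def query(self, prompt: str) -> str:
        raise NotImplementedError


class EchoModel(Model):
    """Stub strategy standing in for any real wrapper."""

    def query(self, prompt: str) -> str:
        return f"response to: {prompt}"


def evaluate_injected_baseline(model: Model, injected_prompts):
    """Hypothetical helper mirroring the direct-use path: query the model on
    each injected prompt with no defense pipeline in between."""
    return [model.query(p) for p in injected_prompts]


results = evaluate_injected_baseline(EchoModel(), ["task A", "task B"])
```

The baseline caller depends only on `Model.query`, so any wrapper can be substituted without changing the evaluation loop.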

Theoretical Basis

The pattern is an instance of the Strategy design pattern, in which each model wrapper is an interchangeable strategy for text generation:

Pseudo-code Logic:

# Abstract interface
class Model:
    def query(self, prompt: str) -> str:
        """Send prompt to LLM, return response text."""
        raise NotImplementedError

# Concrete strategies (openai_api_call and huggingface_generate are placeholders)
class GPT(Model):
    def query(self, prompt: str) -> str:
        return openai_api_call(prompt)       # remote provider API request

class Flan(Model):
    def query(self, prompt: str) -> str:
        return huggingface_generate(prompt)  # local model inference
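The strategies compose: error handling with retries, another responsibility named in the Description, can wrap any concrete strategy without touching it. The decorator and stub below are an illustrative sketch with hypothetical names, not code from the repository; real wrappers would catch provider-specific exception types rather than bare `Exception`.

```python
import time


class RetryingModel:
    """Hypothetical decorator adding bounded retries around any Model's query."""

    def __init__(self, inner, max_retries: int = 3, backoff: float = 0.0):
        self.inner = inner            # any object with query(prompt) -> str
        self.max_retries = max_retries
        self.backoff = backoff        # base delay for exponential backoff

    def query(self, prompt: str) -> str:
        last_err = None
        for attempt in range(self.max_retries):
            try:
                return self.inner.query(prompt)
            except Exception as err:  # in practice: provider-specific errors only
                last_err = err
                time.sleep(self.backoff * (2 ** attempt))
        raise last_err


class FlakyModel:
    """Stub strategy that fails on its first call, then succeeds."""

    def __init__(self):
        self.calls = 0

    def query(self, prompt: str) -> str:
        self.calls += 1
        if self.calls == 1:
            raise RuntimeError("transient API error")
        return "ok"


flaky = FlakyModel()
result = RetryingModel(flaky).query("hello")  # succeeds on the second attempt
```

Because the decorator exposes the same `query` signature, callers cannot tell a retried wrapper from a plain one.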

Related Pages

Implemented By
