
Principle:Liu00222 Open Prompt Injection Configuration Loading

From Leeroopedia
Knowledge Sources
Domains: Configuration, Data_Loading
Last Updated: 2026-02-14 15:00 GMT

Overview

A design pattern for loading structured experiment configurations from JSON files to parameterize machine learning experiment pipelines.

Description

Configuration Loading is the practice of externalizing experiment parameters (model selection, hyperparameters, API keys, dataset paths) into structured JSON files rather than hardcoding them. This enables systematic experimentation across different model-task combinations without modifying source code. In the prompt injection research context, this allows sweeping across multiple LLM providers, attack strategies, and defense mechanisms through configuration alone.

Usage

Use this principle at the start of any experiment pipeline where model identity, task selection, or defense parameters must be specified. It is the foundational step that parameterizes all downstream components (model creation, task loading, attacker instantiation).
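The entry-point role described above can be sketched as follows. This is a minimal illustration, not the repository's actual API: `load_config` and `run_experiment` are hypothetical names, and only the `model_info`/`params` keys come from the schemas documented on this page.

```python
import json
import sys

def load_config(config_path):
    """Load a JSON experiment configuration from disk."""
    with open(config_path) as f:
        return json.load(f)

def run_experiment(config):
    """Dispatch the loaded config to downstream components.

    In a real pipeline this would call factory functions
    (e.g. for model creation, task loading, attacker
    instantiation); here we only show the dispatch shape.
    """
    model_cfg = config["model_info"]          # provider, name
    params = config.get("params", {})         # max_output_tokens, device, ...
    print(f"Running {model_cfg['provider']}/{model_cfg['name']} "
          f"with max_output_tokens={params.get('max_output_tokens')}")

if __name__ == "__main__":
    run_experiment(load_config(sys.argv[1]))
```

Keeping the entry point this thin is the point of the pattern: swapping providers or hyperparameters means editing a JSON file, not this script.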

Theoretical Basis

Configuration loading follows the Dependency Injection pattern: runtime behavior is determined by external configuration rather than compile-time constants. The JSON schema serves as a contract between the configuration files and the factory functions that consume them.

Pseudo-code Logic:

# Abstract pattern: runtime behavior comes from external config, not constants
config = load_json(config_path)       # parse the JSON file into a dict
component = factory_function(config)  # build the component the config describes

Key config schemas in this repository:

  • Model config: Contains `model_info` (provider, name), `params` (max_output_tokens, device), `api_key_info`
  • Task config: Contains `dataset_info` (dataset name, split, label mappings)
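The model-config contract above can be sketched as a plain dictionary plus a small validating check. The top-level key names (`model_info`, `params`, `api_key_info`) follow the bullets above; the nested field values and the `validate_model_config` helper are illustrative assumptions, not the repository's code.

```python
# Hypothetical model config shaped like the documented schema.
MODEL_CONFIG = {
    "model_info": {"provider": "openai", "name": "gpt-3.5-turbo"},  # assumed values
    "params": {"max_output_tokens": 256, "device": "cuda"},
    "api_key_info": {"api_keys": ["YOUR_API_KEY"]},
}

def validate_model_config(config):
    """Enforce the schema-as-contract idea: fail fast if a
    required top-level section is missing before any factory
    function tries to consume the config."""
    for section in ("model_info", "params", "api_key_info"):
        if section not in config:
            raise KeyError(f"missing config section: {section!r}")
    return config
```

Validating at load time keeps schema errors at the pipeline's entry point instead of surfacing later as obscure failures inside a factory function.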

Related Pages

Implemented By
