Implementation:Promptfoo synthesize

From Leeroopedia
Knowledge Sources
Domains Security_Testing, Adversarial_ML
Last Updated 2026-02-14 08:00 GMT

Overview

A concrete tool, provided by the Promptfoo red team framework, for synthesizing adversarial test cases by orchestrating purpose extraction, plugin execution, and strategy application.

Description

The synthesize function is the main orchestrator for adversarial test generation. It extracts the target system's purpose and entities from its prompts, runs each selected plugin to generate test cases, applies the configured strategies to transform those cases, and returns the complete set with metadata.
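The three-stage flow described above (purpose extraction, plugin execution, strategy application) can be sketched in simplified form. The helpers extractPurpose, runPlugin, and applyStrategy below are hypothetical stand-ins for Promptfoo's internal steps, not actual Promptfoo APIs:

```typescript
// Simplified sketch of the synthesize orchestration flow.
// extractPurpose, runPlugin, and applyStrategy are hypothetical
// stand-ins for Promptfoo's internal helpers, not real APIs.

interface TestCase {
  pluginId: string;
  strategyId?: string;
  vars: Record<string, string>;
}

function extractPurpose(prompts: string[]): string {
  // The real implementation asks an LLM; here we just echo the first prompt.
  return `Assist users per: ${prompts[0]}`;
}

function runPlugin(pluginId: string, numTests: number, purpose: string): TestCase[] {
  // Real plugins generate adversarial inputs tailored to the extracted purpose.
  return Array.from({ length: numTests }, (_, i) => ({
    pluginId,
    vars: { query: `[${pluginId} attack ${i} targeting: ${purpose}]` },
  }));
}

function applyStrategy(strategyId: string, cases: TestCase[]): TestCase[] {
  // Strategies transform base cases (e.g. wrap them in a jailbreak template).
  return cases.map((c) => ({ ...c, strategyId }));
}

const purpose = extractPurpose(['You are a support agent.']);
const baseCases = [
  ...runPlugin('prompt-injection', 2, purpose),
  ...runPlugin('pii', 1, purpose),
];
const allCases = [...baseCases, ...applyStrategy('jailbreak', baseCases)];
console.log(allCases.length); // 3 base + 3 transformed = 6
```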

Usage

Import this function for programmatic red team test generation. It is called internally by doGenerateRedteam and can also be used directly in custom generation pipelines.

Code Reference

Source Location

  • Repository: promptfoo
  • File: src/redteam/index.ts
  • Lines: L700-1353

Signature

export async function synthesize({
  abortSignal,
  delay,
  entities: entitiesOverride,
  injectVar,
  inputs,
  language,
  maxConcurrency,
  plugins,
  prompts,
  provider,
  purpose: purposeOverride,
  strategies,
  targetIds,
  showProgressBar,
  excludeTargetOutputFromAgenticAttackGeneration,
  testGenerationInstructions,
}: SynthesizeOptions): Promise<{
  purpose: string;
  entities: string[];
  testCases: TestCaseWithPlugin[];
  injectVar: string;
  failedPlugins: FailedPluginInfo[];
}>

Import

import { synthesize } from './redteam';

I/O Contract

Inputs

  • plugins (RedteamPluginObject[], required): Plugins with IDs and test counts
  • prompts (string[], required): System prompts of the target (minimum 1)
  • strategies (RedteamStrategyObject[], required): Attack strategies to apply
  • provider (ApiProvider, required): LLM provider for test generation
  • injectVar (string, optional): Variable name for attack injection (default: auto-detected)
  • maxConcurrency (number, optional): Maximum parallel plugin executions (default: 1)
  • language (string, optional): Language for generated attacks
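The maxConcurrency option bounds how many plugin generations run in parallel. A minimal sketch of that concurrency-limiting pattern, as an illustration of the concept rather than Promptfoo's internal implementation:

```typescript
// Run async tasks with at most `limit` in flight at once,
// a sketch of the pattern behind maxConcurrency (not Promptfoo internals).
async function runLimited<T>(
  tasks: (() => Promise<T>)[],
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  async function worker(): Promise<void> {
    // Each worker pulls the next unstarted task until none remain.
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
}

// Example: three fake "plugin" tasks run at most two at a time.
const tasks = ['prompt-injection', 'pii', 'harmful'].map(
  (id) => async () => `${id}: done`,
);
runLimited(tasks, 2).then((out) => console.log(out));
```

Results are written by index, so output order matches input order even though completion order may differ.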

Outputs

  • purpose (string): Extracted or overridden system purpose
  • entities (string[]): Extracted entity names for realistic attacks
  • testCases (TestCaseWithPlugin[]): All generated adversarial test cases
  • injectVar (string): The injection variable name used
  • failedPlugins (FailedPluginInfo[]): Plugins that failed during generation

Usage Examples

Generate Red Team Tests

import { synthesize } from './redteam';

const result = await synthesize({
  plugins: [
    { id: 'prompt-injection', numTests: 5 },
    { id: 'pii', numTests: 3 },
  ],
  prompts: ['You are a helpful customer support agent for Acme Corp.'],
  strategies: [{ id: 'basic' }, { id: 'jailbreak' }],
  provider: myLlmProvider,
  maxConcurrency: 2,
});

console.log(`Purpose: ${result.purpose}`);
console.log(`Generated ${result.testCases.length} test cases`);
console.log(`Failed plugins: ${result.failedPlugins.length}`);
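The returned test cases can be grouped by originating plugin for reporting. The sketch below assumes each case exposes a metadata.pluginId field; TestCaseWithPlugin's exact shape may differ, so check the type in src/redteam before relying on it:

```typescript
// Group generated test cases by their originating plugin.
// The metadata.pluginId field is an assumption about TestCaseWithPlugin's
// shape, used here only for illustration.
interface GeneratedCase {
  metadata: { pluginId: string };
}

function countByPlugin(cases: GeneratedCase[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const c of cases) {
    counts[c.metadata.pluginId] = (counts[c.metadata.pluginId] ?? 0) + 1;
  }
  return counts;
}

const cases: GeneratedCase[] = [
  { metadata: { pluginId: 'prompt-injection' } },
  { metadata: { pluginId: 'prompt-injection' } },
  { metadata: { pluginId: 'pii' } },
];
console.log(countByPlugin(cases)); // { 'prompt-injection': 2, pii: 1 }
```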

Related Pages

Implements Principle

Requires Environment
