Principle: HKUDS AI Trader Parallel Agent Execution
| Knowledge Sources | |
|---|---|
| Domains | Concurrency, Backtesting |
| Last Updated | 2026-02-09 14:00 GMT |
Overview
A concurrency pattern that runs multiple LLM trading agents in parallel subprocesses for comparative backtesting.
Description
Parallel Agent Execution enables running multiple LLM-powered trading agents simultaneously, each with a different model configuration, over the same date range. This is essential for comparative evaluation: different LLMs (GPT-4o, Claude, DeepSeek, etc.) trade the same market with the same tools and rules, allowing direct performance comparison.
The pattern uses asyncio subprocess spawning: for each enabled model in the configuration, a new Python subprocess is launched running the same script, with a --signature flag telling the child which single model to run. All subprocesses run concurrently, and the parent waits for all of them to complete via asyncio.gather().
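The spawn-and-gather step can be sketched with the asyncio standard library alone. The script name `run_trading.py`, the config shape, and the model signatures below are illustrative assumptions, not the project's actual names:

```python
import asyncio
import sys

# Assumed config shape: a list of models, each with a signature and an enabled flag.
config = {
    "models": [
        {"signature": "gpt-4o", "enabled": True},
        {"signature": "claude", "enabled": True},
        {"signature": "deepseek", "enabled": False},
    ]
}

async def run_agent(signature: str) -> int:
    # Launch the same script in a child process, isolated by --signature.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "run_trading.py", f"--signature={signature}"
    )
    # Return the child's exit code once it finishes.
    return await proc.wait()

async def run_all() -> list[int]:
    enabled = [m for m in config["models"] if m["enabled"]]
    # gather() runs all agents concurrently and waits for every exit code.
    return await asyncio.gather(*(run_agent(m["signature"]) for m in enabled))
```

Because each child is a full OS process, agents cannot share in-memory state; anything they must agree on (dates, tools, market data) has to come from the shared configuration.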
Usage
Use this principle when comparing multiple LLM trading agents on the same market and date range. It is the entry point for the multi-agent comparison workflow.
Theoretical Basis
# Pseudocode for parallel agent execution
enabled_models = [m for m in config["models"] if m["enabled"]]
if len(enabled_models) <= 1:
    # Single model: run in the current process
    run_in_process(enabled_models[0])
else:
    # Multiple models: spawn one subprocess per model
    tasks = []
    for model in enabled_models:
        proc = spawn_subprocess(script, f"--signature={model['signature']}")
        tasks.append(proc.wait())
    await asyncio.gather(*tasks)
Key properties:
- Process isolation: Each agent runs in its own subprocess with its own runtime state
- Shared config: All agents read the same configuration file
- Signature-based routing: The --signature flag filters which model to run in each subprocess
- Graceful single-model fallback: If only one model is enabled, runs in-process without subprocesses
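The child-process side of signature-based routing can be sketched with argparse: the subprocess parses its own --signature flag and selects the one matching model from the shared configuration. The `MODELS` list and `select_model` helper are hypothetical names for illustration:

```python
import argparse

# Stand-in for the shared configuration every subprocess reads.
MODELS = [
    {"signature": "gpt-4o", "enabled": True},
    {"signature": "claude", "enabled": True},
]

def select_model(argv: list[str]) -> dict:
    """Return the single model this subprocess should run."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--signature", default=None)
    args = parser.parse_args(argv)
    if args.signature is None:
        # No flag: this is the parent (or single-model) invocation.
        raise SystemExit("no --signature given; expected a child invocation")
    matches = [m for m in MODELS if m["signature"] == args.signature]
    if not matches:
        raise SystemExit(f"unknown signature: {args.signature}")
    return matches[0]
```

Routing on a string signature rather than a list index keeps the parent and child decoupled: the child only needs the shared config file and its own flag, not the parent's ordering of models.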