Implementation:Helicone Helicone ProxyForwarder
| Knowledge Sources | |
|---|---|
| Domains | LLM Observability, Proxy Forwarding, Provider Integration |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
Concrete proxy forwarding function for sending intercepted LLM requests to upstream providers, provided by the ProxyForwarder module in the Helicone Cloudflare Worker.
Description
`proxyForwarder` is the central asynchronous function in the Helicone worker proxy pipeline. It receives a validated `RequestWrapper`, the worker environment, an execution context, and a provider identifier, then orchestrates the full proxy lifecycle:
- Request mapping: Uses `HeliconeProxyRequestMapper` to transform the wrapped request into a `HeliconeProxyRequest`.
- Cache read: If cache headers are present and the organization has caching enabled, attempts to serve from `CACHE_KV`.
- Rate limiting: Evaluates token-bucket rate limit policies from headers or database configuration via `checkBucketRateLimit`.
- Prompt security: For OpenAI chat completions, optionally scans the latest user message for threats using the `PromptSecurityClient`.
- Content moderation: Optionally runs OpenAI moderation on the user message.
- Provider call: Delegates to `handleProxyRequest`, which performs the actual HTTP fetch to the upstream provider.
- Cache write: On 200 responses with cache enabled, saves the response body and latency to KV.
- Async logging: Uses `ctx.waitUntil` to fire-and-forget the logging pipeline, which authenticates, resolves organization context, computes cost, logs via `HeliconeProducer`, records rate limit usage, and sends traces to DataDog.
The function returns the provider's response to the caller, with injected Helicone headers (cache status, rate limit counters).
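The control flow above can be condensed into a minimal sketch. This is not Helicone's actual implementation; `tryCacheRead`-style steps are collapsed into plain flags, and all helper names here are illustrative stubs:

```typescript
// Hypothetical, simplified sketch of the proxy lifecycle described above.
// The short-circuit ordering (cache hit, then rate limit, then provider
// fetch) mirrors the documented pipeline; everything else is stubbed.
type ProxySource = "cache" | "rateLimit" | "provider";

interface SketchResult {
  source: ProxySource;
  status: number;
}

async function forwardSketch(
  _body: string,
  cacheHit: boolean,
  rateLimited: boolean
): Promise<SketchResult> {
  // 1. Cache read: serve from KV when caching is enabled and a hit exists.
  if (cacheHit) {
    return { source: "cache", status: 200 };
  }
  // 2. Rate limiting: short-circuit with 429 when the bucket is exhausted.
  if (rateLimited) {
    return { source: "rateLimit", status: 429 };
  }
  // 3. Provider call: the upstream HTTP fetch would happen here (stubbed).
  //    Cache write and async logging would follow on a 200 response.
  return { source: "provider", status: 200 };
}
```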
Usage
Call `proxyForwarder` from a provider-specific route handler after the request has been intercepted and the provider type has been determined by the router.
Code Reference
Source Location
- Repository: Helicone
- File: `worker/src/lib/HeliconeProxyRequest/ProxyForwarder.ts` (lines 45-461)
Signature
```typescript
export async function proxyForwarder(
  request: RequestWrapper,
  env: Env,
  ctx: ExecutionContext,
  provider: Provider,
  escrowInfo?: EscrowInfo
): Promise<Response>
```
Import
```typescript
import { proxyForwarder } from "../lib/HeliconeProxyRequest/ProxyForwarder";
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| request | `RequestWrapper` | Yes | The validated request wrapper containing parsed headers, body buffer, authentication state, and Helicone metadata. |
| env | `Env` | Yes | Cloudflare Workers environment bindings with credentials for Supabase, S3, Upstash Kafka/SQS, cache KV, DataDog, and rate limiter durable objects. |
| ctx | `ExecutionContext` | Yes | The Cloudflare Workers execution context, used for `ctx.waitUntil()` to run async logging after the response is returned. |
| provider | `Provider` | Yes | The LLM provider identifier (e.g. `"OPENAI"`, `"ANTHROPIC"`, `"AZURE"`, `"GOOGLE"`, etc.). |
| escrowInfo | `EscrowInfo` | No | Optional escrow billing context for AI Gateway passthrough billing scenarios. |
Outputs
| Name | Type | Description |
|---|---|---|
| response | `Response` | The HTTP response from the upstream LLM provider (or a cached/rate-limited/error response), with additional Helicone headers injected (e.g. `Helicone-Cache: MISS`, rate limit headers). |
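Since the injected headers ride on a standard `Response`, a caller can inspect them with the Fetch API. A minimal illustration, assuming the header names above (`Helicone-Cache` is documented here; the exact rate-limit header name is an assumption for demonstration):

```typescript
// Hypothetical illustration: read Helicone headers off a proxied Response.
// The Response/Headers globals are the standard Fetch API, available in
// Cloudflare Workers and Node 18+.
const proxied = new Response('{"ok":true}', {
  status: 200,
  headers: {
    "Helicone-Cache": "MISS",
    "Helicone-RateLimit-Remaining": "99", // assumed header name, for illustration
  },
});

const cacheStatus = proxied.headers.get("Helicone-Cache"); // → "MISS"
const remaining = Number(proxied.headers.get("Helicone-RateLimit-Remaining"));
```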
Usage Examples
Basic Usage
```typescript
import { proxyForwarder } from "../lib/HeliconeProxyRequest/ProxyForwarder";
import { RequestWrapper } from "../lib/RequestWrapper";

// Inside a route handler:
router.all("*", async (_, requestWrapper: RequestWrapper, env: Env, ctx: ExecutionContext) => {
  return await proxyForwarder(requestWrapper, env, ctx, "OPENAI");
});
```
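When a single worker fronts several providers, the `provider` argument is typically derived from the route or hostname before `proxyForwarder` is called. A hedged sketch of such a dispatch helper; the hostname-to-provider mapping below is purely illustrative and not Helicone's actual routing logic:

```typescript
// Hypothetical helper mapping a request hostname to a provider identifier.
// The substring checks are illustrative only; a real router would match
// its own configured subdomains or route patterns.
type ProviderId = "OPENAI" | "ANTHROPIC" | "AZURE" | "GOOGLE";

function providerFromHost(hostname: string): ProviderId {
  if (hostname.includes("anthropic")) return "ANTHROPIC";
  if (hostname.includes("azure")) return "AZURE";
  if (hostname.includes("google")) return "GOOGLE";
  return "OPENAI"; // default upstream
}
```

The resolved identifier would then be passed as the fourth argument to `proxyForwarder` in place of the hard-coded `"OPENAI"` shown above.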
Related Pages
Implements Principle
Requires Environment
- Environment:Helicone_Helicone_Node_20_TypeScript_Runtime
- Environment:Helicone_Helicone_Cloudflare_Workers_Runtime
- Environment:Helicone_Helicone_Wrangler_CLI