Implementation:Helicone ProxyForwarder

Domains: LLM Observability, Proxy Forwarding, Provider Integration
Last Updated: 2026-02-14 00:00 GMT

Overview

proxyForwarder is the concrete proxy-forwarding function that sends intercepted LLM requests to upstream providers. It is provided by the ProxyForwarder module of the Helicone Cloudflare Worker.

Description

proxyForwarder is the central asynchronous function in the Helicone worker proxy pipeline. It receives a validated RequestWrapper, the worker environment, an execution context, and a provider identifier, then orchestrates the full proxy lifecycle (a condensed code sketch follows the list):

  1. Request mapping: Uses HeliconeProxyRequestMapper to transform the wrapped request into a HeliconeProxyRequest.
  2. Cache read: If cache headers are present and the organization has caching enabled, attempts to serve from CACHE_KV.
  3. Rate limiting: Evaluates token-bucket rate limit policies from headers or database configuration via checkBucketRateLimit.
  4. Prompt security: For OpenAI chat completions, optionally scans the latest user message for threats using the PromptSecurityClient.
  5. Content moderation: Optionally runs OpenAI moderation on the user message.
  6. Provider call: Delegates to handleProxyRequest which performs the actual HTTP fetch to the upstream provider.
  7. Cache write: On 200 responses with cache enabled, saves the response body and latency to KV.
  8. Async logging: Uses ctx.waitUntil to fire-and-forget the logging pipeline, which authenticates, resolves organization context, computes cost, logs via HeliconeProducer, records rate limit usage, and sends traces to DataDog.
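
The lifecycle above compresses into the following paraphrased TypeScript sketch. It is not the actual source: HeliconeProxyRequestMapper, checkBucketRateLimit, handleProxyRequest, and CACHE_KV are named in the steps above, but the method names, result shapes, and the readFromCache/saveToCache helpers are illustrative assumptions.

// Paraphrased sketch of the lifecycle above; not the actual source.
// Helpers and shapes marked "assumed" are illustrative, not Helicone's API.
export async function proxyForwarder(
  request: RequestWrapper,
  env: Env,
  ctx: ExecutionContext,
  provider: Provider,
  escrowInfo?: EscrowInfo
): Promise<Response> {
  // 1. Request mapping (the mapper class is named above; the method name is assumed).
  const proxyRequest = await new HeliconeProxyRequestMapper(
    request,
    provider,
    env
  ).tryToProxyRequest();

  // 2. Cache read (the cacheSettings shape is assumed).
  if (proxyRequest.cacheSettings?.shouldReadFromCache) {
    const cached = await readFromCache(proxyRequest, env.CACHE_KV); // assumed helper
    if (cached !== null) return cached;
  }

  // 3. Rate limiting via token-bucket policies (checkBucketRateLimit is
  //    named above; the result shape is assumed).
  const rateLimit = await checkBucketRateLimit(proxyRequest, env);
  if (rateLimit.isExceeded) {
    return new Response("Rate limit exceeded", { status: 429 });
  }

  // 4.-5. Optional prompt security and moderation checks on the latest
  //       user message; both can short-circuit with an error response (elided).

  // 6. Provider call; escrowInfo, when present, would flow into the
  //    provider call for passthrough billing (result shape assumed).
  const { response, loggable } = await handleProxyRequest(proxyRequest);

  // 7. Cache write for successful responses.
  if (response.status === 200 && proxyRequest.cacheSettings?.shouldSaveToCache) {
    ctx.waitUntil(saveToCache(proxyRequest, response.clone(), env.CACHE_KV)); // assumed helper
  }

  // 8. Fire-and-forget logging after the response is returned.
  ctx.waitUntil(loggable.log(env)); // assumed shape of the logging pipeline

  return response;
}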

The function returns the provider's response to the caller, with injected Helicone headers (cache status, rate limit counters).
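
A minimal sketch of that injection, using the Cloudflare Workers pattern of re-wrapping a response to make its headers mutable. Helicone-Cache appears in the I/O contract below; the HIT value and any rate-limit header names here are assumptions.

// Re-wrap the upstream response so its headers become mutable, then
// inject Helicone metadata before returning it to the caller.
function injectHeliconeHeaders(upstream: Response, cacheHit: boolean): Response {
  const response = new Response(upstream.body, upstream);
  response.headers.set("Helicone-Cache", cacheHit ? "HIT" : "MISS");
  // Rate-limit counter headers would be set the same way (names assumed).
  return response;
}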

Usage

Call proxyForwarder from a provider-specific route handler after the request has been intercepted and the provider type has been determined by the router.

Code Reference

Source Location

  • Repository: Helicone
  • File: worker/src/lib/HeliconeProxyRequest/ProxyForwarder.ts (lines 45-461)

Signature

export async function proxyForwarder(
  request: RequestWrapper,
  env: Env,
  ctx: ExecutionContext,
  provider: Provider,
  escrowInfo?: EscrowInfo
): Promise<Response>

Import

import { proxyForwarder } from "../lib/HeliconeProxyRequest/ProxyForwarder";

I/O Contract

Inputs

  • request (RequestWrapper, required): The validated request wrapper containing parsed headers, body buffer, authentication state, and Helicone metadata.
  • env (Env, required): Cloudflare Workers environment bindings with credentials for Supabase, S3, Upstash Kafka/SQS, the cache KV, DataDog, and the rate-limiter durable objects.
  • ctx (ExecutionContext, required): The Cloudflare Workers execution context, used for ctx.waitUntil() so async logging runs after the response is returned.
  • provider (Provider, required): The LLM provider identifier (e.g. "OPENAI", "ANTHROPIC", "AZURE", "GOOGLE").
  • escrowInfo (EscrowInfo, optional): Escrow billing context for AI Gateway passthrough billing scenarios.
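
For orientation, the Env bindings described above might be declared roughly as follows. Only CACHE_KV is named elsewhere on this page; every other binding name is an assumption.

// Rough shape of the worker environment. Binding names other than
// CACHE_KV are illustrative assumptions, not Helicone's actual interface.
interface Env {
  CACHE_KV: KVNamespace;                // response cache (steps 2 and 7)
  RATE_LIMITER: DurableObjectNamespace; // token-bucket rate limiting
  SUPABASE_URL: string;
  SUPABASE_SERVICE_ROLE_KEY: string;
  DATADOG_API_KEY: string;
  // ...plus S3 and Upstash Kafka/SQS credentials, per the table above
}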

Outputs

  • response (Response): The HTTP response from the upstream LLM provider (or a cached, rate-limited, or error response), with additional Helicone headers injected (e.g. Helicone-Cache: MISS and rate-limit counters).
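
Callers can read the injected metadata off the returned Response, for example:

// Inspect the injected cache-status header on the returned response.
const response = await proxyForwarder(requestWrapper, env, ctx, "OPENAI");
const cacheStatus = response.headers.get("Helicone-Cache"); // e.g. "MISS"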

Usage Examples

Basic Usage

import { proxyForwarder } from "../lib/HeliconeProxyRequest/ProxyForwarder";
import { RequestWrapper } from "../lib/RequestWrapper";

// Inside a route handler: the router passes the matched request first,
// followed by the extra arguments supplied when the router was invoked.
router.all("*", async (_, requestWrapper: RequestWrapper, env: Env, ctx: ExecutionContext) => {
  // Forward to the upstream provider; "OPENAI" selects the OpenAI mapping.
  return await proxyForwarder(requestWrapper, env, ctx, "OPENAI");
});
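
Routes for other providers follow the same pattern. A sketch, assuming the router distinguishes providers by path prefix (the prefix itself is an assumption):

// Provider chosen per route; the "/anthropic/*" prefix is illustrative.
router.all("/anthropic/*", async (_, requestWrapper: RequestWrapper, env: Env, ctx: ExecutionContext) => {
  return await proxyForwarder(requestWrapper, env, ctx, "ANTHROPIC");
});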

