

Principle: Helicone Application Services

From Leeroopedia
Knowledge Sources
Domains: Local Development, Services
Last Updated: 2026-02-14 00:00 GMT

Overview

Application services are the three primary Helicone processes -- Jawn (backend API), Web (frontend dashboard), and Worker (LLM proxy) -- that developers start locally to build and test features.

Description

Helicone's application layer consists of three independently startable services:

Jawn (Backend API) is an Express.js server at valhalla/jawn/. It provides the REST API consumed by the web dashboard, handles request processing, and communicates with PostgreSQL, ClickHouse, and MinIO. The yarn dev script uses the concurrently package to run the nodemon watcher and a Python type generator in parallel. Jawn listens on port 8585 by default.
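As a minimal sketch of starting Jawn locally (the directory and script name come from this page; it assumes the infrastructure containers are already up and migrated):

```shell
# Start the Jawn backend API for local development.
cd valhalla/jawn

# `yarn dev` runs the nodemon watcher and the Python type
# generator side by side via the concurrently package.
yarn dev
```

Once started, Jawn serves its REST API on http://localhost:8585 by default.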

Web (Frontend Dashboard) is a Next.js 14 application at web/. The yarn dev:local script starts the Next.js dev server with Turbopack on port 3000. For Better Auth integration, yarn dev:better-auth starts on port 3008 using a dedicated .env.better-auth file. The web service connects to Jawn via the NEXT_PUBLIC_HELICONE_JAWN_SERVICE environment variable.
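A sketch of the two dashboard startup paths described above (script names are taken from this page; it assumes NEXT_PUBLIC_HELICONE_JAWN_SERVICE already points at a running Jawn instance):

```shell
# Start the Next.js dashboard for local development.
cd web

# Standard dev server with Turbopack, on port 3000:
yarn dev:local

# Alternatively, for Better Auth work, the dashboard runs on
# port 3008 and reads its own .env.better-auth file:
# yarn dev:better-auth
```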

Worker (LLM Proxy) is a Cloudflare Workers application at worker/, started via npx wrangler dev with environment-specific variables. Different worker types (OPENAI_PROXY, ANTHROPIC_PROXY, HELICONE_API, GATEWAY_API) are selected via the WORKER_TYPE variable. The default OpenAI proxy listens on port 8787. The worker intercepts LLM API calls, logs them, and forwards them to the upstream provider.
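A sketch of starting the worker as the default OpenAI proxy (the WORKER_TYPE values and port come from this page; exact flags may differ depending on the repository's wrangler.toml):

```shell
# Start the worker as the OpenAI proxy on the default port.
cd worker
npx wrangler dev --var WORKER_TYPE:OPENAI_PROXY --port 8787

# Other worker types are selected the same way, e.g.:
# npx wrangler dev --var WORKER_TYPE:ANTHROPIC_PROXY
```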

All three services require the infrastructure stack (PostgreSQL, ClickHouse, MinIO) to be running and properly migrated.

Usage

Start these services after infrastructure is running and migrations are applied. In typical development, you start Jawn first (the web frontend depends on it), then the web dashboard. The worker is only needed when testing the LLM proxy flow.
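The startup order above can be sketched as a single script (directories and script names are taken from this page; it assumes infrastructure is already running and migrated):

```shell
# Typical local startup order: Jawn first, then the dashboard.
(cd valhalla/jawn && yarn dev) &   # backend API on :8585
(cd web && yarn dev:local) &       # dashboard on :3000

# Optional -- only needed when testing the LLM proxy flow:
# (cd worker && npx wrangler dev) &  # proxy on :8787

wait  # keep the shell attached to both dev servers
```

Running each service in its own terminal instead of backgrounding them is equally valid and makes individual restarts easier.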

Theoretical Basis

The microservice architecture separates concerns: the worker handles real-time proxying at the edge, Jawn handles business logic and data persistence, and the web frontend provides the user interface. Running them as separate processes allows developers to restart individual services without affecting others, and to focus on the service they are modifying.

The use of concurrently in Jawn's dev script allows parallel execution of the TypeScript server and the Python type generator, ensuring generated types stay in sync with code changes.
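The shape of such a dev script looks roughly like the following. This is a hypothetical illustration: only the use of concurrently, nodemon, and a Python type generator is documented on this page, and the file names here are invented.

```shell
# Hypothetical sketch of a concurrently-based dev script:
# both processes run in parallel with interleaved output, so the
# generated types are refreshed as the server code changes.
npx concurrently \
  "nodemon --exec ts-node src/index.ts" \
  "python3 scripts/generate_types.py --watch"
```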

Related Pages

Implemented By
