
Principle: OpenAI Node Server-to-Client Stream Proxying

From Leeroopedia
Domains Streaming, Server_Architecture
Last Updated 2026-02-15 00:00 GMT

Overview

A principle for converting server-side SDK stream objects into Web Streams API ReadableStreams that can be sent as HTTP response bodies to browser clients, keeping the API key on the server.

Description

Server-to-Client Stream Proxying enables real-time streaming of OpenAI API responses from a backend server to a frontend browser. The server calls the OpenAI API with streaming enabled, then converts the SDK's stream object into a standard ReadableStream that can be piped directly as the HTTP response body.

This pattern is essential for security: the API key stays on the server, while the client receives real-time updates. The conversion preserves the streaming semantics — each chunk is a newline-delimited JSON string encoded as UTF-8 bytes.
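As a sketch of what such a conversion does internally, the following helper turns any async-iterable stream of objects into a ReadableStream of UTF-8 NDJSON bytes. `toNDJSONStream` and `readAll` are hypothetical helpers for illustration, not SDK APIs; the real SDK provides this via `toReadableStream()`:

```javascript
// Convert any async-iterable stream of objects into a Web ReadableStream
// of UTF-8 bytes, one JSON-stringified object per line (NDJSON).
// Hypothetical helper, not part of the OpenAI SDK.
function toNDJSONStream(asyncIterable) {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async start(controller) {
      try {
        for await (const chunk of asyncIterable) {
          controller.enqueue(encoder.encode(JSON.stringify(chunk) + '\n'));
        }
        controller.close();
      } catch (err) {
        controller.error(err);
      }
    },
  });
}

// Drain a ReadableStream of bytes back into a string (for demos/tests).
async function readAll(readableStream) {
  const decoder = new TextDecoder();
  let text = '';
  for await (const bytes of readableStream) {
    text += decoder.decode(bytes, { stream: true });
  }
  return text + decoder.decode();
}
```

Each enqueued chunk mirrors the framing described above: one `JSON.stringify(chunk) + '\n'` string per SDK event, encoded as UTF-8 bytes.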

Usage

Use this principle when building web applications where the backend proxies OpenAI streaming responses to the browser. Works with Express, Next.js, raw Node.js HTTP, and any framework that accepts a ReadableStream as a response body.

Theoretical Basis

Stream proxying follows a Bridge Pattern between two streaming interfaces:

// Server-side (Node.js):
// 1. Call OpenAI with streaming enabled
const stream = client.chat.completions.stream({ model, messages });

// 2. Convert the SDK stream to a Web ReadableStream
const readableStream = stream.toReadableStream();

// 3. Send it as the HTTP response body (framework-dependent; shown as pseudocode)
response.headers['Content-Type'] = 'application/x-ndjson'; // newline-delimited JSON, not SSE framing
response.body = readableStream;

// The ReadableStream produces:
// - one chunk per completion event: JSON.stringify(chatCompletionChunk) + '\n'
// - encoded as UTF-8 bytes
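On the browser side, the body of a `fetch` response is itself a ReadableStream, so the client reverses the framing above by splitting on newlines and parsing each line as JSON. A sketch, where `parseNDJSON` is a hypothetical helper:

```javascript
// Parse a ReadableStream of UTF-8 bytes carrying one JSON object per line.
// Hypothetical client-side helper; in the browser the stream would come
// from (await fetch('/api/chat')).body.
async function parseNDJSON(byteStream, onChunk) {
  const decoder = new TextDecoder();
  const reader = byteStream.getReader();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let newline;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      const line = buffer.slice(0, newline);
      buffer = buffer.slice(newline + 1);
      if (line.trim()) onChunk(JSON.parse(line));
    }
  }
}
```

Buffering between reads matters because network chunk boundaries need not align with the newline delimiters.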

Related Pages

Implemented By
