Implementation: openai-node ChatCompletionStream
| Knowledge Sources | |
|---|---|
| Domains | Streaming, Event_Driven_Architecture |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
Concrete tool for consuming streaming Chat Completion responses with event-driven and async-iterable interfaces provided by the openai-node SDK.
Description
ChatCompletionStream extends AbstractChatCompletionRunner and implements AsyncIterable<ChatCompletionChunk>. It accumulates streaming chunks into a ChatCompletionSnapshot, emits typed events (content.delta, content.done, chunk, tool_calls.function.arguments.delta, etc.), and provides finalChatCompletion() to await the fully accumulated response.
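The accumulation step can be sketched in isolation. The snippet below is an illustrative reduction over chunk-shaped objects, not the SDK's internal implementation; the real ChatCompletionChunk and ChatCompletionSnapshot types carry many more fields (ids, roles, tool calls, finish reasons).

```typescript
// Simplified sketch of how streaming deltas accumulate into a snapshot.
// Types are trimmed to the fields relevant here; this is NOT the SDK's code.
interface ChunkChoice {
  index: number;
  delta: { content?: string };
}
interface Chunk {
  choices: ChunkChoice[];
}
interface Snapshot {
  choices: { index: number; message: { content: string } }[];
}

function accumulate(snapshot: Snapshot, chunk: Chunk): Snapshot {
  for (const { index, delta } of chunk.choices) {
    // Create the choice entry on first sight, then append each delta.
    const choice =
      snapshot.choices[index] ??
      (snapshot.choices[index] = { index, message: { content: '' } });
    if (delta.content) choice.message.content += delta.content;
  }
  return snapshot;
}

const chunks: Chunk[] = [
  { choices: [{ index: 0, delta: { content: 'Hel' } }] },
  { choices: [{ index: 0, delta: { content: 'lo' } }] },
  { choices: [{ index: 0, delta: {} }] }, // final chunk often carries no content
];
const finalSnapshot = chunks.reduce(accumulate, { choices: [] });
console.log(finalSnapshot.choices[0].message.content); // "Hello"
```

This is the same fold the stream performs internally each time a chunk arrives, which is why currentChatCompletionSnapshot is always a valid partial completion.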
Key features:
- Event-driven API with typed events for content, tool calls, logprobs, and refusals
- Async-iterable interface for simple for await...of consumption
- toReadableStream() for server-to-client proxying
- fromReadableStream() for client-side reconstruction
- Automatic content accumulation across chunks
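The toReadableStream()/fromReadableStream() pair can be sketched without a network call. The snippet below assumes a newline-delimited-JSON wire format (one serialized chunk per line); this matches the SDK's observed behavior but should be treated as an implementation detail of the helpers, not a public contract.

```typescript
// Sketch of the proxy round-trip: serialize chunks the way a server helper
// might (one JSON object per line), then parse them back the way a
// client-side reconstructor would. Runs on Node 18+ (global Web Streams).
// The wire format here is an assumption for illustration.
type WireChunk = { choices: { index: number; delta: { content?: string } }[] };

function toJsonLineStream(chunks: WireChunk[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const chunk of chunks) {
        controller.enqueue(encoder.encode(JSON.stringify(chunk) + '\n'));
      }
      controller.close();
    },
  });
}

async function fromJsonLineStream(stream: ReadableStream<Uint8Array>): Promise<WireChunk[]> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  const chunks: WireChunk[] = [];
  let buffered = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    // A read may contain zero, one, or many complete lines; parse each.
    let newline;
    while ((newline = buffered.indexOf('\n')) >= 0) {
      chunks.push(JSON.parse(buffered.slice(0, newline)));
      buffered = buffered.slice(newline + 1);
    }
  }
  return chunks;
}

const sent: WireChunk[] = [
  { choices: [{ index: 0, delta: { content: 'Hi' } }] },
  { choices: [{ index: 0, delta: { content: '!' } }] },
];
fromJsonLineStream(toJsonLineStream(sent)).then((received) =>
  console.log(received.length), // 2
);
```

In practice the server returns stream.toReadableStream() as an HTTP response body and the browser hands response.body to ChatCompletionStream.fromReadableStream(), which performs the equivalent parse and re-emits the full event set.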
Usage
Use client.chat.completions.stream() to create a ChatCompletionStream instance. This is preferred over the raw create({ stream: true }) when you need event-driven consumption, content accumulation, or stream proxying.
Code Reference
Source Location
- Repository: openai-node
- File: src/lib/ChatCompletionStream.ts
- Lines: L129-607
Signature
export class ChatCompletionStream<ParsedT = null>
  extends AbstractChatCompletionRunner<ChatCompletionStreamEvents<ParsedT>, ParsedT>
  implements AsyncIterable<ChatCompletionChunk>
{
  constructor(params: ChatCompletionCreateParams | null);

  get currentChatCompletionSnapshot(): ChatCompletionSnapshot | undefined;

  static fromReadableStream(stream: ReadableStream): ChatCompletionStream<null>;

  static createChatCompletion<ParsedT>(
    client: OpenAI,
    params: ChatCompletionStreamParams,
    options?: RequestOptions,
  ): ChatCompletionStream<ParsedT>;

  // Inherited from AbstractChatCompletionRunner:
  on<Event>(event: Event, listener: EventListener): this;
  off<Event>(event: Event, listener: EventListener): this;
  once<Event>(event: Event, listener: EventListener): this;
  finalChatCompletion(): Promise<ChatCompletion>;
  toReadableStream(): ReadableStream;
}
Import
import OpenAI from 'openai';
// Created via: client.chat.completions.stream(...)
// Or: import { ChatCompletionStream } from 'openai/lib/ChatCompletionStream';
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| params | ChatCompletionStreamParams | Yes | Chat completion parameters (the helper sets stream to true) |
| client | OpenAI | Yes (via factory) | Client instance for making the API call |
| options | RequestOptions | No | Per-request overrides |
Outputs
| Name | Type | Description |
|---|---|---|
| (events) | Various | Typed events: 'content.delta', 'content.done', 'chunk', 'chatCompletion', 'tool_calls.function.arguments.delta', etc. |
| (async iterable) | ChatCompletionChunk | Each chunk from the SSE stream |
| finalChatCompletion() | Promise<ChatCompletion> | Accumulated final response after stream completes |
| toReadableStream() | ReadableStream | Web Streams API ReadableStream for proxying |
Usage Examples
Event-Driven Consumption
import OpenAI from 'openai';

const client = new OpenAI();

const stream = client.chat.completions.stream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a haiku about coding.' }],
});

stream.on('content.delta', ({ delta }) => {
  process.stdout.write(delta);
});

const finalCompletion = await stream.finalChatCompletion();
console.log('\n\nFull response:', finalCompletion.choices[0].message.content);
Async Iteration
const stream = client.chat.completions.stream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Count to 5.' }],
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}
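The async-iterable contract can also be exercised without an API key. The mock generator below is illustrative only (not part of the SDK); it yields chunk-shaped objects so the consumption loop can be demonstrated in isolation.

```typescript
// Mock of the async-iterable contract: yields chunk-shaped objects the way
// ChatCompletionStream yields ChatCompletionChunk. Illustrative only.
type MockChunk = { choices: { delta: { content?: string } }[] };

async function* mockStream(parts: string[]): AsyncIterable<MockChunk> {
  for (const content of parts) {
    yield { choices: [{ delta: { content } }] };
  }
  yield { choices: [{ delta: {} }] }; // final chunk often carries no content
}

async function collect(stream: AsyncIterable<MockChunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    // Same optional-chaining guard as the real consumption loop.
    const delta = chunk.choices[0]?.delta?.content;
    if (delta) text += delta;
  }
  return text;
}

collect(mockStream(['1 ', '2 ', '3'])).then((text) => console.log(text)); // "1 2 3"
```

The guard on chunk.choices[0]?.delta?.content matters with the real stream too: some chunks (role preludes, finish chunks) carry no content delta.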