Implementation: openai-node Completions.create()
| Knowledge Sources | |
|---|---|
| Domains | NLP, API_Design |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
Concrete method in the openai-node SDK for invoking the Chat Completions endpoint.
Description
The Completions.create() method sends a POST request to /chat/completions with the given request body. It supports three overloaded signatures: non-streaming (returns APIPromise<ChatCompletion>), streaming (returns APIPromise<Stream<ChatCompletionChunk>>), and a generic union. The method delegates to the internal HTTP client with retry and timeout handling.
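The retry-and-timeout delegation mentioned above can be sketched in isolation. This is a simplified illustration of the pattern, not the SDK's actual client code; the `withRetries` helper and its defaults are hypothetical:

```typescript
// Simplified sketch of retry-with-exponential-backoff around a request
// function. `withRetries` is a hypothetical helper, NOT part of the
// openai SDK; the real client layers this with timeouts and status-code
// checks internally.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 2,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```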
Usage
Call this method to generate text from any OpenAI chat model. For streaming responses with high-level event handling, prefer completions.stream() instead.
Code Reference
Source Location
- Repository: openai-node
- File: src/resources/chat/completions/completions.ts
- Lines: L55-71
Signature
```typescript
class Completions extends APIResource {
  // Non-streaming overload
  create(
    body: ChatCompletionCreateParamsNonStreaming,
    options?: RequestOptions,
  ): APIPromise<ChatCompletion>;

  // Streaming overload
  create(
    body: ChatCompletionCreateParamsStreaming,
    options?: RequestOptions,
  ): APIPromise<Stream<ChatCompletionChunk>>;

  // Union overload (implementation)
  create(
    body: ChatCompletionCreateParams,
    options?: RequestOptions,
  ): APIPromise<ChatCompletion> | APIPromise<Stream<ChatCompletionChunk>>;
}
```
Import
```typescript
import OpenAI from 'openai';
// Access via: client.chat.completions.create(...)
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| body | ChatCompletionCreateParams | Yes | Request body with messages, model, and optional parameters |
| options | RequestOptions | No | Per-request overrides (headers, signal, timeout) |
Outputs
| Name | Type | Description |
|---|---|---|
| (non-streaming) | APIPromise<ChatCompletion> | Complete response with choices[].message.content |
| (streaming) | APIPromise<Stream<ChatCompletionChunk>> | Async iterable of incremental chunks |
Usage Examples
Non-Streaming
```typescript
import OpenAI from 'openai';

const client = new OpenAI();

const completion = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of France?' },
  ],
});
console.log(completion.choices[0].message.content);
```
Low-Level Streaming
```typescript
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Count to 10.' }],
  stream: true,
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}
```