Implementation: Ollama ToChatCompletion
| Knowledge Sources | |
|---|---|
| Domains | API_Design, Data_Transformation |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
Concrete tool, provided by the openai package, for translating Ollama responses into the OpenAI chat completion format.
Description
ToChatCompletion converts a final Ollama api.ChatResponse into an OpenAI ChatCompletion with choices, usage, and finish reason.
ToChunk converts a streaming Ollama response into a ChatCompletionChunk for SSE delivery.
ToCompletion handles the legacy completion format for /v1/completions.
ChatWriter is a custom http.ResponseWriter that intercepts writes from the native ChatHandler, deserializes the Ollama response, translates it, and writes the OpenAI format to the actual HTTP response.
Usage
Invoked automatically by the ChatMiddleware's custom response writer.
Code Reference
Source Location
- Repository: ollama
- File: openai/openai.go (ToChatCompletion, ToChunk, ToCompletion), middleware/openai.go (ChatWriter.Write)
- Lines: openai.go:L262-294 (ToChatCompletion), openai.go:L295-326 (ToChunk), openai.go:L336-357 (ToCompletion), openai.go:L126-134 (ChatWriter.Write)
Signature
func ToChatCompletion(id string, r api.ChatResponse) ChatCompletion
func ToChunk(id string, r api.ChatResponse, toolCallSent bool) ChatCompletionChunk
func ToCompletion(id string, r api.GenerateResponse) Completion
func (w *ChatWriter) Write(data []byte) (int, error)
Import
import "github.com/ollama/ollama/openai"
import "github.com/ollama/ollama/middleware"
I/O Contract
Inputs (ToChatCompletion)
| Name | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique completion ID (e.g., "chatcmpl-xxx") |
| r | api.ChatResponse | Yes | Ollama chat response with message, metrics, done flag |
Outputs (ToChatCompletion)
| Name | Type | Description |
|---|---|---|
| ChatCompletion | ChatCompletion | OpenAI format with ID, Object, Created, Model, Choices, Usage |
Usage Examples
OpenAI-Format Response
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "llama3",
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I help?"
},
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 8,
"total_tokens": 20
}
}
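For the streaming path, ToChunk emits objects of type chat.completion.chunk, which carry an incremental delta rather than a full message. A representative chunk (shape per the OpenAI streaming format; field values here are illustrative):

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion.chunk",
  "created": 1677652288,
  "model": "llama3",
  "choices": [{
    "index": 0,
    "delta": {
      "role": "assistant",
      "content": "Hello"
    },
    "finish_reason": null
  }]
}
```

The final chunk in a stream sets finish_reason (e.g., "stop") and an empty delta.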