Implementation:Ollama Ollama Chat Prompt

From Leeroopedia
Knowledge Sources
Domains NLP, Prompt_Engineering
Last Updated 2026-02-14 00:00 GMT

Overview

A concrete tool in the server package for constructing model-specific prompts from chat messages.

Description

The chatPrompt function accepts a list of chat messages and produces the final prompt string and image data for the inference engine. It performs context window truncation by iteratively removing older messages until the rendered prompt fits within the model's num_ctx limit. System messages are always preserved regardless of truncation.
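The truncation strategy described above can be sketched as follows. This is a simplified stand-in, not the real server implementation: the Message type, the word-count tokenizer, and the render helper are all toy substitutes for api.Message, the runner's tokenizer, and the model template.

```go
package main

import (
	"fmt"
	"strings"
)

// Message is a simplified stand-in for api.Message.
type Message struct {
	Role    string
	Content string
}

// countTokens is a toy tokenizer: one token per whitespace-separated word.
func countTokens(s string) int {
	return len(strings.Fields(s))
}

// render joins messages into a prompt; the real code uses the model template.
func render(msgs []Message) string {
	var b strings.Builder
	for _, m := range msgs {
		b.WriteString(m.Role + ": " + m.Content + "\n")
	}
	return b.String()
}

// truncate drops the oldest non-system messages until the rendered prompt
// fits within numCtx tokens, mirroring the strategy described above:
// system messages are always preserved.
func truncate(msgs []Message, numCtx int) []Message {
	for countTokens(render(msgs)) > numCtx {
		dropped := false
		for i, m := range msgs {
			if m.Role != "system" {
				msgs = append(msgs[:i:i], msgs[i+1:]...)
				dropped = true
				break
			}
		}
		if !dropped {
			return msgs // only system messages remain
		}
	}
	return msgs
}

func main() {
	msgs := []Message{
		{"system", "You are helpful."},
		{"user", "first question with many extra words here"},
		{"assistant", "first answer"},
		{"user", "second question"},
	}
	fmt.Print(render(truncate(msgs, 12)))
}
```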

The companion Template.Execute method renders messages with Go's text/template engine using the model-specific format string, and Parse compiles that template string, taken from the model's metadata, into a *Template.
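The rendering step can be illustrated with the standard text/template package directly. The chat template string below is hypothetical (real model templates vary by family), and renderChat is an illustrative helper, not part of the ollama API:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

type msg struct{ Role, Content string }

// renderChat parses a chat-style template string and executes it over a
// slice of messages, the same text/template mechanism the server wraps.
func renderChat(tmpl string, msgs []msg) (string, error) {
	t, err := template.New("chat").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	err = t.Execute(&buf, struct{ Messages []msg }{msgs})
	return buf.String(), err
}

func main() {
	// Hypothetical model template; real formats come from model metadata.
	const chatTemplate = "{{ range .Messages }}<|{{ .Role }}|>\n{{ .Content }}\n{{ end }}<|assistant|>\n"
	out, err := renderChat(chatTemplate, []msg{
		{"system", "You are helpful."},
		{"user", "Hello"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```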

Usage

Called internally by GenerateHandler and ChatHandler before dispatching to the inference engine. Not directly accessible to API clients.

Code Reference

Source Location

  • Repository: ollama
  • File: server/prompt.go (chatPrompt), template/template.go (Template.Execute, Parse)
  • Lines: prompt.go:L23-131 (chatPrompt), template.go:L257-290 (Execute), template.go:L145-256 (Parse)

Signature

func chatPrompt(
    ctx context.Context,
    m *Model,
    tokenize tokenizeFunc,
    opts *api.Options,
    msgs []api.Message,
    tools []api.Tool,
    think *api.ThinkValue,
    truncate bool,
) (prompt string, images []llm.ImageData, _ error)
func (t *Template) Execute(w io.Writer, v Values) error
func Parse(s string) (*Template, error)

Import

import "github.com/ollama/ollama/server"
import "github.com/ollama/ollama/template"

I/O Contract

Inputs

Name      Type              Required  Description
ctx       context.Context   Yes       Request context
m         *Model            Yes       Model with template, config, and projector paths
tokenize  tokenizeFunc      Yes       Function to tokenize text (from the loaded runner)
opts      *api.Options      Yes       Options including NumCtx (context window size)
msgs      []api.Message     Yes       Chat messages with Role, Content, and Images fields
tools     []api.Tool        No        Tool/function definitions for function calling
think     *api.ThinkValue   No        Chain-of-thought thinking mode setting
truncate  bool              Yes       Whether to truncate messages to fit the context window

Outputs

Name    Type              Description
prompt  string            Rendered prompt string in model-specific format
images  []llm.ImageData   Extracted image data for multimodal models (ID + bytes)
error   error             Non-nil if template execution or tokenization fails

Usage Examples

Internal Usage

// From ChatHandler in server/routes.go
prompt, images, err := chatPrompt(
    c.Request.Context(),
    model,
    runner.Tokenize,
    &opts,
    msgs,
    req.Tools,
    req.Think,
    true, // truncate to fit context
)
if err != nil {
    // handle error
}
// Pass prompt and images to inference engine

Related Pages

Implements Principle

Requires Environment
