Environment:Ollama_Ollama_Go_Runtime
Knowledge Sources
| Field | Value |
|---|---|
| Domains | Infrastructure, Go |
| Last Updated | 2026-02-14 22:00 GMT |
Overview
Go 1.24.1+ runtime environment with CGo enabled, required for building and running the Ollama server and all its subcommands.
Description
This environment provides the Go toolchain required to compile and run the Ollama project. The project uses Go modules for dependency management and requires CGo (CGO_ENABLED=1) for integration with the llama.cpp C++ backend. The build uses standard Go tooling with platform-specific CGo compiler flags for optimal performance on each OS.
The project targets Linux (x86_64, arm64), macOS (arm64, minimum version 14.0), and Windows (x86_64) as supported platforms. All API servers, CLI commands, model conversion, tokenizer parsing, and registry operations run within this Go runtime.
Usage
Use this environment for all Ollama operations. Every Implementation page in this repository requires the Go runtime: the HTTP server (Serve), model pulling (PullModel), model creation (CreateHandler), conversion (ConvertModel), prompt construction (Chat_Prompt), sampling (Sampler_Sample), and all registry operations. This is the foundational prerequisite for the entire codebase.
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | Linux, macOS 14.0+, Windows | Rocky Linux / Ubuntu for CI; macOS requires minimum deployment target 14.0 |
| Hardware | x86_64 or arm64 CPU | arm64 for macOS (Apple Silicon); both supported on Linux |
| Disk | 2GB+ free space | For Go toolchain, module cache, and build artifacts |
Dependencies
System Packages
- `go` >= 1.24.1
- `git` (for module download and version tagging)
- `gcc` >= 10.2.1 (Linux x86_64) or `clang` (Linux arm64, macOS)
- `cmake` >= 3.31.2 (for building llama.cpp C++ backend)
- `ccache` (recommended for faster rebuilds)
Go Module Dependencies
Key dependencies from `go.mod`:
- `github.com/gin-gonic/gin` — HTTP routing framework
- `github.com/spf13/cobra` — CLI framework
- `github.com/mattn/go-sqlite3` — SQLite (CGo)
- `github.com/x448/float16` — Float16 support for tensor operations
- `golang.org/x/sync` — Concurrency primitives
- `golang.org/x/sys` — Low-level OS interfaces
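A consuming project declares these modules in its `go.mod`. The following is a minimal sketch of that shape — the module path and version numbers are illustrative placeholders, not Ollama's actual pinned versions:

```go
// go.mod sketch — module path and versions are placeholders.
module example.com/ollama-like

go 1.24.1

require (
	github.com/gin-gonic/gin v1.10.0
	github.com/spf13/cobra v1.8.0
	golang.org/x/sync v0.10.0
)
```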
Environment Variables
The following environment variables are optional and, when set, configure runtime behavior:
- `OLLAMA_HOST`: Server bind address (default: `127.0.0.1:11434`)
- `OLLAMA_MODELS`: Path to model storage directory (default: `$HOME/.ollama/models`)
- `OLLAMA_ORIGINS`: Comma-separated list of allowed CORS origins
- `OLLAMA_REMOTES`: Allowed remote model hosts (default: `ollama.com`)
- `HTTP_PROXY` / `HTTPS_PROXY` / `NO_PROXY`: Proxy configuration for network access
Quick Install
```shell
# Install Go 1.24.1+ (see https://go.dev/dl/)
# Then build Ollama:
CGO_ENABLED=1 go build -o ollama .

# Or with optimized CGo flags:
CGO_CFLAGS="-O3" CGO_CXXFLAGS="-O3" CGO_ENABLED=1 go build -o ollama .
```
Code Evidence
Go version requirement from `go.mod:1-3`:
```go
module github.com/ollama/ollama

go 1.24.1
```
CGo enabled in CI from `.github/workflows/test.yaml:205`:
```yaml
env:
  CGO_ENABLED: '1'
```
Platform-specific CGo flags from `ml/backend/ggml/ggml/src/ggml-cpu/cpu.go:3-9`:
```go
// #cgo CFLAGS: -O3 -Wno-implicit-function-declaration
// #cgo CXXFLAGS: -std=c++17
// #cgo linux CPPFLAGS: -D_GNU_SOURCE
// #cgo darwin,arm64 CPPFLAGS: -DGGML_USE_ACCELERATE -DACCELERATE_NEW_LAPACK
// #cgo darwin,arm64 LDFLAGS: -framework Accelerate
```
macOS minimum version from `scripts/build_darwin.sh:17-19`:
```shell
export CGO_CFLAGS="-O3 -mmacosx-version-min=14.0"
export CGO_CXXFLAGS="-O3 -mmacosx-version-min=14.0"
export CGO_LDFLAGS="-mmacosx-version-min=14.0"
```
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `cgo: C compiler not found` | GCC or Clang not installed | Install `gcc` (Linux) or Xcode Command Line Tools (macOS) |
| `go: go.mod requires go >= 1.24.1` | Go version too old | Update Go to 1.24.1 or newer |
| `undefined reference to ...` | Missing C++ standard library | Ensure `libstdc++` is installed (Linux: `gcc-c++` package) |
| `this model may be incompatible with your version of Ollama` | Model format mismatch with llama.cpp version | Re-pull the model with `ollama pull <model>` |
Compatibility Notes
- macOS: Requires macOS 14.0 (Sonoma) or later. Apple Silicon (arm64) uses the Accelerate framework for optimized BLAS operations.
- Linux: Supports both x86_64 and arm64. CI builds use Rocky Linux 8 (GCC 10.2.1) for x86_64 and Clang for arm64.
- Windows: Requires CGo with MSVC or MinGW toolchain. Environment variables are case-insensitive on Windows.
- Build tags: Integration tests require `//go:build integration`; MLX backend requires `//go:build mlx`.
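A build constraint excludes a file from default builds until its tag is supplied. An illustrative file header (not from the Ollama tree) showing the mechanism:

```go
//go:build integration

// The constraint above excludes this file from plain `go build ./...`
// and `go test ./...`; it is compiled only when the tag is passed,
// e.g. `go test -tags integration ./...`.
package integration_demo
```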
Related Pages
- Implementation:Ollama_Ollama_Serve
- Implementation:Ollama_Ollama_PullModel
- Implementation:Ollama_Ollama_Chat_Prompt
- Implementation:Ollama_Ollama_Sampler_Sample
- Implementation:Ollama_Ollama_Chat_Handler
- Implementation:Ollama_Ollama_ParseFile
- Implementation:Ollama_Ollama_ParseFromModel
- Implementation:Ollama_Ollama_ConvertAdapter
- Implementation:Ollama_Ollama_CreateHandler
- Implementation:Ollama_Ollama_ConvertModel
- Implementation:Ollama_Ollama_ParseTensors
- Implementation:Ollama_Ollama_LlamaModel_KV
- Implementation:Ollama_Ollama_ParseTokenizer
- Implementation:Ollama_Ollama_WriteGGUF
- Implementation:Ollama_Ollama_GenerateRoutes_OpenAI
- Implementation:Ollama_Ollama_FromChatRequest
- Implementation:Ollama_Ollama_Inference_Handler
- Implementation:Ollama_Ollama_ToChatCompletion
- Implementation:Ollama_Ollama_ToEmbeddingList
- Implementation:Ollama_Ollama_Auth_Sign
- Implementation:Ollama_Ollama_ParseNamedManifest
- Implementation:Ollama_Ollama_DownloadBlob
- Implementation:Ollama_Ollama_WriteManifest
- Implementation:Ollama_Ollama_VerifyBlob