
Implementation:Ggml org Llama cpp Conversion Pip Dependencies

From Leeroopedia
| Field | Value |
|---|---|
| Implementation Name | Conversion Pip Dependencies |
| Type | External Tool Doc |
| Tool | pip (Python package installer) |
| Status | Active |

Overview

Description

The llama.cpp project provides a dedicated requirements file for the HuggingFace-to-GGUF conversion pipeline. This file, requirements/requirements-convert_hf_to_gguf.txt, specifies all Python dependencies needed to run convert_hf_to_gguf.py. It chains to a base requirements file for legacy conversion dependencies and adds PyTorch with platform-specific index URLs.

The dependency set is intentionally minimal, covering only what is required for:

  • Loading HuggingFace model weights and tokenizer configurations
  • Performing tensor type conversions and quantization
  • Writing output in GGUF format via the gguf Python library

Additionally, the gguf Python package itself is maintained within the llama.cpp repository at gguf-py/ and declares its own dependencies in gguf-py/pyproject.toml.

Usage

Install all conversion dependencies into your Python environment:

pip install -r requirements/requirements-convert_hf_to_gguf.txt

This single command resolves the full dependency chain, including the base requirements from requirements/requirements-convert_legacy_llama.txt.
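The `-r` chaining described above can be sketched in plain Python. This is an illustrative resolver, not pip's actual implementation; the helper name `collect_requirements` is invented for this example.

```python
import os

def collect_requirements(path, seen=None):
    """Recursively collect requirement lines, following `-r <file>` includes
    the way pip does when resolving a chained requirements file."""
    seen = set() if seen is None else seen
    path = os.path.abspath(path)
    if path in seen:  # guard against include cycles
        return []
    seen.add(path)
    lines = []
    with open(path) as f:
        for raw in f:
            line = raw.split("#", 1)[0].strip()  # drop comments and blanks
            if not line:
                continue
            if line.startswith("-r "):
                # nested paths are resolved relative to the including file
                nested = os.path.join(os.path.dirname(path), line[3:].strip())
                lines.extend(collect_requirements(nested, seen))
            else:
                lines.append(line)
    return lines
```

Running this on requirements-convert_hf_to_gguf.txt would surface both its own pins and everything pulled in from the legacy base file.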

Code Reference

Source Location

| File | Lines | Description |
|---|---|---|
| requirements/requirements-convert_hf_to_gguf.txt | 1-10 | Main conversion requirements file |
| requirements/requirements-convert_legacy_llama.txt | 1-8 | Base requirements (chained via -r) |
| gguf-py/pyproject.toml | 1-27 | gguf package metadata and dependencies |

Signature

The requirements file is consumed by pip, not imported as a Python module. Its effective signature is:

pip install -r requirements/requirements-convert_hf_to_gguf.txt

Import

After installation, the conversion script imports the following top-level packages:

import numpy as np
import torch
from transformers import AutoConfig
import gguf
import sentencepiece

I/O Contract

| Direction | Type | Description |
|---|---|---|
| Input | Requirements file | requirements/requirements-convert_hf_to_gguf.txt (text file with pip-compatible dependency specifications) |
| Output | Installed packages | Python packages installed into the active environment's site-packages |
| Side Effects | Network access | Downloads packages from PyPI and the PyTorch wheel index |

Dependency manifest (from requirements-convert_legacy_llama.txt):

| Package | Version Constraint | Purpose |
|---|---|---|
| numpy | ~=1.26.4 | Numerical array operations for tensor data |
| sentencepiece | >=0.1.98,<0.3.0 | Tokenizer model loading (SentencePiece .model files) |
| transformers | >=4.57.1,<5.0.0 | HuggingFace model config and tokenizer loading |
| gguf | >=0.1.0 | GGUF format writer and quantization support |
| protobuf | >=4.21.0,<5.0.0 | Protocol buffer parsing for model configs |
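Requirement lines like those in the manifest pair a distribution name with one or more comma-separated version clauses. A minimal sketch of splitting the two apart follows; real resolution is done by pip and the packaging library, and the regex here is an illustrative simplification.

```python
import re

# Matches a distribution name followed by an optional specifier string,
# e.g. "numpy~=1.26.4" or "sentencepiece>=0.1.98,<0.3.0".
REQ_RE = re.compile(r"^([A-Za-z0-9_.\-]+)\s*(.*)$")

def parse_requirement(line):
    """Split a pip-style requirement line into (name, [clauses])."""
    m = REQ_RE.match(line.strip())
    name, spec = m.group(1), m.group(2).strip()
    clauses = [c.strip() for c in spec.split(",")] if spec else []
    return name, clauses
```

For example, `parse_requirement("sentencepiece>=0.1.98,<0.3.0")` yields the name plus two clauses, a lower bound and an exclusive upper bound.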

Additional dependencies (from requirements-convert_hf_to_gguf.txt):

| Package | Version Constraint | Purpose |
|---|---|---|
| torch | ~=2.6.0 (non-s390x) | PyTorch framework for tensor loading and dtype conversion |
| torch | >=0.0.0.dev0 (s390x) | Nightly PyTorch build for s390x architecture |
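The per-architecture torch pins are selected in the requirements file itself (pip expresses such conditions with PEP 508 environment markers; the exact marker text is not reproduced here). The decision amounts to the following, sketched with the stdlib `platform` module:

```python
import platform

def torch_pin_for(machine):
    """Pick the torch requirement for a given machine architecture.
    Mirrors the split described in the table above; illustrative only."""
    if machine == "s390x":
        return "torch>=0.0.0.dev0"  # nightly builds for s390x
    return "torch~=2.6.0"           # default pin on other architectures

pin = torch_pin_for(platform.machine())
```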

gguf package dependencies (from gguf-py/pyproject.toml):

| Package | Version Constraint | Purpose |
|---|---|---|
| numpy | >=1.17 | Array operations within the gguf library |
| tqdm | >=4.27 | Progress bars during tensor writing |
| pyyaml | >=5.1 | YAML metadata parsing |
| requests | >=2.25 | HTTP requests for remote tensor access |

Usage Examples

Basic installation in a virtual environment:

python -m venv llama-convert-env
source llama-convert-env/bin/activate
pip install -r requirements/requirements-convert_hf_to_gguf.txt

Verification that imports succeed:

python -c "import torch; import numpy; import gguf; import transformers; print('All imports OK')"
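The one-liner can be expanded into a small script that reports each package's installed version using `importlib.metadata` (stdlib). The distribution names listed are assumed to match the PyPI names in the manifest above.

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

for name in ("numpy", "torch", "transformers", "gguf", "sentencepiece"):
    ver = installed_version(name)
    print(f"{name}: {ver if ver else 'NOT INSTALLED'}")
```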

CPU-only installation (default):

The requirements file already specifies --extra-index-url https://download.pytorch.org/whl/cpu, so pip resolves CPU-only PyTorch wheels by default. For a GPU-enabled build, add the CUDA wheel index on the command line (note that this supplements, rather than replaces, the index URL baked into the file):

pip install -r requirements/requirements-convert_hf_to_gguf.txt \
  --extra-index-url https://download.pytorch.org/whl/cu121
