Implementation: Ggml org Llama cpp HF Upload GGUF
| Field | Value |
|---|---|
| Implementation Name | HF Upload GGUF |
| Type | Wrapper Doc |
| Wraps | huggingface_hub.HfApi.upload_file() |
| Status | Active |
Overview
Description
The hf-upload-gguf-model.py script provides a simple wrapper around the HuggingFace Hub API for uploading GGUF model files to HuggingFace repositories. It is located at examples/model-conversion/scripts/utils/hf-upload-gguf-model.py (58 lines).
The script exposes a single function, upload_gguf_file(), which validates the local file exists, determines the filename for the repository, and calls HfApi.upload_file() with the appropriate parameters. It also provides a CLI interface with argparse for direct invocation.
Authentication is handled via the HF_TOKEN environment variable, which must be set to a HuggingFace API token with write permissions for the target repository.
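Because the script assumes HF_TOKEN is present, a quick pre-flight check can save a failed run. The helper below is a hypothetical sketch (not part of the script); the script itself simply relies on huggingface_hub picking the token up from the environment:

```python
import os

def check_hf_token() -> bool:
    """Return True if HF_TOKEN is set to a non-empty value.

    Hypothetical pre-flight helper, not part of hf-upload-gguf-model.py.
    """
    token = os.environ.get("HF_TOKEN", "")
    if not token:
        print("HF_TOKEN is not set; export a token with write access first.")
        return False
    return True
```

A stricter check could call `HfApi().whoami()` to validate the token against the Hub, at the cost of a network round trip.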
Usage
```sh
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxx
python examples/model-conversion/scripts/utils/hf-upload-gguf-model.py \
  --gguf-model-path ./llama-3.1-8b-f16.gguf \
  --repo-id username/llama-3.1-8b-gguf \
  --name llama-3.1-8b-instruct-f16.gguf
```
Code Reference
Source Location
| File | Lines | Description |
|---|---|---|
| examples/model-conversion/scripts/utils/hf-upload-gguf-model.py | 7-46 | upload_gguf_file() function |
| examples/model-conversion/scripts/utils/hf-upload-gguf-model.py | 48-58 | CLI argument parsing and invocation |
Signature
upload_gguf_file() function:
```python
def upload_gguf_file(local_file_path, repo_id, filename_in_repo=None):
    """
    Upload a GGUF file to a Hugging Face model repository.

    Args:
        local_file_path: Path to your local GGUF file
        repo_id: Your repository ID (e.g., "username/model-name")
        filename_in_repo: Optional custom name for the file in the repo
    """
    if not os.path.exists(local_file_path):
        print(f"File not found: {local_file_path}")
        return False

    # Default to the local file's basename when no repo filename is given
    if filename_in_repo is None or filename_in_repo == "":
        filename_in_repo = os.path.basename(local_file_path)

    print(f"Uploading {local_file_path} to {repo_id}/{filename_in_repo}")

    api = HfApi()
    try:
        api.upload_file(
            path_or_fileobj=local_file_path,
            path_in_repo=filename_in_repo,
            repo_id=repo_id,
            repo_type="model",
            commit_message=f"Upload {filename_in_repo}"
        )
        print("Upload successful!")
        print(f"File available at: https://huggingface.co/{repo_id}/blob/main/{filename_in_repo}")
        return True
    except Exception as e:
        print(f"Upload failed: {e}")
        return False
```
CLI argument parser:
```python
parser = argparse.ArgumentParser(description='Upload a GGUF model to a Huggingface model repository')
parser.add_argument('--gguf-model-path', '-m', help='The GGUF model file to upload', required=True)
parser.add_argument('--repo-id', '-r', help='The repository to upload to', required=True)
parser.add_argument('--name', '-o', help='The name in the model repository', required=False)
```
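To see how these flags map onto the function's parameters, here is a self-contained sketch that parses a sample command line; the parser definition mirrors the script's, but the sample paths are illustrative only:

```python
import argparse

# Same parser shape as the script's CLI
parser = argparse.ArgumentParser(description='Upload a GGUF model to a Huggingface model repository')
parser.add_argument('--gguf-model-path', '-m', required=True)
parser.add_argument('--repo-id', '-r', required=True)
parser.add_argument('--name', '-o', required=False)

# Parse an example command line (argparse converts '--gguf-model-path'
# into the attribute name 'gguf_model_path')
args = parser.parse_args([
    '-m', './model-f16.gguf',
    '-r', 'username/model-gguf',
    '-o', 'model-f16.gguf',
])
print(args.gguf_model_path, args.repo_id, args.name)
```

Omitting `-o` leaves `args.name` as `None`, which makes upload_gguf_file() fall back to the local file's basename.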
Import
```python
from huggingface_hub import HfApi
import argparse
import os
```
I/O Contract
| Direction | Type | Description |
|---|---|---|
| Input | str (path) | local_file_path: Local path to the GGUF file to upload |
| Input | str | repo_id: Target HuggingFace repository ID (e.g., "username/model-name") |
| Input | str (optional) | filename_in_repo: Custom filename in the repository; defaults to the local filename's basename |
| Input (env) | HF_TOKEN | HuggingFace API token with write permissions (required for authentication) |
| Output | bool | True on successful upload, False on failure |
| Output | stdout | Progress messages and the resulting file URL |
| Side Effects | Network | Uploads the file to https://huggingface.co/{repo_id} |
| Side Effects | Remote repository | Creates a new commit in the target repository containing the uploaded file |
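Because failures are reported through the bool return value rather than a raised exception, a caller that wants retries has to loop on that value. A hypothetical retry wrapper (not part of the script) might look like:

```python
import time

def upload_with_retries(upload_fn, attempts=3, delay_s=2.0):
    """Call upload_fn() until it returns True, up to `attempts` times.

    upload_fn is expected to follow the script's contract:
    return True on success, False on failure.
    """
    for attempt in range(1, attempts + 1):
        if upload_fn():
            return True
        if attempt < attempts:
            print(f"Attempt {attempt} failed, retrying in {delay_s}s...")
            time.sleep(delay_s)
    return False
```

Usage would be something like `upload_with_retries(lambda: upload_gguf_file(path, repo))`. Note this retries on any failure, including a missing local file, where retrying cannot help.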
CLI arguments:
| Argument | Short | Required | Description |
|---|---|---|---|
| --gguf-model-path | -m | Yes | Path to the local GGUF file |
| --repo-id | -r | Yes | HuggingFace repository ID |
| --name | -o | No | Custom filename in the repository |
HfApi.upload_file() parameters used:
| Parameter | Value | Description |
|---|---|---|
| path_or_fileobj | local_file_path | The local GGUF file to upload |
| path_in_repo | filename_in_repo | Destination path within the repository |
| repo_id | repo_id | Target repository identifier |
| repo_type | "model" | Repository type (always "model" for GGUF uploads) |
| commit_message | f"Upload {filename_in_repo}" | Auto-generated commit message |
Usage Examples
Upload with default filename:
```sh
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxx
python examples/model-conversion/scripts/utils/hf-upload-gguf-model.py \
  -m ./llama-3.1-8b-instruct-f16.gguf \
  -r myusername/llama-3.1-8b-gguf
```
This uploads the file as llama-3.1-8b-instruct-f16.gguf in the repository.
Upload with a custom filename:
```sh
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxx
python examples/model-conversion/scripts/utils/hf-upload-gguf-model.py \
  -m ./output/model-f16.gguf \
  -r myusername/llama-3.1-8b-gguf \
  -o llama-3.1-8b-instruct-f16.gguf
```
Programmatic usage. The script's filename contains hyphens, so it cannot be imported with a plain `import` statement; load it via importlib instead:
```python
import importlib.util
import os

os.environ["HF_TOKEN"] = "hf_xxxxxxxxxxxxxxxxxxxxx"

# Load the hyphenated script file as a module
spec = importlib.util.spec_from_file_location(
    "hf_upload_gguf_model",
    "examples/model-conversion/scripts/utils/hf-upload-gguf-model.py",
)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

success = module.upload_gguf_file(
    local_file_path="./llama-3.1-8b-instruct-f16.gguf",
    repo_id="myusername/llama-3.1-8b-gguf",
    filename_in_repo="llama-3.1-8b-instruct-f16.gguf",
)
if success:
    print("Upload complete")
```
Expected output on success:
```text
Uploading ./llama-3.1-8b-instruct-f16.gguf to myusername/llama-3.1-8b-gguf/llama-3.1-8b-instruct-f16.gguf
Upload successful!
File available at: https://huggingface.co/myusername/llama-3.1-8b-gguf/blob/main/llama-3.1-8b-instruct-f16.gguf
```
Expected output on failure (file not found):
```text
File not found: ./nonexistent-model.gguf
```
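The file-not-found path can be exercised without any network access, since the existence check runs before HfApi is touched. A minimal standalone sketch of just that validation step (a hypothetical extraction, not the script itself):

```python
import os

def validate_gguf_path(local_file_path):
    """Mirror the script's pre-upload check: the file must exist locally."""
    if not os.path.exists(local_file_path):
        print(f"File not found: {local_file_path}")
        return False
    return True
```

Calling `validate_gguf_path("./nonexistent-model.gguf")` reproduces the failure output above without touching the Hub.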