Implementation:Unslothai Unsloth Push To Hub Merged
| Knowledge Sources | |
|---|---|
| Domains | Model_Deployment, Model_Sharing |
| Last Updated | 2026-02-07 00:00 GMT |
Overview
A concrete tool from the Unsloth library for merging LoRA weights into the base model and uploading the result to the HuggingFace Hub.
Description
model.push_to_hub_merged merges the LoRA adapters into the base model, saves the merged model to a temporary local directory, then uploads all files to the HuggingFace Hub and auto-generates a model card. A companion method, model.push_to_hub_gguf, handles GGUF-format uploads.
Usage
Call this method on a PeftModel after training. It requires a HuggingFace token with write access. Pass private=True to create a private repository.
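Since the token can come from several places, a small pre-flight helper can make the source explicit before calling push_to_hub_merged. This is a hypothetical convenience function (resolve_hf_token is not part of Unsloth); it assumes the HF_TOKEN environment variable that huggingface_hub conventionally reads:

```python
import os

def resolve_hf_token(token=None):
    """Hypothetical helper: prefer an explicitly passed token, otherwise
    fall back to the HF_TOKEN environment variable used by huggingface_hub."""
    if token:
        return token
    env_token = os.environ.get("HF_TOKEN")
    if env_token:
        return env_token
    raise ValueError("No HuggingFace token found; pass token= or set HF_TOKEN")
```

A resolved token can then be passed straight through, e.g. `model.push_to_hub_merged(repo_id, tokenizer=tokenizer, token=resolve_hf_token())`.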
Code Reference
Source Location
- Repository: unsloth
- File: unsloth/save.py
- Lines: L1378-1505 (push_to_hub_merged), L2060-2526 (push_to_hub_gguf), L1506-1594 (upload_to_huggingface helper)
Signature
def unsloth_push_to_hub_merged(
self,
repo_id: str,
tokenizer = None,
save_method: str = "merged_16bit",
use_temp_dir: Optional[bool] = None,
commit_message: Optional[str] = "Trained with Unsloth",
private: Optional[bool] = None,
token: Union[bool, str, None] = None,
max_shard_size: Union[int, str, None] = "5GB",
create_pr: bool = False,
safe_serialization: bool = True,
revision: str = None,
commit_description: str = "Upload model trained with Unsloth 2x faster",
tags: Optional[List[str]] = None,
temporary_location: str = "_unsloth_temporary_saved_buffers",
maximum_memory_usage: float = 0.75,
) -> None:
"""
Merges LoRA weights and pushes to HuggingFace Hub.
Args:
repo_id: HuggingFace repo ID (e.g., "username/model-name").
save_method: "merged_16bit", "merged_4bit", or "lora".
private: Create private repository. Default None (public).
token: HuggingFace auth token.
commit_message: Git commit message for the upload.
"""
Import
# Called as a method on the model instance:
model.push_to_hub_merged("username/my-model", tokenizer=tokenizer, token="hf_xxx")
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| repo_id | str | Yes | HuggingFace Hub repository ID (user/model-name) |
| tokenizer | PreTrainedTokenizer | No | Tokenizer to upload alongside model |
| save_method | str | No | "merged_16bit", "merged_4bit", or "lora" (default: "merged_16bit") |
| private | bool | No | Create private repo (default: None/public) |
| token | str | No | HuggingFace authentication token |
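Because an invalid repo_id or save_method only fails after the (potentially slow) merge step, it can be worth validating both inputs up front. The sketch below is an illustrative pre-check, not Unsloth code; the accepted save_method values are the three listed in the table above:

```python
# The three save_method values documented for push_to_hub_merged.
VALID_SAVE_METHODS = {"merged_16bit", "merged_4bit", "lora"}

def validate_push_args(repo_id: str, save_method: str = "merged_16bit") -> None:
    """Hypothetical pre-flight check before calling model.push_to_hub_merged."""
    # Hub repo IDs take the form "username/model-name".
    parts = repo_id.split("/")
    if len(parts) != 2 or not all(parts):
        raise ValueError(f"repo_id must look like 'username/model-name', got {repo_id!r}")
    if save_method not in VALID_SAVE_METHODS:
        raise ValueError(f"save_method must be one of {sorted(VALID_SAVE_METHODS)}")
```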
Outputs
| Name | Type | Description |
|---|---|---|
| Hub repository | Remote | Model uploaded to HuggingFace Hub with model card, config, weights, tokenizer |
Usage Examples
Push Merged Model
model.push_to_hub_merged(
"myuser/llama-3.2-finetuned",
tokenizer=tokenizer,
save_method="merged_16bit",
token="hf_your_token",
)
Push GGUF to Hub
model.push_to_hub_gguf(
"myuser/llama-3.2-finetuned-GGUF",
tokenizer=tokenizer,
quantization_method=["q4_k_m", "q8_0"],
token="hf_your_token",
)
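push_to_hub_gguf accepts a list of quantization method names, so a typo in one entry can surface only mid-upload. A simple guard like the following can catch that early; it is a hypothetical helper checked against a representative (not exhaustive) subset of the quantization identifiers used by llama.cpp:

```python
# A representative subset of llama.cpp quantization identifiers;
# this set is illustrative and not the full list Unsloth accepts.
KNOWN_QUANTS = {"q2_k", "q3_k_m", "q4_0", "q4_k_m", "q5_k_m", "q6_k", "q8_0", "f16"}

def check_quant_methods(methods):
    """Hypothetical guard: reject quantization names not in the known subset."""
    unknown = [m for m in methods if m.lower() not in KNOWN_QUANTS]
    if unknown:
        raise ValueError(f"Unrecognized quantization methods: {unknown}")
    return list(methods)
```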
Related Pages
Implements Principle
Requires Environment