# Environment: shiyu-coder/Kronos HuggingFace Hub Access
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, Model_Distribution |
| Last Updated | 2026-02-09 13:47 GMT |
## Overview
HuggingFace Hub integration for downloading pretrained Kronos model weights via `PyTorchModelHubMixin`.
## Description
Kronos models (both tokenizer and predictor) inherit from `PyTorchModelHubMixin`, enabling seamless model downloading from HuggingFace Hub using the `from_pretrained()` method. The models are publicly hosted under the `NeoQuasar` organization and do not require authentication tokens. The `huggingface_hub` library handles caching, versioning, and SafeTensors weight loading.
## Usage
Use this environment whenever loading pretrained Kronos models for the first time. After initial download, models are cached locally and no network access is needed. Required for all workflows that begin with model loading (prediction, finetuning).
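A minimal first-load sketch, assuming the `model.kronos` import path from the Code Evidence section; the size-to-repo mapping and the `load_kronos` helper are our own convenience wrapper, not part of the library:

```python
# Sketch of a first-time load; requires network access, after which the
# weights are served from the local HuggingFace cache.
KRONOS_MODELS = {
    "mini": "NeoQuasar/Kronos-mini",    # 4.1M parameters
    "small": "NeoQuasar/Kronos-small",  # 24.7M parameters
    "base": "NeoQuasar/Kronos-base",    # 102.3M parameters
}

def load_kronos(size="small", tokenizer_id="NeoQuasar/Kronos-Tokenizer-base"):
    """Download (or load from cache) a Kronos tokenizer/predictor pair."""
    # Imported lazily so the helper can be defined without the Kronos repo
    # on PYTHONPATH.
    from model.kronos import Kronos, KronosTokenizer

    tokenizer = KronosTokenizer.from_pretrained(tokenizer_id)
    model = Kronos.from_pretrained(KRONOS_MODELS[size])
    return tokenizer, model
```

Because both classes inherit `PyTorchModelHubMixin`, no extra Hub-specific code is needed beyond `from_pretrained()` itself.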
## System Requirements
| Category | Requirement | Notes |
|---|---|---|
| Network | Internet access | Required for first-time model download; cached afterwards |
| Disk | ~16MB - ~400MB per model | Depends on model size (mini: ~16MB, small: ~100MB, base: ~400MB), plus tokenizer weights and cache overhead |
## Dependencies
### Python Packages
- `huggingface_hub` == 0.33.1
- `safetensors` == 0.6.2
### Credentials
No credentials required. All Kronos models are publicly accessible:
- `NeoQuasar/Kronos-mini` (4.1M parameters)
- `NeoQuasar/Kronos-small` (24.7M parameters)
- `NeoQuasar/Kronos-base` (102.3M parameters)
- `NeoQuasar/Kronos-Tokenizer-2k`
- `NeoQuasar/Kronos-Tokenizer-base`
For private models or rate-limited access, set:
- `HF_TOKEN`: HuggingFace API token (optional, for private or gated models)
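When `HF_TOKEN` is set, it can be forwarded to `from_pretrained()` via its standard `token` keyword; a short sketch (public Kronos models need no token at all, so `None` is the normal case):

```python
import os

# Read the optional token; None for the public NeoQuasar repos.
hf_token = os.environ.get("HF_TOKEN")

# `from_pretrained()` forwards `token` to the Hub download machinery, so a
# private or gated repo loads the same way as a public one:
kwargs = {"token": hf_token} if hf_token else {}
# model = Kronos.from_pretrained("NeoQuasar/Kronos-small", **kwargs)
```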
## Quick Install

```bash
pip install huggingface_hub==0.33.1 safetensors==0.6.2
```
## Code Evidence
Model classes with HuggingFace mixin from `model/kronos.py:13,180`:

```python
class KronosTokenizer(nn.Module, PyTorchModelHubMixin):
    ...

class Kronos(nn.Module, PyTorchModelHubMixin):
    ...
```
Model loading via `from_pretrained()` from `finetune/train_predictor.py:213-217`:

```python
tokenizer = KronosTokenizer.from_pretrained(config['finetuned_tokenizer_path'])
model = Kronos.from_pretrained(config['pretrained_predictor_path'])
```
Import statement from `model/kronos.py:4`:

```python
from huggingface_hub import PyTorchModelHubMixin
```
## Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `OSError: Can't load ... from 'NeoQuasar/Kronos-small'` | No internet or model not found | Check network connectivity; verify model ID spelling |
| `EntryNotFoundError` | SafeTensors file missing from model repo | Ensure `huggingface_hub >= 0.33.1` and model repo has `model.safetensors` |
| `ConnectionError: ... timed out` | Network timeout | Retry or set `HF_HUB_OFFLINE=1` if model already cached |
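For the timeout case, a load failure can fall back to the local cache programmatically rather than via the environment variable; a sketch, assuming the weights were downloaded previously (`local_files_only` is a standard `from_pretrained()` keyword; the helper name is ours):

```python
def load_with_cache_fallback(model_cls, repo_id):
    """Retry a Hub load from the local cache when the network fails.

    The fallback succeeds only if the weights are already cached under
    the HuggingFace cache directory.
    """
    try:
        return model_cls.from_pretrained(repo_id)
    except Exception:  # broad on purpose for this sketch (timeouts, DNS, ...)
        return model_cls.from_pretrained(repo_id, local_files_only=True)
```

Usage: `tokenizer = load_with_cache_fallback(KronosTokenizer, "NeoQuasar/Kronos-Tokenizer-base")`.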
## Compatibility Notes
- Offline mode: After initial download, set `HF_HUB_OFFLINE=1` to prevent network calls.
- Cache location: Models cached at `~/.cache/huggingface/hub/` by default. Override with `HF_HOME` environment variable.
- SafeTensors: Models use SafeTensors format for safe and fast weight loading.
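The offline and cache settings above can also be applied in-process. Note that `huggingface_hub` reads these variables into module-level constants at import time, so they must be set before the library is first imported; the cache path below is hypothetical:

```python
import os

# Must run before `huggingface_hub` (or anything that imports it) is loaded.
os.environ["HF_HUB_OFFLINE"] = "1"        # serve only already-cached files
os.environ["HF_HOME"] = "/data/hf-cache"  # hypothetical custom cache root
```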