Environment:Shiyu coder Kronos PyTorch CUDA Environment
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, Deep_Learning |
| Last Updated | 2026-02-09 13:47 GMT |
Overview
Python 3.x environment with PyTorch >= 2.0.0, optional CUDA GPU acceleration, and core scientific computing libraries for financial time series forecasting.
Description
This environment provides the base runtime for all Kronos model operations including inference, finetuning, and backtesting. It is built around PyTorch >= 2.0.0 with automatic device detection supporting NVIDIA CUDA GPUs, Apple Silicon MPS, and CPU fallback. The environment includes numerical computing libraries (NumPy, Pandas), tensor operation utilities (einops), and HuggingFace Hub integration for model downloading. It supports both single-GPU inference and multi-GPU distributed training via the DDP environment extension.
Usage
Use this environment for all Kronos workflows: single-series prediction, batch prediction, Qlib finetuning, CSV finetuning, and web UI inference. This is the mandatory base prerequisite for every Implementation in the repository.
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | Linux (Ubuntu 20.04+), macOS, or Windows | Linux recommended for GPU training |
| Hardware | NVIDIA GPU (optional) | CUDA-capable GPU for acceleration; MPS on Apple Silicon; CPU fallback supported |
| VRAM | 4GB+ recommended | Kronos-mini (4.1M params) runs on limited VRAM; Kronos-base (102.3M params) needs more |
| Python | 3.8+ | Compatible with standard CPython |
| Disk | 2GB minimum | For model weights and dependencies |
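The requirements in the table can be checked before installing anything Kronos-specific. The sketch below is a hypothetical preflight helper, not part of the repository; it uses only the standard library and the package names listed under Dependencies:

```python
import sys
from importlib.util import find_spec

def preflight():
    """Report requirement violations; an empty list means the basics are met."""
    issues = []
    if sys.version_info < (3, 8):
        issues.append("Python 3.8+ required, found "
                      f"{sys.version_info.major}.{sys.version_info.minor}")
    # Presence check only; version pins are verified at install time by pip.
    for pkg in ("torch", "numpy", "pandas", "einops", "huggingface_hub"):
        if find_spec(pkg) is None:
            issues.append(f"missing package: {pkg}")
    return issues

print(preflight() or "environment looks OK")
```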
Dependencies
System Packages
- CUDA Toolkit (optional, for GPU acceleration)
- `git` (for cloning repository)
Python Packages
- `torch` >= 2.0.0
- `numpy`
- `pandas` == 2.2.2
- `einops` == 0.8.1
- `huggingface_hub` == 0.33.1
- `matplotlib` == 3.9.3
- `tqdm` == 4.67.1
- `safetensors` == 0.6.2
Credentials
No credentials required for the base environment. Model weights are publicly available on HuggingFace Hub without authentication:
- `NeoQuasar/Kronos-mini` (4.1M parameters)
- `NeoQuasar/Kronos-small` (24.7M parameters)
- `NeoQuasar/Kronos-base` (102.3M parameters)
- `NeoQuasar/Kronos-Tokenizer-2k`
- `NeoQuasar/Kronos-Tokenizer-base`
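Because these repositories are public, `huggingface_hub` can resolve them without a token. As a sketch, `hf_hub_url` builds a direct download URL locally (no network call); `snapshot_download` would fetch the actual weights. The `config.json` filename is an assumption about the repo layout:

```python
from huggingface_hub import hf_hub_url

# Build a direct download URL for a file in a public model repo;
# no authentication is needed. "config.json" is assumed to exist.
url = hf_hub_url(repo_id="NeoQuasar/Kronos-mini", filename="config.json")
print(url)
```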
Quick Install
```shell
# Install all core dependencies. Quote the torch spec so the shell
# does not interpret ">=" as an output redirection.
pip install "torch>=2.0.0" numpy pandas==2.2.2 einops==0.8.1 huggingface_hub==0.33.1 matplotlib==3.9.3 tqdm==4.67.1 safetensors==0.6.2
```
Code Evidence
Device auto-detection from `model/kronos.py:494-501`:
```python
# Auto-detect device if not specified
if device is None:
    if torch.cuda.is_available():
        device = "cuda:0"
    elif hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
        device = "mps"
    else:
        device = "cpu"
```
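The same logic can be wrapped into a standalone helper to check, before loading any weights, which device Kronos would pick. This is an illustrative sketch (the function name is not part of the Kronos API), with an extra fallback for machines where PyTorch is absent:

```python
def resolve_device(device=None):
    """Mirror the auto-detection in model/kronos.py; fall back to CPU
    when PyTorch is not installed at all."""
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch: CPU is the only option
    if device is not None:
        return device  # an explicit choice always wins
    if torch.cuda.is_available():
        return "cuda:0"
    if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        return "mps"
    return "cpu"

print(resolve_device())       # one of "cuda:0", "mps", "cpu"
print(resolve_device("cpu"))  # explicit override
```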
HuggingFace Hub integration from `model/kronos.py:13`:
```python
class KronosTokenizer(nn.Module, PyTorchModelHubMixin):
```
Einops dependency from `model/module.py:2`:
```python
from einops import rearrange, reduce
```
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `ModuleNotFoundError: No module named 'einops'` | einops not installed | `pip install einops==0.8.1` |
| `RuntimeError: CUDA out of memory` | Insufficient GPU VRAM | Use smaller model (Kronos-mini) or reduce batch size |
| `ModuleNotFoundError: No module named 'model'` | Incorrect Python path | Ensure repository root is in PYTHONPATH or run from repo root |
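For the last error in the table, an alternative to setting PYTHONPATH is to prepend the repository root to `sys.path` before importing. A sketch, where the `KRONOS_REPO_ROOT` environment variable is hypothetical and defaults to the current directory:

```python
import os
import sys

# Make the repo's top-level `model` package importable from anywhere.
# KRONOS_REPO_ROOT is a hypothetical variable, defaulting to the cwd.
repo_root = os.environ.get("KRONOS_REPO_ROOT", os.getcwd())
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

print(repo_root in sys.path)
```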
Compatibility Notes
- NVIDIA CUDA GPUs: Full support with automatic detection. Recommended for training.
- Apple Silicon (MPS): Supported for inference via `torch.backends.mps`. Training not tested.
- CPU: Fully supported but significantly slower. Suitable for small-scale inference only.
- PyTorch >= 2.0: Required. Uses `F.scaled_dot_product_attention` which leverages FlashAttention when available on compatible hardware.
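The fused attention entry point mentioned above can be exercised directly. On PyTorch >= 2.0, the call below dispatches to FlashAttention or a memory-efficient kernel when the hardware supports it, and to a math fallback otherwise; the shapes are arbitrary demo values, not Kronos model dimensions:

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim) — arbitrary demo shapes
q = torch.randn(1, 4, 16, 32)
k = torch.randn(1, 4, 16, 32)
v = torch.randn(1, 4, 16, 32)

# Causal fused attention; PyTorch picks the backend kernel internally.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # same shape as the query
```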
Related Pages
- Implementation:Shiyu_coder_Kronos_KronosTokenizer_From_Pretrained
- Implementation:Shiyu_coder_Kronos_Kronos_From_Pretrained
- Implementation:Shiyu_coder_Kronos_KronosPredictor_Init
- Implementation:Shiyu_coder_Kronos_KronosPredictor_Predict
- Implementation:Shiyu_coder_Kronos_KronosPredictor_Predict_Batch
- Implementation:Shiyu_coder_Kronos_Auto_Regressive_Inference
- Implementation:Shiyu_coder_Kronos_KronosTokenizer_Encode
- Implementation:Shiyu_coder_Kronos_Candlestick_Data_Preparation_Pattern
- Implementation:Shiyu_coder_Kronos_Plot_Prediction_Pattern
- Implementation:Shiyu_coder_Kronos_Config_Init
- Implementation:Shiyu_coder_Kronos_QlibDataPreprocessor_Usage
- Implementation:Shiyu_coder_Kronos_QlibDataset_Usage
- Implementation:Shiyu_coder_Kronos_Train_Model_Tokenizer_Qlib
- Implementation:Shiyu_coder_Kronos_Train_Model_Predictor_Qlib
- Implementation:Shiyu_coder_Kronos_QlibBacktest_Usage
- Implementation:Shiyu_coder_Kronos_Generate_Predictions_Qlib
- Implementation:Shiyu_coder_Kronos_CustomFinetuneConfig_Init
- Implementation:Shiyu_coder_Kronos_CustomKlineDataset_Usage
- Implementation:Shiyu_coder_Kronos_SequentialTrainer_Usage
- Implementation:Shiyu_coder_Kronos_WebUI_App
- Implementation:Shiyu_coder_Kronos_Prediction_Result_Output