Environment:Huggingface Optimum Python Core Dependencies
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, Deep_Learning |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
Python 3.9+ environment with PyTorch >= 2.1.0, Transformers >= 4.36.0, and HuggingFace Hub >= 0.8.0 as core runtime dependencies for the Optimum library.
Description
This environment defines the base runtime dependencies for the Huggingface Optimum library. It is required for all workflows: model export, GPTQ quantization, accelerated inference, tensor parallelization, and FX graph optimization. The core stack centers on PyTorch and Transformers, with optional extensions for diffusers, ONNX Runtime, and hardware-specific backends. The library is OS-independent and runs on any platform where Python and PyTorch are available.
Usage
Use this environment as the foundation for any Optimum workflow. All Implementation pages in this wiki assume these core dependencies are satisfied; they are the mandatory prerequisite before layering on workflow-specific dependencies (GPTQ, parallelization, inference acceleration).
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | OS Independent | Linux, macOS, Windows supported (classifiers: Production/Stable) |
| Python | >= 3.9.0 | Supported versions: 3.9, 3.10, 3.11 |
| Hardware | CPU minimum | GPU optional (CUDA for acceleration) |
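A minimal probe of the requirements in the table can be sketched as follows. `check_environment` is a hypothetical helper, not part of Optimum; `torch` is imported lazily so the check still runs on machines where it is not yet installed.

```python
import sys

def check_environment() -> dict:
    """Report whether the interpreter and (optional) CUDA support meet the table above."""
    info = {"python_ok": sys.version_info >= (3, 9)}
    try:
        import torch  # optional at this stage: GPU support is not required
        info["torch_version"] = torch.__version__
        info["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        info["torch_version"] = None
        info["cuda_available"] = False
    return info

print(check_environment())
```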
Dependencies
Core Runtime Packages (install_requires)
- `transformers` >= 4.29 (setup.py), runtime check >= 4.36.0 (import_utils.py)
- `torch` >= 1.11 (setup.py), runtime check >= 2.1.0 (import_utils.py)
- `packaging` (no version constraint)
- `numpy` (no version constraint)
- `huggingface_hub` >= 0.8.0
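Whether these packages are present can be checked after installation with stdlib distribution metadata; `report_core_deps` is a hypothetical helper for illustration, not part of Optimum.

```python
from importlib.metadata import PackageNotFoundError, version

# The pip distribution names of the core install_requires packages.
CORE_PKGS = ["transformers", "torch", "packaging", "numpy", "huggingface_hub"]

def report_core_deps(pkgs=CORE_PKGS) -> dict:
    """Map each package name to its installed version, or None if missing."""
    found = {}
    for name in pkgs:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None  # not installed: install it before using Optimum
    return found

print(report_core_deps())
```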
Runtime Minimum Version Constants
These are enforced at runtime in `optimum/utils/import_utils.py`:
- `torch` >= 2.1.0 (TORCH_MINIMUM_VERSION)
- `transformers` >= 4.36.0 (TRANSFORMERS_MINIMUM_VERSION)
- `diffusers` >= 0.22.0 (DIFFUSERS_MINIMUM_VERSION, optional)
- `gptqmodel` >= 1.6.0 (GPTQMODEL_MINIMUM_VERSION, optional)
- `auto_gptq` >= 0.4.99 (AUTOGPTQ_MINIMUM_VERSION, optional, deprecated)
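The shape of such a runtime gate can be sketched with `packaging.version`: parse the installed version and compare it against the minimum constant. The name `check_min_version` is illustrative only; Optimum's actual checks live in `optimum/utils/import_utils.py`.

```python
from packaging import version

TORCH_MINIMUM_VERSION = version.parse("2.1.0")

def check_min_version(installed: str, minimum) -> bool:
    # version.parse handles pre/dev releases correctly,
    # e.g. "2.1.0.dev0" sorts below "2.1.0".
    return version.parse(installed) >= minimum

print(check_min_version("2.2.1", TORCH_MINIMUM_VERSION))   # True
print(check_min_version("1.13.0", TORCH_MINIMUM_VERSION))  # False
```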
Optional Backend Packages
- `optimum-onnx` (ONNX export and ONNX Runtime inference)
- `optimum-intel` >= 1.23.0 (OpenVINO, IPEX, Neural Compressor)
- `optimum-habana` >= 1.17.0 (Habana Gaudi)
- `optimum-quanto` >= 0.2.4 (Quanto quantization)
- `optimum-amd` (AMD acceleration)
- `optimum-furiosa` (Furiosa acceleration)
Test Dependencies
- `pytest`, `accelerate`, `requests`, `parameterized`, `pytest-xdist`
- `Pillow`, `sacremoses`, `torchvision`, `torchaudio`, `einops`, `timm`
- `scikit-learn`, `sentencepiece`, `rjieba`, `hf_xet`
Credentials
No credentials are required for core functionality. Optional credentials and related environment variables:
- `HF_TOKEN`: HuggingFace API token for accessing gated models or private repositories via `huggingface_hub`.
- `HF_HUB_OFFLINE`: Set to `1` to force offline mode (checked in parallelization utils).
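A small sketch of how these variables are typically consumed (illustrative only; neither is required for core functionality):

```python
import os

hf_token = os.environ.get("HF_TOKEN")  # when set, forwarded to huggingface_hub calls as token=
offline = os.environ.get("HF_HUB_OFFLINE") == "1"  # only the exact value "1" forces offline mode

print(f"token configured: {hf_token is not None}; offline mode: {offline}")
```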
Quick Install
```shell
# Install the core Optimum library
pip install optimum

# Or install the core runtime dependencies directly (quote the specifiers so
# the shell does not treat ">" as output redirection)
pip install "transformers>=4.36.0" "torch>=2.1.0" packaging numpy "huggingface_hub>=0.8.0"

# With specific backend extras
pip install optimum[onnxruntime]  # ONNX Runtime backend
pip install optimum[openvino]     # OpenVINO backend
pip install optimum[habana]       # Habana Gaudi backend
```
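After installation, a quick sanity check can confirm the runtime minimums are met. This mirrors, but is not, Optimum's own import-time validation; imports are guarded so a partial install is reported rather than crashing.

```python
from packaging import version

# Runtime floors from optimum/utils/import_utils.py.
MINIMUMS = {"torch": "2.1.0", "transformers": "4.36.0"}

def verify_runtime_minimums() -> dict:
    """Map each package to True/False: installed and at or above its runtime minimum."""
    results = {}
    for name, minimum in MINIMUMS.items():
        try:
            mod = __import__(name)
            # Strip local build tags such as "2.1.0+cu121" before comparing.
            installed = mod.__version__.split("+")[0]
            results[name] = version.parse(installed) >= version.parse(minimum)
        except ImportError:
            results[name] = False
    return results

print(verify_runtime_minimums())
```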
Code Evidence
Minimum version constants from `optimum/utils/import_utils.py:29-34`:
```python
TORCH_MINIMUM_VERSION = version.parse("2.1.0")
TRANSFORMERS_MINIMUM_VERSION = version.parse("4.36.0")
DIFFUSERS_MINIMUM_VERSION = version.parse("0.22.0")
AUTOGPTQ_MINIMUM_VERSION = version.parse("0.4.99")  # Allows 0.5.0.dev0
GPTQMODEL_MINIMUM_VERSION = version.parse("1.6.0")
ORT_QUANTIZE_MINIMUM_VERSION = version.parse("1.4.0")
```
Package availability detection from `optimum/utils/import_utils.py:82-119`:
```python
_timm_available = _is_package_available("timm")
_onnx_available = _is_package_available("onnx")
_datasets_available = _is_package_available("datasets")
_tensorrt_available = _is_package_available("tensorrt")
_pydantic_available = _is_package_available("pydantic")
_openvino_available = _is_package_available("openvino")
_gptqmodel_available = _is_package_available("gptqmodel")
_accelerate_available = _is_package_available("accelerate")
_torch_available, _torch_version = _is_package_available("torch", return_version=True)
_transformers_available, _transformers_version = _is_package_available("transformers", return_version=True)
```
Python version requirement from `setup.py:97`:
```python
python_requires=">=3.9.0"
```
Core install requires from `setup.py:15-21`:
```python
REQUIRED_PKGS = [
    "transformers>=4.29",
    "torch>=1.11",
    "packaging",
    "numpy",
    "huggingface_hub>=0.8.0",
]
```
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `requires the transformers>=X library but it was not found` | Transformers not installed or version too old | `pip install -U transformers` |
| `requires the diffusers library but it was not found` | Diffusers not installed | `pip install diffusers` |
| `requires the datasets library but it was not found` | Datasets needed for GPTQ calibration | `pip install datasets` |
| `Found an incompatible version of gptqmodel` | GPTQModel version below 1.6.0 | `pip install -U gptqmodel` |
| `Found an incompatible version of numpy` | Numpy version incompatible with required range | Install compatible numpy version per error message |
Compatibility Notes
- Python 3.7/3.8: Not supported. Minimum is Python 3.9.0.
- setup.py vs runtime checks: The `setup.py` lists lower minimum versions (torch>=1.11, transformers>=4.29) for install compatibility, but the runtime constants in `import_utils.py` enforce stricter minimums (torch>=2.1.0, transformers>=4.36.0) for actual functionality.
- ONNX Runtime variants: The library checks for 16 different ONNX Runtime distribution variants (GPU, ROCm, OpenVINO, ARM, CANN, TVM, QNN, MIGraphX, etc.) to maximize hardware compatibility.
- Backend extensibility: New hardware backends are added as optional `optimum-*` subpackages, not as core dependencies.
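The setup.py-versus-runtime gap noted above can be demonstrated directly with `packaging.version`: an installation that satisfies the install-time floor can still fail the runtime gate. The installed version here is hypothetical.

```python
from packaging import version

SETUP_FLOOR = version.parse("1.11")    # torch floor in setup.py
RUNTIME_FLOOR = version.parse("2.1.0")  # torch floor in import_utils.py

installed = version.parse("1.13.1")  # hypothetical installed torch version

print(installed >= SETUP_FLOOR)    # True: pip accepts this installation
print(installed >= RUNTIME_FLOOR)  # False: the runtime check rejects it
```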
Related Pages
- Implementation:Huggingface_Optimum_TasksManager_Infer_Task
- Implementation:Huggingface_Optimum_TasksManager_Get_Exporter_Config
- Implementation:Huggingface_Optimum_TasksManager_Determine_Framework
- Implementation:Huggingface_Optimum_Is_Backend_Available
- Implementation:Huggingface_Optimum_ExporterConfig_Generate_Dummy_Inputs
- Implementation:Huggingface_Optimum_ExporterConfig_Validation
- Implementation:Huggingface_Optimum_Model_Decomposition_Utils
- Implementation:Huggingface_Optimum_OptimizedModel_From_Pretrained
- Implementation:Huggingface_Optimum_Pipeline_Factory