Environment: Alibaba ROLL Ascend NPU Environment
| Knowledge Sources | Details |
|---|---|
| Domains | Infrastructure, Deep_Learning, NPU_Computing |
| Last Updated | 2026-02-07 19:00 GMT |
Overview
A Huawei Ascend NPU environment with the HCCL communication backend and the torch_npu runtime, for running ROLL on Atlas 900 A2 PODc hardware.
Description
This environment provides an NPU-accelerated execution context for ROLL on Huawei Ascend hardware. The platform uses `ASCEND_RT_VISIBLE_DEVICES` for device control and HCCL (Huawei Collective Communication Library) for distributed communication. The `NpuPlatform` is auto-detected when `torch_npu` is importable and CUDA is not available. Note that Ascend NPUs have specific limitations: int64 tensors must be converted to int32, UUID-based device identification is not supported, Flash Attention and Transformer Engine are unavailable, and only the DeepSpeed training backend is supported (Megatron is not yet available).
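As an illustration of the torch_npu runtime in practice, the minimal sketch below allocates a tensor directly on an NPU device, using an explicit int32 dtype in line with the int64 limitation above. This assumes `torch_npu` is installed; the device index is only an example.

```python
import torch
import torch_npu  # noqa: F401  -- importing registers the "npu" device type with PyTorch

# Allocate on the first visible NPU; int32 avoids the int64 limitation noted above.
x = torch.ones((2, 3), dtype=torch.int32, device="npu:0")
print(x.device, x.dtype)
```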
Usage
Use this environment when running ROLL on Huawei Ascend NPU hardware (Atlas 900 A2 PODc). This is a reference implementation only; test it thoroughly before production use and consult Huawei's official channels for support.
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | Linux | Ascend driver and firmware required |
| Hardware | Atlas 900 A2 PODc | Ascend 910 NPU |
| CANN | 8.3.RC1 | Huawei Compute Architecture for Neural Networks |
| Disk | 50GB+ SSD | For model checkpoints and datasets |
Dependencies
System Packages
- CANN toolkit 8.3.RC1
- HCCL (Huawei Collective Communication Library)
Python Packages
- `python` == 3.11
- `torch` == 2.7.1 (CPU-only base)
- `torch_npu` == 2.7.1
- `vllm` == 0.11.0
- `vllm-ascend` == 0.11.0rc1
- `transformers` >= 4.57.1
- `deepspeed` == 0.16.4
- `ray[default,cgraph]` == 2.48.0
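A quick way to confirm the pinned versions are installed (a sketch; the distribution names are assumed to match the package names listed above):

```python
from importlib.metadata import version

# Print the installed version of each pinned dependency for comparison with the list above.
for dist in ("torch", "torch_npu", "vllm", "vllm-ascend", "transformers", "deepspeed", "ray"):
    print(dist, version(dist))
```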
Environment Variables
- `ASCEND_RT_VISIBLE_DEVICES`: Ascend NPU device visibility (set internally by ROLL)
- `RAY_EXPERIMENTAL_NOSET_ASCEND_RT_VISIBLE_DEVICES`: Prevents Ray from overriding device visibility
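ROLL manages both variables itself, but when debugging it can help to see how they interact. The snippet below is an illustration only: it pins four NPUs and stops Ray from rewriting the visibility list.

```python
import os

# Illustration only -- ROLL sets ASCEND_RT_VISIBLE_DEVICES internally.
os.environ["ASCEND_RT_VISIBLE_DEVICES"] = "0,1,2,3"  # expose NPUs 0-3 to this process
os.environ["RAY_EXPERIMENTAL_NOSET_ASCEND_RT_VISIBLE_DEVICES"] = "1"  # keep Ray from overriding it
```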
Quick Install
```bash
# Install Ascend-specific packages
pip install torch==2.7.1 torch_npu==2.7.1
pip install vllm==0.11.0 vllm-ascend==0.11.0rc1
pip install deepspeed==0.16.4
pip install "transformers>=4.57.1"  # quoted so the shell does not treat ">=" as a redirection

# Install common ROLL dependencies
pip install -r requirements_common.txt
pip install -e .
```
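After installation, a short smoke test (a sketch, assuming the Ascend driver, firmware, and CANN are already set up; `torch.npu` is exposed once `torch_npu` is imported) confirms the NPU runtime is reachable:

```python
import torch
import torch_npu  # noqa: F401  -- patches torch so torch.npu is available

assert torch.npu.is_available(), "NPU not available -- check driver, firmware, and CANN setup"
print(f"Visible NPUs: {torch.npu.device_count()}")
```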
Code Evidence
Platform detection from `roll/platforms/__init__.py:40-44`:
```python
try:
    import torch_npu  # noqa: F401

    logger.debug("Detected torch_npu. Initializing NPU platform.")
    return NpuPlatform()
except ImportError:
    return CpuPlatform()
```
NPU int64 to int32 conversion from `roll/distributed/scheduler/protocol.py:181`:
```python
logger.debug(f"[NPU] Converting Tensor {key} from int64 -> int32, shape={val.shape}")
```
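The guard around that log line is not quoted here; a minimal sketch of the same pattern (the helper name is hypothetical, not ROLL's actual function) looks like:

```python
import torch

def downcast_for_npu(key: str, val: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper mirroring the protocol.py behavior: Ascend NPUs
    lack full int64 kernel coverage, so int64 tensors are downcast to int32."""
    if val.dtype == torch.int64:
        print(f"[NPU] Converting Tensor {key} from int64 -> int32, shape={val.shape}")
        return val.to(torch.int32)
    return val
```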
HCCL backend from `roll/platforms/npu.py:16`:
```python
communication_backend: str = "hccl"
```
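For context, initializing a `torch.distributed` process group on Ascend uses `"hccl"` where CUDA setups would use `"nccl"`. A single-process sketch follows; the address, port, rank, and world size are placeholders, and ROLL performs this wiring internally.

```python
import os
import torch.distributed as dist
import torch_npu  # noqa: F401  -- registers the "hccl" backend with torch.distributed

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="hccl", rank=0, world_size=1)
print(dist.get_backend())  # "hccl"
dist.destroy_process_group()
```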
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `ImportError: No module named 'torch_npu'` | torch_npu not installed | Install `torch_npu==2.7.1` matching your CANN version |
| `RuntimeError: vLLM is not installed` | vLLM Ascend not installed | Install both `vllm==0.11.0` and `vllm-ascend==0.11.0rc1` |
| int64 tensor errors | Ascend NPU does not support int64 | Framework auto-converts; ensure `protocol.py` patch is active |
Compatibility Notes
- Flash Attention: Not supported on Ascend NPU; request a supported attention implementation instead (see the sketch after this list).
- Transformer Engine: Not supported on Ascend NPU.
- Megatron Training: Not yet supported; use DeepSpeed backend only.
- UUID Device ID: Not supported; device identification uses alternative methods.
- Status: Reference implementation only; consult official channels for production use.
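Because Flash Attention is unavailable, model loads should request an attention implementation Ascend supports. A hedged sketch using the standard transformers `attn_implementation` argument (the model id is only an example; verify which implementation performs best on NPU):

```python
from transformers import AutoModelForCausalLM

# Request the eager attention path instead of flash_attention_2,
# which is not supported on Ascend NPU.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct",  # example model id
    attn_implementation="eager",
)
```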