
Implementation:Deepspeedai DeepSpeed Real Accelerator

From Leeroopedia


Knowledge Sources
Domains Accelerator, Hardware Detection
Last Updated 2026-02-09 00:00 GMT

Overview

Accelerator detection, selection, and singleton management module that determines which hardware backend DeepSpeed uses at runtime.

Description

The real_accelerator module implements the factory pattern for accelerator instantiation through the get_accelerator() function. It follows a three-step process:

1. Check the DS_ACCELERATOR environment variable for an explicit override, validating it against SUPPORTED_ACCELERATOR_LIST (cuda, cpu, xpu, xpu.external, npu, mps, hpu, mlu, sdaa).
2. If there is no override, auto-detect by attempting imports in priority order: intel_extension_for_deepspeed (xpu.external), intel_extension_for_pytorch (xpu), native torch.xpu, torch_npu, torch_sdaa, torch.mps, habana_frameworks (hpu), torch_mlu, torch.cuda, falling back to cpu.
3. Instantiate the appropriate accelerator class and validate it via _validate_accelerator(), which checks inheritance from DeepSpeedAccelerator.

The result is cached in a module-level ds_accelerator singleton. The set_accelerator() function allows manual override with the same validation.
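The detection order described above can be illustrated with a minimal stand-alone sketch. This is an assumption-laden simplification: the real module attempts actual imports and probes device availability, and torch.xpu, torch.mps, and torch.cuda are submodules of torch rather than separate packages, so they are omitted from this hypothetical DETECTION_PRIORITY table.

```python
import importlib.util
import os

# Hypothetical simplified priority table; the real module also probes
# torch.xpu, torch.mps, and torch.cuda, which are part of torch itself.
DETECTION_PRIORITY = [
    ("intel_extension_for_deepspeed", "xpu.external"),
    ("intel_extension_for_pytorch", "xpu"),
    ("torch_npu", "npu"),
    ("torch_sdaa", "sdaa"),
    ("habana_frameworks", "hpu"),
    ("torch_mlu", "mlu"),
]

def detect_accelerator_name():
    # Step 1: an explicit DS_ACCELERATOR override always wins.
    override = os.environ.get("DS_ACCELERATOR")
    if override is not None:
        return override
    # Step 2: the first importable backend package wins.
    for package, name in DETECTION_PRIORITY:
        if importlib.util.find_spec(package) is not None:
            return name
    # Step 3: fall back to CPU when no backend package is present.
    return "cpu"
```

Using find_spec rather than a bare import keeps the probe cheap: it only checks whether the package could be imported, without executing its top-level code.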

Usage

Call get_accelerator() to obtain the current hardware backend. This is the standard entry point for all DeepSpeed components that need hardware access. Use set_accelerator() for custom accelerator injection in testing or specialized scenarios.
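The injection path can be sketched with a minimal mock-up. DeepSpeedAccelerator here is a simplified stand-in for the real abstract base class, and FakeAccelerator is a hypothetical test double; the sketch only shows the validate-then-replace pattern, not the real module's detection logic.

```python
# Simplified stand-in for the real abstract base class.
class DeepSpeedAccelerator:
    def device_name(self):
        raise NotImplementedError

class FakeAccelerator(DeepSpeedAccelerator):
    """Hypothetical test double reporting a fixed device name."""
    def device_name(self):
        return "cpu"

_ds_accelerator = None  # module-level singleton slot

def set_accelerator(accel_obj):
    """Install a custom accelerator after validating its type."""
    global _ds_accelerator
    if not isinstance(accel_obj, DeepSpeedAccelerator):
        raise TypeError("accelerator must subclass DeepSpeedAccelerator")
    _ds_accelerator = accel_obj

def get_accelerator():
    """Return whatever accelerator was installed (no auto-detection here)."""
    return _ds_accelerator
```

Validating before assignment means a rejected object never replaces the current singleton, so a failed injection leaves the running configuration untouched.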

Code Reference

Source Location

Signature

import os

SUPPORTED_ACCELERATOR_LIST = ['cuda', 'cpu', 'xpu', 'xpu.external',
                              'npu', 'mps', 'hpu', 'mlu', 'sdaa']

ds_accelerator = None

def get_accelerator():
    global ds_accelerator
    if ds_accelerator is not None:
        return ds_accelerator

    accelerator_name = None

    # Step 1: Check DS_ACCELERATOR environment variable
    if "DS_ACCELERATOR" in os.environ:
        accelerator_name = os.environ["DS_ACCELERATOR"]
        # Validate against SUPPORTED_ACCELERATOR_LIST; ds_set_method = "override"

    # Step 2: Auto-detect by trying imports
    if accelerator_name is None:
        # Try intel_extension_for_deepspeed (xpu.external)
        # Try intel_extension_for_pytorch (xpu)
        # Try torch.xpu
        # Try torch_npu
        # Try torch_sdaa
        # Try torch.mps
        # Try habana_frameworks (hpu)
        # Try torch_mlu
        # Try torch.cuda (cuda)
        # Fallback to 'cpu'
        ds_set_method = "auto detect"

    # Step 3: Instantiate and validate
    if accelerator_name == "cuda":
        from .cuda_accelerator import CUDA_Accelerator
        ds_accelerator = CUDA_Accelerator()
    elif accelerator_name == "cpu":
        from .cpu_accelerator import CPU_Accelerator
        ds_accelerator = CPU_Accelerator()
    # ... other backends ...

    _validate_accelerator(ds_accelerator)
    return ds_accelerator

def set_accelerator(accel_obj):
    global ds_accelerator
    _validate_accelerator(accel_obj)
    ds_accelerator = accel_obj

def _validate_accelerator(accel_obj):
    # Check that accel_obj is an instance of DeepSpeedAccelerator,
    # handling both build-time and run-time import paths.
    ...

def is_current_accelerator_supported():
    return get_accelerator().device_name() in SUPPORTED_ACCELERATOR_LIST

Import

from deepspeed.accelerator import get_accelerator, set_accelerator

I/O Contract

Inputs

Name      | Type                 | Required | Description
accel_obj | DeepSpeedAccelerator | Required | Custom accelerator passed to set_accelerator()

Outputs

Name         | Type                 | Description
accelerator  | DeepSpeedAccelerator | Singleton accelerator instance
is_supported | bool                 | Whether the current accelerator is supported
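Both outputs can be demonstrated with a small sketch. CPUStandIn is a hypothetical stand-in for a real backend class; the point is the contract: repeated calls return the identical cached instance, and the support check tests its device name against SUPPORTED_ACCELERATOR_LIST.

```python
SUPPORTED_ACCELERATOR_LIST = ['cuda', 'cpu', 'xpu', 'xpu.external',
                              'npu', 'mps', 'hpu', 'mlu', 'sdaa']

class CPUStandIn:
    """Hypothetical stand-in for a real accelerator backend class."""
    def device_name(self):
        return "cpu"

ds_accelerator = None  # module-level singleton cache

def get_accelerator():
    global ds_accelerator
    if ds_accelerator is None:
        ds_accelerator = CPUStandIn()  # stands in for backend detection
    return ds_accelerator

def is_current_accelerator_supported():
    return get_accelerator().device_name() in SUPPORTED_ACCELERATOR_LIST
```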

Usage Examples

# Get auto-detected accelerator
from deepspeed.accelerator import get_accelerator

accelerator = get_accelerator()
print(f"Device: {accelerator.device_name()}")
print(f"Backend: {accelerator.communication_backend_name()}")

# Force specific accelerator via environment
import os
os.environ['DS_ACCELERATOR'] = 'cuda'
from deepspeed.accelerator import get_accelerator
accelerator = get_accelerator()  # Will use CUDA

# Manually set custom accelerator
from deepspeed.accelerator import set_accelerator
from deepspeed.accelerator.cuda_accelerator import CUDA_Accelerator

custom_accel = CUDA_Accelerator()
set_accelerator(custom_accel)

# Check if current accelerator is supported
from deepspeed.accelerator import is_current_accelerator_supported
if is_current_accelerator_supported():
    print("Accelerator is officially supported")

# Auto-detection priority order:
# 1. DS_ACCELERATOR env var (override)
# 2. intel_extension_for_deepspeed (xpu.external)
# 3. intel_extension_for_pytorch (xpu)
# 4. torch.xpu
# 5. torch_npu
# 6. torch_sdaa
# 7. torch.mps
# 8. habana_frameworks (hpu)
# 9. torch_mlu
# 10. torch.cuda (cuda)
# 11. cpu (fallback)
