# Implementation: hpcaitech/ColossalAI `launch_from_torch`
| Knowledge Sources | |
|---|---|
| Domains | Distributed_Computing, Infrastructure |
| Last Updated | 2026-02-09 00:00 GMT |
## Overview
A convenience function provided by ColossalAI for initializing the distributed training environment in scripts started with the `torchrun` launcher.
## Description
`launch_from_torch()` is a convenience wrapper around ColossalAI's `launch()` function that automatically reads the distributed configuration from environment variables set by `torchrun` or `torch.distributed.launch`. It initializes the PyTorch distributed backend, binds each process to its CUDA device, and sets the global random seed.
## Usage
Call this function at the start of any ColossalAI training script launched via `torchrun`. It replaces a manual call to `torch.distributed.init_process_group()` and performs additional ColossalAI-specific initialization, as the sketch below illustrates.
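For context, the manual PyTorch setup that `launch_from_torch()` subsumes looks roughly like the following. This is an approximation for illustration, not the actual ColossalAI implementation (see Source Location below for that):

```python
# Sketch: approximately what launch_from_torch() does internally.
# Not the actual ColossalAI source; see colossalai/initialize.py for that.
import os

import torch
import torch.distributed as dist

# torchrun exports these variables to every worker process
rank = int(os.environ["RANK"])
local_rank = int(os.environ["LOCAL_RANK"])
world_size = int(os.environ["WORLD_SIZE"])

# Join the global process group; MASTER_ADDR/MASTER_PORT are read
# from the environment by the default init_method ("env://")
dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)

# Bind this process to its local GPU
torch.cuda.set_device(local_rank)

# Apply the same seed in every process
torch.manual_seed(1024)
```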
## Code Reference
### Source Location
- Repository: ColossalAI
- File: colossalai/initialize.py
- Lines: 154-184
### Signature
```python
def launch_from_torch(
    backend: str = "nccl",
    seed: int = 1024,
    verbose: bool = True,
) -> None:
    """
    A wrapper for colossalai.launch for torchrun or torch.distributed.launch
    by reading rank and world size from the environment variables set by PyTorch.

    Args:
        backend: Backend for torch.distributed (default: "nccl")
        seed: Random seed for every process (default: 1024)
        verbose: Whether to print logs (default: True)
    """
```
### Import
```python
import colossalai
# or
from colossalai import launch_from_torch
```
## I/O Contract
### Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| backend | str | No | Distributed backend ("nccl" for GPU, "gloo" for CPU). Default: "nccl" |
| seed | int | No | Global random seed. Default: 1024 |
| verbose | bool | No | Print initialization logs. Default: True |
| Environment: RANK | env var | Yes | Process rank (set by torchrun) |
| Environment: LOCAL_RANK | env var | Yes | Local GPU rank (set by torchrun) |
| Environment: WORLD_SIZE | env var | Yes | Total number of processes (set by torchrun) |
| Environment: MASTER_ADDR | env var | Yes | Master node address (set by torchrun) |
| Environment: MASTER_PORT | env var | Yes | Master node port (set by torchrun) |
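Since all five environment variables come from `torchrun`, a script started with plain `python` will fail at initialization. A defensive pre-check (not part of ColossalAI, just a sketch) produces a clearer error message:

```python
import os

# The variables torchrun exports to every worker process
REQUIRED = ("RANK", "LOCAL_RANK", "WORLD_SIZE", "MASTER_ADDR", "MASTER_PORT")

missing = [name for name in REQUIRED if name not in os.environ]
if missing:
    raise RuntimeError(
        f"Missing environment variables {missing}; "
        "start this script with torchrun, not plain python."
    )
```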
### Outputs
The function itself returns `None`; the items below are side effects visible to the calling process.
| Name | Type | Description |
|---|---|---|
| Process group | `torch.distributed` default group | Initialized global process group for collective operations |
| CUDA device | `torch.device` | Each process is bound to `cuda:LOCAL_RANK` |
| Random seed | `int` | The same global seed is set in every process |
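These effects can be observed directly after initialization; the print below is illustrative:

```python
import torch
import torch.distributed as dist

import colossalai

colossalai.launch_from_torch()

# The default process group is live and the process is pinned to its GPU
print(
    f"rank {dist.get_rank()}/{dist.get_world_size()} "
    f"running on cuda:{torch.cuda.current_device()}"
)
```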
## Usage Examples
### Basic Initialization
```python
import colossalai
from colossalai.cluster import DistCoordinator

# Initialize the distributed environment
colossalai.launch_from_torch()

# Create a coordinator for rank-aware operations
coordinator = DistCoordinator()
```
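Building on this, the sketch below adds ColossalAI's `Booster` API (`colossalai.booster.Booster` with `TorchDDPPlugin`) to run one training step after initialization. The toy model, optimizer settings, and random data are made up for illustration:

```python
import torch
import torch.nn as nn

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

colossalai.launch_from_torch()

# Toy model and optimizer; any nn.Module works here
model = nn.Linear(32, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Wrap them with a plugin-backed Booster (TorchDDPPlugin = vanilla DDP)
booster = Booster(plugin=TorchDDPPlugin())
model, optimizer, *_ = booster.boost(model, optimizer)

# One illustrative optimization step on random data
x = torch.randn(8, 32, device="cuda")
loss = model(x).sum()
booster.backward(loss, optimizer)
optimizer.step()
```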
### Launch Command
```bash
# Launch with torchrun on 4 GPUs
torchrun --standalone --nproc_per_node=4 train.py
```