
Environment: AUTOMATIC1111 Stable Diffusion WebUI Python and PyTorch Runtime

From Leeroopedia


Knowledge Sources
Domains Infrastructure, Deep_Learning
Last Updated 2026-02-08 08:00 GMT

Overview

Python 3.10 runtime with PyTorch 2.1.2, CUDA 12.1 toolkit, and Gradio 3.41.2 web framework for running Stable Diffusion inference and training.

Description

This environment provides the base Python and PyTorch runtime required by all WebUI workflows. It targets Python 3.10 (the only version supported on Windows; Linux/macOS support 3.7-3.11) with PyTorch 2.1.2 built against CUDA 12.1. The web interface is built on Gradio 3.41.2, which is pinned to an exact version due to tight coupling with the UI code. The environment also requires the Stability AI model repositories (stablediffusion, generative-models), k-diffusion, BLIP, CLIP, and OpenCLIP as git-cloned dependencies.

Usage

Use this environment for all WebUI workflows: text-to-image generation, image-to-image generation, postprocessing/upscaling, checkpoint merging, textual inversion training, hypernetwork training, and LoRA network application. This is the mandatory base runtime environment.

System Requirements

  • OS: Linux (Ubuntu 20.04+), Windows 10+, or macOS. WSL2 is supported via a conda environment.
  • Python: 3.10 on Windows; 3.7-3.11 on Linux/macOS. 3.10.6 is the tested reference version.
  • Hardware: CPU minimum; GPU recommended. See the GPU_Compute_Backend environment for GPU details.
  • Disk: ~10 GB for the base install, plus additional space for model checkpoints (2-7 GB each).
  • RAM: 8 GB minimum; 16 GB+ recommended for model loading.
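The disk estimate above can be checked programmatically before installing; a minimal sketch using the standard library (the 10 GB threshold is the base-install figure from the list above, and the function name is illustrative):

```python
import shutil

def enough_disk(path=".", required_gb=10):
    """Return True if `path` has at least `required_gb` GB free,
    matching the ~10 GB base-install estimate."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= required_gb
```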

Dependencies

System Packages

  • git (for cloning dependency repositories)
  • python3.10 (or compatible version)
  • pip (Python package manager)
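As a quick sanity check before installation, the system prerequisites above can be probed from Python; a hedged sketch (the function name is illustrative, not part of the WebUI):

```python
import shutil
import sys

def missing_prerequisites():
    """Return a list of required tools not found on PATH, plus a marker
    if the running interpreter is outside the supported 3.7-3.11 range."""
    missing = [tool for tool in ("git", "pip") if shutil.which(tool) is None]
    if not (sys.version_info.major == 3 and 7 <= sys.version_info.minor <= 11):
        missing.append("python 3.7-3.11")
    return missing
```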

Python Packages (Pinned Versions)

  • torch == 2.1.2 (with CUDA 12.1 wheels by default)
  • torchvision == 0.16.2
  • gradio == 3.41.2
  • transformers == 4.30.2
  • accelerate == 0.21.0
  • safetensors == 0.4.2
  • pytorch_lightning == 1.9.4
  • numpy == 1.26.2
  • Pillow == 9.5.0
  • omegaconf == 2.2.3
  • einops == 0.4.1
  • kornia == 0.6.7
  • open-clip-torch == 2.20.0
  • spandrel == 0.3.4
  • facexlib == 0.3.0
  • lark == 1.1.2
  • fastapi == 0.94.0
  • protobuf == 3.20.0
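To confirm an environment actually matches these pins, installed versions can be compared against the expected ones with `importlib.metadata`; a sketch covering a subset of the pins (`PINS` and `version_mismatches` are illustrative names, not part of the WebUI):

```python
from importlib import metadata

# Subset of the pinned versions listed above
PINS = {
    "gradio": "3.41.2",
    "transformers": "4.30.2",
    "safetensors": "0.4.2",
}

def version_mismatches(pins):
    """Return {package: (installed, expected)} for every package whose
    installed version is missing or deviates from its pin."""
    mismatches = {}
    for package, expected in pins.items():
        try:
            installed = metadata.version(package)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != expected:
            mismatches[package] = (installed, expected)
    return mismatches
```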

Git Repository Dependencies

  • Stable Diffusion (Stability-AI/stablediffusion) @ cf1d67a
  • Stable Diffusion XL (Stability-AI/generative-models) @ 45c443b
  • k-diffusion (crowsonkb/k-diffusion) @ ab527a9
  • BLIP (salesforce/BLIP) @ 48211a1
  • CLIP (openai/CLIP) @ d50d76d
  • OpenCLIP (mlfoundations/open_clip) @ bb6e834
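The launcher clones each of these repositories and checks out the pinned commit. A minimal sketch of that pattern (the function name and destination path are illustrative):

```python
import subprocess
from pathlib import Path

def clone_at_commit(url, commit, dest):
    """Clone a git repository if absent, then pin it to a specific commit."""
    dest = Path(dest)
    if not dest.exists():
        subprocess.run(["git", "clone", url, str(dest)], check=True)
    subprocess.run(["git", "-C", str(dest), "checkout", commit], check=True)

# e.g. clone_at_commit("https://github.com/crowsonkb/k-diffusion",
#                      "ab527a9", "repositories/k-diffusion")
```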

Credentials

No credentials are required for basic operation. Optional environment variables:

  • HF_ENDPOINT: HuggingFace endpoint URL (default: https://huggingface.co)
  • GRADIO_ANALYTICS_ENABLED: Analytics toggle (default: 'False')
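Reading these variables follows the usual `os.environ` pattern with the documented defaults; a minimal sketch:

```python
import os

# Optional settings with the defaults documented above
hf_endpoint = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
analytics_enabled = os.environ.get("GRADIO_ANALYTICS_ENABLED", "False")
```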

Quick Install

# Default installation (CUDA 12.1)
pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
pip install -r requirements_versions.txt
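After running the commands above, the install can be verified from Python; a sketch (guarded so it reports a missing torch instead of crashing, and the function name is illustrative):

```python
def torch_status():
    """Report the installed torch version and CUDA availability."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    return f"torch {torch.__version__} (CUDA available: {torch.cuda.is_available()})"

print(torch_status())
```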

Code Evidence

Python version validation from modules/launch_utils.py:34-62:

import platform
import sys

import modules.errors  # WebUI error-reporting helper

def check_python_version():
    is_windows = platform.system() == "Windows"
    major = sys.version_info.major
    minor = sys.version_info.minor
    micro = sys.version_info.micro

    if is_windows:
        supported_minors = [10]
    else:
        supported_minors = [7, 8, 9, 10, 11]

    if not (major == 3 and minor in supported_minors):
        modules.errors.print_error_explanation(f"""
INCOMPATIBLE PYTHON VERSION
This program is tested with 3.10.6 Python, but you have {major}.{minor}.{micro}.
""")

PyTorch and library version checks from modules/errors.py:103-149:

import torch
from packaging import version

expected_torch_version = "2.1.2"
expected_xformers_version = "0.0.23.post1"
expected_gradio_version = "3.41.2"

if version.parse(torch.__version__) < version.parse(expected_torch_version):
    print_error_explanation(f"""
You are running torch {torch.__version__}.
The program is tested to work with torch {expected_torch_version}.
""")

CUDA test at startup from modules/launch_utils.py:386-390:

if not args.skip_torch_cuda_test and not check_run_python("import torch; assert torch.cuda.is_available()"):
    raise RuntimeError(
        'Torch is not able to use GPU; '
        'add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'
    )

Common Errors

  • "INCOMPATIBLE PYTHON VERSION": the Python version is not in the supported list. Install Python 3.10.6 from python.org.
  • "RuntimeError: Couldn't install torch": wrong Python version or a pip issue. Use Python 3.10 and ensure pip is up to date.
  • "Torch is not able to use GPU": CUDA is unavailable or there is a driver issue. Install NVIDIA drivers, or pass --skip-torch-cuda-test.
  • "You are running gradio X.Y.Z": Gradio version mismatch. Launch via launch.py (not webui.py) and check installed extensions.
  • "You are running torch X.Y.Z": the torch version is too old. Use the --reinstall-torch flag.

Compatibility Notes

  • Windows: Only Python 3.10 is supported. Use the binary release for easiest setup.
  • Linux/macOS: Python 3.7 through 3.11 are supported, but 3.10.6 is the reference version.
  • WSL2: A conda environment file (environment-wsl2.yaml) is provided with Python 3.10 and cudatoolkit 11.8.
  • Nightly PyTorch: Dev/git builds have their version number truncated automatically to prevent downstream parsing errors.
  • Intel XPU (Windows): Requires special torch wheels from Nuullll/intel-extension-for-pytorch; only works with Python 3.10.
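The nightly-build truncation noted above can be illustrated with a small sketch (`truncate_dev_version` is an illustrative name, not the WebUI's actual helper):

```python
import re

def truncate_dev_version(torch_version):
    """Reduce a dev/nightly version like '2.3.0.dev20240101+cu121'
    to its plain release prefix so downstream version parsing succeeds."""
    match = re.match(r"\d+\.\d+\.\d+", torch_version)
    return match.group(0) if match else torch_version
```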
