
Implementation:Huggingface Diffusers DiffusionPipeline From Pretrained

From Leeroopedia
Knowledge Sources
Domains Diffusion_Models, Model_Serialization, Pipeline_Architecture
Last Updated 2026-02-13 21:00 GMT

Overview

Class method in the Diffusers library for instantiating a fully configured diffusion pipeline from pretrained weights.

Description

DiffusionPipeline.from_pretrained is the primary class method for loading any diffusion pipeline. It accepts a Hugging Face Hub repository ID or a local directory path and returns a fully initialized pipeline with all components (UNet/Transformer, VAE, text encoders, tokenizers, scheduler) loaded and connected. The method reads the model_index.json configuration to determine the pipeline class and its components, then iterates through each component subfolder to load weights and configs. It supports half-precision loading via torch_dtype, variant selection (e.g., "fp16"), SafeTensors format preference, device mapping for multi-GPU setups, and quantization configuration. The pipeline is set to evaluation mode (model.eval()) by default.
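For reference, the model_index.json that drives this process maps each component subfolder to the library and class used to load it. An abbreviated sketch of the file for the SDXL base repository (some fields trimmed; exact contents vary by checkpoint and diffusers version):

```json
{
  "_class_name": "StableDiffusionXLPipeline",
  "_diffusers_version": "0.19.0",
  "scheduler": ["diffusers", "EulerDiscreteScheduler"],
  "text_encoder": ["transformers", "CLIPTextModel"],
  "text_encoder_2": ["transformers", "CLIPTextModelWithProjection"],
  "tokenizer": ["transformers", "CLIPTokenizer"],
  "tokenizer_2": ["transformers", "CLIPTokenizer"],
  "unet": ["diffusers", "UNet2DConditionModel"],
  "vae": ["diffusers", "AutoencoderKL"]
}
```

Each key names a subfolder in the repository; the two-element value names the Python library and class that from_pretrained uses to load that component.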

Usage

Use DiffusionPipeline.from_pretrained as the first step in any text-to-image, image-to-image, or inpainting workflow. Import DiffusionPipeline from the diffusers package and call this class method with the model identifier. You can also call from_pretrained on specific pipeline subclasses like StableDiffusionXLPipeline to enforce a particular pipeline type.
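Calling from_pretrained on a concrete subclass, as described above, pins the type of the returned pipeline rather than deferring to model_index.json. A minimal sketch using the SDXL base checkpoint (downloads the full checkpoint from the Hub):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Loading through the subclass guarantees the returned object is a
# StableDiffusionXLPipeline; an incompatible checkpoint fails to load.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = pipe.to("cuda")
```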

Code Reference

Source Location

  • Repository: diffusers
  • File: src/diffusers/pipelines/pipeline_utils.py
  • Lines: 609-1124

Signature

@classmethod
def from_pretrained(
    cls,
    pretrained_model_name_or_path: str | os.PathLike,
    **kwargs
) -> Self:

Import

from diffusers import DiffusionPipeline

I/O Contract

Inputs

Name Type Required Description
pretrained_model_name_or_path str or os.PathLike Yes Hub repo ID (e.g., "stabilityai/stable-diffusion-xl-base-1.0") or local directory path containing pipeline weights saved with save_pretrained.
torch_dtype torch.dtype or dict[str, torch.dtype] No Override the default dtype. Pass a single dtype (e.g., torch.float16) for all components, or a dict mapping component names to dtypes with an optional "default" key.
variant str No Load weights from a variant filename such as "fp16" or "ema".
use_safetensors bool No If True, forcibly load SafeTensors weights. If None (default), SafeTensors are used when available.
device_map str No Device placement strategy. Currently only "balanced" is supported for distributing components across multiple GPUs.
quantization_config PipelineQuantizationConfig No Configuration for loading quantized models (e.g., 4-bit or 8-bit quantization via bitsandbytes).
custom_pipeline str No Hub repo ID, community pipeline name, or local path to a custom pipeline definition.
cache_dir str or os.PathLike No Directory for caching downloaded model files.
force_download bool No Force re-download even if cached files exist. Defaults to False.
local_files_only bool No Only load from local cache, do not attempt Hub downloads. Defaults to False.
token str or bool No Authentication token for accessing gated or private models on the Hub.
revision str No Specific model version (branch, tag, or commit hash). Defaults to "main".
low_cpu_mem_usage bool No Load pretrained weights directly instead of first initializing the model with random weights, reducing peak CPU memory. Defaults to True for PyTorch >= 1.9.
disable_mmap bool No Disable memory-mapped file loading for SafeTensors. Useful for network mounts. Defaults to False.
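The dict form of torch_dtype described above assigns dtypes per component, with the "default" key covering everything not listed explicitly. A sketch against the SDXL base checkpoint (the dict form requires a recent diffusers version; component keys must match the names in model_index.json):

```python
import torch
from diffusers import DiffusionPipeline

# Load the UNet in bfloat16 and every other component in float16.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype={"unet": torch.bfloat16, "default": torch.float16},
)
```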

Outputs

Name Type Description
pipeline DiffusionPipeline (or subclass) A fully initialized pipeline instance with all components loaded, set to evaluation mode, and ready for inference.
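The quantization_config input can be sketched as follows. This assumes a recent diffusers version exposing PipelineQuantizationConfig and a working bitsandbytes install; the component names and kwargs shown are illustrative and must match the target checkpoint:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# Quantize the heaviest components to 4-bit NF4 via bitsandbytes
# at load time; remaining components load at torch_dtype.
quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["unet", "text_encoder_2"],
)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)
```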

Usage Examples

Basic Usage

from diffusers import DiffusionPipeline
import torch

# Load SDXL pipeline in float16 for faster inference
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
pipe = pipe.to("cuda")

# Generate an image
image = pipe("A majestic eagle soaring over mountains at sunset").images[0]
image.save("eagle.png")

Loading With Custom Components

from diffusers import DiffusionPipeline, EulerDiscreteScheduler
import torch

# Load pipeline with a specific scheduler
scheduler = EulerDiscreteScheduler.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="scheduler",
)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    scheduler=scheduler,
    torch_dtype=torch.float16,
)

Loading From Local Directory

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("./my_local_pipeline/")
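The local directory layout expected here is exactly what save_pretrained writes, so a full round trip looks like this (first load downloads from the Hub; the reload is fully offline):

```python
from diffusers import DiffusionPipeline

# Download once from the Hub, then persist all components locally.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"
)
pipe.save_pretrained("./my_local_pipeline/")

# Reload later without touching the Hub.
pipe = DiffusionPipeline.from_pretrained(
    "./my_local_pipeline/", local_files_only=True
)
```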

Related Pages

Implements Principle

Requires Environment
