Implementation: pip install vllm (vLLM project)
| Knowledge Sources | |
|---|---|
| Domains | Machine Learning, Infrastructure, Package Management |
| Last Updated | 2026-02-08 13:00 GMT |
Overview
The standard installation path for the vLLM inference engine, distributed on PyPI and installed with pip.
Description
The pip install vllm command downloads and installs the vLLM package along with all of its runtime dependencies. On supported platforms (Linux x86_64 with CUDA 12.x), a pre-built wheel is fetched from PyPI, which includes compiled CUDA kernels for PagedAttention and other custom operators. The package registers the vllm CLI entry point and makes the vllm Python package importable.
The project metadata in pyproject.toml specifies:
- Python requirement: >=3.10, <3.14
- Build-time dependencies: cmake>=3.26.1, ninja, torch==2.9.1, setuptools>=77.0.3, jinja2
- Build backend: setuptools with setuptools-scm for version management
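Based on the metadata listed above, the build configuration in vLLM's pyproject.toml would look roughly like the following sketch. The exact formatting, the setuptools-scm pin, and the `[project]` keys shown are assumptions for illustration, not copied from the file:

```toml
[build-system]
# Build-time dependencies as listed above; ordering and pin syntax are illustrative.
requires = [
    "cmake>=3.26.1",
    "ninja",
    "torch==2.9.1",
    "setuptools>=77.0.3",
    "setuptools-scm",
    "jinja2",
]
build-backend = "setuptools.build_meta"

[project]
name = "vllm"
requires-python = ">=3.10,<3.14"
# Version is derived from git tags by setuptools-scm rather than hardcoded.
dynamic = ["version"]
```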
Usage
Run this command once to set up a new environment for vLLM-based inference or serving. Re-run with --upgrade to update to a newer release.
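Before running the install, it can help to confirm the interpreter satisfies the `>=3.10, <3.14` requirement noted above. A minimal sketch (the helper name `python_supported` is hypothetical, not part of vLLM):

```python
import sys

def python_supported(version=None):
    """Check a (major, minor) version against vLLM's stated requirement: >=3.10, <3.14."""
    major, minor = (version or sys.version_info)[:2]
    # Tuple comparison enforces both the lower and the upper bound.
    return (3, 10) <= (major, minor) < (3, 14)

print(python_supported((3, 11)))  # True: within the supported range
print(python_supported((3, 14)))  # False: at or above the upper bound
```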
Code Reference
Source Location
- Repository: vllm
- File: pyproject.toml
- Lines: 1-44
Signature
pip install vllm
Import
# After installation, verify the package is importable:
import vllm
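To confirm the install programmatically without triggering vLLM's heavy import-time initialization, the packaging metadata can be queried first. A sketch (the helper name `vllm_install_info` is hypothetical):

```python
import importlib.util
from importlib import metadata

def vllm_install_info():
    """Return the installed vLLM version string, or None if vllm is not installed."""
    # find_spec checks importability without actually importing the package.
    if importlib.util.find_spec("vllm") is None:
        return None
    try:
        return metadata.version("vllm")
    except metadata.PackageNotFoundError:
        return None

print(vllm_install_info())
```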
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| package name | str | Yes | The package identifier on PyPI, i.e. vllm |
| --upgrade | flag | No | Upgrade to the latest available version |
| --extra-index-url | str | No | Additional index URL, useful for nightly or CUDA-specific wheels (e.g. https://wheels.vllm.ai/nightly/) |
Outputs
| Name | Type | Description |
|---|---|---|
| installed package | directory | The vllm package installed into site-packages |
| CLI entry point | executable | The vllm command-line tool for serving and utilities |
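The CLI entry point listed above can be located with a standard PATH lookup; a small sketch:

```python
import shutil

# `pip install vllm` registers a `vllm` console script; shutil.which
# returns its absolute path if it is on PATH, or None otherwise.
cli_path = shutil.which("vllm")
print(cli_path or "vllm CLI not found on PATH")
```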
Usage Examples
Basic Installation
# Install the latest stable release
pip install vllm
Installation with Upgrade
# Upgrade to the latest version
pip install --upgrade vllm
Installation in a Virtual Environment
# Create and activate a virtual environment, then install
python -m venv vllm-env
source vllm-env/bin/activate
pip install vllm
Verify Installation
import vllm
from vllm import LLM, SamplingParams

print(f"vLLM {vllm.__version__} installed successfully")