Implementation: EvolvingLMMs-Lab lmms-eval Package Installation
| Knowledge Sources | |
|---|---|
| Domains | Evaluation, Infrastructure |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
Concrete tool for installing the lmms-eval evaluation framework and its dependencies, provided by the lmms-eval project's packaging configuration.
Description
The pyproject.toml file defines the complete installation specification for the lmms-eval package (version 0.5.0). It declares the build system (setuptools), the Python version requirement (>=3.9), all core dependencies, optional dependency groups, console script entry points, and package discovery rules. The entry point lmms-eval maps to lmms_eval.__main__:cli_evaluate, which is the main CLI for launching evaluations.
The file also configures package exclusion patterns to ensure that assets, benchmarks, docs, test data, checkpoints, and wandb logs are not bundled into the distributed wheel.
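The exclusion rules described above are typically expressed through setuptools package discovery. An illustrative sketch only — the exact include/exclude patterns in the repository's pyproject.toml may differ:

```toml
# Hypothetical sketch of a setuptools discovery block; actual patterns
# in the lmms-eval pyproject.toml may differ.
[tool.setuptools.packages.find]
include = ["lmms_eval*"]
exclude = ["assets*", "benchmark*", "docs*", "tests*", "checkpoints*", "wandb*"]
```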
Usage
Use this configuration to install lmms-eval via pip or uv:
- Standard install: pip install lmms-eval or pip install -e . (editable)
- Lockfile install: uv sync (reproducible environment from uv.lock)
- With extras: pip install "lmms-eval[video,audio,metrics]" or pip install "lmms-eval[all]" (quotes prevent shell glob expansion of the brackets)
- Adding a dependency: uv add package_name
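The extras syntax used above ("package[extra1,extra2]") is simple to parse. A minimal stdlib sketch, where parse_requirement is a hypothetical helper (pip's real parser implements the full PEP 508 grammar, including version specifiers and markers):

```python
import re

def parse_requirement(spec: str):
    """Split a 'name[extra1,extra2]' requirement into (name, [extras]).

    Sketch only: handles bare names with an optional extras bracket,
    not the full PEP 508 grammar.
    """
    m = re.fullmatch(r"([A-Za-z0-9._-]+)(?:\[([^\]]+)\])?", spec)
    if m is None:
        raise ValueError(f"unparsable requirement: {spec!r}")
    name = m.group(1)
    extras = [e.strip() for e in m.group(2).split(",")] if m.group(2) else []
    return name, extras

print(parse_requirement("lmms-eval[video,audio,metrics]"))
# → ('lmms-eval', ['video', 'audio', 'metrics'])
print(parse_requirement("lmms-eval"))
# → ('lmms-eval', [])
```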
Code Reference
Source Location
- Repository: lmms-eval
- File: pyproject.toml
- Lines: 1-203
Signature
[project]
name = "lmms_eval"
version = "0.5.0"
requires-python = ">=3.9"
[project.scripts]
lmms-eval = "lmms_eval.__main__:cli_evaluate"
lmms-eval-ui = "lmms_eval.tui.cli:main"
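Each [project.scripts] entry maps a command name to a "module:attr" spec; the installer generates a small wrapper that imports the module and calls the attribute. A minimal sketch of that resolution step (demonstrated with a stdlib spec, since lmms_eval may not be installed in every environment):

```python
import importlib

def resolve_entry_point(spec: str):
    """Resolve a 'module:attr' console-script spec to a callable.

    Sketch of what an installer-generated wrapper does for an entry like
    lmms-eval = "lmms_eval.__main__:cli_evaluate".
    """
    module_name, _, attr = spec.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# Demo with a stdlib spec instead of lmms_eval:
func = resolve_entry_point("json:dumps")
print(func({"ok": True}))  # → {"ok": true}
```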
Import
pip install lmms-eval
# or
uv sync
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| pyproject.toml | TOML file | Yes | Package metadata, dependency declarations, and entry point definitions |
| uv.lock | Lockfile | No | Pinned dependency versions for reproducible installs (used with uv sync) |
| extras | str | No | Optional dependency group names, e.g. [video], [audio], [metrics], [server], [all] |
Outputs
| Name | Type | Description |
|---|---|---|
| lmms-eval CLI | Executable | Console script entry point invoking lmms_eval.__main__:cli_evaluate |
| lmms-eval-ui CLI | Executable | Console script entry point invoking lmms_eval.tui.cli:main |
| lmms_eval package | Python package | Installed package with all submodules (models, tasks, api, loggers, utils) |
Usage Examples
Basic Example
# Install the package with core dependencies
pip install lmms-eval
# Install with all optional dependencies
pip install "lmms-eval[all]"
# Install in development mode with lockfile
uv sync
# Run an evaluation
lmms-eval --model qwen2_5_vl \
--model_args pretrained=Qwen/Qwen2.5-VL-3B-Instruct,max_pixels=12845056,attn_implementation=sdpa \
--tasks mmmu,mme,mmlu_flan_n_shot_generative \
--batch_size 128 --limit 8 --device cuda:0
Key Dependencies
# Core dependencies with minimum versions:
# accelerate>=0.29.1 - distributed training/inference
# datasets>=2.19.0 - HuggingFace dataset loading
# torch>=2.1.0 - PyTorch (SDPA attention support)
# transformers>=4.39.2 - HuggingFace model loading
# loguru - structured logging
# wandb>=0.16.0 - experiment tracking
# pydantic - data validation
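The minimum-version pins above can be checked mechanically. A naive stdlib sketch that only handles plain dotted-integer versions (real resolvers use PEP 440 semantics, e.g. via the packaging library, to handle pre-releases and epochs):

```python
def meets_minimum(installed: str, minimum: str) -> bool:
    """Check installed >= minimum for plain X.Y.Z version strings.

    Naive sketch: compares dotted integers as tuples, ignoring
    PEP 440 features such as pre-releases, epochs, and local versions.
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

# e.g. against the torch>=2.1.0 and transformers>=4.39.2 pins above:
print(meets_minimum("2.2.1", "2.1.0"))    # → True
print(meets_minimum("4.38.0", "4.39.2"))  # → False
```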