
Environment:EvolvingLMMs Lab Lmms eval Server Mode Environment

From Leeroopedia
Knowledge Sources
Domains Infrastructure, Web_Services
Last Updated 2026-02-14 00:00 GMT

Overview

FastAPI and uvicorn server environment for running lmms-eval as a persistent HTTP evaluation service with job scheduling.

Description

This environment provides the additional dependencies needed to run lmms-eval in server mode, where it operates as a persistent HTTP service accepting evaluation jobs via REST API. The server uses FastAPI for the web framework and uvicorn as the ASGI server. It includes a job scheduler that manages evaluation lifecycle (queued, running, completed, failed) and supports both synchronous and asynchronous client connections.
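The job lifecycle described above (queued, running, completed, failed) can be sketched as a minimal state machine. The class and method names below are illustrative only, not lmms-eval's actual scheduler API:

```python
from enum import Enum


class JobState(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"


class JobScheduler:
    """Toy scheduler illustrating the queued -> running -> completed/failed flow."""

    def __init__(self):
        self.jobs = {}  # job_id -> JobState

    def submit(self, job_id):
        # New jobs always enter in the queued state.
        self.jobs[job_id] = JobState.QUEUED
        return JobState.QUEUED

    def start(self, job_id):
        self.jobs[job_id] = JobState.RUNNING

    def finish(self, job_id, ok=True):
        self.jobs[job_id] = JobState.COMPLETED if ok else JobState.FAILED
```

The real scheduler additionally persists results and handles concurrency; this sketch only captures the state transitions a client observes when polling.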

Usage

Use this environment when deploying lmms-eval as a persistent evaluation service. It is a required prerequisite for the Server Mode Evaluation workflow, which includes the HTTP server, job scheduler, and client SDK components.

System Requirements

Category | Requirement | Notes
OS | Linux (Ubuntu 20.04+ recommended) | Server mode is designed for Linux deployment
Network | Open port for HTTP server | Default port configured in ServerArgs
Memory | 4GB+ RAM beyond GPU VRAM | For web server, job queue, and result storage

Dependencies

Python Packages

  • fastapi — Web framework for REST API
  • uvicorn — ASGI server
  • pydantic — Request/response model validation
  • httpx >= 0.23.3 — Client HTTP library

Install via optional dependency group:

pip install lmms_eval[server]

Credentials

No additional credentials beyond those in Environment:EvolvingLMMs_Lab_Lmms_eval_API_Credentials_Environment.

Quick Install

# Install with server dependencies
pip install lmms_eval[server]

# Or install the broader core dependency group
pip install lmms_eval[core]

# Launch evaluation server
python -m lmms_eval.launch_server
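Once the server is running, a job can be submitted with a plain HTTP POST. This is a hedged sketch: it assumes an /evaluate endpoint taking a JSON body with model and tasks fields, and a default base URL of http://localhost:8000; check lmms_eval/entrypoints/protocol.py for the actual request schema and ServerArgs for the real port.

```python
def build_evaluate_request(base_url, model, tasks):
    """Assemble the URL and JSON payload for a hypothetical /evaluate call."""
    url = f"{base_url.rstrip('/')}/evaluate"
    payload = {"model": model, "tasks": tasks}
    return url, payload


if __name__ == "__main__":
    # httpx is installed by `pip install lmms_eval[server]`.
    import httpx

    url, payload = build_evaluate_request(
        "http://localhost:8000", "llava", ["mme", "mmbench"]
    )
    resp = httpx.post(url, json=payload, timeout=30.0)
    # The response is expected to include a job id for later status polling.
    print(resp.json())
```

Keeping payload construction in a pure helper makes it easy to test without a running server.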

Code Evidence

Optional server dependency group from pyproject.toml:86-89:

server = [
    "fastapi",
    "uvicorn",
]

Server launch entry point from lmms_eval/launch_server.py:

# Standalone entry point for the evaluation server

FastAPI server implementation from lmms_eval/entrypoints/http_server.py:

# Implements FastAPI-based HTTP server with /evaluate endpoint

Pydantic protocol models from lmms_eval/entrypoints/protocol.py:

# Defines all Pydantic models for request/response validation
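As a hedged illustration of what such models look like (the field names here are assumed for the example, not copied from protocol.py):

```python
from typing import Optional

from pydantic import BaseModel, ValidationError


class EvaluationRequest(BaseModel):
    # Illustrative fields; the real models live in lmms_eval/entrypoints/protocol.py.
    model: str
    tasks: list[str]
    limit: Optional[int] = None  # optional cap on examples per task


class EvaluationStatus(BaseModel):
    job_id: str
    status: str  # "queued" | "running" | "completed" | "failed"
```

Pydantic validates at construction time, so a malformed request body (e.g. tasks given as a string instead of a list) raises ValidationError before any evaluation code runs.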

Common Errors

Error Message | Cause | Solution
ImportError: fastapi | Server dependencies not installed | pip install lmms_eval[server]
Address already in use | Port conflict | Change server port or kill existing process
Job stuck in "queued" state | Server overloaded | Check GPU availability and job scheduler status
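For the "Address already in use" case, you can check whether a port is bindable before launching the server. This is a plain stdlib sketch, not part of lmms-eval:

```python
import socket


def port_is_free(port, host="127.0.0.1"):
    """Return True if `port` can currently be bound on `host`."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR lets a port in TIME_WAIT count as free (common after restarts).
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

If the port is taken, either pass a different port via ServerArgs or stop the process holding it.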

Compatibility Notes

  • TUI mode: The TUI (Terminal UI) has its own FastAPI requirement (fastapi >= 0.100.0). Install with pip install lmms_eval[tui].
  • Client SDK: Both synchronous (EvalClient) and asynchronous (AsyncEvalClient) clients are available for interacting with the server.
  • Concurrency: The job scheduler manages one evaluation at a time by default.
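The asynchronous client pattern amounts to polling job status until it leaves the queued/running states. This sketch fakes the server with an in-memory coroutine; AsyncEvalClient's real method names and endpoints may differ:

```python
import asyncio


async def fake_get_status(job_id, _progress={"n": 0}):
    """Stand-in for an HTTP GET of job status; advances one state per call."""
    states = ["queued", "running", "completed"]
    state = states[min(_progress["n"], len(states) - 1)]
    _progress["n"] += 1
    return state


async def wait_for_job(job_id, poll_interval=0.01):
    """Poll until the job reaches a terminal state, then return it."""
    while True:
        status = await fake_get_status(job_id)
        if status in ("completed", "failed"):
            return status
        await asyncio.sleep(poll_interval)
```

Because the scheduler runs one evaluation at a time by default, a submitted job may sit in "queued" for a while; the polling loop above is why the terminal states matter for client code.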
