
Implementation:Sgl project Sglang Launch Server

From Leeroopedia


Knowledge Sources
Domains LLM_Serving, API_Server, Deployment
Last Updated 2026-02-10 00:00 GMT

Overview

A concrete tool for launching the SGLang HTTP server, which exposes OpenAI-compatible endpoints backed by the SGLang runtime.

Description

The launch_server function starts a FastAPI HTTP server wrapping the SGLang engine. It creates all subprocesses (TokenizerManager, Scheduler, DetokenizerManager), registers API routes, runs server warmup, and starts the uvicorn ASGI server. The server can be launched via CLI (python -m sglang.launch_server) or programmatically.

Usage

Use launch_server for production deployment of LLMs as HTTP services. Use the CLI for standard deployments and the Python API for custom server configurations or embedding in larger applications.

Code Reference

Source Location

  • Repository: sglang
  • File: python/sglang/srt/entrypoints/http_server.py
  • Lines: L1819-1826
  • CLI Entry: python/sglang/launch_server.py:L28-34

Signature

def launch_server(
    server_args: ServerArgs,
    init_tokenizer_manager_func: Callable = init_tokenizer_manager,
    run_scheduler_process_func: Callable = run_scheduler_process,
    run_detokenizer_process_func: Callable = run_detokenizer_process,
    execute_warmup_func: Callable = _execute_server_warmup,
    launch_callback: Optional[Callable[[], None]] = None,
):
    """Launch SRT (SGLang Runtime) Server."""

Import

from sglang.srt.entrypoints.http_server import launch_server
from sglang.srt.server_args import ServerArgs

I/O Contract

Inputs

Name                          Type                          Required  Description
server_args                   ServerArgs                    Yes       Full server configuration
init_tokenizer_manager_func   Callable                      No        Override for TokenizerManager initialization (defaults to init_tokenizer_manager)
run_scheduler_process_func    Callable                      No        Override for the scheduler subprocess entry point
run_detokenizer_process_func  Callable                      No        Override for the detokenizer subprocess entry point
execute_warmup_func           Callable                      No        Override for the server warmup step
launch_callback               Optional[Callable[[], None]]  No        Callback invoked after the server is ready

Outputs

Name Type Description
(blocking) None Function blocks while server is running; does not return until shutdown
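Because launch_server blocks until shutdown, embedding it in a larger application usually means hosting it in a background thread or process and using launch_callback to learn when the server is ready. A minimal sketch of that pattern, using a stand-in for launch_server so the snippet runs without SGLang installed (the real call is shown in the comments):

```python
import threading

def _serve(ready_event: threading.Event, stop_event: threading.Event):
    # Real usage (requires sglang):
    # from sglang.srt.entrypoints.http_server import launch_server
    # from sglang.srt.server_args import ServerArgs
    # launch_server(
    #     ServerArgs(model_path="meta-llama/Llama-3.1-8B-Instruct"),
    #     launch_callback=ready_event.set,
    # )
    ready_event.set()   # stand-in: report readiness immediately
    stop_event.wait()   # stand-in: block like a running server

ready, stop = threading.Event(), threading.Event()
t = threading.Thread(target=_serve, args=(ready, stop), daemon=True)
t.start()
ready.wait(timeout=30)  # returns once launch_callback has fired
# ... the server would now be accepting requests ...
stop.set()              # stand-in shutdown
t.join()
```

The stand-in body mimics the I/O contract above: it signals readiness via the callback and then blocks until told to stop, which is how the real function behaves between startup and shutdown.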

Usage Examples

CLI Launch

# Launch with default settings
python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct

# Launch with tensor parallelism and custom port
python -m sglang.launch_server \
    --model-path meta-llama/Llama-3.1-70B-Instruct \
    --tp-size 4 \
    --port 8080 \
    --host 0.0.0.0

Programmatic Launch

from sglang.srt.server_args import ServerArgs
from sglang.srt.entrypoints.http_server import launch_server

server_args = ServerArgs(
    model_path="meta-llama/Llama-3.1-8B-Instruct",
    port=30000,
    host="0.0.0.0",
    tp_size=2,
)
launch_server(server_args)  # Blocks until shutdown
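Once the server is up, it speaks the OpenAI-compatible HTTP API. Below is a stdlib-only sketch of a chat-completion request against a locally launched server; the base URL and model name match the example above, build_chat_request is a hypothetical helper, and the actual network call is commented out because it needs a running server:

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, messages: list, max_tokens: int = 64):
    """Build a POST request for the OpenAI-compatible /v1/chat/completions route."""
    payload = {"model": model, "messages": messages, "max_tokens": max_tokens}
    return request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "http://localhost:30000",
    "meta-llama/Llama-3.1-8B-Instruct",
    [{"role": "user", "content": "Say hello."}],
)
# Requires a running server:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```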
