
Implementation:Sgl project Sglang Fetch Metrics

From Leeroopedia


Knowledge Sources
Domains Performance Monitoring, CI/CD
Last Updated 2026-02-10 00:00 GMT

Overview

CLI script to fetch and process SGLang nightly benchmark metrics from GitHub Actions artifacts.

Description

fetch_metrics.py is a command-line tool that authenticates via the GITHUB_TOKEN environment variable or the gh auth token CLI, then queries the GitHub Actions API for completed nightly-test-nvidia.yml workflow runs. It downloads consolidated-metrics-* zip artifacts, extracts JSON benchmark data, and writes aggregated results to a JSON file. The script supports filtering by date range (--days), specific run ID (--run-id), and trigger event type (--event with choices schedule, workflow_dispatch, or push).
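The token-resolution order described above (environment variable first, then the `gh` CLI) can be sketched as follows. This is a minimal illustration of `get_github_token()`, not the script's actual implementation:

```python
import os
import shutil
import subprocess
from typing import Optional


def get_github_token() -> Optional[str]:
    """Return a GitHub token from GITHUB_TOKEN, falling back to `gh auth token`."""
    token = os.environ.get("GITHUB_TOKEN")
    if token:
        return token
    # Fall back to the GitHub CLI if it is installed and authenticated.
    if shutil.which("gh"):
        try:
            result = subprocess.run(
                ["gh", "auth", "token"],
                capture_output=True, text=True, check=True,
            )
            return result.stdout.strip() or None
        except subprocess.CalledProcessError:
            return None
    return None
```

Either source yields a token usable in an `Authorization` header; if both are missing, the script can still query the API unauthenticated, subject to GitHub's lower rate limits.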

Key functions include get_github_token() for authentication, fetch_workflow_runs() for querying the GitHub Actions API, download_artifact() for retrieving zip files, extract_metrics_from_zip() for parsing JSON from archives, and main() which ties everything together with argparse-based CLI argument parsing.
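Of these, `extract_metrics_from_zip()` is the most self-contained step: it opens the downloaded artifact bytes as an in-memory zip and parses the JSON inside. A minimal sketch (the exact file-selection logic is an assumption; the real function may look for a specific filename):

```python
import io
import json
import zipfile
from typing import Optional


def extract_metrics_from_zip(zip_content: bytes) -> Optional[dict]:
    """Parse the first JSON file found inside an artifact zip; None on failure."""
    try:
        with zipfile.ZipFile(io.BytesIO(zip_content)) as zf:
            for name in zf.namelist():
                if name.endswith(".json"):
                    with zf.open(name) as fh:
                        return json.load(fh)
    except (zipfile.BadZipFile, json.JSONDecodeError):
        return None
    return None
```

Working on `bytes` rather than a file path lets the artifact go straight from `download_artifact()` to parsing without touching disk.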

Usage

Use this script to collect benchmark data for the SGLang performance dashboard. It is typically run periodically or on-demand to fetch nightly test results for visualization and regression tracking.

Code Reference

Source Location

Signature

def get_github_token() -> Optional[str]: ...
def get_headers(token: Optional[str]) -> dict: ...
def fetch_workflow_runs(token: Optional[str], days: int = 30, event: Optional[str] = None) -> list: ...
def fetch_run_artifacts(token: Optional[str], run_id: int) -> list: ...
def download_artifact(token: Optional[str], artifact_id: int) -> Optional[bytes]: ...
def extract_metrics_from_zip(zip_content: bytes) -> Optional[dict]: ...
def fetch_metrics_for_run(token: Optional[str], run: dict) -> Optional[dict]: ...
def fetch_single_run(token: Optional[str], run_id: int) -> Optional[dict]: ...
def main(): ...
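The `fetch_workflow_runs()` signature above can be fleshed out as a sketch of the GitHub Actions query. The repository path (`sgl-project/sglang`), the `created` date filter, and the injectable `get` parameter (added here so the sketch is testable without network access) are assumptions, not the script's confirmed internals:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed workflow-runs endpoint for the nightly workflow.
API = ("https://api.github.com/repos/sgl-project/sglang"
       "/actions/workflows/nightly-test-nvidia.yml/runs")


def fetch_workflow_runs(token: Optional[str], days: int = 30,
                        event: Optional[str] = None, get=None) -> list:
    """List completed runs newer than the cutoff date."""
    if get is None:
        import requests  # imported lazily so a stub `get` needs no dependency
        get = requests.get
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    params = {
        "status": "completed",
        "per_page": 100,
        "created": f">={cutoff.strftime('%Y-%m-%d')}",
    }
    if event:
        params["event"] = event  # e.g. schedule, workflow_dispatch, push
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    resp = get(API, params=params, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json().get("workflow_runs", [])
```

Passing `event="schedule"` reproduces the `--scheduled-only` behavior of keeping only nightly runs.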

Import

import argparse
import io
import json
import os
import sys
import zipfile
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Optional

import requests

I/O Contract

Inputs

Name Type Required Description
--output / -o str No (default: "metrics_data.json") Output JSON file path
--days int No (default: 30) Number of days of history to fetch
--run-id int No Fetch a specific workflow run by ID
--event str No Filter by trigger event type (schedule, workflow_dispatch, push)
--scheduled-only flag No Only fetch scheduled (nightly) runs
GITHUB_TOKEN env var No GitHub personal access token for API authentication

Outputs

Name Type Description
metrics_data.json JSON file Array of metric records, each containing run_id, run_date, commit_sha, branch, and benchmark results
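Downstream consumers (such as the performance dashboard) can read the output file with plain `json`. A hedged sketch, assuming the record fields listed above; the helper names here are hypothetical, not part of the script:

```python
import json
from pathlib import Path
from typing import Optional


def load_metrics(path: str = "metrics_data.json") -> list:
    """Load the aggregated records written by fetch_metrics.py."""
    return json.loads(Path(path).read_text())


def latest_for_branch(records: list, branch: str = "main") -> Optional[dict]:
    """Most recent record for a branch; ISO-8601 run_date strings sort lexically."""
    matching = [r for r in records if r.get("branch") == branch]
    return max(matching, key=lambda r: r.get("run_date", "")) if matching else None
```

Because `run_date` is an ISO-8601 timestamp, string comparison orders records chronologically without any date parsing.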

Usage Examples

# Fetch last 30 days of metrics
python fetch_metrics.py --output metrics_data.json

# Fetch last 7 days of metrics
python fetch_metrics.py --output metrics_data.json --days 7

# Fetch a specific run
python fetch_metrics.py --output metrics_data.json --run-id 21338741812

# Fetch only nightly scheduled runs
python fetch_metrics.py --output metrics_data.json --scheduled-only
