Implementation: Triton Inference Server L0 Metrics Test
| Knowledge Sources | |
|---|---|
| Domains | Testing, Metrics |
| Last Updated | 2026-02-13 17:00 GMT |
Overview
QA test script for validating Prometheus metrics collection and reporting in the Triton Inference Server.
Description
This test validates that the Triton Inference Server correctly exposes Prometheus-compatible metrics through its metrics endpoint. It verifies that inference count, inference duration, queue time, GPU utilization, GPU memory usage, and other key metrics are accurately reported and updated after inference requests. The test also checks metric formatting, label correctness, and that per-model and per-GPU metrics are properly disaggregated.
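The verification pattern this describes (scrape the metrics endpoint, run inference, scrape again, compare values) relies on pulling individual values out of Prometheus text-format output. A minimal sketch of that extraction step is below; `get_metric_value` is a hypothetical helper, not a function from the actual test or `util.sh`, and the sample lines are illustrative.

```shell
#!/bin/bash
# Sketch only: extract a single metric's value from Prometheus text-format
# output. get_metric_value is a hypothetical helper, not part of util.sh.
get_metric_value() {
    local metrics_text="$1" metric_name="$2"
    # Match the metric name followed by a label block or a space,
    # then print the reported value (second whitespace-delimited field).
    echo "$metrics_text" | grep -E "^${metric_name}(\{| )" \
        | head -n 1 | awk '{ print $2 }'
}

# Prometheus output as Triton might report it (values illustrative).
SAMPLE='nv_inference_count{model="simple",version="1"} 4
nv_inference_request_duration_us{model="simple",version="1"} 1520'

get_metric_value "$SAMPLE" "nv_inference_count"   # → 4
```

Comparing the value returned before and after an inference request is enough to confirm that a counter such as the inference count is actually being updated.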
Usage
Run as part of the Triton QA test suite. Requires a GPU-enabled Docker environment with pre-generated test models.
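The GPU-enabled Docker environment can be sketched as follows. The image tag and host mount path are placeholders you must substitute for your release and checkout; ports 8000/8001/8002 are Triton's default HTTP, gRPC, and metrics ports.

```shell
#!/bin/bash
# Sketch of the environment the test expects. The image tag and the host
# path are placeholders -- substitute your Triton release and checkout.
TRITON_IMAGE="nvcr.io/nvidia/tritonserver:<xx.yy>-py3"

# Compose the launch command: GPUs exposed, metrics port (8002) published.
DOCKER_CMD="docker run --gpus all --rm -it \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v \$PWD:/workspace \
  $TRITON_IMAGE"

echo "$DOCKER_CMD"
```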
Code Reference
Source Location
- Repository: Triton Inference Server
- File: qa/L0_metrics/test.sh
- Lines: 1-489
Signature
#!/bin/bash
source ../common/util.sh
# Test orchestration for Prometheus metrics collection and reporting
Import
source ../common/util.sh
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| DATADIR | env var | No | Test data directory |
| MODEL_REPO | directory | Yes | Test model repository |
Outputs
| Name | Type | Description |
|---|---|---|
| exit code | int | 0 on success, 1 on failure |
| test logs | files | Server and test output logs |
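A driver honoring the contract above might look like this sketch: set the optional `DATADIR` input, run the script, and branch on the exit code. The default path and the wrapper function are illustrative, not part of the test suite.

```shell
#!/bin/bash
# Sketch: drive test.sh per the I/O contract above. The DATADIR default
# shown here is a placeholder, not the suite's actual convention.
export DATADIR="${DATADIR:-/path/to/test/data}"   # optional input

run_metrics_test() {
    local script="$1"
    # Exit code 0 signals success; anything else signals failure,
    # in which case the server and test logs hold the details.
    if bash "$script"; then
        echo "L0_metrics PASSED"
    else
        echo "L0_metrics FAILED (see server and test logs)"
        return 1
    fi
}
```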
Usage Examples
Running the Test
cd qa/L0_metrics/
bash test.sh
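While the test is running (or against any live server), the metrics endpoint can be spot-checked by hand. The sketch below assumes Triton's default metrics port (8002) and path (`/metrics`); the metric names are examples drawn from Triton's Prometheus metric families and may vary by version.

```shell
#!/bin/bash
# Sketch: spot-check that key metric families appear in a metrics scrape.
# Against a live server you would capture the scrape first, e.g.:
#   METRICS=$(curl -s localhost:8002/metrics)
# The family names below are examples and may differ across versions.
check_metric_families() {
    local metrics="$1"
    local family
    for family in nv_inference_count nv_inference_request_success \
                  nv_gpu_utilization; do
        echo "$metrics" | grep -q "^${family}" || {
            echo "missing: $family"
            return 1
        }
    done
    echo "all expected metric families present"
}
```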