Arroyo Server Common
| Knowledge Sources | |
|---|---|
| Domains | Streaming, Server, Observability |
| Last Updated | 2026-02-08 08:00 GMT |
Overview
Provides shared server infrastructure for all Arroyo services including logging initialization, admin HTTP server with metrics/profiling endpoints, gRPC server construction with tracing and error logging middleware, telemetry event reporting, and cluster identity management.
Description
This module is the shared foundation for all Arroyo server processes (controller, node, worker):
- Logging -- init_logging / init_logging_with_filter configure the tracing subscriber with support for Plaintext, Logfmt, and JSON output formats, optional non-blocking I/O, file/line metadata, static fields, and panic hook registration.
- Admin HTTP Server -- start_admin_server creates an Axum HTTP(S) server with endpoints for /status, /name, /metrics (Prometheus text), /metrics.pb (Prometheus protobuf), /details (version info), /config (TOML config dump), /debug/pprof/heap (jemalloc profiling), and /debug/pprof/profile (CPU profiling). Supports TLS and API key authentication.
- gRPC Server -- grpc_server / grpc_server_with_tls constructs a tonic Server with a tracing layer and GrpcErrorLogMiddlewareLayer that inspects grpc-status headers and logs error codes.
- Telemetry -- AnalyticsEventLogger implements EventLogger by posting analytics events to events.arroyo.dev with cluster_id, version, git SHA, and custom metrics.
- Cluster Identity -- set_cluster_id / get_cluster_id manages a persistent cluster UUID stored in the user's config directory.
- Constants -- BUILD_TIMESTAMP, GIT_SHA, VERSION ("0.16.0-dev").
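The grpc-status inspection described for GrpcErrorLogMiddlewareLayer can be sketched as follows. This is a hedged, self-contained illustration, not Arroyo's actual middleware code; the code-to-name mapping follows the public gRPC specification, and the function names here are invented for the example:

```rust
// Hedged sketch: a simplified stand-in for the `grpc-status` header
// inspection that a middleware like GrpcErrorLogMiddlewareLayer performs.
// NOT Arroyo's implementation; status names follow the gRPC spec.
fn grpc_status_name(code: u32) -> &'static str {
    match code {
        0 => "OK",
        1 => "CANCELLED",
        2 => "UNKNOWN",
        3 => "INVALID_ARGUMENT",
        4 => "DEADLINE_EXCEEDED",
        5 => "NOT_FOUND",
        13 => "INTERNAL",
        14 => "UNAVAILABLE",
        16 => "UNAUTHENTICATED",
        _ => "OTHER",
    }
}

/// Parse a `grpc-status` trailer value; produce a log line for non-OK codes.
fn error_log_line(grpc_status: Option<&str>) -> Option<String> {
    let code: u32 = grpc_status?.trim().parse().ok()?;
    if code == 0 {
        None
    } else {
        Some(format!(
            "gRPC request failed: status {code} ({})",
            grpc_status_name(code)
        ))
    }
}

fn main() {
    // OK responses produce no log line; error codes do.
    assert!(error_log_line(Some("0")).is_none());
    println!("{}", error_log_line(Some("14")).unwrap());
}
```

A middleware layer would apply this check to each response's trailers and emit the resulting line through the tracing subscriber.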
Usage
Use init_logging at the start of every Arroyo binary, start_admin_server to expose operational endpoints, and grpc_server/grpc_server_with_tls when setting up gRPC service handlers.
Code Reference
Source Location
- Repository: ArroyoSystems_Arroyo
- File: crates/arroyo-server-common/src/lib.rs
Signature
```rust
pub fn init_logging(name: &str) -> Option<WorkerGuard>;
pub fn init_logging_with_filter(name: &str, filter: EnvFilter) -> Option<WorkerGuard>;
pub async fn start_admin_server(service: &str) -> anyhow::Result<()>;
pub fn grpc_server() -> Server<...>;
pub async fn grpc_server_with_tls(tls_config: &TlsConfig) -> anyhow::Result<Server<...>>;
pub async fn wrap_start(name: &str, addr: SocketAddr, result: impl Future<...>) -> anyhow::Result<()>;
pub fn set_cluster_id(cluster_id: &str);
pub fn get_cluster_id() -> String;
pub const VERSION: &str = "0.16.0-dev";
pub const GIT_SHA: &str;
pub const BUILD_TIMESTAMP: &str;
```
Import
```rust
use arroyo_server_common::{
    init_logging, start_admin_server, grpc_server, grpc_server_with_tls,
    wrap_start, set_cluster_id, get_cluster_id, VERSION,
};
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| service | &str | Yes | Service name for admin server identification |
| tls_config | &TlsConfig | Only for grpc_server_with_tls | TLS certificate and key configuration |
Outputs
| Name | Type | Description |
|---|---|---|
| Server | tonic::transport::Server | Configured gRPC server with middleware layers |
| WorkerGuard | Option<WorkerGuard> | Guard that keeps the async log writer alive |
Usage Examples
```rust
use arroyo_server_common::{init_logging, start_admin_server, grpc_server};

// Initialize logging; keep the guard alive for the life of the process so
// the non-blocking log writer is flushed on shutdown.
let _guard = init_logging("controller");

// Start the admin server in the background.
tokio::spawn(start_admin_server("controller"));

// Build and run the gRPC server (within an async context).
grpc_server()
    .add_service(my_grpc_service)
    .serve(addr)
    .await?;
```
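Judging only from its signature, a helper like wrap_start plausibly attaches the service name and bind address to startup errors before propagating them. A minimal, hedged re-creation of that error-annotation idea, using only the standard library (the function name and message format here are invented, not Arroyo's):

```rust
use std::net::SocketAddr;

// Hedged sketch: annotate a server's startup result with the service name
// and bind address, as a `wrap_start`-style helper might. Illustrative only.
fn annotate_start_result(
    name: &str,
    addr: SocketAddr,
    result: Result<(), String>,
) -> Result<(), String> {
    result.map_err(|e| format!("failed to start {name} server on {addr}: {e}"))
}

fn main() {
    let addr: SocketAddr = "127.0.0.1:8000".parse().unwrap();
    // A successful start passes through unchanged.
    assert!(annotate_start_result("grpc", addr, Ok(())).is_ok());
    // A failure is annotated with service name and address.
    let err = annotate_start_result("grpc", addr, Err("address in use".into())).unwrap_err();
    println!("{err}");
}
```

Annotating at the wrapper keeps every service's bind-failure message uniform without each server having to build its own context string.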