Principle: Arize AI Phoenix Span Transmission
| Knowledge Sources | |
|---|---|
| Domains | AI Observability, OpenTelemetry, Telemetry Transport |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
Transmitting telemetry data from an instrumented client application to a collector requires span processors that manage the lifecycle of completed spans and span exporters that serialize and deliver them over a network transport protocol.
Description
Once an application produces spans through instrumentation, those spans must be reliably transmitted to a collector (such as a Phoenix server) for storage, visualization, and analysis. This transmission involves two distinct components working in concert:
Span Processors
A SpanProcessor sits between the TracerProvider and the SpanExporter. It receives notification when spans start and end, and is responsible for deciding when and how to forward completed spans to the exporter. The two standard processing strategies are:
- SimpleSpanProcessor: Exports each span synchronously and immediately upon completion. The calling thread blocks until the export operation finishes. This provides the lowest latency between span creation and export but adds overhead to the application's hot path.
- BatchSpanProcessor: Buffers completed spans in an internal queue and exports them in batches on a configurable schedule. This decouples the application's execution from the export operation, reducing per-span overhead at the cost of slightly delayed export. Batch processing is the recommended strategy for production workloads.
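Wiring either strategy into the OpenTelemetry Python SDK is a one-line choice at setup time. A minimal sketch, assuming the opentelemetry-sdk package is installed; ConsoleSpanExporter stands in here for a real network exporter:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
    ConsoleSpanExporter,
    SimpleSpanProcessor,
)

provider = TracerProvider()

# Development: synchronous export on the application thread.
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

# Production: buffered export on a background thread.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("example-operation"):
    pass  # spans produced here flow through every registered processor
```

In practice an application registers only one of the two processors; both are shown for contrast.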
Span Exporters
A SpanExporter serializes spans into a wire format and transmits them to a collector endpoint. The OTLP (OpenTelemetry Protocol) standard defines two transport mechanisms:
- HTTP + Protobuf: Sends spans as serialized protobuf messages in HTTP POST requests to a /v1/traces endpoint. This is firewall-friendly and works through HTTP proxies and load balancers.
- gRPC: Sends spans over a persistent gRPC connection using the ExportTraceService RPC. This offers lower per-message overhead and bidirectional streaming capabilities but requires gRPC-compatible infrastructure.
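Both transports are available as drop-in OTLPSpanExporter classes in the Python SDK. A minimal sketch, assuming the opentelemetry-exporter-otlp packages are installed and a Phoenix server on its default ports (HTTP on 6006, gRPC on 4317; adjust for your deployment):

```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter as HTTPSpanExporter,
)
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter as GRPCSpanExporter,
)

# HTTP + protobuf: POST serialized spans to the collector's /v1/traces path.
http_exporter = HTTPSpanExporter(endpoint="http://localhost:6006/v1/traces")

# gRPC: persistent connection to the collector's gRPC port.
grpc_exporter = GRPCSpanExporter(endpoint="http://localhost:4317", insecure=True)
```

Either exporter is then handed to a span processor; the rest of the pipeline is transport-agnostic.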
Usage
Use this principle whenever:
- Choosing between HTTP and gRPC transport based on infrastructure constraints.
- Deciding between simple (synchronous) and batch (asynchronous) processing for production vs. development environments.
- Configuring authentication headers for secured collector endpoints.
- Tuning batch export parameters (queue size, batch size, schedule delay, export timeout) for high-throughput workloads.
- Troubleshooting span delivery failures related to network, authentication, or endpoint misconfiguration.
Theoretical Basis
The span transmission pipeline can be modeled as a producer-consumer system:
Application Threads (Producers)
        |
        | span.end() callback
        v
[SpanProcessor Queue]
        |
        | Synchronous (Simple) or Batched (Batch)
        v
[SpanExporter]
        |
        | HTTP POST /v1/traces OR gRPC ExportTraceService
        v
[Collector Endpoint]
SimpleSpanProcessor Behavior
on_end(span):
    exporter.export([span])  # Blocking call on the application thread
The application thread is blocked during export, making this unsuitable for high-throughput or latency-sensitive workloads.
BatchSpanProcessor Behavior
on_end(span):
    queue.enqueue(span)  # Non-blocking enqueue

background_thread (runs every schedule_delay_millis):
    batch = queue.dequeue(up_to=max_export_batch_size)
    exporter.export(batch)  # Runs on background thread
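The producer-consumer mechanics above can be sketched as a self-contained toy in plain Python. MiniBatchProcessor is an illustrative stand-in, not the SDK class: non-blocking enqueue on the caller's thread, periodic batched drain on a background thread:

```python
import queue
import threading

class MiniBatchProcessor:
    """Toy sketch of BatchSpanProcessor mechanics (not the SDK class)."""

    def __init__(self, export_fn, max_queue_size=2048,
                 max_export_batch_size=512, schedule_delay=0.05):
        self._queue = queue.Queue(maxsize=max_queue_size)
        self._export = export_fn
        self._batch_size = max_export_batch_size
        self._delay = schedule_delay
        self._done = threading.Event()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def on_end(self, span):
        try:
            self._queue.put_nowait(span)  # non-blocking on the app thread
        except queue.Full:
            pass  # real SDKs count and report dropped spans

    def _drain(self):
        batch = []
        while len(batch) < self._batch_size:
            try:
                batch.append(self._queue.get_nowait())
            except queue.Empty:
                break
        if batch:
            self._export(batch)  # runs on the background thread

    def _run(self):
        while not self._done.wait(self._delay):
            self._drain()
        while not self._queue.empty():  # final flush on shutdown
            self._drain()

    def shutdown(self):
        self._done.set()
        self._worker.join()

exported = []
proc = MiniBatchProcessor(exported.append, max_export_batch_size=3)
for i in range(7):
    proc.on_end(f"span-{i}")  # returns immediately
proc.shutdown()  # flushes remaining spans in batches of at most 3
```

Note the trade-off made explicit here: a full queue drops spans rather than blocking the producer, which is exactly why queue and batch sizes are tunable in production.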
The BatchSpanProcessor is configurable via environment variables:
| Environment Variable | Parameter | Default | Description |
|---|---|---|---|
| OTEL_BSP_SCHEDULE_DELAY | schedule_delay_millis | 5000 | Delay between batch export attempts (ms) |
| OTEL_BSP_MAX_QUEUE_SIZE | max_queue_size | 2048 | Maximum number of spans in the buffer queue |
| OTEL_BSP_MAX_EXPORT_BATCH_SIZE | max_export_batch_size | 512 | Maximum number of spans per export batch |
| OTEL_BSP_EXPORT_TIMEOUT | export_timeout_millis | 30000 | Timeout for each export attempt (ms) |
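The same parameters can also be set programmatically; in the Python SDK the BatchSpanProcessor constructor accepts keyword arguments that mirror these environment variables. A sketch with the default values from the table, using ConsoleSpanExporter as a placeholder:

```python
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Constructor arguments correspond one-to-one with the OTEL_BSP_* variables.
processor = BatchSpanProcessor(
    ConsoleSpanExporter(),
    max_queue_size=2048,           # OTEL_BSP_MAX_QUEUE_SIZE
    schedule_delay_millis=5000,    # OTEL_BSP_SCHEDULE_DELAY
    max_export_batch_size=512,     # OTEL_BSP_MAX_EXPORT_BATCH_SIZE
    export_timeout_millis=30000,   # OTEL_BSP_EXPORT_TIMEOUT
)
```

Explicit constructor arguments take precedence over the environment variables, which is useful when one process hosts pipelines with different tuning.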
Transport Protocol Selection
The choice between HTTP and gRPC is influenced by several factors:
| Factor | HTTP + Protobuf | gRPC |
|---|---|---|
| Firewall compatibility | High (standard HTTPS port 443) | Lower (requires gRPC port, typically 4317) |
| Proxy support | Full HTTP proxy support | Requires gRPC-aware proxies |
| Connection overhead | New connection per request (unless HTTP/2) | Persistent connection with multiplexing |
| Compression | gzip, deflate via Content-Encoding header | Built-in gRPC compression |
| Endpoint inference | URL path contains /v1/traces | Port matches gRPC port (default 4317) |
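The endpoint-inference heuristics in the last row can be expressed as a small helper. infer_transport below is a hypothetical illustration of those rules, not a Phoenix API:

```python
from urllib.parse import urlparse

def infer_transport(endpoint: str) -> str:
    """Guess the OTLP transport from an endpoint URL (illustrative only)."""
    parsed = urlparse(endpoint)
    # An explicit /v1/traces path is the OTLP/HTTP convention.
    if parsed.path.rstrip("/").endswith("/v1/traces"):
        return "http"
    # The default OTLP gRPC port suggests a gRPC collector.
    if parsed.port == 4317:
        return "grpc"
    return "http"  # assumption: fall back to the firewall-friendly option
```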
Authentication
Both transport protocols support header-based authentication. The Phoenix wrappers automatically inject authentication headers from:
- Explicit headers parameter passed to the exporter constructor.
- PHOENIX_CLIENT_HEADERS or OTEL_EXPORTER_OTLP_HEADERS environment variables.
- PHOENIX_API_KEY environment variable (injected as Authorization: Bearer {key}).
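Passing credentials explicitly is equivalent to relying on the environment variables. A minimal sketch, assuming a Phoenix collector on localhost:6006 secured with an API key read from the environment:

```python
import os
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Explicit headers take the place of the PHOENIX_API_KEY-based injection.
api_key = os.environ.get("PHOENIX_API_KEY", "")
exporter = OTLPSpanExporter(
    endpoint="http://localhost:6006/v1/traces",
    headers={"authorization": f"Bearer {api_key}"},
)
```

Misconfigured or missing headers typically surface as 401/403 responses from the collector, which is the first thing to check when troubleshooting delivery failures against a secured endpoint.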