Principle: ClickHouse TCP Server Startup
| Knowledge Sources | |
|---|---|
| Domains | Deployment, Server_Administration |
| Last Updated | 2026-02-08 00:00 GMT |
Overview
ClickHouse uses the Poco `TCPServer` framework to accept and dispatch incoming TCP and HTTP connections, combining a connection factory, a thread pool, and a dedicated accept loop that runs in its own thread.
Description
When ClickHouse starts, it creates one or more `TCPServer` instances (or its `HTTPServer` subclass) bound to configured ports. The default ports are 9000 for the native TCP protocol and 8123 for the HTTP protocol. Each server instance combines three components:
Server Socket: A `ServerSocket` that has been bound to an address and placed into listening state before being passed to the `TCPServer` constructor. The socket handles the low-level `accept` system call to receive new connections.
Connection Factory: A `TCPServerConnectionFactory` (or `HTTPRequestHandlerFactory` for HTTP) that produces connection handler objects. When a new connection arrives, the factory's `createConnection` method is called to produce a `TCPServerConnection` instance that knows how to handle the specific protocol (native ClickHouse wire protocol, HTTP, MySQL compatibility protocol, etc.). This is a classic application of the Factory Method pattern.
Thread Pool: A `Poco::ThreadPool` that provides worker threads for handling connections. The `TCPServer` either uses a caller-supplied thread pool or falls back to the default pool. The number of active threads adjusts dynamically with the connection queue depth, up to the configured maximum.
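The factory/connection split described above can be sketched with only the standard library. The class names mirror Poco's, but the bodies are simplified stand-ins (a `StreamSocket` that merely carries a peer address, a hypothetical `NativeConnection` handler), not the real implementation:

```cpp
#include <memory>
#include <string>
#include <utility>

// Stand-in for Poco::Net::StreamSocket: only carries a peer address here.
struct StreamSocket {
    std::string peerAddress;
};

// Protocol-specific handler; Poco's TCPServerConnection plays this role.
class TCPServerConnection {
public:
    explicit TCPServerConnection(StreamSocket s) : socket_(std::move(s)) {}
    virtual ~TCPServerConnection() = default;
    virtual std::string run() = 0;  // handle the whole connection
protected:
    StreamSocket socket_;
};

// The Factory Method interface: one factory per listening port/protocol.
class TCPServerConnectionFactory {
public:
    virtual ~TCPServerConnectionFactory() = default;
    virtual std::unique_ptr<TCPServerConnection>
    createConnection(StreamSocket socket) = 0;
};

// Hypothetical native-protocol handler and its factory.
class NativeConnection : public TCPServerConnection {
public:
    using TCPServerConnection::TCPServerConnection;
    std::string run() override { return "native:" + socket_.peerAddress; }
};

class NativeConnectionFactory : public TCPServerConnectionFactory {
public:
    std::unique_ptr<TCPServerConnection>
    createConnection(StreamSocket socket) override {
        return std::make_unique<NativeConnection>(std::move(socket));
    }
};
```

The accept loop only ever sees the abstract `TCPServerConnectionFactory` interface; swapping the factory swaps the protocol without touching the server.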
The lifecycle of a `TCPServer` proceeds as follows:
- The server is constructed with a factory, a bound server socket, and optional parameters and thread pool.
- Calling `start` spawns a new thread that runs the accept loop.
- The accept loop blocks on the server socket, waiting for incoming connections.
- When a connection arrives, it is optionally passed through a `TCPServerConnectionFilter` (for IP-based filtering).
- Accepted connections are placed into an internal queue.
- Worker threads from the thread pool dequeue connections, invoke the factory to create a `TCPServerConnection`, call its `start` method, and delete the connection object when `start` returns.
- Calling `stop` terminates the accept loop, discards queued connections, and waits for active connections to complete.
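The lifecycle above can be modeled in miniature: an internal connection queue, worker threads that drain it, and a two-phase `stop` that discards queued connections but waits for active ones. This is a stdlib-only sketch in which "connections" are plain integers fed in by the caller (standing in for the accept loop), not real Poco code:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class MiniTCPServer {
public:
    explicit MiniTCPServer(int workers) {
        for (int i = 0; i < workers; ++i)
            pool_.emplace_back([this] { workerLoop(); });
    }

    // Stands in for the accept loop handing an accepted socket to the queue.
    void enqueue(int conn) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(conn);
        }
        cv_.notify_one();
    }

    // Phase 1: discard queued connections and stop dispatching;
    // phase 2: wait for in-flight connections to complete.
    void stop() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            std::queue<int>().swap(queue_);  // discard queued connections
            stopped_ = true;
        }
        cv_.notify_all();
        for (auto& t : pool_) t.join();
    }

    int handled() const { return handled_.load(); }

private:
    void workerLoop() {
        for (;;) {
            int conn;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return stopped_ || !queue_.empty(); });
                if (queue_.empty()) return;  // stopped, nothing left to do
                conn = queue_.front();
                queue_.pop();
            }
            // Here Poco would call factory->createConnection(socket),
            // run its start(), then delete the connection object.
            handled_.fetch_add(1);
            (void)conn;
        }
    }

    std::queue<int> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::vector<std::thread> pool_;
    std::atomic<int> handled_{0};
    bool stopped_ = false;
};
```

Because `stop` empties the queue before waking the workers, a connection that was accepted but never dispatched is simply dropped, while a worker already inside its handler runs to completion before `join` returns.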
The `HTTPServer` subclass extends `TCPServer` with HTTP-specific functionality and adds `stopAll`, which can optionally abort in-progress connections by shutting down their underlying sockets.
systemd Integration: The ClickHouse server process is managed by systemd using `Type=notify`. The service unit runs the process as `User=clickhouse` and `Group=clickhouse`, restarts it automatically (`Restart=always` with `RestartSec=30`), raises the file descriptor limit with `LimitNOFILE=500000`, and sets `TimeoutStopSec=infinity` so the server can complete shutdown gracefully under its own `shutdown_wait_unfinished_queries` and `shutdown_wait_unfinished` settings rather than being forcibly killed by systemd.
The service is granted capabilities `CAP_NET_ADMIN` (network configuration), `CAP_IPC_LOCK` (memory locking for performance), `CAP_SYS_NICE` (scheduling priority adjustment), and `CAP_NET_BIND_SERVICE` (binding to ports below 1024) through both `CapabilityBoundingSet` and `AmbientCapabilities`.
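Put together, the directives described above would look roughly like the following in the `[Service]` section of the unit file. This is an abridged sketch; the exact file shipped by a given package version may differ in paths and additional directives:

```ini
# clickhouse-server.service (abridged sketch, not the packaged file)
[Service]
Type=notify
User=clickhouse
Group=clickhouse
Restart=always
RestartSec=30
LimitNOFILE=500000
TimeoutStopSec=infinity
CapabilityBoundingSet=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE CAP_NET_BIND_SERVICE
```

Listing the capabilities in both `CapabilityBoundingSet` and `AmbientCapabilities` matters because the service runs as a non-root user: the ambient set is what lets an unprivileged process actually exercise those capabilities.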
Usage
The TCP server startup mechanism is exercised every time ClickHouse starts, whether via `systemctl start clickhouse-server`, the init script, or direct binary invocation. Understanding this architecture is important when configuring the number of listener ports, tuning thread pool sizes, implementing custom connection filters (e.g., IP allowlists), or debugging connection acceptance issues.
Theoretical Basis
Factory Method Pattern: The `TCPServerConnectionFactory` applies the Factory Method pattern, decoupling the accept loop from the protocol-specific connection handling. This allows the same `TCPServer` infrastructure to serve the native ClickHouse protocol, HTTP, MySQL wire protocol, PostgreSQL wire protocol, and gRPC on different ports, each with its own factory producing the appropriate connection handler.
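The multi-protocol layout can be pictured as a map from listening port to factory, so the accept path never needs to know which protocol it is dispatching. The port numbers below are the common ClickHouse defaults (9004/9005 for the MySQL and PostgreSQL compatibility protocols), and the handler is reduced to a function returning its protocol name; this is an illustrative sketch, not the actual ClickHouse registration code:

```cpp
#include <functional>
#include <map>
#include <string>

// A "connection handler" reduced to a function returning its protocol name.
using ConnectionFactory = std::function<std::string()>;

// One factory per configured listening port.
std::map<int, ConnectionFactory> makeListeners() {
    return {
        {9000, [] { return std::string("native"); }},
        {8123, [] { return std::string("http"); }},
        {9004, [] { return std::string("mysql"); }},
        {9005, [] { return std::string("postgresql"); }},
    };
}

// The accept path: find the factory for the port a connection arrived
// on, and let it produce the appropriate handler.
std::string dispatch(const std::map<int, ConnectionFactory>& listeners,
                     int port) {
    auto it = listeners.find(port);
    return it == listeners.end() ? "unknown" : it->second();
}
```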
Reactor vs. Thread-per-Connection: The Poco `TCPServer` uses a hybrid model. A single reactor thread runs the accept loop, while a pool of worker threads handle individual connections. Connections are queued between the acceptor and workers, providing backpressure when the thread pool is saturated. This is more scalable than a pure thread-per-connection model (which would exhaust threads under high concurrency) while being simpler to implement than a full event-driven reactor (which would require non-blocking I/O throughout the protocol handlers).
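The backpressure point can be seen with a bounded queue between the acceptor and the workers: once the pool cannot keep up, new connections are refused instead of the queue growing without bound. Poco exposes a similar knob via `TCPServerParams::setMaxQueued`; the class below is a deliberate simplification of that idea:

```cpp
#include <cstddef>
#include <queue>

// Bounded acceptor-to-worker queue: refuses connections when full.
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t cap) : cap_(cap) {}

    // Returns false when saturated, pushing back on the acceptor,
    // which can then close the connection immediately.
    bool enqueue(int conn) {
        if (q_.size() >= cap_) return false;
        q_.push(conn);
        return true;
    }

    std::size_t size() const { return q_.size(); }

private:
    std::size_t cap_;
    std::queue<int> q_;
};
```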
Connection Filtering: The optional `TCPServerConnectionFilter` provides an interception point before a connection enters the thread pool. By rejecting connections early (before allocating a worker thread), the filter protects the thread pool from exhaustion by unwanted connections. The filter must be set before `start` is called to avoid race conditions.
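A filter of this kind can be sketched as an IP allowlist consulted before a connection is queued for the worker pool. The filter concept mirrors `TCPServerConnectionFilter`; the class and method names here are illustrative:

```cpp
#include <set>
#include <string>
#include <utility>

class AllowlistFilter {
public:
    explicit AllowlistFilter(std::set<std::string> allowed)
        : allowed_(std::move(allowed)) {}

    // Return true to let the connection proceed to the queue, false to
    // drop it before a worker thread is spent on it.
    bool accept(const std::string& peerIp) const {
        return allowed_.count(peerIp) > 0;
    }

private:
    std::set<std::string> allowed_;
};
```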
Graceful Shutdown: The two-phase shutdown model (stop accepting new connections, then wait for active connections) allows in-flight queries to complete rather than being abruptly terminated. The `HTTPServer::stopAll` method provides a more aggressive option that can forcibly close active connections when fast shutdown is required.
systemd-notify Readiness: Using `Type=notify` rather than `Type=simple` means systemd considers the service started only after ClickHouse explicitly signals readiness via `sd_notify(READY=1)`. This ensures that dependent services and health checks see a fully initialized server, not one still loading table metadata or replaying the write-ahead log.