Workflow: Fede1024 Rust rdkafka Mock Cluster Testing
| Knowledge Sources | |
|---|---|
| Domains | Testing, Kafka, Mocking |
| Last Updated | 2026-02-07 19:30 GMT |
Overview
End-to-end process for testing Kafka produce-consume logic without a real broker by using MockCluster to simulate a multi-broker Kafka cluster in-process.
Description
This workflow uses the rust-rdkafka MockCluster API to create an in-process simulated Kafka cluster with configurable broker count, topics, partitions, and replication factors. Producers and consumers connect to the mock cluster's bootstrap servers instead of a real broker, enabling fast, deterministic integration tests that require no external infrastructure. The mock cluster supports standard produce/consume operations, partition assignment, and offset management.
Key outcomes:
- Broker-free integration testing with MockCluster
- Create topics with configurable partition count and replication factor
- Standard FutureProducer and StreamConsumer usage against mock brokers
- Latency measurement and performance validation without network overhead
Usage
Execute this workflow when you need to write integration tests for Kafka-dependent code without deploying a real Kafka cluster. This is ideal for CI/CD pipelines, local development, and unit-level integration tests where you want to verify produce-consume logic in isolation from infrastructure concerns.
Execution Steps
Step 1: Create Mock Cluster
Instantiate a MockCluster with a specified number of brokers (e.g., 3). The mock cluster starts an in-process Kafka simulation that provides a bootstrap servers string for client configuration. The cluster is automatically cleaned up when the MockCluster value is dropped.
Key considerations:
- The broker count parameter determines how many virtual brokers are simulated
- MockCluster::new() returns a Result that should be unwrapped or handled
- The mock cluster lives for the lifetime of the MockCluster value
Step 2: Create Topics on Mock Cluster
Call mock_cluster.create_topic() with the topic name, number of partitions, and replication factor. Unlike real Kafka, topic creation is synchronous and immediate. Topics must be created before producing to them.
Key considerations:
- Topic creation is synchronous (no need to wait for propagation)
- Partition count and replication factor are respected within the limits of the broker count
- Multiple topics can be created for complex test scenarios
Step 3: Configure Clients Against Mock Cluster
Build ClientConfig for both FutureProducer and StreamConsumer using mock_cluster.bootstrap_servers() as the bootstrap.servers value. No other broker-specific configuration changes are needed; clients interact with the mock cluster identically to a real cluster.
Key considerations:
- Use mock_cluster.bootstrap_servers() to get the connection string
- All standard client configuration options (group.id, timeouts, etc.) work with the mock cluster
- The producer and consumer are standard types, not mock-specific variants
Step 4: Produce and Consume Messages
Use the standard FutureProducer::send() and StreamConsumer::recv() APIs to produce and consume messages. The mock cluster handles partition assignment, offset tracking, and delivery reports just like a real broker. Messages produced to a topic are immediately available for consumption.
Key considerations:
- Message delivery is in-process, so latencies are much lower than with a real broker
- Use send_result() to get the delivery future without waiting for queue space; it returns an error immediately if the producer queue is full, whereas send() retries enqueueing up to its queue timeout
- Consumer subscription and group management work identically to real brokers
- Timestamps, keys, payloads, and partitions all function normally
Step 5: Verify and Measure Results
Assert on consumed message content, delivery reports, and optionally measure latency using timestamp comparisons. The mock cluster supports realistic timing semantics, making it suitable for performance baseline testing. Cleanup happens automatically when the MockCluster is dropped.
Key considerations:
- Compare produce timestamps with consume timestamps for latency measurement
- All standard message accessors (payload, key, offset, partition, timestamp) work correctly
- The mock cluster is automatically torn down on drop, cleaning up all resources