Role of an Apache Kafka Consumer
A Kafka consumer’s primary responsibilities include:

- Subscribing to one or more topics.
- Continuously polling brokers for new records.
- Processing the payload (e.g., updating a database or UI).
- Triggering downstream actions or alerts.
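These responsibilities can be sketched in pure Python with an in-memory stand-in for the broker. This is a conceptual sketch only: the `InMemoryBroker` class, topic name, and record shape are illustrative and not part of any Kafka client API.

```python
class InMemoryBroker:
    """Stand-in broker: an append-only log per topic."""
    def __init__(self):
        self.logs = {}

    def produce(self, topic, value):
        self.logs.setdefault(topic, []).append(value)

    def fetch(self, topic, offset, max_records):
        return self.logs.get(topic, [])[offset:offset + max_records]

def consume(broker, topic, handler, polls=3):
    """Poll loop: repeatedly request new records and hand each to a handler."""
    offset = 0
    for _ in range(polls):
        batch = broker.fetch(topic, offset, max_records=10)
        for record in batch:
            handler(record)  # process the payload: update a DB, UI, etc.
        offset += len(batch)

broker = InMemoryBroker()
broker.produce("charger-status", {"station": 42, "state": "occupied"})
seen = []
consume(broker, "charger-status", seen.append)
```

In a real client the fetch would go over the network, but the shape of the loop — subscribe, poll, process — is the same.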
When a vehicle arrives, the station’s sensor sends an “occupied” event to Kafka. A consumer listening on that topic picks up the message and updates the mobile app in real time, reducing the available charger count.
Apache Kafka is optimized for event streaming with configurable retention policies. It is not intended as a long-term or archival database.
Connecting Your Consumer to Kafka
To start consuming events, configure these core settings:

| Configuration | Purpose |
|---|---|
| bootstrap.servers | Broker addresses for establishing connections |
| group.id | Consumer group identifier for load-balanced fetching |
| topics | One or more Kafka topics to subscribe to and poll |
Once configured, the consumer will:

- Join the specified consumer group.
- Fetch its assigned partitions.
- Poll each partition in offset order, reading data without ever deleting it.
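As a minimal sketch, the settings from the table map onto a plain configuration dictionary. The broker addresses, group name, and topic below are placeholders, and the commented lines show how such a dictionary would typically be passed to a Python client such as confluent-kafka (an assumption about your client library, not a requirement):

```python
# Core consumer settings from the table above (placeholder values).
config = {
    "bootstrap.servers": "broker1:9092,broker2:9092",  # where to connect
    "group.id": "charger-status-readers",              # load-balanced group
}
topics = ["charger-status"]  # passed to subscribe(), not a broker setting

# With a real client (e.g. confluent-kafka) this would be used roughly as:
#   consumer = Consumer(config)
#   consumer.subscribe(topics)
#   msg = consumer.poll(timeout=1.0)
```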
You can reset consumer offsets to replay historical data if it’s still within Kafka’s retention window.
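Offset-based reading and replay can be modeled in a few lines of Python. The `PartitionReader` class below is a toy model, not a Kafka API: it shows that a consumer only advances a read position over a retained log, so seeking back to an earlier offset replays the same records.

```python
class PartitionReader:
    """Toy model of reading a partition log by offset (never deleting)."""
    def __init__(self, log):
        self.log = log      # records retained by the broker
        self.offset = 0     # next offset to read

    def poll(self, max_records=100):
        batch = self.log[self.offset:self.offset + max_records]
        self.offset += len(batch)
        return batch

    def seek(self, offset):
        # Offset reset: replay works as long as the broker retains the records.
        self.offset = offset

reader = PartitionReader(["e0", "e1", "e2"])
first = reader.poll()   # reads all three records in offset order
reader.seek(0)          # rewind to the beginning
replayed = reader.poll()  # the same records again: the log was only read
```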
Key Features of Kafka Consumers
| Feature | Description |
|---|---|
| Sequential Access | Reads records in offset order within a partition, guaranteeing strict per-partition ordering. |
| Partition Independence | Processes partitions in parallel; ordering only applies per partition. |
| Historical Control | Supports offset resets to replay events, subject to retention policies. |
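The first two features interact in a way worth seeing concretely: records from different partitions may interleave arbitrarily, but each partition’s records still arrive in offset order. The sketch below simulates this with two hypothetical partition logs (the record labels are illustrative):

```python
from itertools import zip_longest

partitions = {
    0: ["p0-a", "p0-b", "p0-c"],
    1: ["p1-a", "p1-b"],
}

processed = []
# Round-robin across partitions: the interleaving across partitions is
# arbitrary, but records within one partition keep their offset order.
for records in zip_longest(*partitions.values()):
    processed.extend(r for r in records if r is not None)

p0_seen = [r for r in processed if r.startswith("p0")]
p1_seen = [r for r in processed if r.startswith("p1")]
```

Any correct consumer sees `p0_seen` and `p1_seen` in their original per-partition order, even though `processed` mixes the two streams.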

Pull Architecture & Speed Management
Kafka’s consumer model is pull-based: consumers request batches of messages from brokers, giving fine-grained control over:

- Throughput: adjust max.poll.records or batch size.
- Latency: tune polling intervals and timeouts.
- Back-pressure: throttle fetch requests to match processing speed.
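A minimal Python sketch of a pull loop makes these knobs concrete. The `pull_loop` function and `process_rate` parameter are illustrative stand-ins, not Kafka APIs; only the name `max_poll_records` mirrors the real max.poll.records setting.

```python
import time

def pull_loop(log, max_poll_records=2, process_rate=100.0):
    """Consumer-driven pull: fetch at most max_poll_records per request,
    pacing fetches to the handler's speed (natural back-pressure)."""
    offset, batches = 0, []
    while offset < len(log):
        batch = log[offset:offset + max_poll_records]  # throughput knob
        offset += len(batch)
        batches.append(batch)
        time.sleep(len(batch) / process_rate)  # throttle to match processing
    return batches

batches = pull_loop(list(range(5)), max_poll_records=2)
# Batch sizes are capped by max_poll_records: [[0, 1], [2, 3], [4]]
```

Because the consumer decides when to fetch, a slow handler simply polls less often; the broker is never pushed more data than the consumer asked for.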
