In this lesson, we’ve revisited the core Kafka concepts you need for reliable, scalable event streaming.

Offset Management

Offsets record a consumer’s position within a topic partition. Kafka can:
  • Auto-commit offsets at regular intervals
  • Let you manually commit offsets for precise control
Proper offset handling ensures:
  • Fault tolerance
  • Seamless consumer restarts
  • At-least-once delivery, or exactly-once semantics when combined with idempotent or transactional processing
Consider manual commits when you need tight control over message acknowledgment and processing guarantees.
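As a rough illustration, here is a minimal manual-commit consumer sketch using the Java client. The bootstrap address, topic, and group id (orders, orders-service) are placeholders, not part of the lesson:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-service");          // illustrative group id
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // take control of commits
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));                            // illustrative topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);                                          // your business logic
                }
                // Commit only after the whole batch is processed: at-least-once semantics.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("partition=%d offset=%d value=%s%n",
                record.partition(), record.offset(), record.value());
    }
}
```

Because the commit happens after processing, a crash between processing and commit can replay a batch, so downstream handling should tolerate occasional duplicates.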

Poison Pill

A poison pill is a malformed or unexpected message that can crash your consumer and stall the pipeline. Best practices include:
  1. Catch exceptions around message deserialization or processing
  2. Log the offending payload for analysis
  3. Route bad records to a dead letter queue (DLQ)
  4. Skip the bad record and resume the pipeline without interruption
Failing to handle poison pills can halt downstream systems and lead to data loss.
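A minimal sketch of steps 1-4, assuming a hypothetical dead-letter topic named orders-dlq and a byte-array consumer; the try/catch wraps per-record handling so one malformed payload cannot stall the partition:

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PoisonPillHandler {
    private static final String DLQ_TOPIC = "orders-dlq";    // hypothetical dead-letter topic

    private final KafkaProducer<byte[], byte[]> dlqProducer;

    public PoisonPillHandler(KafkaProducer<byte[], byte[]> dlqProducer) {
        this.dlqProducer = dlqProducer;
    }

    /** Steps 1-4 above: catch, log, route to the DLQ, keep going. */
    public void handle(ConsumerRecord<byte[], byte[]> record) {
        try {
            // Step 1: deserialization/processing may throw on a malformed payload
            byte[] raw = record.value();
            String value = (raw == null) ? null : new String(raw, StandardCharsets.UTF_8);
            process(value);
        } catch (Exception e) {
            // Step 2: log the offending payload's coordinates for later analysis
            System.err.printf("Poison pill at %s-%d@%d: %s%n",
                    record.topic(), record.partition(), record.offset(), e.getMessage());
            // Step 3: route the raw bytes to the dead letter queue
            dlqProducer.send(new ProducerRecord<>(DLQ_TOPIC, record.key(), record.value()));
            // Step 4: swallow the error so the consumer loop resumes with the next record
        }
    }

    private void process(String value) { /* business logic goes here */ }
}
```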

Legacy Coordination: ZooKeeper

ZooKeeper has historically managed:
  • Cluster metadata
  • Broker configurations
  • Leader election
While mature and reliable, it introduces operational complexity and overhead.

Modern Coordination: KRaft (Kafka Raft)

KRaft is Kafka’s built-in consensus layer, replacing ZooKeeper by using the Raft protocol to handle:
  • Metadata storage
  • Controller duties
Benefits of KRaft:
  • Simplified architecture
  • Easier deployments in containers and Kubernetes
  • Faster cluster scaling

Coordination Comparison

Mechanism | Advantages | Drawbacks
ZooKeeper | Battle-tested, stable | Additional cluster to manage
KRaft | Native consensus, simpler deployments | Newer; tooling and operational experience still maturing

KRaft in Action

With KRaft:
  • Brokers fetch metadata directly from the KRaft controller quorum
  • Eliminates ZooKeeper setup steps
  • Speeds up cluster scaling
  • Simplifies broker integration in dynamic environments (e.g., Kubernetes)
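To make the "no ZooKeeper" point concrete, here is a small AdminClient sketch (the bootstrap address is illustrative) that reads cluster metadata, including the active controller, directly from the brokers. Note that nothing in the client configuration references ZooKeeper:

```java
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;
import org.apache.kafka.common.Node;

public class ClusterMetadataCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address

        try (Admin admin = Admin.create(props)) {
            // The AdminClient talks only to Kafka itself; with KRaft there is no
            // separate ZooKeeper ensemble to query for cluster metadata.
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("Cluster id: " + cluster.clusterId().get());

            Node controller = cluster.controller().get();
            System.out.println("Active controller: " + controller.idString() + " @ " + controller.host());

            cluster.nodes().get().forEach(n ->
                    System.out.println("Broker " + n.id() + " at " + n.host() + ":" + n.port()));
        }
    }
}
```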

Security in Kafka

Kafka’s security stack includes:
Feature | Mechanism | Benefit
Encryption | TLS | Protects data in transit
Authentication | SASL (PLAIN, SCRAM, etc.) | Verifies client identity
Authorization | ACLs | Granular access control for topics/users
These controls are critical for enterprise deployments, ensuring that only authorized clients can produce or consume data.
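As a hedged example, this is one way to configure a Java producer for TLS encryption plus SASL/SCRAM authentication. The hostname, credentials, and truststore path are placeholders, and the exact mechanism (PLAIN, SCRAM, OAUTHBEARER, mTLS) depends on how your brokers are set up:

```java
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class SecureProducerConfig {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9093"); // illustrative TLS listener
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Encryption + authentication: TLS on the wire, SASL/SCRAM to prove identity.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"app-user\" password=\"app-secret\";");            // placeholder credentials
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/kafka/client.truststore.jks"); // illustrative path
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");

        // Authorization is enforced broker-side via ACLs: this principal still needs
        // a WRITE ACL on the target topic before its produce requests succeed.
        return new KafkaProducer<>(props);
    }
}
```

Authorization itself is not configured in the client; operators grant ACLs on the brokers (for example with the kafka-acls tool) so that only approved principals can produce to or consume from each topic.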

We’ve now covered:
  • Offset management strategies
  • Handling poison-pill messages
  • Cluster coordination with ZooKeeper vs. KRaft
  • End-to-end security using TLS, SASL, and ACLs
That concludes this lesson. See you next time!