When a broker goes down, what will the Controller do?
B
Explanation:
When a broker goes down, the Controller detects the failure and triggers a leader election for all
partitions whose leader was on the failed broker. The new leader is chosen from each affected
partition's in-sync replicas (ISR).
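For illustration, the result of such an election can be checked from a client. Below is a minimal sketch using the Java AdminClient; the bootstrap address and the topic name t1 are assumptions:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class LeaderIsrCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
            try (Admin admin = Admin.create(props)) {
                TopicDescription desc = admin.describeTopics(Collections.singletonList("t1"))
                        .allTopicNames().get().get("t1");
                // After a failover, leader() reports the newly elected leader for each partition.
                desc.partitions().forEach(p ->
                        System.out.printf("partition %d: leader=%s isr=%s%n",
                                p.partition(), p.leader(), p.isr()));
            }
        }
    }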
Which technologies can be used to perform event stream processing? (Choose two.)
B, C
Explanation:
Kafka Streams is a client library for building real-time applications that process and analyze data
stored in Kafka.
ksqlDB enables event stream processing using SQL-like queries, allowing real-time transformation
and analysis of Kafka topics.
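As a rough sketch of the Kafka Streams side (the application id, bootstrap address, and topic names below are assumptions), a minimal topology that transforms records in real time looks like this; the equivalent ksqlDB logic would be a CREATE STREAM ... AS SELECT statement:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class UppercaseStream {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");    // assumed id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // Consume "input-events", transform each value, produce to "output-events".
            builder.<String, String>stream("input-events")
                   .mapValues(v -> v.toUpperCase())
                   .to("output-events");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }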
How can load balancing of Kafka clients across multiple brokers be accomplished?
A
Explanation:
Partitions are the primary mechanism for achieving load balancing in Kafka. When a topic has
multiple partitions, Kafka clients (producers and consumers) can distribute the load across brokers
hosting these partitions.
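A minimal sketch of how this plays out on the producer side (the topic name t1 and the bootstrap address are assumptions): with the default partitioner, keyed records are hashed across the topic's partitions, and therefore across the brokers that lead those partitions.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class KeyedProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 100; i++) {
                    // The default partitioner hashes each key, spreading the records
                    // across partitions and hence across the brokers leading them.
                    producer.send(new ProducerRecord<>("t1", "key-" + i, "value-" + i));
                }
            }
        }
    }

On the consumer side, members of the same consumer group are assigned disjoint partitions, which likewise spreads fetch load across brokers.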
A company is setting up a log ingestion use case where they will consume logs from numerous
systems. The company wants to tune Kafka for maximum throughput.
In this scenario, what acknowledgment setting makes the most sense?
A
Explanation:
acks=0 provides the highest throughput because the producer does not wait for any
acknowledgment from the broker. This minimizes latency and maximizes performance.
However, it comes at the cost of no durability guarantees — messages may be lost if the broker fails
before writing them. This setting is suitable when throughput is critical and occasional data loss is
acceptable, such as in some log ingestion use cases where logs are also stored elsewhere.
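A minimal sketch of this configuration in the Java producer (the bootstrap address is assumed):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class FireAndForgetProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // acks=0: fire-and-forget. The producer never waits for a broker response,
            // trading durability for throughput.
            props.put(ProducerConfig.ACKS_CONFIG, "0");
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // ... send records, then producer.close()
        }
    }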
Your Kafka cluster has four brokers. The topic t1 on the cluster has two partitions, and it has a
replication factor of three. You create a Consumer Group with four consumers, which subscribes to
t1.
In the scenario above, how many Controllers are in the Kafka cluster?
A
Explanation:
In a Kafka cluster, only one broker acts as the Controller at any given time. The Controller is
responsible for managing cluster metadata, such as partition leadership and broker status. Even if the
cluster has multiple brokers (in this case, four), only one is elected as the Controller; the others serve
as regular brokers. If the current Controller fails, another broker is automatically elected to take its
place.
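For illustration, the active Controller can be identified with the Java AdminClient (the bootstrap address is assumed):

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.Node;

    public class ControllerCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
            try (Admin admin = Admin.create(props)) {
                // Exactly one node is reported as the active Controller at any time.
                Node controller = admin.describeCluster().controller().get();
                System.out.println("Active controller: " + controller);
            }
        }
    }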
You want to increase Producer throughput for the messages it sends to your Kafka cluster by tuning
the batch size (batch.size) and the time the Producer waits before sending a batch (linger.ms).
According to best practices, what should you do?
D
Explanation:
Increasing batch.size allows the producer to accumulate more messages into a single batch,
improving compression and reducing the number of requests sent to the broker.
Increasing linger.ms gives the producer more time to fill up batches before sending them, which
improves batching efficiency and throughput.
This combination is a best practice for maximizing throughput, especially when message volume is
high or low latency is not a strict requirement.
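A minimal sketch of these two settings in the Java producer; the 64 KB and 20 ms values are illustrative assumptions, not prescribed by the question (the defaults are batch.size=16384 and linger.ms=0):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ThroughputTunedProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536); // illustrative: 64 KB batches
            props.put(ProducerConfig.LINGER_MS_CONFIG, 20);     // illustrative: wait up to 20 ms
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // ... send records, then producer.close()
        }
    }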
When using Kafka ACLs, when is the resource authorization checked?
A
Explanation:
Kafka ACLs (Access Control Lists) perform authorization checks every time a client attempts to access
a resource (e.g., topic, consumer group). This ensures continuous enforcement of permissions, not
just at connection time or intervals. This approach provides fine-grained security, preventing
unauthorized actions at any time during a session.
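For illustration, an ACL can be created with the Java AdminClient; once it exists, the broker evaluates it on every produce or fetch request the client makes. The principal User:alice, the topic t1, and the bootstrap address are assumptions:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.acl.AccessControlEntry;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclOperation;
    import org.apache.kafka.common.acl.AclPermissionType;
    import org.apache.kafka.common.resource.PatternType;
    import org.apache.kafka.common.resource.ResourcePattern;
    import org.apache.kafka.common.resource.ResourceType;

    public class AllowReadAcl {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
            try (Admin admin = Admin.create(props)) {
                // Allow the hypothetical principal User:alice to READ topic "t1" from any host.
                AclBinding binding = new AclBinding(
                        new ResourcePattern(ResourceType.TOPIC, "t1", PatternType.LITERAL),
                        new AccessControlEntry("User:alice", "*", AclOperation.READ,
                                AclPermissionType.ALLOW));
                admin.createAcls(Collections.singletonList(binding)).all().get();
            }
        }
    }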
An employee in the reporting department needs assistance because their data feed is slowing down.
You start by quickly checking the consumer lag for the clients on the data stream.
Which command will allow you to quickly check for lag on the consumers?
B
Explanation:
The kafka-consumer-groups.sh script is used to inspect consumer group details, including consumer
lag, which indicates how far behind a consumer is from the latest data in the partition.
The typical usage is:

    bin/kafka-consumer-groups.sh --bootstrap-server <broker> --describe --group <group_id>
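The same lag figures can also be computed programmatically. Below is a minimal sketch with the Java AdminClient; the group name reporting-group and the bootstrap address are assumptions:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ListOffsetsResult;
    import org.apache.kafka.clients.admin.OffsetSpec;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class LagCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
            try (Admin admin = Admin.create(props)) {
                // Offsets the group has committed, per partition.
                Map<TopicPartition, OffsetAndMetadata> committed =
                        admin.listConsumerGroupOffsets("reporting-group")
                             .partitionsToOffsetAndMetadata().get();
                // Latest (log-end) offsets for the same partitions.
                Map<TopicPartition, OffsetSpec> specs = new HashMap<>();
                committed.keySet().forEach(tp -> specs.put(tp, OffsetSpec.latest()));
                Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> ends =
                        admin.listOffsets(specs).all().get();
                // Lag = log-end offset minus committed offset.
                committed.forEach((tp, om) -> System.out.printf("%s lag=%d%n",
                        tp, ends.get(tp).offset() - om.offset()));
            }
        }
    }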
What are the benefits of gracefully shutting down brokers? (Choose two.)
A, B
Explanation:
A graceful shutdown ensures that logs are flushed to disk, minimizing recovery time during restart.
Kafka performs controlled leader migration during a graceful shutdown to avoid disruption and
ensure availability.
How does Kafka guarantee message integrity after a message is written to disk?
B
Explanation:
Kafka ensures message immutability for data integrity. Once a message is written to a Kafka topic
and persisted to disk, it cannot be modified. This immutability guarantees that consumers always
receive the original message content, which is critical for auditability, fault tolerance, and data
reliability.