Confluent CCAAK Practice Test

Exam Title: Confluent Certified Administrator for Apache Kafka

Last update: Nov 27, 2025
Question 1

When a broker goes down, what will the Controller do?

  • A. Wait for a follower to take the lead.
  • B. Trigger a leader election among the remaining followers to distribute leadership.
  • C. Become the leader for the topic/partition that needs a leader, pending the broker's return to the cluster.
  • D. Automatically elect the least loaded broker to become the leader for every orphaned partition.
Answer:

B


Explanation:
When a broker goes down, the Controller detects the failure and triggers a leader election for all
partitions that had their leader on the failed broker. The leader is chosen from the in-sync replicas
(ISRs) of each partition.
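As an illustrative aside, the current leader and ISR of each partition can be inspected with Kafka's Java Admin client. A minimal sketch, assuming a local broker at localhost:9092 and the Kafka 3.x Admin API:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc =
                    admin.describeTopics(List.of("t1")).allTopicNames().get().get("t1");
            // After a broker failure, the leader printed here is the one the
            // Controller re-elected from the partition's in-sync replicas.
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d leader=%s isr=%s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}
```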

Question 2

Which technology can be used to perform event stream processing? (Choose two.)

  • A. Confluent Schema Registry
  • B. Apache Kafka Streams
  • C. Confluent ksqlDB
  • D. Confluent Replicator
Answer:

B, C


Explanation:
Kafka Streams is a client library for building real-time applications that process and analyze data
stored in Kafka.
ksqlDB enables event stream processing using SQL-like queries, allowing real-time transformation
and analysis of Kafka topics.
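For a concrete feel of what these tools do, here is a minimal Kafka Streams sketch in Java; the topic names, application id, and broker address are placeholder assumptions:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");    // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read each record from "input-topic", transform its value, write to "output-topic".
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(v -> v.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

In ksqlDB, roughly the same transformation could be expressed as a continuous SQL query over a stream backed by the input topic.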

Question 3

How can load balancing of Kafka clients across multiple brokers be accomplished?

  • A. Partitions
  • B. Replicas
  • C. Offsets
  • D. Connectors
Answer:

A


Explanation:
Partitions are the primary mechanism for achieving load balancing in Kafka. When a topic has
multiple partitions, Kafka clients (producers and consumers) can distribute the load across brokers
hosting these partitions.
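As a sketch of how this is set up, a topic can be created with several partitions via the Java Admin client; the topic name, partition count, and broker address below are illustrative assumptions:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreatePartitionedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (Admin admin = Admin.create(props)) {
            // Six partitions spread the topic's traffic across brokers;
            // the counts here are illustrative only.
            NewTopic topic = new NewTopic("web-logs", 6, (short) 3);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```

With six partitions, producers spread records across the brokers hosting them, and up to six consumers in one group can read in parallel.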

Question 4

A company is setting up a log ingestion use case where they will consume logs from numerous
systems. The company wants to tune Kafka for maximum throughput.
In this scenario, what acknowledgment setting makes the most sense?

  • A. acks=0
  • B. acks=1
  • C. acks=all
  • D. acks=undefined
Answer:

A


Explanation:
acks=0 provides the highest throughput because the producer does not wait for any
acknowledgment from the broker. This minimizes latency and maximizes performance.
However, it comes at the cost of no durability guarantees — messages may be lost if the broker fails
before writing them. This setting is suitable when throughput is critical and occasional data loss is
acceptable, such as in some log ingestion use cases where logs are also stored elsewhere.
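A minimal fire-and-forget producer sketch under this setting; the broker address and topic name are placeholder assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FireAndForgetProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // acks=0: do not wait for any broker acknowledgment.
        // Maximum throughput, but no durability guarantee.
        props.put(ProducerConfig.ACKS_CONFIG, "0");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("app-logs", "host-1", "log line"));
        }
    }
}
```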

Question 5

Your Kafka cluster has four brokers. The topic t1 on the cluster has two partitions, and it has a
replication factor of three. You create a Consumer Group with four consumers, which subscribes to
t1.
In the scenario above, how many Controllers are in the Kafka cluster?

  • A. One
  • B. Two
  • C. Three
  • D. Four
Answer:

A


Explanation:
In a Kafka cluster, only one broker acts as the Controller at any given time. The Controller is
responsible for managing cluster metadata, such as partition leadership and broker status. Even if the
cluster has multiple brokers (in this case, four), only one is elected as the Controller, and others serve
as regular brokers. If the current Controller fails, another broker is automatically elected to take its
place.

Question 6

You want to increase Producer throughput for the messages it sends to your Kafka cluster by tuning
the batch size (‘batch.size’) and the time the Producer waits before sending a batch (‘linger.ms’).
According to best practices, what should you do?

  • A. Decrease ‘batch.size’ and decrease ‘linger.ms’
  • B. Decrease ‘batch.size’ and increase ‘linger.ms’
  • C. Increase ‘batch.size’ and decrease ‘linger.ms’
  • D. Increase ‘batch.size’ and increase ‘linger.ms’
Answer:

D


Explanation:
Increasing batch.size allows the producer to accumulate more messages into a single batch,
improving compression and reducing the number of requests sent to the broker.
Increasing linger.ms gives the producer more time to fill up batches before sending them, which
improves batching efficiency and throughput.
This combination is a best practice for maximizing throughput, especially when message volume is
high or consistent latency is not a strict requirement.
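A sketch of producer properties tuned along these lines; the specific values and broker address are illustrative assumptions, not prescribed settings:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ThroughputTuning {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Larger batches: more records per request, better compression ratios.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);  // default is 16384 bytes
        // Wait a little before sending so batches have time to fill.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);      // default is 0 ms
        // Compression amplifies the benefit of large batches.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        return props;
    }
}
```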

Question 7

When using Kafka ACLs, when is the resource authorization checked?

  • A. Each time the resource is accessed.
  • B. The initial time the resource is accessed.
  • C. Each time the resource is accessed within the configured authorization interval.
  • D. When the client connection is first established.
Answer:

A


Explanation:
Kafka ACLs (Access Control Lists) perform authorization checks every time a client attempts to access
a resource (e.g., topic, consumer group). This ensures continuous enforcement of permissions, not
just at connection time or intervals. This approach provides fine-grained security, preventing
unauthorized actions at any time during a session.
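As an illustration, ACL bindings can also be created programmatically with the Java Admin client. This sketch assumes a cluster that already has an authorizer configured; the principal, topic, and broker values are placeholders:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantRead {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (Admin admin = Admin.create(props)) {
            // Allow principal User:alice to READ topic "t1" from any host.
            // The authorizer evaluates this binding on every access to the topic.
            AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "t1", PatternType.LITERAL),
                    new AccessControlEntry("User:alice", "*",
                            AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(List.of(binding)).all().get();
        }
    }
}
```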

Question 8

An employee in the reporting department needs assistance because their data feed is slowing down.
You start by quickly checking the consumer lag for the clients on the data stream.
Which command will allow you to quickly check for lag on the consumers?

  • A. bin/kafka-consumer-lag.sh
  • B. bin/kafka-consumer-groups.sh
  • C. bin/kafka-consumer-group-throughput.sh
  • D. bin/kafka-reassign-partitions.sh
Answer:

B


Explanation:
The kafka-consumer-groups.sh script is used to inspect consumer group details, including consumer
lag, which indicates how far behind a consumer is from the latest data in the partition.
The typical usage is:
bin/kafka-consumer-groups.sh --bootstrap-server <broker> --describe --group <group_id>
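Beyond the CLI, lag can also be computed programmatically with the Java Admin client. A sketch, assuming a placeholder group id and broker address:

```java
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLag {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (Admin admin = Admin.create(props)) {
            // Committed offsets for the group ("reporting-app" is a placeholder).
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("reporting-app")
                         .partitionsToOffsetAndMetadata().get();
            // Latest (log-end) offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(latestSpec).all().get();
            // Lag = log-end offset minus committed offset, per partition.
            committed.forEach((tp, om) -> System.out.printf("%s lag=%d%n",
                    tp, latest.get(tp).offset() - om.offset()));
        }
    }
}
```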

Question 9

What are the benefits of gracefully shutting down brokers? (Choose two.)

  • A. It will sync all its logs to disk to avoid needing to do any log recovery when it restarts.
  • B. It will migrate any partitions the server is the leader for to other replicas prior to shutting down.
  • C. It will automatically re-elect leaders on restart.
  • D. It will balance the partitions across brokers before restarting.
Answer:

A, B


Explanation:
A graceful shutdown ensures that logs are flushed to disk, minimizing recovery time during restart.
Kafka performs controlled leader migration during a graceful shutdown to avoid disruption and
ensure availability.

Question 10

How does Kafka guarantee message integrity after a message is written to disk?

  • A. A message can be edited by the producer, producing to the message offset.
  • B. A message cannot be altered once it has been written.
  • C. A message can be grouped with messages sharing the same key to improve read performance.
  • D. Only message metadata can be altered using command line (CLI) tools.
Answer:

B


Explanation:
Kafka ensures message immutability for data integrity. Once a message is written to a Kafka topic
and persisted to disk, it cannot be modified. This immutability guarantees that consumers always
receive the original message content, which is critical for auditability, fault tolerance, and data
reliability.
