Confluent CCDAK Practice Test

Exam Title: Confluent Certified Developer for Apache Kafka

Last update: Nov 27, 2025
Question 1

Which client protocols are supported by the Schema Registry? (select two)

  • A. HTTP
  • B. HTTPS
  • C. JDBC
  • D. Websocket
  • E. SASL
Answer:

A, B


Explanation:
Clients can interact with the Schema Registry using its HTTP or HTTPS interface.
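As a sketch of what that HTTP interface looks like, the snippet below builds a Schema Registry REST path for a topic's value subject. The registry address and topic name are assumptions, not part of the question.

```python
# Sketch: building a Schema Registry REST endpoint. The registry exposes a
# plain HTTP(S) REST API -- there is no JDBC or WebSocket interface.
REGISTRY_URL = "http://localhost:8081"  # hypothetical registry address

def subject_versions_path(topic: str) -> str:
    """Return the REST path listing schema versions for a topic's value subject."""
    return f"{REGISTRY_URL}/subjects/{topic}-value/versions"

print(subject_versions_path("orders"))
# a live call would look like: requests.get(subject_versions_path("orders"))
```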

Question 2

If I produce to a topic that does not exist, and the broker setting auto.create.topics.enable=true, what
will happen?

  • A. Kafka will automatically create the topic with 1 partition and 1 replication factor
  • B. Kafka will automatically create the topic with the indicated producer settings num.partitions and default.replication.factor
  • C. Kafka will automatically create the topic with the broker settings num.partitions and default.replication.factor
  • D. Kafka will automatically create the topic with num.partitions=#of brokers and replication.factor=3
Answer:

C


Explanation:
The broker settings come into play when a topic is auto-created.
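To make the relevant broker settings concrete, here they are as a config fragment. The values shown are illustrative examples, not Kafka's shipped defaults (Kafka defaults both num.partitions and default.replication.factor to 1).

```python
# Broker-side settings that govern auto-created topics (example values,
# not Kafka's shipped defaults).
broker_config = {
    "auto.create.topics.enable": "true",  # allow auto-creation on first produce
    "num.partitions": "3",                # partition count for auto-created topics
    "default.replication.factor": "3",    # replication factor for them
}
print(broker_config["num.partitions"])
```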

Question 3

You want to perform a table lookup against a KTable every time a new record is received from the
KStream. What is the output of a KStream-KTable join?

  • A. KTable
  • B. GlobalKTable
  • C. You choose between KStream or KTable
  • D. KStream
Answer:

D


Explanation:
Here a KStream is processed to produce another KStream: each incoming stream record is enriched by looking up the table, and the join result is emitted as a new stream record.
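The idea can be sketched in plain Python (this is an analogy, not the Kafka Streams API): a KTable behaves like a dict holding the latest value per key, a KStream like a sequence of records. Joining each stream record against the table yields one output record per input record, which is why the result is again a stream.

```python
# Plain-Python analogy of a KStream-KTable join (not the Streams API).
ktable = {"user1": "gold", "user2": "silver"}      # key -> latest value (table)
kstream = [("user1", "click"), ("user2", "view")]  # (key, value) events (stream)

joined_stream = [
    (key, (value, ktable.get(key)))  # enrich each event via a table lookup
    for key, value in kstream
]
print(joined_stream)
# [('user1', ('click', 'gold')), ('user2', ('view', 'silver'))]
```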

Question 4

Using the Confluent Schema Registry, where are Avro schemas stored?

  • A. In the Schema Registry embedded SQL database
  • B. In the Zookeeper node /schemas
  • C. In the message bytes themselves
  • D. In the _schemas topic
Answer:

D


Explanation:
The Schema Registry stores all schemas in the _schemas Kafka topic.

Question 5

Which of the following settings increases the chance of batching for a Kafka producer?

  • A. Increase batch.size
  • B. Increase message.max.bytes
  • C. Increase the number of producer threads
  • D. Increase linger.ms
Answer:

D


Explanation:
linger.ms forces the producer to wait before sending messages, increasing the chance that records are
grouped into batches.

Question 6

You have a Kafka cluster and all the topics have a replication factor of 3. An intern at your company
stopped a broker and accidentally deleted all of that broker's data on disk. What will happen when
the broker is restarted?

  • A. The broker will start, and other topics will also be deleted as the broker data on the disk got deleted
  • B. The broker will start, and won't be online until all the data it needs to have is replicated from other leaders
  • C. The broker will crash
  • D. The broker will start, and won't have any data. If the broker becomes leader, we have data loss
Answer:

B


Explanation:
Kafka's replication mechanism makes it resilient to scenarios where a broker loses its data on disk:
on restart, the broker recovers by replicating the data from the other brokers, and only rejoins once it
has caught up.

Question 7

Select all that apply (select THREE)

  • A. min.insync.replicas is a producer setting
  • B. acks is a topic setting
  • C. acks is a producer setting
  • D. min.insync.replicas is a topic setting
  • E. min.insync.replicas matters regardless of the values of acks
  • F. min.insync.replicas only matters if acks=all
Answer:

C, D, F


Explanation:
acks is a producer setting; min.insync.replicas is a topic or broker setting, and it only takes effect
when acks=all.

Question 8

A customer has many consumer applications that process messages from a Kafka topic. Each
consumer application can only process 50 MB/s. Your customer wants to achieve a target throughput
of 1 GB/s. What is the minimum number of partitions you would suggest to the customer for that
particular topic?

  • A. 10
  • B. 20
  • C. 1
  • D. 50
Answer:

B


Explanation:
Each consumer can process only 50 MB/s, so we need at least 20 consumers, each consuming one
partition, so that 20 × 50 MB/s = 1000 MB/s ≈ 1 GB/s, meeting the target.

Question 9

Your producer is producing at a very high rate and the batches are completely full each time. How
can you improve the producer throughput? (select two)

  • A. Enable compression
  • B. Disable compression
  • C. Increase batch.size
  • D. Decrease batch.size
  • E. Decrease linger.ms
  • F. Increase linger.ms
Answer:

A, C


Explanation:
batch.size controls how many bytes of data to collect before sending messages to the Kafka broker.
Set this as high as possible, without exceeding available memory. Enabling compression can also help
make more compact batches and increase the throughput of your producer. Linger.ms will have no
effect as the batches are already full

Question 10

In Avro, adding a field to a record without a default is a __ schema evolution

  • A. forward
  • B. backward
  • C. full
  • D. breaking
Answer:

A


Explanation:
Clients with the old schema will be able to read records saved with the new schema.
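A minimal sketch of the evolution in question, with the schemas written as plain dicts (the record and field names are made up). The new writer schema adds a field with no default: old readers simply ignore the extra field, so old-schema clients can still read new records (forward compatibility); the reverse direction would fail, since a new reader has no default to fill in when reading old records.

```python
# Old and new Avro schemas (as plain dicts; names are hypothetical).
old_schema = {
    "type": "record", "name": "User",
    "fields": [{"name": "name", "type": "string"}],
}
new_schema = {
    "type": "record", "name": "User",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "email", "type": "string"},  # added WITHOUT a default
    ],
}

# The added field carries no "default" key -- the forward-only case.
added = next(f for f in new_schema["fields"] if f["name"] == "email")
print("default" in added)  # False
```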

Page 1 out of 14
Viewing questions 1-10 out of 150