What client protocol is supported for the schema registry? (select two)
A, B
Explanation:
Clients can interact with the Schema Registry over its HTTP or HTTPS (REST) interface.
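As an illustration, the registry's REST interface can be queried with plain HTTP tools (the address localhost:8081 and the subject name are assumptions for the example):

```shell
# List all registered subjects (assumes a Schema Registry at localhost:8081)
curl -s http://localhost:8081/subjects

# Fetch the latest schema registered under a subject (subject name is hypothetical)
curl -s http://localhost:8081/subjects/my-topic-value/versions/latest
```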
If I produce to a topic that does not exist, and the broker setting auto.create.topics.enable=true, what
will happen?
C
Explanation:
The broker's settings come into play when a topic is auto-created: the new topic is created with the broker defaults, such as num.partitions and default.replication.factor.
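For reference, a minimal broker configuration sketch showing the settings that govern auto-created topics (the values are illustrative):

```properties
# Allow the broker to auto-create a topic on first produce/fetch
auto.create.topics.enable=true
# Broker defaults applied to any auto-created topic
num.partitions=3
default.replication.factor=2
```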
You want to perform table lookups against a KTable every time a new record is received from the
KStream. What is the output of KStream-KTable join?
D
Explanation:
The result of a KStream-KTable join is another KStream: each record arriving on the stream is joined against the latest value for its key in the table.
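The lookup semantics can be sketched in plain Python (a conceptual simulation of the join, not the Kafka Streams API; all names here are illustrative):

```python
def stream_table_join(stream_records, table):
    """For each (key, value) stream record, look up the key in the table
    and emit a joined record; the output is again a stream."""
    joined = []
    for key, stream_value in stream_records:
        table_value = table.get(key)   # lookup against the table's latest state
        if table_value is not None:    # inner-join semantics: drop misses
            joined.append((key, (stream_value, table_value)))
    return joined

# A KTable holds the latest value per key
user_regions = {"alice": "EU", "bob": "US"}
# A KStream is an unbounded sequence of records
clicks = [("alice", "page1"), ("bob", "page2"), ("carol", "page3")]

print(stream_table_join(clicks, user_regions))
# [('alice', ('page1', 'EU')), ('bob', ('page2', 'US'))]
```

Note that "carol" produces no output because her key is absent from the table, mirroring inner-join behavior.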
Using the Confluent Schema Registry, where are Avro schema stored?
D
Explanation:
The Schema Registry stores all the schemas in the _schemas Kafka topic
Which of the following settings increases the chance of batching for a Kafka Producer?
D
Explanation:
linger.ms forces the producer to wait before sending messages, which increases the chance of accumulating full batches.
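A producer configuration sketch showing the setting in context (the values are illustrative, not recommendations):

```properties
# Wait up to 20 ms for more records before sending, so batches can fill up
linger.ms=20
# Maximum batch size in bytes (the producer default is 16384)
batch.size=32768
```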
You have a Kafka cluster and all the topics have a replication factor of 3. One intern at your company
stopped a broker, and accidentally deleted all the data of that broker on the disk. What will happen if
the broker is restarted?
B
Explanation:
Kafka's replication mechanism makes it resilient to scenarios where a broker loses its data on disk: the restarted broker recovers by re-replicating the data from the other brokers. This makes Kafka amazing!
Select all that apply (select THREE)
C, D, F
Explanation:
acks is a producer setting; min.insync.replicas is a topic or broker setting, and it is only effective when acks=all.
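A sketch of how the two settings pair up (the values are illustrative):

```properties
# Producer side: require acknowledgement from all in-sync replicas
acks=all

# Topic (or broker) side: with acks=all, a write succeeds only if at
# least this many replicas, including the leader, are in sync
min.insync.replicas=2
```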
A customer has many consumer applications that process messages from a Kafka topic. Each
consumer application can only process 50 MB/s. Your customer wants to achieve a target throughput
of 1 GB/s. What is the minimum number of partitions you would suggest to the customer for that
particular topic?
B
Explanation:
Each consumer can process only 50 MB/s, so we need at least 20 consumers, each consuming one partition, so that the target of 20 * 50 MB/s = 1,000 MB/s (1 GB/s) is reached. Since one partition can be consumed by at most one consumer within a group, the topic needs at least 20 partitions.
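The arithmetic can be checked directly as a back-of-the-envelope calculation:

```python
import math

target_mb_per_s = 1000     # 1 GB/s target throughput
consumer_mb_per_s = 50     # each consumer handles at most 50 MB/s

# Within a consumer group, one partition feeds at most one consumer,
# so the partition count must be at least the number of consumers.
min_partitions = math.ceil(target_mb_per_s / consumer_mb_per_s)
print(min_partitions)  # 20
```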
Your producer is producing at a very high rate and the batches are completely full each time. How
can you improve the producer throughput? (select two)
A, C
Explanation:
batch.size controls how many bytes of data to collect before sending messages to the Kafka broker. Set this as high as possible without exceeding available memory. Enabling compression can also help produce more compact batches and increase producer throughput. linger.ms will have no effect, as the batches are already full.
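The two levers as a producer configuration sketch (the values are illustrative):

```properties
# Larger batch capacity, so each full batch carries more data
batch.size=131072

# Compress batches so each one carries more records per byte on the wire
compression.type=snappy
```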
In Avro, adding a field to a record without a default is a __ schema evolution
A
Explanation:
Clients using the old schema will still be able to read records written with the new schema, because the unknown field is simply ignored; this is forward compatibility.
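The idea can be illustrated with a small simulation of the reader-side resolution (plain Python, not the Avro library; the schemas and field names are made up for the example):

```python
def resolve(record, reader_fields):
    """Simulate Avro schema resolution: the reader keeps only the
    fields its own schema declares and ignores unknown ones."""
    return {name: record[name] for name in reader_fields if name in record}

old_fields = ["id", "name"]                        # reader's (old) schema
new_record = {"id": 1, "name": "a", "email": "x"}  # written with the new schema

# The old reader ignores the added "email" field -> forward compatible
print(resolve(new_record, old_fields))  # {'id': 1, 'name': 'a'}
```

The reverse direction fails: a reader with the new schema cannot fill in "email" when reading old records, because the field has no default, which is why the change is forward-only rather than fully compatible.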