Splunk SPLK-2002 Practice Test

Exam Title: Splunk Enterprise Certified Architect

Last update: Dec 14, 2025
Question 1

Which of the following are client filters available in serverclass.conf? (Select all that apply.)

  • A. DNS name.
  • B. IP address.
  • C. Splunk server role.
  • D. Platform (machine type).
Answer:

A, B, D


Explanation:
The client filters available in serverclass.conf are DNS name, IP address, and platform (machine type).
These filters allow the administrator to specify which forwarders belong to a server class and receive
the apps and configurations from the deployment server. The Splunk server role is not a valid client
filter in serverclass.conf, as it is not a property of the forwarder. For more information, see [Use
forwarder management filters] in the Splunk documentation.
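As an illustration, a serverclass.conf stanza on the deployment server that combines these filters might look like the following (a minimal sketch; the server class name, app name, and address patterns are invented for this example):

  [serverClass:linux_web]
  whitelist.0 = 10.1.2.*
  whitelist.1 = web*.example.com
  machineTypesFilter = linux-x86_64

  [serverClass:linux_web:app:web_inputs]
  restartSplunkd = true

Here the whitelist entries match clients by IP address or DNS name, while machineTypesFilter restricts the class by platform.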

Question 2

What log file would you search to verify if you suspect there is a problem interpreting a regular
expression in a monitor stanza?

  • A. btool.log
  • B. metrics.log
  • C. splunkd.log
  • D. tailing_processor.log
Answer:

D


Explanation:
The tailing_processor.log file would be the best place to search if you suspect there is a problem
interpreting a regular expression in a monitor stanza. This log file contains information about how
Splunk monitors files and directories, including any errors or warnings related to parsing the monitor
stanza. The splunkd.log file contains general information about the Splunk daemon, but it may not
have the specific details about the monitor stanza. The btool.log file contains information about the
configuration files, but it does not log the runtime behavior of the monitor stanza. The metrics.log
file contains information about the performance metrics of Splunk, but it does not log the event
breaking issues. For more information, see About Splunk Enterprise logging in the Splunk documentation.
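Assuming the instance indexes its own internal logs, a search along these lines (a sketch, not the only approach) can surface monitor and tailing warnings or errors:

  index=_internal sourcetype=splunkd component=TailingProcessor (log_level=WARN OR log_level=ERROR)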

Question 3

Which Splunk tool offers a health check for administrators to evaluate the health of their Splunk
deployment?

  • A. btool
  • B. DiagGen
  • C. SPL Clinic
  • D. Monitoring Console
Answer:

D


Explanation:
The Monitoring Console is the Splunk tool that offers a health check for administrators to evaluate
the health of their Splunk deployment. The Monitoring Console provides dashboards and alerts that
show the status and performance of various Splunk components, such as indexers, search heads,
forwarders, license usage, and search activity. The Monitoring Console can also run health checks on
the deployment and identify any issues or recommendations. The btool is a command-line tool that
shows the effective settings of the configuration files, but it does not offer a health check. The
DiagGen is a tool that generates diagnostic snapshots of the Splunk environment, but it does not
offer a health check. The SPL Clinic is a tool that analyzes and optimizes SPL queries, but it does not
offer a health check. For more information, see About the Monitoring Console in the Splunk documentation.
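For comparison, the command-line checks referred to above can be run as follows (a sketch assuming a default installation path):

  $SPLUNK_HOME/bin/splunk btool check
  $SPLUNK_HOME/bin/splunk btool inputs list --debug
  $SPLUNK_HOME/bin/splunk diag

btool check validates configuration file syntax, the list --debug form shows effective settings and the files they come from, and splunk diag produces a diagnostic snapshot; none of these replaces the Monitoring Console's health check dashboards.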

Question 4

In a four site indexer cluster, which configuration stores two searchable copies at the origin site, one
searchable copy at site2, and a total of four searchable copies?

  • A. site_search_factor = origin:2, site1:2, total:4
  • B. site_search_factor = origin:2, site2:1, total:4
  • C. site_replication_factor = origin:2, site1:2, total:4
  • D. site_replication_factor = origin:2, site2:1, total:4
Answer:

B


Explanation:
In a four site indexer cluster, the configuration that stores two searchable copies at the origin site,
one searchable copy at site2, and a total of four searchable copies is site_search_factor = origin:2,
site2:1, total:4. This configuration tells the cluster to maintain two copies of searchable data at the
site where the data originates, one copy of searchable data at site2, and a total of four copies of
searchable data across all sites. The site_search_factor determines how many copies of searchable
data are maintained by the cluster for each site. The site_replication_factor determines how many
copies of raw data are maintained by the cluster for each site. For more information, see Configure multisite indexer clusters with server.conf in the Splunk documentation.
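A sketch of how this could appear in server.conf on the cluster manager (site names and factors are illustrative; releases prior to 8.1 use mode = master):

  [general]
  site = site1

  [clustering]
  mode = manager
  multisite = true
  available_sites = site1,site2,site3,site4
  site_replication_factor = origin:2,total:4
  site_search_factor = origin:2,site2:1,total:4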

Question 5

Which of the following is true regarding Splunk Enterprise's performance? (Select all that apply.)

  • A. Adding search peers increases the maximum size of search results.
  • B. Adding RAM to existing search heads provides additional search capacity.
  • C. Adding search peers increases the search throughput as the search load increases.
  • D. Adding search heads provides additional CPU cores to run more concurrent searches.
Answer:

C, D


Explanation:
The following statements are true regarding Splunk Enterprise performance:
Adding search peers increases the search throughput as search load increases. This is because adding
more search peers distributes the search workload across more indexers, which reduces the load on
each indexer and improves the search speed and concurrency.
Adding search heads provides additional CPU cores to run more concurrent searches. This is because
adding more search heads increases the number of search processes that can run in parallel, which
improves the search performance and scalability. The following statements are false regarding Splunk
Enterprise performance:
Adding search peers does not increase the maximum size of search results. The maximum size of
search results is determined by the maxresultrows setting in the limits.conf file, which is
independent of the number of search peers.
Adding RAM to an existing search head does not provide additional search capacity. The search
capacity of a search head is determined by the number of CPU cores, not the amount of RAM.
Adding RAM to a search head may improve the search performance, but not the search capacity. For more information, see Splunk Enterprise performance in the Splunk documentation.
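For reference, the setting mentioned above lives in limits.conf; a minimal sketch showing the documented default value:

  [searchresults]
  maxresultrows = 50000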

Question 6

Which Splunk Enterprise offering has its own license?

  • A. Splunk Cloud Forwarder
  • B. Splunk Heavy Forwarder
  • C. Splunk Universal Forwarder
  • D. Splunk Forwarder Management
Answer:

C


Explanation:
The Splunk Universal Forwarder is the only Splunk Enterprise offering that has its own license. The
Splunk Universal Forwarder license allows the forwarder to send data to any Splunk Enterprise or
Splunk Cloud instance without consuming any license quota. The Splunk Heavy Forwarder does not
have its own license, but rather consumes the license quota of the Splunk Enterprise or Splunk Cloud
instance that it sends data to. The Splunk Cloud Forwarder and the Splunk Forwarder Management
are not separate Splunk Enterprise offerings, but rather features of the Splunk Cloud service. For
more information, see [About forwarder licensing] in the Splunk documentation.

Question 7

Which component in the splunkd.log will log information related to bad event breaking?

  • A. Audittrail
  • B. EventBreaking
  • C. IndexingPipeline
  • D. AggregatorMiningProcessor
Answer:

D


Explanation:
The AggregatorMiningProcessor component in the splunkd.log file will log information related to
bad event breaking. The AggregatorMiningProcessor is responsible for breaking the incoming data
into events and applying the props.conf settings. If there is a problem with the event breaking, such
as incorrect timestamps, missing events, or merged events, the AggregatorMiningProcessor will log
the error or warning messages in the splunkd.log file. The Audittrail component logs information
about the audit events, such as user actions, configuration changes, and search activity. The
EventBreaking component logs information about the event breaking rules, such as the
LINE_BREAKER and SHOULD_LINEMERGE settings. The IndexingPipeline component logs information
about the indexing pipeline, such as the parsing, routing, and indexing phases. For more information,
see About Splunk Enterprise logging and [Configure event line breaking] in the Splunk documentation.
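As an illustration, event breaking is controlled by props.conf settings such as the following (the sourcetype name, timestamp format, and regexes are hypothetical):

  [my_custom_sourcetype]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = ([\r\n]+)
  TIME_PREFIX = ^
  MAX_TIMESTAMP_LOOKAHEAD = 25
  TIME_FORMAT = %Y-%m-%d %H:%M:%S

Misconfigured settings of this kind are what typically produce AggregatorMiningProcessor warnings in splunkd.log.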

Question 8

Which Splunk server role regulates the functioning of an indexer cluster?

  • A. Indexer
  • B. Deployer
  • C. Master Node
  • D. Monitoring Console
Answer:

C


Explanation:
The master node is the Splunk server role that regulates the functioning of the indexer cluster. The
master node coordinates the activities of the peer nodes, such as data replication, data searchability,
and data recovery. The master node also manages the cluster configuration bundle and distributes it
to the peer nodes. The indexer is the Splunk server role that indexes the incoming data and makes it
searchable. The deployer is the Splunk server role that distributes apps and configuration updates to
the search head cluster members. The monitoring console is the Splunk server role that monitors the
health and performance of the Splunk deployment. For more information, see
About indexer clusters and index replication in the Splunk documentation.
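A minimal sketch of how a peer node is pointed at the master (manager) node in server.conf (the URI and key are placeholders; newer releases use mode = peer and manager_uri):

  [replication_port://9887]

  [clustering]
  mode = slave
  master_uri = https://cm.example.com:8089
  pass4SymmKey = <your_cluster_secret>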

Question 9

When adding or rejoining a member to a search head cluster, the following error is displayed:
Error pulling configurations from the search head cluster captain; consider performing a destructive
configuration resync on this search head cluster member.
What corrective action should be taken?

  • A. Restart the search head.
  • B. Run the splunk apply shcluster-bundle command from the deployer.
  • C. Run the clean raft command on all members of the search head cluster.
  • D. Run the splunk resync shcluster-replicated-config command on this member.
Answer:

D


Explanation:
When adding or rejoining a member to a search head cluster, and the following error is displayed:
Error pulling configurations from the search head cluster captain; consider performing a destructive
configuration resync on this search head cluster member.
The corrective action that should be taken is to run the splunk resync shcluster-replicated-config
command on this member. This command will delete the existing configuration files on this member
and replace them with the latest configuration files from the captain. This will ensure that the
member has the same configuration as the rest of the cluster. Restarting the search head, running
the splunk apply shcluster-bundle command from the deployer, or running the clean raft command
on all members of the search head cluster are not the correct actions to take in this scenario. For
more information, see Resolve configuration inconsistencies across cluster members in the Splunk documentation.
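A sketch of the recovery and verification steps run on the affected member:

  splunk resync shcluster-replicated-config
  splunk show shcluster-status

The second command is simply a way to confirm afterwards that the member has rejoined the cluster and sees the current captain.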

Question 10

Which of the following commands is used to clear the KV store?

  • A. splunk clean kvstore
  • B. splunk clear kvstore
  • C. splunk delete kvstore
  • D. splunk reinitialize kvstore
Answer:

A


Explanation:
The splunk clean kvstore command is used to clear the KV store. This command will delete all the
collections and documents in the KV store and reset it to an empty state. This command can be
useful for troubleshooting KV store issues or resetting the KV store data. The splunk clear kvstore,
splunk delete kvstore, and splunk reinitialize kvstore commands are not valid Splunk commands. For
more information, see Use the CLI to manage the KV store in the Splunk documentation.
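A minimal sketch of the typical sequence (splunkd must be stopped before the clean, and the --local flag limits the operation to this instance):

  splunk stop
  splunk clean kvstore --local
  splunk start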
