Qlik Replicate Practice Test

Exam Title: Qlik Replicate

Last update: Nov 27, 2025
Question 1

Which two task logging components are associated with a Full Load to a target endpoint? (Select
two.)

  • A. TARGET_APPLY
  • B. TARGET_LOAD
  • C. FILE_TRANSFER
  • D. STREAM
  • E. SOURCE_UNLOAD
Answer: B, E


Explanation:
When performing a Full Load to a target endpoint in Qlik Replicate, the task logging components associated with this process are TARGET_LOAD and SOURCE_UNLOAD.
TARGET_LOAD: This component is responsible for loading the data into the target endpoint. It represents the process where Qlik Replicate reads all columns and rows from the source database and creates an exact copy on the target database.
SOURCE_UNLOAD: This component is involved in unloading the data from the source endpoint. It is part of the Full Load process, where the data is read from the source and prepared for transfer to the target.
The other options are not directly associated with the Full Load process to a target endpoint:
TARGET_APPLY relates to the Change Data Capture (CDC) phase, where changes from the source are applied to the target.
FILE_TRANSFER is not a logging component in Qlik Replicate.
STREAM refers to Log Stream tasks, a different type of task configuration used for saving data changes from the transaction log of a single source database and applying them to multiple targets.
For a comprehensive understanding of the task types and options in Qlik Replicate, refer to the official Qlik community articles "Qlik Replicate Task Configuration Options" and "An Introduction to Qlik Replicate Tasks: Full Load vs CDC".

Question 2

Which are valid source endpoint types for Qlik Replicate change processing (CDC)? (Select two.)

  • A. Classic Relational RDBMS
  • B. MS Dynamics direct access
  • C. SAP ECC and Extractors
  • D. Generic REST APIs and Data Lake file formats
Answer: A, C


Explanation:
For Qlik Replicate’s Change Data Capture (CDC) process, the valid source endpoint types include:
A. Classic Relational RDBMS: These are traditional relational database management systems that support CDC. Qlik Replicate can capture changes from these systems using log-based CDC, which reads the database’s transaction logs.
C. SAP ECC and Extractors: SAP ECC (ERP Central Component) and its extractors are also supported as source endpoints for CDC in Qlik Replicate. This allows for the replication of data changes from SAP’s complex data structures.
The other options are not typically associated with CDC in Qlik Replicate:
B. MS Dynamics direct access: While Qlik Replicate can connect to various data sources, MS Dynamics is not commonly listed as a direct source for CDC.
D. Generic REST APIs and Data Lake file formats: REST APIs and Data Lake file formats are not standard sources for CDC because they do not maintain transaction logs, which are essential for CDC to track changes.
For detailed information on setting up source endpoints and enabling CDC, refer to the official Qlik documentation and community articles that discuss the prerequisites and configurations needed for various source endpoints.

Question 3

How can the task diagnostic package be downloaded?

  • A. Open task from overview -> Monitor -> Tools -> Support -> Download diagnostic package
  • B. Open task from overview -> Run -> Tools -?
  • C. Go to server settings -> Logging -> Right-click task -> Support -> Download diagnostic package
  • D. Right-click task from overview -> Download diagnostic package
Answer: A


Explanation:
To download the task diagnostic package in Qlik Replicate, follow these steps:
Open the task from the overview in the Qlik Replicate Console.
Switch to the Monitor view.
Click the Tools toolbar button.
Navigate to Support.
Select Download Diagnostic Package.
This generates a task-specific diagnostics package containing the task log files and various debugging data that may assist in troubleshooting task-related issues. Depending on your browser settings, the file is either downloaded automatically to your designated download folder or you are prompted to download it. The file is named in the format <task_name>__diagnostics__<timestamp>.zip.
The other options do not accurately describe the process for downloading a diagnostic package in Qlik Replicate:
B is incomplete and does not provide a valid path.
C incorrectly suggests going to server settings and logging, which is not the correct procedure.
D suggests a method that is not documented in the official Qlik Replicate help resources.
Therefore, the verified answer is A, as it correctly outlines the steps to download a diagnostic package in Qlik Replicate.
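The filename format above lends itself to simple automation when collecting packages from several tasks. Below is a minimal Python sketch that splits the name on the documented `__diagnostics__` separator; the example filename is hypothetical:

```python
def parse_diagnostics_name(filename: str) -> dict:
    """Split a '<task_name>__diagnostics__<timestamp>.zip' package name."""
    # Drop the ".zip" extension, then split on the fixed separator.
    stem = filename[:-4] if filename.endswith(".zip") else filename
    task_name, timestamp = stem.split("__diagnostics__")
    return {"task": task_name, "timestamp": timestamp}

# Hypothetical example filename:
print(parse_diagnostics_name("orders_task__diagnostics__20250101120000.zip"))
# → {'task': 'orders_task', 'timestamp': '20250101120000'}
```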

Question 4

An operative database can only commit two engines to Qlik Replicate for initial loads at any given
time. How should the task settings be modified?

  • A. Apply Change Processing Tuning and increase the Apply batched changes intervals to 60 seconds
  • B. Qlik Replicate tasks only load one table at a time by default, so the task settings do not need to be modified.
  • C. Apply Full Load Settings to limit the number of engines to two.
  • D. Apply Full Load Tuning to read a maximum number of tables not greater than two.
Answer: C


Explanation:
In a scenario where an operative database can commit only two engines to Qlik Replicate for initial loads, the task settings should be modified so that no more than two tables are loaded at any given time. This can be achieved by:
C. Apply Full Load Settings to limit the number of engines to two: This setting lets you specify the maximum number of tables loaded concurrently during the Full Load operation. By limiting this number to two, you ensure that the operative database’s capacity is not exceeded.
The other options are not suitable:
A. Apply Change Processing Tuning: This option relates to the CDC (Change Data Capture) phase, not the initial Full Load phase. Increasing the apply batched changes interval would not limit the number of engines used during the Full Load.
B. Qlik Replicate tasks only load one table at a time by default: This statement is inaccurate; Qlik Replicate can load multiple tables concurrently, depending on the task settings.
D. Apply Full Load Tuning to read a maximum number of tables not greater than two: Although this option seems similar to the correct answer, it is not a recognized setting in Qlik Replicate’s configuration options.
For detailed guidance on configuring task settings in Qlik Replicate, particularly for managing the number of concurrent loads, refer to the official Qlik community article "Qlik Replicate Task Configuration Options".

Question 5

Which is the default port of Qlik Replicate Server on Linux?

  • A. 3550
  • B. 443
  • C. 80
  • D. 3552
Answer: D


Explanation:
The default port for Qlik Replicate Server on Linux is 3552. This port is used for outbound and inbound communication unless it is overridden during installation or configuration. The official Qlik Replicate documentation states that "Port 3552 (the default rest port) needs to be opened for outbound and inbound communication, unless you override it as described below," confirming that 3552 is the default port to consider when installing and setting up Qlik Replicate on a Linux system.
The other options do not correspond to the default port for Qlik Replicate Server on Linux:
A. 3550: This is not listed as the default port in the documentation.
B. 443: This is commonly the default port for HTTPS traffic, but not for Qlik Replicate Server.
C. 80: This is commonly the default port for HTTP traffic, but not for Qlik Replicate Server.
Therefore, the verified answer is D. 3552, as it is the port designated for Qlik Replicate Server on Linux according to the official documentation.
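As a quick reachability check for that port, the following Python sketch attempts a TCP connection; the host name is a placeholder, and 3552 is assumed to be left at its default:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host -- substitute your Replicate server:
# port_open("replicate-server.example.com", 3552)
```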

Question 6

A Qlik Replicate administrator must deliver data from a source endpoint with minimal impact and
distribute it to several target endpoints.
How should this be achieved in Qlik Replicate?

  • A. Create a LogStream task followed by multiple tasks using an endpoint that reads changes from the log stream staging folder
  • B. Create a task streaming to a dedicated buffer database (e.g., Oracle or MySQL) and consume that database in the following tasks as a source endpoint
  • C. Create a task streaming to a streaming target endpoint (e.g., Kafka) and consume that endpoint in the following tasks as a source endpoint
  • D. Create multiple tasks using the same source endpoint
Answer: C


Explanation:
To deliver data from a source endpoint with minimal impact and distribute it to several target endpoints in Qlik Replicate, the best approach is:
C. Create a task streaming to a streaming target endpoint (e.g., Kafka) and consume that endpoint in the following tasks as a source endpoint: This method allows efficient data distribution with minimal impact on the source system. By streaming data to a platform like Kafka, which is designed for high-throughput, scalable, and fault-tolerant storage, Qlik Replicate can then use this stream as a source for multiple downstream tasks.
The other options are less optimal:
A. Create a LogStream task followed by multiple tasks using an endpoint that reads changes from the log stream staging folder: While this option involves a LogStream, it does not stream to a target endpoint that can be consumed by multiple tasks, which is essential for minimal-impact distribution.
B. Create a task streaming to a dedicated buffer database (e.g., Oracle or MySQL) and consume that database in the following tasks as a source endpoint: This option introduces additional complexity and potential performance overhead by using a buffer database.
D. Create multiple tasks using the same source endpoint: This could increase the load on the source endpoint, which is contrary to the requirement of minimal impact.
For more detail on setting up streaming tasks to target endpoints such as Kafka and on configuring subsequent tasks to consume from these streaming endpoints, refer to the official Qlik documentation on adding and managing target endpoints.

Question 7

Which two endpoints have ARC (Attunity Replicate Connect) CDC (Change Data Capture) agents?
(Select two.)

  • A. IBM IMS
  • B. IBM DB2 for z/OS
  • C. Kafka Source
  • D. SAP HANA
  • E. HP NonStop
Answer: A, E


Explanation:
ARC (Attunity Replicate Connect) CDC agents are used for capturing changes and can be utilized with both relational and non-relational endpoints supported by ARC. The endpoints with ARC CDC agents include:
A. IBM IMS: A database and transaction management system, listed as one of the endpoints supported by ARC CDC agents.
E. HP NonStop: A platform for high-availability servers, also supported by ARC CDC agents.
The other options do not align with the endpoints that have ARC CDC agents:
B. IBM DB2 for z/OS: While DB2 for z/OS is a common database system, it is not mentioned in the context of ARC CDC agents.
C. Kafka Source: Kafka is a streaming platform; while it can be an endpoint for data, it is not listed as having ARC CDC agents.
D. SAP HANA: SAP HANA is an in-memory database and is not specified as having ARC CDC agents.
Therefore, the verified answers are A. IBM IMS and E. HP NonStop, as they are the endpoints that utilize ARC CDC agents for capturing changes.

Question 8

A Qlik Replicate administrator requires data from a CRM application that can be accessed through
different methods. How should this be done?

  • A. Connect directly to the application
  • B. Export tables to CSVs in a shared folder and connect to that
  • C. Connect to the REST API provided by the application
  • D. Connect to the underlying RDBMS
Answer: C


Explanation:
When a Qlik Replicate administrator needs to access data from a CRM application, the most efficient and direct method is often through the application’s REST API. Here’s why:
C. Connect to the REST API provided by the application: Many modern CRM applications provide a REST API for programmatic access to their data. This method is typically supported by data integration tools like Qlik Replicate and allows for seamless, near-real-time data extraction. The REST API provides a direct and efficient way to access the required data without intermediate steps.
A. Connect directly to the application: While this option might seem straightforward, it is not always possible or recommended due to limitations in direct application connections or the lack of a suitable interface for data extraction.
B. Export tables to CSVs in a shared folder and connect to that: This method involves additional steps and can be less efficient. It requires manual intervention to export the data and does not support real-time data access.
D. Connect to the underlying RDBMS: Accessing the underlying relational database management system (RDBMS) can be an option, but it may bypass the business logic implemented in the CRM application and could lead to incomplete or inconsistent data extraction.
Given these considerations, the REST API method (C) is generally the preferred approach for accessing CRM application data in a structured and programmable manner, which aligns with the capabilities of Qlik Replicate.
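As an illustration of the REST approach, here is a minimal Python sketch that prepares an authenticated request for CRM records. The base URL, token, and `/contacts` resource are hypothetical placeholders, not part of any specific CRM’s API; the request is only built, not sent:

```python
import urllib.request

BASE_URL = "https://crm.example.com/api/v1"   # hypothetical CRM endpoint
API_TOKEN = "YOUR_TOKEN"                      # placeholder credential

def build_contacts_request(page: int = 1) -> urllib.request.Request:
    """Prepare an authenticated GET request for one page of CRM contacts."""
    return urllib.request.Request(
        f"{BASE_URL}/contacts?page={page}",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Accept": "application/json",
        },
    )

req = build_contacts_request(page=2)
print(req.full_url)  # → https://crm.example.com/api/v1/contacts?page=2
```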

Question 9

A customer needs to run daily reports about the changes that have occurred within the past 24 hours.
When setting up a new Qlik Replicate task, which option must be set to see these changes?

  • A. Apply Changes
  • B. Store Changes
  • C. Stage Changes
  • D. Full Load
Answer: B


Explanation:
To run daily reports about the changes that have occurred within the past 24 hours using Qlik Replicate, the option that must be set is Store Changes. This feature enables Qlik Replicate to keep a record of the changes that have occurred over a specified period, in this case the past 24 hours.
B. Store Changes: This setting allows Qlik Replicate to capture and store the changes made to the data in the source system. The stored changes can then be used to generate reports that reflect the data modifications within the desired timeframe.
The other options are not designed for running daily change reports:
A. Apply Changes: This option applies the captured changes to the target system, which is a different stage of the replication process.
C. Stage Changes: Staging changes involves temporarily storing the changes before they are applied to the target, which is not the same as storing changes for reporting purposes.
D. Full Load: The Full Load option replicates the entire dataset from the source to the target, which is not necessary for generating reports based on changes within a specific timeframe.
For more information on configuring the Store Changes option and generating reports from the stored changes, refer to the official Qlik documentation and community discussions on setting up replication tasks and managing change data.
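Once Store Changes is enabled, the captured changes land in Change Tables on the target, by default named with a `__ct` suffix and carrying `header__` metadata columns. A minimal sketch of building the daily report query follows; the suffix, column names, and SQL timestamp literal should be verified against your task’s Store Changes settings and target dialect:

```python
from datetime import datetime, timedelta, timezone

def daily_changes_sql(table: str, hours: int = 24) -> str:
    """Build a SQL query over a Change Table covering the last `hours` hours."""
    since = datetime.now(timezone.utc) - timedelta(hours=hours)
    return (
        f"SELECT * FROM {table}__ct "
        f"WHERE header__timestamp >= '{since:%Y-%m-%d %H:%M:%S}' "
        f"ORDER BY header__change_seq"
    )

print(daily_changes_sql("ORDERS"))
```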

Question 10

During the process of handling data errors, the Qlik Replicate administrator recognizes that data
might be truncated. Which process should be used to maintain full table integrity?

  • A. Stop Task
  • B. Suspend Table
  • C. Ignore Record
  • D. Log record to the exceptions table
Answer: D


Explanation:
When handling data errors in Qlik Replicate, especially when data might be truncated, maintaining full table integrity is crucial. The best approach is to log the record to the exceptions table:
D. Log record to the exceptions table: This option allows the task to continue processing while ensuring that any records that could not be applied due to errors, such as truncation, are captured for review and resolution. The exceptions table serves as a repository for such records, allowing administrators to address the issues without losing the integrity of the full dataset.
A. Stop Task: Stopping the task prevents further data processing, but it does not provide a mechanism to handle the specific records that caused the error.
B. Suspend Table: Suspending the table halts processing for that specific table, but again, it does not address the individual records causing truncation issues.
C. Ignore Record: Ignoring the record would mean that the truncated data is not processed, potentially leading to data loss and compromised table integrity.
Therefore, the verified answer is D. Log record to the exceptions table, as it allows for the identification and resolution of specific data errors while preserving the integrity of the overall table data.
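Records routed this way end up in Replicate’s apply-exceptions control table, named `attrep_apexceptions` by default on the target. A minimal sketch of building a review query follows; the schema name is a placeholder, and the exact column set should be checked against your target endpoint’s control tables:

```python
def exceptions_sql(schema: str, task: str) -> str:
    """Build a query listing recent apply exceptions for a single task."""
    return (
        f"SELECT TABLE_NAME, ERROR_TIME, STATEMENT, ERROR "
        f"FROM {schema}.attrep_apexceptions "
        f"WHERE TASK_NAME = '{task}' "
        f"ORDER BY ERROR_TIME DESC"
    )

# Placeholder schema and task name:
print(exceptions_sql("dbo", "orders_task"))
```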

Page 1 out of 5
Viewing questions 1-10 out of 60