MuleSoft mcia-level-1 practice test

MuleSoft Certified Integration Architect - Level 1 Exam

Last exam update: Jul 20, 2024
Page 1 out of 8. Viewing questions 1-15 out of 130

Question 1

An ABC Farms project team is planning to build a new API that is required to work with data from
different domains across the organization.
The organization has a policy that all project teams should leverage existing investments by reusing
existing APIs and related resources and documentation that other project teams have already
developed and deployed.
To support reuse, where on Anypoint Platform should the project team go to discover and read
existing APIs, discover related resources and documentation, and interact with mocked versions of
those APIs?
A. Design Center
B. API Manager
C. Runtime Manager
D. Anypoint Exchange

Answer:

D

Explanation:
The mocking service is a feature of Anypoint Platform and runs continuously. You can run the
mocking service from the text editor, the visual editor, and from Anypoint Exchange. You can
simulate calls to the API in API Designer before publishing the API specification to Exchange or in
Exchange after publishing the API specification.
Reference:
https://docs.mulesoft.com/design-center/design-mocking-service


Question 2

An external web UI application currently accepts occasional HTTP requests from client web browsers
to change (insert, update, or delete) inventory pricing information in an inventory system's database.
Each inventory pricing change must be transformed and then synchronized with multiple customer
experience systems in near real-time (in under 10 seconds). New customer experience systems are
expected to be added in the future.
The database is used heavily and limits the number of SELECT queries that can be made to the
database to 10 requests per hour per user.
What is the most scalable, idiomatic (used for its intended purpose), decoupled, reusable, and
maintainable integration mechanism available to synchronize each inventory pricing change with the
various customer experience systems in near real-time?

  • A. Write a Mule application with a Database On Table Row event source configured for the inventory pricing database, with the watermark attribute set to an appropriate database column. In the same flow, use a Scatter-Gather to call each customer experience system's REST API with transformed inventory-pricing records
  • B. Add a trigger to the inventory-pricing database table so that for each change to the inventory pricing database, a stored procedure is called that makes a REST call to a Mule application. Write the Mule application to publish each Mule event as a message to an Anypoint MQ exchange. Write other Mule applications to subscribe to the Anypoint MQ exchange, transform each received message, and then update each Mule application's corresponding customer experience system(s)
  • C. Replace the external web UI application with a Mule application to accept HTTP requests from client web browsers. In the same Mule application, use a Batch Job scope to test if the database request will succeed, aggregate pricing changes within a short time window, and then update both the inventory pricing database and each customer experience system using a Parallel For Each scope
  • D. Write a Mule application with a Database On Table Row event source configured for the inventory pricing database, with the ID attribute set to an appropriate database column. In the same flow, use a Batch Job scope to publish transformed inventory-pricing records to an Anypoint MQ queue. Write other Mule applications to subscribe to the Anypoint MQ queue, transform each received message, and then update each Mule application's corresponding customer experience system(s)
Answer:

B
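As an illustration only (the flow, path, and config names below are invented for the example, not taken from the question), the publishing half of option B might be sketched in Mule 4 XML like this:

```xml
<!-- Sketch: receives the REST call made by the database trigger's stored
     procedure and fans the change out via an Anypoint MQ message exchange,
     so new customer experience systems can subscribe without code changes. -->
<flow name="inventory-price-change-publisher">
  <http:listener config-ref="HTTP_Listener_config" path="/price-changes"/>
  <anypoint-mq:publish config-ref="Anypoint_MQ_Config"
                       destination="inventory-price-changes-exchange"/>
</flow>
```

Each subscribing Mule application would then consume from its own queue bound to the exchange, transform the message, and update its customer experience system.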


Question 3

In Anypoint Platform, a company wants to configure multiple identity providers (IdPs) for multiple
lines of business (LOBs). Multiple business groups, teams, and environments have been defined for
these LOBs.
What Anypoint Platform feature can use multiple IdPs across the company's business groups, teams,
and environments?

  • A. MuleSoft-hosted (CloudHub) dedicated load balancers
  • B. Client (application) management
  • C. Virtual private clouds
  • D. Permissions
Answer:

B

Explanation:
In Anypoint Platform, identity management for single sign-on is configured at the organization (root) level, whereas client (application) management supports configuring multiple client providers and assigning different providers to individual business groups and environments. Client management is therefore the feature that can use multiple IdPs across the company's business groups, teams, and environments. Dedicated load balancers, VPCs, and permissions do not provide IdP configuration.
Reference:
https://docs.mulesoft.com/access-management/external-identity


Question 4

A Mule application uses APIkit for SOAP to implement a SOAP web service. The Mule application has
been deployed to a CloudHub worker in a testing environment.
The integration testing team wants to use a SOAP client to perform integration testing. To carry out
the integration tests, the integration team must obtain the interface definition for the SOAP web
service.
What is the most idiomatic (used for its intended purpose) way for the integration testing team to
obtain the interface definition for the deployed SOAP web service in order to perform integration
testing with the SOAP client?

  • A. Retrieve the OpenAPI Specification file(s) from API Manager
  • B. Retrieve the WSDL file(s) from the deployed Mule application
  • C. Retrieve the RAML file(s) from the deployed Mule application
  • D. Retrieve the XML file(s) from Runtime Manager
Answer:

B

Explanation:
APIkit for SOAP generates the service implementation from a WSDL, and the deployed Mule application exposes that WSDL at runtime (typically by appending ?wsdl to the service endpoint URL). Retrieving the WSDL file(s) from the deployed Mule application therefore gives the testing team the exact interface definition of the running SOAP web service. OpenAPI and RAML specifications describe REST APIs, not SOAP web services.


Question 5

A Mule application is running on a customer-hosted Mule runtime in an organization's network. The
Mule application acts as a producer of asynchronous Mule events. Each Mule event must be
broadcast to all interested external consumers outside the Mule application. The Mule events should
be published in a way that is guaranteed in normal situations and also minimizes duplicate delivery
in less frequent failure scenarios.
The organizational firewall is configured to only allow outbound traffic on ports 80 and 443. Some
external event consumers are within the organizational network, while others are located outside
the firewall.
What Anypoint Platform service is most idiomatic (used for its intended purpose) for publishing
these Mule events to all external consumers while addressing the desired reliability goals?

  • A. CloudHub VM queues
  • B. Anypoint MQ
  • C. Anypoint Exchange
  • D. CloudHub Shared Load Balancer
Answer:

B


Explanation:
Anypoint MQ is a fully hosted, multi-tenant cloud messaging service accessed over HTTPS (port 443), so both the internal and the external consumers can reach it through the firewall. Message exchanges fan each published message out to every subscribed queue, supporting current and future consumers, while ACK/NACK acknowledgment semantics provide guaranteed delivery in normal situations and minimize duplicate delivery in failure scenarios.
Reference:
https://docs.mulesoft.com/mq/


Question 6

An organization is designing an integration Mule application to process orders by submitting them to
a back-end system for offline processing. Each order will be received by the Mule application through
an HTTPS POST and must be acknowledged immediately. Once acknowledged, the order will be
submitted to a back-end system. Orders that cannot be successfully submitted due to rejections from
the back-end system will need to be processed manually (outside the back-end system).
The Mule application will be deployed to a customer-hosted runtime and is able to use an existing
ActiveMQ broker if needed. The ActiveMQ broker is located inside the organizations firewall. The
back-end system has a track record of unreliability due to both minor network connectivity issues
and longer outages.
What idiomatic (used for their intended purposes) combination of Mule application components and
ActiveMQ queues are required to ensure automatic submission of orders to the back-end system
while supporting but minimizing manual order processing?

  • A. An Until Successful scope to call the back-end system. One or more ActiveMQ long-retry queues. One or more ActiveMQ dead-letter queues for manual processing
  • B. One or more On Error scopes to assist calling the back-end system. An Until Successful scope containing VM components for long retries. A persistent dead-letter VM queue configured in CloudHub
  • C. One or more On Error scopes to assist calling the back-end system. One or more ActiveMQ long-retry queues. A persistent dead-letter object store configured in the CloudHub Object Store service
  • D. A Batch Job scope to call the back-end system. An Until Successful scope containing Object Store components for long retries. A dead-letter object store configured in the Mule application
Answer:

A
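As a sketch of the pattern in option A (queue names, retry counts, and config names are invented for the example), the retry flow might look like this in Mule 4 XML:

```xml
<!-- Sketch: orders are consumed from an ActiveMQ retry queue; the back-end
     call is retried via Until Successful, and orders that still fail are
     parked on a dead-letter queue for manual processing. -->
<flow name="submit-order-to-backend">
  <jms:listener config-ref="ActiveMQ_Config" destination="orders.retry"/>
  <until-successful maxRetries="5" millisBetweenRetries="60000">
    <http:request config-ref="Backend_HTTP_Config" method="POST" path="/orders"/>
  </until-successful>
  <error-handler>
    <on-error-continue>
      <!-- retries exhausted: hand the order over for manual processing -->
      <jms:publish config-ref="ActiveMQ_Config" destination="orders.dlq"/>
    </on-error-continue>
  </error-handler>
</flow>
```

Because the ActiveMQ broker is inside the firewall and the runtime is customer-hosted, the queues survive both the minor connectivity issues and the longer back-end outages.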



Question 7

The ABC company has an Anypoint Runtime Fabric on VMs/Bare Metal (RTF-VM) appliance installed
on its own customer-hosted AWS infrastructure.
Mule applications are deployed to this RTF-VM appliance. As part of the company standards, the
Mule application logs must be forwarded to an external log management tool (LMT).
Given the company's current setup and requirements, what is the most idiomatic (used for its
intended purpose) way to send Mule application logs to the external LMT?

  • A. In RTF-VM, install and configure the external LMT's log-forwarding agent
  • B. In RTF-VM, edit the pod configuration to automatically install and configure an Anypoint Monitoring agent
  • C. In each Mule application, configure custom Log4j settings
  • D. In RTF-VM, configure the out-of-the-box external log forwarder
Answer:

A


Explanation:
Reference:
https://help.mulesoft.com/s/article/Enable-external-log-forwarding-for-Mule-applications-deployed-in-RTF


Question 8

An organization has deployed both Mule and non-Mule API implementations to integrate its
customer and order management systems. All the APIs are available to REST clients on the public
internet.
The organization wants to monitor these APIs by running health checks: for example, to determine if
an API can properly accept and process requests. The organization does not have subscriptions to any
external monitoring tools and also does not want to extend its IT footprint.
What Anypoint Platform feature provides the most idiomatic (used for its intended purpose) way to
monitor the availability of both the Mule and the non-Mule API implementations?

  • A. API Functional Monitoring
  • B. Runtime Manager
  • C. API Manager
  • D. Anypoint Visualizer
Answer:

A

Explanation:
API Functional Monitoring runs scheduled functional tests (health checks) against API endpoints to verify that they can properly accept and process requests. Because the tests are HTTP-based, they work against both Mule and non-Mule API implementations on the public internet, and they require no external monitoring tools or additional IT footprint. Anypoint Visualizer displays the application network but does not run health checks.
Reference:
https://docs.mulesoft.com/api-functional-monitoring/afm-overview


Question 9

An organization has implemented a continuous integration (CI) lifecycle that promotes Mule
applications through code, build, and test stages. To standardize the organization's CI journey, a new
dependency control approach is being designed to store artifacts that include information such as
dependencies, versioning, and build promotions.
To implement these process improvements, the organization will now require developers to maintain
all dependencies related to Mule application code in a shared location.
What is the most idiomatic (used for its intended purpose) type of system the organization should
use in a shared location to standardize all dependencies related to Mule application code?

  • A. A MuleSoft-managed repository at repository.mulesoft.org
  • B. A binary artifact repository
  • C. API Community Manager
  • D. The Anypoint Object Store service at cloudhub.io
Answer:

B

Explanation:
A binary artifact repository (for example, Nexus or Artifactory) is the standard shared system for storing build artifacts along with their dependency, versioning, and build-promotion metadata, and it integrates directly with Maven-based Mule builds and CI pipelines. API Community Manager is for building external developer communities, not for dependency control.


Question 10

A Mule application is synchronizing customer data between two different database systems.
What is the main benefit of using eXtended Architecture (XA) transactions over local transactions to
synchronize these two different database systems?

  • A. An XA transaction synchronizes the database systems with the least amount of Mule configuration or coding
  • B. An XA transaction handles the largest number of requests in the shortest time
  • C. An XA transaction automatically rolls back operations against both database systems if any operation fails
  • D. An XA transaction writes to both database systems as fast as possible
Answer:

C

Explanation:
An XA transaction uses a two-phase commit protocol to coordinate multiple transactional resources: if any operation fails, the transaction manager automatically rolls back the operations against both database systems, keeping them consistent. This atomicity across resources is the main benefit; XA actually adds coordination overhead, so it is not a performance optimization.
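For illustration (config names, table names, and SQL here are invented), an XA transaction spanning two database systems might be sketched in Mule 4 XML with a Try scope:

```xml
<!-- Sketch: both updates join one XA transaction; if either fails, the
     transaction manager rolls back both database systems together.
     The Mule runtime's built-in transaction manager coordinates the
     two-phase commit on a customer-hosted runtime. -->
<flow name="customer-sync-flow">
  <try transactionalAction="ALWAYS_BEGIN" transactionType="XA">
    <db:update config-ref="Database_A_Config">
      <db:sql>UPDATE customer SET status = 'SYNCED' WHERE id = :id</db:sql>
      <db:input-parameters>#[{ id: payload.id }]</db:input-parameters>
    </db:update>
    <db:update config-ref="Database_B_Config">
      <db:sql>UPDATE customer_replica SET status = 'SYNCED' WHERE id = :id</db:sql>
      <db:input-parameters>#[{ id: payload.id }]</db:input-parameters>
    </db:update>
  </try>
</flow>
```

Both database connections must use XA-capable drivers for the two-phase commit to work.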


Question 11


To implement predictive maintenance on its machinery equipment, ACME Tractors has installed
thousands of IoT sensors that will send data for each machinery asset as sequences of JMS messages,
in near real-time, to a JMS queue named SENSOR_DATA on a JMS server. The Mule application
contains a JMS Listener operation configured to receive incoming messages from the JMS server's
SENSOR_DATA JMS queue. The Mule application persists each received JMS message, then sends a
transformed version of the corresponding Mule event to the machinery equipment back-end
systems.
The Mule application will be deployed to a multi-node, customer-hosted Mule runtime cluster.
Under normal conditions, each JMS message should be processed exactly once.
How should the JMS Listener be configured to maximize performance and concurrent message
processing of the JMS queue?

  • A. Set numberOfConsumers = 1 Set primaryNodeOnly = false
  • B. Set numberOfConsumers = 1 Set primaryNodeOnly = true
  • C. Set numberOfConsumers to a value greater than one Set primaryNodeOnly = true
  • D. Set numberOfConsumers to a value greater than one Set primaryNodeOnly = false
Answer:

D


Explanation:
Reference:
https://docs.mulesoft.com/jms-connector/1.8/jms-performance
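As a sketch of option D (the flow and config names, and the consumer count of 4, are invented for the example), the listener configuration might look like this in Mule 4 XML:

```xml
<!-- Sketch: primaryNodeOnly="false" lets every node in the cluster consume
     from the queue, and numberOfConsumers adds concurrent consumers per
     node. The JMS queue itself ensures each message is delivered to only
     one consumer, preserving exactly-once processing in normal conditions. -->
<flow name="sensor-data-flow">
  <jms:listener config-ref="JMS_Config" destination="SENSOR_DATA"
                numberOfConsumers="4" primaryNodeOnly="false"/>
  <!-- persist the message, then transform and send to back-end systems -->
</flow>
```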


Question 12

A Mule application contains a Batch Job scope with several Batch Step scopes. The Batch Job scope is
configured with a batch block size of 25.
A payload with 4,000 records is received by the Batch Job scope.
When there are no errors, how does the Batch Job scope process records within and between the
Batch Step scopes?

  • A. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed in parallel. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope
  • B. The Batch Job scope processes each record block sequentially, one at a time. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one at a time. All 4000 records must be completed before the blocks of records are available to the next Batch Step scope
  • C. The Batch Job scope processes multiple record blocks in parallel, and a block of 25 records can jump ahead to the next Batch Step scope over an earlier block of records. Each Batch Step scope is invoked with one record in the payload of the received Mule event. For each Batch Step scope, all 25 records within a block are processed sequentially, one record at a time. All the records in a block must be completed before the block of 25 records is available to the next Batch Step scope
  • D. The Batch Job scope processes multiple record blocks in parallel. Each Batch Step scope is invoked with a batch of 25 records in the payload of the received Mule event. For each Batch Step scope, all 4000 records are processed in parallel. Individual records can jump ahead to the next Batch Step scope before the rest of the records finish processing in the current Batch Step scope
Answer:

C

Explanation:
The Batch Job scope splits the 4,000 records into blocks of 25 and dispatches the blocks to multiple threads, so blocks are processed in parallel and one block can jump ahead of another. Each Batch Step scope is invoked with one record per Mule event, and the records within a block are processed sequentially by the thread that owns the block. A block must complete a Batch Step before it becomes available to the next Batch Step.
Reference:
https://docs.mulesoft.com/mule-runtime/4.4/batch-processing-concept
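The structure described above might be sketched as follows (flow, job, and step names are invented; blockSize is the only value taken from the question):

```xml
<!-- Sketch: blocks of 25 records are dispatched to threads in parallel;
     within a block, records pass through each step one at a time. -->
<flow name="record-processing-flow">
  <batch:job jobName="recordProcessingJob" blockSize="25">
    <batch:process-records>
      <batch:step name="transformStep">
        <!-- invoked once per record; a block's records arrive sequentially -->
      </batch:step>
      <batch:step name="loadStep">
        <!-- a block enters this step only after all 25 of its records
             have completed the previous step -->
      </batch:step>
    </batch:process-records>
  </batch:job>
</flow>
```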


Question 13

An organization is designing a Mule application that connects to a legacy backend. It has been
reported that the backend services are not highly available and experience downtime quite often. As an
integration architect, which of the approaches below would you propose to achieve the high reliability
goals?

  • A. Configure alerts in the Mule runtime so that the backend team can be notified when services are down
  • B. Implement an Until Successful scope when calling the backend APIs
  • C. Use an On Error Continue scope to call again in case of error
  • D. Create a batch job that sends all requests to the backend as per the availability of the backend APIs
Answer:

B


Explanation:

The correct answer is to implement an Until Successful scope when calling the backend APIs. The Until Successful scope repeatedly triggers its components (including flow references) until they all succeed or until a maximum number of retries is exceeded. The scope provides options to control the maximum number of retries and the interval between retries, and it can execute any sequence of processors that may fail for whatever reason and succeed upon retry.
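As a minimal sketch (the retry values, config name, and path are invented placeholders), the scope might be configured like this in Mule 4 XML:

```xml
<!-- Sketch: retries the unreliable backend call up to 10 times,
     waiting 30 seconds between attempts, before raising an error. -->
<until-successful maxRetries="10" millisBetweenRetries="30000">
  <http:request config-ref="Legacy_Backend_Config" method="POST" path="/orders"/>
</until-successful>
```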


Question 14

An insurance organization is planning to deploy a Mule application to the MuleSoft-hosted runtime
plane. As part of the requirements, the application should be scalable and highly available. It also has
a regulatory requirement which demands that logs be retained for at least 2 years. As an Integration
Architect, what step will you recommend in order to achieve this?

  • A. It is not possible to store logs for 2 years in a CloudHub deployment. An external log management system is required.
  • B. When deploying an application to CloudHub, the log retention period should be selected as 2 years
  • C. When deploying an application to CloudHub, the worker size should be sufficient to store 2 years of data
  • D. The logging strategy should be configured accordingly in the log4j file deployed with the application.
Answer:

A


Explanation:
The correct answer is that it is not possible to store logs for 2 years in a CloudHub deployment; an external log management system is required. CloudHub has a specific log retention policy, as described in the documentation: the platform stores up to 100 MB of logs per app per worker, or up to 30 days of logs, whichever limit is reached first. Once this limit has been reached, the oldest log information is deleted in chunks and is irretrievably lost. The recommended approach is to persist the logs to an external logging system of your choice (such as Splunk, for instance) using a log appender. Note that with this solution the logs are no longer stored on the platform, so any support cases you lodge will require you to provide the appropriate logs for review and case resolution.
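As a sketch of the log-appender approach (the endpoint URL and token property below are invented placeholders, not a verified configuration for any specific log management product), a log4j2.xml deployed with the application might look like:

```xml
<!-- Sketch: a Log4j 2 Http appender posts each log event as JSON to an
     external log management endpoint, so logs persist beyond CloudHub's
     retention limits. URL and Authorization value are placeholders. -->
<Configuration>
  <Appenders>
    <Http name="ExternalLog" url="https://logs.example.com/services/collector">
      <Property name="Authorization" value="Splunk ${sys:logToken}"/>
      <JsonLayout compact="true" eventEol="true"/>
    </Http>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="ExternalLog"/>
    </Root>
  </Loggers>
</Configuration>
```

Many log management vendors also ship a dedicated Log4j appender, which may be preferable to the generic Http appender shown here.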


Question 15

As part of a business requirement, an old CRM system needs to be integrated using a Mule application.
The CRM system is capable of exchanging data only via the SOAP/HTTP protocol. As an integration
architect who follows the API-led approach, which of the steps below will you perform so that you can
share the document with the CRM team?

  • A. Create RAML specification using Design Center
  • B. Create SOAP API specification using Design Center
  • C. Create WSDL specification using text editor
  • D. Create WSDL specification using Design Center
Answer:

C


Explanation:
The correct answer is to create the WSDL specification using a text editor. SOAP services are specified using WSDL, and a client program connecting to a web service can read the WSDL to determine what functions are available on the server. A WSDL specification cannot be created in Design Center, so an external text editor must be used.
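For reference, a WSDL document hand-written in a text editor follows this general skeleton (the service name, namespace, operation, and address below are invented for illustration):

```xml
<!-- Minimal WSDL 1.1 skeleton: messages, a port type (the interface),
     a SOAP binding, and a service with its endpoint address. -->
<definitions name="CustomerService"
    targetNamespace="http://example.com/crm"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:tns="http://example.com/crm">
  <message name="GetCustomerRequest">
    <part name="customerId" type="xsd:string"/>
  </message>
  <message name="GetCustomerResponse">
    <part name="customerName" type="xsd:string"/>
  </message>
  <portType name="CustomerPortType">
    <operation name="GetCustomer">
      <input message="tns:GetCustomerRequest"/>
      <output message="tns:GetCustomerResponse"/>
    </operation>
  </portType>
  <binding name="CustomerBinding" type="tns:CustomerPortType">
    <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="GetCustomer">
      <soap:operation soapAction="GetCustomer"/>
      <input><soap:body use="literal"/></input>
      <output><soap:body use="literal"/></output>
    </operation>
  </binding>
  <service name="CustomerService">
    <port name="CustomerPort" binding="tns:CustomerBinding">
      <soap:address location="http://example.com/crm/soap"/>
    </port>
  </service>
</definitions>
```

This is the document that can be shared with the CRM team and later used to scaffold the Mule application with APIkit for SOAP.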
