Microsoft AI-100 Practice Test

Designing and Implementing an Azure AI Solution

Note: Test case questions are at the end of the exam.
Last exam update: May 13, 2024
Page 1 of 14. Viewing questions 1-15 out of 219

Question 1 Topic 3, Mixed Questions

You are developing an application that uses an API pipeline. The application consumes and analyzes streaming data.
Your API pipeline must perform face detection and sentiment analysis.
What actions should you take?

  • A. Use the Computer Vision API in the pipeline.
  • B. Use the Face API in the pipeline.
  • C. Use the Text Analytics API in the pipeline.
  • D. Use the Video Indexer in the pipeline.
Answer:

D


Explanation:
Azure Video Indexer is a cloud application built on Azure Media Analytics, Azure Search, and Cognitive Services (such as the
Face API, Microsoft Translator, the Computer Vision API, and Custom Speech Service). It enables you to extract insights
from your videos using the Video Indexer video and audio models described below:
Visual text recognition (OCR): extracts text that is visually displayed in the video.
Audio transcription: converts speech to text in 12 languages and allows extensions.
Sentiment analysis: identifies positive, negative, and neutral sentiments from speech and visual text.
Face detection: detects and groups faces appearing in the video.
Reference: https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-overview
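As an illustrative sketch only, the snippet below assembles the kind of REST upload URL the Video Indexer API expects; the location, account ID, access token, and video name are hypothetical placeholders.

```python
from urllib.parse import urlencode

def build_upload_url(location, account_id, access_token, video_name):
    # URL shape of the Video Indexer upload operation; all values are placeholders.
    base = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos"
    query = urlencode({"accessToken": access_token, "name": video_name})
    return f"{base}?{query}"

url = build_upload_url("trial", "<account-id>", "<access-token>", "conference.mp4")
print(url)
```

POSTing the video to this URL queues it for indexing; the insights (OCR, transcript, sentiment, faces) are then retrieved from the video's index.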


Question 2 Topic 3, Mixed Questions

Your company's developers have created an Azure Data Factory pipeline that moves data from an on-premises server to
Azure Storage. The pipeline consumes Azure Cognitive Services APIs.
You need to deploy the pipeline. Your solution must minimize custom code.
You use Azure-SSIS Integration Runtime (IR) to move data to the cloud and Azure API Apps to consume Cognitive Services
APIs.
Does this action accomplish your objective?

  • A. Yes, it does
  • B. No, it does not
Answer:

B


Explanation:
With the Azure-SSIS Integration Runtime, you would still need to write custom code to consume the Cognitive Services APIs, so this does not minimize custom code.
Reference: https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime https://docs.microsoft.com/en-
us/azure/logic-apps/logic-apps-examples-and-scenarios


Question 3 Topic 3, Mixed Questions

Your company's developers have created an Azure Data Factory pipeline that moves data from an on-premises server to
Azure Storage. The pipeline consumes Azure Cognitive Services APIs.
You need to deploy the pipeline. Your solution must minimize custom code.
You use Integration Runtime to move data to the cloud and Azure API Management to consume Cognitive Services APIs.
Does this action accomplish your objective?

  • A. Yes, it does
  • B. No, it does not
Answer:

B


Explanation:
Azure API Management is a turnkey solution for publishing APIs to external and internal customers, but it does not remove the need for custom code to call the Cognitive Services APIs. A low-code service such as Azure Logic Apps would better minimize custom code.
Reference: https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime https://docs.microsoft.com/en-
us/azure/logic-apps/logic-apps-examples-and-scenarios


Question 4 Topic 3, Mixed Questions

You have created an AI solution that uses several PersonGroup objects.
One of the PersonGroup objects contains thousands of entries and cannot accept any new entries.
You want to be able to add new entries, and all entries must remain identifiable.
Which of the following actions should you take?

  • A. Compress the entries from the PersonGroup object.
  • B. Create another PersonGroup object with the same name.
  • C. Migrate the PersonGroup to a LargePersonGroup object.
  • D. Archive some of the entries from the PersonGroup object.
Answer:

C


Explanation:
LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain
up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale
operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new
architecture.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-use-large-scale
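As a minimal sketch (the endpoint, group ID, and key are placeholders), the migration starts by creating a LargePersonGroup via the Face REST API; existing persons and faces are then re-added to the new group, since there is no in-place conversion:

```python
import json

def build_create_large_person_group(endpoint, group_id, name, subscription_key):
    # Request shape of the Face API "LargePersonGroup - Create" operation.
    return {
        "method": "PUT",
        "url": f"{endpoint}/face/v1.0/largepersongroups/{group_id}",
        "headers": {
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/json",
        },
        "body": json.dumps({"name": name}),
    }

req = build_create_large_person_group(
    "https://<resource>.cognitiveservices.azure.com", "staff", "Staff", "<key>")
print(req["method"], req["url"])
```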
Topic 3, Implement and monitor AI solutions


Question 5 Topic 3, Mixed Questions

You are designing an Azure Batch AI solution that will perform image recognition. The solution will be used to train several
Azure Machine Learning models.
You need to recommend a compute infrastructure for the solution that minimizes the processing time.
What should you recommend?

  • A. Compute optimized virtual machines.
  • B. Memory optimized virtual machines.
  • C. GPU optimized virtual machines.
  • D. General purpose virtual machines.
Answer:

C


Explanation:
GPU optimized virtual machines are specialized virtual machines targeted at heavy graphics rendering and video editing, as
well as model training and inference (ND-series) with deep learning.
Reference:
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu


Question 6 Topic 3, Mixed Questions

You are developing an app that consumes data from several Azure IoT Edge devices.
You need to implement a storage solution for the app. Your solution must allow data to be queried in real-time as it streams
into the solution. You need to ensure that your solution provides the least amount of latency for loading data.
You want the data files to persist on the devices for at least 14 days.
What storage solution should you implement?

  • A. Azure Data Lake Analytics
  • B. Azure Data Factory Edge
  • C. Azure HDInsight Hadoop cluster
  • D. Azure SQL database with In-Memory OLTP
Answer:

C


Explanation:
You can use HDInsight to process streaming data that's received in real time from a variety of devices.
Internet of Things (IoT)
You can use HDInsight to build applications that extract critical insights from data. You can also use Azure Machine Learning
on top of that to predict future trends for your business.
By combining enterprise-scale R analytics software with the power of Apache Hadoop and Apache Spark, Microsoft R
Server for HDInsight gives you the scale and performance you need. Multi-threaded math libraries and transparent
parallelization in R Server can handle up to 1,000 times more data at up to 50 times the speed of open-source R, which helps
you train more accurate models for better predictions.
Reference:
https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-introduction


Question 7 Topic 3, Mixed Questions

You are developing a bot for an ecommerce application.
You must implement user authentication for the bot. You want the authentication process to be encrypted.
What actions should you take?

  • A. Make use of NTLM and smart cards
  • B. Make use of SSL/TLS and JSON Web Token (JWT)
  • C. Make use of API keys and access keys
  • D. Make use of HTTPS and Kerberos
Answer:

B


Explanation:
Your bot communicates with the Bot Connector service using HTTP over a secured channel (SSL/TLS).
JSON Web Tokens are used to encode tokens that are sent to and from the bot.
Reference:
https://docs.microsoft.com/en-us/azure/bot-service/rest-api/bot-framework-rest-connector-authentication
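In the Bot Connector flow, the JWTs are issued and signed by the Microsoft identity platform and verified against its published signing keys; purely to illustrate the structure of a JWT, here is a simplified HS256 sketch using only the standard library (the shared secret is an invented placeholder):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # Base64url without padding, as used in JWTs.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = make_jwt({"sub": "user-123", "aud": "my-bot"}, b"shared-secret")
print(verify_jwt(token, b"shared-secret"))  # True
```

The signature binds the header and payload, so any tampering invalidates the token; SSL/TLS then protects the token in transit.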


Question 8 Topic 3, Mixed Questions

You are developing an app for a conference provider. The app will use speech-to-text to provide transcription at a
conference in English. It will also use the Translator Text API to translate the transcripts to the language preferred by the
conference attendees.
You test the translation features on the app and discover that the translations are fairly poor.
You want to improve the quality of the translations.
Which of the following actions should you take?

  • A. Use Text Analytics to perform the translations.
  • B. Use the Language Understanding (LUIS) API to perform the translations.
  • C. Perform the translations by training a custom model using Custom Translator.
  • D. Use the Computer Vision API to perform the translations.
Answer:

C


Explanation:
Custom Translator is a feature of the Microsoft Translator service. With Custom Translator, enterprises, app developers, and
language service providers can build neural translation systems that understand the terminology used in their own business
and industry. The customized translation system will then seamlessly integrate into existing applications, workflows and
websites.
Custom Translator allows users to customize Microsoft Translator's advanced neural machine translation for Translator's
supported neural translation languages. Custom Translator can be used to customize text translation through the Microsoft
Translator Text API and speech translation through the Microsoft Speech services.
Reference:
https://www.microsoft.com/en-us/translator/business/customization/
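A trained Custom Translator model is invoked through the standard Translator Text API v3 endpoint by passing its category ID; a sketch of the request shape (the category ID is a placeholder):

```python
from urllib.parse import urlencode

def build_translate_request(text, to_lang, category_id=None):
    # /translate request for the Translator Text API v3; "category" selects
    # a Custom Translator model instead of the general model.
    params = {"api-version": "3.0", "to": to_lang}
    if category_id:
        params["category"] = category_id
    return {
        "url": "https://api.cognitive.microsofttranslator.com/translate?" + urlencode(params),
        "body": [{"Text": text}],
    }

req = build_translate_request("Welcome to the keynote.", "fr", category_id="<category-id>")
print(req["url"])
```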


Question 9 Topic 3, Mixed Questions

You are designing an AI system for your company. Your system will consume several Apache Kafka data streams.
You want your system to be able to process the data streams at scale and in real-time.
Which of the following actions should you take?

  • A. Make use of Azure HDInsight with Apache HBase
  • B. Make use of Azure HDInsight with Apache Spark
  • C. Make use of Azure HDInsight with Apache Storm
  • D. Make use of Azure HDInsight with Microsoft Machine Learning Server
Answer:

C


Explanation:
Apache Storm is a distributed, fault-tolerant, open-source computation system. You can use Storm to process streams of
data in real time with Apache Hadoop. Storm solutions can also provide guaranteed processing of data, with the ability to
replay data that wasn't successfully processed the first time.
Reference:
https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-streaming-at-scale-overview https://docs.microsoft.com/en-
us/azure/hdinsight/storm/apache-storm-overview
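Storm topologies are usually written in Java; purely to illustrate the ack/replay guarantee described above, here is a small language-neutral simulation (the flaky handler and retry limit are invented for the example):

```python
import collections

def process_with_replay(stream, handler, max_retries=3):
    """Conceptual sketch of Storm-style guaranteed processing: tuples that
    fail are replayed until acked or the retry budget is exhausted."""
    pending = collections.deque((item, 0) for item in stream)
    acked, failed = [], []
    while pending:
        item, attempts = pending.popleft()
        try:
            handler(item)
            acked.append(item)                        # "ack" the tuple
        except Exception:
            if attempts + 1 < max_retries:
                pending.append((item, attempts + 1))  # replay the tuple
            else:
                failed.append(item)
    return acked, failed

calls = collections.Counter()
def flaky(item):
    calls[item] += 1
    if item == "b" and calls[item] == 1:
        raise RuntimeError("transient failure")

acked, failed = process_with_replay(["a", "b", "c"], flaky)
print(acked, failed)  # ['a', 'c', 'b'] []
```

Note that "b" succeeds on its second attempt, so nothing is lost even though the first attempt failed.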


Question 10 Topic 3, Mixed Questions

You are developing an application that uses the Computer Vision API.
Your application will perform the following steps:
Take data from an on-premises database and load it to an Azure Blob storage account.
Connect to an Azure Machine Learning service.
You need to orchestrate the workflow.
What should you use?

  • A. Azure Kubernetes Service (AKS)
  • B. Azure Pipelines
  • C. Azure Data Factory
  • D. An Azure HDInsight cluster
  • E. Azure Data Lake
Answer:

C


Explanation:
With Azure Data Factory, you can use workflows to orchestrate data integration and data transformation processes at scale.
You can build data integration pipelines and combine big data processing and machine learning by using the visual interface.
Reference:
https://azure.microsoft.com/en-us/services/data-factory/
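As a rough sketch of such an orchestration, the dictionary below outlines a Data Factory pipeline in which a Copy activity loads the on-premises data into Blob storage and a dependent activity hands off to Azure Machine Learning; all names and references are placeholders, and the exact activity types should be confirmed against the Data Factory documentation.

```python
import json

pipeline = {
    "name": "LoadAndScorePipeline",
    "properties": {
        "activities": [
            {
                # Copy activity: on-premises dataset -> Blob storage dataset.
                "name": "CopyToBlob",
                "type": "Copy",
                "inputs": [{"referenceName": "OnPremTable", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "BlobImages", "type": "DatasetReference"}],
            },
            {
                # Runs only after the copy succeeds.
                "name": "RunAmlStep",
                "type": "AzureMLExecutePipeline",
                "dependsOn": [{"activity": "CopyToBlob", "dependencyConditions": ["Succeeded"]}],
            },
        ]
    },
}
print(json.dumps(pipeline, indent=2))
```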


Question 11 Topic 3, Mixed Questions

You are developing a bot for an ecommerce application. The bot will support five languages.
The bot will use Language Understanding (LUIS) to detect the language of the customer, and QnA Maker to answer
common customer questions. LUIS supports all the languages.
You need to determine the minimum number of Azure resources that you must create for the bot.
You create five instances of QnA Maker and five instances of Language Understanding (LUIS).
Does this action accomplish your objective?

  • A. Yes, it does
  • B. No, it does not
Answer:

A


Explanation:
You need to have a new QnA Maker resource for each language.
If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID and
endpoint logs. If you need to provide language understanding for a language LUIS does not support, you can use the Microsoft
Translator API to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive
the resulting scores.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/language-support
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-language-support
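A sketch of how a bot might route each detected language to its own LUIS app (the region and app IDs are hypothetical placeholders; the URL shape follows the LUIS v3 prediction endpoint):

```python
from urllib.parse import urlencode

# One LUIS app per supported language; the IDs are placeholders.
apps = {"en": "<en-app-id>", "fr": "<fr-app-id>", "de": "<de-app-id>",
        "es": "<es-app-id>", "it": "<it-app-id>"}

def prediction_url(endpoint, language, query):
    # Pick the app for the detected language and build its prediction URL.
    app_id = apps[language]
    return (f"{endpoint}/luis/prediction/v3.0/apps/{app_id}"
            "/slots/production/predict?" + urlencode({"query": query}))

url = prediction_url("https://<region>.api.cognitive.microsoft.com", "fr", "bonjour")
print(url)
```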


Question 12 Topic 3, Mixed Questions

You are developing a bot for an ecommerce application. The bot will support five languages.
The bot will use Language Understanding (LUIS) to detect the language of the customer, and QnA Maker to answer
common customer questions. LUIS supports all the languages.
You need to determine the minimum number of Azure resources that you must create for the bot.
You create five instances of QnA Maker and one instance of Language Understanding (LUIS).
Does this action accomplish your objective?

  • A. Yes, it does
  • B. No, it does not
Answer:

B


Explanation:
You need to have a new QnA Maker resource for each language.
If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID and
endpoint logs. If you need to provide language understanding for a language LUIS does not support, you can use the Microsoft
Translator API to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive
the resulting scores.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/language-support
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-language-support


Question 13 Topic 3, Mixed Questions

You are developing a bot for an ecommerce application. The bot will support five languages.
The bot will use Language Understanding (LUIS) to detect the language of the customer, and QnA Maker to answer
common customer questions. LUIS supports all the languages.
You need to determine the minimum number of Azure resources that you must create for the bot.
You create one instance of QnA Maker and five instances of Language Understanding (LUIS).
Does this action accomplish your objective?

  • A. Yes, it does
  • B. No, it does not
Answer:

B


Explanation:
You need to have a new QnA Maker resource for each language.
If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID and
endpoint logs. If you need to provide language understanding for a language LUIS does not support, you can use the Microsoft
Translator API to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive
the resulting scores.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/language-support
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-language-support


Question 14 Topic 3, Mixed Questions

You have developed an AI app that uses a Cognitive Services API to identify images.
You want all the processed images to be stored in an on-premises datacenter.
What should you do?

  • A. Use a Docker container to host the Cognitive Services API.
  • B. Use an Azure Stack Edge Pro to host the Cognitive Services API.
  • C. Use an Azure Private Endpoint to host the Cognitive Services API.
  • D. Use an Azure Data Box to host the Cognitive Services API.
Answer:

B


Explanation:
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and
reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable
package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and
settings.
Reference: https://www.docker.com/resources/what-container
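The container approach in the explanation can be sketched as follows; the image shown is the documented Text Analytics sentiment container (vision containers follow the same pattern), while the billing endpoint and key are placeholders. The command is assembled into a variable so it can be inspected before running.

```shell
# Placeholders: substitute your Cognitive Services resource endpoint and key.
BILLING="https://<resource>.cognitiveservices.azure.com/"
APIKEY="<your-api-key>"

# Cognitive Services containers require the Eula, Billing, and ApiKey arguments
# so usage can still be metered against the Azure resource.
CMD="docker run --rm -p 5000:5000 mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment Eula=accept Billing=$BILLING ApiKey=$APIKEY"

echo "$CMD"
```

Running the printed command hosts the API locally on port 5000, so the data being analyzed never leaves the datacenter (only billing metadata is sent to Azure).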
Q27)
You are developing a bot for an ecommerce application. The bot will support five languages.
The bot will use Language Understanding (LUIS) to detect the language of the customer, and QnA Maker to answer
common customer questions. LUIS supports all the languages.
You need to determine the minimum number of Azure resources that you must create for the bot.
You create one instance of QnA Maker and one instance of Language Understanding (LUIS).
Does this action accomplish your objective?
A. Yes, it does
B. No, it does not
Answer: B
Explanation:
You need to have a new QnA Maker resource for each language.
If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID and
endpoint logs. If you need to provide language understanding for a language LUIS does not support, you can use the Microsoft
Translator API to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive
the resulting scores.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/language-support
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-language-support


Question 15 Topic 3, Mixed Questions

You are developing an app that will analyze sensitive data from global users.
Your app must adhere to the following compliance policies:
The app must not store data in the cloud.
The app must not use services in the cloud to process the data.
Which of the following actions should you take?

  • A. Make use of Azure Machine Learning Studio
  • B. Make use of Docker containers for the Text Analytics
  • C. Make use of a Text Analytics container deployed to Azure Kubernetes Service
  • D. Make use of Microsoft Machine Learning (MML) for Apache Spark
Answer:

D


Explanation:
The Microsoft Machine Learning Library for Apache Spark (MMLSpark) assists in provisioning scalable machine learning
models for large datasets, especially for deep learning problems.
MMLSpark works with SparkML pipelines, including Microsoft CNTK and the OpenCV library, which provide end-to-end
support for the ingress and processing of image input data, categorization of images, and text analytics using pre-trained
deep learning algorithms.
Reference:
https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781789131956/10/ch10lvl1sec61/an-
overview-of-the-microsoft-machine-learning-library-for-apache-spark-mmlspark
