You have an Azure Machine Learning model that is deployed to a web service.
You plan to publish the web service by using the name ml.contoso.com.
You need to recommend a solution to ensure that access to the web service is encrypted.
Which three actions should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
B D E
Explanation:
The process of securing a new web service or an existing one is as follows:
1. Get a domain name.
2. Get a digital certificate.
3. Deploy or update the web service with the SSL setting enabled.
4. Update your DNS to point to the web service.
Note: To deploy (or re-deploy) the service with SSL enabled, set the ssl_enabled parameter to True, wherever applicable.
Set the ssl_certificate parameter to the value of the certificate file and the ssl_key to the value of the key file.
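The following is a minimal sketch of step 3 using the Azure Machine Learning Python SDK (azureml-core). The workspace, model, and inference configuration objects, as well as the certificate and key file names, are placeholders; in recent SDK versions the SSL parameters are named ssl_cert_pem_file and ssl_key_pem_file rather than ssl_certificate and ssl_key.

```python
# Sketch: deploy a model with TLS/SSL enabled so scoring requests use HTTPS.
# Assumes `ws` (Workspace), `model`, and `inference_config` already exist;
# cert.pem / key.pem are the certificate and key obtained in steps 1-2.
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

deployment_config = AciWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=1,
    ssl_enabled=True,               # serve the scoring endpoint over HTTPS
    ssl_cert_pem_file="cert.pem",   # certificate issued for ml.contoso.com
    ssl_key_pem_file="key.pem",
    ssl_cname="ml.contoso.com",     # name the DNS record will point to (step 4)
)

service = Model.deploy(ws, "ml-contoso-svc", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
```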
References: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-secure-web-service
Your company recently deployed several hardware devices that contain sensors.
The sensors generate new data on an hourly basis. The data generated is stored on-premises and retained for several
years.
During the past two months, the sensors generated 300 GB of data.
You plan to move the data to Azure and then perform advanced analytics on the data.
You need to recommend an Azure storage solution for the data.
Which storage solution should you recommend?
C
Explanation:
References: https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/data-storage
You plan to design an application that will use data from Azure Data Lake and perform sentiment analysis by using Azure
Machine Learning algorithms.
The developers of the application use a mix of Windows- and Linux-based environments. The developers contribute to
shared GitHub repositories.
You need all the developers to use the same tool to develop the application.
What is the best tool to use? More than one answer choice may achieve the goal.
C
Explanation:
References: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/studio/algorithm-choice.md
You have several AI applications that use an Azure Kubernetes Service (AKS) cluster. The cluster supports a maximum of
32 nodes.
You discover that occasionally and unpredictably, the application requires more than 32 nodes.
You need to recommend a solution to handle the unpredictable application load.
Which scaling method should you recommend?
B
Explanation:
B: To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes
that run your workloads. The cluster autoscaler component can watch for pods in your cluster that can't be scheduled
because of resource constraints. When issues are detected, the number of nodes is increased to meet the application
demand. Nodes are also regularly checked for a lack of running pods, with the number of nodes then decreased as needed.
This ability to automatically scale up or down the number of nodes in your AKS cluster lets you run an efficient, cost-effective
cluster.
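As an illustration only, the cluster autoscaler can be enabled on an existing cluster from Python by shelling out to the Azure CLI; the resource group and cluster names below are placeholders, and this assumes the `az` CLI is installed and signed in.

```python
# Sketch: enable the AKS cluster autoscaler so the node count can grow when
# unschedulable pods appear and shrink again when nodes sit idle.
# "ai-rg" and "ai-aks" are placeholder resource group / cluster names.
import subprocess

subprocess.run(
    [
        "az", "aks", "update",
        "--resource-group", "ai-rg",
        "--name", "ai-aks",
        "--enable-cluster-autoscaler",
        "--min-count", "3",    # never scale below three nodes
        "--max-count", "40",   # allow bursts beyond the usual 32-node ceiling
    ],
    check=True,
)
```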
Reference:
https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler
You deploy an infrastructure for a big data workload.
You need to run Azure HDInsight and Microsoft Machine Learning Server. You plan to set the RevoScaleR compute
contexts to run rx function calls in parallel.
What are three compute contexts that you can use for Machine Learning Server? Each correct answer presents a complete
solution.
NOTE: Each correct selection is worth one point.
A B C
Explanation:
Remote computing is available for specific data sources on selected platforms. The following tables document the supported
combinations.
RxInSqlServer, sqlserver: Remote compute context. Target server is a single database node (SQL Server 2016 R
Services or SQL Server 2017 Machine Learning Services). Computation is parallel, but not distributed.
RxSpark, spark: Remote compute context. Target is a Spark cluster on Hadoop.
RxLocalParallel, localpar: This compute context is often used to enable controlled, distributed computations that rely on instructions you provide rather than a built-in scheduler on Hadoop. You can use this compute context for manual distributed computing.
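As a small sketch, the same idea can be expressed with revoscalepy, the Python counterpart of RevoScaleR; the connection string, table, and formula are placeholders, and the equivalent R calls would use rxSetComputeContext with RxInSqlServer.

```python
# Sketch: push rx_* computations to a remote SQL Server compute context using
# revoscalepy. The connection string, table, and formula are placeholders.
from revoscalepy import RxInSqlServer, RxSqlServerData, rx_lin_mod, rx_set_compute_context

conn_str = "Driver=SQL Server;Server=myserver;Database=mydb;Trusted_Connection=True"

# Run subsequent rx function calls in parallel on the SQL Server node instead of locally.
rx_set_compute_context(RxInSqlServer(connection_string=conn_str))

flight_data = RxSqlServerData(table="flights", connection_string=conn_str)
model = rx_lin_mod("ArrDelay ~ DepDelay + Distance", data=flight_data)
```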
References: https://docs.microsoft.com/en-us/machine-learning-server/r/concept-what-is-compute-context
Your company has 1,000 AI developers who are responsible for provisioning environments in Azure.
You need to control the type, size, and location of the resources that the developers can provision.
What should you use?
B
Explanation:
When an application needs access to deploy or configure resources through Azure Resource Manager in Azure Stack, you
create a service principal, which is a credential for your application. You can then delegate only the necessary permissions
to that service principal.
References: https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-create-service-principals
You are designing an AI solution in Azure that will perform image classification.
You need to identify which processing platform will provide you with the ability to update the logic over time. The solution
must have the lowest latency for inferencing without having to batch.
Which compute target should you identify?
B
Explanation:
FPGAs, such as those available on Azure, provide performance close to ASICs. They are also flexible and reconfigurable
over time, to implement new logic.
Incorrect Answers:
D: ASICs, such as Google's Tensor Processing Units (TPUs), are custom circuits that provide the highest efficiency. They
can't be reconfigured as your needs change.
References: https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas
You have a solution that runs on a five-node Azure Kubernetes Service (AKS) cluster. The cluster uses N-series virtual
machines.
An Azure Batch AI process runs once a day and rarely on demand.
You need to recommend a solution to maintain the cluster configuration when the cluster is not in use. The solution must not
incur any compute costs.
What should you include in the recommendation?
A
Explanation:
An AKS cluster has one or more nodes.
References: https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads
HOTSPOT
You are designing an AI solution that will be used to find buildings in aerial pictures.
Users will upload the pictures to an Azure Storage account. A separate JSON document will contain the metadata for the pictures.
The solution must meet the following requirements:
Store metadata for the pictures in a data store.
Run a custom vision Azure Machine Learning module to identify the buildings in a picture and the position of the buildings'
edges.
Run a custom mathematical module to calculate the dimensions of the buildings in a picture based on the metadata and
data from the vision module.
You need to identify which Azure infrastructure services are used for each component of the AI workflow. The solution must
execute as quickly as possible.
What should you identify? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Explanation:
Box 1: Azure Blob Storage
Containers and blobs support custom metadata, represented as HTTP headers.
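A brief sketch, assuming the azure-storage-blob Python package and placeholder storage names, of attaching the metadata from the JSON document to an uploaded picture:

```python
# Sketch: upload an aerial picture to Blob storage and attach custom metadata,
# which the service exposes as x-ms-meta-* HTTP headers. The connection string,
# container, and file names are placeholders.
import json
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="aerial-pictures", blob="site-042.jpg")

with open("site-042.json") as f:
    metadata = {k: str(v) for k, v in json.load(f).items()}  # metadata values must be strings

with open("site-042.jpg", "rb") as data:
    blob.upload_blob(data, metadata=metadata, overwrite=True)
```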
Box 2: NV
The NV-series enables powerful remote visualisation workloads and other graphics-intensive applications backed by the
NVIDIA Tesla M60 GPU.
Note: The N-series is a family of Azure Virtual Machines with GPU capabilities. GPUs are ideal for compute and graphics-
intensive workloads, helping customers to fuel innovation through scenarios like high-end remote visualisation, deep learning
and predictive analytics.
Box 3: F
F-series VMs feature a higher CPU-to-memory ratio. Example use cases include batch processing, web servers, analytics
and gaming.
Incorrect Answers:
A-series VMs have CPU performance and memory configurations best suited for entry-level workloads such as development and
test.
References:
https://azure.microsoft.com/en-in/pricing/details/virtual-machines/series/
Your company has recently deployed 5,000 Internet-connected sensors for a planned AI solution.
You need to recommend a computing solution to perform a real-time analysis of the data generated by the sensors.
Which computing solution should you recommend?
C
Explanation:
Azure HDInsight makes it easy, fast, and cost-effective to process massive amounts of data.
You can use HDInsight to process streaming data that's received in real time from a variety of devices.
References: https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-introduction