A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network
storage servers. The company wants to reduce the number of these servers by moving to the AWS
Cloud. A solutions architect must provide low-latency access to frequently used data and reduce the
dependency on on-premises servers with a minimal number of infrastructure changes.
Which solution will meet these requirements?
D
Explanation:
Storage Gateway Volume Gateway (Cached Volumes): This configuration allows you to store your
primary data in Amazon S3 while retaining frequently accessed data locally in a cache for low-latency
access.
Low-Latency Access: Frequently accessed data is cached locally on-premises, providing low-latency
access while the less frequently accessed data is stored cost-effectively in Amazon S3.
Implementation:
Deploy a Storage Gateway appliance on-premises or in a virtual environment.
Configure it as a volume gateway with cached volumes.
Create volumes and configure your applications to use these volumes.
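For illustration, the volume-creation step above could be scripted with the AWS SDK for Python (boto3). This is a minimal sketch only; the gateway ARN, network interface IP, and volume size are placeholder assumptions, not values from the scenario.

    import boto3
    import uuid

    sgw = boto3.client("storagegateway", region_name="us-east-1")

    # Create a cached iSCSI volume on an existing Volume Gateway (cached mode).
    # Primary data is stored in Amazon S3; the on-premises cache serves hot data.
    response = sgw.create_cached_iscsi_volume(
        GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",  # placeholder
        VolumeSizeInBytes=500 * 1024**3,     # 500 GiB volume (assumed size)
        TargetName="app-volume-1",           # becomes part of the iSCSI target IQN
        NetworkInterfaceId="10.0.0.25",      # IP address of the gateway appliance (assumed)
        ClientToken=str(uuid.uuid4()),       # idempotency token
    )
    print(response["TargetARN"])  # iSCSI target that the applications connect to

Applications then connect to the returned iSCSI target exactly as they would to the existing on-premises storage servers.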
Minimal Infrastructure Changes: This solution integrates seamlessly with existing on-premises
infrastructure, requiring minimal changes and reducing dependency on on-premises storage servers.
Reference:
AWS Storage Gateway Volume Gateway
Volume Gateway Cached Volumes
A marketing company receives a large amount of new clickstream data in Amazon S3 from a
marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then
the company needs to determine whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?
B
Explanation:
AWS Glue Crawler: AWS Glue is a fully managed ETL (Extract, Transform, Load) service that makes it
easy to prepare and load data for analytics. A Glue crawler can automatically discover new data and
schema in Amazon S3, making it easy to keep the data catalog up-to-date.
Crawling the Data:
Set up an AWS Glue crawler to scan the S3 bucket containing the clickstream data.
The crawler will automatically detect the schema and create/update the tables in the AWS Glue Data
Catalog.
Amazon Athena:
Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard
SQL.
Once the data catalog is updated by the Glue crawler, use Athena to query the clickstream data
directly in S3.
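As a rough sketch of the two steps above (the bucket, database, IAM role, and result-location names are assumptions), the crawler and an Athena query could be driven with boto3:

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")
    athena = boto3.client("athena", region_name="us-east-1")

    # Crawl the clickstream bucket and register the schema in the Glue Data Catalog.
    glue.create_crawler(
        Name="clickstream-crawler",                              # assumed name
        Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",   # assumed IAM role
        DatabaseName="clickstream_db",
        Targets={"S3Targets": [{"Path": "s3://example-clickstream-bucket/raw/"}]},
    )
    glue.start_crawler(Name="clickstream-crawler")

    # Once the crawler has populated the catalog, query the data in place with Athena.
    athena.start_query_execution(
        QueryString="SELECT campaign, COUNT(*) AS clicks FROM clicks GROUP BY campaign",
        QueryExecutionContext={"Database": "clickstream_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )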
Operational Efficiency: This solution leverages fully managed services, reducing operational
overhead. Glue crawlers automate data cataloging, and Athena provides a serverless, pay-per-query
model for quick data analysis without the need to set up or manage infrastructure.
Reference:
AWS Glue
Amazon Athena
A company has applications that run on Amazon EC2 instances in a VPC. One of the applications
needs to call the Amazon S3 API to store and read objects. According to the company's security
regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?
A
Explanation:
VPC Endpoint for S3: A gateway endpoint for Amazon S3 enables you to privately connect your VPC
to S3 without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect
connection.
Configuration Steps:
In the VPC console, navigate to "Endpoints" and create a new endpoint.
Select the service name for S3 (com.amazonaws.region.s3).
Choose the VPC, then select the route tables that are associated with the subnets where your EC2
instances run.
Routes to the S3 prefix list that point to the endpoint are added to the selected route tables
automatically.
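The same configuration could be expressed with boto3 roughly as follows; the VPC ID, route table ID, and Region are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a gateway endpoint for S3 and attach it to the route tables used by
    # the application subnets. Traffic to S3 then stays on the AWS network.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0a1b2c3d4e5f67890"],    # route tables of the application subnets
    )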
Security Compliance: By configuring an S3 gateway endpoint, all traffic between the VPC and S3 stays
within the AWS network, complying with the company's security regulations to avoid internet
traversal.
Reference:
VPC Endpoints for Amazon S3
A company wants to isolate its workloads by creating an AWS account for each workload. The
company needs a solution that centrally manages networking components for the workloads. The
solution also must create accounts with automatic security controls (guardrails).
Which solution will meet these requirements with the LEAST operational overhead?
A
Explanation:
AWS Control Tower: Provides a managed service to set up and govern a secure, multi-account AWS
environment based on AWS best practices. It automates the setup of AWS Organizations and applies
security controls (guardrails).
Networking Account:
Create a centralized networking account that includes a VPC with both private and public subnets.
This centralized VPC will manage and control the networking resources.
AWS Resource Access Manager (AWS RAM):
Use AWS RAM to share the subnets from the networking account with the other workload accounts.
This allows different workload accounts to utilize the shared networking resources without the need
to manage their own VPCs.
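As a minimal sketch of the sharing step (the subnet ARN and account ID are assumptions), the subnet share could be created with boto3 from the networking account:

    import boto3

    ram = boto3.client("ram", region_name="us-east-1")

    # Share subnets from the central networking account with the workload accounts.
    ram.create_resource_share(
        name="shared-network-subnets",
        resourceArns=[
            "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234def567890",  # placeholder
        ],
        principals=["222233334444"],     # workload account ID (an OU ARN also works)
        allowExternalPrincipals=False,   # restrict sharing to accounts in the organization
    )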
Operational Efficiency: Using AWS Control Tower simplifies the setup and governance of multiple
AWS accounts, while AWS RAM facilitates centralized management of networking resources,
reducing operational overhead and ensuring consistent security and compliance.
Reference:
AWS Control Tower
AWS Resource Access Manager
A company's SAP application has a backend SQL Server database in an on-premises environment. The
company wants to migrate its on-premises application and database server to AWS. The company
needs an instance type that meets the high demands of its SAP database. On-premises performance
data shows that both the SAP application and the database have high memory utilization.
Which solution will meet these requirements?
C
Explanation:
Memory Optimized Instances: These instances are designed to deliver fast performance for
workloads that process large data sets in memory. They are ideal for memory-intensive databases,
such as the SQL Server database that backs SAP, and for applications with high memory utilization.
High Memory Utilization: Both the SAP application and the SQL Server database have high memory
demands as per the on-premises performance data. Memory optimized instances provide the
necessary memory capacity and performance.
Instance Types:
For the SAP application, using a memory optimized instance ensures the application has sufficient
memory to handle the high workload efficiently.
For the SQL Server database, memory optimized instances ensure optimal database performance
with high memory throughput.
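For illustration only, a memory optimized instance could be launched with boto3 as shown below; the AMI ID and the r6i.8xlarge size are assumptions and should be chosen from the on-premises sizing data.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a memory optimized (R-family) instance for the SAP/SQL Server workload.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="r6i.8xlarge",        # memory optimized; size is an assumption
        MinCount=1,
        MaxCount=1,
        EbsOptimized=True,                 # recommended for database workloads
    )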
Operational Efficiency: Using the same instance family for both the application and the database
simplifies management and ensures both components meet performance requirements.
Reference:
Amazon EC2 Instance Types
SAP on AWS
A company plans to rehost an application to Amazon EC2 instances that use Amazon Elastic Block
Store (Amazon EBS) as the attached storage.
A solutions architect must design a solution to ensure that all newly created Amazon EBS volumes
are encrypted by default. The solution must also prevent the creation of unencrypted EBS volumes.
Which solution will meet these requirements?
A
Explanation:
EC2 Account Attributes: Amazon EC2 allows you to set account attributes to automatically encrypt
new EBS volumes. This ensures that all new volumes created in your account are encrypted by
default.
Configuration Steps:
Go to the EC2 Dashboard.
Select "Account Attributes" and then "EBS encryption".
Enable default EBS encryption and select the default AWS KMS key or a customer-managed key.
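These console steps map to a small number of API calls; a boto3 sketch (the KMS key ARN is a placeholder) might look like this:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Turn on EBS encryption by default for this account.
    # Note: the setting applies per Region, so enable it in every Region in use.
    ec2.enable_ebs_encryption_by_default()

    # Optionally use a customer managed KMS key instead of the AWS managed key.
    ec2.modify_ebs_default_kms_key_id(
        KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder
    )

    # Verify the setting.
    print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])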
Prevention of Unencrypted Volumes: By setting this account attribute, you ensure that it is not
possible to create unencrypted EBS volumes, thereby enforcing compliance with security
requirements.
Operational Efficiency: This solution requires minimal configuration changes and provides automatic
enforcement of encryption policies, reducing operational overhead.
Reference:
Amazon EC2 Default EBS Encryption
A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond
latency. The company has a high performance computing (HPC) environment in its data center and
wants to expand its forecasting capabilities.
A solutions architect must identify a highly available cloud storage solution that can handle large
amounts of sustained throughput. Files that are stored in the solution should be accessible to
thousands of compute instances that will simultaneously access and process the entire dataset.
What should the solutions architect do to meet these requirements?
B
Explanation:
Amazon FSx for Lustre: Lustre is a high-performance file system designed for workloads that require
fast storage with sustained high throughput and low latency. It integrates with Amazon S3, making it
suitable for HPC environments.
Persistent File Systems:
Persistent Storage: Suitable for long-term storage and recurrent use, providing durability and
availability.
High Throughput and Low Latency: Persistent Lustre file systems can handle large amounts of data
with sub-millisecond latency, meeting the needs of high-performance computing workloads.
Simultaneous Access: FSx for Lustre allows thousands of compute instances to access and process
large datasets concurrently, ensuring that the high volume of data is handled efficiently.
Highly Available: FSx for Lustre is designed to provide high availability and is managed by AWS,
reducing the operational burden.
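A persistent FSx for Lustre file system could be provisioned along these lines; the subnet ID, storage capacity, and per-unit throughput are assumptions to be sized for the actual workload.

    import boto3

    fsx = boto3.client("fsx", region_name="us-east-1")

    # Create a persistent Lustre file system; thousands of EC2 instances can
    # mount it concurrently. Capacity and per-unit throughput together
    # determine the sustained throughput available.
    fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=12000,                    # GiB, in increments of 2400 (assumed size)
        SubnetIds=["subnet-0abc1234def567890"],   # placeholder subnet
        LustreConfiguration={
            "DeploymentType": "PERSISTENT_2",     # persistent deployment for durability
            "PerUnitStorageThroughput": 250,      # MB/s per TiB (assumed)
        },
    )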
Reference:
Amazon FSx for Lustre
High-Performance Computing on AWS
A company plans to run a high performance computing (HPC) workload on Amazon EC2 instances.
The workload requires low-latency network performance and high network throughput with tightly
coupled node-to-node communication.
Which solution will meet these requirements?
A
Explanation:
Cluster Placement Group: This type of placement group is designed to provide low-latency network
performance and high throughput by grouping instances within a single Availability Zone. It is ideal
for applications that require tightly coupled node-to-node communication.
Configuration:
When launching EC2 instances, specify the option to launch them in a cluster placement group.
This ensures that the instances are physically located close to each other, reducing latency and
increasing network throughput.
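A brief boto3 sketch of the configuration above (the AMI, instance type, and node count are assumptions):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a cluster placement group and launch the HPC nodes into it so they
    # are packed close together within a single Availability Zone.
    ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # placeholder AMI
        InstanceType="c6i.16xlarge",         # instance type is an assumption
        MinCount=8,
        MaxCount=8,
        Placement={"GroupName": "hpc-cluster"},
    )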
Benefits:
Low-Latency Communication: Instances in a cluster placement group benefit from enhanced
networking capabilities, enabling low-latency communication.
High Network Throughput: The network performance within a cluster placement group is optimized
for high throughput, which is essential for HPC workloads.
Reference:
Placement Groups
High Performance Computing on AWS
A company needs a secure connection between its on-premises environment and AWS. This
connection does not need high bandwidth and will handle a small amount of traffic. The connection
should be set up quickly.
What is the MOST cost-effective method to establish this type of connection?
D
Explanation:
AWS Site-to-Site VPN: This provides a secure and encrypted connection between an on-premises
environment and AWS. It is a cost-effective solution suitable for low bandwidth and small traffic
needs.
Quick Setup:
Site-to-Site VPN can be set up quickly by configuring a virtual private gateway on the AWS side and a
customer gateway resource that represents the on-premises VPN device.
It uses the standard IPsec protocol to establish the VPN tunnels.
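The three resources involved could be created with boto3 roughly as follows; the public IP, ASN, and VPC ID are placeholders for the on-premises device and the target VPC.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Customer gateway: represents the on-premises VPN device.
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1",
        PublicIp="203.0.113.12",   # placeholder public IP of the on-premises device
        BgpAsn=65000,              # assumed ASN
    )

    # Virtual private gateway: the VPN endpoint on the AWS side.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")
    ec2.attach_vpn_gateway(
        VpcId="vpc-0123456789abcdef0",     # placeholder VPC
        VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    )

    # The Site-to-Site VPN connection (two IPsec tunnels are created automatically).
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    )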
Cost-Effectiveness: Compared to AWS Direct Connect, which requires dedicated physical connections
and higher setup costs, a Site-to-Site VPN is less expensive and easier to implement for smaller traffic
requirements.
Reference:
AWS Site-to-Site VPN
A company is designing a new multi-tier web application that consists of the following components:
• Web and application servers that run on Amazon EC2 instances as part of Auto Scaling groups
• An Amazon RDS DB instance for data storage
A solutions architect needs to limit access to the application servers so that only the web servers can
access them. Which solution will meet these requirements?
D
Explanation:
Application Load Balancer (ALB): ALB is suitable for routing HTTP/HTTPS traffic to the application
servers. It provides advanced routing features and integrates well with Auto Scaling groups.
Target Group Configuration:
Create a target group for the application servers and register the Auto Scaling group with this target
group.
Configure the ALB to forward requests from the web servers to the application servers.
Security Group Setup:
Configure the security group of the application servers to only allow traffic from the web servers'
security group.
This ensures that only the web servers can access the application servers, meeting the requirement
to limit access.
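The security group rule that enforces this could look roughly like the following; the group IDs and the application port are assumptions.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allow the application servers to accept traffic only from the web tier's
    # security group, rather than from IP address ranges.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0app1234567890abc",    # placeholder: application tier security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8080,              # assumed application port
            "ToPort": 8080,
            "UserIdGroupPairs": [{"GroupId": "sg-0web1234567890abc"}],  # web tier security group
        }],
    )

Referencing the web tier's security group rather than CIDR ranges keeps the rule valid as Auto Scaling adds or removes web server instances.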
Benefits:
Security: Using security groups to restrict access ensures a secure environment where only intended
traffic is allowed.
Scalability: ALB works seamlessly with Auto Scaling groups, ensuring the application can handle
varying loads efficiently.
Reference:
Application Load Balancer
Security Groups for Your VPC