AWS Certified Solutions Architect - Associate practice test

Last update: Nov 27, 2025
Question 1

A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network
storage servers. The company wants to reduce the number of these servers by moving to the AWS
Cloud. A solutions architect must provide low-latency access to frequently used data and reduce the
dependency on on-premises servers with a minimal number of infrastructure changes.
Which solution will meet these requirements?

  • A. Deploy an Amazon S3 File Gateway
  • B. Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3
  • C. Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes
  • D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.
Answer:

D


Explanation:
Storage Gateway Volume Gateway (Cached Volumes): This configuration allows you to store your
primary data in Amazon S3 while retaining frequently accessed data locally in a cache for low-latency
access.
Low-Latency Access: Frequently accessed data is cached locally on-premises, providing low-latency
access while the less frequently accessed data is stored cost-effectively in Amazon S3.
Implementation:
Deploy a Storage Gateway appliance on-premises or in a virtual environment.
Configure it as a volume gateway with cached volumes.
Create volumes and configure your applications to use these volumes.
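Assuming the gateway appliance has already been activated and its local cache disks configured, the cached-volume creation step can be sketched with the AWS CLI (the gateway ARN, target name, network interface, and token below are all placeholders):

```shell
# Create a 1 TiB cached iSCSI volume on an activated volume gateway.
# Primary data is stored in Amazon S3; the local cache serves hot data.
aws storagegateway create-cached-iscsi-volume \
  --gateway-arn arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE \
  --volume-size-in-bytes 1099511627776 \
  --target-name app-volume-1 \
  --network-interface-id 10.0.0.25 \
  --client-token unique-token-0001
```

The call returns the volume ARN and an iSCSI target ARN; on-premises initiators then connect to that target the same way they connected to the old iSCSI storage servers.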
Minimal Infrastructure Changes: This solution integrates seamlessly with existing on-premises
infrastructure, requiring minimal changes and reducing dependency on on-premises storage servers.
Reference:
AWS Storage Gateway Volume Gateway
Volume Gateway Cached Volumes

Question 2

A marketing company receives a large amount of new clickstream data in Amazon S3 from a
marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then
the company needs to determine whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
  • B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
  • C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
  • D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query the data
Answer:

B


Explanation:
AWS Glue Crawler: AWS Glue is a fully managed ETL (Extract, Transform, Load) service that makes it
easy to prepare and load data for analytics. A Glue crawler can automatically discover new data and
schema in Amazon S3, making it easy to keep the data catalog up-to-date.
Crawling the Data:
Set up an AWS Glue crawler to scan the S3 bucket containing the clickstream data.
The crawler will automatically detect the schema and create/update the tables in the AWS Glue Data
Catalog.
Amazon Athena:
Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard
SQL.
Once the data catalog is updated by the Glue crawler, use Athena to query the clickstream data
directly in S3.
Operational Efficiency: This solution leverages fully managed services, reducing operational
overhead. Glue crawlers automate data cataloging, and Athena provides a serverless, pay-per-query
model for quick data analysis without the need to set up or manage infrastructure.
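As a sketch of the two-step flow (bucket, role, database, and table names are placeholders):

```shell
# Catalog the clickstream data in S3 (run the crawler once or on a schedule).
aws glue create-crawler \
  --name clickstream-crawler \
  --role AWSGlueServiceRole-clickstream \
  --database-name clickstream_db \
  --targets '{"S3Targets": [{"Path": "s3://example-bucket/clickstream/"}]}'
aws glue start-crawler --name clickstream-crawler

# Once the crawler finishes, query the discovered table with Athena.
aws athena start-query-execution \
  --query-string "SELECT page, COUNT(*) AS hits FROM clickstream GROUP BY page" \
  --query-execution-context Database=clickstream_db \
  --result-configuration OutputLocation=s3://example-bucket/athena-results/
```

No clusters are provisioned at any point, which is what keeps the operational overhead low.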
Reference:
AWS Glue
Amazon Athena

Question 3

A company has applications that run on Amazon EC2 instances in a VPC. One of the applications
needs to call the Amazon S3 API to store and read objects. According to the company's security
regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?

  • A. Configure an S3 gateway endpoint.
  • B. Create an S3 bucket in a private subnet.
  • C. Create an S3 bucket in the same AWS Region as the EC2 instances.
  • D. Configure a NAT gateway in the same subnet as the EC2 instances
Answer:

A


Explanation:
VPC Endpoint for S3: A gateway endpoint for Amazon S3 enables you to privately connect your VPC
to S3 without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect
connection.
Configuration Steps:
In the VPC console, navigate to "Endpoints" and create a new endpoint.
Select the service name for S3 (com.amazonaws.region.s3) and choose the Gateway endpoint type.
Choose the VPC in which your EC2 instances run.
Select the route tables associated with the instances' subnets; AWS automatically adds a route that
directs S3 traffic to the endpoint.
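The same configuration can be expressed in a single CLI call (the VPC and route table IDs are placeholders):

```shell
# Gateway endpoints attach to route tables, which receive a route for the
# S3 prefix list; traffic to S3 then never leaves the AWS network.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0def5678
```

Gateway endpoints for S3 also incur no hourly or data processing charges, unlike NAT gateways.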
Security Compliance: By configuring an S3 gateway endpoint, all traffic between the VPC and S3 stays
within the AWS network, complying with the company's security regulations to avoid internet
traversal.
Reference:
VPC Endpoints for Amazon S3

Question 4

A company wants to isolate its workloads by creating an AWS account for each workload. The
company needs a solution that centrally manages networking components for the workloads. The
solution also must create accounts with automatic security controls (guardrails).
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
  • B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
  • C. Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
  • D. Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
Answer:

A


Explanation:
AWS Control Tower: Provides a managed service to set up and govern a secure, multi-account AWS
environment based on AWS best practices. It automates the setup of AWS Organizations and applies
security controls (guardrails).
Networking Account:
Create a centralized networking account that includes a VPC with both private and public subnets.
This centralized VPC will manage and control the networking resources.
AWS Resource Access Manager (AWS RAM):
Use AWS RAM to share the subnets from the networking account with the other workload accounts.
This allows different workload accounts to utilize the shared networking resources without the need
to manage their own VPCs.
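A minimal RAM share from the networking account might look like this (the subnet ARNs and account ID are placeholders):

```shell
# Share two subnets from the networking account with a workload account.
aws ram create-resource-share \
  --name shared-network-subnets \
  --resource-arns \
    arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0aaa1111 \
    arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0bbb2222 \
  --principals 444455556666
```

When both accounts belong to the same AWS Organization with resource sharing enabled, the workload account can launch resources into the shared subnets without accepting an invitation.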
Operational Efficiency: Using AWS Control Tower simplifies the setup and governance of multiple
AWS accounts, while AWS RAM facilitates centralized management of networking resources,
reducing operational overhead and ensuring consistent security and compliance.
Reference:
AWS Control Tower
AWS Resource Access Manager

Question 5

A company's SAP application has a backend SQL Server database in an on-premises environment. The
company wants to migrate its on-premises application and database server to AWS. The company
needs an instance type that meets the high demands of its SAP database. On-premises performance
data shows that both the SAP application and the database have high memory utilization.
Which solution will meet these requirements?

  • A. Use the compute optimized instance family for the application. Use the memory optimized instance family for the database.
  • B. Use the storage optimized instance family for both the application and the database.
  • C. Use the memory optimized instance family for both the application and the database
  • D. Use the high performance computing (HPC) optimized instance family for the application. Use the memory optimized instance family for the database.
Answer:

C


Explanation:
Memory Optimized Instances: These instances are designed to deliver fast performance for
workloads that process large data sets in memory. They are ideal for high-performance databases
like SAP and applications with high memory utilization.
High Memory Utilization: Both the SAP application and the SQL Server database have high memory
demands as per the on-premises performance data. Memory optimized instances provide the
necessary memory capacity and performance.
Instance Types:
For the SAP application, using a memory optimized instance ensures the application has sufficient
memory to handle the high workload efficiently.
For the SQL Server database, memory optimized instances ensure optimal database performance
with high memory throughput.
Operational Efficiency: Using the same instance family for both the application and the database
simplifies management and ensures both components meet performance requirements.
Reference:
Amazon EC2 Instance Types
SAP on AWS

Question 6

A company plans to rehost an application to Amazon EC2 instances that use Amazon Elastic Block
Store (Amazon EBS) as the attached storage.
A solutions architect must design a solution to ensure that all newly created Amazon EBS volumes
are encrypted by default. The solution must also prevent the creation of unencrypted EBS volumes.
Which solution will meet these requirements?

  • A. Configure the EC2 account attributes to always encrypt new EBS volumes.
  • B. Use AWS Config. Configure the encrypted-volumes identifier. Apply the default AWS Key Management Service (AWS KMS) key.
  • C. Configure AWS Systems Manager to create encrypted copies of the EBS volumes. Reconfigure the EC2 instances to use the encrypted volumes.
  • D. Create a customer managed key in AWS Key Management Service (AWS KMS). Configure AWS Migration Hub to use the key when the company migrates workloads.
Answer:

A


Explanation:
EC2 Account Attributes: Amazon EC2 provides a per-Region account setting that automatically
encrypts new EBS volumes. Once enabled, all new volumes created in that Region are encrypted by
default.
Configuration Steps:
Go to the EC2 Dashboard.
Select "Account Attributes" and then "EBS encryption".
Enable default EBS encryption and select the default AWS KMS key or a customer-managed key.
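The same setting can be applied with the CLI (the KMS key ARN is a placeholder; the key flag is optional):

```shell
# Turn on EBS encryption by default for the current Region.
aws ec2 enable-ebs-encryption-by-default

# Optionally point default encryption at a customer managed key.
aws ec2 modify-ebs-default-kms-key-id \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-EXAMPLE

# Verify the setting.
aws ec2 get-ebs-encryption-by-default
```

Because the setting is per-Region, it must be enabled in every Region the company uses.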
Prevention of Unencrypted Volumes: By setting this account attribute, you ensure that it is not
possible to create unencrypted EBS volumes, thereby enforcing compliance with security
requirements.
Operational Efficiency: This solution requires minimal configuration changes and provides automatic
enforcement of encryption policies, reducing operational overhead.
Reference:
Amazon EC2 Default EBS Encryption

Question 7

A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond
latency. The company has a high performance computing (HPC) environment in its data center and
wants to expand its forecasting capabilities.
A solutions architect must identify a highly available cloud storage solution that can handle large
amounts of sustained throughput. Files that are stored in the solution should be accessible to
thousands of compute instances that will simultaneously access and process the entire dataset.
What should the solutions architect do to meet these requirements?

  • A. Use Amazon FSx for Lustre scratch file systems
  • B. Use Amazon FSx for Lustre persistent file systems.
  • C. Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.
  • D. Use Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.
Answer:

B


Explanation:
Amazon FSx for Lustre: Lustre is a high-performance file system designed for workloads that require
fast storage with sustained high throughput and low latency. It integrates with Amazon S3, making it
suitable for HPC environments.
Persistent File Systems:
Persistent Storage: Suitable for long-term storage and recurrent use, providing durability and
availability.
High Throughput and Low Latency: Persistent Lustre file systems can handle large amounts of data
with sub-millisecond latency, meeting the needs of high-performance computing workloads.
Simultaneous Access: FSx for Lustre allows thousands of compute instances to access and process
large datasets concurrently, ensuring that the high volume of data is handled efficiently.
Highly Available: FSx for Lustre is designed to provide high availability and is managed by AWS,
reducing the operational burden.
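Creating a persistent file system can be sketched as follows (the subnet ID, capacity, and the PERSISTENT_2 deployment type with its per-unit throughput are illustrative assumptions, not prescribed values):

```shell
# Create a persistent FSx for Lustre file system for the HPC cluster.
aws fsx create-file-system \
  --file-system-type LUSTRE \
  --storage-capacity 12000 \
  --subnet-ids subnet-0abc1234 \
  --lustre-configuration DeploymentType=PERSISTENT_2,PerUnitStorageThroughput=250
```

Each compute instance then mounts the file system with the open-source Lustre client, so thousands of nodes share one POSIX namespace.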
Reference:
Amazon FSx for Lustre
High-Performance Computing on AWS

Question 8

A company plans to run a high performance computing (HPC) workload on Amazon EC2 instances.
The workload requires low-latency network performance and high network throughput with tightly
coupled node-to-node communication.
Which solution will meet these requirements?

  • A. Configure the EC2 instances to be part of a cluster placement group
  • B. Launch the EC2 instances with Dedicated Instance tenancy.
  • C. Launch the EC2 instances as Spot Instances.
  • D. Configure an On-Demand Capacity Reservation when the EC2 instances are launched.
Answer:

A


Explanation:
Cluster Placement Group: This type of placement group is designed to provide low-latency network
performance and high throughput by grouping instances within a single Availability Zone. It is ideal
for applications that require tightly coupled node-to-node communication.
Configuration:
When launching EC2 instances, specify the option to launch them in a cluster placement group.
This ensures that the instances are physically located close to each other, reducing latency and
increasing network throughput.
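These two steps map directly to CLI calls (the AMI ID, key name, and instance type are placeholders; the type chosen should support enhanced networking):

```shell
# Create the cluster placement group.
aws ec2 create-placement-group \
  --group-name hpc-cluster \
  --strategy cluster

# Launch tightly coupled nodes into the group.
aws ec2 run-instances \
  --image-id ami-0abc1234 \
  --instance-type c5n.18xlarge \
  --count 8 \
  --placement GroupName=hpc-cluster \
  --key-name hpc-key
```

Launching all nodes in a single request, as above, also reduces the chance of an insufficient-capacity error for the group.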
Benefits:
Low-Latency Communication: Instances in a cluster placement group benefit from enhanced
networking capabilities, enabling low-latency communication.
High Network Throughput: The network performance within a cluster placement group is optimized
for high throughput, which is essential for HPC workloads.
Reference:
Placement Groups
High Performance Computing on AWS

Question 9

A company needs a secure connection between its on-premises environment and AWS. This
connection does not need high bandwidth and will handle a small amount of traffic. The connection
should be set up quickly.
What is the MOST cost-effective method to establish this type of connection?

  • A. Implement a client VPN
  • B. Implement AWS Direct Connect.
  • C. Implement a bastion host on Amazon EC2.
  • D. Implement an AWS Site-to-Site VPN connection.
Answer:

D


Explanation:
AWS Site-to-Site VPN: This provides a secure and encrypted connection between an on-premises
environment and AWS. It is a cost-effective solution suitable for low bandwidth and small traffic
needs.
Quick Setup:
Site-to-Site VPN can be quickly set up by configuring a virtual private gateway on the AWS side and a
customer gateway on the on-premises side.
It uses standard IPsec protocol to establish the VPN tunnel.
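The three AWS-side resources can be sketched as follows (the public IP and BGP ASN are placeholders for the on-premises device, and the gateway IDs come from the first two calls):

```shell
# Virtual private gateway on the AWS side.
aws ec2 create-vpn-gateway --type ipsec.1

# Customer gateway describing the on-premises VPN device.
aws ec2 create-customer-gateway \
  --type ipsec.1 \
  --public-ip 203.0.113.10 \
  --bgp-asn 65000

# The Site-to-Site VPN connection tying the two together.
aws ec2 create-vpn-connection \
  --type ipsec.1 \
  --customer-gateway-id cgw-0abc1234 \
  --vpn-gateway-id vgw-0def5678
```

After the virtual private gateway is attached to the VPC, AWS generates a downloadable configuration for the on-premises device, which is why a VPN can be up in hours while a Direct Connect circuit can take weeks to provision.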
Cost-Effectiveness: Compared to AWS Direct Connect, which requires dedicated physical connections
and higher setup costs, a Site-to-Site VPN is less expensive and easier to implement for smaller traffic
requirements.
Reference:
AWS Site-to-Site VPN

Question 10

A company is designing a new multi-tier web application that consists of the following components:
• Web and application servers that run on Amazon EC2 instances as part of Auto Scaling groups
• An Amazon RDS DB instance for data storage
A solutions architect needs to limit access to the application servers so that only the web servers can
access them. Which solution will meet these requirements?

  • A. Deploy AWS PrivateLink in front of the application servers. Configure the network ACL to allow only the web servers to access the application servers.
  • B. Deploy a VPC endpoint in front of the application servers. Configure the security group to allow only the web servers to access the application servers.
  • C. Deploy a Network Load Balancer with a target group that contains the application servers' Auto Scaling group. Configure the network ACL to allow only the web servers to access the application servers.
  • D. Deploy an Application Load Balancer with a target group that contains the application servers' Auto Scaling group. Configure the security group to allow only the web servers to access the application servers.
Answer:

D


Explanation:
Application Load Balancer (ALB): ALB is suitable for routing HTTP/HTTPS traffic to the application
servers. It provides advanced routing features and integrates well with Auto Scaling groups.
Target Group Configuration:
Create a target group for the application servers and register the Auto Scaling group with this target
group.
Configure the ALB to forward requests from the web servers to the application servers.
Security Group Setup:
Configure the security group of the application servers to only allow traffic from the web servers'
security group.
This ensures that only the web servers can access the application servers, meeting the requirement
to limit access.
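The security-group chaining can be expressed directly (the group IDs and port are placeholders):

```shell
# Allow the application tier to accept traffic only from the web tier's
# security group, rather than from CIDR ranges.
aws ec2 authorize-security-group-ingress \
  --group-id sg-app00001 \
  --protocol tcp \
  --port 8080 \
  --source-group sg-web00001
```

Referencing the web servers' security group as the source means the rule automatically covers instances that Auto Scaling adds or removes, with no IP address maintenance.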
Benefits:
Security: Using security groups to restrict access ensures a secure environment where only intended
traffic is allowed.
Scalability: ALB works seamlessly with Auto Scaling groups, ensuring the application can handle
varying loads efficiently.
Reference:
Application Load Balancer
Security Groups for Your VPC

Page 1 of 52 (questions 1-10 of 527)