AWS Certified Solutions Architect - Professional Exam practice test

Last update: Nov 27, 2025
Question 1

A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances
are behind an Application Load Balancer (ALB) and are configured in an Auto Scaling group. The web
application stores all blog content on an Amazon EFS volume.
The company recently added a feature for bloggers to add video to their posts, attracting 10 times
the previous user traffic. At peak times of day, users report buffering and timeout issues while
attempting to reach the site or watch videos.
Which is the MOST cost-efficient and scalable deployment that will resolve the issues for users?

  • A. Reconfigure Amazon EFS to enable maximum I/O.
  • B. Update the blog site to use instance store volumes for storage. Copy the site contents to the volumes at launch and to Amazon S3 at shutdown.
  • C. Configure an Amazon CloudFront distribution. Point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
  • D. Set up an Amazon CloudFront distribution for all site contents, and point the distribution at the ALB.
Answer:

C


Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-connection-fails/
CloudFront supports the following origin types:
  • Using an Amazon S3 bucket
  • Using a MediaStore container or a MediaPackage channel
  • Using an Application Load Balancer
  • Using a Lambda function URL
  • Using Amazon EC2 (or another custom origin)
  • Using CloudFront origin groups
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/restrict-access-to-load-balancer.html
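Answer C pairs a CloudFront distribution with an S3 origin. A minimal sketch of the request parameters for such a distribution, built as a plain dict in the shape expected by boto3's `cloudfront.create_distribution(DistributionConfig=...)`; the bucket domain and origin ID are illustrative placeholders, and many optional settings are omitted:

```python
# Sketch: minimal DistributionConfig for a CloudFront distribution with an
# S3 origin (answer C). Names are illustrative; pass the dict to boto3 as
# client.create_distribution(DistributionConfig=config).
import time

def build_distribution_config(bucket_domain: str) -> dict:
    origin_id = "s3-video-origin"  # illustrative origin ID
    return {
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Serve blog videos from S3 through CloudFront",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": origin_id,
                "DomainName": bucket_domain,  # e.g. blog-videos.s3.amazonaws.com
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": origin_id,
            "ViewerProtocolPolicy": "redirect-to-https",
            # Legacy-style forwarding/TTL settings kept minimal for the sketch.
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
        },
    }

config = build_distribution_config("blog-videos.s3.amazonaws.com")
```

Offloading video delivery to CloudFront edge caches is what removes the buffering bottleneck; EFS never sits in the video read path again.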

Question 2

A company with global offices has a single 1 Gbps AWS Direct Connect connection to a single AWS
Region. The company's on-premises network uses the connection to communicate with the
company's resources in the AWS Cloud. The connection has a single private virtual interface that
connects to a single VPC.
A solutions architect must implement a solution that adds a redundant Direct Connect connection in
the same Region. The solution also must provide connectivity to other Regions through the same pair
of Direct Connect connections as the company expands into other Regions.
Which solution meets these requirements?

  • A. Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC.
  • B. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new private virtual interface on the new connection, and connect the new private virtual interface to the single VPC.
  • C. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new public virtual interface on the new connection, and connect the new public virtual interface to the single VPC.
  • D. Provision a transit gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the transit gateway. Associate the transit gateway with the single VPC.
Answer:

A


Explanation:
A Direct Connect gateway is a globally available resource. You can create the Direct Connect gateway
in any Region and access it from all other Regions.
https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html
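A sketch of the API parameters behind answer A, using boto3-style request shapes (`create_direct_connect_gateway` and `create_private_virtual_interface`); the connection IDs, gateway ID, VLANs, and ASNs are illustrative placeholders:

```python
# Answer A as request payloads: one Direct Connect gateway, plus one private
# VIF per connection, both attached to the same gateway for redundancy.
# All IDs/ASNs/VLANs below are placeholders.

def dx_gateway_request(name: str, amazon_side_asn: int = 64512) -> dict:
    return {"directConnectGatewayName": name, "amazonSideAsn": amazon_side_asn}

def private_vif_request(connection_id: str, dxgw_id: str,
                        vlan: int, customer_asn: int) -> dict:
    return {
        "connectionId": connection_id,
        "newPrivateVirtualInterface": {
            "virtualInterfaceName": f"vif-{connection_id}",
            "vlan": vlan,
            "asn": customer_asn,
            # Attaching the VIF to the gateway (not a VGW directly) is what
            # later allows VPCs in other Regions to be associated.
            "directConnectGatewayId": dxgw_id,
        },
    }

gw = dx_gateway_request("corp-dxgw")
vif1 = private_vif_request("dxcon-primary", "dxgw-123", vlan=101, customer_asn=65000)
vif2 = private_vif_request("dxcon-secondary", "dxgw-123", vlan=102, customer_asn=65000)
```

The design point: both redundant connections terminate on one global gateway, so expanding to another Region is a gateway association rather than new physical connectivity.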

Question 3

A company has a web application that allows users to upload short videos. The videos are stored on
Amazon EBS volumes and analyzed by custom recognition software for categorization.
The website contains static content that has variable traffic with peaks in certain months. The
architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web
application and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The
company wants to re-architect the application to reduce operational overhead using AWS managed
services where possible and remove dependencies on third-party software.
Which solution meets these requirements?

  • A. Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
  • B. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
  • C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
  • D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
Answer:

C


Explanation:
Option C is correct because hosting the web application in Amazon S3, storing the uploaded videos in
Amazon S3, and using S3 event notifications to publish events to the SQS queue reduces the
operational overhead of managing EC2 instances and EBS volumes. Amazon S3 can serve static
content such as HTML, CSS, JavaScript, and media files directly from S3 buckets. Amazon S3 can also
trigger AWS Lambda functions through S3 event notifications when new objects are created or
existing objects are updated or deleted. AWS Lambda can process the SQS queue with an AWS
Lambda function that calls the Amazon Rekognition API to categorize the videos. This solution
eliminates the need for custom recognition software and third-party dependencies.
Reference:
1: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
2: https://aws.amazon.com/efs/pricing/
3: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
4: https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
5: https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html
6: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
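The Lambda side of answer C can be sketched as follows: the function receives SQS messages whose bodies are S3 event notifications, and builds one Rekognition `StartLabelDetection` request per uploaded video. The real handler would pass each payload to boto3's Rekognition client; here the sketch only constructs the payloads, and the bucket/key names in the sample event are made up:

```python
# Parse SQS-wrapped S3 event notifications and build Rekognition
# StartLabelDetection payloads for each uploaded video (answer C sketch).
import json

def rekognition_requests(event: dict) -> list:
    requests = []
    for record in event.get("Records", []):
        # The SQS message body carries the S3 notification JSON.
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            requests.append({
                "Video": {"S3Object": {"Bucket": bucket, "Name": key}},
            })
    return requests

# Example SQS event wrapping a single S3 "ObjectCreated" notification:
sample_event = {
    "Records": [{
        "body": json.dumps({
            "Records": [{
                "s3": {"bucket": {"name": "uploads"}, "object": {"key": "cats.mp4"}}
            }]
        })
    }]
}
reqs = rekognition_requests(sample_event)
```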

Question 4

A company has a serverless application comprised of Amazon CloudFront, Amazon API Gateway, and
AWS Lambda functions. The current deployment process of the application code is to create a new
version number of the Lambda function and run an AWS CLI script to update it. If the new function
version has errors, another CLI script reverts by deploying the previous working version of the
function. The company would like to decrease the time to deploy new versions of the application
logic provided by the Lambda functions, and also reduce the time to detect and revert when errors
are identified.
How can this be accomplished?

  • A. Create and deploy nested AWS CloudFormation stacks with the parent stack consisting of the AWS CloudFront distribution and API Gateway, and the child stack containing the Lambda function. For changes to Lambda, create an AWS CloudFormation change set and deploy; if errors are triggered, revert the AWS CloudFormation change set to the previous version.
  • B. Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. Rollback if Amazon CloudWatch alarms are triggered.
  • C. Refactor the AWS CLI scripts into a single script that deploys the new Lambda version. When deployment is completed, the script tests execute. If errors are detected, revert to the previous Lambda version.
  • D. Create and deploy an AWS CloudFormation stack that consists of a new API Gateway endpoint that references the new Lambda version. Change the CloudFront origin to the new API Gateway endpoint, monitor errors and if detected, change the AWS CloudFront origin to the previous API Gateway endpoint.
Answer:

B


Explanation:
https://aws.amazon.com/about-aws/whats-new/2017/11/aws-lambda-supports-traffic-shifting-and-
phased-deployments-with-aws-codedeploy/
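Answer B relies on SAM's `AutoPublishAlias` plus a `DeploymentPreference`. A sketch of that section of the template, expressed here as a Python dict mirroring the YAML; the handler, runtime, alarm, and hook function names are illustrative assumptions:

```python
# Mirrors the Properties of an AWS::Serverless::Function in a SAM template
# (answer B). CodeDeploy shifts traffic gradually between the old and new
# Lambda version aliases and rolls back when the alarm fires.
sam_function_properties = {
    "Handler": "app.handler",     # illustrative
    "Runtime": "python3.12",      # illustrative
    "AutoPublishAlias": "live",   # publish a new version on every deploy
    "DeploymentPreference": {
        "Type": "Canary10Percent5Minutes",  # built-in CodeDeploy strategy
        "Alarms": ["ErrorsAlarm"],          # CloudWatch alarm triggers rollback
        "Hooks": {
            "PreTraffic": "PreTrafficCheckFunction",    # pre-traffic test fn
            "PostTraffic": "PostTrafficCheckFunction",  # post-traffic test fn
        },
    },
}
```

This replaces both CLI scripts: the gradual shift bounds the blast radius of a bad version, and rollback on alarm is automatic rather than a manual revert.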

Question 5

A company is using an on-premises Active Directory service for user authentication. The company
wants to use the same authentication service to sign in to the company's AWS accounts, which are
using AWS Organizations. AWS Site-to-Site VPN connectivity already exists between the on-premises
environment and all the company's AWS accounts.
The company's security policy requires conditional access to the accounts based on user groups and
roles. User identities must be managed in a single location.
Which solution will meet these requirements?

  • A. Configure AWS Single Sign-On (AWS SSO) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross- domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access controls (ABACs).
  • B. Configure AWS Single Sign-On (AWS SSO) by using AWS SSO as an identity source. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using AWS SSO permission sets.
  • C. In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use a SAML 2.0 identity provider. Provision IAM users that are mapped to the federated users. Grant access that corresponds to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM users.
  • D. In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use an OpenID Connect (OIDC) identity provider. Provision IAM roles that grant access to the AWS account for the federated users that correspond to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM roles.
Answer:

D


Explanation:
https://aws.amazon.com/blogs/aws/new-attributes-based-access-control-with-aws-single-sign-on/

Question 6

A software company has deployed an application that consumes a REST API by using Amazon API
Gateway, AWS Lambda functions, and an Amazon DynamoDB table. The application is showing an
increase in the number of errors during PUT requests. Most of the PUT calls come from a small
number of clients that are authenticated with specific API keys.
A solutions architect has identified that a large number of the PUT requests originate from one client.
The API is noncritical, and clients can tolerate retries of unsuccessful calls. However, the errors are
displayed to customers and are causing damage to the API's reputation.
What should the solutions architect recommend to improve the customer experience?

  • A. Implement retry logic with exponential backoff and irregular variation in the client application. Ensure that the errors are caught and handled with descriptive error messages.
  • B. Implement API throttling through a usage plan at the API Gateway level. Ensure that the client application handles code 429 replies without error.
  • C. Turn on API caching to enhance responsiveness for the production stage. Run 10-minute load tests. Verify that the cache capacity is appropriate for the workload.
  • D. Implement reserved concurrency at the Lambda function level to provide the resources that are needed during sudden increases in traffic.
Answer:

B


Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/aws-batch-requests-error/
https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-429-limit/
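The client-side half of answer B is handling 429 replies gracefully. A minimal sketch of retry with exponential backoff and jitter, with the HTTP call abstracted as a callable so the logic stays library-agnostic:

```python
# Retry with exponential backoff and "full jitter" when the API returns
# HTTP 429 (throttled by an API Gateway usage plan). do_request is any
# callable returning (status_code, body).
import random
import time

def call_with_backoff(do_request, max_attempts=5, base=0.5, cap=30.0):
    for attempt in range(max_attempts):
        status, body = do_request()
        if status != 429:  # success, or a non-throttle error to surface
            return status, body
        # Sleep a random amount up to the exponential bound, capped.
        delay = random.uniform(0, min(cap, base * 2 ** attempt))
        time.sleep(delay)
    return status, body  # last throttled response after exhausting retries

# Simulated endpoint that throttles the first two calls, then succeeds:
calls = iter([(429, ""), (429, ""), (200, "ok")])
status, body = call_with_backoff(lambda: next(calls), base=0.001)
```

With throttling enforced server-side and 429s absorbed client-side, the noisy client degrades to slower retries instead of visible errors.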

Question 7

A company is running a data-intensive application on AWS. The application runs on a cluster of
hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store
200 TB of data. The application reads and modifies the data on the shared file system and generates a
report. The job runs once monthly, reads a subset of the files from the shared file system, and takes
about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances
that host the shared file system run continuously. The compute and storage instances are all in the
same AWS Region.
A solutions architect needs to reduce costs by replacing the shared file system instances. The file
system must provide high performance access to the needed data for the duration of the 72-hour
run.
Which solution will provide the LARGEST overall cost reduction while meeting these requirements?

  • A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  • B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
  • C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  • D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
Answer:

A


Explanation:
https://aws.amazon.com/blogs/storage/new-enhancements-for-moving-data-between-amazon-fsx-
for-lustre-and-amazon-s3/
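The monthly setup in answer A can be sketched as a single `create_file_system` call. The dict below mirrors the shape of boto3's `fsx.create_file_system` parameters; the bucket name, subnet ID, and capacity are placeholders:

```python
# Answer A sketch: a scratch FSx for Lustre file system linked to the S3
# bucket for the 72-hour run, then deleted. Shaped like the parameters of
# boto3's fsx.create_file_system; IDs and sizes are placeholders.
def lustre_request(bucket: str, subnet_id: str, capacity_gib: int = 1200) -> dict:
    return {
        "FileSystemType": "LUSTRE",
        "StorageCapacity": capacity_gib,
        "SubnetIds": [subnet_id],
        "LustreConfiguration": {
            # Scratch deployment: cheapest option, no replication - fine for
            # a temporary file system deleted after the job completes.
            "DeploymentType": "SCRATCH_2",
            # Lazy loading: metadata is imported up front, but file contents
            # are fetched from S3 only when first read - ideal because the
            # job touches only a subset of the 200 TB.
            "ImportPath": f"s3://{bucket}",
        },
    }

req = lustre_request("report-data", "subnet-0abc")
```

The cost win comes from paying for high-performance storage only ~72 hours per month, with S3 Intelligent-Tiering holding the data the rest of the time.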

Question 8

A company is developing a new service that will be accessed using TCP on a static port. A solutions
architect must ensure that the service is highly available, has redundancy across Availability Zones,
and is accessible using the DNS name myservice.com, which is publicly accessible. The service must
use fixed address assignments so other companies can add the addresses to their allow lists.
Assuming that resources are deployed in multiple Availability Zones in a single Region, which
solution will meet these requirements?

  • A. Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static TCP port. Register the EC2 instances with the NLB. Create a new name server record set named myservice.com, and assign the Elastic IP addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their allow lists.
  • B. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create a Network Load Balancer (NLB) and expose the TCP port. Create a target group and assign the ECS cluster name to the NLB. Create a new A record set named myservice.com, and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of the ECS cluster to the other companies to add to their allow lists.
  • C. Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named myservice.com, and assign the NLB DNS name to the record set.
  • D. Create an Amazon ECS cluster and a service definition for the application. Create and assign a public IP address for each host in the cluster. Create an Application Load Balancer (ALB) and expose the static TCP port. Create a target group and assign the ECS service definition name to the ALB. Create a new CNAME record set and associate the public IP addresses with the record set. Provide the Elastic IP addresses of the Amazon EC2 instances to the other companies to add to their allow lists.
Answer:

C


Explanation:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP
addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances
with the NLB. Create a new A (alias) record set named myservice.com, and assign the NLB DNS name
to the record set. Because the A record uses the NLB as its alias target, traffic is routed through the
NLB, which automatically directs requests to healthy instances based on health checks. It also
provides fixed address assignments, because the other companies can add the NLB's Elastic IP
addresses to their allow lists.
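A sketch of the two API payloads behind answer C: an NLB created with one Elastic IP per Availability Zone (the `SubnetMappings` form of `elbv2.create_load_balancer`), and a Route 53 alias A record pointing at the NLB. Subnet IDs, allocation IDs, the hosted zone ID, and DNS names are all placeholders:

```python
# Answer C as request payloads: NLB pinned to one EIP per AZ, plus a
# Route 53 alias record for myservice.com. All IDs below are placeholders.
def nlb_request(name: str, subnet_eip_pairs: list) -> dict:
    return {
        "Name": name,
        "Type": "network",
        "Scheme": "internet-facing",
        # One {SubnetId, AllocationId} mapping per AZ pins a fixed Elastic IP
        # to the NLB in that zone - these are the addresses to allow-list.
        "SubnetMappings": [
            {"SubnetId": s, "AllocationId": a} for s, a in subnet_eip_pairs
        ],
    }

def alias_record_change(nlb_dns: str, nlb_zone_id: str) -> dict:
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "myservice.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": nlb_zone_id,  # the NLB's hosted zone ID
                    "DNSName": nlb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    }

nlb = nlb_request("myservice-nlb",
                  [("subnet-a", "eipalloc-a"), ("subnet-b", "eipalloc-b")])
change = alias_record_change("nlb-example.elb.example.amazonaws.com", "ZEXAMPLE")
```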

Question 9

A company uses an on-premises data analytics platform. The system is highly available in a fully
redundant configuration across 12 servers in the company's data center.
The system runs scheduled jobs, both hourly and daily, in addition to one-time requests from users.
Scheduled jobs can take between 20 minutes and 2 hours to finish running and have tight SLAs. The
scheduled jobs account for 65% of the system usage. User jobs typically finish running in less than 5
minutes and have no SLA.

  • A. Split the 12 instances across two Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run four instances in each Availability Zone as Spot Instances.
  • B. Split the 12 instances across three Availability Zones in the chosen AWS Region. In one of the Availability Zones, run all four instances as On-Demand Instances with Capacity Reservations. Run the remaining instances as Spot Instances.
  • C. Split the 12 instances across three Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with a Savings Plan. Run two instances in each Availability Zone as Spot Instances.
  • D. Split the 12 instances across three Availability Zones in the chosen AWS Region. Run three instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run one instance in each Availability Zone as a Spot Instance.
Answer:

D


Explanation:
By splitting the 12 instances across three Availability Zones, the system can maintain high availability
and availability of resources in case of a failure. Option D also uses a combination of On-Demand
Instances with Capacity Reservations and Spot Instances, which allows for scheduled jobs to be run
on the On-Demand instances with guaranteed capacity, while also taking advantage of the cost
savings from Spot Instances for the user jobs which have lower SLA requirements.
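The reasoning above can be checked with a quick calculation: for each option, how much On-Demand (guaranteed) capacity survives the loss of one Availability Zone? Layouts are written as `(on_demand, spot)` per AZ, taken from the options:

```python
# Worst-case guaranteed capacity after a single-AZ failure, per option.
layouts = {
    "A": [(2, 4), (2, 4)],             # 2 AZs only
    "B": [(4, 0), (0, 4), (0, 4)],     # all On-Demand concentrated in one AZ
    "C": [(2, 2), (2, 2), (2, 2)],
    "D": [(3, 1), (3, 1), (3, 1)],
}

def worst_case_on_demand(layout):
    total_od = sum(od for od, _ in layout)
    # Worst case: lose the AZ holding the most On-Demand instances.
    return total_od - max(od for od, _ in layout)

worst = {opt: worst_case_on_demand(layout) for opt, layout in layouts.items()}
# Option D retains 6 On-Demand instances after any single-AZ failure;
# option B can drop to 0 guaranteed instances for the tight-SLA jobs.
```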

Question 10

A security engineer determined that an existing application retrieves credentials to an Amazon RDS
for MySQL database from an encrypted file in Amazon S3. For the next version of the application, the
security engineer wants to implement the following application design changes to improve security:
The database must use strong, randomly generated passwords stored in a secure AWS managed
service.
The application resources must be deployed through AWS CloudFormation.
The application must rotate credentials for the database every 90 days.
A solutions architect will generate a CloudFormation template to deploy the application.
Which resources specified in the CloudFormation template will meet the security engineer's
requirements with the LEAST amount of operational overhead?

  • A. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Specify a Secrets Manager RotationSchedule resource to rotate the database password every 90 days.
  • B. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Create an AWS Lambda function resource to rotate the database password. Specify a Parameter Store RotationSchedule resource to rotate the database password every 90 days.
  • C. Generate the database password as a secret resource using AWS Secrets Manager. Create an AWS Lambda function resource to rotate the database password. Create an Amazon EventBridge scheduled rule resource to trigger the Lambda function password rotation every 90 days.
  • D. Generate the database password as a SecureString parameter type using AWS Systems Manager Parameter Store. Specify an AWS AppSync DataSource resource to automatically rotate the database password every 90 days.
Answer:

B


Explanation:
https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-
functions-by-using-aws-secrets-manager/
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_cloudformation.html
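The Secrets Manager documentation linked above maps to two CloudFormation resources, sketched here as Python dicts mirroring the template; the logical IDs and the rotation Lambda reference are illustrative:

```python
# Sketch of the CloudFormation resources for a generated, auto-rotated
# database password: AWS::SecretsManager::Secret with GenerateSecretString,
# plus AWS::SecretsManager::RotationSchedule with a 90-day rule. Logical
# IDs ("DbSecret", "RotationFn") are placeholders.
resources = {
    "DbSecret": {
        "Type": "AWS::SecretsManager::Secret",
        "Properties": {
            "GenerateSecretString": {
                # Strong random password merged into a username/password JSON.
                "SecretStringTemplate": '{"username": "admin"}',
                "GenerateStringKey": "password",
                "PasswordLength": 32,
                "ExcludeCharacters": '"@/\\',
            },
        },
    },
    "DbSecretRotation": {
        "Type": "AWS::SecretsManager::RotationSchedule",
        "Properties": {
            "SecretId": {"Ref": "DbSecret"},
            "RotationLambdaARN": {"Fn::GetAtt": ["RotationFn", "Arn"]},
            "RotationRules": {"AutomaticallyAfterDays": 90},
        },
    },
}
```

Declaring the rotation schedule as a resource keeps the whole password lifecycle in the template, with no hand-wired EventBridge rules to maintain.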

Page 1 out of 56
Viewing questions 1-10 out of 569