AWS Solutions Architect Associate — Free Practice Exam

Chapter 26: AWS Solutions Architect Practice Exam, including 65 questions, precise diagrams, and clear images.

Gonzalo Fernandez Plaza
16 min read · Jun 14, 2023

During these last months, we have been studying a lot of AWS services. In this chapter, we will take random questions from each chapter to practice for the official exam. Remember, we need around 75% to pass it! If we don’t reach this percentage, I recommend you reread the course chapters and try the test again. Good luck, and go for it!

Complete AWS Solutions Architect Associate Practice Exam.

This exam will be divided into three parts. Here you can find the links to the other parts:

Remember that you can find this exam FOR FREE at FullCertified. Take it now with our exam simulator!

Remember that you can find all the chapters from the course at the following link:

EXAM QUESTIONS WITH SOLUTIONS

1-: We need to design a managed multi-region database with replication. The requirements indicate that the master database should be in the EU (Ireland) region, and databases will be located in 4 other regions to service local read traffic. Which AWS service can deliver these requirements with a cost-effective and secure approach?

  1. RDS with Multi-AZ
  2. RDS with cross-region Read Replicas
  3. EC2 instances with EBS replication
  4. EC2 instances with CloudFront

Solution: 2. This is important to understand. RDS with Multi-AZ provides high availability within a single region but does not support replication across regions. In this case, we’d need Cross-region Read Replicas, which allow you to replicate your database to multiple other regions. This will distribute read traffic to the local replicas in each region, providing low-latency access to the data. You can see this approach in the following image:

Diagram explaining how RDS with cross-region Read Replicas works.
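As a sketch of this setup, a cross-Region read replica can be created with the AWS CLI; the identifiers, account number, and target Region below are illustrative placeholders:

```shell
# Create a read replica in eu-central-1 from a source DB in eu-west-1 (Ireland).
# Cross-Region replicas must reference the source by its full ARN.
aws rds create-db-instance-read-replica \
  --db-instance-identifier mydb-replica-frankfurt \
  --source-db-instance-identifier arn:aws:rds:eu-west-1:111122223333:db:mydb \
  --region eu-central-1
```

Repeating the command for each of the 4 read Regions gives local, low-latency reads while writes stay on the master in Ireland.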

2-: Data stored in Amazon Glacier must be delivered within 5 minutes of a retrieval request. Which features in Amazon Glacier can help meet this requirement?

  1. Standard retrieval.
  2. Bulk retrieval.
  3. Expedited retrieval.
  4. Vault Lock.

Solution: 3. Amazon Glacier offers three options for access to archives: expedited, standard, and bulk retrievals. Expedited retrievals are designed to complete quickly, within 1–5 minutes, which fits the requirement in the question.
The retrieval request time for the standard retrieval is around 3–5 hours, and bulk retrieval is approximately 5–12 hours.
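For reference, an expedited retrieval is requested per object; here is a hedged AWS CLI sketch (bucket and key are placeholders):

```shell
# Ask S3/Glacier to restore an archived object using the Expedited tier (1-5 min),
# keeping the temporary copy available for 1 day.
aws s3api restore-object \
  --bucket example-archive-bucket \
  --key backups/device-image-001.dat \
  --restore-request 'Days=1,GlacierJobParameters={Tier=Expedited}'
```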

3-: A web application is deployed in multiple regions behind an Application Load Balancer. We need routing to the closest region and automatic failover, and traffic should traverse the AWS global network for consistent performance. How can this be achieved?

  1. Place an EC2 Proxy in front of the ALB and configure automatic failover
  2. Configure AWS Global Accelerator and configure the ALBs as targets
  3. Create alias records for each ALB and configure a latency-based routing policy
  4. Use a CloudFront distribution with multiple custom origins in each region and configure it for high availability

Solution: 2. AWS Global Accelerator is a networking service that improves the performance of your users’ traffic by up to 60% using Amazon Web Services’ global network infrastructure. It lets users connect to our applications faster by entering the AWS global network at a nearby Edge Location, which then routes traffic over that network to the closest healthy endpoint, providing automatic failover across regions. You can see how AWS Global Accelerator works in the following image:

Diagram explaining how AWS Global Accelerator works.
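A minimal sketch of this configuration with the AWS CLI (the Global Accelerator API is called in us-west-2, and the ARNs are placeholders):

```shell
# 1. Create the accelerator (provides two static anycast IP addresses).
aws globalaccelerator create-accelerator --name web-accelerator --region us-west-2

# 2. Add a TCP listener for HTTPS traffic.
aws globalaccelerator create-listener \
  --accelerator-arn <accelerator-arn> \
  --protocol TCP --port-ranges FromPort=443,ToPort=443 \
  --region us-west-2

# 3. Register each regional ALB as an endpoint group (repeat per Region).
aws globalaccelerator create-endpoint-group \
  --listener-arn <listener-arn> \
  --endpoint-group-region eu-west-1 \
  --endpoint-configurations EndpointId=<alb-arn>,Weight=128 \
  --region us-west-2
```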

4-: A Solutions Architect must design a CSV storage solution for incoming billing reports. The data will be analyzed infrequently and discarded after 30 days. Which combination of services will be MOST cost-effective in meeting these requirements?

  1. Write the files to an S3 bucket and use Amazon Athena to query the data.
  2. Import the logs to an Amazon Redshift cluster.
  3. Use AWS Data Pipeline to import the logs into a DynamoDB table.
  4. Import the logs into an RDS MySQL instance.

Solution: 1. Amazon Athena is an interactive query service that easily analyzes data in Amazon S3 using standard SQL. It is a serverless service, and you only pay for the queries you run, so it’s a cost-effective solution for infrequent data analysis. You can see what the UI of Amazon Athena looks like in the following image:

Example of how to query Amazon S3 files using Amazon Athena.
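To make this concrete, here is a hedged sketch: an Athena query over the bucket (assuming a `billing_reports` table was already defined over the CSVs) plus an S3 lifecycle rule that discards the files after 30 days. The bucket names and schema are invented for illustration:

```shell
# Run an ad-hoc SQL query over the CSV files in S3.
aws athena start-query-execution \
  --query-string "SELECT account_id, SUM(amount) FROM billing_reports GROUP BY account_id" \
  --query-execution-context Database=billing \
  --result-configuration OutputLocation=s3://example-athena-results/

# Expire the reports automatically after 30 days.
aws s3api put-bucket-lifecycle-configuration \
  --bucket example-billing-reports \
  --lifecycle-configuration '{"Rules":[{"ID":"expire-30d","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":30}}]}'
```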

5-: Which AWS service should a Solutions Architect recommend to a company’s development team that wants to upload a Java application’s “.war” file and have the provisioning and management of the underlying resources it runs on handled for them?

  1. AWS Elastic Beanstalk
  2. AWS CodeDeploy
  3. AWS CloudFormation
  4. AWS OpsWorks

Solution: 1. AWS Elastic Beanstalk is a service that makes deploying and scaling web applications and services easy. It abstracts away the underlying infrastructure management and allows you to focus on deploying your application code. Elastic Beanstalk supports various programming languages, including Java, and provides a platform to upload and deploy your code to a fleet of Amazon EC2 instances.
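As a rough sketch, a “.war” file can also be deployed from the CLI; the application, environment, and bucket names are placeholders:

```shell
# Upload the build artifact and register it as a new application version.
aws s3 cp app.war s3://example-eb-artifacts/app-v1.war
aws elasticbeanstalk create-application-version \
  --application-name my-java-app \
  --version-label v1 \
  --source-bundle S3Bucket=example-eb-artifacts,S3Key=app-v1.war

# Roll the running environment to the new version; Beanstalk manages the EC2 fleet.
aws elasticbeanstalk update-environment \
  --environment-name my-java-env --version-label v1
```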

6-: Some images must be encrypted at rest in Amazon S3. The company doesn’t want to spend time managing and rotating the keys, but it still wants to control who can access them. What should a solutions architect use to accomplish this?

  1. Server-Side Encryption with keys stored in an S3 bucket.
  2. Server-Side Encryption with Customer-Provided Keys (SSE-C).
  3. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3).
  4. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS).

Solution: 4. AWS Key Management Service (AWS KMS) is a key management system scaled for the cloud, and we can use it to encrypt our Amazon S3 objects. Using SSE-KMS will allow you to use KMS to handle key management, including key creation, rotation, disabling, and deletion.
SSE-C requires the company to manage the keys itself. SSE-S3 won’t provide the same granular control over who can access the keys. Storing encryption keys in an S3 bucket is not secure, and you should never do it.
You can see how SSE-KMS works in the following image:

How SSE-KMS encryption works.
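Here is a hedged sketch of enabling SSE-KMS by default on the bucket (the bucket name and key ARN are placeholders); who can use the key is then controlled through the KMS key policy and IAM:

```shell
# Encrypt every new object with a specific KMS key by default.
aws s3api put-bucket-encryption \
  --bucket example-images \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE-KEY-ID"
      }
    }]
  }'
```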

7-: We are designing an application that includes an Auto Scaling group of Amazon EC2 Instances running behind an Elastic Load Balancer. All the web servers must be accessible only through the Elastic Load Balancer and none directly from the Internet. How should the Architect meet these requirements?

  1. With a CloudFront distribution in front of the Elastic Load Balancer.
  2. Denying traffic from the Internet in the web server’s security group.
  3. Configure the web tier security group to allow only traffic from the Elastic Load Balancer.
  4. Install a Load Balancer on an Amazon EC2 instance.

Solution: 3. To ensure that your web servers are only accessible through the Elastic Load Balancer (ELB), you would configure the security group for your web servers to only allow inbound traffic from the Elastic Load Balancer. In a Security Group, you cannot create deny rules. Besides, denying all internet traffic would also block legitimate traffic from your ELB.
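The rule can be sketched with the AWS CLI by referencing the load balancer’s security group as the source instead of a CIDR range (both group IDs are placeholders):

```shell
# Allow HTTP into the web tier only from the ELB's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111webtier \
  --protocol tcp --port 80 \
  --source-group sg-0bbb2222elb
```

With no other inbound rules, the instances are unreachable directly from the internet.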

8-: A High-Performance Computing (HPC) application, which requires low network latency and high throughput between nodes, will be deployed in a single AZ. How should the application be deployed for the best inter-node performance?

  1. In a partition placement group.
  2. In a load balancer placement group.
  3. In a spread placement group.
  4. In a cluster placement group.

Solution: 4. The cluster placement group is the best choice for High-Performance Computing (HPC) applications that need low network latency and high network throughput, as it groups the instances within a single Availability Zone.
The spread placement group spreads instances across distinct underlying hardware (different networks and power sources) to reduce correlated failures. In contrast, the partition placement group spreads the instances across logical partitions, providing isolation at the infrastructure level. The load balancer placement group doesn’t exist.

Comparison of the different Amazon EC2 Placement Group strategies.
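For illustration, creating a cluster placement group and launching into it might look like this (the AMI, instance type, and count are placeholders):

```shell
# Create the placement group with the cluster strategy.
aws ec2 create-placement-group --group-name hpc-cluster --strategy cluster

# Launch the HPC nodes into it so they share low-latency networking.
aws ec2 run-instances \
  --image-id ami-EXAMPLE --instance-type c5n.18xlarge --count 8 \
  --placement GroupName=hpc-cluster
```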

9-: Which of the following advantages does Amazon CloudFront provide?

  1. A private network link to the AWS cloud
  2. Automated deployment of resources
  3. Provides serverless computing services
  4. Reduced latency

Solution: 4. Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data to customers globally with low latency. CloudFront uses a global network of edge locations worldwide to serve content with low latency.

CloudFront Edge Locations around the World.

10-: How can a systems administrator specify a script to run on an EC2 instance during launch?

  1. Metadata.
  2. User Data.
  3. Launch Template.
  4. AWS ECS.

Solution: 2. User Data is a feature in AWS EC2 that allows you to run scripts or set configuration details upon instance launch. This can be used to automate certain tasks like installing software, configuring settings, or even starting services when the instance is launched.

Script created using Amazon EC2 User Data.
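A typical user data script is just a shell script that runs as root on first boot; this minimal example (Amazon Linux assumed) installs and starts a web server:

```shell
#!/bin/bash
# Runs once, as root, when the instance first launches.
yum update -y
yum install -y httpd
systemctl enable --now httpd
echo "Deployed via user data" > /var/www/html/index.html
```

It can be supplied in the console’s User data field or with `aws ec2 run-instances ... --user-data file://bootstrap.sh`.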

11-: A database currently uses an in-memory cache. We must deliver a solution that supports high availability and replication for the caching layer. Which service should we use?

  1. Amazon ElastiCache Redis
  2. Amazon RDS Multi-AZ
  3. Amazon ElastiCache Memcached
  4. Amazon Redshift

Solution: 1. Amazon ElastiCache is a web service that makes it easy to deploy and operate an in-memory cache in the cloud. ElastiCache provides two caching engines: Redis and Memcached. However, only ElastiCache Redis provides high availability and replication. You can see the differences between the two different engines in the following image:

Comparison between ElastiCache Memcached and ElastiCache Redis.
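As a hedged sketch, a highly available Redis cache with one replica and automatic failover could be created like this (the identifiers and node type are placeholders):

```shell
# Primary plus one replica, with Multi-AZ automatic failover enabled.
aws elasticache create-replication-group \
  --replication-group-id app-cache \
  --replication-group-description "HA cache layer" \
  --engine redis \
  --cache-node-type cache.r6g.large \
  --num-cache-clusters 2 \
  --automatic-failover-enabled \
  --multi-az-enabled
```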

12-: Three AWS accounts are owned by the same company but in different regions. Account Z has two AWS Direct Connect connections to two separate company offices. Accounts A and B require routing across account Z’s Direct Connect connections to each company office. A Solutions Architect has created an AWS Direct Connect gateway in account Z. What should the Architect do next to enable the required routing?

  1. Associate the Direct Connect gateway to a transit gateway in each region
  2. Associate the Direct Connect gateway to just one transit gateway, and connect it to each region
  3. Associate the Direct Connect gateway to a virtual private gateway in accounts A and B
  4. Create a VPC Endpoint to the Direct Connect gateway in accounts A and B

Solution: 3. AWS Direct Connect is a cloud service that links your on-premises network directly to your AWS VPC, bypassing the internet to deliver more consistent, lower-latency performance. It’s important to know that this connection is private but not encrypted.
You can use an AWS Direct Connect gateway to connect your AWS Direct Connect connection over a private virtual interface to one or more VPCs in any account located in the same or different Regions. You associate a Direct Connect gateway with the virtual private gateway for the VPC. Then, you create a private virtual interface for your AWS Direct Connect connection to the Direct Connect gateway.
You can see this behavior in the following image:

How AWS Direct Connect Gateway works.

13-: Amazon EC2 instances run between 10 am and 6 pm Monday-Thursday in a development environment. Production instances run 24/7. Which pricing models should be used? (Select TWO)

  1. Use Spot instances for the development environment.
  2. Use scheduled reserved instances for the development environment.
  3. Use Reserved instances for the production environment.
  4. Use Reserved instances for the development environment.
  5. Use On-Demand instances for the production environment.

Solution: 2, 3. Scheduled Reserved Instances allow you to reserve capacity for your Amazon EC2 instances in specific time windows. They are a good choice for workloads that do not run continuously but do run on a regular schedule, so this is ideal for the development environment.
On the other hand, Reserved instances are a good choice for workloads that run continuously. They provide a significant discount (up to 75%) compared to On-Demand instance pricing. This is the best option for the production environment, where instances run 24/7.

14-: The Systems Administrators in a company currently use Chef for configuration management of on-premise servers. Which AWS service can a Solutions Architect use to provide a fully-managed configuration management service that will enable the use of existing Chef cookbooks?

  1. Opsworks Backups
  2. Opsworks Volumes
  3. OpsWorks for Chef Automate
  4. Opsworks Stacks

Solution: 3. AWS OpsWorks for Chef Automate is a fully managed configuration management service that hosts Chef Automate, a suite of automation tools from Chef for configuration management, compliance and security, and continuous deployment. When they ask about Chef for configuration management, it will always be OpsWorks for Chef Automate.

15-: A company is deploying a big data and analytics workload that will run from thousands of EC2 instances across multiple AZs. The company must store the data on a shared storage layer that can be mounted and accessed concurrently by all EC2 instances. Extremely high throughput is required. What storage layer would be most suitable for this requirement?

  1. Amazon EFS in General Purpose mode.
  2. Amazon EBS PIOPS.
  3. Amazon EFS in Max I/O mode.
  4. Amazon S3.

Solution: 3. For use cases requiring high levels of throughput from many EC2 instances, you should use Amazon EFS in Max I/O mode, as it’s optimized to provide the highest possible throughput.
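Creating such a file system can be sketched as follows; `maxIO` is the performance mode, and the IDs and Region in the mount command are placeholders:

```shell
# Create the shared file system in Max I/O mode.
aws efs create-file-system --performance-mode maxIO --creation-token hpc-shared

# Expose it in each subnet the instances use.
aws efs create-mount-target --file-system-id fs-EXAMPLE --subnet-id subnet-EXAMPLE

# On every EC2 instance, mount it concurrently over NFS.
sudo mount -t nfs4 -o nfsvers=4.1 fs-EXAMPLE.efs.eu-west-1.amazonaws.com:/ /mnt/shared
```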

16-: Which is the MOST cost-effective storage option for a service that provides offsite backups for different devices and has to support millions of customers, in which the images will be retrieved infrequently but must be immediately available for retrieval?

  1. Amazon S3 Standard-Infrequent Access.
  2. Amazon Glacier with expedited retrievals.
  3. Amazon S3 Standard.
  4. Amazon EFS.

Solution: 1. Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is designed for data accessed less frequently but requires rapid access when needed. The other options are incorrect; Amazon Glacier is used for long-term archival storage and is not intended for immediate data retrieval. Amazon S3 Standard might be a viable option; however, it’s more expensive than S3 Standard-IA for infrequently accessed data. You can see the different S3 storage options in the following diagram:

Different Amazon S3 Storage classes.

17-: A manual script developed in NodeJS runs a couple of times a week and takes 10 minutes to run. It needs to be replaced with an automated solution. Which option should we use?

  1. Use a cron job on an Amazon EC2 instance
  2. Use AWS Batch
  3. Use AWS Lambda
  4. Use AWS Elastic Beanstalk

Solution: 3. AWS Lambda is a serverless computing service that lets you run your code without provisioning or managing servers. It has a maximum execution time of 15 minutes, so the script will be completed during this time. It’s also the most cost-effective solution, as you are only charged for the computing time you consume — there is no charge when your code is not running.
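A sketch of scheduling the Lambda function with an EventBridge rule (the function name, account number, and the twice-a-week cron expression are illustrative):

```shell
# Trigger the function at 06:00 UTC on Mondays and Thursdays.
aws events put-rule --name run-node-script \
  --schedule-expression "cron(0 6 ? * MON,THU *)"

# Let EventBridge invoke the function, then attach it as the rule's target.
aws lambda add-permission --function-name node-script \
  --statement-id events-invoke --action lambda:InvokeFunction \
  --principal events.amazonaws.com --source-arn <rule-arn>
aws events put-targets --rule run-node-script \
  --targets 'Id=1,Arn=arn:aws:lambda:eu-west-1:111122223333:function:node-script'
```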

18-: Your Business Intelligence team uses SQL tools to analyze data. What would be the best solution for performing queries on structured data received at a high velocity?

  1. EMR using Hive
  2. Kinesis Firehose with RDS
  3. EMR running Apache Spark
  4. Kinesis Firehose with RedShift

Solution: 4. Amazon Kinesis Firehose is a fully managed service to deliver real-time streaming data directly to other AWS services such as Amazon S3, Amazon Redshift, or Amazon OpenSearch (Elasticsearch). You will later analyze this data in Amazon Redshift. You can see how Kinesis Firehose works in the following image:

Amazon Kinesis Firehose can be used to load data into different destinations, like S3, OpenSearch, or Redshift.

19-: If we have to ensure that the Amazon EC2 instances from an application can be launched in another AWS Region in the event of a disaster, what steps should be taken? (Select TWO)

  1. Launch instances in the second region using the S3 API.
  2. Create AMIs of the instances and copy them to another Region.
  3. Launch instances in the second region from the AMIs.
  4. Copy the snapshots using Amazon S3 cross-region replication.
  5. Enable cross-region snapshots for the Amazon EC2 instances.

Solution: 2, 3. AMIs (Amazon Machine Images) are a convenient way to capture the configuration and state of an EC2 instance. By creating AMIs of the instances, you can easily replicate and launch the instances in another region.
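These two steps can be sketched with the CLI; the instance ID, AMI IDs, Regions, and instance type are placeholders:

```shell
# Step 1: capture the instance as an AMI and copy it to the DR Region.
aws ec2 create-image --instance-id i-EXAMPLE --name app-server-ami
aws ec2 copy-image --source-region us-east-1 --source-image-id ami-EXAMPLE \
  --region eu-west-1 --name app-server-ami-dr

# Step 2: in a disaster, launch from the copied AMI in the second Region.
aws ec2 run-instances --region eu-west-1 --image-id ami-COPY --instance-type m5.large
```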

20-: A multi-tier web application currently hosts two web services on the same set of instances, listening for traffic on different ports. Which AWS service should we use to route traffic to the service based on the incoming request path?

  1. Amazon Route 53
  2. Amazon CloudFront
  3. Application Load Balancer (ALB)
  4. Classic Load Balancer (CLB)

Solution: 3. The Application Load Balancer operates at the request level (Layer 7), making it suitable for routing traffic based on application content, such as HTTP headers and the URL in the request, and distributing traffic across multiple targets or instances.
The Classic Load Balancer operates at both the request level (Layer 7) and connection level (Layer 4), but it does not support path-based routing like the Application Load Balancer.
You can see an example of this behavior in the following image:

Application Load Balancer distributing traffic depending on the HTTP URL.
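A path-based rule on an existing ALB listener can be sketched like this (the ARNs and path pattern are placeholders):

```shell
# Forward requests whose path matches /api/* to the API service's target group.
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api/*' \
  --actions Type=forward,TargetGroupArn=<api-target-group-arn>
```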

21-: An organization wants to share regular updates using static web pages. The pages are expected to generate a large number of views from around the world. The files are stored in an Amazon S3 bucket. Which action should we take to accomplish this goal, designing an efficient and effective solution?

  1. Create EC2 instances around the world, and host this website in every instance.
  2. Use cross-Region replication to all Regions.
  3. Use the geo proximity feature of AWS S3.
  4. Use Amazon CloudFront with the S3 bucket as its origin.

Solution: 4. Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data to customers globally with low latency. CloudFront uses a global network of edge locations worldwide to serve content with low latency. You can use an Amazon S3 bucket as its origin to provide a global CDN, which caches content closer to the end users, thus reducing latency.
Creating EC2 instances worldwide would be costly and require a lot of management. Cross-Region replication would also be expensive and wouldn’t necessarily reduce latency. Finally, S3 doesn’t have any geo-proximity feature, which is why these options are incorrect.
You can see this solution in the following image:

Amazon CloudFront integrating with Amazon S3 using OAI.

22-: We want to launch an Amazon EC2 instance with multiple attached volumes by modifying the block device mapping. Which block device can be specified in a block device mapping to be used with an EC2 instance? (Select TWO)

  1. EBS volume.
  2. EFS volume.
  3. Instance store volume.
  4. Snapshot.
  5. S3 bucket.

Solution: 1, 3. Amazon Elastic Block Store (EBS) provides block-level storage volumes for Amazon EC2 instances. You can attach an EBS volume to any running instance, and the storage can persist independently.
Instance store provides temporary block-level storage, and the data persists only during the life of the associated Amazon EC2 instance; if you stop, terminate, or reboot the instance, all data on the instance store volume is lost.
EFS is not block-level storage and can’t be specified in a block device mapping. Amazon S3 is an object storage service, so this option is also incorrect.
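A block device mapping combining both valid volume types might be sketched like this (the AMI is a placeholder, and an instance type with instance store volumes, such as m5d, is assumed):

```shell
# Attach one extra EBS volume and one instance store (ephemeral) volume at launch.
aws ec2 run-instances \
  --image-id ami-EXAMPLE --instance-type m5d.large \
  --block-device-mappings '[
    {"DeviceName": "/dev/sdf", "Ebs": {"VolumeSize": 100, "VolumeType": "gp3"}},
    {"DeviceName": "/dev/sdg", "VirtualName": "ephemeral0"}
  ]'
```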

23-: A web application is composed of a web and a database layer. Some reports suggested that the webserver layer may be vulnerable to cross-site scripting (XSS) attacks. What should we do to remediate this vulnerability?

  1. Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
  2. Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
  3. Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
  4. Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard.

Solution: 3. AWS WAF (Web Application Firewall) is a firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. Among the Elastic Load Balancer types, AWS WAF can only be attached to the Application Load Balancer (ALB), protecting the web layer against XSS attacks.
AWS Shield Standard primarily provides DDoS protection and is not designed to mitigate XSS attacks.
In the following example, you can see how AWS WAF protects your application by blocking requests:

Example of some requests that AWS WAF has blocked.

24-: We must develop a serverless application to analyze data using SQL. We must upload this data to S3, which should always be encrypted. Which AWS Services should we use to encrypt and query the data?

  1. Use Amazon S3 server-side encryption and Amazon Redshift Spectrum to query the data.
  2. Use AWS KMS encryption keys for the S3 bucket and Amazon Redshift Spectrum to query the data.
  3. Use AWS KMS encryption keys for the S3 bucket and Amazon Athena to query the data.
  4. Use Amazon S3 server-side encryption and Amazon QuickSight to query the data.

Solution: 3. Amazon S3 can be used with AWS Key Management Service (AWS KMS) to encrypt server-side data. Apart from that, Amazon Athena is a serverless service that makes it easy to analyze data in Amazon S3 using standard SQL.
The other options involve Amazon Redshift Spectrum, which is not suitable for a serverless application because it requires a running Redshift cluster, and Amazon QuickSight, which is a BI tool for visualizing data rather than querying it. You can see the behavior of SSE-KMS in the following image:

How SSE-KMS encryption works.

25-: A High-Performance Computing (HPC) application needs to provide 135,000 IOPS. The storage layer is replicated across all instances in a cluster. What is the most optimal and cost-effective storage solution that provides the required performance?

  1. Use Amazon EBS Provisioned IOPS volume with 135,000 IOPS.
  2. Use Amazon Instance Store.
  3. Use Amazon S3 with byte-range fetch.
  4. Use Amazon EC2 Enhanced Networking with an EBS HDD Throughput Optimized Volume.

Solution: 2. Amazon EC2 instance store provides temporary block-level storage for instances. This storage is located on disks physically attached to the host computer, and data is lost if the instance is stopped or fails. It can provide very high IOPS (input/output operations per second) compared to EBS volumes. It also provides low latency, ideal for workloads requiring millions of transactions per second. However, it’s essential to remember that the data on Instance Store is ephemeral.
As we see in the following table, no EBS volume type provides 135,000 IOPS. If we don’t mind losing the data, or we implement a process to persist it in a different service like Amazon S3, we can use the Instance Store.

Table explaining the different EBS Volume types.

More Questions?

You can find the next 20 questions at the following link.

  • Do you want more than 500 AWS practice questions?
  • Access to a real exam simulator to thoroughly prepare for the exam.
  • Do you want to download these questions in PDF FOR FREE?
  • Download the ultimate cheat sheet for the AWS SAA exam!

All of this and more at FullCertified!

Thanks for Reading!

If you like my work and you want to support me…

  1. You can follow me on Medium here.
  2. Feel free to clap if this post is helpful for you! :)


Gonzalo Fernandez Plaza

Computer Science Engineer & Tech Lead 🖥️. Publishing AWS & Snowflake ❄️ courses & exams. https://www.fullcertified.com