AWS Solutions Architect Associate — Practice Exam Questions
Chapter 28: Last 20 questions from the AWS Solutions Architect Practice Exam series!
Welcome to the last chapter of the AWS Solutions Architect Associate Practice Exam series. This time, we will review the final 20 questions of the complete exam. Let's finish it!
This exam is divided into three parts. Here you can find the links to the other parts:
- AWS Solutions Architect Associate Complete Exam (part 1)
- AWS Solutions Architect Associate Complete Exam (part 2)
Remember that you can find this exam FOR FREE at FullCertified. Take it now with our exam simulator!
Remember that you can find all the chapters from the course at the following link:
EXAM QUESTIONS WITH SOLUTIONS
46-: A MySQL database will be migrated to the AWS Cloud. The cloud DB should be a managed solution that supports high availability and automatic failover in the event of an Availability Zone (AZ) outage. How can we achieve it?
- Use the AWS Database Migration Service (DMS) to directly migrate the database to an Amazon RDS MySQL Multi-AZ deployment.
- Use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon RDS MySQL using the Schema Conversion Tool (SCT)
- Use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon EC2.
- Use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon EBS.
Solution: 1. The AWS Database Migration Service (DMS) can be used to migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime for applications that rely on it. You don't need the Schema Conversion Tool (SCT), as it's only required when migrating between different database engines; here the migration is MySQL to MySQL.
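If you want to see what the target looks like in code, here is a minimal boto3 sketch of creating the Multi-AZ RDS MySQL instance that DMS would migrate into (identifiers, sizing, and credentials are illustrative placeholders):

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ keeps a synchronous standby in another AZ and fails over
# automatically if the primary's AZ goes down.
rds.create_db_instance(
    DBInstanceIdentifier="migrated-mysql-db",  # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # use Secrets Manager in real projects
    MultiAZ=True,
)
```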
47-: A company stores important data in an Amazon S3 bucket. A solutions architect needs to ensure that data can be recovered in case of accidental deletion. How can we do that?
- Enable Amazon S3 Intelligent Tiering.
- Enable an Amazon S3 lifecycle policy.
- Enable Amazon S3 cross-Region replication.
- Enable Amazon S3 versioning.
Solution: 4. Using Amazon S3 versioning, you can keep multiple variants of an object (including all writes and deletes) in the same bucket. This makes it easy to recover from unintended user actions and application failures.
An S3 lifecycle policy automatically moves data between storage classes and can delete data after a specified time. S3 cross-Region replication creates copies of your data in other Regions, but on its own it does not protect against accidental deletion.
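As a quick boto3 sketch (the bucket name and prefix are placeholders), enabling versioning is a single API call, and older versions remain listable and restorable after a delete:

```python
import boto3

s3 = boto3.client("s3")

# Every overwrite or delete now preserves the previous versions.
s3.put_bucket_versioning(
    Bucket="important-data-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# An accidental delete only adds a delete marker; the older versions
# still show up here and can be restored.
versions = s3.list_object_versions(Bucket="important-data-bucket", Prefix="reports/")
```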
48-: An application is expected to be extremely popular, and the back-end DynamoDB database may not perform as required. How can we enable in-memory read performance with microsecond response times for the DynamoDB database?
- Enable read replicas
- Configure DynamoDB Auto Scaling
- Configure Amazon DAX
- Increase the provisioned throughput
Solution: 3. Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement. It lets you offload read traffic from your tables to the cache.
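DAX is designed as a drop-in replacement for the regular DynamoDB client, so table code barely changes. A rough sketch assuming the amazondax Python client library (the cluster endpoint and table name are placeholders):

```python
from amazondax import AmazonDaxClient

# Point the resource at the DAX cluster instead of DynamoDB directly.
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"
)

table = dax.Table("popular-app-table")  # hypothetical table
# Repeated reads like this are served from the in-memory cache.
item = table.get_item(Key={"pk": "user#123"})
```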
49-: How can we be notified by email when an RDS database exceeds certain metric thresholds?
- Create a CloudTrail alarm and configure a notification event to send an SMS.
- Create a CloudWatch alarm and associate an SNS topic with it that sends an email notification.
- Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function to send emails using AWS SES.
- Set up an RDS alarm to send emails.
Solution: 2. CloudWatch alarms can be configured to send notifications or to automatically change the resources you monitor based on rules you define. CloudWatch cannot send emails by itself; you need to create an SNS topic, subscribe your email address to that topic, and then configure your CloudWatch alarm to notify this SNS topic when it triggers.
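Here is a minimal boto3 sketch of the whole chain (topic name, email address, and DB identifier are placeholders; the email subscription must be confirmed before notifications arrive):

```python
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# 1) Create the SNS topic and subscribe an email address to it.
topic_arn = sns.create_topic(Name="rds-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# 2) Alarm on an RDS metric and point its action at the topic.
cloudwatch.put_metric_alarm(
    AlarmName="rds-high-cpu",
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-database"}],
    Statistic="Average",
    Period=300,              # evaluate in 5-minute windows
    EvaluationPeriods=2,     # two consecutive breaches trigger the alarm
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)
```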
50-: A website developed using HTML, CSS, client-side JavaScript, and images needs to be hosted. Which solution is the MOST cost-effective?
- Containerize the website and host it in AWS Fargate.
- Create an Amazon S3 bucket and host the website there.
- Deploy a web server on an Amazon EC2 instance to host the website.
- Configure an Application Load Balancer with an AWS Lambda target.
Solution: 2. Amazon S3 allows you to host static websites without a traditional web server. Given that the website is composed of static files (HTML, CSS, JavaScript, images), you can upload these files to an S3 bucket and configure the bucket for static website hosting. This is the most cost-effective solution and the easiest one to implement. You can activate it in the S3 console under the bucket's Properties tab, in the Static website hosting section.
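The same setting can be applied programmatically. A minimal boto3 sketch (the bucket name is a placeholder, and the bucket must also allow public reads, for example via a bucket policy):

```python
import boto3

s3 = boto3.client("s3")

# Serve index.html at the root and error.html for missing objects.
s3.put_bucket_website(
    Bucket="my-static-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```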
51-: A web application runs on several Amazon EC2 instances behind an Application Load Balancer (ALB). Which protocols can we use for its health checks? (Select TWO)
- SSL.
- TCP.
- ICMP.
- HTTP.
- HTTPS.
Solution: 4, 5. The Application Load Balancer operates at the request level (Layer 7) and supports health checks using HTTP and HTTPS protocols. These health checks ensure that the instances respond appropriately and are healthy enough to receive traffic. If an instance fails a health check, the ALB will stop sending traffic to that instance until it passes the health check again.
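For reference, here is a boto3 sketch of configuring an HTTPS health check on a target group (the ARN and path are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# ALB target groups only accept HTTP or HTTPS as the health check protocol.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/web/abc",
    HealthCheckProtocol="HTTPS",
    HealthCheckPath="/health",  # hypothetical health endpoint
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
)
```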
52-: An application requires a MySQL database that will only be used several times a week for short periods. The database needs to provide automatic instantiation and scaling. Which database service is most suitable?
- Amazon RDS MySQL
- Amazon EC2 instance with MySQL database installed
- Amazon Aurora
- Amazon Aurora Serverless
Solution: 4. Amazon Aurora Serverless is an on-demand, auto-scaling version of Amazon Aurora, designed to start automatically, shut down, and scale capacity up or down based on your application’s needs. It’s a simple, cost-effective option for infrequent workloads, which is ideal since the database will only be used several times a week for short periods.
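A rough boto3 sketch of such a cluster, using the Aurora Serverless v1 API (identifiers, credentials, and capacity values are placeholders):

```python
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="occasional-mysql",  # hypothetical name
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 8,
        "AutoPause": True,             # pause when idle...
        "SecondsUntilAutoPause": 300,  # ...after 5 minutes of inactivity
    },
)
```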
53-: We are working on an application for a social media website where users can be friends with each other, like each other's posts, and send messages between them. Which database do you recommend for performing these complex, highly connected queries?
- Amazon RDS
- Amazon Redshift
- Amazon Neptune
- Amazon Elasticsearch
Solution: 3. Graph databases excel at managing interconnected data and providing high performance on queries that navigate the data graph. Amazon Neptune is the AWS graph database service specifically designed for handling these datasets.
Whenever the AWS exam asks about databases for social networks or other highly connected data, the answer will almost always be Amazon Neptune.
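To make the graph idea concrete, here is a hedged sketch of a multi-hop traversal against Neptune using the gremlinpython driver (the endpoint, vertex IDs, and edge labels are all illustrative):

```python
from gremlin_python.driver import client

# Connect to the Neptune cluster's Gremlin endpoint (placeholder URL).
gremlin = client.Client(
    "wss://my-neptune.cluster-abc.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)

# "Names of friends-of-friends who liked a given post": a multi-hop query
# that is painful with SQL joins but natural in a graph database.
result = (
    gremlin.submit(
        "g.V('user-123').out('friend').out('friend').dedup()"
        ".where(out('likes').hasId('post-456')).values('name')"
    )
    .all()
    .result()
)
gremlin.close()
```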
54-: An application receives and processes files of around 4GB in size. The application extracts metadata from the files, which typically takes a few seconds for each file, with times of little activity and then multiple uploads within a short period. What architecture should we use to have the most cost-efficient solution?
- Use an RDS to store the file and use Lambda for processing
- Store the file in an S3 bucket, which another EC2 instance can then access to extract the metadata
- Upload files into an S3 bucket, and use the Amazon S3 event notification to invoke a Lambda function to extract the metadata
- Place the files in an EBS volume, and use a fleet of EC2 instances to extract the metadata
Solution: 3. This is a perfect use case for S3 event notifications. When a file is uploaded into S3, it generates an event that a Lambda function can process to extract the metadata. This is also the most cost-effective solution, as we don't have any servers running and the costs of S3 and Lambda are very low.
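A minimal sketch of the Lambda handler (extract_metadata is a hypothetical function standing in for the real parsing logic):

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # One S3 event can carry several records.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Stream the object body instead of loading ~4 GB into memory.
        obj = s3.get_object(Bucket=bucket, Key=key)
        metadata = extract_metadata(obj["Body"])  # hypothetical
        print(f"{key}: {metadata}")
```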
55-: An application runs on a series of EC2 instances in an Auto Scaling group in a private subnet. How can we enable the application to download software updates from the Internet involving minimal ongoing systems management effort?
- Create a NAT gateway.
- Launch a NAT instance.
- Attach Elastic IP addresses.
- Create a Virtual Private Gateway.
Solution: 1. A NAT Gateway is a managed service that provides EC2 instances in a private subnet with outbound internet connectivity while preventing inbound traffic initiated by external sources. The NAT Gateway is a better choice than a NAT instance because it is a fully managed service involving less ongoing systems management effort. Apart from that, it’s highly available within each AZ.
Elastic IPs would require placing the instances in a public subnet and exposing them to the internet; that’s why this option is not correct. A virtual private gateway has nothing to do with providing internet connectivity to instances in a private subnet.
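Setting one up takes three calls with boto3 (subnet and route table IDs are placeholders; note the NAT gateway itself lives in a public subnet):

```python
import boto3

ec2 = boto3.client("ec2")

# Elastic IP for the NAT gateway.
alloc = ec2.allocate_address(Domain="vpc")

# The NAT gateway goes in a PUBLIC subnet.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public123",
    AllocationId=alloc["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send the PRIVATE subnet's internet-bound traffic through it.
ec2.create_route(
    RouteTableId="rtb-private123",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```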
56-: Two web services on the same set of instances require each to listen for traffic on different ports. Which AWS service should we use to route traffic to the service based on the incoming request path?
- Amazon Route 53.
- Amazon CloudFront.
- Application Load Balancer (ALB).
- Classic Load Balancer (CLB).
Solution: 3. The Application Load Balancer operates at the request level (Layer 7), making it suitable for routing traffic based on application content, such as HTTP headers and the URL in the request, and distributing traffic across multiple targets or instances.
The Classic Load Balancer operates at both the request level (Layer 7) and connection level (Layer 4), but it does not support path-based routing like the Application Load Balancer.
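As an illustration, here is a boto3 sketch of two path-based listener rules forwarding to target groups that point at the same instances on different ports (all ARNs and paths are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")
listener_arn = "arn:aws:elasticloadbalancing:...:listener/app/web/abc/def"

# /service-a/* goes to the target group registered on one port...
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/service-a/*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:...:targetgroup/svc-a-8080/111"}],
)

# ...and /service-b/* to the target group registered on the other port.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/service-b/*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:...:targetgroup/svc-b-9090/222"}],
)
```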
57-: A Solutions Architect needs to transform some data uploaded into S3. The uploads happen sporadically, and an event should trigger the transformation. The transformed data should then be loaded into a target data store.
What combination of services should we use to accomplish this cost-effectively? (Select TWO)
- Configure S3 event notifications to trigger an EC2 XL instance when data is uploaded and use this instance to trigger the ETL job
- Configure S3 event notifications to trigger a Lambda function when data is uploaded and use the Lambda function to trigger the ETL job
- Configure CloudFormation to provision a Kinesis data stream to transform the data and load it into S3
- Use AWS Glue to extract, transform and load the data into the target datastore.
- Configure CloudFormation to provision AWS Data Pipeline to transform the data
Solution: 2, 4. AWS Lambda can be used to process event notifications from S3. When an object is uploaded to S3, an event notification can be sent to a Lambda function. AWS Lambda is a serverless service, so you only pay for the compute time you consume, making this solution cost-efficient.
Also, this is an excellent example of using AWS Glue (a serverless ETL service) to transform the data and load it into the target datastore.
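Here is a hedged sketch of the Lambda side: the function that S3 invokes simply starts the Glue job, passing the uploaded object's location (the job name and argument keys are hypothetical):

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    for record in event["Records"]:
        # Kick off the ETL job for each uploaded object.
        glue.start_job_run(
            JobName="transform-and-load",  # hypothetical Glue job
            Arguments={
                "--source_bucket": record["s3"]["bucket"]["name"],
                "--source_key": record["s3"]["object"]["key"],
            },
        )
```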
58-: An application needs to retain information about each user session, and we have decided to implement a layer within the application architecture to store it. Which of the options below could be used? (Select TWO)
- Sticky sessions on an Elastic Load Balancer (ELB)
- A block storage service such as Elastic Block Store (EBS)
- Amazon Redshift to store data
- A relational data store such as Amazon RDS
- A key/value store such as ElastiCache Redis
Solution: 1, 5. Sticky Sessions allow the load balancer to bind a user's session to a specific target (EC2 instance or container) behind it, ensuring that subsequent requests from the same user are routed to the same target. This works, although a more scalable approach is to store session state in a fast key/value store such as ElastiCache Redis, which keeps the web tier stateless.
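A small sketch of the key/value approach with the redis-py client against an ElastiCache Redis endpoint (hostname and TTL are placeholders):

```python
import json
import redis

# ElastiCache Redis endpoint (placeholder).
r = redis.Redis(host="my-sessions.abc123.cache.amazonaws.com", port=6379)

def save_session(session_id, data):
    # Sessions expire automatically after 30 minutes of inactivity.
    r.setex(f"session:{session_id}", 1800, json.dumps(data))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```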
59-: A data lake solution in Amazon S3 must analyze huge datasets from time to time (infrequent SQL queries only). Which AWS service should be used to meet these requirements if we want to minimize infrastructure costs?
- Amazon Aurora
- Amazon Athena
- Amazon Redshift
- Amazon Redshift Spectrum
Solution: 2. Amazon Athena is an interactive query service that easily analyzes data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. This is especially cost-effective for infrequent SQL queries, as it allows querying directly against the data in your S3 data lake without needing a dedicated data warehouse or database infrastructure.
This is the main difference from Amazon Redshift Spectrum, which can also query data in S3 but requires a running Amazon Redshift cluster to be maintained, and that can be costly if you're only making infrequent queries.
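Running a query is a single API call, with results written back to S3; here is a boto3 sketch with placeholder database, table, and output locations:

```python
import boto3

athena = boto3.client("athena")

# Pay per query (data scanned); nothing runs between queries.
athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "datalake"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```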
60-: A web application will run on Amazon EC2 instances behind Elastic Load Balancers in multiple regions in an active/passive configuration. The website address the application runs on is “myrealcode.com”, and we need to use AWS Route 53 to perform the DNS resolution for the application. How should we configure AWS Route 53 in this scenario? (Select TWO)
- Use a Failover Routing Policy
- Set Evaluate Target Health to “No” for the primary
- Use a Weighted Routing Policy
- Connect the ELBs using Alias records
- Connect the ELBs using CNAME records
Solution: 1, 4. A Failover Routing Policy is used for active/passive configurations. You set up DNS failover so that Route 53 routes your traffic from an unhealthy resource to a healthy one: a primary resource or group of resources is available most of the time, and a secondary resource or group of resources is on standby in case all the primary resources become unavailable. Alias records are also required because only an Alias record can map the zone apex (myrealcode.com) to an ELB DNS name; a CNAME record is not allowed at the apex.
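Here is a boto3 sketch of the primary record; a matching record with Failover="SECONDARY" would point at the passive region's ELB (hosted zone IDs and DNS names are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # your myrealcode.com hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "myrealcode.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "Z00000000000000",  # the ELB's own zone ID
                    "DNSName": "primary-alb-123.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,  # fail over when the ELB is unhealthy
                },
            },
        }]
    },
)
```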
61-: How can we encrypt an unencrypted Amazon RDS master database and give it an encrypted Read Replica deployed in a separate region?
- Enable Encryption when creating the cross-region Read Replica
- Encrypt a snapshot from the master DB instance, create an encrypted cross-region Read Replica from the snapshot
- Encrypt a snapshot from the master DB instance, create a new encrypted master DB instance, and then create an encrypted Read Replica
- Encrypt a snapshot from the master DB instance, create a new encrypted master DB instance, and then create an encrypted cross-region Read Replica
Solution: 4. You can enable encryption for an Amazon RDS DB instance only when you create it, not after the DB instance is created. You can't have an encrypted read replica of an unencrypted DB instance, or an unencrypted read replica of an encrypted DB instance. You also can't restore an unencrypted backup or snapshot to an encrypted DB instance.
Because of these limitations, you should follow the process indicated in the solution.
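The full process in a hedged boto3 sketch (identifiers, regions, and KMS keys are placeholders; each step must finish before the next one starts, which waiters or console checks handle in practice):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# 1) Snapshot the unencrypted master, then copy the snapshot WITH encryption.
rds.create_db_snapshot(
    DBSnapshotIdentifier="master-snap", DBInstanceIdentifier="master-db"
)
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="master-snap",
    TargetDBSnapshotIdentifier="master-snap-encrypted",
    KmsKeyId="alias/aws/rds",  # the copy is encrypted with this key
)

# 2) Restore the encrypted copy as the new, encrypted master.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="master-db-encrypted",
    DBSnapshotIdentifier="master-snap-encrypted",
)

# 3) Create the encrypted cross-region read replica from the new master
# (run in the target region, with a KMS key from that region).
rds_eu = boto3.client("rds", region_name="eu-west-1")
rds_eu.create_db_instance_read_replica(
    DBInstanceIdentifier="master-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:master-db-encrypted",
    KmsKeyId="arn:aws:kms:eu-west-1:123456789012:key/placeholder",
    SourceRegion="us-east-1",
)
```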
62-: A solutions architect is designing a web application that consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company. How should security groups be configured? (Select TWO)
- Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
- Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
- Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
- Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.
- Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
Solution: 1, 5. An inbound rule is required to allow traffic to your web application from anywhere on the internet, so the source should be set to 0.0.0.0/0, with port 443 open to enable HTTPS.
SQL Server uses port 1433 for communication by default, and you should allow access to this port from your web tier. To maintain security, this rule should accept traffic only from the security group attached to your web tier. With this approach, the security group will block access if a user tries to connect directly to the database from the internet without going through the web tier.
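In boto3 terms, the two key rules look like this (security group IDs are placeholders; note the database rule references the web tier's security group, not a CIDR range):

```python
import boto3

ec2 = boto3.client("ec2")

# Web tier: HTTPS from anywhere on the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-webtier",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Database tier: SQL Server port, only from the web tier's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-dbtier",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": "sg-webtier"}],
    }],
)
```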
63-: A single volume must be 500 GiB in size and support 20,000 IOPS. What EBS volume type should be selected?
- EBS General Purpose SSD
- EBS Provisioned IOPS SSD
- EBS General Purpose SSD in a RAID 1 configuration
- EBS Throughput Optimized HDD
Solution: 2. Amazon EBS Provisioned IOPS SSD volumes are designed for I/O-intensive workloads that require low latency and consistent performance. They offer a defined level of IOPS that you provision with the volume, up to 64,000 IOPS per volume. None of the other options can reach 20,000 IOPS: General Purpose SSD tops out at 16,000 IOPS per volume (RAID 1 mirrors rather than adds IOPS), and Throughput Optimized HDD is designed for sequential throughput, not high IOPS.
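Creating such a volume with boto3 is straightforward; the IOPS are provisioned explicitly, independently of size (the Availability Zone is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# 500 GiB Provisioned IOPS SSD volume delivering 20,000 IOPS.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,          # GiB
    VolumeType="io1",  # or "io2"
    Iops=20000,
)
```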
64-: We plan to develop a new platform using Docker containers in a microservices architecture in the AWS Cloud. We prefer to use AWS-managed infrastructure for running the containers, as we do not want to manage EC2 instances. Which of the following options would deliver these requirements? (Select TWO)
- Use the Elastic Container Service (ECS) with the Fargate Launch Type
- Use the Elastic Container Service (ECS) with the EC2 Launch Type
- Put your container images in GitHub and connect them to ECS
- Put your container images in the Elastic Container Service (ECS)
- Put your container images in the Elastic Container Registry (ECR)
Solution: 1, 5. Elastic Container Registry (ECR) is a fully managed container registry provided by AWS. It provides a suitable location to store container images, where you can easily push your images and integrate them seamlessly with other AWS services like Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS).
Once we store the image in ECR, we need to run it. Amazon ECS (Elastic Container Service) offers two launch types: EC2 and Fargate. In this scenario, Fargate is the optimal choice. ECS Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying infrastructure, and you only pay for the running tasks. If we don’t want to manage EC2 instances, this has to be our choice.
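A boto3 sketch of running such a task (cluster name, task definition, subnets, and security groups are placeholders; the task definition would reference the image stored in ECR):

```python
import boto3

ecs = boto3.client("ecs")

# Fargate runs the container without any EC2 instances to manage.
ecs.run_task(
    cluster="microservices",
    launchType="FARGATE",
    taskDefinition="orders-service:1",  # this definition's image URI points to ECR
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-abc123"],
            "securityGroups": ["sg-abc123"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```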
65-: What is the most efficient service to establish network connectivity from on-premises to multiple VPCs in different AWS regions?
- AWS Direct Connect
- AWS VPC Endpoints
- AWS VPN
- AWS Transit Gateway
Solution: 4. AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks through a single central hub. It acts as a cloud router, simplifying your network by managing all connectivity in one place, which is especially valuable in a scenario with multiple VPCs in different AWS Regions.
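A boto3 sketch of the hub-and-spoke setup, attaching one VPC to the gateway (IDs are placeholders; on-premises networks attach via Site-to-Site VPN or Direct Connect, and gateways in different Regions are linked with peering attachments):

```python
import boto3

ec2 = boto3.client("ec2")

# Create the central hub.
tgw = ec2.create_transit_gateway(Description="central network hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC to the hub (repeat per VPC; one subnet per AZ to attach).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-abc123",
    SubnetIds=["subnet-abc123"],
)
```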
More Questions?
And that’s all for now! I hope these questions are useful for you, and I wish you all the best in your AWS Solutions Architect Associate exam. If you still need more preparation:
- Do you want more than 500 AWS practice questions?
- Get access to a real exam simulator to thoroughly prepare for the exam.
- Do you want to download these questions in PDF FOR FREE?
- Download the ultimate cheat sheet for the AWS SAA exam!
All of this and more at FullCertified!
Thanks for Reading!
If you like my work and you want to support me…
- You can follow me on Medium here.
- Feel free to clap if this post is helpful for you! :)