AWS Solutions Architect Associate — Practice Test
Chapter 27: AWS Solutions Architect Practice Exam (part 2)
Get ready to test your AWS knowledge with another 20 practice questions in this new AWS Solutions Architect Associate course chapter. If you haven’t completed the first part yet, we highly recommend doing so first. Let’s continue with the next 20 questions!
This exam will be divided into three parts. Here you can find the links to the other parts:
- AWS Solutions Architect Associate Complete Exam (part 1)
- AWS Solutions Architect Associate Complete Exam (part 3)
Remember that you can find this exam FOR FREE at FullCertified. Take it now with our exam simulator!
Remember that you can find all the chapters from the course at the following link:
EXAM QUESTIONS WITH SOLUTIONS
26-: What is the best way for a company to transfer hundreds of terabytes of data from its on-premises data center into Amazon S3 with limited bandwidth?
- Use S3 Transfer Acceleration
- Contact AWS Support to increase the bandwidth
- Use AWS Snowball
- Use an EC2 instance in performance mode
Solution: 3. AWS Snowball is a data transfer service that uses secure devices to transfer large amounts of data into and out of the AWS Cloud. It’s specifically designed for high-volume data transfers that can’t be done efficiently over the network due to time, cost, or bandwidth considerations.
27-: A decoupled application will send batches of up to 1,000 messages per second, and consumers must receive them in the correct order. How can we achieve these requirements?
- Create an Amazon SQS Standard queue
- Create an Amazon SQS FIFO queue
- Create an Amazon SNS topic
- Create an Amazon Kinesis Analytics topic
Solution: 2. Amazon SQS FIFO (First-In-First-Out) queues are explicitly designed for applications that require strict message ordering. With FIFO queues, the messages are strictly ordered and processed exactly once. You can see how SQS FIFO queues work in the following image:
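To make this concrete, here is a minimal boto3 sketch (queue name and message group ID are placeholders) that creates a FIFO queue and sends ordered messages:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo". Content-based deduplication lets SQS
# derive the deduplication ID from a SHA-256 hash of the message body.
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Messages that share the same MessageGroupId are delivered to consumers
# in the exact order they were sent.
for i in range(3):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=f"order event {i}",
        MessageGroupId="customer-123",
    )
```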
28-: We must migrate a MongoDB database to Amazon DynamoDB within the next few weeks. The database is too large to migrate over the company’s limited internet bandwidth, so we must use an alternative solution. What should we use?
- Migrate the database to Amazon S3 using the company’s bandwidth and then move it to DynamoDB using the AWS Database Migration Service (DMS).
- Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB.
- Compress the MongoDB database and use the AWS Database Migration Service (DMS) to migrate the database to Amazon DynamoDB directly.
- Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the AWS Cloud.
Solution: 2. You can use Snowball Edge to move large amounts of data into AWS as a temporary storage tier for large local datasets. By doing that, you can avoid the limitation of the company’s internet bandwidth. Using SCT, you can convert the MongoDB schema into a compatible format for DynamoDB and load the data onto a Snowball Edge device. Once the data is in AWS, you can use AWS DMS to load the data into DynamoDB. You can see this behavior in the following image:
29-: Which service provides visibility into user activity by recording actions taken on your account?
- Amazon CloudWatch
- AWS CloudFormation
- AWS CloudTrail
- AWS CloudHSM
Solution: 3. AWS CloudTrail is a service that provides the event history of your AWS account activity, so you can see the actions users take inside AWS. This is one of the main differences from Amazon CloudWatch, which is primarily a monitoring service for AWS resources and the applications you run on AWS. Using CloudWatch, you can collect and track metrics, collect and monitor log files, set alarms, etc., but it doesn’t record the actions taken on your account.
You can see how AWS CloudTrail works in the following image:
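As an illustration, here is a small boto3 sketch (the user name is a placeholder) that queries the CloudTrail event history for the actions taken by one IAM user over the last day:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

# Look up recent account activity recorded by CloudTrail for one user.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "alice"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
    MaxResults=10,
)

for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```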
30-: A company that manages an on-premises web application needs a solution to provide single sign-on and access to the AWS Management Console to manage resources in the AWS Cloud. Which combination of services is BEST suited to delivering these requirements?
- Use IAM and Amazon Cognito
- Integrate the on-premises service with the AWS Cloud using AWS CloudTrail
- Use the AWS Security Token Service (STS) and SAML
- Use IAM and MFA
Solution: 3. The AWS Security Token Service (STS) enables you to request temporary security credentials to authenticate and authorize access to AWS resources. SAML (Security Assertion Markup Language) is an industry standard for exchanging authentication and authorization data between an identity provider (IdP) and a service provider (SP). By configuring a SAML-based trust relationship between the on-premises web application acting as the identity provider and AWS as the service provider, you can establish SSO and enable users to access the AWS Management Console using their on-premises credentials. Also, Amazon Cognito doesn’t provide SSO with on-premises directories, so we can discard that option.
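For reference, a hedged boto3 sketch of the STS call involved (the role and IdP ARNs are placeholders, and the SAML assertion would come from the on-premises IdP after the user signs in):

```python
import boto3

# AssumeRoleWithSAML is called without AWS credentials; the base64-encoded
# SAML assertion issued by the IdP is the proof of identity.
sts = boto3.client("sts")

saml_assertion_base64 = "<base64-encoded SAML response from the IdP>"

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/SSOConsoleAccess",
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/OnPremIdP",
    SAMLAssertion=saml_assertion_base64,
    DurationSeconds=3600,
)

# Temporary credentials that can be used to sign requests or federate into the console.
credentials = response["Credentials"]
print(credentials["AccessKeyId"], credentials["Expiration"])
```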
31-: A web application should run on-premises and in AWS for some time. During the period of coexistence, the client would like 80% of the traffic to hit the AWS-based web servers and 20% to be directed to the on-premises web servers. How can we distribute traffic as requested?
- Use Route 53 with a weighted routing policy and configure the respective weights.
- Use Route 53 with a simple routing policy
- Use an Application Load Balancer to distribute traffic based on IP address
- Use a Network Load Balancer to distribute traffic based on the Instance ID
Solution: 1. Amazon Route 53 is a scalable Domain Name System (DNS) web service. It provides routing policies such as simple, failover, geolocation, geoproximity, latency, and weighted.
The weighted routing policy lets you split your traffic based on different weights assigned. In this scenario, you can assign a weight of 80 to the AWS-based web servers and 20 to the on-premises web servers. You can see this routing policy in the following image:
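Here is a minimal boto3 sketch of that 80/20 split (hosted zone ID, record name, and IP addresses are placeholders):

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, weight, ip_address):
    """Build an UPSERT change for one weighted A record."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip_address}],
        },
    }

# 80% of DNS responses point at AWS, 20% at the on-premises servers.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Changes": [
            weighted_record("aws-web-servers", 80, "203.0.113.10"),
            weighted_record("on-premises-web-servers", 20, "198.51.100.10"),
        ]
    },
)
```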
32-: An application’s database runs on Amazon RDS. We want a reporting tool to access this data. How can we achieve this, considering that the reporting tool must be highly available and must not impact the application’s performance?
- Create a cross-region Multi-AZ deployment and create a read replica in the second region.
- Move the instance to Amazon EC2 and create and manage snapshots manually.
- Create a Multi-AZ RDS Read Replica of the RDS DB instance.
- Create a Single-AZ RDS Read Replica of the RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica.
Solution: 3. Amazon RDS Read Replicas can offload read traffic from your primary database instance. This is exactly what you need to support your reporting tool without impacting the performance of your application.
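A minimal boto3 sketch, assuming placeholder instance identifiers; making the replica Multi-AZ gives the reporting tool its own standby, so it stays highly available:

```python
import boto3

rds = boto3.client("rds")

# Create a Multi-AZ read replica dedicated to the reporting tool so that
# report queries never touch the primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-reporting-replica",
    SourceDBInstanceIdentifier="app-db",
    DBInstanceClass="db.r6g.large",
    MultiAZ=True,
)
```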
33-: A company plans to use Amazon S3 to store documents uploaded by its customers. The company must encrypt the documents at rest in Amazon S3. The company does not want to manage and rotate the keys but wants to control who can access them. Which service should a solutions architect use to accomplish this?
- Client-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
- Client-Side Encryption with Customer-Provided Keys (SSE-C)
- Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
- Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
Solution: 4. AWS Key Management Service (AWS KMS) is a key management system scaled for the cloud, and we can use it to encrypt our Amazon S3 objects. Using SSE-KMS will allow you to use KMS to handle key management, including key creation, rotation, disabling, and deletion. You can read more about how to encrypt objects in Amazon S3 at the following link.
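As a quick illustration, a boto3 sketch of uploading one document with SSE-KMS (bucket name, object key, file, and KMS key ARN are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# The object is encrypted at rest with the customer managed KMS key; reading
# it back also requires kms:Decrypt permission on that key, which is how the
# company controls who can access the documents.
with open("contract-001.pdf", "rb") as document:
    s3.put_object(
        Bucket="customer-documents-example",
        Key="contracts/contract-001.pdf",
        Body=document,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    )
```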
34-: You have created an application in a VPC that uses a Network Load Balancer (NLB). The application will be offered to other accounts within the Region for consumption. Which AWS service will be used to provide the service for consumption?
- Amazon Cognito
- Route 53
- VPC Endpoint Services using AWS PrivateLink
- API Gateway
Solution: 3. With PrivateLink, you can expose an application running in your VPC to other AWS accounts in a secure and scalable way.
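A short boto3 sketch of the provider side (the NLB ARN and the consumer account ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Publish the NLB-fronted application as a VPC endpoint service...
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/net/app-nlb/1234567890abcdef"
    ],
    AcceptanceRequired=True,
)["ServiceConfiguration"]

# ...and allow the consumer account that may connect to it.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::210987654321:root"],
)
```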
35-: An application writes data to a DynamoDB table, and we need to implement a function that runs code in response to item-level changes in the table. How should we implement that?
- Enable server access logging and create an event source mapping between AWS Lambda and the S3 bucket to which the logs are written.
- Enable DynamoDB Streams and create an event source mapping between AWS Lambda and the relevant stream.
- Create a script to detect changes in DynamoDB tables and perform operations according to the type of event.
- Use Kinesis Data Streams and configure DynamoDB as a producer.
Solution: 2. DynamoDB Streams captures item-level modifications (INSERT, MODIFY, and REMOVE events) and stores them in a stream. Creating an event source mapping between AWS Lambda and the DynamoDB stream allows you to trigger a Lambda function to run code in response to these modifications. This approach ensures that your Lambda function is invoked in a scalable, event-driven manner whenever items in the table change, allowing you to respond to those changes effectively. You can see this behavior in the following diagram:
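A condensed boto3 sketch (table and function names are placeholders) that enables the stream and wires it to a Lambda function:

```python
import boto3

dynamodb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

# Turn on the stream so every item-level change is recorded with the new
# and old images of the item.
dynamodb.update_table(
    TableName="orders",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
stream_arn = dynamodb.describe_table(TableName="orders")["Table"]["LatestStreamArn"]

# The event source mapping polls the stream and invokes the function with
# batches of change records.
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName="process-order-changes",
    StartingPosition="LATEST",
    BatchSize=100,
)
```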
36-: Some Amazon EC2 instances in a VPC need to make API calls to Amazon DynamoDB. If we want to avoid using DynamoDB public endpoints (because we don’t want to use the Internet), what is the most EFFICIENT and secure method to accomplish it? (Select TWO)
- Create a route table entry for the endpoint
- Create a gateway endpoint for DynamoDB
- Create an interface endpoint for DynamoDB
- Create a new private DynamoDB table that uses the endpoint
- Create a VPC peering connection between the VPC and DynamoDB
Solution: 1, 2. A VPC endpoint enables private connections between your Virtual Private Cloud (VPC) and supported AWS services without requiring an internet gateway, a VPN connection, or AWS Direct Connect. AWS supports interface endpoints and gateway endpoints; Amazon DynamoDB and Amazon S3 use gateway endpoints. You also need to create a route table entry for the endpoint to direct traffic destined for DynamoDB to the endpoint.
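A minimal boto3 sketch (VPC, route table, and Region values are placeholders); associating the route table is what adds the route entry for DynamoDB traffic:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for DynamoDB; traffic from the associated route table to
# DynamoDB now stays on the AWS network instead of going over the Internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0abc1234def567890"],
)
```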
37-: A fleet of EC2 instances running in a private subnet must connect to the Internet using the IPv6 protocol. What service should we configure to enable this connectivity?
- Connect the instances to Route 53
- A NAT Instance
- An Egress-Only Internet Gateway
- AWS Direct Connect
Solution: 3. An Egress-Only Internet Gateway provides egress-only internet access for IPv6 traffic from your VPC to the internet. This way, your EC2 instances in the private subnet can access the internet, but internet resources cannot initiate connections to your instances. This is the best option for enabling IPv6 outbound internet access while maintaining the private nature of the subnet.
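To illustrate, a boto3 sketch (VPC and route table IDs are placeholders) that creates the gateway and routes all outbound IPv6 traffic through it:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the egress-only internet gateway for the VPC.
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0abc1234def567890")
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Send all outbound IPv6 traffic from the private subnet's route table
# through it; inbound connections from the Internet remain blocked.
ec2.create_route(
    RouteTableId="rtb-0abc1234def567890",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)
```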
38-: A company plans to replicate a limited set of core services to the disaster recovery site, ready to take over seamlessly during a disaster. The company will switch off all other services. Which disaster recovery strategy should the company use?
- Backup and restore
- Pilot light
- Warm standby
- Multi-site
Solution: 2. With the pilot light approach, you replicate your data from one Region to another and provision a copy of your core workload infrastructure (just the critical parts). Resources required to support data replication and backups, such as databases and object storage, are always on. Other elements, such as application servers, are switched off and are only started during disaster recovery failover, at which point you can quickly provision a full-scale production environment by switching on and scaling out your application servers. You can see how this strategy works in the following image:
39-: A security team wants to limit access to specific services in several accounts belonging to a large AWS organization. The solution must be scalable, and there must be a single point where we can maintain permissions. How can we accomplish it?
- Create an ACL to block the IPs of these accounts
- Create roles in each account to deny access to the services
- Create a global role shared between accounts to deny access to the services
- Create a service control policy in the root organizational unit to deny access to the services
Solution: 4. In AWS Organizations, a service control policy (SCP) is a policy you can use to manage permissions in your organization. It is a policy you attach to an organizational unit (OU), which is a container for accounts. By doing that, you can effectively control access to specific services across multiple accounts within your organization. In this case, we will block some AWS services.
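A sketch of what that could look like with boto3 (the denied services, policy name, and target ID are illustrative placeholders):

```python
import json
import boto3

org = boto3.client("organizations")

# SCP that denies the restricted services for every account under the target.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["redshift:*", "sagemaker:*"],  # example services to block
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="DenyRestrictedServices",
    Description="Blocks access to restricted services across member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP once, at the root (or an OU), and it applies to all accounts below it.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-abcd",  # placeholder root/OU ID
)
```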
40-: A company is generating large datasets with millions of rows that must be summarized by column, and reports will be built using business intelligence tools. Which storage service meets the requirements?
- Amazon Elasticsearch
- Amazon RDS
- Amazon ElastiCache
- Amazon Redshift
Solution: 4. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. It is designed for heavy-duty analytics workloads and integrates with popular business intelligence tools. It allows complex querying across millions of rows of data. It uses columnar storage and parallel query execution, among other features, to deliver high performance on analytic queries.
Amazon RDS is a general-purpose relational database service and won’t provide the same level of performance as a data warehouse. Amazon ElastiCache is an in-memory caching service, so we cannot use it in this situation.
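As a small example of the kind of columnar aggregation a BI tool would run, here is a boto3 sketch using the Redshift Data API (cluster, database, user, and table names are placeholders):

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Summarize millions of rows by column; Redshift's columnar storage only
# reads the columns referenced in the query.
statement = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="reports",
    DbUser="bi_user",
    Sql=(
        "SELECT region, SUM(sales_amount) AS total_sales "
        "FROM sales GROUP BY region ORDER BY total_sales DESC;"
    ),
)
print("Statement id:", statement["Id"])
```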
41-: We are designing a web application that runs on Amazon EC2 instances behind an Elastic Load Balancer. One requirement is that all data in transit must be encrypted. How could we do that? (Select TWO)
- Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2 instances.
- Use sticky sessions with the Application Load Balancer (ALB).
- Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL certificates on the ALB and EC2 instances.
- Use a Network Load Balancer (NLB) with an HTTPS listener, then install SSL certificates on the NLB and EC2 instances.
- Use an Application Load Balancer (ALB) with a TCP listener, then terminate SSL on EC2 instances.
Solution: 1, 3. You cannot use an HTTPS listener on a Network Load Balancer (NLB), as it operates at layer 4. However, you can use an NLB with a TCP listener and terminate SSL on the EC2 instances. This means the NLB passes the SSL-encrypted traffic through to the EC2 instances, where the SSL termination happens.
Another, simpler option is to use an Application Load Balancer (ALB), which supports SSL termination at the load balancer level. You would need to install SSL certificates on the ALB (and on the EC2 instances, so the traffic from the ALB to the instances is also encrypted), and the ALB would handle the encryption and decryption of client traffic, offloading most of that work from your EC2 instances.
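A boto3 sketch of the ALB side (load balancer, certificate, and target group ARNs are placeholders); the target group would also use HTTPS so traffic from the ALB to the instances stays encrypted:

```python
import boto3

elbv2 = boto3.client("elbv2")

# HTTPS listener that terminates TLS on the ALB with an ACM certificate and
# forwards to a target group that talks HTTPS to the EC2 instances.
elbv2.create_listener(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/web-alb/50dc6c495c0c9188"
    ),
    Protocol="HTTPS",
    Port=443,
    Certificates=[{
        "CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/1234abcd-ef56-7890-abcd-1234567890ab"
    }],
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "targetgroup/web-tg/6d0ecf831eec9f09"
        ),
    }],
)
```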
42-: A company shares some videos stored in an Amazon S3 bucket via CloudFront. We want to restrict access to private content so that only users from specific IP addresses can access the videos. Also, ensuring direct access via the Amazon S3 bucket shouldn’t be possible. How can this be achieved?
- Configure CloudFront to require users to access the files using signed cookies, create an origin access identity (OAI), and instruct users to log in with the OAI.
- Configure CloudFront to require users to access the files using a signed URL, create an origin access identity (OAI), and restrict access to the files in the Amazon S3 bucket to the OAI.
- Configure CloudFront to require users to access the files using signed cookies and move the files to an encrypted EBS volume.
- Configure CloudFront to require users to access the files using a signed URL and configure the S3 bucket as a website endpoint.
Solution: 2. This answer provides two layers of security. The first is signed URLs, which restrict content access to specific users. The second is the origin access identity (OAI), which allows CloudFront to access your S3 objects while preventing anyone else from accessing them. So you can reach these objects through CloudFront, but not directly with an Amazon S3 object URL.
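For the signed-URL part, here is a sketch using botocore’s CloudFrontSigner (it assumes the `cryptography` package is installed, and the key pair ID, private key file, and distribution domain are placeholders):

```python
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign the CloudFront policy with the private key that matches the
    # public key registered with the distribution.
    with open("private_key.pem", "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder key pair ID

# URL valid for one hour; after that, CloudFront rejects the request.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/videos/intro.mp4",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)
```

Restricting access to specific IP addresses would be done with a custom policy (the signer also accepts a `policy` argument) that includes an IP address condition.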
43-: A Kinesis consumer application is reading slower than expected. It has been identified that multiple consumer applications have total reads exceeding the per-shard limits. How can this situation be resolved?
- Increase the number of shards in the Kinesis data stream
- Implement API throttling to restrict the number of requests per shard
- Increase the number of reading transactions per shard
- Implement read throttling for the Kinesis data stream
Solution: 1. Each Amazon Kinesis Data Stream shard has a limited capacity. If the volume of data or the number of transactions is too large for a shard to handle, you can add more shards to the stream to increase capacity.
The other options don’t address the core issue: the Kinesis stream is overloaded.
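A quick boto3 sketch (the stream name is a placeholder) that doubles the shard count to raise the aggregate read capacity of the stream:

```python
import boto3

kinesis = boto3.client("kinesis")

# Each shard supports a fixed read throughput, so adding shards raises the
# total capacity available to the consumer applications.
summary = kinesis.describe_stream_summary(StreamName="clickstream")
open_shards = summary["StreamDescriptionSummary"]["OpenShardCount"]

kinesis.update_shard_count(
    StreamName="clickstream",
    TargetShardCount=open_shards * 2,
    ScalingType="UNIFORM_SCALING",
)
```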
44-: A shared VPC is being set up for several AWS accounts, and we will use it to share an application. How can this be set up with the least administrative effort, without allowing consumers to connect to other instances in the VPC? (Select TWO)
- Create a Network Load Balancer (NLB)
- Use AWS PublicLink to expose the application as an endpoint service
- Use AWS ClassicLink to expose the application as an endpoint service
- Create an Application Load Balancer (ALB)
- Use AWS PrivateLink to expose the application as an endpoint service
Solution: 1, 5. PrivateLink is the right service for this use case, as it allows consumers to privately access the service in your VPC without using public IPs and without requiring the traffic to traverse the Internet. This means you can expose your application as an endpoint service, and others can access it without having access to other resources in your VPC.
As the service provider, you should create a Network Load Balancer in your VPC as the service front end. You then select this load balancer when you create the VPC endpoint service configuration.
Previously, it was impossible to use AWS PrivateLink with Application Load Balancers. However, it’s now possible, as you can see at the following link.
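On the consumer side, each account would create an interface endpoint to the shared service, roughly like this (service name, VPC, subnet, and security group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Interface endpoint in the consumer VPC that connects only to the shared
# endpoint service, not to anything else in the provider's VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0abc1234def567890"],
    SecurityGroupIds=["sg-0abc1234def567890"],
)
```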
45-: An application runs its compute layer across EC2 instances and should scale based on the number of jobs to be processed. The compute layer is stateless. Which design should we use to ensure that the application is loosely coupled and the job items are durably stored?
- Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
- Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.
- Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
- Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
Solution: 4. The key factors in this problem are the durability of the jobs and the loose coupling of the application. SNS is a pub/sub messaging service but does not provide durable storage for the jobs. Also, it’s not possible to scale based on the number of messages published to an SNS topic.
Amazon SQS is a message queue service used for decoupling applications, and it provides durable storage for the messages, ensuring that they are not lost in case of any failure. We need to scale based on the number of jobs waiting to be processed, not on the network usage; that’s why the selected option is correct.
A typical approach is to scale based on the number of items in the SQS queue, as you can see in the following image:
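One simplified way to express that scaling policy with boto3 (Auto Scaling group name, queue name, and target value are placeholders; AWS suggests a backlog-per-instance custom metric in production, but the raw queue depth keeps the sketch short):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking on the number of visible messages: the group adds instances
# when the backlog grows and removes them when the queue drains.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-workers-asg",
    PolicyName="scale-on-sqs-backlog",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,
    },
)
```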
More Questions?
You can find the last 20 questions at the following link.
- Do you want more than 500 AWS practice questions?
- Access to a real exam simulator to thoroughly prepare for the exam.
- Download the ultimate cheat sheet for the AWS SAA exam!
All of this and more at FullCertified!
Thanks for Reading!
If you like my work and you want to support me…
- You can follow me on Medium here.
- Feel free to clap if this post is helpful for you! :)