Amazon S3 Practice Exam Questions — AWS Solutions Architect Associate

Chapter 8: Exam questions with solutions about AWS S3

7 min read · Aug 23, 2021


Questions about Amazon S3 account for a large part of your AWS exam score. It is an essential service with many features, so you must know it thoroughly. Let's look at the typical questions you'll find in the exam!

Amazon S3 Practice Exam Questions for the AWS Solutions Architect Associate Certification.

Remember that all the chapters from the course can be found in the following link:

QUESTIONS & ANSWERS

Which is the MOST cost-effective storage option for a service that provides offsite backups for different devices and has to support millions of customers, where the images will be retrieved infrequently but must be available for immediate retrieval?

  1. Amazon S3 Standard-Infrequent Access.
  2. Amazon Glacier with expedited retrievals.
  3. Amazon S3 Standard.
  4. Amazon EFS.

Solution: 1. Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is designed for data that is accessed less frequently but requires rapid access when it is needed. The other options are incorrect: Amazon Glacier is meant for long-term archival storage, and even expedited retrievals take 1–5 minutes and carry a retrieval premium, so the data is not immediately available. Amazon S3 Standard would work, but it is more expensive than S3 Standard-IA for infrequently accessed data. Amazon EFS is a file system designed to be mounted by EC2 instances, not an object store for backups, and it costs considerably more per GB. You can see the different S3 storage options in the following diagram:

Object Storage Classes Summary Table.
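
As a quick illustration of how a backup service could target this storage class directly, the following boto3 sketch uploads an object straight into S3 Standard-IA. The bucket and key names are invented for the example:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key, for illustration only.
# StorageClass="STANDARD_IA" stores the object as infrequent access
# while keeping millisecond retrieval when a customer asks for it.
s3.put_object(
    Bucket="example-backup-bucket",
    Key="backups/device-1234/image-001.jpg",
    Body=b"<image bytes>",
    StorageClass="STANDARD_IA",
)
```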

An application processes images stored on S3, using AWS Lambda to add graphical elements. The processed images need to be available for users to download for 30 days, after which they can be deleted; they can easily be recreated from the original images. The original images need to be immediately available for 30 days and accessible within 24 hours for another 90 days. Which combination of Amazon S3 storage classes is the most cost-effective for the original and processed images? (Select TWO)

  1. Store the original images in STANDARD for 30 days, transition to GLACIER for 90 days, then expire the data.
  2. Store the original images in STANDARD_IA for 30 days and then transition to DEEP_ARCHIVE.
  3. Store the processed images in ONEZONE_IA and then expire the data after 30 days.
  4. Store the original images in STANDARD for 30 days, transition to DEEP_ARCHIVE for 90 days, then expire the data.
  5. Store the processed images in STANDARD and then transition to GLACIER after 30 days.

Solution: 1, 3. The original images must be immediately available for 30 days, so STANDARD storage suits this. After that, they must be accessible within 24 hours for another 90 days, which aligns with the GLACIER retrieval times. After a total of 120 days, the data can be expired.

The processed images can be easily recreated from the original ones, and they must be available for users to download for 30 days. Because of these two requirements, it’s safe to use ONEZONE_IA storage for these as it’s cheaper than STANDARD storage. It offers the same retrieval time, even though your images will be stored in just one Availability Zone.
STANDARD_IA is meant for infrequently accessed data, which is not the case during the first 30 days. DEEP_ARCHIVE standard retrievals take up to 12 hours (bulk retrievals up to 48 hours), and its 180-day minimum storage duration makes expiring objects after 90 days needlessly expensive. There is no need to transition the processed images to GLACIER after 30 days because they only need to be available for 30 days.
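
As a minimal sketch of how these rules could be expressed, the following boto3 call sets up the two lifecycle rules described above. The bucket name and prefixes are assumptions for the example, and the processed images would be uploaded with StorageClass="ONEZONE_IA" in the first place:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-images-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "original-images",
                "Filter": {"Prefix": "originals/"},
                "Status": "Enabled",
                # 30 days in STANDARD, then 90 days in GLACIER.
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 120},
            },
            {
                "ID": "processed-images",
                # Uploaded directly as ONEZONE_IA; just expire them here.
                "Filter": {"Prefix": "processed/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            },
        ]
    },
)
```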

We must deliver data stored in Amazon Glacier within 5 minutes of a retrieval request. Which features in Amazon Glacier can help meet this requirement?

  1. Standard retrieval.
  2. Bulk retrieval.
  3. Expedited retrieval.
  4. Vault Lock.

Solution: 3. Remember the three types of retrieval for S3 Glacier (a sample expedited request follows the list):

  • Expedited retrieval → data available within 1–5 minutes of the request.
  • Standard retrieval → data available within 3–5 hours.
  • Bulk retrieval → data available within 5–12 hours.
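
An expedited retrieval is requested through the restore_object API; this is a small sketch under assumed bucket and key names, and the restored copy stays downloadable for the number of days you specify:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key. Tier="Expedited" makes the data
# available within roughly 1-5 minutes; "Standard" and "Bulk"
# correspond to the slower (and cheaper) tiers above.
s3.restore_object(
    Bucket="example-archive-bucket",
    Key="archives/report-2021.zip",
    RestoreRequest={
        "Days": 1,  # keep the restored copy for one day
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```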

A company stores essential data in an Amazon S3 bucket. A solutions architect needs to ensure that the company can recover data in case of accidental deletion. How can we do that?

  1. Enable Amazon S3 Intelligent-Tiering.
  2. Enable an Amazon S3 lifecycle policy.
  3. Enable Amazon S3 cross-Region replication.
  4. Enable Amazon S3 versioning.

Solution: 4. Using Amazon S3 versioning, you can keep multiple variants of an object (including all writes and deletes) in the same bucket. This makes it easy to recover from unintended user actions and application failures.

An S3 lifecycle policy automatically moves data between storage classes and can delete data after a specified time, so it doesn't protect against deletions. S3 Cross-Region Replication creates backup copies of data in other Regions, but it is a disaster-recovery feature rather than a protection against accidental deletion, and it requires versioning to be enabled anyway.
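
Turning versioning on is a one-line API call; here is a minimal boto3 sketch (the bucket name is made up):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket. Once versioning is enabled, overwrites and
# deletes create new versions or delete markers instead of
# destroying data, so earlier versions can always be restored.
s3.put_bucket_versioning(
    Bucket="example-critical-data-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```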

An application receives and processes files of around 4 GB in size. The application extracts metadata from the files, typically taking a few seconds per file. The workload is bursty, with periods of little activity and multiple uploads arriving within a short window. Which architecture gives us the most cost-efficient solution?

  1. Use Amazon RDS to store the files and AWS Lambda for processing.
  2. Store the files in an S3 bucket that an EC2 instance can then access to extract the metadata.
  3. Upload the files to an S3 bucket and use Amazon S3 event notifications to invoke a Lambda function that extracts the metadata.
  4. Place the files in an EBS volume and use a fleet of EC2 instances to extract the metadata.

Solution: 3. This is a perfect use case for S3 event notifications. When a file is uploaded to S3, it generates an event that invokes a Lambda function to extract the metadata. This is also the most cost-effective option because there are no servers running, and the costs of S3 and Lambda are very low. You can see it in the following image:

How S3 Event Notification works.
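
On the Lambda side, a handler for such a notification could look like the following minimal sketch; the real metadata extraction is stubbed out with a head_object call, and the event shape is the standard S3 notification format:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Each record describes one uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Placeholder for the real extraction logic: read basic
        # metadata from the object's headers without downloading it.
        head = s3.head_object(Bucket=bucket, Key=key)
        print(f"{key}: {head['ContentLength']} bytes, {head['ContentType']}")
```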

A website developed using HTML, CSS, client-side JavaScript, and images needs to be hosted. Which solution is the MOST cost-effective?

  1. Containerize the website and host it in AWS Fargate.
  2. Create an Amazon S3 bucket and host the website there.
  3. Deploy a web server on an Amazon EC2 instance to host the website.
  4. Configure an Application Load Balancer with an AWS Lambda target.

Solution: 2. Amazon S3 allows you to host static websites without a traditional web server. Given that the website is composed of static files (HTML, CSS, JavaScript, images), you can upload these files to an S3 bucket and configure the bucket for static website hosting. This would be the most cost-effective solution and the easiest one to implement. You can see how to activate it in the following image, under the bucket properties:

You can enable Amazon S3 Static website hosting under the bucket properties.
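
The same setting can be applied programmatically. This boto3 sketch configures a hypothetical bucket for static website hosting, assuming index.html and error.html exist in the bucket and that a bucket policy allows public reads:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; after this call the site is served from
# the bucket's website endpoint.
s3.put_bucket_website(
    Bucket="example-website-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```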

We must encrypt some images at rest in Amazon S3, but the company doesn't want to spend time managing and rotating the keys, although it does want to control who can access them. What should a solutions architect use to accomplish this?

  1. Server-Side Encryption with keys stored in an S3 bucket.
  2. Server-Side Encryption with Customer-Provided Keys (SSE-C).
  3. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3).
  4. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS).

Solution: 4. AWS Key Management Service (AWS KMS) is a key management system scaled for the cloud, and we can use it to encrypt our Amazon S3 objects. Using SSE-KMS will allow you to use KMS to handle key management, including key creation, rotation, disabling, and deletion.

SSE-C requires the company to manage the keys itself. SSE-S3 won’t provide the same granular control over who can access the keys. Storing encryption keys in an S3 bucket is not secure, and you should never do it.

You can see how SSE-KMS works in the following image:

SSE-KMS Encryption diagram.
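
In practice, requesting SSE-KMS is a matter of two extra parameters on the upload; in this sketch the bucket, key, and KMS key alias are invented for the example:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names. Reading the object later also requires
# kms:Decrypt permission on the chosen KMS key, which is what gives
# the company control over who can access the data.
s3.put_object(
    Bucket="example-images-bucket",
    Key="images/photo.jpg",
    Body=b"<image bytes>",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-images-key",
)
```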

A Solutions Architect must design a CSV storage solution for incoming billing reports. The data will be analyzed infrequently and discarded after 30 days. Which combination of services will be MOST cost-effective in meeting these requirements?

  1. Write the files to an S3 bucket and use Amazon Athena to query the data.
  2. Import the logs to an Amazon Redshift cluster.
  3. Use AWS Data Pipeline to import the logs into a DynamoDB table.
  4. Import the logs into an RDS MySQL instance.

Solution: 1. Amazon Athena is an interactive query service that easily analyzes data in Amazon S3 using standard SQL. It is a serverless service, and you only pay for the queries you run, so it’s a cost-effective solution for infrequent analysis of data. You can see what the UI of Amazon Athena looks like in the following image:

Query your Amazon S3 buckets using Amazon Athena.
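
As a rough sketch of this serverless flow, a query can also be submitted from boto3; the database, table, and output location below are assumptions for the example:

```python
import boto3

athena = boto3.client("athena")

# Hypothetical database/table defined over the CSV files in S3;
# Athena writes the query results to the output location you specify.
response = athena.start_query_execution(
    QueryString=(
        "SELECT account_id, SUM(amount) AS total "
        "FROM billing_reports GROUP BY account_id"
    ),
    QueryExecutionContext={"Database": "billing"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```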

More Questions?

  • Do you want more than 500 AWS practice questions?
  • Access a real exam simulator to prepare thoroughly for the exam.
  • Download all of the AWS questions as a PDF.

All of this and more at FullCertified!

Thanks for Reading!

If you like my work and want to support me…

  1. The BEST way is to follow me on Medium here.
  2. Feel free to clap if this post is helpful for you! :)

Written by Gonzalo Fernandez Plaza

Computer Science Engineer & Tech Lead 🖥️. Publishing AWS & Snowflake ❄️ courses & exams. https://www.fullcertified.com
