AWS Solutions Architect Exam Dumps and Braindumps

AWS Solutions Architect Exam Simulator

Despite the title of this article, this isn’t an AWS Solutions Architect Associate braindump in the traditional sense.

I don’t believe in cheating.

Traditionally, the word “braindump” referred to someone taking an exam, memorizing the questions, and posting them online for others to use. That’s unethical and a direct violation of the AWS certification agreement. There’s no integrity or personal growth in that.

Better than AWS certification exam dumps

This is not an AWS braindump.

All of these questions come from my AWS Solutions Architect Associate Udemy course and from the certificationexams.pro website, which hosts hundreds of free AWS practice questions.

Each question has been carefully written to align with the official AWS Certified Solutions Architect Associate exam domains. They are designed to mirror the tone, logic, and technical depth of real exam scenarios, but they are not copied from the test. They’re built to help you learn the right way. Passing the AWS certification exams with integrity matters.

If you can answer these questions and understand why the incorrect options are wrong, you won’t just pass the AWS Solutions Architect Associate exam; you’ll truly understand how to design secure, resilient, high-performing, and cost-optimized architectures on AWS.

So, if you want to call this your AWS Solutions Architect Associate braindump, go ahead—but know that every question here is built to teach, not to cheat.

Each question below includes a detailed explanation, along with key tips and strategies to help you think like an AWS Solutions Architect on exam day.

Study hard, learn deeply, and best of luck on your exam.

A telemedicine platform runs on a fleet of EC2 instances in an Auto Scaling group. The first launch template, LT-Alpha, sets instance tenancy to dedicated, but it uses VPC-Blue where the VPC tenancy is default. Later, the team creates LT-Beta with tenancy set to default and launches into VPC-Green where the VPC tenancy is dedicated. What tenancy will the instances use when launched from LT-Alpha and from LT-Beta?

  • ❏ A. Instances from LT-Alpha will have default tenancy, and instances from LT-Beta will have dedicated tenancy

  • ❏ B. Instances from both LT-Alpha and LT-Beta will have default tenancy

  • ❏ C. Instances from both LT-Alpha and LT-Beta will have dedicated tenancy

  • ❏ D. Instances from LT-Alpha will have dedicated tenancy, and instances from LT-Beta will have default tenancy

A media analytics startup operates workloads across multiple AWS accounts under AWS Organizations. Some application maintenance is handled by third-party engineers. These contractors need secure AWS Management Console access and command-line access to Amazon Linux 2023 EC2 instances that reside in private subnets across three VPCs for troubleshooting. The company requires full auditing of activities and wants the smallest possible external attack surface. Which approach is the most secure?

  • ❏ A. Deploy a hardened bastion host in a public subnet, restrict SSH to known contractor CIDR ranges with security groups, issue IAM user credentials for console sign-in, and distribute SSH key pairs for hopping to the private instances via the bastion

  • ❏ B. Install AWS Systems Manager Agent on each instance with an instance profile, create short-lived IAM users in every account for contractors to reach the console, and require Systems Manager Session Manager for instance access

  • ❏ C. Install AWS Systems Manager Agent on all instances with an IAM role for Systems Manager, onboard contractors with permission sets in AWS IAM Identity Center for console access, and use Systems Manager Session Manager for shell access so no inbound ports are opened

  • ❏ D. Establish AWS Client VPN from the contractors’ office network, create IAM users in each account for console access, and allow SSH from the VPN to the private EC2 instances using security groups

PulsePath, a healthtech startup, runs a microservices platform on Amazon EKS. The platform team needs a managed way to collect and consolidate cluster metrics and application logs and to visualize CPU and memory usage across EKS namespaces, services, and pods in a single centralized dashboard without migrating away from EKS. Which solution should they adopt?

  • ❏ A. Configure AWS X-Ray and store traces in Amazon OpenSearch Service

  • ❏ B. Use Amazon Managed Service for Prometheus integrated with Amazon Managed Grafana

  • ❏ C. Enable Amazon CloudWatch Container Insights for the existing EKS cluster and use the CloudWatch console dashboards

  • ❏ D. Deploy the Amazon CloudWatch agent as a DaemonSet and view metrics and logs in the CloudWatch console

A prominent South American motorsports federation has granted exclusive live streaming rights for its races in Canada to an Austin-based media platform. The agreement mandates that only viewers physically located in Canada can watch the live broadcasts, and access from any other country must be denied. How should the platform enforce these location-based restrictions using AWS services? (Choose 2)

  • ❏ A. Turn on CloudFront geo restriction to whitelist Canada and block all other countries

  • ❏ B. Use Amazon Route 53 latency-based routing to send viewers to the lowest-latency endpoint

  • ❏ C. Configure Amazon Route 53 geolocation routing to resolve Canadian DNS queries to the streaming endpoints and return a denial host for other countries

  • ❏ D. Secure streams with Amazon CloudFront signed URLs to control who can watch

  • ❏ E. Apply Amazon Route 53 weighted routing to split traffic across endpoints

A healthcare analytics firm operates a patient risk scoring application in two private subnets behind an Application Load Balancer within a VPC. The VPC has both a NAT gateway and an internet gateway. The application processes sensitive records and then writes summaries to an Amazon S3 bucket used for internal reporting. The organization requires all traffic to remain on the AWS private network without traversing the public internet, and leadership wants the lowest-cost compliant approach. What is the most cost-effective solution to meet these requirements?

  • ❏ A. Create an S3 interface VPC endpoint and attach a security group that allows the application to connect to S3 privately

  • ❏ B. Provision a gateway VPC endpoint for Amazon S3 and update the private subnet route tables to route S3 prefixes to the endpoint

  • ❏ C. Route S3 traffic through the existing NAT gateway by updating routes and tightening security group and network ACL rules

  • ❏ D. Enable S3 Transfer Acceleration and restrict access with a bucket policy that allows only trusted IP ranges

A digital media startup, NimbusPlay, stores user profiles, session events, clickstream data, and viewed content in Amazon DynamoDB. Some workloads must sustain up to 7 million read requests per second with consistently low single-digit millisecond latency and high availability. To absorb heavy read bursts, the team wants to introduce a dedicated cache in front of DynamoDB. Which AWS services should be used as the caching layer for this requirement? (Choose 2)

  • ❏ A. Amazon OpenSearch Service

  • ❏ B. Amazon DynamoDB Accelerator (DAX)

  • ❏ C. Amazon Relational Database Service (Amazon RDS)

  • ❏ D. Amazon ElastiCache

  • ❏ E. Amazon Redshift

A media startup operates a stateless web service on nine Amazon EC2 instances managed by an Auto Scaling group in a single Availability Zone, and traffic is routed through an Application Load Balancer. The company must withstand an Availability Zone outage without changing the application code. What architecture change should the solutions architect implement to achieve high availability?

  • ❏ A. Create an Amazon CloudFront distribution with the ALB as a custom origin

  • ❏ B. Reconfigure the Auto Scaling group to span three Availability Zones and maintain three instances per AZ

  • ❏ C. Create a new Auto Scaling group in a second Region and use Amazon Route 53 to split traffic between Regions

  • ❏ D. Create a new launch template to quickly add instances in another Region during an incident

A regional logistics company is adopting a hybrid model that keeps core systems in its primary colocation site while using AWS for object storage and analytics, moving about 30 TB of data to Amazon S3 each month. The network team requires a dedicated private link to AWS for steady performance, but they also want to maintain uptime by failing over to an encrypted path over the public internet during an outage. Which combination of connectivity choices should they implement to satisfy these needs? (Choose 2)

  • ❏ A. Use Egress-Only Internet Gateway as a backup connection

  • ❏ B. Use AWS Direct Connect as the primary connection

  • ❏ C. Use AWS Global Accelerator as a backup connection

  • ❏ D. Use AWS Site-to-Site VPN as the primary connection

  • ❏ E. Use AWS Site-to-Site VPN as a backup connection

A travel booking marketplace runs on an Amazon RDS for PostgreSQL database. Several read-heavy reports join many tables and have become slow and costly at peak load, so the team wants to cache the join results. The cache engine must be able to leverage multiple CPU threads concurrently. Which AWS service should the solutions architect recommend?

  • ❏ A. Amazon ElastiCache for Redis

  • ❏ B. AWS Global Accelerator

  • ❏ C. Amazon ElastiCache for Memcached

  • ❏ D. Amazon DynamoDB Accelerator (DAX)

NovaStream, a global video training provider, runs its web application on AWS. The service uses Amazon EC2 instances in private subnets behind an Application Load Balancer. New distribution agreements require that traffic from several specific countries be blocked. What is the simplest way to implement this requirement?

  • ❏ A. Associate an AWS WAF web ACL with the ALB and add a geo match rule to block the specified countries

  • ❏ B. Edit the security group on the ALB to reject connections from those countries

  • ❏ C. Put Amazon CloudFront in front of the ALB and enable geo restriction to deny the listed countries

  • ❏ D. Use VPC network ACLs to block the public IP ranges for those countries

A media analytics startup runs a batch data transformation job that usually finishes in about 75 minutes. The pipeline uses checkpoints so it can pause and resume, and it can tolerate compute interruptions without losing progress. Which approach offers the lowest cost for running these jobs?

  • ❏ A. Amazon EC2 On-Demand Instances

  • ❏ B. AWS Lambda

  • ❏ C. Amazon EC2 Spot Instances

  • ❏ D. Amazon EC2 Reserved Instances

A translational medicine institute ingests genomic sequencing datasets for more than twenty hospitals. Each hospital keeps the raw data in its own relational database. The institute must extract the data, run hospital-specific transformation logic, and deliver curated outputs to Amazon S3. Because the data is highly sensitive, it must be protected while ETL runs and when stored in Amazon S3, and each hospital requires the use of its own encryption keys to satisfy compliance. The institute wants the simplest day-to-day operations possible. Which approach will meet these needs with the least operational effort?

  • ❏ A. Use AWS Glue to run a single multi-tenant ETL job for all hospitals, tag records by hospital, and select a hospital’s AWS KMS key dynamically to write results to Amazon S3 with SSE-KMS

  • ❏ B. Create separate AWS Glue ETL jobs per hospital and attach a security configuration that uses that hospital’s AWS KMS key for SSE-KMS during job runs and for objects written to Amazon S3

  • ❏ C. Build containerized ETL with AWS Batch on Fargate, encrypt task storage with KMS and write outputs to Amazon S3 using SSE-S3

  • ❏ D. Operate one shared Amazon EMR cluster for all hospitals, secure traffic with TLS, and store outputs in Amazon S3 using SSE-S3

A global logistics provider equips 18,000 cargo pallets with IoT GPS tags that check for significant movement about every 90 seconds and push updated coordinates. The devices currently send updates to a backend running on Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones in one AWS Region. A recent surge in events overwhelmed the application, some location updates were dropped, and there is no ability to replay missed data. The company wants an ingestion design that avoids future data loss during bursts and keeps day-to-day operations lightweight. What should a solutions architect implement?

  • ❏ A. Amazon Kinesis Data Streams with multiple shards and EC2 consumers

  • ❏ B. Create an Amazon SQS queue to buffer location messages and have the application poll and process them

  • ❏ C. Configure an Amazon Kinesis Data Firehose delivery stream to store updates in Amazon S3 for the application to scan periodically

  • ❏ D. Use an AWS IoT Core rule to publish device updates to an Amazon SNS topic and have the application poll the topic

Blue Harbor Games operates a latency sensitive multiplayer arena on Amazon EC2 instances deployed in a single Availability Zone. Players connect using UDP at Layer 4, and traffic spikes during weekend events while dropping significantly overnight. Leadership requires higher availability and a way to lower costs during off-peak hours. What should the solutions architect implement to meet these goals? (Choose 2)

  • ❏ A. Switch to smaller instance types and increase the number of instances in the same Availability Zone

  • ❏ B. Use an EC2 Auto Scaling group across at least two Availability Zones to scale based on demand

  • ❏ C. Front the service with an Application Load Balancer

  • ❏ D. Place a Network Load Balancer in front of the EC2 fleet to handle Layer 4 connections

  • ❏ E. AWS Global Accelerator

A fintech startup runs a batch processing app on a single Amazon EC2 instance that must read and write to a DynamoDB table named SalesLedgerV3 within the same AWS account. What should a solutions architect implement to grant the instance the required permissions in a secure, maintainable way?

  • ❏ A. Store IAM user access keys in AWS Secrets Manager and have the application retrieve them to call DynamoDB

  • ❏ B. Create an IAM role for EC2 with least-privilege access to the SalesLedgerV3 table and attach it to the instance using an instance profile

  • ❏ C. Create an IAM user with DynamoDB permissions and save its access keys on the instance file system for the application to load

  • ❏ D. Create an IAM role and add the specific EC2 instance ID to the role trust policy so the instance can assume it

NovaMetrics runs a critical Python job that ingests near–real-time data snapshots on a fixed cadence. The job must execute every 12 minutes, finishes in about 90 seconds, needs roughly 1.5 GB of memory, and is CPU bound. The company wants to minimize the cost of running this workload while meeting these requirements; which approach should they choose?

  • ❏ A. Use AWS App2Container to containerize the code and run it as an Amazon ECS task on AWS Fargate with 1 vCPU and 1536 MB of memory

  • ❏ B. Run the workload as a Kubernetes CronJob on Amazon EKS using AWS Fargate

  • ❏ C. Implement the job as an AWS Lambda function with 1536 MB of memory and schedule it to run every 12 minutes with Amazon EventBridge

  • ❏ D. Use AWS App2Container to containerize the application and run it on an Amazon EC2 instance, using Amazon CloudWatch to stop the instance when the job is idle

Borealis Media is retiring its on-site file server and needs a fully managed cloud file service. Employees frequently work remotely from different countries, and applications that still run on the corporate network must mount the same shared storage using standard file protocols. Which solution should the company implement to meet these needs?

  • ❏ A. AWS DataSync

  • ❏ B. Amazon WorkDocs

  • ❏ C. Amazon FSx for Windows File Server with SMB and AWS Client VPN

  • ❏ D. AWS Storage Gateway volume gateway

A regional retailer runs a monolithic web portal on a single Amazon EC2 instance. During weekend flash sales, customers report slow responses. Amazon CloudWatch shows CPU utilization hovering around 99% during these peak windows. The team wants to remove the bottleneck and raise availability while keeping costs in check. Which actions should they take? (Choose 2)

  • ❏ A. Configure an Auto Scaling group with an Application Load Balancer to scale vertically

  • ❏ B. Place the application behind an Application Load Balancer and use an EC2 Auto Scaling group to scale out

  • ❏ C. Use AWS Compute Optimizer to choose an instance type for horizontal scaling

  • ❏ D. Use AWS Compute Optimizer to right-size the instance to a larger class for vertical scaling

  • ❏ E. Add Amazon CloudFront in front of the EC2 instance to reduce load

UrbanLift, a ride-hailing startup, runs its platform on AWS using Amazon API Gateway and AWS Lambda with an Amazon RDS for PostgreSQL backend. The Lambda functions currently connect to the database using a stored username and password. The team wants to enhance security at the authentication layer by adopting short-lived, automatically generated credentials instead of static passwords. Which actions should you implement to achieve this? (Choose 2)

  • ❏ A. Place the Lambda function in the same VPC as RDS

  • ❏ B. Use IAM database authentication with Amazon RDS for PostgreSQL

  • ❏ C. Configure AWS Secrets Manager rotation and fetch passwords from Lambda

  • ❏ D. Assign an IAM execution role to the Lambda function

  • ❏ E. Limit the RDS security group inbound rule to the Lambda function security group

A digital publishing startup stores easily reproducible project files in Amazon S3. New uploads are heavily downloaded for the first couple of days, and access drops sharply after about eight days. The team still requires immediate, on-demand retrieval later but wants to minimize storage expenses as much as possible. What approach should they take to reduce costs while meeting these needs?

  • ❏ A. Set an S3 Lifecycle rule to transition objects to S3 Standard-IA after 30 days

  • ❏ B. Use S3 Intelligent-Tiering for all objects from day one

  • ❏ C. Set an S3 Lifecycle rule to transition objects to S3 One Zone-IA after 30 days

  • ❏ D. Set an S3 Lifecycle rule to transition objects to S3 One Zone-IA after 10 days

A regional design firm runs a reporting utility on a single Amazon EC2 instance that requires about 300 GiB of block storage. The tool is idle most of the time but has brief usage spikes on weekday mid-mornings and early evenings. Disk I/O varies, with peaks reaching up to 3,000 IOPS. The architect needs the most economical option that still delivers the required performance. Which Amazon EBS volume type should be selected?

  • ❏ A. Amazon EBS Provisioned IOPS SSD (io1)

  • ❏ B. Amazon EBS Cold HDD (sc1)

  • ❏ C. Amazon EBS General Purpose SSD (gp2)

  • ❏ D. Amazon EBS Throughput Optimized HDD (st1)

A solutions architect at a digital ticketing startup needs to store rolling application logs from its production web platform in Amazon S3 for roughly 90 days. The team cannot predict which log objects will be read or how often they will be retrieved, but they must be available immediately when needed. To keep costs low without adding operational overhead, which S3 storage class should be used?

  • ❏ A. S3 Standard-Infrequent Access (S3 Standard-IA)

  • ❏ B. S3 Glacier

  • ❏ C. S3 Intelligent-Tiering

  • ❏ D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

A retail analytics startup runs an Amazon RDS for MySQL instance to store purchase transactions. The database is already encrypted at rest with an AWS KMS key. The security team now requires every application connection to use TLS and validate the database server certificate. What should the solutions architect do to enable encryption in transit for these connections?

  • ❏ A. Enable encryption in transit in the RDS console and obtain a key from AWS KMS

  • ❏ B. Install a self-signed server certificate on the RDS instance and distribute it to the applications

  • ❏ C. Download the current AWS RDS CA root certificate bundle and configure the application clients to use it when connecting

  • ❏ D. Use AWS Certificate Manager to issue a certificate and associate it with the RDS DB instance

A logistics startup has an IAM group for operations engineers. The group is attached to this policy:

```json
{
  "Version": "2012-10-17",
  "Id": "EC2SafeTerminatePolicyV2",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "ec2:Region": "eu-central-1"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "172.16.50.0/25"
        }
      }
    }
  ]
}
```

Given this configuration, which statement is accurate?

  • ❏ A. Users in the IAM group can terminate an EC2 instance in eu-central-1 when the instance’s private IP is 172.16.50.77

  • ❏ B. Users in the IAM group can terminate an EC2 instance in eu-central-1 only when the user’s source IP is 172.16.50.77

  • ❏ C. Users in the IAM group can terminate EC2 instances in eu-central-1 from any IP because the Allow includes ec2:TerminateInstances

  • ❏ D. Users in the IAM group can terminate EC2 instances in any region except eu-central-1 when the user’s source IP is 172.16.50.77

HarborPoint Press operates a 28 TB MongoDB cluster in its data center and needs to move the data to Amazon DynamoDB within 3 weeks. The site has a constrained WAN link, so a direct online transfer would not complete in time. Which approach should the solutions architect choose to complete the migration without relying on the limited internet connection?

  • ❏ A. Set up AWS Direct Connect and use AWS Database Migration Service to migrate to Amazon DynamoDB

  • ❏ B. Use AWS DataSync to push the dataset to Amazon S3, then apply AWS Database Migration Service to load into Amazon DynamoDB

  • ❏ C. Use AWS Database Migration Service to extract and load the data onto an AWS Snowball Edge, then finalize the migration with AWS DMS in AWS

  • ❏ D. Use AWS Schema Conversion Tool data extractors to load the data to an AWS Snowball Edge device, then use AWS Database Migration Service to migrate into Amazon DynamoDB

A retail analytics startup keeps its data in Amazon S3, where some objects are accessed daily while others are rarely used. The head of finance noticed a 35% month-over-month spike in S3 charges and needs a fast way to cut costs with as little ongoing management as possible. What is the quickest approach to reduce S3 spending with minimal operational effort?

  • ❏ A. Use AWS Trusted Advisor to automatically move objects into optimal S3 storage classes

  • ❏ B. Apply S3 Lifecycle rules to transition objects into the most appropriate storage classes

  • ❏ C. Migrate every object to S3 Glacier Instant Retrieval

  • ❏ D. Build a Step Functions and Lambda workflow that evaluates access logs and retiers objects accordingly

A fast-growing consumer messaging service stores account records, contact relationships, and interaction counters in Amazon DynamoDB. The company is rolling out to additional continents and must deliver consistently low read and write latency and high availability for a worldwide audience. The workload is spiky and hard to predict, and the system must keep operating during a Regional disruption while staying cost efficient. What is the most cost-effective approach to meet these needs?

  • ❏ A. Persist user state in Amazon S3 with Cross-Region Replication and invoke AWS Lambda to update data in real time

  • ❏ B. Configure DynamoDB global tables across multiple Regions using provisioned capacity with Auto Scaling for traffic bursts

  • ❏ C. Enable DynamoDB global tables in multiple Regions and use on-demand capacity to absorb unpredictable traffic

  • ❏ D. Run a single-Region DynamoDB table in provisioned capacity mode and replicate asynchronously to a secondary Region with DynamoDB Streams for disaster recovery

A data analytics startup named Panorama Metrics runs a self-managed database on Amazon EC2 with synchronous commit replication across a pair of Availability Zones. The database must be reachable from the internet, so both EC2 instances are in public subnets, and the replication stream uses each node’s public IPv4 address. You want to reduce the ongoing cost of the replication traffic while preserving the high-availability design. What should you change?

  • ❏ A. Assign Elastic IP addresses to both instances and replicate over those

  • ❏ B. AWS PrivateLink

  • ❏ C. Use the instances’ private IP addresses for the replication

  • ❏ D. AWS Global Accelerator

BlueBridge Analytics, a healthcare technology company, is extending its hybrid environment and needs a secure connection between its on-premises data center and an AWS VPC. All traffic between locations must be encrypted at the IP layer and also protected at the application session layer such as TLS. The design must provide fine-grained controls so that only necessary communication is allowed between on premises and AWS and it should scale as usage increases. What should the solutions architect recommend?

  • ❏ A. Use AWS Client VPN to provide individual user tunnels into the VPC and manage permissions with security groups and IAM policies

  • ❏ B. Establish a dedicated AWS Direct Connect link to the VPC and control routing with VPC route tables along with security groups and network ACLs

  • ❏ C. Configure AWS Site-to-Site VPN between the on-premises edge device and the VPC, and use VPC route tables, security groups, and network ACLs to allow only necessary flows

  • ❏ D. Expose services via AWS PrivateLink endpoints to the corporate network and restrict access using security groups

StreamSpace Learning is building a browser-based chat feature for live courses. The system must maintain persistent bidirectional connections with clients using WebSocket APIs and is expected to handle roughly 40,000 concurrent users. The chat microservices run on containers in an Amazon EKS cluster within private subnets of a VPC and must remain non-public. What is the best way to enable secure access from the WebSocket endpoint to these backend services?

  • ❏ A. Build an Amazon API Gateway REST API and use a private VPC link to reach the EKS service

  • ❏ B. Amazon AppSync

  • ❏ C. Build an Amazon API Gateway WebSocket API and integrate privately via VPC link to a load balancer that fronts the EKS workloads

  • ❏ D. Build an Amazon API Gateway WebSocket API and allow access by adding security group rules that permit API Gateway to reach the EKS endpoints

A digital publishing startup keeps about 40 TB of video and audio assets in a multi-account data lake governed by AWS Lake Formation. The growth marketing group needs secure, fine-grained, cross-account access to just the datasets relevant to their audience segmentation and campaign analysis. What is the simplest way to enable this access while minimizing ongoing administration?

  • ❏ A. Replicate the selected datasets into a centralized shared services account and create an IAM role trusted by the marketing accounts

  • ❏ B. AWS Resource Access Manager to share the underlying S3 buckets with the marketing analytics account

  • ❏ C. Configure Lake Formation tag-based access control to grant cross-account permissions on the required datasets to the marketing analytics accounts

  • ❏ D. Run Lake Formation Grant permissions in each producer account for the specific marketing users

Riverton Geospatial runs a weekly batch analytics job on satellite imagery every Saturday night. The workload consumes large input files, maintains intermediate state, and must execute for at least 3 hours without interruption. The team wants minimal operational effort while ensuring the run starts on time each week. Which approach should they use?

  • ❏ A. Launch a single Amazon EC2 On-Demand instance to host the job and trigger it with a crontab entry

  • ❏ B. Run the workload on an Amazon EMR cluster built from Spot Instances and orchestrate runs with AWS Step Functions

  • ❏ C. Package the job as a container and run it on AWS Fargate with Amazon ECS, scheduling execution using Amazon EventBridge Scheduler

  • ❏ D. Implement the job as an AWS Lambda function with reserved concurrency and schedule it with Amazon EventBridge

A multinational biotech company is replatforming its compliance document vault to AWS. The application will store confidential clinical trial files in Amazon S3. Company policy mandates that every file be encrypted on the client before it is transmitted and stored in S3 to meet strict regulatory controls. Which approach should the company implement?

  • ❏ A. Configure server-side encryption with AWS KMS keys (SSE-KMS) and enforce access with custom key policies

  • ❏ B. Use server-side encryption with customer-provided keys (SSE-C)

  • ❏ C. Perform client-side encryption with AWS KMS customer managed keys and upload only encrypted objects to S3

  • ❏ D. Use AWS CloudHSM to store keys and rely on SSE-S3 for object encryption

Aurora Media is launching a global photo gallery for its news site that will be visited by hundreds of thousands of readers across several regions. Editors upload high-resolution images a single time, and the application must apply on-the-fly transformations such as resizing when viewers retrieve them. The team prefers fully managed services that are highly available, scale automatically, and deliver low latency worldwide with minimal operational effort. What is the simplest architecture that meets these requirements?

  • ❏ A. Store images in Amazon S3 and front them with Amazon CloudFront, using AWS Lambda@Edge to modify origin responses and dynamically resize the images

  • ❏ B. Store images in Amazon S3 behind Amazon CloudFront and configure S3 Event Notifications to invoke an AWS Lambda function to transform images when users request them

  • ❏ C. Store images in Amazon S3 behind Amazon CloudFront and use S3 Object Lambda to apply transformations to objects during GET requests

  • ❏ D. Store images in Amazon DynamoDB with Global Tables and invoke AWS Lambda to process items as they are read from the table

An insurance analytics firm is moving a set of legacy virtual machines to AWS. These workloads depend on drivers and tooling tied closely to the guest operating system and cannot be containerized or rebuilt because of vendor restrictions. Each workload currently runs alone on its own VM. The team will host these workloads on Amazon EC2 without changing the design and needs a solution that provides high availability and resiliency to instance or Availability Zone failures. Which approach should they implement?

  • ❏ A. Containerize the legacy applications and deploy them on Amazon ECS with the Fargate launch type behind an Application Load Balancer across multiple Availability Zones

  • ❏ B. Use AWS Elastic Disaster Recovery to replicate the VMs into AWS and rely on a manual failover runbook during outages

  • ❏ C. Build an AMI for each workload and run two EC2 instances from it in different Availability Zones behind a Network Load Balancer with health checks

  • ❏ D. Create AMIs and launch an Auto Scaling group with a minimum and maximum of 1 instance, then place an Application Load Balancer in front for routing and health-check failover

A telemedicine platform runs on a fleet of EC2 instances in an Auto Scaling group. The first launch template, LT-Alpha, sets instance tenancy to dedicated, but it uses VPC-Blue where the VPC tenancy is default. Later, the team creates LT-Beta with tenancy set to default and launches into VPC-Green where the VPC tenancy is dedicated. What tenancy will the instances use when launched from LT-Alpha and from LT-Beta?

  • ✓ C. Instances from both LT-Alpha and LT-Beta will have dedicated tenancy

The correct answer is Instances from both LT-Alpha and LT-Beta will have dedicated tenancy.

Tenancy can be set either in the launch template or at the VPC level and if either is configured as dedicated the instances launch as dedicated. In this scenario the first launch template LT-Alpha explicitly sets tenancy to dedicated and the second launch template LT-Beta launches into a VPC whose tenancy is dedicated so both produce dedicated instances.

Instances from LT-Alpha will have default tenancy, and instances from LT-Beta will have dedicated tenancy is wrong because LT-Alpha specifies dedicated tenancy in its template which enforces dedicated instances regardless of the VPC default.

Instances from both LT-Alpha and LT-Beta will have default tenancy is wrong because each case includes at least one dedicated configuration so the instances cannot be default.

Instances from LT-Alpha will have dedicated tenancy, and instances from LT-Beta will have default tenancy is wrong because LT-Beta launches into a VPC with dedicated tenancy which causes those instances to be dedicated.

Remember that dedicated anywhere equals dedicated and you only get default tenancy when both the launch template and the VPC are default.
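
To see where template level tenancy is declared, here is a minimal boto3 sketch of a launch template like LT-Alpha with tenancy set to dedicated. The AMI ID, instance type, and template name are placeholders rather than values from the scenario.

```python
import boto3

ec2 = boto3.client("ec2")

# Tenancy set in the launch template itself, so instances launch as
# dedicated even when the VPC default tenancy is "default".
response = ec2.create_launch_template(
    LaunchTemplateName="lt-alpha-example",  # placeholder name
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
        "InstanceType": "m5.large",
        "Placement": {"Tenancy": "dedicated"},
    },
)
print(response["LaunchTemplate"]["LaunchTemplateId"])
```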

A media analytics startup operates workloads across multiple AWS accounts under AWS Organizations. Some application maintenance is handled by third-party engineers. These contractors need secure AWS Management Console access and command-line access to Amazon Linux 2023 EC2 instances that reside in private subnets across three VPCs for troubleshooting. The company requires full auditing of activities and wants the smallest possible external attack surface. Which approach is the most secure?

  • ✓ C. Install AWS Systems Manager Agent on all instances with an IAM role for Systems Manager, onboard contractors with permission sets in AWS IAM Identity Center for console access, and use Systems Manager Session Manager for shell access so no inbound ports are opened

The correct option is Install AWS Systems Manager Agent on all instances with an IAM role for Systems Manager, onboard contractors with permission sets in AWS IAM Identity Center for console access, and use Systems Manager Session Manager for shell access so no inbound ports are opened.

This approach centralizes identity and access management and removes the need to open inbound ports. Systems Manager Agent with an appropriate role lets Session Manager provide interactive shell access without SSH. Session Manager integrates with CloudWatch Logs and S3 for full activity auditing and with AWS KMS for encryption. AWS IAM Identity Center provides centralized permission sets that work across accounts in AWS Organizations and it simplifies contractor onboarding and lifecycle management while supporting least privilege.

Deploy a hardened bastion host in a public subnet, restrict SSH to known contractor CIDR ranges with security groups, issue IAM user credentials for console sign-in, and distribute SSH key pairs for hopping to the private instances via the bastion is less secure because it creates a host that is exposed to the internet and it requires managing SSH keys and long lived credentials which increases the attack surface and operational overhead.

Install AWS Systems Manager Agent on each instance with an instance profile, create short-lived IAM users in every account for contractors to reach the console, and require Systems Manager Session Manager for instance access improves access compared to SSH but it fragments identity management by using per account IAM users. This makes auditing and permission lifecycle harder than using centralized permission sets with IAM Identity Center.

Establish AWS Client VPN from the contractors’ office network, create IAM users in each account for console access, and allow SSH from the VPN to the private EC2 instances using security groups adds network complexity and still relies on SSH which expands the attack surface. It also depends on account scoped IAM users which are harder to govern and audit at scale.

Use IAM Identity Center for centralized contractor identities and prefer Session Manager for portless shell access so you reduce attack surface and get consistent audit logs.
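
As a rough illustration, the boto3 sketch below lists the instances that have registered with Systems Manager, which is the prerequisite for Session Manager access. The instance ID shown in the comment is a placeholder.

```python
import boto3

ssm = boto3.client("ssm")

# Instances appear here once the SSM Agent is running with an instance
# profile that allows Systems Manager, and they can then be reached with
# Session Manager without opening any inbound ports.
paginator = ssm.get_paginator("describe_instance_information")
for page in paginator.paginate():
    for info in page["InstanceInformationList"]:
        print(info["InstanceId"], info["PingStatus"], info["PlatformName"])

# Shell access is then started from the CLI, for example:
#   aws ssm start-session --target i-0123456789abcdef0
# which also requires the Session Manager plugin on the operator machine.
```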

PulsePath, a healthtech startup, runs a microservices platform on Amazon EKS. The platform team needs a managed way to collect and consolidate cluster metrics and application logs and to visualize CPU and memory usage across EKS namespaces, services, and pods in a single centralized dashboard without migrating away from EKS. Which solution should they adopt?

  • ✓ C. Enable Amazon CloudWatch Container Insights for the existing EKS cluster and use the CloudWatch console dashboards

Enable Amazon CloudWatch Container Insights for the existing EKS cluster and use the CloudWatch console dashboards is the correct choice because Container Insights auto discovers EKS resources and delivers curated dashboards that show CPU and memory usage across namespaces services and pods in a single centralized view without replatforming.

CloudWatch Container Insights collects performance metrics and aggregates them by EKS entity so you can drill down from cluster to namespace to service to pod. It also ingests logs and links metrics to logs in the CloudWatch console which meets the requirement for consolidated metrics and application logs and it provides ready made dashboards so you do not need to build and wire up exporters and dashboards yourself.

Configure AWS X-Ray and store traces in Amazon OpenSearch Service is focused on distributed tracing and latency analysis rather than CPU and memory utilization and it does not provide the EKS resource level dashboards or a unified metrics and logs view required here.

Use Amazon Managed Service for Prometheus integrated with Amazon Managed Grafana can provide powerful metrics dashboards but it does not natively aggregate application logs and it requires deploying exporters and building panels which is more setup than this use case calls for.

Deploy the Amazon CloudWatch agent as a DaemonSet and view metrics and logs in the CloudWatch console can collect metrics and logs but it lacks the curated EKS entity metrics and out of the box dashboards that CloudWatch Container Insights provides and Container Insights adds EKS aware aggregation on top of the agent.

Scan for phrases like namespaces services and pods when the question asks for EKS resource level CPU and memory reporting and choose CloudWatch Container Insights for an out of the box centralized dashboard and logs experience.
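
One low effort way to turn on Container Insights for an existing cluster is the CloudWatch Observability EKS add-on, sketched below with boto3. The cluster name and service account role ARN are placeholders, and the add-on requires a reasonably current EKS version.

```python
import boto3

eks = boto3.client("eks")

# The add-on deploys the CloudWatch agent and Fluent Bit into the cluster
# and starts publishing Container Insights metrics and logs.
eks.create_addon(
    clusterName="pulsepath-prod",  # placeholder cluster name
    addonName="amazon-cloudwatch-observability",
    serviceAccountRoleArn="arn:aws:iam::111122223333:role/CloudWatchObservabilityRole",  # placeholder
)
```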

A prominent South American motorsports federation has granted exclusive live streaming rights for its races in Canada to an Austin-based media platform. The agreement mandates that only viewers physically located in Canada can watch the live broadcasts, and access from any other country must be denied. How should the platform enforce these location-based restrictions using AWS services? (Choose 2)

  • ✓ A. Turn on CloudFront geo restriction to whitelist Canada and block all other countries

  • ✓ C. Configure Amazon Route 53 geolocation routing to resolve Canadian DNS queries to the streaming endpoints and return a denial host for other countries

Turn on CloudFront geo restriction to whitelist Canada and block all other countries and Configure Amazon Route 53 geolocation routing to resolve Canadian DNS queries to the streaming endpoints and return a denial host for other countries are the correct choices to enforce Canada only access for the live streams.

Turn on CloudFront geo restriction to whitelist Canada and block all other countries enforces country based blocking at the CDN edge so requests originating outside Canada are denied before they reach your origin and this reduces load and exposure. Configure Amazon Route 53 geolocation routing to resolve Canadian DNS queries to the streaming endpoints and return a denial host for other countries lets you resolve DNS queries from Canada to the live streaming endpoints while resolving queries from other countries to a static denial or informational endpoint which prevents non Canadian clients from finding the stream.

Use Amazon Route 53 latency-based routing to send viewers to the lowest-latency endpoint is focused on delivering the lowest latency and it does not provide country level blocking so it cannot enforce a Canada only policy.

Secure streams with Amazon CloudFront signed URLs to control who can watch controls user level access and expirations and it does not inherently block by country so it cannot replace geo restrictions for location enforcement.

Apply Amazon Route 53 weighted routing to split traffic across endpoints distributes traffic by weight and does not perform geographic filtering so it is not suitable for denying access from non Canadian locations.

Combine CloudFront geo restrictions at the edge with Route 53 geolocation routing at DNS for defense in depth when enforcing country based access.
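
The Route 53 side of that pattern can look like the boto3 sketch below, which answers Canadian queries with the streaming endpoint and everything else with a denial host through the default geolocation record. The hosted zone ID, record names, and targets are placeholders.

```python
import boto3

route53 = boto3.client("route53")

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "live.example.com",
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": "canada-viewers",
            "GeoLocation": {"CountryCode": "CA"},  # Canadian resolvers
            "ResourceRecords": [{"Value": "stream.example.com"}],
        },
    },
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "live.example.com",
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": "everyone-else",
            "GeoLocation": {"CountryCode": "*"},  # default location for all other countries
            "ResourceRecords": [{"Value": "denied.example.com"}],
        },
    },
]

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone
    ChangeBatch={"Comment": "Canada-only streaming", "Changes": changes},
)
```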

A healthcare analytics firm operates a patient risk scoring application in two private subnets behind an Application Load Balancer within a VPC. The VPC has both a NAT gateway and an internet gateway. The application processes sensitive records and then writes summaries to an Amazon S3 bucket used for internal reporting. The organization requires all traffic to remain on the AWS private network without traversing the public internet, and leadership wants the lowest-cost compliant approach. What is the most cost-effective solution to meet these requirements?

  • ✓ B. Provision a gateway VPC endpoint for Amazon S3 and update the private subnet route tables to route S3 prefixes to the endpoint

Provision a gateway VPC endpoint for Amazon S3 and update the private subnet route tables to route S3 prefixes to the endpoint is correct because a gateway endpoint provides private connectivity from the VPC to S3 over the AWS network and it does not incur hourly endpoint charges which satisfies the requirement to keep traffic off the public internet while minimizing cost.

Provision a gateway VPC endpoint for Amazon S3 and update the private subnet route tables to route S3 prefixes to the endpoint works by adding routes for the S3 prefix list to the private subnet route tables so that S3 traffic stays on the AWS network instead of going out via the internet gateway or NAT gateway. The gateway endpoint option for S3 is free of per-hour and per-GB endpoint fees which makes it the most cost effective choice for high volume internal traffic to S3.

Create an S3 interface VPC endpoint and attach a security group that allows the application to connect to S3 privately meets the private connectivity requirement but it is not the lowest cost option because interface endpoints for S3 incur hourly and data processing charges. Use interface endpoints when you need security group level control or PrivateLink features and accept the additional cost.

Route S3 traffic through the existing NAT gateway by updating routes and tightening security group and network ACL rules is not compliant because routing S3 access through a NAT gateway can route traffic to public S3 endpoints and that allows traffic to traverse the public internet. This approach also increases cost due to NAT gateway hourly and per gigabyte charges which contradicts the lowest cost requirement.

Enable S3 Transfer Acceleration and restrict access with a bucket policy that allows only trusted IP ranges does not keep traffic fully on the AWS private network because Transfer Acceleration uses public edge locations and the public internet. It also adds extra cost and does not meet the compliance goal of avoiding public internet egress.

Remember that gateway VPC endpoints are the preferred low cost option for Amazon S3 and DynamoDB when the question requires private connectivity and minimal cost. Use interface endpoints when you need security group controls or PrivateLink support for other services.
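
A minimal boto3 sketch of the gateway endpoint is shown below. The region, VPC ID, and route table IDs are placeholders for the private subnet route tables in the scenario.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint adds routes for the S3 prefix list to the listed
# route tables, so S3 traffic never leaves the AWS network and there are
# no hourly endpoint charges.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0aaa1111bbb22222c", "rtb-0ddd3333eee44444f"],  # placeholders
)
```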

A digital media startup, NimbusPlay, stores user profiles, session events, clickstream data, and viewed content in Amazon DynamoDB. Some workloads must sustain up to 7 million read requests per second with consistently low single-digit millisecond latency and high availability. To absorb heavy read bursts, the team wants to introduce a dedicated cache in front of DynamoDB. Which AWS services should be used as the caching layer for this requirement? (Choose 2)

  • ✓ B. Amazon DynamoDB Accelerator (DAX)

  • ✓ D. Amazon ElastiCache

The correct choices are Amazon DynamoDB Accelerator (DAX) and Amazon ElastiCache because they provide the ultra low latency and high read throughput needed in front of DynamoDB to absorb heavy read bursts.

Amazon DynamoDB Accelerator (DAX) is purpose built for DynamoDB and delivers microsecond read latency with seamless integration and managed caching. DAX supports read through and write through patterns and reduces the operational complexity of cache invalidation for DynamoDB workloads while scaling to millions of requests per second.

Amazon ElastiCache provides Redis or Memcached in memory caching that can be used as a read through or look aside cache to offload read traffic from DynamoDB and smooth spikes. ElastiCache is a general purpose, highly performant cache that is useful when you need flexible data structures or caching across multiple services.

Amazon OpenSearch Service is focused on search and analytics and it is not designed to act as a low latency coherent cache directly in front of DynamoDB so it is not the right fit for microsecond operational reads.

Amazon Relational Database Service (Amazon RDS) is a transactional relational database and it is not intended to serve as an in memory caching tier for DynamoDB so it will not meet the microsecond latency goal.

Amazon Redshift is a columnar data warehouse optimized for analytical and batch queries and it is not suitable as an application level cache for high throughput operational reads.

When you need microsecond reads and millions of requests per second choose Amazon DynamoDB Accelerator (DAX) for tight DynamoDB integration and choose Amazon ElastiCache when you need a general purpose in memory cache across services.
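
If the team picks DAX, the application typically swaps the standard DynamoDB client for the DAX client and keeps the same table calls, as in this rough sketch using the amazon-dax-client package. The cluster endpoint, table name, and key are placeholders, and constructor arguments can vary between client versions.

```python
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

# Point the resource at the DAX cluster endpoint instead of DynamoDB;
# repeated reads are then served from the cache.
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://nimbusplay-cache.abc123.dax-clusters.us-east-1.amazonaws.com"  # placeholder
)

table = dax.Table("UserProfiles")  # placeholder table name
response = table.get_item(Key={"user_id": "u-12345"})  # placeholder key
print(response.get("Item"))
```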

A media startup operates a stateless web service on nine Amazon EC2 instances managed by an Auto Scaling group in a single Availability Zone, and traffic is routed through an Application Load Balancer. The company must withstand an Availability Zone outage without changing the application code. What architecture change should the solutions architect implement to achieve high availability?

  • ✓ B. Reconfigure the Auto Scaling group to span three Availability Zones and maintain three instances per AZ

Reconfigure the Auto Scaling group to span three Availability Zones and maintain three instances per AZ is the correct choice because distributing the nine instances across three AZs provides zonal fault tolerance and the existing Application Load Balancer will continue to route traffic only to healthy targets.

Enabling additional Availability Zones in the Auto Scaling group gives resiliency without changing the application code. The Auto Scaling group replaces failed instances automatically and the ALB directs traffic away from unhealthy targets so the service remains available when a single AZ fails.

Create an Amazon CloudFront distribution with the ALB as a custom origin is not sufficient because CloudFront speeds up content delivery and provides edge caching but it does not add compute capacity or automatic instance replacement across Availability Zones.

Create a new Auto Scaling group in a second Region and use Amazon Route 53 to split traffic between Regions is unnecessary for surviving an AZ outage and introduces multi-Region complexity. It requires data replication and failover planning and is typically used for disaster recovery rather than simple regional high availability.

Create a new launch template to quickly add instances in another Region during an incident is reactive and does not provide automatic resilience. A launch template by itself does not create multi-AZ redundancy or automatically bring capacity online during an AZ outage.

For regional high availability choose multi-AZ Auto Scaling with an ALB so instance replacement and traffic routing happen automatically without code changes.
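
The change itself is a small update to the Auto Scaling group, sketched below with boto3. The group name and subnet IDs are placeholders for subnets in three different Availability Zones, and the ALB must also have subnets in those zones.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread the group across subnets in three AZs and keep nine instances
# total, which works out to three per AZ under even balancing.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-service-asg",  # placeholder group name
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",  # placeholder subnets
    MinSize=9,
    DesiredCapacity=9,
    MaxSize=18,
)
```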

A regional logistics company is adopting a hybrid model that keeps core systems in its primary colocation site while using AWS for object storage and analytics, moving about 30 TB of data to Amazon S3 each month. The network team requires a dedicated private link to AWS for steady performance, but they also want to maintain uptime by failing over to an encrypted path over the public internet during an outage. Which combination of connectivity choices should they implement to satisfy these needs? (Choose 2)

  • ✓ B. Use AWS Direct Connect as the primary connection

  • ✓ E. Use AWS Site-to-Site VPN as a backup connection

The correct choices are Use AWS Direct Connect as the primary connection and Use AWS Site-to-Site VPN as a backup connection. These two options provide a dedicated private circuit for steady, predictable performance and an encrypted internet path that preserves uptime when the primary link fails.

Use AWS Direct Connect as the primary connection gives a private, dedicated link with consistent bandwidth and lower latency than internet paths. Pairing it with Use AWS Site-to-Site VPN as a backup connection lets you configure an encrypted tunnel over the public internet that can automatically take over when the Direct Connect circuit is unavailable while still protecting data in transit.

Use Egress-Only Internet Gateway as a backup connection is incorrect because that gateway only supports outbound IPv6 traffic from a VPC and does not provide a private or VPN path to an on-premises data center.

Use AWS Global Accelerator as a backup connection is incorrect because Global Accelerator speeds access to public endpoints and edge locations and does not create private connectivity between a colocation site and a VPC.

Use AWS Site-to-Site VPN as the primary connection is not ideal here because a VPN over the internet is subject to variable latency and throughput and cannot match the predictable performance of a dedicated Direct Connect circuit.

For hybrid scenarios remember to choose Direct Connect for stable private throughput and add a Site-to-Site VPN as an encrypted internet failover.

A travel booking marketplace runs on an Amazon RDS for PostgreSQL database. Several read-heavy reports join many tables and have become slow and costly at peak load, so the team wants to cache the join results. The cache engine must be able to leverage multiple CPU threads concurrently. Which AWS service should the solutions architect recommend?

  • ✓ C. Amazon ElastiCache for Memcached

Amazon ElastiCache for Memcached is the correct choice because it supports multi threaded execution across CPU cores, scales out horizontally, and provides a simple distributed key value cache that is well suited to storing the results of read heavy join queries from an Amazon RDS for PostgreSQL database.

Amazon ElastiCache for Memcached can run multiple worker threads per node so the cache can utilize several CPU cores concurrently, it shards data across nodes for scale, and it offers low latency for high throughput read workloads where the application stores precomputed join results as values keyed by a query signature.

Amazon ElastiCache for Redis is feature rich and offers persistence, advanced data structures, and pub sub, but the primary command execution loop is largely single threaded so it will not make full use of multiple CPU cores for parallel command execution within a shard.

Amazon DynamoDB Accelerator (DAX) is purpose built for DynamoDB and cannot be used to cache queries from RDS for PostgreSQL so it is not applicable to this scenario.

AWS Global Accelerator improves global network performance and availability for client traffic but it is not a caching layer and does not store application query results so it does not meet the caching requirement.

On the exam look for explicit requirements such as multi threaded CPU utilization and a simple key value cache when choosing a caching service. If the requirement is throughput and parallel CPU usage choose a cache that supports multiple worker threads per node.
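
A common way to use Memcached here is a look-aside cache keyed by the report or query signature, roughly as sketched below with the pymemcache library. The cache endpoint, key format, and run_join_query helper are placeholder assumptions.

```python
import json
from pymemcache.client.base import Client  # pip install pymemcache

cache = Client(("reports-cache.example.internal", 11211))  # placeholder node endpoint

def get_report(report_id, run_join_query):
    """Return cached join results, falling back to the database on a miss."""
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # serve from Memcached
    rows = run_join_query(report_id)         # expensive join against RDS for PostgreSQL
    cache.set(key, json.dumps(rows), expire=300)  # keep the result for five minutes
    return rows
```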

NovaStream, a global video training provider, runs its web application on AWS. The service uses Amazon EC2 instances in private subnets behind an Application Load Balancer. New distribution agreements require that traffic from several specific countries be blocked. What is the simplest way to implement this requirement?

  • ✓ C. Put Amazon CloudFront in front of the ALB and enable geo restriction to deny the listed countries

The correct choice is Put Amazon CloudFront in front of the ALB and enable geo restriction to deny the listed countries.

Put Amazon CloudFront in front of the ALB and enable geo restriction to deny the listed countries is the simplest solution because CloudFront enforces country based allow and deny controls at global edge locations and requires only a few configuration steps. This approach blocks unwanted countries before traffic reaches the ALB or your EC2 instances and it can also reduce load on the origin through caching.

Associate an AWS WAF web ACL with the ALB and add a geo match rule to block the specified countries can be used to block by country but it requires provisioning a Web ACL and writing rules which adds complexity and cost when simple edge geo restrictions suffice.

Edit the security group on the ALB to reject connections from those countries is not feasible because security groups do not support geography based rules and they operate as stateful allow lists rather than explicit deny controls.

Use VPC network ACLs to block the public IP ranges for those countries is impractical because it would require maintaining very large and frequently changing lists of IP ranges and network ACLs act at the subnet level which is not efficient for country level blocking.

When a question asks for the easiest way to block traffic by country think CloudFront geo restriction first and use WAF only when you need custom layer 7 rules.
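
Geo restriction is part of the distribution configuration, so an update reads the current config and writes it back with the ETag, roughly as in the boto3 sketch below. The distribution ID and country codes are placeholders.

```python
import boto3

cloudfront = boto3.client("cloudfront")
distribution_id = "E1EXAMPLE12345"  # placeholder distribution ID

current = cloudfront.get_distribution_config(Id=distribution_id)
config = current["DistributionConfig"]

# Deny the countries named in the distribution agreements at the edge.
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",
        "Quantity": 2,
        "Items": ["XX", "YY"],  # replace with the ISO country codes to block
    }
}

cloudfront.update_distribution(
    Id=distribution_id,
    IfMatch=current["ETag"],
    DistributionConfig=config,
)
```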

A media analytics startup runs a batch data transformation job that usually finishes in about 75 minutes. The pipeline uses checkpoints so it can pause and resume, and it can tolerate compute interruptions without losing progress. Which approach offers the lowest cost for running these jobs?

  • ✓ C. Amazon EC2 Spot Instances

The most cost efficient choice is Amazon EC2 Spot Instances. They let you consume spare EC2 capacity at steep discounts and the pipeline can tolerate interruptions and resume from checkpoints so occasional preemptions do not jeopardize completion.

Amazon EC2 Spot Instances provide deep discounts compared with standard pricing and they are ideal when jobs are restartable and time flexible. Because your job finishes in about 75 minutes and can resume from checkpoints a Spot based approach reduces total compute cost while still allowing the job to complete reliably after interruptions.

AWS Lambda is unsuitable because each invocation has a 15 minute maximum runtime and a 75 minute batch cannot finish in a single invocation. Breaking the work into many short functions increases orchestration complexity and often raises cost versus a long running instance for CPU bound batch jobs.

Amazon EC2 On-Demand Instances will run the job without interruption but you pay the full on demand rate for the entire runtime. That makes them more expensive than using spare capacity when the workload can tolerate preemption.

Amazon EC2 Reserved Instances lower cost for steady predictable usage but they require long term commitments and they do not align well with intermittent batch jobs that benefit most from spot pricing.

When a batch job is restartable and interruption tolerant choose Spot instances for lowest cost and remember that AWS Lambda is limited to 15 minute executions.
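
Requesting Spot capacity is a small change to the launch call, as in the hedged boto3 sketch below. The AMI, instance type, and interruption behavior are placeholders, and the job's checkpoints handle any interruption.

```python
import boto3

ec2 = boto3.client("ec2")

# Run the checkpointing batch worker on Spot capacity; if the instance is
# reclaimed, the next run resumes from the last checkpoint.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI with the job baked in
    InstanceType="c6i.xlarge",        # placeholder instance type
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
```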

A translational medicine institute ingests genomic sequencing datasets for more than twenty hospitals. Each hospital keeps the raw data in its own relational database. The institute must extract the data, run hospital-specific transformation logic, and deliver curated outputs to Amazon S3. Because the data is highly sensitive, it must be protected while ETL runs and when stored in Amazon S3, and each hospital requires the use of its own encryption keys to satisfy compliance. The institute wants the simplest day-to-day operations possible. Which approach will meet these needs with the least operational effort?

  • ✓ B. Create separate AWS Glue ETL jobs per hospital and attach a security configuration that uses that hospital’s AWS KMS key for SSE-KMS during job runs and for objects written to Amazon S3

Create separate AWS Glue ETL jobs per hospital and attach a security configuration that uses that hospital’s AWS KMS key for SSE-KMS during job runs and for objects written to Amazon S3 is correct because it lets you enforce per hospital encryption boundaries while using a serverless service that minimizes day to day operations.

This approach uses Glue security configurations to bind each job to a hospital specific AWS KMS customer managed key so data at rest in S3 and temporary job artifacts are encrypted with the required key and access is auditable. SSE-KMS ensures objects are encrypted with customer managed keys rather than S3 managed keys and you can apply key policies and monitor key usage through CloudTrail which helps satisfy compliance.
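
A rough sketch of creating one such security configuration with boto3 is shown below; the hospital name and key ARN are placeholders, and the configuration is then referenced by name when that hospital's Glue job is created.

```python
import boto3

glue = boto3.client("glue")

# One security configuration per hospital, bound to that hospital's CMK.
glue.create_security_configuration(
    Name="hospital-a-sse-kms",  # placeholder name
    EncryptionConfiguration={
        "S3Encryption": [
            {
                "S3EncryptionMode": "SSE-KMS",
                "KmsKeyArn": "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE-A",
            }
        ],
        "CloudWatchEncryption": {
            "CloudWatchEncryptionMode": "SSE-KMS",
            "KmsKeyArn": "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE-A",
        },
        "JobBookmarksEncryption": {
            "JobBookmarksEncryptionMode": "CSE-KMS",
            "KmsKeyArn": "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE-A",
        },
    },
)
# The hospital's ETL job then references it, for example
# glue.create_job(..., SecurityConfiguration="hospital-a-sse-kms").
```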

Use AWS Glue to run a single multi-tenant ETL job for all hospitals, tag records by hospital, and select a hospital’s AWS KMS key dynamically to write results to Amazon S3 with SSE-KMS is less desirable because a single multi tenant job requires complex branching and dynamic key selection and that increases the risk of misrouting data or misconfiguring encryption compared with per hospital jobs bound to specific keys.

Build containerized ETL with AWS Batch on Fargate, encrypt task storage with KMS and write outputs to Amazon S3 using SSE-S3 adds significant operational overhead since you must build, maintain, and scale the containers and their orchestration. In addition SSE-S3 does not meet the requirement for customer managed per hospital keys because it uses S3 managed encryption keys.

Operate one shared Amazon EMR cluster for all hospitals, secure traffic with TLS, and store outputs in Amazon S3 using SSE-S3 fails the per hospital key requirement because TLS only protects data in transit and SSE-S3 uses S3 managed keys so you cannot enforce separate customer managed keys for each hospital.

Prefer serverless per tenant jobs to reduce operations and attach a Glue security configuration using the hospital CMK for SSE-KMS to enforce per hospital encryption.

A global logistics provider equips 18,000 cargo pallets with IoT GPS tags that check for significant movement about every 90 seconds and push updated coordinates. The devices currently send updates to a backend running on Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones in one AWS Region. A recent surge in events overwhelmed the application, some location updates were dropped, and there is no ability to replay missed data. The company wants an ingestion design that avoids future data loss during bursts and keeps day-to-day operations lightweight. What should a solutions architect implement?

  • ✓ B. Create an Amazon SQS queue to buffer location messages and have the application poll and process them

The best choice is Create an Amazon SQS queue to buffer location messages and have the application poll and process them. This approach provides a durable buffer that absorbs spikes and prevents the application from dropping updates while keeping the day to day architecture simple.

SQS supports long polling and configurable visibility timeouts to coordinate retries and reduce empty receives. You can attach a dead letter queue to capture messages that repeatedly fail and set message retention so bursts are smoothed over. EC2 consumers can autoscale on queue depth and process messages asynchronously which preserves at least once delivery and keeps operational overhead low.
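
The sketch below shows one way to wire this up with boto3; the queue names, retention period, and redrive threshold are illustrative, and the handler function is hypothetical.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Dead letter queue for messages that repeatedly fail processing.
dlq_url = sqs.create_queue(QueueName="pallet-updates-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main buffer with long polling enabled and a redrive policy attached.
queue_url = sqs.create_queue(
    QueueName="pallet-updates",
    Attributes={
        "ReceiveMessageWaitTimeSeconds": "20",   # long polling
        "MessageRetentionPeriod": "345600",      # keep messages for 4 days
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        ),
    },
)["QueueUrl"]

# Consumer loop on the EC2 workers: poll, process, then delete.
while True:
    for msg in sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    ).get("Messages", []):
        process_location_update(msg["Body"])  # hypothetical idempotent handler
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```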

Amazon Kinesis Data Streams with multiple shards and EC2 consumers offers ordered streaming and replay but requires shard sizing and active throughput management. That adds operational complexity and capacity planning which is heavier than needed for simple burst buffering.

Configure an Amazon Kinesis Data Firehose delivery stream to store updates in Amazon S3 for the application to scan periodically is intended for continuous delivery into data lakes and files and it introduces latency and file based processing. It is not optimal for event by event consumption with immediate retry semantics.

Use an AWS IoT Core rule to publish device updates to an Amazon SNS topic and have the application poll the topic is unsuitable because SNS is a push notification service and it does not provide durable queueing or polling semantics. Backpressure cannot be handled safely and messages may be lost when consumers are overwhelmed.

For bursty device traffic use a managed buffer and scale consumers by queue depth. Use SQS with long polling and a dead letter queue and design consumers to be idempotent.

Blue Harbor Games operates a latency sensitive multiplayer arena on Amazon EC2 instances deployed in a single Availability Zone. Players connect using UDP at Layer 4, and traffic spikes during weekend events while dropping significantly overnight. Leadership requires higher availability and a way to lower costs during off-peak hours. What should the solutions architect implement to meet these goals? (Choose 2)

  • ✓ B. Use an EC2 Auto Scaling group across at least two Availability Zones to scale based on demand

  • ✓ D. Place a Network Load Balancer in front of the EC2 fleet to handle Layer 4 connections

Use an EC2 Auto Scaling group across at least two Availability Zones to scale based on demand and Place a Network Load Balancer in front of the EC2 fleet to handle Layer 4 connections are the correct choices for this scenario.

Use an EC2 Auto Scaling group across at least two Availability Zones to scale based on demand provides resilience against an Availability Zone outage and it automatically adjusts instance count to match player load so costs fall during off peak hours. Place a Network Load Balancer in front of the EC2 fleet to handle Layer 4 connections is appropriate because the Network Load Balancer operates at Layer 4 and it supports UDP so it can forward the game traffic without terminating or inspecting application layer payloads.
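
For reference, a hedged boto3 sketch of the UDP listener side is below; the names, game port, VPC ID, and load balancer ARN are placeholders, and the target group would then be attached to the Auto Scaling group.

```python
import boto3

elbv2 = boto3.client("elbv2")

# UDP target group for the game servers. UDP target groups use TCP or HTTP
# health checks because UDP itself has no handshake to probe.
tg = elbv2.create_target_group(
    Name="arena-udp-tg",            # placeholder name
    Protocol="UDP",
    Port=7777,                      # placeholder game port
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="7777",
)["TargetGroups"][0]

# Forward UDP traffic from the Network Load Balancer to the target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/arena-nlb/EXAMPLE",
    Protocol="UDP",
    Port=7777,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```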

Switch to smaller instance types and increase the number of instances in the same Availability Zone is incorrect because keeping all instances in one Availability Zone does not improve fault tolerance and it will not automatically reduce costs without an automated scaling mechanism.

Front the service with an Application Load Balancer is incorrect because the Application Load Balancer works at Layer 7 and it does not support UDP so it cannot handle the game connections.

AWS Global Accelerator is incorrect as a standalone solution because it can improve routing and global performance but it does not provide multi Availability Zone compute fault tolerance or automatic cost savings from scaling and it introduces additional cost.

Match the balancer to the protocol and pair it with automated scaling. Use Network Load Balancer for UDP game traffic and multi AZ Auto Scaling to improve availability and reduce off peak cost.

A fintech startup runs a batch processing app on a single Amazon EC2 instance that must read and write to a DynamoDB table named SalesLedgerV3 within the same AWS account. What should a solutions architect implement to grant the instance the required permissions in a secure, maintainable way?

  • ✓ B. Create an IAM role for EC2 with least-privilege access to the SalesLedgerV3 table and attach it to the instance using an instance profile

Create an IAM role for EC2 with least-privilege access to the SalesLedgerV3 table and attach it to the instance using an instance profile is correct because it provides temporary credentials to the application running on the instance and it avoids managing static secrets.

The role attached via an instance profile lets the EC2 instance obtain temporary AWS credentials from the instance metadata service and those credentials are rotated automatically by AWS. This approach implements least privilege by scoping permissions to the SalesLedgerV3 table and it is easier to maintain than distributing and rotating long term keys.
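
A sketch of the least privilege policy and the instance profile plumbing is shown below; the account ID, Region, actions, and role name are assumptions for illustration.

```python
import json
import boto3

iam = boto3.client("iam")

# The role can only be assumed by the EC2 service, never by listing instance IDs.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Least privilege: only the actions the batch app needs, only on SalesLedgerV3.
table_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/SalesLedgerV3",
    }],
}

iam.create_role(RoleName="sales-ledger-batch",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName="sales-ledger-batch",
                    PolicyName="sales-ledger-access",
                    PolicyDocument=json.dumps(table_policy))

# The instance profile is the delivery mechanism that attaches the role to EC2.
iam.create_instance_profile(InstanceProfileName="sales-ledger-batch")
iam.add_role_to_instance_profile(InstanceProfileName="sales-ledger-batch",
                                 RoleName="sales-ledger-batch")
```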

Store IAM user access keys in AWS Secrets Manager and have the application retrieve them to call DynamoDB is not ideal because it still depends on long term IAM user credentials and you must manage rotation and access to those secrets. Secrets Manager protects stored secrets but it does not eliminate the operational risk of static credentials on the instance.

Create an IAM user with DynamoDB permissions and save its access keys on the instance file system for the application to load is insecure because embedding static credentials on the host increases the chance of accidental exposure and makes rotation manual and error prone.

Create an IAM role and add the specific EC2 instance ID to the role trust policy so the instance can assume it is incorrect because EC2 roles are granted through instance profiles and the role trust policy uses the EC2 service principal rather than listing instance IDs. The correct delivery mechanism is to attach the role to the instance with an instance profile.

Use IAM roles with instance profiles for EC2 access and apply least privilege. Avoid placing static access keys on instances even if you store them in a secrets store.

NovaMetrics runs a critical Python job that ingests near–real-time data snapshots on a fixed cadence. The job must execute every 12 minutes, finishes in about 90 seconds, needs roughly 1.5 GB of memory, and is CPU bound. The company wants to minimize the cost of running this workload while meeting these requirements; which approach should they choose?

  • ✓ C. Implement the job as an AWS Lambda function with 1536 MB of memory and schedule it to run every 12 minutes with Amazon EventBridge

The most cost efficient approach is Implement the job as an AWS Lambda function with 1536 MB of memory and schedule it to run every 12 minutes with Amazon EventBridge. This choice meets the fixed 12 minute cadence and short 90 second runtime while avoiding the cost of running and managing persistent infrastructure.

Implement the job as an AWS Lambda function with 1536 MB of memory and schedule it to run every 12 minutes with Amazon EventBridge is appropriate because Lambda bills per millisecond and there is no charge when functions are idle, which keeps costs low for intermittent work. Allocating 1536 MB gives proportionally more CPU capacity for a CPU bound job and EventBridge provides a fully managed scheduler so there is no operational overhead to manage scheduling.

Use AWS App2Container to containerize the code and run it as an Amazon ECS task on AWS Fargate with 1 vCPU and 1536 MB of memory is less cost effective because short, frequent runs pay for vCPU and memory while tasks run and container startup overhead can add latency and cost for brief jobs.

Run the workload as a Kubernetes CronJob on Amazon EKS using AWS Fargate adds complexity and costs such as the EKS control plane and cluster management which are unnecessary for a lightweight scheduled job and therefore make it more expensive and operationally heavy.

Use AWS App2Container to containerize the application and run it on an Amazon EC2 instance, using Amazon CloudWatch to stop the instance when the job is idle still requires managing instance lifecycle and you incur instance start and stop delays while potentially paying for instance time when it cannot be perfectly paused between runs.

Favor pay per use serverless compute like AWS Lambda with EventBridge for short periodic jobs to minimize idle costs and operational overhead.

Borealis Media is retiring its on-site file server and needs a fully managed cloud file service. Employees frequently work remotely from different countries, and applications that still run on the corporate network must mount the same shared storage using standard file protocols. Which solution should the company implement to meet these needs?

  • ✓ C. Amazon FSx for Windows File Server with SMB and AWS Client VPN

Amazon FSx for Windows File Server with SMB and AWS Client VPN is the correct choice because it provides a fully managed SMB file system that integrates with Active Directory and supports standard file semantics. Corporate applications can mount the same shared storage and remote employees can connect securely by using AWS Client VPN.

Amazon FSx for Windows File Server with SMB and AWS Client VPN offers high availability and native SMB protocol support which preserves file locks and permissions and enables applications that expect a traditional file server to work without modification. The Client VPN component provides secure, client based connectivity so employees working from different countries can access the same file share as if they were on the corporate network.

AWS DataSync is designed for fast data transfer and migration between on premises storage and AWS and it does not provide a persistent, mountable SMB endpoint for simultaneous access by users and applications.

Amazon WorkDocs is a document collaboration service for end users and it does not expose an SMB file share that line of business applications can mount in the same way as a network file server.

AWS Storage Gateway volume gateway exposes iSCSI block volumes rather than SMB or NFS file shares and it is therefore not suited for widely distributed users who need a managed SMB share with standard file semantics.

Match the protocol to the access pattern and think about how applications mount storage. If you need a managed SMB share for both on premises applications and remote users choose the SMB native service with VPN rather than transfer or block storage tools.

A regional retailer runs a monolithic web portal on a single Amazon EC2 instance. During weekend flash sales, customers report slow responses. Amazon CloudWatch shows CPU utilization hovering around 99% during these peak windows. The team wants to remove the bottleneck and raise availability while keeping costs in check. Which actions should they take? (Choose 2)

  • ✓ B. Place the application behind an Application Load Balancer and use an EC2 Auto Scaling group to scale out

  • ✓ D. Use AWS Compute Optimizer to right-size the instance to a larger class for vertical scaling

Place the application behind an Application Load Balancer and use an EC2 Auto Scaling group to scale out and Use AWS Compute Optimizer to right-size the instance to a larger class for vertical scaling are correct because they together remove the single instance bottleneck and provide cost conscious capacity management.

Place the application behind an Application Load Balancer and use an EC2 Auto Scaling group to scale out enables automatic addition and removal of instances during traffic spikes and distributes requests across healthy instances. This improves both performance during flash sales and overall availability without requiring large always on capacity.
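
As an illustration, a target tracking policy keyed to average CPU keeps the group sized to demand; the group name and the 60 percent target below are assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out when average CPU across the group exceeds the target and scale
# back in when traffic subsides, covering weekend spikes and quiet weekdays.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="retail-portal-asg",   # placeholder group name
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```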

Use AWS Compute Optimizer to right-size the instance to a larger class for vertical scaling provides data driven recommendations based on observed CPU and other metrics so you can adjust baseline instance size enough to reduce sustained CPU saturation while avoiding overprovisioning. Combining right sizing with scaling out gives a balanced and cost effective solution.

Configure an Auto Scaling group with an Application Load Balancer to scale vertically is incorrect because an Auto Scaling group with an ALB performs scale out by adding instances and it does not change an instance type to scale up. Vertical scaling requires changing the instance class.

Use AWS Compute Optimizer to choose an instance type for horizontal scaling is incorrect because Compute Optimizer focuses on right sizing instances and not on adding more instances behind a load balancer. Horizontal scaling is implemented with an Auto Scaling group and a load balancer.

Add Amazon CloudFront in front of the EC2 instance to reduce load is incorrect because a content delivery network can cache static content and reduce origin bandwidth but it will not fix CPU saturation for dynamic origin processing and it does not increase backend redundancy when only a single instance exists.

When one EC2 shows sustained high CPU think scale out with an ALB and Auto Scaling for elasticity and availability. Use Compute Optimizer to right size the baseline instance so you avoid unnecessary cost.

UrbanLift, a ride-hailing startup, runs its platform on AWS using Amazon API Gateway and AWS Lambda with an Amazon RDS for PostgreSQL backend. The Lambda functions currently connect to the database using a stored username and password. The team wants to enhance security at the authentication layer by adopting short-lived, automatically generated credentials instead of static passwords. Which actions should you implement to achieve this? (Choose 2)

  • ✓ B. Use IAM database authentication with Amazon RDS for PostgreSQL

  • ✓ D. Assign an IAM execution role to the Lambda function

The correct choices are Use IAM database authentication with Amazon RDS for PostgreSQL and Assign an IAM execution role to the Lambda function.

Use IAM database authentication with Amazon RDS for PostgreSQL issues short lived authentication tokens that are valid for a limited time so the application does not store static database passwords. Assign an IAM execution role to the Lambda function gives the function the AWS credentials it needs to request those IAM authentication tokens securely using the AWS SDK so the Lambda code can generate and present a token at connection time.
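
A sketch of how the Lambda code might obtain and use a token is below; the endpoint, user, and database names are placeholders, and psycopg2 is just one PostgreSQL client.

```python
import boto3
import psycopg2

# Placeholder connection details for illustration.
HOST = "orders.cluster-abc123.us-east-1.rds.amazonaws.com"
USER = "app_user"

rds = boto3.client("rds")

# The execution role's credentials sign this request. The token is valid for
# about 15 minutes and replaces the static password at connection time.
token = rds.generate_db_auth_token(
    DBHostname=HOST, Port=5432, DBUsername=USER, Region="us-east-1"
)

conn = psycopg2.connect(
    host=HOST,
    port=5432,
    user=USER,
    password=token,      # short lived IAM auth token
    dbname="rides",
    sslmode="require",   # IAM database authentication requires SSL
)
```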

Place the Lambda function in the same VPC as RDS improves network reachability and can reduce latency but it does not change the authentication model and it does not provide short lived credentials.

Configure AWS Secrets Manager rotation and fetch passwords from Lambda provides rotated secrets and reduces long term exposure of static credentials but the application still uses password style authentication rather than per connection short lived IAM tokens so it does not meet the requirement for ephemeral credentials.

Limit the RDS security group inbound rule to the Lambda function security group tightens network access and is a good defense in depth control but it does not alter how clients authenticate to the database and it does not produce short lived authentication tokens.

When a question asks for short-lived credentials prefer IAM DB authentication plus an execution role rather than only changing networking or using password rotation.

A digital publishing startup stores easily reproducible project files in Amazon S3. New uploads are heavily downloaded for the first couple of days, and access drops sharply after about eight days. The team still requires immediate, on-demand retrieval later but wants to minimize storage expenses as much as possible. What approach should they take to reduce costs while meeting these needs?

  • ✓ C. Set an S3 Lifecycle rule to transition objects to S3 One Zone-IA after 30 days

The correct choice is Set an S3 Lifecycle rule to transition objects to S3 One Zone-IA after 30 days. This option reduces storage cost while preserving immediate, on demand retrieval for files that are easy to reproduce and see most downloads in the first week.

S3 One Zone-IA stores objects in a single Availability Zone which lowers storage pricing compared with multi AZ classes and still allows instant access when needed. Waiting 30 days before transitioning respects the S3 Lifecycle requirement that objects stay in S3 Standard for at least 30 days before moving to an infrequent access class, and it aligns with the 30 day minimum storage duration those classes bill after the transition.
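
A minimal lifecycle rule expressing this with boto3 might look like the following; the bucket name and empty prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Move project files to One Zone-IA once they are 30 days old.
s3.put_bucket_lifecycle_configuration(
    Bucket="publishing-project-files",          # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-onezone-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},       # apply to every object
                "Transitions": [
                    {"Days": 30, "StorageClass": "ONEZONE_IA"}
                ],
            }
        ]
    },
)
```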

Set an S3 Lifecycle rule to transition objects to S3 Standard-IA after 30 days is functionally correct but it is more expensive because Standard-IA keeps data redundantly across multiple Availability Zones and that extra resilience is unnecessary for easily reproducible content.

Use S3 Intelligent-Tiering for all objects from day one would handle variable access automatically but it introduces monitoring and per object overhead and it is less cost effective when the access pattern is predictable and a simple lifecycle to a cheaper class is sufficient.

Set an S3 Lifecycle rule to transition objects to S3 One Zone-IA after 10 days is inappropriate because S3 Lifecycle does not transition objects to the infrequent access classes until they have been stored in S3 Standard for at least 30 days, and those classes also bill a 30 day minimum storage duration, so an earlier transition would not deliver the intended savings.

For recreatable files that need immediate access but lower durability, move to One Zone-IA on or after the IA 30 day minimum to maximize savings.

A regional design firm runs a reporting utility on a single Amazon EC2 instance that requires about 300 GiB of block storage. The tool is idle most of the time but has brief usage spikes on weekday mid-mornings and early evenings. Disk I/O varies, with peaks reaching up to 3,000 IOPS. The architect needs the most economical option that still delivers the required performance. Which Amazon EBS volume type should be selected?

  • ✓ C. Amazon EBS General Purpose SSD (gp2)

The correct choice is Amazon EBS General Purpose SSD (gp2). It is the most cost effective option for a 300 GiB volume because gp2 provides three baseline IOPS per GiB which gives about 900 IOPS at that size and it can burst to accommodate short peaks near 3,000 IOPS.

Because Amazon EBS General Purpose SSD (gp2) combines baseline IOPS proportional to volume size and burst credits it handles long idle periods with brief, predictable spikes without the extra expense of provisioning sustained IOPS.
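
For reference, a 300 GiB gp2 volume looks like this in boto3, where the size alone sets the baseline IOPS; the Availability Zone is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# 300 GiB x 3 IOPS per GiB = 900 baseline IOPS, with burst credits that
# allow short peaks of up to 3,000 IOPS.
ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=300,
    VolumeType="gp2",
)
```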

Amazon EBS Provisioned IOPS SSD (io1) can meet or exceed the IOPS requirement but it incurs higher cost because you pay for guaranteed, provisioned performance that is unnecessary for brief, infrequent bursts.

Amazon EBS Cold HDD (sc1) is designed for infrequently accessed, large sequential workloads so it cannot deliver the random IOPS performance the reporting tool needs.

Amazon EBS Throughput Optimized HDD (st1) is optimized for high throughput and streaming large sequential I/O and not for high IOPS bursts so it will not satisfy the 3,000 IOPS peaks.

Remember that gp2 gives three IOPS per GiB baseline plus burst capability so choose it for moderate size volumes with short spikes and reserve io1 for workloads that need sustained, guaranteed IOPS.

A solutions architect at a digital ticketing startup needs to store rolling application logs from its production web platform in Amazon S3 for roughly 90 days. The team cannot predict which log objects will be read or how often they will be retrieved, but they must be available immediately when needed. To keep costs low without adding operational overhead, which S3 storage class should be used?

  • ✓ C. S3 Intelligent-Tiering

S3 Intelligent-Tiering is the correct choice because the logs must remain immediately available and access patterns are unknown so automatic tiering reduces cost without adding operational overhead.

S3 Intelligent-Tiering automatically moves objects between access tiers based on observed access patterns and does not require manual lifecycle rules. This keeps objects ready for instant retrieval while optimizing storage cost for objects that become infrequently accessed.
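
Objects can simply be written into the class at upload time, as in this hedged sketch; the bucket, key, and file name are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# New log objects land directly in Intelligent-Tiering and S3 moves them
# between access tiers automatically based on observed access patterns.
with open("app.log.gz", "rb") as body:
    s3.put_object(
        Bucket="ticketing-app-logs",         # placeholder bucket
        Key="web/2025/06/01/app.log.gz",     # placeholder key
        Body=body,
        StorageClass="INTELLIGENT_TIERING",
    )
```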

S3 Standard-Infrequent Access (S3 Standard-IA) is less suitable because unpredictable frequent reads can incur retrieval fees and minimum storage duration charges which can increase total cost when access cannot be predicted.

S3 Glacier is not appropriate because it requires restore operations and has retrieval delays which prevents immediate access to logs when they are needed.

S3 One Zone-Infrequent Access (S3 One Zone-IA) is not a good fit because it reduces durability by storing data in a single Availability Zone and it still applies retrieval fees which can make costs higher under variable access patterns.

When access patterns are unknown or variable let S3 handle tiering automatically so you avoid manual lifecycle management and keep objects instantly available while minimizing cost.

A retail analytics startup runs an Amazon RDS for MySQL instance to store purchase transactions. The database is already encrypted at rest with an AWS KMS key. The security team now requires every application connection to use TLS and validate the database server certificate. What should the solutions architect do to enable encryption in transit for these connections?

  • ✓ C. Download the current AWS RDS CA root certificate bundle and configure the application clients to use it when connecting

Download the current AWS RDS CA root certificate bundle and configure the application clients to use it when connecting is correct because it enables TLS for client connections and allows each application to validate the RDS server certificate presented by the DB endpoint.

Amazon RDS provisions and rotates the server SSL/TLS certificate for each DB instance and you cannot upload or attach your own server certificate. Clients must trust the RDS certificate authority by using the AWS RDS CA root certificate bundle when establishing an encrypted connection so that the client can verify the server identity and prevent man in the middle attacks.
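
On the client side this is only a connection setting. The sketch below uses mysql-connector-python as one example driver, with placeholder connection details and an assumed local path to the downloaded CA bundle.

```python
import mysql.connector

# Connect over TLS and validate the server certificate against the
# downloaded RDS CA bundle (for example global-bundle.pem).
conn = mysql.connector.connect(
    host="sales.abc123xyz.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="app_user",
    password="example-password",
    database="purchases",
    ssl_ca="/opt/certs/global-bundle.pem",   # assumed local path to the bundle
    ssl_verify_cert=True,
)
```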

Enable encryption in transit in the RDS console and obtain a key from AWS KMS is incorrect because there is no RDS console option that issues TLS server certificates and AWS KMS is used for encryption at rest rather than providing TLS session certificates.

Install a self-signed server certificate on the RDS instance and distribute it to the applications is incorrect because Amazon RDS does not allow installing custom or self signed server certificates on managed DB instances and that approach is not supported for RDS endpoints.

Use AWS Certificate Manager to issue a certificate and associate it with the RDS DB instance is incorrect because ACM certificates cannot be attached to RDS DB instances and RDS manages its own server certificates and rotation process.

When enforcing certificate validation configure each client to use the AWS RDS CA bundle and verify server identity in the client connection settings before deploying to production.

A logistics startup has an IAM group for operations engineers. The following policy is attached to the group:

```json
{
  "Version": "2012-10-17",
  "Id": "EC2SafeTerminatePolicyV2",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "ec2:Region": "eu-central-1" }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "172.16.50.0/25" }
      }
    }
  ]
}
```

Given this configuration, which statement is accurate?

  • ✓ B. Users in the IAM group can terminate an EC2 instance in eu-central-1 only when the user’s source IP is 172.16.50.77

Users in the IAM group can terminate an EC2 instance in eu-central-1 only when the user’s source IP is 172.16.50.77 is correct.

The policy contains an explicit deny that blocks all EC2 actions when the request is not in eu-central-1, so effective permissions can only apply inside that region. The allow statement grants the TerminateInstances action only when the caller originates from the 172.16.50.0/25 network which includes 172.16.50.77. Both conditions must be satisfied and the explicit deny takes precedence over any allow, so termination is allowed only for requests that originate from an address in that range, such as 172.16.50.77, and that target eu-central-1.

Users in the IAM group can terminate an EC2 instance in eu-central-1 when the instance’s private IP is 172.16.50.77 is wrong because the condition checks the caller source IP not the instance private IP.

Users in the IAM group can terminate EC2 instances in eu-central-1 from any IP because the Allow includes ec2:TerminateInstances is wrong because the allow is constrained by the IP condition so it does not permit termination from arbitrary source IPs.

Users in the IAM group can terminate EC2 instances in any region except eu-central-1 when the user’s source IP is 172.16.50.77 is wrong because the explicit deny blocks EC2 actions outside eu-central-1 and a deny always overrides an allow.

Explicit denies always override allows and IP conditions refer to the caller IP not the resource IP. Verify both region and source IP conditions when evaluating IAM policies.

HarborPoint Press operates a 28 TB MongoDB cluster in its data center and needs to move the data to Amazon DynamoDB within 3 weeks. The site has a constrained WAN link, so a direct online transfer would not complete in time. Which approach should the solutions architect choose to complete the migration without relying on the limited internet connection?

  • ✓ D. Use AWS Schema Conversion Tool data extractors to load the data to an AWS Snowball Edge device, then use AWS Database Migration Service to migrate into Amazon DynamoDB

Use AWS Schema Conversion Tool data extractors to load the data to an AWS Snowball Edge device, then use AWS Database Migration Service to migrate into Amazon DynamoDB is the correct choice because it enables offline bulk seeding of the 28 TB dataset and lets a managed replication service complete and synchronize the target in AWS.

Use AWS Schema Conversion Tool data extractors to load the data to an AWS Snowball Edge device, then use AWS Database Migration Service to migrate into Amazon DynamoDB works by using the Schema Conversion Tool extractors to export MongoDB data into files that can be loaded onto a Snowball Edge device for physical transport to AWS. After the device arrives in AWS the data can be ingested into S3 and AWS Database Migration Service can perform the final load into DynamoDB and apply change data capture to catch any updates made during transit.

Set up AWS Direct Connect and use AWS Database Migration Service to migrate to Amazon DynamoDB is unsuitable because provisioning Direct Connect can take longer than the project window and it still relies on the constrained WAN for the full bulk transfer.

Use AWS DataSync to push the dataset to Amazon S3, then apply AWS Database Migration Service to load into Amazon DynamoDB is not appropriate because DataSync transfers over the network and will be limited by the same constrained WAN so it does not provide an offline seeding path for very large data.

Use AWS Database Migration Service to extract and load the data onto an AWS Snowball Edge, then finalize the migration with AWS DMS in AWS is incorrect because AWS DMS does not perform offline extraction directly to a Snowball Edge device and AWS guidance is to use SCT extractors for the device stage while DMS runs in the cloud to finalize and replicate changes.

Seed large datasets offline with Snowball Edge and SCT extractors and then run DMS in AWS to apply changes and complete the migration.

A retail analytics startup keeps its data in Amazon S3, where some objects are accessed daily while others are rarely used. The head of finance noticed a 35% month-over-month spike in S3 charges and needs a fast way to cut costs with as little ongoing management as possible. What is the quickest approach to reduce S3 spending with minimal operational effort?

  • ✓ B. Apply S3 Lifecycle rules to transition objects into the most appropriate storage classes

Apply S3 Lifecycle rules to transition objects into the most appropriate storage classes is the correct choice because it automates moving data to lower cost classes and requires very little ongoing management.

Lifecycle rules allow you to define time based or object attribute based transitions at the bucket or prefix level so that infrequently accessed objects move to lower cost classes such as S3 Standard-IA or S3 Glacier Instant Retrieval and you can also expire objects when they are no longer needed. Using lifecycle policies reduces costs quickly without building and maintaining custom workflows and it operates continuously once configured.

Use AWS Trusted Advisor to automatically move objects into optimal S3 storage classes is wrong because Trusted Advisor provides guidance and checks but it does not perform automated object transitions for you.

Migrate every object to S3 Glacier Instant Retrieval is wrong because moving frequently accessed objects to an archival class can increase retrieval costs and latency and it can raise total cost for objects that are accessed often.

Build a Step Functions and Lambda workflow that evaluates access logs and retiers objects accordingly is wrong because this adds significant development and operational overhead and it takes longer to implement than using the built in lifecycle automation.

When a question highlights quick cost reduction and low operational overhead think S3 Lifecycle transitions to automate tiering instead of custom pipelines or migrating all objects to Glacier.

A fast-growing consumer messaging service stores account records, contact relationships, and interaction counters in Amazon DynamoDB. The company is rolling out to additional continents and must deliver consistently low read and write latency and high availability for a worldwide audience. The workload is spiky and hard to predict, and the system must keep operating during a Regional disruption while staying cost efficient. What is the most cost-effective approach to meet these needs?

  • ✓ C. Enable DynamoDB global tables in multiple Regions and use on-demand capacity to absorb unpredictable traffic

Enable DynamoDB global tables in multiple Regions and use on-demand capacity to absorb unpredictable traffic is the correct choice because it provides active multi Region replication and local reads and writes with low latency while scaling automatically to match unpredictable demand.

Global tables deliver fully active multi Region replication so users get local read and write performance in each Region and the system continues operating during a Regional disruption. On-demand capacity charges per request and removes the need to predict traffic so it is cost effective for spiky and hard to forecast workloads and it reduces operational overhead from capacity planning.
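
A condensed boto3 sketch is below; the table schema is illustrative and the replica Region is a placeholder. Additional replicas are added the same way once the table returns to ACTIVE.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# On-demand capacity means no read or write capacity planning.
dynamodb.create_table(
    TableName="UserGraph",  # placeholder table
    BillingMode="PAY_PER_REQUEST",
    AttributeDefinitions=[{"AttributeName": "userId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "userId", "KeyType": "HASH"}],
)
dynamodb.get_waiter("table_exists").wait(TableName="UserGraph")

# Add a replica Region to turn the table into a global table.
dynamodb.update_table(
    TableName="UserGraph",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],  # placeholder Region
)
```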

Persist user state in Amazon S3 with Cross-Region Replication and invoke AWS Lambda to update data in real time is unsuitable because S3 is object storage and it does not provide millisecond transactional access patterns for user graphs and interactions and using Lambda to emulate database semantics adds latency and complexity.

Configure DynamoDB global tables across multiple Regions using provisioned capacity with Auto Scaling for traffic bursts offers the right active multi Region topology but provisioned capacity can require careful tuning and it can be more expensive or lead to throttling when traffic is highly unpredictable compared to on demand.

Run a single-Region DynamoDB table in provisioned capacity mode and replicate asynchronously to a secondary Region with DynamoDB Streams for disaster recovery increases operational complexity and it does not provide active multi Region writes so users far from the primary Region face higher latency and failover is more complex.

Focus on low latency worldwide and unpredictable traffic and prefer global tables with on demand capacity to avoid capacity planning and reduce the risk of throttling.

A data analytics startup named Panorama Metrics runs a self-managed database on Amazon EC2 with synchronous commit replication across a pair of Availability Zones. The database must be reachable from the internet, so both EC2 instances are in public subnets, and the replication stream uses each node’s public IPv4 address. You want to reduce the ongoing cost of the replication traffic while preserving the high-availability design. What should you change?

  • ✓ C. Use the instances’ private IP addresses for the replication

Use the instances’ private IP addresses for the replication is correct because moving the replication stream to private addresses keeps traffic on the AWS network and avoids higher internet egress charges while maintaining cross Availability Zone high availability.

Using private IP addresses keeps replication traffic inside the VPC and on the AWS backbone so it is billed at intra region data transfer rates rather than public internet egress rates. This change preserves the existing design of two instances in different Availability Zones and lets the database remain reachable from the internet for client traffic while the replication uses the private network.

Assign Elastic IP addresses to both instances and replicate over those is incorrect because Elastic IPs are public addresses and replication over them still traverses the internet path and continues to incur higher egress costs.

AWS PrivateLink is incorrect because PrivateLink provides interface endpoints for accessing services across VPCs or accounts and it is not the appropriate mechanism for direct EC2 to EC2 replication within the same VPC.

AWS Global Accelerator is incorrect because Global Accelerator optimizes client facing traffic via the AWS edge network and it does not reduce east west replication transfer costs inside a VPC.

When you see EC2 to EC2 traffic using public IPs think about switching replication to private IPs so traffic stays on the AWS backbone and you avoid data transfer out charges.

BlueBridge Analytics, a healthcare technology company, is extending its hybrid environment and needs a secure connection between its on-premises data center and an AWS VPC. All traffic between locations must be encrypted at the IP layer and also protected at the application session layer such as TLS. The design must provide fine-grained controls so that only necessary communication is allowed between on premises and AWS and it should scale as usage increases. What should the solutions architect recommend?

  • ✓ C. Configure AWS Site-to-Site VPN between the on-premises edge device and the VPC, and use VPC route tables, security groups, and network ACLs to allow only necessary flows

The correct choice is Configure AWS Site-to-Site VPN between the on-premises edge device and the VPC, and use VPC route tables, security groups, and network ACLs to allow only necessary flows.

The Configure AWS Site-to-Site VPN between the on-premises edge device and the VPC, and use VPC route tables, security groups, and network ACLs to allow only necessary flows option meets both encryption requirements because the Site-to-Site VPN implements IPsec to protect traffic at the network layer and applications can still use TLS to protect the session layer. The VPC route tables together with security groups and network ACLs allow fine grained control over which subnets and ports can communicate so only the necessary flows are permitted. The solution also scales as you add tunnels or use AWS managed VPN or VPN appliances and it preserves the full bidirectional network connectivity that a hybrid environment needs.

Establish a dedicated AWS Direct Connect link to the VPC and control routing with VPC route tables along with security groups and network ACLs is not sufficient because Direct Connect does not encrypt traffic by default and meeting the network layer encryption requirement would require adding MACsec where supported or an IPsec overlay. That extra step means Direct Connect alone does not satisfy the stated requirement.

Use AWS Client VPN to provide individual user tunnels into the VPC and manage permissions with security groups and IAM policies is focused on end user remote access rather than site to site connectivity. Client VPN provides individual user tunnels and does not provide the seamless network routing between an on premises data center and a VPC that a hybrid environment typically requires.

Expose services via AWS PrivateLink endpoints to the corporate network and restrict access using security groups provides private access to specific services but it does not create a full bidirectional network path for general traffic. PrivateLink still depends on an underlying transport such as VPN or Direct Connect to reach on premises so it does not by itself address the requirement for a network layer encrypted site to site connection.

When both network layer and session layer encryption are required remember IPsec for the network layer and TLS for the session layer. Also remember that Direct Connect is not encrypted by default and Client VPN is for individual users.

StreamSpace Learning is building a browser-based chat feature for live courses. The system must maintain persistent bidirectional connections with clients using WebSocket APIs and is expected to handle roughly 40,000 concurrent users. The chat microservices run on containers in an Amazon EKS cluster within private subnets of a VPC and must remain non-public. What is the best way to enable secure access from the WebSocket endpoint to these backend services?

  • ✓ C. Build an Amazon API Gateway WebSocket API and integrate privately via VPC link to a load balancer that fronts the EKS workloads

The best choice is Build an Amazon API Gateway WebSocket API and integrate privately via VPC link to a load balancer that fronts the EKS workloads. This option delivers a managed WebSocket endpoint for persistent bidirectional connections while keeping the chat microservices private inside the VPC.

Build an Amazon API Gateway WebSocket API and integrate privately via VPC link to a load balancer that fronts the EKS workloads is correct because WebSocket APIs provide full duplex, long lived connections required for real time chat and API Gateway can reach private targets by using a VPC link that points to an internal Network Load Balancer. The internal load balancer fronts the EKS pods so traffic never traverses the public internet and the NLB scales and performs well for tens of thousands of concurrent connections.

Build an Amazon API Gateway REST API and use a private VPC link to reach the EKS service is unsuitable because REST does not provide persistent full duplex connections and would not meet the real time bidirectional requirement for chat.

Amazon AppSync can support GraphQL subscriptions that behave like WebSockets, but it is a managed GraphQL service and it does not provide the API Gateway WebSocket plus VPC link pattern needed to privately front the EKS workloads, so it does not match the required architecture.

Build an Amazon API Gateway WebSocket API and allow access by adding security group rules that permit API Gateway to reach the EKS endpoints is not viable because API Gateway is a managed service external to your VPC and cannot be granted access solely by adding security group rules. A VPC link to an internal load balancer is required for private integrations.

When a requirement calls for real time bidirectional communication choose WebSocket APIs and when API Gateway must call private VPC services use a VPC link to an internal load balancer rather than relying on security group changes.

A digital publishing startup keeps about 40 TB of video and audio assets in a multi-account data lake governed by AWS Lake Formation. The growth marketing group needs secure, fine-grained, cross-account access to just the datasets relevant to their audience segmentation and campaign analysis. What is the simplest way to enable this access while minimizing ongoing administration?

  • ✓ C. Configure Lake Formation tag-based access control to grant cross-account permissions on the required datasets to the marketing analytics accounts

Configure Lake Formation tag-based access control to grant cross-account permissions on the required datasets to the marketing analytics accounts is the correct choice because it enables fine grained, cross account access without copying data and it reduces ongoing administration.

With Configure Lake Formation tag-based access control to grant cross-account permissions on the required datasets to the marketing analytics accounts you assign LF tags to databases, tables, and locations and grant permissions by tag values. This allows the marketing analytics accounts to get table, column, and location level access across accounts and it scales as producers add new resources because tag based policies apply automatically.
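
A hedged sketch of a tag based cross account grant is below; the tag key, tag value, and account ID are placeholders.

```python
import boto3

lf = boto3.client("lakeformation")

# Grant the marketing analytics account SELECT on every table whose
# LF-Tag "domain" equals "audience-segmentation".
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "111122223333"},  # placeholder account
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [
                {"TagKey": "domain", "TagValues": ["audience-segmentation"]}
            ],
        }
    },
    Permissions=["SELECT", "DESCRIBE"],
)
```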

Replicate the selected datasets into a centralized shared services account and create an IAM role trusted by the marketing accounts is incorrect because replicating the 40 TB of assets creates data duplication and extra pipelines and it increases IAM and operational work. It still does not offer the simple, tag driven governance that Lake Formation provides.

AWS Resource Access Manager to share the underlying S3 buckets with the marketing analytics account is incorrect because RAM shares at the bucket or resource level and does not integrate with Lake Formation tag based permissions. It cannot enforce table or column level controls so governance and auditing are less precise.

Run Lake Formation Grant permissions in each producer account for the specific marketing users is incorrect because per resource grants across multiple producer accounts do not scale and they create high administrative overhead. Manual grants must be updated as datasets proliferate which increases operational burden.

Tag based access is the preferred pattern for multi account Lake Formation scenarios that need fine grained, cross account sharing with minimal administration.

Riverton Geospatial runs a weekly batch analytics job on satellite imagery every Saturday night. The workload consumes large input files, maintains intermediate state, and must execute for at least 3 hours without interruption. The team wants minimal operational effort while ensuring the run starts on time each week. Which approach should they use?

  • ✓ C. Package the job as a container and run it on AWS Fargate with Amazon ECS, scheduling execution using Amazon EventBridge Scheduler

The best choice is Package the job as a container and run it on AWS Fargate with Amazon ECS, scheduling execution using Amazon EventBridge Scheduler. This approach provides a fully managed container runtime and a managed scheduler so the weekly run can start on time with minimal operational effort.

With Package the job as a container and run it on AWS Fargate with Amazon ECS, scheduling execution using Amazon EventBridge Scheduler you avoid managing servers and operating system patching and you can run a container for multiple hours without instance level maintenance. You can keep large inputs in S3 and persist intermediate state to durable storage such as EFS or S3 while Fargate handles the runtime and EventBridge Scheduler ensures reliable scheduled starts.
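
A rough sketch of the schedule with boto3 follows; every ARN, the subnet, and the Saturday 23:00 UTC cron expression are assumptions for illustration.

```python
import boto3

scheduler = boto3.client("scheduler")

# Run the imagery batch task on Fargate every Saturday at 23:00 UTC.
scheduler.create_schedule(
    Name="satellite-batch-weekly",
    ScheduleExpression="cron(0 23 ? * SAT *)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/imagery",    # placeholder cluster
        "RoleArn": "arn:aws:iam::111122223333:role/scheduler-ecs-run",  # placeholder role
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/imagery-batch",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],            # placeholder subnet
                    "AssignPublicIp": "DISABLED",
                }
            },
        },
    },
)
```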

Launch a single Amazon EC2 On-Demand instance to host the job and trigger it with a crontab entry forces you to manage capacity and OS patching and to build resilience. That increases operational burden compared with a serverless container service.

Run the workload on an Amazon EMR cluster built from Spot Instances and orchestrate runs with AWS Step Functions relies on Spot capacity that can be reclaimed unexpectedly and so it is risky for a stateful, multi hour job. Handling interruptions adds complexity even if orchestration is used.

Implement the job as an AWS Lambda function with reserved concurrency and schedule it with Amazon EventBridge cannot meet the three hour uninterrupted requirement because Lambda has a 15 minute maximum execution time and is not designed for long stateful batch processing.

Prefer serverless containers on Fargate for scheduled, long running batch jobs and use EventBridge Scheduler to guarantee timely starts. Avoid Lambda for runs longer than 15 minutes and avoid Spot Instances for stateful multi hour tasks.

A multinational biotech company is replatforming its compliance document vault to AWS. The application will store confidential clinical trial files in Amazon S3. Company policy mandates that every file be encrypted on the client before it is transmitted and stored in S3 to meet strict regulatory controls. Which approach should the company implement?

  • ✓ C. Perform client-side encryption with AWS KMS customer managed keys and upload only encrypted objects to S3

Perform client-side encryption with AWS KMS customer managed keys and upload only encrypted objects to S3 is the correct choice because the company policy requires files to be encrypted on the client before they are transmitted and stored in Amazon S3.

Client side encryption ensures that plaintext never leaves the client environment and only ciphertext is uploaded. Using AWS KMS customer managed keys gives the team control over key lifecycle operations and auditing while allowing clients or an encryption library to encrypt data locally before upload.
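
The sketch below shows one hand-rolled approach using a KMS data key and local AES-GCM rather than the AWS Encryption SDK, which would normally handle this envelope pattern for you. The key ARN, bucket, and file names are placeholders.

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Generate a data key under the customer managed KMS key (placeholder ARN).
resp = kms.generate_data_key(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE", KeySpec="AES_256"
)
plaintext_key, encrypted_key = resp["Plaintext"], resp["CiphertextBlob"]

# Encrypt the file locally so only ciphertext ever reaches S3.
nonce = os.urandom(12)
with open("trial-results.pdf", "rb") as f:
    ciphertext = AESGCM(plaintext_key).encrypt(nonce, f.read(), None)

# Store the wrapped data key alongside the object so it can be decrypted
# later; the plaintext key never leaves this process.
s3.put_object(
    Bucket="clinical-vault-example",       # placeholder bucket
    Key="trial-results.pdf.enc",
    Body=nonce + ciphertext,
    Metadata={"x-enc-data-key": encrypted_key.hex()},
)
```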

Configure server-side encryption with AWS KMS keys (SSE-KMS) and enforce access with custom key policies is incorrect because SSE-KMS encrypts data after the object reaches S3 and so it does not meet the requirement to encrypt files before transmission.

Use server-side encryption with customer-provided keys (SSE-C) is incorrect because SSE-C also relies on S3 performing the server side encryption and it requires passing keys with each request which increases operational risk and does not provide client side protection.

Use AWS CloudHSM to store keys and rely on SSE-S3 for object encryption is incorrect because SSE-S3 is a server side mechanism that uses S3 managed keys and it does not encrypt data before upload. Using CloudHSM for key storage does not change the fact that encryption would occur on the server side.

Focus on whether encryption happens before or after data leaves the client when answering encryption questions. If the requirement is encryption before transmission choose client side encryption and prefer KMS customer managed keys for centralized key control.

Aurora Media is launching a global photo gallery for its news site that will be visited by hundreds of thousands of readers across several regions. Editors upload high-resolution images a single time, and the application must apply on-the-fly transformations such as resizing when viewers retrieve them. The team prefers fully managed services that are highly available, scale automatically, and deliver low latency worldwide with minimal operational effort. What is the simplest architecture that meets these requirements?

  • ✓ C. Store images in Amazon S3 behind Amazon CloudFront and use S3 Object Lambda to apply transformations to objects during GET requests

Store images in Amazon S3 behind Amazon CloudFront and use S3 Object Lambda to apply transformations to objects during GET requests is the simplest managed pattern for dynamic image processing and global delivery.

S3 Object Lambda lets you run Lambda code inline with S3 GET operations to resize or filter images on demand without creating derivative files or managing proxy infrastructure. When you pair S3 Object Lambda with CloudFront you get low latency and automatic scaling with minimal operational effort which matches the team preferences for a fully managed, highly available solution.
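
A sketch of an Object Lambda handler that resizes images on GET is shown below; Pillow and the 600 pixel bound are illustrative assumptions, not part of the managed service itself.

```python
import io
import urllib.request

import boto3
from PIL import Image

s3 = boto3.client("s3")

def handler(event, context):
    ctx = event["getObjectContext"]

    # Fetch the original object through the presigned URL S3 provides.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read()

    # Resize on the fly; 600 pixels is an arbitrary example bound.
    img = Image.open(io.BytesIO(original))
    img.thumbnail((600, 600))
    buf = io.BytesIO()
    img.save(buf, format=img.format or "PNG")

    # Return the transformed bytes to the caller instead of the original.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=buf.getvalue(),
    )
    return {"statusCode": 200}
```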

Store images in Amazon S3 and front them with Amazon CloudFront, using AWS Lambda@Edge to modify origin responses and dynamically resize the images is workable but not the easiest. Lambda@Edge has packaging and runtime constraints and it adds deployment complexity across Regions which makes it less straightforward than using S3 Object Lambda for per object transformations.

Store images in Amazon S3 behind Amazon CloudFront and configure S3 Event Notifications to invoke an AWS Lambda function to transform images when users request them is incorrect because S3 Event Notifications do not fire on GET requests. Notifications trigger on object creation and other lifecycle events which means you cannot perform transformations per retrieval with that pattern.

Store images in Amazon DynamoDB with Global Tables and invoke AWS Lambda to process items as they are read from the table is not appropriate because DynamoDB is not designed to serve or transform binary image data. That approach adds unnecessary complexity and does not provide the caching and global delivery benefits of S3 plus CloudFront.

Remember that S3 Object Lambda performs per request transforms and pair it with CloudFront for global caching and low latency.

An insurance analytics firm is moving a set of legacy virtual machines to AWS. These workloads depend on drivers and tooling tied closely to the guest operating system and cannot be containerized or rebuilt because of vendor restrictions. Each workload currently runs alone on its own VM. The team will host these workloads on Amazon EC2 without changing the design and needs a solution that provides high availability and resiliency to instance or Availability Zone failures. Which approach should they implement?

  • ✓ C. Build an AMI for each workload and run two EC2 instances from it in different Availability Zones behind a Network Load Balancer with health checks

The correct approach is Build an AMI for each workload and run two EC2 instances from it in different Availability Zones behind a Network Load Balancer with health checks. This option gives active redundancy across multiple Availability Zones and automatic failover without modifying the guest operating system or the vendor supplied drivers.

By building an AMI you capture the required guest operating system state and vendor drivers and you can launch identical EC2 instances that run the legacy workloads without modification. Placing at least two instances in different Availability Zones provides fault isolation and a Network Load Balancer with health checks will route traffic away from unhealthy instances and enable automatic failover across AZs. The NLB is appropriate for OS level and TCP based workloads because it operates at the connection level and preserves client addressing.

Containerize the legacy applications and deploy them on Amazon ECS with the Fargate launch type behind an Application Load Balancer across multiple Availability Zones is not viable because the workloads cannot be containerized due to drivers and tooling tied to the guest operating system. Fargate requires containerized workloads and refactoring would violate the vendor restrictions.

Use AWS Elastic Disaster Recovery to replicate the VMs into AWS and rely on a manual failover runbook during outages is focused on disaster recovery and not on active high availability. Relying on manual cutover introduces downtime and does not provide the continuous automatic failover that the requirement demands.

Create AMIs and launch an Auto Scaling group with a minimum and maximum of 1 instance, then place an Application Load Balancer in front for routing and health-check failover still leaves only a single running instance so an Availability Zone failure would take the workload offline. An Application Load Balancer also operates at layer seven and may not suit non HTTP or OS level requirements as well as a Network Load Balancer.

For legacy, non containerized systems choose AMIs and run redundant EC2 instances across multiple Availability Zones behind a load balancer and use health checks to enable automatic failover.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.