Free AWS Solutions Architect Practice Exams

AWS Solutions Architect Exam Questions & Answers

A media startup is piloting a TV streaming catalog on AWS. The architecture uses an Application Load Balancer in front of an Auto Scaling group of Amazon EC2 instances for the web tier, and an Amazon RDS for PostgreSQL database operating in a Single-AZ configuration. Viewers report sluggish responses while browsing and searching the catalog. The catalog data changes only a few times per week, and monitoring shows database CPU spikes during these queries. What should a solutions architect recommend to speed up catalog searches?

  • ❏ A. Migrate the catalog to Amazon DynamoDB and add DynamoDB Accelerator (DAX) to cache catalog lookups

  • ❏ B. Deploy Amazon ElastiCache for Redis and have the application lazily populate cached catalog query results

  • ❏ C. Create Amazon RDS read replicas and route catalog read traffic to the replicas from the frontend

  • ❏ D. Move the catalog to Amazon Aurora Serverless v2 and rely on its scaling and caching to handle frequent queries

A digital publishing startup operates a three-tier content platform on Amazon EC2 behind an Application Load Balancer. The fleet scales with an EC2 Auto Scaling group across multiple Availability Zones and persists data in an Amazon Aurora cluster. During live news breaks, the site sees short, intense bursts of incoming requests and leadership wants greater resilience to these surges without changing application code. What should you implement to better handle these traffic spikes? (Choose 2)

  • ❏ A. Enable AWS WAF on the Application Load Balancer

  • ❏ B. Deploy an Amazon CloudFront distribution with the Application Load Balancer as the origin

  • ❏ C. Place AWS Global Accelerator in front of the Application Load Balancer

  • ❏ D. Add Amazon Aurora Replicas and use the reader endpoint for read traffic

  • ❏ E. Subscribe to AWS Shield Advanced

A healthcare technology startup wants to add an extra layer of protection to a serverless application that exposes REST endpoints with Amazon API Gateway in the us-west-2 and ap-south-1 Regions. The company uses multiple AWS accounts under AWS Organizations and needs to guard against SQL injection and cross-site scripting across all accounts while keeping operations simple. What is the most efficient approach to meet these requirements?

  • ❏ A. Configure AWS WAF separately in both Regions and attach a Regional web ACL to each API Gateway stage

  • ❏ B. Use AWS Firewall Manager to centrally apply AWS WAF policies across the organization and automatically associate Regional web ACLs with the API Gateway stages in both Regions

  • ❏ C. Enable AWS Shield Advanced in both Regions for the APIs and rely on it to block SQL injection and XSS

  • ❏ D. Deploy AWS Network Firewall in the VPCs to filter malicious traffic before it reaches API Gateway

A geospatial analytics startup plans to run a custom distributed compute engine on AWS. The engine delivers the best results when nodes are placed very close together for ultra-low latency and maximum east–west bandwidth, and the team is prioritizing raw performance over cost or failure isolation. What deployment approach should the solutions architect choose to meet these requirements?

  • ❏ A. Use a Spread placement group

  • ❏ B. Use a Cluster placement group

  • ❏ C. Use Spot Instances for all compute nodes

  • ❏ D. Optimize the instance kernel and network stack using EC2 user data

The engineering teams at Aurora Toys maintain eight AWS accounts for development, QA, and staging. Recent invoices show a surge in Amazon EC2 spend from idle oversized instances. The cloud architect needs a centrally managed way to stop anyone in any account from launching xlarge and larger EC2 instance sizes while keeping ongoing management effort minimal. What should the architect do?

  • ❏ A. Attach a resource-based policy to Amazon EC2 in every account that denies launching xlarge and larger instance types

  • ❏ B. Create an AWS Organization for all accounts and apply a service control policy that denies ec2:RunInstances for disallowed instance sizes

  • ❏ C. Create a service-linked role for Amazon EC2 and attach a customer-managed policy that denies launching large instances

  • ❏ D. Deploy an AWS Config rule with Systems Manager automation to stop or terminate any xlarge or larger instances after launch

Which AWS solution runs existing shell or Python scripts in serverless containers on a cron schedule with minimal refactoring and provides a clear path to event driven triggers for jobs that run up to 50 minutes?

  • ❏ A. Amazon EKS with Kubernetes CronJobs

  • ❏ B. AWS Batch with managed EC2 compute environments and Batch schedules

  • ❏ C. AWS Glue Python shell jobs scheduled by EventBridge Scheduler

  • ❏ D. EventBridge Scheduler starting ECS tasks on AWS Fargate

An education technology company has replatformed a legacy monolith into microservices behind a new Application Load Balancer. The team updated an Amazon Route 53 simple record so that portal.learnbright.co now points to the new load balancer rather than the old one. Several minutes later, many users still reach the previous load balancer when visiting the site. Which configuration issue most likely explains this behavior?

  • ❏ A. A weighted routing policy is still directing traffic to the old load balancer

  • ❏ B. The DNS record TTL has not expired yet

  • ❏ C. Route 53 health checks for the new load balancer are failing

  • ❏ D. The Alias record to the load balancer is misconfigured

You are advising a mobile analytics vendor named LumenApps. Several client teams send JSON events from their mobile apps into a single Amazon Kinesis Data Streams data stream. CloudWatch reports frequent ProvisionedThroughputExceededException errors. Your review shows the producers call PutRecord synchronously for each event and push them one by one at about 3,600 records per second. The business wants to mitigate the errors while keeping costs as low as possible. What should you recommend?

  • ❏ A. Increase the number of shards in the stream

  • ❏ B. Enable enhanced fan-out for the stream

  • ❏ C. Batch and aggregate records and publish with PutRecords or the Kinesis Producer Library

  • ❏ D. Add exponential backoff and retries to producer writes

A financial services startup needs to securely link its headquarters network to a new Amazon VPC as quickly as possible. The connection must provide encryption for data in transit over the public internet, and the team prefers a solution they can deploy the same day without waiting for a carrier circuit. What should the solutions architect recommend?

  • ❏ A. AWS Direct Connect

  • ❏ B. AWS Site-to-Site VPN

  • ❏ C. AWS Client VPN

  • ❏ D. AWS DataSync

A media streaming startup operates a mission-critical service in the us-east-1 Region that uses an Amazon Aurora MySQL-Compatible cluster holding approximately 3 TB of data. A solutions architect must design a disaster recovery plan to fail over to us-west-2 that achieves a recovery time objective of 8 minutes and a recovery point objective of 4 minutes. Which approach will meet these objectives?

  • ❏ A. Create a cross-Region Aurora MySQL read replica in us-west-2 and use an Amazon EventBridge rule with AWS Lambda to promote the replica when a failure is detected

  • ❏ B. Build a single multi-Region Aurora MySQL DB cluster spanning us-east-1 and us-west-2 and use Amazon Route 53 health checks to shift traffic on failure

  • ❏ C. Migrate to an Aurora Global Database with us-east-1 as the primary and a secondary DB cluster in us-west-2, and use Amazon EventBridge to trigger an AWS Lambda function to promote us-west-2 during an outage

  • ❏ D. Deploy an Aurora multi-master cluster across us-east-1 and us-west-2 to allow writes in both Regions

NovaRide, a ride-sharing marketplace, is moving its on-premises two-tier application to AWS. The stack includes stateless web and API servers and a Microsoft SQL Server backend. The company wants the database tier to have the highest resiliency in a single Region with automatic failover while keeping day-to-day management minimal. What should the solutions architect implement?

  • ❏ A. Migrate the database to Amazon RDS for SQL Server with a cross-Region read replica

  • ❏ B. Launch Amazon RDS for SQL Server in a Multi-AZ configuration with automatic failover

  • ❏ C. Configure Amazon RDS for SQL Server in a cross-Region Multi-AZ deployment

  • ❏ D. Host Microsoft SQL Server on Amazon EC2 instances spread across multiple Availability Zones

Which changes will make a two tier EC2 application highly available across multiple Availability Zones? (Choose 2)

  • ❏ A. Add private subnets in another AZ and migrate the database to Amazon RDS with Multi-AZ

  • ❏ B. Create an EC2 Auto Scaling group across two or more AZs behind an Application Load Balancer

  • ❏ C. Use Amazon Route 53 failover records to direct traffic to the current stack

  • ❏ D. Create new private subnets in a second AZ but keep the database on a single EC2 instance

  • ❏ E. Place existing web instances behind an ALB but stay in one AZ

A retail analytics firm is moving the static assets for its partner portal from Amazon EC2 to an Amazon S3 bucket and will place an Amazon CloudFront distribution in front. The existing EC2 security group permits access only from about 16 known partner office IP ranges. After the migration, the static content must still be accessible exclusively from those same IP addresses while preventing direct access to the S3 bucket. Which combination of actions should be used to meet these requirements? (Choose 2)

  • ❏ A. Configure CloudFront origin access control and update the S3 bucket policy to allow only requests signed by that OAC

  • ❏ B. Attach an AWS WAF web ACL to the CloudFront distribution that allows only the approved partner IP CIDR ranges using an IP set

  • ❏ C. Associate a security group with the CloudFront distribution that permits traffic from the same partner IP list

  • ❏ D. Create a network ACL with the partner IP ranges and associate it to the CloudFront distribution

  • ❏ E. Use an S3 bucket policy with aws:SourceIp to allow only partner IPs and make the objects public for CloudFront to fetch

A wellness app company on AWS needs to run a weekly rollover on its Amazon RDS for MySQL database. The team wants a serverless scheduler to trigger a Python script that moves the prior seven days of data into an archive so the primary tables remain small, and the task typically finishes in about 7 minutes. What is the most cost efficient and reliable way to set this up?

  • ❏ A. Configure an AWS Glue job with a weekly trigger to run the Python rollover script

  • ❏ B. Launch an Amazon EC2 Spot Instance and use an OS cron to run the script each week

  • ❏ C. Create a weekly Amazon EventBridge schedule that invokes an AWS Lambda function to execute the rollover

  • ❏ D. Use Amazon EventBridge to start an AWS Fargate task once a week to run the rollover in a container

A regional healthcare provider runs critical medical records applications in its on-premises colocation facility. The facility connects to an Amazon VPC through a dedicated 10 Gbps AWS Direct Connect link. The organization is adding several new AWS accounts for different teams, and VPCs in those accounts must reach the on-premises systems quickly and cost-effectively while minimizing ongoing management. Which design change should the solutions architect implement to meet these goals?

  • ❏ A. Create a new AWS Direct Connect connection for each new account and route traffic to the data center

  • ❏ B. Use AWS Cloud WAN to interconnect all VPCs and integrate the existing Direct Connect

  • ❏ C. Attach the existing Direct Connect to an AWS Transit Gateway and share the transit gateway with the new accounts for VPC-to-on-premises routing

  • ❏ D. Establish AWS Site-to-Site VPN tunnels from each new account to the Direct Connect VPC

A robotics manufacturer is moving its telemetry processing stack to AWS. The existing system uses an on-premises message broker that communicates over the MQTT protocol. The application components will run on Amazon EC2, and the team wants a fully managed broker in AWS that supports the same standards so the application code does not need to be modified. Which AWS service should they choose for the broker?

  • ❏ A. Amazon SNS

  • ❏ B. Amazon MQ

  • ❏ C. AWS IoT Core

  • ❏ D. Amazon SQS

An online learning company operates its public-facing API in multiple AWS Regions and requires fixed public IP addresses for client access worldwide. Users in different continents are reporting slow responses when connecting over the internet. What should a solutions architect recommend to minimize internet latency while preserving static IP entry points?

  • ❏ A. Establish AWS Direct Connect circuits in multiple Regions

  • ❏ B. Deploy AWS Global Accelerator and register the application endpoints

  • ❏ C. Configure Amazon Route 53 latency-based routing to send users to the closest Region

  • ❏ D. Create an Amazon CloudFront distribution in front of the application

Which IAM policy JSON allows listing the S3 bucket named foo and reading all objects within that bucket?

  • ❏ A. { "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:ListBucket" ], "Resource":"arn:aws:s3:::foo/*" }, { "Effect":"Allow", "Action":[ "s3:GetObject" ], "Resource":"arn:aws:s3:::foo" } ] }

  • ❏ B. { "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:GetObject" ], "Resource":"arn:aws:s3:::foo/*" }, { "Effect":"Allow", "Action":[ "s3:ListBucket" ], "Resource":"arn:aws:s3:::foo" } ] }

  • ❏ C. { "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:ListBucket", "s3:GetObject" ], "Resource":"arn:aws:s3:::foo" } ] }

  • ❏ D. { "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:ListAllMyBuckets" ], "Resource":"*" }, { "Effect":"Allow", "Action":[ "s3:GetObject" ], "Resource":"arn:aws:s3:::foo/*" } ] }

A media analytics startup, NovaStream Labs, needs a messaging approach for a service that performs rapid request-and-reply exchanges at very high volume. The team wants to minimize build effort and reduce operational costs, and expects to spin up thousands of short-lived reply endpoints each hour. Which Amazon SQS queue capability should they use to best satisfy these goals?

  • ❏ A. Amazon SQS FIFO queues

  • ❏ B. Amazon SQS delay queues

  • ❏ C. Amazon SQS temporary queues

  • ❏ D. Amazon SQS dead-letter queues

A field services company equips its maintenance vans with GPS trackers that report positions. Each device sends an update every 4 minutes only if the vehicle has moved more than 200 meters. The updates currently post to a web tier running on four Amazon EC2 instances distributed across multiple Availability Zones in a single Region. During a seasonal surge, the web tier was saturated, some updates were dropped, and there was no buffer to replay them. The company needs to avoid location data loss and absorb bursts elastically with minimal operational effort. What should a solutions architect recommend?

  • ❏ A. Ingest the GPS stream with Amazon Kinesis Data Streams, trigger AWS Lambda for real-time processing, and store locations in Amazon DynamoDB

  • ❏ B. Use an Amazon SQS queue to buffer incoming GPS updates and scale out workers to poll and process messages

  • ❏ C. Route updates through Amazon Managed Streaming for Apache Kafka (Amazon MSK) and run consumer applications to process the stream

  • ❏ D. Write each update to Amazon S3 and have the web tier periodically list the bucket and process new objects

A global e-commerce startup runs its web application across North America and Asia Pacific. The database layer is a MySQL instance hosted on Amazon EC2 in us-east-2, and Amazon Route 53 latency-based routing sends users to the nearest application stack. Customers in Asia Pacific report slow responses for read-heavy requests, with typical round-trip latency exceeding 180 ms to the database. What change to the database tier will most effectively reduce latency for those users?

  • ❏ A. Create an Amazon RDS for MySQL read replica in ap-southeast-1 and point the Asia Pacific application tier to the replica for queries

  • ❏ B. Move the workload to Amazon Redshift and use AWS Database Migration Service to synchronize, then direct application queries to Redshift

  • ❏ C. Migrate to an Amazon Aurora MySQL compatible global database and have the Asia Pacific application tier use the local reader endpoint

  • ❏ D. Migrate to Amazon RDS for MySQL in ap-southeast-1 and enable Multi-AZ

A global courier firm runs its tracking APIs on Amazon EC2 instances in the AWS ap-southeast-2 Region. The HTTPS endpoints are consumed worldwide by partners and customers to fetch near real-time parcel updates. Users connecting from Africa and South America report high latency and inconsistent response times. You must recommend a cost-efficient way to accelerate access for these distant users without re-platforming or deploying new regional stacks. What should you recommend?

  • ❏ A. Use Amazon Route 53 latency-based routing to direct clients to identical API stacks deployed across multiple Regions

  • ❏ B. Configure AWS Global Accelerator with an HTTPS listener and register the existing ap-southeast-2 EC2 endpoint to improve global access

  • ❏ C. Deploy Amazon API Gateway in multiple Regions and use AWS Lambda as a proxy to forward requests to the EC2 API in ap-southeast-2

  • ❏ D. Place Amazon CloudFront in front of the API and apply the CachingOptimized managed policy to cache responses

A fintech startup operates six isolated VPCs in one AWS Region named dev, qa, sales, hr, analytics, and core to separate environments. Following a reorganization, the team needs private any-to-any connectivity between all VPCs. They created a hub-and-spoke by peering each spoke to the core VPC, but workloads in different spokes still cannot communicate with each other. What is the most scalable and resource-efficient way to achieve full connectivity across these VPCs?

  • ❏ A. Full-mesh VPC peering

  • ❏ B. Interface VPC endpoints with AWS PrivateLink

  • ❏ C. AWS Transit Gateway

  • ❏ D. Internet Gateway

Which scaling method allows an Amazon ECS service running on Fargate to handle an initial 36 hour spike in demand and then automatically reduce costs as demand decreases?

  • ❏ A. Lambda + CloudWatch alarms to change ECS desired count

  • ❏ B. ECS Service Auto Scaling with target tracking

  • ❏ C. Fargate Spot only

  • ❏ D. Application Auto Scaling predictive scaling for ECS

A photo marketplace called VistaMart lets customers upload product images for preview. The worker tier that resizes and watermarks these images runs on instances in an Auto Scaling group. The front end must be decoupled from the processors, and the group should scale automatically in proportion to the rate of incoming uploads. What is the best way to accomplish this?

  • ❏ A. Publish each upload event to Amazon SNS and scale the Auto Scaling group based on the number of SNS notifications

  • ❏ B. Attach a target tracking policy to maintain 65% average CPU utilization across the Auto Scaling group

  • ❏ C. Buffer uploads in Amazon SQS and use a CloudWatch metric for queue backlog per instance to drive Auto Scaling on queue length

  • ❏ D. Send upload events to Amazon Kinesis Data Streams and adjust scaling by increasing shard count as traffic rises

A regional credit union operates a portfolio analytics application behind an Application Load Balancer with instances in an Amazon EC2 Auto Scaling group. On the first business day of each month at 07:30 UTC, employees trigger intensive reconciliation jobs and the application becomes sluggish. Amazon CloudWatch shows average CPU on the instances peaking at 100 percent during that period. What should a solutions architect recommend to ensure capacity is ready before the surge and prevent performance degradation?

  • ❏ A. Enable predictive scaling for the EC2 Auto Scaling group

  • ❏ B. Configure Amazon ElastiCache to offload read queries from the application

  • ❏ C. Create a simple scaling policy that scales on average CPU utilization

  • ❏ D. Configure a scheduled scaling action for the EC2 Auto Scaling group that runs just before the monthly jobs

Pioneer Metrics runs a public web application on Amazon EC2 instances behind an Application Load Balancer. The security team wants to strengthen protection and limit the effect of large bursts of unwanted HTTP requests that could exhaust backend resources. What is the most effective way to reduce the impact of these events at the load balancer so the instances remain available?

  • ❏ A. Enable VPC Flow Logs, store them in Amazon S3, and analyze them with Amazon Athena to detect and block DDoS patterns

  • ❏ B. AWS Shield Advanced

  • ❏ C. Attach an AWS WAF web ACL with rate based rules to the Application Load Balancer

  • ❏ D. Build a custom AWS Lambda function to watch for suspicious traffic and update a network ACL during an attack

A fintech company runs an API on Amazon EC2 instances in three private subnets across two Availability Zones. The API must read from and write to an Amazon DynamoDB table. The security team prohibits egress to the internet, NAT gateways, and use of public DynamoDB endpoints. With the least operational effort and while keeping traffic on the AWS network, how should the architect enable private connectivity from the subnets to DynamoDB?

  • ❏ A. Deploy an interface VPC endpoint for DynamoDB and attach the ENIs in each private subnet

  • ❏ B. Establish a private DynamoDB endpoint accessible through an AWS Site-to-Site VPN

  • ❏ C. Provision a DynamoDB gateway VPC endpoint and update the private subnet route tables

  • ❏ D. Build a software VPN directly between the private subnets and DynamoDB

A media streaming startup, LumenFlix, runs five Linux-based Amazon EC2 instances in a single Availability Zone to operate a clustered workload. The team needs a block volume that every instance can attach to at the same time using Amazon EBS Multi-Attach so the nodes can share the same data. Which Amazon EBS volume type should they choose?

  • ❏ A. General Purpose SSD (gp3) EBS volumes

  • ❏ B. Provisioned IOPS SSD (io1/io2) EBS volumes with Multi-Attach enabled

  • ❏ C. Throughput Optimized HDD (st1) EBS volumes

  • ❏ D. Cold HDD (sc1) EBS volumes

Which AWS service provides a high performance POSIX parallel file system for hot data that delivers very low latency and highly parallel I/O and includes native Amazon S3 integration for fast import and export to a low cost cold tier?

  • ❏ A. Amazon FSx for OpenZFS

  • ❏ B. Amazon FSx for Lustre with S3 integration

  • ❏ C. Amazon EFS

  • ❏ D. Amazon FSx for Windows File Server

During a two-day onboarding lab at a fintech startup, the platform team quizzes junior engineers on Amazon S3 lifecycle rules. Which of the following storage class transitions are not supported when moving existing S3 objects between classes? (Choose 2)

  • ❏ A. Amazon S3 Standard ⇒ Amazon S3 Intelligent-Tiering

  • ❏ B. Amazon S3 One Zone-IA ⇒ Amazon S3 Standard-IA

  • ❏ C. Amazon S3 Standard-IA ⇒ Amazon S3 One Zone-IA

  • ❏ D. Amazon S3 Intelligent-Tiering ⇒ Amazon S3 Standard

  • ❏ E. Amazon S3 Standard-IA ⇒ Amazon S3 Glacier Instant Retrieval

A regional travel booking platform is breaking a tightly coupled monolith into AWS microservices after a 7x surge in daily users over the past quarter caused latency to rise. Some services will generate tasks quickly while others will take longer to process them. What is the best way to connect these microservices so fast producers do not overwhelm slower consumers?

  • ❏ A. Amazon Kinesis Data Streams

  • ❏ B. Amazon Simple Queue Service (Amazon SQS)

  • ❏ C. Amazon MQ

  • ❏ D. Amazon Simple Notification Service (Amazon SNS)

An accounting firm has launched a production three-tier web application on AWS. The web tier runs in public subnets across two Availability Zones in a single VPC. The application and database tiers run in private subnets in the same VPC. The company operates a third-party virtual firewall from AWS Marketplace in a dedicated security VPC, and the appliance exposes an IP interface that can receive IP packets. A solutions architect must ensure every inbound request to the application is inspected by this appliance before it reaches the web tier while minimizing operational effort. What should the architect implement?

  • ❏ A. Create a Network Load Balancer in the application VPC public subnets to steer incoming traffic to the firewall appliance for inspection

  • ❏ B. Deploy a Gateway Load Balancer in the security VPC and create a Gateway Load Balancer endpoint in the application VPC to direct all ingress traffic through the appliance

  • ❏ C. Attach a Transit Gateway between the VPCs and update routes to send incoming traffic through the security VPC before the web tier

  • ❏ D. AWS Network Firewall

A climate analytics startup plans to modernize its data processing stack into a fully serverless design. Historical sensor files are stored in an Amazon S3 bucket, and the team needs to run standard SQL on both the backlog and future ingestions. All objects must be encrypted at rest and automatically replicated to a different AWS Region to meet resilience and governance requirements. Which approach will satisfy these needs with the least ongoing operational effort?

  • ❏ A. Enable Cross-Region Replication on the current S3 bucket, protect objects with AWS KMS multi-Region keys, run ETL in AWS Glue, and query using Amazon Redshift

  • ❏ B. Provision a new S3 bucket with SSE-S3, turn on Cross-Region Replication, load the data, and use Amazon Redshift Spectrum for SQL

  • ❏ C. Create a new S3 bucket using SSE-KMS with multi-Region keys, enable Cross-Region Replication, migrate data, and query with Amazon Athena

  • ❏ D. Enable Cross-Region Replication for the existing S3 bucket, use SSE-S3 for encryption, and query the data using Amazon Athena

A sports streaming startup is about to release a fan-interaction feature that can experience sharp surges in traffic during championship broadcasts and major content drops. The backend runs on an Amazon Aurora PostgreSQL Serverless v2 cluster to absorb variable demand. The design must scale compute and storage performance to keep latency low and avoid bottlenecks when load spikes. The team is choosing a storage configuration and wants an option that grows automatically with traffic, optimizes I/O performance, and remains cost-efficient without manual provisioning or tuning. Which configuration should they choose?

  • ❏ A. Configure the Aurora cluster to use General Purpose SSD (gp2) storage and rely on scaling database capacity to offset I/O limits

  • ❏ B. Enable Aurora I/O-Optimized for the cluster to provide consistently low-latency, high-throughput I/O with flat pricing and no per-I/O fees

  • ❏ C. Use Magnetic (Standard) storage to minimize baseline costs and depend on Aurora autoscaling to handle spikes

  • ❏ D. Select Provisioned IOPS (io1) and adjust IOPS manually ahead of anticipated peak events

A media startup is piloting a TV streaming catalog on AWS. The architecture uses an Application Load Balancer in front of an Auto Scaling group of Amazon EC2 instances for the web tier, and an Amazon RDS for PostgreSQL database operating in a Single-AZ configuration. Viewers report sluggish responses while browsing and searching the catalog. The catalog data changes only a few times per week, and monitoring shows database CPU spikes during these queries. What should a solutions architect recommend to speed up catalog searches?

  • ✓ B. Deploy Amazon ElastiCache for Redis and have the application lazily populate cached catalog query results

Deploy Amazon ElastiCache for Redis and have the application lazily populate cached catalog query results is the correct choice because an in memory cache serves frequent catalog reads with far lower latency and it offloads repeated query work from the single AZ RDS instance.

Deploy Amazon ElastiCache for Redis and have the application lazily populate cached catalog query results reduces database CPU by returning results from memory after a cache miss populates the cache. With catalog data changing only a few times per week a cache aside pattern with sensible TTLs and explicit invalidation on updates will keep cached entries fresh and greatly lower latency during browsing and search.
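
To make the pattern concrete, here is a minimal cache-aside sketch in Python. The endpoints, table, key naming, and TTL are hypothetical, and the redis and psycopg2 client libraries are assumed to be available.

    import json
    import os

    import psycopg2
    import redis

    # Hypothetical ElastiCache and RDS endpoints
    cache = redis.Redis(host="catalog-cache.abc123.use1.cache.amazonaws.com", port=6379)
    db = psycopg2.connect(host="catalog-db.abc123.us-east-1.rds.amazonaws.com",
                          dbname="catalog", user="app", password=os.environ["DB_PASSWORD"])

    def search_catalog(term, ttl_seconds=3600):
        key = f"catalog:search:{term.lower()}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)          # cache hit, no database work at all
        with db.cursor() as cur:               # cache miss, run the real query once
            cur.execute("SELECT id, title FROM titles WHERE title ILIKE %s", (f"%{term}%",))
            rows = cur.fetchall()
        cache.set(key, json.dumps(rows), ex=ttl_seconds)  # lazily populate with a TTL
        return rows

On catalog updates the application would delete or overwrite the affected keys so readers never see stale entries for long.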

Migrate the catalog to Amazon DynamoDB and add DynamoDB Accelerator (DAX) to cache catalog lookups is not ideal because it requires a major migration from a relational schema to NoSQL and it adds operational and development overhead that is unnecessary when the data is infrequently updated.

Create Amazon RDS read replicas and route catalog read traffic to the replicas from the frontend can increase read capacity but replicas still execute queries on database instances and may not eliminate CPU spikes or reduce latency as effectively as an in memory cache. Replication lag is also a potential concern for recently updated catalog items.

Move the catalog to Amazon Aurora Serverless v2 and rely on its scaling and caching to handle frequent queries involves moving database engines and it does not provide the same low latency for hot reads that a dedicated in memory cache delivers. Autoscaling compute helps concurrency but it does not replace caching for repeated lookups.

Use a cache aside pattern with ElastiCache for hot reads when data is mostly read and rarely updated and set TTLs or invalidate on updates to avoid stale results.

A digital publishing startup operates a three-tier content platform on Amazon EC2 behind an Application Load Balancer. The fleet scales with an EC2 Auto Scaling group across multiple Availability Zones and persists data in an Amazon Aurora cluster. During live news breaks, the site sees short, intense bursts of incoming requests and leadership wants greater resilience to these surges without changing application code. What should you implement to better handle these traffic spikes? (Choose 2)

  • ✓ B. Deploy an Amazon CloudFront distribution with the Application Load Balancer as the origin

  • ✓ D. Add Amazon Aurora Replicas and use the reader endpoint for read traffic

Deploy an Amazon CloudFront distribution with the Application Load Balancer as the origin and Add Amazon Aurora Replicas and use the reader endpoint for read traffic are the correct choices because they offload traffic at the edge and scale read capacity without requiring application code changes.

Deploy an Amazon CloudFront distribution with the Application Load Balancer as the origin reduces load on the EC2 fleet by caching static and cacheable responses at edge locations and regional caches. This offload lowers the request volume that reaches the load balancer and origin during brief spikes and it also reduces latency for end users.

Add Amazon Aurora Replicas and use the reader endpoint for read traffic increases read throughput by distributing read queries across replicas via the reader endpoint. This approach scales read capacity and reduces contention on the writer during peak traffic so the database layer is more resilient to sudden surges.
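
As a small illustration of how little the application changes, the read path only needs to target the reader endpoint. A sketch with pymysql and hypothetical endpoint names follows.

    import os

    import pymysql

    # The cluster endpoint always resolves to the current writer instance
    writer = pymysql.connect(host="cms-db.cluster-abc123.us-east-1.rds.amazonaws.com",
                             user="app", password=os.environ["DB_PASSWORD"], database="cms")

    # The reader endpoint load balances connections across the Aurora Replicas
    reader = pymysql.connect(host="cms-db.cluster-ro-abc123.us-east-1.rds.amazonaws.com",
                             user="app", password=os.environ["DB_PASSWORD"], database="cms")

    with reader.cursor() as cur:
        cur.execute("SELECT id, headline FROM articles ORDER BY published_at DESC LIMIT 20")
        latest = cur.fetchall()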

Enable AWS WAF on the Application Load Balancer is focused on filtering and blocking malicious or unwanted requests and it does not provide caching or additional compute or database capacity for legitimate traffic surges.

Place AWS Global Accelerator in front of the Application Load Balancer improves global network performance and provides faster routing and failover but it does not cache content or add backend capacity so it will not absorb high volumes of requests during short bursts.

Subscribe to AWS Shield Advanced provides enhanced DDoS protection and attack mitigation and it is aimed at malicious traffic rather than scaling or caching for legitimate live user spikes.

When you must absorb high legitimate traffic without changing code think edge caching to offload requests and read scaling to increase database capacity.

A healthcare technology startup wants to add an extra layer of protection to a serverless application that exposes REST endpoints with Amazon API Gateway in the us-west-2 and ap-south-1 Regions. The company uses multiple AWS accounts under AWS Organizations and needs to guard against SQL injection and cross-site scripting across all accounts while keeping operations simple. What is the most efficient approach to meet these requirements?

  • ✓ B. Use AWS Firewall Manager to centrally apply AWS WAF policies across the organization and automatically associate Regional web ACLs with the API Gateway stages in both Regions

Use AWS Firewall Manager to centrally apply AWS WAF policies across the organization and automatically associate Regional web ACLs with the API Gateway stages in both Regions is correct because it enables a single, organization wide WAF policy to be enforced across multiple accounts and Regions while providing application layer protections for SQL injection and cross site scripting.

Use AWS Firewall Manager to centrally apply AWS WAF policies across the organization and automatically associate Regional web ACLs with the API Gateway stages in both Regions lets you define WAF rule sets once and have Firewall Manager automatically deploy and associate Regional web ACLs with API Gateway stages in us-west-2 and ap-south-1 across accounts managed by AWS Organizations. This centralization reduces per account and per region setup and ongoing rule maintenance while ensuring consistent layer seven defenses for common web threats.
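
A rough boto3 sketch of the central policy call appears below. The ManagedServiceData document is abbreviated, the managed rule group choice is only illustrative, and the call is assumed to run from the Firewall Manager administrator account once per Region.

    import json

    import boto3

    fms = boto3.client("fms", region_name="us-west-2")  # repeat for ap-south-1

    waf_rules = {
        "type": "WAFV2",
        "preProcessRuleGroups": [{
            "ruleGroupArn": None,
            "overrideAction": {"type": "NONE"},
            "managedRuleGroupIdentifier": {
                "vendorName": "AWS",
                "managedRuleGroupName": "AWSManagedRulesSQLiRuleSet",  # SQL injection rules
                "version": None,
            },
            "ruleGroupType": "ManagedRuleGroup",
            "excludeRules": [],
        }],
        "postProcessRuleGroups": [],
        "defaultAction": {"type": "ALLOW"},
    }

    fms.put_policy(Policy={
        "PolicyName": "org-api-waf",                  # hypothetical name
        "SecurityServicePolicyData": {"Type": "WAFV2",
                                      "ManagedServiceData": json.dumps(waf_rules)},
        "ResourceType": "AWS::ApiGateway::Stage",     # associate web ACLs with API stages
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,                   # auto-attach for new stages and accounts
    })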

Configure AWS WAF separately in both Regions and attach a Regional web ACL to each API Gateway stage would provide the necessary layer seven protections but it forces manual per account and per region configuration and maintenance which increases operational effort compared to a centralized approach.

Enable AWS Shield Advanced in both Regions for the APIs and rely on it to block SQL injection and XSS is incorrect because Shield Advanced focuses on DDoS mitigation at the network and transport layers and it does not provide application layer rules to block SQL injection or cross site scripting.

Deploy AWS Network Firewall in the VPCs to filter malicious traffic before it reaches API Gateway is not appropriate because regional API Gateway endpoints are managed and their request paths do not traverse customer VPC data planes so Network Firewall cannot enforce protections for those managed endpoints.

For multi account and multi region web protection choose Firewall Manager to centrally apply AWS WAF rules and remember that AWS Shield is aimed at DDoS rather than SQLi or XSS.

A geospatial analytics startup plans to run a custom distributed compute engine on AWS. The engine delivers the best results when nodes are placed very close together for ultra-low latency and maximum east–west bandwidth, and the team is prioritizing raw performance over cost or failure isolation. What deployment approach should the solutions architect choose to meet these requirements?

  • ✓ B. Use a Cluster placement group

The correct choice is Use a Cluster placement group because placing instances close together inside a single Availability Zone delivers the ultra-low latency and maximum east–west bandwidth that the workload requires.

A cluster placement group keeps instances on the same high-bisection bandwidth network segment so tightly coupled distributed compute engines see minimal inter-node latency and very high throughput. Using a cluster placement group together with enhanced networking capable instance types maximizes raw performance which matches the startup prioritization of latency and throughput over cost and failure isolation.
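
A minimal boto3 sketch is shown below; the group name, AMI, and instance type are hypothetical, with an enhanced networking capable type chosen to match the guidance above.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create the cluster placement group, then launch every node into it
    ec2.create_placement_group(GroupName="geo-compute", Strategy="cluster")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # hypothetical AMI
        InstanceType="c6in.16xlarge",               # enhanced networking capable
        MinCount=8,
        MaxCount=8,
        Placement={"GroupName": "geo-compute"},     # co-locate nodes for low latency
    )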

Use a Spread placement group is designed for failure isolation and it deliberately places instances on distinct racks which increases physical separation and undermines the goal of lowest latency and highest inter-node bandwidth.

Use Spot Instances for all compute nodes targets cost savings and accepts interruptions which does not guarantee stable placement or the sustained performance required for tightly coupled workloads.

Optimize the instance kernel and network stack using EC2 user data can improve software level performance but it cannot change host placement or the underlying network fabric which are the primary factors for inter-node latency and throughput.

When a question emphasizes low latency and high inter-node throughput favor a Cluster placement group and enhanced networking capable instance types.

The engineering teams at Aurora Toys maintain eight AWS accounts for development, QA, and staging. Recent invoices show a surge in Amazon EC2 spend from idle oversized instances. The cloud architect needs a centrally managed way to stop anyone in any account from launching xlarge and larger EC2 instance sizes while keeping ongoing management effort minimal. What should the architect do?

  • ✓ B. Create an AWS Organization for all accounts and apply a service control policy that denies ec2:RunInstances for disallowed instance sizes

Create an AWS Organization for all accounts and apply a service control policy that denies ec2:RunInstances for disallowed instance sizes is correct because it provides a centralized preventive guardrail that stops launches of xlarge and larger instances across all member accounts and Regions with minimal ongoing management.

Create an AWS Organization for all accounts and apply a service control policy that denies ec2:RunInstances for disallowed instance sizes works as a preventive control because service control policies are evaluated before account level permissions and they can be attached at the organization root or to organizational units to block the ec2:RunInstances action for the specified instance types. This approach avoids per account policy maintenance and it prevents noncompliant launches rather than reacting after the fact.
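
A sketch of such a guardrail follows; the size wildcards and the root ID are illustrative and the list would need to cover every size family the company wants to block.

    import json

    import boto3

    org = boto3.client("organizations")

    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringLike": {"ec2:InstanceType": [
                "*.xlarge", "*.2xlarge", "*.4xlarge", "*.8xlarge",
                "*.12xlarge", "*.16xlarge", "*.24xlarge", "*.32xlarge", "*.metal",
            ]}},
        }],
    }

    resp = org.create_policy(Name="deny-xlarge-and-up",
                             Description="Block xlarge and larger EC2 launches",
                             Type="SERVICE_CONTROL_POLICY",
                             Content=json.dumps(scp))
    org.attach_policy(PolicyId=resp["Policy"]["PolicySummary"]["Id"],
                      TargetId="r-ab12")            # hypothetical organization root ID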

Attach a resource-based policy to Amazon EC2 in every account that denies launching xlarge and larger instance types is incorrect because Amazon EC2 does not support resource based policies so there is nothing to attach and it cannot be used to centrally block launches.

Create a service-linked role for Amazon EC2 and attach a customer-managed policy that denies launching large instances is incorrect because service linked roles are predefined and managed by AWS and they are not intended for customer managed deny guardrails across accounts. They cannot be used as an organization wide preventive block for RunInstances.

Deploy an AWS Config rule with Systems Manager automation to stop or terminate any xlarge or larger instances after launch is incorrect because this is a detective and reactive solution that still allows the initial noncompliant launch and it requires setup per account and Region which increases operational overhead compared with an SCP.

When you need a centralized, preventive control across multiple accounts think SCPs in AWS Organizations. Use detective services like AWS Config only when you accept that noncompliant actions may occur first.

Which AWS solution runs existing shell or Python scripts in serverless containers on a cron schedule with minimal refactoring and provides a clear path to event driven triggers for jobs that run up to 50 minutes?

  • ✓ D. EventBridge Scheduler starting ECS tasks on AWS Fargate

EventBridge Scheduler starting ECS tasks on AWS Fargate is correct because it runs existing shell or Python scripts inside serverless containers on a cron schedule with minimal refactoring and it provides a clear path to event driven triggers for jobs that run up to 50 minutes.

EventBridge Scheduler starting ECS tasks on AWS Fargate keeps scripts as they are inside container images and it uses AWS Fargate to provide serverless compute so there are no servers or cluster control planes to manage. The Scheduler offers managed cron style scheduling and you can later replace a schedule with an EventBridge rule to start the same ECS task definition for event driven patterns. This approach also avoids the AWS Lambda 15 minute execution limit which makes Lambda unsuitable for jobs that run around 50 minutes.
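
A sketch of the schedule with boto3 follows; every ARN, subnet, and the cron expression are hypothetical, and the execution role is assumed to allow ecs:RunTask.

    import boto3

    scheduler = boto3.client("scheduler")

    scheduler.create_schedule(
        Name="nightly-report",
        ScheduleExpression="cron(30 2 * * ? *)",    # 02:30 UTC daily
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/jobs",
            "RoleArn": "arn:aws:iam::111122223333:role/scheduler-run-task",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/report-script",
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {"awsvpcConfiguration": {
                    "Subnets": ["subnet-0abc"], "AssignPublicIp": "DISABLED"}},
            },
        },
    )

Moving to event driven execution later means pointing an EventBridge rule at the same task definition, so the container and scripts stay untouched.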

Amazon EKS with Kubernetes CronJobs is wrong because it reintroduces control plane and node management which increases operational overhead compared with a serverless container approach.

AWS Batch with managed EC2 compute environments and Batch schedules is incorrect since it focuses on managed compute environments and queue driven batch processing rather than simple scheduled container runs with minimal operational burden.

AWS Glue Python shell jobs scheduled by EventBridge Scheduler is not ideal because Glue is aimed at ETL workloads and it is not a general purpose container runtime for arbitrary shell and Python scripts, so it does not match the serverless container requirement.

Look for keywords like serverless containers and minimal refactoring when you have longer running scheduled jobs. If a job exceeds Lambda limits think in terms of Fargate tasks started by EventBridge.

An education technology company has replatformed a legacy monolith into microservices behind a new Application Load Balancer. The team updated an Amazon Route 53 simple record so that portal.learnbright.co now points to the new load balancer rather than the old one. Several minutes later, many users still reach the previous load balancer when visiting the site. Which configuration issue most likely explains this behavior?

  • ✓ B. The DNS record TTL has not expired yet

The DNS record TTL has not expired yet is the most likely cause of users still reaching the previous load balancer. DNS resolvers and client systems cache the prior A or alias record until the configured TTL expires, so many clients continue to be directed to the old load balancer even after you update the Route 53 record. For planned cutovers lower the TTL well before the change, for example to between 120 and 300 seconds, and then restore it to a higher value after you verify the switch.
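
When the record is a plain CNAME or A record whose TTL you control, the pre-cutover change can be as simple as the boto3 sketch below; the zone ID and old target are hypothetical, and note that alias records have no configurable TTL.

    import boto3

    route53 = boto3.client("route53")

    # Days before the cutover, shrink the TTL so resolvers refresh quickly
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "portal.learnbright.co",
                "Type": "CNAME",
                "TTL": 120,                      # was a much larger value such as 86400
                "ResourceRecords": [{"Value": "old-alb-123.us-east-1.elb.amazonaws.com"}],
            },
        }]},
    )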

A weighted routing policy is still directing traffic to the old load balancer is unlikely because the scenario specifies a simple routing policy which does not distribute traffic by weight. Weighted routing would require an explicit configuration in Route 53 which the question does not describe.

Route 53 health checks for the new load balancer are failing does not fit because simple routing does not use health checks to change which record answers are returned. Health checks are relevant for failover weighted latency or geolocation policies and not for a single simple record.

The Alias record to the load balancer is misconfigured would more likely cause immediate resolution failures or point clients to a wrong target rather than cause many users to continue hitting the previous load balancer due to cached answers. The described pattern matches TTL based caching rather than an alias misconfiguration.

Lower the record TTL well before a DNS cutover and verify with tools like dig or nslookup then raise the TTL after you confirm the new endpoint serves traffic.

You are advising a mobile analytics vendor named LumenApps. Several client teams send JSON events from their mobile apps into a single Amazon Kinesis Data Streams data stream. CloudWatch reports frequent ProvisionedThroughputExceededException errors. Your review shows the producers call PutRecord synchronously for each event and push them one by one at about 3,600 records per second. The business wants to mitigate the errors while keeping costs as low as possible. What should you recommend?

  • ✓ C. Batch and aggregate records and publish with PutRecords or the Kinesis Producer Library

The correct recommendation is Batch and aggregate records and publish with PutRecords or the Kinesis Producer Library. This approach reduces the number of API calls and the per-record overhead so the stream can accept the same event volume with fewer writes and lower cost.

Batch and aggregate records and publish with PutRecords or the Kinesis Producer Library works because PutRecords accepts multiple records in a single request and the Kinesis Producer Library performs client-side aggregation and efficient multi-record uploads so producers do not push one API call per event. Aggregation lowers the request rate against each shard and reduces the likelihood of ProvisionedThroughputExceededException while also lowering request costs compared with increasing shard capacity.
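
A minimal producer sketch using PutRecords is shown below; the stream name and partition key field are hypothetical, and a production version would add exponential backoff around the retry.

    import json

    import boto3

    kinesis = boto3.client("kinesis")

    def publish(events, stream="mobile-events"):
        records = [{"Data": json.dumps(e).encode(),
                    "PartitionKey": e["device_id"]} for e in events]
        # PutRecords accepts up to 500 records per request
        for i in range(0, len(records), 500):
            chunk = records[i:i + 500]
            resp = kinesis.put_records(StreamName=stream, Records=chunk)
            if resp["FailedRecordCount"]:
                # Resend only the records that were throttled or failed
                failed = [rec for rec, res in zip(chunk, resp["Records"])
                          if "ErrorCode" in res]
                kinesis.put_records(StreamName=stream, Records=failed)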

Increase the number of shards in the stream would add write throughput and stop throttling but it increases ongoing costs because each shard has a price and the scenario asks to keep spend as low as possible.

Enable enhanced fan-out for the stream affects consumer read throughput and does not change producer-side write limits so it does not address ProvisionedThroughputExceededException errors.

Add exponential backoff and retries to producer writes can help absorb short bursts but it does not raise sustained write capacity and will still result in throttling if producers continually send one record per API call at high rates.

Prefer batching and client-side aggregation to cut per-record overhead and only increase shards if aggregation still cannot meet sustained throughput.

A financial services startup needs to securely link its headquarters network to a new Amazon VPC as quickly as possible. The connection must provide encryption for data in transit over the public internet, and the team prefers a solution they can deploy the same day without waiting for a carrier circuit. What should the solutions architect recommend?

  • ✓ B. AWS Site-to-Site VPN

AWS Site-to-Site VPN is the correct choice because it creates IPsec encrypted tunnels over the public internet and it can be established quickly to link a headquarters network to a new Amazon VPC.

AWS Site-to-Site VPN uses IPsec to protect data in transit and supports connections to a virtual private gateway or a Transit Gateway. Teams can generate the customer gateway configuration and bring tunnels up within hours, so it meets the requirement to deploy the same day without waiting for a carrier circuit.
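
The setup can be scripted end to end. A boto3 sketch follows with hypothetical IDs, a hypothetical office router IP, and static routing chosen for simplicity.

    import boto3

    ec2 = boto3.client("ec2")

    # Represent the headquarters router (public IP and ASN are hypothetical)
    cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65010)

    vgw = ec2.create_vpn_gateway(Type="ipsec.1")
    ec2.attach_vpn_gateway(VpcId="vpc-0abc123", VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

    # The response carries the tunnel configuration to load onto the on-premises device
    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
        Options={"StaticRoutesOnly": True},
    )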

AWS Direct Connect provides a dedicated private circuit and it usually delivers more consistent bandwidth and lower latency but it is not encrypted by default and provisioning a physical circuit typically takes days to weeks so it does not satisfy the rapid deployment requirement.

AWS Client VPN is built for individual users to access AWS resources from remote devices and it is not designed to connect an entire on premises headquarters network to a VPC so it is not appropriate here.

AWS DataSync is a managed service for transferring files and objects between on premises storage and AWS storage services and it does not create a persistent network connection between networks so it does not address the connectivity and encryption requirement.

For questions that require fast, encrypted network connectivity between on premises and a VPC choose AWS Site-to-Site VPN unless the scenario explicitly requires dedicated private bandwidth or lower latency.

A media streaming startup operates a mission-critical service in the us-east-1 Region that uses an Amazon Aurora MySQL-Compatible cluster holding approximately 3 TB of data. A solutions architect must design a disaster recovery plan to fail over to us-west-2 that achieves a recovery time objective of 8 minutes and a recovery point objective of 4 minutes. Which approach will meet these objectives?

  • ✓ C. Migrate to an Aurora Global Database with us-east-1 as the primary and a secondary DB cluster in us-west-2, and use Amazon EventBridge to trigger an AWS Lambda function to promote us-west-2 during an outage

Migrate to an Aurora Global Database with us-east-1 as the primary and a secondary DB cluster in us-west-2, and use Amazon EventBridge to trigger an AWS Lambda function to promote us-west-2 during an outage is correct because it provides low latency physical replication and supports fast cross-Region promotion that meets the stated recovery time objective and recovery point objective.

Aurora Global Database uses dedicated, log based replication that is designed for minimal replication lag and it typically achieves sub second RPO and very fast managed failover that is well within an 8 minute RTO. Using EventBridge to detect the outage and invoke a Lambda to call the promotion operation automates the failover and helps meet the 4 minute RPO and 8 minute RTO targets.
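
A sketch of the Lambda promotion handler follows; the identifiers are hypothetical and detach-and-promote is shown as one possible promotion path for an unplanned outage.

    import boto3

    # Promote in the secondary Region
    rds = boto3.client("rds", region_name="us-west-2")

    def handler(event, context):
        # Detaching the secondary cluster from the global database promotes it
        # to a standalone writable Aurora cluster in us-west-2
        rds.remove_from_global_cluster(
            GlobalClusterIdentifier="streaming-global",
            DbClusterIdentifier="arn:aws:rds:us-west-2:111122223333:cluster:streaming-west",
        )
        # DNS or application configuration is then switched to the promoted cluster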

Create a cross-Region Aurora MySQL read replica in us-west-2 and use an Amazon EventBridge rule with AWS Lambda to promote the replica when a failure is detected is not ideal because promoting large multi terabyte replicas can take many minutes and replication lag can exceed tight RPO windows.

Build a single multi-Region Aurora MySQL DB cluster spanning us-east-1 and us-west-2 and use Amazon Route 53 health checks to shift traffic on failure is not feasible because Aurora clusters are regional resources and there is no supported single cluster that spans Regions.

Deploy an Aurora multi-master cluster across us-east-1 and us-west-2 to allow writes in both Regions is incorrect because Aurora multi-master is limited to a single Region and cannot provide a cross-Region write capable architecture for disaster recovery.

Remember that Aurora Global Database is the recommended choice when you need very low cross Region replication lag and automated promotion to meet strict RPO and RTO targets.

NovaRide, a ride-sharing marketplace, is moving its on-premises two-tier application to AWS. The stack includes stateless web and API servers and a Microsoft SQL Server backend. The company wants the database tier to have the highest resiliency in a single Region with automatic failover while keeping day-to-day management minimal. What should the solutions architect implement?

  • ✓ B. Launch Amazon RDS for SQL Server in a Multi-AZ configuration with automatic failover

Launch Amazon RDS for SQL Server in a Multi-AZ configuration with automatic failover is the correct choice because it gives managed high availability in a single Region with automatic failover and it keeps day to day management minimal.

Amazon RDS Multi-AZ provisions a synchronous standby in a different Availability Zone and performs automated failover so applications recover quickly without manual intervention. The service also handles routine tasks such as backups, patching, and monitoring which reduces operational burden compared with self managed database servers.
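
Provisioning that configuration is a single API call. A boto3 sketch with hypothetical sizing follows.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="novaride-sql",      # hypothetical identifier
        Engine="sqlserver-se",                    # SQL Server Standard Edition
        DBInstanceClass="db.m5.2xlarge",
        AllocatedStorage=500,
        LicenseModel="license-included",
        MasterUsername="admin",
        ManageMasterUserPassword=True,            # RDS stores the secret in Secrets Manager
        MultiAZ=True,                             # synchronous standby with automatic failover
    )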

Migrate the database to Amazon RDS for SQL Server with a cross-Region read replica is unsuitable because read replicas focus on read scaling and Amazon RDS does not provide managed read replica failover for SQL Server. A cross Region read replica does not deliver the automatic in Region high availability required by this scenario.

Configure Amazon RDS for SQL Server in a cross-Region Multi-AZ deployment is invalid because the Multi-AZ feature operates within a single Region and creates a standby in another Availability Zone. There is no cross Region Multi-AZ product and cross Region patterns are intended for disaster recovery rather than intra Region automatic failover.

Host Microsoft SQL Server on Amazon EC2 instances spread across multiple Availability Zones increases operational overhead because you must build and manage clustering and failover, perform backups and patching yourself, and monitor recovery processes. That approach does not meet the requirement to minimize day to day management.

When a question asks for the highest availability in one Region with low operational overhead choose RDS Multi-AZ and remember that read replicas are primarily for read scaling not automatic failover.

Which changes will make a two tier EC2 application highly available across multiple Availability Zones? (Choose 2)

  • ✓ A. Add private subnets in another AZ and migrate the database to Amazon RDS with Multi-AZ

  • ✓ B. Create an EC2 Auto Scaling group across two or more AZs behind an Application Load Balancer

Add private subnets in another AZ and migrate the database to Amazon RDS with Multi-AZ and Create an EC2 Auto Scaling group across two or more AZs behind an Application Load Balancer are the correct changes because they remove single AZ failure points for both the database tier and the web tier.

The web tier becomes highly available when you run instances in multiple Availability Zones and place them in an Auto Scaling group behind an Application Load Balancer. This combination ensures traffic is distributed and instances can be replaced automatically if an AZ or instance fails.

Migrating the database to Amazon RDS with Multi-AZ and adding private subnets in another AZ provides managed synchronous failover for the stateful tier. RDS Multi-AZ eliminates a single EC2 database instance as a single point of failure and provides automatic failover to the standby in the other AZ.
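
For the web tier, a boto3 sketch of the Auto Scaling group spanning two AZs and registered with an ALB target group is shown below; all names and IDs are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier",
        LaunchTemplate={"LaunchTemplateName": "web", "Version": "$Latest"},
        MinSize=2,
        MaxSize=6,
        VPCZoneIdentifier="subnet-0aaa,subnet-0bbb",   # subnets in two different AZs
        TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"],
        HealthCheckType="ELB",            # replace instances the ALB marks unhealthy
        HealthCheckGracePeriod=120,
    )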

Use Amazon Route 53 failover records to direct traffic to the current stack is insufficient because DNS failover does not create AZ level redundancy for private EC2 instances and it does not provide automated database failover.

Create new private subnets in a second AZ but keep the database on a single EC2 instance is wrong because moving only the network without making the database highly available still leaves a single point of failure in the database tier.

Place existing web instances behind an ALB but stay in one AZ is wrong because keeping all web instances in one AZ means the entire application can fail if that AZ becomes unavailable.

Remember run stateless web servers across multiple AZs behind an ALB with Auto Scaling and use RDS Multi-AZ for databases to achieve AZ level high availability.

A retail analytics firm is moving the static assets for its partner portal from Amazon EC2 to an Amazon S3 bucket and will place an Amazon CloudFront distribution in front. The existing EC2 security group permits access only from about 16 known partner office IP ranges. After the migration, the static content must still be accessible exclusively from those same IP addresses while preventing direct access to the S3 bucket. Which combination of actions should be used to meet these requirements? (Choose 2)

  • ✓ A. Configure CloudFront origin access control and update the S3 bucket policy to allow only requests signed by that OAC

  • ✓ B. Attach an AWS WAF web ACL to the CloudFront distribution that allows only the approved partner IP CIDR ranges using an IP set

The correct options are Configure CloudFront origin access control and update the S3 bucket policy to allow only requests signed by that OAC and Attach an AWS WAF web ACL to the CloudFront distribution that allows only the approved partner IP CIDR ranges using an IP set.

Using CloudFront origin access control and the matching S3 bucket policy makes the bucket private and ensures only the designated CloudFront distribution can fetch objects. This prevents direct requests to S3 and forces all access to go through the distribution.

Attaching an AWS WAF web ACL to the CloudFront distribution lets you enforce viewer IP restrictions at the edge by using an IP set with the approved partner CIDR ranges. This blocks requests from other IPs before they reach CloudFront or the origin.
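
The origin lock looks like the bucket policy sketch below; the bucket name, account ID, and distribution ID are hypothetical.

    import json

    import boto3

    s3 = boto3.client("s3")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFrontViaOAC",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::partner-portal-assets/*",
            # Only requests signed on behalf of this one distribution are allowed
            "Condition": {"StringEquals": {
                "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/E2EXAMPLE"}},
        }],
    }

    s3.put_bucket_policy(Bucket="partner-portal-assets", Policy=json.dumps(policy))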

Associate a security group with the CloudFront distribution that permits traffic from the same partner IP list is incorrect because CloudFront does not support security groups. Security groups apply to network interfaces inside VPCs and cannot be attached to global CloudFront edge locations.

Create a network ACL with the partner IP ranges and associate it to the CloudFront distribution is incorrect because network ACLs are attached to VPC subnets. They cannot be used to filter traffic at CloudFront edge locations.

Use an S3 bucket policy with aws:SourceIp to allow only partner IPs and make the objects public for CloudFront to fetch is incorrect because S3 would see requests arriving from CloudFront edge locations rather than from the original viewer IPs, so an aws:SourceIp condition cannot match the partner ranges. Making objects public also allows direct access that bypasses CloudFront and defeats the access control requirement.

Think in two layers and lock the origin to CloudFront with OAC while enforcing viewer IPs with a WAF IP set on the distribution.

A wellness app company on AWS needs to run a weekly rollover on its Amazon RDS for MySQL database. The team wants a serverless scheduler to trigger a Python script that moves the prior seven days of data into an archive so the primary tables remain small, and the task typically finishes in about 7 minutes. What is the most cost efficient and reliable way to set this up?

  • ✓ C. Create a weekly Amazon EventBridge schedule that invokes an AWS Lambda function to execute the rollover

The correct choice is Create a weekly Amazon EventBridge schedule that invokes an AWS Lambda function to execute the rollover. This option provides a serverless, scheduled trigger and a function execution environment that fits the short seven minute runtime and minimizes cost and operational overhead.

Create a weekly Amazon EventBridge schedule that invokes an AWS Lambda function to execute the rollover works well because EventBridge handles reliable cron style schedules and Lambda is billed by execution time and memory which keeps cost low for a task that finishes in minutes. The seven minute runtime is well under Lambda's 15 minute maximum execution time so there is no need for additional infrastructure. This combination removes server management and reduces failure surface compared to managing instances or containers yourself.
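
A minimal boto3 sketch of the scheduling side might look like the following; the function name, ARN, and cron expression are illustrative assumptions, and the function body would hold the Python rollover logic:

```python
import boto3

events = boto3.client("events")
lambda_ = boto3.client("lambda")

# Hypothetical ARN; the function contains the Python rollover script.
function_arn = "arn:aws:lambda:us-east-1:111122223333:function:weekly-rollover"

# Fire every Monday at 03:00 UTC; fields are minute hour day month weekday year.
rule_arn = events.put_rule(
    Name="weekly-db-rollover",
    ScheduleExpression="cron(0 3 ? * MON *)",
)["RuleArn"]

events.put_targets(Rule="weekly-db-rollover",
                   Targets=[{"Id": "rollover-fn", "Arn": function_arn}])

# Grant EventBridge permission to invoke the function.
lambda_.add_permission(
    FunctionName="weekly-rollover",
    StatementId="allow-eventbridge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```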

Configure an AWS Glue job with a weekly trigger to run the Python rollover script is not ideal because AWS Glue is optimized for Spark ETL workloads and introduces extra startup time and cost for brief jobs. Glue is typically more expensive for short single purpose scripts than a Lambda invocation.

Launch an Amazon EC2 Spot Instance and use an OS cron to run the script each week is unreliable for scheduled maintenance because Spot Instances can be interrupted and you must manage instance lifecycle and OS scheduling. This approach increases operational overhead and it does not meet the serverless requirement.

Use Amazon EventBridge to start an AWS Fargate task once a week to run the rollover in a container will work functionally but it brings container build and orchestration overhead and usually higher per run cost for a simple seven minute script. Fargate is a better fit when you need a custom runtime or longer execution than Lambda allows.

When a scheduled job runs under 15 minutes and you want a serverless, low operational burden solution choose an EventBridge schedule that triggers Lambda unless you need specialized runtimes or longer time limits.

A regional healthcare provider runs critical medical records applications in its on-premises colocation facility. The facility connects to an Amazon VPC through a dedicated 10 Gbps AWS Direct Connect link. The organization is adding several new AWS accounts for different teams, and VPCs in those accounts must reach the on-premises systems quickly and cost-effectively while minimizing ongoing management. Which design change should the solutions architect implement to meet these goals?

  • ✓ C. Attach the existing Direct Connect to an AWS Transit Gateway and share the transit gateway with the new accounts for VPC-to-on-premises routing

Attach the existing Direct Connect to an AWS Transit Gateway and share the transit gateway with the new accounts for VPC-to-on-premises routing is the correct option because it lets all new VPCs in different accounts reach the on premises systems over the single 10 Gbps Direct Connect while minimizing management and cost.

Attach the existing Direct Connect to an AWS Transit Gateway and share the transit gateway with the new accounts for VPC-to-on-premises routing works by associating a Direct Connect gateway with the Transit Gateway and sharing the Transit Gateway through AWS Resource Access Manager. This hub and spoke pattern lets each account attach its VPCs to the central hub and use the existing Direct Connect for low latency and consistent throughput. Centralizing routing and connectivity reduces per account configuration and simplifies scaling.
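
The hub side of this pattern can be sketched with boto3 as follows; the account IDs are placeholders, and the Direct Connect gateway association step is omitted for brevity:

```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Create the central hub in the networking account.
tgw = ec2.create_transit_gateway(Description="shared on-prem hub")
tgw_arn = tgw["TransitGateway"]["TransitGatewayArn"]

# Share it with the new accounts through AWS RAM (IDs are placeholders).
ram.create_resource_share(
    name="tgw-share",
    resourceArns=[tgw_arn],
    principals=["222233334444", "333344445555"],
)
```

Each spoke account then attaches its VPCs to the shared transit gateway, and on-premises routes flow through the Direct Connect gateway association.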

The shared Transit Gateway approach preserves the performance of the dedicated Direct Connect link and reduces ongoing operational overhead because you avoid provisioning multiple physical connections or a large number of VPN tunnels.

Create a new AWS Direct Connect connection for each new account and route traffic to the data center is incorrect because provisioning a separate Direct Connect per account is costly and unnecessary when a single Direct Connect can be shared via a Direct Connect gateway and a Transit Gateway.

Use AWS Cloud WAN to interconnect all VPCs and integrate the existing Direct Connect is not the best choice here because Cloud WAN introduces additional control plane components and policy management and it is more complex than the straightforward Transit Gateway plus Direct Connect gateway pattern that meets the stated requirements.

Establish AWS Site-to-Site VPN tunnels from each new account to the Direct Connect VPC is inferior because VPN tunnels add variable latency and throughput and they increase management effort compared to leveraging the dedicated Direct Connect through a centralized Transit Gateway hub.

For multi account VPC to on premises connectivity with an existing Direct Connect choose a centralized hub using a Transit Gateway with a Direct Connect gateway and share it via AWS RAM to minimize operational overhead and cost.

A robotics manufacturer is moving its telemetry processing stack to AWS. The existing system uses an on-premises message broker that communicates over the MQTT protocol. The application components will run on Amazon EC2, and the team wants a fully managed broker in AWS that supports the same standards so the application code does not need to be modified. Which AWS service should they choose for the broker?

  • ✓ B. Amazon MQ

The correct choice is Amazon MQ because it provides a fully managed message broker that supports standard protocols including MQTT so existing telemetry clients can connect without changing application code.

Amazon MQ is a managed broker service that supports MQTT, AMQP, STOMP, and JMS, which enables a lift and shift of on premises messaging workloads. The service handles provisioning patching backups and failover so EC2 hosted application components can reuse their existing protocols and broker semantics.
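
For illustration, a telemetry client can keep publishing unchanged over MQTT, as in this hedged sketch using the paho-mqtt 1.x client style; the broker endpoint and credentials are placeholders taken from an imagined Amazon MQ console:

```python
import paho.mqtt.client as mqtt  # pip install paho-mqtt

# Hypothetical broker endpoint from the Amazon MQ console.
BROKER = "b-1234abcd-5678.mq.us-east-1.amazonaws.com"

client = mqtt.Client(client_id="telemetry-node-1")
client.tls_set()  # Amazon MQ requires TLS for MQTT (port 8883)
client.username_pw_set("broker_user", "broker_password")
client.connect(BROKER, port=8883)
client.publish("robots/arm7/telemetry", payload='{"temp": 41.2}', qos=1)
client.disconnect()
```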

Amazon SNS is a pub sub notification service that focuses on fan out to endpoints and it does not expose an MQTT broker interface so it cannot act as a drop in replacement for an on premises MQTT broker.

Amazon SQS is a managed queuing service with its own APIs and semantics and it does not support MQTT so applications would need to be refactored to use the SQS model and SDKs.

AWS IoT Core does support MQTT but it is designed for device identity authentication and IoT specific patterns and it normally requires client registration topic model changes and authentication workflows so it is not a zero code change replacement for a general purpose broker.

When the scenario requires no code changes choose a managed broker that supports the same standards based protocols as your on premises system so clients can connect without modification.

An online learning company operates its public-facing API in multiple AWS Regions and requires fixed public IP addresses for client access worldwide. Users in different continents are reporting slow responses when connecting over the internet. What should a solutions architect recommend to minimize internet latency while preserving static IP entry points?

  • ✓ B. Deploy AWS Global Accelerator and register the application endpoints

Deploy AWS Global Accelerator and register the application endpoints is the correct choice because it provides a pair of static anycast IP addresses and routes user traffic over the AWS global network to the nearest healthy regional endpoint which reduces internet latency for worldwide clients.

Global Accelerator uses anycast to present fixed entry points and then hands traffic off to the optimal AWS Region over the AWS backbone which avoids many public internet hops. It supports TCP and UDP and can improve performance for API traffic that cannot be cached by a CDN while also providing health checks and automatic failover to the best endpoint.
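
A hedged boto3 sketch of the provisioning flow follows; the endpoint ARN is a placeholder for a regional ALB, NLB, EC2 instance, or Elastic IP:

```python
import boto3

# Global Accelerator is a global service whose API lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="api-accelerator", IpAddressType="IPV4",
                            Enabled=True)
acc_arn = acc["Accelerator"]["AcceleratorArn"]
print(acc["Accelerator"]["IpSets"])  # the static anycast IPs clients can pin to

listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Register a regional endpoint (the ARN here is a placeholder).
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{"EndpointId": "arn:aws:elasticloadbalancing:...",
                             "Weight": 128}],
)
```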

Establish AWS Direct Connect circuits in multiple Regions is not appropriate because Direct Connect is designed for private dedicated connectivity from on premises networks to AWS and it does not optimize general internet paths for end users across continents.

Configure Amazon Route 53 latency-based routing to send users to the closest Region is not sufficient because latency based DNS only affects resolution of names and it does not provide acceleration over the network nor fixed anycast IP addresses for clients to use.

Create an Amazon CloudFront distribution in front of the application can reduce latency for cached HTTP or HTTPS content but it does not provide static public IP addresses and it is not a fit when mixed protocols or fixed IP entry points are required.

When an architecture asks for fixed public IPs and global performance for APIs think about AWS Global Accelerator because it provides anycast static IPs and routes traffic over the AWS backbone instead of relying only on DNS or CDN caching.

Which IAM policy JSON allows listing the S3 bucket named foo and reading all objects within that bucket?

  • ✓ B. { "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:GetObject" ], "Resource":"arn:aws:s3:::foo/*" }, { "Effect":"Allow", "Action":[ "s3:ListBucket" ], "Resource":"arn:aws:s3:::foo" } ] }

The option that assigns s3:GetObject to arn:aws:s3:::foo/* and s3:ListBucket to arn:aws:s3:::foo is correct because S3 permissions are scoped by resource type and bucket level actions such as ListBucket must target the bucket ARN while object level actions such as GetObject must target object ARNs using the bucket/* pattern.

The correct policy puts s3:ListBucket on the bucket ARN arn:aws:s3:::foo so the principal can list keys in that bucket and it puts s3:GetObject on arn:aws:s3:::foo/* so the principal can read any object in the bucket. This separation matches AWS service authorization rules and ensures the intended permissions are effective.

The policy that puts ListBucket on object ARNs and GetObject on the bucket ARN is wrong because ListBucket must reference the bucket ARN and GetObject must reference object ARNs, so the resources do not match the action scopes and the permissions will not grant the intended access.

The policy that applies both s3:ListBucket and s3:GetObject to arn:aws:s3:::foo is wrong because object level access requires the arn:aws:s3:::bucket/* pattern and assigning GetObject only to the bucket ARN does not allow reading objects.

The policy using ListAllMyBuckets plus GetObject on foo/* is wrong because ListAllMyBuckets is an account level action for listing buckets and it does not list objects inside a specific bucket, so you still need ListBucket on the bucket ARN to enumerate keys.

Map bucket actions to the bucket ARN and map object actions to the bucket/* pattern. ListBucket goes to arn:aws:s3:::bucket and GetObject goes to arn:aws:s3:::bucket/*.

A media analytics startup, NovaStream Labs, needs a messaging approach for a service that performs rapid request-and-reply exchanges at very high volume. The team wants to minimize build effort and reduce operational costs, and expects to spin up thousands of short-lived reply endpoints each hour. Which Amazon SQS queue capability should they use to best satisfy these goals?

  • ✓ C. Amazon SQS temporary queues

The correct choice is Amazon SQS temporary queues. They provide on-demand virtual reply queues that let you multiplex thousands of short-lived reply endpoints onto a single underlying queue which lowers build effort and operational cost.

You can use the Temporary Queue Client to create virtual reply queues on demand that multiplex onto a single SQS queue. This enables high-throughput request-response because it reduces the number of API calls and the amount of queue management required and it keeps costs down when spinning up thousands of short-lived reply destinations each hour.

Amazon SQS FIFO queues are focused on preserving message order and providing exactly-once processing. They are not optimized for creating and tearing down many ephemeral reply endpoints and they impose throughput constraints that make them a poor fit for very high volume short-lived request-reply workloads.

Amazon SQS delay queues introduce delivery delays per message and they do not provide facilities for multiplexed reply endpoints or simplified reply handling. They therefore do not address the startup routing or cost concerns of a high-throughput request-reply pattern.

Amazon SQS dead-letter queues are designed to capture messages that cannot be processed after retries for debugging and isolation. They are useful for failure handling but they are not a mechanism to implement request-response at scale.

When an item on the exam describes many short-lived reply endpoints and a need to reduce operational overhead think temporary queues and the Temporary Queue Client to multiplex reply destinations onto a single queue.

A field services company equips its maintenance vans with GPS trackers that report positions. Each device sends an update every 4 minutes only if the vehicle has moved more than 200 meters. The updates currently post to a web tier running on four Amazon EC2 instances distributed across multiple Availability Zones in a single Region. During a seasonal surge, the web tier was saturated, some updates were dropped, and there was no buffer to replay them. The company needs to avoid location data loss and absorb bursts elastically with minimal operational effort. What should a solutions architect recommend?

  • ✓ B. Use an Amazon SQS queue to buffer incoming GPS updates and scale out workers to poll and process messages

The best choice is Use an Amazon SQS queue to buffer incoming GPS updates and scale out workers to poll and process messages. SQS decouples producers from consumers and provides durable at least once delivery so the application can absorb bursts without losing location updates while keeping operational overhead low.

SQS scales automatically with traffic and lets you add visibility timeouts and dead letter queues so failed messages can be retried or quarantined. The managed nature of SQS means there is no cluster to maintain and workers can scale independently to process the buffered updates at their own pace.
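
A minimal sketch of the pattern looks like this; the queue name and payload are illustrative and `process` is a hypothetical handler:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="gps-updates")["QueueUrl"]

# Web tier: enqueue each GPS update instead of processing it inline.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody='{"van": "v-042", "lat": 47.61, "lon": -122.33}')

# Worker tier: long-poll, process, then delete only on success so failed
# messages reappear after the visibility timeout.
while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process(msg["Body"])  # hypothetical handler
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])
```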

Ingest the GPS stream with Amazon Kinesis Data Streams, trigger AWS Lambda for real-time processing, and store locations in Amazon DynamoDB is a powerful real time option that supports ordering and replay, but Kinesis Data Streams requires shard management and introduces more moving parts so it adds operational complexity compared with SQS.

Route updates through Amazon Managed Streaming for Apache Kafka (Amazon MSK) and run consumer applications to process the stream can meet high throughput needs, but Amazon MSK still requires managing topics and consumer groups and typically increases operational burden which does not match the minimal ops requirement.

Write each update to Amazon S3 and have the web tier periodically list the bucket and process new objects is inefficient for many small, frequent records because listing and object creation add latency and overhead and it is not suited to near real time ingestion during bursts.

When the exam emphasizes minimal operational effort and avoiding data loss pick a managed queuing service such as SQS that decouples producers and consumers and can absorb traffic spikes.

A global e-commerce startup runs its web application across North America and Asia Pacific. The database layer is a MySQL instance hosted on Amazon EC2 in us-east-2, and Amazon Route 53 latency-based routing sends users to the nearest application stack. Customers in Asia Pacific report slow responses for read-heavy requests, with typical round-trip latency exceeding 180 ms to the database. What change to the database tier will most effectively reduce latency for those users?

  • ✓ C. Migrate to an Amazon Aurora MySQL compatible global database and have the Asia Pacific application tier use the local reader endpoint

Migrate to an Amazon Aurora MySQL compatible global database and have the Asia Pacific application tier use the local reader endpoint is the correct choice because it provides a local read endpoint in the secondary Region which lowers round trip time for read heavy requests.

Migrate to an Amazon Aurora MySQL compatible global database and have the Asia Pacific application tier use the local reader endpoint uses storage based, asynchronous replication to replicate data to a secondary Region and it exposes a local reader endpoint there which serves reads with low latency. This design minimizes performance impact on the primary Region and it is intended for globally distributed, read heavy workloads where local reads matter.
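
Assuming the data has first been migrated from the EC2-hosted MySQL into a regional Aurora MySQL cluster, promoting it to a global database can be sketched with boto3 as follows; all identifiers are placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-2")

# Promote the existing regional cluster to a global database.
rds.create_global_cluster(
    GlobalClusterIdentifier="catalog-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-2:111122223333:cluster:catalog",
)

# Add a secondary cluster in ap-southeast-1; its local reader endpoint
# then serves the Asia Pacific application tier's reads.
rds_apse1 = boto3.client("rds", region_name="ap-southeast-1")
rds_apse1.create_db_cluster(
    DBClusterIdentifier="catalog-apse1",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="catalog-global",
)
```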

Create an Amazon RDS for MySQL read replica in ap-southeast-1 and point the Asia Pacific application tier to the replica for queries is not feasible because RDS read replicas must be created from an RDS source and cannot be created from a self managed MySQL instance running on EC2.

Move the workload to Amazon Redshift and use AWS Database Migration Service to synchronize, then direct application queries to Redshift is unsuitable because Redshift is a columnar analytics data warehouse and it is not designed for low latency transactional OLTP queries that an application serves per request.

Migrate to Amazon RDS for MySQL in ap-southeast-1 and enable Multi-AZ improves availability within a Region but Multi AZ is intra Region only and it does not provide cross Region read endpoints so it will not address latency for Asia Pacific users.

For global read heavy workloads prefer Aurora Global Database and point regional tiers to the local reader endpoint. Remember that Multi AZ addresses regional high availability not cross Region latency.

A global courier firm runs its tracking APIs on Amazon EC2 instances in the AWS ap-southeast-2 Region. The HTTPS endpoints are consumed worldwide by partners and customers to fetch near real-time parcel updates. Users connecting from Africa and South America report high latency and inconsistent response times. You must recommend a cost-efficient way to accelerate access for these distant users without re-platforming or deploying new regional stacks. What should you recommend?

  • ✓ B. Configure AWS Global Accelerator with an HTTPS listener and register the existing ap-southeast-2 EC2 endpoint to improve global access

Configure AWS Global Accelerator with an HTTPS listener and register the existing ap-southeast-2 EC2 endpoint to improve global access is correct because it accelerates traffic over the AWS global network and delivers more consistent and lower latency access for distant users without requiring new regional stacks.

Global Accelerator optimizes the network path by bringing user traffic onto the AWS backbone at the nearest edge and routing it to the ap-southeast-2 endpoint over AWS infrastructure. This preserves the existing EC2 based API stack so no replatforming or multi Region deployments are needed and it is generally more cost efficient than operating duplicate stacks.

Use Amazon Route 53 latency-based routing to direct clients to identical API stacks deployed across multiple Regions is not suitable because it forces you to deploy and operate full multi Region stacks which the scenario seeks to avoid and that increases cost and operational complexity.

Deploy Amazon API Gateway in multiple Regions and use AWS Lambda as a proxy to forward requests to the EC2 API in ap-southeast-2 is suboptimal because it adds extra hops and service costs while still relying on a distant origin so it does not reliably fix latency or consistency for global users.

Place Amazon CloudFront in front of the API and apply the CachingOptimized managed policy to cache responses is ineffective for highly dynamic and user specific tracking data that does not cache well and CloudFront can add an extra request hop rather than giving consistent low latency for fresh data.

When a single Region origin must serve global and dynamic API traffic and you cannot replicate the application think about using AWS Global Accelerator to reduce internet variability and improve latency.

A fintech startup operates six isolated VPCs in one AWS Region named dev, qa, sales, hr, analytics, and core to separate environments. Following a reorganization, the team needs private any-to-any connectivity between all VPCs. They created a hub-and-spoke by peering each spoke to the core VPC, but workloads in different spokes still cannot communicate with each other. What is the most scalable and resource-efficient way to achieve full connectivity across these VPCs?

  • ✓ C. AWS Transit Gateway

AWS Transit Gateway is the correct option because it provides a scalable regional hub that enables private any to any connectivity between all VPCs.

AWS Transit Gateway acts as a central routing hub and supports transitive routing between attached VPCs which allows spokes to communicate through the gateway without creating direct peering links. A hub and spoke design built with VPC peering fails in this case because VPC peering is non transitive and traffic cannot flow from one spoke to another via the core VPC.

Full-mesh VPC peering could provide direct connectivity between every VPC but it does not scale well because you must create n(n-1)/2 peering connections, which is 15 for these six VPCs, and manage many route entries, which becomes operationally heavy.

Interface VPC endpoints with AWS PrivateLink are intended for publishing or consuming specific services privately and they do not provide general purpose routing between VPCs.

Internet Gateway provides internet access to a VPC and it is not a mechanism for private VPC to VPC connectivity inside the AWS network.

When multiple VPCs must communicate privately in a Region remember that VPC peering is non transitive and prefer Transit Gateway for scalable many to many connectivity.

Which scaling method allows an Amazon ECS service running on Fargate to handle an initial 36 hour spike in demand and then automatically reduce costs as demand decreases?

  • ✓ B. ECS Service Auto Scaling with target tracking

ECS Service Auto Scaling with target tracking is the correct choice because it lets an Amazon ECS service running on Fargate scale out to handle a long 36 hour demand spike and then automatically scale in as demand falls to reduce cost.

ECS Service Auto Scaling with target tracking works by integrating with CloudWatch metrics such as average CPU or memory usage and automatically adjusting the number of Fargate tasks to keep the metric near the configured target. This managed approach provides the performance needed during surges without custom orchestration and it scales in automatically so you do not pay for excess capacity after the spike.
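
A hedged boto3 sketch follows, with placeholder cluster, service, capacity, and target values:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the Fargate service's desired count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/web-api",   # placeholder cluster/service
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Keep average CPU near 60%: scale out for the spike, scale in afterwards.
aas.put_scaling_policy(
    PolicyName="cpu-target-60",
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/web-api",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleInCooldown": 120,
        "ScaleOutCooldown": 60,
    },
)
```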

Lambda + CloudWatch alarms to change ECS desired count is not ideal because it duplicates managed scaling features, increases operational complexity and raises the risk of errors or race conditions compared with using the built in Application Auto Scaling integration.

Fargate Spot only is unsuitable because Spot capacity can be interrupted and does not guarantee capacity for launch spikes, so it is not reliable as the primary scaling mechanism for critical traffic.

Application Auto Scaling predictive scaling for ECS is incorrect because predictive scaling targets EC2 Auto Scaling groups rather than ECS services, so it is not the appropriate feature to scale Fargate services in most cases.

For variable workloads on Fargate prefer using target tracking in Application Auto Scaling so a metric stays near the target. Use the managed integration instead of custom Lambda scripts and avoid relying on Spot when you need guaranteed capacity.

A photo marketplace called VistaMart lets customers upload product images for preview. The worker tier that resizes and watermarks these images runs on instances in an Auto Scaling group. The front end must be decoupled from the processors, and the group should scale automatically in proportion to the rate of incoming uploads. What is the best way to accomplish this?

  • ✓ C. Buffer uploads in Amazon SQS and use a CloudWatch metric for queue backlog per instance to drive Auto Scaling on queue length

Buffer uploads in Amazon SQS and use a CloudWatch metric for queue backlog per instance to drive Auto Scaling on queue length is the correct choice because it decouples the front end from the processors and lets the worker fleet scale automatically in proportion to incoming uploads.

Inserting Amazon SQS between the web tier and the worker tier provides durable buffering and isolates spikes in upload traffic from processing throughput. CloudWatch exposes queue metrics such as visible message count and you can compute backlog per instance to drive Auto Scaling policies. Scaling on queue depth ensures capacity follows pending work instead of relying on host-level signals.
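
One way to publish such a metric is a small scheduled job like this sketch, in which the queue URL, group name, and metric namespace are illustrative assumptions:

```python
import boto3

sqs = boto3.client("sqs")
asg = boto3.client("autoscaling")
cw = boto3.client("cloudwatch")

# Placeholder queue URL and Auto Scaling group name.
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/image-uploads"

attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["ApproximateNumberOfMessagesVisible"])
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessagesVisible"])

group = asg.describe_auto_scaling_groups(
    AutoScalingGroupNames=["image-workers"])["AutoScalingGroups"][0]
in_service = sum(1 for i in group["Instances"]
                 if i["LifecycleState"] == "InService")

# Publish backlog per instance; a target tracking policy on this custom
# metric then keeps pending work per worker near the chosen target.
cw.put_metric_data(
    Namespace="VistaMart/Workers",
    MetricData=[{
        "MetricName": "BacklogPerInstance",
        "Value": backlog / max(in_service, 1),
        "Unit": "Count",
    }],
)
```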

Publish each upload event to Amazon SNS and scale the Auto Scaling group based on the number of SNS notifications is not suitable because SNS is a push based fan-out service and not a pullable work queue. EC2 Auto Scaling has no native way to treat SNS notification counts as a backlog metric.

Attach a target tracking policy to maintain 65% average CPU utilization across the Auto Scaling group does not meet the requirement to scale with upload volume because CPU utilization may not correlate with the number of queued images. Relying on CPU can cause scaling to lag or to provision too many instances.

Send upload events to Amazon Kinesis Data Streams and adjust scaling by increasing shard count as traffic rises is misaligned because Kinesis is optimized for streaming and analytics rather than a pullable task queue. Changing shard count is not a native signal for EC2 Auto Scaling and it adds operational complexity.

When autoscaling worker fleets that process jobs prefer scaling on queue depth rather than host metrics. Use SQS visible messages or backlog per instance with CloudWatch to drive Auto Scaling.

A regional credit union operates a portfolio analytics application behind an Application Load Balancer with instances in an Amazon EC2 Auto Scaling group. On the first business day of each month at 07:30 UTC, employees trigger intensive reconciliation jobs and the application becomes sluggish. Amazon CloudWatch shows average CPU on the instances peaking at 100 percent during that period. What should a solutions architect recommend to ensure capacity is ready before the surge and prevent performance degradation?

  • ✓ D. Configure a scheduled scaling action for the EC2 Auto Scaling group that runs just before the monthly jobs

Configure a scheduled scaling action for the EC2 Auto Scaling group that runs just before the monthly jobs is correct because the reconciliation workload is predictable and capacity must be ready before 07:30 UTC to avoid CPU saturation and application slowdowns.

The monthly surge occurs at a known time so proactive scaling launches and warms instances before the spike begins and ensures the load balancer can route traffic to healthy instances. Scheduled scaling avoids the delay inherent in reactive metrics and instance warm up and it prevents users from experiencing degraded performance while new instances start.
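
A minimal boto3 sketch follows; the group name and sizes are placeholders, and the recurrence uses the first calendar day of the month since cron cannot express business days directly:

```python
import boto3

asg = boto3.client("autoscaling")

# Pre-warm capacity at 07:00 UTC on the first of each month, before the
# 07:30 UTC reconciliation jobs begin.
asg.put_scheduled_update_group_action(
    AutoScalingGroupName="portfolio-analytics",
    ScheduledActionName="monthly-reconciliation-prewarm",
    Recurrence="0 7 1 * *",   # cron: minute hour day-of-month month weekday
    MinSize=8,
    MaxSize=16,
    DesiredCapacity=12,
)
```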

Enable predictive scaling for the EC2 Auto Scaling group is not the best choice because predictive scaling expects recurring patterns like daily or weekly seasonality and it may not guarantee instances are available at the exact monthly time.

Configure Amazon ElastiCache to offload read queries from the application does not address a compute bound reconciliation job because caching cannot reduce the CPU consumed by heavy processing on the instances.

Create a simple scaling policy that scales on average CPU utilization is reactive so it waits until CPU rises before adding capacity and users will see performance degradation while instances are launched and warmed.

Use scheduled scaling for predictable, time based spikes and use target tracking or predictive scaling for unpredictable or recurring patterns that are not tied to a specific clock time.

Pioneer Metrics runs a public web application on Amazon EC2 instances behind an Application Load Balancer. The security team wants to strengthen protection and limit the effect of large bursts of unwanted HTTP requests that could exhaust backend resources. What is the most effective way to reduce the impact of these events at the load balancer so the instances remain available?

  • ✓ C. Attach an AWS WAF web ACL with rate based rules to the Application Load Balancer

Attach an AWS WAF web ACL with rate based rules to the Application Load Balancer is correct because it can detect and throttle excessive request rates per source IP in a rolling five minute window and block HTTP floods before they exhaust backend resources while allowing legitimate traffic to continue.

The WAF rate based rules provide inline application layer filtering at the ALB and support per IP rate limits and custom rules that match HTTP paths and headers. This keeps the ALB and the EC2 instances available and reduces the need for manual intervention during surges.
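
A hedged boto3 sketch of such a web ACL, with the name, request limit, and metric names as assumptions:

```python
import boto3

# For an ALB the web ACL scope is REGIONAL (CLOUDFRONT is for distributions).
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="http-flood-guard",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 1,
        # Block any source IP exceeding 2000 requests in the rolling window.
        "Statement": {"RateBasedStatement": {"Limit": 2000,
                                             "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "RateLimitPerIP"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "HttpFloodGuard"},
)
# Then associate it with the load balancer:
# wafv2.associate_web_acl(WebACLArn=..., ResourceArn=alb_arn)
```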

Enable VPC Flow Logs, store them in Amazon S3, and analyze them with Amazon Athena to detect and block DDoS patterns is reactive and analytical only because flow logs and Athena queries do not provide inline enforcement at the ALB and they cannot immediately stop a flood of requests.

AWS Shield Advanced provides strong network and transport layer DDoS protection and cost protection but it does not provide the granular HTTP request level filtering or per IP rate limiting on an ALB that is needed to stop application layer floods. The WAF rate based rules are the more precise control for HTTP floods and AWS Shield Advanced can complement WAF in a layered defense.

Build a custom AWS Lambda function to watch for suspicious traffic and update a network ACL during an attack is brittle and coarse because network ACLs operate at the subnet boundary and cannot express HTTP semantics or per path limits and detection plus update latency makes this approach less effective than native WAF controls.

When mitigating application layer HTTP floods attach AWS WAF rate based rules to the ALB for inline throttling and use logs for post attack analysis.

A fintech company runs an API on Amazon EC2 instances in three private subnets across two Availability Zones. The API must read from and write to an Amazon DynamoDB table. The security team prohibits egress to the internet, NAT gateways, and use of public DynamoDB endpoints. With the least operational effort and while keeping traffic on the AWS network, how should the architect enable private connectivity from the subnets to DynamoDB?

  • ✓ C. Provision a DynamoDB gateway VPC endpoint and update the private subnet route tables

The correct option is Provision a DynamoDB gateway VPC endpoint and update the private subnet route tables. This approach uses a gateway VPC endpoint so traffic to DynamoDB stays on the AWS network and no internet egress or NAT is required while operational effort is minimal.

A gateway endpoint integrates with your subnet route tables so EC2 instances can reach DynamoDB through the managed gateway within the VPC. You add routes that point to the gateway endpoint and DynamoDB traffic is kept on the AWS backbone. This satisfies the security requirement to avoid public endpoints and internet egress and it is simpler to operate than managing custom VPNs or NAT infrastructure.
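
The entire setup reduces to a single API call, sketched here with placeholder VPC and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Route table IDs cover the three private subnets (placeholders).
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0aaa1111", "rtb-0bbb2222", "rtb-0ccc3333"],
)
```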

Deploy an interface VPC endpoint for DynamoDB and attach the ENIs in each private subnet is incorrect because the standard, lowest effort path to DynamoDB is the gateway endpoint, which integrates with route tables at no additional charge. Interface endpoints use PrivateLink and suit many other services, but they add per hour and per GB costs and more configuration than this scenario requires.

Establish a private DynamoDB endpoint accessible through an AWS Site-to-Site VPN is incorrect since there is no separate customer VPN reachable private endpoint for DynamoDB. VPC endpoints are consumed inside the VPC so a Site-to-Site VPN does not provide a native DynamoDB target and it would add unnecessary complexity.

Build a software VPN directly between the private subnets and DynamoDB is incorrect because managed services like DynamoDB do not terminate customer VPN tunnels. A software VPN to a managed service endpoint is unsupported and would increase operational burden without solving the private connectivity requirement.

Remember that Amazon S3 and Amazon DynamoDB use gateway VPC endpoints while most other AWS services use interface VPC endpoints backed by PrivateLink. For exam questions identify the service type and confirm whether it integrates with route tables before choosing the endpoint.

A media streaming startup, LumenFlix, runs five Linux-based Amazon EC2 instances in a single Availability Zone to operate a clustered workload. The team needs a block volume that every instance can attach to at the same time using Amazon EBS Multi-Attach so the nodes can share the same data. Which Amazon EBS volume type should they choose?

  • ✓ B. Provisioned IOPS SSD (io1/io2) EBS volumes with Multi-Attach enabled

The correct choice is Provisioned IOPS SSD (io1/io2) EBS volumes with Multi-Attach enabled. This option allows a single block volume to be attached to multiple EC2 instances in the same Availability Zone so the clustered nodes can share the same data.

Provisioned IOPS SSD (io1/io2) EBS volumes with Multi-Attach enabled support EBS Multi-Attach which enables concurrent attachments and they provide provisioned IOPS for low latency and consistent performance. You must keep the volume and all instances in the same Availability Zone and use a cluster aware file system or application level coordination to prevent data corruption when multiple nodes access the volume.
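
A hedged boto3 sketch with placeholder AZ, size, IOPS, and instance IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# The volume and every instance that attaches it must share one AZ.
vol = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io2",
    Size=500,                 # GiB
    Iops=16000,
    MultiAttachEnabled=True,
)

# Attach the same volume to each cluster node (instance IDs are placeholders).
for instance_id in ["i-0aaa...", "i-0bbb...", "i-0ccc..."]:
    ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=instance_id,
                      Device="/dev/sdf")
```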

General Purpose SSD (gp3) EBS volumes are versatile and cost effective but they do not support Multi-Attach so they cannot be used for concurrent attachment by multiple instances.

Throughput Optimized HDD (st1) EBS volumes are designed for large sequential throughput and are HDD based and they do not support Multi-Attach so they are unsuitable for shared block storage across cluster nodes.

Cold HDD (sc1) EBS volumes target infrequently accessed, low cost data and they also do not support Multi-Attach and they lack the performance needed for a clustered shared disk workload.

Only io1 and io2 support EBS Multi-Attach and all attachments must be in the same Availability Zone. Use a cluster aware file system when multiple instances write to the same volume.

Which AWS service provides a high performance POSIX parallel file system for hot data that delivers very low latency and highly parallel I O and includes native Amazon S3 integration for fast import and export to a low cost cold tier?

  • ✓ B. Amazon FSx for Lustre with S3 integration

Amazon FSx for Lustre with S3 integration is correct because it provides a high performance POSIX parallel file system for hot data that delivers very low latency and highly parallel I O and it includes native Amazon S3 integration for fast import and export to a low cost cold tier.

Amazon FSx for Lustre with S3 integration uses the Lustre file system which is POSIX compliant and optimized for HPC style workloads that need massively parallel reads and writes. It achieves low latency and high throughput by striping data across multiple servers and it links the file system namespace directly to Amazon S3 so you can import active datasets quickly and export results to S3 for long term storage.
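
A minimal boto3 sketch using the scratch deployment type with a placeholder bucket and subnet:

```python
import boto3

fsx = boto3.client("fsx")

# SCRATCH_2 with ImportPath/ExportPath links the file system namespace
# to an S3 bucket; bucket and subnet IDs are placeholders.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,          # GiB, Lustre's minimum increment
    SubnetIds=["subnet-0abc1234"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://geodata-cold-tier",
        "ExportPath": "s3://geodata-cold-tier/results",
    },
)
```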

Amazon FSx for OpenZFS is not correct because it is a scale up network attached storage solution that focuses on single node performance and advanced data management features and it does not provide the same distributed, massively parallel I O or the native S3 data repository integration used for seamless hot to cold workflows.

Amazon EFS is not correct because it is a general purpose, elastic NFS file system built for broad compatibility and many concurrent clients rather than extreme parallel I O patterns for HPC simulations and it does not offer native S3 repository linking like the Lustre integration.

Amazon FSx for Windows File Server is not correct because it provides SMB access and Windows file system features for Windows workloads and it lacks POSIX semantics, HPC style parallel I O, and the native S3 integration needed for the hot to cold tier pattern described.

Watch for keywords such as POSIX, HPC scale parallel I O, and S3 data repository integration to identify Amazon FSx for Lustre with S3 integration on exam questions.

During a two-day onboarding lab at a fintech startup, the platform team quizzes junior engineers on Amazon S3 lifecycle rules. Which of the following storage class transitions are not supported when moving existing S3 objects between classes? (Choose 2)

  • ✓ B. Amazon S3 One Zone-IA ⇒ Amazon S3 Standard-IA

  • ✓ D. Amazon S3 Intelligent-Tiering ⇒ Amazon S3 Standard

Amazon S3 Intelligent-Tiering ⇒ Amazon S3 Standard and Amazon S3 One Zone-IA ⇒ Amazon S3 Standard-IA are not supported lifecycle transitions. These two options attempt to move objects back to a higher durability class or to a multi Availability Zone IA class which lifecycle transitions do not allow.

The Amazon S3 Intelligent-Tiering ⇒ Amazon S3 Standard transition is invalid because lifecycle rules are designed to move data to colder or lower cost storage classes rather than revert to the higher cost Standard tier. Intelligent-Tiering already manages automatic tiering for access patterns and it is not a target for lifecycle rules to move objects back into Standard.

The Amazon S3 One Zone-IA ⇒ Amazon S3 Standard-IA transition is invalid because One Zone-IA stores objects in a single Availability Zone and lifecycle policies do not support moving objects back to a multi Availability Zone IA class. Transitions can reduce durability or cost but they cannot increase cross AZ resilience.

Amazon S3 Standard ⇒ Amazon S3 Intelligent-Tiering is valid because S3 supports transitioning from Standard to Intelligent-Tiering to optimize costs for unknown or changing access patterns.

Amazon S3 Standard-IA ⇒ Amazon S3 One Zone-IA is valid because lifecycle rules can move data from Standard-IA to One Zone-IA when you accept lower durability for lower cost.

Amazon S3 Standard-IA ⇒ Amazon S3 Glacier Instant Retrieval is valid because lifecycle policies support transitioning IA objects to colder archive classes such as Glacier Instant Retrieval for lower storage cost with retrieval performance appropriate to that class.
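
A rule chaining two of the supported transitions might look like this sketch, with a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Each transition moves data "colder"; reversing direction is rejected.
s3.put_bucket_lifecycle_configuration(
    Bucket="sensor-archive",   # placeholder
    LifecycleConfiguration={"Rules": [{
        "ID": "age-out",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER_IR"},
        ],
    }]},
)
```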

Think of lifecycle transitions as a one way path toward colder or lower durability storage and eliminate any answer that tries to move objects back to Standard or to increase AZ resilience.

A regional travel booking platform is breaking a tightly coupled monolith into AWS microservices after a 7x surge in daily users over the past quarter caused latency to rise. Some services will generate tasks quickly while others will take longer to process them. What is the best way to connect these microservices so fast producers do not overwhelm slower consumers?

  • ✓ B. Amazon Simple Queue Service (Amazon SQS)

The correct choice is Amazon Simple Queue Service (Amazon SQS). Amazon SQS provides a durable and scalable buffer so fast producers do not overwhelm slower consumers.

Amazon SQS supports visibility timeouts and automatic retries so messages are returned to the queue if a consumer fails and consumers can process at their own pace. Amazon SQS is fully managed and scales independently from producers and consumers so it minimizes operational work while providing reliable decoupling and backpressure.

Amazon Simple Notification Service (Amazon SNS) is a push based pub sub service and it delivers messages to endpoints rather than storing them per subscriber. Amazon SNS does not provide native queue semantics or built in backpressure so slow consumers need additional buffering.

Amazon Kinesis Data Streams is designed for high throughput streaming and parallel consumption from shards. Amazon Kinesis Data Streams is optimized for event streaming and analytics rather than task queue semantics and consumers must scale to keep up with shard throughput.

Amazon MQ offers managed brokers with traditional messaging features but it requires broker sizing and other operational considerations. For cloud native microservices that need elastic, low ops buffering, Amazon SQS is generally the simpler and more scalable choice.

When producers outpace consumers think decouple with a queue and prefer managed queues that provide visibility timeouts and automatic retries.

An accounting firm has launched a production three-tier web application on AWS. The web tier runs in public subnets across two Availability Zones in a single VPC. The application and database tiers run in private subnets in the same VPC. The company operates a third-party virtual firewall from AWS Marketplace in a dedicated security VPC, and the appliance exposes an IP interface that can receive IP packets. A solutions architect must ensure every inbound request to the application is inspected by this appliance before it reaches the web tier while minimizing operational effort. What should the architect implement?

  • ✓ B. Deploy a Gateway Load Balancer in the security VPC and create a Gateway Load Balancer endpoint in the application VPC to direct all ingress traffic through the appliance

The best choice is Deploy a Gateway Load Balancer in the security VPC and create a Gateway Load Balancer endpoint in the application VPC to direct all ingress traffic through the appliance. This ensures every inbound request is inspected by the third party firewall before it can reach the web tier.

Gateway Load Balancer provides transparent service insertion using GENEVE encapsulation and it scales third party virtual appliances. A Gateway Load Balancer endpoint in the application VPC lets you steer ingress traffic into the security VPC without complex per instance routing changes and it minimizes operational overhead compared with manual traffic steering or custom routing solutions.
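
Creating the endpoint in the application VPC is a single call, sketched here with placeholder identifiers; the service name would come from the endpoint service fronting the GWLB in the security VPC:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0app1234",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0edge1234"],
)
# Ingress route tables then point the web tier's traffic at this endpoint
# so packets traverse the appliance before reaching the web subnets.
```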

Create a Network Load Balancer in the application VPC public subnets to steer incoming traffic to the firewall appliance for inspection is not suitable because Network Load Balancer operates at Layer 4 and it does not provide transparent bump in the middle insertion or built in appliance scaling across VPCs.

Attach a Transit Gateway between the VPCs and update routes to send incoming traffic through the security VPC before the web tier is incorrect because Transit Gateway provides connectivity and routing but it does not perform packet inspection or offer appliance load balancing natively. Using it alone increases operational complexity while still failing to guarantee inspection of every inbound request.

AWS Network Firewall is a managed AWS service and not the third party Marketplace appliance the company must integrate. Selecting it would change the design intent and it would not meet the requirement to use the existing virtual firewall from AWS Marketplace.

When questions require third party appliance inspection across VPCs and minimal operational overhead prefer Gateway Load Balancer with GWLB endpoints for transparent service insertion and scaling.

A climate analytics startup plans to modernize its data processing stack into a fully serverless design. Historical sensor files are stored in an Amazon S3 bucket, and the team needs to run standard SQL on both the backlog and future ingestions. All objects must be encrypted at rest and automatically replicated to a different AWS Region to meet resilience and governance requirements. Which approach will satisfy these needs with the least ongoing operational effort?

  • ✓ C. Create a new S3 bucket using SSE-KMS with multi-Region keys, enable Cross-Region Replication, migrate data, and query with Amazon Athena

Create a new S3 bucket using SSE-KMS with multi-Region keys, enable Cross-Region Replication, migrate data, and query with Amazon Athena is correct because it delivers serverless SQL over S3 while providing KMS backed encryption and automated cross region copies to meet resilience and governance requirements with minimal operational effort.

Amazon Athena runs SQL directly over objects in S3 without provisioning or managing clusters which makes it ideal for querying both historical backlog and future ingestions. This serverless model reduces ongoing administration compared to running a data warehouse.
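
Once the data is in place, querying it is a single Athena call, sketched here with hypothetical database, table, and result bucket names:

```python
import boto3

athena = boto3.client("athena")

# The table would be defined in the Glue Data Catalog over the S3 prefix.
resp = athena.start_query_execution(
    QueryString="""
        SELECT sensor_id, avg(temperature_c) AS avg_temp
        FROM sensor_readings
        WHERE reading_date >= date '2024-01-01'
        GROUP BY sensor_id
    """,
    QueryExecutionContext={"Database": "climate_db"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
print(resp["QueryExecutionId"])
```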

SSE-KMS with multi-Region keys combined with Cross-Region Replication ensures objects are encrypted at rest and that key material and access controls are available consistently in both Regions. That setup supports compliance and auditability without manual key replication or custom tooling.

Enable Cross-Region Replication on the current S3 bucket, protect objects with AWS KMS multi-Region keys, run ETL in AWS Glue, and query using Amazon Redshift is not ideal because adding AWS Glue ETL and an Amazon Redshift warehouse increases operational overhead. Running ETL jobs and managing a warehouse is unnecessary when serverless SQL over S3 satisfies the use case.

Provision a new S3 bucket with SSE-S3, turn on Cross-Region Replication, load the data, and use Amazon Redshift Spectrum for SQL is unsuitable because SSE-S3 lacks KMS key controls and Redshift Spectrum still depends on a Redshift cluster. That combination raises administration compared to a fully serverless approach.

Enable Cross-Region Replication for the existing S3 bucket, use SSE-S3 for encryption, and query the data using Amazon Athena is close but may not meet governance that requires KMS based encryption and centralized key policies across Regions. Choosing KMS multi-Region keys gives stronger controls and auditability for compliance.

When a question asks for serverless SQL over S3 and strict cross region encryption favor Athena and SSE-KMS multi-Region keys to minimize operational work.

A sports streaming startup is about to release a fan-interaction feature that can experience sharp surges in traffic during championship broadcasts and major content drops. The backend runs on an Amazon Aurora PostgreSQL Serverless v2 cluster to absorb variable demand. The design must scale compute and storage performance to keep latency low and avoid bottlenecks when load spikes. The team is choosing a storage configuration and wants an option that grows automatically with traffic, optimizes I/O performance, and remains cost-efficient without manual provisioning or tuning. Which configuration should they choose?

  • ✓ B. Enable Aurora I/O-Optimized for the cluster to provide consistently low-latency, high-throughput I/O with flat pricing and no per-I/O fees

Enable Aurora I/O-Optimized for the cluster to provide consistently low-latency, high-throughput I/O with flat pricing and no per-I/O fees is the correct choice because it provides automatically scalable storage performance that keeps latency low during sharp traffic surges.

Aurora I/O-Optimized is purpose built for I/O intensive Aurora workloads and delivers predictable high throughput and low latency without per I/O charges. It integrates with Aurora Serverless v2 so compute and storage performance scale together, which avoids manual provisioning and reduces operational overhead while often lowering cost for spiky or sustained heavy I/O.
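
Enabling it on an existing cluster is a one line configuration change, sketched here with a placeholder cluster identifier:

```python
import boto3

rds = boto3.client("rds")

# Switch an existing Aurora cluster to I/O-Optimized storage;
# "aurora" is the value for the Standard tier, "aurora-iopt1" for I/O-Optimized.
rds.modify_db_cluster(
    DBClusterIdentifier="fan-interaction-cluster",
    StorageType="aurora-iopt1",
    ApplyImmediately=True,
)
```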

Configure the Aurora cluster to use General Purpose SSD (gp2) storage and rely on scaling database capacity to offset I/O limits is incorrect because Aurora does not expose a gp2 selection and gp2’s burst model does not suit sustained high or highly variable I/O.

Use Magnetic (Standard) storage to minimize baseline costs and depend on Aurora autoscaling to handle spikes is incorrect because magnetic storage is legacy and has much higher latency and lower throughput, so it is not appropriate for modern Aurora production workloads.

Select Provisioned IOPS (io1) and adjust IOPS manually ahead of anticipated peak events is incorrect because Aurora does not rely on manual io1 provisioning and manual IOPS tuning adds operational burden and cost without matching the automatic scaling and pricing benefits of the I/O optimized configuration.

When an Aurora workload is unpredictable and I/O heavy choose Aurora I/O-Optimized so storage IOPS scale automatically and you avoid per I/O fees.

The AWS Certified Solutions Architect Associate Book of Exam Questions & Answers, by Cameron McKenzie, is one of the most complete and thoughtfully designed resources for passing the AWS Certified Solutions Architect Associate exam.

It earns a full five stars, and I recommend it to anyone who wants to get certified with confidence. The content is technical but accessible, the voice is friendly and clear, and the material builds both knowledge and exam judgment in a steady and encouraging way.

AWS Solution Architect Exam Topics

The book maps closely to what you will face on test day. You practice how to weigh tradeoffs across security, resilience, performance, and cost in real scenarios that feel authentic.

The structure of the book helps you see how these concerns work together in practical designs.

  1. Design secure architectures with IAM, KMS, network boundaries, and data protection
  2. Design resilient architectures with multi AZ thinking, health checks, and graceful failure
  3. Design high performing architectures with the right compute, storage, and caching choices
  4. Design cost optimized architectures with smart storage classes, right sizing, and useful purchasing models

Every question comes with a full explanation. You learn why the correct option works, and you also learn why the other options do not.

This is the secret to exam success because it trains you to think like an architect rather than a guesser. When a prompt points to an Application Load Balancer, you also see why a Network Load Balancer or a Gateway Load Balancer would not satisfy the requirement. That back and forth reasoning builds a mental model that you can reuse across many questions.

The AWS Solutions Architect Certification Exam Book is a great resource to help you learn AWS and get AWS certified.

AWS Exam Tips Build Meta Skills

After each AWS Solution Architect exam question you get an Exam Tip that helps you think about how the exam is built. These short notes highlight cues in the wording that point to specific services or patterns. In the sample about classification results, the tip reminds you that language about true labels and predicted labels across all classes maps to a confusion matrix. That guidance becomes a reliable trigger you can trust when the clock is running.

Why This Structure Works

This structure works because pattern recognition grows as you see the same decision frames from different angles. Your error awareness improves as you begin to recognize the traps behind tempting wrong answers. At the same time, your confidence rises because you come to understand both the winning choice and the near miss.

This AWS Solutions Architect Book starts with foundation sets that reinforce essential ideas, then moves into longer scenario items that build stamina and focus. You feel steady progress as you advance, and you can measure your readiness without stress. By the final chapters you are reading quickly, filtering out noise, and locking onto the primary requirement with ease.

The value carries beyond the exam. The explanations mirror the tradeoffs you face on real teams. You learn how to choose storage classes that balance cost and durability, how to design VPC boundaries that promote least privilege, and how to mix caching and database choices for both speed and reliability. That practical voice makes the lessons stick.

Who Should Use This Book

The AWS Certified Solutions Architect Associate Book of Exam Questions & Answers is ideal for new architects who want a clear path to their first certification. It is equally helpful for experienced builders who are looking for a fast tune up before test day. Team leads will also find it useful as a shared study resource that reinforces good design habits across a group.

How To Get The Most From It

To make the most of this book, read the explanations even when you have chosen the correct answer because those insights deepen your understanding. Write down the reasons that rule out each incorrect option so you build the habit of eliminating distractors. Review the Exam Tip after each question and rephrase the pattern in your own words so it becomes part of your thinking. Finally, retake the practice sets until both your score and your confidence grow together.

Final Verdict

The AWS Solutions Architect Book delivers complete domain coverage, explanations that build real judgment, and Exam Tips that sharpen your instincts. It deserves five stars. It is friendly and effective, and it prepares you for the exam and for real solution design. I recommend it wholeheartedly for anyone who wants to get certified and move forward in cloud architecture.



Other AWS Certification Books

If you are interested in attaining an Amazon cert in another domain, check out the other AWS certification books in this series: