Certified Professional AWS Solutions Architect Exam Braindumps

AWS Professional Solutions Architect Exam Topics

Despite the title, this is not a Professional Solutions Architect Braindump in the traditional sense. I do not believe in cheating. The term “braindump” once referred to someone taking an exam, memorizing the questions, and posting them online for others to use. That approach is unethical, violates the AWS certification agreement, and prevents true learning.

This is not an exam dump or copied content. All of these questions come from my AWS Solutions Architect Professional Udemy course and from AWS Solutions Architect Professional Practice Questions available on certificationexams.pro.

Each question is crafted to align with the official AWS Certified Solutions Architect – Professional exam blueprint. The questions reflect the tone, logic, and complexity of real AWS scenarios, but none are taken from the actual test. These exercises help you learn to reason through architectural trade-offs, cost decisions, and governance strategies.

AWS Architect Exam Simulators

If you can answer these questions and understand why incorrect options are wrong, you will not only pass the real exam but also gain a deep understanding of how to design complex AWS solutions that balance performance, cost, and reliability.

Each scenario includes detailed explanations and realistic examples to help you think like an AWS architect. Practice using the AWS Solutions Architect Professional Exam Simulator and the Professional Solutions Architect Practice Test to build your timing, reasoning, and analytical skills.

Real AWS Architect Exam Questions

So if you choose to call this your Professional Solutions Architect Exam Dump, remember that every question is built to teach, not to cheat. Approach your preparation with integrity, focus, and curiosity. Success as a certified AWS Solutions Architect Professional comes from understanding architecture, not memorizing answers.


AWS Professional Solutions Architect Braindump Questions

Question 1

Orion Retail Group operates about 18 AWS accounts for its online storefronts and partner integrations. Each EC2 instance runs the unified CloudWatch agent and publishes logs to CloudWatch Logs. The company must consolidate all security events in a separate AWS account that is dedicated to immutable log retention. The network operations team needs to aggregate and correlate events from every account with near real time latency so analysts can respond within one minute. What is the best way to implement this cross account centralized ingestion?

  • ❏ A. Create an IAM role in every source account and run a Lambda function in the logging account every 45 minutes that assumes the role and exports each CloudWatch Logs group to an S3 bucket in the logging account

  • ❏ B. Create a CloudWatch Logs destination in the logging account that targets a Kinesis Data Firehose delivery stream and configure subscription filters on the security log groups in each source account to stream events to that destination and deliver them to an S3 bucket in the logging account

  • ❏ C. Configure CloudWatch Logs in each source account to forward events directly to CloudWatch Logs in the logging account and then subscribe a Kinesis Data Firehose stream to Amazon EventBridge to store the data in S3

  • ❏ D. Send the security events into Google Pub Sub and process them with Dataflow to BigQuery and then export the results to Amazon S3

Question 2

Sundial Apps runs a RESTful API on Amazon EC2 instances in an Auto Scaling group across three private subnets. An Application Load Balancer spans two public subnets and is configured as the only origin for an Amazon CloudFront distribution. The team needs to ensure that only CloudFront can reach the ALB and that direct access from the internet is blocked at the origin. What approach should the solutions architect choose to strengthen origin security?

  • ❏ A. Associate an AWS WAF web ACL to the ALB with an IP match that includes the published CloudFront service ranges and migrate the ALB into two private subnets

  • ❏ B. Enable AWS Shield Advanced and attach a security group policy that only permits CloudFront service addresses to the ALB

  • ❏ C. Store a random token in AWS Secrets Manager with automated rotation using AWS Lambda then have CloudFront pass the token in a custom origin header and enforce a header match rule in an AWS WAF web ACL that is associated to the ALB

  • ❏ D. Save a shared key in AWS Systems Manager Parameter Store with rotation configured then configure CloudFront to send the key as a custom header and add custom code on the target instances to check the header and drop requests without the key

Question 3

Riverbend Logistics plans to run its connected van telemetry platform on AWS to collect signals from roughly 9,000 vehicles every three seconds so it can update routes and estimated arrival times in near real time. The engineering team requires a fully serverless design that scales automatically without any shard or instance capacity to size or tune and the team does not want to intervene during traffic spikes. As the AWS Certified Solutions Architect Professional advising this initiative, which approach should they implement?

  • ❏ A. Stream the data into Amazon Kinesis Data Firehose and configure it to deliver records directly into an Amazon DynamoDB table

  • ❏ B. Publish the telemetry to an Amazon SNS topic that invokes an AWS Lambda function which stores items in an Amazon DynamoDB table

  • ❏ C. Send messages to an Amazon SQS standard queue and have an AWS Lambda function process batches and write them to an auto scaled Amazon DynamoDB table

  • ❏ D. Ingest the telemetry into an Amazon Kinesis Data Stream and use a consumer application on Amazon EC2 to read the stream and store data in an Amazon DynamoDB table

Question 4

EduNova at example.com plans to distribute private downloads and a subscriber only library using Amazon CloudFront. Only newly registered customers who have paid for a 12 month plan should be allowed to fetch the desktop installer, and only active subscribers should be able to view any files in the members area. What should the architect implement to enforce these restrictions while keeping delivery efficient with CloudFront? (Choose 2)

  • ❏ A. Configure CloudFront signed cookies to authorize access to all content in the members area

  • ❏ B. Cloud CDN signed URLs

  • ❏ C. Configure CloudFront signed cookies to authorize access to the single installer object

  • ❏ D. Configure CloudFront signed URLs to protect the installer download

  • ❏ E. Configure CloudFront signed URLs for every object in the members area

Question 5

The platform engineering team at Riverbend Manufacturing plans to move several hundred virtual machines from two colocation facilities into AWS, and they must inventory their workloads, map service dependencies, and produce a consolidated assessment report. They have already initiated a Migration Evaluator engagement and they are allowed to install collection software on all VMs. Which approach will deliver the required insights with the least operational effort?

  • ❏ A. Configure the AWS Application Discovery Service Agentless Collector in the data centers. After a 30 day collection window, use AWS Migration Hub to inspect dependency maps. Export the server inventory and upload it to Migration Evaluator to generate the Quick Insights assessment

  • ❏ B. Install the AWS Application Discovery Agent on every on-premises VM. After a 30 day collection window, use AWS Migration Hub to view application dependencies. Download the Quick Insights assessment report directly from Migration Hub

  • ❏ C. Deploy the Migration Evaluator Collector to all VMs. When the 30 day collection completes, use Migration Evaluator to review discovered servers and dependencies. Export the inventory to Amazon QuickSight and then download the Quick Insights assessment from the generated dashboard

  • ❏ D. Set up the Migration Evaluator Collector in the environment and also install the AWS Application Discovery Agent on each VM. After the 30 day run, use AWS Migration Hub for dependency visualization and retrieve the Quick Insights assessment from Migration Evaluator

Question 6

A fintech startup named LumaPay lets customers submit high resolution receipt photos from a mobile app to validate cashback offers. The app stores images in an Amazon S3 bucket in the us-east-2 Region. The business recently launched across several European countries and those users report long delays when sending images from their phones. What combination of changes should a Solutions Architect implement to speed up the image upload experience for these users? (Choose 2)

  • ❏ A. Create an Amazon CloudFront distribution with the S3 bucket as the origin

  • ❏ B. Update the mobile app to use Amazon S3 multipart upload

  • ❏ C. Change the bucket storage class to S3 Intelligent-Tiering

  • ❏ D. Enable Amazon S3 Transfer Acceleration on the bucket

  • ❏ E. Provision an AWS Direct Connect link from Europe to the us-east-2 Region

Question 7

NorthPeak Lending runs its containerized platform on Amazon ECS with Amazon API Gateway in front and stores relational data in Amazon Aurora and key value data in Amazon DynamoDB. The team provisions with the AWS CDK and releases through AWS CodePipeline. The company requires an RPO of 90 minutes and an RTO of 3 hours for a regional outage while keeping spend as low as possible. Which approach should the architects implement to meet these goals?

  • ❏ A. Use AWS Database Migration Service for Aurora replication and use DynamoDB Streams with Amazon EventBridge and AWS Lambda for DynamoDB replication to a second Region, deploy API Gateway Regional endpoints in both Regions, and configure Amazon Route 53 failover to move traffic during a disaster

  • ❏ B. Create an Aurora global database and enable DynamoDB global tables in a secondary Region, deploy API Gateway Regional endpoints in each Region, and use Amazon Route 53 failover routing to shift clients to the standby Region during an outage

  • ❏ C. Configure AWS Backup to copy Aurora and DynamoDB backups into a secondary Region, deploy API Gateway Regional endpoints in both Regions, and use Amazon Route 53 failover to direct users to the secondary Region when needed

  • ❏ D. Create an Aurora global database and DynamoDB global tables to a second Region, deploy API Gateway Regional endpoints in each Region, and place Amazon CloudFront in front with origin failover to route users to the secondary Region during an event

Question 8

Orbit Finance must decommission its on-premises server room on short notice and urgently move its datasets to AWS. The facility has a 1.5 Gbps internet connection and a 700 Mbps AWS Direct Connect link. The team needs to transfer 28 TB of files into a new Amazon S3 bucket. Which approach will finish the transfer in the shortest time?

  • ❏ A. Use AWS DataSync to move the files into the S3 bucket

  • ❏ B. Load the data onto a 100 TB AWS Snowball and return the device for import

  • ❏ C. Enable Amazon S3 Transfer Acceleration on the bucket and upload over the internet

  • ❏ D. Send the data over the existing AWS Direct Connect link to S3

Question 9

Arcadia Retail Group plans to retire its on-premises hardware so it can shift teams to machine learning initiatives and customer personalization. As part of this modernization the company needs to archive roughly 8.5 PB of data from its primary data center into durable long term storage on AWS with the fastest migration and the most cost effective outcome. As a Solutions Architect Professional what approach should you recommend to move and store this data?

  • ❏ A. Transfer the on-premises data into a Snowmobile and import it into Amazon S3 then apply a lifecycle policy to transition the objects to S3 Glacier

  • ❏ B. Load the on-premises data onto multiple Snowball Edge Storage Optimized devices then copy it into Amazon S3 and use a lifecycle policy to transition the data to S3 Glacier

  • ❏ C. Transfer the on-premises data into a Snowmobile and import it directly into S3 Glacier

  • ❏ D. Load the on-premises data onto multiple Snowball Edge Storage Optimized devices and import it directly into S3 Glacier

Question 10

FerroTech Media runs many AWS accounts under AWS Organizations and wants to keep Amazon EC2 costs from spiking without warning. The cloud finance team needs an automatic alert whenever any account shows EC2 usage or spend that rises beyond a threshold based on recent behavior. If EC2 consumption or charges increase by more than 20% compared to the 60 day rolling average then the team wants a daily notification. The solution must function across all member accounts with minimal ongoing effort. Which approach will best satisfy these needs?

  • ❏ A. Publish EC2 instance hour counts as Amazon CloudWatch custom metrics in every account and build management account alarms for deviations from trend

  • ❏ B. Enable AWS Cost Anomaly Detection for a linked account group that covers the organization and configure a service monitor for Amazon EC2 with daily emails when anomalies exceed 20% of the 60 day average

  • ❏ C. Ingest AWS Cost and Usage Report into Amazon S3 and query with Amazon Athena on a daily schedule to compare EC2 costs to a 60 day average and then publish alerts to Amazon SNS

  • ❏ D. Create AWS Budgets in each member account with fixed EC2 spend limits and send notifications through AWS Budgets alerts

Question 11

Ravenwood Analytics is introducing client-side encryption for files that will be stored in a new Amazon S3 bucket. The engineers created a customer managed key in AWS Key Management Service to support the encryption workflow. They attached the following IAM policy to the role that the uploader uses. { "Version": "2012-10-17", "Id": "key-policy-2", "Statement": [ { "Sid": "GetPut", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject" ], "Resource": "arn:aws:s3:::ravenwood-uploads-east/*" }, { "Sid": "KMS", "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:Encrypt" ], "Resource": "arn:aws:kms:us-east-2:444455556666:key/keyid-90210" } ] } Test downloads from the bucket worked, but every attempt to upload a new object failed with an AccessDenied error that reported the action was forbidden. Which additional IAM action must be added to this policy so that client-side encrypted uploads can succeed?

  • ❏ A. kms:GetKeyPolicy

  • ❏ B. Cloud KMS

  • ❏ C. kms:GenerateDataKey

  • ❏ D. kms:GetPublicKey

Question 12

BrightLeaf Media runs an image processing backend on AWS and wants to lower costs and reduce operational effort while keeping the environment secure. The VPC spans two Availability Zones with both public and private subnets. Amazon EC2 instances in the private subnets host the application behind an Application Load Balancer located in the public subnets. The instances currently reach the internet through two NAT gateways, and about 900 GB of new images are stored in Amazon S3 each day. What should a solutions architect do to meet these goals without weakening security?

  • ❏ A. Create Amazon S3 interface VPC endpoints in each Availability Zone and update the route tables for the private subnets to use these endpoints

  • ❏ B. Relocate the EC2 instances into the public subnets and remove the NAT gateways

  • ❏ C. Create an Amazon S3 gateway VPC endpoint in the VPC and apply an endpoint policy that allows only the required S3 actions for the bucket

  • ❏ D. Use Auto Scaling NAT instances in place of the NAT gateways and point the private subnet routes at the instances

Question 13

Cedar Peak Engineering is creating a disaster recovery plan for a mission critical Windows application that runs in its data center. About 250 Windows servers access a shared SMB file repository and the business mandates an RTO of 12 minutes and an RPO of 4 minutes, and operations expects native failover and straightforward failback. Which approach delivers these goals in the most cost effective way?

  • ❏ A. Use AWS Application Migration Service to replicate the on premises servers and place the shared files in Amazon S3 with AWS DataSync behind AWS Storage Gateway File Gateway, then update DNS to route clients to AWS during a disaster and copy data back when returning to on premises

  • ❏ B. Set up AWS Elastic Disaster Recovery for the Windows servers and use AWS DataSync to replicate the SMB data to Amazon FSx for Windows File Server, then fail over the servers to AWS during an event and use Elastic Disaster Recovery to fail back to new or existing on premises hosts

  • ❏ C. Build infrastructure templates with AWS CloudFormation and replicate all file data to Amazon Elastic File System using AWS DataSync, then deploy the stack during an incident with a pipeline and synchronize back afterward

  • ❏ D. Deploy AWS Storage Gateway File Gateway and schedule nightly backups of the Windows servers to Amazon S3, then restore servers from those backups during an outage and run temporary instances on Amazon EC2 during failback

Question 14

HarborPoint Analytics has adopted a hybrid work model and needs to provide employees with secure remote access to internal services that run in five AWS accounts. The VPCs are already connected using existing VPC peering and some corporate resources are reachable through an AWS Site-to-Site VPN. The architect must deploy an AWS Client VPN that scales and keeps ongoing cost low while enabling access across the peered VPCs. What is the most cost-effective approach?

  • ❏ A. Provision a transit gateway for all VPCs and place a Client VPN endpoint in the shared services account that forwards traffic through the transit gateway

  • ❏ B. Integrate a Client VPN endpoint with AWS Cloud WAN and attach all VPCs to the core network

  • ❏ C. Create a Client VPN endpoint in the shared services account and advertise routes over the existing VPC peering to reach applications in other accounts

  • ❏ D. Deploy a Client VPN endpoint in each AWS account and configure routes to the application subnets

Question 15

The platform engineering team at Aurora Digital is building an Amazon EKS cluster to run an event driven thumbnail rendering service. The workload uses ephemeral stateless pods that can surge from about 30 to more than 600 replicas within minutes during traffic spikes. They want a configuration that most improves node resilience and limits the blast radius if an Availability Zone experiences an outage. What should they implement?

  • ❏ A. Consolidate node groups and switch to larger instance sizes to run more pods per node

  • ❏ B. Google Kubernetes Engine

  • ❏ C. Apply Kubernetes topology spread constraints keyed on Availability Zone so replicas are evenly distributed across zones

  • ❏ D. Configure the Kubernetes Cluster Autoscaler to keep capacity slightly underprovisioned during spikes

Question 16

A retail technology firm named Alder Cove Systems is moving its workloads to AWS and needs a multi account plan. There are five product squads and each wants strict isolation from the others. The Finance department needs clear chargeback so that costs and usage are separated by squad. The Security team wants centralized oversight with least privilege access and the ability to set preventive guardrails across environments. What account strategy should the company adopt to meet these requirements?

  • ❏ A. Use AWS Control Tower to set up the landing zone and keep a single shared workload account for all squads while using cost allocation tags for billing and rely on guardrails for governance

  • ❏ B. Use AWS Organizations to establish a management account then provision a dedicated account for each squad and create a separate security tooling account with cross account access and apply service control policies to all workload accounts and have the security team write IAM policies that grant least privilege

  • ❏ C. Create separate AWS accounts for each squad and set the security account as the management account and enable consolidated billing and allow the security team to administer other accounts through a cross account role

  • ❏ D. Create a single AWS account and use Active Directory federation for access and rely on resource tags to split billing by team and manage permissions with IAM policies that grant only the required access

Question 17

The engineering group at BrightWave Logistics is launching a relational backend that must support cross Region disaster recovery. The business requires an RPO below 4 minutes and an RTO below 12 minutes for approximately 12 TB of data while keeping costs as low as possible. Which approach will meet these targets at the lowest cost?

  • ❏ A. Provision Amazon Aurora DB clusters in two Regions and use AWS Database Migration Service to stream ongoing changes into the secondary cluster

  • ❏ B. Deploy Amazon RDS with Multi AZ in one Region and rely on the automatic failover capability during an outage

  • ❏ C. Run Amazon RDS in a primary Region with a cross Region read replica and plan to promote the replica to primary during a Regional disruption

  • ❏ D. Use Amazon Aurora Global Database with a writer in the primary Region and a reader in a secondary Region to enable rapid cross Region recovery

Question 18

HarborTech Labs plans to move its on premises file processing system to AWS. Customers upload files through a web portal at example.com and the files are currently kept on a network file share. A backend worker fleet reads tasks from a queue to process each file and individual jobs can run for up to 50 minutes. Traffic spikes during weekday business hours and is quiet overnight and on weekends. Which migration approach would be the most cost effective while fulfilling these needs?

  • ❏ A. Use Amazon SQS for the queue and have the existing web tier publish messages then trigger AWS Lambda to process each file and store results in Amazon S3

  • ❏ B. Use Amazon MQ for the queue and modify the web tier to publish messages then spin up an Amazon EC2 instance when messages arrive to process files and save results on Amazon EFS and stop the instance when done

  • ❏ C. Use Amazon SQS for the queue and have the web tier publish messages then run Amazon EC2 instances in an Auto Scaling group that scales on SQS queue depth to process files and store results in Amazon S3

  • ❏ D. Use Amazon MQ for the queue and have the web tier publish messages then trigger AWS Lambda to process each file and write outputs to Amazon EFS

Question 19

The operations group at Orion Metrics runs a licensed application on Amazon EC2 that stores shared files on an Amazon EFS file system that is encrypted with AWS KMS. The file system is protected by AWS Backup with the default backup plan. The business now requires a recovery point objective of 90 minutes for these files. What should a solutions architect change to meet this objective while keeping encryption in place?

  • ❏ A. Create a new backup plan and update the KMS key policy to allow the AWSServiceRoleForBackup service role to use the key, then run a backup every 45 minutes by using a custom cron expression

  • ❏ B. Use the existing backup plan, update the KMS key policy to allow the AWSServiceRoleForBackup role to use the key, and enable cross Region replication for the EFS file system

  • ❏ C. Create a dedicated IAM role for backups and a new backup plan, update the KMS key policy to permit that role to use the key, and schedule backups every hour

  • ❏ D. Create a new IAM role, keep the current backup plan, update the KMS key policy to allow the new role to use the key, and enable continuous backups for point in time recovery

Question 20

The platform group at SkyVertex Systems reports a rise in errors on PUT operations against their public REST API. Logs indicate that one client is sending large request bursts that exhaust capacity. They want to protect other users and keep responses user friendly while avoiding changes to backend code. What should the solutions architect recommend?

  • ❏ A. Attach AWS WAF to the API Gateway and create a rate based rule that limits bursts from a single source

  • ❏ B. Configure API Gateway usage plans with per key throttling limits and have the client handle HTTP 429 responses gracefully

  • ❏ C. Set reserved concurrency on the Lambda integration to handle sudden spikes

  • ❏ D. Enable API caching on the production stage and run 20 minute load tests to tune cache capacity

Question 21

PixelForge Studios plans to launch a real time multiplayer quiz application on AWS for internet users. The service will run on one Amazon EC2 instance and clients will connect using UDP. Leadership requires a highly secure architecture while keeping the design simple. As the Solutions Architect, what actions should you implement? (Choose 3)

  • ❏ A. Deploy AWS Global Accelerator with an Elastic Load Balancer as the endpoint

  • ❏ B. Place a Network Load Balancer in front of the EC2 instance and create a Route 53 record game.example.com that resolves to the NLB Elastic IP address

  • ❏ C. Create AWS WAF rules to drop any non UDP traffic and attach them to the load balancer

  • ❏ D. Enable AWS Shield Advanced on all internet facing resources

  • ❏ E. Use an Application Load Balancer in front of the instance and publish a friendly DNS name in Amazon Route 53 that aliases to the ALB public name

  • ❏ F. Configure subnet network ACLs to deny all protocols except UDP and associate them to the subnets that contain the load balancer nodes

Question 22

A regional home goods retailer named BayTrail Living runs its shopping site on three Amazon EC2 instances behind an Application Load Balancer, and the application stores order data in an Amazon DynamoDB table named OrdersProd. Traffic surges during quarterly flash sales and throughput on reads and writes degrades at the busiest moments. What change will provide a scalable architecture that rides through peaks with the least development effort?

  • ❏ A. Add DynamoDB Accelerator DAX and keep the existing EC2 fleet and ALB

  • ❏ B. Replatform the web tier to AWS Lambda and increase provisioned read capacity and write capacity for the DynamoDB table

  • ❏ C. Create Auto Scaling groups for the web tier and enable DynamoDB auto scaling

  • ❏ D. Create Auto Scaling groups for the web tier and add Amazon SQS with a Lambda function to batch writes into DynamoDB

Question 23

HarborPeak Logistics is moving a dual-tier web platform from its on-premises environment into AWS. The team will use Amazon Aurora PostgreSQL-Compatible Edition, EC2 Auto Scaling, and an Elastic Load Balancer to support a rapidly expanding audience. The application is stateful because it keeps session data in memory and users expect consistent interactions during traffic spikes. Which approach will ensure session consistency while allowing both the application tier and the database tier to scale?

  • ❏ A. Enable Aurora Replicas auto scaling and use a Network Load Balancer configured with least outstanding requests and stickiness

  • ❏ B. Enable Aurora Replicas auto scaling and place an Application Load Balancer in front with round robin routing and sticky sessions turned on

  • ❏ C. Turn on auto scaling for Aurora writers and use a Network Load Balancer with least outstanding requests and stickiness

  • ❏ D. Turn on auto scaling for Aurora writers and use an Application Load Balancer with round robin routing and sticky sessions

Question 24

Ridgeview Analytics operates an internal reporting tool that writes CSV exports to an Amazon S3 bucket. The files contain confidential information and are normally accessed only by the company’s IAM users. The team needs to share one specific CSV file with an external auditor for a 36 hour review. A solutions architect used an IAM user to call PutObjectAcl to add a public read ACL to that object, but the request returned “AccessDenied”. What is the most likely reason this operation failed?

  • ❏ A. The bucket has the BlockPublicPolicy setting turned on

  • ❏ B. S3 Object Lock in compliance mode is enabled for the object version

  • ❏ C. The bucket is configured with BlockPublicAcls enabled

  • ❏ D. The IAM user is not listed on the object ACL with write ACL permission

Question 25

RiverStone Capital uses AWS Control Tower and needs to apply cost governance across more than 320 developer accounts inside a Sandbox organizational unit. The company wants to require burstable EC2 and RDS instance classes and to block services that do not apply to their workloads. What should a solutions architect propose?

  • ❏ A. Define a custom detective guardrail in AWS Control Tower that flags non burstable instance launches and disallowed services and apply it to the Sandbox OU

  • ❏ B. Use Google Cloud Organization Policy constraints to restrict machine types and services across development projects

  • ❏ C. Craft a Service Control Policy in AWS Organizations that permits only burstable EC2 and RDS instance families and denies nonessential services and attach it to the Sandbox OU

  • ❏ D. Implement a custom preventive guardrail in AWS Control Tower that enforces only burstable EC2 and RDS instance types and blocks nonapproved services and enable it on the Sandbox OU

AWS Solutions Architect Professional Exam Dump Answers

Question 1

Orion Retail Group operates about 18 AWS accounts for its online storefronts and partner integrations. Each EC2 instance runs the unified CloudWatch agent and publishes logs to CloudWatch Logs. The company must consolidate all security events in a separate AWS account that is dedicated to immutable log retention. The network operations team needs to aggregate and correlate events from every account with near real time latency so analysts can respond within one minute. What is the best way to implement this cross account centralized ingestion?

  • ✓ B. Create a CloudWatch Logs destination in the logging account that targets a Kinesis Data Firehose delivery stream and configure subscription filters on the security log groups in each source account to stream events to that destination and deliver them to an S3 bucket in the logging account

The correct choice is Create a CloudWatch Logs destination in the logging account that targets a Kinesis Data Firehose delivery stream and configure subscription filters on the security log groups in each source account to stream events to that destination and deliver them to an S3 bucket in the logging account.

This design uses CloudWatch Logs subscription filters to stream events continuously from every source account with near real time latency. A destination in the logging account with an appropriate resource policy enables cross account delivery. Kinesis Data Firehose then buffers, compresses and encrypts the data and delivers it to Amazon S3 in the logging account. Firehose buffering can be tuned to small sizes or short intervals which supports the one minute response requirement. S3 provides durable centralized storage and can be configured for immutable retention with Object Lock if needed.
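
For readers who want to see how the pieces connect, here is a minimal boto3 sketch of the cross account wiring. The delivery stream name, IAM role ARNs, account IDs, and log group name are illustrative assumptions rather than values from the scenario.

```python
import json
import boto3

# In the logging account: create a CloudWatch Logs destination that targets the
# Firehose delivery stream, then allow source accounts to subscribe to it.
logs_central = boto3.client("logs", region_name="us-east-1")
dest = logs_central.put_destination(
    destinationName="security-events",
    targetArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/security-to-s3",
    roleArn="arn:aws:iam::111122223333:role/CWLtoFirehoseRole",
)["destination"]

logs_central.put_destination_policy(
    destinationName="security-events",
    accessPolicy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "444455556666"},  # repeat for each source account
            "Action": "logs:PutSubscriptionFilter",
            "Resource": dest["arn"],
        }],
    }),
)

# In each source account: stream the security log group to that destination.
logs_source = boto3.client("logs", region_name="us-east-1")
logs_source.put_subscription_filter(
    logGroupName="/security/events",
    filterName="to-central-logging",
    filterPattern="",  # empty pattern forwards every event
    destinationArn=dest["arn"],
)
```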

Create an IAM role in every source account and run a Lambda function in the logging account every 45 minutes that assumes the role and exports each CloudWatch Logs group to an S3 bucket in the logging account is not suitable because CloudWatch Logs exports are batch operations and are not near real time. They introduce significant delay and operational overhead across many accounts.

Configure CloudWatch Logs in each source account to forward events directly to CloudWatch Logs in the logging account and then subscribe a Kinesis Data Firehose stream to Amazon EventBridge to store the data in S3 is incorrect because CloudWatch Logs does not forward directly to another Logs group. Subscription filters deliver only to Kinesis Data Streams, Kinesis Data Firehose or Lambda. Firehose is not subscribed to EventBridge in this way and this path would not provide the required streaming flow.

Send the security events into Google Pub Sub and process them with Dataflow to BigQuery and then export the results to Amazon S3 is not appropriate because it introduces a different cloud platform without any benefit for this requirement. It adds latency, cost and complexity and does not align with native AWS cross account streaming.

When you see a requirement to centralize CloudWatch Logs across many accounts with near real time delivery to S3, look for subscription filters to a cross account destination that targets Kinesis Data Firehose.

Question 2

Sundial Apps runs a RESTful API on Amazon EC2 instances in an Auto Scaling group across three private subnets. An Application Load Balancer spans two public subnets and is configured as the only origin for an Amazon CloudFront distribution. The team needs to ensure that only CloudFront can reach the ALB and that direct access from the internet is blocked at the origin. What approach should the solutions architect choose to strengthen origin security?

  • ✓ C. Store a random token in AWS Secrets Manager with automated rotation using AWS Lambda then have CloudFront pass the token in a custom origin header and enforce a header match rule in an AWS WAF web ACL that is associated to the ALB

The correct option is Store a random token in AWS Secrets Manager with automated rotation using AWS Lambda then have CloudFront pass the token in a custom origin header and enforce a header match rule in an AWS WAF web ACL that is associated to the ALB.

This approach ensures that only requests carrying a secret header value reach the Application Load Balancer because CloudFront is configured to add the header on every origin request. You attach an AWS WAF web ACL to the ALB and create a rule that allows traffic only when the expected header and value are present, which blocks direct internet requests that do not include the secret. CloudFront natively supports adding custom headers to origin requests and AWS WAF can evaluate HTTP headers at the ALB. Managing the secret in AWS Secrets Manager with rotation through Lambda keeps the token fresh without manual updates which strengthens the security posture while remaining operationally sound.
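
As a rough illustration, the enforcement rule can be expressed as a WAFv2 byte match on the custom header. The header name, secret ID, and Region below are placeholder assumptions, and the resulting rule would be supplied to a web ACL whose default action blocks requests before the ACL is associated with the ALB.

```python
import boto3

# Read the rotating token that CloudFront injects as a custom origin header.
secrets = boto3.client("secretsmanager", region_name="us-east-1")
token = secrets.get_secret_value(SecretId="cf-origin-secret")["SecretString"]

# WAFv2 rule: allow only requests whose x-origin-verify header matches the token.
header_match_rule = {
    "Name": "AllowOnlyCloudFrontHeader",
    "Priority": 0,
    "Statement": {
        "ByteMatchStatement": {
            "SearchString": token.encode(),
            "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
            "PositionalConstraint": "EXACTLY",
        }
    },
    "Action": {"Allow": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowOnlyCloudFrontHeader",
    },
}
# Pass this rule to wafv2 create_web_acl with Scope="REGIONAL" and a Block default
# action, then associate the web ACL with the ALB.
```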

Associate an AWS WAF web ACL to the ALB with an IP match that includes the published CloudFront service ranges and migrate the ALB into two private subnets is incorrect because placing the ALB in private subnets would prevent CloudFront from reaching it since CloudFront connects to origins over the public internet. Relying on IP match rules for CloudFront address ranges is also brittle and operationally heavy as address ranges change.

Enable AWS Shield Advanced and attach a security group policy that only permits CloudFront service addresses to the ALB is incorrect because Shield Advanced provides DDoS protections and visibility but it does not manage or attach security group policies, and it cannot be used to restrict origin access to only CloudFront.

Save a shared key in AWS Systems Manager Parameter Store with rotation configured then configure CloudFront to send the key as a custom header and add custom code on the target instances to check the header and drop requests without the key is incorrect because Parameter Store does not provide native secret rotation like Secrets Manager and enforcing the check in application code allows unwanted traffic to reach the instances before being rejected which is less secure and less efficient than blocking at the ALB with AWS WAF.

When CloudFront must be the only client for an ALB, prefer a secret custom header added by CloudFront and enforced by an ALB-attached AWS WAF rule. Verify whether the origin must remain publicly reachable since CloudFront requires a public origin for ALB targets.

Question 3

Riverbend Logistics plans to run its connected van telemetry platform on AWS to collect signals from roughly 9,000 vehicles every three seconds so it can update routes and estimated arrival times in near real time. The engineering team requires a fully serverless design that scales automatically without any shard or instance capacity to size or tune and the team does not want to intervene during traffic spikes. As the AWS Certified Solutions Architect Professional advising this initiative, which approach should they implement?

  • ✓ C. Send messages to an Amazon SQS standard queue and have an AWS Lambda function process batches and write them to an auto scaled Amazon DynamoDB table

The correct option is Send messages to an Amazon SQS standard queue and have an AWS Lambda function process batches and write them to an auto scaled Amazon DynamoDB table.

This choice is fully serverless and requires no shard or instance capacity to size or tune. SQS absorbs bursty traffic and provides buffering and backpressure so Lambda scales concurrency automatically as queue depth grows and slows when traffic subsides. Batch processing improves throughput and cost efficiency while retries and dead letter queues increase resilience with no operator intervention. Using DynamoDB with on demand capacity or auto scaling removes the need to pre provision write capacity so the database scales automatically with the ingestion rate. This meets the near real time requirement for frequent telemetry updates while keeping operations hands off during spikes.
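
A minimal sketch of the consumer follows, assuming each SQS message body is a JSON telemetry reading and that the table name and attribute names are placeholders.

```python
import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("VanTelemetry")  # hypothetical table using on demand capacity

def handler(event, context):
    # Lambda invokes this with a batch of SQS records and scales concurrency with queue depth.
    with table.batch_writer() as batch:
        for record in event["Records"]:
            reading = json.loads(record["body"], parse_float=Decimal)  # DynamoDB needs Decimal, not float
            batch.put_item(Item={
                "vehicle_id": reading["vehicle_id"],
                "ts": reading["timestamp"],
                "telemetry": reading,
            })
    return {"processed": len(event["Records"])}
```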

Stream the data into Amazon Kinesis Data Firehose and configure it to deliver records directly into an Amazon DynamoDB table is incorrect because Firehose does not support DynamoDB as a delivery destination. Even if it did, the requirement to write straight into DynamoDB would not be met by this service.

Publish the telemetry to an Amazon SNS topic that invokes an AWS Lambda function which stores items in an Amazon DynamoDB table is not the best fit because SNS is push based and lacks buffering and backpressure. During spikes it can drive very high Lambda concurrency and overwhelm DynamoDB which often requires manual controls to avoid throttling.

Ingest the telemetry into an Amazon Kinesis Data Stream and use a consumer application on Amazon EC2 to read the stream and store data in an Amazon DynamoDB table is not fully serverless because it relies on EC2 instances to consume the stream. In addition, Kinesis Data Streams commonly requires shard capacity planning and tuning which conflicts with the requirement to avoid managing shards.

When a question emphasizes fully serverless with no shards to manage and no operator action during spikes, favor patterns that provide buffering and backpressure such as SQS triggering Lambda and pair them with DynamoDB on demand capacity.

Question 4

EduNova at example.com plans to distribute private downloads and a subscriber only library using Amazon CloudFront. Only newly registered customers who have paid for a 12 month plan should be allowed to fetch the desktop installer, and only active subscribers should be able to view any files in the members area. What should the architect implement to enforce these restrictions while keeping delivery efficient with CloudFront? (Choose 2)

  • ✓ A. Configure CloudFront signed cookies to authorize access to all content in the members area

  • ✓ D. Configure CloudFront signed URLs to protect the installer download

The correct options are Configure CloudFront signed URLs to protect the installer download and Configure CloudFront signed cookies to authorize access to all content in the members area.

Using the first option for the installer lets the application issue a time limited link only after confirming the customer is newly registered and has paid for the 12 month plan. This ensures precise per object control with expirations and optional constraints while still letting CloudFront cache and deliver efficiently.
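
As an illustration of the installer flow, a backend can mint the time limited link with botocore's CloudFrontSigner after verifying the purchase. The key pair ID, private key file, distribution domain, and object path are assumptions for this sketch, not values from the scenario.

```python
import datetime

import rsa  # the PyPI "rsa" package; any RSA signing implementation works
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # CloudFront signed URLs use RSA SHA-1 signatures with the key pair's private key.
    with open("private_key.pem", "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("K2EXAMPLEKEYPAIRID", rsa_signer)
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/downloads/installer.exe",
    date_less_than=datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=24),
)
# Return signed_url to the paying customer only after the subscription check passes.
```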

Using the second option for the members area lets a single authorization grant cover many objects under the protected paths. This avoids creating and managing a separate link for every file and it keeps delivery efficient because the edge can continue to serve cached content to authorized subscribers.

Cloud CDN signed URLs is incorrect because it refers to a Google Cloud service while the scenario uses Amazon CloudFront, so it does not apply to this distribution.

Configure CloudFront signed cookies to authorize access to the single installer object is not the best choice because cookies are better for granting access to multiple objects, and per object downloads are more cleanly controlled with a single signed link.

Configure CloudFront signed URLs for every object in the members area would be operationally heavy and error prone because it requires generating and attaching a link for every file and it does not scale as well as a single cookie that authorizes access across the protected paths.

Match the mechanism to the scope of access. Use signed URLs for a single file or a small set and use signed cookies when you need to authorize many files under common paths. Watch for provider mismatches such as choosing a GCP feature when the scenario specifies AWS.

Question 5

The platform engineering team at Riverbend Manufacturing plans to move several hundred virtual machines from two colocation facilities into AWS, and they must inventory their workloads, map service dependencies, and produce a consolidated assessment report. They have already initiated a Migration Evaluator engagement and they are allowed to install collection software on all VMs. Which approach will deliver the required insights with the least operational effort?

  • ✓ B. Install the AWS Application Discovery Agent on every on-premises VM. After a 30 day collection window, use AWS Migration Hub to view application dependencies. Download the Quick Insights assessment report directly from Migration Hub

The correct option is Install the AWS Application Discovery Agent on every on-premises VM. After a 30 day collection window, use AWS Migration Hub to view application dependencies. Download the Quick Insights assessment report directly from Migration Hub. This path collects the detailed host and network telemetry needed for dependency mapping and exposes the Quick Insights report in Migration Hub, which satisfies the inventory, dependency, and assessment requirements with minimal overhead.

The Application Discovery Agent gathers process and network connection data in addition to system and performance metrics. Migration Hub uses that agent data to visualize server to server dependencies so the team can map services accurately. Because they already have a Migration Evaluator engagement, Migration Hub can surface the Quick Insights assessment directly without extra export or transform steps, which reduces operational effort while producing the consolidated report they need.

Configure the AWS Application Discovery Service Agentless Collector in the data centers. After a 30 day collection window, use AWS Migration Hub to inspect dependency maps. Export the server inventory and upload it to Migration Evaluator to generate the Quick Insights assessment is incorrect because the agentless collector does not capture the process and network flow details required for dependency maps. It also adds unnecessary steps to export and upload data when Quick Insights can be accessed in Migration Hub once agent data is available.

Deploy the Migration Evaluator Collector to all VMs. When the 30 day collection completes, use Migration Evaluator to review discovered servers and dependencies. Export the inventory to Amazon QuickSight and then download the Quick Insights assessment from the generated dashboard is incorrect because Migration Evaluator focuses on cost and right sizing insights and does not build application dependency maps. Exporting to Amazon QuickSight is not required to obtain Quick Insights and would add avoidable work.

Set up the Migration Evaluator Collector in the environment and also install the AWS Application Discovery Agent on each VM. After the 30 day run, use AWS Migration Hub for dependency visualization and retrieve the Quick Insights assessment from Migration Evaluator is incorrect because running two collectors increases operational effort without adding necessary capability. With the agent in place, Migration Hub already provides dependency visualization and can present the Quick Insights assessment directly.

When a question emphasizes dependency mapping for on premises servers choose the Application Discovery Agent because it collects process and network connection data that the agentless collector does not. For least operational effort avoid stacking multiple collectors when one satisfies all stated requirements.

Question 6

A fintech startup named LumaPay lets customers submit high resolution receipt photos from a mobile app to validate cashback offers. The app stores images in an Amazon S3 bucket in the us-east-2 Region. The business recently launched across several European countries and those users report long delays when sending images from their phones. What combination of changes should a Solutions Architect implement to speed up the image upload experience for these users? (Choose 2)

  • ✓ B. Update the mobile app to use Amazon S3 multipart upload

  • ✓ D. Enable Amazon S3 Transfer Acceleration on the bucket

The correct options are Update the mobile app to use Amazon S3 multipart upload and Enable Amazon S3 Transfer Acceleration on the bucket.

Update the mobile app to use Amazon S3 multipart upload improves performance for large images over long distance and variable mobile networks because the client can upload parts in parallel, retry only failed parts, and maximize throughput. This reduces the impact of high latency and intermittent connectivity that European users experience when sending high resolution photos to a bucket in another continent.

Enable Amazon S3 Transfer Acceleration on the bucket speeds uploads from geographically distant clients by directing them to the nearest acceleration endpoint and then routing traffic over the AWS global network to the bucket. European users connect to nearby edge locations which reduces first mile latency and yields faster and more consistent uploads. The app must use the accelerate endpoint for this benefit.
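
A short sketch that combines the two changes is shown below. The bucket name, object key, and part size tuning are illustrative assumptions, and the accelerate endpoint only works once Transfer Acceleration has been enabled on the bucket.

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Client that targets the S3 accelerate endpoint instead of the Regional endpoint.
s3 = boto3.client(
    "s3",
    region_name="us-east-2",
    config=Config(s3={"use_accelerate_endpoint": True}),
)

# Multipart upload with parallel parts for large receipt images.
transfer_cfg = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # switch to multipart above 8 MB
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=4,
)

s3.upload_file("receipt.jpg", "lumapay-receipts", "uploads/receipt.jpg", Config=transfer_cfg)
```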

Create an Amazon CloudFront distribution with the S3 bucket as the origin is designed to cache and accelerate content delivery to viewers, primarily for downloads. It does not provide a simple or supported path for end user clients to upload directly to an S3 origin, so it would not meaningfully improve the mobile upload experience.

Change the bucket storage class to S3 Intelligent-Tiering only affects how objects are stored for cost optimization and does not change network paths or client upload throughput. Therefore it does not address the latency users encounter during uploads.

Provision an AWS Direct Connect link from Europe to the us-east-2 Region is intended for private connectivity from enterprise sites or data centers. It does not help mobile users on public cellular and Wi-Fi networks and would be unnecessary and costly for this use case.

First determine whether the scenario is about uploads or downloads. For distant users and large objects, pair client-side multipart upload with network path optimization such as S3 Transfer Acceleration. Be skeptical of options that change storage classes or enterprise connectivity when the bottleneck is end user network latency.

Question 7

NorthPeak Lending runs its containerized platform on Amazon ECS with Amazon API Gateway in front and stores relational data in Amazon Aurora and key value data in Amazon DynamoDB. The team provisions with the AWS CDK and releases through AWS CodePipeline. The company requires an RPO of 90 minutes and an RTO of 3 hours for a regional outage while keeping spend as low as possible. Which approach should the architects implement to meet these goals?

  • ✓ B. Create an Aurora global database and enable DynamoDB global tables in a secondary Region, deploy API Gateway Regional endpoints in each Region, and use Amazon Route 53 failover routing to shift clients to the standby Region during an outage

The correct answer is Create an Aurora global database and enable DynamoDB global tables in a secondary Region, deploy API Gateway Regional endpoints in each Region, and use Amazon Route 53 failover routing to shift clients to the standby Region during an outage.

This choice uses native cross Region replication for both data stores which aligns directly to the recovery goals. Aurora Global Database provides asynchronous replication with typical sub second lag and supports rapid promotion in another Region. That easily fits an RPO of 90 minutes and an RTO of 3 hours while keeping operations simple. DynamoDB Global Tables replicate items across Regions without custom code and provide highly reliable propagation which minimizes data loss and meets the RPO target. Regional API Gateway endpoints in each Region allow Route 53 health checks and failover routing to shift traffic during an outage. This keeps costs lower because secondary Region capacity can be kept minimal until needed.
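
To make the failover piece concrete, here is a hedged sketch of the Route 53 record pair. The hosted zone ID, health check ID, record name, and the two Regional endpoint domains are placeholders, and in practice each Regional API Gateway stage would carry a matching custom domain name.

```python
import boto3

r53 = boto3.client("route53")

def upsert_failover_record(role, api_domain, set_id, health_check_id=None):
    record = {
        "Name": "api.example.com",
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": set_id,
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "ResourceRecords": [{"Value": api_domain}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id  # primary record fails over when this check fails
    r53.change_resource_record_sets(
        HostedZoneId="Z3EXAMPLE",
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]},
    )

upsert_failover_record("PRIMARY", "abc123.execute-api.us-east-1.amazonaws.com", "primary", "hc-primary-id")
upsert_failover_record("SECONDARY", "def456.execute-api.us-west-2.amazonaws.com", "secondary")
```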

Use AWS Database Migration Service for Aurora replication and use DynamoDB Streams with Amazon EventBridge and AWS Lambda for DynamoDB replication to a second Region, deploy API Gateway Regional endpoints in both Regions, and configure Amazon Route 53 failover to move traffic during a disaster is not ideal because DMS is not the native replication mechanism for Aurora and introduces additional latency and operational overhead. A custom pipeline from DynamoDB Streams with EventBridge and Lambda lacks the durability and predictability of Global Tables and can miss the 90 minute RPO under high load or failure scenarios.

Configure AWS Backup to copy Aurora and DynamoDB backups into a secondary Region, deploy API Gateway Regional endpoints in both Regions, and use Amazon Route 53 failover to direct users to the secondary Region when needed is unlikely to meet the objectives. Backup copy windows make the RPO variable and often longer than 90 minutes. Restoring Aurora clusters and large DynamoDB tables can take many hours which risks exceeding the 3 hour RTO and increases potential data loss compared to continuous replication.

Create an Aurora global database and DynamoDB global tables to a second Region, deploy API Gateway Regional endpoints in each Region, and place Amazon CloudFront in front with origin failover to route users to the secondary Region during an event adds CloudFront where it is not needed for API recovery. Origin failover in CloudFront is oriented to content delivery and does not improve detection or handling of Regional outages over Route 53 health checks. It adds complexity and cost without improving RPO or RTO.

Start by mapping the stated RPO and RTO to the data layer. When the window is small prefer managed cross Region replication with DNS failover and avoid answers that depend on backup restores or custom replication pipelines.

Question 8

Orbit Finance must decommission its on-premises server room on short notice and urgently move its datasets to AWS. The facility has a 1.5 Gbps internet connection and a 700 Mbps AWS Direct Connect link. The team needs to transfer 28 TB of files into a new Amazon S3 bucket. Which approach will finish the transfer in the shortest time?

  • ✓ C. Enable Amazon S3 Transfer Acceleration on the bucket and upload over the internet

The correct option is Enable Amazon S3 Transfer Acceleration on the bucket and upload over the internet.

Amazon S3 Transfer Acceleration uses AWS edge locations to route uploads over the AWS backbone, which reduces the impact of distance and network variability and helps you sustain higher throughput. With a 1.5 Gbps internet link, Transfer Acceleration can approach line rate and move 28 TB in roughly forty to forty five hours, which is faster than the available 700 Mbps private link and avoids the multi day delays of shipping a device.

Use AWS DataSync to move the files into the S3 bucket is not the fastest in this scenario because AWS DataSync is constrained by the same available network bandwidth and typically will not exceed what you can achieve with Transfer Acceleration. It also requires deploying and configuring an agent, which adds setup time that does not help when you are under urgent time pressure.

Load the data onto a 100 TB AWS Snowball and return the device for import introduces ordering, shipping, and processing time that usually takes days end to end. For only 28 TB and with a strong 1.5 Gbps internet link, completing the upload over the network with Transfer Acceleration will finish sooner than a physical device workflow.

Send the data over the existing AWS Direct Connect link to S3 is slower here because the link is only 700 Mbps, which would take roughly ninety hours for 28 TB even before protocol overhead. The 1.5 Gbps internet path with Transfer Acceleration can complete significantly faster.

Estimate transfer time by converting data size and link speed into hours, then factor in overhead and logistics. If the internet path is faster than your private link and the data must move quickly, consider Transfer Acceleration to leverage the AWS backbone. If shipping is involved, remember to include ordering and transit time since those often dominate.
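
For example, the estimates cited in this explanation follow from simple arithmetic, as in this sketch.

```python
# Back-of-the-envelope transfer time for 28 TB over each available path.
data_bits = 28 * 10**12 * 8  # 28 TB expressed in bits

for label, gbps in [("1.5 Gbps internet with Transfer Acceleration", 1.5),
                    ("700 Mbps Direct Connect", 0.7)]:
    hours = data_bits / (gbps * 10**9) / 3600
    print(f"{label}: about {hours:.0f} hours at line rate")

# Prints roughly 41 hours for the 1.5 Gbps path versus roughly 89 hours for the
# 700 Mbps link, before protocol overhead.
```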

Question 9

Arcadia Retail Group plans to retire its on-premises hardware so it can shift teams to machine learning initiatives and customer personalization. As part of this modernization the company needs to archive roughly 8.5 PB of data from its primary data center into durable long term storage on AWS with the fastest migration and the most cost effective outcome. As a Solutions Architect Professional what approach should you recommend to move and store this data?

  • ✓ B. Load the on-premises data onto multiple Snowball Edge Storage Optimized devices then copy it into Amazon S3 and use a lifecycle policy to transition the data to S3 Glacier

The correct option is Load the on-premises data onto multiple Snowball Edge Storage Optimized devices then copy it into Amazon S3 and use a lifecycle policy to transition the data to S3 Glacier.

This approach fits the 8.5 PB scale well because you can order many devices and run transfers in parallel which shortens the ingestion window. The data lands in Amazon S3 which is the required import target for these devices and you can then apply an S3 lifecycle policy to transition objects into S3 Glacier classes for long term archival at the lowest ongoing cost. This sequence provides fast bulk ingestion and optimizes storage economics after the move.
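
Once the Snowball Edge imports land in S3, the archival step is a short lifecycle configuration. The bucket name, prefix, and transition timing below are illustrative assumptions.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="arcadia-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "datacenter-export/"},
                "Transitions": [
                    {"Days": 0, "StorageClass": "GLACIER"}  # transition as soon as the rule allows
                ],
            }
        ]
    },
)
```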

Transfer the on-premises data into a Snowmobile and import it into Amazon S3 then apply a lifecycle policy to transition the objects to S3 Glacier is not the best fit because Snowmobile is designed for tens of petabytes or more and is typically recommended at or above roughly 10 PB. For 8.5 PB it is usually slower to procure and more expensive than using multiple Snowball Edge devices in parallel.

Transfer the on-premises data into a Snowmobile and import it directly into S3 Glacier is incorrect because neither Snowmobile nor Snowball Edge imports directly into S3 Glacier. Data must be imported into S3 first and only then transitioned to Glacier storage classes with lifecycle policies.

Load the on-premises data onto multiple Snowball Edge Storage Optimized devices and import it directly into S3 Glacier is also incorrect for the same reason. You cannot ingest directly into S3 Glacier and must import into S3 first then transition with lifecycle rules.

When you see petabyte scale migrations, match the tool to the size and timeline. Use Snowball Edge for a few petabytes with parallel devices and use Snowmobile only for very large migrations such as 10 PB or more. Remember that data lands in S3 first and then a lifecycle policy moves it to S3 Glacier.

Question 10

FerroTech Media runs many AWS accounts under AWS Organizations and wants to keep Amazon EC2 costs from spiking without warning. The cloud finance team needs an automatic alert whenever any account shows EC2 usage or spend that rises beyond a threshold based on recent behavior. If EC2 consumption or charges increase by more than 20% compared to the 60 day rolling average then the team wants a daily notification. The solution must function across all member accounts with minimal ongoing effort. Which approach will best satisfy these needs?

  • ✓ B. Enable AWS Cost Anomaly Detection for a linked account group that covers the organization and configure a service monitor for Amazon EC2 with daily emails when anomalies exceed 20% of the 60 day average

The correct option is Enable AWS Cost Anomaly Detection for a linked account group that covers the organization and configure a service monitor for Amazon EC2 with daily emails when anomalies exceed 20% of the 60 day average.

Using AWS Cost Anomaly Detection in the management account allows you to monitor spend across all linked accounts in AWS Organizations with a service monitor focused on Amazon EC2. It learns normal spend patterns from recent history and lets you set percentage based alerting and a daily notification cadence, so the finance team is informed when spend rises sharply relative to the baseline. This is a managed capability that reduces operational effort while meeting the requirement for organization wide coverage and daily alerts.

Because Cost Anomaly Detection evaluates spend rather than raw usage, it captures cost driven changes such as pricing model shifts and sudden On-Demand spikes. It also supports email and SNS notifications, which aligns with the need for automated alerts with minimal maintenance.
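A minimal sketch of how this could be wired up with boto3 is shown below, assuming the calls run from the management account. The monitor and subscription names, the email address, and the exact shape of the 20 percent threshold expression are illustrative and worth verifying against the current Cost Explorer API reference.

    import boto3

    ce = boto3.client("ce")

    # Dimensional monitor that segments spend per AWS service, including Amazon EC2.
    monitor = ce.create_anomaly_monitor(
        AnomalyMonitor={
            "MonitorName": "org-ec2-spend",        # hypothetical name
            "MonitorType": "DIMENSIONAL",
            "MonitorDimension": "SERVICE",
        }
    )

    # Daily email alerts when an anomaly's total impact is at least 20 percent above the baseline.
    ce.create_anomaly_subscription(
        AnomalySubscription={
            "SubscriptionName": "finance-daily-ec2-alerts",   # hypothetical name
            "MonitorArnList": [monitor["MonitorArn"]],
            "Subscribers": [{"Address": "finops@example.com", "Type": "EMAIL"}],
            "Frequency": "DAILY",
            "ThresholdExpression": {
                "Dimensions": {
                    "Key": "ANOMALY_TOTAL_IMPACT_PERCENTAGE",
                    "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
                    "Values": ["20"],
                }
            },
        }
    )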

Publish EC2 instance hour counts as Amazon CloudWatch custom metrics in every account and build management account alarms for deviations from trend is not suitable because it relies on custom metrics and alarms in every account and focuses on usage hours instead of cost. It would require substantial engineering to baseline trends, aggregate data across accounts, and maintain the system, which violates the minimal ongoing effort requirement and does not directly detect cost anomalies.

Ingest AWS Cost and Usage Report into Amazon S3 and query with Amazon Athena on a daily schedule to compare EC2 costs to a 60 day average and then publish alerts to Amazon SNS can work but creates a heavy, custom pipeline to compute baselines and detect changes. The Cost and Usage Report has data latency and the approach demands continual query, code, and schema maintenance, which is more complex than necessary for this use case.

Create AWS Budgets in each member account with fixed EC2 spend limits and send notifications through AWS Budgets alerts does not meet the requirement because budgets are fixed thresholds and are not based on a rolling behavioral baseline. It also requires creating and managing budgets across many accounts, which is cumbersome and not aligned with the need for dynamic anomaly detection and minimal maintenance.

Look for phrases like minimal ongoing effort, across accounts, and anomaly. These often point to a managed cost anomaly capability with organization scope and daily notifications rather than custom pipelines or static budgets.

Question 11

Ravenwood Analytics is introducing client-side encryption for files that will be stored in a new Amazon S3 bucket. The engineers created a customer managed key in AWS Key Management Service to support the encryption workflow. They attached the following IAM policy to the role that the uploader uses.

    {
      "Version": "2012-10-17",
      "Id": "key-policy-2",
      "Statement": [
        {
          "Sid": "GetPut",
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject"],
          "Resource": "arn:aws:s3:::ravenwood-uploads-east/*"
        },
        {
          "Sid": "KMS",
          "Effect": "Allow",
          "Action": ["kms:Decrypt", "kms:Encrypt"],
          "Resource": "arn:aws:kms:us-east-2:444455556666:key/keyid-90210"
        }
      ]
    }

Test downloads from the bucket worked, but every attempt to upload a new object failed with an AccessDenied error that reported the action was forbidden. Which additional IAM action must be added to this policy so that client-side encrypted uploads can succeed?

  • ✓ C. kms:GenerateDataKey

The only correct option is kms:GenerateDataKey.

For client side encryption the uploader must obtain a fresh data key from AWS KMS for each object. The GenerateDataKey API returns a plaintext data key for local encryption and a copy that is encrypted under the customer managed key. Without permission to call this API the client cannot get a data key and the upload fails with AccessDenied even if it already has permission to encrypt and decrypt and to put objects in the bucket.
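As a sketch, the KMS statement from the question could be amended as shown below. The policy fragment is expressed as a Python dictionary for brevity and reuses the key ARN from the scenario.

    # Amended KMS statement: kms:GenerateDataKey lets the client request a fresh data key
    # for envelope encryption before each upload.
    kms_statement = {
        "Sid": "KMS",
        "Effect": "Allow",
        "Action": [
            "kms:Decrypt",
            "kms:Encrypt",
            "kms:GenerateDataKey",
        ],
        "Resource": "arn:aws:kms:us-east-2:444455556666:key/keyid-90210",
    }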

The option kms:GetKeyPolicy only allows reading the key policy document. It does not create data keys and it does not enable the envelope encryption workflow needed for uploads.

The option Cloud KMS is a Google Cloud product name and not an AWS IAM action, so it does not grant any usable permission in this scenario.

The option kms:GetPublicKey returns the public key of an asymmetric KMS key. S3 client side encryption uses symmetric data keys generated by the proper API so this action does not enable the required workflow.

When a write workflow uses envelope encryption, check for permission to call kms:GenerateDataKey on the KMS key. If uploads fail with AccessDenied even though Encrypt and Decrypt are allowed, the missing permission is often the data key generation action.

Question 12

BrightLeaf Media runs an image processing backend on AWS and wants to lower costs and reduce operational effort while keeping the environment secure. The VPC spans two Availability Zones with both public and private subnets. Amazon EC2 instances in the private subnets host the application behind an Application Load Balancer located in the public subnets. The instances currently reach the internet through two NAT gateways, and about 900 GB of new images are stored in Amazon S3 each day. What should a solutions architect do to meet these goals without weakening security?

  • ✓ C. Create an Amazon S3 gateway VPC endpoint in the VPC and apply an endpoint policy that allows only the required S3 actions for the bucket

The correct option is Create an Amazon S3 gateway VPC endpoint in the VPC and apply an endpoint policy that allows only the required S3 actions for the bucket. Adding a gateway VPC endpoint for S3 with an endpoint policy routes S3 traffic privately and enforces least privilege which lowers NAT costs and operational effort without weakening security.

With a gateway endpoint, traffic from the private subnets to S3 stays on the AWS network and uses route table entries to the S3 prefix. The service is managed and highly available and it has no hourly charge and no data processing fee, so sending about 900 GB each day to S3 avoids NAT gateway data processing charges and can reduce per hour NAT spend if you downsize or remove NAT later. An endpoint policy can limit access to only the required bucket and actions which strengthens security while the instances remain private behind the load balancer.
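A minimal boto3 sketch of creating the gateway endpoint with a scoped policy is shown below. The Region, VPC ID, route table IDs, and bucket name are placeholders.

    import json
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Endpoint policy that allows only the object-level actions the workers need on one bucket.
    endpoint_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::brightleaf-images/*",   # hypothetical bucket
            }
        ],
    }

    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",                  # placeholder VPC ID
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0aaa1111bbbb2222c", "rtb-0ddd3333eeee4444f"],  # private subnet route tables
        PolicyDocument=json.dumps(endpoint_policy),
    )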

Create Amazon S3 interface VPC endpoints in each Availability Zone and update the route tables for the private subnets to use these endpoints is not the best choice for S3 because the recommended pattern is a gateway endpoint for S3. Interface endpoints incur hourly and data processing charges and they do not require route table changes to reach S3, so this option increases cost and contains an inaccurate step.

Relocate the EC2 instances into the public subnets and remove the NAT gateways would give the application servers public exposure and add public IPs or direct internet routing which weakens security and violates the requirement to keep the environment secure.

Use Auto Scaling NAT instances in place of the NAT gateways and point the private subnet routes at the instances increases operational burden because NAT instances must be patched, scaled, and monitored. This is a legacy pattern that AWS does not recommend for new designs and it also leaves all S3 traffic flowing through NAT so you keep paying data processing charges rather than removing them.

When a workload in private subnets transfers a large volume of data to S3, prefer an S3 gateway VPC endpoint and use an endpoint policy to enforce least privilege. Be cautious of answers that move instances into public subnets when the question stresses keeping the environment secure.

Question 13

Cedar Peak Engineering is creating a disaster recovery plan for a mission critical Windows application that runs in its data center. About 250 Windows servers access a shared SMB file repository and the business mandates an RTO of 12 minutes and an RPO of 4 minutes, and operations expects native failover and straightforward failback. Which approach delivers these goals in the most cost effective way?

  • ✓ B. Set up AWS Elastic Disaster Recovery for the Windows servers and use AWS DataSync to replicate the SMB data to Amazon FSx for Windows File Server, then fail over the servers to AWS during an event and use Elastic Disaster Recovery to fail back to new or existing on premises hosts

The correct option is Set up AWS Elastic Disaster Recovery for the Windows servers and use AWS DataSync to replicate the SMB data to Amazon FSx for Windows File Server, then fail over the servers to AWS during an event and use Elastic Disaster Recovery to fail back to new or existing on premises hosts.

This approach meets the aggressive recovery objectives because AWS Elastic Disaster Recovery performs continuous block level replication and can launch recovery instances quickly, which enables recovery time measured in minutes. It also provides guided and automated failback to on premises, which satisfies the request for native failover and straightforward failback without complex manual steps.

Using Amazon FSx for Windows File Server for the shared repository gives you a fully managed Windows native SMB service with NTFS permissions and Active Directory integration, which is appropriate for hundreds of Windows clients. AWS DataSync can move only changes from the on premises SMB share to FSx on a frequent schedule and at scale, which allows you to achieve a four minute recovery point when bandwidth and task scheduling are planned correctly. Together these services provide a cost effective pattern that keeps infrastructure warm enough for fast recovery while avoiding the expense of fully active duplicate stacks.

Use AWS Application Migration Service to replicate the on premises servers and place the shared files in Amazon S3 with AWS DataSync behind AWS Storage Gateway File Gateway, then update DNS to route clients to AWS during a disaster and copy data back when returning to on premises is not the best fit because S3 behind File Gateway is object backed and relies on edge caching, which is not ideal for a heavily shared SMB workload at this scale. This approach complicates failback of the shared data and does not provide the same native Windows file system features and performance that FSx for Windows File Server offers. It therefore risks missing the recovery objectives and the requirement for straightforward failback.

Build infrastructure templates with AWS CloudFormation and replicate all file data to Amazon Elastic File System using AWS DataSync, then deploy the stack during an incident with a pipeline and synchronize back afterward is unsuitable because Amazon EFS is an NFS service and does not provide SMB or Windows native features, so it does not meet the application requirements. Provisioning core infrastructure only during an incident also lengthens recovery time, which makes the twelve minute target unrealistic.

Deploy AWS Storage Gateway File Gateway and schedule nightly backups of the Windows servers to Amazon S3, then restore servers from those backups during an outage and run temporary instances on Amazon EC2 during failback cannot meet a four minute recovery point because nightly backups leave many hours of potential data loss. Restoring servers from backups during an outage also makes a twelve minute recovery time highly unlikely, and the process does not provide a simple and automated failback path.

Map workload needs to the right building blocks. If the question mentions Windows with shared SMB and NTFS permissions, think Amazon FSx for Windows File Server. If it requires minute level RTO and RPO with automated failback, think AWS Elastic Disaster Recovery rather than backups or ad hoc provisioning.

Question 14

HarborPoint Analytics has adopted a hybrid work model and needs to provide employees with secure remote access to internal services that run in five AWS accounts. The VPCs are already connected using existing VPC peering and some corporate resources are reachable through an AWS Site-to-Site VPN. The architect must deploy an AWS Client VPN that scales and keeps ongoing cost low while enabling access across the peered VPCs. What is the most cost-effective approach?

  • ✓ C. Create a Client VPN endpoint in the shared services account and advertise routes over the existing VPC peering to reach applications in other accounts

The correct option is Create a Client VPN endpoint in the shared services account and advertise routes over the existing VPC peering to reach applications in other accounts.

This design uses a single managed Client VPN endpoint that can scale to many users while leveraging the already established VPC peering to reach applications in other accounts. You add routes in the Client VPN route table for each peered VPC CIDR and ensure the VPC route tables and security groups allow the traffic. Because the endpoint creates elastic network interfaces inside the shared services VPC, traffic to the peered VPCs is direct and does not require transitive routing. This keeps costs low since you avoid new interconnect infrastructure and you pay only for one endpoint and its connections while using existing peering links.
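A minimal sketch of the route and authorization configuration for one peered VPC is shown below. The endpoint ID, subnet ID, and CIDR are placeholders and you would repeat the pair of calls for each peered VPC CIDR.

    import boto3

    ec2 = boto3.client("ec2")

    # Route VPN client traffic destined for a peered VPC through the shared services VPC subnet
    # that hosts the Client VPN network association.
    ec2.create_client_vpn_route(
        ClientVpnEndpointId="cvpn-endpoint-0123456789abcdef0",   # placeholder endpoint ID
        DestinationCidrBlock="10.20.0.0/16",                     # CIDR of a peered application VPC
        TargetVpcSubnetId="subnet-0123456789abcdef0",            # associated subnet in the hub VPC
    )

    # Authorization rule so connected clients are allowed to reach that CIDR.
    ec2.authorize_client_vpn_ingress(
        ClientVpnEndpointId="cvpn-endpoint-0123456789abcdef0",
        TargetNetworkCidr="10.20.0.0/16",
        AuthorizeAllGroups=True,
    )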

Provision a transit gateway for all VPCs and place a Client VPN endpoint in the shared services account that forwards traffic through the transit gateway is not the most cost effective because a transit gateway introduces per hour and per attachment charges along with data processing costs. It also adds complexity that is unnecessary when peering already connects the VPCs.

Integrate a Client VPN endpoint with AWS Cloud WAN and attach all VPCs to the core network is not viable because Client VPN does not integrate directly with Cloud WAN and this would add Cloud WAN costs and operational overhead without solving anything that existing peering already provides.

Deploy a Client VPN endpoint in each AWS account and configure routes to the application subnets is more expensive and harder to manage since each endpoint incurs its own hourly and connection charges and you would need to manage multiple configurations instead of one centralized endpoint.

When VPC peering already exists, prefer a single Client VPN endpoint in a shared hub VPC and advertise routes to the peered VPC CIDRs. Watch for nontransitive routing limits and ensure the necessary routes and security groups are in place.

Question 15

The platform engineering team at Aurora Digital is building an Amazon EKS cluster to run an event driven thumbnail rendering service. The workload uses ephemeral stateless pods that can surge from about 30 to more than 600 replicas within minutes during traffic spikes. They want a configuration that most improves node resilience and limits the blast radius if an Availability Zone experiences an outage. What should they implement?

  • ✓ C. Apply Kubernetes topology spread constraints keyed on Availability Zone so replicas are evenly distributed across zones

The correct option is Apply Kubernetes topology spread constraints keyed on Availability Zone so replicas are evenly distributed across zones.

This approach instructs the scheduler to place replicas evenly across Availability Zones which limits the impact of a single zone failure. By spreading ephemeral stateless pods across zones the service keeps capacity available even during rapid surges so only a fraction of replicas are affected if a zone goes down. This improves node resilience because no single node group or zone carries a disproportionate share of the workload and it pairs well with autoscaling since new pods are still distributed evenly as capacity grows.
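Expressed as a pod spec fragment, the constraint might look like the sketch below. It is written as a Python dictionary mirroring the YAML, and the thumbnail-renderer app label is an assumption.

    # Equivalent of the topologySpreadConstraints block in a Deployment's pod template.
    topology_spread_constraints = [
        {
            "maxSkew": 1,                                   # at most one replica of imbalance between zones
            "topologyKey": "topology.kubernetes.io/zone",   # spread across Availability Zones
            "whenUnsatisfiable": "ScheduleAnyway",          # prefer spread but keep scheduling during surges
            "labelSelector": {
                "matchLabels": {"app": "thumbnail-renderer"}  # assumed pod label
            },
        }
    ]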

Consolidate node groups and switch to larger instance sizes to run more pods per node is incorrect because packing more pods onto fewer larger nodes increases the blast radius when a node or zone fails. Consolidation also reduces fault isolation and can slow scaling responsiveness under sudden spikes.

Google Kubernetes Engine is incorrect because it is a managed Kubernetes service on a different cloud and does not address building resilience for an Amazon EKS cluster. Switching platforms does not solve the requirement to limit blast radius within AWS Availability Zones.

Configure the Kubernetes Cluster Autoscaler to keep capacity slightly underprovisioned during spikes is incorrect because intentional underprovisioning harms availability and SLOs and does not improve resilience. The Cluster Autoscaler’s role is to add nodes to meet pending pods and it does not by itself ensure balanced distribution across zones.

When you see requirements about limiting blast radius across Availability Zones for Kubernetes workloads, look for controls that influence pod placement such as topology spread constraints rather than changes to instance size or intentional underprovisioning.

Question 16

A retail technology firm named Alder Cove Systems is moving its workloads to AWS and needs a multi account plan. There are five product squads and each wants strict isolation from the others. The Finance department needs clear chargeback so that costs and usage are separated by squad. The Security team wants centralized oversight with least privilege access and the ability to set preventive guardrails across environments. What account strategy should the company adopt to meet these requirements?

  • ✓ B. Use AWS Organizations to establish a management account then provision a dedicated account for each squad and create a separate security tooling account with cross account access and apply service control policies to all workload accounts and have the security team write IAM policies that grant least privilege

The correct option is Use AWS Organizations to establish a management account then provision a dedicated account for each squad and create a separate security tooling account with cross account access and apply service control policies to all workload accounts and have the security team write IAM policies that grant least privilege.

This design gives each squad its own account which provides strong blast radius isolation and clear boundaries for access. Finance can use consolidated billing with per account cost visibility which makes chargeback straightforward and reliable. The security team gains centralized oversight through a dedicated security tooling account with cross account roles for visibility and action. Preventive guardrails are enforced with Service Control Policies from the management account which ensures consistent control across all workload accounts. Least privilege is achieved by writing scoped IAM policies within each squad account while SCPs provide the outer boundary.

Use AWS Control Tower to set up the landing zone and keep a single shared workload account for all squads while using cost allocation tags for billing and rely on guardrails for governance is incorrect because a single shared account does not meet the requirement for strict isolation between squads and it makes least privilege harder to maintain. Tags can help with reporting but they do not provide hard isolation and they are prone to human error so they do not satisfy the need for clear chargeback by team on their own.

Create separate AWS accounts for each squad and set the security account as the management account and enable consolidated billing and allow the security team to administer other accounts through a cross account role is incorrect because the management account should be reserved for organization and billing control and not double as a security tooling account. Combining these roles concentrates risk and goes against best practices which recommend a separate management account and a dedicated security tooling account.

Create a single AWS account and use Active Directory federation for access and rely on resource tags to split billing by team and manage permissions with IAM policies that grant only the required access is incorrect because a single account cannot provide strict isolation between squads. Tag based chargeback is not authoritative compared to separate accounts and preventive guardrails at the organization level are not possible in a single account approach.

When you see needs for strict isolation, clear chargeback per team and preventive guardrails with centralized security then think one account per team under AWS Organizations with a separate management and security tooling account and enforce boundaries with SCPs and least privilege IAM.

Question 17

The engineering group at BrightWave Logistics is launching a relational backend that must support cross Region disaster recovery. The business requires an RPO below 4 minutes and an RTO below 12 minutes for approximately 12 TB of data while keeping costs as low as possible. Which approach will meet these targets at the lowest cost?

  • ✓ C. Run Amazon RDS in a primary Region with a cross Region read replica and plan to promote the replica to primary during a Regional disruption

The correct option is Run Amazon RDS in a primary Region with a cross Region read replica and plan to promote the replica to primary during a Regional disruption.

This approach provides a cost effective cross Region disaster recovery posture by using asynchronous replication that typically maintains replication lag within a few minutes under normal load, which can satisfy the required recovery point objective below four minutes. Promotion of the replica to primary is a controlled operation that generally completes within minutes, which makes meeting the recovery time objective below twelve minutes achievable. It also avoids the premium pricing of specialized global features and the operational overhead of additional replication services, which keeps total cost lower while still meeting the stated objectives.
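A minimal boto3 sketch of the two steps is shown below. The identifiers, Regions, and instance class are placeholders, and the source is referenced by ARN because the replica lives in a different Region.

    import boto3

    # Step 1: create the cross-Region read replica (run against the DR Region).
    rds_dr = boto3.client("rds", region_name="us-west-2")
    rds_dr.create_db_instance_read_replica(
        DBInstanceIdentifier="orders-db-replica",
        SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:orders-db",  # placeholder ARN
        DBInstanceClass="db.r6g.2xlarge",
    )

    # Step 2: during a Regional disruption, promote the replica to a standalone primary.
    rds_dr.promote_read_replica(DBInstanceIdentifier="orders-db-replica")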

Provision Amazon Aurora DB clusters in two Regions and use AWS Database Migration Service to stream ongoing changes into the secondary cluster is more complex and typically more expensive because it runs two managed clusters on a higher priced engine and adds a separate replication service. The additional replication layer can introduce extra lag and operational overhead, which makes consistently meeting the sub four minute recovery point objective harder while also failing the lowest cost requirement.

Deploy Amazon RDS with Multi AZ in one Region and rely on the automatic failover capability during an outage does not meet the cross Region requirement because Multi AZ protects against Availability Zone failures within a single Region. It would not provide service continuity during a Regional disruption and therefore it cannot satisfy the stated disaster recovery objective.

Use Amazon Aurora Global Database with a writer in the primary Region and a reader in a secondary Region to enable rapid cross Region recovery could meet very aggressive recovery points and times, yet it comes with a higher price point due to the Aurora engine and global replication architecture. Since the question asks for the lowest cost option that still meets the targets, this choice is not the best fit.

Map the stated RPO and RTO to the replication model and scope. If the requirement is cross Region then Multi AZ alone is insufficient. Choose the simplest design that meets the targets at the lowest cost and reserve premium global features for when the question asks for the fastest recovery.

Question 18

HarborTech Labs plans to move its on premises file processing system to AWS. Customers upload files through a web portal at example.com and the files are currently kept on a network file share. A backend worker fleet reads tasks from a queue to process each file and individual jobs can run for up to 50 minutes. Traffic spikes during weekday business hours and is quiet overnight and on weekends. Which migration approach would be the most cost effective while fulfilling these needs?

  • ✓ C. Use Amazon SQS for the queue and have the web tier publish messages then run Amazon EC2 instances in an Auto Scaling group that scales on SQS queue depth to process files and store results in Amazon S3

The correct option is Use Amazon SQS for the queue and have the web tier publish messages then run Amazon EC2 instances in an Auto Scaling group that scales on SQS queue depth to process files and store results in Amazon S3. It satisfies the 50 minute job duration because EC2 workers are not bound by short execution limits and it scales out during busy periods and scales in when idle which keeps costs low.

This approach uses SQS to decouple ingestion from processing so the queue absorbs traffic spikes and exposes backlog metrics for scaling. An Auto Scaling group can change capacity based on queue depth per instance so the fleet grows to handle weekday bursts and shrinks overnight and on weekends. EC2 instances can run long tasks reliably and Amazon S3 provides durable and cost effective storage for outputs.
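A minimal sketch of a target tracking policy keyed on queue depth is shown below. The Auto Scaling group name, queue name, and target value are assumptions, and many teams publish a custom backlog-per-instance metric instead of tracking the raw queue depth directly.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Target tracking on SQS queue depth: add workers when the visible backlog grows.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="file-worker-asg",              # hypothetical ASG name
        PolicyName="scale-on-queue-depth",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "MetricName": "ApproximateNumberOfMessagesVisible",
                "Namespace": "AWS/SQS",
                "Dimensions": [{"Name": "QueueName", "Value": "file-processing-queue"}],
                "Statistic": "Average",
            },
            "TargetValue": 50.0,   # assumed acceptable backlog; tune to job duration
        },
    )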

The option Use Amazon SQS for the queue and have the existing web tier publish messages then trigger AWS Lambda to process each file and store results in Amazon S3 is incorrect because Lambda has a maximum execution time of 15 minutes which cannot accommodate jobs that run for up to 50 minutes.

The option Use Amazon MQ for the queue and modify the web tier to publish messages then spin up an Amazon EC2 instance when messages arrive to process files and save results on Amazon EFS and stop the instance when done is incorrect because Amazon MQ is a managed broker chosen mainly for protocol compatibility and it adds cost and complexity that are unnecessary for simple queueing. Spinning up individual instances on message arrival reacts slowly to spikes and is less efficient than an Auto Scaling worker pool and EFS is typically more expensive than S3 for outputs that only need object storage.

The option Use Amazon MQ for the queue and have the web tier publish messages then trigger AWS Lambda to process each file and write outputs to Amazon EFS is incorrect because Lambda still cannot run for 50 minutes and Amazon MQ is not required for this use case. While Lambda can consume from MQ and mount EFS these features do not remove the execution time limit.

Check service limits early. When jobs are long running prefer EC2 worker fleets that scale on SQS queue depth and store outputs in the most cost effective service.

Question 19

The operations group at Orion Metrics runs a licensed application on Amazon EC2 that stores shared files on an Amazon EFS file system that is encrypted with AWS KMS. The file system is protected by AWS Backup with the default backup plan. The business now requires a recovery point objective of 90 minutes for these files. What should a solutions architect change to meet this objective while keeping encryption in place?

  • ✓ C. Create a dedicated IAM role for backups and a new backup plan, update the KMS key policy to permit that role to use the key, and schedule backups every hour

The correct choice is Create a dedicated IAM role for backups and a new backup plan, update the KMS key policy to permit that role to use the key, and schedule backups every hour.

This meets a 90 minute recovery point objective because an hourly schedule ensures that the maximum data loss window is about 60 minutes. A new backup plan is needed because the default plan uses a daily cadence which does not satisfy the requirement. Updating the KMS key policy to allow the chosen IAM role to use the key preserves encryption for the Amazon EFS backups since AWS Backup must be able to generate data keys and decrypt when creating and restoring recovery points.
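A minimal boto3 sketch of the new plan and its hourly rule is shown below. The plan, vault, role, and file system identifiers are placeholders, and the KMS key policy update is handled separately in the key policy itself.

    import boto3

    backup = boto3.client("backup")

    # New backup plan with an hourly rule, which keeps the worst-case RPO at about 60 minutes.
    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "efs-hourly-plan",               # hypothetical name
            "Rules": [
                {
                    "RuleName": "hourly",
                    "TargetBackupVaultName": "efs-backup-vault",  # hypothetical vault
                    "ScheduleExpression": "cron(0 * ? * * *)",    # top of every hour
                    "StartWindowMinutes": 60,
                    "Lifecycle": {"DeleteAfterDays": 35},
                }
            ],
        }
    )

    # Assign the encrypted EFS file system to the plan using the dedicated backup role.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "efs-selection",
            "IamRoleArn": "arn:aws:iam::123456789012:role/efs-backup-role",  # placeholder role
            "Resources": ["arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0abc1234"],
        },
    )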

Create a new backup plan and update the KMS key policy to allow the AWSServiceRoleForBackup service role to use the key, then run a backup every 45 minutes by using a custom cron expression is incorrect because AWS Backup does not support schedules more frequent than hourly. Allowing the service linked role to use the KMS key is valid but the proposed 45 minute cadence is not supported.

Use the existing backup plan, update the KMS key policy to allow the AWSServiceRoleForBackup role to use the key, and enable cross Region replication for the EFS file system is incorrect because keeping the existing default plan retains a daily schedule which misses the 90 minute objective. EFS cross Region replication improves disaster recovery posture but it does not change AWS Backup recovery point frequency.

Create a new IAM role, keep the current backup plan, update the KMS key policy to allow the new role to use the key, and enable continuous backups for point in time recovery is incorrect because keeping the current plan leaves the daily cadence in place. In addition, point in time recovery is not a supported AWS Backup feature for Amazon EFS so enabling continuous backups would not apply to this resource.

Work backward from the stated RPO to the needed backup frequency and verify the service supports that cadence. Confirm whether PITR is available for the specific resource type and remember AWS Backup scheduling has a minimum interval of one hour.

Question 20

The platform group at SkyVertex Systems reports a rise in errors on PUT operations against their public REST API. Logs indicate that one client is sending large request bursts that exhaust capacity. They want to protect other users and keep responses user friendly while avoiding changes to backend code. What should the solutions architect recommend?

  • ✓ B. Configure API Gateway usage plans with per key throttling limits and have the client handle HTTP 429 responses gracefully

The correct option is Configure API Gateway usage plans with per key throttling limits and have the client handle HTTP 429 responses gracefully.

This choice lets you issue API keys to each client and apply burst and steady rate limits to a single key so the abusive client is throttled while other users continue to receive normal service. API Gateway responds with HTTP 429 which is straightforward for clients to back off and retry and this keeps responses user friendly. This solution requires no changes to backend code and can also include quotas to cap overall consumption if needed.
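A minimal boto3 sketch of the usage plan and key setup is shown below. The API ID, stage name, and the rate, burst, and quota numbers are assumptions.

    import boto3

    apigw = boto3.client("apigateway")

    # Usage plan with per-key throttling; callers that exceed it receive HTTP 429.
    plan = apigw.create_usage_plan(
        name="standard-clients",                                # hypothetical plan name
        apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],   # placeholder API and stage
        throttle={"rateLimit": 100.0, "burstLimit": 200},       # steady and burst request limits
        quota={"limit": 500000, "period": "MONTH"},             # optional overall cap
    )

    # Issue a key to the client and bind it to the plan.
    key = apigw.create_api_key(name="client-alpha", enabled=True)
    apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")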

Attach AWS WAF to the API Gateway and create a rate based rule that limits bursts from a single source is less suitable because it operates on IP addresses rather than API keys which can penalize many users behind shared NATs. It typically returns a 403 block rather than a graceful 429 throttle and it does not implement per client fairness.

Set reserved concurrency on the Lambda integration to handle sudden spikes addresses function concurrency rather than request rate at the API layer. It can cause throttling for all callers and does not isolate the single abusive client that is exhausting capacity.

Enable API caching on the production stage and run 20 minute load tests to tune cache capacity does not help because PUT requests are not cacheable. Caching improves repeated reads and will not mitigate write bursts that are overloading capacity.

When you see a need for per client fairness and throttle control with no code changes think API Gateway keys and usage plans that return 429 so clients can retry gracefully. If the scenario is about abusive IPs rather than keys then consider WAF which usually blocks with 403.

Question 21

PixelForge Studios plans to launch a real time multiplayer quiz application on AWS for internet users. The service will run on one Amazon EC2 instance and clients will connect using UDP. Leadership requires a highly secure architecture while keeping the design simple. As the Solutions Architect, what actions should you implement? (Choose 3)

  • ✓ B. Place a Network Load Balancer in front of the EC2 instance and create a Route 53 record game.example.com that resolves to the NLB Elastic IP address

  • ✓ D. Enable AWS Shield Advanced on all internet facing resources

  • ✓ F. Configure subnet network ACLs to deny all protocols except UDP and associate them to the subnets that contain the load balancer nodes

The correct options are Place a Network Load Balancer in front of the EC2 instance and create a Route 53 record game.example.com that resolves to the NLB Elastic IP address, Enable AWS Shield Advanced on all internet facing resources, and Configure subnet network ACLs to deny all protocols except UDP and associate them to the subnets that contain the load balancer nodes.

Using a Network Load Balancer for a UDP application is appropriate because it operates at layer 4 and supports UDP natively with very low latency and high throughput. Creating a Route 53 record for a friendly name that points to the NLB static addresses keeps the design simple and provides a stable entry point for clients.

Enabling AWS Shield Advanced raises the security posture by providing managed detection and mitigation against large scale network and transport layer DDoS events, which is especially important for publicly exposed UDP services.

Hardening the load balancer subnets with network ACLs that allow only UDP reduces the exposed surface and drops unwanted scans or non UDP traffic before it reaches the instance. Ensure rules also account for any required health check and return traffic so availability is not impacted.
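A minimal boto3 sketch of the inbound side of such an ACL is shown below. The ACL ID, UDP port, and CIDR are placeholders, and matching outbound and ephemeral-port rules would be needed as well.

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound UDP game traffic; the ACL's implicit final rule denies everything else.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",   # placeholder network ACL ID
        RuleNumber=100,
        Protocol="17",                          # protocol number 17 is UDP
        RuleAction="allow",
        Egress=False,
        CidrBlock="0.0.0.0/0",
        PortRange={"From": 4000, "To": 4000},   # assumed game port
    )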

Deploy AWS Global Accelerator with an Elastic Load Balancer as the endpoint is unnecessary for a single instance setup focused on simplicity and security. It introduces extra components and cost without a clear security benefit for this scenario.

Create AWS WAF rules to drop any non UDP traffic and attach them to the load balancer is incorrect because AWS WAF filters web traffic at layer 7 and does not inspect or enforce policies on UDP or other layer 4 protocols, and it cannot be attached to a layer 4 load balancer for UDP traffic.

Use an Application Load Balancer in front of the instance and publish a friendly DNS name in Amazon Route 53 that aliases to the ALB public name is wrong because an Application Load Balancer handles HTTP, HTTPS, and gRPC and does not support UDP, so it cannot serve this workload.

When you see UDP in a requirement, think Network Load Balancer. For DDoS hardening on public endpoints, consider AWS Shield Advanced. Remember that AWS WAF protects web applications and does not filter generic layer 4 traffic.

Question 22

A regional home goods retailer named BayTrail Living runs its shopping site on three Amazon EC2 instances behind an Application Load Balancer, and the application stores order data in an Amazon DynamoDB table named OrdersProd. Traffic surges during quarterly flash sales and throughput on reads and writes degrades at the busiest moments. What change will provide a scalable architecture that rides through peaks with the least development effort?

  • ✓ C. Create Auto Scaling groups for the web tier and enable DynamoDB auto scaling

The correct option is Create Auto Scaling groups for the web tier and enable DynamoDB auto scaling. This approach lets the EC2 instances behind the load balancer scale out and in with traffic, while DynamoDB adjusts read and write capacity to meet demand. It preserves the current design and requires minimal code changes, which means it meets the goal of riding through peaks with the least development effort.

Auto Scaling groups add elasticity to the web tier so the application can handle flash sale surges without manual intervention. DynamoDB auto scaling manages provisioned capacity within set bounds and targets a utilization level, which smooths out sudden spikes and prevents throttling during busy periods. Together these managed features address both compute and database throughput bottlenecks with straightforward configuration.
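A minimal sketch of enabling read capacity auto scaling on the OrdersProd table through the Application Auto Scaling API is shown below. The capacity bounds and target utilization are assumptions, and the same pattern applies to write capacity.

    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the table's read capacity as a scalable target.
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/OrdersProd",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=50,      # assumed floor
        MaxCapacity=5000,    # assumed ceiling for flash sale peaks
    )

    # Track 70 percent consumed read capacity.
    aas.put_scaling_policy(
        PolicyName="orders-read-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId="table/OrdersProd",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )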

Add DynamoDB Accelerator DAX and keep the existing EC2 fleet and ALB is not sufficient because DAX primarily reduces read latency for read heavy workloads and does not increase write throughput or address capacity throttling on writes. It also requires client changes and leaves the web tier unscaled during peaks.

Replatform the web tier to AWS Lambda and increase provisioned read capacity and write capacity for the DynamoDB table demands significant redevelopment effort and ongoing capacity tuning. It risks overprovisioning or underprovisioning during unpredictable flash sales and does not provide the least effort path when the existing architecture can scale with managed services.

Create Auto Scaling groups for the web tier and add Amazon SQS with a Lambda function to batch writes into DynamoDB adds unnecessary complexity and can introduce latency for order persistence. It is not required to solve the immediate scaling needs when autoscaling the web tier and enabling DynamoDB auto scaling already provide a simpler and effective solution.

First identify whether the spike is in compute or database throughput and prefer managed autoscaling features before rearchitecting. Use DAX only for heavy reads and remember it does not improve writes.

Question 23

HarborPeak Logistics is moving a dual-tier web platform from its on-premises environment into AWS. The team will use Amazon Aurora PostgreSQL-Compatible Edition, EC2 Auto Scaling, and an Elastic Load Balancer to support a rapidly expanding audience. The application is stateful because it keeps session data in memory and users expect consistent interactions during traffic spikes. Which approach will ensure session consistency while allowing both the application tier and the database tier to scale?

  • ✓ B. Enable Aurora Replicas auto scaling and place an Application Load Balancer in front with round robin routing and sticky sessions turned on

The correct option is Enable Aurora Replicas auto scaling and place an Application Load Balancer in front with round robin routing and sticky sessions turned on.

This choice aligns with a stateful web tier that keeps session data in memory because ALB sticky sessions use cookies to keep a user bound to the same EC2 instance across requests. Round robin continues to spread new sessions evenly so the application tier can scale out with EC2 Auto Scaling while preserving per-user session affinity during spikes.

On the database side, Aurora Replicas auto scaling adds or removes read replicas to match demand so read traffic scales horizontally while the primary writer continues to handle writes. Aurora PostgreSQL uses a single writer in a cluster, so you scale writes vertically or architect for sharding, and you scale reads with replicas. This pairing gives you session consistency at layer seven and elastic capacity in both tiers.
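A minimal boto3 sketch of both pieces is shown below. The target group ARN, cluster identifier, and the replica and utilization numbers are placeholders.

    import boto3

    # Cookie-based sticky sessions on the ALB target group for the web tier.
    elbv2 = boto3.client("elbv2")
    elbv2.modify_target_group_attributes(
        TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123",
        Attributes=[
            {"Key": "stickiness.enabled", "Value": "true"},
            {"Key": "stickiness.type", "Value": "lb_cookie"},
            {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
        ],
    )

    # Aurora Replicas auto scaling on the cluster's read replica count.
    aas = boto3.client("application-autoscaling")
    aas.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:harborpeak-aurora",                 # placeholder cluster identifier
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=8,
    )
    aas.put_scaling_policy(
        PolicyName="aurora-replica-cpu-tracking",
        ServiceNamespace="rds",
        ResourceId="cluster:harborpeak-aurora",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
        },
    )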

Enable Aurora Replicas auto scaling and use a Network Load Balancer configured with least outstanding requests and stickiness is not suitable because a Network Load Balancer operates at layer four and its stickiness relies on source IP rather than HTTP cookies. That does not reliably preserve user sessions behind NATs or proxies and it cannot provide cookie-based affinity for an in-memory session application.

Turn on auto scaling for Aurora writers and use a Network Load Balancer with least outstanding requests and stickiness is incorrect because Aurora does not support auto scaling the number of writers in a single-writer Aurora PostgreSQL cluster and the Network Load Balancer does not provide cookie-based session stickiness.

Turn on auto scaling for Aurora writers and use an Application Load Balancer with round robin routing and sticky sessions is incorrect because Aurora writer auto scaling is not a supported capability. While the Application Load Balancer with sticky sessions fits the stateful web tier, the database scaling approach is invalid.

When the app is stateful and keeps sessions in memory, look for a layer seven load balancer with sticky sessions or move sessions to a shared store. For Aurora PostgreSQL expect read scaling with Aurora Replicas and remember that writers do not automatically scale out.

Question 24

Ridgeview Analytics operates an internal reporting tool that writes CSV exports to an Amazon S3 bucket. The files contain confidential information and are normally accessed only by the company’s IAM users. The team needs to share one specific CSV file with an external auditor for a 36 hour review. A solutions architect used an IAM user to call PutObjectAcl to add a public read ACL to that object, but the request returned “AccessDenied”. What is the most likely reason this operation failed?

  • ✓ C. The bucket is configured with BlockPublicAcls enabled

The correct option is The bucket is configured with BlockPublicAcls enabled.

Amazon S3 Block Public Access includes a control that prevents adding new public ACLs. When this bucket level setting is enabled, any attempt to use PutObjectAcl to grant public read access is rejected and returns AccessDenied. This fits the scenario where the team tried to add a public read ACL to a single object and immediately encountered an access error.

The bucket has the BlockPublicPolicy setting turned on is incorrect because that control prevents public access granted through bucket policies. It does not block or fail ACL based operations like PutObjectAcl, so it would not explain this specific error when changing an object ACL.

S3 Object Lock in compliance mode is enabled for the object version is incorrect because Object Lock prevents object deletes and overwrites based on retention rules. It does not govern or block changes to ACLs, so it would not cause PutObjectAcl to fail with AccessDenied.

The IAM user is not listed on the object ACL with write ACL permission is unlikely here because the failure occurred when trying to make the object public and the scenario strongly points to a bucket level control that broadly blocks public ACLs. If the caller has the necessary IAM permission and the object is owned by the bucket owner, an ACL entry for the caller is not required to change the ACL, so this is not the most probable cause.

When a question involves making an S3 object public and you see AccessDenied, quickly check for Block Public Access settings. For time bound sharing needs, consider a presigned URL that matches the required review window.
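A minimal sketch of that presigned URL alternative for the 36 hour window is shown below. The bucket and object key are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Time-bound share: the URL works for 36 hours and then expires automatically,
    # so the object never has to be made public.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "ridgeview-reports", "Key": "exports/audit-q2.csv"},  # placeholders
        ExpiresIn=36 * 3600,   # 129600 seconds, within the 7 day SigV4 maximum
    )
    print(url)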

Question 25

RiverStone Capital uses AWS Control Tower and needs to apply cost governance across more than 320 developer accounts inside a Sandbox organizational unit. The company wants to require burstable EC2 and RDS instance classes and to block services that do not apply to their workloads. What should a solutions architect propose?

  • ✓ D. Implement a custom preventive guardrail in AWS Control Tower that enforces only burstable EC2 and RDS instance types and blocks nonapproved services and enable it on the Sandbox OU

The correct option is Implement a custom preventive guardrail in AWS Control Tower that enforces only burstable EC2 and RDS instance types and blocks nonapproved services and enable it on the Sandbox OU. This approach enforces policy at the organizational unit level across all developer accounts and blocks noncompliant API calls so it meets the requirement to require burstable instance classes and to block services that are not approved.

This solution fits a Control Tower governed environment because preventive controls in Control Tower are evaluated before actions are allowed and are implemented using service control policies. You can scope the control to the Sandbox organizational unit and ensure consistent enforcement across more than 320 accounts. Because the enforcement happens at authorization time, non burstable EC2 or RDS instance classes cannot be launched and unapproved services are denied, which satisfies cost governance and standardization goals while keeping centralized visibility and lifecycle management in Control Tower.
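Under the hood such a preventive control is expressed as a service control policy document. A sketch of the EC2 instance-type portion is shown below as a Python dictionary, with the allowed burstable families chosen as examples; an analogous deny statement using the rds:DatabaseClass condition key would cover RDS instance classes.

    # SCP-style statement that denies launching EC2 instances outside burstable families.
    # Control Tower deploys the equivalent document when the custom preventive control is enabled.
    deny_non_burstable_ec2 = {
        "Sid": "DenyNonBurstableEc2",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringNotLike": {
                "ec2:InstanceType": ["t2.*", "t3.*", "t3a.*", "t4g.*"]   # example burstable families
            }
        },
    }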

Define a custom detective guardrail in AWS Control Tower that flags non burstable instance launches and disallowed services and apply it to the Sandbox OU is incorrect because detective controls only detect and report after the fact. They do not block API calls or prevent resource creation, so they would not enforce the requirement to require and to block.

Use Google Cloud Organization Policy constraints to restrict machine types and services across development projects is incorrect because it applies to Google Cloud rather than AWS and it cannot govern AWS accounts or services.

Craft a Service Control Policy in AWS Organizations that permits only burstable EC2 and RDS instance families and denies nonessential services and attach it to the Sandbox OU is not the best choice in a Control Tower environment. While an SCP can technically enforce such restrictions, applying it directly outside the Control Tower control framework can cause governance drift and reduce Control Tower visibility and reporting. The scenario explicitly uses AWS Control Tower, so the managed preventive control that integrates with Control Tower is the preferred solution.

When a scenario mentions AWS Control Tower and the need to enforce or block, look for a preventive control. A detective control only flags noncompliance. If Control Tower is in scope, favor controls managed by Control Tower rather than attaching standalone SCPs.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel, which trains devs in Java, Spring, AI and ML, has well over 30,000 subscribers.