AWS Solutions Architect Professional Sample Questions
Question 1
The platform team at NovaPlay Labs is preparing disaster recovery for a containerized web API that runs on Amazon ECS with AWS Fargate and uses Amazon RDS for MySQL, and they require the service to come back online in a different AWS Region with the least possible downtime if the primary Region fails. Which approach best enables rapid failover to the secondary Region with minimal interruption?
-
❏ A. Use an AWS Lambda function to create the second ECS cluster and Fargate service only when a failure is detected, then snapshot and copy the RDS instance to the other Region, restore it there, and change Route 53 records, with EventBridge triggering the workflow
-
❏ B. Enable Amazon RDS Multi AZ for the MySQL instance in the primary Region and place AWS Global Accelerator in front of the application to reroute traffic during issues
-
❏ C. Maintain a second ECS cluster and Fargate service in the backup Region and configure an Amazon RDS cross Region read replica there, then use a Lambda function to promote the replica to primary and update Route 53 during failover, with EventBridge invoking the promotion
-
❏ D. Precreate an ECS cluster and Fargate service in the recovery Region, then use an AWS Lambda function that regularly snapshots the RDS instance, copies the snapshot to the recovery Region, restores a new RDS instance from that snapshot, and updates Amazon Route 53 to point traffic to the standby service, with an Amazon EventBridge schedule invoking the function
Question 2
BlueRiver Tickets runs its customer-facing services on Amazon EC2 behind a single Application Load Balancer and hosts the public zone example.com in Amazon Route 53. The company will expose several hostnames such as m.example.com, www.example.com and api.example.com and they also want the apex example.com name to reach the web tier. You need to design scalable ALB listener rules so that each hostname forwards to the correct target group without adding more load balancers. Which configuration should you implement? (Choose 2)
-
❏ A. Use Path conditions in the ALB listener to route *.example.com to appropriate target groups
-
❏ B. Use Host conditions in the ALB listener to route example.com to the correct target group
-
❏ C. Google Cloud Load Balancing
-
❏ D. Use Host conditions in the ALB listener to route *.example.com to the right target groups
-
❏ E. Use the Path component of Redirect actions in the ALB listener to route example.com to target groups
Question 3
Orchid Dynamics wants to cut data transfer and compute spending across 18 developer AWS accounts while allowing engineers to quickly pull data from Amazon S3 and still move fast when launching EC2 instances and building VPCs. Which approach will minimize cost without reducing developer agility?
-
❏ A. Create SCPs to block unapproved EC2 types and distribute a CloudFormation template that builds a standard VPC with S3 interface endpoints and restrict IAM so developers create VPC resources only through CloudFormation
-
❏ B. Adopt Google Cloud Deployment Manager and VPC Service Controls to govern egress and access to Cloud Storage and connect AWS workloads through hybrid networking
-
❏ C. Publish an AWS Service Catalog portfolio that provisions a vetted VPC with S3 gateway endpoints and approved EC2 options and share it to the engineer accounts with a launch constraint role then scope developer IAM to use Service Catalog
-
❏ D. Set daily cost budgets with AWS Budgets for EC2 and S3 data transfer and send alerts at 70 percent of forecast then trigger actions to stop EC2 and remove VPCs at 95 percent of actual
Question 4
A ticketing startup that operates example.com runs its storefront on Amazon EC2 behind an Auto Scaling group and uses Amazon RDS for PostgreSQL in one Region. The team needs a budget friendly disaster recovery plan that can achieve a recovery point objective of 45 seconds and a recovery time objective of 12 minutes. Which approach best meets these requirements while keeping costs under control?
-
❏ A. Use infrastructure as code to stand up the DR environment in a second Region and create a cross Region read replica for the RDS database and configure AWS Backup to produce cross Region backups for the EC2 instances and the database every 45 seconds and restore instances from the newest backup during an incident and use an Amazon Route 53 geolocation policy so traffic shifts to the DR Region after a disaster
-
❏ B. Set up AWS Backup to create cross Region backups for the EC2 fleet and the database on a 45 second schedule and use infrastructure as code to create the DR networking and subnets and restore the backups onto new instances when needed and use an Amazon Route 53 simple policy to move users to the DR Region
-
❏ C. Build the DR stack with infrastructure as code and create a cross Region read replica for the RDS database and set up AWS Elastic Disaster Recovery to stream changes for the EC2 instances into the DR Region and keep a minimal number of instances running in the DR Region and use an Amazon Route 53 failover policy to switch during an outage then scale out the Auto Scaling group
-
❏ D. Define the DR environment with infrastructure as code and migrate the database to Amazon Aurora PostgreSQL with an Aurora global database and use AWS Elastic Disaster Recovery for the EC2 instances and run the Auto Scaling group at full capacity in the DR Region and use an Amazon Route 53 failover policy to switch during an event
Question 5
Riverton Digital Press is moving its editorial publishing site to AWS. The organization must allow continuous content edits by multiple contributors and also move 250 TB of archived media from an on premises NAS into Amazon S3. They will continue using their existing Site to Site VPN and will run web servers on Amazon EC2 instances behind an Application Load Balancer. Which combination of actions will fulfill these requirements? (Choose 2)
-
❏ A. Configure an Amazon Elastic Block Store EBS Multi Attach volume that is shared by the EC2 instances for content access then build a script to synchronize that volume with the NAS each night
-
❏ B. Order an AWS Snowball Edge Storage Optimized device and copy the static archives to the device then return it to AWS
-
❏ C. Create an Amazon EventBridge schedule that invokes an AWS Lambda function every hour to push updates from the NAS directly to the EC2 instances
-
❏ D. Mount an Amazon Elastic File System EFS file system from the on premises servers over the VPN and also mount the same file system on the EC2 instances to present the latest content
-
❏ E. Google Transfer Appliance
Question 6
HarborView Logistics needs an internal application where employees submit travel and expense reimbursements, with activity spiking on the fifteenth and again on the last business day of each month. The finance team must be able to produce consistent month end reports from the stored data. The platform must be highly available and scale automatically while keeping operational effort as low as possible. Which combination of solutions will best meet these requirements with the least ongoing management? (Choose 2)
-
❏ A. Store the expense records in Amazon S3 and use Amazon Athena and Amazon QuickSight to query and visualize the data
-
❏ B. Deploy the application on Amazon EC2 behind an Application Load Balancer and use Amazon EC2 Auto Scaling with scheduled scale outs
-
❏ C. Host the web front end in Amazon S3 served by Amazon CloudFront and build the API with Amazon API Gateway using an AWS Lambda proxy integration
-
❏ D. Run the application on Amazon ECS with AWS Fargate behind an Application Load Balancer and use Service Auto Scaling with scheduled capacity adjustments
-
❏ E. Persist the expense data in Amazon EMR and use Amazon QuickSight to report directly from the EMR cluster
Question 7
Luma Media runs a public web API on Amazon EC2 instances in one Availability Zone. Leadership has asked a Solutions Architect to redesign the platform so it is highly available across at least two Availability Zones and enforces strong request filtering. The security team requires that inbound traffic be inspected for common web exploits and that any blocked requests are delivered to an external auditing service at logs.example.com for compliance review. What architecture should be implemented?
-
❏ A. Set up an Application Load Balancer with a target group that includes the existing EC2 instances in the current Availability Zone, create Amazon Kinesis Data Firehose with the logs.example.com service as the destination, attach an AWS WAF web ACL to the ALB, enable WAF logging to the Firehose stream, and subscribe to AWS Managed Rules
-
❏ B. Use Google Cloud Armor in front of a Google Cloud HTTP Load Balancer and export blocked events from Cloud Logging to the logs.example.com auditing service
-
❏ C. Configure a Multi AZ Auto Scaling group from the application AMI with an Application Load Balancer in front, attach an AWS WAF web ACL with AWS Managed Rules, and enable WAF logging to Amazon Kinesis Data Firehose that delivers blocked requests to the logs.example.com auditing endpoint
-
❏ D. Deploy an Application Load Balancer and register the EC2 instances as targets, attach an AWS WAF web ACL, enable logging with Amazon CloudWatch Logs, and use an AWS Lambda function to forward entries to the logs.example.com auditing service
Question 8
BrightPaws is a fast growing startup with a viral mobile photo app where people upload pet pictures and add captions, and active users have climbed past 12 million across several continents. The backend runs on Amazon EC2 with Amazon EFS and the instances are behind an Application Load Balancer. Traffic is unpredictable and spikes cause slow responses during busy evenings. What architectural changes should a Solutions Architect propose to cut costs and improve global performance?
-
❏ A. Create an Amazon CloudFront distribution and place the Application Load Balancer behind it and store user images in Amazon S3 using the S3 Standard Infrequent Access storage class
-
❏ B. Use AWS Global Accelerator in front of the Application Load Balancer and migrate static files to Amazon FSx for Windows File Server while using an AWS Lambda function to compress images during the cutover
-
❏ C. Move the image bucket to Google Cloud Storage and serve content through Cloud CDN while continuing to run the dynamic service behind the existing Application Load Balancer
-
❏ D. Place user photos in Amazon S3 with the Intelligent Tiering storage class and front S3 with Amazon CloudFront while using AWS Lambda to perform image processing
Question 9
BlueRiver Foods operates a primary data center and needs a private connection to AWS that is encrypted and delivers consistent low latency with high throughput near 4 Gbps. The team can budget roughly eight weeks for provisioning and is willing to manage the setup effort. Which approach provides end to end connectivity that satisfies these requirements while requiring the least additional infrastructure?
-
❏ A. Create a Site-to-Site VPN from the data center to an Amazon VPC over the internet
-
❏ B. Provision AWS Direct Connect without adding a VPN
-
❏ C. Provision AWS Direct Connect and layer an AWS Site-to-Site VPN over it to encrypt traffic
-
❏ D. Google Cloud Interconnect
Question 10
NorthPeak Systems has a platform engineering group that manages environments using AWS CloudFormation and they need to ensure that mission critical data in Amazon RDS instances and Amazon EBS volumes is not removed if a production stack is deleted. What should the team implement to prevent accidental data loss when someone deletes the stack?
-
❏ A. Create IAM policies that deny delete operations on RDS and EBS when the “aws:cloudformation:stack-name” tag is present
-
❏ B. Set DeletionPolicy of Retain on the RDS and EBS resources in the CloudFormation templates
-
❏ C. Apply a CloudFormation stack policy that blocks deletion actions for RDS and EBS resources
-
❏ D. Enable termination protection on the production CloudFormation stack

Question 11
SpryCart runs an Amazon RDS database in private subnets within an Amazon VPC where outbound internet access is not allowed. The team stored credentials in AWS Secrets Manager and enabled rotation with an AWS Lambda function placed in the same VPC. Recent rotation attempts fail and Amazon CloudWatch Logs show the Lambda function timing out when it tries to call Secrets Manager APIs. The environment must remain without internet egress. What should the team implement so that secret rotation completes successfully under these constraints?
-
❏ A. Create an interface VPC endpoint for the Lambda service in the VPC to allow the function to run without internet access
-
❏ B. Configure a NAT gateway in the VPC and update private route tables to provide outbound access to AWS endpoints
-
❏ C. Create an interface VPC endpoint for Secrets Manager and ensure the rotation function uses it for API calls
-
❏ D. Recreate the rotation function from the latest Secrets Manager blueprint to ensure SSL and TLS support
Question 12
Northwind Labs uses AWS Organizations and has two organizational units named Analytics and Platform under the root. Due to regulatory policy the company must ensure that all workloads run only in ap-southeast-2, and the Platform OU must be limited to a defined list of Amazon EC2 instance types. A solutions architect needs to implement controls with minimal ongoing administration. Which combination of actions should be implemented to satisfy these requirements? (Choose 2)
-
❏ A. Deploy AWS Config rules in every account to detect noncompliant Regions and EC2 instance types and trigger Systems Manager Automation for remediation
-
❏ B. Create a Service Control Policy that uses the aws:RequestedRegion condition to deny all Regions except ap-southeast-2 and attach it to the organization root
-
❏ C. Create IAM users in all accounts and attach an inline policy to each that uses the aws:RequestedRegion condition to allow only ap-southeast-2
-
❏ D. Attach a Service Control Policy to the Platform OU that uses the ec2:InstanceType condition to allow only the approved instance types
-
❏ E. Create a Service Control Policy that uses the ec2:Region condition and apply it to the root, Analytics, and Platform OUs
Question 13
NorthRiver Media plans to launch a microservices streaming service that is expected to grow from 8 million to 45 million users within nine months. The platform runs on Amazon ECS with AWS Fargate and all requests must use HTTPS. The solutions architect must enable blue or green deployments, send traffic through a load balancer, and adjust running tasks automatically by using Amazon CloudWatch alarms. Which solution should the team implement?
-
❏ A. Configure ECS services for blue or green deployments with a Network Load Balancer and request a higher tasks per service quota
-
❏ B. Configure ECS services for blue or green deployments with an Application Load Balancer and enable ECS Service Auto Scaling per service
-
❏ C. Configure ECS services for blue or green deployments with an Application Load Balancer and attach an Auto Scaling group that is managed by Cluster Autoscaler
-
❏ D. Configure ECS services for rolling update deployments with an Application Load Balancer and use ECS Service Auto Scaling per service
Question 14
The Orion initiative at DataVista Labs is overspending on EC2, and its AWS account is deliberately kept separate from DataVista’s AWS Organization. To control costs, what should a solutions architect put in place to ensure developers in the Orion initiative can launch only t3.micro EC2 instances in the eu-west-1 Region?
-
❏ A. Attach a Service Control Policy that denies all EC2 launches except t3.micro in eu-west-1 to the account
-
❏ B. Apply an IAM policy in the project account that allows only t3.micro launches in eu-west-1 and attach it to developer roles
-
❏ C. Use Google Cloud Organization Policy to restrict machine types to e2-micro and limit the region to europe-west1
-
❏ D. Create a new developer account and move workloads to eu-west-1 then add it to the company AWS Organization and enforce a tag policy for Region placement
Question 15
The research team at Orion Data Labs is rolling out a new machine learning service on six Amazon EC2 instances within a single AWS Region. They require very high throughput and very low network latency between all instances and they are not concerned with fault tolerance or hardware diversity. What should they implement to satisfy these needs?
-
❏ A. Distribute six EC2 instances across a spread placement group and attach an additional elastic network interface to each instance
-
❏ B. Put six EC2 instances in a partition placement group and select instance types with enhanced networking
-
❏ C. Place six EC2 instances in a cluster placement group and choose instance types that support enhanced networking
-
❏ D. Compute Engine
Question 16
Following a shift from manually managed EC2 servers to an Auto Scaling group to handle a surge in users, the platform team at Apex Retail faces a patching issue. Security updates that run every 45 days require a reboot, and while an instance is rebooting, the Auto Scaling group replaces it, which leaves the fleet with fresh but unpatched instances. Which actions should a solutions architect propose to ensure instances are patched without being prematurely terminated? (Choose 2)
-
❏ A. Place an Application Load Balancer in front of the Auto Scaling group and rely on target health checks during replacements
-
❏ B. Automate baking a patched AMI, update the launch template to that AMI, and trigger an Auto Scaling instance refresh
-
❏ C. Enable termination protection on EC2 instances in the Auto Scaling group
-
❏ D. Stand up a parallel Auto Scaling group before the maintenance window and patch and reboot instances in both groups during the window
-
❏ E. Set the Auto Scaling termination policy to prefer instances launched from the oldest launch template or configuration
Question 17
Northpeak Analytics is rolling out a global SaaS platform and wants clients to be routed automatically to the closest AWS Region while security teams require static IP addresses that customers can add to their allow lists. The application runs on Amazon EC2 instances behind a Network Load Balancer and uses an Auto Scaling group that spans four Availability Zones in each Region. What solution will fulfill these needs?
-
❏ A. Create an Amazon CloudFront distribution with an origin group that includes the NLB in each Region and provide customers the IP ranges for CloudFront edge locations
-
❏ B. Create an AWS Global Accelerator standard accelerator and add an endpoint group for the NLB in every active Region then share the accelerator’s static IP addresses with customers
-
❏ C. Configure Amazon Route 53 latency based routing with health checks and assign Elastic IP addresses to NLBs in each Region then distribute those IPs to customers
-
❏ D. Create an AWS Global Accelerator custom routing accelerator and configure a listener and port mappings for the NLBs in each Region then give customers the accelerator IP addresses
Question 18
NovaMetrics Ltd. operates a managed API in its AWS account and a client in a different AWS account needs to invoke that service from automation using the CLI while maintaining least privilege and avoiding long lived credentials. How should NovaMetrics grant the client secure access to the service?
-
❏ A. Publish the service with AWS PrivateLink and require an API key for callers
-
❏ B. Create an IAM user for the client and share its access keys
-
❏ C. Create a cross account IAM role with only the needed permissions and let the client assume it using the role ARN without an external ID
-
❏ D. Create a cross account IAM role with only the needed permissions and require an external ID in the trust policy then have the client assume the role using its ARN and the external ID
Question 19
Rivertown Retail Co. runs a customer portal on EC2 instances behind an Application Load Balancer. Orders are stored in an Amazon RDS for MySQL database and related PDFs are kept in Amazon S3. The finance team’s ad hoc reporting slows the production database. Leadership requires a disaster recovery strategy that can keep the application available during a regional outage and that limits data loss to only a few minutes and that also removes the reporting workload from the primary database. What should the solutions architect build?
-
❏ A. Migrate the transactional database to Amazon DynamoDB with global tables, direct the finance team to query a global table in a secondary Region, use an AWS Lambda function on a schedule to copy objects to an S3 bucket in that Region, and stand up EC2 and a new ALB there while pointing the app to the new bucket
-
❏ B. Create an RDS for MySQL cross-Region read replica and route finance queries to that replica, enable S3 Cross-Region Replication to a bucket in the recovery Region, build AMIs of the web tier and copy them to that Region, and during a disaster promote the replica then launch EC2 from the AMIs behind a new ALB and update the app to use the replicated bucket
-
❏ C. Launch more EC2 instances in a second Region and register them with the existing ALB, create an RDS read replica in the second Region for finance use, enable S3 Cross-Region Replication, and in an outage promote the replica and repoint the application
-
❏ D. Replatform the workload by moving the database to Google Cloud Spanner, export reports to BigQuery, front the app with Google Cloud Load Balancing, and store documents in dual-region Cloud Storage for resilience
Question 20
Northwind Publications stores media in Amazon S3 with objects under the prefixes s3://cdn-media-001/photos and s3://cdn-media-001/thumbs. Newly uploaded photos receive heavy traffic, and after 60 days the photos are rarely read while the thumbnails continue to be accessed. After 210 days the team wants to archive both the photos and the thumbnails. The design must remain highly available across multiple Availability Zones to mitigate the impact of an AZ outage. Which actions will deliver the most cost effective approach while meeting these goals? (Choose 2)
-
❏ A. Create a lifecycle rule that transitions only the photos prefix to S3 One Zone-IA after 60 days
-
❏ B. Create a lifecycle rule that transitions all objects to S3 Glacier Flexible Retrieval after 210 days
-
❏ C. Create a lifecycle rule that transitions all objects to S3 Standard-IA after 60 days
-
❏ D. Create a lifecycle rule that transitions only the photos prefix to S3 Standard-IA after 60 days
-
❏ E. Create a lifecycle rule that transitions only the photos prefix to S3 Glacier after 210 days
AWS Professional Solutions Architect Questions Answered

Question 1
The platform team at NovaPlay Labs is preparing disaster recovery for a containerized web API that runs on Amazon ECS with AWS Fargate and uses Amazon RDS for MySQL, and they require the service to come back online in a different AWS Region with the least possible downtime if the primary Region fails. Which approach best enables rapid failover to the secondary Region with minimal interruption?
-
✓ C. Maintain a second ECS cluster and Fargate service in the backup Region and configure an Amazon RDS cross Region read replica there, then use a Lambda function to promote the replica to primary and update Route 53 during failover, with EventBridge invoking the promotion
The correct approach is Maintain a second ECS cluster and Fargate service in the backup Region and configure an Amazon RDS cross Region read replica there, then use a Lambda function to promote the replica to primary and update Route 53 during failover, with EventBridge invoking the promotion.
With a cross Region read replica and a warm standby ECS service the database is continuously replicated and the compute layer is already provisioned. Promoting the replica and updating Route 53 to direct traffic to the secondary Region is fast, which minimizes recovery time and limits data loss to replication lag. EventBridge and Lambda orchestrate the promotion and DNS change so failover is quick and consistent.
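A minimal boto3 sketch of the failover handler that EventBridge would invoke is shown below. The replica identifier, hosted zone ID, record name, standby load balancer DNS name, and Region are placeholders, and the promotion call must run against the Region that hosts the replica.

```python
import boto3

# Placeholder identifiers for the warm standby environment.
REPLICA_ID = "novaplay-api-db-replica"
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"
RECORD_NAME = "api.example.com"
DR_ALB_DNS = "dr-api-alb-1234567890.ap-southeast-2.elb.amazonaws.com"

def handler(event, context):
    # Promote the cross-Region read replica to a standalone writable primary.
    rds = boto3.client("rds", region_name="ap-southeast-2")
    rds.promote_read_replica(DBInstanceIdentifier=REPLICA_ID)

    # Repoint DNS at the load balancer in front of the pre-provisioned ECS service.
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": DR_ALB_DNS}],
                },
            }]
        },
    )
    return {"promoted": REPLICA_ID, "routed_to": DR_ALB_DNS}
```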
The option to Use an AWS Lambda function to create the second ECS cluster and Fargate service only when a failure is detected, then snapshot and copy the RDS instance to the other Region, restore it there, and change Route 53 records, with EventBridge triggering the workflow causes long downtime because building infrastructure and restoring from snapshots after an outage is slow and increases the risk of data loss.
The suggestion to Enable Amazon RDS Multi AZ for the MySQL instance in the primary Region and place AWS Global Accelerator in front of the application to reroute traffic during issues does not meet cross Region recovery needs because Multi AZ protects only within a single Region and there is no replicated database or service in a second Region to receive traffic.
The plan to Precreate an ECS cluster and Fargate service in the recovery Region, then use an AWS Lambda function that regularly snapshots the RDS instance, copies the snapshot to the recovery Region, restores a new RDS instance from that snapshot, and updates Amazon Route 53 to point traffic to the standby service, with an Amazon EventBridge schedule invoking the function results in slow recovery during restore and stale data between snapshots, and it adds unnecessary cost and operational complexity compared to continuous replication.
When a question emphasizes least downtime across Regions, prefer a warm standby with preprovisioned compute and cross Region read replicas, then automate promotion and DNS updates for a fast and reliable failover.
Question 2
BlueRiver Tickets runs its customer-facing services on Amazon EC2 behind a single Application Load Balancer and hosts the public zone example.com in Amazon Route 53. The company will expose several hostnames such as m.example.com, www.example.com and api.example.com and they also want the apex example.com name to reach the web tier. You need to design scalable ALB listener rules so that each hostname forwards to the correct target group without adding more load balancers. Which configuration should you implement? (Choose 2)
-
✓ B. Use Host conditions in the ALB listener to route example.com to the correct target group
-
✓ D. Use Host conditions in the ALB listener to route *.example.com to the right target groups
The correct options are Use Host conditions in the ALB listener to route example.com to the correct target group and Use Host conditions in the ALB listener to route *.example.com to the right target groups.
Host header based rules are the intended way to send traffic to different target groups based on the requested hostname. You create listener rules that evaluate the Host header and then use forward actions to the appropriate target groups. This scales cleanly because a single Application Load Balancer can support many host conditions and rules without adding more load balancers.
For the apex name you configure a Route 53 alias A or AAAA record pointing example.com to the ALB so the browser sends the Host header for example.com and the listener rule that matches that host forwards to the correct target group.
For subdomains you can add host rules for specific names such as api.example.com or you can use a wildcard host condition to match a set of subdomains that share the same target group. You can then add additional host rules for any subdomains that must go to different target groups.
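As a concrete illustration, the listener rules and the apex alias can be created with a few boto3 calls. The ARNs, hosted zone ID, and load balancer DNS name below are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web/0123456789abcdef/0123456789abcdef"
API_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api/0123456789abcdef"
WEB_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/fedcba9876543210"

# Specific hostnames match first because they have a higher priority than the wildcard rule.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "host-header", "HostHeaderConfig": {"Values": ["api.example.com"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": API_TG_ARN}],
)
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "host-header", "HostHeaderConfig": {"Values": ["example.com", "*.example.com"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": WEB_TG_ARN}],
)

# Apex record: a Route 53 alias A record pointing example.com at the ALB.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ALB's canonical hosted zone ID for its Region
                "DNSName": "web-alb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```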
Use Path conditions in the ALB listener to route *.example.com to appropriate target groups is incorrect because path based routing evaluates the URL path after the hostname and it cannot select a target group based on the requested host.
Google Cloud Load Balancing is incorrect because the scenario and services are on AWS and a different cloud provider does not apply.
Use the Path component of Redirect actions in the ALB listener to route example.com to target groups is incorrect because a redirect changes the client URL and does not forward to a target group to serve the request. You should use a forward action with host conditions for this use case.
When a question mentions multiple hostnames on one load balancer think host based rules on the listener and not path based rules. Remember that the apex name uses a Route 53 alias to the ALB and the listener uses a forward action to target groups.
Question 3
Orchid Dynamics wants to cut data transfer and compute spending across 18 developer AWS accounts while allowing engineers to quickly pull data from Amazon S3 and still move fast when launching EC2 instances and building VPCs. Which approach will minimize cost without reducing developer agility?
-
✓ C. Publish an AWS Service Catalog portfolio that provisions a vetted VPC with S3 gateway endpoints and approved EC2 options and share it to the engineer accounts with a launch constraint role then scope developer IAM to use Service Catalog
The correct option is Publish an AWS Service Catalog portfolio that provisions a vetted VPC with S3 gateway endpoints and approved EC2 options and share it to the engineer accounts with a launch constraint role then scope developer IAM to use Service Catalog.
This approach minimizes cost because S3 gateway endpoints keep VPC to S3 traffic on the AWS network and do not add endpoint hourly or data processing charges. It preserves developer agility by giving engineers a curated and fast self service path to create a standardized VPC and launch approved EC2 options. Launch constraints and scoped IAM provide guardrails while sharing the portfolio to the engineer accounts lets teams move quickly in their own accounts with consistent governance.
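The cost-relevant piece of the vetted VPC is the S3 gateway endpoint, which the Service Catalog product would create as part of its template. A boto3 sketch with placeholder VPC and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoints for S3 carry no hourly or per-GB endpoint charge and keep
# S3 traffic on the AWS network, which is what removes the data transfer cost.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",            # placeholder: the vetted VPC
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # private route tables used by developer subnets
)
```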
Create SCPs to block unapproved EC2 types and distribute a CloudFormation template that builds a standard VPC with S3 interface endpoints and restrict IAM so developers create VPC resources only through CloudFormation is less cost efficient and reduces agility. It relies on S3 interface endpoints which add per hour and data processing charges, while S3 gateway endpoints have no additional charge. Forcing all VPC work through a single CloudFormation path can slow developers and increases operational overhead compared to a curated self service catalog.
Adopt Google Cloud Deployment Manager and VPC Service Controls to govern egress and access to Cloud Storage and connect AWS workloads through hybrid networking does not address the AWS requirement. It introduces another cloud and governs Google Cloud Storage instead of Amazon S3, which adds complexity and cost without solving the stated need for the AWS accounts.
Set daily cost budgets with AWS Budgets for EC2 and S3 data transfer and send alerts at 70 percent of forecast then trigger actions to stop EC2 and remove VPCs at 95 percent of actual is reactive and harms developer agility. Budgets can notify and apply a limited set of actions such as stopping instances, but it does not prevent costly patterns up front and its actions cannot remove VPCs at all. This does not provide a governed self service way to create compliant VPCs or reduce S3 transfer costs.
When cost control and speed are both required, look for governed self service patterns such as AWS Service Catalog and prefer S3 gateway endpoints for Amazon S3 traffic. Be cautious of relying on reactive alerts or broad restrictions without offering an easy approved path.
Question 4
A ticketing startup that operates example.com runs its storefront on Amazon EC2 behind an Auto Scaling group and uses Amazon RDS for PostgreSQL in one Region. The team needs a budget friendly disaster recovery plan that can achieve a recovery point objective of 45 seconds and a recovery time objective of 12 minutes. Which approach best meets these requirements while keeping costs under control?
-
✓ C. Build the DR stack with infrastructure as code and create a cross Region read replica for the RDS database and set up AWS Elastic Disaster Recovery to stream changes for the EC2 instances into the DR Region and keep a minimal number of instances running in the DR Region and use an Amazon Route 53 failover policy to switch during an outage then scale out the Auto Scaling group
The correct option is Build the DR stack with infrastructure as code and create a cross Region read replica for the RDS database and set up AWS Elastic Disaster Recovery to stream changes for the EC2 instances into the DR Region and keep a minimal number of instances running in the DR Region and use an Amazon Route 53 failover policy to switch during an outage then scale out the Auto Scaling group.
This approach meets the 45 second recovery point objective because AWS Elastic Disaster Recovery continuously replicates block level changes from the EC2 instances which delivers an RPO measured in seconds. The cross Region read replica for Amazon RDS PostgreSQL provides asynchronous replication with lag typically in seconds so the database side also aligns with the required recovery point objective. Using a Route 53 failover policy allows health based redirection to the recovery Region and keeping only a minimal footprint warm in that Region reduces ongoing cost while the Auto Scaling group can expand quickly after failover to meet the 12 minute recovery time objective. Defining the stack with infrastructure as code makes rehearsals and clean cutovers consistent and repeatable which further improves time to recover.
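The Route 53 side of the design is a pair of failover alias records, one primary and one secondary, so DNS answers shift to the DR Region only when the primary is unhealthy. The zone IDs, load balancer names, and Regions below are placeholders.

```python
import boto3

route53 = boto3.client("route53")

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",  # canonical zone ID of the primary ALB (placeholder)
                "DNSName": "primary-alb-1234567890.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,      # fail over when the primary targets are unhealthy
            },
        },
    },
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "secondary",
            "Failover": "SECONDARY",
            "AliasTarget": {
                "HostedZoneId": "Z1H1FL5HABSF5",   # canonical zone ID of the DR ALB (placeholder)
                "DNSName": "dr-alb-0987654321.us-west-2.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    },
]
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE", ChangeBatch={"Changes": changes}
)
```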
Use infrastructure as code to stand up the DR environment in a second Region and create a cross Region read replica for the RDS database and configure AWS Backup to produce cross Region backups for the EC2 instances and the database every 45 seconds and restore instances from the newest backup during an incident and use an Amazon Route 53 geolocation policy so traffic shifts to the DR Region after a disaster is not viable because AWS Backup and snapshot based restores cannot achieve a 45 second RPO and restore based recovery of EC2 fleets typically cannot meet a strict 12 minute RTO. Geolocation routing sends users based on their location and does not perform health based failover during an outage.
Set up AWS Backup to create cross Region backups for the EC2 fleet and the database on a 45 second schedule and use infrastructure as code to create the DR networking and subnets and restore the backups onto new instances when needed and use an Amazon Route 53 simple policy to move users to the DR Region is incorrect because periodic backups cannot provide sub minute RPO and restoring instances during an event is unlikely to meet the 12 minute RTO. A simple routing policy does not provide automated failover driven by health checks so traffic will not reliably move during an outage.
Define the DR environment with infrastructure as code and migrate the database to Amazon Aurora PostgreSQL with an Aurora global database and use AWS Elastic Disaster Recovery for the EC2 instances and run the Auto Scaling group at full capacity in the DR Region and use an Amazon Route 53 failover policy to switch during an event would meet the objectives but it is not budget friendly. Running full capacity in the recovery Region doubles steady state cost and migrating to Aurora Global adds cost and complexity that is unnecessary when a cross Region read replica for RDS can satisfy the requirements.
Translate the numbers to mechanisms. Sub minute RPO requires continuous replication rather than periodic backups and a minutes level RTO needs pre wired routing with failover and some warm capacity. Prefer policies that use health checks and prefer designs that scale out after cutover to control cost.
Question 5
Riverton Digital Press is moving its editorial publishing site to AWS. The organization must allow continuous content edits by multiple contributors and also move 250 TB of archived media from an on premises NAS into Amazon S3. They will continue using their existing Site to Site VPN and will run web servers on Amazon EC2 instances behind an Application Load Balancer. Which combination of actions will fulfill these requirements? (Choose 2)
-
✓ B. Order an AWS Snowball Edge Storage Optimized device and copy the static archives to the device then return it to AWS
-
✓ D. Mount an Amazon Elastic File System EFS file system from the on premises servers over the VPN and also mount the same file system on the EC2 instances to present the latest content
The correct options are Order an AWS Snowball Edge Storage Optimized device and copy the static archives to the device then return it to AWS and Mount an Amazon Elastic File System EFS file system from the on premises servers over the VPN and also mount the same file system on the EC2 instances to present the latest content.
The Snowball Edge device is built for bulk offline transfers so moving 250 TB this way will be faster and more predictable than sending the data across a Site to Site VPN. After you ship the device back the service imports the data into Amazon S3 which satisfies the archive migration requirement.
Using Amazon EFS provides a managed NFS file system that supports many concurrent readers and writers which meets the need for continuous edits by multiple contributors. You can mount EFS from on premises over the existing VPN and mount it on all EC2 web servers so every instance serves the same up to date content.
The option Configure an Amazon Elastic Block Store EBS Multi Attach volume that is shared by the EC2 instances for content access then build a script to synchronize that volume with the NAS each night is incorrect because Multi Attach is limited to specific volume types and instances in a single Availability Zone and it is not a general purpose shared file system for many writers. It would also introduce lag and complexity with nightly synchronization and would not support concurrent edits safely without a cluster aware file system.
The option Create an Amazon EventBridge schedule that invokes an AWS Lambda function every hour to push updates from the NAS directly to the EC2 instances is incorrect because copying files into individual instances does not provide a single authoritative store. It risks configuration drift across servers and does not enable simultaneous collaborative editing.
The option Google Transfer Appliance is incorrect because it is a Google Cloud product and cannot be used to ingest data into AWS.
When a workload needs many writers across multiple servers think about managed shared file systems rather than copying files to instance disks. When data size is hundreds of terabytes and the network is constrained think about offline ingestion to meet timelines reliably.
Question 6
HarborView Logistics needs an internal application where employees submit travel and expense reimbursements, with activity spiking on the fifteenth and again on the last business day of each month. The finance team must be able to produce consistent month end reports from the stored data. The platform must be highly available and scale automatically while keeping operational effort as low as possible. Which combination of solutions will best meet these requirements with the least ongoing management? (Choose 2)
-
✓ A. Store the expense records in Amazon S3 and use Amazon Athena and Amazon QuickSight to query and visualize the data
-
✓ C. Host the web front end in Amazon S3 served by Amazon CloudFront and build the API with Amazon API Gateway using an AWS Lambda proxy integration
The correct options are Store the expense records in Amazon S3 and use Amazon Athena and Amazon QuickSight to query and visualize the data and Host the web front end in Amazon S3 served by Amazon CloudFront and build the API with Amazon API Gateway using an AWS Lambda proxy integration.
The first option uses durable object storage for all expense records and queries them with a serverless SQL service, then presents results through a fully managed visualization service. This design has virtually no infrastructure to manage, scales automatically with usage, and supports consistent month end reporting when you write immutable files and partition or snapshot the data so reports run against a stable view.
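A brief boto3 sketch of that reporting path, assuming an expenses database and claims table are already cataloged over the S3 data and a placeholder results bucket exists:

```python
import boto3

athena = boto3.client("athena")

# Run the month-end rollup against a stable, partitioned view of the data.
resp = athena.start_query_execution(
    QueryString=(
        "SELECT employee_id, SUM(amount) AS total_reimbursed "
        "FROM claims WHERE report_month = '2024-05' "
        "GROUP BY employee_id"
    ),
    QueryExecutionContext={"Database": "expenses"},
    ResultConfiguration={"OutputLocation": "s3://expense-report-results/month-end/"},
)
print("Query started:", resp["QueryExecutionId"])
```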
The second option delivers a static front end with global caching and a fully managed API layer that invokes serverless compute for business logic. This provides high availability by default and scales seamlessly during the mid month and end of month spikes while keeping operational effort very low because there are no servers or containers to patch or capacity to right size.
Deploy the application on Amazon EC2 behind an Application Load Balancer and use Amazon EC2 Auto Scaling with scheduled scale outs is not the best fit because you must manage instances, operating systems, images, patching, and scaling schedules. Scheduled actions may not match real demand and this increases ongoing management compared to a serverless approach.
Run the application on Amazon ECS with AWS Fargate behind an Application Load Balancer and use Service Auto Scaling with scheduled capacity adjustments reduces server maintenance but still requires building and updating container images, task definitions, deployments, and scaling configuration. This is more operational work than a fully serverless web and API design.
Persist the expense data in Amazon EMR and use Amazon QuickSight to report directly from the EMR cluster is unsuitable because that service is intended for big data processing rather than as a primary data store. Keeping a cluster available for reporting increases cost and operational complexity and it does not provide the simplicity and automatic scaling that serverless querying on object storage offers.
When a question highlights least management and spiky workloads, prefer serverless and fully managed services for storage, compute, and APIs. Pair object storage with serverless query engines for reporting and use managed front ends and APIs to minimize operations.
Question 7
Luma Media runs a public web API on Amazon EC2 instances in one Availability Zone. Leadership has asked a Solutions Architect to redesign the platform so it is highly available across at least two Availability Zones and enforces strong request filtering. The security team requires that inbound traffic be inspected for common web exploits and that any blocked requests are delivered to an external auditing service at logs.example.com for compliance review. What architecture should be implemented?
-
✓ C. Configure a Multi AZ Auto Scaling group from the application AMI with an Application Load Balancer in front, attach an AWS WAF web ACL with AWS Managed Rules, and enable WAF logging to Amazon Kinesis Data Firehose that delivers blocked requests to the logs.example.com auditing endpoint
The correct choice is Configure a Multi AZ Auto Scaling group from the application AMI with an Application Load Balancer in front, attach an AWS WAF web ACL with AWS Managed Rules, and enable WAF logging to Amazon Kinesis Data Firehose that delivers blocked requests to the logs.example.com auditing endpoint. This design achieves high availability by spreading instances across at least two Availability Zones and it inspects inbound requests with AWS WAF using managed rule groups. It also satisfies the requirement to deliver blocked requests to an external auditing service by using WAF logging sent to a Firehose delivery stream that posts to the specified endpoint.
Using a Multi AZ Auto Scaling group with an Application Load Balancer provides resilient capacity across multiple subnets and Availability Zones and it maintains health and replaces failed instances automatically. Attaching a web ACL to the load balancer centralizes inspection of all inbound traffic so the application instances only receive requests that pass the WAF rules.
Enabling AWS WAF logging to a Firehose delivery stream captures detailed information about allowed and blocked requests. Firehose can deliver to an HTTP endpoint destination which allows direct delivery of blocked events to the logs.example.com auditing service and this meets the compliance requirement without custom log parsing.
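Wiring the logging path looks roughly like the following sketch. The account ID, role, backup bucket, endpoint URL, and web ACL ARN are placeholders, and the delivery stream name must keep the aws-waf-logs- prefix that WAF requires.

```python
import boto3

firehose = boto3.client("firehose")
wafv2 = boto3.client("wafv2")

ACCOUNT = "111122223333"

# Delivery stream that posts records to the external auditing endpoint,
# with S3 as the backup location for records that fail HTTP delivery.
firehose.create_delivery_stream(
    DeliveryStreamName="aws-waf-logs-audit",
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {"Url": "https://logs.example.com/ingest", "Name": "audit-endpoint"},
        "S3Configuration": {
            "RoleARN": f"arn:aws:iam::{ACCOUNT}:role/firehose-delivery",
            "BucketARN": f"arn:aws:s3:::waf-log-backup-{ACCOUNT}",
        },
    },
)

# Point the web ACL's logging at the delivery stream and keep only blocked requests.
wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": f"arn:aws:wafv2:us-east-1:{ACCOUNT}:regional/webacl/api-acl/0123abcd4567ef89",
        "LogDestinationConfigs": [f"arn:aws:firehose:us-east-1:{ACCOUNT}:deliverystream/aws-waf-logs-audit"],
        "LoggingFilter": {
            "DefaultBehavior": "DROP",
            "Filters": [{
                "Behavior": "KEEP",
                "Requirement": "MEETS_ANY",
                "Conditions": [{"ActionCondition": {"Action": "BLOCK"}}],
            }],
        },
    }
)
```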
The option Set up an Application Load Balancer with a target group that includes the existing EC2 instances in the current Availability Zone, create Amazon Kinesis Data Firehose with the logs.example.com service as the destination, attach an AWS WAF web ACL to the ALB, enable WAF logging to the Firehose stream, and subscribe to AWS Managed Rules is not correct because it keeps all instances in a single Availability Zone and therefore does not provide the required high availability across at least two Availability Zones.
The option Use Google Cloud Armor in front of a Google Cloud HTTP Load Balancer and export blocked events from Cloud Logging to the logs.example.com auditing service is not correct because it proposes Google Cloud services while the workload runs on Amazon EC2 and the requirement is to redesign the platform within AWS.
The option Deploy an Application Load Balancer and register the EC2 instances as targets, attach an AWS WAF web ACL, enable logging with Amazon CloudWatch Logs, and use an AWS Lambda function to forward entries to the logs.example.com auditing service is not correct because it does not establish a Multi AZ Auto Scaling design. Routing logs through CloudWatch Logs and a custom Lambda forwarder also adds components to build and maintain, while WAF logging to a Kinesis Data Firehose stream with an HTTP endpoint destination delivers blocked request records to the external service directly.
Translate keywords into services. At least two Availability Zones points to an Auto Scaling group spanning multiple subnets with a load balancer. Inspect for common web exploits maps to AWS WAF with AWS Managed Rules. Deliver blocked requests to an external endpoint indicates enabling WAF logging to Amazon Kinesis Data Firehose with an HTTP endpoint destination.
Question 8
BrightPaws is a fast growing startup with a viral mobile photo app where people upload pet pictures and add captions, and active users have climbed past 12 million across several continents. The backend runs on Amazon EC2 with Amazon EFS and the instances are behind an Application Load Balancer. Traffic is unpredictable and spikes cause slow responses during busy evenings. What architectural changes should a Solutions Architect propose to cut costs and improve global performance?
-
✓ D. Place user photos in Amazon S3 with the Intelligent Tiering storage class and front S3 with Amazon CloudFront while using AWS Lambda to perform image processing
The correct option is Place user photos in Amazon S3 with the Intelligent Tiering storage class and front S3 with Amazon CloudFront while using AWS Lambda to perform image processing.
Moving images from a shared file system to object storage reduces operational overhead and cost while delivering virtually unlimited durability and throughput. Intelligent tiering automatically optimizes storage costs for unpredictable access patterns without retrieval fees and keeps performance high for hot objects. Placing a global content delivery network in front of the bucket caches images near users around the world which reduces latency and offloads traffic during spikes. Event driven serverless image processing scales transparently with demand and you only pay when processing happens which fits spiky workloads well.
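For the storage piece, uploads go straight into the Intelligent-Tiering class. The bucket and key below are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Store each upload directly in Intelligent-Tiering so rarely viewed photos
# move to cheaper access tiers automatically with no retrieval fees.
with open("pet-photo.jpg", "rb") as image:
    s3.put_object(
        Bucket="brightpaws-user-photos",
        Key="uploads/user-42/pet-photo.jpg",
        Body=image,
        ContentType="image/jpeg",
        StorageClass="INTELLIGENT_TIERING",
    )
```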
Create an Amazon CloudFront distribution and place the Application Load Balancer behind it and store user images in Amazon S3 using the S3 Standard Infrequent Access storage class is not optimal because Standard Infrequent Access adds retrieval fees and minimum storage duration charges that can increase costs when access is unpredictable. It also does not explicitly use the bucket as a CDN origin for images which limits caching benefits for the static content.
Use AWS Global Accelerator in front of the Application Load Balancer and migrate static files to Amazon FSx for Windows File Server while using an AWS Lambda function to compress images during the cutover does not address edge caching. Global Accelerator improves network paths to a regional endpoint but it does not cache content, and FSx for Windows File Server targets SMB workloads and adds licensing and operational cost rather than simplifying static image delivery.
Move the image bucket to Google Cloud Storage and serve content through Cloud CDN while continuing to run the dynamic service behind the existing Application Load Balancer introduces cross cloud complexity and data movement overhead without a clear benefit for this AWS based stack. It would complicate operations and costs compared to an AWS native solution that already provides global caching and serverless processing.
First identify which parts of the workload are static and spiky and map them to object storage with a CDN and serverless processing. Distinguish a CDN that caches content from a network accelerator that only optimizes routing. Watch for storage class retrieval fees and minimum duration charges when access patterns are unpredictable.
Question 9
BlueRiver Foods operates a primary data center and needs a private connection to AWS that is encrypted and delivers consistent low latency with high throughput near 4 Gbps. The team can budget roughly eight weeks for provisioning and is willing to manage the setup effort. Which approach provides end to end connectivity that satisfies these requirements while requiring the least additional infrastructure?
-
✓ C. Provision AWS Direct Connect and layer an AWS Site-to-Site VPN over it to encrypt traffic
The correct option is Provision AWS Direct Connect and layer an AWS Site-to-Site VPN over it to encrypt traffic.
This solution gives you a dedicated private circuit for predictable low latency and high throughput at the multi gigabit level, and it adds an IPsec VPN overlay to meet the encryption requirement end to end. Provisioning a dedicated circuit commonly takes several weeks which aligns with the available eight week window, and the setup can be managed by the team with minimal additional infrastructure because the VPN can be terminated on typical enterprise edge gear without adding third party appliances.
Create a Site-to-Site VPN from the data center to an Amazon VPC over the internet is not suitable because the public internet cannot guarantee consistent low latency, and aggregate throughput is limited and variable. This fails the requirement for predictable near 4 Gbps performance.
Provision AWS Direct Connect without adding a VPN does provide private and consistent connectivity, but it does not encrypt traffic by default. While MACsec can encrypt on supported dedicated links, it requires specific locations and compatible hardware which adds complexity and infrastructure, so this does not best meet the encryption and minimal addition requirements as stated.
Google Cloud Interconnect is a Google Cloud service and does not provide private connectivity to AWS, so it does not meet the requirement to connect the on premises data center to Amazon Web Services.
Map each requirement to a capability and eliminate any option that misses even one. When you see private connectivity with encryption, predictable latency, and multi gigabit throughput, expect a dedicated link paired with an IPsec overlay unless the scenario clearly states MACsec is available.
Question 10
NorthPeak Systems has a platform engineering group that manages environments using AWS CloudFormation and they need to ensure that mission critical data in Amazon RDS instances and Amazon EBS volumes is not removed if a production stack is deleted. What should the team implement to prevent accidental data loss when someone deletes the stack?
-
✓ B. Set DeletionPolicy of Retain on the RDS and EBS resources in the CloudFormation templates
The correct option is Set DeletionPolicy of Retain on the RDS and EBS resources in the CloudFormation templates. This guarantees that CloudFormation does not delete those resources when the stack is removed so the data remains intact.
This setting uses the DeletionPolicy attribute with the Retain value so CloudFormation will keep the Amazon RDS DB instance and Amazon EBS volume during stack deletion or replacement. The resources continue to exist outside the stack and must be cleaned up manually when appropriate. This directly prevents accidental loss of production data even if a user deletes the stack.
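A minimal sketch of the template pattern, expressed here as a Python dictionary passed to CloudFormation. The resource properties are illustrative placeholders rather than a production configuration.

```python
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "OrdersDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",        # keep the DB instance if the stack is deleted
            "UpdateReplacePolicy": "Retain",   # keep the old instance if an update forces replacement
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "50",
                "MasterUsername": "admin",
                "ManageMasterUserPassword": True,
            },
        },
        "AppDataVolume": {
            "Type": "AWS::EC2::Volume",
            "DeletionPolicy": "Retain",
            "Properties": {"AvailabilityZone": "us-east-1a", "Size": 100},
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="prod-data-stack",
    TemplateBody=json.dumps(template),
)
```

Deleting prod-data-stack later removes the stack record but leaves OrdersDatabase and AppDataVolume, along with their data, in place for manual cleanup.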
Create IAM policies that deny delete operations on RDS and EBS when the “aws:cloudformation:stack-name” tag is present is not the right approach because it blocks service delete APIs and would cause stack deletions to fail. It is brittle and hard to maintain and it can interfere with legitimate operations while leaving the stack in a failed state rather than cleanly retaining the resources.
Apply a CloudFormation stack policy that blocks deletion actions for RDS and EBS resources is incorrect because stack policies only govern what can be updated during a stack update and they do not apply during stack deletion. They cannot prevent resources from being deleted when the stack itself is deleted.
Enable termination protection on the production CloudFormation stack does not meet the requirement because it prevents deleting the entire stack rather than retaining specific resources. If someone disables termination protection then a delete could still remove the resources and the goal is to keep the data even if the stack is deleted.
When a question asks you to keep specific resources after a stack is deleted, map it to DeletionPolicy with Retain. Remember that stack policies only affect updates and termination protection blocks deletion of the whole stack rather than preserving select resources.
Question 11
SpryCart runs an Amazon RDS database in private subnets within an Amazon VPC where outbound internet access is not allowed. The team stored credentials in AWS Secrets Manager and enabled rotation with an AWS Lambda function placed in the same VPC. Recent rotation attempts fail and Amazon CloudWatch Logs show the Lambda function timing out when it tries to call Secrets Manager APIs. The environment must remain without internet egress. What should the team implement so that secret rotation completes successfully under these constraints?
-
✓ C. Create an interface VPC endpoint for Secrets Manager and ensure the rotation function uses it for API calls
The correct option is Create an interface VPC endpoint for Secrets Manager and ensure the rotation function uses it for API calls.
A Lambda function that runs in private subnets without internet egress cannot reach the public Secrets Manager endpoints. An interface VPC endpoint for Secrets Manager provides private connectivity over the AWS network so the function can call the Secrets Manager APIs without using the internet. Enable Private DNS on the endpoint so the standard regional Secrets Manager hostname resolves to the endpoint inside the VPC, or configure the function’s SDK client to use the endpoint specific DNS name. Verify that the endpoint security group allows connections from the Lambda function network interfaces and that VPC DNS resolution and hostnames are enabled.
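A sketch of the endpoint creation, with placeholder VPC, subnet, and security group IDs and an assumed Region:

```python
import boto3

REGION = "us-east-1"  # assumed Region for the example
ec2 = boto3.client("ec2", region_name=REGION)

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Interface",
    ServiceName=f"com.amazonaws.{REGION}.secretsmanager",
    SubnetIds=["subnet-0aaa1111bbbb2222c", "subnet-0ccc3333dddd4444e"],  # the rotation function's subnets
    SecurityGroupIds=["sg-0123456789abcdef0"],  # must allow TCP 443 from the function's security group
    PrivateDnsEnabled=True,  # the default secretsmanager hostname now resolves inside the VPC
)
```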
Create an interface VPC endpoint for the Lambda service in the VPC to allow the function to run without internet access does not address the failure. The timeout occurs when the function calls Secrets Manager, so the missing connectivity is to the Secrets Manager API and not to the Lambda control plane. A Lambda interface endpoint is not required for the function to reach Secrets Manager.
Configure a NAT gateway in the VPC and update private route tables to provide outbound access to AWS endpoints violates the requirement to keep the environment without internet egress. A NAT gateway provides internet access for private subnets and is unnecessary when a PrivateLink interface endpoint to Secrets Manager solves the problem.
Recreate the rotation function from the latest Secrets Manager blueprint to ensure SSL and TLS support does not fix a connectivity timeout. The issue is network reachability to the Secrets Manager API, not TLS support in the code.
When a function in private subnets must call an AWS API without egress, think interface VPC endpoint for that specific service. Confirm Private DNS is enabled and that security groups allow connections from the function to the endpoint.
Question 12
Northwind Labs uses AWS Organizations and has two organizational units named Analytics and Platform under the root. Due to regulatory policy the company must ensure that all workloads run only in ap-southeast-2, and the Platform OU must be limited to a defined list of Amazon EC2 instance types. A solutions architect needs to implement controls with minimal ongoing administration. Which combination of actions should be implemented to satisfy these requirements? (Choose 2)
-
✓ B. Create a Service Control Policy that uses the aws:RequestedRegion condition to deny all Regions except ap-southeast-2 and attach it to the organization root
-
✓ D. Attach a Service Control Policy to the Platform OU that uses the ec2:InstanceType condition to allow only the approved instance types
The correct options are Create a Service Control Policy that uses the aws:RequestedRegion condition to deny all Regions except ap-southeast-2 and attach it to the organization root and Attach a Service Control Policy to the Platform OU that uses the ec2:InstanceType condition to allow only the approved instance types.
An SCP at the organization root that denies all Regions except ap-southeast-2 by using aws:RequestedRegion provides a preventive and centralized control that applies to every account and OU. This satisfies the requirement that all workloads run only in the mandated Region while keeping administration minimal because a single policy enforces the rule everywhere.
An SCP on the Platform OU that limits RunInstances with the ec2:InstanceType condition sets the maximum permissions for that OU so only the approved instance types can be launched. This is also preventive and low maintenance since you update a single allow list when instance type approvals change.
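A minimal sketch of both guardrails with boto3 follows. The root ID, OU ID, approved instance type list, and the global services exempted from the Region deny are all hypothetical examples rather than a definitive policy set.

```python
import json
import boto3

orgs = boto3.client("organizations")

# Region guardrail intended for the organization root (root ID is hypothetical).
region_guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotAction": [
            # Global services that must stay reachable are commonly excluded;
            # the exact exception list depends on what the organization uses.
            "iam:*", "organizations:*", "route53:*", "cloudfront:*", "support:*"
        ],
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "ap-southeast-2"}}
    }]
}

# Instance type guardrail intended for the Platform OU (type list is an example).
instance_type_guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}}
    }]
}

for name, doc, target in [
    ("deny-outside-ap-southeast-2", region_guardrail, "r-examplerootid"),
    ("platform-approved-instance-types", instance_type_guardrail, "ou-exampleplatformid"),
]:
    policy = orgs.create_policy(
        Name=name, Description=name, Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(doc),
    )
    orgs.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=target)
```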
Deploy AWS Config rules in every account to detect noncompliant Regions and EC2 instance types and trigger Systems Manager Automation for remediation is detective and reactive rather than preventive. It adds operational overhead across accounts and still allows noncompliant actions to occur before remediation.
Create IAM users in all accounts and attach an inline policy to each that uses the aws:RequestedRegion condition to allow only ap-southeast-2 is incorrect because IAM policies cannot exceed the guardrails of Organizations and would not cover roles or the root user. It also creates significant administrative burden across many accounts.
Create a Service Control Policy that uses the ec2:Region condition and apply it to the root, Analytics, and Platform OUs is incorrect because ec2:Region is not the appropriate condition key for Region restrictions in Organizations and using it across multiple OUs is unnecessary when a single root attachment suffices. The correct key for Region guardrails is aws:RequestedRegion.
Prefer Service Control Policies for preventive and centralized guardrails. Use aws:RequestedRegion to restrict Regions at the organization root and use ec2:InstanceType on RunInstances to control instance types per OU. Remember that IAM policies cannot grant beyond SCP limits and often increase administration.
Question 13
NorthRiver Media plans to launch a microservices streaming service that is expected to grow from 8 million to 45 million users within nine months. The platform runs on Amazon ECS with AWS Fargate and all requests must use HTTPS. The solutions architect must enable blue or green deployments, send traffic through a load balancer, and adjust running tasks automatically by using Amazon CloudWatch alarms. Which solution should the team implement?
-
✓ B. Configure ECS services for blue or green deployments with an Application Load Balancer and enable ECS Service Auto Scaling per service
The correct answer is Configure ECS services for blue or green deployments with an Application Load Balancer and enable ECS Service Auto Scaling per service.
This choice satisfies every requirement. An Application Load Balancer supports HTTPS termination and advanced routing which is well suited to microservices and it integrates with Amazon ECS and AWS CodeDeploy to enable blue or green deployments using separate target groups and controlled traffic shifting. ECS Service Auto Scaling integrates with Amazon CloudWatch alarms to increase or decrease the desired task count so the service can react automatically to demand. This combination lets the team route all traffic over HTTPS, perform safe cutovers during deployments, and scale tasks up as usage grows and back down when demand falls.
The expected growth from 8 million to 45 million users within nine months is handled cleanly because Fargate removes capacity management, while service level auto scaling adjusts running tasks based on metrics. The ALB provides health checks and routing that help maintain availability during blue or green deployments and during scale events.
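For illustration, a minimal boto3 sketch of the service level scaling piece could look like the following. The cluster name, service name, capacity bounds, and the 60 percent CPU target are hypothetical values; target tracking creates and manages the underlying CloudWatch alarms for you.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical cluster and service names in the standard ECS resource ID format.
resource_id = "service/streaming-cluster/playback-api"

# Register the ECS service's desired task count as the scalable dimension.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=4,
    MaxCapacity=200,
)

# Target tracking keeps average CPU near 60 percent and adjusts task count
# automatically as traffic grows or falls.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleInCooldown": 120,
        "ScaleOutCooldown": 60,
    },
)
```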
Configure ECS services for blue or green deployments with a Network Load Balancer and request a higher tasks per service quota is not sufficient because it does not enable automatic scaling of running tasks with CloudWatch alarms and requesting a quota increase does not meet the requirement to adjust tasks automatically. While a Network Load Balancer can handle TLS, it lacks the application layer features that are commonly used in microservices and the option still fails to address autoscaling at the service level.
Configure ECS services for blue or green deployments with an Application Load Balancer and attach an Auto Scaling group that is managed by Cluster Autoscaler is incorrect because the workload runs on AWS Fargate, which does not use EC2 Auto Scaling groups or Cluster Autoscaler. That approach is for node scaling in Kubernetes or EC2 based clusters and it does not satisfy the requirement to scale ECS tasks with CloudWatch alarms.
Configure ECS services for rolling update deployments with an Application Load Balancer and use ECS Service Auto Scaling per service does not meet the deployment strategy requirement because the team specifically needs blue or green deployments, not rolling updates. Although the load balancing and autoscaling parts are valid, the deployment method is wrong for this scenario.
Map each requirement to the AWS feature that implements it. Use ALB when you need HTTPS and application layer routing, use CodeDeploy for ECS blue or green deployments with two target groups, and use ECS Service Auto Scaling tied to CloudWatch alarms to adjust task counts. If the compute is Fargate, do not choose solutions that rely on Auto Scaling groups.
Question 14
The Orion initiative at DataVista Labs is overspending on EC2, and its AWS account is deliberately kept separate from DataVista’s AWS Organization. To control costs, what should a solutions architect put in place to ensure developers in the Orion initiative can launch only t3.micro EC2 instances in the eu-west-1 Region?
-
✓ B. Apply an IAM policy in the project account that allows only t3.micro launches in eu-west-1 and attach it to developer roles
The correct option is Apply an IAM policy in the project account that allows only t3.micro launches in eu-west-1 and attach it to developer roles.
This approach uses identity based permissions inside the standalone AWS account to constrain EC2 launches. You attach the policy to developer roles and allow ec2:RunInstances only when the request meets conditions such as ec2:InstanceType equal to t3.micro and aws:RequestedRegion equal to eu-west-1. You can structure the policy as a narrowly scoped allow with these conditions, or as a broader allow paired with an explicit deny that blocks any other instance type or Region. This works even though the account is not part of an AWS Organization.
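A sketch of the narrowly scoped allow variant is shown below. The role name is hypothetical, and the list of supporting resource ARNs is an illustrative minimum; the ec2:InstanceType key only applies to the instance ARN, which is why the other resources appear in a separate statement.

```python
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyT3MicroInEuWest1",
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:eu-west-1:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:InstanceType": "t3.micro",
                    "aws:RequestedRegion": "eu-west-1"
                }
            }
        },
        {
            # RunInstances also touches these resources, which carry no instance
            # type key, so they are allowed separately and pinned to eu-west-1.
            "Sid": "AllowSupportingResources",
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": [
                "arn:aws:ec2:eu-west-1:*:image/*",
                "arn:aws:ec2:eu-west-1:*:network-interface/*",
                "arn:aws:ec2:eu-west-1:*:security-group/*",
                "arn:aws:ec2:eu-west-1:*:subnet/*",
                "arn:aws:ec2:eu-west-1:*:volume/*",
                "arn:aws:ec2:eu-west-1:*:key-pair/*"
            ]
        }
    ]
}

iam.put_role_policy(
    RoleName="orion-developer",              # hypothetical developer role
    PolicyName="restrict-ec2-to-t3-micro",
    PolicyDocument=json.dumps(policy_document),
)
```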
Attach a Service Control Policy that denies all EC2 launches except t3.micro in eu-west-1 to the account is incorrect because Service Control Policies apply only to accounts that are members of an AWS Organization. Since the Orion account is intentionally outside the Organization, an SCP cannot be attached and would have no effect.
Use Google Cloud Organization Policy to restrict machine types to e2-micro and limit the region to europe-west1 is incorrect because this is a Google Cloud control and it governs GCP projects. The environment in question is AWS and the machine type and region names given are specific to GCP rather than EC2.
Create a new developer account and move workloads to eu-west-1 then add it to the company AWS Organization and enforce a tag policy for Region placement is incorrect because tag policies only standardize and validate tags and they do not enforce where resources are created or which instance types can be launched. This option is also unnecessarily disruptive compared with directly enforcing the restriction through IAM in the existing account.
When an account is outside an AWS Organization, think IAM first. Constrain EC2 by conditioning ec2:RunInstances on ec2:InstanceType and aws:RequestedRegion, and remember SCPs only apply to organization member accounts.
Question 15
The research team at Orion Data Labs is rolling out a new machine learning service on six Amazon EC2 instances within a single AWS Region. They require very high throughput and very low network latency between all instances and they are not concerned with fault tolerance or hardware diversity. What should they implement to satisfy these needs?
-
✓ C. Place six EC2 instances in a cluster placement group and choose instance types that support enhanced networking
The correct option is Place six EC2 instances in a cluster placement group and choose instance types that support enhanced networking.
A cluster placement group packs instances close together within a single Availability Zone so traffic remains on the same high performance network fabric. This placement delivers the very high throughput and very low latency needed by tightly coupled workloads. Pairing the group with enhanced networking on ENA capable instance types increases packets per second and reduces latency and jitter, which directly supports the research team requirements.
The team is not concerned with fault tolerance or hardware diversity, and a cluster placement group intentionally trades those characteristics for the best possible instance to instance network performance within the group. This makes it the most suitable choice.
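A brief boto3 sketch of this setup follows. The group name is hypothetical and the AMI ID is a placeholder; the instance type shown is one example of an ENA capable type, not a requirement of the scenario.

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances onto the same high bandwidth,
# low latency network segment inside a single Availability Zone.
ec2.create_placement_group(GroupName="ml-cluster-pg", Strategy="cluster")

# Launch all six instances into the group so they share that network fabric.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.9xlarge",        # example of an ENA capable instance type
    MinCount=6,
    MaxCount=6,
    Placement={"GroupName": "ml-cluster-pg"},
)
```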
Distribute six EC2 instances across a spread placement group and attach an additional elastic network interface to each instance is not appropriate because a spread placement group places instances on distinct hardware to reduce correlated failures, which increases distance and can add latency. This choice does not improve interinstance throughput for tightly coupled communication.
Put six EC2 instances in a partition placement group and select instance types with enhanced networking targets large distributed systems where partitions are isolated on separate racks. That topology does not guarantee the very low latency or highest throughput between all instances even if the instances have faster networking capabilities.
Compute Engine is a Google Cloud service and does not apply to an AWS specific deployment requirement.
Map keywords to placement group types. Use cluster for very low latency and high throughput within one Availability Zone, spread for maximum isolation, and partition for large distributed systems.
Question 16
Following a shift from manually managed EC2 servers to an Auto Scaling group to handle a surge in users, the platform team at Apex Retail faces a patching issue. Security updates that run every 45 days require a reboot, and while an instance is rebooting, the Auto Scaling group replaces it, which leaves the fleet with fresh but unpatched instances. Which actions should a solutions architect propose to ensure instances are patched without being prematurely terminated? (Choose 2)
-
✓ B. Automate baking a patched AMI, update the launch template to that AMI, and trigger an Auto Scaling instance refresh
-
✓ E. Set the Auto Scaling termination policy to prefer instances launched from the oldest launch template or configuration
The correct options are Automate baking a patched AMI, update the launch template to that AMI, and trigger an Auto Scaling instance refresh and Set the Auto Scaling termination policy to prefer instances launched from the oldest launch template or configuration.
Using a golden image process to bake patches into an AMI and then updating the launch template removes the need to patch in place. Initiating an instance refresh performs a controlled rolling replacement that launches new instances from the updated AMI, waits for health checks, and then terminates the old instances. This prevents the reboot related unhealthy state that causes premature replacement because the fleet converges to already patched capacity rather than trying to patch running instances.
Configuring the termination policy to prefer the oldest launch template or configuration ensures that during scale in or while a refresh is progressing, the Auto Scaling group selects the oldest generation instances first. This makes it less likely that recently launched and patched instances are terminated before the rollout completes, which aligns with the goal of preserving patched capacity. Choosing the termination policy to prefer instances launched from the oldest launch template or configuration complements the instance refresh by prioritizing removal of unpatched instances.
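A minimal sketch of the rollout with boto3 is shown below. The launch template ID, AMI ID, and Auto Scaling group name are hypothetical, and the group is assumed to reference the $Latest launch template version so the refresh picks up the new AMI.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Publish a new launch template version that points at the freshly baked AMI.
ec2.create_launch_template_version(
    LaunchTemplateId="lt-0123456789abcdef0",
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": "ami-0fedcba9876543210"},
)

# Terminate instances from the oldest launch template first during scale in.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="apex-retail-web",
    TerminationPolicies=["OldestLaunchTemplate", "Default"],
)

# Roll the fleet onto the patched AMI while keeping most capacity in service.
autoscaling.start_instance_refresh(
    AutoScalingGroupName="apex-retail-web",
    Preferences={"MinHealthyPercentage": 90, "InstanceWarmup": 300},
)
```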
Place an Application Load Balancer in front of the Auto Scaling group and rely on target health checks during replacements is insufficient because health checks only control traffic routing. When a reboot makes an instance unhealthy, the Auto Scaling group can still replace it, so you would still end up with replacements that are not guaranteed to be patched.
Enable termination protection on EC2 instances in the Auto Scaling group is not effective because EC2 termination protection does not prevent the Auto Scaling group from terminating instances during health or scaling events. The correct protection mechanism within Auto Scaling is instance scale in protection, and even that would not resolve reboot related replacements during in place patching.
Stand up a parallel Auto Scaling group before the maintenance window and patch and reboot instances in both groups during the window adds cost and operational complexity without eliminating the root cause. Both groups would still mark rebooting instances as unhealthy and could replace them, while the simpler and safer approach is to roll out a patched AMI with an instance refresh.
Prefer immutable updates for fleets. Bake patches into an AMI, roll it out with an instance refresh, and set a termination policy such as OldestLaunchTemplate so scale in removes instances from the oldest template first and preserves the newly patched capacity.
Question 17
Northpeak Analytics is rolling out a global SaaS platform and wants clients to be routed automatically to the closest AWS Region while security teams require static IP addresses that customers can add to their allow lists. The application runs on Amazon EC2 instances behind a Network Load Balancer and uses an Auto Scaling group that spans four Availability Zones in each Region. What solution will fulfill these needs?
-
✓ B. Create an AWS Global Accelerator standard accelerator and add an endpoint group for the NLB in every active Region then share the accelerator’s static IP addresses with customers
The correct answer is Create an AWS Global Accelerator standard accelerator and add an endpoint group for the NLB in every active Region then share the accelerator’s static IP addresses with customers.
AWS Global Accelerator gives you two static anycast IP addresses that you can share with customers for allow lists and it can also support Bring Your Own IPs. Traffic is directed to the closest healthy AWS Region based on performance measurements and health checks, and endpoint groups let you register Network Load Balancers per Region with fine control using traffic dials and weights. This matches the need for automatic nearest Region routing while also providing a small and stable set of static IPs that satisfy strict security allow list requirements. Global Accelerator integrates natively with Network Load Balancers and performs fast failover across Regions which maintains availability during Regional issues.
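As a sketch of this design with boto3, the accelerator, listener, and per Region endpoint groups could be created as follows. The accelerator name, Regions, and NLB ARNs are hypothetical, and the Global Accelerator API is called in us-west-2 because it is a global service managed from that Region.

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="saas-global-entry", IpAddressType="IPV4", Enabled=True
)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

# The two static anycast IP addresses that customers add to their allow lists.
print(accelerator["Accelerator"]["IpSets"][0]["IpAddresses"])

listener = ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per active Region, each registering that Region's NLB.
for region, nlb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/web/abc"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/web/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```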
Create an Amazon CloudFront distribution with an origin group that includes the NLB in each Region and provide customers the IP ranges for CloudFront edge locations is not suitable because CloudFront does not offer fixed static IP addresses for a distribution and customers would need to allow list large and changing IP ranges. In addition, CloudFront focuses on HTTP and HTTPS content delivery and origin failover rather than steering each request to the closest Region for general TCP or UDP applications behind NLBs.
Configure Amazon Route 53 latency based routing with health checks and assign Elastic IP addresses to NLBs in each Region then distribute those IPs to customers does not meet the requirement cleanly. Route 53 can steer by latency only through DNS resolution while customers typically connect to a hostname rather than a specific IP, and if they try to connect directly to an IP to satisfy allow listing then latency based routing is bypassed. Assigning Elastic IPs to multi Availability Zone NLBs creates many addresses across Regions which is operationally complex for customer allow lists and it still does not provide a small global set of static entry points.
Create an AWS Global Accelerator custom routing accelerator and configure a listener and port mappings for the NLBs in each Region then give customers the accelerator IP addresses is incorrect because custom routing is intended to route traffic directly to specific EC2 instance IPs and ports and it does not support NLB endpoints. The appropriate fit for NLB based multi Region applications is the AWS Global Accelerator standard accelerator.
When a question combines global closest Region routing with a need for a small set of static IP addresses think of Global Accelerator standard rather than DNS or CDN. Map the requirement for static allow lists to anycast accelerator IPs.
Question 18
NovaMetrics Ltd. operates a managed API in its AWS account and a client in a different AWS account needs to invoke that service from automation using the CLI while maintaining least privilege and avoiding long lived credentials. How should NovaMetrics grant the client secure access to the service?
-
✓ D. Create a cross account IAM role with only the needed permissions and require an external ID in the trust policy then have the client assume the role using its ARN and the external ID
The correct answer is Create a cross account IAM role with only the needed permissions and require an external ID in the trust policy then have the client assume the role using its ARN and the external ID.
This approach enforces least privilege because the role policy grants only the exact permissions the client needs. The client uses AWS STS to assume the role and receives short lived credentials, which avoids long lived keys. Requiring an external ID in the trust policy mitigates the confused deputy problem when a different account or third party is allowed to assume the role. The client can use the CLI to call AssumeRole with the role ARN and the external ID and then invoke the API with the temporary credentials.
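A short sketch of the client side of that flow with boto3 follows. The role ARN, session name, and external ID are hypothetical values that NovaMetrics would share out of band.

```python
import boto3

sts = boto3.client("sts")

# Assume the cross account role and receive short lived credentials.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/novametrics-api-caller",
    RoleSessionName="client-automation",
    ExternalId="nm-7f3d9c",        # hypothetical external ID from the trust policy
    DurationSeconds=3600,
)

credentials = assumed["Credentials"]

# Build a session from the temporary credentials and call the API with it.
api_session = boto3.session.Session(
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
```

The same flow works from the CLI with aws sts assume-role and the --role-arn and --external-id options, which return the same temporary credentials for scripting.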
Publish the service with AWS PrivateLink and require an API key for callers is not sufficient because PrivateLink only provides private network connectivity and does not provide caller identity or authorization. API keys are not identity based access control and they are typically long lived and used for metering and throttling rather than for secure cross account authorization.
Create an IAM user for the client and share its access keys violates least privilege and uses long lived credentials that are hard to rotate and audit. Sharing static access keys across accounts increases risk and does not align with recommended practices for automation.
Create a cross account IAM role with only the needed permissions and let the client assume it using the role ARN without an external ID omits the external ID that helps prevent the confused deputy issue when granting access to another account. Without the external ID, there is increased risk that an unintended principal could trick a trusted service into using the role.
When you see cross account automation with least privilege and no long lived credentials, look for a role assumed with temporary credentials and require an external ID to prevent the confused deputy problem. Be cautious of answers that rely on API keys or shared access keys.
Question 19
Rivertown Retail Co. runs a customer portal on EC2 instances behind an Application Load Balancer. Orders are stored in an Amazon RDS for MySQL database and related PDFs are kept in Amazon S3. The finance team’s ad hoc reporting slows the production database. Leadership requires a disaster recovery strategy that can keep the application available during a regional outage and that limits data loss to only a few minutes and that also removes the reporting workload from the primary database. What should the solutions architect build?
-
✓ B. Create an RDS for MySQL cross-Region read replica and route finance queries to that replica, enable S3 Cross-Region Replication to a bucket in the recovery Region, build AMIs of the web tier and copy them to that Region, and during a disaster promote the replica then launch EC2 from the AMIs behind a new ALB and update the app to use the replicated bucket
The correct option is Create an RDS for MySQL cross-Region read replica and route finance queries to that replica, enable S3 Cross-Region Replication to a bucket in the recovery Region, build AMIs of the web tier and copy them to that Region, and during a disaster promote the replica then launch EC2 from the AMIs behind a new ALB and update the app to use the replicated bucket.
This solution removes the reporting workload from the production database by directing finance queries to a cross Region read replica. The replica can serve read traffic which offloads ad hoc reporting and protects the primary from heavy query spikes. Because the replica uses asynchronous replication, it typically lags by seconds to minutes which aligns with an RPO of only a few minutes.
For disaster recovery, promoting the read replica in the secondary Region creates a standalone primary that the application can use. Having AMIs prebuilt and copied to the recovery Region allows rapid instance launches, and placing those instances behind a new Application Load Balancer restores the web tier quickly to meet a low RTO. S3 Cross Region Replication keeps the PDFs synchronized to a bucket in the recovery Region so the application can seamlessly switch to the replicated bucket during failover.
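A brief boto3 sketch of the database portion is shown below. The Regions, account ID, and instance identifiers are hypothetical; the first call runs at build time and the remaining calls run in the recovery Region during failover.

```python
import boto3

rds_dr = boto3.client("rds", region_name="us-west-2")  # hypothetical recovery Region

# Build time: create the cross Region read replica that finance will query.
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-replica-west",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:orders-primary",
    SourceRegion="us-east-1",  # lets boto3 handle the cross Region copy details
)

# Failover time: promote the replica so it becomes a standalone writable primary.
rds_dr.promote_read_replica(DBInstanceIdentifier="orders-replica-west")
rds_dr.get_waiter("db_instance_available").wait(DBInstanceIdentifier="orders-replica-west")
```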
Migrate the transactional database to Amazon DynamoDB with global tables, direct the finance team to query a global table in a secondary Region, use an AWS Lambda function on a schedule to copy objects to an S3 bucket in that Region, and stand up EC2 and a new ALB there while pointing the app to the new bucket is not appropriate because it requires a full rearchitecture from a relational schema to a NoSQL model, and ad hoc SQL style reporting is not a good fit for that service. A scheduled Lambda copy for S3 is operationally brittle and provides weaker replication guarantees compared to native S3 Cross Region Replication.
Launch more EC2 instances in a second Region and register them with the existing ALB, create an RDS read replica in the second Region for finance use, enable S3 Cross-Region Replication, and in an outage promote the replica and repoint the application is invalid because an Application Load Balancer is regional and cannot register targets from another Region. This design would not function as described and does not provide a clean failover path for the web tier.
Replatform the workload by moving the database to Google Cloud Spanner, export reports to BigQuery, front the app with Google Cloud Load Balancing, and store documents in dual-region Cloud Storage for resilience is a cross cloud migration that does not meet the requirement to keep the existing application available with minimal changes. It is a major replatforming effort and Spanner is not a drop in replacement for MySQL which makes it unsuitable for this scenario.
Map the stated RPO and RTO to native services that provide near real time replication and fast failover. When you see read heavy analytics slowing a primary database, consider read replicas for offloading. Remember that an Application Load Balancer is regional and that cross Region web tiers need their own load balancer with DNS based failover such as Route 53.
Question 20
Northwind Publications stores media in Amazon S3 with objects under the prefixes s3://cdn-media-001/photos and s3://cdn-media-001/thumbs. Newly uploaded photos receive heavy traffic, and after 60 days the photos are rarely read while the thumbnails continue to be accessed. After 210 days the team wants to archive both the photos and the thumbnails. The design must remain highly available across multiple Availability Zones to mitigate the impact of an AZ outage. Which actions will deliver the most cost effective approach while meeting these goals? (Choose 2)
-
✓ B. Create a lifecycle rule that transitions all objects to S3 Glacier Flexible Retrieval after 210 days
-
✓ D. Create a lifecycle rule that transitions only the photos prefix to S3 Standard-IA after 60 days
The correct options are Create a lifecycle rule that transitions all objects to S3 Glacier Flexible Retrieval after 210 days and Create a lifecycle rule that transitions only the photos prefix to S3 Standard-IA after 60 days.
Moving the photos to Standard-IA after 60 days aligns with the stated access pattern because newly uploaded photos are hot and then become infrequently accessed after about two months. Standard-IA is designed for infrequent access while still providing multi Availability Zone resilience, so it reduces storage cost without violating the high availability requirement. Keeping thumbnails in Standard during this period avoids unnecessary retrieval and request charges since thumbnails remain frequently accessed.
Archiving both photos and thumbnails after 210 days using Flexible Retrieval meets the long term retention goal while remaining highly durable and available across multiple Availability Zones. This class offers significantly lower storage cost than active tiers while still enabling expedited or standard restores when needed, which makes it a cost effective archive choice for content that is rarely read after seven months.
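A minimal sketch of these two lifecycle rules with boto3 follows. The rule IDs are hypothetical, and GLACIER is the S3 API value that corresponds to the S3 Glacier Flexible Retrieval storage class.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="cdn-media-001",
    LifecycleConfiguration={
        "Rules": [
            {
                # Only the photos prefix moves to Standard-IA after 60 days.
                "ID": "photos-to-standard-ia",
                "Filter": {"Prefix": "photos/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 60, "StorageClass": "STANDARD_IA"}],
            },
            {
                # Everything, photos and thumbs, is archived after 210 days.
                "ID": "archive-everything",
                "Filter": {"Prefix": ""},   # empty prefix applies to all objects
                "Status": "Enabled",
                "Transitions": [{"Days": 210, "StorageClass": "GLACIER"}],
            },
        ]
    },
)
```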
Create a lifecycle rule that transitions only the photos prefix to S3 One Zone-IA after 60 days is not appropriate because One Zone-IA stores data in a single Availability Zone. The requirement explicitly calls for high availability across multiple Availability Zones, so this choice does not meet the resiliency goal.
Create a lifecycle rule that transitions all objects to S3 Standard-IA after 60 days moves thumbnails into an infrequent access tier even though thumbnails continue to be accessed. That would increase retrieval and request costs for the thumbnails and could degrade performance for a hot access pattern, which is not the most cost effective or suitable approach.
Create a lifecycle rule that transitions only the photos prefix to S3 Glacier after 210 days does not archive the thumbnails, which fails to meet the requirement to archive both categories. The name S3 Glacier by itself is also legacy terminology because the current archive classes are S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive, so the option is both incomplete in scope and outdated in naming.
Map the access pattern to storage classes and availability needs. If you see high availability across multiple Availability Zones then avoid One Zone-IA. Use lifecycle transitions to move only the cold prefixes to infrequent access tiers and then archive all data when it becomes rarely accessed.