Free AWS Solution Architect Practice Exams

AWS Solutions Architect Practice Tests

Passing the AWS Certified Solutions Architect Associate exam is one of the most powerful ways to demonstrate your ability to design secure, reliable, and cost-effective applications in the AWS Cloud.

The AWS solutions architect designation validates not just your technical skills, but also your capacity to think like an architect, making design choices that balance performance, scalability, and cost.

The AWS Solutions Architect certification proves you can analyze requirements, map them to AWS services, and create solutions that align with best practices for cloud architecture. The Solutions Architect exam tests your ability to design resilient systems, secure workloads, plan cost-optimized architectures, and implement strategies for high availability and fault tolerance.

Is the AWS Solution Architect cert worth it?

It is. Don’t let anyone tell you AWS certification isn’t worth it, especially for a designation as widely respected as this one.

Earning the AWS Solution Architect certification signals to employers and peers that you are prepared for a career in cloud architecture. It shows you can lead projects that involve designing end-to-end AWS solutions, making you an essential part of any team building applications in the cloud.

If you are interested in testing your skills and seeing whether you’ve got what it takes to pass the AWS Solutions Architect Associate exam, test your mettle on the practice exam questions below. If answering them is well within your skill set, you just might be ready to schedule and pass the exam.

Riverton Analytics wants to surface current AWS spend in its internal finance portal. The finance lead needs a programmatic solution that can pull year-to-date costs for the current fiscal year and also return a forecast for the next 9 months while keeping operations to a minimum. What approach should the solutions architect recommend?

  • ❏ A. Deliver AWS Cost and Usage Reports to Amazon S3 and analyze with Amazon Athena to calculate a forecast

  • ❏ B. Configure AWS Budgets to push cost data to the company over FTP

  • ❏ C. Use the AWS Cost Explorer API to query cost and usage data and retrieve forecasts, handling large results with pagination

  • ❏ D. Download Cost Explorer CSV exports and parse them to populate the dashboard

Riverton Insights stores quarterly analytics in an Amazon RDS for PostgreSQL instance and plans to publish this information through a REST API for a public dashboard. Demand is idle for long stretches but can surge to thousands of requests within a short window. Which approach should the solutions architect use to minimize idle cost while seamlessly handling sudden traffic spikes?

  • ❏ A. Amazon API Gateway with AWS Elastic Beanstalk

  • ❏ B. Amazon API Gateway with AWS Lambda

  • ❏ C. Amazon API Gateway with Amazon EC2 and Auto Scaling

  • ❏ D. Amazon API Gateway with Amazon ECS

A media analytics startup runs several Windows-based applications in its private data center and plans to migrate to AWS within the next 45 days. They require a managed shared file service that multiple Windows applications can access at the same time over SMB without building replication or custom synchronization. The file storage must be able to join their existing self-managed Active Directory domain that remains on premises. Which AWS service should they choose to minimize integration work?

  • ❏ A. AWS Storage Gateway File Gateway

  • ❏ B. Amazon FSx for Lustre

  • ❏ C. Amazon FSx for Windows File Server

  • ❏ D. Amazon Elastic File System (Amazon EFS)

A research consortium stores downloadable climate datasets in an Amazon S3 bucket. Users across North America, Europe, and Asia access these files through a vanity URL that resolves to the consortium’s domain. The team requires consistently low latency for global downloads and will keep DNS in Amazon Route 53. What should the team implement?

  • ❏ A. AWS Global Accelerator

  • ❏ B. Deploy an Amazon CloudFront distribution with the S3 bucket as the origin and create an ALIAS record in Route 53 that maps the domain to the distribution

  • ❏ C. Configure a Route 53 traffic policy with geolocation rules and health checks and publish an A record to route queries among endpoints

  • ❏ D. Deploy an Amazon CloudFront distribution with the S3 bucket as the origin and create a CNAME record in Route 53 that points the domain to the distribution

A fintech startup named MeridianPay is rolling out a multi-account AWS landing zone across 14 accounts in three Regions and wants to consolidate security telemetry from AWS services and approved third-party tools into one place. The security team needs to evaluate organization-wide posture and accelerate detection and response while avoiding heavy custom code, manual integrations, and bespoke normalization. Which approach provides these outcomes with the least engineering effort?

  • ❏ A. Create a governed data lake with AWS Lake Formation and use AWS Glue jobs to ingest and standardize security logs for centralized analysis

  • ❏ B. Query disparate security logs in multiple S3 buckets using Amazon Athena saved queries and publish dashboards in Amazon QuickSight

  • ❏ C. Use Amazon Security Lake to automatically collect and normalize security events from AWS services and approved partners into a centrally managed S3 data lake

  • ❏ D. Build a cross-account AWS Lambda aggregator that pulls logs and writes CSV files to a central Amazon S3 bucket

A global media firm operates a distributed web platform on Amazon EC2 behind an Application Load Balancer, with instances managed by an Auto Scaling group. The backend is an Amazon Aurora MySQL cluster deployed across three Availability Zones in one AWS Region. The firm is expanding to a far-away continent and needs the highest availability with minimal disruption during Regional failure or maintenance. What approach should the firm use to extend the application to the new Region while meeting these objectives?

  • ❏ A. Build the application in the new Region with a fresh Aurora MySQL cluster, use AWS Database Migration Service for ongoing replication from the primary, and use Route 53 failover routing to the new Region

  • ❏ B. Stand up the web tier in the new Region, use Aurora Global Database to create a secondary Region cluster, configure Amazon Route 53 health checks with failover routing, and promote the secondary to primary when needed

  • ❏ C. Enlarge the existing Auto Scaling group to span both Regions, enable Aurora Global Database between Regions, and use Route 53 health checks with failover to the new Region

  • ❏ D. Deploy the application in the new Region and create a cross-Region Aurora MySQL read replica, use Route 53 failover, and promote the replica on primary failure

An insurance startup hosts a public website on a Windows-based server in a colocation facility. The team is migrating the site to Amazon EC2 Windows instances spread across three Availability Zones, and the app currently uses a shared SMB file share on an on-premises NAS. Which AWS replacement for this NAS file share will provide the highest resilience and durability for the Windows web tier?

  • ❏ A. Amazon EBS

  • ❏ B. Amazon FSx for Windows File Server

  • ❏ C. AWS Storage Gateway

  • ❏ D. Amazon Elastic File System (Amazon EFS)

HarborStream is a global media startup that needs to deliver low latency live broadcasts and a library of on demand videos to viewers across the world. To improve playback by caching content near users, which AWS service should they use for both live and on demand distribution?

  • ❏ A. AWS Global Accelerator

  • ❏ B. AWS Elemental MediaPackage

  • ❏ C. Amazon CloudFront

  • ❏ D. Amazon Route 53

A retail analytics firm operates 650 Amazon EC2 instances that run recurring batch jobs to process sales data. The team must deploy a third-party agent to every instance rapidly and expects to repeat similar rollouts in the future. The approach should be straightforward to operate and automatically cover new EC2 instances as they are added. What should a solutions architect recommend?

  • ❏ A. AWS Systems Manager Maintenance Windows

  • ❏ B. AWS CodeDeploy

  • ❏ C. AWS Systems Manager Run Command

  • ❏ D. AWS Systems Manager Patch Manager

A media analytics firm operates a three-tier stack in a VPC across two Availability Zones. The web tier runs on Amazon EC2 instances in public subnets, the application tier runs on EC2 instances in private subnets, and the data tier uses an Amazon RDS for MySQL DB instance in private subnets. The security team requires that the database accept connections only from the application tier and from nowhere else. How should you configure access to meet this requirement?

  • ❏ A. Create VPC peering between the public and private subnets and a separate peering between the private and database subnets

  • ❏ B. Associate a new route table with the database subnets that removes routes to the public subnet CIDR ranges

  • ❏ C. Attach a security group to the RDS instance that permits the database port only from the application tier security group

  • ❏ D. Apply a network ACL to the database subnets that denies all sources except the application subnets CIDR blocks

 

An independent news cooperative is retiring its self-hosted servers and moving to AWS to minimize operations. The flagship site must serve cached static assets as well as request-driven dynamic content and should reach users worldwide with low latency. What is the most cost-effective serverless architecture to meet these requirements?

  • ❏ A. Host both static and dynamic content on Amazon EC2 with Amazon RDS and place Amazon CloudFront in front

  • ❏ B. Store both static and dynamic content in Amazon S3 and use Amazon CloudFront for distribution

  • ❏ C. Keep static assets on Amazon S3 and use AWS Lambda with Amazon DynamoDB for dynamic requests, with Amazon CloudFront providing global delivery

  • ❏ D. Serve static files from Amazon S3 and generate dynamic content on Amazon ECS with AWS Fargate and Amazon RDS, fronted by Amazon CloudFront

A startup named Alpine Pixel Labs is launching a microservices workflow on Amazon ECS where front-end tasks receive and transform requests, then hand off the payload to separate back-end tasks that perform intensive processing and persist results. The team wants the tiers to be loosely coupled with durable buffering so that spikes or failures in one layer do not disrupt the other. What should the architect implement?

  • ❏ A. Use Amazon EventBridge rules to route events from the front-end to run the back-end ECS task directly

  • ❏ B. Create an Amazon SQS queue that pushes messages to the back-end; configure the front-end to send messages

  • ❏ C. Provision an Amazon SQS standard queue; have the front-end enqueue work items and the back-end poll and process messages

  • ❏ D. Set up an Amazon Kinesis Data Firehose delivery stream that writes to Amazon S3; make the front-end send records to the stream and have the back-end read from the S3 bucket

A metropolitan toll operator is deploying thousands of roadside sensors that together emit about 1.8 TB of alert messages each day. Each alert is roughly 3 KB and includes vehicle metadata and plate details when barrier violations occur. The company needs to ingest and store these alerts for later analytics using a highly available, low-cost approach without managing servers. They require immediate access to the most recent 21 days of data and want to archive anything older than 21 days. Which solution is the most operationally efficient for these needs?

  • ❏ A. Deploy Amazon EC2 instances in an Auto Scaling group across two Availability Zones behind an Application Load Balancer to receive alerts, write them to Amazon S3, and configure an S3 Lifecycle rule to transition objects to S3 Glacier Flexible Retrieval after 21 days

  • ❏ B. Create an Amazon Kinesis Data Firehose delivery stream that sends alerts to Amazon OpenSearch Service, take daily snapshots, and delete indices older than 21 days

  • ❏ C. Create an Amazon Kinesis Data Firehose delivery stream to deliver alerts directly to an Amazon S3 bucket and apply an S3 Lifecycle rule to move data older than 21 days to S3 Glacier Flexible Retrieval

  • ❏ D. Use an Amazon SQS queue with a message retention period of 21 days, have consumers process and copy 21-day-old messages to Amazon S3, then delete them from the queue

Northwind Analytics is launching an internal insights API on Amazon API Gateway to deliver confidential KPIs to employees. Only requests from four corporate CIDR blocks (10.20.0.0/16, 10.21.0.0/16, 172.18.0.0/20, and 192.168.50.0/24) should be allowed, and calls from any other IP addresses must be blocked. The team wants the simplest operational approach to enforce this restriction. What should the solutions architect implement?

  • ❏ A. Deploy the API Gateway API as Regional in a public subnet and associate a security group that allows only the approved CIDR ranges

  • ❏ B. Attach an API Gateway resource policy that allows only the listed CIDR blocks and denies all other source IPs

  • ❏ C. Put Amazon CloudFront in front of the API and restrict access by creating an IP allow list at the edge

  • ❏ D. Modify the security group attached to the API Gateway API to allow only the corporate IP ranges

A data engineering team at RiverPoint Health is building a service to parse and extract fields from application log files. The logs arrive unpredictably with occasional bursts and idle periods, individual files can be as large as 900 MB, and each file takes about 55 seconds to process. What is the most cost-effective way to process the files as they are created?

  • ❏ A. Write the log files to an Amazon EC2 instance with an attached Amazon EBS volume, process them on the instance, then upload results to Amazon S3

  • ❏ B. Store the log files in an Amazon S3 bucket and use an S3 event notification to trigger an AWS Lambda function to process each object

  • ❏ C. Write the log files to an Amazon S3 bucket and configure an event notification to directly invoke an Amazon ECS task to process and save results

  • ❏ D. Store the log files in Amazon S3 and run an AWS Glue job on object creation to perform the processing and write outputs

BrightWave Media needs to patch a single Amazon EC2 instance that belongs to an Auto Scaling group that uses step scaling. The instance fails Elastic Load Balancing health checks for about 6 minutes during the update, and the group immediately launches a replacement. What actions should you recommend to complete the maintenance quickly while avoiding unnecessary extra capacity? (Choose 2)

  • ❏ A. Suspend the ScheduledActions process for the Auto Scaling group, apply the patch, then set the instance health to healthy and resume ScheduledActions

  • ❏ B. Place the instance in Standby, perform the update, then exit Standby to return it to service

  • ❏ C. Delete the Auto Scaling group, patch the instance, then recreate the Auto Scaling group and repopulate it using manual scaling

  • ❏ D. Suspend the ReplaceUnhealthy process for the Auto Scaling group, patch the instance, then mark the instance healthy and resume ReplaceUnhealthy

  • ❏ E. Create a snapshot and AMI from the instance, launch a new instance from that AMI to patch it, add it back to the Auto Scaling group, and terminate the original instance

An architect for Alpine BioMetrics is moving a distributed workload to AWS. The stack includes an Apache Cassandra cluster for operational data, a containerized Ubuntu Linux application tier, and a set of Microsoft SQL Server databases for transactional storage. The business requires a highly available and durable design with minimal ongoing maintenance and wants to migrate without changing database schemas. Which combination of AWS services should be used?

  • ❏ A. Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on Fargate, using Amazon RDS for Microsoft SQL Server for the relational tier

  • ❏ B. Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate, using Amazon RDS for Microsoft SQL Server for the relational tier

  • ❏ C. Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate, using Amazon Aurora for the relational tier

  • ❏ D. Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on EC2, using Amazon RDS for Microsoft SQL Server for the relational tier

A geospatial imaging startup runs nightly batch jobs that produce roughly 500 GB of logs and metadata. These files are written to an on-premises NFS array at the headquarters, but the array is difficult to scale and cannot keep pace with growth. The team wants a cloud-backed approach that keeps costs low, preserves NFS compatibility with existing tools, and automatically moves infrequently accessed data to cheaper tiers. What solution should they implement to achieve this most cost-effectively?

  • ❏ A. Deploy AWS Storage Gateway Volume Gateway in cached mode, present iSCSI to a local file server, build an NFS share on top, store snapshots in Amazon S3 Glacier Deep Archive, and orchestrate retention with AWS Backup

  • ❏ B. Create an Amazon EFS file system using One Zone-IA, migrate data with AWS DataSync, mount the file system over NFS, and enable EFS lifecycle management to move infrequently accessed files

  • ❏ C. Install AWS Storage Gateway File Gateway on premises, expose an NFS file share to the workloads, store files in Amazon S3, and use S3 Lifecycle policies to transition cold objects to lower-cost storage classes

  • ❏ D. Replace NFS with Amazon FSx for Windows File Server, turn on data deduplication and scheduled backups, archive backups to Amazon S3 Glacier, and reconfigure the application to access files over SMB

A network engineer at a fintech startup launched an Amazon EC2 instance in a public subnet. They opened the required inbound port in the instance’s security group and also allowed the same inbound port in the subnet’s network ACL, yet remote clients still cannot reach the service on TCP port 8443. What should you change to restore connectivity?

  • ❏ A. Network ACLs are stateful, so allowing the inbound port is sufficient, while Security Groups are stateless and must allow both inbound and outbound traffic

  • ❏ B. Add a 0.0.0.0/0 route to an internet gateway in the subnet route table

  • ❏ C. Because Security Groups are stateful, keep the inbound rule on the service port there and update the Network ACL to allow both inbound to the service port and outbound ephemeral ports such as 49152-65535

  • ❏ D. Attach an IAM role to the instance to grant network access from clients

A genomics startup runs a compute-heavy pipeline on a pool of several dozen Amazon EC2 instances that require extremely high random I/O to local storage. The application replicates and keeps the working dataset on each node, and the workflow can tolerate instance failures because a replacement instance quickly rebuilds its local data before proceeding. What is the most cost-effective and resource-efficient way to provision storage for this EC2 fleet?

  • ❏ A. Amazon EFS

  • ❏ B. Amazon EBS

  • ❏ C. Amazon EC2 Instance Store

  • ❏ D. Amazon S3

A media analytics startup plans to run about 60 Linux Amazon EC2 instances distributed across three Availability Zones in a single AWS Region. The application needs a shared file store that all instances can access at the same time. The team wants a solution that scales automatically with growth and is simple to set up. The storage must be mounted using NFS. Which approach will fulfill these needs?

  • ❏ A. Amazon S3 with a gateway endpoint to provide NFS access

  • ❏ B. Amazon FSx for Windows File Server

  • ❏ C. Amazon EFS with mount targets in each Availability Zone

  • ❏ D. An Amazon EBS volume using Multi-Attach across all Availability Zones

A compliance team at Polar Freight Systems requires that every change to an item in an Amazon DynamoDB table be captured as an auditable record outside the table. What is the most efficient way to capture and store each modification?

  • ❏ A. Use Amazon CloudTrail and run an EC2-based parser to read CloudTrail logs and copy changed items into another DynamoDB table

  • ❏ B. Enable DynamoDB Streams with NEW_AND_OLD_IMAGES and invoke an AWS Lambda function to persist each change record to Amazon S3

  • ❏ C. Create DynamoDB Global Tables across two Regions and export stream records directly to Amazon S3

  • ❏ D. Send DynamoDB events to Amazon CloudWatch Logs and have AWS Lambda filter deletes and archive them to Amazon S3

A regional logistics company operates an internal case-tracking app for its service desk. The app runs on Amazon EC2 instances in an Auto Scaling group. During office hours the group scales out to 12 instances and overnight it scales back to 3. Employees report sluggish performance right at the start of the morning shift. How should a Solutions Architect configure the Auto Scaling group to eliminate the morning slowdowns while keeping costs as low as possible?

  • ❏ A. Use target tracking with a lower average CPU target and shorten the cooldown

  • ❏ B. Add an Auto Scaling warm pool to pre-initialize instances

  • ❏ C. Create a scheduled action that sets the desired capacity to 12 shortly before office hours begin

  • ❏ D. Create a scheduled action that sets the minimum and maximum capacity to 12 before office hours

A biotech research lab migrated its shared analysis data to Amazon Elastic File System so that several Amazon EC2 instances in two private subnets can use the same files. The security team must ensure that only designated instances can mount and read the file system while other instances in the VPC are blocked. Which controls should be implemented to meet this requirement? (Choose 2)

  • ❏ A. Attach IAM policies to approved instance roles allowing efs:ClientMount and efs:ClientRead for the EFS file system

  • ❏ B. Configure network ACLs on the subnets to restrict NFS traffic to the file system

  • ❏ C. Enable Amazon GuardDuty to detect and automatically stop unauthorized mounts to the EFS file system

  • ❏ D. Apply VPC security groups on the EFS mount targets that allow NFS only from the approved instances’ security groups

  • ❏ E. Encrypt the file system with a customer managed AWS KMS key to limit which instances can access it

A digital ticketing startup uses Amazon SQS to decouple its purchase workflow. The operations team has noticed that some order messages repeatedly fail to be processed by consumers and are retried multiple times, which slows the rest of the queue. What should the solutions architect implement to reliably capture these failed messages for later inspection while allowing normal traffic to continue?

  • ❏ A. Increase the queue’s visibility timeout to 90 seconds

  • ❏ B. Use long polling to reduce empty receives

  • ❏ C. Configure a dead-letter queue for messages that exceed the max receive count

  • ❏ D. Use a temporary response queue for each failed message

A travel booking startup runs its transactional workload on Amazon Aurora. Performance is stable except when quarter-end revenue dashboards are generated, when Amazon CloudWatch shows simultaneous spikes in Read IOPS and CPUUtilization. What is the most cost-effective way to offload this reporting workload from the primary database during those periods?

  • ❏ A. Deploy Amazon ElastiCache to cache the reporting query results

  • ❏ B. Build an Amazon Redshift cluster and run the analytics there

  • ❏ C. Add an Aurora read replica and route reporting to the reader endpoint

  • ❏ D. Upgrade the Aurora DB instance to a larger class with more vCPUs

An engineering lead at a digital publishing startup plans to add Amazon RDS read replicas to improve read throughput for a multi-tenant editing platform. Before proceeding, the lead wants clarity on how data transfer is billed when setting up and using RDS read replicas. Which statement about RDS read replica data transfer pricing is accurate?

  • ❏ A. Data replicated to a read replica in the same Availability Zone is billed for data transfer

  • ❏ B. There are no data transfer fees when replicating to a read replica in another AWS Region

  • ❏ C. Replicating from a primary DB instance to a read replica in a different AWS Region incurs inter-Region data transfer charges

  • ❏ D. Amazon RDS Proxy

A regional insurance carrier plans to deploy an Amazon RDS Multi-AZ database to store policy and claim transactions. Their actuarial modeling application runs in the headquarters data center, and when employees are in the office the application must connect directly to the RDS database. The company needs this connectivity to be secure and efficient. Which approach provides the most secure connectivity?

  • ❏ A. Create a VPC with two public subnets, host the RDS instance in those public subnets, and allow the headquarters CIDR in the database security group to permit direct internet access

  • ❏ B. Place the RDS database in public subnets within a VPC and have office users connect using AWS Client VPN from their desktops

  • ❏ C. Build a VPC with private subnets in separate Availability Zones, deploy the Multi-AZ RDS in those private subnets, and connect the corporate network to the VPC using AWS Site-to-Site VPN with a customer gateway

  • ❏ D. Provision AWS Direct Connect with a public virtual interface and run the RDS database in public subnets to provide low-latency access from the office

A digital learning startup completed its move from a colocation facility to AWS about 90 days ago and is considering Amazon CloudFront as the CDN for its main web application. The architects want guidance on request routing, protection of sensitive form fields, and designing for failover. Which statements about CloudFront are correct? (Choose 3)

  • ❏ A. Configure an origin group with a designated primary and secondary origin to enable automatic failover

  • ❏ B. Use geo restriction to achieve failover and high availability across countries

  • ❏ C. Route requests to different origins by defining separate cache behaviors for path patterns such as /static/* and /api/*

  • ❏ D. Use AWS Key Management Service (AWS KMS) directly in CloudFront to selectively encrypt specific form fields

  • ❏ E. Apply field-level encryption in CloudFront to encrypt sensitive fields like card numbers at the edge

  • ❏ F. Send viewer traffic to different origins based on the selected price class

Summit Drafting LLC is moving approximately 48 TB of design files from an on-premises NAS to AWS. The files must be shared within one AWS Region by Amazon EC2 instances running Windows, macOS, and Linux. Teams must access the same datasets over both SMB and NFS, with some files used often and others only occasionally. The company wants a fully managed solution that keeps administration to a minimum. Which approach best meets these needs?

  • ❏ A. Amazon FSx for OpenZFS

  • ❏ B. Create an Amazon FSx for NetApp ONTAP file system and migrate the dataset

  • ❏ C. Set up Amazon EFS with lifecycle policies to Infrequent Access and use AWS DataSync to copy the data

  • ❏ D. Amazon FSx for Windows File Server

A media analytics startup in Madrid tracks live viewer behavior and associated ad exposures. Event data is processed in real time on its on-premises platform. Each night, the team aggregates the day’s records into a single compressed archive of about 3 gigabytes and stores it in an Amazon S3 bucket for backup. What is the fastest way to move the nightly compressed file from the data center to Amazon S3?

  • ❏ A. Upload the compressed object to Amazon S3 in a single PUT operation

  • ❏ B. Use AWS DataSync to copy the file from on premises to the S3 bucket

  • ❏ C. Use multipart upload with Amazon S3 Transfer Acceleration

  • ❏ D. Use standard multipart upload to Amazon S3 without Transfer Acceleration

A new IT administrator at a nonprofit creates a fresh AWS account and launches an Amazon EC2 instance named appA in us-west-2. She then creates an EBS snapshot of appA, builds an AMI from that snapshot in us-west-2, and copies the AMI to eu-central-1. She subsequently launches an instance named appB in eu-central-1 from the copied AMI. At this moment, which resources are present in eu-central-1?

  • ❏ A. 1 Amazon EC2 instance and 1 AMI exist in eu-central-1

  • ❏ B. 1 Amazon EC2 instance, 1 AMI, and 1 snapshot exist in eu-central-1

  • ❏ C. 1 Amazon EC2 instance and 1 snapshot exist in eu-central-1

  • ❏ D. 1 Amazon EC2 instance, 1 AMI, and 2 snapshots exist in eu-central-1

A boutique event equipment company has 8 engineers sharing a single AWS account, and each team deployed separate VPCs for payments, logistics, and analytics services. The teams now need private connectivity so their applications can talk to each other across these VPCs with the lowest ongoing cost. What should they implement?

  • ❏ A. Internet Gateway

  • ❏ B. VPC peering connection

  • ❏ C. AWS Direct Connect

  • ❏ D. NAT gateway

Riverton Labs, a geospatial analytics startup, has standardized on AWS Control Tower with a multi-account setup where each engineer develops in an isolated sandbox account. Finance has observed occasional unexpected cost spikes from individual sandboxes, and leadership needs a centrally managed way to cap spend that triggers automatic enforcement when a threshold is crossed while requiring minimal ongoing maintenance. What is the most efficient method to enforce account-level spending limits across about 60 developer accounts with the least operational overhead?

  • ❏ A. Deploy a daily AWS Lambda in each sandbox account that reads Cost Explorer and triggers an AWS Config remediation rule when spend exceeds a threshold

  • ❏ B. Configure AWS Cost Anomaly Detection with Amazon EventBridge to invoke an AWS Systems Manager Automation runbook that stops or tags costly resources when anomalies are detected

  • ❏ C. Use AWS Budgets to create per-account budgets with alerts for actual and forecasted spend, and attach Budgets actions that apply a restrictive Deny policy to the developer’s primary role when the threshold is breached

  • ❏ D. Publish standardized portfolios in AWS Service Catalog with template cost limits and add a scheduled Lambda in each account to shut down resources after hours and restart them each morning

A regional media startup, BrightWave Studios, runs its customer platform on a single on-premises MySQL server. The team wants to move to a fully managed AWS database to increase availability and improve performance while keeping operational overhead low. They also need to isolate heavy, read-only reporting queries from the primary write workload so that transactional performance remains stable. Which approach is the most operationally efficient way to achieve these objectives?

  • ❏ A. Deploy self-managed MySQL on Amazon EC2 across two Availability Zones with asynchronous replication to a reporting node and handle backups and patching yourself

  • ❏ B. Use Amazon Aurora MySQL-Compatible Edition and route reporting and BI queries to an Aurora Replica in the same cluster

  • ❏ C. Use Amazon RDS for MySQL in a Single-AZ deployment and create a read replica in the same AZ for reporting

  • ❏ D. Use AWS Database Migration Service to move to an Aurora Global Database in two Regions and direct reporting to a reader in the secondary Region

Riverton Analytics wants to surface current AWS spend in its internal finance portal. The finance lead needs a programmatic solution that can pull year-to-date costs for the current fiscal year and also return a forecast for the next 9 months while keeping operations to a minimum. What approach should the solutions architect recommend?

  • ✓ C. Use the AWS Cost Explorer API to query cost and usage data and retrieve forecasts, handling large results with pagination

The Use the AWS Cost Explorer API to query cost and usage data and retrieve forecasts, handling large results with pagination option is correct because it provides programmatic endpoints for both historical costs and forward-looking forecasts and it supports pagination to manage large responses.

The Cost Explorer API lets you call operations such as GetCostAndUsage for year-to-date data and GetCostForecast for future months, and it returns pagination tokens so your code can handle large result sets with minimal operational work.
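
As a rough illustration of that flow, the sketch below uses boto3 to pull year-to-date costs and a nine-month forecast while following pagination tokens. The calendar-year start date, 270-day forecast window, granularity, and metric are illustrative assumptions rather than values from the scenario.

```python
from datetime import date, timedelta

import boto3

# Minimal sketch, assuming ce:GetCostAndUsage and ce:GetCostForecast permissions.
ce = boto3.client("ce")

start_of_year = date(date.today().year, 1, 1).isoformat()
today = date.today().isoformat()

results, token = [], None
while True:
    kwargs = {"NextPageToken": token} if token else {}
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start_of_year, "End": today},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        **kwargs,
    )
    results.extend(resp["ResultsByTime"])
    token = resp.get("NextPageToken")  # keep paging until no token is returned
    if not token:
        break

forecast = ce.get_cost_forecast(
    TimePeriod={"Start": today, "End": (date.today() + timedelta(days=270)).isoformat()},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print(len(results), "months of cost data; forecast total:", forecast["Total"]["Amount"])
```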

Deliver AWS Cost and Usage Reports to Amazon S3 and analyze with Amazon Athena to calculate a forecast is not the best choice because the Cost and Usage Report provides raw usage data and it does not include native forecasting, so you would need to build ingestion, transformation, and forecasting pipelines which increases operational overhead.

Configure AWS Budgets to push cost data to the company over FTP is incorrect because AWS Budgets cannot push data over FTP and it is intended for alerts and threshold actions rather than bulk programmatic export of forecasts.

Download Cost Explorer CSV exports and parse them to populate the dashboard is operationally heavier and less reliable for a programmatic integration and the manual CSV approach does not provide a native forecast endpoint so it is not suitable when you need automated forecasts with minimal operations.

When a question asks for programmatic access to both historical costs and forecasts with minimal operations choose the AWS Cost Explorer API and look for mentions of pagination for large datasets.

Riverton Insights stores quarterly analytics in an Amazon RDS for PostgreSQL instance and plans to publish this information through a REST API for a public dashboard. Demand is idle for long stretches but can surge to thousands of requests within a short window. Which approach should the solutions architect use to minimize idle cost while seamlessly handling sudden traffic spikes?

  • ✓ B. Amazon API Gateway with AWS Lambda

Amazon API Gateway with AWS Lambda is the correct choice because it can scale to zero during idle periods and it can rapidly scale horizontally to absorb sudden spikes while charging only for requests.

The workload is unpredictable with long idle stretches and short bursts so Lambda offers a serverless model that eliminates baseline instance cost and provides near instant concurrency growth. This behavior aligns with the goal to minimize idle cost and to seamlessly handle thousands of requests in a short window.
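
To make the integration concrete, here is a minimal sketch of a Lambda handler behind an API Gateway proxy integration. The query parameter name and the stubbed response are assumptions; a real handler would fetch the figures from the RDS for PostgreSQL instance, which is elided here.

```python
import json

def lambda_handler(event, context):
    # API Gateway proxy integrations pass query strings in the event payload.
    metric = (event.get("queryStringParameters") or {}).get("metric", "summary")

    # Placeholder payload; a real handler would query the analytics database here.
    payload = {"metric": metric, "values": []}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }
```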

Amazon API Gateway with AWS Elastic Beanstalk is not ideal because Elastic Beanstalk maintains running capacity and does not scale to zero so you incur costs during idle periods and scaling may be slower under sudden bursts.

Amazon API Gateway with Amazon EC2 and Auto Scaling can scale out but typically keeps baseline instances and scaling up takes longer so idle cost and request latency risk remain higher than a serverless approach.

Amazon API Gateway with Amazon ECS requires managing container tasks or services that usually incur cost when idle and do not natively scale to zero behind API Gateway without additional complexity.

For intermittent traffic with rapid spikes choose a serverless integration that scales to zero so you pay per request and avoid warm instance costs.

A media analytics startup runs several Windows-based applications in its private data center and plans to migrate to AWS within the next 45 days. They require a managed shared file service that multiple Windows applications can access at the same time over SMB without building replication or custom synchronization. The file storage must be able to join their existing self-managed Active Directory domain that remains on premises. Which AWS service should they choose to minimize integration work?

  • ✓ C. Amazon FSx for Windows File Server

The correct choice is Amazon FSx for Windows File Server. It provides fully managed SMB file shares that can join a self managed Active Directory on premises and it supports Windows native features such as NTFS permissions while allowing multiple Windows applications to access the same share without custom replication or synchronization.

Amazon FSx for Windows File Server is a Windows native, managed file system that supports SMB and Active Directory integration so migration work is minimal. It preserves Windows access controls and group based permissions and it removes the need to build custom synchronization for shared Windows workloads.
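
For a sense of what the AD join looks like in practice, the sketch below creates a file system with a self-managed Active Directory configuration via boto3. The domain name, DNS IPs, subnet, security group, sizing, and credentials are all placeholder assumptions.

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB, placeholder sizing
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ThroughputCapacity": 32,  # MB/s, placeholder sizing
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",      # on-premises AD domain (placeholder)
            "UserName": "FsxServiceAccount",       # account with delegated join rights
            "Password": "REPLACE_WITH_SECRET",
            "DnsIps": ["10.0.0.10", "10.0.0.11"],  # reachable on-premises DNS servers
        },
    },
)
```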

AWS Storage Gateway File Gateway is built for hybrid access to Amazon S3 and it exposes S3 objects as file shares. It is not a native SMB file server in AWS for EC2 workloads and it does not provide the same AD join and Windows file semantics that the startup requires.

Amazon FSx for Lustre targets high performance computing and analytics with POSIX semantics and tight integration with S3. It does not offer SMB or Windows Active Directory integration so it cannot meet the Windows domain joined SMB requirement.

Amazon Elastic File System (Amazon EFS) provides NFS for Linux clients and lacks native SMB and Windows AD integration. It is therefore unsuitable for Windows based applications that require domain authentication and NTFS style permissions.

Match the file protocol and identity system to the workload. When Windows and a self managed Active Directory are required choose FSx for Windows File Server for SMB and AD integration.

A research consortium stores downloadable climate datasets in an Amazon S3 bucket. Users across North America, Europe, and Asia access these files through a vanity URL that resolves to the consortium’s domain. The team requires consistently low latency for global downloads and will keep DNS in Amazon Route 53. What should the team implement?

  • ✓ B. Deploy an Amazon CloudFront distribution with the S3 bucket as the origin and create an ALIAS record in Route 53 that maps the domain to the distribution

The correct option is Deploy an Amazon CloudFront distribution with the S3 bucket as the origin and create an ALIAS record in Route 53 that maps the domain to the distribution. This configuration uses CloudFront edge caches to deliver the datasets with low, consistent latency worldwide and it allows the vanity domain to resolve cleanly via Route 53.

CloudFront caches static S3 objects at global edge locations so users in North America, Europe, and Asia fetch content from nearby edges which reduces latency and bandwidth to the origin. The distribution integrates with Route 53 Alias records so the custom domain can point to the distribution even at the zone apex and TLS and caching settings are applied at the CDN layer.
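
A hedged sketch of that Route 53 change is shown below. The hosted zone ID, vanity domain, and distribution domain name are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for CloudFront alias targets.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # the consortium's public hosted zone (placeholder)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "data.example.org",            # vanity domain (placeholder)
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",  # constant zone ID for CloudFront aliases
                    "DNSName": "d111111abcdef8.cloudfront.net",  # distribution domain (placeholder)
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```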

AWS Global Accelerator is not appropriate because it cannot target S3 directly and it does not perform content caching so it will not reduce download latency for static S3 files. Global Accelerator optimizes routing for TCP and UDP endpoints but it is not a substitute for a CDN.

Configure a Route 53 traffic policy with geolocation rules and health checks and publish an A record to route queries among endpoints is incorrect because traffic policies do not add edge caching and they assume multiple regional endpoints which are unnecessary for a single S3 origin. This approach does not provide the global caching or bandwidth optimization that CloudFront provides.

Deploy an Amazon CloudFront distribution with the S3 bucket as the origin and create a CNAME record in Route 53 that points the domain to the distribution is inferior for apex domains because CNAME records cannot be used at the zone apex and Route 53 Alias records are the supported method for mapping an apex domain to a CloudFront distribution. Alias records also integrate more cleanly with AWS services and avoid DNS limitations.

For globally distributed static files use CloudFront for edge caching and map your vanity apex domain with a Route 53 Alias record.

A fintech startup named MeridianPay is rolling out a multi-account AWS landing zone across 14 accounts in three Regions and wants to consolidate security telemetry from AWS services and approved third-party tools into one place. The security team needs to evaluate organization-wide posture and accelerate detection and response while avoiding heavy custom code, manual integrations, and bespoke normalization. Which approach provides these outcomes with the least engineering effort?

  • ✓ C. Use Amazon Security Lake to automatically collect and normalize security events from AWS services and approved partners into a centrally managed S3 data lake

Use Amazon Security Lake to automatically collect and normalize security events from AWS services and approved partners into a centrally managed S3 data lake is the correct option. This choice is purpose built to collect telemetry across multiple accounts and Regions and to reduce the need for custom integration work.

Amazon Security Lake centralizes collection and applies built in normalization to the Open Cybersecurity Schema Framework so security teams can evaluate organization wide posture and accelerate detection and response with minimal custom code. The service handles S3 partitioning and retention and integrates with AWS sources and approved partners which reduces engineering and operational overhead compared with building bespoke pipelines.

Create a governed data lake with AWS Lake Formation and use AWS Glue jobs to ingest and standardize security logs for centralized analysis is a general purpose approach that requires custom ETL, parsers, and ongoing pipeline maintenance which contradicts the requirement to avoid heavy custom build.

Query disparate security logs in multiple S3 buckets using Amazon Athena saved queries and publish dashboards in Amazon QuickSight allows analysis but it does not provide a centralized, normalized security data store across accounts and Regions so you would still need to build ingestion, schema management, and cross account plumbing.

Build a cross-account AWS Lambda aggregator that pulls logs and writes CSV files to a central Amazon S3 bucket forces you to manage custom code, error handling, and schema evolution and it lacks built in normalization such as OCSF which increases operational burden.

Prefer managed purpose built services when the question emphasizes centralized normalization and minimal custom code.

A global media firm operates a distributed web platform on Amazon EC2 behind an Application Load Balancer, with instances managed by an Auto Scaling group. The backend is an Amazon Aurora MySQL cluster deployed across three Availability Zones in one AWS Region. The firm is expanding to a far-away continent and needs the highest availability with minimal disruption during Regional failure or maintenance. What approach should the firm use to extend the application to the new Region while meeting these objectives?

  • ✓ B. Stand up the web tier in the new Region, use Aurora Global Database to create a secondary Region cluster, configure Amazon Route 53 health checks with failover routing, and promote the secondary to primary when needed

Stand up the web tier in the new Region, use Aurora Global Database to create a secondary Region cluster, configure Amazon Route 53 health checks with failover routing, and promote the secondary to primary when needed is correct because it deploys the full application stack in the new Region and uses a purpose built cross Region database solution to minimize downtime and data loss during regional failure or maintenance.

Aurora Global Database provides physical storage based replication that keeps replication lag extremely low and it allows controlled or unplanned promotion of the secondary Region to primary with minimal RPO and RTO. Deploying the web tier locally in the new Region and pairing the database with Amazon Route 53 failover health checks lets you shift traffic automatically to the healthy Region while keeping user latency as low as possible.
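
The Route 53 piece of that design could look roughly like the sketch below, which creates a health check on the primary Region's load balancer and a pair of failover records. The domain, load balancer DNS names, and health check path are placeholder assumptions.

```python
import boto3

route53 = boto3.client("route53")

# Health check against the primary Region's ALB (placeholder endpoint).
health_check_id = route53.create_health_check(
    CallerReference="primary-region-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb-123.us-east-1.elb.amazonaws.com",
        "Port": 443,
        "ResourcePath": "/health",
    },
)["HealthCheck"]["Id"]

changes = []
for set_id, role, target in [
    ("primary", "PRIMARY", "primary-alb-123.us-east-1.elb.amazonaws.com"),
    ("secondary", "SECONDARY", "secondary-alb-456.ap-southeast-2.elb.amazonaws.com"),
]:
    record = {
        "Name": "app.example.com",
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": set_id,
        "Failover": role,
        "ResourceRecords": [{"Value": target}],
    }
    if role == "PRIMARY":
        record["HealthCheckId"] = health_check_id  # primary record fails over when unhealthy
    changes.append({"Action": "UPSERT", "ResourceRecordSet": record})

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone
    ChangeBatch={"Changes": changes},
)
```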

Build the application in the new Region with a fresh Aurora MySQL cluster, use AWS Database Migration Service for ongoing replication from the primary, and use Route 53 failover routing to the new Region is weaker because AWS Database Migration Service is optimized for migrations and change data capture and it does not provide the low latency cross Region failover guarantees that Aurora Global Database offers.

Enlarge the existing Auto Scaling group to span both Regions, enable Aurora Global Database between Regions, and use Route 53 health checks with failover to the new Region is invalid because Amazon EC2 Auto Scaling groups are regional resources and cannot span multiple Regions, so you must provision separate web tiers per Region.

Deploy the application in the new Region and create a cross-Region Aurora MySQL read replica, use Route 53 failover, and promote the replica on primary failure is less desirable because cross Region read replicas normally involve higher failover time and greater risk of data loss compared with Aurora Global Database which is designed for fast, low RPO cross Region replication.

Prefer Aurora Global Database plus Route 53 failover and deploy the full application stack in each Region to minimize both RTO and RPO during Region failover.

An insurance startup hosts a public website on a Windows-based server in a colocation facility. The team is migrating the site to Amazon EC2 Windows instances spread across three Availability Zones, and the app currently uses a shared SMB file share on an on-premises NAS. Which AWS replacement for this NAS file share will provide the highest resilience and durability for the Windows web tier?

  • ✓ B. Amazon FSx for Windows File Server

The most resilient and durable Windows compatible shared file system for this migration is Amazon FSx for Windows File Server because it provides a native SMB experience for Windows web servers and supports multi Availability Zone high availability.

Amazon FSx for Windows File Server is a fully managed Windows file system that offers SMB protocol support and native integration with Microsoft Active Directory. It supports multi AZ deployment with automatic failover and replication for high availability and durability and it provides point in time backups and encryption to meet enterprise recovery and security needs.

Amazon Elastic File System (Amazon EFS) is designed for NFS and is optimized for Linux based clients so it does not provide a native SMB file share for Windows servers and it therefore does not meet the SMB requirement for the web tier.

Amazon EBS is block storage that attaches to a single instance in one Availability Zone and it does not provide a shared, highly available file system across multiple Windows EC2 instances so it cannot replace the NAS used by the web tier.

AWS Storage Gateway is a hybrid connectivity service that can front S3 or integrate with FSx for on prem access but it is not the direct native multi AZ SMB file system for EC2 workloads and it would add unnecessary architectural complexity for this migration.

Remember to match protocol and OS when choosing storage, and keep Windows + SMB + multi-AZ in mind as the key phrase that points to Amazon FSx for Windows File Server.

HarborStream is a global media startup that needs to deliver low latency live broadcasts and a library of on demand videos to viewers across the world. To improve playback by caching content near users, which AWS service should they use for both live and on demand distribution?

  • ✓ C. Amazon CloudFront

Amazon CloudFront is the correct choice because it is a global content delivery network that caches and delivers both live and on demand video near viewers to reduce latency.

Amazon CloudFront has hundreds of edge locations that can cache HTTP based video streams and support features such as origin shielding and configurable caching policies to optimize playback worldwide. It integrates with media origins and packaging services so it can serve both live and VOD workflows with low latency.

AWS Global Accelerator improves TCP and UDP path performance and provides static anycast IP addresses, but it does not perform content caching or provide HTTP streaming capabilities so it is not a substitute for a CDN.

AWS Elemental MediaPackage handles stream packaging and origination for live and on demand workflows, yet it is normally fronted by a CDN such as Amazon CloudFront to provide global edge caching and low latency. Using MediaPackage alone will not deliver the same worldwide caching benefits.

Amazon Route 53 is a DNS service that can route viewers to endpoints, but it does not move content closer to users or handle media delivery and caching, so it does not meet the caching requirement.

Remember that CloudFront is the service to choose for global caching of live and on demand video and that MediaPackage is for packaging while Global Accelerator and Route 53 address different network or DNS needs.

A retail analytics firm operates 650 Amazon EC2 instances that run recurring batch jobs to process sales data. The team must deploy a third-party agent to every instance rapidly and expects to repeat similar rollouts in the future. The approach should be straightforward to operate and automatically cover new EC2 instances as they are added. What should a solutions architect recommend?

  • ✓ C. AWS Systems Manager Run Command

AWS Systems Manager Run Command is the correct choice for this scenario.

AWS Systems Manager Run Command runs scripts or installs across large EC2 fleets by targeting instances with tags or resource groups and it does not require SSH access. You can execute commands ad hoc or automate them with documents so the team can deploy the third party agent quickly now and repeat the rollout later.

Because AWS Systems Manager Run Command targets instances by tags or resource groups rather than by explicit instance IDs, repeating the rollout later automatically picks up any new managed instances that carry the matching tag, so you do not need to maintain instance lists as the fleet grows. It also integrates with AWS IAM for access control, which keeps the operation straightforward to run and to audit.
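
As a sketch of the rollout, the snippet below sends a Run Command invocation to every managed instance that carries a particular tag. The tag key and value, the installer URL, and the concurrency settings are assumptions for illustration.

```python
import boto3

ssm = boto3.client("ssm")

response = ssm.send_command(
    # Tag-based targeting resolves to every managed instance with this tag at run time.
    Targets=[{"Key": "tag:Workload", "Values": ["sales-batch"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [
            "curl -fsSL https://example.com/agent-install.sh -o /tmp/agent-install.sh",
            "sudo bash /tmp/agent-install.sh",
        ]
    },
    MaxConcurrency="10%",  # roll out in waves
    MaxErrors="5%",        # stop early if too many installs fail
)
print(response["Command"]["CommandId"])
```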

AWS Systems Manager Maintenance Windows is focused on scheduling tasks inside defined windows and it adds scheduling overhead when the goal is a fast, repeatable agent install.

AWS CodeDeploy is optimized for application deployments and requires packaging and deployment group constructs which add unnecessary complexity for distributing a simple agent to existing instances.

AWS Systems Manager Patch Manager is intended for operating system patching and compliance and it is not designed for general purpose software installation so it does not meet the requirement.

When you need to execute commands across many instances quickly think Run Command and use tags to target the rollout. Use State Manager for persistent desired configuration and use Patch Manager for OS updates.

A media analytics firm operates a three-tier stack in a VPC across two Availability Zones. The web tier runs on Amazon EC2 instances in public subnets, the application tier runs on EC2 instances in private subnets, and the data tier uses an Amazon RDS for MySQL DB instance in private subnets. The security team requires that the database accept connections only from the application tier and from nowhere else. How should you configure access to meet this requirement?

  • ✓ C. Attach a security group to the RDS instance that permits the database port only from the application tier security group

Attach a security group to the RDS instance that permits the database port only from the application tier security group is correct because it limits access to the DB to only the application instances by referencing the app tier security group and enforcing least privilege.

Using a security group on the RDS instance lets you allow the specific database port and reference the application tier security group by its ID. Security groups are stateful so return traffic is allowed automatically and any source not explicitly permitted is implicitly denied by default. This approach works across Availability Zones inside the VPC and is the most precise and manageable method for tier to tier controls.
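
A minimal sketch of that rule is below; the security group IDs are placeholders, and port 3306 is assumed because the data tier is MySQL.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000a",  # security group attached to the RDS instance (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{
            "GroupId": "sg-0app000000000000b",  # application tier security group (placeholder)
            "Description": "MySQL from the application tier only",
        }],
    }],
)
```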

Create VPC peering between the public and private subnets and a separate peering between the private and database subnets is incorrect because VPC peering is a relationship between VPCs not between subnets and it does not provide per tier access control.

Associate a new route table with the database subnets that removes routes to the public subnet CIDR ranges is incorrect because route tables control traffic paths not which sources may open connections and changing routes can break legitimate internal traffic.

Apply a network ACL to the database subnets that denies all sources except the application subnets CIDR blocks is not the best choice because network ACLs are stateless and CIDR based which makes rules more complex and error prone compared to referencing another security group.

When you must restrict DB access to an app tier prefer using security group references that point at the application security group and test from an app instance to confirm connectivity.

An independent news cooperative is retiring its self-hosted servers and moving to AWS to minimize operations. The flagship site must serve cached static assets as well as request-driven dynamic content and should reach users worldwide with low latency. What is the most cost-effective serverless architecture to meet these requirements?

  • ✓ C. Keep static assets on Amazon S3 and use AWS Lambda with Amazon DynamoDB for dynamic requests, with Amazon CloudFront providing global delivery

Keep static assets on Amazon S3 and use AWS Lambda with Amazon DynamoDB for dynamic requests, with Amazon CloudFront providing global delivery is the correct choice because it implements a fully serverless design that minimizes operations and cost while delivering cached static assets and request driven dynamic content worldwide.

Amazon S3 serves static files at low cost and integrates with Amazon CloudFront to cache content close to users. AWS Lambda runs dynamic request handlers with automatic scaling and per invocation billing which eliminates idle server cost. Amazon DynamoDB provides a managed, low latency NoSQL backend that scales without server maintenance. These components together meet the requirements for low operational overhead cost effectiveness and global low latency delivery.
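
A minimal sketch of the dynamic path is shown below: a Lambda handler, invoked through API Gateway behind CloudFront, that reads an item from DynamoDB. The table name, key attribute, and caching header are placeholder assumptions.

```python
import json

import boto3

# Table name and key schema are placeholders for this sketch.
table = boto3.resource("dynamodb").Table("Articles")

def lambda_handler(event, context):
    article_id = (event.get("pathParameters") or {}).get("id", "front-page")
    item = table.get_item(Key={"articleId": article_id}).get("Item", {})
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            "Cache-Control": "max-age=60",  # lets CloudFront cache the dynamic response briefly
        },
        "body": json.dumps(item, default=str),
    }
```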

Host both static and dynamic content on Amazon EC2 with Amazon RDS and place Amazon CloudFront in front is not serverless and requires managing instances and a relational database which increases operational burden and idle costs compared with the serverless option.

Store both static and dynamic content in Amazon S3 and use Amazon CloudFront for distribution is incorrect because Amazon S3 cannot execute server side code to generate dynamic responses so it does not satisfy the dynamic request requirement.

Serve static files from Amazon S3 and generate dynamic content on Amazon ECS with AWS Fargate and Amazon RDS, fronted by Amazon CloudFront is a workable architecture but it introduces container orchestration and a managed relational database which typically raises complexity and cost compared with a AWS Lambda and Amazon DynamoDB serverless approach.

When a question emphasizes serverless and mixed static plus dynamic workloads think S3 for static files and CloudFront with AWS Lambda and DynamoDB for dynamic APIs to minimize operations and cost.

A startup named Alpine Pixel Labs is launching a microservices workflow on Amazon ECS where front-end tasks receive and transform requests, then hand off the payload to separate back-end tasks that perform intensive processing and persist results. The team wants the tiers to be loosely coupled with durable buffering so that spikes or failures in one layer do not disrupt the other. What should the architect implement?

  • ✓ C. Provision an Amazon SQS standard queue; have the front-end enqueue work items and the back-end poll and process messages

The correct choice is Provision an Amazon SQS standard queue; have the front-end enqueue work items and the back-end poll and process messages. SQS provides durable, scalable buffering and pull based consumption so the front end and back end can scale and fail independently without losing work.

SQS supports long polling so consumers can retrieve messages efficiently when they are ready, and visibility timeouts hide in-flight messages from other consumers, which reduces duplicate processing during transient failures. The queue can be combined with dead letter queues and auto scaling of consumer tasks so the system handles spikes and backpressure gracefully while preserving message durability.
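As a rough boto3 sketch of this producer and consumer pattern, assuming a placeholder queue URL and a stand-in processing function:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-items"  # placeholder

def process(body):
    """Stand-in for the back-end's real processing logic."""
    print("processing", body)

# Front-end task: enqueue a work item
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"job_id": "42"}')

# Back-end task: long-poll for work, process it, then delete each message
resp = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,        # long polling reduces empty receives
    VisibilityTimeout=60,      # hide in-flight messages from other consumers
)
for msg in resp.get("Messages", []):
    process(msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```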

Use Amazon EventBridge rules to route events from the front-end to run the back-end ECS task directly is less suitable because EventBridge can invoke targets but does not provide the same consumer driven polling or deep durable buffering characteristics that SQS provides for backpressure control.

Create an Amazon SQS queue that pushes messages to the back-end; configure the front-end to send messages is incorrect because SQS does not push messages to consumers. Applications or tasks must poll SQS to receive work items.

Set up an Amazon Kinesis Data Firehose delivery stream that writes to Amazon S3; make the front-end send records to the stream and have the back-end read from the S3 bucket is inappropriate because Firehose is designed for continuous delivery into storage and analytics sinks and is not intended for low latency task to task decoupling or consumer polling patterns.

For asynchronous decoupling choose SQS when you need durable buffering and consumer driven polling and remember that SNS is push based while Firehose delivers into sinks like S3.

A metropolitan toll operator is deploying thousands of roadside sensors that together emit about 1.8 TB of alert messages each day. Each alert is roughly 3 KB and includes vehicle metadata and plate details when barrier violations occur. The company needs to ingest and store these alerts for later analytics using a highly available, low-cost approach without managing servers. They require immediate access to the most recent 21 days of data and want to archive anything older than 21 days. Which solution is the most operationally efficient for these needs?

  • ✓ C. Create an Amazon Kinesis Data Firehose delivery stream to deliver alerts directly to an Amazon S3 bucket and apply an S3 Lifecycle rule to move data older than 21 days to S3 Glacier Flexible Retrieval

Create an Amazon Kinesis Data Firehose delivery stream to deliver alerts directly to an Amazon S3 bucket and apply an S3 Lifecycle rule to move data older than 21 days to S3 Glacier Flexible Retrieval is the correct option because it offers a serverless end-to-end ingestion and storage path that meets the requirements for high availability, low operational overhead, immediate access to recent data, and automated archival after 21 days.

Kinesis Data Firehose scales automatically to handle very high throughput and it batches and compresses records before delivery which reduces cost and operational work. Delivering directly to S3 provides immediate access to the most recent 21 days of data in a cost effective hot store and S3 Lifecycle policies can transition older objects to S3 Glacier Flexible Retrieval for long term archival without running any servers.
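A minimal boto3 sketch of the two pieces, using placeholder stream and bucket names, might look like this:

```python
import boto3

firehose = boto3.client("firehose")
s3 = boto3.client("s3")

# Sensors (or an ingestion proxy) write alerts to the delivery stream
firehose.put_record(
    DeliveryStreamName="toll-alerts",            # placeholder stream name
    Record={"Data": b'{"plate": "ABC123", "gate": 17}\n'},
)

# Archive objects older than 21 days to S3 Glacier Flexible Retrieval
s3.put_bucket_lifecycle_configuration(
    Bucket="toll-alert-archive",                 # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-21-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 21, "StorageClass": "GLACIER"}],
        }]
    },
)
```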

Deploy Amazon EC2 instances in an Auto Scaling group across two Availability Zones behind an Application Load Balancer to receive alerts, write them to Amazon S3, and configure an S3 Lifecycle rule to transition objects to S3 Glacier Flexible Retrieval after 21 days is not a good fit because it requires managing compute instances, load balancing, and patching, which increases operational burden and cost compared with a fully managed ingestion service.

Create an Amazon Kinesis Data Firehose delivery stream that sends alerts to Amazon OpenSearch Service, take daily snapshots, and delete indices older than 21 days is not ideal because OpenSearch is intended for search and interactive analytics and storing very large volumes there is more expensive and operationally heavier than writing to S3 for archival and batch analytics.

Use an Amazon SQS queue with a message retention period of 21 days, have consumers process and copy 21-day-old messages to Amazon S3, then delete them from the queue is incorrect because Amazon SQS only supports up to 14 days of message retention and it also requires consumer infrastructure to poll and move data which conflicts with the requirement to avoid managing servers.

When an answer asks for high throughput with minimal operations choose Kinesis Data Firehose to S3 and then use an S3 Lifecycle rule to archive data older than the retention window.

Northwind Analytics is launching an internal insights API on Amazon API Gateway to deliver confidential KPIs to employees. Only requests from four corporate CIDR blocks (10.20.0.0/16, 10.21.0.0/16, 172.18.0.0/20, and 192.168.50.0/24) should be allowed, and calls from any other IP addresses must be blocked. The team wants the simplest operational approach to enforce this restriction. What should the solutions architect implement?

  • ✓ B. Attach an API Gateway resource policy that allows only the listed CIDR blocks and denies all other source IPs

Attach an API Gateway resource policy that allows only the listed CIDR blocks and denies all other source IPs is the correct choice because it enforces an IP allow list at the API level and requires minimal operational work to maintain.

Attach an API Gateway resource policy that allows only the listed CIDR blocks and denies all other source IPs uses the standard resource policy condition keys such as IpAddress and NotIpAddress so you can explicitly allow the four corporate CIDR blocks and deny all other source IPs on the API invoke action. This approach does not introduce extra infrastructure to manage and it directly prevents calls to the execute-api endpoint from disallowed addresses.
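For illustration, a resource policy of this shape could be attached with boto3 roughly as follows; the REST API ID is a placeholder, and the allow plus explicit deny pattern reuses the CIDR list from the question:

```python
import json
import boto3

apigw = boto3.client("apigateway")

ALLOWED_CIDRS = ["10.20.0.0/16", "10.21.0.0/16", "172.18.0.0/20", "192.168.50.0/24"]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # allow invocation of the API
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {   # explicitly deny every source IP outside the corporate ranges
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": ALLOWED_CIDRS}},
        },
    ],
}

# Attach the resource policy to the REST API (the API ID is a placeholder)
apigw.update_rest_api(
    restApiId="a1b2c3d4e5",
    patchOperations=[{"op": "replace", "path": "/policy", "value": json.dumps(policy)}],
)
```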

Deploy the API Gateway API as Regional in a public subnet and associate a security group that allows only the approved CIDR ranges is not possible because API Gateway is a managed endpoint and it does not run inside your VPC so you cannot place it in a subnet or attach a security group.

Put Amazon CloudFront in front of the API and restrict access by creating an IP allow list at the edge adds operational complexity and it does not by itself stop clients from calling the underlying execute-api endpoint unless you also lock down the API with a resource policy or use a private API type.

Modify the security group attached to the API Gateway API to allow only the corporate IP ranges is invalid because there is no security group that you can attach to a public API Gateway endpoint so you cannot enforce IP restrictions that way.

For IP based allow lists prefer using API Gateway resource policies because they are simple to implement and they directly block unwanted source IPs at the API endpoint.

A data engineering team at RiverPoint Health is building a service to parse and extract fields from application log files. The logs arrive unpredictably with occasional bursts and idle periods, individual files can be as large as 900 MB, and each file takes about 55 seconds to process. What is the most cost-effective way to process the files as they are created?

  • ✓ B. Store the log files in an Amazon S3 bucket and use an S3 event notification to trigger an AWS Lambda function to process each object

Store the log files in an Amazon S3 bucket and use an S3 event notification to trigger an AWS Lambda function to process each object is correct because it provides a fully serverless, auto-scaling, pay-per-use model that matches unpredictable bursts and idle periods for per-file processing.

Using Lambda with S3 event notifications lets you run one function per object and you pay only for the execution time and memory used. Lambda supports durations up to 15 minutes so a 55 second job runs comfortably while automatic scaling handles bursts without idle EC2 capacity.
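A skeletal Lambda handler for this pattern might look like the following; the parse_log_line helper is hypothetical and stands in for the actual field extraction logic:

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Invoked by an S3 ObjectCreated event; processes each new log file."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Stream the object and extract fields line by line
        body = s3.get_object(Bucket=bucket, Key=key)["Body"]
        for line in body.iter_lines():
            parse_log_line(line)

def parse_log_line(line):
    """Hypothetical stand-in for the real field extraction."""
    print(len(line))
```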

Write the log files to an Amazon EC2 instance with an attached Amazon EBS volume, process them on the instance, then upload results to Amazon S3 is less cost-efficient because you pay for provisioned instances even when traffic is low and you must manage scaling and instance lifecycle which increases operational overhead.

Write the log files to an Amazon S3 bucket and configure an event notification to directly invoke an Amazon ECS task to process and save results is incorrect because S3 event notifications do not directly launch ECS tasks and building a reliable pipeline requires extra services such as EventBridge or SQS which adds complexity and cost.

Store the log files in Amazon S3 and run an AWS Glue job on object creation to perform the processing and write outputs is overkill for short running jobs because Glue has higher startup latency and DPU-based pricing that makes it more expensive for sub-minute processing.

Event-driven serverless functions triggered by S3 minimize cost for sporadic, short tasks. Verify that each file’s processing time fits within Lambda limits and tune memory to balance cost and execution speed.

BrightWave Media needs to patch a single Amazon EC2 instance that belongs to an Auto Scaling group using step scaling, but the instance fails Elastic Load Balancing health checks for about 6 minutes during the update and the group immediately launches a replacement, so what actions should you recommend to complete the maintenance quickly while avoiding unnecessary extra capacity? (Choose 2)

  • ✓ B. Place the instance in Standby, perform the update, then exit Standby to return it to service

  • ✓ D. Suspend the ReplaceUnhealthy process for the Auto Scaling group, patch the instance, then mark the instance healthy and resume ReplaceUnhealthy

Place the instance in Standby, perform the update, then exit Standby to return it to service and Suspend the ReplaceUnhealthy process for the Auto Scaling group, patch the instance, then mark the instance healthy and resume ReplaceUnhealthy are the correct actions because they let you complete maintenance without the Auto Scaling group immediately launching a replacement.

Place the instance in Standby, perform the update, then exit Standby to return it to service removes the instance from load balancing and scaling activities while keeping it attached to the group. This preserves group membership and desired capacity so you can patch the instance and return it to service without causing extra instances to be launched.

Suspend the ReplaceUnhealthy process for the Auto Scaling group, patch the instance, then mark the instance healthy and resume ReplaceUnhealthy temporarily stops automated replacement that is triggered by failed health checks. This lets you finish the update while the instance fails ELB checks and then restore normal automated healing after you mark the instance healthy and resume the process.
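Both approaches can be scripted with boto3 along these lines; the group name and instance ID are placeholders:

```python
import boto3

asg = boto3.client("autoscaling")
GROUP = "brightwave-asg"             # placeholder Auto Scaling group name
INSTANCE = "i-0123456789abcdef0"     # placeholder instance ID

# Option 1: move the instance to Standby, patch, then return it to service
asg.enter_standby(
    AutoScalingGroupName=GROUP,
    InstanceIds=[INSTANCE],
    ShouldDecrementDesiredCapacity=True,   # avoids launching a replacement
)
# ... apply the patch ...
asg.exit_standby(AutoScalingGroupName=GROUP, InstanceIds=[INSTANCE])

# Option 2: pause health-driven replacement while patching
asg.suspend_processes(AutoScalingGroupName=GROUP, ScalingProcesses=["ReplaceUnhealthy"])
# ... apply the patch ...
asg.set_instance_health(InstanceId=INSTANCE, HealthStatus="Healthy")
asg.resume_processes(AutoScalingGroupName=GROUP, ScalingProcesses=["ReplaceUnhealthy"])
```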

Suspend the ScheduledActions process for the Auto Scaling group, apply the patch, then set the instance health to healthy and resume ScheduledActions is incorrect because scheduled actions only control time based scaling and do not prevent health driven replacement or remove the instance from load balancing.

Delete the Auto Scaling group, patch the instance, then recreate the Auto Scaling group and repopulate it using manual scaling is incorrect because deleting the group is disruptive and slow and it introduces unnecessary risk for a single instance maintenance task.

Create a snapshot and AMI from the instance, launch a new instance from that AMI to patch it, add it back to the Auto Scaling group, and terminate the original instance is incorrect because it adds extra steps and resource usage and still risks triggering scaling actions. Using Standby or suspending ReplaceUnhealthy is simpler and faster for this scenario.

When patching a live instance in an Auto Scaling group think Standby to remove it from traffic or Suspend ReplaceUnhealthy to stop automatic replacement while you complete maintenance.

An architect for Alpine BioMetrics is moving a distributed workload to AWS. The stack includes an Apache Cassandra cluster for operational data, a containerized Ubuntu Linux application tier, and a set of Microsoft SQL Server databases for transactional storage. The business requires a highly available and durable design with minimal ongoing maintenance and wants to migrate without changing database schemas. Which combination of AWS services should be used?

  • ✓ B. Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate, using Amazon RDS for Microsoft SQL Server for the relational tier

Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate, using Amazon RDS for Microsoft SQL Server for the relational tier is correct because it allows a lift and shift that preserves the Cassandra data model while minimizing operational overhead and providing managed SQL Server availability.

Amazon Keyspaces is compatible with Apache Cassandra so existing data models and drivers can be reused without schema conversion. Amazon ECS on Fargate removes EC2 host management and reduces ongoing operational tasks. Amazon RDS for Microsoft SQL Server provides managed backups and Multi-AZ high availability which satisfies the business requirements for durability and minimal maintenance.

Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on Fargate, using Amazon RDS for Microsoft SQL Server for the relational tier is unsuitable because Amazon DynamoDB is not compatible with Cassandra and moving to it normally requires redesigning data models and drivers.

Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS on Fargate, using Amazon Aurora for the relational tier will not meet the requirement to avoid schema or engine changes because Amazon Aurora does not offer a SQL Server engine, so the SQL Server databases would have to be migrated to a different engine.

Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on EC2, using Amazon RDS for Microsoft SQL Server for the relational tier increases operational overhead because it requires managing EC2 instances for the container hosts and it also forces a data model change if moving from Cassandra to DynamoDB.

When a question emphasizes no schema conversion and minimal operations prefer managed services that are engine compatible and serverless or fully managed for compute and databases.

A geospatial imaging startup runs nightly batch jobs that produce roughly 500 GB of logs and metadata. These files are written to an on-premises NFS array at the headquarters, but the array is difficult to scale and cannot keep pace with growth. The team wants a cloud-backed approach that keeps costs low, preserves NFS compatibility with existing tools, and automatically moves infrequently accessed data to cheaper tiers. What solution should they implement to achieve this most cost-effectively?

  • ✓ C. Install AWS Storage Gateway File Gateway on premises, expose an NFS file share to the workloads, store files in Amazon S3, and use S3 Lifecycle policies to transition cold objects to lower-cost storage classes

The most cost effective choice is Install AWS Storage Gateway File Gateway on premises, expose an NFS file share to the workloads, store files in Amazon S3, and use S3 Lifecycle policies to transition cold objects to lower-cost storage classes. This approach preserves existing NFS access while pushing data into S3 so capacity grows without changing client tools and it enables automated tiering to reduce long term storage costs.

This option is correct because the File Gateway presents a native NFS interface locally and stores objects in Amazon S3, so the startup keeps using the same NFS workflows while benefiting from S3 durability and lifecycle policies. The solution scales independently of on-premises hardware, and lifecycle rules can move infrequently accessed objects to S3 Standard-IA, S3 Glacier, or S3 Glacier Deep Archive to lower costs.

Deploy AWS Storage Gateway Volume Gateway in cached mode, present iSCSI to a local file server, build an NFS share on top, store snapshots in Amazon S3 Glacier Deep Archive, and orchestrate retention with AWS Backup is less suitable because it exposes block storage over iSCSI and relies on snapshot semantics. That design does not provide native file object storage in S3 or seamless file level lifecycle transitions.

Create an Amazon EFS file system using One Zone-IA, migrate data with AWS DataSync, mount the file system over NFS, and enable EFS lifecycle management to move infrequently accessed files is not the best fit for this scenario because EFS is a cloud native file system without an on premises NFS cache and EFS lifecycle management does not provide the same cost profile for very large cold archives as S3 with lifecycle policies.

Replace NFS with Amazon FSx for Windows File Server, turn on data deduplication and scheduled backups, archive backups to Amazon S3 Glacier, and reconfigure the application to access files over SMB is misaligned because it forces a protocol change to SMB and requires application reconfiguration. That option does not preserve existing NFS workflows and does not offer native S3 object lifecycle tiering for active file access.

For on premises NFS compatibility with cloud scale choose File Gateway so clients keep using NFS while objects land in S3 and lifecycle rules handle archival.

A network engineer at a fintech startup launched an Amazon EC2 instance in a public subnet. They opened the required inbound port in the instance’s security group and also allowed the same inbound port in the subnet’s network ACL, yet remote clients still cannot reach the service on TCP port 8443. What should you change to restore connectivity?

  • ✓ C. Because Security Groups are stateful, keep the inbound rule on the service port there and update the Network ACL to allow both inbound to the service port and outbound ephemeral ports such as 49152-65535

The correct option is Because Security Groups are stateful, keep the inbound rule on the service port there and update the Network ACL to allow both inbound to the service port and outbound ephemeral ports such as 49152-65535.

Security Groups are stateful, so the inbound rule on TCP port 8443 in the security group will allow return traffic for established sessions automatically. Network ACLs are stateless, so you must explicitly permit the return path by allowing the ephemeral port range on egress in the NACL. Updating the NACL to allow outbound ephemeral ports lets the instance send responses and complete TCP handshakes for remote clients.
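As a sketch, the relevant NACL entries could be added with boto3 like this; the ACL ID and rule numbers are placeholders, and in practice the CIDRs would be narrowed to the expected client ranges:

```python
import boto3

ec2 = boto3.client("ec2")
NACL_ID = "acl-0123456789abcdef0"   # placeholder network ACL ID

# Allow the return traffic: outbound TCP on the ephemeral port range
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=120,
    Protocol="6",                   # TCP
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 49152, "To": 65535},
)

# Inbound rule for the service port itself (already in place in the scenario)
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=110,
    Protocol="6",
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 8443, "To": 8443},
)
```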

Network ACLs are stateful, so allowing the inbound port is sufficient, while Security Groups are stateless and must allow both inbound and outbound traffic is wrong because it reverses the actual behaviors. In reality NACLs are stateless and must allow both directions explicitly while security groups track connection state.

Add a 0.0.0.0/0 route to an internet gateway in the subnet route table is not the root cause in this scenario because routing would not fix blocked return traffic when the NACL egress rules deny ephemeral ports. Proper routing is necessary for internet access but it does not replace correct NACL egress permissions.

Attach an IAM role to the instance to grant network access from clients is irrelevant because IAM roles manage API and service permissions, not TCP flow control inside a VPC. IAM changes will not open blocked ports or alter NACL or security group behavior.

If incoming connections succeed but responses fail check the NACL egress rules and the instance ephemeral port range in addition to security groups.

A genomics startup runs a compute-heavy pipeline on a pool of several dozen Amazon EC2 instances that require extremely high random I/O to local storage. The application replicates and keeps the working dataset on each node, and the workflow can tolerate instance failures because a replacement instance quickly rebuilds its local data before proceeding. What is the most cost-effective and resource-efficient way to provision storage for this EC2 fleet?

  • ✓ C. Amazon EC2 Instance Store

The correct choice is Amazon EC2 Instance Store because the pipeline keeps replicated, ephemeral working data on each node and can tolerate instance replacement while needing extremely high local random I/O.

Amazon EC2 Instance Store provides locally attached NVMe or SSD storage that delivers very high random IOPS and low latency which matches the workload. The storage is included in the instance price and the application already handles replication and rebuilds, so the ephemeral nature of Amazon EC2 Instance Store yields the most cost effective and resource efficient solution for scratch or replicated working sets.

Amazon EBS is network attached and to reach comparable IOPS you would usually require Provisioned IOPS volumes which raise cost and introduce network latency, and the extra durability offered by Amazon EBS is unnecessary for this workload.

Amazon EFS is a managed, shared file system that adds network overhead and per operation costs and it is not optimized for the highest local random I/O per node.

Amazon S3 is object storage and is not suitable for block level, high random I/O on active working data because it has higher latency and is not presented as a local block device.

When a question mentions ephemeral or high local random I/O and the application can rebuild data on replacement nodes pick instance store for cost and performance unless you explicitly need durability or shared access.

A media analytics startup plans to run about 60 Linux Amazon EC2 instances distributed across three Availability Zones in a single AWS Region. The application needs a shared file store that all instances can access at the same time. The team wants a solution that scales automatically with growth and is simple to set up. The storage must be mounted using NFS. Which approach will fulfill these needs?

  • ✓ C. Amazon EFS with mount targets in each Availability Zone

Amazon EFS with mount targets in each Availability Zone is correct because it is a fully managed POSIX compliant NFS file system that allows simultaneous access by many Linux EC2 instances across multiple Availability Zones and it scales automatically while remaining simple to set up.

Amazon EFS with mount targets in each Availability Zone supports standard NFS mounts, so applications see consistent permissions and metadata, and you can create mount targets in each AZ so instances in all three AZs can access the same file system without complex networking or replication. It elastically grows and shrinks as data changes, so capacity management is minimal.
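A minimal boto3 sketch of provisioning one mount target per Availability Zone, with placeholder file system, subnet, and security group IDs:

```python
import boto3

efs = boto3.client("efs")

FILE_SYSTEM_ID = "fs-0123456789abcdef0"   # placeholder: an existing EFS file system
SUBNETS = ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]  # one private subnet per AZ
NFS_SG = "sg-0efs000000000000e"           # security group allowing NFS (TCP 2049)

# One mount target per Availability Zone so instances in all three AZs
# can mount the same file system over NFS
for subnet in SUBNETS:
    efs.create_mount_target(
        FileSystemId=FILE_SYSTEM_ID,
        SubnetId=subnet,
        SecurityGroups=[NFS_SG],
    )
```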

Amazon S3 with a gateway endpoint to provide NFS access is incorrect because Amazon S3 is an object store and it does not natively provide NFS mounts and a VPC gateway endpoint only keeps traffic in the VPC and does not add NFS protocol capabilities.

Amazon FSx for Windows File Server is incorrect because it uses the SMB protocol and is designed for Windows file sharing so it does not meet the explicit NFS mounting requirement.

An Amazon EBS volume using Multi-Attach across all Availability Zones is incorrect because EBS Multi Attach is limited to instances within a single Availability Zone and block volumes do not serve as a shared network file system for many instances across AZs.

When the question specifies shared NFS access across multiple Availability Zones think Amazon EFS and verify that mount targets are provisioned in each AZ so all instances can mount the file system.

A compliance team at Polar Freight Systems requires that every change to an item in an Amazon DynamoDB table be captured as an auditable record outside the table. What is the most efficient way to capture and store each modification?

  • ✓ B. Enable DynamoDB Streams with NEW_AND_OLD_IMAGES and invoke an AWS Lambda function to persist each change record to Amazon S3

The correct option is Enable DynamoDB Streams with NEW_AND_OLD_IMAGES and invoke an AWS Lambda function to persist each change record to Amazon S3. This approach captures every item change and writes an auditable copy outside the table.

DynamoDB Streams provides a time ordered sequence of item level changes and the NEW_AND_OLD_IMAGES setting includes both before and after images so you can record the exact state transitions. Using Lambda as a stream consumer keeps operational overhead low and allows you to persist immutable records to S3 for long term retention and easy retrieval.
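A rough sketch of the two pieces, enabling the stream and archiving each change from the Lambda consumer, with placeholder table and bucket names:

```python
import json
import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

def enable_stream():
    """One-time setup: turn on the table's stream with before-and-after images."""
    dynamodb.update_table(
        TableName="policy-items",    # placeholder table name
        StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
    )

def handler(event, context):
    """Lambda stream consumer: persist every item change to S3 as an audit record."""
    for record in event["Records"]:
        change = {
            "eventName": record["eventName"],          # INSERT, MODIFY, or REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "old": record["dynamodb"].get("OldImage"),
            "new": record["dynamodb"].get("NewImage"),
        }
        s3.put_object(
            Bucket="audit-archive-bucket",             # placeholder bucket name
            Key=f"changes/{record['eventID']}.json",
            Body=json.dumps(change).encode("utf-8"),
        )
```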

Use Amazon CloudTrail and run an EC2-based parser to read CloudTrail logs and copy changed items into another DynamoDB table is incorrect because CloudTrail records API calls rather than item images and it would require managing EC2 instances and additional parsing work to reconstruct item state.

Create DynamoDB Global Tables across two Regions and export stream records directly to Amazon S3 is incorrect because Global Tables are for multi Region replication and not for auditing, and there is no built in path to export Streams directly to S3 without a consumer such as Lambda or Kinesis.

Send DynamoDB events to Amazon CloudWatch Logs and have AWS Lambda filter deletes and archive them to Amazon S3 is incorrect because DynamoDB does not emit item change records to CloudWatch Logs and this pattern would miss inserts and updates while adding unnecessary complexity.

Remember to use DynamoDB Streams with NEW_AND_OLD_IMAGES and a consumer such as AWS Lambda to capture full item images for auditing and to persist them to S3.

A regional logistics company operates an internal case-tracking app for its service desk. The app runs on Amazon EC2 instances in an Auto Scaling group. During office hours the group scales out to 12 instances and overnight it scales back to 3. Employees report sluggish performance right at the start of the morning shift. How should a Solutions Architect configure the Auto Scaling group to eliminate the morning slowdowns while keeping costs as low as possible?

  • ✓ C. Create a scheduled action that sets the desired capacity to 12 shortly before office hours begin

Create a scheduled action that sets the desired capacity to 12 shortly before office hours begin is the correct choice because it ensures the Auto Scaling group launches the required instances before the morning shift so capacity is ready when employees start work while avoiding unnecessary overnight costs.

The scheduled action launches instances ahead of the start time so the group reaches full capacity before users arrive. Setting the desired capacity tells Auto Scaling exactly how many instances to run at that time and prevents the cold-start gap that reactive policies can leave.
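A boto3 sketch of the scheduled actions, assuming a placeholder group name and UTC cron expressions that bracket office hours:

```python
import boto3

asg = boto3.client("autoscaling")

# Raise desired capacity to 12 shortly before the morning shift (times are UTC)
asg.put_scheduled_update_group_action(
    AutoScalingGroupName="case-tracker-asg",     # placeholder group name
    ScheduledActionName="scale-out-before-office-hours",
    Recurrence="45 6 * * MON-FRI",               # 06:45 UTC every weekday
    MinSize=3,
    MaxSize=12,
    DesiredCapacity=12,
)

# Scale back down after hours to keep costs low
asg.put_scheduled_update_group_action(
    AutoScalingGroupName="case-tracker-asg",
    ScheduledActionName="scale-in-after-office-hours",
    Recurrence="0 18 * * MON-FRI",
    MinSize=3,
    MaxSize=12,
    DesiredCapacity=3,
)
```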

Use target tracking with a lower average CPU target and shorten the cooldown is reactive so it may not eliminate the morning slowdown because scaling only begins after load increases. It can also raise costs by keeping extra instances during variable load instead of only when needed.

Add an Auto Scaling warm pool to pre-initialize instances can reduce launch latency but it does not by itself guarantee full running capacity at a specific time. A warm pool can add resource and storage overhead so it is not the lowest cost option when demand is predictable.

Create a scheduled action that sets the minimum and maximum capacity to 12 before office hours only changes bounds and does not force the Auto Scaling group to start 12 instances. Without setting the desired capacity there is no guarantee the group will be at 12 at shift start.

When demand is predictable use scheduled scaling and set the desired capacity a few minutes before peak so instances are launched and ready when users arrive.

A biotech research lab migrated its shared analysis data to Amazon Elastic File System so that several Amazon EC2 instances in two private subnets can use the same files. The security team must ensure that only designated instances can mount and read the file system while other instances in the VPC are blocked. Which controls should be implemented to meet this requirement? (Choose 2)

  • ✓ A. Attach IAM policies to approved instance roles allowing efs:ClientMount and efs:ClientRead for the EFS file system

  • ✓ D. Apply VPC security groups on the EFS mount targets that allow NFS only from the approved instances’ security groups

Attach IAM policies to approved instance roles allowing efs:ClientMount and efs:ClientRead for the EFS file system and Apply VPC security groups on the EFS mount targets that allow NFS only from the approved instances’ security groups are the correct controls to ensure only designated EC2 instances can mount and read the file system while other instances in the VPC are blocked.

Attach IAM policies to approved instance roles allowing efs:ClientMount and efs:ClientRead for the EFS file system enforces which principals may perform mount and read operations when you use IAM authorization with the EFS mount helper or access points. This identity control prevents an instance without the proper role from successfully mounting the file system even if it has network reachability.

Apply VPC security groups on the EFS mount targets that allow NFS only from the approved instances’ security groups provides the network layer of enforcement by restricting NFS traffic to only the approved instance security groups. Security groups are stateful and operate at the instance or mount target level so they are effective for allowing or denying NFS connections to EFS.
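For illustration, the identity and network layers might be expressed as follows; the role name, file system ARN, account ID, and security group IDs are placeholders, and note that the IAM action prefix is elasticfilesystem even though the actions are commonly abbreviated as efs:ClientMount and efs:ClientRead:

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Identity layer: policy attached to the approved instances' IAM role
efs_access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "elasticfilesystem:ClientMount",
            "elasticfilesystem:ClientRead",
        ],
        "Resource": "arn:aws:elasticfilesystem:eu-west-1:123456789012:file-system/fs-0123456789abcdef0",
    }],
}
iam.put_role_policy(
    RoleName="approved-analysis-instance-role",   # placeholder role name
    PolicyName="efs-mount-and-read",
    PolicyDocument=json.dumps(efs_access_policy),
)

# Network layer: mount-target security group admits NFS only from the
# approved instances' security group
ec2.authorize_security_group_ingress(
    GroupId="sg-0efs000000000000e",               # placeholder mount target SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 2049,
        "ToPort": 2049,
        "UserIdGroupPairs": [{"GroupId": "sg-0app000000000000b"}],  # approved instances
    }],
)
```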

Configure network ACLs on the subnets to restrict NFS traffic to the file system is not ideal because network ACLs are stateless and operate at the subnet level and they do not provide principal identity or EFS-aware authorization.

Enable Amazon GuardDuty to detect and automatically stop unauthorized mounts to the EFS file system is incorrect because GuardDuty is a threat detection service and it does not automatically block or stop mounts.

Encrypt the file system with a customer managed AWS KMS key to limit which instances can access it is insufficient for this requirement because encryption protects data at rest and KMS controls key usage but it does not by itself prevent instances with network access from mounting and reading the file system unless combined with proper network and IAM restrictions.

Think in two layers of control. Use security groups to restrict network reachability to EFS mount targets and use IAM efs:ClientMount and efs:ClientRead to restrict which instance roles may mount and read the file system.

A digital ticketing startup uses Amazon SQS to decouple its purchase workflow. The operations team has noticed that some order messages repeatedly fail to be processed by consumers and are retried multiple times, which slows the rest of the queue. What should the solutions architect implement to reliably capture these failed messages for later inspection while allowing normal traffic to continue?

  • ✓ C. Configure a dead-letter queue for messages that exceed the max receive count

Configure a dead-letter queue for messages that exceed the max receive count is correct because it moves messages that fail processing repeatedly into a separate queue for later inspection while letting normal traffic continue on the primary queue.

Using a dead-letter queue with an SQS redrive policy and a configured max receive count isolates so-called poison messages and prevents them from being retried repeatedly by consumers. This approach preserves throughput for healthy messages and enables troubleshooting of payloads and error conditions in the separate queue without impacting normal processing.
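A minimal boto3 sketch of wiring up the redrive policy, using placeholder queue names and a max receive count of 5:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue and look up its ARN
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Point the main queue at the DLQ after 5 failed receives
main_url = sqs.get_queue_url(QueueName="orders")["QueueUrl"]   # placeholder queue name
sqs.set_queue_attributes(
    QueueUrl=main_url,
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "5",
        })
    },
)
```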

Increase the queue’s visibility timeout to 90 seconds can help reduce premature retries when processing takes longer than expected, but it does not provide a place to quarantine messages that will never succeed and it will not help you inspect problematic messages.

Use long polling to reduce empty receives lowers the number of empty responses and API calls, but it does not address messages that consistently fail after being received and it will not isolate those messages for later analysis.

Use a temporary response queue for each failed message is a pattern for request and response workflows and it is not a practical failure isolation mechanism for SQS consumers. Creating per-message temporary queues would be inefficient and it would not provide the centralized investigation point that a dead-letter queue offers.

Isolate repeating failures by sending them to a dead letter queue and set a sensible max receive count so you can analyze bad messages without slowing the main queue.

A travel booking startup runs its transactional workload on Amazon Aurora. Performance is stable except when quarter-end revenue dashboards are generated, when Amazon CloudWatch shows simultaneous spikes in Read IOPS and CPUUtilization. What is the most cost-effective way to offload this reporting workload from the primary database during those periods?

  • ✓ C. Add an Aurora read replica and route reporting to the reader endpoint

Add an Aurora read replica and route reporting to the reader endpoint is the correct choice because it offloads read traffic from the primary and directly targets the simultaneous Read IOPS and CPUUtilization spikes observed in CloudWatch.

Add an Aurora read replica and route reporting to the reader endpoint isolates the OLTP writer from heavy reporting queries by serving reads from one or more replicas. Replicas are designed for read scaling and they are usually more cost effective for periodic reporting spikes than running a separate analytics cluster or permanently upgrading the primary instance. Using the reader endpoint lets you distribute read traffic across available replicas without changing application logic for each replica.

Deploy Amazon ElastiCache to cache the reporting query results is not ideal for ad hoc, wide scan reports that do not cache well and would still generate heavy database reads when the cache is cold. Caching adds complexity and does not remove the need to serve large analytical queries from a read-optimized store.

Build an Amazon Redshift cluster and run the analytics there can handle large analytics workloads but it requires data movement and ETL work and it is typically more expensive for occasional quarter end reports. For short periodic spikes it is usually less cost effective than adding replicas to Aurora.

Upgrade the Aurora DB instance to a larger class with more vCPUs raises capacity for both reads and writes but it does not separate reporting from OLTP workloads and it often costs more. Scaling up the primary does not protect it from heavy reporting queries the way offloading reads to replicas does.

Route reporting reads to Aurora reader endpoints during heavy reporting windows to protect the primary and avoid running a separate analytics cluster for short periodic spikes.

An engineering lead at a digital publishing startup plans to add Amazon RDS read replicas to improve read throughput for a multi-tenant editing platform. Before proceeding, the lead wants clarity on how data transfer is billed when setting up and using RDS read replicas. Which statement about RDS read replica data transfer pricing is accurate?

  • ✓ C. Replicating from a primary DB instance to a read replica in a different AWS Region incurs inter-Region data transfer charges

The correct choice is Replicating from a primary DB instance to a read replica in a different AWS Region incurs inter-Region data transfer charges. Amazon RDS does not charge for replication between a source DB instance and its read replicas when they are in the same AWS Region.

When you place a read replica in another AWS Region the data must traverse AWS network boundaries and that traffic is billed as inter-Region data transfer. For that reason Replicating from a primary DB instance to a read replica in a different AWS Region incurs inter-Region data transfer charges and you should plan for network costs when designing cross-Region read replicas.

Data replicated to a read replica in the same Availability Zone is billed for data transfer is incorrect because replication within the same AWS Region including within the same Availability Zone does not incur RDS data transfer fees for the replication traffic.

There are no data transfer fees when replicating to a read replica in another AWS Region is incorrect because cross-Region replication is billed as data transferred between Regions and therefore incurs charges.

Amazon RDS Proxy is not relevant to read replica data transfer pricing because it provides connection pooling and management and it does not change how replication traffic is billed.

Remember that in-Region read replica replication is not charged and that cross-Region replication generates data transfer costs so include network fees when planning replicas.

A regional insurance carrier plans to deploy an Amazon RDS Multi-AZ database to store policy and claim transactions. Their actuarial modeling application runs in the headquarters data center, and when employees are in the office the application must connect directly to the RDS database. The company needs this connectivity to be secure and efficient. Which approach provides the most secure connectivity?

  • ✓ C. Build a VPC with private subnets in separate Availability Zones, deploy the Multi-AZ RDS in those private subnets, and connect the corporate network to the VPC using AWS Site-to-Site VPN with a customer gateway

Build a VPC with private subnets in separate Availability Zones, deploy the Multi-AZ RDS in those private subnets, and connect the corporate network to the VPC using AWS Site-to-Site VPN with a customer gateway is the most secure approach because it prevents public exposure of the database and provides an encrypted, routed connection from the headquarters network to the VPC.

This approach places the Multi-AZ RDS in private subnets so the database is not reachable from the internet and it uses AWS Site-to-Site VPN with a customer gateway to create an encrypted IPsec tunnel for traffic between the corporate network and the VPC. That combination preserves least privilege and allows on-premises systems to reach the database over a managed, routed link while security groups and network ACLs control access.

Create a VPC with two public subnets, host the RDS instance in those public subnets, and allow the headquarters CIDR in the database security group to permit direct internet access is insecure because putting the database in public subnets exposes it to the internet and relies on security groups alone to block unwanted traffic.

Place the RDS database in public subnets within a VPC and have office users connect using AWS Client VPN from their desktops is not ideal because Client VPN secures individual client devices rather than the whole office network and hosting the database in public subnets contradicts the goal of minimizing exposure.

Provision AWS Direct Connect with a public virtual interface and run the RDS database in public subnets to provide low-latency access from the office is incorrect because a public virtual interface is not the right path to privately address VPC resources and making the database public undermines security. A private virtual interface would be appropriate for Direct Connect and the database should remain in private subnets.

Keep databases in private subnets and prefer private, routed connections such as Site-to-Site VPN or Direct Connect with a private VIF when the question asks for the most secure option.

A digital learning startup completed its move from a colocation facility to AWS about 90 days ago and is considering Amazon CloudFront as the CDN for its main web application; the architects want guidance on request routing, protection of sensitive form fields, and designing for failover; which statements about CloudFront are correct? (Choose 3)

  • ✓ A. Configure an origin group with a designated primary and secondary origin to enable automatic failover

  • ✓ C. Route requests to different origins by defining separate cache behaviors for path patterns such as /static/* and /api/*

  • ✓ E. Apply field-level encryption in CloudFront to encrypt sensitive fields like card numbers at the edge

Configure an origin group with a designated primary and secondary origin to enable automatic failover, Route requests to different origins by defining separate cache behaviors for path patterns such as /static/* and /api/*, and Apply field-level encryption in CloudFront to encrypt sensitive fields like card numbers at the edge are correct.

Configure an origin group with a designated primary and secondary origin to enable automatic failover is correct because CloudFront supports origin groups that perform health checks and automatically fail over to a secondary origin when the primary is unhealthy or returns configured error responses.

Route requests to different origins by defining separate cache behaviors for path patterns such as /static/* and /api/* is correct since CloudFront uses cache behaviors and path pattern matching to send different request paths to different origins and to apply different caching and routing settings per path.

Apply field-level encryption in CloudFront to encrypt sensitive fields like card numbers at the edge is correct because CloudFront can encrypt specified request fields at the edge with public keys so that sensitive form fields remain protected until a trusted backend decrypts them.

Use geo restriction to achieve failover and high availability across countries is incorrect because geo restriction only allows or blocks viewers by location and does not provide redundancy or automatic failover between origins.

Use AWS Key Management Service (AWS KMS) directly in CloudFront to selectively encrypt specific form fields is incorrect because CloudFront does not integrate with KMS for per field encryption. Field level encryption uses public keys managed for CloudFront and decryption happens at the trusted backend.

Send viewer traffic to different origins based on the selected price class is incorrect because price classes influence which edge locations are used to serve viewers by cost tier and do not control how requests are routed to origins.

Focus on CloudFront features not billing knobs when answering routing and protection questions. Remember that cache behaviors route requests, origin groups handle failover, and field level encryption protects form fields at the edge.

Summit Drafting LLC is moving approximately 48 TB of design files from an on-premises NAS to AWS. The files must be shared within one AWS Region by Amazon EC2 instances running Windows, macOS, and Linux. Teams must access the same datasets over both SMB and NFS, with some files used often and others only occasionally. The company wants a fully managed solution that keeps administration to a minimum. Which approach best meets these needs?

  • ✓ B. Create an Amazon FSx for NetApp ONTAP file system and migrate the dataset

The correct choice is Create an Amazon FSx for NetApp ONTAP file system and migrate the dataset. This option provides native multi protocol access so the same files can be presented over SMB to Windows and macOS and over NFS to Linux within a single AWS Region.

Create an Amazon FSx for NetApp ONTAP file system and migrate the dataset is a fully managed service so administration is minimal and it includes storage efficiencies and automatic tiering so frequently used files remain on performance storage while less used files move to lower cost tiers. The service also offers NetApp data management features such as snapshots and cloning which simplify migration and ongoing operations.

Amazon FSx for OpenZFS is not suitable because it supports NFS only and cannot present the same dataset over SMB to Windows or macOS clients.

Set up Amazon EFS with lifecycle policies to Infrequent Access and use AWS DataSync to copy the data does not meet the requirement because Amazon EFS is NFS only and cannot provide SMB access for Windows or macOS SMB clients.

Amazon FSx for Windows File Server is not a fit because it provides SMB access only and lacks NFS support so Linux clients could not access the same files over NFS.

When a single dataset must be shared over both SMB and NFS by mixed OS clients choose a multi protocol file service such as Amazon FSx for NetApp ONTAP rather than an NFS only or SMB only option.

A media analytics startup in Madrid tracks live viewer behavior and associated ad exposures. Event data is processed in real time on its on-premises platform. Each night, the team aggregates the day’s records into a single compressed archive of about 3 gigabytes and stores it in an Amazon S3 bucket for backup. What is the fastest way to move the nightly compressed file from the data center to Amazon S3?

  • ✓ C. Use multipart upload with Amazon S3 Transfer Acceleration

The fastest approach is Use multipart upload with Amazon S3 Transfer Acceleration. This option combines edge accelerated network paths with parallel multipart transfer to move the nightly compressed archive quickly from Madrid to an S3 bucket in AWS.

The Use multipart upload with Amazon S3 Transfer Acceleration approach uses CloudFront edge locations to reduce latency and it allows uploading parts in parallel so throughput increases. Multipart upload also provides resumability so failed part uploads do not require restarting the whole transfer and that improves reliability for a multi gigabyte file over the public internet.
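As a boto3 sketch, assuming a placeholder bucket on which Transfer Acceleration is enabled and a placeholder archive file name:

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

BUCKET = "nightly-backups-example"      # placeholder bucket name

# One-time: enable Transfer Acceleration on the bucket
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET, AccelerateConfiguration={"Status": "Enabled"}
)

# Upload through the accelerate endpoint using parallel multipart parts
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,   # switch to multipart above 100 MB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)
s3.upload_file(
    "events-archive.tar.gz",                 # placeholder local file
    BUCKET,
    "nightly/events-archive.tar.gz",
    Config=transfer_config,
)
```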

Upload the compressed object to Amazon S3 in a single PUT operation is slower and more fragile for large objects because it does not allow parallelism and a failure forces a complete retry.

Use AWS DataSync to copy the file from on premises to the S3 bucket is useful for ongoing large-scale migrations and directory transfers, but it requires deploying an agent and additional configuration. It typically will not beat edge-accelerated multipart uploads for a single large file sent nightly across continents.

Use standard multipart upload to Amazon S3 without Transfer Acceleration improves throughput by uploading parts in parallel but it lacks the edge optimized routes that Transfer Acceleration provides so it is usually slower across long distance internet paths.

Transfer Acceleration plus multipart upload is the best choice for time-sensitive, long-distance transfers of multi-gigabyte files. Use multipart upload for files over 100 megabytes and run a short test to measure the acceleration benefit from your location.

A new IT administrator at a nonprofit creates a fresh AWS account and launches an Amazon EC2 instance named appA in us-west-2. She then creates an EBS snapshot of appA, builds an AMI from that snapshot in us-west-2, and copies the AMI to eu-central-1. She subsequently launches an instance named appB in eu-central-1 from the copied AMI. At this moment, which resources are present in eu-central-1?

  • ✓ B. 1 Amazon EC2 instance, 1 AMI, and 1 snapshot exist in eu-central-1

The correct choice is 1 Amazon EC2 instance, 1 AMI, and 1 snapshot exist in eu-central-1. When you copy an AMI to another Region AWS also copies the underlying EBS snapshot or snapshots into the destination Region. After launching an instance from the copied AMI you therefore have one EC2 instance the copied AMI and the snapshot that backs it in eu-central-1.

Copying an AMI replicates the AMI metadata and creates copies of the EBS snapshot or snapshots that back the AMI in the target Region. The copied AMI is available in the destination Region and the EBS snapshot copy is stored there as well. Launching an instance from that AMI creates an EC2 instance that is backed by a volume created from the copied snapshot.

1 Amazon EC2 instance and 1 AMI exist in eu-central-1 is incorrect because it ignores the EBS snapshot that AWS copies along with the AMI into the target Region.

1 Amazon EC2 instance and 1 snapshot exist in eu-central-1 is incorrect because the AMI itself is also present in the destination Region after the copy completes.

1 Amazon EC2 instance, 1 AMI, and 2 snapshots exist in eu-central-1 is not accurate for a typical single volume AMI because copying a single volume AMI produces one snapshot copy rather than two.

When you copy an AMI across Regions remember to count both the copied AMI and the copied EBS snapshot or snapshots when tallying resources in the destination Region.

A boutique event equipment company has 8 engineers sharing a single AWS account, and each team deployed separate VPCs for payments, logistics, and analytics services. The teams now need private connectivity so their applications can talk to each other across these VPCs with the lowest ongoing cost. What should they implement?

  • ✓ B. VPC peering connection

VPC peering connection is the correct choice because it provides private IP routing between VPCs in the same AWS account while keeping ongoing costs low.

VPC peering connection supports private IP routing with no hourly charge so teams only pay for data transfer. This makes it cost effective for a small number of VPCs that need direct communication and it is straightforward to configure within a single account.

VPC peering connection is non-transitive, so each pair of VPCs that must communicate requires its own peering connection and route entries. If the number of VPCs grows and many-to-many connectivity is required, a hub-and-spoke design using AWS Transit Gateway becomes more practical to reduce operational overhead.
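A boto3 sketch of a single peering link plus the routes on both sides, with placeholder VPC, route table, and CIDR values:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs and CIDRs for two of the VPCs and their route tables
PAYMENTS_VPC = "vpc-0aaa111122223333a"   # e.g. 10.1.0.0/16
LOGISTICS_VPC = "vpc-0bbb444455556666b"  # e.g. 10.2.0.0/16
PAYMENTS_RT = "rtb-0aaa111122223333a"
LOGISTICS_RT = "rtb-0bbb444455556666b"

# Request and accept the peering connection (same account, so we can accept directly)
peering = ec2.create_vpc_peering_connection(VpcId=PAYMENTS_VPC, PeerVpcId=LOGISTICS_VPC)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Add routes in both VPCs so private traffic flows over the peering link
ec2.create_route(RouteTableId=PAYMENTS_RT, DestinationCidrBlock="10.2.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId=LOGISTICS_RT, DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```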

Internet Gateway is designed to provide internet access to and from a VPC and it does not create private VPC to VPC connectivity so it is not suitable for this requirement.

AWS Direct Connect is intended to provide dedicated links between on premises networks and AWS and it would add unnecessary cost and complexity for internal VPC to VPC traffic within the same account.

NAT gateway enables outbound internet access for instances in private subnets while blocking inbound internet initiated connections so it cannot be used to link separate VPCs together.

For a few VPCs choose VPC peering connection to minimize ongoing cost and remember that peering is non transitive so plan pairwise links or use Transit Gateway for many to many scenarios.

Riverton Labs, a geospatial analytics startup, has standardized on AWS Control Tower with a multi-account setup where each engineer develops in an isolated sandbox account. Finance has observed occasional unexpected cost spikes from individual sandboxes, and leadership needs a centrally managed way to cap spend that triggers automatic enforcement when a threshold is crossed while requiring minimal ongoing maintenance. What is the most efficient method to enforce account-level spending limits across about 60 developer accounts with the least operational overhead?

  • ✓ C. Use AWS Budgets to create per-account budgets with alerts for actual and forecasted spend, and attach Budgets actions that apply a restrictive Deny policy to the developer’s primary role when the threshold is breached

Use AWS Budgets to create per-account budgets with alerts for actual and forecasted spend, and attach Budgets actions that apply a restrictive Deny policy to the developer’s primary role when the threshold is breached is the correct option because it provides native, proactive enforcement with minimal operational overhead and it scales across many sandbox accounts.

Budgets support both actual and forecasted alerts and Budgets actions can automatically apply IAM policy changes to restrict permissions when thresholds are exceeded. This lets finance and leadership enforce account level limits from a central place while avoiding custom per account scripts or schedules and it integrates well with an AWS Control Tower multi account model.

Deploy a daily AWS Lambda in each sandbox account that reads Cost Explorer and triggers an AWS Config remediation rule when spend exceeds a threshold is not ideal because it requires custom code and per account scheduling and it only reacts at the job cadence rather than providing immediate, native budget enforcement.

Configure AWS Cost Anomaly Detection with Amazon EventBridge to invoke an AWS Systems Manager Automation runbook that stops or tags costly resources when anomalies are detected is unsuitable because anomaly detection targets unusual spending patterns rather than predictable budget thresholds and it still needs built automation that does not directly restrict account level permissions.

Publish standardized portfolios in AWS Service Catalog with template cost limits and add a scheduled Lambda in each account to shut down resources after hours and restart them each morning can reduce idle costs and standardize provisioning but it does not enforce a hard spend cap and it adds ongoing per account operational work.

Prefer AWS Budgets actions when you need centrally enforced, low maintenance account level spend controls and create per account budgets for isolated developer sandboxes.

A regional media startup, BrightWave Studios, runs its customer platform on a single on-premises MySQL server. The team wants to move to a fully managed AWS database to increase availability and improve performance while keeping operational overhead low. They also need to isolate heavy, read-only reporting queries from the primary write workload so that transactional performance remains stable. Which approach is the most operationally efficient way to achieve these objectives?

  • ✓ B. Use Amazon Aurora MySQL-Compatible Edition and route reporting and BI queries to an Aurora Replica in the same cluster

Use Amazon Aurora MySQL-Compatible Edition and route reporting and BI queries to an Aurora Replica in the same cluster is the correct choice because it provides a managed, highly available MySQL-compatible service that isolates read-heavy reporting from the primary write workload while keeping operational overhead low.

Aurora uses distributed multi-AZ storage and automated failover which improves availability and reduces management tasks. Aurora Replicas share the same storage as the primary which enables low-latency reads and allows you to run reporting and BI queries on replicas so transactional performance on the primary remains stable. The managed nature of Aurora also automates backups, patching, and scaling which matches the requirement to minimize operational effort.

Deploy self-managed MySQL on Amazon EC2 across two Availability Zones with asynchronous replication to a reporting node and handle backups and patching yourself is not ideal because self-management requires the team to own backups, patching, failover, and scaling which increases operational overhead and risk.

Use Amazon RDS for MySQL in a Single-AZ deployment and create a read replica in the same AZ for reporting is insufficient for high availability because Single-AZ deployments do not provide automatic failover across Availability Zones and a same-AZ read replica does not protect against AZ failure.

Use AWS Database Migration Service to move to an Aurora Global Database in two Regions and direct reporting to a reader in the secondary Region is overengineered for this requirement because a multi-Region Global Database adds complexity and cost and is intended for cross-Region disaster recovery or global low-latency reads rather than isolating reporting on a single-region operationally efficient setup.

When an exam asks for the most operationally efficient MySQL solution with high availability and read isolation favor Aurora MySQL with read replicas because it combines managed HA, low-latency read scaling, and automated operations.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel, which trains devs in Java, Spring, AI and ML, has well over 30,000 subscribers.


Next Steps

So what’s next? A great way to secure your employment or even open the door to new opportunities is to get certified. If you’re interested in AWS products, here are a few great resources to help you get Cloud Practitioner, Solution Architect, Machine Learning and DevOps certified from AWS:

Put your career on overdrive and get AWS certified today!