Question 1
A retail analytics startup, Scrumtuous Market Insights, stores quarterly sales KPIs in an Amazon DynamoDB table named SalesMetrics. The team is building a lightweight web dashboard to present this data and wants to use fully managed components with the least possible operational overhead. Which architectures would meet these goals while minimizing operational effort? (Choose 2)
-
❏ A. An Application Load Balancer sends traffic to a target group that lists the DynamoDB table as the backend target
-
❏ B. An Amazon API Gateway REST API integrates directly with the DynamoDB table to read the sales metrics
-
❏ C. An Application Load Balancer routes traffic to a target group of Amazon EC2 instances that query the DynamoDB table
-
❏ D. An Amazon API Gateway REST API triggers an AWS Lambda function that retrieves items from the DynamoDB table
-
❏ E. An Amazon Route 53 hosted zone routes requests directly to an AWS Lambda endpoint to run code that reads the DynamoDB table
Question 2
Which AWS service provides low-latency on-premises NFS and SMB file access to objects in Amazon S3 with local caching and minimal cost?
-
❏ A. Amazon FSx for Lustre
-
❏ B. AWS Storage Gateway – S3 File Gateway
-
❏ C. Amazon EFS
-
❏ D. Mountpoint for Amazon S3
Question 3
A digital media startup runs its subscription billing platform on AWS. The application uses an Amazon RDS for MySQL Multi-AZ DB cluster as the database tier. For regulatory reasons, the team must keep database backups for 45 days. Engineers take both automated RDS backups and occasional manual snapshots for point-in-time needs. The company wants to enforce a 45-day retention policy for all backups while preserving any automated and manual backups that were created within the last 45 days. The approach should minimize cost and operational work. Which solution meets these goals most cost effectively?
-
❏ A. Disable RDS automated backups and use AWS Backup daily backup plans with a 45-day retention policy
-
❏ B. Set the RDS automated backup retention to 45 days and schedule a simple script to delete manual snapshots older than 45 days
-
❏ C. Export RDS snapshots to Amazon S3 and rely on S3 Lifecycle rules to delete objects after 45 days
-
❏ D. Use AWS Backup to enforce a 45-day policy on automated backups and invoke AWS Lambda to remove manual snapshots older than 45 days
Question 4
How should a company migrate approximately 250 TB of on-premises files to Amazon S3 over an existing 20 Gbps Direct Connect connection while keeping the traffic private, automating recurring synchronizations, and using an accelerated managed service?
-
❏ A. AWS Storage Gateway file gateway
-
❏ B. AWS DataSync with public endpoint
-
❏ C. AWS Snowball Edge
-
❏ D. AWS DataSync via VPC interface endpoint over Direct Connect
Question 5
Orion Couriers runs a legacy report collector on a single Amazon EC2 instance in a public subnet. The application gathers scanned PDF delivery slips, writes them to an attached Amazon EBS volume, and at 01:00 UTC each night pushes the accumulated files to an Amazon S3 archive bucket. A solutions architect observes that the instance is using the public S3 endpoint over the internet for uploads. The company wants the data transfers to stay on the AWS private network and avoid the public endpoint entirely. What should the architect implement?
-
❏ A. Create an S3 access point in the same Region, grant the instance role access, and update the application to use the access point alias
-
❏ B. Enable S3 Transfer Acceleration on the bucket and update the application to use the acceleration endpoint
-
❏ C. Deploy a gateway VPC endpoint for Amazon S3 and update the subnet route table to use it; restrict access with a bucket or IAM policy tied to the endpoint
-
❏ D. Order an AWS Direct Connect dedicated connection and route VPC traffic to Amazon S3 over it
Question 6
Which scaling approach most effectively reduces cold-start latency during the morning traffic surge for an EC2 Auto Scaling group while keeping costs low?
-
❏ A. Schedule desired capacity to 28 just before business hours
-
❏ B. Enable a warm pool with running instances for the morning surge
-
❏ C. Use target tracking with a lower CPU target and shorter cooldown
-
❏ D. Use step scaling with reduced CPU thresholds and a short cooldown
Question 7
A retail analytics startup moved several cron-style workloads to Amazon EC2 instances running Amazon Linux. Each job runs for about 80 minutes, and different teams wrote them in various programming languages. All jobs currently execute on a single server, which creates throughput bottlenecks and limited scalability. The team wants to run these tasks in parallel across instances while keeping operations simple and avoiding significant rework. What approach will meet these needs with the least operational overhead?
-
❏ A. Run the tasks as jobs in AWS Batch and trigger them on a schedule with Amazon EventBridge
-
❏ B. Create an Amazon Machine Image from the existing EC2 host and use an Auto Scaling group to launch multiple identical instances concurrently
-
❏ C. Rewrite each task as an AWS Lambda function and schedule invocations with Amazon EventBridge
-
❏ D. Containerize the workloads and use Amazon ECS on AWS Fargate with EventBridge scheduled tasks
Question 8
An EC2-hosted microservice behind an Application Load Balancer has one API route that takes about four minutes to complete while other routes finish in about 200 milliseconds. What should be implemented to decouple the long-running work and avoid blocking requests?
-
❏ A. AWS Step Functions
-
❏ B. Increase ALB idle timeout
-
❏ C. Amazon SQS with asynchronous processing
-
❏ D. Amazon SNS
Question 9
A sports analytics firm maintains an AWS Direct Connect link to AWS and has moved its enterprise data warehouse into AWS. Data analysts use a business intelligence dashboard to run queries. The average result set returned per query is 80 megabytes, and the dashboard does not cache responses. Each rendered dashboard page is approximately 350 kilobytes. Which approach will deliver the lowest data transfer egress cost for the company?
-
❏ A. Host the BI tool on-premises and fetch results from the AWS data warehouse over the public internet in the same AWS Region
-
❏ B. Host the BI tool in the same AWS Region as the data warehouse and let users access it through the existing Direct Connect from the corporate network
-
❏ C. Host the BI tool on-premises and fetch results from the AWS data warehouse across the Direct Connect link in the same AWS Region
-
❏ D. Host the BI tool in the same AWS Region as the data warehouse and let users access it via an AWS Site-to-Site VPN
Question 10
Which AWS relational database option provides cross-Region disaster recovery with approximately a three-second recovery point objective and a thirty-second recovery time objective?
-
❏ A. Amazon RDS Multi-AZ
-
❏ B. Aurora Global Database
-
❏ C. AWS Elastic Disaster Recovery
-
❏ D. Amazon RDS cross-Region read replica
Question 11
Orion Retail processes shopper photos uploaded to an Amazon S3 bucket and writes summarized demographic attributes as CSV files to a second bucket every 30 minutes. Security requires encryption of all files at rest, and analysts need to run standard SQL against the dataset without managing servers. What should the solutions architect implement to meet these requirements?
-
❏ A. Encrypt the S3 buckets with AWS KMS keys and use Amazon Managed Service for Apache Flink to analyze the files
-
❏ B. Configure S3 SSE-KMS and run SQL queries with Amazon Athena
-
❏ C. Enable S3 server-side encryption and query the data with Amazon Redshift Spectrum
-
❏ D. Use S3 server-side encryption and load the CSVs into Amazon Aurora Serverless for SQL queries
Question 12
Which AWS service uses global anycast, supports UDP traffic, enables rapid cross-Region failover, and permits the continued use of an external DNS provider?
-
❏ A. Amazon CloudFront
-
❏ B. Amazon Route 53 Application Recovery Controller
-
❏ C. AWS Global Accelerator
-
❏ D. Amazon Route 53
Question 13
A sports analytics startup anticipates a massive traffic spike for the kickoff of an interactive live match tracker. Their stack runs on AWS with application servers on Amazon EC2 and a transactional backend on Amazon RDS. The operations team needs proactive visibility into performance during the event with metric updates at intervals of 90 seconds or less, and they prefer something that can be enabled quickly with minimal maintenance. What should they implement?
-
❏ A. Stream EC2 operating system logs to Amazon OpenSearch Service and visualize CPU and memory in OpenSearch Dashboards
-
❏ B. Capture EC2 state change events with Amazon EventBridge, forward to Amazon SNS, and have a dashboard subscribe to view metrics
-
❏ C. Enable EC2 Detailed Monitoring and use Amazon CloudWatch to view 1-minute instance metrics during the launch window
-
❏ D. Install the CloudWatch agent on all instances to publish high-resolution custom metrics and analyze them from CloudWatch Logs with Amazon Athena
Question 14
Which AWS database option provides MySQL compatibility along with automatic compute scaling, built-in high availability, and minimal operational effort?
-
❏ A. Amazon RDS for MySQL Multi-AZ with read replicas
-
❏ B. Amazon Aurora MySQL provisioned with read replica Auto Scaling
-
❏ C. Amazon Aurora MySQL Serverless v2
-
❏ D. Single larger MySQL on EC2
Question 15
An international film distribution firm is moving its core workloads to AWS. The company has built an Amazon S3 data lake to receive and analyze content from external partners. Many partners can upload using S3 APIs, but several operate legacy tools that only support SFTP and refuse to change their process. The firm needs a fully managed AWS solution that gives partners an SFTP endpoint which writes directly to S3 and supports identity federation so internal teams can map each partner to specific S3 buckets or prefixes. Which combination of actions will best meet these needs with minimal ongoing maintenance? (Choose 2)
-
❏ A. Use Amazon AppFlow to ingest files from legacy SFTP systems into S3 on an hourly schedule
-
❏ B. Provision an AWS Transfer Family server with SFTP enabled that stores uploads in an S3 bucket and map each partner user to a dedicated IAM role scoped to that bucket or prefix
-
❏ C. Run a custom OpenSSH-based SFTP server on Amazon EC2 and use cron to copy received files into S3, with CloudWatch for monitoring
-
❏ D. Apply S3 bucket policies that grant IAM role–based, per-partner access and integrate AWS Transfer Family with Amazon Cognito or an external identity provider for federation
-
❏ E. Set up AWS DataSync with an SFTP location to replicate partner files into S3
Question 16
Which AWS architecture provides a scalable and highly available HTTPS endpoint to receive JSON events from devices, processes those events using serverless services, and stores the results durably?
-
❏ A. Amazon EC2 single instance writing to Amazon S3
-
❏ B. Amazon Route 53 to AWS Lambda to Amazon DynamoDB
-
❏ C. Amazon EventBridge with rule to Amazon DynamoDB
-
❏ D. Amazon API Gateway to AWS Lambda to Amazon DynamoDB
Question 17
An online education startup is trialing a Linux-based Python application on a single Amazon EC2 instance. The instance uses one 2 TB Amazon EBS General Purpose SSD (gp3) volume to store customer data. The team plans to scale out the application to several EC2 instances in an Auto Scaling group, and every instance must read and write the same dataset that currently resides on the EBS volume. They want a highly available and cost-conscious approach that requires minimal changes to the application. What should the team implement?
-
❏ A. Set up Amazon FSx for Lustre, link it to an Amazon S3 bucket, and mount the file system on each EC2 instance for shared access
-
❏ B. Use Amazon EBS Multi-Attach with an io2 volume and attach it to all instances in the Auto Scaling group
-
❏ C. Create an Amazon Elastic File System in General Purpose performance mode and mount it across all EC2 instances
-
❏ D. Run a single EC2 instance as an NFS server, attach the existing EBS volume, and export the share to the Auto Scaling group instances
Question 18
Which S3 option automatically reduces storage costs across 500 buckets while requiring minimal ongoing administration and no lifecycle rule management?
-
❏ A. S3 Storage Lens
-
❏ B. S3 Glacier Deep Archive
-
❏ C. S3 Intelligent-Tiering
-
❏ D. S3 One Zone-IA
Question 19
BrightPlay Analytics runs a live leaderboard web app behind an Application Load Balancer and a fleet of Amazon EC2 instances. The service stores game results in Amazon RDS for MySQL. During weekend tournaments, read requests surge to roughly 120,000 per minute, and users see delays and occasional timeouts that trace back to slow database reads. The company needs to improve responsiveness while making the fewest possible changes to the existing architecture. What should the solutions architect recommend?
-
❏ A. Connect the application to the database using Amazon RDS Proxy
-
❏ B. Use Amazon ElastiCache to cache frequently accessed reads in front of the database
-
❏ C. Create Amazon RDS for MySQL read replicas and route read traffic to them
-
❏ D. Migrate the data layer to Amazon DynamoDB
Question 20
How can you replicate S3 objects encrypted with SSE-KMS to another Region while ensuring the same KMS key material and key ID are used in both Regions?
-
❏ A. Use identical KMS key alias names in both Regions and enable S3 replication
-
❏ B. Create a new source bucket using SSE-KMS with a KMS multi-Region key, replicate to a bucket with the replica key, and migrate existing data
-
❏ C. Convert the existing single-Region KMS key to a multi-Region key and use S3 Batch Replication
-
❏ D. Enable S3 replication and share the current KMS key across Regions
Question 21
A regional media startup named Ardent Stream runs an on-premises analytics application that updates and adds files many times per hour. A new compliance rule requires a complete audit trail of storage activity that includes object-level API actions and configuration changes retained for at least 180 days. Local NAS capacity is almost exhausted, and the team wants to offload part of the dataset to AWS without interrupting ongoing writes. Which approach best satisfies the auditing requirement while easing on-premises storage pressure?
-
❏ A. Move existing data to Amazon S3 using AWS DataSync and enable AWS CloudTrail management events
-
❏ B. Use AWS Storage Gateway to back data with Amazon S3 and enable AWS CloudTrail data events for S3
-
❏ C. Enable Amazon S3 Transfer Acceleration for uploads and turn on AWS CloudTrail data events
-
❏ D. Ship the data with AWS Snowball Edge and log AWS CloudTrail management events
Question 22
Which AWS services let you run Kubernetes pods without managing the underlying nodes and provide a managed AMQP compatible message broker with minimal code changes? (Choose 2)
-
❏ A. Amazon SQS
-
❏ B. Amazon EKS on Fargate
-
❏ C. Amazon MSK
-
❏ D. Amazon MQ
-
❏ E. Amazon EKS on EC2 with Karpenter
Question 23
An edtech startup is launching a platform to store learner progress, test submissions, and user preferences. The database must use a relational schema with ACID transactions across related records. Usage surges unpredictably during scheduled practice exams, so capacity should adjust automatically with little administration. The team also needs automated backups while keeping operations minimal. Which solution is the most cost-effective?
-
❏ A. Use Amazon DynamoDB with on-demand capacity and enable Point-in-Time Recovery
-
❏ B. Launch Amazon RDS for MySQL in Multi-AZ with provisioned IOPS and retain automated backups in Amazon S3 Glacier Deep Archive
-
❏ C. Run an open-source relational database on Amazon EC2 Spot Instances in an Auto Scaling group with nightly snapshots to Amazon S3 Standard-Infrequent Access
-
❏ D. Use Amazon Aurora Serverless v2 with automatic scaling and configure automated backups to Amazon S3 with a 10-day retention
Question 24
A company needs to securely store application secrets and have them automatically rotated about every 90 days with minimal operational overhead. Which AWS service should they use?
-
❏ A. AWS Systems Manager Parameter Store
-
❏ B. Amazon DynamoDB
-
❏ C. AWS Key Management Service
-
❏ D. AWS Secrets Manager with automatic rotation
Question 25
A travel booking startup needs to programmatically expand or shrink the geographic area that directs users to a specific application endpoint as demand fluctuates. Which Amazon Route 53 feature provides this capability?
-
❏ A. Weighted routing
-
❏ B. Latency-based routing
-
❏ C. Geoproximity routing
-
❏ D. Geolocation routing
Question 26
Which architecture preserves all tasks, decouples a fast intake stage from a slower processing stage, and lets each stage scale independently based on its backlog?
-
❏ A. Use a single Amazon SQS queue for both stages and scale on message count
-
❏ B. Create two Amazon SQS queues; each worker fleet polls its own queue and scales on its queue length
-
❏ C. Use an Amazon SNS topic to fan out tasks to all workers
-
❏ D. Create two Amazon SQS queues and have Auto Scaling react to queue notifications
Question 27
At LumenWave Retail, analysts load data into an Amazon Redshift warehouse to join and aggregate files stored in Amazon S3. Access patterns show that roughly 60 days after ingestion, these datasets are seldom queried and no longer considered hot. The team must continue to use standard SQL with queries starting immediately while minimizing ongoing Redshift costs as much as possible. What approach should they take? (Choose 2)
-
❏ A. Launch a smaller Amazon Redshift cluster to hold and query the cold data
-
❏ B. Transition the data to Amazon S3 Standard-IA after 60 days
-
❏ C. Use Amazon Redshift Spectrum to query the S3 data while keeping a minimal Redshift cluster
-
❏ D. Move the data to Amazon S3 Glacier Deep Archive after 60 days
-
❏ E. Use Amazon Athena to run SQL directly on the S3 data
Question 28
How can you expose an HTTP service hosted on EC2 instances in private subnets to the internet while keeping the instances private and minimizing operational overhead?
-
❏ A. Amazon API Gateway with VPC Link to a private NLB
-
❏ B. Internet-facing Application Load Balancer in public subnets with private instances as targets
-
❏ C. NAT gateway in a public subnet with routes from private subnets
-
❏ D. Amazon CloudFront
Question 29
A media technology startup is launching an AI-powered image tagging platform made up of small independent services, each handling a distinct processing step. When a service starts, its model loads roughly 700 MB of parameters from Amazon S3 into memory. Customers submit single images or large batches through a REST API, and traffic can spike sharply during seasonal promotions while dropping to near zero overnight. The team needs a design that scales efficiently and remains cost-effective for this bursty workload. What should the solutions architect recommend?
-
❏ A. Expose the API through an Application Load Balancer and implement the ML workers with AWS Lambda using provisioned concurrency to minimize cold starts
-
❏ B. Buffer incoming requests in Amazon SQS and run the ML processors as Amazon ECS services that poll the queue with scaling tied to queue depth
-
❏ C. Front the service with a Network Load Balancer and run the ML services on Amazon EKS with node-based CPU autoscaling
-
❏ D. Publish API events to Amazon EventBridge and have AWS Lambda targets process them with memory and concurrency increased dynamically per payload size
Question 30
Which configuration statements correctly distinguish NAT instances from NAT gateways? (Choose 3)
-
❏ A. A NAT gateway supports port forwarding
-
❏ B. A NAT instance can be used as a bastion
-
❏ C. A NAT instance supports security groups
-
❏ D. Security groups can be attached to a NAT gateway
-
❏ E. A NAT instance can forward specific ports
-
❏ F. A NAT gateway performs TLS termination
Question 31
At Kestrel Dynamics, about 18 engineers need to quickly try AWS managed policies by temporarily attaching them to their own IAM users for short experiments, but you must ensure they cannot elevate privileges by giving themselves the AdministratorAccess policy. What should you implement to meet these requirements?
-
❏ A. Create a Service Control Policy in AWS Organizations that blocks attaching AdministratorAccess to any identity in the account
-
❏ B. Configure an IAM permissions boundary on every engineer’s IAM user to restrict which managed policies they can self-attach
-
❏ C. AWS Control Tower
-
❏ D. Attach an identity-based IAM policy to each engineer that denies attaching the AdministratorAccess policy to their own user
Question 32
How can teams located in different AWS Regions access the same Amazon EFS file system for shared editing while minimizing operational effort?
-
❏ A. Move the file to Amazon S3 with versioning
-
❏ B. Use inter-Region VPC peering to reach EFS mount targets and mount the same file system
-
❏ C. Enable EFS replication to file systems in each Region
-
❏ D. Put EFS behind an NLB and use AWS Global Accelerator
Question 33
A regional transportation company operates a costly proprietary relational database in its on-premises facility. The team plans to move to an open-source engine on AWS to reduce licensing spend while preserving advanced features such as secondary indexes, foreign keys, triggers, and stored procedures. Which pair of AWS services should be used together to run the migration and handle the required schema and code conversion? (Choose 2)
-
❏ A. AWS DataSync
-
❏ B. AWS Database Migration Service (AWS DMS)
-
❏ C. AWS Schema Conversion Tool (AWS SCT)
-
❏ D. AWS Snowball Edge
-
❏ E. Basic Schema Copy in AWS DMS
Question 34
Which AWS design enables near-real-time fan-out of about 1.5 million streaming events per hour to multiple consumers and redacts sensitive fields before storing the sanitized items in a document database for fast reads?
-
❏ A. Amazon EventBridge with Lambda redaction to Amazon DynamoDB; services subscribe to the bus
-
❏ B. Amazon Kinesis Data Firehose with Lambda transform to Amazon DynamoDB; internal services read from Firehose
-
❏ C. Amazon Kinesis Data Streams with AWS Lambda redaction writing to Amazon DynamoDB; attach additional Kinesis consumers
-
❏ D. Write to Amazon DynamoDB, auto-scrub new items, and use DynamoDB Streams for fan-out
Question 35
A regional architecture firm is retiring its on-premises Windows file server clusters and wants to centralize storage on AWS. The team needs highly durable, fully managed file storage that Windows clients in eight branch locations can access natively using the SMB protocol. Which AWS services satisfy these requirements? (Choose 2)
-
❏ A. Amazon Simple Storage Service (Amazon S3)
-
❏ B. Amazon FSx for Windows File Server
-
❏ C. Amazon Elastic Block Store (Amazon EBS)
-
❏ D. AWS Storage Gateway File Gateway
-
❏ E. Amazon Elastic File System (Amazon EFS)
Question 36
Which VPC attributes must be enabled for EC2 instances to resolve Route 53 private hosted zone records using the Amazon provided DNS?
-
❏ A. Create Route 53 Resolver inbound and outbound endpoints
-
❏ B. Turn on VPC DNS resolution and DNS hostnames
-
❏ C. Remove namespace overlap with a public hosted zone
-
❏ D. Set a DHCP options set with custom DNS servers
Question 37
A post-production house named Northlight Works ingests raw 8K footage ranging from 3 to 6 TB per file and applies noise reduction and color matching before delivery. Each file requires up to 35 minutes of compute. The team needs a solution that elastically scales for spikes while remaining cost efficient. Finished videos must stay quickly accessible for at least 120 days. Which approach best meets these requirements?
-
❏ A. Deploy Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer, use Amazon SQS for job queuing, store metadata in Amazon RDS, and place completed outputs in Amazon S3 Glacier Flexible Retrieval
-
❏ B. Run an on-premises render farm integrated with AWS Storage Gateway for S3 access, keep metadata in Amazon RDS, and depend on gateway caching for frequently used assets
-
❏ C. Use AWS Batch to orchestrate editing jobs on Spot Instances, store working metadata in Amazon ElastiCache for Redis, and place outputs in Amazon S3 Intelligent-Tiering
-
❏ D. Run containerized workers on Amazon ECS with AWS Fargate, keep job metadata in Amazon DynamoDB, and write completed files to Amazon S3 Standard-IA
Question 38
A single instance EC2 application must remain available after an Availability Zone failure while keeping costs minimal. Which actions enable automatic cross Availability Zone recovery? (Choose 3)
-
❏ A. Use an Application Load Balancer in front of the instance
-
❏ B. Allocate an Elastic IP and associate it at boot using user data
-
❏ C. Create an Auto Scaling group across two AZs with min=1, max=1, desired=1
-
❏ D. Enable EC2 Auto Recovery with a CloudWatch alarm
-
❏ E. Attach an instance role permitting AssociateAddress and DescribeAddresses so user data manages the EIP
-
❏ F. AWS Global Accelerator
Question 39
A solutions architect at Northstar Outfitters is deploying an application on Amazon EC2 inside a VPC. The application saves product images in Amazon S3 and stores customer profiles in a DynamoDB table named CustomerProfiles. The security team requires that connectivity from the EC2 subnets to these AWS services stays on the AWS network and does not traverse the public internet. What should the architect implement to meet this requirement?
-
❏ A. Configure interface VPC endpoints for Amazon S3 and Amazon DynamoDB
-
❏ B. Deploy a NAT gateway in a public subnet and update private route tables
-
❏ C. Set up gateway VPC endpoints for Amazon S3 and Amazon DynamoDB
-
❏ D. AWS Direct Connect
Question 40
Static assets are stored in an S3 bucket and served through CloudFront and they must be accessible only from specified corporate IP ranges; what actions will enforce the IP allow list and prevent direct access to the S3 bucket? (Choose 2)
-
❏ A. S3 bucket policy with aws:SourceIp allowing corporate CIDRs
-
❏ B. CloudFront origin access identity and S3 bucket policy limited to that OAI
-
❏ C. Apply AWS WAF to the S3 bucket
-
❏ D. AWS WAF web ACL with IP allow list on CloudFront
-
❏ E. CloudFront signed URLs for users

Question 41
Riverton Robotics is evaluating how to initialize Amazon EC2 instances for a pilot rollout and wants to test the instance user data capability to bootstrap software and configuration. Which statements accurately describe the default behavior and mutability of EC2 user data? (Choose 2)
-
❏ A. You can edit user data from inside the instance using the Instance Metadata Service
-
❏ B. User data runs automatically only on the first boot after the instance is launched
-
❏ C. You can change an instance’s user data while it is running if you use root credentials
-
❏ D. User data scripts execute with root privileges by default
-
❏ E. User data is processed on every reboot of an EC2 instance by default
Question 42
Which design ensures highly available internet egress from private subnets in two Availability Zones?
-
❏ A. Create one NAT gateway in a public subnet
-
❏ B. Two NAT gateways in public subnets, one per AZ
-
❏ C. Two NAT gateways placed in private subnets
-
❏ D. Gateway VPC endpoint
Question 43
SkyTrail Logistics runs multiple Amazon EC2 instances in private subnets across three Availability Zones within a single VPC, and these instances must call Amazon DynamoDB APIs without sending traffic over the public internet. What should the solutions architect do to keep the traffic on the AWS network path? (Choose 2)
-
❏ A. Set up VPC peering from the VPC to the DynamoDB service
-
❏ B. Configure a DynamoDB gateway VPC endpoint in the VPC
-
❏ C. Send the traffic through a NAT gateway in a public subnet
-
❏ D. Add routes in the private subnet route tables that point to the DynamoDB endpoint
-
❏ E. Create interface VPC endpoints for DynamoDB in each private subnet using AWS PrivateLink
Question 44
How should a stateless web tier running behind an Application Load Balancer with Auto Scaling across three Availability Zones be configured to remain highly available and minimize steady state cost while handling daily traffic spikes? (Choose 2)
-
❏ A. Use On-Demand Instances only
-
❏ B. Set Auto Scaling minimum to 2 instances
-
❏ C. Buy Reserved Instances for the steady baseline
-
❏ D. Set Auto Scaling minimum to 4 instances
-
❏ E. Use Spot Instances for baseline capacity
Question 45
An online video-sharing startup needs to answer relationship-heavy questions such as “How many likes are on clips uploaded by the friends of user Mia over the last 72 hours?” The data model includes users, friendships, videos, and reactions, and the team expects frequent multi-hop traversals with low-latency aggregations across connected entities. Which AWS database service is the best fit for this requirement?
-
❏ A. Amazon Redshift
-
❏ B. Amazon Neptune
-
❏ C. Amazon OpenSearch Service
-
❏ D. Amazon Aurora
Question 46
Which actions allow authenticated Amazon Cognito users to upload directly to Amazon S3 using temporary credentials while ensuring the traffic remains on the AWS network? (Choose 2)
-
❏ A. Route S3 traffic through a NAT gateway
-
❏ B. Configure a Cognito identity pool to exchange user pool logins for temporary IAM credentials to S3
-
❏ C. Create a VPC endpoint for Amazon S3
-
❏ D. Require Cognito user pool tokens in the S3 bucket policy
-
❏ E. Call STS AssumeRoleWithWebIdentity directly with Cognito user pool tokens
Question 47
A technical publisher stores about 18 TB of training videos and PDFs in a single Amazon S3 bucket in one AWS Region. A partner company in a different Region has cross-account read access to pull the content into its own platform. The publisher wants to keep its own data transfer charges as low as possible when the partner downloads the objects. What should a solutions architect recommend?
-
❏ A. Set up S3 Cross-Region Replication to the partner’s S3 bucket
-
❏ B. Enable Requester Pays on the publisher’s S3 bucket
-
❏ C. Turn on S3 Transfer Acceleration for the bucket
-
❏ D. Serve the files through Amazon CloudFront with the S3 bucket as the origin
Question 48
Which architecture provides low-cost, highly available, on-demand image transformations for objects stored in Amazon S3 that are requested through API Gateway and delivered to internet clients?
-
❏ A. EC2 Auto Scaling with an Application Load Balancer, S3 for originals and derivatives, CloudFront over S3
-
❏ B. API Gateway and Lambda returning images directly to clients without S3 or CloudFront
-
❏ C. API Gateway + AWS Lambda for transforms, store originals and outputs in S3, deliver via CloudFront with S3 origin
-
❏ D. EC2 for processing, S3 for sources, DynamoDB for transformed images, CloudFront on S3
Question 49
A ride-sharing platform runs an Auto Scaling group of Amazon EC2 instances across two Availability Zones in eu-west-2, with an Application Load Balancer distributing all traffic to the group. During a staging exercise, the team manually terminated three instances in eu-west-2a, leaving capacity uneven across zones. Later, the load balancer health check flagged an instance in eu-west-2b as unhealthy. What outcomes should you expect from Amazon EC2 Auto Scaling in response to these events? (Choose 2)
-
❏ A. When the ALB reports an instance as unhealthy, Amazon EC2 Auto Scaling first launches a replacement instance and then later terminates the unhealthy one
-
❏ B. For Availability Zone imbalance, Amazon EC2 Auto Scaling rebalances by launching instances in the under-provisioned zone first and only then terminates excess capacity
-
❏ C. Instance Refresh
-
❏ D. When an instance is marked unhealthy by the load balancer, Amazon EC2 Auto Scaling records a scaling activity to terminate it and after termination starts a new instance to maintain desired capacity
-
❏ E. For Availability Zone rebalancing, Amazon EC2 Auto Scaling terminates old instances before launching new ones so that no extra instances are created

Question 50
How can you ensure that objects in an Amazon S3 bucket are accessible only through a CloudFront distribution and cannot be retrieved directly via S3 URLs?
-
❏ A. Enable S3 Block Public Access only
-
❏ B. Attach an IAM role to CloudFront and allow it in the S3 bucket policy
-
❏ C. Use a CloudFront origin access identity with an S3 bucket policy allowing it
-
❏ D. Keep S3 public and use CloudFront signed URLs
Question 51
At LumaRide, a fleet telemetry processor runs on Amazon EC2 Linux instances in multiple Availability Zones. The application writes log objects using standard HTTP API calls and must keep these logs for at least 10 years while supporting concurrent access by many instances. Which AWS storage option most cost-effectively meets these requirements?
-
❏ A. Amazon EBS
-
❏ B. Amazon EFS
-
❏ C. Amazon S3
-
❏ D. Amazon EC2 instance store
Question 52
Which AWS feature provides a centralized, repeatable way to deploy standardized infrastructure across multiple AWS accounts and Regions?
-
❏ A. AWS Organizations SCPs
-
❏ B. CloudFormation StackSets
-
❏ C. AWS Resource Access Manager
-
❏ D. AWS CloudFormation stacks
Question 53
BluePeak Institute runs a nightly Python job that typically completes in about 45 minutes. The task is stateless and safe to retry, so if it gets interrupted the team simply restarts it from the beginning. It currently executes in a colocation data center, and they want to move it to AWS while minimizing compute spend. What is the most cost-effective way to run this workload?
-
❏ A. AWS Lambda
-
❏ B. Amazon EMR
-
❏ C. EC2 Spot Instance with a persistent request
-
❏ D. Application Load Balancer
Question 54
How can an EC2 hosted service be privately accessed by other VPCs and AWS accounts in the same Region without exposing any other resources in the hosting VPC and while requiring minimal management? (Choose 2)
-
❏ A. VPC peering
-
❏ B. AWS PrivateLink service
-
❏ C. Network Load Balancer in service VPC
-
❏ D. AWS Transit Gateway
-
❏ E. AWS Global Accelerator
Question 55
A Canadian startup operates an online design portfolio platform hosted on multiple Amazon EC2 instances behind an Application Load Balancer. The site currently has users in four countries, but a new compliance mandate requires the application to be reachable only from Canada and to deny requests from all other countries. What should the team configure to meet this requirement?
-
❏ A. Update the security group associated with the Application Load Balancer to allow only the approved country
-
❏ B. Attach an AWS WAF web ACL with a geo match rule to the Application Load Balancer to permit only Canada
-
❏ C. Use Amazon Route 53 geolocation routing to return responses only to Canadian users
-
❏ D. Enable Amazon CloudFront geo restriction on a distribution in an Amazon VPC
Question 56
In Elastic Beanstalk, software installation takes over 45 minutes, yet new instances must be ready to serve in under 60 seconds. The environment has static components that are identical across instances, while dynamic assets are unique to each instance. Which combination of actions will satisfy these constraints? (Choose 2)
-
❏ A. AWS CodeDeploy
-
❏ B. Run dynamic setup in EC2 user data at first boot
-
❏ C. Store installers in Amazon S3
-
❏ D. Enable Elastic Beanstalk rolling updates
-
❏ E. Prebake static components into a custom AMI
Question 57
An oceanography institute runs an image-processing workflow on AWS. Field researchers upload raw photos for processing, which are staged on an Amazon EBS volume attached to an Amazon EC2 instance. Each night at 01:00 UTC, the job writes the processed images to an Amazon S3 bucket for archival. The architect has determined that the S3 uploads are traversing the public internet. The institute requires that all traffic from the EC2 instance to Amazon S3 stay on the AWS private network and not use the public internet. What should the architect do?
-
❏ A. Deploy a NAT gateway and update the private subnet route so the instance egresses through the NAT gateway to S3
-
❏ B. Create a gateway VPC endpoint for Amazon S3 and add the S3 prefix list route to the instance subnet route table
-
❏ C. Configure an S3 Access Point and have the application upload via the access point alias
-
❏ D. Set up VPC peering to Amazon S3 and update routes to use the peering connection
Question 58
In Amazon EKS how can you assign pod IP addresses from four specific private subnets spanning two Availability Zones while ensuring pods retain private connectivity to VPC resources?
-
❏ A. Kubernetes network policies
-
❏ B. AWS PrivateLink
-
❏ C. Security groups for pods
-
❏ D. Amazon VPC CNI with custom pod networking
Question 59
A nonprofit research lab runs several Amazon EC2 instances, a couple of Amazon RDS databases, and stores data in Amazon S3. After 18 months of operations, their monthly AWS spend is higher than expected for their workloads. Which approach would most appropriately reduce costs across their compute and storage environment?
-
❏ A. Use Amazon S3 Storage Class Analysis to recommend transitions directly to S3 Glacier classes and create Lifecycle rules to move data automatically
-
❏ B. Use AWS Trusted Advisor to auto-renew expiring Amazon EC2 Reserved Instances and to flag idle Amazon RDS databases
-
❏ C. Use AWS Cost Optimization Hub with AWS Compute Optimizer to surface idle or underutilized resources and rightsize Amazon EC2 instance types
-
❏ D. Use AWS Cost Explorer to automatically purchase Savings Plans based on the last 7 days of usage
Question 60
Which approach will quickly create an isolated 25 TB test copy of EBS block data in the same Region that provides immediate high I/O performance and ensures changes do not affect production?
-
❏ A. Attach production volumes with EBS Multi-Attach to test instances
-
❏ B. AWS DataSync to copy data to new EBS volumes
-
❏ C. EBS snapshots with Fast Snapshot Restore; create new test volumes
-
❏ D. Create volumes from EBS snapshots without Fast Snapshot Restore
Question 61
Rivertown Telecom is migrating analytics to AWS. It stores about 24 months of call center recordings, with roughly 9,000 new MP3 files added each day, and the team needs an automated serverless method to turn speech into text and run ad hoc SQL to gauge customer sentiment and trends. Which approach should the architect choose?
-
❏ A. Use Amazon Kinesis Data Streams to ingest audio and Amazon Alexa to create transcripts, analyze with Amazon Kinesis Data Analytics, and visualize in Amazon QuickSight
-
❏ B. Use Amazon Transcribe to generate text from the recordings stored in Amazon S3 and query the transcripts with Amazon Athena using SQL
-
❏ C. Use Amazon Kinesis Data Streams to read audio files and build custom machine learning models to transcribe and perform sentiment scoring
-
❏ D. Use Amazon Transcribe to create text files and rely on Amazon QuickSight to perform SQL analysis and reporting
Question 62
An EC2 instance in a VPC has a security group permitting TCP port 9443 and the subnet network ACL permits inbound traffic on port 9443, but clients still cannot connect. What change to the subnet NACL will enable connectivity?
-
❏ A. Allow outbound 9443 on the subnet NACL
-
❏ B. Allow outbound ephemeral ports on the subnet NACL; keep inbound 9443
-
❏ C. Add outbound ephemeral ports in the security group
-
❏ D. Open inbound ephemeral ports on the subnet NACL
Question 63
Aurora Metrics, a retail analytics startup, needs resilient administrator access into private subnets within a VPC. The team wants a bastion host pattern that remains available across multiple Availability Zones and scales automatically as engineers connect over SSH. Which architecture should be implemented to meet these requirements?
-
❏ A. AWS Client VPN
-
❏ B. Create a public Application Load Balancer that links to Amazon EC2 instances that are bastion hosts managed by an Auto Scaling group
-
❏ C. Place a public Network Load Balancer in two Availability Zones targeting bastion Amazon EC2 instances in an Auto Scaling group
-
❏ D. Allocate a single Elastic IP and attach it to all bastion instances in an Auto Scaling group
Question 64
What method allows an EC2 instance to run an initialization script only on its first boot while requiring minimal management?
-
❏ A. AWS CodeDeploy
-
❏ B. Customize cloud-init to limit execution to first boot
-
❏ C. EC2 user data script at launch
-
❏ D. AWS Systems Manager Run Command
Question 65
A media analytics startup runs a microservices application on Amazon EKS with Amazon EC2 worker nodes. One group of Pods hosts an operations console that reads and writes tracking items in Amazon DynamoDB, and another group runs a reporting component that archives large output files to Amazon S3. The security team mandates that the console Pods can interact only with DynamoDB and the reporting Pods can interact only with S3, with access controlled through AWS Identity and Access Management. How should the team enforce these Pod-level permissions?
-
❏ A. Attach S3 and DynamoDB permissions to the EC2 node instance profile and use Kubernetes namespaces to limit which Pods can call each service
-
❏ B. Use Kubernetes RBAC to control Pod access to AWS services and put long‑lived IAM credentials in ConfigMaps consumed by the Pods
-
❏ C. Create two IAM roles with least-privilege policies for DynamoDB and S3 and bind them to separate Kubernetes service accounts via IRSA so console Pods assume the DynamoDB role and reporting Pods assume the S3 role
-
❏ D. Store IAM user access keys for S3 and DynamoDB in AWS Secrets Manager and mount them into the respective Pods
Half and Half Practice Exam Answers
AWS Solution Architect Exam Dump Answers

These questions all came from my Solutions Architect Udemy course and the certificationexams.pro certification site.
Question 1
A retail analytics startup, Scrumtuous Market Insights, stores quarterly sales KPIs in an Amazon DynamoDB table named SalesMetrics. The team is building a lightweight web dashboard to present this data and wants to use fully managed components with the least possible operational overhead. Which architectures would meet these goals while minimizing operational effort? (Choose 2)
-
✓ B. An Amazon API Gateway REST API integrates directly with the DynamoDB table to read the sales metrics
-
✓ D. An Amazon API Gateway REST API triggers an AWS Lambda function that retrieves items from the DynamoDB table
The goal is to build a web-facing interface over DynamoDB using managed services with minimal administration. Both An Amazon API Gateway REST API triggers an AWS Lambda function that retrieves items from the DynamoDB table and An Amazon API Gateway REST API integrates directly with the DynamoDB table to read the sales metrics are serverless patterns that avoid managing servers and provide low operational effort.
An Application Load Balancer sends traffic to a target group that lists the DynamoDB table as the backend target is not possible because DynamoDB is not a valid ALB target type.
An Application Load Balancer routes traffic to a target group of Amazon EC2 instances that query the DynamoDB table would work functionally but introduces instance lifecycle, patching, scaling, and monitoring overhead that conflicts with the requirement to keep operations minimal.
An Amazon Route 53 hosted zone routes requests directly to an AWS Lambda endpoint to run code that reads the DynamoDB table is invalid because Route 53 is a DNS service and does not directly route to Lambda; you would need an HTTP front end such as API Gateway or CloudFront.
Between the two correct choices, API Gateway to Lambda offers flexible logic and security controls, while API Gateway direct service integration to DynamoDB can remove even the Lambda layer for a fully managed, low-latency path.
Cameron’s Exam Tip
When you see serverless web APIs over DynamoDB with minimal ops, think API Gateway + Lambda or API Gateway direct integration; remember ALB cannot target DynamoDB and Route 53 is only DNS.
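For the Lambda-backed choice, a minimal sketch of the function behind the REST API might look like the following, assuming boto3, a proxy-style handler, and a simple Scan; only the SalesMetrics table name comes from the scenario.
```python
import json
import boto3

# Table name comes from the scenario; everything else here is illustrative.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SalesMetrics")

def lambda_handler(event, context):
    # A Scan is acceptable for a small KPI table; prefer Query with a key
    # condition expression once the table grows.
    response = table.scan(Limit=50)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(response.get("Items", []), default=str),
    }
```
API Gateway invokes the handler through a Lambda proxy integration, so no servers or containers need to be operated for the dashboard backend.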
Question 2
Which AWS service provides low-latency on-premises NFS and SMB file access to objects in Amazon S3 with local caching and minimal cost?
-
✓ B. AWS Storage Gateway – S3 File Gateway
AWS Storage Gateway – S3 File Gateway is correct because it exposes S3 as on-premises NFS/SMB file shares with a local cache, delivering low-latency access for hot data while keeping bulk data cost-efficiently in S3. This directly matches the requirement for file-based access, on-premises latency, and minimal cost.
Amazon FSx for Lustre is not correct because it is an in-cloud file system; while it can link to S3, on-premises clients would traverse the WAN and incur higher latency and costs compared to a local cache.
Amazon EFS is not correct because it is a managed NFS file system hosted in AWS, not backed by S3, and does not provide on-prem caching; it would still require network links and is not optimized for this pattern.
Mountpoint for Amazon S3 is not correct because it lacks a local cache and full POSIX semantics, so it cannot ensure low-latency, file-like access from on-premises environments.
Cameron’s Exam Tip
When you see on-premises file access to S3 with NFS/SMB and local caching, think S3 File Gateway. Services like EFS and FSx for Lustre are AWS-hosted file systems; they are not S3-backed on-prem caches.
Mountpoint for S3 offers convenience for some workloads but not low-latency on-prem caching.
Question 3
A digital media startup runs its subscription billing platform on AWS. The application uses an Amazon RDS for MySQL Multi-AZ DB cluster as the database tier. For regulatory reasons, the team must keep database backups for 45 days. Engineers take both automated RDS backups and occasional manual snapshots for point-in-time needs. The company wants to enforce a 45-day retention policy for all backups while preserving any automated and manual backups that were created within the last 45 days. The approach should minimize cost and operational work. Which solution meets these goals most cost effectively?
-
✓ B. Set the RDS automated backup retention to 45 days and schedule a simple script to delete manual snapshots older than 45 days
Set the RDS automated backup retention to 45 days and schedule a simple script to delete manual snapshots older than 45 days is the most cost-effective and lowest effort solution. RDS can natively enforce retention for automated backups up to the required window, but it does not manage the lifecycle of manual snapshots, which a small scheduled script can safely prune.
Disable RDS automated backups and use AWS Backup daily backup plans with a 45-day retention policy is inferior because it removes native point-in-time recovery and replaces a built-in feature with a costlier, more operationally heavy approach.
Export RDS snapshots to Amazon S3 and rely on S3 Lifecycle rules to delete objects after 45 days adds unnecessary complexity and storage costs and does not control the retention of RDS automated backups themselves.
Use AWS Backup to enforce a 45-day policy on automated backups and invoke AWS Lambda to remove manual snapshots older than 45 days is not optimal because automated RDS backups already support retention without AWS Backup, so this adds extra services and expense for no gain.
Cameron’s Exam Tip
Remember that RDS automated backup retention applies only to automated backups; manual snapshots are never auto-deleted and must be managed separately, often with a simple scheduled script for cost-effective compliance.
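A minimal sketch of the snapshot-cleanup script mentioned above, assuming boto3 and that the manual backups are Multi-AZ DB cluster snapshots; scheduling it (for example with Amazon EventBridge and a small compute target) is left out.
```python
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds")
cutoff = datetime.now(timezone.utc) - timedelta(days=45)

# Iterate over manual cluster snapshots and prune anything older than 45 days.
paginator = rds.get_paginator("describe_db_cluster_snapshots")
for page in paginator.paginate(SnapshotType="manual"):
    for snapshot in page["DBClusterSnapshots"]:
        created = snapshot.get("SnapshotCreateTime")
        if created and created < cutoff:
            rds.delete_db_cluster_snapshot(
                DBClusterSnapshotIdentifier=snapshot["DBClusterSnapshotIdentifier"]
            )
```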
Question 4
How should a company migrate approximately 250 TB of on-premises files to Amazon S3 over an existing 20 Gbps Direct Connect connection while keeping the traffic private, automating recurring synchronizations, and using an accelerated managed service?
-
✓ D. AWS DataSync via VPC interface endpoint over Direct Connect
AWS DataSync via VPC interface endpoint over Direct Connect is the right choice because it keeps data transfers private on the existing Direct Connect using an interface VPC endpoint, automates recurring synchronization via tasks and scheduling, and uses DataSync’s optimized protocol to accelerate movement into Amazon S3. This directly satisfies the requirements for privacy, automation, and performance with a fully managed service.
The option AWS Storage Gateway file gateway is not ideal for large-scale migration acceleration or managed replication workflows and is primarily for providing SMB/NFS access backed by S3.
The option AWS DataSync with public endpoint would send traffic over the public internet, violating the requirement to keep transfers private over Direct Connect.
The option AWS Snowball Edge is designed for offline bulk transfers and does not support ongoing, automated sync using the existing Direct Connect link.
Cameron’s Exam Tip
When you see keywords like private over Direct Connect, automated recurring sync, and accelerated managed service, think of DataSync with an interface VPC endpoint (AWS PrivateLink). Be careful not to confuse Storage Gateway (hybrid access) or Snowball (offline transfer) with DataSync’s network-optimized migrations. If the question mentions public endpoints or internet paths, that usually disqualifies the solution when privacy over Direct Connect is required.
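As a rough illustration of the automation piece, a scheduled DataSync task might be created as shown below; the location ARNs are placeholders, and the private path comes from activating the DataSync agent against the interface VPC endpoint reachable over Direct Connect.
```python
import boto3

datasync = boto3.client("datasync")

# Location ARNs are placeholders for an on-premises NFS/SMB location (served by
# a DataSync agent activated against the VPC interface endpoint) and an S3 location.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:eu-west-1:111122223333:location/loc-onprem",
    DestinationLocationArn="arn:aws:datasync:eu-west-1:111122223333:location/loc-s3",
    Name="nightly-sync-to-s3",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},  # recurring synchronization
)

# Kick off an initial run once the task reports AVAILABLE, in addition to the schedule.
datasync.start_task_execution(TaskArn=task["TaskArn"])
```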
Question 5
Orion Couriers runs a legacy report collector on a single Amazon EC2 instance in a public subnet. The application gathers scanned PDF delivery slips, writes them to an attached Amazon EBS volume, and at 01:00 UTC each night pushes the accumulated files to an Amazon S3 archive bucket. A solutions architect observes that the instance is using the public S3 endpoint over the internet for uploads. The company wants the data transfers to stay on the AWS private network and avoid the public endpoint entirely. What should the architect implement?
-
✓ C. Deploy a gateway VPC endpoint for Amazon S3 and update the subnet route table to use it; restrict access with a bucket or IAM policy tied to the endpoint
The correct solution is to use a private path from the VPC to Amazon S3. Deploy a gateway VPC endpoint for Amazon S3 and update the subnet route table to use it; restrict access with a bucket or IAM policy tied to the endpoint ensures that the EC2 instance reaches S3 over the AWS backbone instead of the internet. With appropriate route table targets and endpoint policies, the uploads stay private and meet the security objective without changing the application flow.
Create an S3 access point in the same Region, grant the instance role access, and update the application to use the access point alias is incorrect because access points control access but do not alter the network path, so without a VPC endpoint the traffic still uses S3 public endpoints.
Enable S3 Transfer Acceleration on the bucket and update the application to use the acceleration endpoint is incorrect since Transfer Acceleration leverages edge public endpoints and does not provide private VPC connectivity.
Order an AWS Direct Connect dedicated connection and route VPC traffic to Amazon S3 over it is unnecessary and costly for this use case, as Direct Connect targets on-premises-to-AWS private connectivity rather than intra-VPC EC2-to-S3 access.
Cameron’s Exam Tip
When a VPC resource must access Amazon S3 privately, think gateway VPC endpoint for S3. Access points handle authorization, Transfer Acceleration optimizes public paths, and Direct Connect is for on-premises private links, not EC2-to-S3 within a VPC.
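A minimal sketch of creating the gateway endpoint with boto3; the VPC ID, route table ID, and Region-specific service name are placeholder assumptions, and a restrictive endpoint or bucket policy would be added on top.
```python
import boto3

ec2 = boto3.client("ec2")

# IDs are placeholders; use the route table associated with the instance's subnet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234567890def",
    ServiceName="com.amazonaws.us-east-1.s3",   # match the bucket's Region
    RouteTableIds=["rtb-0123456789abcdef0"],
)
# The route table now sends the S3 prefix list to the endpoint, so the nightly
# uploads stay on the AWS network without any application change.
```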
Question 6
Which scaling approach most effectively reduces cold-start latency during the morning traffic surge for an EC2 Auto Scaling group while keeping costs low?
-
✓ C. Use target tracking with a lower CPU target and shorter cooldown
Use target tracking with a lower CPU target and shorter cooldown is correct because target tracking automatically adjusts capacity to maintain a specified metric (for example, CPU at 40%), which makes the group scale out earlier as the morning load ramps up. Reducing the cooldown shortens the wait between scaling actions, further improving responsiveness while still allowing scale-in when demand subsides, keeping costs low.
The option Schedule desired capacity to 28 just before business hours will likely fix the slowdown but at a higher cost, since it pre-provisions capacity regardless of the actual early-morning load and any day-to-day variations.
The option Enable a warm pool with running instances for the morning surge reduces cold-start latency but maintains running instances in the warm pool, increasing spend and conflicting with the cost constraint; warm pools can be cost-effective only when using stopped instances, which still requires careful tuning.
The option Use step scaling with reduced CPU thresholds and a short cooldown can work but is harder to tune and less adaptive; target tracking is the recommended default policy for most workloads because it continuously seeks a target metric without manual threshold bands.
Cameron’s Exam Tip
Prefer target tracking as the default scaling policy for dynamic workloads. Use scheduled scaling when you know exact time windows and can tolerate over-provisioning. Consider warm pools only if you must minimize initialization time and accept additional cost, ideally with stopped instances for savings. Shorten cooldown or use instance warm-up to make scaling more responsive, but avoid excessive flapping by setting reasonable targets.
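Such a target tracking policy could be attached roughly as follows; the group name, 40 percent target, and warm-up value are illustrative assumptions.
```python
import boto3

autoscaling = boto3.client("autoscaling")

# Group name, target value, and warm-up are illustrative.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-40-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,
    },
    EstimatedInstanceWarmup=120,  # react faster than a long default cooldown
)
```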
Question 7
A retail analytics startup moved several cron-style workloads to Amazon EC2 instances running Amazon Linux. Each job runs for about 80 minutes, and different teams wrote them in various programming languages. All jobs currently execute on a single server, which creates throughput bottlenecks and limited scalability. The team wants to run these tasks in parallel across instances while keeping operations simple and avoiding significant rework. What approach will meet these needs with the least operational overhead?
-
✓ B. Create an Amazon Machine Image from the existing EC2 host and use an Auto Scaling group to launch multiple identical instances concurrently
The least operational overhead comes from scaling what already works. Create an Amazon Machine Image from the existing EC2 host and use an Auto Scaling group to launch multiple identical instances concurrently preserves the current heterogeneous environment, avoids refactoring, and enables parallel execution across Availability Zones with simple, managed scaling.
Run the tasks as jobs in AWS Batch and trigger them on a schedule with Amazon EventBridge is powerful for batch processing, but it typically requires containerizing or packaging each task, defining job queues and compute environments, and managing additional constructs, which adds setup and operational complexity compared to cloning the instance.
Rewrite each task as an AWS Lambda function and schedule invocations with Amazon EventBridge is not viable because Lambda has a hard 15-minute maximum duration, far below the approximately 80-minute runtime of these jobs.
Containerize the workloads and use Amazon ECS on AWS Fargate with EventBridge scheduled tasks would offload server management but requires substantial containerization and orchestration work, which is more effort than using an AMI and Auto Scaling group for immediate scale-out.
Cameron’s Exam Tip
When jobs exceed Lambda’s time limits and you need rapid scale-out with minimal change, cloning the current environment with an AMI and using an Auto Scaling group can be the simplest path to parallelism and high availability.
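Under placeholder IDs and an assumed instance type, the clone-and-scale approach might look roughly like this sketch:
```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# 1. Bake an AMI from the existing job server (the call is asynchronous, so wait
#    for the image to become available before launching from it).
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="cron-jobs-ami")

# 2. Reference the AMI from a launch template.
ec2.create_launch_template(
    LaunchTemplateName="cron-jobs-lt",
    LaunchTemplateData={"ImageId": image["ImageId"], "InstanceType": "m5.large"},
)

# 3. Run several identical hosts in parallel across Availability Zones.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="cron-jobs-asg",
    LaunchTemplate={"LaunchTemplateName": "cron-jobs-lt", "Version": "$Latest"},
    MinSize=0,
    MaxSize=6,
    DesiredCapacity=3,
    VPCZoneIdentifier="subnet-0aaa111122223333a,subnet-0bbb444455556666b",
)
```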
Question 8
An EC2-hosted microservice behind an Application Load Balancer has one API route that takes about four minutes to complete while other routes finish in about 200 milliseconds. What should be implemented to decouple the long-running work and avoid blocking requests?
-
✓ C. Amazon SQS with asynchronous processing
The best solution is to use Amazon SQS with asynchronous processing. Placing long-running tasks onto a queue decouples them from the synchronous API path, allowing the service to respond quickly (for example, 202 Accepted with a job ID) while separate consumers scale to process jobs. This removes head-of-line blocking, reduces connection/thread exhaustion, and isolates spikes through queue-based load leveling. You can add a DLQ, tune visibility timeouts, and horizontally scale workers as needed.
AWS Step Functions is powerful for orchestrating complex workflows, but it does not by itself turn a synchronous HTTP call into an asynchronous pattern; you still need a decoupling mechanism like SQS for simple job offloading.
Increase ALB idle timeout only keeps connections open longer and does not reduce contention or move the heavy work off the request path.
Amazon SNS is a pub/sub service, not a work-queue with competing consumers and visibility timeouts, so it is not ideal for back-pressure and work distribution for long-running jobs.
Cameron’s Exam Tip
When you see long-running HTTP handlers slowing an API, look for asynchronous decoupling, queue-based load leveling, and background workers. SQS is the go-to service for work queues; return quickly with a job token, process out-of-band, use DLQs, and scale consumers independently.
Question 9
A sports analytics firm maintains an AWS Direct Connect link to AWS and has moved its enterprise data warehouse into AWS. Data analysts use a business intelligence dashboard to run queries. The average result set returned per query is 80 megabytes, and the dashboard does not cache responses. Each rendered dashboard page is approximately 350 kilobytes. Which approach will deliver the lowest data transfer egress cost for the company?
-
✓ B. Host the BI tool in the same AWS Region as the data warehouse and let users access it through the existing Direct Connect from the corporate network
The most cost-efficient pattern is to keep the heavy data movement inside AWS and send only lightweight content to on-premises users. Host the BI tool in the same AWS Region as the data warehouse and let users access it through the existing Direct Connect from the corporate network ensures that the 80 MB query results stay within AWS while only the ~350 KB web pages traverse the link. Direct Connect typically offers lower Data Transfer Out rates than internet egress, so this combination minimizes total egress charges.
Host the BI tool on-premises and fetch results from the AWS data warehouse over the public internet in the same AWS Region is costlier because every 80 MB response leaves AWS over internet egress, which has higher per-GB pricing.
Host the BI tool on-premises and fetch results from the AWS data warehouse across the Direct Connect link in the same AWS Region reduces the per-GB price versus internet but still sends ~80 MB per query out of AWS, which costs more than sending only the small pages.
Host the BI tool in the same AWS Region as the data warehouse and let users access it via an AWS Site-to-Site VPN limits egress to the ~350 KB pages, but that traffic leaves AWS at internet/VPN data transfer rates; Direct Connect is typically cheaper per GB, so this is not the lowest-cost option.
Cameron’s Exam Tip
To minimize egress costs, keep compute close to the data so large transfers remain within AWS, and send only small artifacts to users; when connecting on-premises, prefer Direct Connect over internet/VPN for lower Data Transfer Out rates.
Question 10
Which AWS relational database option provides cross Region disaster recovery with approximately a three second recovery point objective and a thirty second recovery time objective?
-
✓ B. Aurora Global Database
The best choice is Aurora Global Database. It provides storage-level, low-latency replication across Regions with typical lag under a second and fast regional failover, so achieving an RPO of about 3 seconds and an RTO near 30 seconds is realistic. This is the managed, purpose-built approach for multi-Region DR with aggressive objectives for relational workloads.
Amazon RDS Multi-AZ is limited to a single Region and cannot satisfy cross-Region DR requirements or the stated RPO/RTO.
AWS Elastic Disaster Recovery focuses on server-level replication and recovery; it is not optimized for managed RDS relational engines to guarantee sub-minute RTO and second-level RPO across Regions.
Amazon RDS cross-Region read replica uses asynchronous replication and requires manual promotion and reconfiguration, which commonly results in higher RPO and RTO than the targets here.
Cameron’s Exam Tip
When you see cross-Region DR for a relational database with very low RPO (seconds) and fast RTO (sub-minute), map it to Global Database for Aurora. Distinguish Multi-AZ (in-Region HA) from multi-Region solutions, and watch for keywords like storage-level replication and rapid regional failover.
Question 11
Orion Retail processes shopper photos uploaded to an Amazon S3 bucket and writes summarized demographic attributes as CSV files to a second bucket every 30 minutes. Security requires encryption of all files at rest, and analysts need to run standard SQL against the dataset without managing servers. What should the solutions architect implement to meet these requirements?
-
✓ B. Configure S3 SSE-KMS and run SQL queries with Amazon Athena
Configure S3 SSE-KMS and run SQL queries with Amazon Athena is the best fit. SSE-KMS ensures the CSV data in S3 is encrypted at rest with AWS KMS-managed keys, and Athena provides a fully serverless, pay-per-query SQL engine over data stored in S3. Athena also supports encrypting query results and integrates with the AWS Glue Data Catalog for schema management.
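As a minimal sketch, this is how a query could be submitted with boto3; the Glue database name, results bucket, and KMS key ARN are placeholders, and SSE-KMS is applied to the query results as well.

```python
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT age_band, COUNT(*) AS shoppers FROM visits GROUP BY age_band",
    QueryExecutionContext={"Database": "demographics"},  # assumed Glue database
    ResultConfiguration={
        "OutputLocation": "s3://example-athena-results/",  # placeholder results bucket
        "EncryptionConfiguration": {
            "EncryptionOption": "SSE_KMS",
            "KmsKey": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",  # placeholder
        },
    },
)
print(response["QueryExecutionId"])  # poll get_query_execution until the query succeeds
```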
Encrypt the S3 buckets with AWS KMS keys and use Amazon Managed Service for Apache Flink to analyze the files is not suitable because Flink targets real-time stream processing, not ad hoc SQL queries over data at rest in S3.
Enable S3 server-side encryption and query the data with Amazon Redshift Spectrum introduces a Redshift data warehouse dependency and management overhead; while Spectrum can query S3, this is not the simplest fully serverless approach for direct SQL on CSV files in S3.
Use S3 server-side encryption and load the CSVs into Amazon Aurora Serverless for SQL queries requires loading data into a database and maintaining schemas, which adds data movement and operational complexity when a serverless S3-native query service is available.
Cameron’s Exam Tip
For ad hoc SQL over data stored in S3 with minimal ops, think Athena + SSE-KMS. Beware of options that require standing up clusters or moving data into databases when the requirement is a fully serverless data lake pattern.
Question 12
Which AWS service uses global anycast, supports UDP traffic, enables rapid cross-Region failover, and permits the continued use of an external DNS provider?
-
✓ C. AWS Global Accelerator
The correct choice is AWS Global Accelerator. It provides globally anycast static IP addresses that front Regional endpoints, supports both TCP and UDP, and performs health-based traffic steering and rapid cross-Region failover on the AWS global network. Because clients connect to the accelerator’s static IPs, you can keep an external DNS provider and simply point records at those IPs, avoiding DNS-caching delays during failover.
The option Amazon CloudFront is not suitable because it is focused on HTTP/HTTPS content delivery and origin failover and does not proxy arbitrary UDP traffic.
The option Amazon Route 53 offers DNS-based failover, but it does not provide a global data-plane proxy; failover timing can be impacted by resolver caching and TTLs, and it does not accelerate UDP traffic.
The option Amazon Route 53 Application Recovery Controller helps orchestrate and govern failover at the control plane via routing controls and readiness checks, but it still relies on DNS and does not supply anycast IPs or UDP acceleration.
Cameron’s Exam Tip
When you see UDP, global anycast static IPs, fast cross-Region failover, and the ability to keep external DNS, think Global Accelerator. For HTTP(S) content distribution, think CloudFront. For pure DNS-based failover and health checks, think Route 53. For failover orchestration and readiness gates at the DNS/control layer, think Route 53 ARC.
Question 13
A sports analytics startup anticipates a massive traffic spike for the kickoff of an interactive live match tracker. Their stack runs on AWS with application servers on Amazon EC2 and a transactional backend on Amazon RDS. The operations team needs proactive visibility into performance during the event with metric updates at intervals of 90 seconds or less, and they prefer something that can be enabled quickly with minimal maintenance. What should they implement?
-
✓ C. Enable EC2 Detailed Monitoring and use Amazon CloudWatch to view 1-minute instance metrics during the launch window
The best answer is Enable EC2 Detailed Monitoring and use Amazon CloudWatch to view 1-minute instance metrics during the launch window. EC2 basic monitoring publishes at 5-minute intervals, while Detailed Monitoring provides 1-minute metrics, which meet the requirement for updates at or below 90 seconds. It is fast to enable, integrates directly with CloudWatch dashboards and alarms, and carries minimal operational overhead.
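A quick sketch of enabling Detailed Monitoring and alarming on the resulting 1-minute metric; the instance ID and threshold are placeholders, and no alarm action is attached here.

```python
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

# Switch the instance from 5-minute basic monitoring to 1-minute detailed monitoring.
ec2.monitor_instances(InstanceIds=[INSTANCE_ID])

# Alarm on the 1-minute CPU metric during the launch window.
cloudwatch.put_metric_alarm(
    AlarmName="launch-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```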
Stream EC2 operating system logs to Amazon OpenSearch Service and visualize CPU and memory in OpenSearch Dashboards is not ideal because OpenSearch does not natively collect EC2 instance performance metrics; you would need to build and maintain agents, shippers, and parsers, adding unnecessary complexity.
Capture EC2 state change events with Amazon EventBridge, forward to Amazon SNS, and have a dashboard subscribe to view metrics focuses on lifecycle events such as start and stop rather than continuous metrics, and SNS is not a metrics visualization service, so it will not deliver real-time performance visibility.
Install the CloudWatch agent on all instances to publish high-resolution custom metrics and analyze them from CloudWatch Logs with Amazon Athena could work but requires agent rollout, configuration management, and log-based querying, which increases latency and operational effort compared to native 1-minute CloudWatch metrics.
Cameron’s Exam Tip
When you see a need for near-real-time EC2 instance metrics with low operational effort, think EC2 Detailed Monitoring for 1-minute CloudWatch metrics rather than deploying the CloudWatch agent unless custom metrics are explicitly required.
Question 14
Which AWS database option provides MySQL compatibility along with automatic compute scaling, built-in high availability, and minimal operational effort?
-
✓ C. Amazon Aurora MySQL Serverless v2
The best choice is Amazon Aurora MySQL Serverless v2 because it provides MySQL compatibility with fine-grained, on-demand compute scaling, built-in high availability across multiple AZs, fast failover, continuous backups, and minimal operational overhead. It elastically adjusts capacity without manual instance resizing, aligning directly with the need for improved performance, scalability, and durability with low management effort.
Amazon RDS for MySQL Multi-AZ with read replicas is managed and highly available, but instance class changes are still manual and read replicas do not increase write capacity. It lacks automatic compute scaling for the writer.
Amazon Aurora MySQL provisioned with read replica Auto Scaling offers strong performance and HA, and it can scale read replicas, but the writer’s compute is provisioned and requires manual resizing or blue/green, so it does not meet the automatic compute scaling requirement.
Single larger MySQL on EC2 increases operational burden, creates a single point of failure, and does not deliver managed HA, automated backups, or elastic capacity.
Cameron’s Exam Tip
When you see keywords like automatic capacity scaling, minimal operational effort, and MySQL compatibility, look for Aurora Serverless v2. Distinguish it from RDS MySQL or provisioned Aurora, which require manual instance scaling. Also note that read replica Auto Scaling only addresses read throughput, not writer compute.
Question 15
An international film distribution firm is moving its core workloads to AWS. The company has built an Amazon S3 data lake to receive and analyze content from external partners. Many partners can upload using S3 APIs, but several operate legacy tools that only support SFTP and refuse to change their process. The firm needs a fully managed AWS solution that gives partners an SFTP endpoint which writes directly to S3 and supports identity federation so internal teams can map each partner to specific S3 buckets or prefixes. Which combination of actions will best meet these needs with minimal ongoing maintenance? (Choose 2)
-
✓ B. Provision an AWS Transfer Family server with SFTP enabled that stores uploads in an S3 bucket and map each partner user to a dedicated IAM role scoped to that bucket or prefix
-
✓ D. Apply S3 bucket policies that grant IAM role–based, per-partner access and integrate AWS Transfer Family with Amazon Cognito or an external identity provider for federation
The scalable, low-operations design is to use Provision an AWS Transfer Family server with SFTP enabled that stores uploads in an S3 bucket and map each partner user to a dedicated IAM role scoped to that bucket or prefix together with Apply S3 bucket policies that grant IAM role–based, per-partner access and integrate AWS Transfer Family with Amazon Cognito or an external identity provider for federation.
AWS Transfer Family provides a fully managed SFTP endpoint that writes directly to S3, so partners keep their legacy SFTP clients without the company running servers. Mapping each user to an IAM role restricts access to the appropriate bucket or prefix, enforcing least privilege. Adding identity federation via Cognito, SAML, or a custom IdP allows centralized authentication, attribute-based role mapping, and fine-grained S3 permissions with bucket policies.
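A minimal sketch of the managed SFTP endpoint with a service-managed user scoped to one partner prefix; the role ARN, bucket, prefix, and key are placeholders, and a federated setup would plug in Cognito or another identity provider instead of create_user.

```python
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="PUBLIC",
    IdentityProviderType="SERVICE_MANAGED",
)

# Map one partner to an IAM role that only allows access to their prefix.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-acme",
    Role="arn:aws:iam::123456789012:role/partner-acme-upload",  # placeholder role
    HomeDirectory="/example-data-lake/partners/acme",           # placeholder bucket/prefix
    SshPublicKeyBody="ssh-rsa AAAA...",                         # partner's public key (placeholder)
)
```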
Use Amazon AppFlow to ingest files from legacy SFTP systems into S3 on an hourly schedule is not suitable because AppFlow integrates with SaaS applications via APIs and does not support SFTP endpoints.
Run a custom OpenSSH-based SFTP server on Amazon EC2 and use cron to copy received files into S3, with CloudWatch for monitoring violates the fully managed requirement and adds maintenance for patching, scaling, and high availability.
Set up AWS DataSync with an SFTP location to replicate partner files into S3 is misaligned because DataSync connects to existing SFTP servers but does not expose a managed SFTP service for partners to upload into.
Cameron’s Exam Tip
When you see requirements for SFTP uploads directly to S3 with fully managed operations and identity federation, think AWS Transfer Family for the protocol endpoint plus IdP integration and S3 IAM policies for least-privilege folder scoping.
Question 16
Which AWS architecture provides a scalable and highly available HTTPS endpoint to receive JSON events from devices, processes those events using serverless services, and stores the results durably?
-
✓ D. Amazon API Gateway to AWS Lambda to Amazon DynamoDB
Amazon API Gateway to AWS Lambda to Amazon DynamoDB is the best fit because API Gateway offers a managed, scalable HTTPS ingress with authentication, throttling, and reliability features, while Lambda provides serverless compute for event processing and DynamoDB supplies highly available, durable storage with near-infinite scale.
The option Amazon EC2 single instance writing to Amazon S3 is inappropriate due to a single point of failure and operational overhead for scaling and patching.
The option Amazon Route 53 to AWS Lambda to Amazon DynamoDB is invalid because Route 53 is only DNS and cannot route directly to a Lambda function.
The option Amazon EventBridge with rule to Amazon DynamoDB is not suitable since EventBridge is not a public HTTPS API frontend for devices; it requires AWS authentication and does not directly accept anonymous HTTPS requests.
Cameron’s Exam Tip
For public HTTPS ingestion of device or app events with minimal ops, think API Gateway + Lambda. For durable, low-latency key-value storage, think DynamoDB. Avoid designs that rely on DNS alone, a single EC2 instance, or services that do not expose a public API endpoint.
Question 17
An online education startup named Scrumtuous AWS Training is trialing a Linux-based Python application on a single Amazon EC2 instance. The instance uses one 2 TB Amazon EBS General Purpose SSD (gp3) volume to store customer data. The team plans to scale out the application to several EC2 instances in an Auto Scaling group, and every instance must read and write the same dataset that currently resides on the EBS volume. They want a highly available and cost-conscious approach that requires minimal changes to the application. What should the team implement?
-
✓ C. Create an Amazon Elastic File System in General Purpose performance mode and mount it across all EC2 instances
Create an Amazon Elastic File System in General Purpose performance mode and mount it across all EC2 instances is the best fit because EFS is a managed, multi-AZ, highly available network file system that supports concurrent access from many instances with POSIX semantics, typically requiring little to no code change beyond mounting a standard NFS path.
Set up Amazon FSx for Lustre, link it to an Amazon S3 bucket, and mount the file system on each EC2 instance for shared access is oriented toward HPC and fast scratch or S3-linked workloads and introduces unnecessary complexity for a general shared, durable dataset.
Use Amazon EBS Multi-Attach with an io2 volume and attach it to all instances in the Auto Scaling group is unsuitable because Multi-Attach is limited to io1/io2, a single AZ, and requires a cluster-aware file system to prevent corruption, which increases complexity and does not meet the simplicity and availability goals.
Run a single EC2 instance as an NFS server, attach the existing EBS volume, and export the share to the Auto Scaling group instances creates a single point of failure and operational burden, failing the high availability requirement.
Cameron’s Exam Tip
When multiple EC2 instances must share the same files with minimal code changes, think POSIX file system and multi-AZ availability; Amazon EFS in General Purpose mode is often the right answer for broad, durable shared access.
Question 18
Which S3 option automatically reduces storage costs across 500 buckets while requiring minimal ongoing administration and no lifecycle rule management?
-
✓ C. S3 Intelligent-Tiering
S3 Intelligent-Tiering is correct because it automatically optimizes object placement across access tiers based on changing access patterns, reducing cost without the need to create or maintain lifecycle rules across many buckets. It delivers cost savings with minimal operational overhead and no retrieval fees for tier transitions.
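Because the tiering happens inside the storage class itself, adopting it can be as simple as writing or copying objects with that class; a short sketch with placeholder bucket and key names is below. At scale across many buckets, S3 Batch Operations can perform the copy instead.

```python
import boto3

s3 = boto3.client("s3")

# New objects land directly in Intelligent-Tiering; no lifecycle rules needed.
s3.put_object(
    Bucket="example-bucket",    # placeholder
    Key="reports/2025/q1.csv",  # placeholder
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)

# Existing objects can be moved with an in-place copy to change the storage class.
s3.copy_object(
    Bucket="example-bucket",
    Key="reports/2024/q4.csv",
    CopySource={"Bucket": "example-bucket", "Key": "reports/2024/q4.csv"},
    StorageClass="INTELLIGENT_TIERING",
    MetadataDirective="COPY",
)
```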
S3 Glacier Deep Archive is not suitable because it is an archival class that generally requires lifecycle policies to move objects and involves long retrieval times, which does not meet the minimal-operations requirement.
S3 One Zone-IA stores data in a single Availability Zone and does not auto-tier, typically relying on lifecycle rules and offering lower resilience.
S3 Storage Lens only provides analytics and recommendations and does not automatically change storage classes or reduce costs on its own.
Cameron’s Exam Tip
When you see phrases like minimal ongoing management, unknown or changing access patterns, or many buckets, think S3 Intelligent-Tiering. Watch for distractors that are archival tiers requiring lifecycle policies or analytics services that do not perform automated cost optimization. Remember Intelligent-Tiering has a small monitoring and automation charge per object, but it removes the need to design and maintain lifecycle rules.
Question 19
BrightPlay Analytics runs a live leaderboard web app behind an Application Load Balancer and a fleet of Amazon EC2 instances. The service stores game results in Amazon RDS for MySQL. During weekend tournaments, read requests surge to roughly 120,000 per minute, and users see delays and occasional timeouts that trace back to slow database reads. The company needs to improve responsiveness while making the fewest possible changes to the existing architecture. What should the solutions architect recommend?
-
✓ B. Use Amazon ElastiCache to cache frequently accessed reads in front of the database
The best fit is Use Amazon ElastiCache to cache frequently accessed reads in front of the database. A cache such as Redis or Memcached can store hot leaderboard and profile reads, dramatically reducing database load and latency while requiring minimal code changes to integrate.
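A sketch of the cache-aside pattern, assuming the redis-py client, a placeholder ElastiCache endpoint, and a stand-in for the existing MySQL read; only a thin wrapper around the current query code is needed.

```python
import json

import redis  # assumes the redis-py client is installed

cache = redis.Redis(host="leaderboard.example.cache.amazonaws.com", port=6379)  # placeholder endpoint
TTL_SECONDS = 30  # short TTL keeps tournament scores fresh

def query_database(game_id: str) -> list:
    """Placeholder for the existing RDS for MySQL read."""
    return []

def get_leaderboard(game_id: str) -> list:
    key = f"leaderboard:{game_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: no database read
    rows = query_database(game_id)          # cache miss: fall through to MySQL
    cache.setex(key, TTL_SECONDS, json.dumps(rows))
    return rows
```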
Connect the application to the database using Amazon RDS Proxy is not sufficient because it optimizes connection management and failover but does not improve the performance of slow read queries.
Create Amazon RDS for MySQL read replicas and route read traffic to them can help scale reads, but it typically requires read/write split logic and can introduce replication lag, so it is not the least-change solution for very hot, frequently read data.
Migrate the data layer to Amazon DynamoDB would entail a significant redesign of the data model and application code, which conflicts with the requirement to minimize architectural changes.
Cameron’s Exam Tip
When you see read-heavy hotspots and a requirement for minimal changes, think in-memory caching with ElastiCache; use RDS Proxy for connection optimization, and remember read replicas often need read/write split logic and may have lag.
Question 20
How can you replicate S3 objects encrypted with SSE-KMS to another Region while ensuring the same KMS key material and key ID are used in both Regions?
-
✓ B. Create a new source bucket using SSE-KMS with a KMS multi-Region key, replicate to a bucket with the replica key, and migrate existing data
Create a new source bucket using SSE-KMS with a KMS multi-Region key, replicate to a bucket with the replica key, and migrate existing data is correct because AWS KMS multi-Region keys are designed to have the same key material and key ID across Regions. By encrypting the source bucket with a multi-Region key and configuring S3 replication to use the related replica key in the destination Region, you meet the requirement for identical key IDs and materials and can backfill existing data via copy or S3 Batch Replication. This aligns with native S3 replication capabilities for SSE-KMS and minimizes operational complexity.
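A sketch of creating the multi-Region primary and its replica; the Regions and description are examples, and the printed replica ARN is what the destination bucket's replication rule would reference for re-encryption.

```python
import boto3

kms_primary = boto3.client("kms", region_name="us-east-1")

# The primary multi-Region key; its key ID carries the "mrk-" prefix.
primary = kms_primary.create_key(
    Description="SSE-KMS key for the replicated bucket",
    MultiRegion=True,
)
key_id = primary["KeyMetadata"]["KeyId"]

# The replica shares the same key ID and key material in the destination Region.
replica = kms_primary.replicate_key(
    KeyId=key_id,
    ReplicaRegion="eu-west-1",
    Description="Replica key for the destination bucket",
)
print(replica["ReplicaKeyMetadata"]["Arn"])  # reference this ARN in the S3 replication rule
```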
The option Use identical KMS key alias names in both Regions and enable S3 replication is incorrect because aliases are only names mapped to different keys per Region; matching alias names do not ensure the same key ID or material.
The option Convert the existing single-Region KMS key to a multi-Region key and use S3 Batch Replication is incorrect since you cannot convert a single-Region key into a multi-Region key; you must create a new multi-Region key pair.
The option Enable S3 replication and share the current KMS key across Regions is incorrect because KMS keys are Region-bound and cannot be shared across Regions while retaining the same key ID.
Cameron’s Exam Tip
When a requirement specifies the same key ID and key material across Regions, think multi-Region KMS keys. If historical objects must be copied under new encryption, consider S3 Batch Replication or a one-time copy and ensure the destination uses the replica key.

These questions all came from my Solutions Architect Udemy course and the certificationexams.pro certification site.
Question 21
A regional media startup named Ardent Stream runs an on-premises analytics application that updates and adds files many times per hour. A new compliance rule requires a complete audit trail of storage activity that includes object-level API actions and configuration changes retained for at least 180 days. Local NAS capacity is almost exhausted, and the team wants to offload part of the dataset to AWS without interrupting ongoing writes. Which approach best satisfies the auditing requirement while easing on-premises storage pressure?
-
✓ B. Use AWS Storage Gateway to back data with Amazon S3 and enable AWS CloudTrail data events for S3
The best fit is Use AWS Storage Gateway to back data with Amazon S3 and enable AWS CloudTrail data events for S3. File Gateway provides a local cache with S3 as the durable backing store, so ongoing frequent writes continue locally while capacity pressure is relieved to S3. Enabling CloudTrail data events for S3 delivers object-level API auditing, and management events cover configuration changes, satisfying the end-to-end audit requirement.
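A sketch of turning on object-level logging for the gateway-backed bucket on an existing trail; the trail and bucket names are placeholders, and advanced event selectors are an alternative way to express the same scope.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="compliance-trail",  # placeholder existing trail
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,  # control-plane/configuration changes
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Trailing slash scopes data-event logging to all objects in this bucket.
                    "Values": ["arn:aws:s3:::example-gateway-backed-bucket/"],
                }
            ],
        }
    ],
)
```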
Move existing data to Amazon S3 using AWS DataSync and enable AWS CloudTrail management events is not ideal because DataSync focuses on transfers rather than providing ongoing hybrid access, and management events do not capture object-level actions.
Enable Amazon S3 Transfer Acceleration for uploads and turn on AWS CloudTrail data events is suboptimal because Transfer Acceleration is for client-to-S3 acceleration over distance, not a hybrid offload solution for a frequently updated on-premises workload.
Ship the data with AWS Snowball Edge and log AWS CloudTrail management events is designed for bulk offline migrations and does not address continuous change rates, and management events would still miss object-level auditing.
Cameron’s Exam Tip
For hybrid file workloads that need to keep writing locally while offloading to S3, think Storage Gateway File Gateway. For compliance that requires object-level tracking, remember CloudTrail data events record S3 object operations, while management events cover control-plane API calls.
Question 22
Which AWS services let you run Kubernetes pods without managing the underlying nodes and provide a managed AMQP compatible message broker with minimal code changes? (Choose 2)
-
✓ B. Amazon EKS on Fargate
-
✓ D. Amazon MQ
Amazon EKS on Fargate removes the need to manage Kubernetes worker nodes by running pods on serverless compute, directly addressing the goal of reducing cluster management. Amazon MQ provides a fully managed broker that natively supports AMQP, allowing existing AMQP clients to connect with minimal or no code changes while offloading broker operations.
The option Amazon MSK uses the Kafka protocol, not AMQP, so it would force client and protocol changes.
Amazon SQS is not AMQP and would require application rewrites to SQS semantics.
Amazon EKS on EC2 with Karpenter improves provisioning but still leaves you responsible for node lifecycle, patching, and capacity operations, failing the “minimal management” goal.
Cameron’s Exam Tip
Watch for keywords like AMQP, minimal code changes, and no node management. Map these to managed protocol-compatible brokers (Amazon MQ) and serverless Kubernetes execution (EKS on Fargate). Be cautious of lookalikes such as MSK/Kinesis/SQS that change the messaging protocol or semantics.
Question 23
An edtech startup named Scrumtuous Education is launching a platform to store learner progress, test submissions, and user preferences. The database must use a relational schema with ACID transactions across related records. Usage surges unpredictably during scheduled practice exams, so capacity should adjust automatically with little administration. The team also needs automated backups while keeping operations minimal. Which solution is the most cost-effective?
-
✓ D. Use Amazon Aurora Serverless v2 with automatic scaling and configure automated backups to Amazon S3 with a 10-day retention
Use Amazon Aurora Serverless v2 with automatic scaling and configure automated backups to Amazon S3 with a 10-day retention is the best fit because it is a managed relational engine that supports ACID transactions, scales capacity up and down automatically to handle unpredictable surges, and provides automated, low-ops backups, all of which align with cost-effectiveness and minimal administration.
Use Amazon DynamoDB with on-demand capacity and enable Point-in-Time Recovery is not suitable because, although it scales and offers PITR, it is a NoSQL service and is not optimized for relational schemas and joins required by this workload.
Launch Amazon RDS for MySQL in Multi-AZ with provisioned IOPS and retain automated backups in Amazon S3 Glacier Deep Archive increases cost and does not provide elastic compute scaling for sudden spikes, making it less cost-effective and less hands-off than a serverless relational option.
Run an open-source relational database on Amazon EC2 Spot Instances in an Auto Scaling group with nightly snapshots to Amazon S3 Standard-Infrequent Access introduces interruption risk and significant operational burden, which conflicts with the requirement to minimize management for a primary transactional database.
Cameron’s Exam Tip
When a workload needs relational schemas, ACID transactions, unpredictable spikes, and minimal operations, think managed serverless relational first; Aurora Serverless v2 often aligns better than DynamoDB or self-managed databases.
Question 24
A company needs to securely store application secrets and have them automatically rotated about every 90 days with minimal operational overhead. Which AWS service should they use?
-
✓ D. AWS Secrets Manager with automatic rotation
AWS Secrets Manager with automatic rotation is the right choice because it is designed specifically for managing application secrets, provides encryption with AWS KMS, integrates with IAM for fine-grained access control, supports CloudTrail auditing, and offers built-in automatic rotation using AWS Lambda. This delivers the required security while minimizing operational effort.
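A minimal sketch of attaching the ~90-day schedule to an existing secret; the secret name and rotation Lambda ARN are placeholders.

```python
import boto3

secretsmanager = boto3.client("secretsmanager")

secretsmanager.rotate_secret(
    SecretId="prod/app/db-credentials",  # placeholder secret
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret",  # placeholder
    RotationRules={"AutomaticallyAfterDays": 90},
)
```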
AWS Systems Manager Parameter Store is suitable for configuration and encrypted parameters but does not offer native automatic rotation workflows, so you must build and operate rotation yourself.
Amazon DynamoDB is a general-purpose database and would require custom logic for secure storage, retrieval, and rotation of secrets.
AWS Key Management Service manages encryption keys rather than secrets and does not provide a secrets catalog or rotation of arbitrary credentials.
Cameron’s Exam Tip
When you see keywords like secrets, automatic rotation, and least operational effort, prefer Secrets Manager. If the question emphasizes configuration values without rotation needs, Parameter Store may be correct. KMS is for key management, not secret storage.
Question 25
A travel booking startup needs to programmatically expand or shrink the geographic area that directs users to a specific application endpoint as demand fluctuates. Which Amazon Route 53 feature provides this capability?
-
✓ C. Geoproximity routing
The correct choice is Geoproximity routing. This Route 53 policy lets you apply a positive or negative bias to expand or shrink the geographic region that routes traffic to a given resource, which directly matches the requirement to dynamically adjust the area influencing DNS answers.
Latency-based routing selects the endpoint that offers the lowest latency for the requester and cannot alter geographic boundaries using a bias.
Weighted routing distributes traffic by percentage across endpoints and provides no geographic controls.
Geolocation routing routes based on the user’s location (such as country or continent), but it does not offer a bias setting to resize the geographic area around a resource.
Cameron’s Exam Tip
When you see language about expanding or shrinking a geographic area with a bias, think Geoproximity routing. If it is about mapping users by country or region, think Geolocation; if it is about best performance, think Latency-based; and if it is about percentages, think Weighted.
Question 26
Which architecture preserves all tasks, decouples a fast intake stage from a slower processing stage, and lets each stage scale independently based on its backlog?
-
✓ B. Create two Amazon SQS queues; each worker fleet polls its own queue and scales on its queue length
The best design is Create two Amazon SQS queues; each worker fleet polls its own queue and scales on its queue length. Two queues act as durable buffers between stages, ensuring no task is lost during scale-in, fully decoupling the fast intake from the slower enrichment stage, and enabling independent scaling using SQS backlog metrics such as ApproximateNumberOfMessagesVisible.
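One way to wire the backlog metric to scaling, sketched with a step-scaling policy and placeholder queue and group names; a target-tracking policy on a backlog-per-instance custom metric is a common refinement.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Step-scaling policy for the slower enrichment-stage worker fleet.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="enrichment-workers",  # placeholder ASG
    PolicyName="scale-out-on-backlog",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2}],
)

# Alarm on this stage's own queue depth and trigger the policy.
cloudwatch.put_metric_alarm(
    AlarmName="enrichment-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "enrichment-queue"}],  # placeholder queue
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=500,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```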
The option Use a single Amazon SQS queue for both stages and scale on message count mixes fast and slow tasks together, which prevents stage isolation and independent scaling or prioritization. It also complicates throughput control and back-pressure handling.
The option Use an Amazon SNS topic to fan out tasks to all workers does not provide queue semantics or a per-stage backlog, so it does not guarantee durable buffering or pull-based pacing required for workers processing at different speeds.
The option Create two Amazon SQS queues and have Auto Scaling react to queue notifications is incorrect because SQS does not natively push notifications for scaling. The recommended approach is to use CloudWatch metrics from SQS to drive Auto Scaling policies.
Cameron’s Exam Tip
Look for keywords like preserve every task, decouple stages, and scale independently. These point to queue-based decoupling with SQS. For scaling, think CloudWatch SQS metrics feeding Auto Scaling, not push notifications. Separate queues per stage are a common pattern when stages have different throughput or latency profiles.
Question 27
At LumenWave Retail, analysts load data into an Amazon Redshift warehouse to join and aggregate files stored in Amazon S3. Access patterns show that roughly 60 days after ingestion, these datasets are seldom queried and no longer considered hot. The team must continue to use standard SQL with queries starting immediately while minimizing ongoing Redshift costs as much as possible. What approach should they take? (Choose 2)
-
✓ B. Transition the data to Amazon S3 Standard-IA after 60 days
-
✓ E. Use Amazon Athena to run SQL directly on the S3 data
The most cost-effective path is to move colder data off the Redshift cluster and keep it readily accessible in S3 while using a serverless query engine. Combining Transition the data to Amazon S3 Standard-IA after 60 days with Use Amazon Athena to run SQL directly on the S3 data minimizes Redshift spend and preserves immediate, interactive SQL access.
Transition the data to Amazon S3 Standard-IA after 60 days reduces storage costs for infrequently accessed objects while keeping low-latency retrieval, which is ideal for on-demand analytics.
Use Amazon Athena to run SQL directly on the S3 data provides serverless, pay-per-query access with no cluster to manage or warm up, so analysts can start queries right away.
Launch a smaller Amazon Redshift cluster to hold and query the cold data still incurs ongoing cluster costs and does not achieve the maximum savings compared to serverless querying on S3.
Use Amazon Redshift Spectrum to query the S3 data while keeping a minimal Redshift cluster requires a running Redshift cluster as the query engine, which means continued compute costs and less cost reduction than Athena.
Move the data to Amazon S3 Glacier Deep Archive after 60 days is unsuitable for interactive SQL because retrieval can take hours and cannot start queries immediately.
Cameron’s Exam Tip
When cold analytics data must remain instantly queryable with standard SQL, think S3 cost-optimized storage plus a serverless engine like Athena; if a Redshift cluster must stay running, Spectrum works but it will not maximize cost savings.
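A sketch of the 60-day transition rule with a placeholder bucket name; Athena queries the objects in place regardless of whether they sit in Standard or Standard-IA.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-lake",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cold-after-60-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # applies to the whole bucket
                "Transitions": [{"Days": 60, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```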
Question 28
How can you expose an HTTP service hosted on EC2 instances in private subnets to the internet while keeping the instances private and minimizing operational overhead?
-
✓ B. Internet-facing Application Load Balancer in public subnets with private instances as targets
The best choice is Internet-facing Application Load Balancer in public subnets with private instances as targets. An internet-facing ALB placed in at least two public subnets exposes a public endpoint and forwards HTTP traffic to target groups that include instances in private subnets. This preserves the private placement of the instances, requires minimal ongoing management, and aligns with the standard AWS pattern for inbound traffic.
The option Amazon API Gateway with VPC Link to a private NLB can work, but it is more complex to set up and manage than necessary for simply exposing an HTTP service. API Gateway is better suited when you need API management features, not just basic ingress.
The option NAT gateway in a public subnet with routes from private subnets is incorrect because NAT only supports outbound internet access from private subnets. It does not accept inbound connections from internet clients.
The option Amazon CloudFront is insufficient on its own because it requires a publicly reachable origin such as an ALB or S3. It cannot directly connect to instances in private subnets without a suitable public origin.
Cameron’s Exam Tip
When you see requirements to keep instances private while allowing internet clients to connect with minimal management, look for a public load balancer in public subnets targeting resources in private subnets. Remember that NAT is outbound-only, CloudFront needs a public origin, and for HTTP/HTTPS you typically prefer ALB over NLB unless you specifically need Layer 4 behavior.
Question 29
A media technology startup is launching an AI-powered image tagging platform made up of small independent services, each handling a distinct processing step. When a service starts, its model loads roughly 700 MB of parameters from Amazon S3 into memory. Customers submit single images or large batches through a REST API, and traffic can spike sharply during seasonal promotions while dropping to near zero overnight. The team needs a design that scales efficiently and remains cost-effective for this bursty workload. What should the solutions architect recommend?
-
✓ B. Buffer incoming requests in Amazon SQS and run the ML processors as Amazon ECS services that poll the queue with scaling tied to queue depth
Buffer incoming requests in Amazon SQS and run the ML processors as Amazon ECS services that poll the queue with scaling tied to queue depth is the best fit because SQS decouples ingestion from processing, absorbs traffic spikes, and allows ECS tasks to scale based on backlog while reusing warm containers that have already loaded large model artifacts.
Expose the API through an Application Load Balancer and implement the ML workers with AWS Lambda using provisioned concurrency to minimize cold starts is suboptimal since large model initialization and potentially long image batches can exceed Lambda’s practical limits and lead to higher cost to keep provisioned capacity warm.
Front the service with a Network Load Balancer and run the ML services on Amazon EKS with node-based CPU autoscaling increases operational overhead and scales on CPU utilization rather than direct work queue depth, which can lag or misrepresent actual demand for asynchronous jobs.
Publish API events to Amazon EventBridge and have AWS Lambda targets process them with memory and concurrency increased dynamically per payload size is not ideal because EventBridge is not a bulk-queue buffer for high-throughput batching and Lambda memory cannot be dynamically tuned per event, making it ill-suited for heavy ML model starts.
Cameron’s Exam Tip
For bursty, asynchronous workloads with heavy startup costs, favor queue-based decoupling and containerized workers that can scale on backlog, keep models warm, and match throughput to demand.

Question 30
Which configuration statements correctly distinguish NAT instances from NAT gateways? (Choose 3)
-
✓ B. A NAT instance can be used as a bastion
-
✓ C. A NAT instance supports security groups
-
✓ E. A NAT instance can forward specific ports
The correct statements are that a NAT instance can be used as a bastion, a NAT instance supports security groups, and a NAT instance can forward specific ports. A NAT instance is simply an EC2 instance, so you can attach security groups, allow controlled inbound admin access for bastion functionality, and implement port forwarding with iptables. In contrast, a NAT gateway is a managed service focused on outbound source NAT. It cannot have security groups, does not support port or destination mapping, and does not terminate TLS.
The option “A NAT gateway supports port forwarding” is incorrect because NAT gateways only perform source NAT for outbound traffic.
The option “Security groups can be attached to a NAT gateway” is incorrect since NAT gateways do not support security groups and are governed by subnet network ACLs.
The option “A NAT gateway performs TLS termination” is incorrect because NAT gateways do not inspect or terminate application-layer traffic.
Cameron’s Exam Tip
Map keywords to the right choice: port forwarding, iptables, or bastion imply a NAT instance, and attaching security groups to a NAT gateway is always wrong. Also remember that NAT instances require disabling source/destination checks, while NAT gateways are managed and scale automatically but lack host-level customization.
Question 31
At Kestrel Dynamics, about 18 engineers need to quickly try AWS managed policies by temporarily attaching them to their own IAM users for short experiments, but you must ensure they cannot elevate privileges by giving themselves the AdministratorAccess policy. What should you implement to meet these requirements?
-
✓ B. Configure an IAM permissions boundary on every engineer’s IAM user to restrict which managed policies they can self-attach
The correct approach is Configure an IAM permissions boundary on every engineer’s IAM user to restrict which managed policies they can self-attach. A permissions boundary sets a hard ceiling on what the user can do, so you can allow testing specific AWS managed policies while categorically excluding AdministratorAccess.
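A sketch of the ceiling: a boundary policy that caps engineers to a handful of test services, attached per user. The policy name, action list, and user name are examples; the boundary only limits effective permissions, so attaching AdministratorAccess later grants nothing outside this list.

```python
import json

import boto3

iam = boto3.client("iam")

boundary = iam.create_policy(
    PolicyName="engineer-experiment-boundary",  # example name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "s3:*", "dynamodb:*", "lambda:*", "cloudwatch:*",
                "iam:AttachUserPolicy", "iam:DetachUserPolicy",
            ],
            "Resource": "*",
        }],
    }),
)

iam.put_user_permissions_boundary(
    UserName="engineer-01",  # repeat for each engineer's IAM user
    PermissionsBoundary=boundary["Policy"]["Arn"],
)
```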
Create a Service Control Policy in AWS Organizations that blocks attaching AdministratorAccess to any identity in the account is an organization-wide guardrail that requires AWS Organizations and is overly broad for this per-user need; it also restricts experimentation beyond the intended scope.
AWS Control Tower relies on guardrails backed by SCPs across accounts and organizational units, which is heavyweight and not designed for per-user experimentation within a single account.
Attach an identity-based IAM policy to each engineer that denies attaching the AdministratorAccess policy to their own user is weak because if users can manage their own permissions, they could remove or bypass the deny, enabling escalation.
Cameron’s Exam Tip
When you must allow limited experimentation while preventing privilege escalation, think permissions boundary on users or roles. Remember, boundaries do not apply to groups, and SCPs are org-level guardrails, not per-identity controls.
Question 32
How can teams located in different AWS Regions access the same Amazon EFS file system for shared editing while minimizing operational effort?
-
✓ B. Use inter-Region VPC peering to reach EFS mount targets and mount the same file system
The correct answer is Use inter-Region VPC peering to reach EFS mount targets and mount the same file system. Inter-Region VPC peering provides routable connectivity to the EFS mount targets so compute in other Regions can mount the exact same EFS, keeping a single authoritative file with minimal operational overhead and no data copy workflows.
The option Move the file to Amazon S3 with versioning is incorrect because S3 is object storage without shared POSIX semantics or file locking, making concurrent edits to a single workbook unsafe.
The option Enable EFS replication to file systems in each Region is incorrect since replication is asynchronous and produces multiple copies, not a single writable source for simultaneous editing.
The option Put EFS behind an NLB and use AWS Global Accelerator is incorrect because EFS cannot be placed behind NLB/Global Accelerator; NFS mounts must target EFS mount targets directly.
Cameron’s Exam Tip
When you see requirements for a single shared file across Regions and least operational effort, prefer network connectivity to the same storage (for example, VPC peering or Transit Gateway) over replication or migration. Remember that S3 is object storage and not a drop-in replacement for shared POSIX filesystems like EFS. Also note that load balancers and Global Accelerator are not used to front NFS-based services like EFS.
Question 33
A regional transportation company operates a costly proprietary relational database in its on-premises facility. The team plans to move to an open-source engine on AWS to reduce licensing spend while preserving advanced features such as secondary indexes, foreign keys, triggers, and stored procedures. Which pair of AWS services should be used together to run the migration and handle the required schema and code conversion? (Choose 2)
-
✓ B. AWS Database Migration Service (AWS DMS)
-
✓ C. AWS Schema Conversion Tool (AWS SCT)
The migration involves moving from a proprietary RDBMS to an open-source engine on AWS, which is a heterogeneous migration. You first convert schema and database code, then move the data with minimal downtime.
AWS Schema Conversion Tool (AWS SCT) converts the source schema and complex objects like indexes, foreign keys, and stored routines into the target engine’s format. After conversion, AWS Database Migration Service (AWS DMS) performs the data migration and can continuously replicate changes to minimize downtime.
AWS DataSync focuses on file and object transfers, not relational schema or code conversion.
AWS Snowball Edge handles offline bulk data movement and edge compute but does not help with database object conversion or logical migration.
Basic Schema Copy in AWS DMS creates only basic tables and primary keys and cannot migrate secondary indexes, foreign keys, or stored procedures, so it does not meet the requirement.
Cameron’s Exam Tip
For heterogeneous database migrations that must preserve complex objects, think SCT for convert plus DMS for move. Features like Basic Schema Copy and services like DataSync or Snowball Edge do not handle database code or advanced schema elements.
Question 34
Which AWS design enables near real time fan out of about 1.5 million streaming events per hour to multiple consumers and redacts sensitive fields before storing the sanitized items in a document database for fast reads?
-
✓ C. Amazon Kinesis Data Streams with AWS Lambda redaction writing to Amazon DynamoDB; attach additional Kinesis consumers
Amazon Kinesis Data Streams with AWS Lambda redaction writing to Amazon DynamoDB; attach additional Kinesis consumers is the best fit. Kinesis Data Streams is purpose-built for high-throughput, low-latency streaming with multiple concurrent consumers, and can use features like enhanced fan-out for consistent, near-real-time delivery. A Lambda consumer can redact sensitive fields from each record before writing sanitized items to DynamoDB for fast lookups, while other services independently consume the same stream.
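A sketch of the redaction consumer; the DynamoDB table name and the list of sensitive fields are assumptions, and the Lambda would be subscribed to the stream as an event source.

```python
import base64
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("sanitized-events")          # placeholder table
SENSITIVE_FIELDS = {"ssn", "card_number", "email"}  # assumed fields to redact

def handler(event, context):
    """Kinesis-triggered Lambda: redact each record, then store it for fast reads."""
    with table.batch_writer() as batch:
        for record in event["Records"]:
            payload = base64.b64decode(record["kinesis"]["data"])
            item = json.loads(payload, parse_float=Decimal)  # DynamoDB rejects Python floats
            for field in SENSITIVE_FIELDS:
                item.pop(field, None)                        # drop sensitive attributes
            item["event_id"] = record["kinesis"]["sequenceNumber"]
            batch.put_item(Item=item)
```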
The option Amazon Kinesis Data Firehose with Lambda transform to Amazon DynamoDB; internal services read from Firehose is incorrect because Firehose is a delivery service to storage targets and does not support multiple independent consumer applications reading the raw stream. It is not a pub-sub fan-out mechanism.
The option Write to Amazon DynamoDB, auto-scrub new items, and use DynamoDB Streams for fan-out is incorrect because DynamoDB does not provide an automatic sanitization rule at write time. Using DynamoDB as the primary ingest path plus later updates adds write amplification and latency, and DynamoDB Streams is not intended as a high-scale ingestion fan-out substitute for Kinesis.
The option Amazon EventBridge with Lambda redaction to Amazon DynamoDB; services subscribe to the bus is incorrect because EventBridge is designed for event routing and integration, not sustained high-throughput streaming at low latency with many concurrent consumers. It lacks ordered shards and the streaming characteristics that Kinesis provides.
Cameron’s Exam Tip
When you see near-real-time streaming with multiple consumers, think Kinesis Data Streams rather than Firehose. Firehose is for delivery to sinks; Streams is for custom consumer applications and fan-out. Pair Streams with Lambda for per-record transformation like PII redaction, and use DynamoDB for low-latency query access. Keywords like fan-out, multiple consumers, and near-real-time are strong signals for Kinesis Data Streams and possibly enhanced fan-out.
Question 35
A regional architecture firm is retiring its on-premises Windows file server clusters and wants to centralize storage on AWS. The team needs highly durable, fully managed file storage that Windows clients in eight branch locations can access natively using the SMB protocol. Which AWS services satisfy these requirements? (Choose 2)
-
✓ B. Amazon FSx for Windows File Server
-
✓ D. AWS Storage Gateway File Gateway
The correct services are Amazon FSx for Windows File Server and AWS Storage Gateway File Gateway. Amazon FSx for Windows File Server delivers fully managed, highly available SMB shares built on Windows Server with features like AD integration and Windows ACLs, making it ideal for native Windows workloads at scale.
AWS Storage Gateway File Gateway provides SMB (and NFS) file shares that present a local or edge interface while storing data durably in Amazon S3, allowing Windows clients to access cloud-backed shares via SMB without running traditional file servers.
Amazon Simple Storage Service (Amazon S3) is object storage accessed via REST APIs or SDKs and does not natively present SMB shares to Windows clients.
Amazon Elastic Block Store (Amazon EBS) is block storage attached to a single EC2 instance and offers no SMB file protocol by itself.
Amazon Elastic File System (Amazon EFS) is a managed file system that uses NFS, which targets Linux clients and does not support SMB.
Cameron’s Exam Tip
Map protocols to services: SMB aligns with FSx for Windows and File Gateway, while NFS aligns with EFS; remember that S3 is object and EBS is block, neither provides SMB shares.
Question 36
Which VPC attributes must be enabled for EC2 instances to resolve Route 53 private hosted zone records using the Amazon provided DNS?
-
✓ B. Turn on VPC DNS resolution and DNS hostnames
Turn on VPC DNS resolution and DNS hostnames is correct because Route 53 private hosted zones resolve only when the VPC attributes enableDnsSupport (DNS resolution) and enableDnsHostnames are enabled. These settings allow instances to use the Amazon-provided VPC resolver to answer queries for records in the private zone.
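A sketch of enabling both attributes; the VPC ID is a placeholder, and modify_vpc_attribute accepts only one attribute per call.

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder

ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": True})    # enableDnsSupport
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsHostnames={"Value": True})  # enableDnsHostnames
```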
The option Create Route 53 Resolver inbound and outbound endpoints is incorrect because these endpoints are for hybrid DNS scenarios with on-premises networks and are not required for resolution inside the same VPC.
The option Remove namespace overlap with a public hosted zone is incorrect since Route 53 selects the most specific match, and overlap does not prevent private zone resolution.
The option Set a DHCP options set with custom DNS servers is incorrect because it directs queries to external DNS, bypassing the Amazon-provided resolver, which would hinder private hosted zone lookups.
Cameron’s Exam Tip
Know the VPC attributes enableDnsSupport and enableDnsHostnames; both must be on for private hosted zone resolution via AmazonProvidedDNS. Remember that Route 53 Resolver endpoints are for hybrid connectivity, not intra-VPC resolution. Using a custom DHCP options set replaces the default resolver and can break private hosted zone lookups.
Question 37
A post-production house named Northlight Works ingests raw 8K footage ranging from 3 to 6 TB per file and applies noise reduction and color matching before delivery. Each file requires up to 35 minutes of compute. The team needs a solution that elastically scales for spikes while remaining cost efficient. Finished videos must stay quickly accessible for at least 120 days. Which approach best meets these requirements?
-
✓ C. Use AWS Batch to orchestrate editing jobs on Spot Instances, store working metadata in Amazon ElastiCache for Redis, and place outputs in Amazon S3 Intelligent-Tiering
The Use AWS Batch to orchestrate editing jobs on Spot Instances, store working metadata in Amazon ElastiCache for Redis, and place outputs in Amazon S3 Intelligent-Tiering option best aligns with elastic batch processing and cost control. AWS Batch manages job queuing and compute environments, seamlessly scaling across Spot Instances to lower compute cost while handling interruptions with retries. Using ElastiCache provides low-latency access to working metadata during processing, and S3 Intelligent-Tiering keeps content immediately retrievable while automatically optimizing storage cost over the 120-day window.
Deploy Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer, use Amazon SQS for job queuing, store metadata in Amazon RDS, and place completed outputs in Amazon S3 Glacier Flexible Retrieval adds operational overhead and includes an ALB that is unnecessary for asynchronous batch jobs. Glacier Flexible Retrieval is designed for archival, leading to retrieval delays and fees that conflict with the need for ready access.
Run an on-premises render farm integrated with AWS Storage Gateway for S3 access, keep metadata in Amazon RDS, and depend on gateway caching for frequently used assets lacks the elasticity required for unpredictable spikes and increases maintenance burden. Storage Gateway helps with storage integration but does not solve compute scaling for intensive rendering.
Run containerized workers on Amazon ECS with AWS Fargate, keep job metadata in Amazon DynamoDB, and write completed files to Amazon S3 Standard-IA can scale, but Fargate often costs more than Spot-backed Batch for sustained compute, and Standard-IA may incur retrieval fees with variable access during the 120-day period.
Cameron’s Exam Tip
For bursty, parallelizable workloads that run for minutes to hours, prefer AWS Batch on Spot to minimize compute costs and simplify orchestration. When outputs must stay quickly accessible yet cost efficient, choose Amazon S3 Intelligent-Tiering over archival tiers.
Question 38
A single instance EC2 application must remain available after an Availability Zone failure while keeping costs minimal. Which actions enable automatic cross Availability Zone recovery? (Choose 3)
-
✓ B. Allocate an Elastic IP and associate it at boot using user data
-
✓ C. Create an Auto Scaling group across two AZs with min=1, max=1, desired=1
-
✓ E. Attach an instance role permitting AssociateAddress and DescribeAddresses so user data manages the EIP
Create an Auto Scaling group across two AZs with min=1, max=1, desired=1, Allocate an Elastic IP and associate it at boot using user data, and Attach an instance role permitting AssociateAddress and DescribeAddresses so user data manages the EIP together provide low-cost, automatic recovery across Availability Zones for a single-instance application. The multi-AZ ASG ensures that if one AZ fails, a replacement instance is launched in a healthy AZ. The Elastic IP supplies a stable endpoint that can be reassigned to the new instance, and the instance role enables the boot script or lifecycle hook to call the EC2 API to attach that EIP without manual steps.
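A sketch of the boot-time script that user data could run, assuming the instance role permits ec2:AssociateAddress and the Elastic IP allocation ID is a placeholder; it looks up its own identity through IMDSv2 and re-points the stable address at the replacement instance.

```python
import urllib.request

import boto3

ALLOCATION_ID = "eipalloc-0123456789abcdef0"  # placeholder Elastic IP allocation

def imds(path: str) -> str:
    """Read instance metadata using IMDSv2."""
    token_req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
        method="PUT",
    )
    token = urllib.request.urlopen(token_req).read().decode()
    req = urllib.request.Request(
        f"http://169.254.169.254/latest/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

instance_id = imds("instance-id")
region = imds("placement/region")

# Re-point the stable Elastic IP at whichever instance the ASG just launched.
boto3.client("ec2", region_name=region).associate_address(
    InstanceId=instance_id,
    AllocationId=ALLOCATION_ID,
    AllowReassociation=True,
)
```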
The option Enable EC2 Auto Recovery with a CloudWatch alarm is insufficient because it recovers the instance only in the same AZ, offering no protection from an AZ outage.
Use an Application Load Balancer in front of the instance adds ongoing cost and does not move a single target to another AZ; it is unnecessary when only one instance can run.
AWS Global Accelerator does not provide AZ failover for a lone endpoint and introduces additional cost; it is most useful when you have multiple healthy endpoints to fail over between.
Cameron’s Exam Tip
For a single-instance workload that must tolerate AZ failure at minimal cost, think multi-AZ Auto Scaling group with desired=1 plus Elastic IP reattachment automation via user data or lifecycle hooks and appropriate IAM permissions. Avoid adding load balancers or global traffic managers when there is only one backend instance.
Question 39
A solutions architect at Northstar Outfitters is deploying an application on Amazon EC2 inside a VPC. The application saves product images in Amazon S3 and stores customer profiles in a DynamoDB table named CustomerProfiles. The security team requires that connectivity from the EC2 subnets to these AWS services stays on the AWS network and does not traverse the public internet. What should the architect implement to meet this requirement?
-
✓ C. Set up gateway VPC endpoints for Amazon S3 and Amazon DynamoDB
The correct approach is to use Set up gateway VPC endpoints for Amazon S3 and Amazon DynamoDB. Gateway endpoints keep traffic between your VPC and these services on the AWS backbone and add route entries to your subnet route tables, preventing any path over the public internet.
Configure interface VPC endpoints for Amazon S3 and Amazon DynamoDB is wrong because gateway endpoints are the standard, no-cost way for a VPC to reach S3 and DynamoDB privately; interface endpoints via PrivateLink add hourly and data-processing charges and are unnecessary for this scenario.
Deploy a NAT gateway in a public subnet and update private route tables is incorrect since a NAT gateway forwards traffic to the internet to reach public service endpoints, which fails the “no public internet” requirement.
AWS Direct Connect is not appropriate because it addresses on-premises to AWS connectivity and does not alter EC2-to-service paths within a VPC; without VPC endpoints, EC2 would still use public endpoints.
Cameron’s Exam Tip
Memorize that S3 and DynamoDB use gateway VPC endpoints, while most other AWS services use interface endpoints (PrivateLink). If the requirement says traffic must not traverse the public internet from a VPC to AWS services, think VPC endpoints, not NAT, VPN, or internet gateways.
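For reference, a minimal boto3 sketch of creating both gateway endpoints follows, using hypothetical VPC and route table IDs. Associating the route tables is what adds the managed prefix-list routes automatically.

```python
# Minimal sketch: one gateway endpoint each for S3 and DynamoDB.
# VPC ID and route table IDs are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"
route_table_ids = ["rtb-0a1b2c3d4e5f67890"]  # private subnet route tables

for service in ("s3", "dynamodb"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId=vpc_id,
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        RouteTableIds=route_table_ids,  # prefix-list routes are added automatically
    )
```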
Question 40
Static assets are stored in an S3 bucket and served through CloudFront, and they must be accessible only from specified corporate IP ranges. Which actions will enforce the IP allow list and prevent direct access to the S3 bucket? (Choose 2)
-
✓ B. CloudFront origin access identity and S3 bucket policy limited to that OAI
-
✓ D. AWS WAF web ACL with IP allow list on CloudFront
The correct actions are CloudFront origin access identity and S3 bucket policy limited to that OAI and AWS WAF web ACL with IP allow list on CloudFront. Using an OAI with a restrictive bucket policy ensures S3 objects cannot be accessed directly and are only retrievable through CloudFront. Applying a WAF web ACL with an IP allow list to the CloudFront distribution enforces client IP restrictions at the edge, matching the corporate CIDRs requirement.
The option S3 bucket policy with aws:SourceIp allowing corporate CIDRs is incorrect because it evaluates the client IP only for direct S3 requests. It would either block CloudFront origin requests or still allow bypassing CloudFront, failing the edge enforcement requirement.
The option Apply AWS WAF to the S3 bucket is invalid since WAF cannot be attached to S3.
The option CloudFront signed URLs for users does not implement an IP allow list and does not by itself prevent direct S3 access.
Cameron’s Exam Tip
For IP-based client filtering at the CDN, use WAF on CloudFront. To stop direct S3 access, use OAI or the newer OAC with an S3 bucket policy allowing only that principal. Remember that S3 has no WAF integration and that S3 SourceIp conditions evaluate the requester of S3, not the end client when CloudFront fetches the object.
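The edge allow list could be sketched roughly as below with boto3, using example corporate CIDRs. CLOUDFRONT-scoped WAF resources must be created in us-east-1, and the returned web ACL ARN is then attached to the distribution.

```python
# Minimal sketch: IP set allow list enforced by a CloudFront-scoped web ACL.
# The CIDRs and names are hypothetical examples.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

ip_set = wafv2.create_ip_set(
    Name="corporate-cidrs",
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24", "198.51.100.0/24"],
)

wafv2.create_web_acl(
    Name="corp-only",
    Scope="CLOUDFRONT",
    DefaultAction={"Block": {}},  # everything not on the allow list is blocked
    Rules=[{
        "Name": "allow-corporate",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "allow-corporate",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "corp-only",
    },
)
# The web ACL ARN returned here is then referenced by the CloudFront distribution.
```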
Question 41
Riverton Robotics is evaluating how to initialize Amazon EC2 instances for a pilot rollout and wants to test the instance user data capability to bootstrap software and configuration. Which statements accurately describe the default behavior and mutability of EC2 user data? (Choose 2)
-
✓ B. User data runs automatically only on the first boot after the instance is launched
-
✓ D. User data scripts execute with root privileges by default
The default EC2 user data behavior is to run one time at first boot and to execute with root privileges. This enables initial configuration and provisioning without needing sudo in the script.
User data scripts execute with root privileges by default is correct because EC2 processes user data as the root user, which affects ownership and permissions of created files. User data runs automatically only on the first boot after the instance is launched is also correct because cloud-init defaults to once-per-launch processing unless configured otherwise.
You can change an instance’s user data while it is running if you use root credentials is incorrect because user data cannot be altered on a running instance; you must stop the instance or provide new user data at launch.
User data is processed on every reboot of an EC2 instance by default is incorrect since per-boot execution requires explicit cloud-init or system configuration.
You can edit user data from inside the instance using the Instance Metadata Service is incorrect because IMDS is read-only for retrieval of user data and cannot be used to modify it.
Cameron’s Exam Tip
Remember the two defaults for EC2 user data: runs as root and executes once at first boot; anything else (like per-boot runs) must be explicitly configured.
Question 42
Which design ensures highly available internet egress from private subnets in two Availability Zones?
-
✓ B. Two NAT gateways in public subnets, one per AZ
The correct choice is Two NAT gateways in public subnets, one per AZ. NAT gateways are zonal. For high availability, deploy a NAT gateway in a public subnet in each Availability Zone and update each private subnet’s route table to target the NAT in the same AZ. This avoids a single point of failure and prevents cross-AZ dependencies and data transfer charges.
The option Create one NAT gateway in a public subnet is incorrect because it creates a single AZ dependency; an AZ outage or NAT failure will break egress for all private subnets.
The option Two NAT gateways placed in private subnets is incorrect since NAT gateways must reside in public subnets with a route to an internet gateway to provide outbound internet access.
The option Gateway VPC endpoint is incorrect because endpoints enable private connectivity to specific AWS services only, not general internet egress.
Cameron’s Exam Tip
Remember NAT gateways are zonal. For resilient architectures, place a NAT gateway in each AZ and route private subnets to the local NAT. NAT gateways must be in public subnets with an internet gateway. VPC endpoints and egress-only internet gateways do not provide general IPv4 internet egress.
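A minimal sketch of the per-AZ pattern, with hypothetical subnet and route table IDs, might look like the following in boto3: each private route table gets a default route to the NAT gateway in its own Availability Zone.

```python
# Minimal sketch: one NAT gateway per AZ, local routing for each private subnet.
# Subnet and route table IDs are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

az_layout = {
    # AZ: (public subnet hosting the NAT, private route table to update)
    "eu-west-1a": ("subnet-0aaa1110000000000", "rtb-0aaa1110000000000"),
    "eu-west-1b": ("subnet-0bbb2220000000000", "rtb-0bbb2220000000000"),
}

for az, (public_subnet_id, private_rtb_id) in az_layout.items():
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId=public_subnet_id,
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
    # Default route for this AZ's private subnets stays within the same AZ.
    ec2.create_route(
        RouteTableId=private_rtb_id,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
```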
Question 43
SkyTrail Logistics runs multiple Amazon EC2 instances in private subnets across three Availability Zones within a single VPC, and these instances must call Amazon DynamoDB APIs without sending traffic over the public internet. What should the solutions architect do to keep the traffic on the AWS network path? (Choose 2)
-
✓ B. Configure a DynamoDB gateway VPC endpoint in the VPC
-
✓ D. Add routes in the private subnet route tables that point to the DynamoDB endpoint
The right approach is to use a VPC endpoint that keeps traffic on the AWS backbone and to ensure subnet routing sends DynamoDB traffic to that endpoint. Configure a DynamoDB gateway VPC endpoint in the VPC establishes private connectivity to DynamoDB, and Add routes in the private subnet route tables that point to the DynamoDB endpoint ensures packets destined for DynamoDB follow that private path.
Set up VPC peering from the VPC to the DynamoDB service is not possible because you cannot peer a VPC directly with an AWS public service.
Create interface VPC endpoints for DynamoDB in each private subnet using AWS PrivateLink is incorrect because the gateway endpoint is the standard, no-cost way to reach DynamoDB from within a VPC; interface endpoints add cost and are unnecessary here.
Send the traffic through a NAT gateway in a public subnet would egress to the internet, which violates the requirement.
Cameron’s Exam Tip
For DynamoDB and Amazon S3, use gateway endpoints, then update the relevant route tables; most other services use interface endpoints via PrivateLink.
Question 44
How should a stateless web tier running behind an Application Load Balancer with Auto Scaling across three Availability Zones be configured to remain highly available and minimize steady-state cost while handling daily traffic spikes? (Choose 2)
-
✓ B. Set Auto Scaling minimum to 2 instances
-
✓ C. Buy Reserved Instances for the steady baseline
Set Auto Scaling minimum to 2 instances ensures the service remains available across multiple AZs with at least two instances online, while still allowing the group to scale out quickly during spikes. Buy Reserved Instances for the steady baseline applies discounted pricing to the always-on portion of the fleet, minimizing steady-state cost without affecting elasticity.
The option Use On-Demand Instances only does not optimize the constant baseline and misses available discounts.
Set Auto Scaling minimum to 4 instances increases fixed cost unnecessarily; high availability does not require four baseline instances.
Use Spot Instances for baseline capacity is risky because Spot capacity can be interrupted, which is not appropriate for required baseline availability.
Distinguish baseline from burst. Use RIs or Savings Plans for the baseline and On-Demand or Spot for burst capacity. For multi-AZ availability, keep an ASG minimum of at least two instances. Be cautious with options that raise minimum capacity or rely on interruptible instances for critical baseline.
Question 45
An online video-sharing startup needs to answer relationship-heavy questions such as “How many likes are on clips uploaded by the friends of user Mia over the last 72 hours?” The data model includes users, friendships, videos, and reactions, and the team expects frequent multi-hop traversals with low-latency aggregations across connected entities. Which AWS database service is the best fit for this requirement?
-
✓ B. Amazon Neptune
The best choice is Amazon Neptune because it is a managed, high-performance graph database designed for highly connected data and multi-hop traversals. It supports graph languages like openCypher and Gremlin, enabling efficient queries such as counting likes on videos posted by a user’s friends with low latency at scale.
Amazon Redshift is optimized for columnar, warehouse-style analytics and batch reporting, not for real-time graph traversal across relationships, so it is a poor fit for social graph queries.
Amazon OpenSearch Service excels at full-text search and operational analytics on logs and documents, but it is not optimized for traversing and aggregating across graph relationships.
Amazon Aurora can model relationships with joins, but multi-hop traversals over large, highly connected datasets become complex and slow compared to a dedicated graph engine.
Cameron’s Exam Tip
When you see social network patterns, friend-of-friend queries, or billions of relationships needing low-latency traversal, think graph database and choose Amazon Neptune.
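To make the multi-hop idea concrete, here is a rough Gremlin traversal submitted with the gremlinpython client. The endpoint, vertex labels, edge labels, and property names are all hypothetical and depend entirely on how the graph is modeled.

```python
# Rough sketch: count likes on videos uploaded by Mia's friends in the
# last 72 hours. Endpoint and graph schema are hypothetical.
import time

from gremlin_python.driver import client

NEPTUNE_WSS = "wss://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin"

cutoff_ms = int((time.time() - 72 * 3600) * 1000)  # 72 hours ago, epoch millis

query = (
    "g.V().has('user','name','Mia')"
    ".out('friend')"                     # hop 1: Mia's friends
    ".out('uploaded')"                   # hop 2: their videos
    f".has('uploadedAt', gt({cutoff_ms}))"
    ".inE('liked')"                      # hop 3: like edges on those videos
    ".count()"
)

gremlin = client.Client(NEPTUNE_WSS, "g")
print(gremlin.submit(query).all().result())  # e.g. [4213]
gremlin.close()
```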
Question 46
Which actions allow authenticated Amazon Cognito users to upload directly to Amazon S3 using temporary credentials while ensuring the traffic remains on the AWS network? (Choose 2)
-
✓ B. Configure a Cognito identity pool to exchange user pool logins for temporary IAM credentials to S3
-
✓ C. Create a VPC endpoint for Amazon S3
Configure a Cognito identity pool to exchange user pool logins for temporary IAM credentials to S3 is correct because identity pools federate authenticated users into IAM, issuing short‑lived credentials that can be scoped with fine‑grained S3 permissions for direct uploads. Create a VPC endpoint for Amazon S3 is also correct because the S3 gateway endpoint keeps traffic on the AWS network and allows you to restrict access to specific buckets via an endpoint policy, meeting the private connectivity requirement.
The option Route S3 traffic through a NAT gateway is wrong because a NAT gateway egresses to public S3 endpoints over the internet and does not provide private connectivity.
Require Cognito user pool tokens in the S3 bucket policy is incorrect since S3 bucket policies cannot validate Cognito JWTs; authorization is evaluated against IAM principals and policy conditions, which is why temporary IAM credentials via an identity pool are needed.
Call STS AssumeRoleWithWebIdentity directly with Cognito user pool tokens is incorrect because STS does not directly support Cognito User Pools tokens for this call; Cognito identity pools handle the token exchange and credential issuance.
For exam strategy, look for the pairing of short‑lived credentials and private connectivity. Identity federation to IAM via Cognito identity pools satisfies the no long‑term credentials requirement, while an S3 VPC endpoint satisfies the keep traffic on the AWS network requirement. Be cautious of answers that rely on NAT gateways or bucket policies to validate JWTs, as neither meets the private connectivity or auth integration needs on their own.
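A minimal sketch of the identity pool exchange with boto3 is shown below; the pool IDs, bucket name, and the user pool ID token are hypothetical placeholders.

```python
# Minimal sketch: exchange a user pool ID token for scoped temporary
# credentials via an identity pool, then upload directly to S3.
import boto3

REGION = "us-east-1"
IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"  # hypothetical
USER_POOL_ID = "us-east-1_ABCdef123"                                  # hypothetical
id_token = "<JWT returned by the user pool sign-in>"

provider = f"cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}"
identity = boto3.client("cognito-identity", region_name=REGION)

identity_id = identity.get_id(
    IdentityPoolId=IDENTITY_POOL_ID,
    Logins={provider: id_token},
)["IdentityId"]

creds = identity.get_credentials_for_identity(
    IdentityId=identity_id,
    Logins={provider: id_token},
)["Credentials"]

# Short-lived credentials scoped by the identity pool's authenticated role.
s3 = boto3.client(
    "s3",
    region_name=REGION,
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("photo.jpg", "example-upload-bucket", "uploads/photo.jpg")
```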
Question 47
A technical publisher stores about 18 TB of training videos and PDFs in a single Amazon S3 bucket in one AWS Region. A partner company in a different Region has cross-account read access to pull the content into its own platform. The publisher wants to keep its own data transfer charges as low as possible when the partner downloads the objects. What should a solutions architect recommend?
-
✓ B. Enable Requester Pays on the publisher’s S3 bucket
The correct answer is Enable Requester Pays on the publisher’s S3 bucket. Requester Pays ensures that the entity downloading the objects pays the request and data transfer charges, which directly reduces the publisher’s egress costs while the publisher continues to pay only for storage.
Set up S3 Cross-Region Replication to the partner’s S3 bucket increases the publisher’s costs for ongoing replication and cross-Region transfer, and it does not specifically make the downloader pay for transfers.
Turn on S3 Transfer Acceleration for the bucket can speed downloads from distant locations, but the data transfer fees remain billed to the bucket owner, so it does not meet the goal of minimizing the publisher’s charges.
Serve the files through Amazon CloudFront with the S3 bucket as the origin may improve performance and alter pricing, but the publisher still pays for delivery to the partner and it does not shift the costs to the requester.
Cameron’s Exam Tip
When a third party must download large amounts of S3 data and you need to reduce the bucket owner’s egress spend, think Requester Pays; do not confuse it with Transfer Acceleration, which targets performance rather than cost ownership.
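As a quick sketch, enabling Requester Pays and downloading as the requester might look like the following in boto3, with hypothetical bucket and key names; the partner must set RequestPayer on every request or S3 rejects it.

```python
# Minimal sketch: the bucket owner enables Requester Pays once, and the
# partner acknowledges the charges on each request. Names are hypothetical.
import boto3

s3 = boto3.client("s3")

# Publisher (bucket owner) enables Requester Pays.
s3.put_bucket_request_payment(
    Bucket="publisher-training-content",
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# Partner account downloads and explicitly accepts the transfer charges.
partner_s3 = boto3.client("s3")  # would use the partner account's credentials
partner_s3.get_object(
    Bucket="publisher-training-content",
    Key="courses/module-01.mp4",
    RequestPayer="requester",
)
```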
Question 48
Which architecture provides low-cost, highly available, on-demand image transformations for objects stored in Amazon S3 that are requested through API Gateway and delivered to internet clients?
-
✓ C. API Gateway + AWS Lambda for transforms, store originals and outputs in S3, deliver via CloudFront with S3 origin
API Gateway + AWS Lambda for transforms, store originals and outputs in S3, deliver via CloudFront with S3 origin is the best fit because it is fully serverless, inherently highly available, and pay-per-use. API Gateway passes transformation parameters to Lambda, Lambda generates or fetches the derived image and stores it in S3, and CloudFront caches and serves content globally. This minimizes compute runtime, offloads scale to managed services, and leverages edge caching to reduce latency and cost.
EC2 Auto Scaling with an Application Load Balancer, S3 for originals and derivatives, CloudFront over S3 is less cost-efficient due to always-on instances and a load balancer, plus added operational overhead compared to serverless.
API Gateway and Lambda returning images directly to clients without S3 or CloudFront lacks CDN caching and durable storage. It increases per-request cost, may hit payload or timeout limits, and does not optimize global delivery.
EC2 for processing, S3 for sources, DynamoDB for transformed images, CloudFront on S3 misuses DynamoDB for large binary objects and still incurs EC2 costs, making it neither simple nor cost-optimal.
Cameron’s Exam Tip
When you see requirements like low cost, high availability, and internet distribution for media assets, think S3 for storage, CloudFront for global caching, and Lambda for on-demand processing. Avoid storing images in DynamoDB and be cautious of solutions that keep EC2 or load balancers running. Use API Gateway to pass transformation parameters, but rely on CloudFront for scale and caching.
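A rough sketch of the Lambda transform step is below, assuming Pillow is available as a layer and using hypothetical bucket names, query parameters, and CloudFront domain; a real implementation would also check whether the derived object already exists before re-rendering.

```python
# Rough sketch of an on-demand resize handler behind API Gateway.
# Bucket names, parameters, and the CloudFront domain are hypothetical.
import io
import os

import boto3
from PIL import Image

s3 = boto3.client("s3")
SOURCE_BUCKET = os.environ.get("SOURCE_BUCKET", "originals-bucket")
OUTPUT_BUCKET = os.environ.get("OUTPUT_BUCKET", "derivatives-bucket")


def handler(event, context):
    params = event.get("queryStringParameters") or {}
    key = params["key"]                  # e.g. photos/cat.jpg
    width = int(params.get("width", "320"))
    derived_key = f"resized/{width}/{key}"

    # Fetch the original, resize it, and store the derivative.
    original = s3.get_object(Bucket=SOURCE_BUCKET, Key=key)["Body"].read()
    image = Image.open(io.BytesIO(original)).convert("RGB")
    image.thumbnail((width, width))

    buffer = io.BytesIO()
    image.save(buffer, format="JPEG")
    s3.put_object(
        Bucket=OUTPUT_BUCKET,
        Key=derived_key,
        Body=buffer.getvalue(),
        ContentType="image/jpeg",
    )

    # Redirect the client to the CloudFront URL fronting the output bucket.
    return {
        "statusCode": 302,
        "headers": {"Location": f"https://dxxxxxxxx.cloudfront.net/{derived_key}"},
    }
```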
Question 49
A ride-sharing platform runs an Auto Scaling group of Amazon EC2 instances across two Availability Zones in eu-west-2, with an Application Load Balancer distributing all traffic to the group. During a staging exercise, the team manually terminated three instances in eu-west-2a, leaving capacity uneven across zones. Later, the load balancer health check flagged an instance in eu-west-2b as unhealthy. What outcomes should you expect from Amazon EC2 Auto Scaling in response to these events? (Choose 2)
-
✓ B. For Availability Zone imbalance, Amazon EC2 Auto Scaling rebalances by launching instances in the under-provisioned zone first and only then terminates excess capacity
-
✓ D. When an instance is marked unhealthy by the load balancer, Amazon EC2 Auto Scaling records a scaling activity to terminate it and after termination starts a new instance to maintain desired capacity
The AZs become uneven after manual terminations, so Amazon EC2 Auto Scaling initiates rebalancing. In rebalancing, For Availability Zone imbalance, Amazon EC2 Auto Scaling rebalances by launching instances in the under-provisioned zone first and only then terminates excess capacity, which maintains application availability while capacity shifts.
For the ALB-unhealthy instance, Auto Scaling performs health replacement. When an instance is marked unhealthy by the load balancer, Amazon EC2 Auto Scaling records a scaling activity to terminate it and after termination starts a new instance to maintain desired capacity, which matches the documented terminate-then-launch sequence.
When the ALB reports an instance as unhealthy, Amazon EC2 Auto Scaling first launches a replacement instance and then later terminates the unhealthy one is incorrect because the order is reversed; Auto Scaling terminates first, then launches a replacement.
For Availability Zone rebalancing, Amazon EC2 Auto Scaling terminates old instances before launching new ones so that no extra instances are created is wrong since rebalancing launches first to avoid dropping capacity and only then terminates.
Instance Refresh is not applicable here because it is a deployment feature and is not automatically triggered by a single unhealthy instance or AZ imbalance.
Cameron’s Exam Tip
Remember the difference: AZ rebalancing launches before terminating to preserve capacity, while health replacement for an ELB-unhealthy instance terminates first and then launches a replacement.
Question 50
How can you ensure that objects in an Amazon S3 bucket are accessible only through a CloudFront distribution and cannot be retrieved directly via S3 URLs?
-
✓ C. Use a CloudFront origin access identity with an S3 bucket policy allowing it
Use a CloudFront origin access identity with an S3 bucket policy allowing it is correct because the OAI acts as CloudFront’s identity to S3. You grant that OAI read access in the bucket policy and block all other principals, ensuring objects are retrievable via CloudFront only.
The option Enable S3 Block Public Access only is insufficient because while it blocks public access, CloudFront still needs an authorized identity (OAI or OAC) to access the bucket. Without that, CloudFront requests will fail.
The option Attach an IAM role to CloudFront and allow it in the S3 bucket policy is invalid since CloudFront distributions do not have or assume IAM roles.
The option Keep S3 public and use CloudFront signed URLs fails the requirement because users could still bypass CloudFront using the public S3 object URLs.
Cameron’s Exam Tip
When the requirement is “S3 only through CloudFront,” look for OAI or the newer Origin Access Control (OAC). OAC is the modern recommendation, but OAI remains valid and widely tested. Combine OAI/OAC with a restrictive S3 bucket policy and consider enabling S3 Block Public Access to avoid accidental exposure.
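A minimal sketch of the restrictive bucket policy, using the newer OAC principal mentioned in the tip and hypothetical account, distribution, and bucket identifiers, could look like this:

```python
# Minimal sketch: allow s3:GetObject only to the CloudFront distribution.
# Account ID, distribution ID, and bucket name are hypothetical.
import json

import boto3

bucket = "static-assets-bucket"
distribution_arn = "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
# With a legacy OAI instead, the Principal would be the OAI user, e.g.
# "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <ID>".
```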
Question 51
At LumaRide, a fleet telemetry processor runs on Amazon EC2 Linux instances in multiple Availability Zones. The application writes log objects using standard HTTP API calls and must keep these logs for at least 10 years while supporting concurrent access by many instances. Which AWS storage option most cost-effectively meets these requirements?
-
✓ C. Amazon S3
Amazon S3 is the most cost-effective option because it is a massively scalable, durable object storage service accessed via APIs, which aligns with the application’s API-based writes. S3 supports virtually unlimited parallel access to objects and economical long-term retention.
Amazon EFS provides shared POSIX file access and concurrency, but for large volumes of logs retained for many years it is typically more expensive than S3, making it less cost-effective for this use case.
Amazon EBS is block storage that is generally attached to a single instance (Multi-Attach has constraints), so it does not meet broad concurrent access needs and is not the most economical for indefinite retention.
Amazon EC2 instance store is ephemeral storage; data is lost when the instance stops or terminates, so it cannot satisfy long-term retention requirements.
When logs are written via APIs and need long-term retention with many readers/writers, think object storage like S3 for cost and scale. Use EFS for shared POSIX access when you truly need a file system, and remember EBS is per-instance block storage while instance store is ephemeral.
Question 52
Which AWS feature provides a centralized, repeatable way to deploy standardized infrastructure across multiple AWS accounts and Regions?
-
✓ B. CloudFormation StackSets
CloudFormation StackSets is correct because it lets you define a template once and roll it out, update it, and delete it consistently across multiple accounts and Regions from a central administrator account. This directly addresses centralized, multi-account, multi-Region standardization and provisioning.
AWS Organizations SCPs is incorrect because SCPs enforce guardrails by allowing or denying actions, but they do not create or update resources or orchestrate deployments.
AWS Resource Access Manager is incorrect because RAM enables resource sharing across accounts, not cross-account deployment orchestration.
AWS CloudFormation stacks is incorrect because a stack by itself is scoped to a single account and Region; it does not replicate across accounts or Regions without StackSets.
Cameron’s Exam Tip
When you see keywords like multi-account, multi-Region, and centralized deployment, think CloudFormation StackSets. If the question emphasizes preventing or limiting actions rather than provisioning, look for SCPs. If it focuses on sharing existing resources, consider RAM. Always distinguish between enforcing policy versus deploying infrastructure.
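A minimal boto3 sketch of the StackSets flow follows, with a hypothetical template URL, OU ID, and Regions; the service-managed permission model assumes the StackSets integration with AWS Organizations is already enabled.

```python
# Minimal sketch: define the stack set once, then fan it out to an OU
# across two Regions. Template URL, OU ID, and Regions are hypothetical.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack_set(
    StackSetName="baseline-logging",
    TemplateURL="https://s3.amazonaws.com/example-templates/baseline.yaml",
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

cfn.create_stack_instances(
    StackSetName="baseline-logging",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},
    Regions=["us-east-1", "eu-west-2"],
)
```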
Question 53
BluePeak Institute runs a nightly Python job that typically completes in about 45 minutes. The task is stateless and safe to retry, so if it gets interrupted the team simply restarts it from the beginning. It currently executes in a colocation data center, and they want to move it to AWS while minimizing compute spend. What is the most cost-effective way to run this workload?
-
✓ C. EC2 Spot Instance with a persistent request
The most cost-effective approach is to use EC2 Spot Instance with a persistent request. Spot pricing delivers significant savings versus On-Demand, and a persistent request will automatically attempt to re-provision capacity if the instance is interrupted, which aligns with a job that can safely restart.
AWS Lambda cannot run this job because the maximum execution duration per invocation is 15 minutes, which is well below the 45-minute runtime.
Amazon EMR is optimized for distributed data processing frameworks like Spark or Hive and would be overkill and more expensive for a single Python script that does not require a cluster.
Application Load Balancer is for HTTP(S) request routing and does not provide compute resources to run batch code.
Cameron’s Exam Tip
When a batch task is interruptible and stateless, think EC2 Spot for maximum savings; always verify runtime limits before choosing Lambda, since its per-invocation cap can be a hard blocker.
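A minimal sketch of the persistent Spot request in boto3 is shown below with a hypothetical AMI and instance type; the AMI is assumed to start the nightly job itself, for example through user data or a systemd unit.

```python
# Minimal sketch: a persistent Spot request that re-acquires capacity after
# an interruption so the restartable job simply runs again.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.request_spot_instances(
    InstanceCount=1,
    Type="persistent",                         # request stays open after interruption
    InstanceInterruptionBehavior="terminate",  # interrupted runs restart on new capacity
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",    # hypothetical AMI that launches the job
        "InstanceType": "m6i.large",
    },
)
```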
Question 54
How can an EC2-hosted service be privately accessed by other VPCs and AWS accounts in the same Region without exposing any other resources in the hosting VPC and while requiring minimal management? (Choose 2)
-
✓ B. AWS PrivateLink service
-
✓ C. Network Load Balancer in service VPC
The correct approach is to use AWS PrivateLink service with a Network Load Balancer in service VPC. PrivateLink exposes only the intended application via interface endpoints in consumer VPCs, avoiding any VPC-level routing between producer and consumer. The NLB in the service VPC is required to front the endpoint service and target only the specific EC2 service, ensuring least-privilege access and minimal ongoing management.
VPC peering is incorrect because it establishes full bidirectional routing between VPCs, which would expose broader network surfaces and require ongoing security controls to limit access to only the service.
AWS Transit Gateway centralizes VPC connectivity but similarly provides network-level routing rather than service-scoped access, increasing operational overhead to maintain isolation.
AWS Global Accelerator is designed for improving internet-facing application performance and is not suitable for private, intra-Region VPC-to-VPC service exposure.
Cameron’s Exam Tip
When the requirement is private, service-only access across VPCs or accounts, think PrivateLink plus NLB in the service VPC. Remember that PrivateLink uses interface endpoints in consumer VPCs and does not require peering or Transit Gateway. Also note that PrivateLink endpoint services require Network Load Balancer, not Application Load Balancer. For broad L3 connectivity needs, consider peering or Transit Gateway instead. Watch for keywords like service-only access, interface endpoints, and isolation from other subnets to cue PrivateLink.
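To illustrate, a rough boto3 sketch of publishing the endpoint service and allow-listing one consumer account follows, using hypothetical ARNs; consumers then create interface endpoints against the returned service name.

```python
# Rough sketch: expose the NLB-fronted service over PrivateLink and limit
# which principals may connect. ARNs and account IDs are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
        "loadbalancer/net/svc-nlb/50dc6c495c0c9188"
    ],
    AcceptanceRequired=True,  # the provider approves each connection request
)
service_id = service["ServiceConfiguration"]["ServiceId"]

# Allow only the intended consumer account to create interface endpoints.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service_id,
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],
)
# Consumers then call create_vpc_endpoint with VpcEndpointType="Interface"
# and the ServiceName from ServiceConfiguration["ServiceName"].
```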
Question 55
A Canadian startup operates an online design portfolio platform hosted on multiple Amazon EC2 instances behind an Application Load Balancer. The site currently has users in four countries, but a new compliance mandate requires the application to be reachable only from Canada and to deny requests from all other countries. What should the team configure to meet this requirement?
-
✓ B. Attach an AWS WAF web ACL with a geo match rule to the Application Load Balancer to permit only Canada
The most direct and secure way to enforce country-based access at the load balancer is to use Attach an AWS WAF web ACL with a geo match rule to the Application Load Balancer to permit only Canada. AWS WAF supports geo match conditions that allow you to permit or block traffic by country, and it integrates natively with Application Load Balancers.
Update the security group associated with the Application Load Balancer to allow only the approved country is not viable because security groups cannot evaluate geographic origin; they only filter on IP, protocol, and port.
Use Amazon Route 53 geolocation routing to return responses only to Canadian users is not a true enforcement mechanism. DNS geolocation is based on the resolver’s location, can be misclassified or bypassed, and does not stop direct connections to the ALB.
Enable Amazon CloudFront geo restriction on a distribution in an Amazon VPC is incorrect because CloudFront is an edge service not deployed within a VPC. While CloudFront geo restriction can work when using a CDN, this option as stated is invalid and unnecessary for enforcing geo controls at the ALB.
Cameron’s Exam Tip
For country-based allow or block at the application layer, prefer AWS WAF geo match on the ALB. Security groups and NACLs lack geo awareness, and Route 53 geolocation affects DNS answers, not access control. CloudFront geo restriction applies only at the CDN edge.
Question 56
In Elastic Beanstalk an installation takes over 45 minutes but instances must be ready in under 60 seconds, and the environment has static components that are identical across instances while dynamic assets are unique to each instance. Which combination of actions will satisfy these constraints? (Choose 2)
-
✓ B. Run dynamic setup in EC2 user data at first boot
-
✓ E. Prebake static components into a custom AMI
The fastest path to sub-minute readiness is to shift heavy, static installation work out of instance boot and leave only minimal per-instance configuration. Prebake static components into a custom AMI so the bulk of software and assets are already present when the instance launches, and Run dynamic setup in EC2 user data at first boot to generate only the small, instance-specific artifacts. This combination removes the long per-instance install and keeps boot-time work lightweight.
The option AWS CodeDeploy focuses on deployment orchestration and versioning but still performs installation on each instance, so it does not ensure readiness under 60 seconds.
Store installers in Amazon S3 only changes the source of artifacts; the lengthy install still occurs at boot.
Enable Elastic Beanstalk rolling updates controls how updates roll through the fleet but has no effect on the bootstrapping duration of each instance.
Cameron’s Exam Tip
When you see strict time-to-ready requirements, think prebuilt images plus minimal user data. Use a pipeline such as EC2 Image Builder or Packer to produce AMIs containing static dependencies, and keep user data idempotent and fast. Remember that artifact hosting or deployment tools do not eliminate per-instance installation time. Update strategies like rolling or blue/green affect availability, not boot duration.
Question 57
An oceanography institute runs an image-processing workflow on AWS. Field researchers upload raw photos for processing, which are staged on an Amazon EBS volume attached to an Amazon EC2 instance. Each night at 01:00 UTC, the job writes the processed images to an Amazon S3 bucket for archival. The architect has determined that the S3 uploads are traversing the public internet. The institute requires that all traffic from the EC2 instance to Amazon S3 stay on the AWS private network and not use the public internet. What should the architect do?
-
✓ B. Create a gateway VPC endpoint for Amazon S3 and add the S3 prefix list route to the instance subnet route table
The correct solution is to use a private path to S3 that avoids the internet. Create a gateway VPC endpoint for Amazon S3 and add the S3 prefix list route to the instance subnet route table ensures traffic stays on the AWS network and never traverses the public internet. This is the recommended, cost-effective approach for private S3 access from within a VPC.
Deploy a NAT gateway and update the private subnet route so the instance egresses through the NAT gateway to S3 is wrong because NAT gateways send traffic to public S3 endpoints over the internet, which does not satisfy the requirement to avoid the public internet.
Configure an S3 Access Point and have the application upload via the access point alias is insufficient because access points manage access and naming but do not inherently provide a private network path unless combined with a VPC endpoint.
Set up VPC peering to Amazon S3 and update routes to use the peering connection is incorrect since VPC peering only connects VPCs to other VPCs; it cannot connect a VPC directly to S3.
When you see a requirement to keep EC2-to-S3 traffic off the internet, think gateway VPC endpoint for S3. Avoid answers that involve NAT gateways, internet gateways, or VPC peering for this use case.
Question 58
In Amazon EKS how can you assign pod IP addresses from four specific private subnets spanning two Availability Zones while ensuring pods retain private connectivity to VPC resources?
-
✓ D. Amazon VPC CNI with custom pod networking
Amazon VPC CNI with custom pod networking is correct because it allows EKS pods to obtain IP addresses from specific private subnets by attaching secondary ENIs from those subnets to worker nodes. This preserves native VPC routing so pods privately communicate with databases and services in the same VPC without traversing NAT or public paths.
“Kubernetes network policies” is incorrect because network policies govern allowed traffic flows, not IP assignment or subnet selection.
“AWS PrivateLink” is incorrect since it exposes services via interface endpoints but does not influence how pods receive IPs or which subnets they use.
“Security groups for pods” is incorrect because it offers fine-grained security at the pod ENI level but does not control pod IP source subnets.
Cameron’s Exam Tip
When you see requirements like “pods must get IPs from specific private subnets” or “native, private VPC connectivity,” think of the VPC CNI and custom networking. Distinguish it from network policies (L3/4 policy), security groups for pods (filtering), and connectivity services like PrivateLink or Transit Gateway, which do not assign pod IPs.
Question 59
A nonprofit research lab runs several Amazon EC2 instances, a couple of Amazon RDS databases, and stores data in Amazon S3. After 18 months of operations, their monthly AWS spend is higher than expected for their workloads. Which approach would most appropriately reduce costs across their compute and storage environment?
-
✓ C. Use AWS Cost Optimization Hub with AWS Compute Optimizer to surface idle or underutilized resources and rightsize Amazon EC2 instance types
The correct answer is Use AWS Cost Optimization Hub with AWS Compute Optimizer to surface idle or underutilized resources and rightsize Amazon EC2 instance types. Cost Optimization Hub consolidates and prioritizes cost-saving recommendations, including idle resource cleanup and commitment opportunities across accounts and Regions. Pairing this with Compute Optimizer’s EC2 rightsizing guidance enables concrete actions that reduce compute spend without impacting performance.
Use Amazon S3 Storage Class Analysis to recommend transitions directly to S3 Glacier classes and create Lifecycle rules to move data automatically is incorrect because Storage Class Analysis helps you identify candidates for infrequent access tiers but does not generate recommendations for Glacier classes. Although Lifecycle rules can move data to Glacier, the claim that Storage Class Analysis recommends Glacier transitions is inaccurate.
Use AWS Trusted Advisor to auto-renew expiring Amazon EC2 Reserved Instances and to flag idle Amazon RDS databases is incorrect because there is no auto-renew capability for Reserved Instances. Trusted Advisor can flag expiring RIs and idle RDS databases, but you must manually purchase new commitments and take action to realize savings.
Use AWS Cost Explorer to automatically purchase Savings Plans based on the last 7 days of usage is incorrect because Cost Explorer cannot automatically buy Savings Plans. Additionally, using only a week of data is too short a window to size commitments reliably.
Cameron’s Exam Tip
Prioritize rightsizing before commitments. Use Compute Optimizer for instance rightsizing and Cost Optimization Hub or Cost Explorer for consolidated savings opportunities. Remember that RIs and Savings Plans are not auto-renewed or auto-purchased.
Question 60
Which approach will quickly create an isolated 25 TB test copy of EBS block data in the same Region that provides immediate high I/O performance and ensures changes do not affect production?
-
✓ C. EBS snapshots with Fast Snapshot Restore; create new test volumes
EBS snapshots with Fast Snapshot Restore; create new test volumes is correct because FSR pre-initializes snapshot data in selected Availability Zones, so volumes created from those snapshots deliver full, consistent I/O immediately. The new volumes are independent of production, ensuring test changes have no effect on prod, and the time-to-available is minimized since there is no need to pre-warm or copy data over the network.
The option Attach production volumes with EBS Multi-Attach to test instances is wrong because it shares the same volume between environments, failing the isolation requirement and risking data corruption unless using a cluster-aware file system.
The option AWS DataSync to copy data to new EBS volumes is wrong because it performs file-level network transfers, which are slower than snapshot-based provisioning and do not provide instant, fully-initialized block performance.
The option Create volumes from EBS snapshots without Fast Snapshot Restore is wrong because volumes start with lazy loading, causing higher latency until blocks are read or manually initialized, which does not meet the immediate high I/O requirement.
Cameron’s Exam Tip
If the question emphasizes immediate, full performance from a snapshot-based clone, look for Fast Snapshot Restore. If isolation is required, avoid approaches that share the same underlying volume like Multi-Attach. For rapid, in-Region cloning, prefer snapshot-based workflows over network copy tools like DataSync.
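A short boto3 sketch of the snapshot-based clone, with a hypothetical snapshot ID and Availability Zone, might look like this:

```python
# Short sketch: enable Fast Snapshot Restore, then carve an isolated test
# volume from the snapshot. Snapshot ID and AZ are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
snapshot_id = "snap-0123456789abcdef0"  # snapshot of the production volume

ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=[snapshot_id],
)

# Volumes created from the FSR-enabled snapshot are fully initialized, so the
# test copy delivers full I/O immediately and is independent of production.
ec2.create_volume(
    SnapshotId=snapshot_id,
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
    Iops=16000,
    Throughput=1000,
)
```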
Question 61
Scrumtuous Telecom is migrating analytics to AWS. It stores about 24 months of call center recordings, with roughly 9,000 new MP3 files added each day, and the team needs an automated serverless method to turn speech into text and run ad hoc SQL to gauge customer sentiment and trends. Which approach should the architect choose?
-
✓ B. Use Amazon Transcribe to generate text from the recordings stored in Amazon S3 and query the transcripts with Amazon Athena using SQL
The best fit is Use Amazon Transcribe to generate text from the recordings stored in Amazon S3 and query the transcripts with Amazon Athena using SQL. Amazon Transcribe provides managed automatic speech recognition at scale, and Athena enables serverless, ad hoc SQL over S3 without provisioning infrastructure, aligning with the need for flexible query-driven sentiment insights.
Use Amazon Kinesis Data Streams to ingest audio and Amazon Alexa to create transcripts, analyze with Amazon Kinesis Data Analytics, and visualize in Amazon QuickSight is unsuitable because Kinesis Data Streams does not pull audio files at rest and Alexa is not offered as a general-purpose ASR service for batch call analytics.
Use Amazon Kinesis Data Streams to read audio files and build custom machine learning models to transcribe and perform sentiment scoring adds unnecessary complexity by building custom ASR, and Kinesis Data Streams is designed for streaming ingestion, not for directly processing stored audio files.
Use Amazon Transcribe to create text files and rely on Amazon QuickSight to perform SQL analysis and reporting is incorrect because QuickSight is a visualization tool rather than a SQL query service; Athena is the appropriate choice for ad hoc SQL on S3.
Cameron’s Exam Tip
When the requirement emphasizes ad hoc SQL on data in S3, think Amazon Athena. When you must convert speech to text at scale, think Amazon Transcribe. QuickSight is for dashboards, and Kinesis Data Streams is for streaming ingestion rather than reading files at rest.
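A rough sketch of the two managed steps appears below; the buckets, Glue database, and the transcripts table schema (including a sentiment column assumed to be produced by a downstream step such as Amazon Comprehend) are all hypothetical.

```python
# Rough sketch: batch-transcribe one recording, then run an ad hoc SQL query
# over the transcript table with Athena. All names are hypothetical.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")
transcribe.start_transcription_job(
    TranscriptionJobName="call-2024-06-01-0001",
    Media={"MediaFileUri": "s3://call-recordings/2024/06/01/0001.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
    OutputBucketName="call-transcripts",  # JSON transcripts land here for Athena
)

athena = boto3.client("athena", region_name="us-east-1")
athena.start_query_execution(
    QueryString="""
        SELECT agent_id, count(*) AS negative_calls
        FROM transcripts
        WHERE sentiment = 'NEGATIVE'
          AND call_date > date_add('day', -30, current_date)
        GROUP BY agent_id
        ORDER BY negative_calls DESC
    """,
    QueryExecutionContext={"Database": "call_analytics"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
```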
Question 62
An EC2 instance in a VPC has a security group permitting TCP port 9443 and the subnet network ACL permits inbound traffic on port 9443, but clients still cannot connect. What change to the subnet NACL will enable connectivity?
-
✓ B. Allow outbound ephemeral ports on the subnet NACL; keep inbound 9443
The correct action is Allow outbound ephemeral ports on the subnet NACL; keep inbound 9443. Network ACLs are stateless, so they do not track connection state. Even though inbound TCP 9443 is allowed, the return traffic from the instance to clients will use the clients’ ephemeral ports as the destination, which must be explicitly allowed on the outbound direction of the NACL. Security groups already permit the return path automatically, but NACLs require both directions to be explicitly allowed.
The option Allow outbound 9443 on the subnet NACL is incorrect because return traffic does not use destination port 9443; it targets the client’s ephemeral port range.
The option Add outbound ephemeral ports in the security group is incorrect because security groups are stateful and automatically allow response traffic, so additional outbound ephemeral rules are not needed.
The option Open inbound ephemeral ports on the subnet NACL is incorrect since inbound traffic to the server is destined for 9443; inbound ephemeral ports are not part of this server-side flow.
Cameron’s Exam Tip
Remember that security groups are stateful and network ACLs are stateless. For return traffic from the server to clients, ensure the NACL allows outbound traffic to the ephemeral port range, the opposite direction of the initial request. When troubleshooting, verify both SG and NACL rules along the inbound and outbound paths.
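The missing rule could be added roughly as follows with boto3, using a hypothetical NACL ID; the inbound 9443 rule is assumed to already exist.

```python
# Minimal sketch: outbound ephemeral-port rule on the subnet NACL so return
# traffic to clients is allowed. NACL ID and rule number are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=120,
    Protocol="6",                 # TCP
    RuleAction="allow",
    Egress=True,                  # outbound direction of the subnet NACL
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},  # clients' ephemeral ports
)
```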
Question 63
Aurora Metrics, a retail analytics startup, needs resilient administrator access into private subnets within a VPC. The team wants a bastion host pattern that remains available across multiple Availability Zones and scales automatically as engineers connect over SSH. Which architecture should be implemented to meet these requirements?
-
✓ C. Place a public Network Load Balancer in two Availability Zones targeting bastion Amazon EC2 instances in an Auto Scaling group
The correct design is Place a public Network Load Balancer in two Availability Zones targeting bastion Amazon EC2 instances in an Auto Scaling group. An NLB operates at Layer 4, cleanly passes TCP traffic like SSH, supports static IPs per AZ, and integrates with Auto Scaling to replace unhealthy hosts and scale across multiple Availability Zones.
AWS Client VPN is a valid alternative for remote access, but it eliminates the need for bastion hosts rather than implementing them, so it does not satisfy the explicit requirement for a bastion solution.
Create a public Application Load Balancer that links to Amazon EC2 instances that are bastion hosts managed by an Auto Scaling group is not appropriate because ALB is Layer 7 and only supports HTTP and HTTPS, not SSH.
Allocate a single Elastic IP and attach it to all bastion instances in an Auto Scaling group cannot work since an EIP is one-to-one with a single instance and does not provide automatic failover or scaling.
Cameron’s Exam Tip
For SSH bastion patterns, choose an NLB because SSH is TCP/Layer 4. An ALB is for HTTP/HTTPS, and an Elastic IP is one-to-one and not an HA solution. Highly available bastions should use an Auto Scaling group across multiple AZs.
Question 64
What method allows an EC2 instance to run an initialization script only on its first boot while requiring minimal management?
-
✓ C. EC2 user data script at launch
The correct choice is EC2 user data script at launch. EC2 user data processed by cloud-init (Linux) or EC2Launch/EC2Config (Windows) executes automatically on the first boot, which satisfies the one-time initialization requirement with minimal configuration and no extra services to manage. You simply provide a shell script or cloud-init directive in user data, and it runs once when the instance first starts.
The option Customize cloud-init to limit execution to first boot is unnecessary because the default user data behavior already runs at first boot. Adding custom cloud-init tweaks increases complexity without additional benefit for this requirement.
The option AWS Systems Manager Run Command can run commands remotely but requires the SSM Agent, IAM permissions, targeting/orchestration, and does not inherently enforce a single execution at first boot, contradicting the minimal-management goal.
The option AWS CodeDeploy is a deployment service suited for application lifecycle hooks, not for a simple first-boot-only initialization. It adds setup overhead (agents, appspec, deployment groups) and is overkill for this scenario.
Cameron’s Exam Tip
When you see first boot, run once, and minimal management, think of EC2 user data. Provide a small script with a proper shebang (for example, #!/bin/bash) or use cloud-init directives. If idempotency is needed across reboots, handle it in your script with a flag file or consider more advanced orchestration, but the simplest first-boot solution is user data.
Question 65
A media analytics startup runs a microservices application on Amazon EKS with Amazon EC2 worker nodes. One group of Pods hosts an operations console that reads and writes tracking items in Amazon DynamoDB, and another group runs a reporting component that archives large output files to Amazon S3. The security team mandates that the console Pods can interact only with DynamoDB and the reporting Pods can interact only with S3, with access controlled through AWS Identity and Access Management. How should the team enforce these Pod-level permissions?
-
✓ C. Create two IAM roles with least-privilege policies for DynamoDB and S3 and bind them to separate Kubernetes service accounts via IRSA so console Pods assume the DynamoDB role and reporting Pods assume the S3 role
The best approach is Create two IAM roles with least-privilege policies for DynamoDB and S3 and bind them to separate Kubernetes service accounts via IRSA so console Pods assume the DynamoDB role and reporting Pods assume the S3 role. IRSA lets Pods assume IAM roles through their service accounts, providing fine-grained, Pod-level permissions without exposing long-lived credentials.
Attach S3 and DynamoDB permissions to the EC2 node instance profile and use Kubernetes namespaces to limit which Pods can call each service is incorrect because all Pods on those nodes inherit both permissions and namespaces do not restrict IAM access to AWS APIs.
Use Kubernetes RBAC to control Pod access to AWS services and put long‑lived IAM credentials in ConfigMaps consumed by the Pods is incorrect since Kubernetes RBAC does not control AWS IAM, and storing static credentials in ConfigMaps is insecure and against best practices.
Store IAM user access keys for S3 and DynamoDB in AWS Secrets Manager and mount them into the respective Pods is incorrect because relying on long-term IAM user keys in Pods lacks least privilege and rotation at the Pod level; IRSA provides short-lived, scoped credentials.
Cameron’s Exam Tip
For EKS, Pod-level AWS permissions use IRSA with dedicated service accounts and IAM roles; avoid node instance profiles, static credentials, and trying to use Kubernetes RBAC for AWS IAM.
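As a rough sketch of one of the two roles, the boto3 snippet below creates the IAM role that only the console Pods' service account can assume through the cluster's OIDC provider. The account ID, OIDC issuer, namespace, service account name, and the managed policy standing in for the least-privilege DynamoDB policy are all hypothetical.

```python
# Rough sketch: IRSA trust policy scoped to one Kubernetes service account.
# Account ID, OIDC issuer, namespace, and names are hypothetical.
import json

import boto3

ACCOUNT_ID = "111122223333"
OIDC = "oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": f"arn:aws:iam::{ACCOUNT_ID}:oidc-provider/{OIDC}"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {"StringEquals": {
            f"{OIDC}:sub": "system:serviceaccount:ops:console-sa",
            f"{OIDC}:aud": "sts.amazonaws.com",
        }},
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="console-dynamodb-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="console-dynamodb-role",
    # A managed policy stands in here; a tighter custom policy scoped to the
    # CustomerProfiles-style table would be used in practice.
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess",
)
# The console Pods' service account then carries the annotation
# eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/console-dynamodb-role
```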