AWS Solutions Architect Practice Test on All Exam Topics
The AWS Certified Solutions Architect - Associate certification exam (exam code SAA-C03) validates your ability to design secure, resilient, high-performing, and cost-optimized architectures on AWS using the AWS Well-Architected Framework.
It confirms that you can select appropriate services for current requirements while planning for growth, and that you can review existing solutions and recommend improvements. The target audience for this AWS Certification exam typically has at least one year of hands-on experience designing solutions that use AWS services.
Exam basics
This exam includes multiple-choice and multiple-response questions. Your result appears as a scaled score between 100 and 1000, and the minimum passing score is 720.
The scoring model is compensatory, which means you pass based on your overall performance rather than on each section individually.
The exam contains 65 questions in total: 50 scored questions and 15 unscored items that AWS uses to evaluate future content.
If you plan to continue to the advanced track, you can compare this exam to the AWS Professional-level exams and related certifications such as the AWS Machine Learning Specialty and the AWS AI Practitioner.
Content domains and weights
If you completed the fundamentals in the AWS Cloud Practitioner exam topics, you will recognize the foundation, but this exam goes deeper into architecture. The SAA-C03 exam is organized into four domains: Design Secure Architectures accounts for 30 percent, Design Resilient Architectures for 26 percent, Design High-Performing Architectures for 24 percent, and Design Cost-Optimized Architectures for 20 percent. Security patterns align with concepts covered in AWS Security, and many design choices appear across the AWS Developer, AWS DevOps, and AWS Data Engineer learning paths.
AWS Certification Sample Questions
More AWS exam questions can be found in my Solutions Architect Udemy course and the certificationexams.pro certification site.
A global wildlife nonprofit plans to publish frequent updates about field projects as static web pages. The content is stored in an Amazon S3 bucket and the site is expected to receive heavy worldwide traffic. What is the most performant and cost-effective way for a solutions architect to deliver this content to users?
-
❏ A. Enable Amazon S3 Transfer Acceleration on the bucket
-
❏ B. Set up Amazon Route 53 geoproximity routing
-
❏ C. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
-
❏ D. Enable S3 cross-Region replication to buckets in multiple AWS Regions
A solutions team at Northwind Logistics needs a repeatable method to provision about 30 new member accounts in one AWS Organization. After creation, each account must be placed in the Engineering-Shared OU and initialized with a VPC that has public and private subnets across two Availability Zones. Which approach provides the most reliable automation for this task?
-
❏ A. Use the AWS CLI
-
❏ B. Automate with AWS CloudFormation templates and orchestration scripts
-
❏ C. AWS CloudFormation StackSets
-
❏ D. Use the AWS Organizations API
An environmental research consortium completed a deep-ice radar campaign and used AWS Snowball Edge Storage Optimized devices to capture and ship about 240 TB of raw waveform data. They operate a high performance computing cluster on AWS to analyze very large datasets and generate subsurface models. After the Snowball Edge devices are returned and the data is ingested into AWS, the workloads require sub-millisecond latency, extremely high throughput, and highly parallel POSIX file access across all compute nodes. Which solution should they implement to meet these needs?
-
❏ A. Import the data into Amazon S3 and create an Amazon FSx for Lustre file system linked to the S3 bucket; mount the FSx file system on the HPC nodes
-
❏ B. Set up Amazon FSx for NetApp ONTAP and configure a volume to sync with the S3 bucket that holds the dataset; mount the volume on the cluster
-
❏ C. Create an Amazon FSx for Lustre file system and ingest roughly 240 TB directly into the Lustre file system; mount it across the HPC nodes for parallel, low-latency access
-
❏ D. Copy the data to Amazon S3, transfer it into Amazon Elastic File System, and mount the EFS file system on all compute nodes
A robotics startup is moving its computational fluid dynamics workload and about 25 TB of datasets from its private HPC cluster to AWS. On premises it uses two tiers: during twice-daily job windows the compute nodes require a shared, high-throughput parallel file system, and between runs the data sits in a low-cost cold tier for up to 90 days. Which combination of AWS services should the solutions architect recommend to satisfy these storage requirements? (Choose 2)
-
❏ A. Amazon EBS io2 Block Express for the parallel tier
-
❏ B. Amazon FSx for Lustre for the high-performance shared file system
-
❏ C. Amazon S3 for cold data retention
-
❏ D. Amazon EFS for the cold data tier
-
❏ E. Amazon FSx for Windows File Server for the parallel tier
A digital ticketing startup named SeatWave runs its purchase service on Amazon EC2 instances in an Auto Scaling group, and it buffers incoming orders in an Amazon Simple Queue Service (Amazon SQS) queue for asynchronous processing by the same fleet; during flash promotions, sudden spikes in queued orders cause API latency and slower processing, so what scaling strategy should a solutions architect recommend to rapidly match instance capacity to the volatile SQS load?
-
❏ A. Use a step scaling policy that adjusts from a custom Amazon SQS queue metric
-
❏ B. Use a scheduled scaling policy keyed to a custom Amazon SQS queue metric
-
❏ C. Use a target tracking scaling policy that follows a backlog-per-instance custom Amazon SQS metric
-
❏ D. Use a simple scaling policy that triggers from a custom Amazon SQS queue metric
After a cost overrun alert from AWS Budgets, the finance director at Aurora Media Labs wants an AWS-native solution to quickly reduce spending across compute. The engineering team prefers automated rightsizing guidance for EC2, EBS, and Lambda based on the last 45 days of utilization rather than reports or charts alone. Which service best meets this need?
-
❏ A. AWS Trusted Advisor
-
❏ B. AWS Cost and Usage Reports
-
❏ C. AWS Compute Optimizer
-
❏ D. AWS Cost Explorer
A mid-size logistics startup operates an on-premises MySQL database and plans to move it to AWS. The team needs a fully managed database service that provides high availability with automatic failover across at least two Availability Zones while keeping operational effort low. Which approach best satisfies these requirements?
-
❏ A. Use AWS Application Migration Service to rehost the MySQL server to Amazon EC2 and place instances in multiple AZs
-
❏ B. Use AWS Database Migration Service to migrate to an Amazon RDS for MySQL Single-AZ instance and run AWS Schema Conversion Tool to convert the schema
-
❏ C. Use AWS Database Migration Service to migrate directly into an Amazon RDS for MySQL Multi-AZ deployment
-
❏ D. Export a database snapshot from on premises, transfer it to Amazon S3 with AWS DataSync, and restore an Amazon RDS for MySQL Multi-AZ instance from the snapshot
A telehealth provider stores patient telemetry in Amazon S3 buckets deployed across a pair of AWS Regions. Users upload data from multiple continents. The engineering team wants writes from remote clients to be automatically directed to the closest bucket based on real-time network performance while avoiding congestion on the public internet. They also need seamless regional failover with the least ongoing management of S3. What solution should they implement?
-
❏ A. Build an active-active pattern using regional S3 endpoints and have the client choose the closest Region
-
❏ B. Enable S3 Transfer Acceleration on a single bucket and send all uploads to the acceleration endpoint
-
❏ C. Use Amazon S3 Multi-Region Access Points in active-active with one global endpoint and configure S3 Replication between the buckets
-
❏ D. Adopt an active-passive setup behind S3 Multi-Region Access Points and create separate global endpoints per Region
FinEdge Payments, a fintech provider, must comply with an audit that mandates running production workloads on single-tenant hardware within their VPC. What is the most cost-efficient way to ensure their Amazon EC2 instances are isolated to a single tenant?
-
❏ A. Dedicated Hosts
-
❏ B. Spot Instances
-
❏ C. Dedicated Instances
-
❏ D. On-Demand Instances
A startup plans to expose a public API with Amazon API Gateway, and the application must persist key-value records in a backend data store. The initial dataset is about 2 GB with unknown future growth, and traffic can spike from zero to more than 1,600 requests per second. Which AWS services together provide a scalable and cost-effective backend for this workload? (Choose 2)
-
❏ A. Amazon RDS
-
❏ B. AWS Lambda
-
❏ C. Amazon ElastiCache
-
❏ D. Amazon DynamoDB
-
❏ E. Amazon EC2 Auto Scaling
A sports streaming company stores raw match footage in Amazon S3 from production crews across North America. After opening bureaus in South America and Australia, overseas teams report very slow uploads of tens of gigabytes of video to a central S3 bucket. Which approaches are the most cost-effective to speed up these uploads? (Choose 2)
-
❏ A. Provision AWS Direct Connect links from the overseas offices to AWS and send uploads over them
-
❏ B. Enable S3 Transfer Acceleration on the destination bucket
-
❏ C. Configure AWS Site-to-Site VPN tunnels from overseas offices to a VPC and route uploads through the VPN
-
❏ D. Upload objects using multipart upload with parallel part transfers
-
❏ E. Use AWS Global Accelerator to front the S3 bucket for faster uploads
A fintech startup offers a customer-facing API that lets users adjust card limits, freeze or unfreeze cards, and manage subscriptions. Traffic surges up to 20 times normal during the first and last 48 hours of each month and during 72 hour promotional events. The team must keep response latency consistently low while keeping operations work minimal. Which approach should they choose to meet these goals most efficiently?
-
❏ A. Build the API with Amazon API Gateway and AWS Fargate tasks
-
❏ B. Deploy the API on AWS Elastic Beanstalk with Auto Scaling groups
-
❏ C. Use Amazon API Gateway with AWS Lambda functions configured with provisioned concurrency
-
❏ D. Use Amazon API Gateway with AWS Lambda functions configured with reserved concurrency
A robotics startup runs a tightly coupled computational fluid dynamics workload on an EC2 cluster. Every node keeps a full replica of the dataset, and the application requires roughly 150,000 random read and write IOPS. Which storage choice will meet the performance goal at the lowest cost given that data is already replicated across nodes?
-
❏ A. Amazon S3 with byte-range fetch
-
❏ B. Amazon FSx for Lustre
-
❏ C. Amazon Instance Store
-
❏ D. Amazon EBS io2 Block Express
A digital media analytics firm that operates 22 accounts within one AWS Organization completed a security review and suspects that some IAM roles and Amazon S3 buckets are unintentionally exposed to the public internet or shared with unknown external accounts. The security team must locate any overly permissive resources and verify that only intended principals from the organization or designated AWS account IDs can access them. They require a tool that examines both identity-based and resource-based policies to uncover unintended access paths to services including S3 buckets, IAM roles, KMS keys, and SNS topics. Which solution should they choose?
-
❏ A. AWS Config
-
❏ B. Amazon Inspector
-
❏ C. IAM Access Analyzer
-
❏ D. IAM Access Advisor
A healthtech startup operates many microservices in a colocation facility where they exchange events through a self-managed broker that uses MQTT. The team plans to migrate both the services and the messaging layer to AWS without rewriting the producers or consumers. Which AWS service provides a fully managed message broker that natively supports MQTT so they can keep their existing clients?
-
❏ A. Amazon Kinesis Data Streams
-
❏ B. Amazon MQ
-
❏ C. Amazon Simple Notification Service (Amazon SNS)
-
❏ D. Amazon Simple Queue Service (Amazon SQS)
A media startup named RiverStream runs a production Amazon RDS for PostgreSQL instance configured for Multi-AZ to maximize uptime. The engineering team wants to know what will occur if the primary DB instance in this Multi-AZ pair fails unexpectedly. As the solutions architect, what should you explain happens during this event?
-
❏ A. A Route 53 health check updates the database hostname to the standby node
-
❏ B. RDS automatically remaps the DB instance endpoint CNAME to the standby in another Availability Zone
-
❏ C. The client connection string changes to a different URL after failover
-
❏ D. An operator must approve an email before failover or the application remains down until the original primary is restored
AstraFleet Logistics is rolling out a latency-sensitive API to customers in several AWS Regions across three continents. Partners require that exactly two static IP addresses be allow-listed for outbound access. The design must steer users to the nearest healthy Regional endpoint to minimize latency and support rapid Regional failover. Which approach best satisfies these requirements?
-
❏ A. Place EC2 instances behind Application Load Balancers in multiple Regions and configure Amazon Route 53 failover routing
-
❏ B. Run EC2 instances behind Network Load Balancers in multiple Regions and use Amazon Route 53 latency-based routing
-
❏ C. Deploy EC2 Auto Scaling behind Network Load Balancers in multiple Regions and front them with AWS Global Accelerator
-
❏ D. Use Amazon CloudFront in front of Regional Application Load Balancers and advertise two static IPs to clients
A regional biotech startup needs to keep research files that are seldom accessed but must be instantly available when requested. The files have to be shared concurrently by roughly 350 Amazon EC2 instances across multiple Availability Zones, and the team wants the most cost-efficient managed file storage that still offers immediate access and POSIX semantics. Which choice best meets these requirements?
-
❏ A. Amazon S3 Standard-IA
-
❏ B. Amazon EFS Standard storage class
-
❏ C. Amazon EFS Standard-IA storage class
-
❏ D. Amazon EBS
A streaming media startup runs multiple microservices on Amazon EC2, fronted by an Amazon API Gateway REST API. During promotional events, traffic can surge to 3,000 requests per second for roughly 15 minutes, and the services are not built to scale elastically in real time. What should the architect implement to absorb bursts and prevent the services from being overloaded?
-
❏ A. Distribute the microservices across three Availability Zones and use AWS Backup to schedule regular EBS volume snapshots
-
❏ B. Introduce an Amazon SQS queue to accept requests from API Gateway, and have each microservice poll and process messages at its own pace
-
❏ C. Place an Application Load Balancer in front of the microservices and track Amazon CloudWatch metrics for traffic and latency
-
❏ D. Configure Amazon API Gateway usage plans with throttling and quotas to limit request rates to the backend
A multinational media group runs a hybrid network. Its main AWS workloads are in the ap-southeast-2 Region and connect to the headquarters data center through AWS Direct Connect. After acquiring a gaming studio in Canada, the company needs to integrate the studio’s environments, which span multiple VPCs in the ca-central-1 Region and connect to the studio’s on-premises site using a separate Direct Connect circuit. All CIDR ranges are unique, and the business requires each on-premises site to reach every VPC in both Regions. The company wants a scalable design that minimizes manual routing and long-term operational overhead. Which solution best meets these requirements?
-
❏ A. Build cross-Region VPC peering between all VPCs in ap-southeast-2 and ca-central-1 and manage static routes in each VPC route table to enable communication
-
❏ B. Launch EC2 VPN appliances in every VPC and create a full mesh of IPsec tunnels between all VPCs and both on-premises sites using CloudHub-style routing
-
❏ C. Terminate both Direct Connect circuits on a single AWS Direct Connect gateway and associate the virtual private gateways for all VPCs in both Regions to that gateway to provide reachability between each data center and all VPCs
-
❏ D. Create private virtual interfaces in each Region and target VPCs in the other Region using BGP and route entries, and use VPC endpoints to forward inter-Region traffic
More AWS exam questions can be found in my Solutions Architect Udemy course and the certificationexams.pro certification site.
Arcadia Studios, a digital media startup, is launching a microservices API on Amazon Elastic Kubernetes Service. Traffic can surge to four times the normal load during prime-time activity and typically falls back within about 15 minutes. The team wants the cluster to scale out and in automatically to closely follow these bursts with minimal ongoing management. Which actions should they take to achieve this? (Choose 2)
-
❏ A. Deploy the Kubernetes Metrics Server and configure Horizontal Pod Autoscaler based on CPU or memory utilization
-
❏ B. Set up Amazon EC2 Auto Scaling scheduled actions on the EKS node group to add and remove instances at fixed times
-
❏ C. Run the Kubernetes Cluster Autoscaler to right-size the number of nodes as pending or underutilized capacity changes
-
❏ D. Integrate Amazon SQS with the microservices to absorb spikes and use the queue to manage scaling
-
❏ E. Enable the Kubernetes Vertical Pod Autoscaler to automatically raise or lower pod resource requests and limits
NorthPeak Media is launching a content platform on Amazon EC2 instances in an Auto Scaling group distributed across three Availability Zones. The application requires a shared file system that all instances can mount concurrently with strong consistency because files are updated frequently. Which approach will achieve this with the least operational effort?
-
❏ A. Amazon S3 with Amazon CloudFront
-
❏ B. Amazon FSx for Lustre
-
❏ C. Amazon Elastic File System (Amazon EFS)
-
❏ D. A multi-attach Amazon EBS volume shared by the instances
A global fintech startup operates application servers in a VPC in the ap-southeast-2 Region and hosts its database tier in a separate VPC in the eu-west-1 Region. The application instances in ap-southeast-2 must establish secure connectivity to the databases in eu-west-1. What is the most appropriate network design to accomplish this?
-
❏ A. Create a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC, update the relevant route tables, and add an inbound rule in the eu-west-1 database security group that references the security group ID of the application servers in ap-southeast-2
-
❏ B. Set up AWS Transit Gateway in each Region with an inter-Region peering attachment, configure routing, and add an inbound rule in the eu-west-1 database security group that references the application server security group in ap-southeast-2
-
❏ C. Configure VPC peering between the ap-southeast-2 and eu-west-1 VPCs, add the required routes, and create an inbound rule in the eu-west-1 database security group that allows traffic from the ap-southeast-2 application server IP addresses
-
❏ D. Establish a VPC peering connection between the ap-southeast-2 and eu-west-1 VPCs, modify route tables as needed, and add an inbound rule in the ap-southeast-2 application security group that permits traffic from the eu-west-1 database server IP addresses
A genomics analytics company runs an Amazon Elastic Kubernetes Service cluster for model training and data preprocessing. The platform team needs specific Kubernetes service accounts in selected namespaces to have least privilege access to only certain AWS resources, such as a training Amazon S3 bucket and an Amazon DynamoDB table. The approach must use IAM roles for service accounts to enforce these permissions. Which combination of steps should the team implement? (Choose 2)
-
❏ A. Attach a permissions policy to the EKS worker node instance role so all pods on those nodes can reach the required AWS resources
-
❏ B. Establish a trust policy between the IAM role for each service account and the EKS cluster OIDC identity provider
-
❏ C. Enable Kubernetes Pod Security Policies to prevent pods from accessing unauthorized AWS services
-
❏ D. Grant the necessary permissions on the worker node instance profile and map every service account in the cluster to a single shared IAM role
-
❏ E. Create a dedicated IAM role with only the required permissions and annotate the target Kubernetes service accounts with that role ARN
A solutions architect is designing a file submission system for a vocational institute. Submissions will be stored in an Amazon S3 bucket. The team needs to avoid unintended object deletions while keeping every prior revision accessible. Contributors must be able to upload new files and update existing ones. Which actions together will satisfy these requirements? (Choose 2)
-
❏ A. Turn on SSE-S3 default encryption for the bucket
-
❏ B. Activate object versioning for the bucket
-
❏ C. AWS Backup for Amazon S3
-
❏ D. Require MFA Delete for object deletions
-
❏ E. Restrict the bucket to read-only access
A biotech analytics startup is designing a pipeline to process massive sequencing workloads. Source reads are stored in an Amazon S3 bucket, dozens of Amazon EC2 instances exchange large intermediate datasets totaling several hundred gigabytes per run, and the final results are written to a separate S3 bucket. The team needs to cut network transfer expenses while keeping processing performance high. What should the solutions architect recommend?
-
❏ A. Enable Amazon S3 Transfer Acceleration on the buckets
-
❏ B. Launch all compute instances in the same Availability Zone to avoid cross-AZ transfer fees
-
❏ C. Run the Auto Scaling group across multiple AWS Regions to distribute processing
-
❏ D. Use Amazon Elastic Fabric Adapter on the EC2 instances
A digital publishing startup runs a fleet of 18 Amazon EC2 instances in private IPv4 subnets inside a VPC. The application performs heavy read and write operations against Amazon S3 buckets in the same Region, and all outbound internet traffic currently flows through a NAT gateway in a public subnet. The company wants to cut monthly costs without affecting access to S3 or general internet connectivity from the private subnets. What should a solutions architect do?
-
❏ A. Create an interface VPC endpoint for Amazon S3 and configure DNS so the instances use the PrivateLink endpoint
-
❏ B. Add a route from the private subnets to an internet gateway so the instances can reach S3 directly
-
❏ C. Create a VPC gateway endpoint for Amazon S3 and update the private route tables to send S3 traffic to the endpoint
-
❏ D. Replace the NAT gateway with a right-sized NAT instance to lower hourly costs
A competitive gaming startup, Zephyr Arcade, plans to launch a global leaderboard powered by a proprietary scoring formula that recalculates top players from fresh match metrics every few seconds. The platform must scale elastically, return personalized leaderboard slices with single-digit millisecond latency, and withstand spikes from more than eight million concurrent users. Which options justify choosing Amazon ElastiCache for this design? (Choose 2)
-
❏ A. Use Amazon ElastiCache to execute complex relational JOINs across normalized score tables
-
❏ B. Use Amazon ElastiCache to cache outputs of expensive ranking computations to reduce repeated processing
-
❏ C. Use Amazon ElastiCache to optimize sustained write-heavy ingestion of raw match events
-
❏ D. Use Amazon ElastiCache to accelerate batch ETL pipelines for the analytics lake
-
❏ E. Use Amazon ElastiCache to cache frequently read leaderboard results and deliver very low-latency responses at scale
During a compliance audit at a digital media startup, the security team discovered that several Amazon RDS instances are not encrypted at rest. You must bring these databases into compliance without changing database engines or application code. What should you do to add encryption to the existing Amazon RDS databases?
-
❏ A. Create an RDS Blue/Green deployment, enable encryption on the green environment, switch over, and delete the original
-
❏ B. Create a DB snapshot, copy it with encryption enabled, restore a new DB from the encrypted copy, and retire the old instance
-
❏ C. Modify the existing RDS instance in the console to turn on encryption at rest
-
❏ D. Create a read replica, encrypt the replica, promote it to standalone, and decommission the original
A digital animation studio is preparing to move its production environment to AWS, needing at least 12 TB of storage with the highest possible I/O for large render and transcode scratch files, around 480 TB of highly durable storage for active media assets, and approximately 960 TB for long-term archival of legacy projects; which combination of AWS services best satisfies these needs?
-
❏ A. Amazon EBS io2 volumes for highest throughput, Amazon S3 for durable storage, and Amazon S3 Glacier for archival storage
-
❏ B. Amazon S3 Standard for compute scratch, Amazon S3 Intelligent-Tiering for durable content, and Amazon S3 Glacier Deep Archive for long-term retention
-
❏ C. Amazon EC2 instance store for peak I/O on scratch data, Amazon S3 for durable media storage, and Amazon S3 Glacier for archival retention
-
❏ D. AWS Storage Gateway for durable storage, Amazon EC2 instance store for processing performance, and Amazon S3 Glacier Deep Archive for archival
A travel technology startup is moving its core booking platform from a colocation facility to AWS to boost read scalability and increase availability. The current stack runs on Microsoft SQL Server and experiences very high read traffic. Each morning at 07:45 UTC, the team clones the production database to refresh a development environment, and users see elevated latency during this operation. The team is willing to switch database engines and wants a solution that scales reads and provides a low-impact way to create the daily dev copy. What should the solutions architect recommend?
-
❏ A. Use Amazon RDS for MySQL with Multi-AZ and point the dev environment to the standby instance
-
❏ B. Use Amazon Aurora MySQL with Aurora Replicas in multiple AZs and rebuild the dev database each day using mysqldump from the primary
-
❏ C. Use Amazon Aurora MySQL with Aurora Replicas across multiple AZs and provision the dev database by restoring from Aurora automated backups
-
❏ D. Use Amazon RDS for SQL Server with Multi-AZ and read replicas and direct the dev environment to a read replica
An e-commerce analytics firm needs to run nightly end-of-day reconciliation jobs across a fleet of Amazon EC2 instances while keeping spend as low as possible. The workloads are packaged as containers, are stateless, and can be relaunched if they stop unexpectedly. The team wants a solution that reduces both compute cost and ongoing management effort. What should a solutions architect recommend?
-
❏ A. Use On-Demand Instances in an Amazon Elastic Kubernetes Service managed node group
-
❏ B. Launch Spot Instances in an Amazon EC2 Auto Scaling group to host the containerized jobs
-
❏ C. Use Spot Instances in an Amazon Elastic Kubernetes Service managed node group
-
❏ D. Launch On-Demand Instances in an Amazon EC2 Auto Scaling group to run the containers
A mobile gaming studio runs a player profile service backed by an Amazon DynamoDB table named PlayerProfiles. Occasionally, a faulty release inserts malformed items that overwrite valid records. When the issue is noticed, the team needs to quickly revert the table to the moment just before those bad writes so the corrupted items are removed. What is the most appropriate approach?
-
❏ A. Configure the table as a global table and shift traffic to a Region replica that has not seen the bad writes
-
❏ B. Restore the table using DynamoDB point-in-time recovery to a timestamp just before the corrupt items were written
-
❏ C. Use DynamoDB Streams to replay changes and rebuild the table to its prior state
-
❏ D. Create an on-demand backup and restore from it when corruption is detected
Northwind Health, a regional healthcare provider, stores compliance documents in dozens of Amazon S3 buckets across three AWS accounts. The compliance team must automatically discover any sensitive data that may be present in S3 and continuously detect and alert on suspicious or malicious activity against S3 objects across all buckets. Which AWS service combination best satisfies these requirements?
-
❏ A. Use Amazon Macie to identify sensitive data and to detect threats in S3
-
❏ B. Use AWS Security Hub to monitor threats and AWS CloudTrail to automatically find sensitive data in S3
-
❏ C. Enable Amazon Macie for sensitive data discovery in S3, and use Amazon GuardDuty with S3 protection to detect malicious activity
-
❏ D. Rely on Amazon GuardDuty to discover sensitive data and to monitor S3 for malicious actions
A news aggregation site for Aurora Media runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The public DNS zone is hosted in Amazon Route 53 and the root record currently aliases to the ALB. The team wants users to see a static site-unavailable page whenever the application is down while keeping operational effort to a minimum. What configuration should be implemented to achieve this?
-
❏ A. Configure a Route 53 active-active setup using the ALB and a single EC2 instance serving a static error page as separate endpoints
-
❏ B. Use a Route 53 weighted policy that includes an S3 static website for the error page with weight 0 and increase the weight during incidents
-
❏ C. Create an Amazon CloudFront distribution with the ALB as the origin and define custom error responses; point the domain to the distribution with a Route 53 alias
-
❏ D. Set up Route 53 failover with the ALB as primary and an S3 static website hosting the error page as secondary
AWS Certification Sample Questions Answered
More AWS exam questions can be found in my Solutions Architect Udemy course and the certificationexams.pro certification site.
A global wildlife nonprofit plans to publish frequent updates about field projects as static web pages. The content is stored in an Amazon S3 bucket and the site is expected to receive heavy worldwide traffic. What is the most performant and cost-effective way for a solutions architect to deliver this content to users?
-
✓ C. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
Configure an Amazon CloudFront distribution with the S3 bucket as the origin is the correct choice because it delivers cached static pages from edge locations around the world which improves performance for global users.
Configure an Amazon CloudFront distribution with the S3 bucket as the origin caches the site content at nearby edge locations so users experience lower latency and faster load times. It also reduces the number of requests and data egress from S3 which lowers costs when traffic is heavy and it scales automatically to handle worldwide demand without creating and managing multiple regional buckets.
Enable Amazon S3 Transfer Acceleration on the bucket is not suitable because it speeds uploads into S3 from remote clients but it does not provide global edge caching or accelerate content delivery to end users. It therefore does not address the requirement to serve frequent static updates to a global audience.
Set up Amazon Route 53 geoproximity routing only changes DNS routing to endpoints and it does not create a content distribution layer or cache S3 objects at edge locations. DNS routing alone will not reduce latency the way a CDN does and it will not offload read traffic from the S3 origin.
Enable S3 cross-Region replication to buckets in multiple AWS Regions copies objects across regions which increases storage and replication costs and adds operational complexity. Replication does not provide automatic edge caching or the broad global delivery optimization that a CDN such as CloudFront provides.
For high read volume global static sites remember to put CloudFront in front of S3 to get edge caching and lower egress costs when compared with replication or DNS routing alone.
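As a rough sketch of what this looks like in practice, the boto3 call below creates a distribution with an S3 origin. The bucket name is a placeholder, and the cache policy ID shown is assumed to be the managed CachingOptimized policy commonly cited in AWS documentation.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Minimal distribution with an S3 origin; bucket name is a placeholder
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # idempotency token
        "Comment": "Static wildlife site",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-origin",
                "DomainName": "wildlife-site.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Managed CachingOptimized cache policy ID (assumed from AWS docs)
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
```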
A solutions team at Northwind Logistics needs a repeatable method to provision about 30 new member accounts in one AWS Organization. After creation, each account must be placed in the Engineering-Shared OU and initialized with a VPC that has public and private subnets across two Availability Zones. Which approach provides the most reliable automation for this task?
-
✓ B. Automate with AWS CloudFormation templates and orchestration scripts
The correct option is Automate with AWS CloudFormation templates and orchestration scripts. This approach lets the team programmatically create about thirty accounts, place each account in the Engineering-Shared OU, and bootstrap a consistent VPC that includes public and private subnets across two Availability Zones.
Orchestration scripts can call AWS Organizations to vend accounts and move them into the target OU and then trigger CloudFormation templates inside each new account to create the VPC and subnets. Using CloudFormation provides declarative, idempotent templates that are versionable and testable so the network layout is repeatable across all accounts and easier to maintain than imperative scripting.
Use the AWS CLI can perform the individual steps but relying on imperative CLI scripts is harder to maintain and validate for idempotency when you scale to dozens of accounts. Scripts tend to be more fragile and require more custom error handling than infrastructure as code.
AWS CloudFormation StackSets is excellent for deploying consistent stacks into many existing accounts and OUs but it does not create new accounts or move accounts into OUs. StackSets therefore does not satisfy the full requirement to both create accounts and bootstrap them.
Use the AWS Organizations API can create and relocate accounts into the desired OU but it does not provision the networking resources inside each account. You still need a declarative tool such as CloudFormation or another IaC solution to provision VPCs and subnets as part of the bootstrap.
Practice automating an end to end flow that calls the Organizations CreateAccount API and then deploys bootstrapping CloudFormation stacks so you can verify OU placement and idempotent network provisioning.
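A hedged boto3 sketch of that end-to-end flow appears below. The account email, root and OU IDs, and the template URL are all hypothetical placeholders, and production code would add error handling and retries.

```python
import time
import boto3

org = boto3.client("organizations")

# Request a new member account; the asynchronous call returns a request ID to poll
request_id = org.create_account(
    Email="eng-shared-01@example.com", AccountName="eng-shared-01"
)["CreateAccountStatus"]["Id"]

while True:
    status = org.describe_create_account_status(
        CreateAccountRequestId=request_id
    )["CreateAccountStatus"]
    if status["State"] != "IN_PROGRESS":
        break
    time.sleep(15)

account_id = status["AccountId"]

# Move the account from the root into the Engineering-Shared OU (IDs are placeholders)
org.move_account(
    AccountId=account_id, SourceParentId="r-abcd", DestinationParentId="ou-abcd-1example"
)

# Assume the default cross-account role and bootstrap the VPC with CloudFormation
creds = boto3.client("sts").assume_role(
    RoleArn=f"arn:aws:iam::{account_id}:role/OrganizationAccountAccessRole",
    RoleSessionName="bootstrap",
)["Credentials"]
cfn = boto3.client(
    "cloudformation",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
cfn.create_stack(
    StackName="baseline-vpc",
    TemplateURL="https://example-bucket.s3.amazonaws.com/two-az-vpc.yaml",  # placeholder
)
```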
An environmental research consortium completed a deep-ice radar campaign and used AWS Snowball Edge Storage Optimized devices to capture and ship about 240 TB of raw waveform data. They operate a high performance computing cluster on AWS to analyze very large datasets and generate subsurface models. After the Snowball Edge devices are returned and the data is ingested into AWS, the workloads require sub-millisecond latency, extremely high throughput, and highly parallel POSIX file access across all compute nodes. Which solution should they implement to meet these needs?
-
✓ C. Create an Amazon FSx for Lustre file system and ingest roughly 240 TB directly into the Lustre file system; mount it across the HPC nodes for parallel, low-latency access
Create an Amazon FSx for Lustre file system and ingest roughly 240 TB directly into the Lustre file system; mount it across the HPC nodes for parallel, low-latency access is the correct choice for these requirements. This option provides the sub-millisecond latency and massively parallel POSIX file access that the HPC cluster needs to analyze the waveform data at high throughput.
FSx for Lustre is purpose built for high performance computing workloads and delivers very high aggregate throughput and low latency across many clients. Ingesting the 240 TB dataset directly into the file system avoids on-first-access fetch penalties and ensures consistent performance for tightly coupled jobs that need parallel I/O and fast metadata operations.
Import the data into Amazon S3 and create an Amazon FSx for Lustre file system linked to the S3 bucket; mount the FSx file system on the HPC nodes can be useful for S3-backed workflows, but the data repository association can cause on demand retrieval or first access delays for files. Those delays add avoidable latency for workloads that require immediate sub-millisecond response and consistent throughput.
Set up Amazon FSx for NetApp ONTAP and configure a volume to sync with the S3 bucket that holds the dataset; mount the volume on the cluster is oriented toward enterprise NAS features and data management. It generally does not match the extreme parallel throughput and very low latency that Lustre provides for tightly coupled HPC applications.
Copy the data to Amazon S3, transfer it into Amazon Elastic File System, and mount the EFS file system on all compute nodes uses a general purpose NFS interface and is unlikely to meet the sub-millisecond latency and the massive parallel IOPS and throughput required by this analysis workload.
When an exam scenario calls for sub-millisecond latency and massively parallel POSIX access, think FSx for Lustre and prefer preloading data into the file system rather than relying on on-demand S3 retrieval.
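For context, a minimal boto3 sketch of provisioning a persistent Lustre file system follows. The subnet and security group IDs are placeholders, and the capacity shown is an assumed round-up to a valid 2.4 TiB increment that comfortably holds the 240 TB dataset.

```python
import boto3

fsx = boto3.client("fsx")

# Persistent Lustre file system sized above 240 TB (IDs are placeholders)
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=252000,  # GiB, specified in valid 2400 GiB increments
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 250,  # MB/s per TiB of storage
    },
)
```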
A robotics startup is moving its computational fluid dynamics workload and about 25 TB of datasets from its private HPC cluster to AWS. On premises it uses two tiers: during twice-daily job windows the compute nodes require a shared, high-throughput parallel file system, and between runs the data sits in a low-cost cold tier for up to 90 days. Which combination of AWS services should the solutions architect recommend to satisfy these storage requirements? (Choose 2)
-
✓ B. Amazon FSx for Lustre for the high-performance shared file system
-
✓ C. Amazon S3 for cold data retention
The correct choices are Amazon FSx for Lustre for the high-performance shared file system and Amazon S3 for cold data retention.
Amazon FSx for Lustre for the high-performance shared file system provides the low latency and very high throughput needed for parallel I/O across many compute nodes, and it exposes a POSIX namespace so jobs can access files concurrently. Amazon FSx for Lustre for the high-performance shared file system also integrates directly with Amazon S3 for cold data retention so you can import datasets for runs and export results back to S3 for durable, low cost storage between runs.
Amazon EBS io2 Block Express for the parallel tier is block storage attached to individual instances and even with Multi Attach it does not act as a scalable, POSIX parallel file system for many HPC nodes, so it is unsuitable for the shared parallel tier.
Amazon EFS for the cold data tier is a managed NFS service intended for general purpose shared file storage and it is typically more expensive for large volumes of infrequently accessed data than object storage, so it is not the most economical cold tier.
Amazon FSx for Windows File Server for the parallel tier uses the SMB protocol and targets Windows workloads and it is not optimized for the POSIX parallel I/O patterns common in HPC clusters, so it is not the right fit for the high performance parallel tier.
For HPC workloads think FSx for Lustre for the hot, shared POSIX file system and use S3 to store datasets and results long term.
A digital ticketing startup named SeatWave runs its purchase service on Amazon EC2 instances in an Auto Scaling group, and it buffers incoming orders in an Amazon Simple Queue Service (Amazon SQS) queue for asynchronous processing by the same fleet; during flash promotions, sudden spikes in queued orders cause API latency and slower processing, so what scaling strategy should a solutions architect recommend to rapidly match instance capacity to the volatile SQS load?
-
✓ C. Use a target tracking scaling policy that follows a backlog-per-instance custom Amazon SQS metric
Use a target tracking scaling policy that follows a backlog-per-instance custom Amazon SQS metric is correct because it lets the Auto Scaling group continuously adjust capacity to hold a desired backlog per instance and so quickly match capacity to sudden SQS spikes during flash promotions.
Use a target tracking scaling policy that follows a backlog-per-instance custom Amazon SQS metric works by defining a target backlog per instance derived from SQS ApproximateNumberOfMessagesVisible divided by the number of in service instances. The Auto Scaling group then increases or decreases instances automatically to move the actual backlog toward that target which provides fast and stable scaling for bursty, asynchronous workloads.
Use a simple scaling policy that triggers from a custom Amazon SQS queue metric is weaker because simple policies rely on single alarm actions and cooldown periods and that delay can make response to rapid bursts too slow and cause underprovisioning.
Use a step scaling policy that adjusts from a custom Amazon SQS queue metric can react to varying queue sizes but it depends on discrete alarm thresholds and step sizes and that often causes overshoot or slower convergence compared with continuous target tracking.
Use a scheduled scaling policy keyed to a custom Amazon SQS queue metric is inappropriate for unpredictable flash traffic because scheduled actions assume known patterns and they cannot react in real time to sudden spikes.
When EC2 workers drain SQS and traffic is bursty use target tracking with a backlog per instance metric calculated as SQS ApproximateNumberOfMessagesVisible divided by the number of in service instances.
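A boto3 sketch of both halves of that pattern is shown below: publishing the backlog-per-instance number as a custom CloudWatch metric and attaching a target tracking policy to it. The queue URL, group name, namespace, and the target value of 10 are all illustrative assumptions.

```python
import boto3

ASG_NAME = "seatwave-workers"  # hypothetical Auto Scaling group name
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Backlog per instance = visible messages / in-service instances
visible = int(sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL,
    AttributeNames=["ApproximateNumberOfMessagesVisible"],
)["Attributes"]["ApproximateNumberOfMessagesVisible"])

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
in_service = sum(
    1 for i in group["Instances"] if i["LifecycleState"] == "InService"
) or 1

# Publish the custom metric (run this on a schedule, for example every minute)
cloudwatch.put_metric_data(
    Namespace="Custom/SQS",
    MetricData=[{
        "MetricName": "BacklogPerInstance",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
        "Value": visible / in_service,
    }],
)

# Target tracking policy that holds the backlog near 10 messages per instance
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="sqs-backlog-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",
            "Namespace": "Custom/SQS",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
            "Statistic": "Average",
        },
        "TargetValue": 10.0,
    },
)
```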
After a cost overrun alert from AWS Budgets, the finance director at Aurora Media Labs wants an AWS-native solution to quickly reduce spending across compute. The engineering team prefers automated rightsizing guidance for EC2, EBS, and Lambda based on the last 45 days of utilization rather than reports or charts alone. Which service best meets this need?
-
✓ C. AWS Compute Optimizer
The correct option is AWS Compute Optimizer because it analyzes recent utilization and delivers automated rightsizing recommendations for compute services.
AWS Compute Optimizer inspects historical utilization metrics and produces actionable recommendations for EC2 instances, EBS volumes, and Lambda functions so teams can apply sizing and configuration changes quickly to reduce spend without relying only on charts or manual analysis.
AWS Trusted Advisor provides broad cost optimization checks and best practice guidance but it is not focused on per resource rightsizing for EC2, EBS, and Lambda so it is less suitable for the targeted automated compute savings described.
AWS Cost and Usage Reports supply raw, highly detailed billing and usage data for downstream analysis and reporting but they do not generate optimization recommendations on their own and therefore require extra tooling to act on rightsizing opportunities.
AWS Cost Explorer helps visualize and trend costs to understand where spend is occurring but it does not recommend specific compute configuration changes to reduce spend and so it will not directly automate the rightsizing the engineering team prefers.
When a question asks for automated compute rightsizing based on recent utilization choose the service that analyzes resource metrics and provides direct recommendations rather than tools that only visualize or export cost data.
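As a quick illustration, Compute Optimizer recommendations can also be pulled programmatically with boto3. The snippet below lists EC2 findings; analogous calls exist for EBS volumes and Lambda functions.

```python
import boto3

co = boto3.client("compute-optimizer")

# List rightsizing findings for EC2 instances in the account
recs = co.get_ec2_instance_recommendations()["instanceRecommendations"]
for rec in recs:
    top = rec["recommendationOptions"][0]  # highest-ranked option
    print(rec["currentInstanceType"], rec["finding"], "->", top["instanceType"])
```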
A mid-size logistics startup operates an on-premises MySQL database and plans to move it to AWS. The team needs a fully managed database service that provides high availability with automatic failover across at least two Availability Zones while keeping operational effort low. Which approach best satisfies these requirements?
-
✓ C. Use AWS Database Migration Service to migrate directly into an Amazon RDS for MySQL Multi-AZ deployment
Use AWS Database Migration Service to migrate directly into an Amazon RDS for MySQL Multi-AZ deployment is correct because it delivers a fully managed MySQL database with high availability and automatic failover across multiple Availability Zones while enabling online migration from an on premises MySQL with minimal downtime.
Amazon RDS for MySQL Multi-AZ provides automated backups, patching, and synchronous standby replicas that enable automatic failover and reduce operational overhead. AWS Database Migration Service supports continuous replication and a cutover process that keeps downtime low during the migration, so the combination meets the requirement for a managed, highly available service with minimal operational effort.
Use AWS Application Migration Service to rehost the MySQL server to Amazon EC2 and place instances in multiple AZs is incorrect because rehosting to EC2 does not give you an RDS style managed Multi-AZ deployment and you would need to design, run, and maintain clustering and failover yourself.
Use AWS Database Migration Service to migrate to an Amazon RDS for MySQL Single-AZ instance and run AWS Schema Conversion Tool to convert the schema is incorrect because a Single-AZ RDS instance does not meet the high availability and automatic failover requirement and the Schema Conversion Tool is primarily used for heterogeneous engine migrations rather than MySQL to MySQL.
Export a database snapshot from on premises, transfer it to Amazon S3 with AWS DataSync, and restore an Amazon RDS for MySQL Multi-AZ instance from the snapshot is incorrect because RDS cannot be restored directly from an arbitrary on premises snapshot file, and even where a backup import from S3 is supported it is a one time copy that requires downtime and provides no continuous replication, so it is not a low effort automated migration path into RDS Multi-AZ.
When you need managed cross Availability Zone failover and low operational effort remember that RDS Multi-AZ handles automatic failover and AWS DMS can perform near zero downtime migrations.
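The migration target itself is a one-call provision. The boto3 sketch below creates the Multi-AZ instance that DMS would then load; the identifier, instance class, and storage size are placeholder choices.

```python
import boto3

rds = boto3.client("rds")

# Provision the Multi-AZ target for the DMS migration (identifiers are placeholders)
rds.create_db_instance(
    DBInstanceIdentifier="orders-mysql",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=200,
    MasterUsername="admin",
    ManageMasterUserPassword=True,  # let RDS store the password in Secrets Manager
    MultiAZ=True,  # synchronous standby in a second AZ with automatic failover
)
```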
A telehealth provider stores patient telemetry in Amazon S3 buckets deployed across a pair of AWS Regions. Users upload data from multiple continents. The engineering team wants writes from remote clients to be automatically directed to the closest bucket based on real-time network performance while avoiding congestion on the public internet. They also need seamless regional failover with the least ongoing management of S3. What solution should they implement?
-
✓ C. Use Amazon S3 Multi-Region Access Points in active-active with one global endpoint and configure S3 Replication between the buckets
The correct choice is Use Amazon S3 Multi-Region Access Points in active-active with one global endpoint and configure S3 Replication between the buckets. This option provides a single global endpoint that automatically directs writes to the closest healthy Region while keeping data synchronized across Regions.
Use Amazon S3 Multi-Region Access Points in active-active with one global endpoint and configure S3 Replication between the buckets is appropriate because Multi-Region Access Points use AWS global network telemetry to route requests to the best Region and avoid public internet congestion, and replication between the buckets keeps data consistent so failover is seamless and requires little ongoing operational management.
Build an active-active pattern using regional S3 endpoints and have the client choose the closest Region is less suitable because it forces you to implement and maintain client or control plane routing and failover logic, and it does not leverage AWS network-aware routing out of the box.
Enable S3 Transfer Acceleration on a single bucket and send all uploads to the acceleration endpoint is not correct because Transfer Acceleration speeds uploads to a single bucket but does not provide automatic proximity-based Region selection or cross-Region failover, and it centralizes writes rather than distributing them.
Adopt an active-passive setup behind S3 Multi-Region Access Points and create separate global endpoints per Region is incorrect because Multi-Region Access Points are intended to provide a single global endpoint for active-active access, and an active-passive design with multiple global endpoints adds unnecessary complexity and slower recovery.
Remember that S3 Multi-Region Access Points give a single global endpoint and use the AWS global network to pick the best Region, so choose them when you need proximity routing and automatic cross-Region failover.
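For reference, a Multi-Region Access Point can be created with a single s3control call, as in the hedged sketch below. The account ID and bucket names are placeholders, the control-plane request is assumed to route through us-west-2, and the call is asynchronous so the returned token is what you poll for completion.

```python
import boto3

# Multi-Region Access Point control-plane calls are routed through us-west-2
s3control = boto3.client("s3control", region_name="us-west-2")

# One global endpoint over buckets in two Regions (names are placeholders)
resp = s3control.create_multi_region_access_point(
    AccountId="123456789012",
    Details={
        "Name": "telemetry-global",
        "Regions": [
            {"Bucket": "telemetry-us-east-1"},
            {"Bucket": "telemetry-eu-west-1"},
        ],
    },
)
# Creation is asynchronous; poll describe_multi_region_access_point_operation
print(resp["RequestTokenARN"])
```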
FinEdge Payments, a fintech provider, must comply with an audit that mandates running production workloads on single-tenant hardware within their VPC. What is the most cost-efficient way to ensure their Amazon EC2 instances are isolated to a single tenant?
-
✓ C. Dedicated Instances
Dedicated Instances are the correct choice because they place EC2 instances on hardware dedicated to a single AWS account and they meet the audit requirement for single tenant hardware while remaining more cost efficient than host level allocation.
Dedicated Instances run EC2 workloads on physical servers that are dedicated to your account, which satisfies the single-tenant mandate. They do not require the socket and core level visibility or the placement controls that Dedicated Hosts provide, so they are simpler to manage and typically cheaper when you only need isolation.
Dedicated Hosts allocate entire physical servers to you and they give you host level visibility and control which helps with bring your own license requirements and affinity but they usually cost more and are unnecessary if you only need single tenant isolation.
On-Demand Instances describe a billing model where you pay for compute by the hour or second with no long term commitment and they do not guarantee that instances run on single tenant hardware so they cannot meet the audit requirement for isolation.
Spot Instances provide discounted spare capacity that can be reclaimed by AWS and they do not provide single tenant hardware guarantees so they are not suitable for compliance that requires dedicated servers.
When an audit requires single tenant hardware and you want the lowest operational cost choose Dedicated Instances unless you must control physical sockets or use BYOL in which case consider Dedicated Hosts.
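In code, single-tenant placement is just a tenancy flag at launch, as in this minimal boto3 sketch where the AMI and subnet IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an instance on single-tenant hardware (AMI and subnet IDs are placeholders)
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    Placement={"Tenancy": "dedicated"},  # Dedicated Instance, not a Dedicated Host
)
```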
A startup plans to expose a public API with Amazon API Gateway, and the application must persist key-value records in a backend data store. The initial dataset is about 2 GB with unknown future growth, and traffic can spike from zero to more than 1,600 requests per second. Which AWS services together provide a scalable and cost-effective backend for this workload? (Choose 2)
-
✓ B. AWS Lambda
-
✓ D. Amazon DynamoDB
The correct options are AWS Lambda and Amazon DynamoDB. These two services together provide a serverless, highly scalable, and cost effective backend for a public API that needs durable key value persistence and the ability to handle sudden spikes to over 1,600 requests per second.
AWS Lambda provides per request billing and elastic concurrency which lets the compute scale from zero to meet traffic bursts without managing servers. Lambda integrates with API Gateway so the API can remain public while the backend only runs when requests arrive.
Amazon DynamoDB is a fully managed key value store that supports on demand capacity so you do not provision throughput ahead of time and the service can absorb sudden traffic spikes. DynamoDB also scales storage automatically and offers single digit millisecond latency which suits a 2 GB initial dataset with unknown growth.
Amazon RDS is a relational database that requires instance management and capacity planning and it is not optimal for simple key value access with highly variable traffic.
Amazon ElastiCache is an in memory cache that accelerates reads but it is not a durable system of record and so it cannot replace a persistent key value store.
Amazon EC2 Auto Scaling can scale server instances but it usually incurs baseline costs and increases operational complexity compared with a serverless approach that scales to zero.
Choose services that provide on demand capacity and per request billing for spiky traffic so you minimize cost and operational overhead under unpredictable load.
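A minimal boto3 sketch of the data-store half follows; the table and attribute names are placeholders, and on-demand mode is selected with the billing-mode flag.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Key-value table with on-demand capacity so it absorbs spikes without provisioning
dynamodb.create_table(
    TableName="records",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # no throughput planning; pay per request
)
```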
More AWS exam questions can be found in my Solutions Architect Udemy course and the certificationexams.pro certification site.
A sports streaming company stores raw match footage in Amazon S3 from production crews across North America. After opening bureaus in South America and Australia, overseas teams report very slow uploads of tens of gigabytes of video to a central S3 bucket. Which approaches are the most cost-effective to speed up these uploads? (Choose 2)
-
✓ B. Enable S3 Transfer Acceleration on the destination bucket
-
✓ D. Upload objects using multipart upload with parallel part transfers
The most cost effective choices are Enable S3 Transfer Acceleration on the destination bucket and Upload objects using multipart upload with parallel part transfers. These two approaches speed long distance uploads without requiring heavy network changes or long provisioning times.
Enable S3 Transfer Acceleration on the destination bucket uses Amazon CloudFront edge locations to receive data closer to the upload source and then forwards it over optimized AWS network paths to the target bucket. This reduces latency and can increase throughput for long haul transfers from overseas bureaus.
Upload objects using multipart upload with parallel part transfers breaks large files into parts and uploads parts in parallel which increases throughput and reduces the impact of single part failures because you can retry individual parts rather than the whole file. Multipart upload is simple to implement and is very cost effective for tens of gigabytes or larger objects.
Provision AWS Direct Connect links from the overseas offices to AWS and send uploads over them is not the best choice because Direct Connect requires physical circuits and can be expensive and slow to provision, so it is not cost effective for rapidly improving uploads from many distributed teams.
Configure AWS Site-to-Site VPN tunnels from overseas offices to a VPC and route uploads through the VPN is focused on VPC connectivity and adds IPsec overhead, and it does not provide the edge ingestion and optimized public paths that accelerate S3 uploads, so it will not reliably speed large transfers.
Use AWS Global Accelerator to front the S3 bucket for faster uploads is incorrect because Global Accelerator does not support S3 as an endpoint and it is designed for accelerating traffic to ALB, NLB, and EC2, so it cannot be used to accelerate direct S3 uploads.
When you see long distance uploads think first about edge ingestion with S3 Transfer Acceleration and parallelism with multipart upload because they are quick to enable and cost effective compared with private circuits.
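Both techniques take only a few lines with boto3, as the hedged sketch below shows. The bucket name, object key, part size, and concurrency are illustrative choices.

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# One-time: enable Transfer Acceleration on the destination bucket (placeholder name)
s3.put_bucket_accelerate_configuration(
    Bucket="match-footage",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client that sends requests through the accelerate endpoint
accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# 64 MiB parts uploaded 10 at a time; a failed part retries individually
cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)
accel.upload_file("match-2024-10-03.mp4", "match-footage",
                  "raw/match-2024-10-03.mp4", Config=cfg)
```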
A fintech startup offers a customer-facing API that lets users adjust card limits, freeze or unfreeze cards, and manage subscriptions. Traffic surges up to 20 times normal during the first and last 48 hours of each month and during 72 hour promotional events. The team must keep response latency consistently low while keeping operations work minimal. Which approach should they choose to meet these goals most efficiently?
-
✓ C. Use Amazon API Gateway with AWS Lambda functions configured with provisioned concurrency
Use Amazon API Gateway with AWS Lambda functions configured with provisioned concurrency is the correct choice because it delivers consistently low latency while keeping operational work minimal for highly spiky traffic.
With Use Amazon API Gateway with AWS Lambda functions configured with provisioned concurrency, execution environments are pre-initialized so cold starts are eliminated and p95 and p99 latencies remain stable during sudden bursts. The serverless stack removes server and OS maintenance and scales automatically with API Gateway and Lambda. You can also adjust provisioned concurrency and use Application Auto Scaling to cover predictable peak windows, which keeps operations effort low.
Build the API with Amazon API Gateway and AWS Fargate tasks can scale but container task startup and image lifecycle management add latency variability and increase operational overhead compared with pre warmed functions.
Deploy the API on AWS Elastic Beanstalk with Auto Scaling groups can handle load but EC2 instance provisioning and application warm up can hurt tail latency and require more platform maintenance than a serverless approach.
Use Amazon API Gateway with AWS Lambda functions configured with reserved concurrency does provide capacity isolation but it does not initialize execution environments ahead of time so cold starts can still cause latency spikes under spiky workloads.
For spiky and latency sensitive APIs use provisioned concurrency to remove cold starts and monitor usage so you scale provisioned capacity only for peak windows.
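As a rough sketch, provisioned concurrency can be applied with boto3 against an alias; the function name, alias, and capacity below are hypothetical:

```python
import boto3

lam = boto3.client("lambda")

# Provisioned concurrency targets a published version or alias, never $LATEST.
resp = lam.put_provisioned_concurrency_config(
    FunctionName="card-limits-api",      # hypothetical function
    Qualifier="live",                    # alias pointing at a published version
    ProvisionedConcurrentExecutions=50,  # pre-initialized environments
)
print(resp["Status"])  # IN_PROGRESS until the environments are ready
```

For the predictable month-end windows, the same value can be raised and lowered on a schedule with Application Auto Scaling so you pay for pre-warmed capacity only when you need it.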
A robotics startup runs a tightly coupled computational fluid dynamics workload on an EC2 cluster. Every node keeps a full replica of the dataset, and the application requires roughly 150,000 random read and write IOPS. Which storage choice will meet the performance goal at the lowest cost given that data is already replicated across nodes?
-
✓ C. Amazon Instance Store
Amazon Instance Store is the correct choice because it provides the highest local IOPS at the lowest cost when each EC2 node already holds a full replica of the dataset.
Amazon Instance Store uses NVMe attached local storage to avoid network overhead and to deliver very low latency and high random read and write IOPS. The storage is included in the instance price which makes it cost effective for per node replicated HPC workloads and it can meet the 150,000 random IOPS requirement when you select instances with adequate instance store capacity.
Amazon EBS io2 Block Express can deliver extremely high IOPS but it is network attached and requires provisioning and billing for IOPS and capacity which usually makes it more expensive for per node replicas. EBS does provide durable volumes that persist across instance stops so choose it when persistence or reattachment is required.
Amazon S3 with byte-range fetch is object storage and it does not provide block level semantics or predictable per volume IOPS which makes it unsuitable for a low latency, high IOPS block workload.
Amazon FSx for Lustre is a high throughput shared filesystem designed for HPC but it introduces extra network latency and cost which are unnecessary when every node already maintains a full replica and requires maximum local IOPS.
When nodes already hold full replicas and the workload needs maximum per node IOPS pick instance store for best latency and cost unless you require data durability across instance stops.
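If you want to see which instance types actually carry local instance store volumes and how much capacity each offers, a small boto3 query along these lines can help; the filter name is the documented EC2 filter:

```python
import boto3

ec2 = boto3.client("ec2")

# List instance types that include local instance store volumes and
# print the total local capacity each one offers.
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate(
    Filters=[{"Name": "instance-storage-supported", "Values": ["true"]}]
):
    for it in page["InstanceTypes"]:
        info = it.get("InstanceStorageInfo", {})
        print(it["InstanceType"], info.get("TotalSizeInGB"), "GB")
```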
A digital media analytics firm that operates 22 accounts within one AWS Organization completed a security review and suspects that some IAM roles and Amazon S3 buckets are unintentionally exposed to the public internet or shared with unknown external accounts. The security team must locate any overly permissive resources and verify that only intended principals from the organization or designated AWS account IDs can access them. They require a tool that examines both identity-based and resource-based policies to uncover unintended access paths to services including S3 buckets, IAM roles, KMS keys, and SNS topics. Which solution should they choose?
-
✓ C. IAM Access Analyzer
IAM Access Analyzer is the correct choice because it inspects both identity-based and resource-based policies to find resources that are shared publicly or with external accounts, and it covers services such as S3 buckets, IAM roles, KMS keys, and SNS topics.
IAM Access Analyzer uses automated reasoning to evaluate policy semantics and it generates findings that identify which principals outside the organization or outside specified account IDs can access a resource. This capability makes it suitable for a multi account AWS Organization that needs to locate overly permissive resources and verify that only intended principals have access.
AWS Config records configuration changes and supports rules for compliance and remediation but it does not perform policy semantics analysis to infer whether policies grant access to external principals or the public internet.
Amazon Inspector focuses on vulnerability and exposure assessments for compute resources such as EC2 instances containers and Lambda and it does not analyze IAM or resource based policies for unintended sharing.
IAM Access Advisor provides service last accessed data to help right size permissions but it does not analyze resource based policies or detect cross account or public access paths.
When the question asks to detect unintended external access across both resource based and identity based policies choose IAM Access Analyzer and do not confuse it with AWS Config or IAM Access Advisor.
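As an illustrative sketch, an organization-scoped analyzer can be created and queried with boto3; the analyzer name is hypothetical and the call must run from the management or delegated administrator account:

```python
import boto3

aa = boto3.client("accessanalyzer")

# Create an analyzer whose zone of trust is the whole organization.
analyzer = aa.create_analyzer(
    analyzerName="org-external-access",
    type="ORGANIZATION",
)

# Active findings identify resources reachable from outside the organization.
findings = aa.list_findings(
    analyzerArn=analyzer["arn"],
    filter={"status": {"eq": ["ACTIVE"]}},
)
for f in findings["findings"]:
    print(f["resourceType"], f["resource"], f.get("principal"))
```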
A healthtech startup operates many microservices in a colocation facility where they exchange events through a self-managed broker that uses MQTT. The team plans to migrate both the services and the messaging layer to AWS without rewriting the producers or consumers. Which AWS service provides a fully managed message broker that natively supports MQTT so they can keep their existing clients?
-
✓ B. Amazon MQ
Amazon MQ is the correct choice because it provides a fully managed message broker that natively supports MQTT so existing MQTT producers and consumers can connect without code changes.
Amazon MQ runs managed broker engines such as Apache ActiveMQ and exposes MQTT endpoints while also supporting other protocols such as AMQP, STOMP, and OpenWire for JMS clients. It offloads broker maintenance and high availability tasks so the team can migrate the messaging layer to AWS without rewriting clients.
Amazon Kinesis Data Streams is built for high throughput event streaming and shard based ingestion and it does not act as an MQTT broker nor provide MQTT endpoints so it cannot accept MQTT clients without a custom bridge.
Amazon Simple Notification Service (Amazon SNS) is a pub sub notification service that uses its own APIs and delivery protocols and it does not provide native MQTT broker endpoints so existing MQTT clients cannot connect directly.
Amazon Simple Queue Service (Amazon SQS) is a message queuing service that provides polling based queue semantics and it does not implement broker semantics or MQTT endpoints so it will not accept MQTT clients natively.
When you must preserve standard protocols such as MQTT with minimal code change choose a managed broker that exposes the same protocol like Amazon MQ. If you plan to redesign for cloud native messaging consider SNS, SQS, or Kinesis.
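To show how little the clients change, here is a minimal publisher sketch using the common paho-mqtt library with its 1.x constructor; the broker endpoint, credentials, and topic are placeholders:

```python
import paho.mqtt.client as mqtt

# Placeholder Amazon MQ for ActiveMQ endpoint; ActiveMQ brokers expose
# MQTT over TLS on port 8883.
BROKER = "b-1234abcd-1.mq.us-east-1.amazonaws.com"

client = mqtt.Client(client_id="vitals-service")  # paho-mqtt 1.x style
client.username_pw_set("mq_user", "mq_password")  # broker credentials
client.tls_set()                                  # Amazon MQ requires TLS

client.connect(BROKER, port=8883, keepalive=60)
client.loop_start()
info = client.publish("clinics/vitals", payload=b'{"hr": 72}', qos=1)
info.wait_for_publish()
client.loop_stop()
client.disconnect()
```

Only the hostname, port, and credentials change relative to the self-managed broker, which is exactly why Amazon MQ suits lift-and-shift migrations.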
A media startup named RiverStream runs a production Amazon RDS for PostgreSQL instance configured for Multi-AZ to maximize uptime. The engineering team wants to know what will occur if the primary DB instance in this Multi-AZ pair fails unexpectedly. As the solutions architect, what should you explain happens during this event?
-
✓ B. RDS automatically remaps the DB instance endpoint CNAME to the standby in another Availability Zone
RDS automatically remaps the DB instance endpoint CNAME to the standby in another Availability Zone is correct. When the primary fails the standby is promoted and the DB endpoint CNAME is updated to point to the new primary so applications continue to use the same connection endpoint.
This behavior is built into RDS Multi-AZ deployments and provides an automated failover path that preserves the existing endpoint. RDS manages the promotion and the DNS remap so client connection strings do not need to change and downtime is minimized while clients reconnect to the promoted standby.
A Route 53 health check updates the database hostname to the standby node is incorrect because RDS handles Multi-AZ failover through its managed DB endpoint DNS and it does not rely on Route 53 health checks to perform the promotion.
The client connection string changes to a different URL after failover is wrong because the DB endpoint remains the same and RDS updates the CNAME to point to the promoted standby so the connection string stays valid.
An operator must approve an email before failover or the application remains down until the original primary is restored is incorrect because failover in Multi-AZ is automated and does not require manual approval or waiting for the failed primary to be restored.
Remember that Multi-AZ provides automated failover to a standby using the same endpoint and that read replicas are for scaling reads and require manual promotion if you need a failover target.
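You can observe this behavior safely in a test environment by forcing a failover with boto3; the instance identifier is hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Reboot with failover to promote the standby in the other AZ.
rds.reboot_db_instance(
    DBInstanceIdentifier="riverstream-prod",
    ForceFailover=True,
)

# The endpoint address is unchanged; only the CNAME target behind it moves.
desc = rds.describe_db_instances(DBInstanceIdentifier="riverstream-prod")
print(desc["DBInstances"][0]["Endpoint"]["Address"])
```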
AstraFleet Logistics is rolling out a latency-sensitive API to customers in several AWS Regions across three continents. Partners require that exactly two static IP addresses be allow-listed for outbound access. The design must steer users to the nearest healthy Regional endpoint to minimize latency and support rapid Regional failover. Which approach best satisfies these requirements?
-
✓ C. Deploy EC2 Auto Scaling behind Network Load Balancers in multiple Regions and front them with AWS Global Accelerator
The best choice is Deploy EC2 Auto Scaling behind Network Load Balancers in multiple Regions and front them with AWS Global Accelerator.
Deploy EC2 Auto Scaling behind Network Load Balancers in multiple Regions and front them with AWS Global Accelerator gives you two anycast static IP addresses that clients can allow list and it uses the AWS global network to steer users to the nearest healthy regional endpoint for lower latency and for rapid failover. Network Load Balancers make reliable regional endpoints and preserve client source IPs while Auto Scaling provides capacity and availability in each Region.
Place EC2 instances behind Application Load Balancers in multiple Regions and configure Amazon Route 53 failover routing relies on DNS based failover and will send traffic to a primary until Route 53 detects a failure. That approach does not provide deterministic low latency routing and it does not supply exactly two static client IPs for allow listing.
Run EC2 instances behind Network Load Balancers in multiple Regions and use Amazon Route 53 latency-based routing can improve proximity but it still depends on DNS which can be slow to react due to client side caching. It also cannot present a fixed pair of global static IP addresses that clients can allow list.
Use Amazon CloudFront in front of Regional Application Load Balancers and advertise two static IPs to clients is not appropriate because CloudFront does not expose a dedicated pair of static client facing IPs and its edge IP ranges are large and subject to change. CloudFront is optimized for cached content delivery and it is not the correct mechanism to guarantee two static endpoints for a latency sensitive API.
When a design requires exactly two static client IPs and low latency global routing and fast regional failover prefer AWS Global Accelerator in front of regional NLB endpoints.
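A condensed boto3 sketch of the wiring, with placeholder names and a placeholder NLB ARN; note that the Global Accelerator API is served from us-west-2 regardless of where your endpoints live:

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="astrafleet-api", IpAddressType="IPV4",
                            Enabled=True)
arn = acc["Accelerator"]["AcceleratorArn"]

# The two static anycast addresses partners can allow-list:
print(acc["Accelerator"]["IpSets"][0]["IpAddresses"])

listener = ga.create_listener(
    AcceleratorArn=arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Repeat per Region, pointing at that Region's NLB (placeholder ARN).
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{"EndpointId":
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
        "loadbalancer/net/api/abc123"}],
)
```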
A regional biotech startup needs to keep research files that are seldom accessed but must be instantly available when requested. The files have to be shared concurrently by roughly 350 Amazon EC2 instances across multiple Availability Zones, and the team wants the most cost-efficient managed file storage that still offers immediate access and POSIX semantics. Which choice best meets these requirements?
-
✓ C. Amazon EFS Standard-IA storage class
The correct choice is Amazon EFS Standard-IA storage class.
Amazon EFS Standard-IA storage class is a managed, POSIX-compliant network file system class that can be mounted by hundreds to thousands of Amazon EC2 instances across multiple Availability Zones. It is optimized for infrequently accessed data while still allowing immediate mount and read access, and it offers lower storage cost with a per-access retrieval charge, which meets the startup's requirement for seldom accessed files that must be instantly available.
Amazon S3 Standard-IA is incorrect because it is object storage and not a mountable POSIX file system so it cannot provide file locking or native concurrent mounts for EC2 instances.
Amazon EFS Standard storage class is incorrect because it is intended for frequently accessed workloads and therefore results in higher storage cost for cold datasets compared with the Standard-IA class despite providing the required POSIX semantics.
Amazon EBS is incorrect because it is block storage that is typically attached to a single instance and even Multi Attach does not provide a scalable POSIX shared file system for hundreds of instances.
When a question asks for a shared POSIX file system for many EC2 instances and the data is infrequently accessed but must be immediately available choose EFS Standard-IA rather than S3 or EBS.
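In practice you usually land files in EFS Standard and let a lifecycle policy move cold files to Standard-IA; a short boto3 sketch with a placeholder file system ID:

```python
import boto3

efs = boto3.client("efs")

# Transition files untouched for 30 days to Infrequent Access, and pull
# them back to Standard automatically on their next access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```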
A streaming media startup runs multiple microservices on Amazon EC2, fronted by an Amazon API Gateway REST API. During promotional events, traffic can surge to 3,000 requests per second for roughly 15 minutes, and the services are not built to scale elastically in real time. What should the architect implement to absorb bursts and prevent the services from being overloaded?
-
✓ B. Introduce an Amazon SQS queue to accept requests from API Gateway, and have each microservice poll and process messages at its own pace
The correct choice is Introduce an Amazon SQS queue to accept requests from API Gateway, and have each microservice poll and process messages at its own pace. This option provides a durable buffer and lets the backend consume work at a rate it can handle so bursts do not overwhelm the services.
Decoupling the API from the microservices with SQS smooths demand during a 15 minute promotional spike. SQS stores requests durably and each microservice can poll and process messages asynchronously which prevents sudden overload and allows eventual processing of all requests.
Distribute the microservices across three Availability Zones and use AWS Backup to schedule regular EBS volume snapshots improves fault tolerance and data protection but it does not add elasticity or a buffering mechanism to absorb traffic spikes so it will not prevent overload during short bursts.
Place an Application Load Balancer in front of the microservices and track Amazon CloudWatch metrics for traffic and latency helps distribute traffic and provides observability but it does not queue requests or create back pressure so the backend can still be overwhelmed if it cannot scale quickly.
Configure Amazon API Gateway usage plans with throttling and quotas to limit request rates to the backend can protect downstream systems by limiting request rates but throttling will reject or delay client calls and does not ensure that all requests are eventually processed like a queue does.
When backends cannot scale instantly think decouple and buffer and use a message queue such as SQS to absorb bursts and ensure eventual processing.
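A minimal sketch of both sides of the buffer with boto3; the queue name, payload, and handler are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="promo-requests")["QueueUrl"]

# Producer side (invoked via API Gateway): enqueue and return immediately.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody='{"action": "redeem", "user": "u-42"}')

def handle(body: str) -> None:
    """Stand-in for the real business logic."""
    print("processing", body)

# Consumer side: each microservice long-polls at its own sustainable pace.
while True:
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        handle(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])
```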
A multinational media group runs a hybrid network. Its main AWS workloads are in the ap-southeast-2 Region and connect to the headquarters data center through AWS Direct Connect. After acquiring a gaming studio in Canada, the company needs to integrate the studio’s environments, which span multiple VPCs in the ca-central-1 Region and connect to the studio’s on-premises site using a separate Direct Connect circuit. All CIDR ranges are unique, and the business requires each on-premises site to reach every VPC in both Regions. The company wants a scalable design that minimizes manual routing and long-term operational overhead. Which solution best meets these requirements?
-
✓ C. Terminate both Direct Connect circuits on a single AWS Direct Connect gateway and associate the virtual private gateways for all VPCs in both Regions to that gateway to provide reachability between each data center and all VPCs
The best choice is Terminate both Direct Connect circuits on a single AWS Direct Connect gateway and associate the virtual private gateways for all VPCs in both Regions to that gateway to provide reachability between each data center and all VPCs. This design provides a global attachment point that lets each on premises site reach every VPC across Regions while keeping routing simple and scalable.
By associating the virtual private gateways from the VPCs in ap-southeast-2 and ca-central-1 with a single Direct Connect gateway, you avoid creating many peering links or manual per-VPC static routes, and you can advertise on-premises prefixes over BGP so routes propagate without heavy operational overhead. A Direct Connect gateway aggregates circuits and VGW associations, which reduces the number of configurations to manage and improves long-term maintainability for a hybrid, multi-Region environment.
Build cross-Region VPC peering between all VPCs in ap-southeast-2 and ca-central-1 and manage static routes in each VPC route table to enable communication is not suitable. That approach does not scale because VPC peering is non-transitive and it will not forward on-premises traffic without extensive manual route management.
Launch EC2 VPN appliances in every VPC and create a full mesh of IPsec tunnels between all VPCs and both on-premises sites using CloudHub-style routing would functionally connect everything. That option is wrong for this requirement because it introduces high complexity cost and ongoing operational and availability burden which conflicts with the need to minimize manual routing and long term overhead.
Create private virtual interfaces in each Region and target VPCs in the other Region using BGP and route entries, and use VPC endpoints to forward inter-Region traffic is incorrect because private virtual interfaces must terminate to a VGW or a Transit Gateway in the same Region and VPC endpoints do not provide cross VPC or cross Region routing. That combination cannot deliver the required multi Region hybrid reachability.
For multi Region hybrid connectivity think Direct Connect gateway for global reach and use BGP so routes propagate without per VPC static entries.
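The core of the design reduces to two API calls repeated per VPC; the gateway name, ASN, and VGW IDs below are placeholders:

```python
import boto3

dx = boto3.client("directconnect")

# One global Direct Connect gateway for both circuits.
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="global-hybrid-gw",
    amazonSideAsn=64512,  # placeholder private ASN
)
gw_id = gw["directConnectGateway"]["directConnectGatewayId"]

# Associate each VPC's virtual private gateway, in either Region.
for vgw_id in ["vgw-0aa11bb22cc33dd44", "vgw-0ee55ff66aa77bb88"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gw_id,
        virtualGatewayId=vgw_id,
    )
```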
More AWS exam questions can be found in my Solutions Architect Udemy course and the certificationexams.pro certification site.
Arcadia Studios, a digital media startup, is launching a microservices API on Amazon Elastic Kubernetes Service. Traffic can surge to four times the normal load during prime-time activity and typically falls back within about 15 minutes. The team wants the cluster to scale out and in automatically to closely follow these bursts with minimal ongoing management. Which actions should they take to achieve this? (Choose 2)
-
✓ A. Deploy the Kubernetes Metrics Server and configure Horizontal Pod Autoscaler based on CPU or memory utilization
-
✓ C. Run the Kubernetes Cluster Autoscaler to right-size the number of nodes as pending or underutilized capacity changes
Deploy the Kubernetes Metrics Server and configure Horizontal Pod Autoscaler based on CPU or memory utilization and Run the Kubernetes Cluster Autoscaler to right-size the number of nodes as pending or underutilized capacity changes are the correct actions to let the EKS cluster scale out and in automatically to follow short traffic bursts.
The Deploy the Kubernetes Metrics Server and configure Horizontal Pod Autoscaler based on CPU or memory utilization option provides the per-pod metrics that the HPA uses to increase replicas during surges and reduce replicas when load falls. The Metrics Server supplies the observed CPU and memory figures, and the HPA uses them to adjust pod counts with minimal manual effort.
The Run the Kubernetes Cluster Autoscaler to right-size the number of nodes as pending or underutilized capacity changes option adjusts the node count when pods cannot be scheduled because of insufficient resources and removes nodes when they are no longer needed. Together the HPA handles replica changes and the Cluster Autoscaler ensures there is enough node capacity for those replicas so the cluster tracks demand and helps control costs.
Set up Amazon EC2 Auto Scaling scheduled actions on the EKS node group to add and remove instances at fixed times is not suitable because scheduled actions operate on a fixed timetable and cannot respond to unpredictable or short lived surges that occur at irregular times.
Integrate Amazon SQS with the microservices to absorb spikes and use the queue to manage scaling can help decouple and smooth traffic but it does not by itself trigger pod or node scaling in EKS without additional scaling logic. It therefore does not meet the requirement on its own.
Enable the Kubernetes Vertical Pod Autoscaler to automatically raise or lower pod resource requests and limits changes resource requests rather than replica counts and it can conflict with horizontal scaling. It is not the preferred mechanism to follow rapid traffic spikes for this use case.
Pair the pod autoscaler with the node autoscaler and confirm the metrics pipeline and IAM permissions are configured so scaling can react quickly.
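As a sketch of the pod-scaling half using the official Kubernetes Python client, with hypothetical names, namespace, and thresholds (the Cluster Autoscaler is deployed separately as a cluster add-on):

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

# autoscaling/v1 HPA: scale the "api" Deployment between 3 and 12 replicas,
# targeting 60 percent average CPU utilization reported by Metrics Server.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="api-hpa", namespace="prod"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="api"
        ),
        min_replicas=3,
        max_replicas=12,
        target_cpu_utilization_percentage=60,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="prod", body=hpa
)
```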
NorthPeak Media is launching a content platform on Amazon EC2 instances in an Auto Scaling group distributed across three Availability Zones. The application requires a shared file system that all instances can mount concurrently with strong consistency because files are updated frequently. Which approach will achieve this with the least operational effort?
-
✓ C. Amazon Elastic File System (Amazon EFS)
Amazon Elastic File System (Amazon EFS) is the correct choice because it provides a fully managed, POSIX compliant file system that all EC2 instances can mount concurrently across multiple Availability Zones while providing strong consistency and minimal operational overhead.
Amazon EFS is a network file system that supports concurrent access and file locking. It scales automatically, so you do not need to provision or manage storage servers, and it simplifies multi-AZ Auto Scaling deployments where instances need a shared, frequently updated file system.
Amazon S3 with Amazon CloudFront is not suitable because S3 is object storage and not a mountable POSIX file system and the application would need to be redesigned to use object APIs while CloudFront caching can complicate frequent updates.
Amazon FSx for Lustre is not the least operational effort because it is optimized for high performance computing workloads and often requires additional configuration and integration and it is not the typical general purpose, multi AZ shared file system for a web tier.
A multi-attach Amazon EBS volume shared by the instances does not meet the requirement because multi attach is limited to a single Availability Zone and sharing a block device across instances requires a cluster aware file system to avoid corruption which prevents a simple multi AZ Auto Scaling deployment.
Remember to choose Amazon Elastic File System (Amazon EFS) when you need a mountable POSIX file system with concurrent multi-AZ access and the least management overhead.
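A brief boto3 sketch of the setup, with placeholder subnet and security group IDs; one mount target per AZ is what lets every instance in the Auto Scaling group mount the same file system locally:

```python
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)

# One mount target per Availability Zone used by the Auto Scaling group.
for subnet in ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
# Instances then NFS-mount fs-xxxx.efs.<region>.amazonaws.com from any AZ.
```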
A global fintech startup operates application servers in a VPC in the ap-southeast-2 Region and hosts its database tier in a separate VPC in the eu-west-1 Region. The application instances in ap-southeast-2 must establish secure connectivity to the databases in eu-west-1. What is the most appropriate network design to accomplish this?
-
✓ C. Configure VPC peering between the ap-southeast-2 and eu-west-1 VPCs, add the required routes, and create an inbound rule in the eu-west-1 database security group that allows traffic from the ap-southeast-2 application server IP addresses
The correct choice is Configure VPC peering between the ap-southeast-2 and eu-west-1 VPCs, add the required routes, and create an inbound rule in the eu-west-1 database security group that allows traffic from the ap-southeast-2 application server IP addresses. This option provides private, cross Region connectivity and places the allow rule on the database side where it belongs.
VPC peering supports inter Region private routing when you update the relevant route tables and ensure the CIDR ranges do not overlap. Security groups are regional so you cannot reference a security group in a different Region. For that reason you must allow the application server IP addresses or CIDR ranges on the destination database security group when using inter Region peering.
Create a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC, update the relevant route tables, and add an inbound rule in the eu-west-1 database security group that references the security group ID of the application servers in ap-southeast-2 is incorrect because security groups cannot be referenced across Regions. The route changes would be valid but you cannot point a security group rule at a security group in another Region.
Set up AWS Transit Gateway in each Region with an inter-Region peering attachment, configure routing, and add an inbound rule in the eu-west-1 database security group that references the application server security group in ap-southeast-2 is not the best answer because cross Region security group references remain unsupported and Transit Gateway adds complexity that is not required for a single pair of VPCs. Transit Gateway can be useful at scale but it does not eliminate the regional scope of security groups.
Establish a VPC peering connection between the ap-southeast-2 and eu-west-1 VPCs, modify route tables as needed, and add an inbound rule in the ap-southeast-2 application security group that permits traffic from the eu-west-1 database server IP addresses is wrong because the allow rule must be applied on the destination resource. The database security group must permit incoming connections from the application IPs for the connection to succeed.
Keep in mind that security groups are regional so place allow rules on the destination security group and use CIDR ranges for inter Region peering.
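A compact boto3 sketch of the correct pattern, with placeholder VPC, security group, and CIDR values (route table updates are omitted for brevity):

```python
import boto3

# Request the peering from the application VPC's Region.
ec2_app = boto3.client("ec2", region_name="ap-southeast-2")
peering = ec2_app.create_vpc_peering_connection(
    VpcId="vpc-app1111", PeerVpcId="vpc-db2222", PeerRegion="eu-west-1",
)

# Accept it from the database VPC's Region.
ec2_db = boto3.client("ec2", region_name="eu-west-1")
ec2_db.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]
                                  ["VpcPeeringConnectionId"]
)

# Allow the app tier by CIDR, because security groups cannot be
# referenced across Regions.
ec2_db.authorize_security_group_ingress(
    GroupId="sg-db",  # placeholder database security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.20.0.0/16",
                      "Description": "app servers"}],
    }],
)
```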
A genomics analytics company runs an Amazon Elastic Kubernetes Service cluster for model training and data preprocessing. The platform team needs specific Kubernetes service accounts in selected namespaces to have least privilege access to only certain AWS resources, such as a training Amazon S3 bucket and an Amazon DynamoDB table. The approach must use IAM roles for service accounts to enforce these permissions. Which combination of steps should the team implement? (Choose 2)
-
✓ B. Establish a trust policy between the IAM role for each service account and the EKS cluster OIDC identity provider
-
✓ E. Create a dedicated IAM role with only the required permissions and annotate the target Kubernetes service accounts with that role ARN
The correct combination is Establish a trust policy between the IAM role for each service account and the EKS cluster OIDC identity provider and Create a dedicated IAM role with only the required permissions and annotate the target Kubernetes service accounts with that role ARN.
The first step creates the OIDC trust that IRSA depends on so a projected service account token can assume an IAM role. When you establish that trust policy for the cluster OIDC provider you allow specific service accounts to request credentials without granting node level permissions.
Creating a dedicated IAM role that contains only the S3 and DynamoDB permissions you need and annotating the Kubernetes service account with that role ARN implements least privilege. This binds permissions to the service account rather than to the node and it enables fine grained access control per namespace or workload.
Attach a permissions policy to the EKS worker node instance role so all pods on those nodes can reach the required AWS resources is incorrect because attaching policies to the node role grants every pod on those nodes access and it defeats per-service-account isolation and least privilege.
Enable Kubernetes Pod Security Policies to prevent pods from accessing unauthorized AWS services is incorrect because Pod Security Policies control pod security attributes rather than AWS IAM authorization, and PSPs were deprecated and then removed in Kubernetes 1.25, so they are not the mechanism for IRSA.
Grant the necessary permissions on the worker node instance profile and map every service account in the cluster to a single shared IAM role is incorrect because a single shared role removes granularity and violates least privilege by giving many service accounts the same broad permissions.
Remember that IRSA requires creating the cluster OIDC provider and a role trust policy and then annotating specific service accounts with the role ARN so permissions are applied per service account.
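A sketch of the role half of IRSA with boto3; the account ID, OIDC issuer, namespace, and service account name are all placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

account = "123456789012"  # placeholder
oidc = "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B7"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated":
            f"arn:aws:iam::{account}:oidc-provider/{oidc}"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            # Only the "trainer" service account in the "ml" namespace.
            "StringEquals": {f"{oidc}:sub": "system:serviceaccount:ml:trainer"}
        },
    }],
}

iam.create_role(RoleName="ml-trainer-irsa",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
# Attach a least-privilege policy for the bucket and table, then annotate:
#   kubectl annotate serviceaccount -n ml trainer \
#     eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/ml-trainer-irsa
```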
A solutions architect is designing a file submission system for a vocational institute. Submissions will be stored in an Amazon S3 bucket. The team needs to avoid unintended object deletions while keeping every prior revision accessible. Contributors must be able to upload new files and update existing ones. Which actions together will satisfy these requirements? (Choose 2)
-
✓ B. Activate object versioning for the bucket
-
✓ D. Require MFA Delete for object deletions
The correct options are Activate object versioning for the bucket and Require MFA Delete for object deletions. Activate object versioning for the bucket ensures every prior revision is preserved and Require MFA Delete for object deletions requires a second factor for delete operations so contributors can still upload and update objects while accidental or unauthorized deletions are harder to perform.
When you Activate object versioning for the bucket Amazon S3 keeps a distinct version for each put or delete so overwrites do not remove earlier content and administrators can restore or view prior revisions as needed. This satisfies the requirement to keep every prior revision accessible while allowing normal uploads and updates.
When you Require MFA Delete for object deletions S3 requires multi factor authentication for operations that delete object versions or suspend versioning so unintended deletions are blocked by an additional authentication step. This adds protection against accidental or malicious deletion without preventing contributors from adding new versions.
Turn on SSE-S3 default encryption for the bucket secures objects at rest but it does not prevent deletions or preserve previous versions so it does not meet the primary requirements.
AWS Backup for Amazon S3 can help with recoverability but it does not inherently stop users from deleting objects in the live bucket or guarantee that every change is retained as live versions, so it is not the direct solution for preventing unintended deletes while retaining all revisions.
Restrict the bucket to read-only access would prevent uploads and updates and so it would conflict with the requirement that contributors must be able to upload new files and update existing ones.
Remember to enable versioning to retain every revision and pair it with MFA Delete to protect delete operations while still allowing normal uploads and updates.
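Both settings are applied through the same API; a short boto3 sketch with a placeholder bucket and MFA device (the MFA call must be made with the bucket owner's root credentials):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "submissions-example"  # placeholder

# Versioning: every overwrite or delete creates a new version marker.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# MFA Delete: requires the device serial and a current token code.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    MFA="arn:aws:iam::123456789012:mfa/root-device 123456",  # placeholder
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```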
A biotech analytics startup is designing a pipeline to process massive sequencing workloads. Source reads are stored in an Amazon S3 bucket, dozens of Amazon EC2 instances exchange large intermediate datasets totaling several hundred gigabytes per run, and the final results are written to a separate S3 bucket. The team needs to cut network transfer expenses while keeping processing performance high. What should the solutions architect recommend?
-
✓ B. Launch all compute instances in the same Availability Zone to avoid cross-AZ transfer fees
The correct option is Launch all compute instances in the same Availability Zone to avoid cross-AZ transfer fees. Placing the chatty compute fleet together minimizes instance to instance network charges while keeping high throughput for large intermediate datasets.
Within the same Availability Zone private IP traffic between EC2 instances is not charged for data transfer. Cross AZ and inter Region transfers incur data transfer costs so co locating the compute instances reduces the largest cost driver for heavy east west traffic without sacrificing performance.
Enable Amazon S3 Transfer Acceleration on the buckets is intended to speed uploads and downloads to S3 over long distances and it does not lower EC2 to EC2 transfer costs during processing.
Run the Auto Scaling group across multiple AWS Regions to distribute processing increases inter Region network traffic and costs so it works against the goal of reducing transfer expenses.
Use Amazon Elastic Fabric Adapter on the EC2 instances can improve latency and throughput for tightly coupled high performance computing workloads and it does not change AWS data transfer pricing so it does not address the cost reduction requirement.
Co-locate compute instances that exchange large amounts of data in the same Availability Zone to avoid cross AZ transfer charges and enable performance optimizations only when you need lower latency.
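Since a subnet lives in exactly one Availability Zone, pinning the fleet is as simple as launching every node into the same subnet; the AMI, instance type, and subnet below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch the whole processing fleet into one subnet, and therefore one AZ,
# so instance-to-instance traffic stays free of cross-AZ charges.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c6i.4xlarge",
    MinCount=12,
    MaxCount=12,
    SubnetId="subnet-aaa111",  # a subnet belongs to a single AZ
)
```

A cluster placement group in that AZ can additionally raise intra-fleet throughput if the instances need it.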
A digital publishing startup runs a fleet of 18 Amazon EC2 instances in private IPv4 subnets inside a VPC. The application performs heavy read and write operations against Amazon S3 buckets in the same Region, and all outbound internet traffic currently flows through a NAT gateway in a public subnet. The company wants to cut monthly costs without affecting access to S3 or general internet connectivity from the private subnets. What should a solutions architect do?
-
✓ C. Create a VPC gateway endpoint for Amazon S3 and update the private route tables to send S3 traffic to the endpoint
The correct option is Create a VPC gateway endpoint for Amazon S3 and update the private route tables to send S3 traffic to the endpoint. This routes S3 traffic over the AWS network and eliminates NAT data processing charges while allowing the existing NAT gateway to continue handling non S3 internet traffic.
A VPC gateway endpoint for Amazon S3 is free to use and is highly available across the Region. You update the private subnet route tables so S3 prefixes go to the endpoint and the instances keep private IP addressing. This keeps heavy read and write traffic on the AWS backbone which reduces egress costs and preserves internet access for other destinations.
Create an interface VPC endpoint for Amazon S3 and configure DNS so the instances use the PrivateLink endpoint would functionally allow private access to S3 but it adds hourly and per gigabyte data processing fees. Those costs make an interface endpoint more expensive than a gateway endpoint for sustained, high throughput S3 traffic.
Add a route from the private subnets to an internet gateway so the instances can reach S3 directly would effectively convert those subnets into public subnets and expose instances to the internet. That change is unnecessary for S3 access and undermines the existing security posture.
Replace the NAT gateway with a right-sized NAT instance to lower hourly costs may reduce hourly charges but it still sends S3 traffic through a NAT and the public internet which keeps per gigabyte transfer costs. It also adds operational overhead and potential availability drawbacks compared with a managed gateway endpoint.
Prefer gateway endpoints for Amazon S3 when you have heavy S3 traffic to avoid NAT data processing fees and keep traffic on the AWS network. Use interface endpoints when a service does not support gateway endpoints or you need private IP based access to API endpoints.
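The change is a single API call plus the route table associations; the IDs below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoints are free; once the private route tables are associated,
# S3-bound traffic bypasses the NAT gateway entirely.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0aaa111", "rtb-0bbb222"],  # private route tables
)
```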
A competitive gaming startup, Zephyr Arcade, plans to launch a global leaderboard powered by a proprietary scoring formula that recalculates top players from fresh match metrics every few seconds. The platform must scale elastically, return personalized leaderboard slices with single-digit millisecond latency, and withstand spikes from more than eight million concurrent users. Which options justify choosing Amazon ElastiCache for this design? (Choose 2)
-
✓ B. Use Amazon ElastiCache to cache outputs of expensive ranking computations to reduce repeated processing
-
✓ E. Use Amazon ElastiCache to cache frequently read leaderboard results and deliver very low-latency responses at scale
Use Amazon ElastiCache to cache outputs of expensive ranking computations to reduce repeated processing and Use Amazon ElastiCache to cache frequently read leaderboard results and deliver very low-latency responses at scale are correct because the design requires precomputed rankings and hot read slices that must be served in single digit milliseconds while scaling to millions of concurrent users.
The first correct option is appropriate because you can compute the proprietary scoring formula once and store the results in memory. ElastiCache for Redis supports in memory data structures such as sorted sets that map directly to top N and ranking queries. Caching computed outputs reduces repeated CPU work and keeps latency predictable under heavy request fanout.
The second correct option is appropriate because leaderboards are read heavy and benefit from in memory caches that return microsecond to millisecond responses. Redis clusters with replication and sharding let you absorb traffic spikes and deliver personalized leaderboard slices quickly while keeping origin systems focused on writes and computation.
Use Amazon ElastiCache to execute complex relational JOINs across normalized score tables is incorrect because ElastiCache is not a relational engine and it does not perform JOINs. For relational joins and normalized schemas you should use Amazon RDS or Amazon Aurora which are designed for SQL operations.
Use Amazon ElastiCache to optimize sustained write-heavy ingestion of raw match events is incorrect because caches are not a durable ingest layer and they do not give the durability and streaming semantics needed for sustained high write rates. For continuous ingestion you should choose streaming and durable stores such as Amazon Kinesis or Amazon DynamoDB.
Use Amazon ElastiCache to accelerate batch ETL pipelines for the analytics lake is incorrect because ETL is batch oriented and typically processes large datasets sequentially. Services such as AWS Glue or Amazon EMR are a better fit for batch ETL and analytics workloads.
For real time leaderboards favor ElastiCache for Redis with sorted sets so you cache precomputed rankings and serve hot slices at millisecond latency.
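A small redis-py sketch of the sorted set pattern; the endpoint, key, and player IDs are hypothetical:

```python
import redis

# Placeholder ElastiCache for Redis endpoint.
r = redis.Redis(host="leaderboard.abc123.use1.cache.amazonaws.com", port=6379)

# Write freshly computed scores; the sorted set keeps members ordered.
r.zadd("leaderboard:global",
       {"player:17": 98421, "player:42": 91307, "player:9": 88754})

# Hot read slice: top 10 with scores, highest first.
top10 = r.zrevrange("leaderboard:global", 0, 9, withscores=True)

# Personalized slice: a player's rank and immediate neighbors.
rank = r.zrevrank("leaderboard:global", "player:42")  # None if absent
if rank is not None:
    window = r.zrevrange("leaderboard:global",
                         max(rank - 2, 0), rank + 2, withscores=True)
```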
During a compliance audit at a digital media startup, the security team discovered that several Amazon RDS instances are not encrypted at rest. You must bring these databases into compliance without changing database engines or application code. What should you do to add encryption to the existing Amazon RDS databases?
-
✓ B. Create a DB snapshot, copy it with encryption enabled, restore a new DB from the encrypted copy, and retire the old instance
The correct option is Create a DB snapshot, copy it with encryption enabled, restore a new DB from the encrypted copy, and retire the old instance. This option yields a new encrypted RDS instance without changing the database engine or application code.
This approach works because Amazon RDS does not allow enabling storage encryption on an existing DB instance after it is created. You take a snapshot of the unencrypted database and then copy that snapshot while enabling encryption with an AWS KMS key. You restore a new DB instance from the encrypted copy and then cut over traffic to the new instance and retire the old one. The restored instance and its automated backups and future snapshots are encrypted.
Create an RDS Blue/Green deployment, enable encryption on the green environment, switch over, and delete the original is incorrect because Blue/Green deployments do not change the underlying storage encryption of an existing environment. Encryption must be set at creation or when restoring from an encrypted snapshot.
Modify the existing RDS instance in the console to turn on encryption at rest is incorrect because the RDS service does not support flipping storage encryption on for a running instance. You must rebuild or restore into an encrypted instance.
Create a read replica, encrypt the replica, promote it to standalone, and decommission the original is incorrect because replicas inherit the encryption state of their source. An unencrypted primary will not produce an encrypted replica so this method will not create an encrypted standalone database.
Remember you cannot enable storage encryption on an existing RDS instance. Use a snapshot then copy it with KMS encryption and restore the new instance before cutting over.
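The snapshot, copy, and restore sequence maps to three boto3 calls plus waiters; all identifiers and the KMS alias are placeholders:

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(DBInstanceIdentifier="legacy-db",
                       DBSnapshotIdentifier="legacy-db-snap")
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="legacy-db-snap")

# 2. Copy the snapshot with encryption under a KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="legacy-db-snap",
    TargetDBSnapshotIdentifier="legacy-db-snap-encrypted",
    KmsKeyId="alias/rds-at-rest",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="legacy-db-snap-encrypted")

# 3. Restore a new encrypted instance, then cut traffic over and retire
#    the old one.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="legacy-db-encrypted",
    DBSnapshotIdentifier="legacy-db-snap-encrypted",
)
```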
A digital animation studio is preparing to move its production environment to AWS, needing at least 12 TB of storage with the highest possible I/O for large render and transcode scratch files, around 480 TB of highly durable storage for active media assets, and approximately 960 TB for long-term archival of legacy projects; which combination of AWS services best satisfies these needs?
-
✓ C. Amazon EC2 instance store for peak I/O on scratch data, Amazon S3 for durable media storage, and Amazon S3 Glacier for archival retention
The correct choice is Amazon EC2 instance store for peak I/O on scratch data, Amazon S3 for durable media storage, and Amazon S3 Glacier for archival retention.
Amazon EC2 instance store gives locally attached NVMe or SATA SSD capacity that delivers extremely low latency and very high IOPS for temporary scratch workloads so it matches the 12 TB high I/O requirement. Amazon S3 provides virtually unlimited, highly durable storage for active media assets and it suits the approximately 480 TB of durable content. Amazon S3 Glacier is a cost effective archival tier for long term retention and it is appropriate for the 960 TB of legacy projects when slower retrieval is acceptable.
Amazon EBS io2 volumes for highest throughput, Amazon S3 for durable storage, and Amazon S3 Glacier for archival storage is close but not optimal because EBS io2 offers strong durability and consistent performance yet typically cannot match the peak per-instance I/O and lowest latency that instance store delivers for ephemeral scratch workloads.
Amazon S3 Standard for compute scratch, Amazon S3 Intelligent-Tiering for durable content, and Amazon S3 Glacier Deep Archive for long-term retention is incorrect because S3 is object storage and cannot act as block attached, low latency scratch for compute. Using S3 Standard as a scratch tier would not meet high IOPS needs and Intelligent Tiering is for automated cost management rather than delivering scratch performance.
AWS Storage Gateway for durable storage, Amazon EC2 instance store for processing performance, and Amazon S3 Glacier Deep Archive for archival is a poor fit because Storage Gateway is designed for hybrid on premises integration and it adds complexity when the workload is moving fully to AWS. For cloud native durable media storage it is simpler and more direct to place assets in S3.
Match storage to access patterns and pick instance store for ephemeral highest I/O scratch, use S3 for active durable assets, and choose Glacier or Glacier Deep Archive based on acceptable retrieval times.
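For the archival tier, a lifecycle rule is the usual way to sweep aging assets from S3 into Glacier; the bucket name and prefix are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Move legacy project objects to Glacier 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="studio-media-example",
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-legacy",
        "Status": "Enabled",
        "Filter": {"Prefix": "legacy/"},
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    }]},
)
```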
More AWS exam questions can be found in my Solutions Architect Udemy course and the certificationexams.pro certification site.
A travel technology startup is moving its core booking platform from a colocation facility to AWS to boost read scalability and increase availability. The current stack runs on Microsoft SQL Server and experiences very high read traffic. Each morning at 07:45 UTC, the team clones the production database to refresh a development environment, and users see elevated latency during this operation. The team is willing to switch database engines and wants a solution that scales reads and provides a low-impact way to create the daily dev copy. What should the solutions architect recommend?
-
✓ C. Use Amazon Aurora MySQL with Aurora Replicas across multiple AZs and provision the dev database by restoring from Aurora automated backups
Use Amazon Aurora MySQL with Aurora Replicas across multiple AZs and provision the dev database by restoring from Aurora automated backups is the correct option because it delivers scalable, highly available reads and a low impact method to create a writable development copy.
Aurora Replicas spread read traffic across multiple Availability Zones and leverage Aurora’s shared storage architecture so replicas are quick to catch up and can absorb heavy read loads without overloading the primary. Automated backups and point in time restore let you restore a separate cluster or instance for the dev environment so you avoid running heavy export operations against the production primary and you get a writable copy that does not affect live traffic.
Use Amazon RDS for MySQL with Multi-AZ and point the dev environment to the standby instance is incorrect because a Multi-AZ standby is not accessible for reads or writes and cannot be used as a development database.
Use Amazon Aurora MySQL with Aurora Replicas in multiple AZs and rebuild the dev database each day using mysqldump from the primary is incorrect because performing mysqldump against the primary reintroduces significant load and latency on production during the dump window.
Use Amazon RDS for SQL Server with Multi-AZ and read replicas and direct the dev environment to a read replica is incorrect because read replicas are read only and a development environment generally requires write capability so directing dev to a replica would not provide the needed functionality.
For heavy read workloads choose Aurora Replicas for scaling and use automated backups or PITR to provision writable test copies with minimal impact on production.
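A sketch of the nightly dev refresh with boto3; the cluster identifiers and instance class are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Restore production's latest restorable state into a fresh dev cluster
# without touching the primary.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="bookings-prod",
    DBClusterIdentifier="bookings-dev",
    UseLatestRestorableTime=True,  # or RestoreToTime for 07:45 UTC exactly
)

# A restored cluster has no instances yet, so add one to serve queries.
rds.create_db_instance(
    DBInstanceIdentifier="bookings-dev-1",
    DBClusterIdentifier="bookings-dev",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```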
An e-commerce analytics firm needs to run nightly end-of-day reconciliation jobs across a fleet of Amazon EC2 instances while keeping spend as low as possible. The workloads are packaged as containers, are stateless, and can be relaunched if they stop unexpectedly. The team wants a solution that reduces both compute cost and ongoing management effort. What should a solutions architect recommend?
-
✓ C. Use Spot Instances in an Amazon Elastic Kubernetes Service managed node group
The best choice is Use Spot Instances in an Amazon Elastic Kubernetes Service managed node group. This option aligns with stateless nightly batch jobs because EC2 Spot can deliver steep cost savings and EKS managed node groups reduce ongoing management effort.
Use Spot Instances in an Amazon Elastic Kubernetes Service managed node group provides large compute cost reductions because Spot uses spare capacity at a discount and Kubernetes reschedules interrupted pods automatically. EKS managed node groups automate node lifecycle tasks and integrate with Kubernetes features for graceful draining and autoscaling which lowers operational toil for the team.
Launch Spot Instances in an Amazon EC2 Auto Scaling group to host the containerized jobs can reduce cost but it forces you to manage container scheduling, bin packing, graceful draining, and interruption handling yourself which increases operational overhead compared with a managed Kubernetes node group.
Use On-Demand Instances in an Amazon Elastic Kubernetes Service managed node group gives the operational benefits of EKS but the higher On Demand price does not meet the goal of minimizing compute spend for interruptible workloads.
Launch On-Demand Instances in an Amazon EC2 Auto Scaling group to run the containers provides neither the lowest cost nor the lowest management burden and so it is the least aligned with the stated requirements.
When a workload is stateless and interruptible choose Spot and pair it with a managed orchestrator like EKS managed node groups to minimize cost and reduce operational effort.
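A minimal boto3 sketch of a Spot-backed managed node group; the cluster name, role ARN, and subnets are placeholders:

```python
import boto3

eks = boto3.client("eks")

# EKS cordons and drains Spot nodes when interruption notices arrive.
eks.create_nodegroup(
    clusterName="reconciliation",
    nodegroupName="batch-spot",
    capacityType="SPOT",
    instanceTypes=["m5.xlarge", "m5a.xlarge", "m4.xlarge"],  # diversify pools
    scalingConfig={"minSize": 0, "maxSize": 20, "desiredSize": 0},
    subnets=["subnet-aaa111", "subnet-bbb222"],
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",
)
```

Diversifying across several instance types reduces the chance that a single Spot pool interruption stalls the nightly run.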
A mobile gaming studio runs a player profile service backed by an Amazon DynamoDB table named PlayerProfiles. Occasionally, a faulty release inserts malformed items that overwrite valid records. When the issue is noticed, the team needs to quickly revert the table to the moment just before those bad writes so the corrupted items are removed. What is the most appropriate approach?
-
✓ B. Restore the table using DynamoDB point-in-time recovery to a timestamp just before the corrupt items were written
The best choice is Restore the table using DynamoDB point-in-time recovery to a timestamp just before the corrupt items were written. This option lets the team recover the table's state at the exact second before the faulty release so the corrupted items are removed quickly.
Restore the table using DynamoDB point-in-time recovery to a timestamp just before the corrupt items were written provides continuous backups with per-second granularity for up to 35 days and it does not require a prior manual snapshot. A PITR restore writes to a new table, so the team restores to a fresh table name and switches the application over, which avoids building and testing complex replay logic and gives a fast, precise way to revert accidental overwrites.
Create an on-demand backup and restore from it when corruption is detected is not ideal because an on-demand backup only captures the table at the moment it was taken and if a suitable snapshot was not created before the bad writes you cannot restore to the exact prior second.
Use DynamoDB Streams to replay changes and rebuild the table to its prior state requires custom code to capture and replay or reverse changes and Streams only retain data for 24 hours so they are not a reliable, turnkey restore mechanism for arbitrary past times.
Configure the table as a global table and shift traffic to a Region replica that has not seen the bad writes will not help because global tables replicate writes across Regions and corruption is therefore propagated rather than isolated by design.
When you must undo accidental writes remember that PITR restores to any second in the retention window and is faster than building replay tooling.
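A short boto3 sketch; the timestamp is illustrative and PITR must already be enabled before the incident:

```python
import boto3
from datetime import datetime, timezone

ddb = boto3.client("dynamodb")

# Enable continuous backups ahead of time (retained up to 35 days).
ddb.update_continuous_backups(
    TableName="PlayerProfiles",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore to a NEW table at the second before the bad deploy, then point
# the application at the restored table.
ddb.restore_table_to_point_in_time(
    SourceTableName="PlayerProfiles",
    TargetTableName="PlayerProfiles-restored",
    RestoreDateTime=datetime(2025, 3, 14, 9, 29, 59, tzinfo=timezone.utc),
)
```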
Northwind Health, a regional healthcare provider, stores compliance documents in dozens of Amazon S3 buckets across three AWS accounts. The compliance team must automatically discover any sensitive data that may be present in S3 and continuously detect and alert on suspicious or malicious activity against S3 objects across all buckets. Which AWS service combination best satisfies these requirements?
-
✓ C. Enable Amazon Macie for sensitive data discovery in S3, and use Amazon GuardDuty with S3 protection to detect malicious activity
Enable Amazon Macie for sensitive data discovery in S3, and use Amazon GuardDuty with S3 protection to detect malicious activity is the correct option because it pairs automated sensitive data classification with continuous threat detection for S3 across multiple accounts.
Amazon Macie applies machine learning and pattern matching to discover and classify personally identifiable information and other sensitive data stored in S3 objects and it can run across multiple accounts. Amazon GuardDuty with S3 protection analyzes S3 data events and object access patterns using threat intelligence and anomaly detection so it can alert on suspicious access, possible exfiltration, and other malicious activity.
Use Amazon Macie to identify sensitive data and to detect threats in S3 is incorrect because it suggests Macie also performs threat detection in S3 which it does not. Macie focuses on data classification and discovery and it does not provide continuous malicious activity monitoring for S3.
Use AWS Security Hub to monitor threats and AWS CloudTrail to automatically find sensitive data in S3 is incorrect because Security Hub aggregates and prioritizes findings and CloudTrail records API activity, but neither performs content classification of S3 objects. CloudTrail can record data events for S3 and it supports GuardDuty analysis but it does not classify object contents as sensitive.
Rely on Amazon GuardDuty to discover sensitive data and to monitor S3 for malicious actions is incorrect because GuardDuty does not scan object contents to identify sensitive data types. GuardDuty excels at threat detection and anomaly detection, but it must be paired with Macie or another data classification tool to locate sensitive data in S3.
Remember that Macie finds and classifies sensitive data and that GuardDuty detects suspicious S3 access and exfiltration patterns so pair them for full coverage.
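Enabling the pair is straightforward per account, and both services support delegated administration across an organization; a minimal boto3 sketch:

```python
import boto3

# Macie: automated sensitive data discovery for S3.
boto3.client("macie2").enable_macie()

# GuardDuty: detector with S3 protection so object-level activity
# (S3 data events) is analyzed for threats.
boto3.client("guardduty").create_detector(
    Enable=True,
    DataSources={"S3Logs": {"Enable": True}},
)
```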
A news aggregation site for Aurora Media runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The public DNS zone is hosted in Amazon Route 53 and the root record currently aliases to the ALB. The team wants users to see a static site-unavailable page whenever the application is down while keeping operational effort to a minimum. What configuration should be implemented to achieve this?
-
✓ C. Create an Amazon CloudFront distribution with the ALB as the origin and define custom error responses; point the domain to the distribution with a Route 53 alias
Create an Amazon CloudFront distribution with the ALB as the origin and define custom error responses; point the domain to the distribution with a Route 53 alias is correct because CloudFront can return a static error page automatically for defined HTTP errors when the origin is unhealthy and this requires minimal operational action during incidents.
Fronting the Application Load Balancer with CloudFront provides edge caching and configurable custom error responses that can be set to serve a static error page and to cache error responses for a defined time period. This approach keeps DNS configuration stable and avoids manual failover steps while ensuring users see the static site unavailable page whenever the backend is down.
Set up Route 53 failover with the ALB as primary and an S3 static website hosting the error page as secondary is less ideal because DNS failover relies on health checks and introduces failover timing and propagation considerations, which increases operational complexity compared with CloudFront custom error handling.
Use a Route 53 weighted policy that includes an S3 static website for the error page with weight 0 and increase the weight during incidents is not recommended because it requires manual weight changes during outages and does not provide an automatic fallback, which increases response time and operational burden.
Configure a Route 53 active-active setup using the ALB and a single EC2 instance serving a static error page as separate endpoints is inappropriate because an active active configuration would route some traffic to the static error endpoint even when the application is healthy and it adds ongoing maintenance for the extra instance.
Think CloudFront first when you need an automatic static fallback with the least operational effort and avoid DNS failover or manual weighted changes.
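A trimmed boto3 sketch of the distribution, with placeholder domain names; in a real deployment the /error.html page usually lives on a small S3 origin added through an extra cache behavior so it stays reachable when the ALB is down:

```python
import time
import boto3

cf = boto3.client("cloudfront")

cf.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),
    "Comment": "news site with static error fallback",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "alb-origin",
        "DomainName": "news-alb-123.us-east-1.elb.amazonaws.com",
        "CustomOriginConfig": {"HTTPPort": 80, "HTTPSPort": 443,
                               "OriginProtocolPolicy": "https-only"},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "alb-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": {"QueryString": True,
                            "Cookies": {"Forward": "all"}},
        "MinTTL": 0,
    },
    # Serve the static page whenever the origin returns 5xx errors.
    "CustomErrorResponses": {"Quantity": 2, "Items": [
        {"ErrorCode": 502, "ResponsePagePath": "/error.html",
         "ResponseCode": "503", "ErrorCachingMinTTL": 30},
        {"ErrorCode": 503, "ResponsePagePath": "/error.html",
         "ResponseCode": "503", "ErrorCachingMinTTL": 30},
    ]},
})
```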
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
Next Steps
The AWS Solutions Architect Book of Exam Questions by Cameron McKenzie
So what’s next?
A great way to secure your employment or even open the door to new opportunities is to get certified.
If you’re interested in AWS products, here are a few great resources to help you get Cloud Practitioner, Solutions Architect, Machine Learning and DevOps certified from AWS:
- AWS Certified Cloud Practitioner Book of Exam Questions
- AWS Certified Developer Associate Book of Exam Questions
- AWS Certified AI Practitioner Book of Exam Questions & Answers
- AWS Certified Machine Learning Associate Book of Exam Questions
- AWS Certified DevOps Professional Book of Exam Questions
- AWS Certified Data Engineer Associate Book of Exam Questions
- AWS Certified Solutions Architect Associate Book of Exam Questions
Put your career on overdrive and get AWS certified today!
