Sample Questions to help you ace the AWS Solutions Architect exam

Why AWS Solutions Architect Practice Tests Are Important

When I prepared for my AWS Solutions Architect Associate certification, I did not just want to pass; I wanted to walk into the exam room knowing exactly what to expect.

I had used a similar approach for other credentials such as the Scrum Master and Product Owner exams, and I adapted that approach for architecting on AWS.

If you are building a broader roadmap, the AWS Certification exam catalog and the foundational Cloud Practitioner track provide useful context.

AWS Solution Architect practice exams

I wanted to sit the Solutions Architect Associate exam with the same confidence I had on prior certifications. I also wanted a plan that would carry into advanced paths such as the Solutions Architect Professional and focused tracks like Security, DevOps, Developer, Data Engineer, Machine Learning, and the AI Practitioner.

Over time I developed a repeatable strategy that I used to pass multiple IT certifications. If you are targeting the SAA-C03, here is a five-step strategy that works well.

  1. Thoroughly read the stated exam objectives and align your study plan
  2. Start with practice exams to learn the question style
  3. Take a course from a reputable trainer and supplement with labs
  4. Do focused hands on projects that match the blueprint
  5. Use the final weekend for full length practice tests and review

Add a sensible exam day strategy and you will greatly improve your chance of passing your AWS certification on the first attempt.

I will explain how I tailored this plan for the Solutions Architect Associate exam after these AWS Solutions Architect practice test questions.

A data engineering group at a fintech startup needs to store and run daily analytics on application log files. The number and size of the logs are unpredictable, and the data will be kept for no longer than 18 hours before deletion. Which storage class offers the most cost-effective option while keeping the data immediately accessible for processing?

  • ❏ A. Amazon S3 Intelligent-Tiering

  • ❏ B. Amazon S3 Standard

  • ❏ C. Amazon S3 Glacier Deep Archive

  • ❏ D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

A regional broadcaster has spun off its content studio and needs to move the studio’s AWS account into a different AWS Organization that is managed by the acquiring company while keeping all resources with the account. What is the appropriate way to complete this transfer?

  • ❏ A. Create a new account in the target organization and replicate workloads with AWS Application Migration Service

  • ❏ B. Use AWS CloudFormation StackSets to relocate the account between organizations

  • ❏ C. Use the AWS Organizations console to remove the account from the source organization and accept the invitation from the target organization

  • ❏ D. Create a new account in the target organization and share needed resources through AWS Resource Access Manager

BlueHarbor Media runs a multi-account landing zone with AWS Organizations that spans over 80 AWS accounts. A solutions architect must restrict a central Amazon S3 bucket so that only principals from accounts that are members of this Organization can access it, and the team does not want to maintain an allow list of account IDs as accounts are frequently added or removed. What is the simplest approach to achieve this?

  • ❏ A. Build an attribute-based access control model by tagging each account as in-organization and reference those tags in the S3 bucket policy

  • ❏ B. AWS Resource Access Manager

  • ❏ C. Use the aws:PrincipalOrgID global condition key in the S3 bucket policy to allow only principals from the Organization

  • ❏ D. Place all external accounts in an Organizational Unit and attach a Service Control Policy that denies access to the bucket

A regional healthcare consortium runs a microservices platform on Kubernetes in its private on-site data center. Strict compliance rules require that all protected health information and all compute remain within the facility. The platform team wants to refresh the Kubernetes environment and use AWS-managed capabilities such as automatic Kubernetes version updates, Amazon CloudWatch metrics and logs, and IAM-based authentication, without moving any application data or compute into the cloud. Which AWS approach best enables this modernization while ensuring every workload stays on premises?

  • ❏ A. Establish a dedicated AWS Direct Connect link to a region, run Amazon EKS in that region, and integrate with IAM and Amazon API Gateway for hybrid traffic

  • ❏ B. Run Amazon ECS on AWS Fargate in a nearby AWS Local Zone, stream logs to CloudWatch, connect to the data center over VPN, and mount on-prem network shares for data access

  • ❏ C. Deploy an AWS Outposts rack in the data center and operate Amazon EKS Anywhere on the Outposts hardware to keep Kubernetes local while using AWS integrations

  • ❏ D. Install Amazon EKS Anywhere on existing on-premises servers, register the cluster with EKS Connector for visibility, and forward logs to Amazon CloudWatch

A regional fintech firm runs 12 VPCs across three AWS accounts and needs a straightforward way to connect all VPCs with two on-premises data centers through a single central hub while keeping day-to-day management minimal. Which approach should the solutions architect choose to achieve the lowest operational overhead?

  • ❏ A. Fully meshed VPC peering

  • ❏ B. AWS Transit Gateway

  • ❏ C. AWS Direct Connect gateway

  • ❏ D. Transit VPC Solution

Solara Health has migrated a monolithic job into two Amazon ECS services on AWS Fargate. Service A quickly parses incoming records, while Service B performs time-consuming enrichment that can build up during peak loads. The team wants the services to be loosely coupled so each can scale on its own and so surges from Service A are buffered without dropping work. What is the best way to connect the services to meet these goals?

  • ❏ A. Have Service A publish to an Amazon SNS topic and have Service B subscribe to the topic

  • ❏ B. Have Service A write payloads to an Amazon S3 bucket and use S3 event notifications to invoke Service B

  • ❏ C. Have Service A send messages to an Amazon SQS queue and have Service B poll and process messages from the queue

  • ❏ D. Have Service A stream records to Amazon Kinesis Data Firehose and have Service B read from Firehose

An edtech startup is moving a legacy three-tier web application from a private colocation facility to AWS. The stack includes a web layer, a business logic layer, and a MySQL backend. The team wants to avoid provisioning and maintaining any servers or clusters. Which services should the solutions architect choose for the application compute and the database to meet these requirements? (Choose 2)

  • ❏ A. Amazon DynamoDB

  • ❏ B. Amazon RDS for MySQL

  • ❏ C. AWS Fargate

  • ❏ D. Amazon EKS

  • ❏ E. Amazon EC2 Spot Instances

A multinational retail logistics firm is replatforming dozens of internal applications from its two regional data centers to AWS. These workloads will run across about 18 AWS accounts that are governed centrally with AWS Organizations. The company manages all users, groups, and access control in its on-premises Microsoft Active Directory and wants to keep identity administration there. The team needs seamless single sign-on to every AWS account without duplicating identities or manually provisioning users. Which approach provides the most operational efficiency?

  • ❏ A. Use Amazon Cognito as the primary user store and federate to the on-premises Active Directory with a custom OIDC setup; grant multi-account access using identity pools and resource policies

  • ❏ B. Enable AWS IAM Identity Center and manually create users and groups there; assign permission sets per account and keep it synced with on-prem AD using custom PowerShell tasks

  • ❏ C. Configure AWS IAM Identity Center to use AWS Directory Service for Microsoft Active Directory (Enterprise) and establish a two-way forest trust with the on-premises AD to enable federation across all accounts

  • ❏ D. Connect IAM Identity Center to AWS Directory Service AD Connector pointing to the on-premises AD to provide SSO across accounts

A fintech startup in Berlin is building a serverless invoice processing service using AWS Lambda and an Amazon Aurora MySQL-Compatible cluster. The Lambda function must connect with a standard username and password, and the security team insists these secrets are never embedded in the deployment package or code. What is the most secure way to store the database credentials and supply them to the function at runtime?

  • ❏ A. Put the credentials into AWS Key Management Service and reference them via Lambda environment variables

  • ❏ B. Store the credentials as SecureString parameters in AWS Systems Manager Parameter Store and grant the function role permission to retrieve them

  • ❏ C. Save the credentials in an Amazon S3 object encrypted with SSE-KMS and read them at startup using the function execution role

  • ❏ D. Enable IAM database authentication with the AWSAuthenticationPlugin and map an IAM user inside MySQL

A media analytics startup is moving a significant archive of critical files into Amazon S3. The objects will land in a versioned bucket in the eu-central-1 Region. The business requires that all objects are automatically copied to a different AWS Region to support disaster recovery. What configuration should the solutions architect implement to meet this requirement?

  • ❏ A. Configure CORS between the source bucket and a bucket in another Region

  • ❏ B. Use AWS Backup with a backup plan to copy Amazon S3 backups to another Region

  • ❏ C. Create a second S3 bucket with versioning in another Region and enable S3 Replication across Regions

  • ❏ D. Enable S3 Multi-Region Access Points for the bucket

A boutique travel publisher hosts a static marketing microsite on Amazon S3. Most visitors are located in the United States, Canada, and Mexico, and the company wants to lower latency for these users while keeping delivery costs as low as possible. Which approach should the team implement to meet these goals?

  • ❏ A. Attach Lambda@Edge functions to a CloudFront distribution to process requests near viewers

  • ❏ B. Create a CloudFront distribution with separate origins hosted in the United States, Canada, and Mexico

  • ❏ C. Provision an Amazon CloudFront distribution and configure the price class to use only the United States, Canada, and Mexico

  • ❏ D. Enable CloudFront with the All Edge Locations price class for maximum performance worldwide

An ad-tech company operates roughly 3,200 Amazon EC2 instances in production that run a proprietary application dependent on a commercial third-party component. The vendor has released an urgent security fix and the team must update the entire fleet as quickly as possible while maintaining visibility of patch compliance. What should the solutions architect do?

  • ❏ A. Use AWS Systems Manager Run Command to execute a custom patch script across all managed instances

  • ❏ B. Create an AWS Systems Manager State Manager association that installs the patch on the fleet

  • ❏ C. Configure AWS Systems Manager Patch Manager with an appropriate patch baseline and run an on-demand patch operation for the fleet

  • ❏ D. Write an AWS Lambda function that iterates through instances and applies the vendor patch

An engineer at Polaris FinTech is reviewing an AWS CloudFormation resource excerpt that reads SecurityGroupIngress with two entries: IpProtocol tcp, FromPort 443, ToPort 443, CidrIp 0.0.0.0/0 and IpProtocol tcp, FromPort 22, ToPort 22, CidrIp 203.0.113.7/32. What behavior will these rules implement? (Choose 3)

  • ❏ A. It configures the inbound rules of a network ACL

  • ❏ B. AWS WAF

  • ❏ C. It opens HTTPS to the world on port 443

  • ❏ D. It affects the security group’s egress rules

  • ❏ E. It restricts SSH access to a single source IP address

  • ❏ F. It only permits the literal IP 0.0.0.0 to access HTTPS

  • ❏ G. It sets inbound rules for a security group

A regional media streaming startup runs its transactional workload on Amazon RDS for MySQL. To absorb a holiday surge of read only reporting queries, the team added a read replica. During the surge, the replica averaged 70% CPU and the writer instance was around 65%. After the event, the replica now averages 22% CPU, while the primary remains steady near 65% most of the time. The company wants to reduce costs while keeping enough headroom for future growth. What should an architect do?

  • ❏ A. Migrate the workload to Amazon Aurora MySQL and remove the read replica

  • ❏ B. Enable Multi-AZ on the primary instance and delete the read replica

  • ❏ C. Resize the read replica to a smaller instance class and keep the primary instance as is

  • ❏ D. Upgrade the read replica to a larger instance class and downgrade the primary instance

WaveCast Labs is rolling out a live streaming platform behind an Application Load Balancer that forwards requests to EC2 instances in an Auto Scaling group spanning two Availability Zones. Operations observes that when the load balancer marks a target as unhealthy, it is removed from the target group, yet the Auto Scaling group does not launch a replacement and overall capacity shrinks. Which configuration mismatch would most likely explain this behavior?

  • ❏ A. The Auto Scaling group uses Application Load Balancer health checks, and the Application Load Balancer evaluates EC2 instance status checks

  • ❏ B. Both the Auto Scaling group and the Application Load Balancer use EC2 instance status checks

  • ❏ C. The Auto Scaling group is configured with EC2 instance status checks, while the Application Load Balancer relies on target group health checks

  • ❏ D. Both the Auto Scaling group and the Application Load Balancer use ALB target group health checks

A research intern at the European Southern Observatory is uploading a 4.5 GB deep-space image to an Amazon S3 bucket. The intern enabled Amazon S3 Transfer Acceleration to speed up the upload, but measurements show the transfer was not accelerated. Considering AWS billing for inbound data and Transfer Acceleration behavior, what will be charged for this upload?

  • ❏ A. Only S3 data transfer charges apply for the upload

  • ❏ B. Only S3 Transfer Acceleration charges apply for the upload

  • ❏ C. No transfer charges apply for the upload

  • ❏ D. Both S3 data transfer and S3 Transfer Acceleration charges apply for the upload

An analytics team at a travel booking startup operates five Amazon EC2 instances that issue mostly read queries against an Amazon RDS for PostgreSQL instance. Leadership requires a regional disaster recovery capability with minimal downtime and data loss if an entire AWS Region becomes unavailable. Which features should you implement to meet this requirement? (Choose 2)

  • ❏ A. Switch the RDS instance to Provisioned IOPS (SSD) storage instead of General Purpose storage

  • ❏ B. Enable cross-Region automated backups on a Multi-AZ RDS for PostgreSQL deployment

  • ❏ C. Rely only on RDS Multi-AZ with automated backups retained in the same Region

  • ❏ D. Create a cross-Region Read Replica and plan to promote it during a Regional outage

  • ❏ E. Migrate the database to Amazon Aurora Global Database

A nationwide insurance firm runs an inbound claims contact center on AWS. The team wants to upgrade the system to automatically transcribe calls with multi-speaker separation and to run ad hoc queries on the transcripts to uncover operational patterns. Which solution will meet these requirements?

  • ❏ A. Use Amazon Rekognition for multi-speaker detection, store transcripts in Amazon S3, and apply custom ML models for analytics

  • ❏ B. Use Amazon Comprehend for speaker recognition and sentiment, then load results into Amazon Redshift for SQL queries

  • ❏ C. Use Amazon Transcribe for speaker diarization and transcription, then query the transcript data with Amazon Athena

  • ❏ D. Use Amazon Rekognition for speaker recognition and Amazon Textract to extract text from audio files in Amazon S3

Orion eNotary processes digital signatures for highly sensitive legal agreements. To satisfy strict compliance rules, the company must encrypt the finalized documents using its own proprietary cryptographic algorithm rather than any AWS-managed algorithm. As they migrate the archive to Amazon S3, which encryption approach will allow them to continue using their custom cipher while storing these objects in S3?

  • ❏ A. Server-side encryption with Amazon S3 managed keys (SSE-S3)

  • ❏ B. Client-side encryption

  • ❏ C. Server-side encryption with customer-provided keys (SSE-C)

  • ❏ D. Server-side encryption with AWS KMS keys (SSE-KMS)

A regional accounting firm has lifted and shifted a three-tier web application to AWS. The containers for the web tier run on Linux-based Amazon EC2 instances and connect to a PostgreSQL database hosted on separate dedicated EC2 instances. Leadership wants to reduce operational effort and improve overall performance. What actions should the solutions architect recommend to achieve these goals? (Choose 2)

  • ❏ A. Create an Amazon CloudFront distribution to cache the site’s static assets

  • ❏ B. Run the containers on AWS Fargate using Amazon Elastic Container Service (Amazon ECS)

  • ❏ C. Move the PostgreSQL database to Amazon Aurora PostgreSQL-Compatible Edition

  • ❏ D. Deploy the web tier and the database on the same EC2 instances to reduce hops

  • ❏ E. Insert an Amazon ElastiCache cluster between the application and the database

A global biotech firm is moving from a collection of isolated AWS accounts to a governed multi-account structure. The company expects to spin up several dozen AWS accounts for different business units and wants all workforce access to authenticate against its existing on-premises directory. Which combination of actions should a solutions architect recommend to satisfy these requirements? (Choose 2)

  • ❏ A. Configure AWS Transit Gateway to centralize connectivity between VPCs and accounts

  • ❏ B. Deploy AWS Directory Service and connect it to the on-premises directory, then use IAM Identity Center for cross-account sign-in

  • ❏ C. Set up Amazon Cognito user pools and configure IAM Identity Center to trust Cognito

  • ❏ D. Create an organization in AWS Organizations with all features turned on, and create the new accounts under the organization

  • ❏ E. Use AWS Control Tower to centrally manage accounts and enable IAM Identity Center

A regional logistics startup runs its production workload on Amazon EC2. The instances must stay up without interruption from Wednesday through Monday. On Tuesdays, the same workload is required for only 6 hours and still cannot tolerate any interruptions. The team wants to minimize cost while meeting these needs. What is the most cost-effective approach?

  • ❏ A. Use Spot Instances for the 6-hour Tuesday workload and purchase Standard Reserved Instances for continuous operation from Wednesday through Monday

  • ❏ B. Purchase Convertible Reserved Instances for the Wednesday through Monday workload and use Spot Instances for the Tuesday hours

  • ❏ C. Purchase Standard Reserved Instances for the Wednesday through Monday workload and use Scheduled Reserved Instances for the 6-hour Tuesday window

  • ❏ D. Use Compute Savings Plans for the Wednesday through Monday workload and On-Demand Instances for the Tuesday usage

A real-time gaming startup uses Amazon ElastiCache for Redis in front of an Amazon Aurora MySQL cluster to accelerate reads. The team needs a resilient recovery approach for the cache tier that keeps downtime and data loss to an absolute minimum without hurting application latency. Which solution should they implement?

  • ❏ A. Schedule automatic daily snapshots during the quietest traffic window

  • ❏ B. Add read replicas for the Redis primary in separate Availability Zones to lower the risk of data loss

  • ❏ C. Perform manual backups using the Redis append-only file feature

  • ❏ D. Enable a Multi-AZ Redis deployment with automatic failover

A small fintech startup runs an AWS Lambda function in Account X that must read from and write to an Amazon S3 bucket owned by another AWS account. As the solutions architect, what is the most secure way to enable this cross-account access?

  • ❏ A. Create an IAM role for the Lambda function with S3 permissions and set it as the execution role; that will provide cross-account access by itself

  • ❏ B. Make the S3 bucket public so the Lambda function in the other account can reach it

  • ❏ C. Create an IAM role for the Lambda function with S3 permissions and set it as the execution role, and update the bucket policy in the other account to allow that role

  • ❏ D. AWS Lambda cannot access resources across accounts; use identity federation instead

RiverShop Marketplace aggregates listings from independent vendors and stores each seller’s product description as a plain text file in an Amazon S3 bucket. The platform ingests roughly 800 new descriptions per day, and some contain ingredients for consumables such as pet treats, supplements, or drinks. The company wants a fully automated workflow that extracts ingredient names from each new file and then queries an Amazon DynamoDB table of precomputed safety ratings for those ingredients; submissions that are not consumables or are malformed can be skipped without affecting the application. The team has no machine learning specialists and seeks the lowest-cost option with minimal operational effort. Which approach most cost-effectively satisfies these requirements?

  • ❏ A. Deploy a custom NLP model on Amazon SageMaker and trigger an AWS Lambda function via Amazon EventBridge to invoke the endpoint and write results to DynamoDB, retraining the model monthly with open-source labels

  • ❏ B. Use Amazon Textract from an S3-triggered Lambda function to read each uploaded text file and apply keyword matching to derive ingredient names before updating DynamoDB

  • ❏ C. Configure S3 Event Notifications to invoke an AWS Lambda function that uses Amazon Comprehend custom entity recognition to extract ingredient entities and then looks up scores in DynamoDB

  • ❏ D. Run Amazon Lookout for Vision on the uploaded files via a Lambda trigger and publish results to clients using Amazon API Gateway

An energy analytics firm needs to run large-scale risk simulations on Amazon EC2. The 25 TB input library resides in Amazon S3 and is updated every 30 minutes. The team requires a high-throughput, POSIX-compliant file system that can seamlessly present S3 objects as files and write results back to the bucket. Which AWS storage service should they choose?

  • ❏ A. Amazon Elastic File System (EFS)

  • ❏ B. AWS Storage Gateway File Gateway

  • ❏ C. Amazon FSx for Lustre

  • ❏ D. Amazon FSx for Windows File Server

An online ticketing platform gathers customer comments through embedded forms in its mobile and web apps. During major on-sale events or incident notifications, submissions can spike to about 12,000 per hour. Today the messages are sent to a shared support inbox for manual review. The company wants an automated pipeline that ingests feedback at scale, performs sentiment analysis quickly, and retains the insights for 12 months for trend reporting. Which approach best meets these goals with minimal operational overhead?

  • ❏ A. Route all feedback to Amazon Kinesis Data Streams, use an AWS Lambda consumer to batch records, call Amazon Translate to identify language and normalize to English, then index the results in Amazon OpenSearch Service with an ISM policy to purge after 12 months

  • ❏ B. Build a service on Amazon EC2 that accepts feedback and writes raw records to a DynamoDB table named TicketFeedbackRaw; from the EC2 app, invoke Amazon Comprehend for sentiment and store outcomes in a second table TicketFeedbackInsights with a TTL of 365 days

  • ❏ C. Create a REST API in Amazon API Gateway that sends incoming feedback to an Amazon SQS queue; use AWS Lambda to process queued items, analyze sentiment with Amazon Comprehend, and persist results in a DynamoDB table named FeedbackTrends with a per-item TTL of 12 months

  • ❏ D. Use Amazon EventBridge to capture feedback events and trigger an AWS Step Functions workflow that runs validation Lambdas, calls Amazon Transcribe to convert text into audio for archiving, and stores data in Amazon RDS with records deleted after 1 year via a lifecycle job

A geospatial research lab plans to move archived datasets from its data center into a POSIX-compliant file system in AWS. The archives are only needed for read access for roughly 8 days each year. Which AWS service would be the most cost-effective choice for this use case?

  • ❏ A. Amazon S3 Standard-IA

  • ❏ B. Amazon EFS Infrequent Access

  • ❏ C. Amazon EFS Standard

  • ❏ D. Amazon S3 Standard

A genomics startup runs tightly coupled simulations on Amazon EC2 instances spread across three Availability Zones in a single Region. The jobs perform thousands of small read and write operations per second and require a shared POSIX-compliant file system. The team has chosen Amazon Elastic File System for its elasticity and managed operations. To minimize latency and avoid unnecessary inter-AZ traffic, how should the architect configure access from the EC2 instances to the EFS file system?

  • ❏ A. Create one Amazon EFS mount target in a single AZ and have all instances in other AZs mount through that endpoint

  • ❏ B. Use Mountpoint for Amazon S3 on every instance and mount a shared S3 bucket as the common storage

  • ❏ C. Deploy an Amazon EFS mount target in each used Availability Zone and have instances mount via the local AZ mount target

  • ❏ D. Run an EC2-hosted NFS proxy in each AZ in front of Amazon EFS and have all instances mount through the proxy

A fintech startup runs a web portal where customers upload identification documents, which are saved to an Amazon S3 bucket in the ap-southeast-1 Region. The team wants to route uploads through Amazon CloudFront under a custom domain to accelerate performance and keep the bucket private. The design must terminate HTTPS on the custom domain and ensure only CloudFront can access the bucket for upload requests. Which actions should be implemented? (Choose 2)

  • ❏ A. Request a public certificate from AWS Certificate Manager in the ap-southeast-1 Region and attach it to the CloudFront distribution

  • ❏ B. Configure CloudFront origin access control for the S3 origin and restrict bucket access so only CloudFront can perform PUT and POST

  • ❏ C. Create a CloudFront distribution that uses the S3 static website endpoint as the origin to support uploads

  • ❏ D. Request a public certificate from AWS Certificate Manager in the us-east-1 Region and associate it with the CloudFront distribution for the custom domain

  • ❏ E. Use Amazon API Gateway and AWS Lambda in front of S3 for uploads and keep CloudFront only for downloads

A geospatial analytics startup ingests transactional event logs into an Amazon S3 bucket. Analysts query the newest data heavily for the first 5 days. After that period, the objects must remain instantly accessible with high availability for occasional ad hoc analysis. Which storage approach is the most cost-effective while meeting these requirements?

  • ❏ A. Create an S3 Lifecycle rule to move objects to S3 One Zone-Infrequent Access after 7 days

  • ❏ B. Configure an S3 Lifecycle rule to transition objects to S3 Standard-Infrequent Access after 7 days

  • ❏ C. Use an S3 Lifecycle rule to transition objects to S3 Standard-Infrequent Access 30 days after creation

  • ❏ D. Transition objects to S3 Glacier Flexible Retrieval after 7 days

LumenCart, a global e-commerce marketplace, runs its personalized product-ranking service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in a single AWS Region. The application computes per-user results from recent browsing activity and returns dynamic responses to tens of millions of shoppers worldwide. The company wants a cost-efficient way to improve responsiveness and scale without building full multi-Region stacks, while keeping latency low for users around the world. What should the company do?

  • ❏ A. Deploy additional stacks in multiple Regions and use Amazon Route 53 latency-based routing to direct users to the closest Region

  • ❏ B. Place an Amazon CloudFront distribution in front of the existing ALB and tune dynamic caching and origin keep-alives to accelerate requests globally

  • ❏ C. Use AWS Global Accelerator to route client traffic to the current ALB and Auto Scaling group in the Region nearest to each user

  • ❏ D. Put an Amazon API Gateway edge-optimized endpoint in front of the ALB to improve global performance

A multinational automotive parts supplier plans to migrate most of its on-premises file shares and object repositories to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server using an online approach that reduces manual effort, speeds up transfers, and keeps costs down. Which solution is the best choice to automate and accelerate these data movements to the AWS storage targets?

  • ❏ A. AWS Snowball Edge Storage Optimized

  • ❏ B. AWS Transfer Family

  • ❏ C. AWS DataSync

  • ❏ D. AWS Storage Gateway File Gateway

A digital media startup, BlueOrbit Media, is decoupling four microservices that run in private subnets across two Availability Zones and plans to use Amazon SQS for service-to-service messaging. The security team requires that traffic from the VPC to the SQS queues must not traverse the public internet or use public IP addresses. What is the most appropriate way to satisfy these requirements?

  • ❏ A. Attach an internet gateway and use public IPs to reach Amazon SQS

  • ❏ B. Create an interface VPC endpoint for Amazon SQS

  • ❏ C. AWS Direct Connect

  • ❏ D. Launch a NAT instance in a public subnet and route private subnets’ default traffic through it

Norfield Mutual, a regional insurer, runs several Microsoft Windows workloads on Amazon EC2, including .NET application servers and Microsoft SQL Server on Windows Server 2019, and needs a shared file system that is highly available and durable while delivering very high throughput and IOPS over SMB across the instances. Which approach should they choose to meet these requirements?

  • ❏ A. Extend the environment to Amazon Elastic File System with a Multi-AZ design and migrate the file shares to EFS

  • ❏ B. Provision Amazon FSx for Windows File Server in a Multi-AZ deployment and move the shared data to the FSx file system

  • ❏ C. Set up an Amazon S3 File Gateway and mount it on the EC2 instances to serve the share

  • ❏ D. Use Amazon FSx for NetApp ONTAP and create SMB shares integrated with Active Directory

A data engineering group at a fintech startup needs to store and run daily analytics on application log files. The number and size of the logs are unpredictable, and the data will be kept for no longer than 18 hours before deletion. Which storage class offers the most cost-effective option while keeping the data immediately accessible for processing?

  • ✓ B. Amazon S3 Standard

Amazon S3 Standard is the correct option for this scenario because it provides immediate access with no retrieval fees and it does not impose a minimum storage duration which makes it suitable for logs retained for only 18 hours.

Amazon S3 Standard keeps objects immediately available so analytics jobs can read data without retrieval delay. It does not carry minimum retention charges so you can delete objects within hours without extra cost. You can also apply a lifecycle rule to automatically remove objects after the short retention window to control spend when the number and size of logs are unpredictable.
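As a rough illustration, and assuming a hypothetical bucket named app-logs-demo, such a lifecycle rule could be applied with boto3 along these lines; S3 expiration works in whole days, so one day is the closest fit for an 18 hour window.

    import boto3

    s3 = boto3.client("s3")

    # Expire short-lived log objects one day after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket="app-logs-demo",  # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-short-lived-logs",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    "Expiration": {"Days": 1},
                }
            ]
        },
    )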

Amazon S3 Intelligent-Tiering is not ideal because it incurs per object monitoring and automation charges and its archive tiers have minimum storage durations which reduce cost effectiveness for data that exists only for hours.

Amazon S3 Glacier Deep Archive is intended for long term archival and it has long retrieval delays and retrieval fees which make it impractical for daily analytics that require fast access.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is unsuitable because it carries a 30 day minimum storage duration and retrieval fees which would waste cost when objects are deleted within hours.

Choose the immediately accessible storage class and configure lifecycle rules to auto-delete objects within the retention window so you avoid retrieval fees and minimum duration charges.

A regional broadcaster has spun off its content studio and needs to move the studio’s AWS account into a different AWS Organization that is managed by the acquiring company while keeping all resources with the account. What is the appropriate way to complete this transfer?

  • ✓ C. Use the AWS Organizations console to remove the account from the source organization and accept the invitation from the target organization

Use the AWS Organizations console to remove the account from the source organization and accept the invitation from the target organization is correct because it lets you move the existing AWS account into the acquiring organization while preserving the account’s resources, identities and configuration.

You remove the member account from the current organization and the destination organization sends an invitation which the account accepts. After acceptance the account becomes part of the new organization and its resources remain intact. Make sure you have the necessary management account permissions and review billing, service control policies and IAM roles as part of the move.
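Here is a minimal sketch of that invite and accept flow with boto3, using a placeholder account ID; each step runs under different credentials, which the comments call out, and in practice the member account would look up the handshake ID itself rather than receive it directly.

    import boto3

    # Step 1 - run in the studio account: leave the current organization
    # so the account becomes standalone.
    boto3.client("organizations").leave_organization()

    # Step 2 - run in the acquiring organization's management account:
    # invite the now standalone account (placeholder account ID).
    handshake = boto3.client("organizations").invite_account_to_organization(
        Target={"Id": "111122223333", "Type": "ACCOUNT"}
    )

    # Step 3 - run in the studio account again: accept the invitation.
    boto3.client("organizations").accept_handshake(
        HandshakeId=handshake["Handshake"]["Id"]
    )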

Create a new account in the target organization and replicate workloads with AWS Application Migration Service is wrong because it requires creating a separate account and performing full workload migration which increases effort and risk when a direct account transfer is supported.

Use AWS CloudFormation StackSets to relocate the account between organizations is incorrect because CloudFormation and StackSets manage resources inside accounts and they do not transfer account membership between organizations.

Create a new account in the target organization and share needed resources through AWS Resource Access Manager is not suitable because RAM only shares resources across accounts and does not change account ownership or organization membership so you would still have two accounts to manage.

Use the AWS Organizations invite and leave flow to move an entire account between organizations instead of migrating resources.

BlueHarbor Media runs a multi-account landing zone with AWS Organizations that spans over 80 AWS accounts. A solutions architect must restrict a central Amazon S3 bucket so that only principals from accounts that are members of this Organization can access it, and the team does not want to maintain an allow list of account IDs as accounts are frequently added or removed. What is the simplest approach to achieve this?

  • ✓ C. Use the aws:PrincipalOrgID global condition key in the S3 bucket policy to allow only principals from the Organization

Use the aws:PrincipalOrgID global condition key in the S3 bucket policy to allow only principals from the Organization is the correct option because it lets the bucket policy allow access only for principals whose accounts belong to the specified AWS Organization and it removes the need to maintain a manual list of account IDs.

Using aws:PrincipalOrgID in the S3 bucket policy lets you compare the organization ID of the account that owns the requesting principal against a single value, so access automatically follows organization membership changes. This approach scales to many accounts and avoids the ongoing operational overhead of updating policies as accounts are added or removed.
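For illustration only, with a placeholder bucket name and organization ID, a bucket policy statement using this condition key could be applied with boto3 roughly as follows.

    import json
    import boto3

    bucket = "central-shared-bucket"  # hypothetical bucket name
    org_id = "o-exampleorgid"         # placeholder organization ID

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowOrgMembersOnly",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
                # Access is allowed only when the calling principal belongs
                # to this AWS Organization.
                "Condition": {"StringEquals": {"aws:PrincipalOrgID": org_id}},
            }
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))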

Build an attribute-based access control model by tagging each account as in-organization and reference those tags in the S3 bucket policy is impractical because it depends on consistent tag application and governance across all accounts and it creates extra operational work to keep tags accurate.

AWS Resource Access Manager does not apply because RAM does not support sharing Amazon S3 buckets and it is not designed to restrict bucket access based on Organization membership.

Place all external accounts in an Organizational Unit and attach a Service Control Policy that denies access to the bucket is not effective because service control policies act as account level guardrails for member accounts only and they do not function as S3 resource policies, so they cannot express this bucket restriction and accounts outside the Organization cannot be placed in an Organizational Unit in the first place.

When you need to restrict a resource to members of an AWS Organization remember to use aws:PrincipalOrgID in the resource policy to avoid maintaining per account allow lists.

A regional healthcare consortium runs a microservices platform on Kubernetes in its private on-site data center. Strict compliance rules require that all protected health information and all compute remain within the facility. The platform team wants to refresh the Kubernetes environment and use AWS-managed capabilities such as automatic Kubernetes version updates, Amazon CloudWatch metrics and logs, and IAM-based authentication, without moving any application data or compute into the cloud. Which AWS approach best enables this modernization while ensuring every workload stays on premises?

  • ✓ C. Deploy an AWS Outposts rack in the data center and operate Amazon EKS Anywhere on the Outposts hardware to keep Kubernetes local while using AWS integrations

The best choice is Deploy an AWS Outposts rack in the data center and operate Amazon EKS Anywhere on the Outposts hardware to keep Kubernetes local while using AWS integrations. This keeps all compute and protected health information physically on site while letting the platform team use AWS managed APIs and services.

This option works because Outposts extends native AWS infrastructure into the customer facility and it supports Amazon EKS Anywhere running on the Outposts hardware. That combination preserves residency and compliance by keeping workloads and data on premises while enabling AWS integrations such as CloudWatch metrics and logs and IAM based authentication under a managed lifecycle.

Establish a dedicated AWS Direct Connect link to a region, run Amazon EKS in that region, and integrate with IAM and Amazon API Gateway for hybrid traffic is not acceptable because the Kubernetes cluster would run in the AWS Region and that would violate the requirement that all workloads and data remain on premises.

Run Amazon ECS on AWS Fargate in a nearby AWS Local Zone, stream logs to CloudWatch, connect to the data center over VPN, and mount on-prem network shares for data access fails the residency requirement because compute would execute in an AWS operated Local Zone and it also changes the platform from Kubernetes to ECS so it does not meet the platform expectations.

Install Amazon EKS Anywhere on existing on-premises servers, register the cluster with EKS Connector for visibility, and forward logs to Amazon CloudWatch keeps compute local but it does not deliver the same AWS managed lifecycle, automated upgrades, and fully integrated on premises control environment that an Outposts rack provides.

When you must keep all data and compute on premises choose a solution that brings AWS infrastructure into your facility such as AWS Outposts so you can use managed AWS APIs without moving workloads to the cloud.

A regional fintech firm runs 12 VPCs across three AWS accounts and needs a straightforward way to connect all VPCs with two on-premises data centers through a single central hub while keeping day-to-day management minimal. Which approach should the solutions architect choose to achieve the lowest operational overhead?

  • ✓ B. AWS Transit Gateway

The correct choice is AWS Transit Gateway. It provides a managed hub and spoke architecture that can connect all 12 VPCs across three accounts and both on premises data centers through a single central gateway while keeping daily operations minimal.

AWS Transit Gateway is a fully managed service so you do not have to run, patch, or scale your own router appliances. It supports multiple VPC attachments and integrations with Direct Connect and VPN so you can aggregate on premises connectivity and centralize routing in one place. The service also enables transitive routing through the gateway which simplifies route management and reduces the number of individual connections.
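As a sketch, creating the hub and attaching a single VPC with boto3 might look like the following; the VPC and subnet IDs are placeholders, the gateway must reach the available state before attachments succeed, and in a multi account setup you would share the gateway through AWS RAM before other accounts attach.

    import boto3

    ec2 = boto3.client("ec2")

    # Create the central hub with automatic route table association.
    tgw = ec2.create_transit_gateway(
        Description="central-hub",
        Options={"DefaultRouteTableAssociation": "enable"},
    )
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach one of the twelve VPCs (placeholder IDs); repeat per VPC.
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],
    )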

Fully meshed VPC peering is unsuitable because peering is non transitive and requires many point to point connections which quickly become unmanageable as the number of VPCs grows. Peering also does not support edge to edge routing, so on premises traffic that arrives in one VPC cannot reach other VPCs through a peering connection.

AWS Direct Connect gateway is useful for associating Direct Connect circuits with multiple VPCs but it does not provide inter VPC transit on its own. You would typically pair a Direct Connect gateway with AWS Transit Gateway to bring Direct Connect into a centralized hub rather than using the Direct Connect gateway as the sole transit solution.

Transit VPC Solution relies on customer managed or third party router instances running on EC2 which increases operational overhead for scaling, high availability, and software updates. That extra management makes it a poorer choice when the primary goal is to minimize day to day operations.

For minimal operational overhead choose AWS Transit Gateway for hub and spoke multi VPC and on premises aggregation and consider pairing it with Direct Connect for stable data center links.

Solara Health has migrated a monolithic job into two Amazon ECS services on AWS Fargate. Service A quickly parses incoming records, while Service B performs time-consuming enrichment that can build up during peak loads. The team wants the services to be loosely coupled so each can scale on its own and so surges from Service A are buffered without dropping work. What is the best way to connect the services to meet these goals?

  • ✓ C. Have Service A send messages to an Amazon SQS queue and have Service B poll and process messages from the queue

Have Service A send messages to an Amazon SQS queue and have Service B poll and process messages from the queue is correct because it provides asynchronous decoupling and durable buffering so the slower enrichment stage can catch up without losing work.

SQS offers a pull based consumption model that lets Service B control processing rate and scale independently from Service A. The queue holds messages durably so bursts from the parser are buffered and features like visibility timeout and dead letter queues support retries and failure handling.
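A minimal sketch of both sides with boto3, assuming a queue named enrichment-jobs already exists, looks roughly like this.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.get_queue_url(QueueName="enrichment-jobs")["QueueUrl"]  # hypothetical queue

    # Service A: enqueue a parsed record for later enrichment.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"record_id": "abc-123"}')

    # Service B: long-poll for work, process it, then delete on success so
    # failed items become visible again after the visibility timeout.
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        # ... perform the time-consuming enrichment here ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])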

Have Service A publish to an Amazon SNS topic and have Service B subscribe to the topic is push based and does not give Service B fine control over when to process messages or provide a persistent work backlog for smoothing uneven processing times.

Have Service A write payloads to an Amazon S3 bucket and use S3 event notifications to invoke Service B is event driven but S3 notifications are not a purpose built work queue and they do not provide simple consumer controlled backpressure or durable message semantics for processing pipelines.

Have Service A stream records to Amazon Kinesis Data Firehose and have Service B read from Firehose is not suitable because Firehose is designed to deliver to storage and analytics targets rather than to act as a direct consumer facing message queue. If you need streaming consumers consider Amazon Kinesis Data Streams instead.

When one service is much slower use queue based decoupling such as SQS for durable buffering and independent scaling. Use SNS when you need fanout to multiple subscribers and not consumer managed backpressure.

An edtech startup is moving a legacy three-tier web application from a private colocation facility to AWS. The stack includes a web layer, a business logic layer, and a MySQL backend. The team wants to avoid provisioning and maintaining any servers or clusters. Which services should the solutions architect choose for the application compute and the database to meet these requirements? (Choose 2)

  • ✓ B. Amazon RDS for MySQL

  • ✓ C. AWS Fargate

AWS Fargate and Amazon RDS for MySQL are the correct choices for compute and the database because they let the team run the web and application tiers and the MySQL backend without provisioning or maintaining servers or clusters.

AWS Fargate runs containers in a serverless manner so the team can deploy the web layer and business logic as container tasks without managing EC2 instances or worker nodes. This removes instance lifecycle and cluster administration from the operations burden while still supporting containerized application architectures.

Amazon RDS for MySQL provides a managed MySQL service with automated backups, patching, maintenance, and optional high availability so the team does not have to operate the database server itself. RDS gives a drop in managed relational backend for an existing MySQL schema and avoids manual database host management.
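To make the pairing concrete, here is a rough sketch with boto3; every identifier, credential, and the task definition are placeholders, and a real deployment would pull the password from a secret store rather than hardcode it.

    import boto3

    # Managed MySQL backend - no database hosts to operate.
    boto3.client("rds").create_db_instance(
        DBInstanceIdentifier="app-mysql",
        Engine="mysql",
        DBInstanceClass="db.t3.medium",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",  # placeholder only
        AllocatedStorage=50,
    )

    # Serverless container compute - the web and logic tiers run on Fargate,
    # so there are no EC2 instances or node groups to manage.
    boto3.client("ecs").create_service(
        cluster="app-cluster",
        serviceName="web-tier",
        taskDefinition="web-tier:1",  # assumed existing task definition
        desiredCount=2,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
    )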

Amazon DynamoDB is incorrect because it is a NoSQL key value and document database and it is not a direct replacement for a relational MySQL schema without application redesign.

Amazon EKS is incorrect because it requires creating and operating Kubernetes clusters and worker compute even though some control plane components are managed, so it does not meet the requirement to avoid cluster management.

Amazon EC2 Spot Instances are incorrect because they still require managing EC2 instances and handling interruptions, so they do not satisfy the goal of avoiding server provisioning and lifecycle management.

When a question says do not manage servers or clusters choose serverless or fully managed compute and managed relational database services and avoid answers that require instance or cluster administration.

A multinational retail logistics firm is replatforming dozens of internal applications from its two regional data centers to AWS. These workloads will run across about 18 AWS accounts that are governed centrally with AWS Organizations. The company manages all users, groups, and access control in its on-premises Microsoft Active Directory and wants to keep identity administration there. The team needs seamless single sign-on to every AWS account without duplicating identities or manually provisioning users. Which approach provides the most operational efficiency?

  • ✓ C. Configure AWS IAM Identity Center to use AWS Directory Service for Microsoft Active Directory (Enterprise) and establish a two-way forest trust with the on-premises AD to enable federation across all accounts

The correct answer is Configure AWS IAM Identity Center to use AWS Directory Service for Microsoft Active Directory (Enterprise) and establish a two-way forest trust with the on-premises AD to enable federation across all accounts. This approach keeps the on premises Active Directory as the authoritative identity source and enables seamless single sign on across all AWS accounts governed by AWS Organizations.

By deploying AWS IAM Identity Center with AWS Managed Microsoft AD and a two-way forest trust, users and groups remain managed on premises and do not need to be duplicated in the cloud. Identity Center can consume identities from the trusted managed AD so administrators can assign permission sets centrally across the 18 accounts and maintain consistent governance and least privilege policies.
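Assuming the AWS Managed Microsoft AD directory already exists, the two-way forest trust toward the on-premises forest could be established with boto3 roughly as shown below; the directory ID, domain name, DNS addresses, and trust password are all placeholders.

    import boto3

    # Create a two-way forest trust from AWS Managed Microsoft AD to the
    # on-premises forest (all identifiers below are placeholders).
    boto3.client("ds").create_trust(
        DirectoryId="d-1234567890",
        RemoteDomainName="corp.example.com",
        TrustPassword="SharedTrustSecret123!",
        TrustDirection="Two-Way",
        TrustType="Forest",
        ConditionalForwarderIpAddrs=["10.0.0.10", "10.0.0.11"],
    )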

Enable AWS IAM Identity Center and manually create users and groups there; assign permission sets per account and keep it synced with on-prem AD using custom PowerShell tasks is inefficient and it contradicts the requirement to keep identity administration on premises. Maintaining custom sync scripts adds operational risk and complexity and it is not the recommended method for enterprise scale SSO.

Use Amazon Cognito as the primary user store and federate to the on-premises Active Directory with a custom OIDC setup; grant multi-account access using identity pools and resource policies is targeted at application authentication rather than providing centralized AWS console SSO across many accounts. Cognito does not integrate with Identity Center permission sets so it does not meet the multi account access and administration needs described.

Connect IAM Identity Center to AWS Directory Service AD Connector pointing to the on-premises AD to provide SSO across accounts is a weaker fit because AD Connector only proxies authentication and lookup requests to the on-premises domain controllers, so every sign-in depends on the connection back to the data center and there is no resilient managed directory running in AWS, which makes it less suitable than the managed AD with a two-way forest trust for federating access across this many accounts.

When identities must remain on premises prefer a managed AD with a trust and connect it to AWS IAM Identity Center for centralized SSO and use AWS Organizations to assign permission sets across accounts.

A fintech startup in Berlin is building a serverless invoice processing service using AWS Lambda and an Amazon Aurora MySQL-Compatible cluster. The Lambda function must connect with a standard username and password, and the security team insists these secrets are never embedded in the deployment package or code. What is the most secure way to store the database credentials and supply them to the function at runtime?

  • ✓ B. Store the credentials as SecureString parameters in AWS Systems Manager Parameter Store and grant the function role permission to retrieve them

Store the credentials as SecureString parameters in AWS Systems Manager Parameter Store and grant the function role permission to retrieve them is the correct choice because it lets the Lambda function fetch encrypted username and password at runtime without embedding secrets in the deployment package.

Store the credentials as SecureString parameters in AWS Systems Manager Parameter Store and grant the function role permission to retrieve them stores values as SecureString which are encrypted with AWS KMS and protected by IAM policies. The Lambda function can retrieve the parameters at invocation using the SDK and the function execution role, so secrets are never hardcoded or packaged. This approach provides centralized access control and auditing and it is a purpose built pattern for runtime secret retrieval.
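A minimal Lambda handler sketch follows; the parameter names are hypothetical, and the execution role would need ssm:GetParameter plus kms:Decrypt on the key that protects the SecureString.

    import boto3

    ssm = boto3.client("ssm")

    def handler(event, context):
        # WithDecryption returns plaintext only to callers allowed by IAM
        # and the KMS key policy; parameter names are placeholders.
        user = ssm.get_parameter(Name="/invoices/db/username")["Parameter"]["Value"]
        password = ssm.get_parameter(
            Name="/invoices/db/password", WithDecryption=True
        )["Parameter"]["Value"]
        # ... open the Aurora MySQL connection with user and password here ...
        return {"status": "credentials loaded for " + user}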

Put the credentials into AWS Key Management Service and reference them via Lambda environment variables is incorrect because AWS KMS is for managing encryption keys not for storing secret text, and using Lambda environment variables as the primary secret store risks exposing credentials in configuration and does not replace a managed secret store.

Save the credentials in an Amazon S3 object encrypted with SSE-KMS and read them at startup using the function execution role is not ideal because S3 is not a purpose built secret store and this pattern adds operational complexity and a larger attack surface even when objects are encrypted.

Enable IAM database authentication with the AWSAuthenticationPlugin and map an IAM user inside MySQL does not meet the requirement because IAM DB authentication issues temporary tokens instead of the static database username and password that the question requires.

When a question forbids embedding secrets in code prefer a managed secret or parameter store with KMS encryption and IAM access controls and remember that Parameter Store SecureString is a common correct choice when rotation is not the focus.

A media analytics startup is moving a significant archive of critical files into Amazon S3. The objects will land in a versioned bucket in the eu-central-1 Region. The business requires that all objects are automatically copied to a different AWS Region to support disaster recovery. What configuration should the solutions architect implement to meet this requirement?

  • ✓ C. Create a second S3 bucket with versioning in another Region and enable S3 Replication across Regions

The correct option is Create a second S3 bucket with versioning in another Region and enable S3 Replication across Regions because this configuration automatically copies objects and their versions from the source bucket to a destination bucket in a different AWS Region for disaster recovery.

Create a second S3 bucket with versioning in another Region and enable S3 Replication across Regions works because Amazon S3 Replication performs asynchronous, cross-Region replication of object versions when you enable versioning on both the source and destination buckets and grant the required IAM permissions. Replication handles new objects and versioned updates without manual intervention and helps meet low recovery point objectives that require continuous copying rather than periodic exports.
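As an illustration with placeholder bucket names and an assumed replication role, the rule could be applied with boto3 like this once versioning is enabled on both buckets.

    import boto3

    boto3.client("s3").put_bucket_replication(
        Bucket="archive-source-eu-central-1",  # placeholder source bucket
        ReplicationConfiguration={
            # Assumed IAM role that S3 assumes to replicate objects.
            "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "dr-copy",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {
                        "Bucket": "arn:aws:s3:::archive-dr-eu-west-1"  # placeholder destination
                    },
                }
            ],
        },
    )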

Configure CORS between the source bucket and a bucket in another Region is incorrect because CORS only controls browser cross origin access and it does not move or duplicate objects between buckets.

Use AWS Backup with a backup plan to copy Amazon S3 backups to another Region is incorrect because AWS Backup creates scheduled backups and copies rather than providing continuous, object-level replication, and it may not meet stringent RPO requirements for immediate disaster recovery.

Enable S3 Multi-Region Access Points for the bucket is incorrect because Multi-Region Access Points provide a global endpoint and route traffic across Regions but they do not perform data replication, so you still need S3 Replication to copy objects to another Region.

Remember that replication requires versioning on both source and destination buckets and that features like CORS and Multi-Region Access Points do not copy objects.

A boutique travel publisher hosts a static marketing microsite on Amazon S3. Most visitors are located in the United States, Canada, and Mexico, and the company wants to lower latency for these users while keeping delivery costs as low as possible. Which approach should the team implement to meet these goals?

  • ✓ C. Provision an Amazon CloudFront distribution and configure the price class to use only the United States, Canada, and Mexico

Provision an Amazon CloudFront distribution and configure the price class to use only the United States, Canada, and Mexico is correct because it restricts which edge locations are used so the site is cached close to the North American viewers while keeping delivery costs lower than a full global footprint.

CloudFront serves static content from edge caches so latency is reduced for viewers in the targeted regions. Configuring the appropriate price class limits edge locations to the requested countries and therefore lowers data transfer and request costs compared to using all edges. A single S3 origin works well with CloudFront for a static microsite and the price class is the cost control for regional distribution.
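A rough sketch of creating such a distribution with boto3 follows; the origin domain and comment are placeholders, the cache policy ID refers to the managed CachingOptimized policy, and PriceClass_100 is the class that limits delivery to the lower cost edge regions that include North America.

    import time
    import boto3

    boto3.client("cloudfront").create_distribution(
        DistributionConfig={
            "CallerReference": str(time.time()),
            "Comment": "static microsite",
            "Enabled": True,
            # Cheapest price class; edges cover North America (and Europe).
            "PriceClass": "PriceClass_100",
            "Origins": {
                "Quantity": 1,
                "Items": [
                    {
                        "Id": "s3-microsite",
                        "DomainName": "microsite-bucket.s3.amazonaws.com",  # placeholder
                        "S3OriginConfig": {"OriginAccessIdentity": ""},
                    }
                ],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "s3-microsite",
                "ViewerProtocolPolicy": "redirect-to-https",
                # Managed "CachingOptimized" cache policy ID.
                "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
            },
        }
    )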

Attach Lambda@Edge functions to a CloudFront distribution to process requests near viewers is unnecessary because the site is static and no per-request compute is required. Adding Lambda@Edge increases complexity and costs without improving basic caching or regional delivery for static assets.

Create a CloudFront distribution with separate origins hosted in the United States, Canada, and Mexico is not needed because one S3 origin combined with CloudFront edge caching provides locality. Multiple origins add deployment complexity and do not materially lower delivery costs for this use case.

Enable CloudFront with the All Edge Locations price class for maximum performance worldwide gives the best global performance but it is more expensive. It is not a cost efficient choice when the audience is concentrated in the United States, Canada, and Mexico.

When viewers are concentrated in specific regions set the CloudFront price class to those regions to balance lower latency and lower cost.

An ad-tech company operates roughly 3,200 Amazon EC2 instances in production that run a proprietary application dependent on a commercial third-party component. The vendor has released an urgent security fix and the team must update the entire fleet as quickly as possible while maintaining visibility of patch compliance. What should the solutions architect do?

  • ✓ C. Configure AWS Systems Manager Patch Manager with an appropriate patch baseline and run an on-demand patch operation for the fleet

Configure AWS Systems Manager Patch Manager with an appropriate patch baseline and run an on-demand patch operation for the fleet is correct because it is purpose built for fast, large scale remediation and it provides centralized compliance visibility.

Patch Manager automates scanning, approval, and installation of operating system and supported software patches across Windows and Linux instances. It uses patch baselines and patch groups to control which updates apply and when, and it provides built in compliance reporting so the team can track remediation progress across all 3,200 instances.
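
As a sketch of an on-demand patch run, the snippet below invokes the managed AWS-RunPatchBaseline document through Run Command, which is how Patch Manager applies an immediate Install operation. The tag key and throttling values are assumptions to adapt to your fleet.

```python
import boto3

ssm = boto3.client("ssm")

# Target the fleet by tag (Environment=production is an assumption) and roll
# out gradually so a bad patch cannot take down the whole fleet at once.
response = ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["production"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
    MaxConcurrency="10%",
    MaxErrors="5%",
)
print("Patch command:", response["Command"]["CommandId"])
```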

Use AWS Systems Manager Run Command to execute a custom patch script across all managed instances can execute scripts across many instances but it does not provide patch baselines, automatic approval workflows, or built in compliance dashboards and that makes standardized urgent remediation slower and more error prone.

Create an AWS Systems Manager State Manager association that installs the patch on the fleet focuses on maintaining a desired state over time and it is better suited for continuous enforcement than for one off emergency patching. Patch Manager is the more appropriate choice for immediate fleet wide patch operations and reporting.

Write an AWS Lambda function that iterates through instances and applies the vendor patch introduces custom orchestration and operational risk and it lacks the native patch governance, auditing, and compliance reporting that Patch Manager provides.

When you need fast, fleet wide OS patching with compliance visibility choose Systems Manager Patch Manager and prefer built in patch baselines over custom scripts.

An engineer at Polaris FinTech is reviewing an AWS CloudFormation resource excerpt that reads SecurityGroupIngress with two entries IpProtocol tcp FromPort 443 ToPort 443 CidrIp 0.0.0.0/0 and IpProtocol tcp FromPort 22 ToPort 22 CidrIp 203.0.113.7/32. What behavior will these rules implement? (Choose 3)

  • ✓ C. It opens HTTPS to the world on port 443

  • ✓ E. It restricts SSH access to a single source IP address

  • ✓ G. It sets inbound rules for a security group

It opens HTTPS to the world on port 443, It restricts SSH access to a single source IP address, and It sets inbound rules for a security group are correct.

The CloudFormation snippet uses SecurityGroupIngress entries to define inbound permissions on a security group. The rule that uses 0.0.0.0/0 with port 443 allows HTTPS traffic from any IPv4 address, and the rule that uses 203.0.113.7/32 with port 22 allows SSH only from that single IP address. SecurityGroupIngress affects inbound traffic so the template is setting inbound rules for the security group.
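
For comparison, a hedged boto3 equivalent of those two ingress rules looks like this; the security group ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# "sg-0123456789abcdef0" is a placeholder security group ID.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {   # HTTPS open to every IPv4 address
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        },
        {   # SSH restricted to a single /32 source
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.7/32"}],
        },
    ],
)
```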

It configures the inbound rules of a network ACL is incorrect because network ACLs are subnet level constructs and use different CloudFormation resource types so SecurityGroupIngress does not configure NACLs.

AWS WAF is incorrect because the snippet does not reference a web ACL or any WAF resources so it is unrelated.

It affects the security group’s egress rules is incorrect because SecurityGroupIngress controls inbound rules and not outbound rules.

It only permits the literal IP 0.0.0.0 to access HTTPS is incorrect because the CIDR 0.0.0.0/0 represents all IPv4 addresses and not a single literal IP.

Remember that SecurityGroupIngress defines inbound rules and that 0.0.0.0/0 means all IPv4 addresses so always verify CIDR blocks before exposing ports to the internet.

A regional media streaming startup runs its transactional workload on Amazon RDS for MySQL. To absorb a holiday surge of read only reporting queries, the team added a read replica. During the surge, the replica averaged 70% CPU and the writer instance was around 65%. After the event, the replica now averages 22% CPU, while the primary remains steady near 65% most of the time. The company wants to reduce costs while keeping enough headroom for future growth. What should an architect do?

  • ✓ C. Resize the read replica to a smaller instance class and keep the primary instance as is

Resize the read replica to a smaller instance class and keep the primary instance as is is correct because the read replica is now lightly utilized at about 22 percent CPU while the primary remains around 65 percent CPU so rightsizing the replica saves cost and preserves write capacity on the primary.

The main reason to choose Resize the read replica to a smaller instance class and keep the primary instance as is is that the replica served its purpose during the surge and now shows significant unused capacity. Reducing the replica instance class lowers ongoing cost and still leaves room to add replicas or scale up again for future spikes. The primary should be left at its current size because it sustains most of the write load and needs headroom for growth and peak write activity.

Migrate the workload to Amazon Aurora MySQL and remove the read replica is not the best move because migration adds operational effort and risk and removing the replica would reduce read capacity for future spikes without guaranteeing lower cost in the current pattern.

Enable Multi-AZ on the primary instance and delete the read replica is incorrect because Multi-AZ provides availability and automatic failover and it does not increase read throughput. Deleting the replica would remove the read scaling option used during the surge.

Upgrade the read replica to a larger instance class and downgrade the primary instance misallocates resources because the writer is already moderately loaded and lowering its capacity could create write bottlenecks while the replica does not need more resources given its low utilization.

Focus on observed utilization and rightsize underused read replicas to cut cost while keeping the primary sized for sustained write load.

WaveCast Labs is rolling out a live streaming platform behind an Application Load Balancer that forwards requests to EC2 instances in an Auto Scaling group spanning two Availability Zones. Operations observes that when the load balancer marks a target as unhealthy, it is removed from the target group, yet the Auto Scaling group does not launch a replacement and overall capacity shrinks. Which configuration mismatch would most likely explain this behavior?

  • ✓ C. The Auto Scaling group is configured with EC2 instance status checks, while the Application Load Balancer relies on target group health checks

The Auto Scaling group is configured with EC2 instance status checks, while the Application Load Balancer relies on target group health checks is correct. The ALB can mark an instance as unhealthy and remove it from the target group because of failed application level checks while the ASG still sees the EC2 instance status checks as passing and so it does not replace the instance.

EC2 instance status checks monitor system and instance reachability and they do not reflect whether the application or endpoint is responding. ALB target group health checks probe application endpoints and they will deregister targets that fail those probes. When an ASG is configured to use only EC2 status checks it will not react to application level failures detected by the ALB. To have the ASG replace instances that the ALB deregisters, configure the ASG to use ELB target group health checks and set an appropriate health check grace period.
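
A minimal sketch of that fix with boto3, assuming an Auto Scaling group named web-asg and a five minute warm-up period:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",      # placeholder group name
    HealthCheckType="ELB",               # honor target group health checks
    HealthCheckGracePeriod=300,          # give new instances time to pass checks
)
```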

The Auto Scaling group uses Application Load Balancer health checks, and the Application Load Balancer evaluates EC2 instance status checks is incorrect because an ALB does not evaluate EC2 status checks and if the ASG used ALB health checks it would respond to targets removed by the load balancer.

Both the Auto Scaling group and the Application Load Balancer use EC2 instance status checks is incorrect because the ALB cannot use EC2 status checks and this setup would not cause the ALB to deregister targets for application level failures.

Both the Auto Scaling group and the Application Load Balancer use ALB target group health checks is incorrect because if both used the same target group health checks the ASG would replace unhealthy instances and overall capacity would not shrink as observed.

Match the ASG health check type to the load balancer so the ASG can replace instances that fail application level health checks.

A research intern at the European Southern Observatory is uploading a 4.5 GB deep-space image to an Amazon S3 bucket. The intern enabled Amazon S3 Transfer Acceleration to speed up the upload, but measurements show the transfer was not accelerated. Considering AWS billing for inbound data and Transfer Acceleration behavior, what will be charged for this upload?

  • ✓ C. No transfer charges apply for the upload

The correct answer is No transfer charges apply for the upload.

AWS does not bill for data uploaded into Amazon S3 from the public internet and S3 Transfer Acceleration fees are charged only when a transfer actually used the accelerated path and delivered acceleration. Because the intern’s upload did not get accelerated there are no inbound data transfer charges and there are no Transfer Acceleration fees for the 4.5 GB upload.

The option Only S3 data transfer charges apply for the upload is incorrect because data transferred into S3 from the internet is not billed by AWS.

The option Only S3 Transfer Acceleration charges apply for the upload is incorrect because S3 Transfer Acceleration only incurs fees when the transfer is actually accelerated relative to the standard S3 path.

The option Both S3 data transfer and S3 Transfer Acceleration charges apply for the upload is incorrect because neither inbound S3 data transfer charges nor S3 Transfer Acceleration fees apply when the upload was not accelerated.

Keep in mind that data transfer into S3 from the internet is free and that S3 Transfer Acceleration is billed only when acceleration actually occurs.

An analytics team at a travel booking startup operates five Amazon EC2 instances that issue mostly read queries against an Amazon RDS for PostgreSQL instance. Leadership requires a regional disaster recovery capability with minimal downtime and data loss if an entire AWS Region becomes unavailable. Which features should you implement to meet this requirement? (Choose 2)

  • ✓ B. Enable cross-Region automated backups on a Multi-AZ RDS for PostgreSQL deployment

  • ✓ D. Create a cross-Region Read Replica and plan to promote it during a Regional outage

Enable cross-Region automated backups on a Multi-AZ RDS for PostgreSQL deployment and Create a cross-Region Read Replica and plan to promote it during a Regional outage are the correct choices because they provide cross-Region recoverability and a warm standby that can be used if an entire AWS Region fails.

Enable cross-Region automated backups on a Multi-AZ RDS for PostgreSQL deployment lets you have automated backups copied to a different Region so you can perform point in time recovery outside the affected Region. This approach preserves recoverable snapshots and logs and lets you restore the database in another Region with minimal data loss.

Create a cross-Region Read Replica and plan to promote it during a Regional outage provides a read scale target that lives in another Region and can be promoted to a primary if the home Region becomes unavailable. Promoting the replica gives you a warm standby and faster recovery of service compared to rebuilding from backup alone.
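
A rough boto3 sketch of the replica portion, run from the disaster recovery Region; the identifiers, Regions, and instance class are assumptions, and an encrypted source would also require a KmsKeyId from the destination Region.

```python
import boto3

# Create the cross-Region replica from the destination (DR) Region.
rds_dr = boto3.client("rds", region_name="us-west-2")   # DR Region is an assumption

rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="bookings-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:bookings-primary",
    DBInstanceClass="db.r6g.large",
)

# During a Regional outage, promote the replica to a standalone writer:
# rds_dr.promote_read_replica(DBInstanceIdentifier="bookings-replica")
```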

Switch the RDS instance to Provisioned IOPS (SSD) storage instead of General Purpose storage improves disk performance and IOPS consistency but it does not replicate data to another Region and so it does not deliver regional disaster recovery.

Rely only on RDS Multi-AZ with automated backups retained in the same Region provides strong in-Region availability and failover to a standby within the same Region but it does not protect against a full Region outage so it does not meet a regional disaster recovery requirement.

Migrate the database to Amazon Aurora Global Database would offer cross-Region capabilities but it requires moving to the Aurora engine which is a migration task and out of scope for an RDS for PostgreSQL instance in this scenario. This makes it an impractical choice for the given constraint.

When a question asks for regional disaster recovery pick options that explicitly mention cross-Region replication or backup copies and not options that only improve performance or provide in-Region high availability.

A nationwide insurance firm runs an inbound claims contact center on AWS. The team wants to upgrade the system to automatically transcribe calls with multi-speaker separation and to run ad hoc queries on the transcripts to uncover operational patterns. Which solution will meet these requirements?

  • ✓ C. Use Amazon Transcribe for speaker diarization and transcription, then query the transcript data with Amazon Athena

The correct choice is Use Amazon Transcribe for speaker diarization and transcription, then query the transcript data with Amazon Athena. This option directly addresses the need for automatic multi speaker transcription and ad hoc queries over transcripts.

Amazon Transcribe performs speech to text conversion and supports speaker diarization so it separates multiple speakers and produces timestamped transcripts that can be stored in Amazon S3. Those transcripts can be queried with Amazon Athena which runs standard SQL directly against files in S3 in a serverless way so analysts can quickly uncover operational patterns without managing a database cluster.
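
A hedged sketch of the two halves of that pipeline; the bucket names, job name, and Athena database are placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")
athena = boto3.client("athena")

# 1. Transcribe a call recording with speaker diarization enabled.
transcribe.start_transcription_job(
    TranscriptionJobName="claims-call-0001",
    Media={"MediaFileUri": "s3://claims-audio/calls/call-0001.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="claims-transcripts",
    Settings={"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2},
)

# 2. Later, query the transcripts that land in S3 with standard SQL.
athena.start_query_execution(
    QueryString="SELECT speaker, COUNT(*) FROM call_transcripts GROUP BY speaker",
    QueryExecutionContext={"Database": "claims_analytics"},
    ResultConfiguration={"OutputLocation": "s3://claims-athena-results/"},
)
```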

Use Amazon Rekognition for multi-speaker detection, store transcripts in Amazon S3, and apply custom ML models for analytics is incorrect because Amazon Rekognition is intended for image and video analysis and it does not perform audio transcription or speaker diarization. Building custom ML could work but it adds unnecessary complexity when managed services cover the core requirements.

Use Amazon Comprehend for speaker recognition and sentiment, then load results into Amazon Redshift for SQL queries is incorrect because Amazon Comprehend analyzes text and cannot perform speech to text or separate speakers. Also Amazon Redshift is a data warehouse and it is not required for ad hoc queries over transcripts that can be analyzed directly in S3 with Athena.

Use Amazon Rekognition for speaker recognition and Amazon Textract to extract text from audio files in Amazon S3 is incorrect because Amazon Textract extracts text from document images and does not process audio. In addition Amazon Rekognition does not provide audio diarization so it cannot separate speakers in voice calls.

Match the task to the service and remember that Amazon Transcribe handles speech to text and diarization and that Amazon Athena lets you run ad hoc SQL over files stored in S3.

Orion eNotary processes digital signatures for highly sensitive legal agreements. To satisfy strict compliance rules, the company must encrypt the finalized documents using its own proprietary cryptographic algorithm rather than any AWS-managed algorithm. As they migrate the archive to Amazon S3, which encryption approach will allow them to continue using their custom cipher while storing these objects in S3?

  • ✓ B. Client-side encryption

Client-side encryption is the only correct choice because it allows Orion eNotary to apply its proprietary cipher before uploading objects so Amazon S3 only stores ciphertext and AWS never performs or controls the encryption algorithm or sees the plaintext.

Client-side encryption gives the company full control over the encryption process and the key lifecycle so the proprietary algorithm and key material remain on the client side. You encrypt files locally with your custom cipher then upload the resulting ciphertext to S3 which treats the objects as opaque data.
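
A minimal sketch of that flow, where encrypt_with_proprietary_cipher stands in for the company's own algorithm and the bucket and object names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

def encrypt_with_proprietary_cipher(plaintext: bytes) -> bytes:
    """Placeholder for the company's own cipher; AWS never sees this step."""
    raise NotImplementedError

with open("agreement-4821.pdf", "rb") as f:
    ciphertext = encrypt_with_proprietary_cipher(f.read())

# S3 stores only ciphertext; the keys and the cipher stay on the client side.
s3.put_object(
    Bucket="orion-signed-agreements",
    Key="agreement-4821.pdf.enc",
    Body=ciphertext,
)
```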

Server-side encryption with AWS KMS keys (SSE-KMS) is incorrect because AWS KMS performs encryption with AWS supported implementations and algorithms so you cannot substitute a custom cipher. KMS controls key usage and the service uses standard, managed algorithms.

Server-side encryption with Amazon S3 managed keys (SSE-S3) is incorrect because S3 manages both keys and the encryption algorithms and offers no mechanism to run a customer proprietary cipher during server side encryption.

Server-side encryption with customer-provided keys (SSE-C) is incorrect because although you provide key material S3 still executes encryption with its own implementation and it does not support custom cipher implementations. SSE-C also requires you to supply the key with each request which raises additional operational exposure.

When a requirement specifies a proprietary algorithm and AWS must not perform encryption choose client side encryption so you encrypt locally and retain control of keys and the cipher.

A regional accounting firm has lifted and shifted a three-tier web application to AWS. The containers for the web tier run on Linux-based Amazon EC2 instances and connect to a PostgreSQL database hosted on separate dedicated EC2 instances. Leadership wants to reduce operational effort and improve overall performance. What actions should the solutions architect recommend to achieve these goals? (Choose 2)

  • ✓ B. Run the containers on AWS Fargate using Amazon Elastic Container Service (Amazon ECS)

  • ✓ C. Move the PostgreSQL database to Amazon Aurora PostgreSQL-Compatible Edition

The correct choices are Run the containers on AWS Fargate using Amazon Elastic Container Service (Amazon ECS) and Move the PostgreSQL database to Amazon Aurora PostgreSQL-Compatible Edition.

Run the containers on AWS Fargate using Amazon Elastic Container Service (Amazon ECS) offloads provisioning and patching of container hosts and provides automatic scaling and tighter integration with load balancing which reduces day to day operations and improves reliability and responsiveness under load.

Move the PostgreSQL database to Amazon Aurora PostgreSQL-Compatible Edition replaces self managed PostgreSQL on EC2 with a managed, fault tolerant engine that offers storage auto scaling and read replicas and it typically delivers higher throughput and lower administrative effort than running PostgreSQL on EC2.

Create an Amazon CloudFront distribution to cache the site’s static assets can speed up static content delivery for users but it does not address the operational burden of managing container hosts or the database which are the main targets for reduction in this scenario.

Insert an Amazon ElastiCache cluster between the application and the database can reduce read latency for frequently accessed data but it still leaves the application and database running on EC2 so it does not significantly lower the overall operational overhead compared with using managed compute and a managed database.

Deploy the web tier and the database on the same EC2 instances to reduce hops increases coupling and resource contention and it reduces scalability and availability which makes operations harder and can degrade performance in production.

Choose managed compute and a managed database when refactoring a lift and shift to reduce operational effort and gain performance improvements.

A global biotech firm is moving from a collection of isolated AWS accounts to a governed multi-account structure. The company expects to spin up several dozen AWS accounts for different business units and wants all workforce access to authenticate against its existing on-premises directory. Which combination of actions should a solutions architect recommend to satisfy these requirements? (Choose 2)

  • ✓ B. Deploy AWS Directory Service and connect it to the on-premises directory, then use IAM Identity Center for cross-account sign-in

  • ✓ D. Create an organization in AWS Organizations with all features turned on, and create the new accounts under the organization

Deploy AWS Directory Service and connect it to the on-premises directory, then use IAM Identity Center for cross-account sign-in and Create an organization in AWS Organizations with all features turned on, and create the new accounts under the organization are correct because they together satisfy centralized workforce authentication and a governed multi account structure.

AWS Directory Service connects the existing on premises directory to AWS and IAM Identity Center provides single sign on and cross account sign in so workforce users can authenticate with their corporate credentials. Directory Service supports patterns such as AD Connector or Managed Microsoft AD to avoid migrating users and Identity Center maps those identities to permission sets across accounts.

AWS Organizations with all features turned on establishes the multi account foundation and enables centralized governance. AWS Organizations provides consolidated billing, service control policies, and organizational units so you can apply guardrails and automate account creation under a managed structure.

Configure AWS Transit Gateway to centralize connectivity between VPCs and accounts focuses on network connectivity and routing rather than directory federation and workforce authentication so it does not meet the authentication requirement.

Set up Amazon Cognito user pools and configure IAM Identity Center to trust Cognito is intended for application end users and custom user pools and is not the standard approach for federating a corporate directory for workforce access to AWS accounts.

Use AWS Control Tower to centrally manage accounts and enable IAM Identity Center can simplify account provisioning and baseline governance but it does not itself connect to an on premises directory. Directory federation via Directory Service and Identity Center is still required so this option alone is incomplete.

Use AWS Organizations for account governance and provision accounts, and use AWS Directory Service with IAM Identity Center to federate your on premises directory for workforce SSO across accounts.

A regional logistics startup runs its production workload on Amazon EC2. The instances must stay up without interruption from Wednesday through Monday. On Tuesdays, the same workload is required for only 6 hours and still cannot tolerate any interruptions. The team wants to minimize cost while meeting these needs. What is the most cost-effective approach?

  • ✓ C. Purchase Standard Reserved Instances for the Wednesday through Monday workload and use Scheduled Reserved Instances for the 6-hour Tuesday window

Purchase Standard Reserved Instances for the Wednesday through Monday workload and use Scheduled Reserved Instances for the 6-hour Tuesday window is correct because it matches the steady five day baseline with a reserved purchase and it covers the short, fixed Tuesday window with capacity that will not be interrupted.

Standard Reserved Instances provide the lowest effective hourly cost for a predictable, always on baseline and they ensure that the baseline capacity has committed pricing. Scheduled Reserved Instances are designed to cover a recurring, fixed weekly time slot so they deliver cost savings for the brief Tuesday run while guaranteeing capacity during that window.

Use Spot Instances for the 6-hour Tuesday workload and purchase Standard Reserved Instances for continuous operation from Wednesday through Monday is wrong because Spot Instances can be reclaimed at any time and they do not meet the non interruptible requirement for the Tuesday window.

Purchase Convertible Reserved Instances for the Wednesday through Monday workload and use Spot Instances for the Tuesday hours is wrong because Convertible Reserved Instances typically cost more than Standard Reserved Instances for a known steady pattern and Spot still fails the no interruption constraint.

Use Compute Savings Plans for the Wednesday through Monday workload and On-Demand Instances for the Tuesday usage is wrong because Savings Plans provide flexibility and a discount for steady spend but they do not give a reserved time slot for a short, fixed weekly window, and On-Demand offers neither a discount nor a capacity guarantee for the recurring Tuesday requirement.

Note that Scheduled Reserved Instances are an older scheduled offering and some modern designs favor Savings Plans or capacity reservations for baseline and fixed capacity needs. Exams may still present scheduled options in legacy scenarios so recognize both the historical Scheduled Instances approach and the current emphasis on Savings Plans and reservations.

Match the purchase option to the usage pattern. Choose Standard RIs for multi day steady baselines and avoid Spot when workloads cannot tolerate interruptions.

A real-time gaming startup uses Amazon ElastiCache for Redis in front of an Amazon Aurora MySQL cluster to accelerate reads. The team needs a resilient recovery approach for the cache tier that keeps downtime and data loss to an absolute minimum without hurting application latency. Which solution should they implement?

  • ✓ D. Enable a Multi-AZ Redis deployment with automatic failover

The correct choice is Enable a Multi-AZ Redis deployment with automatic failover because it provides continuous replication to replicas in other Availability Zones and automatically promotes a healthy replica if the primary fails. This design yields very low recovery time objectives and low recovery point objectives while keeping application latency impact to a minimum.

Enable a Multi-AZ Redis deployment with automatic failover achieves resilience by maintaining standby replicas in separate Availability Zones and by orchestrating fast, automated failover without manual intervention. This approach reduces the chance of data loss and keeps the cache available for real-time gaming traffic so that the application can continue to serve low-latency reads.
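
A hedged boto3 sketch of such a deployment; the group ID, node type, and replica count are assumptions.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="game-session-cache",
    ReplicationGroupDescription="Redis with Multi-AZ automatic failover",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=3,              # one primary plus two replicas in other AZs
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)
```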

The option Schedule automatic daily snapshots during the quietest traffic window is insufficient because periodic snapshots can leave up to a day of data unprotected and restores are time consuming and disruptive to latency.

The option Add read replicas for the Redis primary in separate Availability Zones to lower the risk of data loss improves read scalability but does not by itself provide automated detection and promotion of a failed primary, and replication lag can still create gaps in protection.

The option Perform manual backups using the Redis append-only file feature adds operational overhead and does not deliver rapid, automated failover for an AZ or node failure so it does not meet the low RTO and RPO requirements.

Remember choose Multi-AZ with automatic failover for in-Region high availability and minimal RTO and RPO and use snapshots or AOF for backups and Global Datastore for cross-Region disaster recovery.

A small fintech startup runs an AWS Lambda function in Account X that must read from and write to an Amazon S3 bucket owned by another AWS account. As the solutions architect, what is the most secure way to enable this cross-account access?

  • ✓ C. Create an IAM role for the Lambda function with S3 permissions and set it as the execution role, and update the bucket policy in the other account to allow that role

Create an IAM role for the Lambda function with S3 permissions and set it as the execution role, and update the bucket policy in the other account to allow that role is the correct choice because it requires granting permissions on both the principal and the resource so the Lambda in Account X can read from and write to the S3 bucket in the other account.

The Lambda execution role must have an IAM policy that allows the necessary S3 actions and the bucket in the other account must have a bucket policy that grants that role or its account those actions. Combining an execution role policy with a resource based bucket policy enforces least privilege and follows AWS best practices for secure cross account access.
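
A sketch of the resource side of that pairing, applied in the bucket owner's account; the account ID, role name, and bucket name are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")  # run with credentials from the bucket owner's account

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowLambdaExecutionRoleFromAccountX",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/lambda-s3-access-role"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::shared-data-bucket/*",
    }],
}

s3.put_bucket_policy(Bucket="shared-data-bucket", Policy=json.dumps(bucket_policy))
```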

Create an IAM role for the Lambda function with S3 permissions and set it as the execution role; that will provide cross-account access by itself is wrong because an IAM role in the caller account alone does not grant access to a bucket in another account when the bucket policy denies or does not allow that principal.

Make the S3 bucket public so the Lambda function in the other account can reach it is wrong because making the bucket public is insecure and unnecessary and S3 Block Public Access and least privilege practices make this an unsuitable solution.

AWS Lambda cannot access resources across accounts; use identity federation instead is incorrect because Lambda can access cross account resources by using IAM execution roles and resource policies without requiring identity federation.

When you see a cross account access question remember to check both the caller IAM role and the resource policy. The exam usually expects an answer that updates both sides rather than making resources public.

RiverShop Marketplace aggregates listings from independent vendors and stores each seller’s product description as a plain text file in an Amazon S3 bucket. The platform ingests roughly 800 new descriptions per day, and some contain ingredients for consumables such as pet treats, supplements, or drinks. The company wants a fully automated workflow that extracts ingredient names from each new file and then queries an Amazon DynamoDB table of precomputed safety ratings for those ingredients; submissions that are not consumables or are malformed can be skipped without affecting the application. The team has no machine learning specialists and seeks the lowest-cost option with minimal operational effort. Which approach most cost-effectively satisfies these requirements?

  • ✓ C. Configure S3 Event Notifications to invoke an AWS Lambda function that uses Amazon Comprehend custom entity recognition to extract ingredient entities and then looks up scores in DynamoDB

Configure S3 Event Notifications to invoke an AWS Lambda function that uses Amazon Comprehend custom entity recognition to extract ingredient entities and then looks up scores in DynamoDB is the correct choice. This option implements an event driven serverless pipeline that extracts ingredient names and queries precomputed safety scores without requiring in house machine learning expertise.

The S3 Event Notifications plus Lambda plus Comprehend approach is appropriate because Amazon Comprehend provides managed entity recognition so the team does not need to build or retrain models. The solution fits an automated, low cost pattern because S3 events trigger short lived Lambda invocations and DynamoDB lookups are low latency and inexpensive. The Lambda function can simply skip files that return no ingredient entities which prevents malformed or non consumable uploads from affecting the application.
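
A hedged sketch of the Lambda handler; the table name, endpoint ARN, and attribute names are assumptions, and the custom recognizer must already be trained and hosted on a real-time endpoint.

```python
import boto3

comprehend = boto3.client("comprehend")
s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("IngredientSafetyScores")  # assumed table name

# Placeholder ARN for the hosted custom entity recognizer endpoint.
ENDPOINT_ARN = "arn:aws:comprehend:us-east-1:111122223333:entity-recognizer-endpoint/ingredients"

def handler(event, context):
    for record in event["Records"]:                     # S3 event notification batch
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        entities = comprehend.detect_entities(Text=text, EndpointArn=ENDPOINT_ARN)["Entities"]
        if not entities:
            continue                                    # not a consumable or malformed; skip it

        for entity in entities:
            item = table.get_item(Key={"ingredient": entity["Text"].lower()}).get("Item")
            # ... use the precomputed safety rating here
```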

Deploy a custom NLP model on Amazon SageMaker and trigger an AWS Lambda function via Amazon EventBridge to invoke the endpoint and write results to DynamoDB, retraining the model monthly with open-source labels is incorrect because it requires ML expertise and ongoing training and hosting costs. This approach is heavier to operate and more expensive than using a managed NLP service for a use case that does not need custom model development.

Use Amazon Textract from an S3-triggered Lambda function to read each uploaded text file and apply keyword matching to derive ingredient names before updating DynamoDB is incorrect because Textract is designed for extracting text from images and scanned documents and not for entity recognition on plain text. Relying on brittle keyword matching increases false positives and maintenance effort compared with managed entity extraction.

Run Amazon Lookout for Vision on the uploaded files via a Lambda trigger and publish results to clients using Amazon API Gateway is incorrect because Lookout for Vision targets image anomaly detection and not natural language processing of text files. It cannot extract ingredient entities from plain text and so it is not suitable for this scenario.

Favor fully managed NLP like Amazon Comprehend for teams with no ML specialists and use event driven serverless integrations with S3 and AWS Lambda to keep cost and operations minimal.

An energy analytics firm needs to run large-scale risk simulations on Amazon EC2. The 25 TB input library resides in Amazon S3 and is updated every 30 minutes. The team requires a high-throughput, POSIX-compliant file system that can seamlessly present S3 objects as files and write results back to the bucket. Which AWS storage service should they choose?

  • ✓ C. Amazon FSx for Lustre

Amazon FSx for Lustre is the correct choice because it is built for high performance computing and analytics and it can link an Amazon S3 data repository so S3 objects appear as POSIX files and results can be exported back to the bucket.

Amazon FSx for Lustre provides a POSIX compliant file system with massive parallel throughput and low latency which suits EC2 based large scale simulations. It can mount on Linux instances and stream data from S3 on demand so the 25 TB input library can be presented as files and updates can be synchronized back to the bucket after processing.
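
A hedged sketch of creating a Lustre file system linked to the S3 library; the subnet ID, capacity, deployment type, and bucket paths are assumptions.

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=26400,                       # GiB, sized above the 25 TB library
    SubnetIds=["subnet-0123456789abcdef0"],      # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://risk-simulation-inputs",            # present S3 objects as files
        "ExportPath": "s3://risk-simulation-inputs/results",    # write results back to S3
    },
)
```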

Amazon Elastic File System (EFS) is a general purpose NFS file system that offers shared storage for many workloads but it does not natively project S3 objects as files and it is not optimized for the parallel high throughput requirements of HPC simulations.

AWS Storage Gateway File Gateway exposes S3 as NFS or SMB for hybrid on premises access and object upload but it is not a parallel POSIX file system tuned for EC2 clusters and it is not intended for high throughput analytics pipelines.

Amazon FSx for Windows File Server targets Windows SMB workloads and does not integrate with S3 to present objects as POSIX files and it is not suitable for Linux based HPC or Lustre style parallel I O.

For exam scenarios where you must present S3 data as POSIX files and need parallel high throughput on EC2 choose FSx for Lustre and link an S3 data repository so you can stream inputs and export outputs efficiently.

An online ticketing platform gathers customer comments through embedded forms in its mobile and web apps. During major on-sale events or incident notifications, submissions can spike to about 12,000 per hour. Today the messages are sent to a shared support inbox for manual review. The company wants an automated pipeline that ingests feedback at scale, performs sentiment analysis quickly, and retains the insights for 12 months for trend reporting. Which approach best meets these goals with minimal operational overhead?

  • ✓ C. Create a REST API in Amazon API Gateway that sends incoming feedback to an Amazon SQS queue; use AWS Lambda to process queued items, analyze sentiment with Amazon Comprehend, and persist results in a DynamoDB table named FeedbackTrends with a per-item TTL of 12 months

The best choice is Create a REST API in Amazon API Gateway that sends incoming feedback to an Amazon SQS queue; use AWS Lambda to process queued items, analyze sentiment with Amazon Comprehend, and persist results in a DynamoDB table named FeedbackTrends with a per-item TTL of 12 months. This option is correct because it is fully serverless and it decouples ingestion from processing with a durable buffer so the system can absorb spikes and scale automatically.

Create a REST API in Amazon API Gateway that sends incoming feedback to an Amazon SQS queue; use AWS Lambda to process queued items, analyze sentiment with Amazon Comprehend, and persist results in a DynamoDB table named FeedbackTrends with a per-item TTL of 12 months uses SQS as a durable buffer to handle bursty traffic, Lambda to process items at scale without server management, Amazon Comprehend to perform fast, purpose built sentiment analysis, and DynamoDB TTL to enforce the 12 month retention with minimal operational overhead.
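
A minimal sketch of the Lambda consumer at the heart of that pipeline; the message fields and the TTL attribute name are assumptions that must match your API payload and table configuration.

```python
import json
import time
import boto3

comprehend = boto3.client("comprehend")
table = boto3.resource("dynamodb").Table("FeedbackTrends")

TTL_SECONDS = 365 * 24 * 60 * 60   # 12 month retention enforced by DynamoDB TTL

def handler(event, context):
    for record in event["Records"]:                 # SQS batch delivered to Lambda
        feedback = json.loads(record["body"])
        sentiment = comprehend.detect_sentiment(
            Text=feedback["comment"],               # assumed payload field
            LanguageCode="en",
        )
        table.put_item(Item={
            "feedback_id": record["messageId"],
            "sentiment": sentiment["Sentiment"],
            "expires_at": int(time.time()) + TTL_SECONDS,   # assumed TTL attribute
        })
```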

Route all feedback to Amazon Kinesis Data Streams, use an AWS Lambda consumer to batch records, call Amazon Translate to identify language and normalize to English, then index the results in Amazon OpenSearch Service with an ISM policy to purge after 12 months is not ideal because Amazon Translate and OpenSearch focus on language translation and indexing for search rather than quick sentiment analytics, and using OpenSearch increases management and cost compared with a serverless analytics pipeline.

Build a service on Amazon EC2 that accepts feedback and writes raw records to a DynamoDB table named TicketFeedbackRaw; from the EC2 app, invoke Amazon Comprehend for sentiment and store outcomes in a second table TicketFeedbackInsights with a TTL of 365 days would work functionally but requires provisioning and operating EC2 instances which adds patching and scaling overhead and reduces the elasticity and simplicity offered by serverless alternatives.

Use Amazon EventBridge to capture feedback events and trigger an AWS Step Functions workflow that runs validation Lambdas, calls Amazon Transcribe to convert text into audio for archiving, and stores data in Amazon RDS with records deleted after 1 year via a lifecycle job is a poor fit because Amazon Transcribe is a speech to text service and is unnecessary for text feedback, Step Functions and RDS add orchestration and operational complexity, and RDS is not as cost effective or scalable for very bursty write workloads as a serverless datastore.

Favor serverless ingestion patterns like API Gateway + SQS + Lambda for bursty traffic and choose purpose built services such as Amazon Comprehend for sentiment and DynamoDB TTL for automated retention.

A geospatial research lab plans to move archived datasets from its data center into a POSIX-compliant file system in AWS. The archives are only needed for read access for roughly 8 days each year. Which AWS service would be the most cost-effective choice for this use case?

  • ✓ B. Amazon EFS Infrequent Access

The correct choice is Amazon EFS Infrequent Access because the lab needs a POSIX compliant file system and the archives are read only a few days per year so lower storage cost for cold files is the priority.

Amazon EFS Infrequent Access gives a native POSIX and NFS mountable file system so existing workflows can read files without changes and lifecycle management moves seldom accessed files to lower cost storage while preserving access semantics. This combination delivers the needed file system behavior and much lower storage charges for data that is rarely accessed.
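
A short sketch of the lifecycle setting that moves cold files to EFS Infrequent Access; the file system ID is a placeholder.

```python
import boto3

efs = boto3.client("efs")

# Files not accessed for 30 days move to the Infrequent Access storage class
# while remaining mountable over NFS with POSIX semantics.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```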

Amazon EFS Standard does meet the POSIX requirement but it charges active storage rates for all files so it is less cost effective for archives that sit idle most of the year.

Amazon S3 Standard-IA provides lower cost object storage for infrequent access but it is not a POSIX file system and using it would require object APIs or a gateway which adds complexity and may not satisfy a direct mount requirement.

Amazon S3 Standard is general purpose object storage with higher availability and frequent access pricing and it does not provide a native POSIX interface so it does not meet the functional requirement.

When a workload explicitly requires a mountable POSIX file system and access is rare choose Amazon EFS Infrequent Access to minimize storage cost and avoid object storage classes if a native file system is needed.

A genomics startup runs tightly coupled simulations on Amazon EC2 instances spread across three Availability Zones in a single Region. The jobs perform thousands of small read and write operations per second and require a shared POSIX-compliant file system. The team has chosen Amazon Elastic File System for its elasticity and managed operations. To minimize latency and avoid unnecessary inter-AZ traffic, how should the architect configure access from the EC2 instances to the EFS file system?

  • ✓ C. Deploy an Amazon EFS mount target in each used Availability Zone and have instances mount via the local AZ mount target

The correct choice is Deploy an Amazon EFS mount target in each used Availability Zone and have instances mount via the local AZ mount target. This option ensures that EC2 instances connect to a mount target that is local to their Availability Zone.

Deploy an Amazon EFS mount target in each used Availability Zone and have instances mount via the local AZ mount target creates an ENI in every AZ so NFS traffic remains within the AZ. Keeping traffic local reduces latency and avoids cross AZ data transfer charges and it increases resilience because a problem in one AZ does not force cross AZ NFS access for all instances.
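
A sketch of provisioning one mount target per Availability Zone; the file system, subnet, and security group IDs are placeholders.

```python
import boto3

efs = boto3.client("efs")

# One mount target per AZ keeps NFS traffic local to each instance's zone.
for subnet_id in ["subnet-use1-az1", "subnet-use1-az2", "subnet-use1-az3"]:
    efs.create_mount_target(
        FileSystemId="fs-0123456789abcdef0",
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```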

Create one Amazon EFS mount target in a single AZ and have all instances in other AZs mount through that endpoint is inefficient because it forces cross AZ NFS traffic which adds latency and incurs inter AZ transfer costs and it creates a dependency on a single AZ that reduces availability.

Use Mountpoint for Amazon S3 on every instance and mount a shared S3 bucket as the common storage is unsuitable because Amazon S3 is object storage and it is not POSIX compliant. S3 cannot provide the low latency small I/O behavior and POSIX semantics that tightly coupled simulations require.

Run an EC2-hosted NFS proxy in each AZ in front of Amazon EFS and have all instances mount through the proxy adds operational complexity and extra network hops and it can become a performance bottleneck and a management burden compared with using native EFS mount targets.

Create a mount target in every AZ and mount to the local AZ endpoint to minimize latency and avoid cross AZ charges. If a question points to extreme HPC needs consider Amazon FSx for Lustre as an alternative when the scenario suggests a different file system.

A fintech startup runs a web portal where customers upload identification documents, which are saved to an Amazon S3 bucket in the ap-southeast-1 Region. The team wants to route uploads through Amazon CloudFront under a custom domain to accelerate performance and keep the bucket private. The design must terminate HTTPS on the custom domain and ensure only CloudFront can access the bucket for upload requests. Which actions should be implemented? (Choose 2)

  • ✓ B. Configure CloudFront origin access control for the S3 origin and restrict bucket access so only CloudFront can perform PUT and POST

  • ✓ D. Request a public certificate from AWS Certificate Manager in the us-east-1 Region and associate it with the CloudFront distribution for the custom domain

The correct options are Configure CloudFront origin access control for the S3 origin and restrict bucket access so only CloudFront can perform PUT and POST and Request a public certificate from AWS Certificate Manager in the us-east-1 Region and associate it with the CloudFront distribution for the custom domain.

Configure CloudFront origin access control for the S3 origin and restrict bucket access so only CloudFront can perform PUT and POST is correct because Origin Access Control allows CloudFront to sign origin requests with SigV4 and you can enforce an S3 bucket policy that blocks direct public write access while permitting CloudFront to forward uploads securely.

Request a public certificate from AWS Certificate Manager in the us-east-1 Region and associate it with the CloudFront distribution for the custom domain is correct because CloudFront requires ACM public certificates issued in the us-east-1 Region for custom domain HTTPS termination.
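
A hedged sketch of those two pieces; the domain name and origin access control name are placeholders.

```python
import boto3

# CloudFront only accepts ACM public certificates issued in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")
acm.request_certificate(
    DomainName="uploads.example.com",   # placeholder custom domain
    ValidationMethod="DNS",
)

# Origin access control lets CloudFront sign S3 origin requests with SigV4.
cloudfront = boto3.client("cloudfront")
cloudfront.create_origin_access_control(
    OriginAccessControlConfig={
        "Name": "uploads-oac",
        "SigningProtocol": "sigv4",
        "SigningBehavior": "always",
        "OriginAccessControlOriginType": "s3",
    }
)
```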

Request a public certificate from AWS Certificate Manager in the ap-southeast-1 Region and attach it to the CloudFront distribution is wrong because ACM certificates created outside us-east-1 cannot be used by CloudFront for custom domain HTTPS.

Create a CloudFront distribution that uses the S3 static website endpoint as the origin to support uploads is wrong because S3 website endpoints are intended for static website hosting and do not provide the secure, signed SigV4 write path needed for PUT and POST uploads through CloudFront.

Use Amazon API Gateway and AWS Lambda in front of S3 for uploads and keep CloudFront only for downloads is wrong because this does not satisfy the requirement to route uploads through CloudFront under the custom domain and it adds extra complexity and latency without addressing the CloudFront certificate and private-bucket access pattern.

Remember that ACM certificates used by CloudFront must be in us-east-1 and prefer Origin Access Control to sign and restrict S3 uploads through CloudFront.

A geospatial analytics startup ingests transactional event logs into an Amazon S3 bucket. Analysts query the newest data heavily for the first 5 days. After that period, the objects must remain instantly accessible with high availability for occasional ad hoc analysis. Which storage approach is the most cost-effective while meeting these requirements?

  • ✓ C. Use an S3 Lifecycle rule to transition objects to S3 Standard-Infrequent Access 30 days after creation

The correct choice is Use an S3 Lifecycle rule to transition objects to S3 Standard-Infrequent Access 30 days after creation. This option preserves immediate access and high availability while lowering storage cost for data that is accessed only occasionally after the initial heavy query period.

Use an S3 Lifecycle rule to transition objects to S3 Standard-Infrequent Access 30 days after creation is appropriate because Standard-Infrequent Access offers multi Availability Zone durability and supports instant retrieval so analysts can run ad hoc queries without restore delays. The class is priced for infrequent access and has a 30 day minimum storage duration so scheduling the transition at 30 days avoids early transition penalties and aligns costs with access patterns.
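
A minimal sketch of that lifecycle rule; the bucket name and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Transitioning at 30 days respects the Standard-IA minimum storage duration.
s3.put_bucket_lifecycle_configuration(
    Bucket="event-log-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "logs-to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)
```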

Create an S3 Lifecycle rule to move objects to S3 One Zone-Infrequent Access after 7 days is not suitable because One Zone IA stores data in a single Availability Zone and that does not meet the high availability requirement.

Configure an S3 Lifecycle rule to transition objects to S3 Standard-Infrequent Access after 7 days is incorrect because Standard-IA has a 30 day minimum storage duration and transitioning at 7 days would violate that constraint and be cost inefficient.

Transition objects to S3 Glacier Flexible Retrieval after 7 days is not acceptable because Glacier Flexible Retrieval requires restore operations that introduce minutes to hours of latency and that prevents immediate access for ad hoc analysis.

Remember that storage classes with a 30 day minimum should be transitioned at or after 30 days to avoid extra charges and to preserve immediate, multi Availability Zone access for occasional queries.

LumenCart, a global e-commerce marketplace, runs its personalized product-ranking service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in a single AWS Region. The application computes per-user results from recent browsing activity and returns dynamic responses to tens of millions of shoppers worldwide. The company wants a cost-efficient way to improve responsiveness and scale without building full multi-Region stacks, while keeping latency low for users around the world. What should the company do?

  • ✓ B. Place an Amazon CloudFront distribution in front of the existing ALB and tune dynamic caching and origin keep-alives to accelerate requests globally

Place an Amazon CloudFront distribution in front of the existing ALB and tune dynamic caching and origin keep-alives to accelerate requests globally is the best choice because CloudFront’s global edge network lowers latency even for dynamic content by optimizing connections, reusing TCP sessions to the origin, and caching the cacheable portions such as short TTL API responses and static assets. It does this with minimal changes to the existing single-Region stack and with cost-effective pricing.

Use AWS Global Accelerator to route client traffic to the current ALB and Auto Scaling group in the Region nearest to each user is less suitable because it accelerates network paths and provides anycast entry points but does not cache responses. It is most valuable when you already have or plan multi-Region endpoints, so here it adds cost without solving the caching need.

Deploy additional stacks in multiple Regions and use Amazon Route 53 latency-based routing to direct users to the closest Region would improve latency but conflicts with the goal of cost optimization and avoiding multi-Region build-outs, increasing operational overhead and infrastructure spend.

Put an Amazon API Gateway edge-optimized endpoint in front of the ALB to improve global performance is not ideal because it does not offer edge caching for dynamic web responses and introduces additional layers and cost that do not directly address global content acceleration.

When you see a single-Region dynamic application that needs global low latency with cost-optimized and minimal change requirements, think CloudFront in front of an ALB; reserve Global Accelerator primarily for multi-Region, TCP/UDP acceleration and fast failover scenarios.

A multinational automotive parts supplier plans to migrate most of its on-premises file shares and object repositories to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server using an online approach that reduces manual effort, speeds up transfers, and keeps costs down. Which solution is the best choice to automate and accelerate these data movements to the AWS storage targets?

  • ✓ C. AWS DataSync

The best solution is AWS DataSync. It is purpose built for online data movement and it automates scheduling and verification while accelerating transfers directly into Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server.

AWS DataSync uses optimized agents and a high performance transfer protocol to handle large scale migrations with incremental syncs and built in data validation. It supports NFS and SMB sources and can write to S3, EFS, and FSx, which makes it a single service to automate and accelerate the required migrations while keeping manual effort and costs down.

AWS Snowball Edge Storage Optimized is designed for offline physical data transfer and edge compute and it is appropriate when network bandwidth is insufficient. It does not match the requirement for an online automated accelerated transfer workflow.

AWS Transfer Family provides managed SFTP FTPS and FTP endpoints and it integrates with S3 and EFS. It does not natively migrate data into Amazon FSx for Windows File Server and it lacks the orchestration and bulk migration acceleration features of AWS DataSync.

AWS Storage Gateway File Gateway exposes SMB or NFS interfaces backed by S3 and it is intended for hybrid access and caching rather than bulk online migration into EFS and FSx for Windows File Server.

When a question specifies online, automated, and accelerated transfers to S3, EFS, and FSx, choose the service that supports all targets and provides scheduling and verification. Use Snowball Edge only for offline bulk migrations.

A digital media startup, BlueOrbit Media, is decoupling four microservices that run in private subnets across two Availability Zones and plans to use Amazon SQS for service-to-service messaging. The security team requires that traffic from the VPC to the SQS queues must not traverse the public internet or use public IP addresses. What is the most appropriate way to satisfy these requirements?

  • ✓ B. Create an interface VPC endpoint for Amazon SQS

Create an interface VPC endpoint for Amazon SQS is the correct choice because it provides private connectivity from the VPC to Amazon SQS using AWS PrivateLink and ensures traffic remains on the AWS network without public IPs or internet routing.

An interface VPC endpoint for Amazon SQS provisions elastic network interfaces in your subnets and routes SQS API traffic over the AWS backbone. This keeps traffic inside AWS, works across Availability Zones, and avoids the need for an internet gateway or public addresses.
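
A sketch of creating that endpoint with boto3; the Region, VPC, subnet, and security group IDs are assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # Region is an assumption

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-az1-private", "subnet-az2-private"],   # one per AZ
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,   # resolve the standard SQS endpoint name privately
)
```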

Attach an internet gateway and use public IPs to reach Amazon SQS is incorrect because using an internet gateway requires public IP addressing and sends traffic over the public internet which conflicts with the security requirement.

AWS Direct Connect is incorrect because Direct Connect is aimed at linking on premises networks to AWS and it is not the simple VPC scoped private access mechanism that an interface endpoint provides for SQS.

Launch a NAT instance in a public subnet and route private subnets’ default traffic through it is incorrect because a NAT instance still sends outbound traffic to the internet using public IPs and it adds operational and scaling overhead while failing to keep traffic solely on the AWS backbone.

When an exam scenario requires private VPC to AWS service access with no public IPs think interface VPC endpoints and AWS PrivateLink rather than IGWs or NATs.

Norfield Mutual, a regional insurer, runs several Microsoft Windows workloads on Amazon EC2, including .NET application servers and Microsoft SQL Server on Windows Server 2019, and needs a shared file system that is highly available and durable while delivering very high throughput and IOPS over SMB across the instances. Which approach should they choose to meet these requirements?

  • ✓ B. Provision Amazon FSx for Windows File Server in a Multi-AZ deployment and move the shared data to the FSx file system

Provision Amazon FSx for Windows File Server in a Multi-AZ deployment and move the shared data to the FSx file system is the correct option for Norfield Mutual because it is purpose built for Windows workloads and provides SMB access with Active Directory integration, Windows ACL support, and Multi AZ deployments for high availability while delivering the throughput and IOPS needed by .NET application servers and SQL Server.

FSx for Windows File Server supports native SMB semantics and Windows security models which preserves permissions and file locking behavior required by Windows applications. It offers managed, high performing file storage and a Multi AZ deployment option that keeps file data durable and available across failures. These characteristics make it the best fit for shared Windows file systems on EC2 Windows instances.
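
A hedged sketch of a Multi-AZ FSx for Windows File Server deployment; the subnet IDs, directory ID, and sizing values are assumptions.

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=2048,                     # GiB, placeholder sizing
    StorageType="SSD",
    SubnetIds=["subnet-az1", "subnet-az2"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",       # standby file server in a second AZ
        "ThroughputCapacity": 512,            # MB/s, placeholder sizing
        "PreferredSubnetId": "subnet-az1",
        "ActiveDirectoryId": "d-0123456789",  # AWS Managed Microsoft AD (assumption)
    },
)
```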

Extend the environment to Amazon Elastic File System with a Multi-AZ design and migrate the file shares to EFS is not suitable because Amazon EFS is an NFS file system that is optimized for Linux clients and it does not provide native SMB features and Windows ACL semantics required by Windows workloads.

Set up an Amazon S3 File Gateway and mount it on the EC2 instances to serve the share is not a good fit because the Storage Gateway with S3 targets object storage and hybrid access and it is not optimized for the Windows file semantics, low latency, and high IOPS that these server workloads require.

Use Amazon FSx for NetApp ONTAP and create SMB shares integrated with Active Directory can provide SMB and AD integration and can meet performance needs, but it is not the most straightforward choice for native Windows file services when FSx for Windows File Server exists specifically to deliver native Windows SMB features and simplified management. FSx for NetApp ONTAP is more appropriate when NetApp specific data management features are required.

Remember that when an exam scenario needs SMB, Windows ACLs, and AD integration for EC2 Windows workloads you should choose FSx for Windows File Server over EFS or S3 based gateways.

Chart showing cloud career titles and salaries

Step 1: read the exam objectives

Begin with the blueprint for the Solutions Architect Associate exam. The guide spells out domains and weightings which are secure design, resilient design, high performing design, and cost optimized design. It also clarifies in scope services such as VPC, IAM, KMS, Route 53, ELB, Auto Scaling, CloudFront, RDS, Aurora, DynamoDB, S3, EFS, EBS, and many more.

If you are transitioning from the Cloud Practitioner, note that the architect exam expects deeper design reasoning. It is also useful to compare what comes next such as the Solutions Architect Professional and specialties like the ML Specialty and the AI Practitioner.

Mapping objectives to a personal backlog keeps you focused. Sites like Scrumtuous and technical write ups on MCNZ can help you turn the blueprint into a sprint plan.

Step 2: do practice exams before studying

Complete a set of practice questions before you dive into lessons. This shows how AWS frames scenarios and reveals blind spots. If you want a quick warm up, a bank such as this Udemy practice exam collection builds exam stamina. Pair that with architect specific sets from your preferred provider.

Practicing early helps you spot recurring services like IAM, KMS, VPC, ALB, Auto Scaling, RDS Multi AZ, and S3 lifecycle policies. It also trains your eye for phrases such as most cost effective, highest availability, or least operational effort.

If you study across clouds, awareness of GCP certifications and roles such as Associate Cloud Engineer, Professional Cloud Architect, Professional Cloud Developer, and Professional DevOps Engineer can help you see common architecture patterns.

Step 3: take a course

Commit to a structured course that covers the design domains and then reinforce with targeted labs. Build a study path that borrows from related AWS tracks such as Security, DevOps, Developer, and Data Engineer. If you are preparing for future ML workloads, add context from Machine Learning as well.

If you like to broaden into multi-cloud, map topics to GCP roles such as Professional Cloud Security Engineer, Professional Cloud Network Engineer, Professional Data Engineer, Professional Database Engineer, Workspace Administrator, Data Practitioner, ML Engineer, and Generative AI Leader.

For pacing and accountability, many learners follow the sprint themed advice on Scrumtuous and complement with deep dives from MCNZ.

Step 4: do simple hands on projects in the AWS console

Hands-on practice cements design trade-offs. Keep projects small and inexpensive while targeting the blueprint.

  1. Design a VPC with public and private subnets, route tables, NAT gateway patterns, security groups, and network ACLs. Add an Application Load Balancer and EC2 Auto Scaling policy to serve a simple app.

  2. Create an Amazon RDS instance with Multi-AZ, enable automated backups, test read replicas, and add Amazon RDS Proxy for connection scaling.

  3. Set up Amazon S3 with lifecycle rules, Intelligent-Tiering, bucket policies, and default encryption with AWS KMS. Front a static site with Amazon CloudFront and use Amazon Route 53 for routing. A minimal sketch of the bucket setup follows this list.

  4. Build an event-driven pattern with Amazon SQS and Amazon SNS, coordinate steps with AWS Step Functions, and expose an API with Amazon API Gateway and AWS Lambda. A second sketch below shows a starting point for the Lambda handler.

  5. Stream data with Amazon Kinesis, transform with AWS Glue, and batch process with Amazon EMR, then query with Amazon Athena and visualize with Amazon QuickSight.
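To make the third project concrete, here is a minimal boto3 sketch, assuming a bucket name and KMS key alias that are purely placeholders. It transitions objects under a prefix to Intelligent-Tiering, expires them after a year, and turns on default KMS encryption.

```python
# Sketch for project 3: lifecycle rules plus default KMS encryption on an
# S3 bucket. The bucket name and key alias are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "my-study-bucket-example"   # placeholder name

# Transition objects under logs/ to Intelligent-Tiering after 30 days
# and expire them after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)

# Default encryption with a customer managed KMS key.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/my-study-key",  # placeholder alias
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```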

These projects mirror core scenarios on the Solutions Architect Associate exam and prepare you for questions like which design meets the requirement with the least operational effort or the lowest cost.
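For the event-driven pattern in the fourth project, the Lambda side can start as small as the sketch below. It assumes an SQS event source mapping and a JSON message body with an illustrative orderId field.

```python
# Sketch for project 4: a Lambda handler invoked by an SQS event source
# mapping. The orderId field is illustrative only.
import json

def lambda_handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        # Placeholder business logic: a real project might write the item
        # to DynamoDB or hand it to a Step Functions workflow here.
        print(f"processing order {body.get('orderId')} from {record['eventSourceARN']}")
        processed += 1
    return {"processed": processed}
```

Wire the queue as an event source for the function, publish a test message through SNS, and you have the skeleton of the fan-out pattern the exam likes to probe.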

Step 5: get serious about mock exams

When your study is solid, spend full days on mock exams. Do a complete set, review every answer, and repeat. Use your notes to explain each correct option and why the distractors are wrong. You can also build question stamina with resources like the Udemy practice exam collection and then switch to architect focused sets.

Average salary, by AWS certification

Your exam day strategy

On exam day use a consistent routine and trust your preparation.

  • Read each question carefully and watch for keywords like most secure, least effort, or lowest cost.

  • Eliminate the clear distractors first, which often leaves two viable choices.

  • Prefer managed services where requirements allow since they reduce undifferentiated work.

  • Complete a fast first pass and flag questions to revisit. Use remaining time to analyze the tricky scenarios.

  • Answer every question. A guess is better than leaving it blank.

  • Track your time and aim to finish the first pass with at least twenty minutes left for review.

  • Use later questions as clues. A later scenario sometimes clarifies an earlier one.

This approach helped me make two complete passes through the exam and leave with confidence. There are always variables on test day, but a clear plan minimizes risk and improves your chance of passing on the first attempt.

Maybe go multi-cloud?

After passing the SAA C03, you can target the Solutions Architect Professional or branch into Security, DevOps, Developer, Data Engineer, Machine Learning, or AI Practitioner.

If you are building a multi-cloud perspective, explore GCP certifications such as Associate Cloud Engineer, Professional Cloud Architect, Professional Cloud Developer, Professional DevOps Engineer, Professional Cloud Security Engineer, Professional Cloud Network Engineer, Professional Data Engineer, Professional Database Engineer, Workspace Administrator, Data Practitioner, ML Engineer, and Generative AI Leader.


Next Steps

So what’s next?

A great way to secure your employment or even open the door to new opportunities is to get certified.

If you’re interested in AWS products, here are a few great resources to help you get Cloud Practitioner, Solutions Architect, Machine Learning, and DevOps certified from AWS:

Put your career on overdrive and get AWS certified today!