Professional Solutions Architect Practice Exam
Question 1
Ardent Mutual is moving users from traditional desktops to Amazon WorkSpaces with zero clients so that staff can access claims processing applications, and the security policy mandates that connections must originate only from corporate branch networks while a new branch will go live within 45 days. Which approach will satisfy these controls while providing the highest operational efficiency?
-
❏ A. Create an AWS Client VPN endpoint and require WorkSpaces users to connect through the VPN from branch networks
-
❏ B. Publish a custom WorkSpaces image that uses the Windows Firewall to allow only the public IP ranges of the branches
-
❏ C. Configure a WorkSpaces IP access control group with the public IP addresses of the office sites and attach it to the directory
-
❏ D. Issue device certificates with AWS Certificate Manager to office endpoints and enable device trust on the WorkSpaces directory
Question 2
LumaPress, a global publishing firm with operations on several continents, runs fifteen separate AWS accounts that are administered by local IT teams. The headquarters governance team must gain centralized visibility and enforce consistent security standards across every member account. The architect has already enabled AWS Organizations and all member accounts have joined. What should be done next to allow the headquarters team to audit and manage the security posture across these accounts?
-
❏ A. Create an IAM policy named SecurityAudit in each member account and attach it to the regional administrator groups
-
❏ B. Create a SecurityAudit IAM role in every member account and set a trust policy that allows the management account to assume it
-
❏ C. Provision a dedicated IAM user for the headquarters team in each member account with AdministratorAccess
-
❏ D. Google Cloud Organization Policy
Question 3
As part of a staged move out of a corporate data center into AWS, a solutions architect at Northstar Analytics must automatically discover network dependencies over the last 21 days for their on premises Linux virtual machines that run on VMware and produce a diagram that includes host IP addresses, hostnames, and TCP connection details. Which approach should be used to accomplish this?
-
❏ A. Use the AWS Application Discovery Service Agentless Collector for VMware to gather server inventory and then export topology images from AWS Migration Hub as .png files
-
❏ B. Deploy the AWS Application Discovery Service agent on the Linux servers after setting an AWS Migration Hub home Region and grant the service permission to publish discovery data so network diagrams can be created
-
❏ C. Install the AWS Systems Manager Agent on the VMs and enable Inventory and Application Manager to capture software and relationship data across the fleet
-
❏ D. Install the AWS Application Migration Service replication agent on the VMs and use Workload Discovery on AWS to build network diagrams from data stored in Migration Hub
Question 4
Helios Imaging is launching a mobile app that uploads pictures for processing, and usage jumps to 25 times normal during evening beta runs. The solutions architect needs a design that scales image processing elastically and also notifies users on their devices when their jobs complete. Which actions should the architect implement to meet these requirements? (Choose 3)
-
❏ A. Publish a message to Amazon SNS to send a mobile push notification when processing is finished
-
❏ B. Configure Amazon MQ as the target for S3 notifications so that workers can read from the broker
-
❏ C. Process each image in an AWS Lambda function that is invoked by messages from the SQS queue
-
❏ D. Use S3 Batch Operations to process objects individually when a message arrives in the queue
-
❏ E. Have the mobile app upload directly to Amazon S3 and configure an S3 event that posts messages to an Amazon SQS standard queue
-
❏ F. Send a push notification to the app by using Amazon Simple Email Service when processing is complete
Question 5
Bryer Logistics operates an Amazon FSx for Windows File Server that was deployed as Single-AZ 2 for back-office file shares. A new corporate standard now requires highly available file storage across Availability Zones for all teams. The operations group must also observe storage and performance on the file system and capture detailed end-user access events on the FSx share for audit. Which combination of changes and monitoring should the team implement? (Choose 2)
-
❏ A. Recreate the file system as Single-AZ 1 and move data with AWS DataSync, then depend on snapshots for continuity
-
❏ B. Use Amazon CloudWatch to watch capacity and performance metrics for FSx and enable file access auditing to publish user access events to CloudWatch Logs or stream them with Amazon Kinesis Data Firehose
-
❏ C. Track file system activity with AWS CloudTrail and log end-user actions to CloudWatch Logs
-
❏ D. Set up a new Amazon FSx for Windows File Server with Multi-AZ, copy data with AWS DataSync, update client mappings, and trigger a controlled failover by changing the file system throughput capacity
-
❏ E. Test Multi-AZ failover by modifying or deleting the elastic network interfaces that FSx created
Question 6
A regional credit union needs the ability to shift 420 employees to remote work within hours during a crisis. Their environment includes Windows and Linux desktops that run office productivity and messaging applications. The solution must integrate with the company’s existing on premises Active Directory so employees keep their current credentials. It must also enforce multifactor authentication and present a desktop experience that closely resembles what users already use. Which AWS solution best satisfies these needs?
-
❏ A. Use Amazon AppStream 2.0 to stream applications and customize a desktop style image while connecting to the data center over a site to site VPN and integrating identity with Active Directory Federation Services
-
❏ B. Deploy Amazon WorkSpaces and link it to the on premises Active Directory through an AD Connector over a site to site VPN and enable MFA by configuring a RADIUS server for WorkSpaces
-
❏ C. Provision Amazon WorkSpaces and connect it to the on premises network with a VPN and an AD Connector and enable MFA directly in the WorkSpaces console without a RADIUS service
-
❏ D. Use Amazon WorkSpaces Web with SAML federation to Active Directory Federation Services and require MFA at the identity provider while publishing access over a site to site VPN
Question 7
Evergreen Fabrication needs an inexpensive Amazon S3 based backup for its data center file shares that must present NFS to on-premises servers. The business wants the data to transition to an archive tier after 7 days and it accepts that disaster recovery restores can take several days. Which approach best satisfies these needs at the lowest cost?
-
❏ A. Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Standard Infrequent Access after 7 days
-
❏ B. Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Glacier Deep Archive after 7 days
-
❏ C. Deploy AWS Storage Gateway volume gateway with an S3 bucket and configure a lifecycle rule to move objects to S3 Glacier Deep Archive after 7 days
-
❏ D. Migrate the file shares to Amazon EFS and enable EFS lifecycle to infrequent access to reduce costs
Question 8
PixelForge Studios runs several cross platform titles that track session state, player profiles, match history, and a global scoreboard, and the company plans to migrate these systems to AWS to handle tens of millions of concurrent players and API requests while keeping latency in the low single digit milliseconds. The engineering group needs an in memory data layer that can power a highly available real time and personalized leaderboard at internet scale. Which approach should they implement? (Choose 2)
-
❏ A. Build the leaderboard on Amazon Neptune to model player relationships and query results quickly
-
❏ B. Run the leaderboard on Amazon DynamoDB fronted by DynamoDB Accelerator DAX to achieve in-memory reads with low latency at scale
-
❏ C. Use Google Cloud Spanner to host score data for globally consistent and highly available storage
-
❏ D. Implement the scoreboard on Amazon ElastiCache for Redis to keep rankings in memory and serve lookups in near real time
-
❏ E. Store scores directly in Amazon DynamoDB and query the table for ranking
Question 9
The platform engineering team at MeridianLabs needs to manage AWS infrastructure as code to support a rapid scale up. Their current footprint runs in one AWS Region, but their near term plan requires standardized deployments into three AWS Regions and six AWS accounts with centralized governance. What should the solutions architect implement to meet these requirements?
-
❏ A. Author AWS CloudFormation templates, attach per account IAM policies, and run the templates in each target Region independently
-
❏ B. Use AWS Organizations with AWS Control Tower, then push CloudFormation stacks from the management account to every account
-
❏ C. Adopt AWS Organizations with AWS CloudFormation StackSets, assign a delegated administrator, and distribute one template across many accounts and Regions from a single control point
-
❏ D. Publish standardized products in AWS Service Catalog and share them to all accounts so each team can launch the products in their own Region
Question 10
BrightBazaar, an online marketplace, runs a static site on Amazon S3 served through Amazon CloudFront, with Amazon API Gateway invoking AWS Lambda for cart and checkout, and the functions write to an Amazon RDS for MySQL cluster that currently uses On-Demand instances with steady utilization for the last 18 months. Security scans and logs show repeated SQL injection and other web exploit attempts, customers report slower checkout during traffic spikes, and the team observes Lambda cold starts at peak times. The company wants to keep latency low under bursty load, reduce database spend given the stable usage pattern, and add targeted protection against SQL injection and similar attacks. Which approach best satisfies these goals?
-
❏ A. Increase the memory of the Lambda functions, migrate the transactional database to Amazon Redshift, and integrate Amazon Inspector with CloudFront
-
❏ B. Configure Lambda with provisioned concurrency for anticipated surges, purchase RDS Reserved Instances for the MySQL cluster, and attach AWS WAF to the CloudFront distribution with managed rules for SQL injection and common exploits
-
❏ C. Raise Lambda timeouts during peak hours, switch the database to RDS Reserved Instances, and subscribe to AWS Shield Advanced on CloudFront
-
❏ D. Migrate the workload to Google Cloud SQL and place the site behind Cloud CDN with security policies enforced by Cloud Armor

Question 11
SummitPay processes purchase events in Amazon DynamoDB tables and the risk operations group needs to detect suspicious activity and requires that every item change be captured and available for analysis within 45 minutes. What should a Solutions Architect implement to satisfy this requirement while enabling near real-time anomaly detection and alerting?
-
❏ A. Forward data changes to Google Cloud Pub/Sub and run streaming detection in Dataflow with notifications through Cloud Functions
-
❏ B. Enable DynamoDB Streams on the tables and trigger an AWS Lambda function that writes the change records to Amazon Kinesis Data Streams then analyze anomalies with Amazon Kinesis Data Analytics and send alerts with Amazon SNS
-
❏ C. Export the DynamoDB tables to Apache Hive on Amazon EMR every hour and run batch queries to flag anomalies then publish Amazon SNS notifications
-
❏ D. Use AWS CloudTrail to record all DynamoDB write APIs and create Amazon SNS notifications using CloudTrail event filters when behavior looks suspicious
Question 12
North Coast Furnishings plans to move an undocumented set of VMware vSphere virtual machines into AWS for a consolidation effort and the team needs an easy way to automatically discover and inventory the VMs for migration planning with minimal ongoing work. Which approach best meets these needs while keeping operational overhead low?
-
❏ A. Install the AWS Application Migration Service agent on every VM then aggregate configuration and performance data into Amazon Redshift and build dashboards in Amazon QuickSight
-
❏ B. Deploy the agentless Migration Evaluator collector to the ESXi hypervisor layer and review the results in Migration Evaluator then exclude idle VMs and send the inventory to AWS Migration Hub
-
❏ C. Export the vCenter inventory to a .csv file and manually check disk and CPU usage for each server then import the subset into AWS Application Migration Service and migrate any remaining systems with AWS Server Migration Service
-
❏ D. Run Google Migration Center discovery in the data center and use its right sizing report to plan the move
Question 13
The media platform team at VistaMosaic Media is experiencing failures in its image transformation workflow as uploaded files now average 85 MB. A Python 3.11 Lambda function reacts to Amazon S3 object created events, downloads the file from one bucket, performs the transformation, writes the output to another bucket, and updates a DynamoDB table named FramesCatalog. The Lambda frequently reaches the 900 second maximum timeout and the company wants a redesign that prevents these timeouts while avoiding any server management. Which combination of changes will satisfy these goals? (Choose 2)
-
❏ A. Set up an AWS Step Functions workflow that invokes the existing Lambda in parallel and raise its provisioned concurrency
-
❏ B. Migrate images to Amazon EFS and move metadata to Amazon RDS then mount the EFS file system from the Lambda function
-
❏ C. Create an Amazon ECS task definition for AWS Fargate that uses the container image from Amazon ECR and update the Lambda to start a task when a new object lands in Amazon S3
-
❏ D. Create an Amazon ECS task definition with the EC2 launch type and have the Lambda trigger those tasks on file arrival
-
❏ E. Build a container image of the processor and publish it to Amazon ECR
Question 14
BlueOrbit Media runs analytics, recommendations and video processing on AWS and ingests about 25 TB of VPC Flow Logs each day to reveal cross Region traffic and place dependent services together for better performance. The logs are written to Amazon Kinesis Data Streams and that stream is configured as the source for a Kinesis Data Firehose delivery stream that forwards to an S3 bucket. The team installed Kinesis Agent on a new set of network appliances and pointed the agent at the same Firehose delivery stream, but no records appear at the destination. As the Solutions Architect Professional, what is the most likely root cause?
-
❏ A. Kinesis Agent can only publish to Kinesis Data Streams and cannot send directly to Kinesis Data Firehose
-
❏ B. Kinesis Data Firehose has reached a scaling limit and requires manual capacity increases
-
❏ C. A Firehose delivery stream that uses a Kinesis data stream as its source does not accept direct writes from Kinesis Agent
-
❏ D. The IAM role used by the Kinesis Agent is missing firehose:PutRecord permissions
Question 15
Harvest Logistics operates a three tier workload in a single AWS Region and must implement disaster recovery with a recovery time objective of 45 minutes and a recovery point objective of 4 minutes for the database layer. The web and API tiers run on stateless Amazon EC2 Auto Scaling groups and the data layer is a 24 TB Amazon Aurora MySQL cluster. Which combination of actions will meet these objectives while keeping costs under control? (Choose 2)
-
❏ A. Use AWS Database Migration Service to continuously replicate from Aurora to an Amazon RDS MySQL instance in another Region
-
❏ B. Run a minimal hot standby of the web and application layers in another Region and scale out during failover
-
❏ C. Configure manual snapshots of the Aurora cluster every 4 minutes and copy them to another Region
-
❏ D. Provision an Aurora MySQL cross Region read replica and plan to promote it during a disaster
-
❏ E. Enable Multi AZ for the Aurora cluster and rely on automatic backups for recovery
Question 16
Marston Media purchased four boutique agencies and rolled their accounts into a single AWS Organization, yet the accounting team cannot easily produce combined cost views for every subsidiary and they want to feed the results into their own reporting application. What approach will enable consistent cross entity cost reporting for their self managed app?
-
❏ A. Use the AWS Price List Query API to fetch per account pricing data and create a saved view in AWS Cost Explorer for the finance group
-
❏ B. Publish an AWS Cost and Usage Report at the organization level and enable cost allocation tags and cost categories and have the accounting team rely on a reusable report in AWS Cost Explorer
-
❏ C. Create an organization wide AWS Cost and Usage Report and turn on cost allocation tags and cost categories and query the CUR with Amazon Athena using an external table named acct_rollup_ledger and build and share an Amazon QuickSight dataset for the finance team
-
❏ D. Configure AWS Billing Conductor to define billing groups for each subsidiary and export the pro forma charges for the self managed application to ingest
Question 17
A DevOps team at Riverbend Labs is trying to scale a compute intensive workload by adding more Amazon EC2 instances to an existing cluster placement group in a single Availability Zone, but the launches fail with an insufficient capacity error. What should the solutions architect do to troubleshoot this situation?
-
❏ A. Use a spread placement group to distribute instances across distinct hardware in multiple Availability Zones
-
❏ B. Create a new placement group and attempt to merge it with the existing placement group
-
❏ C. Stop and then start all instances in the placement group and try the launches again
-
❏ D. Google Compute Engine
Question 18
StreamForge Media is moving its catalog metadata API to a serverless design on AWS. About 30 percent of its older set top boxes fail when certain response headers are present and the on premises load balancer currently strips those headers according to the User Agent. The team must preserve this behavior for at least the next 12 months while directing requests to different Lambda functions based on request type. Which architecture should they implement to satisfy these goals?
-
❏ A. Build an Amazon API Gateway REST API that integrates with multiple AWS Lambda functions per route and customize gateway responses to remove the offending headers for requests from specific User Agent values
-
❏ B. Place an Amazon CloudFront distribution in front of an Application Load Balancer and have the ALB invoke the appropriate AWS Lambda function per request type and use a CloudFront Function to strip the problematic headers when the User Agent matches legacy clients
-
❏ C. Use Google Cloud CDN with Cloud Load Balancing and Cloud Functions to route traffic and remove the headers for legacy user agents
-
❏ D. Front the service with Amazon CloudFront and an Application Load Balancer and have a Lambda@Edge function delete the problematic headers during viewer responses based on the User Agent
Question 19
RoadGrid is a logistics SaaS that runs a multi-tenant fleet tracking platform using shared Amazon DynamoDB tables with AWS Lambda handling requests. Each carrier includes a unique carrier_id in every API call. The business wants to introduce tiered billing that charges tenants based on actual DynamoDB consumption for both reads and writes. The finance team already ingests hourly AWS Cost and Usage Reports into a centralized payer account and plans to use those reports for monthly chargebacks. They need the most precise and economical approach that requires minimal ongoing maintenance to measure and allocate DynamoDB usage per tenant. Which approach should they choose?
-
❏ A. Enable DynamoDB Streams and process them with a separate Lambda to extract carrier_id and item size on every write then aggregate writes per tenant and map that to the overall DynamoDB bill
-
❏ B. Use AWS Application Cost Profiler to ingest per tenant usage records from the app and produce allocated DynamoDB costs by customer
-
❏ C. Log structured JSON from the Lambda handlers that includes carrier_id plus computed RCUs and WCUs for each request into CloudWatch Logs and run a scheduled Lambda to aggregate by tenant and pull the monthly DynamoDB spend from the Cost Explorer API to allocate costs by proportion
-
❏ D. Apply a cost allocation tag for carrier_id to the shared DynamoDB table and activate the tag in the billing console then query the CUR by tag to analyze tenant consumption and cost
Question 20
Riverbend Media recently rolled out SAML 2.0 single sign on using its on premises identity provider to grant workforce access to its AWS accounts. The architect validated the setup with a lab account through the federation portal and the console opened successfully. Later three pilot users tried to sign in through the same portal and they could not get access to the AWS environment. What should the architect verify to confirm that the identity federation is configured correctly? (Choose 3)
-
❏ A. Enable Google Cloud Identity Aware Proxy in front of the federation portal to pass authenticated headers to AWS
-
❏ B. Configure each IAM role that federated users will assume to trust the SAML provider as the principal
-
❏ C. Ensure the identity provider issues SAML assertions that map users or groups to specific IAM roles with the required permissions
-
❏ D. Verify that the federation portal calls the AWS STS AssumeRoleWithSAML API with the SAML provider ARN the target role ARN and the SAML assertion from the identity provider
-
❏ E. Require IAM users to enter a time based one time password during portal sign in
-
❏ F. Add the on premises identity provider as an event source in AWS STS to process authentication requests
AWS Solutions Architect Professional Practice Exam Answered

All questions come from my Solutions Architect Udemy Course and certificationexams.pro
Question 1
Ardent Mutual is moving users from traditional desktops to Amazon WorkSpaces with zero clients so that staff can access claims processing applications, and the security policy mandates that connections must originate only from corporate branch networks while a new branch will go live within 45 days. Which approach will satisfy these controls while providing the highest operational efficiency?
-
✓ C. Configure a WorkSpaces IP access control group with the public IP addresses of the office sites and attach it to the directory
The correct option is Configure a WorkSpaces IP access control group with the public IP addresses of the office sites and attach it to the directory.
WorkSpaces IP access control groups enforce source IP allow lists at the WorkSpaces service edge, which ensures that streaming sessions can be initiated only from the specified corporate branch public IP ranges. Because the control is applied to the directory, it covers all WorkSpaces registered to that directory and is managed centrally. It is straightforward to add the new branch public IPs when they become available, which aligns with the 45 day timeline, and changes take effect quickly without reimaging WorkSpaces, installing agents, or changing user workflows. This approach also works with zero clients since enforcement occurs in the service rather than on the device.
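As a rough boto3 sketch of this pattern, with hypothetical branch CIDRs, group name, and directory ID, the control could be created, attached, and later extended for the new branch like this. Note that the same steps can be performed in the WorkSpaces console.

```python
import boto3

workspaces = boto3.client("workspaces")

# Create an IP access control group that allows only the branch public IP ranges.
group = workspaces.create_ip_group(
    GroupName="branch-only-access",  # hypothetical name
    GroupDesc="Allow WorkSpaces streaming from corporate branches only",
    UserRules=[
        {"ipRule": "203.0.113.0/24", "ruleDesc": "Branch A"},   # example ranges
        {"ipRule": "198.51.100.0/24", "ruleDesc": "Branch B"},
    ],
)

# Attach the group to the WorkSpaces directory so it applies to every WorkSpace in it.
workspaces.associate_ip_groups(
    DirectoryId="d-9067xxxxxx",  # hypothetical directory ID
    GroupIds=[group["GroupId"]],
)

# When the new branch goes live, update the group. The call replaces the full
# rule set, so include the existing ranges plus the new one.
workspaces.update_rules_of_ip_group(
    GroupId=group["GroupId"],
    UserRules=[
        {"ipRule": "203.0.113.0/24", "ruleDesc": "Branch A"},
        {"ipRule": "198.51.100.0/24", "ruleDesc": "Branch B"},
        {"ipRule": "192.0.2.0/24", "ruleDesc": "New branch"},
    ],
)
```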
Create an AWS Client VPN endpoint and require WorkSpaces users to connect through the VPN from branch networks is incorrect because zero clients typically cannot run a VPN client and this adds unnecessary infrastructure and operational overhead. It also does not inherently restrict access to branch origin unless you separately constrain the VPN, which duplicates what an IP access control group already provides natively, and it introduces additional latency and complexity.
Publish a custom WorkSpaces image that uses the Windows Firewall to allow only the public IP ranges of the branches is incorrect because filtering inside the guest OS does not control session establishment at the WorkSpaces edge. The WorkSpaces instance typically sees connections from the streaming gateways rather than the user’s public IP, so the Windows Firewall rules would not reliably enforce branch-only access. This also creates image maintenance overhead and does not meet the requirement as effectively as the directory level IP access control group.
Issue device certificates with AWS Certificate Manager to office endpoints and enable device trust on the WorkSpaces directory is incorrect because device trust verifies managed devices but does not guarantee that connections originate from branch networks. Many zero clients cannot host device certificates, which limits applicability, and standing up certificate issuance and ongoing lifecycle management reduces operational efficiency compared to the directory level IP access control group.
When a question asks to restrict where sessions originate, prefer native network allow lists at the managed service boundary. Look for features that apply centrally at the directory or service level for easier rollout to new sites and avoid solutions that require agents or custom images on each desktop.
Question 2
LumaPress, a global publishing firm with operations on several continents, runs fifteen separate AWS accounts that are administered by local IT teams. The headquarters governance team must gain centralized visibility and enforce consistent security standards across every member account. The architect has already enabled AWS Organizations and all member accounts have joined. What should be done next to allow the headquarters team to audit and manage the security posture across these accounts?
-
✓ B. Create a SecurityAudit IAM role in every member account and set a trust policy that allows the management account to assume it
The correct option is Create a SecurityAudit IAM role in every member account and set a trust policy that allows the management account to assume it.
This approach establishes a standard cross account access pattern that lets the headquarters governance team assume a role into each member account. By defining a trust relationship that grants the management account permission to assume the SecurityAudit IAM role, the team can use temporary credentials to centrally inspect configurations and security settings without creating long lived users. Attaching the AWS managed SecurityAudit permissions to the SecurityAudit IAM role grants broad read only visibility into security relevant metadata across services while keeping write privileges out of scope for auditing.
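A minimal sketch of the two halves of this pattern, assuming hypothetical account IDs and the AWS managed SecurityAudit policy. The first part runs in each member account and the second runs from the management account.

```python
import json
import boto3

MANAGEMENT_ACCOUNT_ID = "111122223333"  # hypothetical management account

# In each member account: create a role the management account is trusted to assume.
iam = boto3.client("iam")
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MANAGEMENT_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="SecurityAudit",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="SecurityAudit",
                       PolicyArn="arn:aws:iam::aws:policy/SecurityAudit")

# From the management account: assume the role to audit a member account with
# temporary credentials instead of long lived users.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::444455556666:role/SecurityAudit",  # hypothetical member account
    RoleSessionName="hq-security-audit",
)["Credentials"]
member_config = boto3.client(
    "config",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```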
Create an IAM policy named SecurityAudit in each member account and attach it to the regional administrator groups is incorrect because a policy alone does not enable cross account access and attaching it to local groups does not grant the headquarters team any ability to assume access from the management account.
Provision a dedicated IAM user for the headquarters team in each member account with AdministratorAccess is incorrect because creating users in every account increases operational overhead and credential sprawl and it violates least privilege by granting full administrator rights when only audit level access is required.
Google Cloud Organization Policy is incorrect because it applies to Google Cloud rather than AWS and does not address governance in AWS Organizations.
When a question asks for centralized visibility or governance across many AWS accounts, look for the cross account assume role pattern that uses a trust policy and attach only the least privilege permissions needed for the task.
Question 3
As part of a staged move out of a corporate data center into AWS, a solutions architect at Northstar Analytics must automatically discover network dependencies over the last 21 days for their on premises Linux virtual machines that run on VMware and produce a diagram that includes host IP addresses, hostnames, and TCP connection details. Which approach should be used to accomplish this?
-
✓ B. Deploy the AWS Application Discovery Service agent on the Linux servers after setting an AWS Migration Hub home Region and grant the service permission to publish discovery data so network diagrams can be created
The correct option is Deploy the AWS Application Discovery Service agent on the Linux servers after setting an AWS Migration Hub home Region and grant the service permission to publish discovery data so network diagrams can be created.
The Application Discovery Service agent collects in-guest data that includes hostnames, IP addresses, running processes, and detailed TCP connection information over time. This data supports dependency mapping in Migration Hub so you can view communication between servers and create network diagrams that satisfy a 21 day lookback. Setting a Migration Hub home Region and granting permissions to publish discovery data are required steps for the agent to send data that Migration Hub can visualize.
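The agent itself is installed on each Linux VM, but the account side setup can be scripted. The following is an illustrative sketch only, with an assumed home Region of us-west-2, that sets the Migration Hub home Region and then starts data collection for the registered agents.

```python
import boto3

# Set the Migration Hub home Region once for the account (assumed here to be us-west-2).
mh_config = boto3.client("migrationhub-config", region_name="us-west-2")
mh_config.create_home_region_control(
    HomeRegion="us-west-2",
    Target={"Type": "ACCOUNT"},
)

# After the Discovery Agent is installed on the Linux VMs, start data collection
# so TCP connection details accumulate for the 21 day window.
discovery = boto3.client("discovery", region_name="us-west-2")
agent_ids = [a["agentId"] for a in discovery.describe_agents()["agentsInfo"]]
discovery.start_data_collection_by_agent_ids(agentIds=agent_ids)
```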
Use the AWS Application Discovery Service Agentless Collector for VMware to gather server inventory and then export topology images from AWS Migration Hub as .png files is incorrect because the agentless collector gathers VM inventory from vCenter and basic performance data but it does not collect in-guest process or TCP connection details that are needed for dependency mapping. Migration Hub dependency visualization relies on agent data and it is not designed to export PNG topology images from agentless inventory alone.
Install the AWS Systems Manager Agent on the VMs and enable Inventory and Application Manager to capture software and relationship data across the fleet is incorrect because Systems Manager Inventory focuses on software and configuration metadata and Application Manager organizes resources. These features do not discover or map on-premises TCP connections or server-to-server dependencies.
Install the AWS Application Migration Service replication agent on the VMs and use Workload Discovery on AWS to build network diagrams from data stored in Migration Hub is incorrect because the replication agent is for lift and shift replication and does not collect discovery or network dependency data. Workload Discovery visualizes AWS resources rather than on-premises network dependencies and it does not build diagrams from Migration Hub discovery data.
When a question emphasizes network dependencies or TCP connection details for on premises servers, prefer the agent based AWS Application Discovery Service approach and remember to set a Migration Hub home Region before collecting data.
Question 4
Helios Imaging is launching a mobile app that uploads pictures for processing, and usage jumps to 25 times normal during evening beta runs. The solutions architect needs a design that scales image processing elastically and also notifies users on their devices when their jobs complete. Which actions should the architect implement to meet these requirements? (Choose 3)
-
✓ A. Publish a message to Amazon SNS to send a mobile push notification when processing is finished
-
✓ C. Process each image in an AWS Lambda function that is invoked by messages from the SQS queue
-
✓ E. Have the mobile app upload directly to Amazon S3 and configure an S3 event that posts messages to an Amazon SQS standard queue
The correct options are Publish a message to Amazon SNS to send a mobile push notification when processing is finished, Process each image in an AWS Lambda function that is invoked by messages from the SQS queue, and Have the mobile app upload directly to Amazon S3 and configure an S3 event that posts messages to an Amazon SQS standard queue.
Having the app upload to Amazon S3 and using an S3 event that posts to an Amazon SQS standard queue creates an asynchronous buffer that absorbs the 25 times spike during evening runs. SQS provides durable queuing and high throughput so producers and consumers are decoupled and the system can smooth bursty traffic.
Processing each message with AWS Lambda that is invoked from SQS enables elastic scaling because Lambda increases consumer concurrency in response to queue depth. This pattern keeps workers stateless and cost efficient and it scales automatically with demand.
Publishing to Amazon SNS mobile push when an image completes allows the service to notify devices through platform push services without managing device tokens yourself. This cleanly completes the workflow by informing users as soon as their jobs finish.
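A condensed sketch of the consumer half of this design, assuming hypothetical function names and a pre-registered SNS platform endpoint for the user's device. The Lambda is invoked by SQS with batches of S3 notifications, processes each image, and then publishes a mobile push message.

```python
import json
import boto3

sns = boto3.client("sns")

def handler(event, context):
    """Invoked by SQS with batches of S3 event notifications."""
    for record in event["Records"]:
        s3_event = json.loads(record["body"])      # the S3 notification posted to SQS
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            process_image(bucket, key)             # hypothetical transformation step

            # Notify the user's device through SNS mobile push when the job completes.
            sns.publish(
                TargetArn="arn:aws:sns:us-east-1:111122223333:endpoint/APNS/photo-app/abc123",  # hypothetical endpoint
                Message=json.dumps({"default": f"Your image {key} is ready"}),
                MessageStructure="json",
            )

def process_image(bucket, key):
    # Placeholder for the actual image transformation logic.
    ...
```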
The option Configure Amazon MQ as the target for S3 notifications so that workers can read from the broker is not valid because Amazon MQ is not a supported destination for Amazon S3 event notifications. S3 event notifications can target only Amazon SQS, Amazon SNS, or AWS Lambda, with Amazon EventBridge available through a separate integration.
The option Use S3 Batch Operations to process objects individually when a message arrives in the queue does not meet the need for real time elastic processing because S3 Batch Operations is designed for large batch jobs that are started by a manifest or filter and it does not trigger from a queue message.
The option Send a push notification to the app by using Amazon Simple Email Service when processing is complete is incorrect because Amazon Simple Email Service sends email rather than mobile push notifications. Device notifications for this use case belong with Amazon SNS mobile push.
Map event sources to their supported destinations. For S3 notifications think of SQS, SNS, and Lambda. For bursty ingestion choose SQS standard to buffer and Lambda to scale consumers, then use SNS mobile push to notify users.
Question 5
Bryer Logistics operates an Amazon FSx for Windows File Server that was deployed as Single-AZ 2 for back-office file shares. A new corporate standard now requires highly available file storage across Availability Zones for all teams. The operations group must also observe storage and performance on the file system and capture detailed end-user access events on the FSx share for audit. Which combination of changes and monitoring should the team implement? (Choose 2)
-
✓ B. Use Amazon CloudWatch to watch capacity and performance metrics for FSx and enable file access auditing to publish user access events to CloudWatch Logs or stream them with Amazon Kinesis Data Firehose
-
✓ D. Set up a new Amazon FSx for Windows File Server with Multi-AZ, copy data with AWS DataSync, update client mappings, and trigger a controlled failover by changing the file system throughput capacity
The correct options are Use Amazon CloudWatch to watch capacity and performance metrics for FSx and enable file access auditing to publish user access events to CloudWatch Logs or stream them with Amazon Kinesis Data Firehose and Set up a new Amazon FSx for Windows File Server with Multi-AZ, copy data with AWS DataSync, update client mappings, and trigger a controlled failover by changing the file system throughput capacity.
The first choice meets the observability and audit requirements. FSx for Windows File Server publishes storage and performance metrics that can be monitored with CloudWatch, which lets operations track capacity, throughput, and latency. FSx file access auditing generates Windows Security event logs which can be delivered to CloudWatch Logs for search and retention or streamed through Kinesis Data Firehose for delivery to a destination such as Amazon S3, which satisfies detailed end user access auditing.
The second choice addresses the new availability standard. Multi AZ deployment provides high availability across Availability Zones for FSx for Windows File Server. DataSync is the recommended service to copy file data into the new file system efficiently. After updating client mappings to the new share, you can initiate a controlled failover by changing the throughput capacity, which exercises the Multi AZ failover path without disrupting managed networking components.
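An illustrative boto3 sketch of these two steps, with hypothetical subnet, security group, directory, and file system IDs. It creates the Multi-AZ file system with file access auditing delivered to CloudWatch Logs and later changes throughput capacity to exercise a controlled failover.

```python
import boto3

fsx = boto3.client("fsx")

# Create the replacement Multi-AZ file system with file access auditing enabled.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=2048,                                # GiB, illustrative size
    SubnetIds=["subnet-aaa111", "subnet-bbb222"],        # preferred and standby subnets
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-9067xxxxxx",             # hypothetical AWS Managed Microsoft AD ID
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaa111",
        "ThroughputCapacity": 64,
        "AuditLogConfiguration": {
            "FileAccessAuditLogLevel": "SUCCESS_AND_FAILURE",
            "FileShareAccessAuditLogLevel": "SUCCESS_AND_FAILURE",
            "AuditLogDestination": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/fsx/windows-audit",
        },
    },
)

# After DataSync has copied the data and clients are remapped, changing throughput
# capacity performs a controlled failover to the standby and back.
fsx.update_file_system(
    FileSystemId="fs-0abc1234def567890",                 # hypothetical new Multi-AZ file system
    WindowsConfiguration={"ThroughputCapacity": 128},
)
```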
Recreate the file system as Single-AZ 1 and move data with AWS DataSync, then depend on snapshots for continuity is incorrect because Single AZ does not meet the requirement for high availability across Availability Zones and snapshots do not provide automatic cross AZ service continuity for client access.
Track file system activity with AWS CloudTrail and log end-user actions to CloudWatch Logs is incorrect because CloudTrail records control plane API activity and does not capture file level user access events. FSx file access auditing must be used to obtain detailed end user access logs.
Test Multi-AZ failover by modifying or deleting the elastic network interfaces that FSx created is incorrect because the service manages these network interfaces and they must not be altered. You should use supported operations such as modifying throughput capacity to trigger a safe failover.
Map each requirement to the native feature of the service. For FSx for Windows, think CloudWatch metrics for performance, file access auditing for user events, Multi AZ for cross AZ resilience, and use DataSync for migrations. Avoid actions that tamper with managed infrastructure.
Question 6
A regional credit union needs the ability to shift 420 employees to remote work within hours during a crisis. Their environment includes Windows and Linux desktops that run office productivity and messaging applications. The solution must integrate with the company’s existing on premises Active Directory so employees keep their current credentials. It must also enforce multifactor authentication and present a desktop experience that closely resembles what users already use. Which AWS solution best satisfies these needs?
-
✓ B. Deploy Amazon WorkSpaces and link it to the on premises Active Directory through an AD Connector over a site to site VPN and enable MFA by configuring a RADIUS server for WorkSpaces
The correct choice is Deploy Amazon WorkSpaces and link it to the on premises Active Directory through an AD Connector over a site to site VPN and enable MFA by configuring a RADIUS server for WorkSpaces.
This meets every requirement because Amazon WorkSpaces provides managed Windows and Linux desktops with a familiar, full desktop experience. It integrates with your on premises Active Directory using an AD Connector over a site to site VPN so employees keep their existing credentials. You can enforce multifactor authentication by configuring a RADIUS server with the directory that registers your WorkSpaces. It also scales quickly which fits the need to move 420 employees to remote work within hours.
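As a rough sketch of the directory side of this design, with hypothetical domain, VPC, DNS, and RADIUS values, the AD Connector and RADIUS based MFA could be configured through the Directory Service API like this. In practice the passwords and shared secret should come from a secrets store rather than source code.

```python
import boto3

ds = boto3.client("ds")

# Create an AD Connector that proxies to the on premises Active Directory over the VPN.
connector = ds.connect_directory(
    Name="corp.example.com",                  # hypothetical on premises domain
    Password="service-account-password",      # AD service account password (use a secrets store)
    Size="Small",
    ConnectSettings={
        "VpcId": "vpc-0abc1234",
        "SubnetIds": ["subnet-aaa111", "subnet-bbb222"],
        "CustomerDnsIps": ["10.0.10.5", "10.0.10.6"],   # on premises DNS servers
        "CustomerUserName": "workspaces_svc",
    },
)

# Enable MFA for the directory by pointing it at the on premises RADIUS servers.
ds.enable_radius(
    DirectoryId=connector["DirectoryId"],
    RadiusSettings={
        "RadiusServers": ["10.0.20.7"],       # hypothetical RADIUS endpoint
        "RadiusPort": 1812,
        "RadiusTimeout": 5,
        "RadiusRetries": 3,
        "SharedSecret": "radius-shared-secret",
        "AuthenticationProtocol": "PAP",
        "DisplayLabel": "MFA",
        "UseSameUsername": True,
    },
)
```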
Use Amazon AppStream 2.0 to stream applications and customize a desktop style image while connecting to the data center over a site to site VPN and integrating identity with Active Directory Federation Services is not the best fit because AppStream 2.0 streams individual applications rather than provisioning full desktops. This does not closely match the requested traditional desktop experience for office productivity and messaging clients.
Provision Amazon WorkSpaces and connect it to the on premises network with a VPN and an AD Connector and enable MFA directly in the WorkSpaces console without a RADIUS service is incorrect because WorkSpaces does not offer a native MFA switch for AD integrated directories. MFA is enforced by integrating a RADIUS server with the directory, not by enabling it directly in the console.
Use Amazon WorkSpaces Web with SAML federation to Active Directory Federation Services and require MFA at the identity provider while publishing access over a site to site VPN does not meet the requirement because WorkSpaces Web delivers a secure browser for web applications rather than full Windows or Linux desktops. It cannot replicate the full desktop experience that users already have.
When a scenario requires a full desktop experience with existing AD credentials, map it to Amazon WorkSpaces with AD Connector over private connectivity and remember that MFA is implemented using a RADIUS server. If an option suggests enabling MFA directly in the console or proposes AppStream 2.0 for full desktops, reconsider.
Question 7
Evergreen Fabrication needs an inexpensive Amazon S3 based backup for its data center file shares that must present NFS to on-premises servers. The business wants the data to transition to an archive tier after 7 days and it accepts that disaster recovery restores can take several days. Which approach best satisfies these needs at the lowest cost?
-
✓ B. Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Glacier Deep Archive after 7 days
The correct option is Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Glacier Deep Archive after 7 days.
This approach meets the NFS requirement because File Gateway presents NFS file shares on premises while storing objects in Amazon S3. It also directly aligns with the desire for an S3 based backup. Using an S3 lifecycle rule to transition data to Glacier Deep Archive after a short warm period delivers the lowest storage cost for long term retention. The organization accepts that disaster recovery restores can take several days, and Glacier Deep Archive retrievals are intentionally slower, which matches that tolerance while minimizing spend.
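A minimal sketch of the lifecycle rule on the bucket behind the File Gateway share, assuming a hypothetical bucket name. The gateway and its NFS share are created separately in Storage Gateway.

```python
import boto3

s3 = boto3.client("s3")

# Move backups to Glacier Deep Archive 7 days after they land in S3.
s3.put_bucket_lifecycle_configuration(
    Bucket="evergreen-fileshare-backups",    # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},    # apply to every object in the bucket
                "Transitions": [
                    {"Days": 7, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```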
Set up AWS Storage Gateway file gateway linked to an S3 bucket and configure a lifecycle rule to shift objects to S3 Standard Infrequent Access after 7 days does not minimize cost as well as Glacier Deep Archive for archival data. Standard Infrequent Access provides quicker access than needed in this scenario and carries a higher per gigabyte storage price than Glacier Deep Archive.
Deploy AWS Storage Gateway volume gateway with an S3 bucket and configure a lifecycle rule to move objects to S3 Glacier Deep Archive after 7 days fails the NFS requirement because Volume Gateway exposes iSCSI block storage rather than NFS file shares. In addition, lifecycle rules target S3 objects and do not apply to the block storage volumes and their snapshots in the same way, so this design does not meet the stated needs.
Migrate the file shares to Amazon EFS and enable EFS lifecycle to infrequent access to reduce costs does not satisfy the request for an S3 based backup and would generally cost more for archival use than Glacier Deep Archive. While EFS offers NFS, it is a different managed file system service and is not the low cost archival pattern described in the question.
Match the required on premises NFS protocol to the right gateway and then map restore tolerance to the S3 storage class. If slow recovery is acceptable, prefer S3 Glacier Deep Archive with lifecycle transitions for the lowest cost.
Question 8
PixelForge Studios runs several cross platform titles that track session state, player profiles, match history, and a global scoreboard, and the company plans to migrate these systems to AWS to handle tens of millions of concurrent players and API requests while keeping latency in the low single digit milliseconds. The engineering group needs an in memory data layer that can power a highly available real time and personalized leaderboard at internet scale. Which approach should they implement? (Choose 2)
-
✓ B. Run the leaderboard on Amazon DynamoDB fronted by DynamoDB Accelerator DAX to achieve in-memory reads with low latency at scale
-
✓ D. Implement the scoreboard on Amazon ElastiCache for Redis to keep rankings in memory and serve lookups in near real time
The correct options are Run the leaderboard on Amazon DynamoDB fronted by DynamoDB Accelerator DAX to achieve in-memory reads with low latency at scale and Implement the scoreboard on Amazon ElastiCache for Redis to keep rankings in memory and serve lookups in near real time.
Run the leaderboard on Amazon DynamoDB fronted by DynamoDB Accelerator DAX to achieve in-memory reads with low latency at scale is well suited for internet scale read heavy access patterns where player profiles and session state are fetched frequently. This approach adds a managed in memory cache in front of the durable key value store and delivers microsecond to low millisecond read latency while absorbing extreme read throughput. It supports write through behavior so updates to scores and profiles are persisted while subsequent reads are served from memory, and it offers high availability across multiple Availability Zones to meet uptime goals.
Implement the scoreboard on Amazon ElastiCache for Redis to keep rankings in memory and serve lookups in near real time is ideal for a global leaderboard because sorted set operations efficiently maintain rankings and retrieve top N or rank by member with very low latency. This in memory engine scales horizontally with cluster mode and maintains availability with replication across Availability Zones, which enables a continuously updating and highly responsive leaderboard for tens of millions of players.
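A short illustrative sketch of the Redis sorted set leaderboard, using redis-py and a hypothetical ElastiCache endpoint and key name.

```python
import redis

# Connect to the ElastiCache for Redis endpoint (hypothetical hostname).
r = redis.Redis(host="leaderboard.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

def record_score(player_id: str, score: float) -> None:
    # ZADD keeps the sorted set ordered by score, so ranking is maintained on every write.
    r.zadd("global:leaderboard", {player_id: score})

def top_players(n: int = 10):
    # Highest scores first, with scores included.
    return r.zrevrange("global:leaderboard", 0, n - 1, withscores=True)

def player_rank(player_id: str):
    # Zero based rank from the top, or None if the player has no score yet.
    return r.zrevrank("global:leaderboard", player_id)
```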
Build the leaderboard on Amazon Neptune to model player relationships and query results quickly is not a good fit because a graph database is optimized for traversing relationships rather than performing high throughput rank and top N queries, and it does not provide an in memory ranking primitive comparable to sorted sets for ultra low latency lookups.
Use Google Cloud Spanner to host score data for globally consistent and highly available storage does not align with the requirement to implement on AWS and it is not an in memory cache layer for sub millisecond ranking queries. It targets different consistency and global distribution needs and would introduce cross cloud complexity.
Store scores directly in Amazon DynamoDB and query the table for ranking is insufficient for a real time leaderboard because computing global rank or top N from the base table requires scans or complex precomputation, which does not meet consistent single digit millisecond latency at internet scale without an in memory layer.
When you see a real time leaderboard with very low latency and massive read volume, map the requirement to an in memory solution. Think sorted sets for rank queries or a managed cache like DAX in front of a durable store, and quickly eliminate options that require scans, graph traversals, or a different cloud.
Question 9
The platform engineering team at MeridianLabs needs to manage AWS infrastructure as code to support a rapid scale up. Their current footprint runs in one AWS Region, but their near term plan requires standardized deployments into three AWS Regions and six AWS accounts with centralized governance. What should the solutions architect implement to meet these requirements?
-
✓ C. Adopt AWS Organizations with AWS CloudFormation StackSets, assign a delegated administrator, and distribute one template across many accounts and Regions from a single control point
The correct option is Adopt AWS Organizations with AWS CloudFormation StackSets, assign a delegated administrator, and distribute one template across many accounts and Regions from a single control point.
This approach provides centralized governance with Organizations and uses StackSets to deploy a single CloudFormation template consistently to multiple accounts and Regions. Assigning a delegated administrator lets a designated member account manage deployments at scale without using the management account for day to day operations. Service managed permissions with organizational units enable automatic deployment to existing and newly created accounts and reduce operational overhead while ensuring standardization across Regions.
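The following is a sketch of that one-to-many rollout run from the delegated administrator account, with a hypothetical template file, stack set name, and organizational unit ID.

```python
import boto3

# Run in the delegated administrator account.
cfn = boto3.client("cloudformation")

with open("baseline.yaml") as f:              # hypothetical standardized template
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="org-baseline",
    TemplateBody=template_body,
    PermissionModel="SERVICE_MANAGED",         # Organizations manages the execution roles
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    CallAs="DELEGATED_ADMIN",
)

# One call fans the template out to every account in the target OU across three Regions.
cfn.create_stack_instances(
    StackSetName="org-baseline",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-ab12-xxxxxxxx"]},  # hypothetical OU
    Regions=["us-east-1", "eu-west-1", "ap-southeast-2"],
    CallAs="DELEGATED_ADMIN",
)
```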
Author AWS CloudFormation templates, attach per account IAM policies, and run the templates in each target Region independently is inefficient and error prone because it requires manual or per account orchestration. It does not give a single control point for multi account and multi Region rollout and makes centralized governance and drift control difficult.
Use AWS Organizations with AWS Control Tower, then push CloudFormation stacks from the management account to every account is not the right pattern for broad multi account and multi Region distribution. Control Tower is for landing zones and guardrails and running stacks directly from the management account does not scale well and is not the recommended operational model when StackSets exists for this purpose.
Publish standardized products in AWS Service Catalog and share them to all accounts so each team can launch the products in their own Region enables self service but relies on each team to initiate deployments. It does not provide a centralized one to many push model or automatic deployment to new accounts and Regions, so it does not meet the requirement for centralized governance and standardized rollout.
When you see many accounts and many Regions with a need for centralized governance, look for AWS Organizations with CloudFormation StackSets and a delegated administrator. Keywords like distribute one template and single control point point strongly to StackSets.
Question 10
BrightBazaar, an online marketplace, runs a static site on Amazon S3 served through Amazon CloudFront, with Amazon API Gateway invoking AWS Lambda for cart and checkout, and the functions write to an Amazon RDS for MySQL cluster that currently uses On-Demand instances with steady utilization for the last 18 months. Security scans and logs show repeated SQL injection and other web exploit attempts, customers report slower checkout during traffic spikes, and the team observes Lambda cold starts at peak times. The company wants to keep latency low under bursty load, reduce database spend given the stable usage pattern, and add targeted protection against SQL injection and similar attacks. Which approach best satisfies these goals?
-
✓ B. Configure Lambda with provisioned concurrency for anticipated surges, purchase RDS Reserved Instances for the MySQL cluster, and attach AWS WAF to the CloudFront distribution with managed rules for SQL injection and common exploits
The correct option is Configure Lambda with provisioned concurrency for anticipated surges, purchase RDS Reserved Instances for the MySQL cluster, and attach AWS WAF to the CloudFront distribution with managed rules for SQL injection and common exploits.
Using provisioned concurrency keeps a pool of preinitialized Lambda execution environments ready, which removes cold starts and sustains low latency during bursty traffic. This directly addresses the slow checkouts that occur at peak times because the functions can respond immediately rather than incurring initialization delay.
Buying RDS Reserved Instances aligns with the steady utilization that has been observed for the last 18 months and provides a significant discount compared to On-Demand pricing. This reduces database spend without changing the architecture and preserves the transactional MySQL engine that is appropriate for cart and checkout workloads.
Attaching AWS WAF to CloudFront with the managed rule groups for SQL injection and common exploits adds targeted layer 7 protection close to users. The managed rules help block malicious patterns before they reach API Gateway or the origin which complements existing logging and scanning and improves security posture with minimal operational overhead.
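As a brief illustration, with a hypothetical function name, alias, and concurrency value, the Lambda and WAF pieces could look like the following. The Reserved Instance purchase is a billing commitment made in the console or Cost Explorer rather than an API call in the application stack.

```python
import boto3

# Keep warm execution environments for the checkout function during evening peaks.
lambda_client = boto3.client("lambda")
lambda_client.put_provisioned_concurrency_config(
    FunctionName="checkout-handler",      # hypothetical function name
    Qualifier="live",                     # published version or alias
    ProvisionedConcurrentExecutions=50,
)

# Fragment of a web ACL rule that attaches the AWS managed SQL injection rule set.
# A CloudFront scoped web ACL must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")
sqli_rule = {
    "Name": "AWS-AWSManagedRulesSQLiRuleSet",
    "Priority": 1,
    "OverrideAction": {"None": {}},
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "sqli",
    },
}
# sqli_rule would go in the Rules list of wafv2.create_web_acl(Scope="CLOUDFRONT", ...)
```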
Increase the memory of the Lambda functions, migrate the transactional database to Amazon Redshift, and integrate Amazon Inspector with CloudFront is not suitable. Simply increasing memory can speed up execution but it does not prevent cold starts during bursts. Moving a transactional shopping cart database to a data warehouse platform is a mismatch for OLTP workloads. Amazon Inspector is a vulnerability assessment service that does not integrate with CloudFront to block web layer attacks and it does not provide SQL injection mitigation at the edge.
Raise Lambda timeouts during peak hours, switch the database to RDS Reserved Instances, and subscribe to AWS Shield Advanced on CloudFront does not meet the goals. Increasing timeouts only tolerates slower performance and does not remove cold starts or improve throughput. AWS Shield Advanced focuses on volumetric and protocol DDoS attacks and does not provide the targeted inspection and blocking that SQL injection requires, which is the role of a web application firewall. The Reserved Instance purchase could reduce cost but the overall choice still fails on performance and application layer protection.
Migrate the workload to Google Cloud SQL and place the site behind Cloud CDN with security policies enforced by Cloud Armor introduces an unnecessary cross cloud migration that increases complexity and risk. It does not directly address Lambda cold starts or provide a focused improvement for the existing AWS stack, and it is not aligned with the goal of optimizing the current architecture.
Map each requirement to the most specific native control and prefer targeted features over broad workarounds. Use provisioned concurrency for Lambda cold starts, choose commitment discounts like RDS reservations for steady usage, and select a web application firewall for SQL injection rather than a DDoS service.
Question 11
SummitPay processes purchase events in Amazon DynamoDB tables and the risk operations group needs to detect suspicious activity and requires that every item change be captured and available for analysis within 45 minutes. What should a Solutions Architect implement to satisfy this requirement while enabling near real-time anomaly detection and alerting?
-
✓ B. Enable DynamoDB Streams on the tables and trigger an AWS Lambda function that writes the change records to Amazon Kinesis Data Streams then analyze anomalies with Amazon Kinesis Data Analytics and send alerts with Amazon SNS
The correct option is Enable DynamoDB Streams on the tables and trigger an AWS Lambda function that writes the change records to Amazon Kinesis Data Streams then analyze anomalies with Amazon Kinesis Data Analytics and send alerts with Amazon SNS. This approach continuously captures every item change from DynamoDB with low latency and provides a streaming pipeline that can detect anomalies almost immediately and send alerts well within the 45 minute requirement.
DynamoDB Streams emits an ordered record for each insert, update, or delete which gives you a reliable change feed. A Lambda function subscribed to the stream scales with traffic and persists the records into Kinesis Data Streams which provides durable buffering and fan out for multiple consumers. Streaming analytics with Kinesis Data Analytics or the managed Flink service can evaluate patterns and thresholds on the live stream to identify suspicious activity in near real time. Amazon SNS then delivers notifications to the operations team with minimal delay.
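The glue between the change feed and the analytics stream is a small Lambda consumer. The sketch below is illustrative, with a hypothetical Kinesis stream name, and simply forwards each DynamoDB Streams record into Kinesis Data Streams.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def handler(event, context):
    """Triggered by DynamoDB Streams; forwards each change record into Kinesis."""
    records = []
    for record in event["Records"]:
        change = record["dynamodb"]        # keys, old and new images, sequence number
        records.append({
            "Data": json.dumps(change, default=str).encode("utf-8"),
            "PartitionKey": record["eventID"],
        })
    if records:
        # Hypothetical name of the analytics input stream.
        kinesis.put_records(StreamName="purchase-change-stream", Records=records)
```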
Forward data changes to Google Cloud Pub/Sub and run streaming detection in Dataflow with notifications through Cloud Functions is incorrect. This is a cross cloud design that requires building and operating custom integration from DynamoDB to Google Cloud which adds complexity and latency and it is not aligned with an AWS native architecture expected for this scenario.
Export the DynamoDB tables to Apache Hive on Amazon EMR every hour and run batch queries to flag anomalies then publish Amazon SNS notifications is incorrect. An hourly export is a batch process that can introduce up to 60 minutes of delay, which does not guarantee availability for analysis within 45 minutes and does not provide near real time detection.
Use AWS CloudTrail to record all DynamoDB write APIs and create Amazon SNS notifications using CloudTrail event filters when behavior looks suspicious is incorrect. CloudTrail is designed for auditing API activity rather than streaming every item change for analytics. It does not provide the detailed and timely change feed or the continuous processing capabilities needed for near real time anomaly detection.
When you see a requirement to capture every item change with near real time detection, choose a change stream from the data store feeding a managed streaming analytics service and finish with a simple alerting mechanism. Prefer native services in the same cloud to minimize latency and complexity.
Question 12
North Coast Furnishings plans to move an undocumented set of VMware vSphere virtual machines into AWS for a consolidation effort and the team needs an easy way to automatically discover and inventory the VMs for migration planning with minimal ongoing work. Which approach best meets these needs while keeping operational overhead low?
-
✓ B. Deploy the agentless Migration Evaluator collector to the ESXi hypervisor layer and review the results in Migration Evaluator then exclude idle VMs and send the inventory to AWS Migration Hub
The correct option is Deploy the agentless Migration Evaluator collector to the ESXi hypervisor layer and review the results in Migration Evaluator then exclude idle VMs and send the inventory to AWS Migration Hub.
This agentless approach discovers vSphere virtual machines at the hypervisor layer without installing anything on the guests and it automatically gathers configuration and utilization needed for right sizing and inventory. You can exclude idle servers to focus on meaningful workloads and you can export the results for planning in AWS with minimal setup and ongoing effort. This keeps operational overhead low because you avoid agents on every VM and you avoid building and maintaining custom data pipelines.
Install the AWS Application Migration Service agent on every VM then aggregate configuration and performance data into Amazon Redshift and build dashboards in Amazon QuickSight is not a good fit because deploying agents to all machines and constructing a data warehouse and dashboards adds significant operational work. The migration agent is intended for replication and cutover rather than broad discovery and inventory planning.
Export the vCenter inventory to a .csv file and manually check disk and CPU usage for each server then import the subset into AWS Application Migration Service and migrate any remaining systems with AWS Server Migration Service is manual and error prone and it lacks automated utilization capture for right sizing. It also mixes tools and includes a legacy service that has been superseded by newer capabilities which makes it less likely on newer exams.
Run Google Migration Center discovery in the data center and use its right sizing report to plan the move does not meet the requirement because it is a Google Cloud tool and it does not integrate natively with AWS migration planning workflows.
When you see a requirement for automatic discovery and minimal effort in a VMware environment, look for agentless hypervisor level collection that feeds directly into AWS planning tools. Be cautious of answers that add custom pipelines or rely on manual spreadsheets or that use services from another cloud.
Question 13
The media platform team at VistaMosaic Media is experiencing failures in its image transformation workflow as uploaded files now average 85 MB. A Python 3.11 Lambda function reacts to Amazon S3 object created events, downloads the file from one bucket, performs the transformation, writes the output to another bucket, and updates a DynamoDB table named FramesCatalog. The Lambda frequently reaches the 900 second maximum timeout and the company wants a redesign that prevents these timeouts while avoiding any server management. Which combination of changes will satisfy these goals? (Choose 2)
-
✓ C. Create an Amazon ECS task definition for AWS Fargate that uses the container image from Amazon ECR and update the Lambda to start a task when a new object lands in Amazon S3
-
✓ E. Build a container image of the processor and publish it to Amazon ECR
The correct options are Create an Amazon ECS task definition for AWS Fargate that uses the container image from Amazon ECR and update the Lambda to start a task when a new object lands in Amazon S3 and Build a container image of the processor and publish it to Amazon ECR.
Fargate provides serverless container compute so it removes the Lambda maximum duration constraint and eliminates server management. Having the existing event driven Lambda invoke an ECS task for each new S3 object lets the heavier image transformation run in a container that can take as long as needed while the Lambda does only quick orchestration work and exits before hitting its timeout. This pattern scales with object arrivals and meets the requirement to avoid managing servers.
Packaging the processor as a container image and publishing it to Amazon ECR is necessary because ECS tasks pull images from a registry. This allows you to bundle Python 3.11 code and native dependencies in a reproducible environment which is well suited for CPU or memory intensive image transformations on large files.
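A minimal sketch of the orchestration Lambda is shown below. The cluster name, task definition, container name, and subnet are placeholders for resources the team would create, not values from the question.

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    """Start one Fargate task per uploaded object and pass the object location in."""
    for record in event["Records"]:  # S3 event notification payload
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        ecs.run_task(
            cluster="image-processing",          # placeholder ECS cluster
            launchType="FARGATE",
            taskDefinition="frame-transformer",  # placeholder task definition family
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
                    "assignPublicIp": "DISABLED",
                }
            },
            overrides={
                "containerOverrides": [
                    {
                        "name": "transformer",   # container name in the task definition
                        "environment": [
                            {"name": "SOURCE_BUCKET", "value": bucket},
                            {"name": "SOURCE_KEY", "value": key},
                        ],
                    }
                ]
            },
        )
```

The container reads SOURCE_BUCKET and SOURCE_KEY, processes the image for as long as it needs, writes the output, and updates FramesCatalog.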
Set up an AWS Step Functions workflow that invokes the existing Lambda in parallel and raise its provisioned concurrency is not sufficient because invoking Lambda more times does not shorten the per file processing time and each Lambda invocation still has a hard 15 minute limit. Provisioned concurrency only reduces cold start latency and does not change the maximum timeout for a function.
Migrate images to Amazon EFS and move metadata to Amazon RDS then mount the EFS file system from the Lambda function does not address the root cause which is the Lambda duration limit during heavy processing. It also replaces DynamoDB with a relational database that adds operational overhead and does not align with the requirement to avoid server management.
Create an Amazon ECS task definition with the EC2 launch type and have the Lambda trigger those tasks on file arrival requires you to provision and manage EC2 instances in the cluster which violates the goal of avoiding server management. Fargate is the serverless launch type that removes this burden.
When you see a Lambda nearing the 15 minute limit for compute heavy work, consider offloading the processing to a serverless container on AWS Fargate and package the code into an image in Amazon ECR. Pick launch types and data stores that keep the design fully managed.
Question 14
BlueOrbit Media runs analytics, recommendations and video processing on AWS and ingests about 25 TB of VPC Flow Logs each day to reveal cross Region traffic and place dependent services together for better performance. The logs are written to Amazon Kinesis Data Streams and that stream is configured as the source for a Kinesis Data Firehose delivery stream that forwards to an S3 bucket. The team installed Kinesis Agent on a new set of network appliances and pointed the agent at the same Firehose delivery stream, but no records appear at the destination. As the Solutions Architect Professional, what is the most likely root cause?
-
✓ C. A Firehose delivery stream that uses a Kinesis data stream as its source does not accept direct writes from Kinesis Agent
The correct option is A Firehose delivery stream that uses a Kinesis data stream as its source does not accept direct writes from Kinesis Agent.
A delivery stream is created with either Direct PUT or with a Kinesis Data Streams source. When you choose a Kinesis data stream as the source, producers must write to that stream and Kinesis Data Firehose pulls from it and delivers to the destination. Pointing a Kinesis Agent at the Firehose endpoint will not work in this configuration. The agent should either publish to the Kinesis data stream or to a different Firehose delivery stream that uses Direct PUT.
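To illustrate the correct write path under this configuration, a producer that cannot use the agent would target the data stream itself, as in this minimal sketch where the stream name and payload are placeholders.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# With a Kinesis sourced delivery stream, producers write to the data stream and
# Firehose pulls from it before delivering to S3.
kinesis.put_record(
    StreamName="flow-logs-ingest",  # placeholder data stream name
    Data=json.dumps({"message": "sample flow log line"}).encode("utf-8"),
    PartitionKey="appliance-01",    # placeholder partition key
)

# A call such as firehose.put_record(DeliveryStreamName=...) is only valid against
# a delivery stream created with Direct PUT as its source.
```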
Kinesis Agent can only publish to Kinesis Data Streams and cannot send directly to Kinesis Data Firehose is incorrect. The agent supports sending directly to Firehose delivery streams when the stream is configured for Direct PUT.
Kinesis Data Firehose has reached a scaling limit and requires manual capacity increases is incorrect. Firehose scales automatically for ingestion and you do not provision capacity. Hitting limits would surface as throttling metrics and errors rather than a complete absence of records from a single producer path.
The IAM role used by the Kinesis Agent is missing firehose:PutRecord permissions is incorrect. PutRecord permissions are required only when writing directly to a Direct PUT delivery stream. In this scenario the delivery stream is sourced from Kinesis Data Streams, so the agent is targeting the wrong endpoint rather than lacking a permission.
Always check the Firehose source type. If it is set to Kinesis Data Streams then producers must write to that stream. Only a Direct PUT delivery stream accepts PutRecord calls from agents or applications.
Question 15
Harvest Logistics operates a three tier workload in a single AWS Region and must implement disaster recovery with a recovery time objective of 45 minutes and a recovery point objective of 4 minutes for the database layer. The web and API tiers run on stateless Amazon EC2 Auto Scaling groups and the data layer is a 24 TB Amazon Aurora MySQL cluster. Which combination of actions will meet these objectives while keeping costs under control? (Choose 2)
-
✓ B. Run a minimal hot standby of the web and application layers in another Region and scale out during failover
-
✓ D. Provision an Aurora MySQL cross Region read replica and plan to promote it during a disaster
The correct options are Run a minimal hot standby of the web and application layers in another Region and scale out during failover and Provision an Aurora MySQL cross Region read replica and plan to promote it during a disaster.
The minimal hot standby aligns with the stateless web and API tiers and keeps cost under control by running only a small footprint in the secondary Region and scaling out during an event. Because the standby environment already exists it can meet the 45 minute recovery time objective when combined with automated failover steps and traffic redirection.
The cross Region read replica for Aurora is the native approach for disaster recovery across Regions and maintains low replication lag which supports the 4 minute recovery point objective. Promotion of the replica to a writer during a disaster can be completed within the recovery time objective even for a 24 TB cluster because there is no need to restore from snapshots and only a controlled role change and an endpoint update are required.
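A minimal sketch of the promotion step is shown below, assuming the secondary Region and the replica cluster identifier are placeholders.

```python
import boto3

# The RDS client must target the secondary Region that hosts the replica cluster.
rds = boto3.client("rds", region_name="us-west-2")  # placeholder DR Region

# Promote the cross Region Aurora read replica cluster to a standalone writer.
rds.promote_read_replica_db_cluster(
    DBClusterIdentifier="orders-aurora-replica"  # placeholder cluster identifier
)

# DNS records or application endpoints would then be switched to the promoted
# cluster writer endpoint as part of the failover runbook.
```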
The Use AWS Database Migration Service to continuously replicate from Aurora to an Amazon RDS MySQL instance in another Region choice is not optimal for disaster recovery. DMS focuses on migration and ongoing data movement, and replication lag can exceed the 4 minute recovery point objective at this scale and write volume. Moving to RDS MySQL also introduces engine differences and extra cutover steps that increase recovery time.
The Configure manual snapshots of the Aurora cluster every 4 minutes and copy them to another Region choice cannot meet these goals. Manual snapshots at that frequency are impractical and cross Region copies add further delay. Restoring a 24 TB cluster and provisioning new instances would almost certainly exceed the 45 minute recovery time objective and would lose more than 4 minutes of data.
The Enable Multi AZ for the Aurora cluster and rely on automatic backups for recovery choice only protects within a single Region and does not address a Regional disaster. Restoring from automated backups also takes significant time for a 24 TB cluster which makes both the recovery time and recovery point objectives unlikely to be met.
Map the stated RTO and RPO to a disaster recovery pattern. Use warm standby for stateless tiers with small always on capacity and choose service native replication like an Aurora cross Region replica for tight objectives. Be cautious with snapshots and tools based replication when sub hour objectives are required.

Question 16
Marston Media purchased four boutique agencies and rolled their accounts into a single AWS Organization, yet the accounting team cannot easily produce combined cost views for every subsidiary and they want to feed the results into their own reporting application. What approach will enable consistent cross entity cost reporting for their self managed app?
-
✓ C. Create an organization wide AWS Cost and Usage Report and turn on cost allocation tags and cost categories and query the CUR with Amazon Athena using an external table named acct_rollup_ledger and build and share an Amazon QuickSight dataset for the finance team
The correct option is Create an organization wide AWS Cost and Usage Report and turn on cost allocation tags and cost categories and query the CUR with Amazon Athena using an external table named acct_rollup_ledger and build and share an Amazon QuickSight dataset for the finance team.
An organization level Cost and Usage Report captures detailed usage and cost line items from all member accounts in one place and it supports cost allocation tags and cost categories so you can align spend to business dimensions across subsidiaries. Storing the CUR in Amazon S3 and querying it with Amazon Athena through an external table such as acct_rollup_ledger gives you a repeatable SQL interface that a self managed application can ingest reliably. Building a shared Amazon QuickSight dataset then provides finance with governed and consistent reporting while your application consumes the same authoritative data.
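As a hedged sketch of how the self managed application might pull tenant level spend, the query below runs against the acct_rollup_ledger table named in the answer. The Glue database, workgroup, output bucket, and billing period value are placeholders, and the column names follow the standard CUR data dictionary.

```python
import boto3

athena = boto3.client("athena")

# Aggregate unblended cost per linked account for one billing period.
# Assumes bill_billing_period_start_date is typed as a timestamp per the CUR schema.
QUERY = """
SELECT line_item_usage_account_id,
       SUM(line_item_unblended_cost) AS total_cost
FROM acct_rollup_ledger
WHERE bill_billing_period_start_date = TIMESTAMP '2024-06-01 00:00:00'
GROUP BY line_item_usage_account_id
ORDER BY total_cost DESC
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "billing"},  # placeholder Glue database
    WorkGroup="primary",
    ResultConfiguration={"OutputLocation": "s3://example-cur-query-results/"},  # placeholder
)
```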
Use the AWS Price List Query API to fetch per account pricing data and create a saved view in AWS Cost Explorer for the finance group is incorrect because the Pricing API returns public rates rather than your actual metered usage and costs. A saved view in Cost Explorer is a console artifact and does not provide a robust programmatic dataset for cross account consolidation.
Publish an AWS Cost and Usage Report at the organization level and enable cost allocation tags and cost categories and have the accounting team rely on a reusable report in AWS Cost Explorer is not sufficient because Cost Explorer reports do not deliver a queryable dataset that your application can ingest. The requirement is to feed results into a self managed app which is better met by querying the CUR with Athena and producing shareable datasets.
Configure AWS Billing Conductor to define billing groups for each subsidiary and export the pro forma charges for the self managed application to ingest is not the best fit because Billing Conductor focuses on pro forma pricing constructs for custom billing and charge sharing and it does not replace an organization wide CUR pipeline for detailed analytics. Pro forma outputs are not the source of truth for actual invoiced costs and are less suitable for ad hoc SQL analysis and application ingestion.
When a scenario asks for consolidated cross account cost reporting that must feed a self managed app, prefer an organization level Cost and Usage Report with cost allocation tags and cost categories and a query layer such as Athena. Console views in Cost Explorer help analysts but rarely satisfy programmatic ingestion needs.
Question 17
A DevOps team at Riverbend Labs is trying to scale a compute intensive workload by adding more Amazon EC2 instances to an existing cluster placement group in a single Availability Zone, but the launches fail with an insufficient capacity error. What should the solutions architect do to troubleshoot this situation?
-
✓ C. Stop and then start all instances in the placement group and try the launches again
The correct option is Stop and then start all instances in the placement group and try the launches again. This action can relocate the instances onto different underlying hardware within the same Availability Zone which can free the contiguous capacity that a cluster placement group requires and often resolves insufficient capacity errors.
Cluster placement groups place instances close together for low latency and high throughput networking. When capacity in the targeted racks is fragmented the launches can fail with an insufficient capacity error. Performing a stop and start cycle can move the existing instances to new host hardware which can open enough contiguous capacity to allow the new instance launches to succeed.
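A minimal sketch of that stop and start cycle follows, assuming the placement group name is a placeholder and that a brief outage for the group is acceptable.

```python
import boto3

ec2 = boto3.client("ec2")

# Find the running instances in the cluster placement group (name is a placeholder).
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "placement-group-name", "Values": ["hpc-cluster-pg"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
instance_ids = [
    inst["InstanceId"]
    for res in reservations["Reservations"]
    for inst in res["Instances"]
]

# Stopping and then starting the whole group lets EC2 place it on hardware with
# enough contiguous capacity for the additional launches.
ec2.stop_instances(InstanceIds=instance_ids)
ec2.get_waiter("instance_stopped").wait(InstanceIds=instance_ids)

ec2.start_instances(InstanceIds=instance_ids)
ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
```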
Use a spread placement group to distribute instances across distinct hardware in multiple Availability Zones is not appropriate because spread placement groups optimize for high availability by distributing instances and they limit the number of instances per Availability Zone. This does not address the low latency requirements of a clustered workload and it does not solve the immediate capacity fragmentation in a single Availability Zone.
Create a new placement group and attempt to merge it with the existing placement group is not viable because merging placement groups is not supported and instances cannot be combined across groups in that manner. This would not resolve the capacity constraints for the existing cluster placement group.
Google Compute Engine is unrelated to the AWS EC2 scenario and does not help troubleshoot capacity issues for an Amazon EC2 placement group.
When you see an insufficient capacity error in a cluster placement group, first consider a quick stop and start of the group to relocate instances to different hardware. If that does not work, try another Availability Zone or a compatible instance type.
Question 18
StreamForge Media is moving its catalog metadata API to a serverless design on AWS. About 30 percent of its older set top boxes fail when certain response headers are present and the on premises load balancer currently strips those headers according to the User Agent. The team must preserve this behavior for at least the next 12 months while directing requests to different Lambda functions based on request type. Which architecture should they implement to satisfy these goals?
-
✓ B. Place an Amazon CloudFront distribution in front of an Application Load Balancer and have the ALB invoke the appropriate AWS Lambda function per request type and use a CloudFront Function to strip the problematic headers when the User Agent matches legacy clients
The correct option is Place an Amazon CloudFront distribution in front of an Application Load Balancer and have the ALB invoke the appropriate AWS Lambda function per request type and use a CloudFront Function to strip the problematic headers when the User Agent matches legacy clients.
This approach keeps the compute layer serverless by using Lambda while the Application Load Balancer routes by path or host to different Lambda targets so each request type can be handled by the appropriate function. CloudFront Functions run at the edge on viewer response and are purpose built for fast and cost effective header manipulation which lets you remove the problematic response headers only for legacy User Agent values. This preserves the existing behavior for older set top boxes while providing low latency and simple operations.
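A hedged sketch of publishing such a function with boto3 follows. The function name, the header that is removed, and the User Agent substring are illustrative placeholders, and the handler body is the JavaScript that CloudFront Functions require.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# CloudFront Functions are written in JavaScript and the code is passed as bytes.
# The header name and User Agent match below are placeholders.
FUNCTION_CODE = b"""
function handler(event) {
    var request = event.request;
    var response = event.response;
    var ua = request.headers['user-agent'] ? request.headers['user-agent'].value : '';
    if (ua.indexOf('LegacySTB') !== -1) {
        delete response.headers['x-problem-header'];
    }
    return response;
}
"""

resp = cloudfront.create_function(
    Name="strip-legacy-headers",
    FunctionConfig={
        "Comment": "Remove headers that break legacy set top boxes",
        "Runtime": "cloudfront-js-2.0",
    },
    FunctionCode=FUNCTION_CODE,
)

# After testing, publishing the function allows it to be associated with the
# distribution behavior on the viewer response event.
cloudfront.publish_function(Name="strip-legacy-headers", IfMatch=resp["ETag"])
```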
Build an Amazon API Gateway REST API that integrates with multiple AWS Lambda functions per route and customize gateway responses to remove the offending headers for requests from specific User Agent values is not correct because a single route in API Gateway has one integration so it does not integrate with multiple Lambda functions per route. Gateway Responses mainly control error responses that API Gateway generates rather than normal integration responses. Conditional removal of arbitrary response headers for successful responses based on the User Agent is not what Gateway Responses are designed for.
Use Google Cloud CDN with Cloud Load Balancing and Cloud Functions to route traffic and remove the headers for legacy user agents is not correct because it moves the solution to Google Cloud rather than AWS which does not meet the requirements.
Front the service with Amazon CloudFront and an Application Load Balancer and have a Lambda@Edge function delete the problematic headers during viewer responses based on the User Agent is not the best choice here. While it can work functionally, Lambda@Edge is heavier and costlier for simple header edits and CloudFront Functions are now the preferred option for lightweight viewer request or response header manipulation.
When you see a need for lightweight header manipulation at the edge based on User Agent or simple logic, think CloudFront Functions. Reserve Lambda@Edge for cases that need more compute time, libraries, or origin request and response processing.
Question 19
RoadGrid is a logistics SaaS that runs a multi-tenant fleet tracking platform using shared Amazon DynamoDB tables with AWS Lambda handling requests. Each carrier includes a unique carrier_id in every API call. The business wants to introduce tiered billing that charges tenants based on actual DynamoDB consumption for both reads and writes. The finance team already ingests hourly AWS Cost and Usage Reports into a centralized payer account and plans to use those reports for monthly chargebacks. They need the most precise and economical approach that requires minimal ongoing maintenance to measure and allocate DynamoDB usage per tenant. Which approach should they choose?
-
✓ C. Log structured JSON from the Lambda handlers that includes carrier_id plus computed RCUs and WCUs for each request into CloudWatch Logs and run a scheduled Lambda to aggregate by tenant and pull the monthly DynamoDB spend from the Cost Explorer API to allocate costs by proportion
The correct choice is Log structured JSON from the Lambda handlers that includes carrier_id plus computed RCUs and WCUs for each request into CloudWatch Logs and run a scheduled Lambda to aggregate by tenant and pull the monthly DynamoDB spend from the Cost Explorer API to allocate costs by proportion.
This approach measures both reads and writes at the point of execution and uses the same capacity units that DynamoDB bills on. The application can compute consumed RCUs and WCUs per call and include the carrier identifier which gives precise attribution across all tenants. Storing structured events in existing logs keeps operational overhead low and a lightweight aggregator can roll up usage by tenant on a schedule.
It also aligns cleanly with the finance process. You can pull the monthly DynamoDB cost from the billing APIs or the cost and usage reports and allocate it to tenants by the proportion of their measured capacity units. This yields accurate and economical chargebacks with minimal ongoing maintenance.
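A minimal sketch of the measurement side follows. The table name is a placeholder, the handler is assumed to already know the caller carrier_id, and ReturnConsumedCapacity is the DynamoDB feature that reports the capacity units each call consumed.

```python
import json
import time
import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "FleetEvents"  # placeholder shared table name

def handle_write(carrier_id, item):
    """Write an item and emit a structured usage record for later aggregation."""
    resp = dynamodb.put_item(
        TableName=TABLE,
        Item=item,
        ReturnConsumedCapacity="TOTAL",
    )
    consumed = resp["ConsumedCapacity"]["CapacityUnits"]  # WCUs for this write
    # Printing JSON sends the record to CloudWatch Logs where the scheduled
    # aggregator sums capacity units per carrier_id; reads would log the same
    # shape with the RCUs returned by GetItem or Query.
    print(json.dumps({
        "carrier_id": carrier_id,
        "operation": "write",
        "capacity_units": consumed,
        "timestamp": int(time.time()),
    }))
```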
Enable DynamoDB Streams and process them with a separate Lambda to extract carrier_id and item size on every write then aggregate writes per tenant and map that to the overall DynamoDB bill is not sufficient because it only captures writes and does not measure reads which are often a large share of consumption. It also adds ongoing cost and operational complexity for streams and extra functions without improving read visibility.
Use AWS Application Cost Profiler to ingest per tenant usage records from the app and produce allocated DynamoDB costs by customer is not the best fit because it introduces another service to run and it does not natively account for DynamoDB read and write capacity at the per request level. It has also been deprecated which makes it less likely on newer exams and less suitable for a long term solution.
Apply a cost allocation tag for carrier_id to the shared DynamoDB table and activate the tag in the billing console then query the CUR by tag to analyze tenant consumption and cost cannot work because a single shared table can only have one set of tags. You cannot tag per tenant usage within one resource so tags will not partition costs among multiple carriers.
When allocating costs for a shared service, favor approaches that measure per request work in the same billing units the service uses and then map those proportions to the monthly cost from the CUR or Cost Explorer. Watch for options that miss either reads or writes or that rely on tagging a single shared resource since those cannot separate multi-tenant usage.
Question 20
Riverbend Media recently rolled out SAML 2.0 single sign on using its on premises identity provider to grant workforce access to its AWS accounts. The architect validated the setup with a lab account through the federation portal and the console opened successfully. Later three pilot users tried to sign in through the same portal and they could not get access to the AWS environment. What should the architect verify to confirm that the identity federation is configured correctly? (Choose 3)
-
✓ B. Configure each IAM role that federated users will assume to trust the SAML provider as the principal
-
✓ C. Ensure the identity provider issues SAML assertions that map users or groups to specific IAM roles with the required permissions
-
✓ D. Verify that the federation portal calls the AWS STS AssumeRoleWithSAML API with the SAML provider ARN, the target role ARN, and the SAML assertion from the identity provider
The correct options are Configure each IAM role that federated users will assume to trust the SAML provider as the principal, Ensure the identity provider issues SAML assertions that map users or groups to specific IAM roles with the required permissions, and Verify that the federation portal calls the AWS STS AssumeRoleWithSAML API with the SAML provider ARN, the target role ARN, and the SAML assertion from the identity provider.
Federation requires that the target IAM roles explicitly trust the SAML provider. If the trust policy does not name the SAML provider as a federated principal and does not permit the assume role with SAML action then the request is denied even if the assertion is valid.
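A hedged sketch of that first checkpoint is shown below, with the account id, provider name, and role name as placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder account id, SAML provider, and role name for illustration only.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": "arn:aws:iam::111122223333:saml-provider/CorpIdP"},
            "Action": "sts:AssumeRoleWithSAML",
            "Condition": {
                "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
            },
        }
    ],
}

iam.update_assume_role_policy(
    RoleName="FederatedClaimsAnalyst",
    PolicyDocument=json.dumps(trust_policy),
)
```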
The identity provider must issue assertions that include the role mapping for each user or group that should access AWS. This is typically done by including the role attribute that pairs a role ARN with the SAML provider ARN so that the user can be authorized to assume a specific role with the needed permissions. Without a correct mapping the pilot users would have no role to assume and the sign in would fail.
The federation portal must complete the flow by exchanging the assertion for temporary credentials through the STS API that assumes a role with SAML. The call needs the assertion along with the correct role and provider ARNs. If the call is not made or if any of these values are wrong then access will not be granted.
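A minimal sketch of that exchange from a portal backend follows, assuming the base64 encoded assertion has already been received from the identity provider and that the ARNs are placeholders.

```python
import boto3

sts = boto3.client("sts")

def exchange_assertion(saml_assertion_b64):
    """Trade a SAML assertion for temporary credentials (ARNs are placeholders)."""
    resp = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::111122223333:role/FederatedClaimsAnalyst",
        PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorpIdP",
        SAMLAssertion=saml_assertion_b64,  # base64 encoded SAML response from the IdP
        DurationSeconds=3600,
    )
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```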
Enable Google Cloud Identity Aware Proxy in front of the federation portal to pass authenticated headers to AWS is not required. This is a Google Cloud service and authenticated headers are not how AWS SAML federation is validated. AWS expects a signed SAML assertion and the appropriate STS call or AWS sign in endpoint exchange.
Require IAM users to enter a time based one time password during portal sign in is not appropriate for SAML federation because users do not authenticate as IAM users. If multi factor authentication is needed it should be enforced by the identity provider or through conditions on the role that require MFA context in the assertion.
Add the on premises identity provider as an event source in AWS STS to process authentication requests is incorrect because STS does not support configuring event sources and there is no such integration model. STS simply evaluates requests to obtain temporary credentials based on the API input.
When troubleshooting federation, think in three checkpoints. Verify the role trust policy, verify the IdP issues a correct SAML assertion with role mapping, and verify the portal makes a successful AssumeRoleWithSAML call to exchange the assertion for credentials.