Certification Practice Exam Questions
At Horizon Motors, a senior engineer is reviewing a YAML CloudFormation template submitted by a new hire and notices that it includes a section that CloudFormation does not support. Which section name is not a valid top-level part of an AWS CloudFormation template?
❏ A. Parameters section
❏ B. Dependencies section
❏ C. Resources section
❏ D. Conditions section
 
A fintech startup is moving its core platform from a colocation facility to AWS and plans to use Amazon RDS for the database tier. The architects want clarity on the behavior and benefits of Multi-AZ deployments. Which statements about RDS Multi-AZ are accurate? (Choose 2)
❏ A. You can direct read traffic to the Multi-AZ standby to increase read throughput
❏ B. During scheduled OS maintenance, RDS updates the standby first, promotes it, then maintains the former primary
❏ C. During automated backups in Multi-AZ, writes on the primary are paused because snapshots are taken from the primary
❏ D. Multi-AZ automatically provides cross-Region disaster recovery without additional setup
❏ E. If the primary DB instance fails, RDS automatically fails over to the synchronized standby
 
At CedarWave, you are building a social feed where members can follow one another. A subset of accounts will be highly popular, so their profile data will be read far more frequently than others. The profile records are stored in Amazon RDS, and your team wants to add Amazon ElastiCache to speed up reads. Because the cache cannot cost-effectively hold the entire user corpus, you only want to keep hot profiles in memory. The site tolerates slightly stale reads for these profiles as long as the data is no more than 45 seconds old. Which caching approach should you choose?
❏ A. Write-through with TTL
❏ B. Cache-aside (lazy loading) with a 45-second TTL
❏ C. Write-through without TTL
❏ D. Cache-aside (lazy loading) without TTL
 
EcoRides, a bike-sharing platform, runs its mobile backend using a serverless architecture on AWS. The team wants to add a feature that sends promotional push alerts to opted-in users on iOS and Android every 6 hours. From the application code, which AWS service should the developer integrate to publish notifications to subscribers?
❏ A. Amazon EventBridge
❏ B. Amazon Simple Notification Service (SNS)
❏ C. Amazon Simple Queue Service (SQS)
❏ D. Amazon Simple Workflow Service (SWF)
 
About eleven months ago, a boutique e-commerce company obtained a public TLS certificate for shop.luma-retail.com using AWS Certificate Manager and proved control of the domain with DNS validation by creating the ACM CNAME in public DNS. As the certificate approaches its end of validity, what actions will ACM take regarding renewal and notifications? (Choose 2)
❏ A. ACM sends renewal emails via Amazon SNS for DNS-validated certificates that are in service
❏ B. ACM automatically renews the certificate when it is associated with an AWS service and the ACM CNAME remains publicly resolvable
❏ C. ACM contacts the external certificate authority to re-validate domain ownership for imported third-party certificates
❏ D. AWS Support opens a proactive case and replaces the certificate automatically even if the validation record is missing
❏ E. ACM emits Amazon EventBridge or AWS Health events if it cannot validate the domain ahead of renewal
 
Orion Outfitters plans to host a static marketing site from an Amazon S3 bucket. All visitors must use HTTPS so data is encrypted while in transit. What should the Developer implement to satisfy this requirement? (Choose 2)
❏ A. Attach an SSL/TLS certificate from AWS Certificate Manager to the CloudFront distribution
❏ B. Configure the S3 bucket to present its own SSL/TLS certificate
❏ C. Create an Amazon CloudFront distribution that uses the S3 bucket as the origin
❏ D. Associate an AWS WAF web ACL with the distribution to enforce HTTPS
❏ E. Place an Application Load Balancer in front of the S3 static website to terminate HTTPS
 
A fast-growing health-tech startup runs a customer-facing mobile app that authenticates through an Amazon Cognito user pool with MFA required for every user. The app stores confidential roadmap and revenue data. The security team wants an email each time any account completes a successful sign-in, and leadership expects a working solution within 48 hours with minimal code changes. What is the most efficient way to implement this requirement?
❏ A. Configure a pre-authentication Lambda trigger on the Cognito user pool that uses Amazon SES to email the security team
❏ B. Use AWS CloudTrail with an Amazon EventBridge rule on Cognito user pool authentication API calls to publish to Amazon SNS for email
❏ C. Wire an AWS Lambda post-authentication trigger on the Cognito user pool that calls Amazon SES to email the security mailbox
❏ D. Set up a Lambda trigger on Amazon Cognito identity pools authenticated API operations that sends emails via Amazon SES
 
An engineer at Cobalt Retail is using AWS X-Ray to investigate latency issues in a containerized checkout API. The application already sends segment documents that include custom annotations, and the team needs to quickly include or exclude particular requests based on those annotations while reviewing traces from the last 90 minutes with minimal setup. What should they do? (Choose 2)
❏ A. Export trace data to Amazon S3 and analyze it with Amazon Athena
❏ B. Use filter expressions in the AWS X-Ray console to search by annotation values
❏ C. Adjust X-Ray sampling rules to only record requests that include those annotations
❏ D. Call the GetTraceSummaries API with a filter expression to retrieve trace IDs and indexed annotations
❏ E. Use the BatchGetTraces API to download full trace documents and filter them client-side
 
A media startup runs a public website on a single Amazon EC2 instance within a public subnet. The site must be reachable on HTTPS over TCP 443 from the internet, and administrators should connect by SSH over TCP 22 only from the company network 10.32.0.0/12 that is reachable through a VPN. What inbound security group rules satisfy these requirements?
❏ A. Allow inbound 443 and 22 from the VPC CIDR 10.0.0.0/16
❏ B. Allow inbound 22 from 0.0.0.0/0 and allow inbound 443 from 10.32.0.0/12
❏ C. Allow inbound 443 from 0.0.0.0/0 and allow inbound 22 from 10.32.0.0/12
❏ D. Allow inbound 443 and 22 from both 0.0.0.0/0 and 10.32.0.0/12
 
NovaSight Analytics, a regional healthcare data startup, is tightening its IAM practices during a security review. According to AWS recommended practices, how should the team handle access keys to strengthen security? (Choose 2)
❏ A. Rotate access keys every 24 hours
❏ B. Remove any access keys for the root user
❏ C. Store access keys directly in application code repositories
❏ D. Assign distinct access keys to each application or automation script
❏ E. Use a single access key across all services for consistency
 
Trailhead Motors recently moved its core platform to AWS and migrated data from MariaDB into Amazon DynamoDB. You are provisioning a new table named InvoicesV3, and the application must query items using the same partition key while providing an alternate sort key. This capability needs to be defined when the table is first created because it cannot be added afterward. What should you implement?
❏ A. Run a Scan operation
❏ B. Create a global secondary index
❏ C. Enable DynamoDB Streams
❏ D. Define a local secondary index when creating the table
 
NovaFit Labs is preparing to launch a fitness tracking mobile app and wants to streamline account creation and sign-in for new users. The team expects about 3 million registrations within the next nine months and prefers a fully managed, highly scalable user management solution that minimizes development work. What should the developer recommend?
❏ A. Build a custom authentication service with AWS Lambda and Amazon DynamoDB
❏ B. Amazon Cognito Identity Pools
❏ C. Amazon Cognito User Pools
❏ D. AWS Directory Service for Microsoft Active Directory
 
An engineer at a regional consulting firm is planning the architecture for an internal analytics portal that will be accessed by roughly 120 employees. The solution must automatically right-size capacity with changing load while keeping costs low. Which AWS services would deliver the highest elasticity for this design? (Choose 2)
❏ A. Amazon RDS
❏ B. Amazon DynamoDB
❏ C. Amazon CloudFront
❏ D. AWS Backup
❏ E. Amazon EC2 Spot Fleet
 
A sports streaming analytics startup named RallyView exposes APIs that return aggregated, precomputed metrics like plays, reactions, and minutes watched. The APIs run on Amazon API Gateway with AWS Lambda, and the data is read from a single file in Amazon S3 that is regenerated every 18 hours. After a sudden surge in requests, clients experience higher latency when calling the API. The team wants faster responses without altering the existing backend components. What should they do to improve the API’s responsiveness?
❏ A. Configure Amazon CloudFront to cache API responses ahead of API Gateway
❏ B. Turn on CORS on the API Gateway stage
❏ C. Enable response caching in Amazon API Gateway
❏ D. Add Amazon ElastiCache to maintain a memory cache for hot keys
 
A developer at Nimbly Tech maintains a Node.js AWS Lambda function that connects to an Amazon RDS for PostgreSQL database. The handler currently opens a new connection on every invocation and closes it before returning. Which Lambda capability should the developer use so that a previously established connection can be preserved and reused by later invocations in the same runtime environment?
❏ A. Provisioned Concurrency
❏ B. Lambda execution context
❏ C. Environment variables
❏ D. Event source mapping
 
A compliance audit at OrionPay Labs requires that all traffic to Amazon S3 use TLS. Under which S3 encryption option will Amazon S3 reject any request that is sent over HTTP rather than HTTPS?
❏ A. SSE-KMS
❏ B. SSE-C
❏ C. Client-side encryption
❏ D. SSE-S3
 
An engineer at Alpine Health Tech maintains a Node.js tool that calls the low-level DynamoDB BatchGetItem API to pull batches of about 90 records from the PatientEvents table. Many calls return incomplete results and list numerous items under UnprocessedKeys. What should the engineer do to make these bulk reads complete more reliably? (Choose 2)
❏ A. Add a new Global Secondary Index with separate read capacity settings
❏ B. Retry unprocessed keys using exponential backoff with jitter between attempts
❏ C. On failure, reissue the batch request immediately without delay
❏ D. Switch to the AWS SDK’s batch request client to leverage built-in retries
❏ E. Raise the table’s read capacity and turn on Auto Scaling
 
A developer at Orion Media Labs has instrumented a microservices application with the AWS X-Ray SDK to capture request telemetry. The team needs a custom debugging dashboard that shows full trace details without using the X-Ray console and should query traces from the last 36 hours. What should the developer do to support this requirement?
❏ A. Use the GetServiceGraph API to enumerate trace IDs, then fetch the traces with GetTraceSummaries
❏ B. Call BatchGetTraces first to discover trace IDs, then query the traces with GetTraceSummaries
❏ C. Use the GetGroup API to obtain trace IDs for the app, then retrieve the traces with BatchGetTraces
❏ D. Use GetTraceSummaries to list trace IDs for the time window, then call BatchGetTraces to download the full trace data
 
Aurora Metrics, a media analytics startup, runs several serverless microservices on Amazon API Gateway with AWS Lambda backends. The team needs to release a new version of one public API and keep the current endpoint available while early adopters move to the new one for about 90 days before decommissioning the old version. How should they roll out the change to enable a smooth, side-by-side transition?
❏ A. Update the Lambda code, publish a new version, point the API integration to that version, and redeploy the API to the same stage
❏ B. Publish the update as a new Lambda version and expose it via a Lambda function URL instead of API Gateway
❏ C. Update the Lambda, publish a new version, reference it in the API integration, and deploy the API to a separate stage for the new version
❏ D. Enable an Amazon API Gateway canary release on the existing stage to shift 10% of traffic to the new deployment
 
At a boutique travel-booking startup, you are designing an order management backend. The team wants a serverless, event-driven pattern so that every insert, update, or delete on the orders data automatically generates change records that invoke AWS Lambda functions. Which AWS database should you select to satisfy this requirement?
❏ A. Kinesis
❏ B. RDS
❏ C. DynamoDB
❏ D. ElastiCache
 
Orion Markets, a regulated fintech exchange, runs a digital asset trading application whose on-premises key manager uses an HSM and stores RSA 4096 key pairs. The company is moving the workload to AWS in a hybrid model, and auditors require that cryptographic keys live in dedicated, third-party-validated hardware security modules that you exclusively control, not in shared infrastructure. What should you implement to meet these requirements?
❏ A. AWS Secrets Manager
❏ B. Configure an AWS KMS custom key store backed by your CloudHSM cluster
❏ C. Import the existing RSA key material into an AWS CloudHSM cluster
❏ D. AWS KMS
 
You are the lead engineer at a startup building multiple Amazon API Gateway endpoints for internal microservices. In the development account, the team updates routes and Lambda integrations, but calling the dev endpoint still returns behavior from the prior version. What should you recommend to make the latest changes available to callers?
❏ A. Grant developers IAM permissions for API execution in API Gateway
❏ B. Turn on stage-level caching in API Gateway
❏ C. Redeploy the API to an existing stage or create a new stage deployment
❏ D. Use stage variables to toggle the development state of the API
 
A smart transportation startup ingests telemetry from connected buses into Amazon Kinesis Data Streams. During occasional traffic bursts, the application’s PutRecords calls return HTTP 200 but show a nonzero FailedRecordCount, and the per record responses include ProvisionedThroughputExceededException for a shard and InternalFailure for another in stream metroSensorsStream under account 222222222222. What should the developer do to reliably handle these bursty write patterns without overloading shards? (Choose 2)
❏ A. Merge shards to reduce the number of shards in the stream
❏ B. Implement retries with exponential backoff and jitter for failed PutRecords entries
❏ C. Increase the PutRecords batch size or send requests more frequently
❏ D. Enable enhanced fan out on all consumers
❏ E. Reduce the request rate or decrease the amount of data per PutRecords call
 
A regional airline ticketing platform must update the TicketSwaps DynamoDB table and the WalletLedger DynamoDB table in a single all-or-nothing operation so the data remains consistent if any part of the write fails. Which DynamoDB capability should the developers use?
❏ A. DynamoDB Streams
❏ B. DynamoDB Transactions
❏ C. DynamoDB TTL
❏ D. DynamoDB Indexes
 
A logistics startup, Trailhead Freight, manages infrastructure with AWS CloudFormation. A dedicated network stack provisions a VPC and a private subnet named app-subnet-02. A separate application stack must reference the subnet ID created by the network stack. What should the developer do so the application stack can consume this value?
❏ A. AWS Systems Manager Parameter Store
❏ B. Fn::ImportValue
❏ C. Define an Outputs entry with an Export name in the network stack template
❏ D. Fn::Transform
 
A mobile gaming startup is deploying a low-latency API on Amazon EC2 instances running Nginx. The platform must handle tens of millions of concurrent TCP connections, and the application needs to log each client’s original source IP address and source port without using X-Forwarded-For headers. Which AWS load balancing option best satisfies these requirements?
❏ A. Classic Load Balancer
❏ B. Network Load Balancer
❏ C. Application Load Balancer
❏ D. Elastic Load Balancer
 
A small e-commerce startup wants to publish a static website on AWS and serve it at www.willowpets.net. You created an Amazon S3 bucket, enabled static website hosting, and set home.html as the index document. You then added an Alias record in Amazon Route 53 that points to the bucket’s S3 website endpoint. When you visit www.willowpets.net, the browser shows HTTP 403 Access Denied. What should you change to make the site publicly accessible?
❏ A. Create an IAM role
❏ B. Add a bucket policy that grants public read access and verify S3 Block Public Access is not blocking it
❏ C. Enable CORS on the bucket
❏ D. Turn on default server-side encryption for the bucket
 
A team at a note-taking startup is building a cross platform app where a user’s preferences and current note edits must stay consistent across phone, tablet, and web. The app must also let several collaborators work on the same shared notebooks with near real time updates and resolve conflicts after offline changes. Which AWS service should you use to implement this?
❏ A. AWS Amplify
❏ B. Amazon DynamoDB Streams
❏ C. AWS AppSync
❏ D. Amazon Cognito Sync
 
An engineer at Orion HealthTech set up an Application Load Balancer in front of several Amazon EC2 instances for a staging rollout. During initial smoke tests, they notice the listener does not yet have a target group attached. What HTTP status code will appear in the load balancer logs until targets are registered?
❏ A. HTTP 500
❏ B. HTTP 504
❏ C. HTTP 503
❏ D. HTTP 403
 
A developer at Zephyr Health needs to publish new application code to an AWS Elastic Beanstalk environment that runs behind a load balancer across sixteen Amazon EC2 instances. The rollout must not reduce capacity or harm user experience at any time, and the team wants to keep extra cost to a minimum. Which deployment policy should be used?
❏ A. Rolling
❏ B. Immutable
❏ C. Rolling with additional batch
❏ D. All at once
 
A software engineer at a healthcare analytics startup needs to store confidential reports in an Amazon S3 bucket. All objects must be encrypted at rest, and the security policy requires the encryption keys to be rotated every 12 months. What is the simplest way to meet these requirements?
❏ A. Perform client-side encryption before uploading objects to Amazon S3
❏ B. Import your own key material into AWS KMS and enable yearly rotation
❏ C. Enable automatic annual rotation on a customer managed AWS KMS key and use SSE-KMS for the bucket
❏ D. Turn on default bucket encryption with SSE-S3
 
An engineer at LumiTrack Analytics deployed a new Application Load Balancer with a listener and a target group, but no instances or IP addresses have been registered to that group yet. When the engineer makes an HTTP request to the load balancer DNS name, which status code will be returned?
❏ A. HTTP 504: Gateway timeout
❏ B. HTTP 502: Bad gateway
❏ C. HTTP 503: Service unavailable
❏ D. HTTP 500: Internal server error
 
Arcadia Goods has an Application Load Balancer sending requests to a Lambda function named Alpha as the target, but Alpha never processes any of the incoming requests. Monitoring shows another Lambda function named Beta in the same account frequently consumes about 900 out of the account’s 1,000 concurrent executions. What should the team change to ensure Alpha can reliably run when invoked by the ALB?
❏ A. Use Amazon API Gateway instead of an Application Load Balancer for Lambda function Alpha
❏ B. Enable provisioned concurrency on Lambda function Alpha
❏ C. Configure reserved concurrency for Lambda function Beta to cap its maximum concurrent executions and prevent it from exhausting the account
❏ D. Enable provisioned concurrency on Lambda function Beta to restrict its concurrency during spikes
 
A media analytics startup needs to display Amazon S3 object listings in its admin portal, showing 75 items per page while keeping AWS API requests to a minimum. Which AWS CLI parameters will return exactly 75 items per page and allow the next page to continue from where the previous one ended? (Choose 2)
❏ A. --page-size
❏ B. --max-items
❏ C. --next-token
❏ D. --starting-token
❏ E. --limit
 
CineWave Media is a global streaming platform with over 140 million subscribers who watch roughly 180 million hours of content each day. Nearly all workloads run on AWS across more than 15,000 instances, producing a rapidly changing network where services communicate within AWS and over the internet. The operations team must continuously monitor and optimize networking. They need a cost-efficient way to ingest and analyze several terabytes of real-time VPC flow logs daily while retaining the flexibility to route the stream to multiple downstream analytics, storage, and alerting consumers. Which AWS service best meets these needs?
❏ A. AWS Glue
❏ B. Amazon Kinesis Data Streams
❏ C. Amazon SQS
❏ D. Amazon Kinesis Data Firehose
 
Certification Practice Exam Questions Answered
At Horizon Motors, a senior engineer is reviewing a YAML CloudFormation template submitted by a new hire and notices that it includes a section that CloudFormation does not support. Which section name is not a valid top-level part of an AWS CloudFormation template?
✓ B. Dependencies section
 
The correct choice is Dependencies section. CloudFormation does not support a top level Dependencies section and resource ordering or relationships are expressed using attributes such as DependsOn and by referencing resources inside the Resources section.
CloudFormation templates follow a defined anatomy and do not include arbitrary top level sections. The template expresses dependency relationships at the resource level so you use attributes and intrinsic functions rather than a separate Dependencies section to control creation order and references.
Parameters section is valid and lets you pass values into a stack at creation or update time so you can parameterize templates for reuse.
Resources section is valid and it is the only required top level section because it contains the declarations of the AWS resources that the stack manages.
Conditions section is valid and it lets you conditionally create resources or set property values based on evaluated expressions.
Memorize the CloudFormation template anatomy and remember that Resources is the only required top level section and that dependencies are handled with DependsOn or intrinsic references rather than a Dependencies section.
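For reference, here is a minimal sketch of valid template anatomy in YAML. The resource names are illustrative; note that ordering is expressed with the per-resource DependsOn attribute rather than any Dependencies section.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative template using only valid top-level sections
Parameters:
  Env:
    Type: String
    Default: dev
Conditions:
  IsProd: !Equals [!Ref Env, prod]
Resources:                      # the only required top-level section
  AppBucket:
    Type: AWS::S3::Bucket
  AppQueue:
    Type: AWS::SQS::Queue
    DependsOn: AppBucket        # per-resource ordering, not a Dependencies section
Outputs:
  BucketName:
    Value: !Ref AppBucket
```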
A fintech startup is moving its core platform from a colocation facility to AWS and plans to use Amazon RDS for the database tier. The architects want clarity on the behavior and benefits of Multi-AZ deployments. Which statements about RDS Multi-AZ are accurate? (Choose 2)
✓ B. During scheduled OS maintenance, RDS updates the standby first, promotes it, then maintains the former primary
✓ E. If the primary DB instance fails, RDS automatically fails over to the synchronized standby
 
During scheduled OS maintenance, RDS updates the standby first, promotes it, then maintains the former primary and If the primary DB instance fails, RDS automatically fails over to the synchronized standby are correct because they describe how Amazon RDS Multi-AZ handles maintenance and failure to minimize downtime and preserve data durability.
Amazon RDS provisions a standby in a different Availability Zone and replicates synchronously so that committed writes are mirrored to the standby. During scheduled OS maintenance, RDS patches the standby first and then performs a controlled promotion so the impact on the running database is minimized. If the primary instance fails, RDS orchestrates an automatic failover to the synchronized standby to restore availability quickly.
You can direct read traffic to the Multi-AZ standby to increase read throughput is wrong because the standby is not open for application reads. Multi-AZ exists to provide high availability and failover protection rather than read scaling.
During automated backups in Multi-AZ, writes on the primary are paused because snapshots are taken from the primary is incorrect because automated backups are taken from the standby in Multi-AZ configurations which avoids suspending writes on the primary during the backup window.
Multi-AZ automatically provides cross-Region disaster recovery without additional setup is false because Multi-AZ operates across multiple Availability Zones within a single AWS Region. Cross Region disaster recovery requires additional configuration such as cross Region read replicas or other replication solutions.
Keep in mind that Multi-AZ provides high availability and durability and not read scaling. Use read replicas for read scaling and use cross Region replicas when you need disaster recovery across Regions.
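Enabling Multi-AZ is a single flag at instance creation. A minimal boto3 sketch with placeholder identifiers and credentials:

```python
import boto3

rds = boto3.client("rds")

# Placeholder names; MultiAZ=True provisions a synchronously replicated
# standby in another AZ for failover, not for serving reads.
rds.create_db_instance(
    DBInstanceIdentifier="core-platform-db",
    DBInstanceClass="db.m6g.large",
    Engine="postgres",
    MasterUsername="dbadmin",
    MasterUserPassword="change-me",  # prefer Secrets Manager in practice
    AllocatedStorage=100,
    MultiAZ=True,
)
```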
At CedarWave, you are building a social feed where members can follow one another. A subset of accounts will be highly popular, so their profile data will be read far more frequently than others. The profile records are stored in Amazon RDS, and your team wants to add Amazon ElastiCache to speed up reads. Because the cache cannot cost-effectively hold the entire user corpus, you only want to keep hot profiles in memory. The site tolerates slightly stale reads for these profiles as long as the data is no more than 45 seconds old. Which caching approach should you choose?
✓ B. Cache-aside (lazy loading) with a 45-second TTL
 
Cache-aside (lazy loading) with a 45-second TTL is correct because it caches entries only after they are requested so the cache concentrates on hot profiles while the 45 second TTL bounds how stale data can become.
The Cache-aside (lazy loading) with a 45-second TTL pattern avoids populating memory with cold accounts because items are added on demand. The short TTL ensures cached profiles are evicted or refreshed frequently so reads remain within the tolerated staleness window for the social feed.
Write-through with TTL is not ideal because it populates the cache on every write and that behavior can fill limited cache capacity with profiles that are never read even though TTL limits staleness.
Write-through without TTL is not appropriate because it keeps every written profile in cache indefinitely and that risks serving stale data beyond the acceptable 45 seconds while also wasting memory on cold records.
Cache-aside (lazy loading) without TTL focuses the cache on hot keys but lacks an expiration mechanism and so cached entries can remain stale longer than the allowed freshness window.
Match cache capacity and freshness to the pattern. Use cache-aside to keep only hot keys in memory and set a short TTL to ensure data stays within the allowed staleness window.
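A minimal Python sketch of the cache-aside pattern with a 45-second TTL, assuming the redis-py client, a hypothetical ElastiCache endpoint, and a hypothetical db_fetch function that reads from RDS:

```python
import json
import redis  # assumes the redis-py client

cache = redis.Redis(host="my-cache.example.cache.amazonaws.com", port=6379)
TTL_SECONDS = 45  # bounds staleness to the tolerated 45 seconds

def get_profile(user_id, db_fetch):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # hit: possibly up to 45 seconds stale
    profile = db_fetch(user_id)    # miss: only requested (hot) profiles get cached
    cache.setex(key, TTL_SECONDS, json.dumps(profile))
    return profile
```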
EcoRides, a bike-sharing platform, runs its mobile backend using a serverless architecture on AWS. The team wants to add a feature that sends promotional push alerts to opted-in users on iOS and Android every 6 hours. From the application code, which AWS service should the developer integrate to publish notifications to subscribers?
✓ B. Amazon Simple Notification Service (SNS)
 
The correct option is Amazon Simple Notification Service (SNS) because it natively supports mobile push notifications and topic based fan out to registered iOS and Android endpoints.
The application can publish a message to an SNS topic and SNS will deliver it to the registered mobile endpoints through platform applications that integrate with Apple Push Notification service and Firebase Cloud Messaging. This lets your serverless backend publish promotional alerts every six hours by sending messages to the topic and letting SNS handle delivery to subscribed devices.
Amazon EventBridge is an event bus for routing and integrating events and it does not directly deliver push notifications to mobile devices. You could use EventBridge to trigger a function that calls SNS but EventBridge is not the direct push delivery mechanism.
Amazon Simple Queue Service (SQS) is a pull based queuing service used for decoupling and buffering messages and it does not push messages to device endpoints or integrate with mobile push providers on its own.
Amazon Simple Workflow Service (SWF) is designed to coordinate tasks and workflows and it is not used for sending mobile push notifications. It is a legacy orchestration option, and newer services such as AWS Step Functions are preferred on current exams.
When a question asks about mobile push notifications and topic based fan out think SNS rather than queues or workflow services.
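Publishing from application code is a single API call. A boto3 sketch with a placeholder topic ARN:

```python
import boto3

sns = boto3.client("sns")

# Placeholder topic ARN; iOS and Android endpoints are subscribed to the
# topic through SNS platform applications (APNs and FCM).
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:promo-alerts",
    Message="Ride this weekend and earn double points!",
    Subject="EcoRides promo",
)
```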
About eleven months ago, a boutique e-commerce company obtained a public TLS certificate for shop.luma-retail.com using AWS Certificate Manager and proved control of the domain with DNS validation by creating the ACM CNAME in public DNS. As the certificate approaches its end of validity, what actions will ACM take regarding renewal and notifications? (Choose 2)
✓ B. ACM automatically renews the certificate when it is associated with an AWS service and the ACM CNAME remains publicly resolvable
✓ E. ACM emits Amazon EventBridge or AWS Health events if it cannot validate the domain ahead of renewal
 
ACM automatically renews the certificate when it is associated with an AWS service and the ACM CNAME remains publicly resolvable and ACM emits Amazon EventBridge or AWS Health events if it cannot validate the domain ahead of renewal are correct.
The first correct option means that for public certificates validated by DNS, ACM will manage the renewal as long as the certificate is in use by a supported AWS service and the ACM validation CNAME record remains publicly resolvable. ACM will attempt to renew the certificate using the existing DNS validation without manual intervention when those conditions are met.
The second correct option explains how you are alerted when renewal cannot proceed. ACM publishes events to Amazon EventBridge and integrates with AWS Health when it cannot complete validation ahead of renewal so you can detect and remediate missing or broken DNS records before the certificate expires.
ACM sends renewal emails via Amazon SNS for DNS-validated certificates that are in service is incorrect because ACM does not rely on Amazon SNS to perform renewals and it does not use SNS as the primary renewal notification mechanism for DNS-validated certificates. Auto renewal is driven by validation state and service association rather than SNS notifications.
ACM contacts the external certificate authority to re-validate domain ownership for imported third-party certificates is incorrect because ACM cannot renew third party certificates. Certificates that were imported from external CAs must be renewed with that CA and then reimported into ACM if you want ACM to manage them.
AWS Support opens a proactive case and replaces the certificate automatically even if the validation record is missing is incorrect because ACM requires successful validation to renew certificates and AWS Support does not automatically open cases to replace certificates when validation fails. You must restore the validation record or manually replace the certificate.
Keep the ACM validation CNAME in public DNS and keep the certificate associated with a supported AWS service so ACM can auto renew. Monitor EventBridge or AWS Health events to catch renewal problems early.
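A boto3 sketch of a rule that watches for ACM expiration events. The detail-type shown is an assumption based on ACM's documented expiration event; verify it against the current ACM event schema before relying on it:

```python
import json
import boto3

events = boto3.client("events")

# Assumed detail-type; confirm against the published ACM event schema.
events.put_rule(
    Name="acm-expiry-alerts",
    EventPattern=json.dumps({
        "source": ["aws.acm"],
        "detail-type": ["ACM Certificate Approaching Expiration"],
    }),
)
```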
Orion Outfitters plans to host a static marketing site from an Amazon S3 bucket. All visitors must use HTTPS so data is encrypted while in transit. What should the Developer implement to satisfy this requirement? (Choose 2)
✓ A. Attach an SSL/TLS certificate from AWS Certificate Manager to the CloudFront distribution
✓ C. Create an Amazon CloudFront distribution that uses the S3 bucket as the origin
 
The correct choices are Attach an SSL/TLS certificate from AWS Certificate Manager to the CloudFront distribution and Create an Amazon CloudFront distribution that uses the S3 bucket as the origin.
Amazon CloudFront can front an S3 origin and terminate TLS at the edge while presenting a secure HTTPS endpoint to visitors. Attaching an ACM certificate to the distribution (for CloudFront, the certificate must be issued in us-east-1) lets CloudFront serve the site over HTTPS without requiring certificates on the S3 website endpoint. CloudFront retrieves objects from S3 and serves them to clients over encrypted connections.
Configure the S3 bucket to present its own SSL/TLS certificate is incorrect because S3 static website hosting does not support configuring custom certificates and the website endpoint is HTTP only.
Associate an AWS WAF web ACL with the distribution to enforce HTTPS is wrong because AWS WAF filters and blocks web requests and it does not perform TLS termination or enforce HTTPS. You still need a TLS termination point such as CloudFront with an ACM certificate.
Place an Application Load Balancer in front of the S3 static website to terminate HTTPS is not viable because an Application Load Balancer cannot use an S3 bucket as a target and ALBs are not the recommended way to front S3 static websites for HTTPS.
Remember that S3 website endpoints are HTTP only and you must use CloudFront with an ACM certificate to serve HTTPS to clients.
A fast-growing health-tech startup runs a customer-facing mobile app that authenticates through an Amazon Cognito user pool with MFA required for every user. The app stores confidential roadmap and revenue data. The security team wants an email each time any account completes a successful sign-in, and leadership expects a working solution within 48 hours with minimal code changes. What is the most efficient way to implement this requirement?
✓ C. Wire an AWS Lambda post-authentication trigger on the Cognito user pool that calls Amazon SES to email the security mailbox
 
Wire an AWS Lambda post-authentication trigger on the Cognito user pool that calls Amazon SES to email the security mailbox is the correct option because it runs after Cognito completes the full authentication flow including MFA and it lets you send a notification for every confirmed sign in with minimal code and fast deployment.
The post-authentication trigger executes only after a successful sign-in so it reliably captures completed logins. A small Lambda function can call Amazon SES to send the email and you do not need to change the client app or introduce complex event tracing. This approach meets the 48-hour deadline and minimal-code-change constraint while ensuring each successful sign-in generates a security notification.
Configure a pre-authentication Lambda trigger on the Cognito user pool that uses Amazon SES to email the security team is not appropriate because the pre authentication trigger runs before a sign in is finalized and it can fire for failed attempts. That trigger cannot guarantee the login completed successfully so it does not meet the requirement to notify only on successful sign ins.
Use AWS CloudTrail with an Amazon EventBridge rule on Cognito user pool authentication API calls to publish to Amazon SNS for email is less efficient because it requires capturing and filtering low level API calls and it adds configuration and potential lag. It is viable in some cases but it is more indirect than a post authentication trigger and it does not meet the minimal change goal as cleanly.
Set up a Lambda trigger on Amazon Cognito identity pools authenticated API operations that sends emails via Amazon SES is incorrect because identity pools handle federated identities and authorization rather than user pool sign in events. Identity pools do not emit the user pool post sign in event so they cannot reliably detect user pool successful logins.
When you need actions after a confirmed Cognito user pool sign in think post authentication Lambda trigger and remember pre authentication runs before success and identity pools are for authorization.
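A minimal Python sketch of the post-authentication trigger, assuming hypothetical SES-verified addresses:

```python
import boto3

ses = boto3.client("ses")
SENDER = "alerts@example.com"              # hypothetical verified sender
SECURITY_MAILBOX = "security@example.com"  # hypothetical recipient

def lambda_handler(event, context):
    # Cognito invokes this trigger only after the full sign-in flow,
    # including MFA, succeeds, so every invocation is a completed login.
    user = event.get("userName", "unknown")
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": [SECURITY_MAILBOX]},
        Message={
            "Subject": {"Data": f"Successful sign-in: {user}"},
            "Body": {"Text": {"Data": f"User {user} signed in successfully."}},
        },
    )
    return event  # Cognito triggers must return the event object
```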
An engineer at Cobalt Retail is using AWS X-Ray to investigate latency issues in a containerized checkout API. The application already sends segment documents that include custom annotations, and the team needs to quickly include or exclude particular requests based on those annotations while reviewing traces from the last 90 minutes with minimal setup. What should they do? (Choose 2)
✓ B. Use filter expressions in the AWS X-Ray console to search by annotation values
✓ D. Call the GetTraceSummaries API with a filter expression to retrieve trace IDs and indexed annotations
 
Use filter expressions in the AWS X-Ray console to search by annotation values and Call the GetTraceSummaries API with a filter expression to retrieve trace IDs and indexed annotations are the correct choices. These options use X-Ray annotation indexing so you can quickly include or exclude requests by annotation while reviewing recent traces with minimal setup.
Use filter expressions in the AWS X-Ray console to search by annotation values lets you interactively filter traces in the console for the last 90 minutes and apply expressions that match annotation keys and values. The console query is fast because annotations are indexed and you can immediately view or drill into matching traces without exporting data.
Call the GetTraceSummaries API with a filter expression to retrieve trace IDs and indexed annotations gives a programmatic way to run the same indexed queries. GetTraceSummaries returns matching trace IDs and summary metadata so you can quickly find relevant traces or hand off IDs to the console or a follow up BatchGetTraces call when you need full trace documents.
Export trace data to Amazon S3 and analyze it with Amazon Athena is unnecessary for a quick investigation because it requires configuring exports and query schemas, which adds setup and latency compared with using X-Ray's built-in indexing and console filters.
Adjust X-Ray sampling rules to only record requests that include those annotations is about what gets recorded going forward and does not let you filter the existing recorded traces. Sampling changes capture behavior and is not a substitute for indexed querying of current traces.
Use the BatchGetTraces API to download full trace documents and filter them client-side is heavier because it requires fetching full trace documents and performing client side filtering. BatchGetTraces does not provide server side annotation filtering so it is less efficient for quick searches by annotation.
Use console filter expressions or the GetTraceSummaries API to query indexed annotations for fast, minimal setup searches. Remember that sampling controls capture and not querying.
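A boto3 sketch of the programmatic path over the last 90 minutes, using a hypothetical checkout_status annotation:

```python
import boto3
from datetime import datetime, timedelta, timezone

xray = boto3.client("xray")
end = datetime.now(timezone.utc)
start = end - timedelta(minutes=90)

# "checkout_status" is a hypothetical annotation the service records.
kwargs = {
    "StartTime": start,
    "EndTime": end,
    "FilterExpression": 'annotation.checkout_status = "failed"',
}
while True:
    resp = xray.get_trace_summaries(**kwargs)
    for summary in resp["TraceSummaries"]:
        print(summary["Id"], summary.get("ResponseTime"))
    if "NextToken" not in resp:
        break
    kwargs["NextToken"] = resp["NextToken"]
```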
A media startup runs a public website on a single Amazon EC2 instance within a public subnet. The site must be reachable on HTTPS over TCP 443 from the internet, and administrators should connect by SSH over TCP 22 only from the company network 10.32.0.0/12 that is reachable through a VPN. What inbound security group rules satisfy these requirements?
✓ C. Allow inbound 443 from 0.0.0.0/0 and allow inbound 22 from 10.32.0.0/12
 
Allow inbound 443 from 0.0.0.0/0 and allow inbound 22 from 10.32.0.0/12 is correct because it exposes the website to the public on HTTPS while restricting SSH access to the company network that is reachable via the VPN.
The rule Allow inbound 443 from 0.0.0.0/0 and allow inbound 22 from 10.32.0.0/12 permits TCP 443 from any internet client so the site is reachable over HTTPS and it restricts TCP 22 to the 10.32.0.0/12 CIDR so only administrators on the corporate network can SSH into the instance. Security groups are stateful so once an inbound connection is allowed return traffic is automatically permitted.
Allow inbound 443 and 22 from the VPC CIDR 10.0.0.0/16 is wrong because it confines access to the VPC range and prevents internet users from reaching the public HTTPS site.
Allow inbound 22 from 0.0.0.0/0 and allow inbound 443 from 10.32.0.0/12 is wrong because it opens SSH to the entire internet and limits HTTPS to the corporate CIDR which blocks public access to the website.
Allow inbound 443 and 22 from both 0.0.0.0/0 and 10.32.0.0/12 is wrong because it still permits SSH from anywhere and fails the requirement to restrict administrative SSH to the company network.
Use 0.0.0.0/0 only for the public service port and restrict admin ports to your corporate CIDR or VPN network. Remember that security groups are stateful so you only need inbound rules for the client side of a connection.
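A boto3 sketch of the two inbound rules, using a placeholder security group ID:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {   # HTTPS open to the internet
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        },
        {   # SSH restricted to the corporate network over the VPN
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "10.32.0.0/12"}],
        },
    ],
)
```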
NovaSight Analytics, a regional healthcare data startup, is tightening its IAM practices during a security review. According to AWS recommended practices, how should the team handle access keys to strengthen security? (Choose 2)
✓ B. Remove any access keys for the root user
✓ D. Assign distinct access keys to each application or automation script
 
Remove any access keys for the root user and Assign distinct access keys to each application or automation script are the correct choices for strengthening IAM practices at NovaSight Analytics.
Remove any access keys for the root user eliminates the highest risk credential because the root account has full control of the AWS account. Removing root keys reduces the chance of a wide ranging compromise and follows AWS guidance to perform only a few account level tasks with the root account.
Assign distinct access keys to each application or automation script limits the blast radius by giving each identity only the permissions it needs and by making rotation and revocation targeted and auditable. Using separate keys or, preferably, roles and temporary credentials, makes it easier to enforce least privilege and to investigate or remediate incidents without affecting unrelated systems.
Rotate access keys every 24 hours is impractical for most environments and creates operational burden that can lead to risky workarounds. AWS advises regular rotation based on risk and operational readiness rather than an extreme daily schedule.
Store access keys directly in application code repositories is unsafe because code repositories can be shared or exposed and commits can leak secrets. The better approach is to use IAM roles, secret managers, or environment based temporary credentials instead of hard coding keys.
Use a single access key across all services for consistency centralizes risk and removes traceability. A single key makes incident response and targeted revocation difficult and contradicts the principle of least privilege.
Keep the root account free of access keys and prefer temporary credentials via IAM roles. Use per application credentials to isolate risk and rotate keys on a reasonable schedule that balances security and operations.
Trailhead Motors recently moved its core platform to AWS and migrated data from MariaDB into Amazon DynamoDB. You are provisioning a new table named InvoicesV3, and the application must query items using the same partition key while providing an alternate sort key. This capability needs to be defined when the table is first created because it cannot be added afterward. What should you implement?
✓ D. Define a local secondary index when creating the table
 
The correct choice is Define a local secondary index when creating the table. A local secondary index preserves the same partition key and provides an alternate sort key and it must be declared at table creation to satisfy the requirement.
An LSI shares the table partition key and lets you query items with a different sort key without changing the partitioning of the data. An LSI must be created with the base table so you cannot add it later and that constraint matches the question exactly.
Run a Scan operation is incorrect because scanning reads every item in the table and it does not provide any indexing or efficient targeted queries.
Create a global secondary index is incorrect because a global secondary index can be added after table creation and it often uses a different partition key, so it does not enforce the same-partition-key plus alternate sort-key requirement at creation time.
Enable DynamoDB Streams is incorrect because streams capture change events for replication and processing and they do not create or change query indexes or sort keys.
When a question requires the same partition key and an alternate sort key that must exist at table creation choose LSI.
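A boto3 sketch of declaring the LSI at CreateTable time; the key attribute names are illustrative:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# The LSI keeps the table's partition key and must be declared here
# because it cannot be added after the table exists.
dynamodb.create_table(
    TableName="InvoicesV3",
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "InvoiceDate", "AttributeType": "S"},
        {"AttributeName": "DueDate", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},
        {"AttributeName": "InvoiceDate", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "ByDueDate",
        "KeySchema": [
            {"AttributeName": "CustomerId", "KeyType": "HASH"},
            {"AttributeName": "DueDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    BillingMode="PAY_PER_REQUEST",
)
```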
NovaFit Labs is preparing to launch a fitness tracking mobile app and wants to streamline account creation and sign-in for new users. The team expects about 3 million registrations within the next nine months and prefers a fully managed, highly scalable user management solution that minimizes development work. What should the developer recommend?
✓ C. Amazon Cognito User Pools
 
Amazon Cognito User Pools is the correct choice because it provides a fully managed user directory with built in sign up and sign in flows and it minimizes custom development while supporting large scale consumer registrations.
Amazon Cognito User Pools handles user registration, password policies, email and phone verification, multi factor authentication and federation with social and OIDC identity providers while integrating with mobile SDKs and scaling to millions of users so the development team can focus on the app instead of building authentication plumbing.
Amazon Cognito Identity Pools is not correct because it issues temporary AWS credentials for accessing AWS resources and does not itself provide user registration or profile management.
Build a custom authentication service with AWS Lambda and Amazon DynamoDB would satisfy functionality but it increases development effort, operational burden and security responsibility which contradicts the requirement for a fully managed, low maintenance solution.
AWS Directory Service for Microsoft Active Directory targets enterprise directory and Windows integrated workloads inside a VPC and is not suited for public consumer mobile app sign up at internet scale.
Remember to decide whether you need a managed user directory or temporary AWS credentials when choosing between User Pools and Identity Pools so you map the service to the core requirement.
An engineer at a regional consulting firm is planning the architecture for an internal analytics portal that will be accessed by roughly 120 employees. The solution must automatically right-size capacity with changing load while keeping costs low. Which AWS services would deliver the highest elasticity for this design? (Choose 2)
✓ B. Amazon DynamoDB
✓ E. Amazon EC2 Spot Fleet
 
The most elastic services for this design are Amazon DynamoDB and Amazon EC2 Spot Fleet. These two services can automatically right size capacity while keeping costs low which fits an internal analytics portal serving roughly 120 employees.
Amazon DynamoDB provides on-demand capacity and supports Application Auto Scaling for provisioned mode, which allows tables and indexes to scale up quickly during spikes and scale down when traffic subsides. That behavior aligns resource supply with demand and avoids paying for idle infrastructure.
Amazon EC2 Spot Fleet allows you to request and manage fleets of Spot instances across multiple capacity pools and to integrate with scaling policies to add or remove instances automatically. Using Spot Fleet delivers strong cost savings and rapid elasticity for compute tasks that tolerate interruptions.
Amazon RDS can scale with read replicas and instance resizing but it relies on fixed instances and scaling actions are generally slower which makes it less suitable for sudden changes in load.
Amazon CloudFront improves content delivery and reduces origin load but it does not provide compute or database elasticity needed to right size application capacity.
AWS Backup orchestrates backups and restores and it does not scale application compute or data services to match workload.
Focus on services that provide automatic, rapid scaling and minimize idle cost when a question asks for the highest elasticity.
A sports streaming analytics startup named RallyView exposes APIs that return aggregated, precomputed metrics like plays, reactions, and minutes watched. The APIs run on Amazon API Gateway with AWS Lambda, and the data is read from a single file in Amazon S3 that is regenerated every 18 hours. After a sudden surge in requests, clients experience higher latency when calling the API. The team wants faster responses without altering the existing backend components. What should they do to improve the API’s responsiveness?
✓ C. Enable response caching in Amazon API Gateway
 
Enable response caching in Amazon API Gateway is the correct choice because it directly reduces latency without changing the backend. The cache stores method responses in memory keyed by request parameters so calls can be served from the cache instead of invoking Lambda.
The startup regenerates the S3 file every 18 hours so a suitably configured TTL will serve most traffic from the cache and greatly reduce Lambda invocations and cold starts. API Gateway caching is configured at the stage level so you can enable and tune it without modifying Lambda code or S3 output.
Configure Amazon CloudFront to cache API responses ahead of API Gateway can help for global edge caching and reduce latency for distant clients. It is more complex to set up because you must manage cache key and header policies and you may need to adjust origins and behaviors. For a quick drop-in fix with no backend changes, the built-in API Gateway cache is simpler.
Turn on CORS on the API Gateway stage only affects browser cross origin access and does not reduce backend execution or latency. Enabling CORS will not resolve the higher latency experienced after the traffic surge.
Add Amazon ElastiCache to maintain a memory cache for hot keys would improve performance but it requires code changes to read and populate the cache and it adds infrastructure to manage. The team wanted to avoid altering the backend so ElastiCache is not a suitable drop in solution.
When responses are mostly static and you must avoid code changes use API Gateway caching and tune the TTL to match your data refresh window.
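Enabling the stage cache is a stage-level patch. A boto3 sketch with a placeholder API ID; the patch paths follow the documented stage and method-setting formats:

```python
import boto3

apigw = boto3.client("apigateway")

# "0.5" is the smallest cache size in GB; the wildcard method-settings
# path applies the TTL to every method on the stage.
apigw.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "3600"},
    ],
)
```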
A developer at Nimbly Tech maintains a Node.js AWS Lambda function that connects to an Amazon RDS for PostgreSQL database. The handler currently opens a new connection on every invocation and closes it before returning. Which Lambda capability should the developer use so that a previously established connection can be preserved and reused by later invocations in the same runtime environment?
✓ B. Lambda execution context
 
The correct choice is Lambda execution context. When AWS Lambda reuses a runtime environment for subsequent invocations objects that are created outside the handler remain in memory so a previously established database connection can be preserved and reused.
The Lambda execution context lets you initialize database clients and connections in the global scope so warm invocations reuse them and reduce connection overhead. By relying on the execution context you avoid creating a new connection on every invocation and you can combine global initialization with pooling or reconnect logic to avoid exhausting database resources.
Provisioned Concurrency helps keep function environments warm to reduce cold start latency but it is not the mechanism that preserves in memory objects across invocations. It can make reuse events more consistent but it does not itself provide the runtime behavior that enables connection reuse.
Environment variables only supply configuration values such as endpoints or credentials and they cannot maintain open network sockets or database connections. They are useful for settings but not for preserving runtime objects.
Event source mapping controls how events are polled or delivered to the function and it does not influence the function runtime memory or the lifecycle of open connections.
Initialize database clients in the global scope so warm invocations can reuse them and do not close connections at the end of every handler. Consider connection pooling or a managed proxy when many concurrent connections are possible.
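The pattern is the same in any runtime. A Python sketch of the idea (the question's function is Node.js, but the mechanics are identical), assuming the psycopg2 driver is packaged with the function:

```python
import os
import psycopg2  # assumes the driver is bundled with the deployment package

# Created once per execution environment, outside the handler, so warm
# invocations reuse the same connection instead of reconnecting.
conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
)

def lambda_handler(event, context):
    # Reuse the module-level connection and do not close it before returning.
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        return {"ok": cur.fetchone()[0] == 1}
```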
A compliance audit at OrionPay Labs requires that all traffic to Amazon S3 use TLS. Under which S3 encryption option will Amazon S3 reject any request that is sent over HTTP rather than HTTPS?
✓ B. SSE-C
 
The correct choice is SSE-C. Amazon S3 will reject requests sent over plain HTTP when you use customer provided keys because the key material is transmitted in request headers and must be protected in transit.
With SSE-C the client supplies the encryption key with each request and S3 enforces transport layer security to protect that key. For this reason S3 requires HTTPS for SSE-C requests and will refuse requests that arrive over HTTP.
SSE-KMS uses AWS KMS to manage keys for server side encryption at rest and it does not by itself make S3 reject HTTP requests. HTTPS is strongly recommended and AWS SDKs use it by default but the presence of SSE-KMS is not what enforces rejection.
Client-side encryption means data is encrypted before it is uploaded so S3 never receives the unencrypted key material to protect. S3 does not enforce transport requirements based on client side encryption and it will not automatically refuse HTTP requests for this reason.
SSE-S3 uses S3 managed keys for encryption at rest and it does not mandate HTTPS at the API level. S3 will accept requests when SSE-S3 is used though using HTTPS is best practice for all uploads.
When a question mentions customer provided keys remember that SSE-C requires transport security because the key is sent in headers.
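A boto3 sketch of an SSE-C upload with a placeholder bucket; boto3 base64-encodes the key and adds its MD5 digest to the required headers for you:

```python
import os
import boto3

s3 = boto3.client("s3")
key_material = os.urandom(32)  # 256-bit customer-provided key; store it safely

# This call succeeds only over HTTPS because S3 rejects SSE-C requests
# that arrive over plain HTTP.
s3.put_object(
    Bucket="orionpay-reports",  # placeholder bucket name
    Key="audit/report.pdf",
    Body=b"confidential bytes",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key_material,
)
```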
An engineer at Alpine Health Tech maintains a Node.js tool that calls the low-level DynamoDB BatchGetItem API to pull batches of about 90 records from the PatientEvents table. Many calls return incomplete results and list numerous items under UnprocessedKeys. What should the engineer do to make these bulk reads complete more reliably? (Choose 2)
✓ B. Retry unprocessed keys using exponential backoff with jitter between attempts
✓ D. Switch to the AWS SDK’s batch request client to leverage built-in retries
 
Retry unprocessed keys using exponential backoff with jitter between attempts and Switch to the AWS SDK’s batch request client to leverage built-in retries are correct.
Retry unprocessed keys using exponential backoff with jitter between attempts addresses the common causes of partial results such as throttling and hot partitions. DynamoDB returns UnprocessedKeys when the request hits capacity or size limits and retrying only the unprocessed items with exponential backoff and jitter reduces contention and spreads retries over time.
Switch to the AWS SDK’s batch request client to leverage built-in retries is also a reliable approach because the SDK implements retries that follow best practices for backoff and jitter. Using the SDK reduces custom code for retry logic and helps ensure consistent, proven retry behavior.
Add a new Global Secondary Index with separate read capacity settings does not change how BatchGetItem returns UnprocessedKeys and will not guarantee complete batch results in the face of size limits or throttling on the primary table.
On failure, reissue the batch request immediately without delay tends to repeat the same throttled traffic and increases the chance of repeated failures rather than resolving the UnprocessedKeys.
Raise the table’s read capacity and turn on Auto Scaling can reduce throttling but it does not prevent partial results that occur because of response size limits and it may not solve hot partition effects by itself.
When you see UnprocessedKeys retry only the unprocessed items and use exponential backoff with jitter. Prefer the SDK's built-in retry behavior to avoid reinventing retry logic.
A developer at Orion Media Labs has instrumented a microservices application with the AWS X-Ray SDK to capture request telemetry. The team needs a custom debugging dashboard that shows full trace details without using the X-Ray console and should query traces from the last 36 hours. What should the developer do to support this requirement?
-  
✓ D. Use GetTraceSummaries to list trace IDs for the time window, then call BatchGetTraces to download the full trace data
 
Use GetTraceSummaries to list trace IDs for the time window, then call BatchGetTraces to download the full trace data is correct because it discovers trace IDs within a specified time window and then retrieves the full trace documents required to render complete traces in a custom dashboard.
GetTraceSummaries supports start time and end time filters so you can restrict queries to the last 36 hours and it returns trace identifiers and summary information. After you collect the IDs you call BatchGetTraces with those IDs and it returns the full segment documents and annotations that your custom viewer needs to show end to end trace detail.
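A minimal boto3 sketch of this two-step flow; the 36-hour window matches the requirement, and the five-ID batching follows the BatchGetTraces limit:

```python
from datetime import datetime, timedelta, timezone
import boto3

xray = boto3.client("xray")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=36)

# Step 1: discover trace IDs inside the 36-hour window.
trace_ids = []
for page in xray.get_paginator("get_trace_summaries").paginate(
    StartTime=start, EndTime=end
):
    trace_ids += [summary["Id"] for summary in page["TraceSummaries"]]

# Step 2: download the full trace documents, five IDs per call.
traces = []
for i in range(0, len(trace_ids), 5):
    response = xray.batch_get_traces(TraceIds=trace_ids[i : i + 5])
    traces += response["Traces"]
```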
Use the GetServiceGraph API to enumerate trace IDs, then fetch the traces with GetTraceSummaries is incorrect because GetServiceGraph produces a service topology and aggregated metrics and it does not provide a list of individual trace IDs. Also GetTraceSummaries returns summaries and not full trace documents so it cannot by itself provide complete trace payloads.
Call BatchGetTraces first to discover trace IDs, then query the traces with GetTraceSummaries is wrong because BatchGetTraces requires trace IDs as input and so it cannot be used to discover IDs. The summary call is what returns IDs and the batch call is what fetches the full traces.
Use the GetGroup API to obtain trace IDs for the app, then retrieve the traces with BatchGetTraces is invalid because GetGroup returns group metadata and the filter expression for a group and it does not produce the individual trace identifiers needed to call BatchGetTraces.
Remember that GetTraceSummaries is for time based discovery of trace IDs and BatchGetTraces is for downloading full trace documents. Use the time filters to limit results to the last 36 hours.
Aurora Metrics, a media analytics startup, runs several serverless microservices on Amazon API Gateway with AWS Lambda backends. The team needs to release a new version of one public API and keep the current endpoint available while early adopters move to the new one for about 90 days before decommissioning the old version. How should they roll out the change to enable a smooth, side-by-side transition?
-  
✓ C. Update the Lambda, publish a new version, reference it in the API integration, and deploy the API to a separate stage for the new version
 
Update the Lambda, publish a new version, reference it in the API integration, and deploy the API to a separate stage for the new version is correct because it provides two independent API Gateway stages that point to different Lambda versions so clients can continue using the existing endpoint while early adopters move to the new one.
Publishing a new Lambda version and wiring the API integration to that version in a new stage creates distinct invoke URLs and immutable backends. This lets the team test and validate the new release, allow gradual migration, and retire the old stage on a planned schedule without disrupting current users.
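A hedged boto3 sketch of the rollout steps; the function name, API ID, and stage name are placeholders, and the integration rewiring is assumed to happen via an alias or a stage variable in the integration URI:

```python
import boto3

lam = boto3.client("lambda")
apigw = boto3.client("apigateway")

# Publish an immutable version of the updated function code.
new_version = lam.publish_version(FunctionName="orders-backend")["Version"]

# After pointing the API's integration at that version, publish the API
# configuration to a separate stage with its own invoke URL.
apigw.create_deployment(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="v2",
    description=f"side-by-side rollout of Lambda version {new_version}",
)
```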
Update the Lambda code, publish a new version, point the API integration to that version, and redeploy the API to the same stage is incorrect because redeploying to the same stage replaces the previous deployment and removes the old endpoint so clients cannot use both versions side by side.
Publish the update as a new Lambda version and expose it via a Lambda function URL instead of API Gateway is incorrect because function URLs bypass API Gateway features such as authorization, throttling, and usage plans and they do not provide two API Gateway endpoints for a controlled migration.
Enable an Amazon API Gateway canary release on the existing stage to shift 10% of traffic to the new deployment is incorrect because canary releases shift traffic for a single stage and URL rather than providing two stable endpoints that clients can explicitly choose between for a long side by side migration.
Use a separate stage when you need distinct invoke URLs for versioned APIs and use canary deployments when you want gradual traffic shifting on a single URL.
At a boutique travel-booking startup, you are designing an order management backend. The team wants a serverless, event-driven pattern so that every insert, update, or delete on the orders data automatically generates change records that invoke AWS Lambda functions. Which AWS database should you select to satisfy this requirement?
-  
✓ C. DynamoDB
 
The correct choice is DynamoDB because it provides native change capture and can invoke Lambda functions on item inserts, updates, and deletes.
DynamoDB exposes DynamoDB Streams, which captures item-level inserts, updates, and deletes and integrates natively with AWS Lambda through event source mappings, enabling near real time serverless processing. Streams retain records for up to 24 hours, and you can scale consumers to process changes efficiently, so DynamoDB can be the system of record while also driving event-driven Lambda workflows.
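A minimal handler sketch for such an event source mapping, assuming the stream view type includes new images; the reaction logic is a placeholder:

```python
# Lambda handler invoked by a DynamoDB Streams event source mapping,
# assuming a NEW_AND_OLD_IMAGES stream view type.
def handler(event, context):
    for record in event["Records"]:
        change = record["eventName"]  # INSERT, MODIFY, or REMOVE
        if change == "INSERT":
            new_image = record["dynamodb"]["NewImage"]
            print("new order:", new_image)  # placeholder reaction
        elif change == "REMOVE":
            print("order deleted:", record["dynamodb"]["Keys"])
```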
Kinesis is a streaming platform and not a database so it does not serve as the system of record for orders. Kinesis can ingest and process streams but the question asks you to pick a database that emits change events natively.
RDS is a managed relational database and it does not provide native change streams that automatically trigger Lambda. To capture changes from RDS you would need additional tools such as AWS DMS or custom CDC pipelines which adds complexity.
ElastiCache is an in memory cache and not suitable as the system of record for order data and it does not emit change events to Lambda.
When a question requires serverless change data capture that directly invokes Lambda on item changes think DynamoDB Streams and prefer a database with native stream integration rather than adding CDC tooling.
Orion Markets, a regulated fintech exchange, runs a digital asset trading application whose on premises key manager uses an HSM and stores RSA 4096 key pairs. The company is moving the workload to AWS in a hybrid model, and auditors require that cryptographic keys live in dedicated, third party validated hardware security modules that you exclusively control and not in shared infrastructure. What should you implement to meet these requirements?
-  
✓ C. Import the existing RSA key material into an AWS CloudHSM cluster
 
The correct option is Import the existing RSA key material into an AWS CloudHSM cluster. This option maps directly to the audit requirement for cryptographic keys to reside in dedicated, third party validated hardware security modules that you exclusively control.
Import the existing RSA key material into an AWS CloudHSM cluster is appropriate because AWS CloudHSM provides single-tenant, FIPS-validated HSMs under your exclusive control, so you can import RSA 4096 keys and manage the key lifecycle and cryptographic users yourself. CloudHSM keeps key material inside HSM hardware under your control and does not expose key plaintext to AWS managed services.
AWS KMS is not sufficient because KMS relies on AWS managed HSMs that are shared infrastructure, and KMS brokers access and key usage in ways that do not provide the exclusive, hardware-level control the auditors demand.
AWS Secrets Manager is not appropriate because it is for storing application secrets and configuration and it does not provide HSM backed key storage or the FIPS validated, customer controlled HSM environment required by the auditors.
Configure an AWS KMS custom key store backed by your CloudHSM cluster is closer because it lets KMS use a CloudHSM cluster, but KMS still mediates key usage and management. If auditors require keys to live solely in HSM hardware that you control without KMS brokering then directly importing into AWS CloudHSM meets that requirement more directly.
Exclusive control and dedicated HSM in a question usually indicate that you should choose AWS CloudHSM and importing key material when auditors demand customer owned, FIPS validated hardware.
You are the lead engineer at a startup building multiple Amazon API Gateway endpoints for internal microservices. In the development account, the team updates routes and Lambda integrations, but calling the dev endpoint still returns behavior from the prior version. What should you recommend to make the latest changes available to callers?
-  
✓ C. Redeploy the API to an existing stage or create a new stage deployment
 
The correct choice is Redeploy the API to an existing stage or create a new stage deployment. API Gateway stores changes to routes, methods, integrations, and authorizers, but those changes do not become active for callers until you create a deployment and associate it with a stage.
When you make configuration changes you must Redeploy the API to an existing stage or create a new stage deployment so the stage points to the new deployment. Each deployment represents a snapshot of the API configuration so callers will only observe behavior from the most recently associated deployment.
Grant developers IAM permissions for API execution in API Gateway is not sufficient because execution permissions only control who can invoke an already deployed API and they do not publish or activate updated API configuration.
Turn on stage-level caching in API Gateway is counterproductive for seeing immediate updates because caching can return stale responses and it does not push updated definitions. If caching is enabled you must invalidate or disable it but you still need a deployment to publish changes.
Use stage variables to toggle the development state of the API only affects configuration substitution for things like ARNs or backend endpoints and it does not deploy the API. Stage variables do not make saved changes active for callers without creating a new deployment.
Remember to deploy after making changes and to clear stage cache if enabled before testing the endpoint.
A smart transportation startup ingests telemetry from connected buses into Amazon Kinesis Data Streams. During occasional traffic bursts, the application’s PutRecords calls return HTTP 200 but show a nonzero FailedRecordCount, and the per record responses include ProvisionedThroughputExceededException for a shard and InternalFailure for another in stream metroSensorsStream under account 222222222222. What should the developer do to reliably handle these bursty write patterns without overloading shards? (Choose 2)
-  
✓ B. Implement retries with exponential backoff and jitter for failed PutRecords entries
 -  
✓ E. Reduce the request rate or decrease the amount of data per PutRecords call
 
The correct choices are Implement retries with exponential backoff and jitter for failed PutRecords entries and Reduce the request rate or decrease the amount of data per PutRecords call. These two measures together address the shard write throughput limits that cause the observed FailedRecordCount and per record ProvisionedThroughputExceededException and InternalFailure on the metroSensorsStream under account 222222222222.
Using Implement retries with exponential backoff and jitter for failed PutRecords entries lets the producer handle transient errors and smooth retry traffic so shards are not hit with a burst of immediate retries. Adding jitter prevents synchronized retry storms from many producers so the stream can recover.
Using Reduce the request rate or decrease the amount of data per PutRecords call spreads writes over time and lowers per request size so you stay within the per shard limits of about 1 MiB per second and 1,000 records per second. Smaller or less frequent batches reduce the chance of provisioning errors during traffic spikes.
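A minimal Python sketch combining both measures; the stream name comes from the question, while the batch handling and backoff cap are illustrative:

```python
import random
import time
import boto3

kinesis = boto3.client("kinesis")

def put_with_retries(records, stream="metroSensorsStream", max_attempts=8):
    """Resend only the failed PutRecords entries, with backoff and jitter."""
    for attempt in range(max_attempts):
        response = kinesis.put_records(StreamName=stream, Records=records)
        if response["FailedRecordCount"] == 0:
            return
        # Keep just the entries whose per-record result carries an ErrorCode.
        records = [
            record
            for record, result in zip(records, response["Records"])
            if "ErrorCode" in result
        ]
        time.sleep(random.uniform(0, min(5, 0.1 * 2 ** attempt)))
    raise RuntimeError(f"{len(records)} records still failing after retries")
```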
Merge shards to reduce the number of shards in the stream is wrong because merging lowers aggregate write capacity and will make throttling worse during bursts rather than better.
Increase the PutRecords batch size or send requests more frequently is wrong because larger batches or higher request rates increase burst pressure on shards and raise the likelihood of ProvisionedThroughputExceededException.
Enable enhanced fan out on all consumers is wrong because enhanced fan out improves consumer read throughput and does not change producer write limits or shard write capacity.
When you see ProvisionedThroughputExceededException on Kinesis writes use exponential backoff with jitter for retries and consider reducing producer rate or batch size to stay within shard limits.
A regional airline ticketing platform must update the TicketSwaps DynamoDB table and the WalletLedger DynamoDB table in a single all-or-nothing operation so the data remains consistent if any part of the write fails. Which DynamoDB capability should the developers use?
-  
✓ B. DynamoDB Transactions
 
DynamoDB Transactions is the correct choice because it supports atomic all or nothing updates across multiple items and tables which keeps the TicketSwaps and WalletLedger tables consistent when a write must succeed as a unit.
DynamoDB Transactions provide ACID semantics through TransactWriteItems and TransactGetItems and they ensure that either all writes across the involved items and tables commit or none of them are applied. This transactional API is the intended mechanism for coordinating multi item updates in DynamoDB and it prevents partial writes that would leave the ticketing and wallet ledger data inconsistent.
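A minimal boto3 sketch of such a transaction; the key names, attribute values, and update expression are placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Both writes commit together or not at all.
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Put": {
                "TableName": "TicketSwaps",
                "Item": {  # placeholder key and attributes
                    "swapId": {"S": "swap-1234"},
                    "status": {"S": "CONFIRMED"},
                },
            }
        },
        {
            "Update": {
                "TableName": "WalletLedger",
                "Key": {"walletId": {"S": "w-42"}},  # placeholder key
                "UpdateExpression": "SET balance = balance - :amount",
                "ExpressionAttributeValues": {":amount": {"N": "125"}},
            }
        },
    ]
)
```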
DynamoDB Streams emit item level change events for asynchronous processing and auditing but they do not make writes atomic across items or tables so they cannot guarantee all or nothing updates.
DynamoDB TTL only controls time based expiry of items and it does not coordinate or group writes so it cannot provide transactional guarantees.
DynamoDB Indexes such as GSIs and LSIs improve query flexibility and performance but they do not enforce ACID transactions across tables and they are not used to perform atomic multi table writes.
For multi table all or nothing requirements choose DynamoDB Transactions because they provide ACID guarantees across items and tables.
A logistics startup, Trailhead Freight, manages infrastructure with AWS CloudFormation. A dedicated network stack provisions a VPC and a private subnet named app-subnet-02. A separate application stack must reference the subnet ID created by the network stack. What should the developer do so the application stack can consume this value?
-  
✓ C. Define an Outputs entry with an Export name in the network stack template
 
Define an Outputs entry with an Export name in the network stack template is correct because the network stack must export the subnet ID so the application stack can import and reference that value.
The producing stack must declare an Outputs entry with an Export name and value so the consumer can read it. The consuming template then uses Fn::ImportValue to retrieve the exported subnet ID. This export and import pattern works only when both stacks are in the same AWS account and Region and it avoids manual copying or external storage of identifiers.
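A minimal YAML sketch of the pattern across the two templates; the logical IDs and export name are placeholders:

```yaml
# Network stack template: publish the subnet ID under a Region-unique export name.
Outputs:
  AppSubnetId:
    Value: !Ref AppSubnet02          # placeholder logical ID of the subnet
    Export:
      Name: network-app-subnet-02    # placeholder export name
---
# Application stack template: consume the exported value with Fn::ImportValue.
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      SubnetId: !ImportValue network-app-subnet-02
```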
Fn::ImportValue is not sufficient by itself because it only imports a value that has already been exported. The producer must create the export in its Outputs section before the consumer can call Fn::ImportValue.
AWS Systems Manager Parameter Store could hold IDs but it is not the native CloudFormation cross-stack mechanism and using it introduces extra operational overhead and additional permissions to manage.
Fn::Transform invokes macros to transform templates and it does not provide a way to expose values across stacks so it is not applicable for sharing the subnet ID.
Always export values you want other CloudFormation stacks to consume and confirm both stacks are deployed in the same account and Region before using Fn::ImportValue.
A mobile gaming startup is deploying a low-latency API on Amazon EC2 instances running Nginx. The platform must handle tens of millions of concurrent TCP connections, and the application needs to log each client’s original source IP address and source port without using X-Forwarded-For headers. Which AWS load balancing option best satisfies these requirements?
-  
✓ B. Network Load Balancer
 
Network Load Balancer is the correct choice because it operates at Layer 4 and it preserves the client source IP address and source port while scaling to tens of millions of concurrent TCP connections.
Network Load Balancer forwards raw TCP connections so backend Nginx instances observe the original client IP and port without relying on X-Forwarded-For. It is designed for ultra low latency and extreme connection scale which fits a mobile gaming API that must handle tens of millions of simultaneous TCP connections.
Application Load Balancer is incorrect because it is a Layer 7 load balancer and it conveys client details via headers such as X-Forwarded-For rather than preserving the source port, and it is optimized for HTTP and HTTPS routing rather than raw TCP at extreme concurrency.
Classic Load Balancer is incorrect because it is a legacy option that is not intended for tens of millions of concurrent connections and it does not natively preserve both client IP and port without additional workarounds like proxy protocol.
Elastic Load Balancer is incorrect because that phrase refers to the overall ELB service family rather than a specific load balancer implementation, so it is ambiguous and does not directly satisfy the requirement.
When you must preserve client IP and port and support very high numbers of TCP connections choose a Layer 4 load balancer instead of a Layer 7 option.
A small e-commerce startup wants to publish a static website on AWS and serve it at www.willowpets.net. You created an Amazon S3 bucket, enabled static website hosting, and set home.html as the index document. You then added an Alias record in Amazon Route 53 that points to the bucket’s S3 website endpoint. When you visit www.willowpets.net, the browser shows HTTP 403 Access Denied. What should you change to make the site publicly accessible?
-  
✓ B. Add a bucket policy that grants public read access and verify S3 Block Public Access is not blocking it
 
Add a bucket policy that grants public read access and verify S3 Block Public Access is not blocking it is the correct option because the S3 website endpoint serves content anonymously, so the objects must allow public s3:GetObject or the endpoint returns HTTP 403. If the Route 53 alias points to the S3 website endpoint but the objects are not publicly readable, or Block Public Access is rejecting public policies, the browser sees 403 Access Denied.
You should apply a bucket policy that allows s3:GetObject on the bucket's object ARN pattern (arn:aws:s3:::bucket-name/*) and then confirm that Block Public Access settings at the bucket and account level are not overriding it. Granting public read on the objects is the usual fix for static website hosting when the DNS and website configuration are already correct.
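A hedged boto3 sketch of that fix; the bucket name follows the scenario, and account-level Block Public Access may also need review:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "www.willowpets.net"  # website bucket from the scenario

# Allow public bucket policies first (account-level settings may also apply).
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# Grant anonymous read on the objects so the website endpoint can serve them.
s3.put_bucket_policy(
    Bucket=bucket,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }),
)
```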
Create an IAM role is incorrect because IAM roles grant permissions to AWS principals and services and they do not grant anonymous public access for website visitors.
Enable CORS on the bucket is incorrect because CORS controls cross origin requests and does not change the object read permissions that cause a 403 error.
Turn on default server-side encryption for the bucket is incorrect because encryption affects data at rest and does not grant or change public read permissions so it will not resolve a 403 response.
When a static S3 site returns 403 check object public read permissions and S3 Block Public Access first and remember that 404 usually means the index document or object is missing.
A team at a note-taking startup is building a cross platform app where a user’s preferences and current note edits must stay consistent across phone, tablet, and web. The app must also let several collaborators work on the same shared notebooks with near real time updates and resolve conflicts after offline changes. Which AWS service should you use to implement this?
-  
✓ C. AWS AppSync
 
AWS AppSync is the correct choice because it provides managed GraphQL APIs with subscriptions for near real time updates, and it includes built-in offline data access with conflict detection and resolution, so user preferences and note edits stay consistent across phone, tablet, and web while multiple collaborators work on shared notebooks.
AWS AppSync supports GraphQL subscriptions so clients receive updates in near real time and it integrates with client libraries and DataStore to provide local first synchronization and automatic conflict handling when devices come back online. The service also works with identity providers and back ends so you can secure data per user and per notebook while keeping synchronization logic managed by the service.
AWS Amplify speeds development and provides tooling and libraries for mobile and web apps and it can provision AppSync endpoints, yet it is not the core service that implements GraphQL subscriptions offline sync and conflict resolution by itself.
Amazon DynamoDB Streams publishes change events from a table for back end processing and analytics, yet it does not provide client side subscriptions offline sync capabilities or built in conflict resolution for collaborative real time editing.
Amazon Cognito Sync is focused on synchronizing user specific settings across a single user’s devices and it does not support multi user shared dataset collaboration in real time. Amazon Cognito Sync has been superseded by newer approaches such as AppSync and Amplify DataStore so it is less likely to be the correct choice on recent exams.
When a scenario mentions real time updates and offline conflict resolution and asks for multi device and multi user sync choose a managed GraphQL solution such as AppSync.
An engineer at Orion HealthTech set up an Application Load Balancer in front of several Amazon EC2 instances for a staging rollout. During initial smoke tests, they notice the listener does not yet have a target group attached. What HTTP status code will appear in the load balancer logs until targets are registered?
-  
✓ C. HTTP 503
 
HTTP 503 is the correct response that will appear in the Application Load Balancer logs when the listener does not have a target group attached or when the target group has no registered targets.
HTTP 503 is returned because the load balancer cannot forward the request to any backend. The ALB is able to accept the request but it has no backend endpoints to route to, so it replies with Service Unavailable rather than timing out or denying access.
HTTP 500 denotes an internal server error in the load balancer or backend processing and it is not the expected response for an unattached or empty target group.
HTTP 504 indicates a gateway timeout that occurs when the load balancer reached a target but did not receive a response before the timeout, and this situation does not apply when there are no targets attached.
HTTP 403 means the request is forbidden and is usually caused by access controls such as AWS WAF rules or listener rules, which is unrelated to a missing target group assignment.
Remember to check that the listener has an attached target group and that targets are registered and healthy when you see 503 responses in ALB logs.
A developer at Zephyr Health needs to publish new application code to an AWS Elastic Beanstalk environment that runs behind a load balancer across sixteen Amazon EC2 instances. The rollout must not reduce capacity or harm user experience at any time, and the team wants to keep extra cost to a minimum. Which deployment policy should be used?
-  
✓ C. Rolling with additional batch
 
The correct option is Rolling with additional batch. This deployment policy temporarily launches extra instances for each batch so the environment remains at full capacity during the rollout and it meets the requirement to avoid any reduction in capacity while keeping additional cost low.
The Rolling with additional batch strategy updates instances in batches and adds a short lived extra batch to preserve the same number of serving instances during each update. This prevents throughput dips during the deployment and keeps extra cost minimal because the additional instances exist only for the duration of the rollout.
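A minimal boto3 sketch of selecting this policy through the environment's option settings; the environment name is a placeholder:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Select the deployment policy through the environment's option settings.
eb.update_environment(
    EnvironmentName="zephyr-web-prod",  # placeholder environment name
    OptionSettings=[{
        "Namespace": "aws:elasticbeanstalk:command",
        "OptionName": "DeploymentPolicy",
        "Value": "RollingWithAdditionalBatch",
    }],
)
```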
Rolling is not appropriate because it updates batches without adding extra instances. That approach reduces available capacity during each batch and can harm user experience under load.
Immutable does provide zero downtime by creating a fresh Auto Scaling group and replacing the fleet, but it duplicates the entire environment and is generally much more expensive. That higher cost makes it a less suitable choice when minimizing extra expense is important.
All at once updates every instance at the same time which causes downtime or service interruption and it therefore violates the requirement to avoid any impact on user experience.
Look for phrases like no reduction in capacity and cost-effective to identify the deployment that temporarily adds short lived instances per batch.
A software engineer at a healthcare analytics startup needs to store confidential reports in an Amazon S3 bucket. All objects must be encrypted at rest, and the security policy requires the encryption keys to be rotated every 12 months. What is the simplest way to meet these requirements?
-  
✓ C. Enable automatic annual rotation on a customer managed AWS KMS key and use SSE-KMS for the bucket
 
The correct choice is Enable automatic annual rotation on a customer managed AWS KMS key and use SSE-KMS for the bucket. This option provides server side encryption for S3 objects while allowing you to enable automatic yearly rotation of the key material to meet the 12 month rotation requirement.
With SSE-KMS Amazon S3 uses AWS KMS for envelope encryption and a customer managed KMS key gives you control over key policies and lifecycle. Enabling automatic rotation on the customer managed key ensures AWS KMS rotates the cryptographic material every 12 months which satisfies the security policy with minimal operational overhead.
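A minimal boto3 sketch of the setup; the bucket name is a placeholder, and rotation defaults to every 365 days once enabled:

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and turn on automatic annual rotation.
key_id = kms.create_key(Description="reports encryption key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Make SSE-KMS with that key the bucket default.
s3.put_bucket_encryption(
    Bucket="confidential-reports",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            }
        }]
    },
)
```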
Perform client-side encryption before uploading objects to Amazon S3 would work technically but it is not the simplest approach because you must design and operate your own key management and rotation processes which increases complexity.
Import your own key material into AWS KMS and enable yearly rotation is incorrect because AWS KMS does not support automatic rotation for imported key material so you would need to rotate and reimport keys manually which defeats the requirement for minimal operational work.
Turn on default bucket encryption with SSE-S3 does not meet the requirement because S3 managed keys are rotated by AWS on an internal schedule and you cannot configure or demonstrate a customer controlled 12 month rotation cadence.
When the question asks for encrypted S3 objects and annual key rotation with the least operational effort, choose SSE-KMS with a customer managed KMS key and enable automatic rotation.
An engineer at LumiTrack Analytics deployed a new Application Load Balancer with a listener and a target group, but no instances or IP addresses have been registered to that group yet. When the engineer makes an HTTP request to the load balancer DNS name, which status code will be returned?
-  
✓ C. HTTP 503: Service unavailable
 
HTTP 503: Service unavailable is the correct response when an Application Load Balancer receives a request but the configured target group has no registered or no healthy targets to forward the request to.
The load balancer cannot route the request to any backend when there are no targets, and the ALB returns HTTP 503 to indicate the service is unavailable for that request. This response is generated before any backend connection is established and it is distinct from errors that require a reachable target to exist.
HTTP 504: Gateway timeout indicates a backend accepted the connection but did not respond within the timeout window and it therefore implies at least one reachable target was involved.
HTTP 502: Bad gateway usually means the load balancer received an invalid or malformed response from a target or could not establish a proper response chain and it typically presumes a target attempted to answer.
HTTP 500: Internal server error reflects an application fault on a responding target and it is not produced simply because the target group is empty.
Remember that an ALB with no registered or all unhealthy targets returns 503 and that timeouts and bad backend responses usually map to 504 and 502 respectively.
Arcadia Goods has an Application Load Balancer sending requests to a Lambda function named Alpha as the target, but Alpha never processes any of the incoming requests. Monitoring shows another Lambda function named Beta in the same account frequently consumes about 900 out of the account’s 1,000 concurrent executions. What should the team change to ensure Alpha can reliably run when invoked by the ALB?
-  
✓ C. Configure reserved concurrency for Lambda function Beta to cap its maximum concurrent executions and prevent it from exhausting the account
 
Configure reserved concurrency for Lambda function Beta to cap its maximum concurrent executions and prevent it from exhausting the account is correct. This setting limits Beta and preserves unreserved capacity so that Alpha can be invoked by the Application Load Balancer.
Reserved concurrency puts a hard ceiling on how many concurrent executions a single function can use and it also guarantees that the remaining account concurrency is available to other functions. By configuring reserved concurrency on Beta the team prevents Beta from consuming nearly all of the account pool and they allow Alpha to obtain the concurrency it needs when the ALB routes requests to it.
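A minimal boto3 sketch of capping Beta; the exact cap is a design choice, shown here as an illustrative value that leaves headroom for Alpha:

```python
import boto3

lam = boto3.client("lambda")

# Cap Beta so it can never exhaust the shared 1,000-execution account pool,
# leaving unreserved concurrency available for Alpha and other functions.
lam.put_function_concurrency(
    FunctionName="Beta",
    ReservedConcurrentExecutions=700,  # illustrative cap
)
```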
Enable provisioned concurrency on Lambda function Beta to restrict its concurrency during spikes is incorrect because provisioned concurrency allocates warm execution environments for predictable latency and it does not impose a concurrency limit. It can actually consume concurrency capacity and it does not act as a throttle.
Use Amazon API Gateway instead of an Application Load Balancer for Lambda function Alpha is incorrect because switching the frontend does not change the account level concurrency that Beta is exhausting. The concurrency contention is at the Lambda account level and a different trigger will not solve that.
Enable provisioned concurrency on Lambda function Alpha is incorrect because provisioned concurrency addresses cold starts and latency rather than preventing starvation when another function has consumed most of the account concurrency. It does not free capacity if the account pool is already exhausted.
When you need to guarantee capacity use reserved concurrency. When you need lower cold start latency use provisioned concurrency.
A media analytics startup needs to display Amazon S3 object listings in its admin portal, showing 75 items per page while keeping AWS API requests to a minimum. Which AWS CLI parameters will return exactly 75 items per page and allow the next page to continue from where the previous one ended? (Choose 2)
-  
✓ B. --max-items
 -  
✓ D. --starting-token
 
The correct choices are --max-items and --starting-token. These two flags let the AWS CLI return exactly 75 items to the caller and then resume the next page from where the previous output ended.
Using --max-items 75 tells the CLI to aggregate service responses and present exactly 75 items in the CLI output while minimizing the number of service API calls. Using --starting-token with the continuation token from the previous response allows the next page to continue the listing seamlessly without repeating or skipping objects.
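For example, a sketch of the paging flow with the AWS CLI; the bucket name and token placeholder are illustrative:

```bash
# First page: exactly 75 items in the aggregated CLI output.
aws s3api list-objects-v2 --bucket media-assets --max-items 75

# The output includes a NextToken; pass it back to resume the listing.
aws s3api list-objects-v2 --bucket media-assets --max-items 75 \
    --starting-token <NextToken-from-previous-page>
```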
--page-size controls how many items are requested per individual service API call, and tuning it can increase the total number of backend calls. That behavior is the opposite of the requirement to keep API requests to a minimum, so it is not the right flag for this goal.
--next-token is not a valid AWS CLI pagination flag. Services may return a NextToken or similar continuation value, but the CLI expects you to pass that value using --starting-token, so --next-token is incorrect here.
--limit is not a supported global pagination option for these commands, and it is not the way to request exactly N items per page in the AWS CLI, so it is not correct.
Use --max-items to control how many results the CLI returns and use --starting-token to resume pages. Prefer these flags when your goal is to minimize service API calls rather than tuning per-call sizes with --page-size.
CineWave Media is a global streaming platform with over 140 million subscribers who watch roughly 180 million hours of content each day. Nearly all workloads run on AWS across more than 15,000 instances, producing a rapidly changing network where services communicate within AWS and over the internet. The operations team must continuously monitor and optimize networking. They need a cost‑efficient way to ingest and analyze several terabytes of real-time VPC flow logs daily while retaining the flexibility to route the stream to multiple downstream analytics, storage, and alerting consumers. Which AWS service best meets these needs?
-  
✓ B. Amazon Kinesis Data Streams
 
Amazon Kinesis Data Streams is the correct choice because it ingests high-throughput streaming data in real time and supports multiple independent consumers with enhanced fan-out or shared consumer models, while also allowing replay and custom processing to route VPC flow logs to analytics, storage, and alerting systems as required.
Amazon Kinesis Data Streams scales by shard so you can cost effectively handle several terabytes per day and control ordering and retention for reprocessing. The service lets you attach multiple consumers that read the same stream without interfering with each other and it supports replay of records to recover from downstream failures or to run new analytics jobs on historical windows of data.
Amazon Kinesis Data Firehose is a managed delivery service that simplifies loading data into a fixed set of destinations such as Amazon S3, Amazon Redshift, or Amazon OpenSearch Service, and it is a good choice for turnkey delivery. It is not the best fit here because it does not provide the same flexibility for multiple custom consumers or the fine-grained replay and routing that the scenario requires.
Amazon SQS is a message queuing service that is optimized for decoupling components and at least once processing. It does not provide shard based ordering and it is not designed for continuous high volume streaming with multiple independent analytics consumers and record replay, so it is not suitable for this use case.
AWS Glue is an extract transform and load service and data catalog that helps with batch and serverless ETL. It is not intended to be the primary real time ingestion and fan out mechanism for several terabytes of streaming VPC flow logs each day, so it is not the right service for this requirement.
When a question highlights real-time streaming and multiple independent consumers and replay choose a streaming service that supports fan-out and retention rather than a simple delivery or queueing service.