Practice Exams for the AWS Developer Certification (Associate)

Why Get AWS Developer Certified?

Over the past few months, I have been helping developers and cloud professionals prepare for careers that thrive in the AWS ecosystem. The goal is simple: to help you build, deploy, and manage real-world applications using the same cloud services trusted by millions of organizations worldwide.

A key milestone in that journey is earning your AWS Certified Developer Associate credential. This certification demonstrates your ability to develop, test, and deploy modern applications on AWS using best practices for scalability, security, and performance.

Whether you’re a software developer, DevOps engineer, solutions architect, or backend specialist, the AWS Developer Associate certification gives you a strong foundation for cloud-native development. You will learn how to integrate with key services like Lambda, DynamoDB, S3, SQS, and API Gateway while mastering tools like CloudFormation, CodePipeline, and CloudWatch.

In the fast-paced world of cloud computing, understanding how to build and maintain applications on AWS is no longer optional. Every developer should be familiar with SDKs, IAM roles, serverless architecture, CI/CD pipelines, and event-driven design.

That is exactly what the AWS Developer Associate Certification Exam measures. It validates your knowledge of AWS SDKs, application security, error handling, deployment automation, and how to write efficient, scalable, and cost-optimized code for the cloud.

AWS Developer Practice Questions and Exam Simulators

Through my Udemy courses on AWS certifications and through the free question banks at certificationexams.pro, I have seen where learners struggle most. That experience led to the creation of realistic AWS Developer Practice Questions that mirror the format, difficulty, and nuance of the real test.

You will also find AWS Developer Exam Sample Questions and full AWS Developer Practice Tests designed to help you identify your strengths and weaknesses. Each AWS Developer Question and Answer set includes detailed explanations, helping you learn not just what the correct answer is, but why it is correct.

These materials go far beyond simple memorization. The goal is to help you reason through real AWS development scenarios such as deploying Lambda functions, optimizing API Gateway configurations, and managing data with DynamoDB, just as you would in production.

Real Exam Readiness

If you are looking for Real AWS Developer Exam Questions, this collection provides authentic examples of what to expect. Each one has been carefully crafted to reflect the depth and logic of the actual exam without crossing ethical boundaries. These are not AWS Developer Exam Dumps or copied content. They are original practice resources built to help you learn.

Our AWS Developer Exam Simulator replicates the pacing and difficulty of the live exam, so by the time you sit for the test, you will be confident and ready. If you prefer to study in smaller chunks, you can explore curated AWS Developer Exam Dumps and AWS Developer Braindump-style study sets that focus on one topic at a time, such as IAM, CI/CD, or serverless architecture.

Each AWS Developer Practice Test is designed to challenge you slightly more than the real exam. That is deliberate. If you can perform well here, you will be more than ready when it counts.

Learn & Succeed as an AWS Developer

The purpose of these AWS Developer Exam Questions is not just to help you pass. It is to help you grow as a professional who can design and deliver production grade applications on AWS. You will gain confidence in your ability to build resilient, efficient, and maintainable solutions using the full power of the AWS platform.

So dive into the AWS Developer Practice Questions, test your knowledge with the AWS Developer Exam Simulator, and see how well you can perform under real exam conditions.

Good luck, and remember, every great cloud development career begins with mastering the tools and services that power the AWS cloud.

AWS Developer Certification Exam Simulator Questions


Northwind Press, a digital publisher, wants a serverless way to roll out static front-end sites through preview and production stages. Repositories are spread across multiple Git providers, and deployments should start automatically when changes are merged into designated branches. All traffic and integrations must use HTTPS. Which approach will minimize ongoing operational effort?

  • ❏ A. Host in Amazon S3 and use AWS CodePipeline with AWS CodeBuild to deploy on branch merges, fronted by Amazon CloudFront for HTTPS

  • ❏ B. Use AWS Elastic Beanstalk with AWS CodeStar to manage environments and deployments

  • ❏ C. Use AWS Amplify Hosting, connect the target Git branches to environments, and let merges trigger automated HTTPS deployments

  • ❏ D. Run separate Amazon EC2 instances per environment and automate releases with AWS CodeDeploy tied to the repositories

A streaming analytics team at Vega Studios is building a serverless consumer that processes records from an Amazon Kinesis Data Streams stream using AWS Lambda. The function is CPU constrained during batch processing and the team wants to increase per-invocation compute capacity without changing code while keeping costs manageable. What should they do to increase the CPU available to the function?

  • ❏ A. Configure the function to run on unreserved account concurrency

  • ❏ B. Use Lambda@Edge

  • ❏ C. Increase the function’s concurrent executions limit

  • ❏ D. Allocate more memory to the function

A travel reservations startup is overhauling its release process and moving away from a rigid waterfall approach. The team now requires all services to follow CI/CD best practices and to be containerized with Docker, with images stored in Amazon ECR and published via AWS CodePipeline and AWS CodeBuild. During a pipeline run, the final push to the registry fails and reports an authorization error. What is the most probable cause?

  • ❏ A. Security group rules prevent CodeBuild from reaching Amazon ECR

  • ❏ B. The CodeBuild service role lacks permissions to obtain an ECR authorization token and push images

  • ❏ C. The ECS cluster instances need extra configuration added to /etc/ecs/ecs.config

  • ❏ D. The build environment’s VPC is missing Amazon ECR interface VPC endpoints

NovaArcade is building a gaming analytics feature on Amazon DynamoDB. The table uses user_id as the partition key and game_name as the sort key, and each item stores points and points_recorded_at. The team must power a leaderboard that returns the highest scoring players (user_id) per game_name with the lowest read cost and latency. What is the most efficient way to fetch these results?

  • ❏ A. Use a DynamoDB Query on the base table with key attributes user_id and game_name and sort the returned items by points in the application

  • ❏ B. Use DynamoDB Streams with AWS Lambda to maintain a separate Leaderboards table keyed by game_name and sorted by points, then query that table

  • ❏ C. Create a global secondary index with partition key game_name and sort key points, then query it for each game_name with ScanIndexForward set to false

  • ❏ D. Create a local secondary index with primary key game_name and sort key points and query by game_name

A Berlin-based travel startup is developing a cross-platform mobile app that alerts users to visible meteor showers and ISS flyovers over the next 30 days. The app signs users in with a social identity provider using the provider’s SDK and then sends the returned OAuth 2.0 or OpenID Connect token to Amazon Cognito Identity Pools. After the user is authenticated, what does Cognito return that the client uses to obtain temporary, limited-privilege AWS credentials?

  • ❏ A. Cognito key pair

  • ❏ B. AWS Security Token Service

  • ❏ C. Amazon Cognito identity ID

  • ❏ D. Amazon Cognito SDK

Over the next 9 months, Orion Travel Tech plans to rebuild its legacy platform using Node.js and GraphQL. The team needs full request tracing with a visual service map of dependencies. The application will run on an EC2 Auto Scaling group of Amazon Linux 2 instances behind an Application Load Balancer, and trace data must flow to AWS X-Ray. What is the most appropriate approach to meet these needs?

  • ❏ A. Refactor the Node.js service to call the PutTraceSegments API and push segments straight to AWS X-Ray

  • ❏ B. Enable AWS WAF on the Application Load Balancer to inspect and record all web requests

  • ❏ C. Add a user data script that installs and starts the AWS X-Ray daemon on each EC2 instance in the Auto Scaling group

  • ❏ D. Turn on AWS X-Ray tracing in the EC2 Auto Scaling launch template

An operations dashboard for Helios Tickets streams click and event data into Amazon Kinesis Data Streams. During flash sales, the producers are not fully using the 24 available shards, leaving write capacity idle. Which change will help the producers better utilize the shards and increase write throughput to the stream?

  • ❏ A. Increase the number of shards with UpdateShardCount

  • ❏ B. Insert Amazon SQS between producers and the stream to buffer writes

  • ❏ C. Use the Kinesis Producer Library to aggregate and batch records before calling PutRecords

  • ❏ D. Call DynamoDB BatchWriteItem to send multiple records at once

Your team is building a subscription portal for BrightPath Media using AWS Lambda, AWS App Runner, and Amazon DynamoDB. Whenever a new item is inserted into the Members table, a Lambda function must automatically send a personalized welcome email to the member. What is the most appropriate way to have the table changes invoke the function?

  • ❏ A. Create an Amazon EventBridge rule for new item inserts in the table and target the Lambda function

  • ❏ B. Enable DynamoDB Streams on the Members table and configure it as the event source for the Lambda function

  • ❏ C. Use Amazon Kinesis Data Streams to capture new table data and subscribe the Lambda function

  • ❏ D. Enable DynamoDB Transactions and configure them as the event source for the function

A developer at VerdantPay is building a serverless workflow that processes confidential payment data. The AWS Lambda function writes intermediate files to its /tmp directory, and the company requires those files to be encrypted while at rest. What should the developer do to meet this requirement?

  • ❏ A. Attach the Lambda function to a VPC and mount an encrypted Amazon EBS volume to /tmp

  • ❏ B. Configure the Lambda function to use an AWS KMS customer managed key, generate a data key with KMS, and encrypt data before writing to /tmp

  • ❏ C. Enable default encryption on an Amazon S3 bucket with a KMS customer managed key and mount the bucket to /tmp

  • ❏ D. Mount an encrypted Amazon EFS access point and rely on EFS encryption at rest for files written to /tmp

You are creating a YAML AWS CloudFormation stack for BlueRiver Analytics that provisions an Amazon EC2 instance and a single Amazon RDS DB instance. After the stack completes, you want to expose the database connection endpoint in the Outputs so other stacks can consume it. Which intrinsic function should be used to fetch that endpoint value?

  • ❏ A. !Sub

  • ❏ B. !FindInMap

  • ❏ C. !GetAtt

  • ❏ D. !Ref

A retail analytics startup runs part of its platform on Amazon EC2 and the rest on servers in a private colocation rack. The on-premises hosts are vital to the service, and the team needs to aggregate host metrics and application logs in Amazon CloudWatch alongside EC2 data. What should the developer implement to send telemetry from the on-premises servers to CloudWatch?

  • ❏ A. Build a scheduled script that gathers metrics and log files and uploads them to CloudWatch with the AWS CLI

  • ❏ B. Install the CloudWatch agent on the data center servers and configure an IAM user’s access key and secret key that allow publishing metrics and logs to CloudWatch

  • ❏ C. Register the servers as AWS Systems Manager hybrid managed instances so the CloudWatch agent can assume the Systems Manager role to send data to CloudWatch

  • ❏ D. Install the CloudWatch agent on the data center servers and attach an IAM role with CloudWatch permissions to those machines

A travel startup named Skylark Journeys plans to roll out refreshed REST endpoints for its iOS and Android app that run behind Amazon API Gateway. The team wants to begin by sending roughly 15% of calls to the new version while the rest continue to use the current release, and they prefer the simplest approach that stays within API Gateway. How can they expose the new version to only a subset of clients through API Gateway?

  • ❏ A. Configure an Amazon Route 53 weighted routing policy to send a percentage of requests to a second API Gateway domain

  • ❏ B. Enable a stage canary in Amazon API Gateway and use canarySettings to shift a small share of traffic to the new deployment

  • ❏ C. Launch the new API in a separate VPC and use Amazon CloudFront to split traffic between the two origins

  • ❏ D. Use AWS CodeDeploy traffic shifting to gradually move calls to the updated API

A media analytics startup runs an Amazon EC2 Auto Scaling group named api-fleet with a maximum size of 5 and a current size of 4. A scale-out policy is configured to add 4 instances when its CloudWatch alarm is in ALARM. When this policy is executed, what will happen?

  • ❏ A. Amazon EC2 Auto Scaling adds 4 instances to the group

  • ❏ B. Amazon EC2 Auto Scaling adds only 1 instance to the group

  • ❏ C. Amazon EC2 Auto Scaling launches 4 instances and then scales in 3 shortly afterward

  • ❏ D. Amazon EC2 Auto Scaling adds 4 instances across multiple Availability Zones because the maximum size applies per Availability Zone

A developer at Northwind Bikes configured an AWS CodeBuild project in the console to use a custom Docker image stored in Amazon ECR. The buildspec file exists and the build starts, but the environment fails when trying to fetch the container image before any build commands run. What is the most likely reason for the failure?

  • ❏ A. AWS CodeBuild does not support custom Docker images

  • ❏ B. The image in Amazon ECR has no tags

  • ❏ C. The CodeBuild service role lacks permissions to authenticate and pull from Amazon ECR

  • ❏ D. The build environment is not configured for privileged mode

Orion Outfitters, a global distributor, needs users from partner supplier companies who sign in with their own SAML or OIDC identity providers to add and modify items in two DynamoDB tables named PartnerOrders and SupplierCatalog in Orion’s AWS account without creating individual IAM users for them. Which approach should the developer implement to securely grant these partner users scoped access to those tables?

  • ❏ A. Create an IAM user for each partner user and attach DynamoDB permissions

  • ❏ B. Configure Amazon Cognito User Pools to sign in supplier users and authorize DynamoDB access

  • ❏ C. Set up Amazon Cognito Identity Pools to federate supplier IdPs and issue temporary credentials for DynamoDB operations

  • ❏ D. AWS IAM Identity Center

BrightParcel, a logistics startup, runs Auto Scaling worker nodes on Amazon EC2 that poll an Amazon SQS standard queue for tasks. Some messages fail repeatedly during processing, and after 4 attempts the team wants those problematic messages automatically routed to an isolated location for later analysis without deleting them. What should they implement to keep failing messages separate for troubleshooting?

  • ❏ A. Enable long polling on the queue

  • ❏ B. Implement a dead-letter queue

  • ❏ C. Decrease the visibility timeout

  • ❏ D. Increase the visibility timeout

A travel booking platform runs several Amazon EC2 web servers behind an Application Load Balancer, and during busy periods the instances sustain around 92 percent CPU utilization. The engineering team has confirmed that processing TLS for HTTPS traffic is consuming most of the CPU on the servers. What actions should they take to move the TLS workload off the application instances? (Choose 2)

  • ❏ A. Configure an HTTPS listener on the ALB that forwards encrypted traffic to targets without decryption (pass-through)

  • ❏ B. Request or import an SSL/TLS certificate in AWS Certificate Manager and associate it with the ALB

  • ❏ C. Install the ACM certificate directly on each EC2 instance to terminate TLS on the server

  • ❏ D. Create an HTTPS listener on the ALB that terminates TLS at the load balancer

  • ❏ E. Add AWS WAF on the ALB to reduce CPU from TLS

You manage multiple Amazon API Gateway REST APIs that invoke AWS Lambda for each release of Orion Fintech’s backend. The company wants to merge these into one API while keeping distinct subdomain endpoints such as alpha.api.orionfin.io, beta.api.orionfin.io, rc.api.orionfin.io, and prod.api.orionfin.io so clients can explicitly target an environment. What should you implement to achieve this in the most maintainable way?

  • ❏ A. Modify the Integration Request mapping to dynamically rewrite the backend endpoint for each release based on the hostname

  • ❏ B. Configure Amazon Route 53 weighted records to direct clients to separate APIs for each environment

  • ❏ C. Create stages for ALPHA, BETA, RC, and PROD and use stage variables to target the right Lambda or HTTP backend, then map each subdomain to its stage

  • ❏ D. Use Lambda layers to separate environment-specific code

NorthWind Labs, a media analytics startup, exposes a public Amazon API Gateway REST API that backs its web dashboard with Amazon Cognito sign-in. The team is preparing a beta of a major API revision that introduces new resources and breaking changes, and only a small set of internal developers should test it while paying customers continue using the current API uninterrupted. The development team will maintain and iterate on the beta during this period. What is the most operationally efficient way to let the developers invoke the new version without impacting production users?

  • ❏ A. Configure a canary release on the existing production stage and give developers a stage-variable URL to hit

  • ❏ B. Create a separate API Gateway stage such as beta-v3 that is deployed from the new version and have developers use that stage’s invoke URL

  • ❏ C. Issue new API keys in API Gateway and require developers to pass those keys to reach the new version

  • ❏ D. Build a brand-new API Gateway API for the new handlers and tell developers to call that new API

A data insights portal for a regional retailer runs on AWS Elastic Beanstalk and stores report data in an Amazon DynamoDB table named AnalyticsEvents. Each request currently performs a full table scan and then filters the results for the user. Adoption is expected to jump over the next 8 weeks, and the table will grow substantially as report requests increase. What should you implement ahead of the growth to keep read performance high while minimizing cost? (Choose 2)

  • ❏ A. Enable DynamoDB Accelerator (DAX)

  • ❏ B. Switch to Query requests where possible

  • ❏ C. Adjust the ScanIndexForward setting to sort query results

  • ❏ D. Lower the page size by setting a smaller Limit value

  • ❏ E. Increase the table’s write capacity units (WCU)

NovaPlay, a small streaming startup, uses Amazon CloudFront to serve static web assets to a global audience. Minutes after uploading new CSS and JavaScript files to the S3 origin, some visitors still receive stale versions from edge locations. The cache behavior currently has an 8-hour TTL, but the team needs the new objects to be delivered right away without causing an outage. What should the developer do to replace the cached files with minimal disruption?

  • ❏ A. Disable the CloudFront distribution and then enable it to refresh all edge caches

  • ❏ B. Submit a CloudFront invalidation request for the updated object paths

  • ❏ C. Reduce the cache TTL to zero in the cache behavior and wait for propagation

  • ❏ D. Create a new origin with the updated files and repoint the distribution to it

Orion Ledger, a fintech startup, runs a web service on Amazon EC2 instances behind an Application Load Balancer. Clients must connect to the load balancer over HTTPS. The developer obtained a public X.509 TLS certificate for the site from AWS Certificate Manager. What should the developer do to establish secure client connections to the load balancer?

  • ❏ A. Configure each EC2 instance to use the ACM certificate and terminate TLS on the instances

  • ❏ B. Export the certificate private key to an Amazon S3 bucket and configure the ALB to load the certificate from S3

  • ❏ C. Associate the ACM certificate with the ALB HTTPS listener by using the AWS Management Console

  • ❏ D. Place Amazon CloudFront in front of the ALB and attach the ACM certificate to the CloudFront distribution

A ticketing startup, MetroTix, is running a flash sale that causes a spike in events on an Amazon Kinesis Data Streams stream. To scale, the team split shards so the stream grew from 5 shards to 12 shards. The consumer uses the Kinesis Client Library and runs one worker per Amazon EC2 instance. What is the maximum number of EC2 instances that can be launched to process this stream for this application at the same time?

  • ❏ A. 24

  • ❏ B. 12

  • ❏ C. 5

  • ❏ D. 1

A retail technology startup is breaking an older order-processing system into microservices on AWS. One service will run several AWS Lambda functions, and the team plans to orchestrate these invocations with AWS Step Functions. What should the Developer create to define the workflow and coordinate the function executions?

  • ❏ A. Amazon EventBridge

  • ❏ B. Invoke StartExecution to begin a workflow run

  • ❏ C. Define a Step Functions state machine using Amazon States Language

  • ❏ D. Deploy the orchestration with an AWS CloudFormation YAML template

The engineering team at Solstice Shipping operates a hybrid environment with about 80 on-premises Linux and Windows servers and a set of Amazon EC2 instances. They need to gather OS-level metrics like CPU, memory, disk, and network from all machines and send them to a single view in Amazon CloudWatch with minimal custom effort. What is the most efficient way to achieve this?

  • ❏ A. Install AWS Distro for OpenTelemetry on all servers and export metrics to CloudWatch

  • ❏ B. Deploy the CloudWatch agent to both the on-premises servers and the EC2 instances

  • ❏ C. Enable CloudWatch detailed monitoring for both EC2 instances and on-premises servers

  • ❏ D. Use built-in EC2 metrics in CloudWatch and push on-premises metrics with AWS CLI put-metric-data scripts

NovaTrail Labs has opened a new AWS account and is setting up its first IAM users and permission policies for a small engineering team. Which approaches align with AWS recommended practices for managing user permissions? (Choose 2)

  • ❏ A. Always prefer customer managed policies over AWS managed policies

  • ❏ B. Assign permissions by adding users to IAM groups

  • ❏ C. Share a common IAM user among developers to reduce management overhead

  • ❏ D. Define reusable customer managed policies instead of inline policies attached to a single identity

  • ❏ E. Use AWS Organizations service control policies to grant permissions to individual users

A developer at Orion Analytics created an AWS Lambda function that reads items from an Amazon DynamoDB table named ReportsV3. The team now needs a simple public HTTPS endpoint that accepts HTTP GET requests and forwards the full request context to the function while keeping operational overhead and cost low. Which approach should the developer choose?

  • ❏ A. Amazon Cognito User Pool with Lambda triggers

  • ❏ B. Create an API Gateway API using a POST method

  • ❏ C. Configure an Application Load Balancer with the Lambda function as a target

  • ❏ D. Create an Amazon API Gateway API with Lambda proxy integration

A mid-sized fintech startup named Northwind Capital uses AWS CodePipeline to deliver updates to several Elastic Beanstalk environments. After almost two years of steady releases, the application is nearing the service’s cap on stored application versions, blocking registration of new builds. What is the best way to automatically clear out older, unused versions so that future deployments can proceed?

  • ❏ A. AWS Lambda

  • ❏ B. Elastic Beanstalk application version lifecycle policy

  • ❏ C. Amazon S3 Lifecycle rules

  • ❏ D. Elastic Beanstalk worker environment

A fintech startup named NorthPeak Analytics runs a scheduled agent on an Amazon EC2 instance that aggregates roughly 120 GB of files each day from three Amazon S3 buckets. The data science group wants to issue spur-of-the-moment SQL queries against those files without ingesting them into a database. To keep operations minimal and pay only per query, which AWS service should be used to query the data directly in S3?

  • ❏ A. Amazon EMR

  • ❏ B. Amazon Athena

  • ❏ C. AWS Step Functions

  • ❏ D. Amazon Redshift Spectrum

A travel booking startup runs its front end on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. During peak promotions the group scales to about 80 instances, and user sessions must persist for roughly 45 minutes and be available to any instance even as nodes are replaced. Where should the developer keep the session state so requests can be served by any instance?

  • ❏ A. Store session data on ephemeral instance store volumes

  • ❏ B. Store session data in an Amazon ElastiCache for Redis cluster

  • ❏ C. Store session data on the instance root filesystem

  • ❏ D. Store session data on a shared Amazon EBS volume attached to multiple instances

An engineer at Aurora Analytics is troubleshooting an AWS Lambda function deployed with the AWS CDK. The function runs without throwing exceptions, but nothing appears in Amazon CloudWatch Logs and no log group or log stream exists for the function. The code includes logging statements. The function uses an IAM execution role that trusts the Lambda service principal but has no permission policies attached, and the function has no resource-based policy. What change should be made to enable log creation in CloudWatch Logs?

  • ❏ A. Attach a resource-based policy to the function that grants logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents

  • ❏ B. Attach the AWSLambdaBasicExecutionRole managed policy to the Lambda execution role

  • ❏ C. Attach a resource-based policy to the function that grants only logs:PutLogEvents

  • ❏ D. Attach the CloudWatchLambdaInsightsExecutionRolePolicy managed policy to the execution role

A fashion marketplace’s engineering team is preparing for a 36-hour summer flash sale. The product lead requires an Amazon ElastiCache strategy that can handle sudden surges while ensuring product prices and descriptions remain fully consistent with the source database at all times. Which approach should they implement to keep the cache synchronized with the backend during updates?

  • ❏ A. Write to the cache first and asynchronously apply the change to the database

  • ❏ B. Commit to the database and rely on the item TTL to refresh the cache later

  • ❏ C. Commit to the database, then explicitly invalidate the affected cache keys

  • ❏ D. Amazon CloudFront

A telemedicine startup, HelixCare, runs an application on Amazon EC2 that produces thousands of tiny JSON files around 2 KB each containing sensitive patient data. The files are written to a vendor supplied network attached storage system that does not integrate with AWS services. The team wants to use AWS KMS in the safest way that keeps key material within KMS whenever possible. What should they do?

  • ❏ A. Generate a data key with a customer managed KMS key and use that data key to envelope encrypt each file

  • ❏ B. Encrypt the files directly using an AWS managed KMS key

  • ❏ C. Encrypt each file directly by calling AWS KMS Encrypt with a customer managed KMS key

  • ❏ D. Use the AWS Encryption SDK with KMS to generate data keys and encrypt the files

A fintech startup runs dozens of Dockerized microservices and plans to move them to Amazon ECS. Traffic is spiky, and the finance team wants costs to accrue only while individual tasks are actually running rather than paying for idle servers. Which choice best meets these goals?

  • ❏ A. An Amazon ECS service with Auto Scaling

  • ❏ B. Amazon ECS using the Fargate launch type

  • ❏ C. Amazon ECS with the EC2 launch type

  • ❏ D. Amazon ECS with EC2 Spot capacity providers

A media startup has deployed an Application Load Balancer in front of several Amazon EC2 instances for its StreamBox app. The target group now shows every instance as unhealthy, but browsing directly to an instance’s public IP on port 8080 successfully loads the site. Which issues could explain why the load balancer continues to mark these targets as unhealthy? (Choose 2)

  • ❏ A. Elastic IP addresses must be attached to the EC2 instances when used behind an Application Load Balancer

  • ❏ B. The target group health check path or port is not aligned with the application’s actual health endpoint

  • ❏ C. Cross-zone load balancing is disabled on the load balancer

  • ❏ D. The instance security group does not allow inbound traffic from the load balancer security group on required ports

  • ❏ E. The EBS volumes on the instances were mounted incorrectly

AWS Developer Certification Exam Simulator Answers

Northwind Press, a digital publisher, wants a serverless way to roll out static front-end sites through preview and production stages. Repositories are spread across multiple Git providers, and deployments should start automatically when changes are merged into designated branches. All traffic and integrations must use HTTPS. Which approach will minimize ongoing operational effort?

  • ✓ C. Use AWS Amplify Hosting, connect the target Git branches to environments, and let merges trigger automated HTTPS deployments

Use AWS Amplify Hosting, connect the target Git branches to environments, and let merges trigger automated HTTPS deployments is correct because it is a serverless service designed for static front ends and it natively supports branch-based CI/CD from multiple Git providers.

AWS Amplify Hosting automatically builds and deploys when designated branches are merged and it provisions a CDN and TLS certificates so traffic and integrations use HTTPS out of the box. Amplify Hosting also provides preview environments for pull requests and abstracts the underlying pipeline and infrastructure so ongoing operational effort is minimal.

Host in Amazon S3 and use AWS CodePipeline with AWS CodeBuild to deploy on branch merges, fronted by Amazon CloudFront for HTTPS can meet the requirements but it requires you to design and maintain pipelines, buildspecs, IAM policies, artifact handling, and cache invalidations which increases operational overhead compared with a hosted service.

Use AWS Elastic Beanstalk with AWS CodeStar to manage environments and deployments is not a good fit because it introduces managed compute instances and environment orchestration that are unnecessary for static sites and it does not match the serverless preference as cleanly.

Run separate Amazon EC2 instances per environment and automate releases with AWS CodeDeploy tied to the repositories is the least suitable option because it is not serverless and it adds instance lifecycle, patching, scaling, and operational management which greatly increases ongoing effort.

For questions that emphasize lowest operational overhead for static hosting with branch-triggered deployments choose AWS Amplify Hosting because it bundles CI/CD, previews, CDN, and TLS into a managed, serverless workflow.

A streaming analytics team at Vega Studios is building a serverless consumer that processes records from an Amazon Kinesis Data Streams stream using AWS Lambda. The function is CPU constrained during batch processing and the team wants to increase per-invocation compute capacity without changing code while keeping costs manageable. What should they do to increase the CPU available to the function?

  • ✓ D. Allocate more memory to the function

The correct option is Allocate more memory to the function. This setting increases the CPU and other resources available to each Lambda invocation because AWS allocates CPU proportionally to the configured memory.

Increasing memory for the function raises the vCPU share and I/O bandwidth available to compute-bound batches and it can reduce overall execution time without requiring any code changes. This approach is the simplest way to increase per-invocation compute capacity for a Lambda that processes Amazon Kinesis Data Streams records while keeping the architecture serverless.
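
As a minimal sketch of that configuration change with the AWS SDK for Python (the function name is a hypothetical placeholder), raising the memory setting is a single API call:

    import boto3

    lambda_client = boto3.client("lambda")

    # Raise the memory setting. Lambda allocates CPU in proportion to memory,
    # so this also increases the compute available to each invocation
    # without any change to the function code.
    lambda_client.update_function_configuration(
        FunctionName="stream-batch-processor",  # hypothetical function name
        MemorySize=3008,                        # in MB, configurable up to 10240
    )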

Configure the function to run on unreserved account concurrency only changes which capacity pool serves the function and does not change the CPU available to each invocation.

Increase the function’s concurrent executions limit affects how many executions can run in parallel and not how much CPU a single execution receives.

Use Lambda@Edge is intended for executing code at CloudFront edge locations for content delivery and is not a solution to increase CPU for a regional Lambda that processes Kinesis records.

When asked how to get more per-invocation CPU for a Lambda, think increase the memory setting, because CPU scales with memory and you do not need to change the code.

A travel reservations startup is overhauling its release process and moving away from a rigid waterfall approach. The team now requires all services to follow CI/CD best practices and to be containerized with Docker, with images stored in Amazon ECR and published via AWS CodePipeline and AWS CodeBuild. During a pipeline run, the final push to the registry fails and reports an authorization error. What is the most probable cause?

  • ✓ B. The CodeBuild service role lacks permissions to obtain an ECR authorization token and push images

The CodeBuild service role lacks permissions to obtain an ECR authorization token and push images is the most likely cause because the pipeline reports an authorization failure during the final push to Amazon ECR.

The build project uses its service role to authenticate with Amazon ECR and to perform repository operations. If the role lacks the necessary permissions, such as ecr:GetAuthorizationToken together with push-related actions like ecr:PutImage, ecr:InitiateLayerUpload, ecr:UploadLayerPart, and ecr:CompleteLayerUpload, the docker push fails with an authorization error rather than a network timeout.
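
As an illustrative sketch only (role name, policy name, account ID, and repository are hypothetical), the missing permissions could be attached to the CodeBuild service role like this:

    import json
    import boto3

    iam = boto3.client("iam")

    # ecr:GetAuthorizationToken must be allowed on all resources, while the
    # push and layer-upload actions can be scoped to the target repository.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "ecr:GetAuthorizationToken", "Resource": "*"},
            {
                "Effect": "Allow",
                "Action": [
                    "ecr:BatchCheckLayerAvailability",
                    "ecr:InitiateLayerUpload",
                    "ecr:UploadLayerPart",
                    "ecr:CompleteLayerUpload",
                    "ecr:PutImage",
                ],
                "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/app-images",
            },
        ],
    }

    iam.put_role_policy(
        RoleName="codebuild-service-role",  # hypothetical role name
        PolicyName="AllowEcrPush",
        PolicyDocument=json.dumps(policy),
    )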

Security group rules prevent CodeBuild from reaching Amazon ECR is unlikely because an authorization error points to credential or IAM policy problems and not to connectivity failures which usually show as connection timeouts or refusals.

The ECS cluster instances need extra configuration added to /etc/ecs/ecs.config is unrelated because the failure occurs in the build stage when publishing the image to ECR and not on the ECS runtime hosts.

The build environment’s VPC is missing Amazon ECR interface VPC endpoints would manifest as network reachability errors when builds run in private subnets without outbound access and not as a clear authorization error tied to missing IAM permissions.

When an ECR push fails with an authorization error check the CodeBuild service role for required ECR permissions such as ecr:GetAuthorizationToken and repository write actions before investigating network settings.

NovaArcade is building a gaming analytics feature on Amazon DynamoDB. The table uses user_id as the partition key and game_name as the sort key, and each item stores points and points_recorded_at. The team must power a leaderboard that returns the highest scoring players (user_id) per game_name with the lowest read cost and latency. What is the most efficient way to fetch these results?

  • ✓ C. Create a global secondary index with partition key game_name and sort key points, then query it for each game_name with ScanIndexForward set to false

Create a global secondary index with partition key game_name and sort key points, then query it for each game_name with ScanIndexForward set to false is the correct option because it allows you to group items by game and return them ordered by score so you can fetch the top players with a single efficient Query.

The GSI reshapes the access pattern so you never need to scan the whole table, and Query supports ScanIndexForward set to false to return the highest points first. You can also apply Limit to the Query to fetch only the leaders, which reduces read capacity usage and lowers latency. This native index-based approach is simpler and more efficient than client-side sorting or maintaining separate derived tables.
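
A minimal boto3 sketch of that read pattern, assuming a GSI named game-points-index and a table named GameScores (both names are hypothetical):

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("GameScores")  # hypothetical table name

    # Query the GSI for one game, highest points first, and read only the top 10.
    response = table.query(
        IndexName="game-points-index",
        KeyConditionExpression=Key("game_name").eq("StarDash"),
        ScanIndexForward=False,  # descending order on the sort key (points)
        Limit=10,                # fetch only the leaders to keep read cost low
    )

    for item in response["Items"]:
        print(item["user_id"], item["points"])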

Use a DynamoDB Query on the base table with key attributes user_id and game_name and sort the returned items by points in the application is inefficient because Query is scoped to a single partition key value (user_id) and it cannot order results by a non-key attribute. That pattern would require many queries or a scan plus client-side sorting, which increases latency and read cost.

Use DynamoDB Streams with AWS Lambda to maintain a separate Leaderboards table keyed by game_name and sorted by points, then query that table can work but it adds operational overhead and extra cost and it introduces eventual consistency and failure handling for the stream and Lambda pipeline. The GSI provides the required pattern natively with less complexity.

Create a local secondary index with primary key game_name and sort key points and query by game_name is not valid because local secondary indexes must use the same partition key as the base table which is user_id in this design. LSIs also must be defined at table creation which makes them less flexible for changing access patterns.

When you must return ranked results think partition by category and sort by the metric and prefer a GSI with ScanIndexForward set to false and a Limit to return the top N efficiently.

A Berlin-based travel startup is developing a cross-platform mobile app that alerts users to visible meteor showers and ISS flyovers over the next 30 days. The app signs users in with a social identity provider using the provider’s SDK and then sends the returned OAuth 2.0 or OpenID Connect token to Amazon Cognito Identity Pools. After the user is authenticated, what does Cognito return that the client uses to obtain temporary, limited-privilege AWS credentials?

  • ✓ C. Amazon Cognito identity ID

The correct option is Amazon Cognito identity ID. When your app exchanges the social provider token with Amazon Cognito Identity Pools, Cognito creates or looks up a unique identity and returns an Amazon Cognito identity ID that the client uses to obtain temporary, limited-privilege AWS credentials.

Cognito establishes the identity and the SDK then uses that identity identifier together with the provider token to request credentials. Cognito maps the identity to an IAM role and then invokes AWS Security Token Service to issue short lived credentials that are scoped by that role.
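
A rough sketch of that exchange with boto3 follows, where the identity pool ID, region, and provider token are placeholders:

    import boto3

    cognito = boto3.client("cognito-identity", region_name="eu-central-1")
    provider_token = "<OAuth 2.0 or OIDC token returned by the social provider SDK>"

    # Step 1: exchange the provider token for a Cognito identity ID.
    identity = cognito.get_id(
        IdentityPoolId="eu-central-1:11111111-2222-3333-4444-555555555555",
        Logins={"accounts.google.com": provider_token},
    )

    # Step 2: use the identity ID plus the token to obtain temporary,
    # limited-privilege credentials that STS issues behind the scenes.
    creds = cognito.get_credentials_for_identity(
        IdentityId=identity["IdentityId"],
        Logins={"accounts.google.com": provider_token},
    )

    print(creds["Credentials"]["AccessKeyId"], creds["Credentials"]["Expiration"])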

Cognito key pair is incorrect because Cognito does not return cryptographic key pairs to represent users or to fetch AWS credentials.

AWS Security Token Service is incorrect as the direct answer because STS is the backend service that issues the temporary credentials. The client does not receive STS itself after the identity provider exchange and instead receives a Cognito identity identifier first.

Amazon Cognito SDK is incorrect because the SDK is a client library used to call Cognito APIs and it is not the identifier or token returned by Cognito after authentication.

Remember that Identity Pools return an identity ID which is then used with role mappings to obtain temporary credentials via STS.

Over the next 9 months, Orion Travel Tech plans to rebuild its legacy platform using Node.js and GraphQL. The team needs full request tracing with a visual service map of dependencies. The application will run on an EC2 Auto Scaling group of Amazon Linux 2 instances behind an Application Load Balancer, and trace data must flow to AWS X-Ray. What is the most appropriate approach to meet these needs?

  • ✓ C. Add a user data script that installs and starts the AWS X-Ray daemon on each EC2 instance in the Auto Scaling group

The correct choice is Add a user data script that installs and starts the AWS X-Ray daemon on each EC2 instance in the Auto Scaling group.

AWS X-Ray SDKs send trace segments to a local process that aggregates and uploads them rather than calling the service for every request. On EC2 the recommended approach is to run the X-Ray daemon, which listens on UDP port 2000, batches segments, handles retries, and forwards data to the X-Ray service. Using user data ensures every instance launched by the Auto Scaling group installs and starts the daemon and that the instance IAM role can provide the permissions needed to upload traces.

Refactor the Node.js service to call the PutTraceSegments API and push segments straight to AWS X-Ray is technically possible but it increases operational burden and can harm application performance because it bypasses the daemon’s batching and backoff behavior.

Enable AWS WAF on the Application Load Balancer to inspect and record all web requests improves security and can log requests but it does not instrument application code and it will not produce distributed traces or a visual service map in X-Ray.

Turn on AWS X-Ray tracing in the EC2 Auto Scaling launch template is not viable because there is no built-in toggle to enable X-Ray at the launch template level and you must deploy the daemon or an equivalent collector such as the AWS Distro for OpenTelemetry.

When tracing EC2 workloads install the X-Ray daemon via user data or an AMI so traces are batched and retried locally before they are sent to the service.

An operations dashboard for Helios Tickets streams click and event data into Amazon Kinesis Data Streams. During flash sales, the producers are not fully using the 24 available shards, leaving write capacity idle. Which change will help the producers better utilize the shards and increase write throughput to the stream?

  • ✓ C. Use the Kinesis Producer Library to aggregate and batch records before calling PutRecords

Use the Kinesis Producer Library to aggregate and batch records before calling PutRecords is the correct choice because it enables producers to better utilize the stream shards and increase write throughput.

Kinesis Producer Library aggregates many small user records into larger Kinesis records and batches them into PutRecords calls, which reduces per-record API overhead and packs far more user records within each shard’s per-second record and throughput limits. The library also handles efficient partitioning and retries so producers can drive higher sustained throughput across the available shards.
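
The KPL itself is a Java library, so the sketch below only illustrates the batching behavior it automates, using boto3 and a hypothetical stream name:

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    def put_click_events(events):
        """Send up to 500 small click events in one PutRecords call
        instead of issuing one API call per event."""
        records = [
            {"Data": json.dumps(event).encode(), "PartitionKey": event["session_id"]}
            for event in events[:500]  # PutRecords accepts at most 500 records per call
        ]
        response = kinesis.put_records(StreamName="helios-clickstream", Records=records)
        # A production producer such as the KPL also retries any failed records.
        return response["FailedRecordCount"]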

Increase the number of shards with UpdateShardCount is unnecessary when shards have headroom because adding shards increases capacity but does not address inefficient small writes by producers. Scaling shards is appropriate when shards are actually saturated rather than when producers are not packing records efficiently.

Insert Amazon SQS between producers and the stream to buffer writes only decouples producers and consumers and adds buffering. SQS by itself does not cause producers to aggregate records or improve per-shard utilization unless you also change producer behavior to batch before calling Kinesis APIs.

Call DynamoDB BatchWriteItem to send multiple records at once is irrelevant because the DynamoDB BatchWriteItem API writes to DynamoDB and cannot be used to write to Kinesis Data Streams.

When shards report unused capacity prefer aggregation with the Kinesis Producer Library and batched PutRecords calls to raise per shard throughput and lower API overhead.

Your team is building a subscription portal for BrightPath Media using AWS Lambda, AWS App Runner, and Amazon DynamoDB. Whenever a new item is inserted into the Members table, a Lambda function must automatically send a personalized welcome email to the member. What is the most appropriate way to have the table changes invoke the function?

  • ✓ B. Enable DynamoDB Streams on the Members table and configure it as the event source for the Lambda function

The correct option is Enable DynamoDB Streams on the Members table and configure it as the event source for the Lambda function. This approach captures per item changes and invokes the Lambda when new INSERT records appear.

DynamoDB Streams captures item-level modifications in near real time and an AWS Lambda event source mapping can poll the stream and invoke the function for new records. This lets you send personalized welcome emails immediately after a new member item is written without custom polling or extra services.
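
A hedged boto3 sketch of the wiring, assuming the Members table exists and using a hypothetical function name:

    import boto3

    dynamodb = boto3.client("dynamodb")
    lambda_client = boto3.client("lambda")

    # Enable the stream so item-level changes, including INSERT events, are captured.
    result = dynamodb.update_table(
        TableName="Members",
        StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_IMAGE"},
    )

    # Create an event source mapping so Lambda polls the stream and invokes the
    # function for new records; the handler can then act only on INSERT events.
    lambda_client.create_event_source_mapping(
        EventSourceArn=result["TableDescription"]["LatestStreamArn"],
        FunctionName="send-welcome-email",  # hypothetical function name
        StartingPosition="LATEST",
        BatchSize=10,
    )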

Create an Amazon EventBridge rule for new item inserts in the table and target the Lambda function is incorrect because EventBridge does not natively emit every individual DynamoDB item mutation so it cannot trigger Lambda for each insert by itself. You would need an intermediary to translate item level changes into events which adds unnecessary complexity.

Use Amazon Kinesis Data Streams to capture new table data and subscribe the Lambda function is incorrect since Kinesis is not automatically populated by DynamoDB changes. You would have to build or run a connector to copy changes into Kinesis which is unnecessary when Streams are available.

Enable DynamoDB Transactions and configure them as the event source for the function is incorrect because transactions provide ACID guarantees for multiple writes and they do not produce events or act as a Lambda trigger. Transactions do not replace Streams for event driven processing.

Remember that DynamoDB Streams plus a Lambda event source mapping gives direct item-level triggers for near real-time processing. Look for solutions that emit per-item events when the exam asks for immediate reactions to table inserts.

A developer at VerdantPay is building a serverless workflow that processes confidential payment data. The AWS Lambda function writes intermediate files to its /tmp directory, and the company requires those files to be encrypted while at rest. What should the developer do to meet this requirement?

  • ✓ B. Configure the Lambda function to use an AWS KMS customer managed key, generate a data key with KMS, and encrypt data before writing to /tmp

Configure the Lambda function to use an AWS KMS customer managed key, generate a data key with KMS, and encrypt data before writing to /tmp is correct because it enforces application layer encryption for the function’s ephemeral storage and ensures sensitive payment files are encrypted at rest before they are written to the local filesystem.

The secure pattern uses envelope encryption with AWS KMS data keys. With Configure the Lambda function to use an AWS KMS customer managed key, generate a data key with KMS, and encrypt data before writing to /tmp the function calls KMS GenerateDataKey to receive a plaintext data key and a ciphertext blob. The function uses the plaintext key to encrypt the intermediate files in /tmp and then discards the plaintext key. The ciphertext blob can be stored if you need to decrypt later by calling KMS.
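
A minimal envelope-encryption sketch follows. The key alias and file name are hypothetical, and the Fernet cipher from the cryptography package stands in for whichever symmetric cipher the team pairs with the data key:

    import base64
    import boto3
    from cryptography.fernet import Fernet

    kms = boto3.client("kms")

    def write_encrypted(path, payload: bytes):
        # Request a 256-bit data key protected by the customer managed key.
        resp = kms.generate_data_key(KeyId="alias/payments-tmp", KeySpec="AES_256")
        cipher = Fernet(base64.urlsafe_b64encode(resp["Plaintext"]))
        with open(path, "wb") as f:
            # Store the encrypted data key beside the ciphertext so it can be
            # unwrapped later with kms.decrypt; the plaintext key is never persisted.
            f.write(base64.b64encode(resp["CiphertextBlob"]) + b"\n")
            f.write(cipher.encrypt(payload))

    write_encrypted("/tmp/batch-0001.enc", b'{"amount": 125.40, "currency": "EUR"}')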

Attach the Lambda function to a VPC and mount an encrypted Amazon EBS volume to /tmp is wrong because Lambda cannot mount Amazon EBS volumes into the execution environment. VPC attachment manages networking only and does not provide block device mounts for /tmp.

Enable default encryption on an Amazon S3 bucket with a KMS customer managed key and mount the bucket to /tmp is incorrect because you cannot mount an S3 bucket into the Lambda runtime as a local /tmp directory. S3 encryption protects objects stored in S3 but it does not apply to files written to the function’s ephemeral storage.

Mount an encrypted Amazon EFS access point and rely on EFS encryption at rest for files written to /tmp is not applicable because EFS is a separate filesystem that must be mounted to a chosen path and it does not replace the Lambda instance’s local /tmp directory. Relying on EFS encryption at rest does not by itself encrypt files placed in the ephemeral /tmp storage.

Remember that Lambda /tmp is not encrypted by default so use KMS data keys and application layer encryption to protect sensitive files before they are written.

You are creating a YAML AWS CloudFormation stack for BlueRiver Analytics that provisions an Amazon EC2 instance and a single Amazon RDS DB instance. After the stack completes, you want to expose the database connection endpoint in the Outputs so other stacks can consume it. Which intrinsic function should be used to fetch that endpoint value?

  • ✓ C. !GetAtt

The correct choice is !GetAtt because it retrieves an attribute from a resource such as an Amazon RDS DB instance’s Endpoint which you can place in the Outputs for other stacks to consume.

!GetAtt returns named attributes that CloudFormation populates after the resource is created, and Endpoint.Address is one of those attributes for an RDS DB instance (Aurora DB clusters expose a similar Endpoint.Address attribute). You can expose that endpoint in the Outputs section and then export or import it so other stacks can consume the connection endpoint.
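
A small Outputs sketch in the template, assuming the DB resource uses the logical ID AppDatabase and an illustrative export name:

    Outputs:
      DatabaseEndpoint:
        Description: Connection endpoint of the RDS DB instance
        Value: !GetAtt AppDatabase.Endpoint.Address
        Export:
          Name: blueriver-db-endpoint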

!Sub is for string substitution and interpolation. It can embed an attribute reference using the ${Resource.Attribute} syntax, but that is shorthand for the same lookup !GetAtt performs, so !GetAtt remains the intrinsic designed to return a resource attribute such as the endpoint.

!FindInMap reads static values from the template Mappings section and does not access attributes created at resource creation time. It is for template lookups and not for fetching resource endpoints.

!Ref returns the resource’s default identifier, and for an RDS DB instance it yields the DB instance identifier rather than the connection endpoint attribute that consumers need.

When you need a runtime attribute such as an endpoint use !GetAtt. Use !Ref for IDs and use !Sub to compose strings from values you obtain with other intrinsics.

A retail analytics startup runs part of its platform on Amazon EC2 and the rest on servers in a private colocation rack. The on-premises hosts are vital to the service, and the team needs to aggregate host metrics and application logs in Amazon CloudWatch alongside EC2 data. What should the developer implement to send telemetry from the on-premises servers to CloudWatch?

  • ✓ B. Install the CloudWatch agent on the data center servers and configure an IAM user’s access key and secret key that allow publishing metrics and logs to CloudWatch

The correct option is Install the CloudWatch agent on the data center servers and configure an IAM user’s access key and secret key that allow publishing metrics and logs to CloudWatch. This approach lets you stream on-premises host metrics and application logs into CloudWatch so they are aggregated with your EC2 telemetry.

This option is correct because on-premises servers cannot use EC2 instance profiles, so the CloudWatch agent must authenticate with credentials that are not tied to an EC2 instance. By deploying Install the CloudWatch agent on the data center servers and configure an IAM user’s access key and secret key that allow publishing metrics and logs to CloudWatch you grant a specific IAM identity the CloudWatch and CloudWatch Logs permissions and the agent provides built-in collection, buffering, and retry behavior for reliable delivery.

Build a scheduled script that gathers metrics and log files and uploads them to CloudWatch with the AWS CLI is inferior because it requires custom maintenance and it lacks the integrated collection features and resilience that the CloudWatch agent provides.

Register the servers as AWS Systems Manager hybrid managed instances so the CloudWatch agent can assume the Systems Manager role to send data to CloudWatch is incorrect because registering as a hybrid managed instance does not remove the need for valid credentials for CloudWatch API calls and the CloudWatch agent still requires an authenticated IAM identity to publish data.

Install the CloudWatch agent on the data center servers and attach an IAM role with CloudWatch permissions to those machines is not possible because IAM roles and instance profiles can only be attached to AWS resources such as EC2 instances and cannot be applied to physical on premise servers.

Remember that on-premises servers cannot use EC2 instance profiles so the CloudWatch agent must use IAM user access keys with least privilege to publish metrics and logs.

A travel startup named Skylark Journeys plans to roll out refreshed REST endpoints for its iOS and Android app that run behind Amazon API Gateway. The team wants to begin by sending roughly 15% of calls to the new version while the rest continue to use the current release, and they prefer the simplest approach that stays within API Gateway. How can they expose the new version to only a subset of clients through API Gateway?

  • ✓ B. Enable a stage canary in Amazon API Gateway and use canarySettings to shift a small share of traffic to the new deployment

Enable a stage canary in Amazon API Gateway and use canarySettings to shift a small share of traffic to the new deployment is correct because it lets you route a defined percentage of stage traffic to the updated endpoints while keeping the majority on the current release for observation and quick rollback.

The stage canary works at the API Gateway stage level and uses canarySettings to control fractional traffic so you can send roughly 15 percent of calls to the new deployment and monitor CloudWatch metrics and logs before promoting the change. The canary approach keeps routing and rollback within API Gateway so client apps do not need immediate configuration changes.
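
A hedged boto3 sketch, where the REST API ID and stage name are placeholders, deploys the refreshed endpoints as a canary that takes roughly 15 percent of stage traffic:

    import boto3

    apigateway = boto3.client("apigateway")

    # Deploy the latest API changes to the prod stage as a canary.
    # canarySettings sends 15% of requests to this new deployment while
    # the remaining 85% continue to hit the current release.
    apigateway.create_deployment(
        restApiId="a1b2c3d4e5",  # placeholder REST API ID
        stageName="prod",
        description="Refreshed v2 endpoints canary",
        canarySettings={
            "percentTraffic": 15.0,
            "useStageCache": False,
        },
    )

Once the metrics look healthy, the canary can be promoted so the stage serves the new deployment for all traffic, or removed to roll back instantly.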

Configure an Amazon Route 53 weighted routing policy to send a percentage of requests to a second API Gateway domain is not the simplest within API Gateway because DNS weighted routing requires separate domains or stage mappings and is subject to DNS caching which reduces precise control at the stage level.

Launch the new API in a separate VPC and use Amazon CloudFront to split traffic between the two origins introduces extra components and operational overhead because CloudFront is a CDN and origin splitting is heavier than using API Gateway native percentage routing for a straightforward canary rollout.

Use AWS CodeDeploy traffic shifting to gradually move calls to the updated API targets compute deployments and does not directly manage stage-level request routing in API Gateway, so it is an indirect and more complex choice for this specific requirement.

Within API Gateway look for stage-level canarySettings when a question asks for a gradual rollout inside the service because that is the simplest, built-in way to shift a small percentage of traffic.

A media analytics startup runs an Amazon EC2 Auto Scaling group named api-fleet with a maximum size of 5 and a current size of 4. A scale-out policy is configured to add 4 instances when its CloudWatch alarm is in ALARM. When this policy is executed, what will happen?

  • ✓ B. Amazon EC2 Auto Scaling adds only 1 instance to the group

The correct option is Amazon EC2 Auto Scaling adds only 1 instance to the group. The Auto Scaling group has a maximum size of 5 and a current size of 4 so the policy cannot increase the group beyond its configured maximum.

The scale-out policy would compute a desired capacity of 8 when adding 4 instances to the current 4 but Amazon EC2 Auto Scaling enforces the group’s maximum and clamps the desired capacity to 5. As a result the group launches only one additional instance to reach the maximum.
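
The clamping arithmetic is simple enough to show directly with the values from the scenario:

    current_capacity = 4
    scaling_adjustment = 4   # the scale-out policy adds 4 instances
    max_size = 5

    # Auto Scaling clamps the new desired capacity to the group's maximum size.
    new_desired = min(current_capacity + scaling_adjustment, max_size)
    instances_launched = new_desired - current_capacity

    print(new_desired, instances_launched)  # 5 1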

Amazon EC2 Auto Scaling adds 4 instances to the group is incorrect because Auto Scaling never launches more instances than the group’s configured maximum. The service applies the max limit before provisioning new instances.

Amazon EC2 Auto Scaling launches 4 instances and then scales in 3 shortly afterward is incorrect because Auto Scaling does not overprovision beyond the maximum and then immediately terminate excess instances. The limit is enforced up front rather than by launching and then removing instances.

Amazon EC2 Auto Scaling adds 4 instances across multiple Availability Zones because the maximum size applies per Availability Zone is incorrect because the maximum size in this Auto Scaling group is enforced at the group level and not as a per Availability Zone maximum in this scenario.

When a scaling policy runs, compute the target desired capacity and then remember that Auto Scaling will clamp it to the group's min and max to determine how many instances actually launch.

A developer at Northwind Bikes configured an AWS CodeBuild project in the console to use a custom Docker image stored in Amazon ECR. The buildspec file exists and the build starts, but the environment fails when trying to fetch the container image before any build commands run. What is the most likely reason for the failure?

  • ✓ C. The CodeBuild service role lacks permissions to authenticate and pull from Amazon ECR

The CodeBuild service role lacks permissions to authenticate and pull from Amazon ECR is the most likely cause of the failure. The build environment is fetched before the buildspec runs and CodeBuild uses its service role to call ECR APIs to obtain an authorization token and to retrieve image layers.

The service role must grant permissions such as ecr:GetAuthorizationToken, ecr:BatchGetImage, and ecr:GetDownloadUrlForLayer, and the role must be scoped to cover the target repository and Region. Without those permissions CodeBuild cannot authenticate or download the image, and the image pull fails before any build commands execute.
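
As an illustrative sketch, a policy like the following could be attached to the CodeBuild service role with the AWS SDK for Python. The role name, account ID, Region, and repository name are placeholders, and the layer-availability action is included alongside the pull actions.

```python
import json
import boto3

iam = boto3.client("iam")

ecr_pull_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ecr:GetAuthorizationToken"],
            "Resource": "*",  # GetAuthorizationToken does not support resource scoping
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:BatchGetImage",
                "ecr:GetDownloadUrlForLayer",
            ],
            "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/build-image",
        },
    ],
}

iam.put_role_policy(
    RoleName="codebuild-service-role",          # placeholder role name
    PolicyName="AllowPullCustomBuildImage",
    PolicyDocument=json.dumps(ecr_pull_policy),
)
```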

AWS CodeBuild does not support custom Docker images is incorrect because CodeBuild supports custom images hosted in Amazon ECR, Amazon ECR Public and Docker Hub. Custom environment images are a standard CodeBuild feature and are commonly used.

The image in Amazon ECR has no tags is unlikely to be the root cause because images can be referenced by digest as well as by tag. A missing tag by itself does not prevent a pull when the correct reference is provided.

The build environment is not configured for privileged mode is not relevant in this situation because privileged mode is required when the build needs to run Docker or build images inside the build. Privileged mode does not control the ability to pull the environment image from ECR.

When a CodeBuild job cannot pull a custom ECR image first check the service role for ECR permissions and confirm the role covers the repository and region. Use least privilege while ensuring the necessary ECR actions are allowed.

Orion Outfitters, a global distributor, needs users from partner supplier companies who sign in with their own SAML or OIDC identity providers to add and modify items in two DynamoDB tables named PartnerOrders and SupplierCatalog in Orion’s AWS account without creating individual IAM users for them. Which approach should the developer implement to securely grant these partner users scoped access to those tables?

  • ✓ C. Set up Amazon Cognito Identity Pools to federate supplier IdPs and issue temporary credentials for DynamoDB operations

Set up Amazon Cognito Identity Pools to federate supplier IdPs and issue temporary credentials for DynamoDB operations is correct because it allows Orion to trust external SAML or OIDC identity providers and to exchange their authentication tokens for short lived AWS credentials that are scoped to allow add and modify actions on the PartnerOrders and SupplierCatalog tables.

This solution uses Amazon Cognito Identity Pools to federate supplier IdPs and to obtain temporary AWS credentials via the AWS Security Token Service. You assign IAM roles with least privilege policies that restrict access to just the two DynamoDB tables and to only the necessary operations. Temporary credentials scale cleanly and avoid issuing long lived secrets or creating individual IAM principals for each external user.
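
A minimal sketch of the token-for-credentials exchange with the AWS SDK for Python follows. The identity pool ID, provider key, and token are placeholders, and the Logins key depends on how the supplier IdP is configured (an OIDC issuer domain or a SAML provider ARN).

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

logins = {"idp.supplier-example.com": "<token returned by the supplier IdP>"}

identity = cognito.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",
    Logins=logins,
)

creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=logins,
)["Credentials"]

# The temporary credentials map to the identity pool's authenticated role, which
# should allow only the needed actions on PartnerOrders and SupplierCatalog.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
```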

Configure Amazon Cognito User Pools to sign in supplier users and authorize DynamoDB access is incorrect because user pools provide authentication and JWTs for applications and they do not by themselves grant AWS credentials for direct DynamoDB SDK access unless you pair them with an identity pool or proxy requests through a trusted backend.

Create an IAM user for each partner user and attach DynamoDB permissions is incorrect because creating per user IAM users produces long lived credentials that are hard to manage for many external partners and it does not follow federation or least privilege best practices.

AWS IAM Identity Center is incorrect because it is focused on workforce single sign on into AWS accounts and applications and it does not target issuing temporary AWS credentials to external application users for direct client SDK access to DynamoDB.

When external partner users need direct client access to AWS services choose Identity Pools and then scope IAM roles to the exact DynamoDB tables and actions required.

BrightParcel, a logistics startup, runs Auto Scaling worker nodes on Amazon EC2 that poll an Amazon SQS standard queue for tasks. Some messages fail repeatedly during processing, and after 4 attempts the team wants those problematic messages automatically routed to an isolated location for later analysis without deleting them. What should they implement to keep failing messages separate for troubleshooting?

  • ✓ B. Implement a dead-letter queue

Implement a dead-letter queue is the correct option because it lets you automatically move messages that fail processing repeatedly to an isolated location for later analysis.

With Implement a dead-letter queue you attach a redrive policy to the source queue and set a maxReceiveCount so that messages that exceed the retry threshold are moved to a separate queue where they can be inspected and replayed if needed.
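
A brief sketch of attaching a redrive policy with the AWS SDK for Python; the queue URL and dead-letter queue ARN are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")

dlq_arn = "arn:aws:sqs:us-east-1:123456789012:tasks-dlq"  # placeholder DLQ ARN

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/tasks",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "4",  # after 4 failed receives the message moves to the DLQ
        })
    },
)
```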

Enable long polling on the queue improves retrieval efficiency by reducing empty responses but it does not provide any mechanism to quarantine or isolate failing messages for troubleshooting.

Decrease the visibility timeout causes messages to reappear sooner which can increase duplicate processing and contention and it does not separate failing messages for analysis.

Increase the visibility timeout gives more time for a consumer to finish processing a message and it may reduce immediate retries but it still does not route repeatedly failing messages to a separate location for postmortem.

Use a DLQ with a redrive policy and an appropriate maxReceiveCount to quarantine bad messages for debugging rather than relying on visibility timeout tuning.

A travel booking platform runs several Amazon EC2 web servers behind an Application Load Balancer, and during busy periods the instances sustain around 92 percent CPU utilization. The engineering team has confirmed that processing TLS for HTTPS traffic is consuming most of the CPU on the servers. What actions should they take to move the TLS workload off the application instances? (Choose 2)

  • ✓ B. Request or import an SSL/TLS certificate in AWS Certificate Manager and associate it with the ALB

  • ✓ D. Create an HTTPS listener on the ALB that terminates TLS at the load balancer

Request or import an SSL/TLS certificate in AWS Certificate Manager and associate it with the ALB and Create an HTTPS listener on the ALB that terminates TLS at the load balancer are correct because they move TLS processing off the EC2 instances and onto the load balancer.

When you Request or import an SSL/TLS certificate in AWS Certificate Manager and associate it with the ALB the ALB can present the certificate to clients and handle the TLS handshake. When you Create an HTTPS listener on the ALB that terminates TLS at the load balancer the ALB performs decryption and forwards plain HTTP traffic to the targets which reduces CPU usage on the servers.
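
As a sketch with the AWS SDK for Python, an HTTPS listener that terminates TLS at the ALB might be created like this. The load balancer, target group, and certificate ARNs are placeholders, and the security policy name is only an example.

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/abc123",
    Protocol="HTTPS",
    Port=443,
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",   # example security policy
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/1111-2222"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/def456",
    }],
)
```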

Configure an HTTPS listener on the ALB that forwards encrypted traffic to targets without decryption (pass-through) is wrong because an Application Load Balancer cannot pass TLS traffic through without terminating it, and a pass-through design would leave the instances performing the decryption anyway, so CPU would stay high.

Install the ACM certificate directly on each EC2 instance to terminate TLS on the server is wrong because public ACM certificates are not exportable and you cannot install them on instances which means this would not offload TLS work.

Add AWS WAF on the ALB to reduce CPU from TLS is wrong because AWS WAF inspects application layer requests and does not remove the cost of TLS handshakes or decryption from the web servers.

If HTTPS is driving up EC2 CPU think terminate TLS at the ALB and use ACM for the certificate so the load balancer handles cryptography and the instances handle application logic.

You manage multiple Amazon API Gateway REST APIs that invoke AWS Lambda for each release of Orion Fintech’s backend. The company wants to merge these into one API while keeping distinct subdomain endpoints such as alpha.api.orionfin.io, beta.api.orionfin.io, rc.api.orionfin.io, and prod.api.orionfin.io so clients can explicitly target an environment. What should you implement to achieve this in the most maintainable way?

  • ✓ C. Create stages for ALPHA, BETA, RC, and PROD and use stage variables to target the right Lambda or HTTP backend, then map each subdomain to its stage

Create stages for ALPHA, BETA, RC, and PROD and use stage variables to target the right Lambda or HTTP backend, then map each subdomain to its stage is correct because it lets you consolidate all releases under a single API Gateway while preserving distinct hostnames for each environment such as alpha.api.orionfin.io and prod.api.orionfin.io.

With Create stages for ALPHA, BETA, RC, and PROD and use stage variables to target the right Lambda or HTTP backend, then map each subdomain to its stage you assign stage variables that point to Lambda aliases or to different backend endpoints so each stage invokes the correct backend. This approach keeps the API definition identical across environments and simplifies deployments and rollbacks. Custom domain names and base path mappings let you bind each subdomain to its corresponding stage so clients target the intended environment by hostname.
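
A condensed sketch of the stage and mapping setup with the AWS SDK for Python follows. It assumes the custom domain names already exist and that the integration URI references a stage variable such as ${stageVariables.lambdaAlias}; the REST API and deployment IDs are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")
rest_api_id = "a1b2c3d4e5"     # placeholder REST API ID
deployment_id = "dep12345"     # placeholder deployment of the shared API definition

for env in ["ALPHA", "BETA", "RC", "PROD"]:
    # One stage per environment; the stage variable selects the Lambda alias
    # (or backend URL) that the integration references.
    apigw.create_stage(
        restApiId=rest_api_id,
        stageName=env,
        deploymentId=deployment_id,
        variables={"lambdaAlias": env},
    )
    # Bind each environment subdomain to its stage.
    apigw.create_base_path_mapping(
        domainName=f"{env.lower()}.api.orionfin.io",
        restApiId=rest_api_id,
        stage=env,
    )
```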

Modify the Integration Request mapping to dynamically rewrite the backend endpoint for each release based on the hostname is incorrect because integration mapping templates are meant to transform requests and cannot reliably select or switch the integration endpoint per stage or per incoming hostname.

Configure Amazon Route 53 weighted records to direct clients to separate APIs for each environment is incorrect because weighted DNS distributes traffic across endpoints and does not map a hostname to a specific stage within a single API. This would also violate the requirement to consolidate releases behind one API Gateway.

Use Lambda layers to separate environment-specific code is incorrect because layers only provide shared libraries and dependencies and they do not control which function alias or version API Gateway invokes for an environment.

Use stages and stage variables to route to environment specific Lambda aliases or backends and map each environment hostname to the matching stage with a custom domain.

NorthWind Labs, a media analytics startup, exposes a public Amazon API Gateway REST API that backs its web dashboard with Amazon Cognito sign-in. The team is preparing a beta of a major API revision that introduces new resources and breaking changes, and only a small set of internal developers should test it while paying customers continue using the current API uninterrupted. The development team will maintain and iterate on the beta during this period. What is the most operationally efficient way to let the developers invoke the new version without impacting production users?

  • ✓ B. Create a separate API Gateway stage such as beta-v3 that is deployed from the new version and have developers use that stage’s invoke URL

Create a separate API Gateway stage such as beta-v3 that is deployed from the new version and have developers use that stage’s invoke URL is the correct option because it gives developers an isolated invoke URL and deployment that does not affect the production environment.

The beta stage provides a separate deployment lifecycle, logging, and configuration, so the team can iterate on breaking changes while paying customers continue to use the stable production stage. Using a stage is operationally efficient because it reuses the same API configuration and authentication setup while providing a distinct URL and deployment target for testing.

Configure a canary release on the existing production stage and give developers a stage-variable URL to hit is unsuitable because canary routing splits live traffic on the same stage and could route some real customers to the breaking beta.

Issue new API keys in API Gateway and require developers to pass those keys to reach the new version is incorrect because API keys control identification quotas and usage plans and they do not determine which deployment or stage is invoked.

Build a brand-new API Gateway API for the new handlers and tell developers to call that new API would achieve isolation, but it increases operational overhead for deployment, authentication, monitoring, and lifecycle management compared with adding a stage.

Use separate API Gateway stages to isolate beta testing behind a different invoke URL and avoid using canary routing for breaking changes.

A data insights portal for a regional retailer runs on AWS Elastic Beanstalk and stores report data in an Amazon DynamoDB table named AnalyticsEvents. Each request currently performs a full table scan and then filters the results for the user. Adoption is expected to jump over the next 8 weeks, and the table will grow substantially as report requests increase. What should you implement ahead of the growth to keep read performance high while minimizing cost? (Choose 2)

  • ✓ B. Switch to Query requests where possible

  • ✓ D. Lower the page size by setting a smaller Limit value

Switch to Query requests where possible and Lower the page size by setting a smaller Limit value are the correct choices to implement before the expected growth because they reduce how much data each request reads and they contain read capacity usage while keeping costs low.

Switch to Query requests where possible is the primary fix because queries use partition key and optional sort key conditions to target specific items and partitions and they avoid the full table reads that drive high read capacity consumption and slow responses. Queries therefore reduce RCU use and improve latency without adding new services or significant expense.

Lower the page size by setting a smaller Limit value helps when a Scan is still necessary because it limits the number of items returned per request and spreads read throughput over time which lowers the chance of throttling and reduces per request cost. This is a low cost tuning step while you refactor access patterns toward queries.
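
For illustration, here is what a targeted Query with a small page size looks like with the AWS SDK for Python. The key names and values are hypothetical and would need to match the table's actual schema.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("AnalyticsEvents")

# A targeted Query reads only the items under one partition key value.
response = table.query(
    KeyConditionExpression=Key("user_id").eq("user-1234"),  # hypothetical key name
    Limit=50,  # a small page size keeps each request's RCU consumption low
)
items = response["Items"]

# If a Scan is still unavoidable, a small Limit paginates the read the same way.
scan_page = table.scan(Limit=50)
```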

Enable DynamoDB Accelerator (DAX) can reduce latency for some read patterns but it adds ongoing cost and it does not address the underlying inefficiency of full table scans. DAX is most beneficial when you already use Query or GetItem and it is not the best first action here.

Adjust the ScanIndexForward setting to sort query results only changes result order and it does not reduce the amount of data scanned or the read capacity used so it will not resolve the scan driven performance problem.

Increase the table’s write capacity units (WCU) raises write throughput but it does not improve read efficiency or lower read costs. Increasing WCU would add cost without addressing the read bottleneck caused by scans.

Prefer Query over Scan and use a small Limit for large result sets. Fix access patterns first and add caching like DAX only after reads are optimized.

NovaPlay, a small streaming startup, uses Amazon CloudFront to serve static web assets to a global audience. Minutes after uploading new CSS and JavaScript files to the S3 origin, some visitors still receive stale versions from edge locations. The cache behavior currently has an 8-hour TTL, but the team needs the new objects to be delivered right away without causing an outage. What should the developer do to replace the cached files with minimal disruption?

  • ✓ B. Submit a CloudFront invalidation request for the updated object paths

Submit a CloudFront invalidation request for the updated object paths is correct because it evicts the specified files from edge caches so that subsequent requests retrieve the new CSS and JavaScript from the S3 origin with minimal disruption.

Submit a CloudFront invalidation request for the updated object paths lets you target exact keys or use wildcards to remove only the changed assets without taking the distribution offline. Invalidations complete quickly relative to waiting for a long TTL to expire and they avoid a global outage. Keep in mind that many invalidation paths can add cost so using versioned file names reduces the need for frequent invalidations.
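
A short sketch of submitting an invalidation with the AWS SDK for Python; the distribution ID and object paths are placeholders.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1ABCDEFGHIJKL",          # placeholder distribution ID
    InvalidationBatch={
        "Paths": {
            "Quantity": 2,
            "Items": ["/assets/app.css", "/assets/app.js"],  # or a wildcard like "/assets/*"
        },
        "CallerReference": str(time.time()),  # must be unique per invalidation request
    },
)
```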

Disable the CloudFront distribution and then enable it to refresh all edge caches is wrong because disabling the distribution would cause service disruption for all users and it does not provide a safe or reliable way to purge specific objects from every edge location.

Reduce the cache TTL to zero in the cache behavior and wait for propagation is wrong because changing the TTL affects only future caching behavior and it does not evict objects that are already stored at edge locations. Waiting for propagation would still leave visitors receiving stale files until the old cached entries naturally expire or are removed.

Create a new origin with the updated files and repoint the distribution to it is wrong because it introduces unnecessary configuration changes and creates cold caches across the globe. This approach is more complex and still does not explicitly remove the previously cached objects from edge locations.

Use CloudFront invalidations to immediately remove changed assets and adopt versioned file names to avoid frequent invalidations and reduce risk.

Orion Ledger, a fintech startup, runs a web service on Amazon EC2 instances behind an Application Load Balancer. Clients must connect to the load balancer over HTTPS. The developer obtained a public X.509 TLS certificate for the site from AWS Certificate Manager. What should the developer do to establish secure client connections to the load balancer?

  • ✓ C. Associate the ACM certificate with the ALB HTTPS listener by using the AWS Management Console

Associate the ACM certificate with the ALB HTTPS listener by using the AWS Management Console is correct because terminating TLS at the Application Load Balancer lets the ALB handle the TLS handshake for client connections and uses the managed certificate from AWS Certificate Manager without installing private keys on the EC2 instances.

When you attach the certificate to an HTTPS listener on the ALB the load balancer presents the public certificate to clients and establishes secure client to ALB connections. This uses ACM managed certificates so you do not need to export private keys or copy certificates to targets and you can choose to enable TLS from the ALB to the back end instances if you require end to end encryption.

Configure each EC2 instance to use the ACM certificate and terminate TLS on the instances is incorrect because ACM public certificates cannot be exported for installation on instances and terminating at each instance is unnecessary unless you specifically need end to end encryption.

Export the certificate private key to an Amazon S3 bucket and configure the ALB to load the certificate from S3 is incorrect because ACM public certificates do not allow private key export, and the ALB does not load certificates from S3; it uses certificates provided by ACM or IAM.

Place Amazon CloudFront in front of the ALB and attach the ACM certificate to the CloudFront distribution is incorrect because adding CloudFront is an unnecessary extra component for the stated requirement and CloudFront also has its own regional ACM constraints for edge certificates which adds complexity rather than directly securing client connections to the ALB.

For HTTPS with an ALB remember to think HTTPS listener plus ACM certificate and recall that the private keys of ACM public certificates cannot be exported.

A ticketing startup, MetroTix, is running a flash sale that causes a spike in events on an Amazon Kinesis Data Streams stream. To scale, the team split shards so the stream grew from 5 shards to 12 shards. The consumer uses the Kinesis Client Library and runs one worker per Amazon EC2 instance. What is the maximum number of EC2 instances that can be launched to process this stream for this application at the same time?

  • ✓ B. 12

The correct answer is 12. A Kinesis Client Library consumer assigns exactly one worker to each shard via a lease so the maximum number of concurrent application workers equals the number of shards, and with 12 shards and one worker per EC2 instance you can launch up to 12 EC2 instances to process the stream at the same time.

When the stream was split from 5 shards to 12 shards the available parallel processing slots increased to 12 so you scale out by adding shards and adding instances. The KCL coordinates shard leases and guarantees that only one worker holds a shard lease at a time so instances beyond the shard count will not increase throughput and will remain idle for this application.

24 is incorrect because the KCL does not allow two workers from the same application to process the same shard concurrently and you cannot double the worker count beyond the shard count.

5 is incorrect because that number reflects the previous shard count and it underutilizes the additional parallelism after the split to 12 shards.

1 is incorrect because a single instance can process all shards but it serializes work and it is not the maximum number of instances that can run concurrently to use shards in parallel.

Remember that KCL assigns one worker per shard so the maximum parallel consumers for an application equals the stream shard count.

A retail technology startup is breaking an older order-processing system into microservices on AWS. One service will run several AWS Lambda functions, and the team plans to orchestrate these invocations with AWS Step Functions. What should the Developer create to define the workflow and coordinate the function executions?

  • ✓ C. Define a Step Functions state machine using Amazon States Language

The correct option is Define a Step Functions state machine using Amazon States Language. This option names the workflow specification that AWS Step Functions executes.

AWS Step Functions workflows are written as state machines in the JSON-based Amazon States Language. A state machine definition describes each step and the transitions and it coordinates Lambda function invocations as well as branching, retries, and error handling, so the developer must author that definition to express the workflow logic.
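
As a hedged sketch, a minimal Amazon States Language definition registered with the AWS SDK for Python might look like the following. The state machine name, function ARNs, and execution role ARN are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# A two-step workflow: validate an order, then charge payment, with a simple retry.
definition = {
    "Comment": "Order processing workflow",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ChargePayment",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/OrderWorkflowRole",
)
```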

Invoke StartExecution to begin a workflow run is incorrect because StartExecution only starts an execution of an already defined state machine and it does not create the workflow definition.

Deploy the orchestration with an AWS CloudFormation YAML template is incorrect because CloudFormation can provision an AWS::StepFunctions::StateMachine resource but you still must provide the Amazon States Language definition, so CloudFormation alone does not author the workflow logic.

Amazon EventBridge is incorrect because EventBridge is an event bus for routing events and it does not provide the stateful, step-by-step orchestration that Step Functions offers.

When a question asks you to define workflow logic look for terms like state machine and Amazon States Language. Remember that APIs such as StartExecution run existing workflows and infrastructure templates still need the ASL definition.

The engineering team at Solstice Shipping operates a hybrid environment with about 80 on-premises Linux and Windows servers and a set of Amazon EC2 instances. They need to gather OS-level metrics like CPU, memory, disk, and network from all machines and send them to a single view in Amazon CloudWatch with minimal custom effort. What is the most efficient way to achieve this?

  • ✓ B. Deploy the CloudWatch agent to both the on-premises servers and the EC2 instances

The most efficient choice is Deploy the CloudWatch agent to both the on-premises servers and the EC2 instances because this approach collects OS level metrics such as CPU, memory, disk, and network from Linux and Windows hosts and publishes them directly to CloudWatch with a consistent configuration and minimal custom code.

Deploy the CloudWatch agent to both the on-premises servers and the EC2 instances supports a unified configuration for metrics and logs and works across hybrid environments, so you get a single view in CloudWatch without maintaining ad hoc scripts or separate exporters. The agent is managed by AWS and can be configured centrally which reduces operational overhead and improves consistency across the fleet.
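
For illustration only, an abbreviated agent configuration for OS-level metrics might resemble the following, written here as a Python dict dumped to the agent's typical Linux config path. The measurement names are illustrative and should be confirmed against the CloudWatch agent documentation or its configuration wizard.

```python
import json

# Abbreviated, illustrative CloudWatch agent configuration for host metrics.
agent_config = {
    "metrics": {
        "metrics_collected": {
            "cpu": {"measurement": ["cpu_usage_user", "cpu_usage_system", "cpu_usage_idle"]},
            "mem": {"measurement": ["mem_used_percent"]},
            "disk": {"measurement": ["used_percent"], "resources": ["*"]},
            "net": {"measurement": ["bytes_sent", "bytes_recv"]},
        }
    }
}

# Typical Linux location for the agent configuration file.
with open("/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w") as f:
    json.dump(agent_config, f, indent=2)
```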

Install AWS Distro for OpenTelemetry on all servers and export metrics to CloudWatch is not ideal because the AWS Distro for OpenTelemetry is focused on application telemetry and usually requires more setup to collect host level metrics and to configure exporters, so it is less efficient for simple OS level metric collection.

Enable CloudWatch detailed monitoring for both EC2 instances and on-premises servers is incorrect because detailed monitoring is an EC2 feature and does not apply to on-premises servers, so it cannot provide host metrics from the on-premises portion of the hybrid environment.

Use built-in EC2 metrics in CloudWatch and push on-premises metrics with AWS CLI put-metric-data scripts can work technically but it introduces scripting, maintenance, and operational risk, and it does not provide the consistent, managed collection and configuration that the CloudWatch agent offers.

When the exam describes hybrid host metrics think CloudWatch agent first and avoid answers that require extensive custom scripting.

NovaTrail Labs has opened a new AWS account and is setting up its first IAM users and permission policies for a small engineering team. Which approaches align with AWS recommended practices for managing user permissions? (Choose 2)

  • ✓ B. Assign permissions by adding users to IAM groups

  • ✓ D. Define reusable customer managed policies instead of inline policies attached to a single identity

Assign permissions by adding users to IAM groups and Define reusable customer managed policies instead of inline policies attached to a single identity are the correct approaches because they promote scalable, reusable, and auditable permission management.

Assign permissions by adding users to IAM groups lets you manage permissions in one place and apply them consistently to many users so onboarding and offboarding are simpler and you reduce configuration drift while enforcing least privilege.

Define reusable customer managed policies instead of inline policies attached to a single identity allows you to reuse policies across users and roles and to update, version, and audit permissions centrally while inline policies remain tied to a single principal and are harder to maintain.
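
A short sketch of the group-plus-customer-managed-policy pattern with the AWS SDK for Python; the group, policy, bucket, and user names are illustrative.

```python
import json
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="engineering")

policy = iam.create_policy(
    PolicyName="EngineeringS3ReadOnly",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::team-artifacts", "arn:aws:s3:::team-artifacts/*"],
        }],
    }),
)

# Attach the reusable customer managed policy to the group, then add users to the group.
iam.attach_group_policy(GroupName="engineering", PolicyArn=policy["Policy"]["Arn"])
iam.add_user_to_group(GroupName="engineering", UserName="dev-alice")
```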

Always prefer customer managed policies over AWS managed policies is not correct because AWS managed policies are maintained by AWS and are a recommended starting point and you should create customer managed policies only when you need customization or tighter control.

Share a common IAM user among developers to reduce management overhead is wrong because shared credentials eliminate individual accountability and make credential rotation and auditing difficult and you should use individual identities or federated access with roles.

Use AWS Organizations service control policies to grant permissions to individual users is incorrect because service control policies act as account level guardrails and do not grant permissions to users and you still need IAM policies or roles to authorize actions.

Prefer groups for assigning common permissions and start with AWS managed policies then create customer managed policies for customization and reusability.

A developer at Orion Analytics created an AWS Lambda function that reads items from an Amazon DynamoDB table named ReportsV3. The team now needs a simple public HTTPS endpoint that accepts HTTP GET requests and forwards the full request context to the function while keeping operational overhead and cost low. Which approach should the developer choose?

  • ✓ D. Create an Amazon API Gateway API with Lambda proxy integration

Create an Amazon API Gateway API with Lambda proxy integration is the correct choice because it provides a simple public HTTPS endpoint that accepts GET requests and forwards the full request context to the Lambda function.

The Create an Amazon API Gateway API with Lambda proxy integration option forwards headers, path parameters, query strings, and the payload directly to your function with minimal mapping. It gives a managed HTTPS endpoint and supports GET semantics so your function can inspect the incoming request before querying the ReportsV3 DynamoDB table. Choosing API Gateway for proxying keeps configuration and operational overhead low and can be cost efficient, especially if you select the HTTP API option for simple proxy scenarios.
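
A minimal sketch of a handler behind a proxy integration follows, assuming a hypothetical reportId query string parameter and partition key on the ReportsV3 table.

```python
import json
import boto3

table = boto3.resource("dynamodb").Table("ReportsV3")

def handler(event, context):
    # With Lambda proxy integration, API Gateway forwards the full request context:
    # method, headers, path, and query string parameters all arrive in the event.
    params = event.get("queryStringParameters") or {}
    report_id = params.get("reportId")  # hypothetical query parameter

    item = table.get_item(Key={"reportId": report_id}).get("Item") if report_id else None

    # Proxy integrations expect a response with statusCode, headers, and body.
    return {
        "statusCode": 200 if item else 404,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item or {"message": "report not found"}, default=str),
    }
```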

Create an API Gateway API using a POST method does not satisfy the requirement to handle GET requests for retrieving data and using POST would be semantically incorrect for a read operation. It also does not address the need to forward the full request context unless paired with proxy integration.

Configure an Application Load Balancer with the Lambda function as a target can work technically but it introduces additional setup and potentially higher cost compared to API Gateway for a straightforward HTTP-to-Lambda proxy. For a simple public GET endpoint that must forward full request details, ALB usually adds more operational overhead.

Amazon Cognito User Pool with Lambda triggers focuses on authentication and user management and it does not provide a generic HTTP proxy that forwards arbitrary GET requests to Lambda. Cognito can be added if you need authentication but it does not replace a public HTTPS endpoint for forwarding requests.

Remember that API Gateway with Lambda proxy integration forwards headers and query strings by default so choose it when you need a minimal, public HTTPS GET endpoint that passes the full request context to Lambda.

A mid-sized fintech startup named Northwind Capital uses AWS CodePipeline to deliver updates to several Elastic Beanstalk environments. After almost two years of steady releases, the application is nearing the service’s cap on stored application versions, blocking registration of new builds. What is the best way to automatically clear out older, unused versions so that future deployments can proceed?

  • ✓ B. Elastic Beanstalk application version lifecycle policy

The correct option is Elastic Beanstalk application version lifecycle policy. This native feature automatically removes older unused application versions based on age or maximum count when new versions are created and prevents you from hitting the application version quota so deployments can continue.

Using the Elastic Beanstalk application version lifecycle policy is the best approach because it is managed by Elastic Beanstalk and it cleans up both the stored artifacts and the service metadata as configured. This avoids the need to build and operate custom cleanup code and ensures that version records do not become dangling or inconsistent.
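
A brief sketch of enabling the lifecycle policy with the AWS SDK for Python; the application name, service role ARN, and retention limits are placeholders.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_application_resource_lifecycle(
    ApplicationName="northwind-portal",  # placeholder application name
    ResourceLifecycleConfig={
        "ServiceRole": "arn:aws:iam::123456789012:role/aws-elasticbeanstalk-service-role",
        "VersionLifecycleConfig": {
            "MaxCountRule": {
                "Enabled": True,
                "MaxCount": 200,             # keep at most 200 application versions
                "DeleteSourceFromS3": True,  # also remove the source bundle from S3
            },
            "MaxAgeRule": {"Enabled": False, "MaxAgeInDays": 180, "DeleteSourceFromS3": False},
        },
    },
)
```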

AWS Lambda is incorrect because using Lambda would require writing and maintaining custom functions and schedules plus handling permissions and failure cases rather than relying on a built in cleanup option.

Amazon S3 Lifecycle rules are incorrect because S3 lifecycle policies act only on S3 objects and do not remove Elastic Beanstalk application version metadata, which can leave Elastic Beanstalk still counting versions.

Elastic Beanstalk worker environment is incorrect because worker environments are intended for background task processing and they do not provide any mechanism to manage or prune stored application versions.

If you approach a version limit, enable an application version lifecycle policy for the Elastic Beanstalk application and verify that it is configured to remove unused versions by age or count.

A fintech startup named NorthPeak Analytics runs a scheduled agent on an Amazon EC2 instance that aggregates roughly 120 GB of files each day from three Amazon S3 buckets. The data science group wants to issue spur-of-the-moment SQL queries against those files without ingesting them into a database. To keep operations minimal and pay only per query, which AWS service should be used to query the data directly in S3?

  • ✓ B. Amazon Athena

Amazon Athena is the correct choice for this scenario because it provides serverless ad hoc SQL queries directly on data stored in Amazon S3 and charges you on a pay per query basis which keeps operations minimal and costs aligned with occasional analysis.

Amazon Athena supports standard SQL and integrates with the AWS Glue Data Catalog for schema discovery, which allows the data science team to run spur-of-the-moment queries without ingesting roughly 120 GB of files per day into a database. Being serverless means there is no cluster to provision or manage and you only pay for the queries you run, which fits the requirement to minimize operational overhead.
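
An illustrative query submission with the AWS SDK for Python; the database, table, and results bucket are placeholders and assume a table has already been defined over the S3 files, for example in the AWS Glue Data Catalog.

```python
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS total FROM daily_exports GROUP BY region",
    QueryExecutionContext={"Database": "analytics"},          # placeholder database
    ResultConfiguration={"OutputLocation": "s3://northpeak-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll get_query_execution(QueryExecutionId=query_id) and fetch rows with
# get_query_results once the query state is SUCCEEDED.
```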

Amazon EMR is designed for managed big data processing and requires provisioning and managing clusters and running frameworks such as Spark or Hive which adds unnecessary operational complexity for simple ad hoc SQL on S3.

Amazon Redshift Spectrum can query data in S3 but it relies on a Redshift cluster and external table setup which introduces baseline costs and more administration compared with a purely serverless query service.

AWS Step Functions is a workflow orchestration service that manages state machines and does not execute SQL queries on S3 data so it does not meet the requirement for on demand SQL querying.

When the exam scenario asks for ad hoc SQL directly on S3 with minimal ops and pay per query pricing choose Amazon Athena instead of services that require clusters or a data warehouse.

A travel booking startup runs its front end on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. During peak promotions the group scales to about 80 instances, and user sessions must persist for roughly 45 minutes and be available to any instance even as nodes are replaced. Where should the developer keep the session state so requests can be served by any instance?

  • ✓ B. Store session data in an Amazon ElastiCache for Redis cluster

Store session data in an Amazon ElastiCache for Redis cluster is the correct choice because it centralizes session state so any EC2 instance behind the Application Load Balancer can access user sessions even as instances are replaced by Auto Scaling.

Redis provides a centralized in-memory store with very low latency, and it supports time-to-live based expiry, replication, and optional persistence, so sessions survive instance replacement and remain available across the fleet. Using ElastiCache decouples session state from individual instance storage and lets the ALB route requests to any healthy instance without losing user session data.
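
A minimal sketch of session storage with the redis-py client; the cluster endpoint is a placeholder and the TLS setting depends on how the replication group is configured.

```python
import json
import redis

sessions = redis.Redis(
    host="my-sessions.xxxxxx.ng.0001.use1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    ssl=True,  # assumes in-transit encryption is enabled on the cluster
)

SESSION_TTL_SECONDS = 45 * 60  # sessions persist for roughly 45 minutes

def save_session(session_id: str, data: dict) -> None:
    # SETEX stores the session with a TTL so it expires automatically.
    sessions.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id: str):
    raw = sessions.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```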

Store session data on ephemeral instance store volumes is incorrect because the instance store is local to one instance and its data is lost when that instance stops, terminates, or fails, so it cannot be shared across the Auto Scaling group.

Store session data on the instance root filesystem is incorrect because the root volume is per instance and not shared so scaling events or instance replacements will strand or lose sessions.

Store session data on a shared Amazon EBS volume attached to multiple instances is incorrect because EBS Multi-Attach is limited to specific volume types and to instances in the same Availability Zone, and it requires application-level coordination, so it does not offer a practical, scalable multi-AZ pattern for web session state behind an ALB.

Think centralized and in-memory for shared session state behind a load balancer so sessions remain available when instances scale or are replaced.

So dive into the AWS Developer Practice Questions, test your knowledge with the AWS Developer Exam Simulator, and see how well you can perform under real exam conditions.

Good luck, and remember, every great cloud development career begins with mastering the tools and services that power the AWS cloud.

Certification Exam Simulator Questions

Northwind Press, a digital publisher, wants a serverless way to roll out static front-end sites through preview and production stages. Repositories are spread across multiple Git providers, and deployments should start automatically when changes are merged into designated branches. All traffic and integrations must use HTTPS. Which approach will minimize ongoing operational effort?

  • ❏ A. Host in Amazon S3 and use AWS CodePipeline with AWS CodeBuild to deploy on branch merges, fronted by Amazon CloudFront for HTTPS

  • ❏ B. Use AWS Elastic Beanstalk with AWS CodeStar to manage environments and deployments

  • ❏ C. Use AWS Amplify Hosting, connect the target Git branches to environments, and let merges trigger automated HTTPS deployments

  • ❏ D. Run separate Amazon EC2 instances per environment and automate releases with AWS CodeDeploy tied to the repositories

A streaming analytics team at Vega Studios is building a serverless consumer that processes records from an Amazon Kinesis Data Streams stream using AWS Lambda. The function is CPU constrained during batch processing and the team wants to increase per-invocation compute capacity without changing code while keeping costs manageable. What should they do to increase the CPU available to the function?

  • ❏ A. Configure the function to run on unreserved account concurrency

  • ❏ B. Use Lambda@Edge

  • ❏ C. Increase the function’s concurrent executions limit

  • ❏ D. Allocate more memory to the function

A travel reservations startup is overhauling its release process and moving away from a rigid waterfall approach. The team now requires all services to follow CI/CD best practices and to be containerized with Docker, with images stored in Amazon ECR and published via AWS CodePipeline and AWS CodeBuild. During a pipeline run, the final push to the registry fails and reports an authorization error. What is the most probable cause?

  • ❏ A. Security group rules prevent CodeBuild from reaching Amazon ECR

  • ❏ B. The CodeBuild service role lacks permissions to obtain an ECR authorization token and push images

  • ❏ C. The ECS cluster instances need extra configuration added to /etc/ecs/ecs.config

  • ❏ D. The build environment’s VPC is missing Amazon ECR interface VPC endpoints

NovaArcade is building a gaming analytics feature on Amazon DynamoDB. The table uses user_id as the partition key and game_name as the sort key, and each item stores points and points_recorded_at. The team must power a leaderboard that returns the highest scoring players (user_id) per game_name with the lowest read cost and latency. What is the most efficient way to fetch these results?

  • ❏ A. Use a DynamoDB Query on the base table with key attributes user_id and game_name and sort the returned items by points in the application

  • ❏ B. Use DynamoDB Streams with AWS Lambda to maintain a separate Leaderboards table keyed by game_name and sorted by points, then query that table

  • ❏ C. Create a global secondary index with partition key game_name and sort key points, then query it for each game_name with ScanIndexForward set to false

  • ❏ D. Create a local secondary index with primary key game_name and sort key points and query by game_name

A Berlin-based travel startup is developing a cross-platform mobile app that alerts users to visible meteor showers and ISS flyovers over the next 30 days. The app signs users in with a social identity provider using the provider’s SDK and then sends the returned OAuth 2.0 or OpenID Connect token to Amazon Cognito Identity Pools. After the user is authenticated, what does Cognito return that the client uses to obtain temporary, limited-privilege AWS credentials?

  • ❏ A. Cognito key pair

  • ❏ B. AWS Security Token Service

  • ❏ C. Amazon Cognito identity ID

  • ❏ D. Amazon Cognito SDK

Over the next 9 months, Orion Travel Tech plans to rebuild its legacy platform using Node.js and GraphQL. The team needs full request tracing with a visual service map of dependencies. The application will run on an EC2 Auto Scaling group of Amazon Linux 2 instances behind an Application Load Balancer, and trace data must flow to AWS X-Ray. What is the most appropriate approach to meet these needs?

  • ❏ A. Refactor the Node.js service to call the PutTraceSegments API and push segments straight to AWS X-Ray

  • ❏ B. Enable AWS WAF on the Application Load Balancer to inspect and record all web requests

  • ❏ C. Add a user data script that installs and starts the AWS X-Ray daemon on each EC2 instance in the Auto Scaling group

  • ❏ D. Turn on AWS X-Ray tracing in the EC2 Auto Scaling launch template

An operations dashboard for Helios Tickets streams click and event data into Amazon Kinesis Data Streams. During flash sales, the producers are not fully using the 24 available shards, leaving write capacity idle. Which change will help the producers better utilize the shards and increase write throughput to the stream?

  • ❏ A. Increase the number of shards with UpdateShardCount

  • ❏ B. Insert Amazon SQS between producers and the stream to buffer writes

  • ❏ C. Use the Kinesis Producer Library to aggregate and batch records before calling PutRecords

  • ❏ D. Call DynamoDB BatchWriteItem to send multiple records at once

Your team is building a subscription portal for BrightPath Media using AWS Lambda, AWS App Runner, and Amazon DynamoDB. Whenever a new item is inserted into the Members table, a Lambda function must automatically send a personalized welcome email to the member. What is the most appropriate way to have the table changes invoke the function?

  • ❏ A. Create an Amazon EventBridge rule for new item inserts in the table and target the Lambda function

  • ❏ B. Enable DynamoDB Streams on the Members table and configure it as the event source for the Lambda function

  • ❏ C. Use Amazon Kinesis Data Streams to capture new table data and subscribe the Lambda function

  • ❏ D. Enable DynamoDB Transactions and configure them as the event source for the function

A developer at VerdantPay is building a serverless workflow that processes confidential payment data. The AWS Lambda function writes intermediate files to its /tmp directory, and the company requires those files to be encrypted while at rest. What should the developer do to meet this requirement?

  • ❏ A. Attach the Lambda function to a VPC and mount an encrypted Amazon EBS volume to /tmp

  • ❏ B. Configure the Lambda function to use an AWS KMS customer managed key, generate a data key with KMS, and encrypt data before writing to /tmp

  • ❏ C. Enable default encryption on an Amazon S3 bucket with a KMS customer managed key and mount the bucket to /tmp

  • ❏ D. Mount an encrypted Amazon EFS access point and rely on EFS encryption at rest for files written to /tmp

You are creating a YAML AWS CloudFormation stack for BlueRiver Analytics that provisions an Amazon EC2 instance and a single Amazon RDS DB instance. After the stack completes, you want to expose the database connection endpoint in the Outputs so other stacks can consume it. Which intrinsic function should be used to fetch that endpoint value?

  • ❏ A. !Sub

  • ❏ B. !FindInMap

  • ❏ C. !GetAtt

  • ❏ D. !Ref

A retail analytics startup runs part of its platform on Amazon EC2 and the rest on servers in a private colocation rack. The on-premises hosts are vital to the service, and the team needs to aggregate host metrics and application logs in Amazon CloudWatch alongside EC2 data. What should the developer implement to send telemetry from the on-premises servers to CloudWatch?

  • ❏ A. Build a scheduled script that gathers metrics and log files and uploads them to CloudWatch with the AWS CLI

  • ❏ B. Install the CloudWatch agent on the data center servers and configure an IAM user’s access key and secret key that allow publishing metrics and logs to CloudWatch

  • ❏ C. Register the servers as AWS Systems Manager hybrid managed instances so the CloudWatch agent can assume the Systems Manager role to send data to CloudWatch

  • ❏ D. Install the CloudWatch agent on the data center servers and attach an IAM role with CloudWatch permissions to those machines

A travel startup named Skylark Journeys plans to roll out refreshed REST endpoints for its iOS and Android app that run behind Amazon API Gateway. The team wants to begin by sending roughly 15% of calls to the new version while the rest continue to use the current release, and they prefer the simplest approach that stays within API Gateway. How can they expose the new version to only a subset of clients through API Gateway?

  • ❏ A. Configure an Amazon Route 53 weighted routing policy to send a percentage of requests to a second API Gateway domain

  • ❏ B. Enable a stage canary in Amazon API Gateway and use canarySettings to shift a small share of traffic to the new deployment

  • ❏ C. Launch the new API in a separate VPC and use Amazon CloudFront to split traffic between the two origins

  • ❏ D. Use AWS CodeDeploy traffic shifting to gradually move calls to the updated API

A media analytics startup runs an Amazon EC2 Auto Scaling group named api-fleet with a maximum size of 5 and a current size of 4. A scale-out policy is configured to add 4 instances when its CloudWatch alarm is in ALARM. When this policy is executed, what will happen?

  • ❏ A. Amazon EC2 Auto Scaling adds 4 instances to the group

  • ❏ B. Amazon EC2 Auto Scaling adds only 1 instance to the group

  • ❏ C. Amazon EC2 Auto Scaling launches 4 instances and then scales in 3 shortly afterward

  • ❏ D. Amazon EC2 Auto Scaling adds 4 instances across multiple Availability Zones because the maximum size applies per Availability Zone

A developer at Northwind Bikes configured an AWS CodeBuild project in the console to use a custom Docker image stored in Amazon ECR. The buildspec file exists and the build starts, but the environment fails when trying to fetch the container image before any build commands run. What is the most likely reason for the failure?

  • ❏ A. AWS CodeBuild does not support custom Docker images

  • ❏ B. The image in Amazon ECR has no tags

  • ❏ C. The CodeBuild service role lacks permissions to authenticate and pull from Amazon ECR

  • ❏ D. The build environment is not configured for privileged mode

Orion Outfitters, a global distributor, needs users from partner supplier companies who sign in with their own SAML or OIDC identity providers to add and modify items in two DynamoDB tables named PartnerOrders and SupplierCatalog in Orion’s AWS account without creating individual IAM users for them. Which approach should the developer implement to securely grant these partner users scoped access to those tables?

  • ❏ A. Create an IAM user for each partner user and attach DynamoDB permissions

  • ❏ B. Configure Amazon Cognito User Pools to sign in supplier users and authorize DynamoDB access

  • ❏ C. Set up Amazon Cognito Identity Pools to federate supplier IdPs and issue temporary credentials for DynamoDB operations

  • ❏ D. AWS IAM Identity Center

BrightParcel, a logistics startup, runs Auto Scaling worker nodes on Amazon EC2 that poll an Amazon SQS standard queue for tasks. Some messages fail repeatedly during processing, and after 4 attempts the team wants those problematic messages automatically routed to an isolated location for later analysis without deleting them. What should they implement to keep failing messages separate for troubleshooting?

  • ❏ A. Enable long polling on the queue

  • ❏ B. Implement a dead-letter queue

  • ❏ C. Decrease the visibility timeout

  • ❏ D. Increase the visibility timeout

A travel booking platform runs several Amazon EC2 web servers behind an Application Load Balancer, and during busy periods the instances sustain around 92 percent CPU utilization. The engineering team has confirmed that processing TLS for HTTPS traffic is consuming most of the CPU on the servers. What actions should they take to move the TLS workload off the application instances? (Choose 2)

  • ❏ A. Configure an HTTPS listener on the ALB that forwards encrypted traffic to targets without decryption (pass-through)

  • ❏ B. Request or import an SSL/TLS certificate in AWS Certificate Manager and associate it with the ALB

  • ❏ C. Install the ACM certificate directly on each EC2 instance to terminate TLS on the server

  • ❏ D. Create an HTTPS listener on the ALB that terminates TLS at the load balancer

  • ❏ E. Add AWS WAF on the ALB to reduce CPU from TLS

You manage multiple Amazon API Gateway REST APIs that invoke AWS Lambda for each release of Orion Fintech’s backend. The company wants to merge these into one API while keeping distinct subdomain endpoints such as alpha.api.orionfin.io, beta.api.orionfin.io, rc.api.orionfin.io, and prod.api.orionfin.io so clients can explicitly target an environment. What should you implement to achieve this in the most maintainable way?

  • ❏ A. Modify the Integration Request mapping to dynamically rewrite the backend endpoint for each release based on the hostname

  • ❏ B. Configure Amazon Route 53 weighted records to direct clients to separate APIs for each environment

  • ❏ C. Create stages for ALPHA, BETA, RC, and PROD and use stage variables to target the right Lambda or HTTP backend, then map each subdomain to its stage

  • ❏ D. Use Lambda layers to separate environment-specific code

NorthWind Labs, a media analytics startup, exposes a public Amazon API Gateway REST API that backs its web dashboard with Amazon Cognito sign-in. The team is preparing a beta of a major API revision that introduces new resources and breaking changes, and only a small set of internal developers should test it while paying customers continue using the current API uninterrupted. The development team will maintain and iterate on the beta during this period. What is the most operationally efficient way to let the developers invoke the new version without impacting production users?

  • ❏ A. Configure a canary release on the existing production stage and give developers a stage-variable URL to hit

  • ❏ B. Create a separate API Gateway stage such as beta-v3 that is deployed from the new version and have developers use that stage’s invoke URL

  • ❏ C. Issue new API keys in API Gateway and require developers to pass those keys to reach the new version

  • ❏ D. Build a brand-new API Gateway API for the new handlers and tell developers to call that new API

A data insights portal for a regional retailer runs on AWS Elastic Beanstalk and stores report data in an Amazon DynamoDB table named AnalyticsEvents. Each request currently performs a full table scan and then filters the results for the user. Adoption is expected to jump over the next 8 weeks, and the table will grow substantially as report requests increase. What should you implement ahead of the growth to keep read performance high while minimizing cost? (Choose 2)

  • ❏ A. Enable DynamoDB Accelerator (DAX)

  • ❏ B. Switch to Query requests where possible

  • ❏ C. Adjust the ScanIndexForward setting to sort query results

  • ❏ D. Lower the page size by setting a smaller Limit value

  • ❏ E. Increase the table’s write capacity units (WCU)

NovaPlay, a small streaming startup, uses Amazon CloudFront to serve static web assets to a global audience. Minutes after uploading new CSS and JavaScript files to the S3 origin, some visitors still receive stale versions from edge locations. The cache behavior currently has an 8-hour TTL, but the team needs the new objects to be delivered right away without causing an outage. What should the developer do to replace the cached files with minimal disruption?

  • ❏ A. Disable the CloudFront distribution and then enable it to refresh all edge caches

  • ❏ B. Submit a CloudFront invalidation request for the updated object paths

  • ❏ C. Reduce the cache TTL to zero in the cache behavior and wait for propagation

  • ❏ D. Create a new origin with the updated files and repoint the distribution to it

Orion Ledger, a fintech startup, runs a web service on Amazon EC2 instances behind an Application Load Balancer. Clients must connect to the load balancer over HTTPS. The developer obtained a public X.509 TLS certificate for the site from AWS Certificate Manager. What should the developer do to establish secure client connections to the load balancer?

  • ❏ A. Configure each EC2 instance to use the ACM certificate and terminate TLS on the instances

  • ❏ B. Export the certificate private key to an Amazon S3 bucket and configure the ALB to load the certificate from S3

  • ❏ C. Associate the ACM certificate with the ALB HTTPS listener by using the AWS Management Console

  • ❏ D. Place Amazon CloudFront in front of the ALB and attach the ACM certificate to the CloudFront distribution

A ticketing startup, MetroTix, is running a flash sale that causes a spike in events on an Amazon Kinesis Data Streams stream. To scale, the team split shards so the stream grew from 5 shards to 12 shards. The consumer uses the Kinesis Client Library and runs one worker per Amazon EC2 instance. What is the maximum number of EC2 instances that can be launched to process this stream for this application at the same time?

  • ❏ A. 24

  • ❏ B. 12

  • ❏ C. 5

  • ❏ D. 1

A retail technology startup is breaking an older order-processing system into microservices on AWS. One service will run several AWS Lambda functions, and the team plans to orchestrate these invocations with AWS Step Functions. What should the Developer create to define the workflow and coordinate the function executions?

  • ❏ A. Amazon EventBridge

  • ❏ B. Invoke StartExecution to begin a workflow run

  • ❏ C. Define a Step Functions state machine using Amazon States Language

  • ❏ D. Deploy the orchestration with an AWS CloudFormation YAML template

The engineering team at Solstice Shipping operates a hybrid environment with about 80 on-premises Linux and Windows servers and a set of Amazon EC2 instances. They need to gather OS-level metrics like CPU, memory, disk, and network from all machines and send them to a single view in Amazon CloudWatch with minimal custom effort. What is the most efficient way to achieve this?

  • ❏ A. Install AWS Distro for OpenTelemetry on all servers and export metrics to CloudWatch

  • ❏ B. Deploy the CloudWatch agent to both the on-premises servers and the EC2 instances

  • ❏ C. Enable CloudWatch detailed monitoring for both EC2 instances and on-premises servers

  • ❏ D. Use built-in EC2 metrics in CloudWatch and push on-premises metrics with AWS CLI put-metric-data scripts

NovaTrail Labs has opened a new AWS account and is setting up its first IAM users and permission policies for a small engineering team. Which approaches align with AWS recommended practices for managing user permissions? (Choose 2)

  • ❏ A. Always prefer customer managed policies over AWS managed policies

  • ❏ B. Assign permissions by adding users to IAM groups

  • ❏ C. Share a common IAM user among developers to reduce management overhead

  • ❏ D. Define reusable customer managed policies instead of inline policies attached to a single identity

  • ❏ E. Use AWS Organizations service control policies to grant permissions to individual users

A developer at Orion Analytics created an AWS Lambda function that reads items from an Amazon DynamoDB table named ReportsV3. The team now needs a simple public HTTPS endpoint that accepts HTTP GET requests and forwards the full request context to the function while keeping operational overhead and cost low. Which approach should the developer choose?

  • ❏ A. Amazon Cognito User Pool with Lambda triggers

  • ❏ B. Create an API Gateway API using a POST method

  • ❏ C. Configure an Application Load Balancer with the Lambda function as a target

  • ❏ D. Create an Amazon API Gateway API with Lambda proxy integration

A mid-sized fintech startup named Northwind Capital uses AWS CodePipeline to deliver updates to several Elastic Beanstalk environments. After almost two years of steady releases, the application is nearing the service’s cap on stored application versions, blocking registration of new builds. What is the best way to automatically clear out older, unused versions so that future deployments can proceed?

  • ❏ A. AWS Lambda

  • ❏ B. Elastic Beanstalk application version lifecycle policy

  • ❏ C. Amazon S3 Lifecycle rules

  • ❏ D. Elastic Beanstalk worker environment

A fintech startup named NorthPeak Analytics runs a scheduled agent on an Amazon EC2 instance that aggregates roughly 120 GB of files each day from three Amazon S3 buckets. The data science group wants to issue spur-of-the-moment SQL queries against those files without ingesting them into a database. To keep operations minimal and pay only per query, which AWS service should be used to query the data directly in S3?

  • ❏ A. Amazon EMR

  • ❏ B. Amazon Athena

  • ❏ C. AWS Step Functions

  • ❏ D. Amazon Redshift Spectrum

A travel booking startup runs its front end on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. During peak promotions the group scales to about 80 instances, and user sessions must persist for roughly 45 minutes and be available to any instance even as nodes are replaced. Where should the developer keep the session state so requests can be served by any instance?

  • ❏ A. Store session data on ephemeral instance store volumes

  • ❏ B. Store session data in an Amazon ElastiCache for Redis cluster

  • ❏ C. Store session data on the instance root filesystem

  • ❏ D. Store session data on a shared Amazon EBS volume attached to multiple instances

An engineer at Aurora Analytics is troubleshooting an AWS Lambda function deployed with the AWS CDK. The function runs without throwing exceptions, but nothing appears in Amazon CloudWatch Logs and no log group or log stream exists for the function. The code includes logging statements. The function uses an IAM execution role that trusts the Lambda service principal but has no permission policies attached, and the function has no resource-based policy. What change should be made to enable log creation in CloudWatch Logs?

  • ❏ A. Attach a resource-based policy to the function that grants logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents

  • ❏ B. Attach the AWSLambdaBasicExecutionRole managed policy to the Lambda execution role

  • ❏ C. Attach a resource-based policy to the function that grants only logs:PutLogEvents

  • ❏ D. Attach the CloudWatchLambdaInsightsExecutionRolePolicy managed policy to the execution role

A fashion marketplace’s engineering team is preparing for a 36-hour summer flash sale. The product lead requires an Amazon ElastiCache strategy that can handle sudden surges while ensuring product prices and descriptions remain fully consistent with the source database at all times. Which approach should they implement to keep the cache synchronized with the backend during updates?

  • ❏ A. Write to the cache first and asynchronously apply the change to the database

  • ❏ B. Commit to the database and rely on the item TTL to refresh the cache later

  • ❏ C. Commit to the database, then explicitly invalidate the affected cache keys

  • ❏ D. Amazon CloudFront

A telemedicine startup, HelixCare, runs an application on Amazon EC2 that produces thousands of tiny JSON files around 2 KB each containing sensitive patient data. The files are written to a vendor supplied network attached storage system that does not integrate with AWS services. The team wants to use AWS KMS in the safest way that keeps key material within KMS whenever possible. What should they do?

  • ❏ A. Generate a data key with a customer managed KMS key and use that data key to envelope encrypt each file

  • ❏ B. Encrypt the files directly using an AWS managed KMS key

  • ❏ C. Encrypt each file directly by calling AWS KMS Encrypt with a customer managed KMS key

  • ❏ D. Use the AWS Encryption SDK with KMS to generate data keys and encrypt the files

A fintech startup runs dozens of Dockerized microservices and plans to move them to Amazon ECS. Traffic is spiky, and the finance team wants costs to accrue only while individual tasks are actually running rather than paying for idle servers. Which choice best meets these goals?

  • ❏ A. An Amazon ECS service with Auto Scaling

  • ❏ B. Amazon ECS using the Fargate launch type

  • ❏ C. Amazon ECS with the EC2 launch type

  • ❏ D. Amazon ECS with EC2 Spot capacity providers

A media startup has deployed an Application Load Balancer in front of several Amazon EC2 instances for its StreamBox app. The target group now shows every instance as unhealthy, but browsing directly to an instance’s public IP on port 8080 successfully loads the site. Which issues could explain why the load balancer continues to mark these targets as unhealthy? (Choose 2)

  • ❏ A. Elastic IP addresses must be attached to the EC2 instances when used behind an Application Load Balancer

  • ❏ B. The target group health check path or port is not aligned with the application’s actual health endpoint

  • ❏ C. Cross-zone load balancing is disabled on the load balancer

  • ❏ D. The instance security group does not allow inbound traffic from the load balancer security group on required ports

  • ❏ E. The EBS volumes on the instances were mounted incorrectly

AWS Developer Certification Exam Simulator Answers

Northwind Press, a digital publisher, wants a serverless way to roll out static front-end sites through preview and production stages. Repositories are spread across multiple Git providers, and deployments should start automatically when changes are merged into designated branches. All traffic and integrations must use HTTPS. Which approach will minimize ongoing operational effort?

  • ✓ C. Use AWS Amplify Hosting, connect the target Git branches to environments, and let merges trigger automated HTTPS deployments

Use AWS Amplify Hosting, connect the target Git branches to environments, and let merges trigger automated HTTPS deployments is correct because it is a serverless service designed for static front ends and it natively supports branch-based CI/CD from multiple Git providers.

AWS Amplify Hosting automatically builds and deploys when designated branches are merged and it provisions a CDN and TLS certificates so traffic and integrations use HTTPS out of the box. Amplify Hosting also provides preview environments for pull requests and abstracts the underlying pipeline and infrastructure so ongoing operational effort is minimal.

Host in Amazon S3 and use AWS CodePipeline with AWS CodeBuild to deploy on branch merges, fronted by Amazon CloudFront for HTTPS can meet the requirements but it requires you to design and maintain pipelines, buildspecs, IAM policies, artifact handling, and cache invalidations which increases operational overhead compared with a hosted service.

Use AWS Elastic Beanstalk with AWS CodeStar to manage environments and deployments is not a good fit because it introduces managed compute instances and environment orchestration that are unnecessary for static sites and it does not match the serverless preference as cleanly.

Run separate Amazon EC2 instances per environment and automate releases with AWS CodeDeploy tied to the repositories is the least suitable option because it is not serverless and it adds instance lifecycle, patching, scaling, and operational management which greatly increases ongoing effort.

For questions that emphasize lowest operational overhead for static hosting with branch-triggered deployments choose AWS Amplify Hosting because it bundles CI/CD, previews, CDN, and TLS into a managed, serverless workflow.

A streaming analytics team at Vega Studios is building a serverless consumer that processes records from an Amazon Kinesis Data Streams stream using AWS Lambda. The function is CPU constrained during batch processing and the team wants to increase per-invocation compute capacity without changing code while keeping costs manageable. What should they do to increase the CPU available to the function?

  • ✓ D. Allocate more memory to the function

The correct option is Allocate more memory to the function. This setting increases the CPU and other resources available to each Lambda invocation because AWS allocates CPU proportionally to the configured memory.

Increasing memory for the function raises the vCPU share and I/O bandwidth available to compute bound batches and it can reduce overall execution time without requiring any code changes. This approach is the simplest way to increase per invocation compute capacity for a Lambda that processes Amazon Kinesis Data Streams records while keeping the architecture serverless.

Configure the function to run on unreserved account concurrency only changes which capacity pool serves the function and does not change the CPU available to each invocation.

Increase the function’s concurrent executions limit affects how many executions can run in parallel and not how much CPU a single execution receives.

Use Lambda@Edge is intended for executing code at CloudFront edge locations for content delivery and is not a solution to increase CPU for a regional Lambda that processes Kinesis records.

When asked how to get more per invocation CPU for a Lambda think increase the memory setting because CPU scales with memory and you do not need to change the code.
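To make the fix concrete, here is a minimal boto3 sketch of raising a function’s memory setting. The function name and the 2048 MB value are placeholders for this illustration.

```python
import boto3

lambda_client = boto3.client("lambda")

# Raising MemorySize also raises the CPU share allocated to each invocation.
# The function name and memory value below are placeholders for this sketch.
lambda_client.update_function_configuration(
    FunctionName="stream-batch-processor",
    MemorySize=2048,  # in MB; CPU is allocated roughly in proportion to memory
)
```

Tuning is usually iterative: raise memory, re-run a representative batch, and compare duration and cost before settling on a value.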

A travel reservations startup is overhauling its release process and moving away from a rigid waterfall approach. The team now requires all services to follow CI/CD best practices and to be containerized with Docker, with images stored in Amazon ECR and published via AWS CodePipeline and AWS CodeBuild. During a pipeline run, the final push to the registry fails and reports an authorization error. What is the most probable cause?

  • ✓ B. The CodeBuild service role lacks permissions to obtain an ECR authorization token and push images

The CodeBuild service role lacks permissions to obtain an ECR authorization token and push images is the most likely cause because the pipeline reports an authorization failure during the final push to Amazon ECR.

The build project uses its service role to call Amazon ECR for authentication and to perform repository operations. If the role lacks the necessary permissions, such as ecr:GetAuthorizationToken, push-related actions like ecr:PutImage, and the layer upload permissions, the docker push fails with an authorization error rather than a network timeout.

Security group rules prevent CodeBuild from reaching Amazon ECR is unlikely because an authorization error points to credential or IAM policy problems and not to connectivity failures which usually show as connection timeouts or refusals.

The ECS cluster instances need extra configuration added to /etc/ecs/ecs.config is unrelated because the failure occurs in the build stage when publishing the image to ECR and not on the ECS runtime hosts.

The build environment’s VPC is missing Amazon ECR interface VPC endpoints would manifest as network reachability errors when builds run in private subnets without outbound access and not as a clear authorization error tied to missing IAM permissions.

When an ECR push fails with an authorization error check the CodeBuild service role for required ECR permissions such as ecr:GetAuthorizationToken and repository write actions before investigating network settings.
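As a rough illustration, a policy along these lines could be attached to the CodeBuild service role. The role name, policy name, account ID, and repository ARN are placeholders, and you should scope them to your own resources.

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder role, policy, account, and repository values for this sketch.
push_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ecr:GetAuthorizationToken", "Resource": "*"},
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:PutImage",
            ],
            "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/app-images",
        },
    ],
}

iam.put_role_policy(
    RoleName="codebuild-service-role",
    PolicyName="AllowEcrPush",
    PolicyDocument=json.dumps(push_policy),
)
```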

NovaArcade is building a gaming analytics feature on Amazon DynamoDB. The table uses user_id as the partition key and game_name as the sort key, and each item stores points and points_recorded_at. The team must power a leaderboard that returns the highest scoring players (user_id) per game_name with the lowest read cost and latency. What is the most efficient way to fetch these results?

  • ✓ C. Create a global secondary index with partition key game_name and sort key points, then query it for each game_name with ScanIndexForward set to false

Create a global secondary index with partition key game_name and sort key points, then query it for each game_name with ScanIndexForward set to false is the correct option because it allows you to group items by game and return them ordered by score so you can fetch the top players with a single efficient Query.

The GSI reshapes the access pattern without duplicating reads and Query supports ScanIndexForward set to false to return highest points first. You can also apply Limit to the Query to fetch only the leaders which reduces read capacity usage and lowers latency. This native index based approach is simpler and more efficient than client side sorting or maintaining separate derived tables.

Use a DynamoDB Query on the base table with key attributes user_id and game_name and sort the returned items by points in the application is inefficient because Query is scoped to a single partition key user_id and it cannot order by a non key attribute. That pattern would require many queries or a scan plus client side sorting which increases latency and read cost.

Use DynamoDB Streams with AWS Lambda to maintain a separate Leaderboards table keyed by game_name and sorted by points, then query that table can work but it adds operational overhead and extra cost and it introduces eventual consistency and failure handling for the stream and Lambda pipeline. The GSI provides the required pattern natively with less complexity.

Create a local secondary index with primary key game_name and sort key points and query by game_name is not valid because local secondary indexes must use the same partition key as the base table which is user_id in this design. LSIs also must be defined at table creation which makes them less flexible for changing access patterns.

When you must return ranked results think partition by category and sort by the metric and prefer a GSI with ScanIndexForward set to false and a Limit to return the top N efficiently.
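A minimal boto3 sketch of that query pattern, assuming a hypothetical table named GameScores and a GSI named game_name-points-index that projects user_id:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Table and index names are placeholders for this sketch.
table = boto3.resource("dynamodb").Table("GameScores")

response = table.query(
    IndexName="game_name-points-index",
    KeyConditionExpression=Key("game_name").eq("asteroid-rush"),
    ScanIndexForward=False,  # descending by the index sort key (points)
    Limit=10,                # return only the top 10 players
)
top_players = [item["user_id"] for item in response["Items"]]
```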

A Berlin-based travel startup is developing a cross-platform mobile app that alerts users to visible meteor showers and ISS flyovers over the next 30 days. The app signs users in with a social identity provider using the provider’s SDK and then sends the returned OAuth 2.0 or OpenID Connect token to Amazon Cognito Identity Pools. After the user is authenticated, what does Cognito return that the client uses to obtain temporary, limited-privilege AWS credentials?

  • ✓ C. Amazon Cognito identity ID

The correct option is Amazon Cognito identity ID. When your app exchanges the social provider token with Amazon Cognito Identity Pools, Cognito creates or looks up a unique identity and returns an Amazon Cognito identity ID that the client uses to obtain temporary, limited privilege AWS credentials.

Cognito establishes the identity and the SDK then uses that identity identifier together with the provider token to request credentials. Cognito maps the identity to an IAM role and then invokes AWS Security Token Service to issue short lived credentials that are scoped by that role.

Cognito key pair is incorrect because Cognito does not return cryptographic key pairs to represent users or to fetch AWS credentials.

AWS Security Token Service is incorrect as the direct answer because STS is the backend service that issues the temporary credentials. The client does not receive STS itself after the identity provider exchange and instead receives a Cognito identity identifier first.

Amazon Cognito SDK is incorrect because the SDK is a client library used to call Cognito APIs and it is not the identifier or token returned by Cognito after authentication.

Remember that Identity Pools return an identity ID which is then used with role mappings to obtain temporary credentials via STS.
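As a hedged sketch of that exchange with boto3, assuming a Google OIDC sign-in and a placeholder identity pool ID and token:

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="eu-central-1")

# The identity pool ID and provider token are placeholders for this sketch.
logins = {"accounts.google.com": "<openid-connect-token-from-provider>"}

identity = cognito.get_id(
    IdentityPoolId="eu-central-1:11111111-2222-3333-4444-555555555555",
    Logins=logins,
)

# The identity ID plus the provider token are exchanged for scoped, temporary credentials.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=logins,
)["Credentials"]  # AccessKeyId, SecretKey, SessionToken, Expiration
```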

Over the next 9 months, Orion Travel Tech plans to rebuild its legacy platform using Node.js and GraphQL. The team needs full request tracing with a visual service map of dependencies. The application will run on an EC2 Auto Scaling group of Amazon Linux 2 instances behind an Application Load Balancer, and trace data must flow to AWS X-Ray. What is the most appropriate approach to meet these needs?

  • ✓ C. Add a user data script that installs and starts the AWS X-Ray daemon on each EC2 instance in the Auto Scaling group

The correct choice is Add a user data script that installs and starts the AWS X-Ray daemon on each EC2 instance in the Auto Scaling group.

AWS X-Ray SDKs send trace segments to a local process that aggregates and uploads them rather than calling the service for every request. On EC2 the recommended approach is to run the X-Ray daemon which listens on UDP port 2000, batches segments, handles retries, and forwards data to the X-Ray service. Using user data ensures every instance launched by the Auto Scaling group installs and starts the daemon and that the instance IAM role can provide the permissions needed to upload traces.

Refactor the Node.js service to call the PutTraceSegments API and push segments straight to AWS X-Ray is technically possible but it increases operational burden and can harm application performance because it bypasses the daemon’s batching and backoff behavior.

Enable AWS WAF on the Application Load Balancer to inspect and record all web requests improves security and can log requests but it does not instrument application code and it will not produce distributed traces or a visual service map in X-Ray.

Turn on AWS X-Ray tracing in the EC2 Auto Scaling launch template is not viable because there is no built in toggle to enable X-Ray at the launch template level and you must deploy the daemon or an equivalent collector such as the AWS Distro for OpenTelemetry.

When tracing EC2 workloads install the X-Ray daemon via user data or an AMI so traces are batched and retried locally before they are sent to the service.
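One way to bake this in is to put the daemon install into the launch template user data. The sketch below assumes Amazon Linux 2, uses a placeholder AMI ID, and uses the daemon RPM URL pattern shown in the X-Ray documentation, which you should verify for your Region.

```python
import base64
import boto3

ec2 = boto3.client("ec2")

# Installs the X-Ray daemon at boot; verify the package URL for your Region.
user_data = """#!/bin/bash
curl -o /home/ec2-user/xray.rpm https://s3.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-3.x.rpm
yum install -y /home/ec2-user/xray.rpm
systemctl start xray || true  # the package registers the xray service
"""

ec2.create_launch_template(
    LaunchTemplateName="orion-api-with-xray",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
        "InstanceType": "t3.small",
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)
```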

An operations dashboard for Helios Tickets streams click and event data into Amazon Kinesis Data Streams. During flash sales, the producers are not fully using the 24 available shards, leaving write capacity idle. Which change will help the producers better utilize the shards and increase write throughput to the stream?

  • ✓ C. Use the Kinesis Producer Library to aggregate and batch records before calling PutRecords

Use the Kinesis Producer Library to aggregate and batch records before calling PutRecords is the correct choice because it enables producers to better utilize the stream shards and increase write throughput.

Kinesis Producer Library aggregates many small user records into larger Kinesis records and batches them into PutRecords calls which reduces per-record API overhead and increases per-shard bandwidth. The library also handles efficient partitioning and retries so producers can drive higher sustained throughput across available shards.

Increase the number of shards with UpdateShardCount is unnecessary when shards have headroom because adding shards increases capacity but does not address inefficient small writes by producers. Scaling shards is appropriate when shards are actually saturated rather than when producers are not packing records efficiently.

Insert Amazon SQS between producers and the stream to buffer writes only decouples producers and consumers and adds buffering. SQS by itself does not cause producers to aggregate records or improve per-shard utilization unless you also change producer behavior to batch before calling Kinesis APIs.

Call DynamoDB BatchWriteItem to send multiple records at once is irrelevant because the DynamoDB BatchWriteItem API writes to DynamoDB and cannot be used to write to Kinesis Data Streams.

When shards report unused capacity prefer aggregation with the Kinesis Producer Library and batched PutRecords calls to raise per shard throughput and lower API overhead.
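The KPL itself is a native library with a Java interface, so the Python sketch below is not the KPL; it only illustrates the underlying idea of grouping many small events into batched PutRecords calls. The stream name and record shape are placeholders, and a production producer would also inspect FailedRecordCount and retry failed entries.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def put_batched(events, stream_name="clickstream"):  # stream name is a placeholder
    """Group small events into PutRecords calls of up to 500 records each."""
    records = [
        {"Data": json.dumps(e).encode(), "PartitionKey": str(e["session_id"])}
        for e in events
    ]
    for i in range(0, len(records), 500):  # PutRecords accepts at most 500 records
        kinesis.put_records(StreamName=stream_name, Records=records[i : i + 500])
```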

Your team is building a subscription portal for BrightPath Media using AWS Lambda, AWS App Runner, and Amazon DynamoDB. Whenever a new item is inserted into the Members table, a Lambda function must automatically send a personalized welcome email to the member. What is the most appropriate way to have the table changes invoke the function?

  • ✓ B. Enable DynamoDB Streams on the Members table and configure it as the event source for the Lambda function

The correct option is Enable DynamoDB Streams on the Members table and configure it as the event source for the Lambda function. This approach captures per item changes and invokes the Lambda when new INSERT records appear.

DynamoDB Streams captures item level modifications in near real time and an AWS Lambda event source mapping can poll the stream and invoke the function for new records. This lets you send personalized welcome emails immediately after a new member item is written without custom polling or extra services.

Create an Amazon EventBridge rule for new item inserts in the table and target the Lambda function is incorrect because EventBridge does not natively emit every individual DynamoDB item mutation so it cannot trigger Lambda for each insert by itself. You would need an intermediary to translate item level changes into events which adds unnecessary complexity.

Use Amazon Kinesis Data Streams to capture new table data and subscribe the Lambda function is incorrect since Kinesis is not automatically populated by DynamoDB changes. You would have to build or run a connector to copy changes into Kinesis which is unnecessary when Streams are available.

Enable DynamoDB Transactions and configure them as the event source for the function is incorrect because transactions provide ACID guarantees for multiple writes and they do not produce events or act as a Lambda trigger. Transactions do not replace Streams for event driven processing.

Remember that DynamoDB Streams plus a Lambda event source mapping gives direct item level triggers for near real time processing. Look for solutions that emit per item events when the exam asks for immediate reactions to table inserts.
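A minimal boto3 sketch of wiring the stream to the function with an event source mapping; the ARNs and function name are placeholders, and the optional filter limits invocations to INSERT records.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Placeholder ARNs and names; the stream ARN comes from the Members table
# after DynamoDB Streams has been enabled on it.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Members/stream/2024-01-01T00:00:00.000",
    FunctionName="send-welcome-email",
    StartingPosition="LATEST",
    BatchSize=10,
    FilterCriteria={  # only invoke the function for newly inserted items
        "Filters": [{"Pattern": json.dumps({"eventName": ["INSERT"]})}]
    },
)
```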

A developer at VerdantPay is building a serverless workflow that processes confidential payment data. The AWS Lambda function writes intermediate files to its /tmp directory, and the company requires those files to be encrypted while at rest. What should the developer do to meet this requirement?

  • ✓ B. Configure the Lambda function to use an AWS KMS customer managed key, generate a data key with KMS, and encrypt data before writing to /tmp

Configure the Lambda function to use an AWS KMS customer managed key, generate a data key with KMS, and encrypt data before writing to /tmp is correct because it enforces application layer encryption for the function’s ephemeral storage and ensures sensitive payment files are encrypted at rest before they are written to the local filesystem.

The secure pattern uses envelope encryption with AWS KMS data keys. With Configure the Lambda function to use an AWS KMS customer managed key, generate a data key with KMS, and encrypt data before writing to /tmp the function calls KMS GenerateDataKey to receive a plaintext data key and a ciphertext blob. The function uses the plaintext key to encrypt the intermediate files in /tmp and then discards the plaintext key. The ciphertext blob can be stored if you need to decrypt later by calling KMS.

Attach the Lambda function to a VPC and mount an encrypted Amazon EBS volume to /tmp is wrong because Lambda cannot mount Amazon EBS volumes into the execution environment. VPC attachment manages networking only and does not provide block device mounts for /tmp.

Enable default encryption on an Amazon S3 bucket with a KMS customer managed key and mount the bucket to /tmp is incorrect because you cannot mount an S3 bucket into the Lambda runtime as a local /tmp directory. S3 encryption protects objects stored in S3 but it does not apply to files written to the function’s ephemeral storage.

Mount an encrypted Amazon EFS access point and rely on EFS encryption at rest for files written to /tmp is not applicable because EFS is a separate filesystem that must be mounted at a chosen path and it does not replace the execution environment’s local /tmp directory. Relying on EFS encryption at rest does not by itself encrypt files placed in the ephemeral /tmp storage.

Remember that Lambda /tmp is not encrypted by default so use KMS data keys and application layer encryption to protect sensitive files before they are written.
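A simplified envelope encryption sketch using boto3 and the third-party cryptography package; the key alias, payload, and file framing are placeholders, and a real implementation would store the encrypted data key with a proper length prefix or metadata.

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# "alias/payments-cmk" is a placeholder customer managed key alias.
resp = kms.generate_data_key(KeyId="alias/payments-cmk", KeySpec="AES_256")
plaintext_key = resp["Plaintext"]       # used only in memory, then discarded
encrypted_key = resp["CiphertextBlob"]  # keep this to call KMS Decrypt later

payload = b'{"patient_id": "p-1001", "reading": 98.6}'  # placeholder data
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, payload, None)

with open("/tmp/record.enc", "wb") as f:
    f.write(encrypted_key + nonce + ciphertext)  # naive framing for this sketch

del plaintext_key  # never persist the plaintext data key
```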

You are creating a YAML AWS CloudFormation stack for BlueRiver Analytics that provisions an Amazon EC2 instance and a single Amazon RDS DB instance. After the stack completes, you want to expose the database connection endpoint in the Outputs so other stacks can consume it. Which intrinsic function should be used to fetch that endpoint value?

  • ✓ C. !GetAtt

The correct choice is !GetAtt because it retrieves an attribute from a resource such as an Amazon RDS DB instance’s Endpoint which you can place in the Outputs for other stacks to consume.

!GetAtt returns named attributes that CloudFormation populates after the resource is created, and the connection endpoint (Endpoint.Address for an RDS DB instance) is one of those attributes for RDS instances and for Aurora clusters. You can expose that endpoint in the Outputs section and then export or import it so other stacks can consume the connection endpoint.

!Sub is for string substitution and interpolation and it cannot by itself fetch runtime attributes from resources. You can combine !Sub with other intrinsics to format a value but it does not retrieve the endpoint attribute on its own.

!FindInMap reads static values from the template Mappings section and does not access attributes created at resource creation time. It is for template lookups and not for fetching resource endpoints.

!Ref returns a resource logical name or a simple identifier and for RDS it typically yields the DB instance identifier rather than the connection endpoint attribute that consumers need.

When you need a runtime attribute such as an endpoint use !GetAtt. Use !Ref for IDs and use !Sub to compose strings from values you obtain with other intrinsics.

A retail analytics startup runs part of its platform on Amazon EC2 and the rest on servers in a private colocation rack. The on-premises hosts are vital to the service, and the team needs to aggregate host metrics and application logs in Amazon CloudWatch alongside EC2 data. What should the developer implement to send telemetry from the on-premises servers to CloudWatch?

  • ✓ B. Install the CloudWatch agent on the data center servers and configure an IAM user’s access key and secret key that allow publishing metrics and logs to CloudWatch

The correct option is Install the CloudWatch agent on the data center servers and configure an IAM user’s access key and secret key that allow publishing metrics and logs to CloudWatch. This approach lets you stream on-premises host metrics and application logs into CloudWatch so they are aggregated with your EC2 telemetry.

This option is correct because on-premises servers cannot use EC2 instance profiles so the CloudWatch agent must authenticate with credentials that are not tied to an EC2 instance. By deploying Install the CloudWatch agent on the data center servers and configure an IAM user’s access key and secret key that allow publishing metrics and logs to CloudWatch you grant a specific IAM identity the CloudWatch and CloudWatch Logs permissions and the agent provides built in collection, buffering, and retry behavior for reliable delivery.

Build a scheduled script that gathers metrics and log files and uploads them to CloudWatch with the AWS CLI is inferior because it requires custom maintenance and it lacks the integrated collection features and resilience that the CloudWatch agent provides.

Register the servers as AWS Systems Manager hybrid managed instances so the CloudWatch agent can assume the Systems Manager role to send data to CloudWatch is incorrect because registering as a hybrid managed instance does not remove the need for valid credentials for CloudWatch API calls and the CloudWatch agent still requires an authenticated IAM identity to publish data.

Install the CloudWatch agent on the data center servers and attach an IAM role with CloudWatch permissions to those machines is not possible because IAM roles and instance profiles can only be attached to AWS resources such as EC2 instances and cannot be applied to physical on premise servers.

Remember that on-premises servers cannot use EC2 instance profiles so the CloudWatch agent must use IAM user access keys with least privilege to publish metrics and logs.

A travel startup named Skylark Journeys plans to roll out refreshed REST endpoints for its iOS and Android app that run behind Amazon API Gateway. The team wants to begin by sending roughly 15% of calls to the new version while the rest continue to use the current release, and they prefer the simplest approach that stays within API Gateway. How can they expose the new version to only a subset of clients through API Gateway?

  • ✓ B. Enable a stage canary in Amazon API Gateway and use canarySettings to shift a small share of traffic to the new deployment

Enable a stage canary in Amazon API Gateway and use canarySettings to shift a small share of traffic to the new deployment is correct because it lets you route a defined percentage of stage traffic to the updated endpoints while keeping the majority on the current release for observation and quick rollback.

The stage canary works at the API Gateway stage level and uses canarySettings to control fractional traffic so you can send roughly 15 percent of calls to the new deployment and monitor CloudWatch metrics and logs before promoting the change. The canary approach keeps routing and rollback within API Gateway so client apps do not need immediate configuration changes.

Configure an Amazon Route 53 weighted routing policy to send a percentage of requests to a second API Gateway domain is not the simplest within API Gateway because DNS weighted routing requires separate domains or stage mappings and is subject to DNS caching which reduces precise control at the stage level.

Launch the new API in a separate VPC and use Amazon CloudFront to split traffic between the two origins introduces extra components and operational overhead because CloudFront is a CDN and origin splitting is heavier than using API Gateway native percentage routing for a straightforward canary rollout.

Use AWS CodeDeploy traffic shifting to gradually move calls to the updated API targets compute deployments and does not directly manage stage-level request routing in API Gateway, so it is an indirect and more complex choice for this specific requirement.

Within API Gateway look for stage-level canarySettings when a question asks for a gradual rollout inside the service because that is the simplest, built-in way to shift a small percentage of traffic.
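As a brief sketch, a canary deployment can be created with boto3; the REST API ID and stage name are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

# REST API ID and stage name are placeholders for this sketch.
apigw.create_deployment(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    description="v2 endpoints behind a 15% canary",
    canarySettings={
        "percentTraffic": 15.0,  # share of stage traffic routed to this deployment
        "useStageCache": False,
    },
)
```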

A media analytics startup runs an Amazon EC2 Auto Scaling group named api-fleet with a maximum size of 5 and a current size of 4. A scale-out policy is configured to add 4 instances when its CloudWatch alarm is in ALARM. When this policy is executed, what will happen?

  • ✓ B. Amazon EC2 Auto Scaling adds only 1 instance to the group

The correct option is Amazon EC2 Auto Scaling adds only 1 instance to the group. The Auto Scaling group has a maximum size of 5 and a current size of 4 so the policy cannot increase the group beyond its configured maximum.

The scale-out policy would compute a desired capacity of 8 when adding 4 instances to the current 4 but Amazon EC2 Auto Scaling enforces the group’s maximum and clamps the desired capacity to 5. As a result the group launches only one additional instance to reach the maximum.

Amazon EC2 Auto Scaling adds 4 instances to the group is incorrect because Auto Scaling never launches more instances than the group’s configured maximum. The service applies the max limit before provisioning new instances.

Amazon EC2 Auto Scaling launches 4 instances and then scales in 3 shortly afterward is incorrect because Auto Scaling does not overprovision beyond the maximum and then immediately terminate excess instances. The limit is enforced up front rather than by launching and then removing instances.

Amazon EC2 Auto Scaling adds 4 instances across multiple Availability Zones because the maximum size applies per Availability Zone is incorrect because the maximum size in this Auto Scaling group is enforced at the group level and not as a per Availability Zone maximum in this scenario.

When a scaling policy runs compute the target desired capacity and then remember that Auto Scaling will clamp it to the group min and max to determine how many instances actually launch.
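The arithmetic can be sketched in a few lines; the minimum size of 1 is an assumed value because the question only states the maximum and current sizes.

```python
# Desired capacity is clamped to the group's min and max before any launches.
current, adjustment, group_min, group_max = 4, 4, 1, 5  # group_min is assumed

desired = max(group_min, min(current + adjustment, group_max))
instances_launched = desired - current
print(desired, instances_launched)  # 5 1
```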

A developer at Northwind Bikes configured an AWS CodeBuild project in the console to use a custom Docker image stored in Amazon ECR. The buildspec file exists and the build starts, but the environment fails when trying to fetch the container image before any build commands run. What is the most likely reason for the failure?

  • ✓ C. The CodeBuild service role lacks permissions to authenticate and pull from Amazon ECR

The CodeBuild service role lacks permissions to authenticate and pull from Amazon ECR is the most likely cause of the failure. The build environment is fetched before the buildspec runs and CodeBuild uses its service role to call ECR APIs to obtain an authorization token and to retrieve image layers.

The service role must grant permissions such as ecr:GetAuthorizationToken, ecr:BatchGetImage, and ecr:GetDownloadUrlForLayer, and the role must be scoped to cover the target repository and region. Without those permissions CodeBuild cannot authenticate or download the image and the image pull fails before any build commands execute.

AWS CodeBuild does not support custom Docker images is incorrect because CodeBuild supports custom images hosted in Amazon ECR, Amazon ECR Public and Docker Hub. Custom environment images are a standard CodeBuild feature and are commonly used.

The image in Amazon ECR has no tags is unlikely to be the root cause because images can be referenced by digest as well as by tag. A missing tag by itself does not prevent a pull when the correct reference is provided.

The build environment is not configured for privileged mode is not relevant in this situation because privileged mode is required when the build needs to run Docker or build images inside the build. Privileged mode does not control the ability to pull the environment image from ECR.

When a CodeBuild job cannot pull a custom ECR image first check the service role for ECR permissions and confirm the role covers the repository and region. Use least privilege while ensuring the necessary ECR actions are allowed.

Orion Outfitters, a global distributor, needs users from partner supplier companies who sign in with their own SAML or OIDC identity providers to add and modify items in two DynamoDB tables named PartnerOrders and SupplierCatalog in Orion’s AWS account without creating individual IAM users for them. Which approach should the developer implement to securely grant these partner users scoped access to those tables?

  • ✓ C. Set up Amazon Cognito Identity Pools to federate supplier IdPs and issue temporary credentials for DynamoDB operations

Set up Amazon Cognito Identity Pools to federate supplier IdPs and issue temporary credentials for DynamoDB operations is correct because it allows Orion to trust external SAML or OIDC identity providers and to exchange their authentication tokens for short lived AWS credentials that are scoped to allow add and modify actions on the PartnerOrders and SupplierCatalog tables.

This solution uses Amazon Cognito Identity Pools to federate supplier IdPs and to obtain temporary AWS credentials via the AWS Security Token Service. You assign IAM roles with least privilege policies that restrict access to just the two DynamoDB tables and to only the necessary operations. Temporary credentials scale cleanly and avoid issuing long lived secrets or creating individual IAM principals for each external user.

Configure Amazon Cognito User Pools to sign in supplier users and authorize DynamoDB access is incorrect because user pools provide authentication and JWTs for applications and they do not by themselves grant AWS credentials for direct DynamoDB SDK access unless you pair them with an identity pool or proxy requests through a trusted backend.

Create an IAM user for each partner user and attach DynamoDB permissions is incorrect because creating per user IAM users produces long lived credentials that are hard to manage for many external partners and it does not follow federation or least privilege best practices.

AWS IAM Identity Center is incorrect because it is focused on workforce single sign on into AWS accounts and applications and it does not target issuing temporary AWS credentials to external application users for direct client SDK access to DynamoDB.

When external partner users need direct client access to AWS services choose Identity Pools and then scope IAM roles to the exact DynamoDB tables and actions required.

BrightParcel, a logistics startup, runs Auto Scaling worker nodes on Amazon EC2 that poll an Amazon SQS standard queue for tasks. Some messages fail repeatedly during processing, and after 4 attempts the team wants those problematic messages automatically routed to an isolated location for later analysis without deleting them. What should they implement to keep failing messages separate for troubleshooting?

  • ✓ B. Implement a dead-letter queue

Implement a dead-letter queue is the correct option because it lets you automatically move messages that fail processing repeatedly to an isolated location for later analysis.

With Implement a dead-letter queue you attach a redrive policy to the source queue and set a maxReceiveCount so that messages that exceed the retry threshold are moved to a separate queue where they can be inspected and replayed if needed.

Enable long polling on the queue improves retrieval efficiency by reducing empty responses but it does not provide any mechanism to quarantine or isolate failing messages for troubleshooting.

Decrease the visibility timeout causes messages to reappear sooner which can increase duplicate processing and contention and it does not separate failing messages for analysis.

Increase the visibility timeout gives more time for a consumer to finish processing a message and it may reduce immediate retries but it still does not route repeatedly failing messages to a separate location for postmortem.

Use a DLQ with a redrive policy and an appropriate maxReceiveCount to quarantine bad messages for debugging rather than relying on visibility timeout tuning.
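A minimal boto3 sketch of attaching a redrive policy to the source queue; the queue URL and DLQ ARN are placeholders, and the maxReceiveCount of 4 matches the scenario’s retry threshold.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Queue URL and DLQ ARN are placeholders for this sketch.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/task-queue",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:task-queue-dlq",
            "maxReceiveCount": "4",  # move to the DLQ after the 4th failed receive
        })
    },
)
```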

A travel booking platform runs several Amazon EC2 web servers behind an Application Load Balancer, and during busy periods the instances sustain around 92 percent CPU utilization. The engineering team has confirmed that processing TLS for HTTPS traffic is consuming most of the CPU on the servers. What actions should they take to move the TLS workload off the application instances? (Choose 2)

  • ✓ B. Request or import an SSL/TLS certificate in AWS Certificate Manager and associate it with the ALB

  • ✓ D. Create an HTTPS listener on the ALB that terminates TLS at the load balancer

Request or import an SSL/TLS certificate in AWS Certificate Manager and associate it with the ALB and Create an HTTPS listener on the ALB that terminates TLS at the load balancer are correct because they move TLS processing off the EC2 instances and onto the load balancer.

When you Request or import an SSL/TLS certificate in AWS Certificate Manager and associate it with the ALB the ALB can present the certificate to clients and handle the TLS handshake. When you Create an HTTPS listener on the ALB that terminates TLS at the load balancer the ALB performs decryption and forwards plain HTTP traffic to the targets which reduces CPU usage on the servers.

Configure an HTTPS listener on the ALB that forwards encrypted traffic to targets without decryption (pass-through) is wrong because encryption remains end to end and the instances still perform the decryption so CPU stays high.

Install the ACM certificate directly on each EC2 instance to terminate TLS on the server is wrong because public ACM certificates are not exportable and you cannot install them on instances which means this would not offload TLS work.

Add AWS WAF on the ALB to reduce CPU from TLS is wrong because AWS WAF inspects application layer requests and does not remove the cost of TLS handshakes or decryption from the web servers.

If HTTPS is driving up EC2 CPU think terminate TLS at the ALB and use ACM for the certificate so the load balancer handles cryptography and the instances handle application logic.
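A hedged boto3 sketch of creating the HTTPS listener with an ACM certificate; all ARNs and the security policy name are placeholders you would replace with your own values.

```python
import boto3

elbv2 = boto3.client("elbv2")

# All ARNs below are placeholders; the certificate comes from ACM in the same Region.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web-alb/1234567890abcdef",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234"}],
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
    DefaultActions=[{
        "Type": "forward",  # targets now receive plain HTTP after TLS termination
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/abcdef1234567890",
    }],
)
```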

You manage multiple Amazon API Gateway REST APIs that invoke AWS Lambda for each release of Orion Fintech’s backend. The company wants to merge these into one API while keeping distinct subdomain endpoints such as alpha.api.orionfin.io, beta.api.orionfin.io, rc.api.orionfin.io, and prod.api.orionfin.io so clients can explicitly target an environment. What should you implement to achieve this in the most maintainable way?

  • ✓ C. Create stages for ALPHA, BETA, RC, and PROD and use stage variables to target the right Lambda or HTTP backend, then map each subdomain to its stage

Create stages for ALPHA, BETA, RC, and PROD and use stage variables to target the right Lambda or HTTP backend, then map each subdomain to its stage is correct because it lets you consolidate all releases under a single API Gateway while preserving distinct hostnames for each environment such as alpha.api.orionfin.io and prod.api.orionfin.io.

With Create stages for ALPHA, BETA, RC, and PROD and use stage variables to target the right Lambda or HTTP backend, then map each subdomain to its stage you assign stage variables that point to Lambda aliases or to different backend endpoints so each stage invokes the correct backend. This approach keeps the API definition identical across environments and simplifies deployments and rollbacks. Custom domain names and base path mappings let you bind each subdomain to its corresponding stage so clients target the intended environment by hostname.

Modify the Integration Request mapping to dynamically rewrite the backend endpoint for each release based on the hostname is incorrect because integration mapping templates are meant to transform requests and cannot reliably select or switch the integration endpoint per stage or per incoming hostname.

Configure Amazon Route 53 weighted records to direct clients to separate APIs for each environment is incorrect because weighted DNS distributes traffic across endpoints and does not map a hostname to a specific stage within a single API. This would also violate the requirement to consolidate releases behind one API Gateway.

Use Lambda layers to separate environment-specific code is incorrect because layers only provide shared libraries and dependencies and they do not control which function alias or version API Gateway invokes for an environment.

Use stages and stage variables to route to environment specific Lambda aliases or backends and map each environment hostname to the matching stage with a custom domain.
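A short boto3 sketch of creating a stage with a stage variable and mapping a custom domain to it; the API ID, deployment ID, alias name, and domain are placeholders, and the custom domain is assumed to already exist in API Gateway.

```python
import boto3

apigw = boto3.client("apigateway")

# IDs, alias names, and the domain are placeholders for this sketch.
apigw.create_stage(
    restApiId="a1b2c3d4e5",
    stageName="BETA",
    deploymentId="dep123",
    variables={"lambdaAlias": "beta"},  # referenced as ${stageVariables.lambdaAlias}
)

# Bind beta.api.orionfin.io (a custom domain already registered with API Gateway)
# to the BETA stage of this API at the root base path.
apigw.create_base_path_mapping(
    domainName="beta.api.orionfin.io",
    restApiId="a1b2c3d4e5",
    stage="BETA",
)
```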

NorthWind Labs, a media analytics startup, exposes a public Amazon API Gateway REST API that backs its web dashboard with Amazon Cognito sign-in. The team is preparing a beta of a major API revision that introduces new resources and breaking changes, and only a small set of internal developers should test it while paying customers continue using the current API uninterrupted. The development team will maintain and iterate on the beta during this period. What is the most operationally efficient way to let the developers invoke the new version without impacting production users?

  • ✓ B. Create a separate API Gateway stage such as beta-v3 that is deployed from the new version and have developers use that stage’s invoke URL

Create a separate API Gateway stage such as beta-v3 that is deployed from the new version and have developers use that stage’s invoke URL is the correct option because it gives developers an isolated invoke URL and deployment that does not affect the production environment.

The beta stage provides a separate deployment lifecycle, logging, and configuration so the team can iterate on breaking changes while paying customers continue to use the stable production stage. Using a stage is operationally efficient because it reuses the same API configuration and authentication setup while providing a distinct URL and deployment target for testing.

Configure a canary release on the existing production stage and give developers a stage-variable URL to hit is unsuitable because canary routing splits live traffic on the same stage and could route some real customers to the breaking beta.

Issue new API keys in API Gateway and require developers to pass those keys to reach the new version is incorrect because API keys control identification, quotas, and usage plans and they do not determine which deployment or stage is invoked.

Build a brand-new API Gateway API for the new handlers and tell developers to call that new API would achieve isolation but it increases operational overhead for deployment, authentication, monitoring, and lifecycle management compared with adding a stage.

Use separate API Gateway stages to isolate beta testing behind a different invoke URL and avoid using canary routing for breaking changes.

A data insights portal for a regional retailer runs on AWS Elastic Beanstalk and stores report data in an Amazon DynamoDB table named AnalyticsEvents. Each request currently performs a full table scan and then filters the results for the user. Adoption is expected to jump over the next 8 weeks, and the table will grow substantially as report requests increase. What should you implement ahead of the growth to keep read performance high while minimizing cost? (Choose 2)

  • ✓ B. Switch to Query requests where possible

  • ✓ D. Lower the page size by setting a smaller Limit value

Switch to Query requests where possible and Lower the page size by setting a smaller Limit value are the correct choices to implement before the expected growth because they reduce how much data each request reads and they limit read capacity consumption while keeping costs low.

Switch to Query requests where possible is the primary fix because queries use partition key and optional sort key conditions to target specific items and partitions and they avoid the full table reads that drive high read capacity consumption and slow responses. Queries therefore reduce RCU use and improve latency without adding new services or significant expense.

Lower the page size by setting a smaller Limit value helps when a Scan is still necessary because it limits the number of items returned per request and spreads read throughput over time which lowers the chance of throttling and reduces per request cost. This is a low cost tuning step while you refactor access patterns toward queries.

Enable DynamoDB Accelerator (DAX) can reduce latency for some read patterns but it adds ongoing cost and it does not address the underlying inefficiency of full table scans. DAX is most beneficial when you already use Query or GetItem and it is not the best first action here.

Adjust the ScanIndexForward setting to sort query results only changes result order and it does not reduce the amount of data scanned or the read capacity used so it will not resolve the scan driven performance problem.

Increase the table’s write capacity units (WCU) raises write throughput but it does not improve read efficiency or lower read costs. Increasing WCU would add cost without addressing the read bottleneck caused by scans.

Prefer Query over Scan and use a small Limit for large result sets. Fix access patterns first and add caching like DAX only after reads are optimized.
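A rough sketch of both adjustments with boto3, assuming user_id is the partition key of AnalyticsEvents (the question does not state the key schema):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("AnalyticsEvents")

# Preferred: a Query reads only the requesting user's items.
# user_id as the partition key is an assumption for this sketch.
user_reports = table.query(KeyConditionExpression=Key("user_id").eq("u-2048"))["Items"]

# Where a Scan is still unavoidable, a small Limit paginates the read
# and spreads consumed capacity over time instead of one large burst.
items, kwargs = [], {"Limit": 50}
while True:
    page = table.scan(**kwargs)
    items.extend(page["Items"])
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```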

NovaPlay, a small streaming startup, uses Amazon CloudFront to serve static web assets to a global audience. Minutes after uploading new CSS and JavaScript files to the S3 origin, some visitors still receive stale versions from edge locations. The cache behavior currently has an 8-hour TTL, but the team needs the new objects to be delivered right away without causing an outage. What should the developer do to replace the cached files with minimal disruption?

  • ✓ B. Submit a CloudFront invalidation request for the updated object paths

Submit a CloudFront invalidation request for the updated object paths is correct because it evicts the specified files from edge caches so that subsequent requests retrieve the new CSS and JavaScript from the S3 origin with minimal disruption.

Submit a CloudFront invalidation request for the updated object paths lets you target exact keys or use wildcards to remove only the changed assets without taking the distribution offline. Invalidations complete quickly relative to waiting for a long TTL to expire and they avoid a global outage. Keep in mind that many invalidation paths can add cost so using versioned file names reduces the need for frequent invalidations.

Disable the CloudFront distribution and then enable it to refresh all edge caches is wrong because disabling the distribution would cause service disruption for all users and it does not provide a safe or reliable way to purge specific objects from every edge location.

Reduce the cache TTL to zero in the cache behavior and wait for propagation is wrong because changing the TTL affects only future caching behavior and it does not evict objects that are already stored at edge locations. Waiting for propagation would still leave visitors receiving stale files until the old cached entries naturally expire or are removed.

Create a new origin with the updated files and repoint the distribution to it is wrong because it introduces unnecessary configuration changes and creates cold caches across the globe. This approach is more complex and still does not explicitly remove the previously cached objects from edge locations.

Use CloudFront invalidations to immediately remove changed assets and adopt versioned file names to avoid frequent invalidations and reduce risk.
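A minimal boto3 sketch of invalidating just the changed paths; the distribution ID is a placeholder and the caller reference only needs to be unique per request.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# The distribution ID is a placeholder; paths can use wildcards.
cloudfront.create_invalidation(
    DistributionId="E1ABCDEF234567",
    InvalidationBatch={
        "Paths": {"Quantity": 2, "Items": ["/css/*", "/js/*"]},
        "CallerReference": str(time.time()),  # any string that is unique per request
    },
)
```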

Orion Ledger, a fintech startup, runs a web service on Amazon EC2 instances behind an Application Load Balancer. Clients must connect to the load balancer over HTTPS. The developer obtained a public X.509 TLS certificate for the site from AWS Certificate Manager. What should the developer do to establish secure client connections to the load balancer?

  • ✓ C. Associate the ACM certificate with the ALB HTTPS listener by using the AWS Management Console

Associate the ACM certificate with the ALB HTTPS listener by using the AWS Management Console is correct because terminating TLS at the Application Load Balancer lets the ALB handle the TLS handshake for client connections and uses the managed certificate from AWS Certificate Manager without installing private keys on the EC2 instances.

When you attach the certificate to an HTTPS listener on the ALB the load balancer presents the public certificate to clients and establishes secure client to ALB connections. This uses ACM managed certificates so you do not need to export private keys or copy certificates to targets and you can choose to enable TLS from the ALB to the back end instances if you require end to end encryption.

Configure each EC2 instance to use the ACM certificate and terminate TLS on the instances is incorrect because ACM public certificates cannot be exported for installation on instances and terminating at each instance is unnecessary unless you specifically need end to end encryption.

Export the certificate private key to an Amazon S3 bucket and configure the ALB to load the certificate from S3 is incorrect because ACM public certificates do not allow private key export and the ALB does not load certificates from S3. The ALB uses certificates provided by ACM or by IAM.

Place Amazon CloudFront in front of the ALB and attach the ACM certificate to the CloudFront distribution is incorrect because adding CloudFront is an unnecessary extra component for the stated requirement and CloudFront also has its own regional ACM constraints for edge certificates which adds complexity rather than directly securing client connections to the ALB.

For HTTPS with an ALB remember to think HTTPS listener plus ACM certificate and recall that the private keys of ACM public certificates cannot be exported.

A ticketing startup, MetroTix, is running a flash sale that causes a spike in events on an Amazon Kinesis Data Streams stream. To scale, the team split shards so the stream grew from 5 shards to 12 shards. The consumer uses the Kinesis Client Library and runs one worker per Amazon EC2 instance. What is the maximum number of EC2 instances that can be launched to process this stream for this application at the same time?

  • ✓ B. 12

The correct answer is 12. A Kinesis Client Library consumer assigns exactly one worker to each shard via a lease so the maximum number of concurrent application workers equals the number of shards, and with 12 shards and one worker per EC2 instance you can launch up to 12 EC2 instances to process the stream at the same time.

When the stream was split from 5 shards to 12 shards the available parallel processing slots increased to 12 so you scale out by adding shards and adding instances. The KCL coordinates shard leases and guarantees that only one worker holds a shard lease at a time so instances beyond the shard count will not increase throughput and will remain idle for this application.
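
A quick boto3 sketch of checking the ceiling on useful workers, assuming a hypothetical stream name:

```python
import boto3

kinesis = boto3.client("kinesis")

# The open shard count is the ceiling on useful KCL workers for one application.
summary = kinesis.describe_stream_summary(StreamName="ticket-events")  # hypothetical stream name
open_shards = summary["StreamDescriptionSummary"]["OpenShardCount"]
print(f"Launch at most {open_shards} worker instances for this consumer application")
```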

24 is incorrect because the KCL does not allow two workers from the same application to process the same shard concurrently and you cannot double the worker count beyond the shard count.

5 is incorrect because that number reflects the previous shard count and it underutilizes the additional parallelism after the split to 12 shards.

1 is incorrect because a single instance can process all shards but it serializes work and it is not the maximum number of instances that can run concurrently to use shards in parallel.

Remember that KCL assigns one worker per shard so the maximum parallel consumers for an application equals the stream shard count.

A retail technology startup is breaking an older order-processing system into microservices on AWS. One service will run several AWS Lambda functions, and the team plans to orchestrate these invocations with AWS Step Functions. What should the Developer create to define the workflow and coordinate the function executions?

  • ✓ C. Define a Step Functions state machine using Amazon States Language

The correct option is Define a Step Functions state machine using Amazon States Language. This option names the workflow specification that AWS Step Functions executes.

AWS Step Functions workflows are written as state machines in the JSON-based Amazon States Language. A state machine definition describes each step and the transitions and it coordinates Lambda function invocations as well as branching, retries, and error handling, so the developer must author that definition to express the workflow logic.
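
A minimal sketch of registering a state machine whose Amazon States Language definition chains two Lambda tasks; the function ARNs, role ARN, and state machine name are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# A small ASL definition that invokes two Lambda functions in sequence.
definition = {
    "Comment": "Order processing workflow",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ValidateOrder",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ChargePayment",
            "End": True,
        },
    },
}

response = sfn.create_state_machine(
    name="OrderProcessing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExecutionRole",
)
print(response["stateMachineArn"])
```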

Invoke StartExecution to begin a workflow run is incorrect because StartExecution only starts an execution of an already defined state machine and it does not create the workflow definition.

Deploy the orchestration with an AWS CloudFormation YAML template is incorrect because CloudFormation can provision an AWS::StepFunctions::StateMachine resource but you still must provide the Amazon States Language definition, so CloudFormation alone does not author the workflow logic.

Amazon EventBridge is incorrect because EventBridge is an event bus for routing events and it does not provide the stateful, step-by-step orchestration that Step Functions offers.

When a question asks you to define workflow logic look for terms like state machine and Amazon States Language. Remember that APIs such as StartExecution run existing workflows and infrastructure templates still need the ASL definition.

The engineering team at Solstice Shipping operates a hybrid environment with about 80 on-premises Linux and Windows servers and a set of Amazon EC2 instances. They need to gather OS-level metrics like CPU, memory, disk, and network from all machines and send them to a single view in Amazon CloudWatch with minimal custom effort. What is the most efficient way to achieve this?

  • ✓ B. Deploy the CloudWatch agent to both the on-premises servers and the EC2 instances

The most efficient choice is Deploy the CloudWatch agent to both the on-premises servers and the EC2 instances because this approach collects OS level metrics such as CPU, memory, disk, and network from Linux and Windows hosts and publishes them directly to CloudWatch with a consistent configuration and minimal custom code.

Deploy the CloudWatch agent to both the on-premises servers and the EC2 instances supports a unified configuration for metrics and logs and works across hybrid environments, so you get a single view in CloudWatch without maintaining ad hoc scripts or separate exporters. The agent is managed by AWS and can be configured centrally which reduces operational overhead and improves consistency across the fleet.
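
As a rough sketch, an agent configuration that collects CPU, memory, disk, and network metrics could look like the JSON below, written out from Python for illustration; the namespace and install path are assumptions, so confirm the measurement names and config location against your agent version.

```python
import json

# Minimal CloudWatch agent configuration sketch for host-level metrics.
agent_config = {
    "metrics": {
        "namespace": "HybridFleet",  # hypothetical custom namespace
        "metrics_collected": {
            "cpu": {"measurement": ["usage_idle", "usage_user"]},
            "mem": {"measurement": ["mem_used_percent"]},
            "disk": {"measurement": ["used_percent"], "resources": ["/"]},
            "net": {"measurement": ["bytes_sent", "bytes_recv"]},
        },
    }
}

# Typical config location on Linux hosts; adjust for Windows installs.
with open("/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w") as f:
    json.dump(agent_config, f, indent=2)
```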

Install AWS Distro for OpenTelemetry on all servers and export metrics to CloudWatch is not ideal because the AWS Distro for OpenTelemetry is focused on application telemetry and usually requires more setup to collect host level metrics and to configure exporters, so it is less efficient for simple OS level metric collection.

Enable CloudWatch detailed monitoring for both EC2 instances and on-premises servers is incorrect because detailed monitoring is an EC2 feature and does not apply to on-premises servers, so it cannot provide host metrics from the on-premises portion of the hybrid environment.

Use built-in EC2 metrics in CloudWatch and push on-premises metrics with AWS CLI put-metric-data scripts can work technically but it introduces scripting, maintenance, and operational risk, and it does not provide the consistent, managed collection and configuration that the CloudWatch agent offers.

When the exam describes hybrid host metrics think CloudWatch agent first and avoid answers that require extensive custom scripting.

NovaTrail Labs has opened a new AWS account and is setting up its first IAM users and permission policies for a small engineering team. Which approaches align with AWS recommended practices for managing user permissions? (Choose 2)

  • ✓ B. Assign permissions by adding users to IAM groups

  • ✓ D. Define reusable customer managed policies instead of inline policies attached to a single identity

Assign permissions by adding users to IAM groups and Define reusable customer managed policies instead of inline policies attached to a single identity are the correct approaches because they promote scalable, reusable, and auditable permission management.

Assign permissions by adding users to IAM groups lets you manage permissions in one place and apply them consistently to many users so onboarding and offboarding are simpler and you reduce configuration drift while enforcing least privilege.

Define reusable customer managed policies instead of inline policies attached to a single identity allows you to reuse policies across users and roles and to update, version, and audit permissions centrally while inline policies remain tied to a single principal and are harder to maintain.
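
A short boto3 sketch of this pattern, with a hypothetical group, policy, and user; the policy document is intentionally simple and only for illustration.

```python
import json
import boto3

iam = boto3.client("iam")

# Create a reusable customer managed policy (hypothetical S3 read-only scope).
policy = iam.create_policy(
    PolicyName="EngineeringS3ReadOnly",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "*",
        }],
    }),
)

# Attach the policy to a group once, then manage membership instead of per-user policies.
iam.create_group(GroupName="engineering")
iam.attach_group_policy(GroupName="engineering", PolicyArn=policy["Policy"]["Arn"])
iam.add_user_to_group(GroupName="engineering", UserName="dev-alice")  # hypothetical user
```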

Always prefer customer managed policies over AWS managed policies is not correct because AWS managed policies are maintained by AWS and are a recommended starting point and you should create customer managed policies only when you need customization or tighter control.

Share a common IAM user among developers to reduce management overhead is wrong because shared credentials eliminate individual accountability and make credential rotation and auditing difficult and you should use individual identities or federated access with roles.

Use AWS Organizations service control policies to grant permissions to individual users is incorrect because service control policies act as account level guardrails and do not grant permissions to users and you still need IAM policies or roles to authorize actions.

Prefer groups for assigning common permissions and start with AWS managed policies then create customer managed policies for customization and reusability.

A developer at Orion Analytics created an AWS Lambda function that reads items from an Amazon DynamoDB table named ReportsV3. The team now needs a simple public HTTPS endpoint that accepts HTTP GET requests and forwards the full request context to the function while keeping operational overhead and cost low. Which approach should the developer choose?

  • ✓ D. Create an Amazon API Gateway API with Lambda proxy integration

Create an Amazon API Gateway API with Lambda proxy integration is the correct choice because it provides a simple public HTTPS endpoint that accepts GET requests and forwards the full request context to the Lambda function.

The Create an Amazon API Gateway API with Lambda proxy integration option forwards headers, path parameters, query strings, and the payload directly to your function with minimal mapping. It gives a managed HTTPS endpoint and supports GET semantics so your function can inspect the incoming request before querying the ReportsV3 DynamoDB table. Choosing API Gateway for proxying keeps configuration and operational overhead low and can be cost efficient, especially if you select the HTTP API option for simple proxy scenarios.
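
A minimal handler sketch showing the shape of a proxy integration event, assuming the ReportsV3 table uses a partition key named id; the key name and query parameter are assumptions for illustration.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ReportsV3")

def lambda_handler(event, context):
    # With proxy integration the full request context arrives in the event:
    # headers, path, queryStringParameters, requestContext, and body.
    params = event.get("queryStringParameters") or {}
    report_id = params.get("id")

    if not report_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}

    item = table.get_item(Key={"id": report_id}).get("Item")  # assumes 'id' is the partition key
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item or {}),
    }
```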

Create an API Gateway API using a POST method does not satisfy the requirement to handle GET requests for retrieving data and using POST would be semantically incorrect for a read operation. It also does not address the need to forward the full request context unless paired with proxy integration.

Configure an Application Load Balancer with the Lambda function as a target can work technically but it introduces additional setup and potentially higher cost compared to API Gateway for a straightforward HTTP-to-Lambda proxy. For a simple public GET endpoint that must forward full request details, ALB usually adds more operational overhead.

Amazon Cognito User Pool with Lambda triggers focuses on authentication and user management and it does not provide a generic HTTP proxy that forwards arbitrary GET requests to Lambda. Cognito can be added if you need authentication but it does not replace a public HTTPS endpoint for forwarding requests.

Remember that API Gateway with Lambda proxy integration forwards headers and query strings by default so choose it when you need a minimal, public HTTPS GET endpoint that passes the full request context to Lambda.

A mid-sized fintech startup named Northwind Capital uses AWS CodePipeline to deliver updates to several Elastic Beanstalk environments. After almost two years of steady releases, the application is nearing the service’s cap on stored application versions, blocking registration of new builds. What is the best way to automatically clear out older, unused versions so that future deployments can proceed?

  • ✓ B. Elastic Beanstalk application version lifecycle policy

The correct option is Elastic Beanstalk application version lifecycle policy. This native feature automatically removes older unused application versions based on age or maximum count when new versions are created and prevents you from hitting the application version quota so deployments can continue.

Using the Elastic Beanstalk application version lifecycle policy is the best approach because it is managed by Elastic Beanstalk and it cleans up both the stored artifacts and the service metadata as configured. This avoids the need to build and operate custom cleanup code and ensures that version records do not become dangling or inconsistent.
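
A minimal boto3 sketch of enabling a max-count lifecycle rule; the application name, service role ARN, and retention count are placeholders.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Keep at most 200 application versions and delete the source bundles from S3.
eb.update_application_resource_lifecycle(
    ApplicationName="northwind-orders",
    ResourceLifecycleConfig={
        "ServiceRole": "arn:aws:iam::111122223333:role/aws-elasticbeanstalk-service-role",
        "VersionLifecycleConfig": {
            "MaxCountRule": {
                "Enabled": True,
                "MaxCount": 200,
                "DeleteSourceFromS3": True,
            },
        },
    },
)
```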

AWS Lambda is incorrect because using Lambda would require writing and maintaining custom functions and schedules plus handling permissions and failure cases rather than relying on a built in cleanup option.

Amazon S3 Lifecycle rules are incorrect because S3 lifecycle policies act only on S3 objects and do not remove Elastic Beanstalk application version metadata, which can leave Elastic Beanstalk still counting versions.

Elastic Beanstalk worker environment is incorrect because worker environments are intended for background task processing and they do not provide any mechanism to manage or prune stored application versions.

If you approach a version limit, enable an application version lifecycle policy for the Elastic Beanstalk application and verify that it is configured to remove unused versions by age or count.

A fintech startup named NorthPeak Analytics runs a scheduled agent on an Amazon EC2 instance that aggregates roughly 120 GB of files each day from three Amazon S3 buckets. The data science group wants to issue spur-of-the-moment SQL queries against those files without ingesting them into a database. To keep operations minimal and pay only per query, which AWS service should be used to query the data directly in S3?

  • ✓ B. Amazon Athena

Amazon Athena is the correct choice for this scenario because it provides serverless ad hoc SQL queries directly on data stored in Amazon S3 and charges you on a pay per query basis which keeps operations minimal and costs aligned with occasional analysis.

Amazon Athena supports standard SQL and integrates with the AWS Glue Data Catalog for schema discovery which allows the data science team to run spur of the moment queries without ingesting the 120 GB per day of files into a database. Being serverless means there is no cluster to provision or manage and you only pay for the queries you run which fits the requirement to minimize operational overhead.
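
A minimal boto3 sketch of an ad hoc query; the database, table, and results bucket names are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Run an ad hoc query directly against the files already in S3.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM daily_files WHERE dt = '2025-06-01' GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```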

Amazon EMR is designed for managed big data processing and requires provisioning and managing clusters and running frameworks such as Spark or Hive which adds unnecessary operational complexity for simple ad hoc SQL on S3.

Amazon Redshift Spectrum can query data in S3 but it relies on a Redshift cluster and external table setup which introduces baseline costs and more administration compared with a purely serverless query service.

AWS Step Functions is a workflow orchestration service that manages state machines and does not execute SQL queries on S3 data so it does not meet the requirement for on demand SQL querying.

When the exam scenario asks for ad hoc SQL directly on S3 with minimal ops and pay per query pricing choose Amazon Athena instead of services that require clusters or a data warehouse.

A travel booking startup runs its front end on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. During peak promotions the group scales to about 80 instances, and user sessions must persist for roughly 45 minutes and be available to any instance even as nodes are replaced. Where should the developer keep the session state so requests can be served by any instance?

  • ✓ B. Store session data in an Amazon ElastiCache for Redis cluster

Store session data in an Amazon ElastiCache for Redis cluster is the correct choice because it centralizes session state so any EC2 instance behind the Application Load Balancer can access user sessions even as instances are replaced by Auto Scaling.

Redis provides a centralized in memory store with very low latency and it supports time to live based expiry and replication and optional persistence so sessions survive instance termination and remain available across the fleet. Using ElastiCache decouples session state from individual instance storage and lets the ALB route requests to any healthy instance without losing user session data.
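
A minimal sketch of session reads and writes with the redis-py client and a 45 minute TTL; the endpoint and key naming convention are assumptions.

```python
import json
import redis

# Placeholder ElastiCache for Redis primary endpoint.
r = redis.Redis(host="sessions.abc123.use1.cache.amazonaws.com", port=6379)

SESSION_TTL_SECONDS = 45 * 60  # sessions must persist roughly 45 minutes

def save_session(session_id: str, data: dict) -> None:
    # SETEX writes the value and the expiry in one call so stale sessions age out.
    r.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id: str) -> dict | None:
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```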

Store session data on ephemeral instance store volumes is incorrect because the instance store is local to one instance and data is lost when that instance stops, terminates, or fails, so it cannot be shared across the Auto Scaling group.

Store session data on the instance root filesystem is incorrect because the root volume is per instance and not shared so scaling events or instance replacements will strand or lose sessions.

Store session data on a shared Amazon EBS volume attached to multiple instances is incorrect because EBS Multi Attach is limited to specific volume types and to instances in the same Availability Zone and it requires application level coordination so it does not offer a practical scalable multi AZ pattern for web session state behind an ALB.

Think centralized and in-memory for shared session state behind a load balancer so sessions remain available when instances scale or are replaced.

An engineer at Aurora Analytics is troubleshooting an AWS Lambda function deployed with the AWS CDK. The function runs without throwing exceptions, but nothing appears in Amazon CloudWatch Logs and no log group or log stream exists for the function. The code includes logging statements. The function uses an IAM execution role that trusts the Lambda service principal but has no permission policies attached, and the function has no resource-based policy. What change should be made to enable log creation in CloudWatch Logs?

  • ✓ B. Attach the AWSLambdaBasicExecutionRole managed policy to the Lambda execution role

Attach the AWSLambdaBasicExecutionRole managed policy to the Lambda execution role is correct because that managed policy gives the execution role the CloudWatch Logs permissions required for the function to create log groups and streams and to write log events.

Lambda uses the permissions of its execution role to call other AWS services so if the role has no policies attached the function cannot create a log group or stream or put log events even when the code emits logs. Attaching AWSLambdaBasicExecutionRole grants the standard actions such as logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents so the function can create and write to its CloudWatch Logs resources.
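
A one-call boto3 sketch of the fix; the role name is a placeholder, while the managed policy ARN is the standard AWSLambdaBasicExecutionRole ARN.

```python
import boto3

iam = boto3.client("iam")

# The role name is hypothetical; the policy grants logs:CreateLogGroup,
# logs:CreateLogStream, and logs:PutLogEvents to the execution role.
iam.attach_role_policy(
    RoleName="aurora-analytics-lambda-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```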

Attach a resource-based policy to the function that grants logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents is wrong because resource-based policies determine who can invoke or manage the function and do not grant the function permission to call CloudWatch Logs. The necessary permissions must be on the execution role.

Attach a resource-based policy to the function that grants only logs:PutLogEvents is also incorrect because it still uses the wrong policy type and it lacks the create permissions needed to make a log group and stream so logs cannot be established.

Attach the CloudWatchLambdaInsightsExecutionRolePolicy managed policy to the execution role is not sufficient because that policy enables Lambda Insights for enhanced metrics and diagnostics and does not replace the basic CloudWatch Logs actions required for standard function logging. Insights is additive and the basic logs permissions are still required.

When a Lambda produces no logs and no log group exists check the execution role first and attach AWSLambdaBasicExecutionRole so the function can create and write CloudWatch Logs.

A fashion marketplace’s engineering team is preparing for a 36-hour summer flash sale. The product lead requires an Amazon ElastiCache strategy that can handle sudden surges while ensuring product prices and descriptions remain fully consistent with the source database at all times. Which approach should they implement to keep the cache synchronized with the backend during updates?

  • ✓ C. Commit to the database, then explicitly invalidate the affected cache keys

The correct choice is Commit to the database, then explicitly invalidate the affected cache keys. This option treats the database as the source of truth and ensures the cache is not allowed to serve stale product prices or descriptions during the flash sale.

Committing first and then invalidating implements the cache aside pattern with explicit invalidation. After a successful write, removing the related cache keys forces the next read to fetch the authoritative data from the database and repopulate the cache, so consistency is maintained even under sudden traffic spikes.
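
A minimal sketch of the write path under this pattern; the Redis endpoint and the database helper are hypothetical stand-ins for the real dependencies.

```python
import redis

# Placeholder ElastiCache for Redis endpoint.
cache = redis.Redis(host="catalog-cache.abc123.use1.cache.amazonaws.com", port=6379)

def update_price_in_db(product_id: str, new_price: str) -> None:
    """Stand-in for the real database write, which is the source of truth."""
    ...

def update_product_price(product_id: str, new_price: str) -> None:
    # 1. Commit the change to the database first.
    update_price_in_db(product_id, new_price)
    # 2. Only after the commit succeeds, invalidate the cached copy so the next
    #    read repopulates the cache from the database.
    cache.delete(f"product:{product_id}")
```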

Write to the cache first and asynchronously apply the change to the database is unsafe because the asynchronous database write can fail or lag and cause the cache and database to diverge which breaks the requirement for never-stale data.

Commit to the database and rely on the item TTL to refresh the cache later is incorrect because TTL based refresh allows stale values to be served until they expire and that does not meet a strict consistency requirement during a high traffic sale.

Amazon CloudFront is a content delivery network that caches edge content and it does not provide application level coherence for dynamic product data in ElastiCache so it does not solve the consistency problem.

When strict consistency is required write to the database first and then explicitly invalidate cache keys so subsequent reads repopulate the cache with fresh data.

A telemedicine startup, HelixCare, runs an application on Amazon EC2 that produces thousands of tiny JSON files around 2 KB each containing sensitive patient data. The files are written to a vendor supplied network attached storage system that does not integrate with AWS services. The team wants to use AWS KMS in the safest way that keeps key material within KMS whenever possible. What should they do?

  • ✓ C. Encrypt each file directly by calling AWS KMS Encrypt with a customer managed KMS key

Encrypt each file directly by calling AWS KMS Encrypt with a customer managed KMS key is the correct choice because AWS KMS accepts plaintext up to 4 KB per Encrypt request which covers the roughly 2 KB JSON files and this approach keeps key material inside KMS rather than exposing data keys outside the service.

With Encrypt each file directly by calling AWS KMS Encrypt with a customer managed KMS key, the application sends the small plaintext over TLS to KMS, and KMS performs the encryption and returns ciphertext, so you do not need to store or manage plaintext data keys in your application. Using a customer managed key also gives you control over access policies and key rotation, which improves security for sensitive patient data.
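
A minimal boto3 sketch of encrypting one small file directly with KMS; the key alias and file names are placeholders.

```python
import boto3

kms = boto3.client("kms")

# Read one small JSON file; a direct Encrypt call accepts up to 4 KB of plaintext.
with open("patient-record.json", "rb") as f:
    plaintext = f.read()

ciphertext = kms.encrypt(
    KeyId="alias/helixcare-records",  # hypothetical customer managed key alias
    Plaintext=plaintext,
)["CiphertextBlob"]

with open("patient-record.json.enc", "wb") as f:
    f.write(ciphertext)
```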

Generate a data key with a customer managed KMS key and use that data key to envelope encrypt each file is a valid pattern for larger objects but it requires handling plaintext data keys in application memory or storage which increases exposure and operational complexity when each file is small.

Encrypt the files directly using an AWS managed KMS key is not appropriate because AWS managed keys are owned and managed by AWS for service integrations and they are generally not available for arbitrary direct Encrypt API calls from customer applications.

Use the AWS Encryption SDK with KMS to generate data keys and encrypt the files relies on envelope encryption and therefore exposes plaintext data keys in the client process and adds complexity that is unnecessary for payloads that fit within KMS Encrypt size limits.

When payloads are under 4 KB prefer direct KMS Encrypt with a customer managed key to keep key material inside KMS and avoid managing plaintext data keys.

A fintech startup runs dozens of Dockerized microservices and plans to move them to Amazon ECS. Traffic is spiky, and the finance team wants costs to accrue only while individual tasks are actually running rather than paying for idle servers. Which choice best meets these goals?

  • ✓ B. Amazon ECS using the Fargate launch type

Amazon ECS using the Fargate launch type is correct because it bills for the vCPU and memory requested by each running task so costs accrue only while tasks are actually running and you do not manage or pay for idle EC2 instances.

Amazon ECS using the Fargate launch type provides a serverless container model where AWS provisions and meters the compute per task so it fits spiky traffic and the finance requirement to pay only for active tasks.
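
A minimal boto3 sketch of launching a task on Fargate; the cluster, task definition, and subnet are placeholders, and the task definition is assumed to use the awsvpc network mode with Fargate compatibility.

```python
import boto3

ecs = boto3.client("ecs")

# Billing for the requested vCPU and memory starts when the task starts
# and stops when the task stops.
ecs.run_task(
    cluster="orders-cluster",
    taskDefinition="payments-service:7",
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```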

An Amazon ECS service with Auto Scaling is incorrect because Auto Scaling only adjusts how many tasks run and it does not change whether you are billed per task or per instance so it does not by itself guarantee the per-task billing requirement.

Amazon ECS with the EC2 launch type is incorrect because you are billed for EC2 instances regardless of task activity so you can end up paying for idle capacity when traffic is low.

Amazon ECS with EC2 Spot capacity providers is incorrect because although Spot can lower instance costs you are still billed at the instance level, and Spot instances can be interrupted, so this option neither guarantees paying only for running tasks nor suits workloads that cannot tolerate interruption.

When a question signals pay per task or serverless containers prefer the Fargate launch type over EC2 based options.

A media startup has deployed an Application Load Balancer in front of several Amazon EC2 instances for its StreamBox app. The target group now shows every instance as unhealthy, but browsing directly to an instance’s public IP on port 8080 successfully loads the site. Which issues could explain why the load balancer continues to mark these targets as unhealthy? (Choose 2)

  • ✓ B. The target group health check path or port is not aligned with the application’s actual health endpoint

  • ✓ D. The instance security group does not allow inbound traffic from the load balancer security group on required ports

The target group health check path or port is not aligned with the application’s actual health endpoint and The instance security group does not allow inbound traffic from the load balancer security group on required ports are correct because the Application Load Balancer will mark targets unhealthy when it cannot successfully query the expected health endpoint or when its traffic is blocked by instance security group rules.

The target group health check path or port is not aligned with the application’s actual health endpoint is correct because if the load balancer is probing the wrong URL path or the wrong port it will get failing responses and mark the target unhealthy even though direct requests to the instance on port 8080 succeed. You should verify the target group health check settings match the service health endpoint and port.

The instance security group does not allow inbound traffic from the load balancer security group on required ports is correct because the instance must permit connections from the ALB security group for both the listener port and the health check port. If the instance security group only allows other sources the ALB cannot reach the application and health checks will fail.
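
A minimal boto3 sketch of both fixes; the ARNs, security group IDs, health check path, and port are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Align the health check with the application's real endpoint and port.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/streambox/abc123",
    HealthCheckProtocol="HTTP",
    HealthCheckPort="8080",
    HealthCheckPath="/healthz",
)

# Allow the ALB's security group to reach the instances on the app/health port.
ec2.authorize_security_group_ingress(
    GroupId="sg-0instance1234",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": "sg-0alb5678"}],
    }],
)
```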

Elastic IP addresses must be attached to the EC2 instances when used behind an Application Load Balancer is incorrect because ALBs route to targets using private IP addresses inside the VPC and they do not require instances to have Elastic IPs.

Cross-zone load balancing is disabled on the load balancer is incorrect because cross zone load balancing only affects how traffic is distributed across Availability Zones and it does not prevent health checks from succeeding.

The EBS volumes on the instances were mounted incorrectly is incorrect because the application responds when accessed directly by IP so the instance storage and mounting are unlikely to be the cause of failed health checks.

First verify the health check path and port in the target group and then confirm the instance security group allows the ALB security group to reach the health and listener ports.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
