Sample Questions for the AWS Developer Certification


These AWS questions come from the Udemy AWS Developer Practice course and from certificationexams.pro

AWS Developer Associate Sample Questions & Answers

The AWS Certified Developer Associate exam tests your ability to design, build, and deploy applications that make full use of the AWS ecosystem.

It focuses on key services like Lambda for serverless computing, S3 for object storage, DynamoDB and RDS for databases, and SQS or SNS for event-driven messaging. Understanding how these services integrate is essential to passing the exam and becoming an effective cloud developer.

To help you prepare, this guide provides AWS Developer Practice Questions that mirror the structure and difficulty of the real test. You will find Real AWS Developer Exam Questions and AWS Developer Exam Sample Questions that cover core objectives such as writing and deploying code using the AWS SDK, implementing security with IAM roles, managing queues and streams, and optimizing data access with S3 and DynamoDB.

Targeted Developer Exam Topics

Each section includes AWS Developer Questions and Answers written to teach as well as test, giving you insight into how to reason through real world scenarios. For additional preparation, you can explore the AWS Developer Exam Simulator and full AWS Developer Practice Tests to assess your readiness.

These materials are not AWS Developer Exam Dumps or copied content. They are original study resources built to strengthen your understanding of AWS architecture, improve your test taking skills, and give you the confidence to succeed. Whether you are reviewing an AWS Developer Braindump style summary or tackling full length practice exams, these resources will help you master the knowledge areas required to pass the AWS Developer Certification exam.

Question 1

An engineer at Scrumtuous Kart is launching a serverless API on AWS that places Amazon API Gateway in front of AWS Lambda and stores orders in an Amazon DynamoDB table named OrdersV2. The single-page app sends bearer tokens such as OAuth 2.0 JWTs or SAML assertions with a 45-minute lifetime, and the backend must evaluate the token to identify the caller and produce an IAM policy before invoking methods. Which API Gateway feature should be used to implement this custom authorization?

  • ❏ A. API Gateway resource policy

  • ❏ B. Lambda authorizer

  • ❏ C. Amazon Cognito user pools authorizer

  • ❏ D. Cross-Origin Resource Sharing (CORS)

Question 2

In a CloudFormation template, what does specifying Transform: AWS::Serverless-2016-10-31 mean?

  • ❏ A. An intrinsic function for substitutions

  • ❏ B. Invokes the AWS::Serverless transform so CloudFormation expands SAM syntax into standard resources

  • ❏ C. Declares the AWS::Include transform for template imports

  • ❏ D. Marks the Parameters section

Question 3

You lead an engineering team at a logistics startup and are granting restricted permissions to new hires. In the console you created an IAM group named app-devs for onboarding, and on your laptop you configured an AWS CLI profile called junior-dev. You need to verify that this identity cannot terminate Amazon EC2 instances without making any real changes. What should you run to safely confirm the restriction?

  • ❏ A. AWS IAM Access Analyzer policy validation

  • ❏ B. Use the AWS CLI with the --dry-run flag

  • ❏ C. Use the AWS CLI --test flag

  • ❏ D. Retrieve role policies from the EC2 instance metadata service and evaluate them in the IAM policy simulator

Question 4

How can you ensure a write to DynamoDB only succeeds if the item still has the expected attribute values, thereby preventing overwriting concurrent updates?

  • ❏ A. DynamoDB Streams

  • ❏ B. Update expressions

  • ❏ C. Conditional writes using a ConditionExpression

  • ❏ D. DynamoDB transactions (TransactWriteItems)

Question 5

A developer at McKenzie Payments is building an alerting microservice on a fleet of Amazon EC2 instances. When the fraud engine flags a suspicious charge, the service must post a message to the company’s incident chat through a third-party messaging API. The API requires an access token that must be encrypted at rest and in transit, and workloads in two other AWS accounts must be able to retrieve the same token with minimal operational overhead. Which approach best satisfies these needs?

  • ❏ A. Store a KMS-encrypted access token in an Amazon DynamoDB table and attach a resource-based policy to the table for cross-account access; grant the EC2 instance role permission and read the token from the table

  • ❏ B. Place the access token as an object in Amazon S3 using SSE-KMS and use a bucket policy to allow access from the other accounts; grant the EC2 instance role permission and download the token

  • ❏ C. Use AWS Secrets Manager with a customer-managed KMS key to store the token and attach a resource-based policy to the secret for cross-account retrieval; grant the EC2 instance role permission and fetch the secret from Secrets Manager

  • ❏ D. Use AWS Systems Manager Parameter Store SecureString with a customer-managed KMS key and attach a resource-based policy to the parameter; grant the EC2 instance role permission and call GetParameter with decryption

Question 6

In DynamoDB how can you make a PutItem operation succeed only if no item with the same partition key already exists to prevent overwriting an existing item?

  • ❏ A. DynamoDB Streams

  • ❏ B. PutItem with a ConditionExpression (conditional write)

  • ❏ C. BatchWriteItem

  • ❏ D. Atomic counters

Question 7

A small media startup, Pickering Springfield Studio, built a CLI tool that surfaces sticker images by keyword and streams random URLs from a third-party API to a developer laptop. After the latest commit, each run is roughly 3 to 5 seconds slower, and you suspect the new dispatchRequest() helper is causing the delay. What should you do to accurately measure the latency introduced by that function?

  • ❏ A. Review Amazon CloudWatch Logs to investigate the slowdown

  • ❏ B. AWS CodeGuru Profiler

  • ❏ C. With AWS X-Ray, insert a custom subsegment around dispatchRequest() to time the code path

  • ❏ D. In AWS X-Ray, turn off sampling so that every request is traced

Question 8

Which AWS storage service should be used so that a Lambda function running in private subnets can share appendable files among multiple compute services and provide on-premises access over a site-to-site VPN when processing objects of about 90 MB?

  • ❏ A. Amazon FSx for NetApp ONTAP

  • ❏ B. Amazon EFS

  • ❏ C. Amazon S3

  • ❏ D. Amazon FSx for Windows File Server

Question 9

A developer at Scrumtuous Media is deploying Docker-based services on Amazon ECS. One container runs a transactional database and must only be scheduled onto container instances designated as part of the db-core task group. What should the developer use to ensure the database task is placed on that group?

  • ❏ A. ECS capacity provider

  • ❏ B. Task placement constraint

  • ❏ C. Cluster Query Language

  • ❏ D. ECS container agent

Question 10

Which of the following statements are correct when configuring Amazon ElastiCache for Redis with cluster mode enabled and deploying across multiple Availability Zones? (Choose 2)

  • ❏ A. Manual promotion of an arbitrary replica to primary isn’t supported in cluster mode enabled

  • ❏ B. Cluster mode enabled supports multi-key transactions across shards

  • ❏ C. All nodes of a single Redis cluster reside in one AWS Region

  • ❏ D. With cluster mode disabled, replication to replicas is synchronous

  • ❏ E. Without replicas, a primary failure in a shard causes no data loss

Question 11

A mid-size ecommerce firm named McKenzie Lake Outfitters is breaking a legacy monolith into roughly fifty microservices. The engineering team wants these services to communicate asynchronously so they remain loosely coupled and resilient to failures. Which AWS offerings should the team use to implement asynchronous message delivery between the services? (Choose 2)

  • ❏ A. Amazon ECS

  • ❏ B. Amazon Simple Notification Service (SNS)

  • ❏ C. Amazon API Gateway

  • ❏ D. Amazon Simple Queue Service (SQS)

  • ❏ E. Amazon Kinesis Data Streams

Question 12

Which AWS service provides per-user offline data that automatically synchronizes across devices for mobile and web applications?

  • ❏ A. Amazon DynamoDB

  • ❏ B. Amazon Cognito Sync

  • ❏ C. AWS AppSync

  • ❏ D. Amazon Cognito User Pools

Question 13

An internal compliance directive at Scrumtuous Media mandates end-to-end tracing for a Node.js application deployed on AWS Elastic Beanstalk, covering outbound HTTP calls to third-party APIs and the SQL statements the service runs. You need the application traces to appear in the AWS X-Ray console for troubleshooting. How should you configure the environment to meet this requirement?

  • ❏ A. Enable active tracing by adding a healthcheckurl.config file under .ebextensions

  • ❏ B. Configure EC2 user data in the Beanstalk instances to start the X-Ray daemon on boot

  • ❏ C. Add an xray-daemon.config file in the .ebextensions folder to install and run the X-Ray daemon

  • ❏ D. Build a custom Docker image that bundles and launches the X-Ray daemon

Question 14

How should you enforce safe concurrent updates in DynamoDB so that write attempts are rejected when an item already has a disallowed attribute state?

  • ❏ A. DynamoDB Streams with Lambda conflict resolution

  • ❏ B. Optimistic locking with a version attribute and UpdateItem ConditionExpression

  • ❏ C. BatchWriteItem to group writes

  • ❏ D. Pessimistic locking with per-item lock records

Question 15

A fintech startup created an AWS Lambda function that posts event summaries to an external analytics partner. The team wants this function to run automatically every 45 minutes without managing any servers or persistent infrastructure. What is the most operationally simple and cost-effective way to achieve this?

  • ❏ A. Run a small Amazon EC2 instance with a cron job that invokes the function every 45 minutes

  • ❏ B. Configure a built-in schedule directly in the AWS Lambda console to run the function every 45 minutes

  • ❏ C. Create an Amazon EventBridge schedule rule to trigger the Lambda function at a 45-minute interval

  • ❏ D. Use AWS Step Functions to orchestrate the function on a 45-minute cadence

Question 16

With CloudFront placed in front of an Application Load Balancer serving an ECS Fargate API that uses JWT authentication, which approach most efficiently blocks unauthenticated requests at the edge to minimize load on the origin?

  • ❏ A. Enable target-tracking scaling for the Fargate service

  • ❏ B. CloudFront Function on viewer-request to validate JWT and reject invalid or missing tokens

  • ❏ C. AWS WAF rate-based rule on the CloudFront distribution

  • ❏ D. Lambda@Edge on viewer-request to validate JWT

Question 17

Skyline BioTech runs Amazon EC2 instances, AWS Lambda functions, and an Amazon SQS queue to coordinate internal processing. All components live in a single VPC named research-net with two private subnets, and the team must ensure that inter-service calls use only private IP addresses without traversing the public internet. What should the developer do to meet this requirement? (Choose 2)

  • ❏ A. Provision a NAT gateway so traffic stays private

  • ❏ B. Create a VPC endpoint for Amazon SQS

  • ❏ C. Create the Amazon SQS queue inside the VPC

  • ❏ D. Create a VPC endpoint for AWS Lambda

  • ❏ E. Attach the Lambda functions to the VPC subnets and security groups

Question 18

In an EC2 or on-premises AWS CodeDeploy deployment that uses an appspec.yml file, which lifecycle event performs the final validation before the deployment is marked successful?

  • ❏ A. AllowTraffic

  • ❏ B. ValidateService hook

  • ❏ C. BeforeInstall

  • ❏ D. ApplicationStart

Question 19

TrailPeak Outfitters runs an inventory service on 12 Amazon EC2 instances in an Auto Scaling group across two Availability Zones. The application writes to and reads from a DynamoDB table named InventoryLive. Immediately after updates, some instances occasionally read outdated values for the same keys. The team wants to modify the application so read requests always return the most current data without changing the table design. What should the developer do?

  • ❏ A. Add Amazon DynamoDB Accelerator (DAX) in front of the table

  • ❏ B. Create a new global secondary index on the table

  • ❏ C. Set ConsistentRead to true on read operations like GetItem to use strongly consistent reads

  • ❏ D. Wrap all writes with TransactWriteItems

Question 20

Which AWS service orchestrates serverless, stateful, multi-step workflows and supports retries and branching across AWS services?

  • ❏ A. Amazon Managed Workflows for Apache Airflow

  • ❏ B. AWS Step Functions

  • ❏ C. Amazon EventBridge

  • ❏ D. Amazon Simple Workflow Service (SWF)

Certified AWS Developer Sample Questions Answered

Question 1

An engineer at Scrumtuous Kart is launching a serverless API on AWS that places Amazon API Gateway in front of AWS Lambda and stores orders in an Amazon DynamoDB table named OrdersV2. The single-page app sends bearer tokens such as OAuth 2.0 JWTs or SAML assertions with a 45-minute lifetime, and the backend must evaluate the token to identify the caller and produce an IAM policy before invoking methods. Which API Gateway feature should be used to implement this custom authorization?

  • ✓ B. Lambda authorizer

The correct choice is Lambda authorizer. API Gateway can call a Lambda function on each request to validate a bearer token, perform custom logic, and return an IAM policy that authorizes or denies the method invocation. This supports custom token formats and external identity providers using OAuth or SAML.

API Gateway resource policy is not suitable because it controls access at the API level based on principals or source networks and does not evaluate request tokens or create dynamic per-request policies.

Amazon Cognito user pools authorizer is designed to validate JWTs issued by a Cognito user pool; it is not appropriate when tokens are issued directly by external OAuth or SAML providers unless Cognito is the token issuer.

Cross-Origin Resource Sharing (CORS) only manages browser cross-origin behavior and provides no authentication or authorization capabilities.
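
To make the pattern concrete, here is a minimal sketch of a TOKEN-type Lambda authorizer in Python. The validate_token helper is a hypothetical stand-in for real JWT or SAML validation; the response shape (principalId plus an IAM policy document) is what API Gateway expects back.

```python
# Minimal TOKEN-type Lambda authorizer sketch.
# validate_token() is a hypothetical placeholder for real JWT/SAML checks
# (signature, issuer, audience, expiry).
def validate_token(token):
    return {"valid": token.startswith("Bearer "), "principal": "demo-user"}

def lambda_handler(event, context):
    # API Gateway passes the bearer token as authorizationToken
    # and the invoked method's ARN as methodArn.
    result = validate_token(event.get("authorizationToken", ""))
    return {
        "principalId": result["principal"],
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if result["valid"] else "Deny",
                "Resource": event["methodArn"],
            }],
        },
    }
```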

Cameron’s Exam Tip

When you need custom token validation or support for non-Cognito issuers on API Gateway REST APIs, choose Lambda authorizer; use Cognito user pools authorizer only when the JWTs are issued by Cognito.

Question 2

In a CloudFormation template, what does specifying Transform: AWS::Serverless-2016-10-31 mean?

  • ✓ B. Invokes the AWS::Serverless transform so CloudFormation expands SAM syntax into standard resources

The entry indicates that the template uses the SAM macro. Invokes the AWS::Serverless transform so CloudFormation expands SAM syntax into standard resources is correct because the Transform with value AWS::Serverless-2016-10-31 tells CloudFormation to process SAM resources and convert them into equivalent AWS resources before deployment.

The option An intrinsic function for substitutions is incorrect because intrinsic functions like Fn::Sub or !Sub are used within resource properties and are not declared using the Transform section.

The option Declares the AWS::Include transform for template imports is incorrect since AWS::Include is a different transform name and is not the same as AWS::Serverless-2016-10-31.

The option Marks the Parameters section is incorrect because Parameters is a dedicated top-level section and has nothing to do with transforms.
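
To make this concrete, here is a minimal SAM template sketch; the logical ID, handler, runtime, and CodeUri are made-up values. Because of the Transform line, CloudFormation expands the single AWS::Serverless::Function into a Lambda function, an execution role, and related resources.

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  OrdersFunction:              # hypothetical logical ID
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler     # hypothetical module and function
      Runtime: python3.12
      CodeUri: ./src
```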

Cameron’s Exam Tip

When you see Transform at the top of a template, think template-wide macros. The value AWS::Serverless-2016-10-31 specifically means SAM. Watch for distractors mentioning intrinsic functions like Fn::Sub or unrelated sections like Parameters.

Question 3

You lead an engineering team at a logistics startup and are granting restricted permissions to new hires. In the console you created an IAM group named app-devs for onboarding, and on your laptop you configured an AWS CLI profile called junior-dev. You need to verify that this identity cannot terminate Amazon EC2 instances without making any real changes. What should you run to safely confirm the restriction?

  • ✓ B. Use the AWS CLI with the --dry-run flag

The correct choice is Use the AWS CLI with the --dry-run flag. For many EC2 APIs, --dry-run triggers an authorization-only check that returns DryRunOperation if the caller is allowed or UnauthorizedOperation if not, without performing the operation.

AWS IAM Access Analyzer policy validation is helpful for finding policy issues and unintended public or cross-account access, but it does not execute a runtime permission check for a specific caller and action.

Use the AWS CLI --test flag is incorrect because no such flag exists for EC2 commands in the AWS CLI.

Retrieve role policies from the EC2 instance metadata service and evaluate them in the IAM policy simulator is not appropriate because the metadata service reflects the instance profile’s credentials, not the developer profile you configured on your laptop, and even the policy simulator cannot validate the exact runtime call the way a dry run does.
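
The same check can be scripted with boto3; this sketch assumes the junior-dev profile from the scenario and a placeholder instance ID.

```python
import boto3
from botocore.exceptions import ClientError

# Use the junior-dev CLI profile configured on the laptop.
session = boto3.Session(profile_name="junior-dev")
ec2 = session.client("ec2")

try:
    # DryRun performs only the authorization check; nothing is terminated.
    ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"], DryRun=True)
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "DryRunOperation":
        print("Permission WOULD be granted (dry run succeeded).")
    elif code == "UnauthorizedOperation":
        print("Permission denied, as expected for junior-dev.")
    else:
        raise
```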

Cameron’s Exam Tip

When you need to confirm permissions without changing resources, look for EC2 API support for DryRun. Expect DryRunOperation if allowed and UnauthorizedOperation if denied, which safely verifies IAM access.

Question 4

How can you ensure a write to DynamoDB only succeeds if the item still has the expected attribute values, thereby preventing overwriting concurrent updates?

  • ✓ C. Conditional writes using a ConditionExpression

Conditional writes using a ConditionExpression is correct because DynamoDB evaluates the condition against the current item state and only commits the write when the expected attribute values match, implementing optimistic concurrency and preventing overwrites from concurrent clients.

DynamoDB Streams is wrong because it only captures change events after the write and cannot prevent conflicting writes.

Update expressions are wrong because they specify how to modify attributes but do not impose a prerequisite check by themselves.

DynamoDB transactions (TransactWriteItems) is unnecessary overhead here since the requirement is a single-item conditional check, which conditional writes already handle efficiently.
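
A minimal boto3 sketch of the pattern, assuming a hypothetical Inventory table whose status attribute must still read "available" for the write to commit:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Inventory")  # hypothetical table

try:
    table.update_item(
        Key={"pk": "item-42"},
        UpdateExpression="SET #s = :sold",
        # The write only commits if the item is still in the expected state.
        ConditionExpression="#s = :expected",
        ExpressionAttributeNames={"#s": "status"},
        ExpressionAttributeValues={":sold": "sold", ":expected": "available"},
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("Another writer changed the item first; retry or surface a conflict.")
    else:
        raise
```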

Cameron’s Exam Tip

When you see requirements like “only write if the item has expected values” or “fail instead of overwriting,” think ConditionExpression and optimistic concurrency. Transactions are for multi-item atomicity, not single-item precondition checks. Streams are for change propagation, not write gating.

Question 5

A developer at McKenzie Payments is building an alerting microservice on a fleet of Amazon EC2 instances. When the fraud engine flags a suspicious charge, the service must post a message to the company’s incident chat through a third-party messaging API. The API requires an access token that must be encrypted at rest and in transit, and workloads in two other AWS accounts must be able to retrieve the same token with minimal operational overhead. Which approach best satisfies these needs?

  • ✓ C. Use AWS Secrets Manager with a customer-managed KMS key to store the token and attach a resource-based policy to the secret for cross-account retrieval; grant the EC2 instance role permission and fetch the secret from Secrets Manager

The best choice is AWS Secrets Manager with a customer-managed KMS key and a resource-based policy on the secret. Secrets Manager is designed for securely storing and retrieving secrets, integrates with AWS KMS for encryption at rest, returns values over TLS, and supports cross-account access via resource-based policies with low operational overhead.

S3 with SSE-KMS and a bucket policy can encrypt and share objects across accounts, but using S3 as a secrets store increases operational burden and is not a recommended pattern when a managed secrets service exists.

DynamoDB with a resource-based policy is not viable because DynamoDB does not support resource-based policies for tables, and it is not intended to store sensitive secrets.

Systems Manager Parameter Store SecureString with a resource-based policy is not appropriate because Parameter Store does not support attaching a resource-based policy directly to a parameter, so the cross-account retrieval described would not work as stated.
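
Retrieval from the instance role is a single call. This sketch assumes a hypothetical secret name; callers in the other accounts would pass the secret’s full ARN as SecretId.

```python
import boto3

# Credentials come from the EC2 instance role automatically.
secrets = boto3.client("secretsmanager")

# "incident-chat/api-token" is a hypothetical secret name; cross-account
# callers must pass the secret's full ARN instead of its name.
response = secrets.get_secret_value(SecretId="incident-chat/api-token")
token = response["SecretString"]  # decrypted with the CMK, delivered over TLS
```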

Cameron’s Exam Tip

When a requirement includes cross-account access, encryption at rest, and minimal management overhead, prefer Secrets Manager over ad hoc solutions like S3 or databases. Look for keywords like resource-based policy and rotation to point to Secrets Manager.

Question 6

In DynamoDB how can you make a PutItem operation succeed only if no item with the same partition key already exists to prevent overwriting an existing item?

  • ✓ B. PutItem with a ConditionExpression (conditional write)

The correct choice is PutItem with a ConditionExpression (conditional write). A ConditionExpression such as attribute_not_exists(pkAttribute) enforces that the item is written only when the key does not already exist. If an item with the same key is present, DynamoDB rejects the write with ConditionalCheckFailedException, preventing accidental overwrites.

The option DynamoDB Streams is incorrect because Streams only capture item changes after they happen and do not gate writes.

BatchWriteItem is incorrect since it batches operations without conditions, so it will still overwrite existing items.

Atomic counters are limited to incrementing or decrementing numeric attributes on existing items and cannot guarantee uniqueness or prevent overwrites.
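
A short boto3 sketch of the pattern; the table name borrows OrdersV2 from Question 1, and the orderId key attribute is illustrative.

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("OrdersV2")  # illustrative table

try:
    table.put_item(
        Item={"orderId": "o-1001", "total": 25},
        # Reject the write if an item with this partition key already exists.
        ConditionExpression="attribute_not_exists(orderId)",
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("An order with this key already exists; nothing was overwritten.")
    else:
        raise
```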

Cameron’s Exam Tip

When you see phrases like only if not exists or prevent overwrites, think ConditionExpression with attribute_not_exists on PutItem or UpdateItem. Remember the failure mode is ConditionalCheckFailedException. For broader concurrency control, optimistic locking with a version attribute also relies on conditional expressions.

Question 7

A small media startup, Pickering Springfield Studio, built a CLI tool that surfaces sticker images by keyword and streams random URLs from a third-party API to a developer laptop. After the latest commit, each run is roughly 3 to 5 seconds slower, and you suspect the new dispatchRequest() helper is causing the delay. What should you do to accurately measure the latency introduced by that function?

  • ✓ C. With AWS X-Ray, insert a custom subsegment around dispatchRequest() to time the code path

The most direct way to measure the latency of a specific function within a request is to instrument it with an X-Ray subsegment. With AWS X-Ray, insert a custom subsegment around dispatchRequest() to time the code path captures precise timing for that code block inside the trace, making the bottleneck visible.

Review Amazon CloudWatch Logs to investigate the slowdown can help spot errors and general timing, but it lacks the fine-grained, request-scoped timing needed without additional instrumentation.

AWS CodeGuru Profiler is designed for continuous, statistical profiling of CPU and memory, not per-request latency for a single function call.

In AWS X-Ray, turn off sampling so that every request is traced increases data volume and cost, yet it still will not expose the function’s internal timing unless you add subsegments.
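
In Python, the equivalent instrumentation with the aws-xray-sdk looks roughly like this; dispatch_request is a hypothetical stand-in for the helper under suspicion, and outside Lambda you open the parent segment yourself.

```python
from aws_xray_sdk.core import xray_recorder

def dispatch_request():
    ...  # hypothetical helper that calls the third-party sticker API

# Outside Lambda, open the parent segment manually; in Lambda one already exists.
xray_recorder.begin_segment("sticker-cli")
try:
    # The subsegment records start and end timestamps for just this code path,
    # so its duration shows up as its own node in the trace.
    with xray_recorder.in_subsegment("dispatchRequest"):
        dispatch_request()
finally:
    xray_recorder.end_segment()
```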

Cameron’s Exam Tip

For pinpointing latency inside your code path, think X-Ray subsegments; sampling controls trace volume, but subsegments provide the granular timing you need.

Question 8

Which AWS storage service should be used so that a Lambda function running in private subnets can share appendable files among multiple compute services and provide on-premises access over a site-to-site VPN when processing objects of about 90 MB?

  • ✓ B. Amazon EFS

Amazon EFS is correct because it is the only AWS file system that Lambda can natively mount in a VPC, enabling shared, concurrent, append-style file writes across multiple compute services. EFS uses NFS and supports access from on‑premises networks over a site‑to‑site VPN or Direct Connect by targeting VPC mount targets and allowing NFS (2049) in security groups.

The option Amazon S3 is incorrect because S3 is object storage, does not support appending to existing objects, and cannot be mounted by Lambda as a file system.

The option Amazon FSx for Windows File Server is incorrect because it exposes SMB shares, which Lambda cannot mount; Lambda’s file system integration supports only EFS.

The option Amazon FSx for NetApp ONTAP is incorrect because, while it supports NFS/SMB, Lambda cannot mount FSx file systems; only EFS is supported for Lambda file system mounts.
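
A minimal handler sketch, assuming the function is attached to the VPC and an EFS access point is mounted at the hypothetical local path /mnt/shared:

```python
import os

# Lambda mounts the EFS access point at a path you choose under /mnt;
# "/mnt/shared" is a hypothetical value configured on the function.
MOUNT_PATH = "/mnt/shared"

def lambda_handler(event, context):
    # Appending works because EFS is a POSIX file system, unlike S3 objects.
    with open(os.path.join(MOUNT_PATH, "pipeline.log"), "a") as f:
        f.write(f"processed object {event.get('objectKey', 'unknown')}\n")
    return {"status": "ok"}
```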

Cameron’s Exam Tip

When you see Lambda needing shared, POSIX-compliant, appendable storage that multiple compute services and on‑prem clients can access, think EFS. Remember that Lambda’s file system integration supports only EFS, not S3 or FSx. For on‑prem access, verify the presence of a VPN or Direct Connect and security group rules for NFS 2049.

Question 9

A developer at Scrumtuous Media is deploying Docker-based services on Amazon ECS. One container runs a transactional database and must only be scheduled onto container instances designated as part of the db-core task group. What should the developer use to ensure the database task is placed on that group?

  • ✓ B. Task placement constraint

Task placement constraint is the correct choice because ECS uses placement constraints to filter which container instances are eligible before scheduling a task. With the memberOf constraint and a Cluster Query Language expression, you can target only the instances that belong to the db-core group when creating or updating a service or when running a task.

ECS capacity provider focuses on selecting and scaling the underlying capacity, such as choosing an Auto Scaling group or Fargate capacity, and does not restrict tasks to a particular labeled group of instances.

Cluster Query Language is only the expression language used within placement constraints or strategies; on its own it is not a control mechanism for scheduling.

ECS container agent enables communication between instances and the ECS control plane, but it does not define scheduling rules or filter eligible instances for a task.
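
A hedged boto3 sketch of the idea; the cluster, service, and node-group attribute names are hypothetical, and the db-core instances would carry that custom attribute (set via ecs.put_attributes or the agent’s ECS_INSTANCE_ATTRIBUTES setting).

```python
import boto3

ecs = boto3.client("ecs")

# All names below are hypothetical placeholders.
ecs.create_service(
    cluster="prod-cluster",
    serviceName="orders-db",
    taskDefinition="orders-db:1",
    desiredCount=1,
    placementConstraints=[{
        "type": "memberOf",
        # Cluster Query Language expression evaluated per container instance.
        "expression": "attribute:node-group == db-core",
    }],
)
```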

Cameron’s Exam Tip

Differentiate placement constraints from placement strategies and capacity providers. Constraints filter which instances can receive the task, strategies decide how to spread/pack across those instances, and capacity providers determine where the capacity comes from.

Question 10

Which of the following statements are correct when configuring Amazon ElastiCache for Redis with cluster mode enabled and deploying across multiple Availability Zones? (Choose 2)

  • ✓ A. Manual promotion of an arbitrary replica to primary isn’t supported in cluster mode enabled

  • ✓ C. All nodes of a single Redis cluster reside in one AWS Region

The correct statements are Manual promotion of an arbitrary replica to primary isn’t supported in cluster mode enabled and All nodes of a single Redis cluster reside in one AWS Region. In cluster mode enabled, ElastiCache manages primaries and replicas per shard. You can initiate a failover for a node group (shard), but you cannot directly select a specific replica to promote. Also, ElastiCache is a regional service; Multi-AZ provides redundancy across Availability Zones within a single Region, not across Regions.

The option Cluster mode enabled supports multi-key transactions across shards is incorrect because Redis Cluster does not allow multi-key operations that span hash slots unless you use the same hash tag to force keys into the same slot.

The option With cluster mode disabled, replication to replicas is synchronous is wrong since Redis replication is asynchronous in both cluster modes.

The option Without replicas, a primary failure in a shard causes no data loss is false because a failed primary without replicas loses its in-memory data and cannot fail over.

Cameron’s Exam Tip

Remember that read replicas scale reads, not writes; write throughput scales by sharding. Multi-AZ requires at least one replica per shard for automatic failover. Redis replication in ElastiCache is asynchronous regardless of cluster mode. In cluster mode enabled, you can force a failover at the shard (node group) level but cannot hand-pick the exact replica. ElastiCache clusters are confined to one Region; there is no cross-Region failover for a single cluster.

Question 11

A mid-size ecommerce firm named McKenzie Lake Outfitters is breaking a legacy monolith into roughly fifty microservices. The engineering team wants these services to communicate asynchronously so they remain loosely coupled and resilient to failures. Which AWS offerings should the team use to implement asynchronous message delivery between the services? (Choose 2)

  • ✓ B. Amazon Simple Notification Service (SNS)

  • ✓ D. Amazon Simple Queue Service (SQS)

Amazon Simple Queue Service (SQS) provides durable queues that let microservices exchange work asynchronously, which reduces coupling and smooths traffic spikes. Amazon Simple Notification Service (SNS) offers pub/sub fan-out and topic-based notifications that enable asynchronous, decoupled event distribution to multiple subscribers.

Amazon Kinesis Data Streams focuses on ingesting high-throughput telemetry and analytics streams, which is different from request-style microservice messaging patterns.

Amazon ECS is a container orchestration and compute service and does not supply a messaging channel between services.

Amazon API Gateway is primarily designed for synchronous HTTP APIs and WebSocket endpoints rather than background message delivery.
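
A small boto3 sketch of the queue-based side of this decoupling; the queue name and message body are illustrative.

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue shared between two microservices.
queue_url = sqs.create_queue(QueueName="order-events")["QueueUrl"]

# Producer side: fire and forget; the producer never waits on the consumer.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": "o-1001"}')

# Consumer side: poll on its own schedule, then delete after processing.
messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10).get("Messages", [])
for msg in messages:
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```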

Cameron’s Exam Tip

When a scenario emphasizes decoupling and asynchronous communication between microservices, think queues for work distribution (SQS) and pub/sub topics for fan-out (SNS); streaming, compute, and API gateways are usually not the messaging fabric.

Question 12

Which AWS service provides per-user offline data that automatically synchronizes across devices for mobile and web applications?

  • ✓ B. Amazon Cognito Sync

Amazon Cognito Sync is designed for per-user datasets with local caching and automatic cross-device synchronization. Its client SDKs allow offline updates that sync when connectivity returns, matching the requirement for user-scoped, multi-device data sync.

The service is now legacy/deprecated, and AWS recommends modern architectures using AWS AppSync with Amplify DataStore for offline-first sync. However, when the exam explicitly targets per-user dataset synchronization from the Cognito family, Cognito Sync is the expected answer.

Why the others are incorrect: Amazon DynamoDB is a backend datastore only and does not provide client-side offline synchronization without additional tooling.

AWS AppSync can deliver offline and real-time sync (often with Amplify DataStore), but it is not the legacy per-user dataset sync service the question targets.

Amazon Cognito User Pools manages authentication and user profiles, not cross-device data sync.

Cameron’s Exam Tip

Watch for keywords like per-user datasets, offline, and cross-device sync to map to Cognito Sync. If a question emphasizes GraphQL APIs, real-time subscriptions, or modern offline-first patterns, consider AWS AppSync. Remember that authentication (User Pools) and temporary credentials (Identity Pools) do not equate to data synchronization.

Question 13

An internal compliance directive at Scrumtuous Media mandates end-to-end tracing for a Node.js application deployed on AWS Elastic Beanstalk, covering outbound HTTP calls to third-party APIs and the SQL statements the service runs. You need the application traces to appear in the AWS X-Ray console for troubleshooting. How should you configure the environment to meet this requirement?

  • ✓ C. Add an xray-daemon.config file in the .ebextensions folder to install and run the X-Ray daemon

To send segments and subsegments from your application to AWS X-Ray, the X-Ray daemon must run on the Elastic Beanstalk instances. Elastic Beanstalk provides a built-in way to install and manage the daemon using a configuration file in your source bundle.

The correct choice is Add an xray-daemon.config file in the .ebextensions folder to install and run the X-Ray daemon. This enables the daemon as a managed service on each instance so traces from your instrumented Node.js code, including outbound HTTP and SQL calls, are forwarded to X-Ray.

Enable active tracing by adding a healthcheckurl.config file under .ebextensions is incorrect because that file only sets an application health check URL and does not enable X-Ray or install the daemon.

Configure EC2 user data in the Beanstalk instances to start the X-Ray daemon on boot is not the best answer since Elastic Beanstalk already supports enabling the daemon via .ebextensions; user data is brittle and less maintainable for this use case.

Build a custom Docker image that bundles and launches the X-Ray daemon is unnecessary for standard Beanstalk platforms and is more appropriate for container orchestration services; Beanstalk can manage the daemon for you.
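
The file itself is tiny; a sketch of .ebextensions/xray-daemon.config using the documented option_settings form:

```yaml
# .ebextensions/xray-daemon.config
option_settings:
  aws:elasticbeanstalk:xray:
    XRayEnabled: true
```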

Cameron’s Exam Tip

For Elastic Beanstalk, prefer the managed approach: enable the X-Ray daemon via .ebextensions/xray-daemon.config. Reserve user data or custom images for platforms that lack a native Beanstalk option.

Question 14

How should you enforce safe concurrent updates in DynamoDB so that write attempts are rejected when an item already has a disallowed attribute state?

  • ✓ B. Optimistic locking with a version attribute and UpdateItem ConditionExpression

The correct choice is Optimistic locking with a version attribute and UpdateItem ConditionExpression. In DynamoDB, the recommended way to prevent lost updates and enforce business rules is to use conditional writes. Maintain a version attribute (for example, ‘version’), and include a ConditionExpression that verifies the current version matches the expected value and that the attribute is in an allowed state. If either check fails, the write is rejected atomically.

The option DynamoDB Streams with Lambda conflict resolution is incorrect because Streams process changes after they occur and cannot prevent the initial conflicting write. They are useful for reacting to changes, not for enforcing pre-write constraints.

The option BatchWriteItem to group writes is incorrect since BatchWriteItem supports only Put and Delete, lacks UpdateItem, and does not support conditional expressions needed for concurrency control.

The option Pessimistic locking with per-item lock records is incorrect because DynamoDB does not provide native locks, and implementing lock items is complex and not recommended for high-throughput, highly concurrent workloads.
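
A sketch of the version-check pattern in boto3; the Accounts table, key, and the "locked" disallowed state are all hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Accounts")  # hypothetical table

def save(item_key, new_state, expected_version):
    try:
        table.update_item(
            Key={"pk": item_key},
            UpdateExpression="SET #st = :new ADD #ver :one",
            # Reject if another writer bumped the version first, or if the
            # item is already in the disallowed state.
            ConditionExpression="#ver = :v AND #st <> :blocked",
            ExpressionAttributeNames={"#st": "state", "#ver": "version"},
            ExpressionAttributeValues={
                ":new": new_state, ":one": 1,
                ":v": expected_version, ":blocked": "locked",
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # caller should re-read the item and retry
        raise
```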

Cameron’s Exam Tip

Look for ConditionExpression and optimistic locking whenever a question mentions preventing conflicting writes or rejecting updates based on attribute values. Use transactions (TransactWriteItems) when coordinating multiple items, but for single-item concurrency, a version attribute plus conditional writes is the simplest and most efficient approach.

Question 15

A fintech startup created an AWS Lambda function that posts event summaries to an external analytics partner. The team wants this function to run automatically every 45 minutes without managing any servers or persistent infrastructure. What is the most operationally simple and cost-effective way to achieve this?

  • ✓ C. Create an Amazon EventBridge schedule rule to trigger the Lambda function at a 45-minute interval

The best choice is Create an Amazon EventBridge schedule rule to trigger the Lambda function at a 45-minute interval. EventBridge supports rate and cron expressions, invokes Lambda directly, and removes the need to manage servers or custom schedulers, making it both simple and cost-efficient.

Run a small Amazon EC2 instance with a cron job that invokes the function every 45 minutes is operationally heavy and incurs continuous instance costs, which undermines the goal of a lightweight, managed solution.

Configure a built-in schedule directly in the AWS Lambda console to run the function every 45 minutes is incorrect because Lambda does not include a native scheduler; scheduling is provided by EventBridge.

Use AWS Step Functions to orchestrate the function on a 45-minute cadence is not ideal because Step Functions lacks native scheduling and would still need EventBridge to start executions, increasing complexity and expense.
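
In code, the wiring is three calls; the rule and function names, Region, and account ID below are hypothetical.

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Create (or update) the schedule rule.
rule_arn = events.put_rule(
    Name="summaries-every-45-min",
    ScheduleExpression="rate(45 minutes)",
    State="ENABLED",
)["RuleArn"]

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:post-summaries"

# Point the rule at the function...
events.put_targets(
    Rule="summaries-every-45-min",
    Targets=[{"Id": "post-summaries", "Arn": function_arn}],
)

# ...and allow EventBridge to invoke it.
lambda_client.add_permission(
    FunctionName="post-summaries",
    StatementId="allow-eventbridge-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```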

Cameron’s Exam Tip

For periodic or cron-based Lambda invocations, look for Amazon EventBridge schedules and rate/cron expressions as the managed, low-cost solution; avoid EC2 cron or desktop tools.

Question 16

With CloudFront placed in front of an Application Load Balancer serving an ECS Fargate API that uses JWT authentication, which approach most efficiently blocks unauthenticated requests at the edge to minimize load on the origin?

  • ✓ B. CloudFront Function on viewer-request to validate JWT and reject invalid or missing tokens

CloudFront Function on viewer-request to validate JWT and reject invalid or missing tokens is best because it executes at CloudFront edge locations with very low latency and cost, performing lightweight header and token checks before requests reach the ALB and Fargate. This prevents unauthenticated traffic from consuming origin CPU and reduces load most efficiently.

The option Enable target-tracking scaling for the Fargate service is incorrect because scaling reacts to load but does not stop bad requests from hitting the tasks, so CPU waste continues.

The option AWS WAF rate-based rule on the CloudFront distribution can mitigate floods or throttle patterns but it does not perform semantic JWT validation, making it less precise and potentially ineffective against invalid tokens.

The option Lambda@Edge on viewer-request to validate JWT would block at the edge, but it is heavier in execution time, cold starts, and cost compared to CloudFront Functions for simple token checks, so it is not the most efficient choice.

Cameron’s Exam Tip

Prefer viewer-request for early rejection before the origin. Remember the difference between CloudFront Functions (lightweight, sub-millisecond, header/URL/cookie logic) and Lambda@Edge (more capable runtime but higher latency/cost). Use WAF for rate limiting and rule-based filtering, not for full JWT validation logic. Scaling solutions don’t prevent waste; blocking at the edge does.

Question 17

Skyline BioTech runs Amazon EC2 instances, AWS Lambda functions, and an Amazon SQS queue to coordinate internal processing. All components live in a single VPC named research-net with two private subnets, and the team must ensure that inter-service calls use only private IP addresses without traversing the public internet. What should the developer do to meet this requirement? (Choose 2)

  • ✓ B. Create a VPC endpoint for Amazon SQS

  • ✓ E. Attach the Lambda functions to the VPC subnets and security groups

To keep all traffic on private IPs, connect Lambda to the VPC so it uses ENIs in private subnets, and use PrivateLink for SQS so API calls stay inside the VPC. Combining these ensures messages flow between EC2, Lambda, and SQS without touching the public internet.

The correct choices are Attach the Lambda functions to the VPC subnets and security groups and Create a VPC endpoint for Amazon SQS. Attaching Lambda to the VPC gives it private network interfaces for intra-VPC communication, while the SQS VPC endpoint ensures SQS API calls do not traverse the internet.

Provision a NAT gateway so traffic stays private is incorrect because a NAT gateway provides outbound internet access using public IPs and does not keep traffic private within the VPC.

Create the Amazon SQS queue inside the VPC is incorrect because SQS is a regional service and queues are not created within a specific VPC.

Create a VPC endpoint for AWS Lambda is not the action needed to ensure private networking for function execution, and it does not replace configuring the function to attach to subnets and security groups.
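
A sketch of creating the interface endpoint with boto3; all IDs and the Region in the service name are placeholders for research-net’s actual values.

```python
import boto3

ec2 = boto3.client("ec2")

# IDs below are hypothetical placeholders for the research-net VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.sqs",  # regional SQS endpoint service
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # the two private subnets
    SecurityGroupIds=["sg-0ccc3333"],
    PrivateDnsEnabled=True,  # default SQS hostname now resolves privately
)
```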

Cameron’s Exam Tip

When you need private connectivity to an AWS API from a VPC, think Interface VPC Endpoints (PrivateLink); when you need a Lambda function to talk privately inside a VPC, think attach the function to the VPC.

Question 18

In an EC2 or on-premises AWS CodeDeploy deployment that uses an appspec.yml file, which lifecycle event performs the final validation before the deployment is marked successful?

  • ✓ B. ValidateService hook

The correct choice is ValidateService hook because it is the final EC2/On-Premises CodeDeploy lifecycle event intended for end-to-end checks such as health probes and smoke tests. A successful exit status signals CodeDeploy to mark the deployment successful; a non-zero exit fails the deployment and can trigger rollback.

The option AllowTraffic is incorrect because it is an AWS-managed routing phase used in blue/green strategies and is not an appspec.yml lifecycle hook for running your tests.

The option BeforeInstall is incorrect because it runs before your files are installed and is used for pre-install tasks, not validation.

The option ApplicationStart is incorrect because it starts or restarts services but occurs prior to the validation stage.
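
As a sketch, a trimmed EC2/On-Premises appspec.yml with hypothetical script paths; ValidateService runs last, so the smoke test belongs there.

```yaml
# Sketch of an EC2/On-Premises appspec.yml; script paths are hypothetical.
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/app
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
  ValidateService:
    # Last hook to run; a non-zero exit here fails the deployment.
    - location: scripts/smoke_test.sh
      timeout: 120
```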

Cameron’s Exam Tip

Memorize the EC2/On-Premises hook order: BeforeInstall → AfterInstall → ApplicationStart → ValidateService. If a prompt mentions blue/green routing events like AllowTraffic, recognize they are not used to run your validation scripts for EC2/On-Premises. Always check the compute platform in the prompt to choose the correct set of hooks.

Question 19

TrailPeak Outfitters runs an inventory service on 12 Amazon EC2 instances in an Auto Scaling group across two Availability Zones. The application writes to and reads from a DynamoDB table named InventoryLive. Immediately after updates, some instances occasionally read outdated values for the same keys. The team wants to modify the application so read requests always return the most current data without changing the table design. What should the developer do?

  • ✓ C. Set ConsistentRead to true on read operations like GetItem to use strongly consistent reads

DynamoDB defaults to eventually consistent reads, which can return older values shortly after a write. To always read the freshest data, use strongly consistent reads.

Set ConsistentRead to true on read operations like GetItem to use strongly consistent reads ensures each request returns the most up-to-date item reflecting successful prior writes. This is a per-request setting for operations that support it.

Add Amazon DynamoDB Accelerator (DAX) in front of the table is not sufficient. DAX provides caching with eventual consistency and does not offer strong read consistency guarantees.

Create a new global secondary index on the table does not change consistency; GSIs inherit the same consistency choices and will not eliminate stale reads.

Wrap all writes with TransactWriteItems guarantees atomic, all-or-nothing writes, but it does not make reads strongly consistent or prevent stale reads.
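
The change is one flag per read; this boto3 sketch assumes a hypothetical sku key attribute on InventoryLive.

```python
import boto3

table = boto3.resource("dynamodb").Table("InventoryLive")

# Strongly consistent read: reflects all writes acknowledged before the request.
item = table.get_item(
    Key={"sku": "sku-123"},  # hypothetical key attribute
    ConsistentRead=True,
)["Item"]
```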

Cameron’s Exam Tip

For DynamoDB, the default is eventual consistency. When the question requires the latest data, look for enabling strongly consistent reads by setting ConsistentRead=true on supported operations.

Question 20

Which AWS service orchestrates serverless, stateful, multi-step workflows and supports retries and branching across AWS services?

  • ✓ B. AWS Step Functions

The correct choice is AWS Step Functions. It is designed for serverless, stateful orchestration of multi-step workflows with built-in retries, branching, parallelism, service integrations, and error handling. It lets you coordinate multiple AWS services reliably and cost-effectively, which is exactly what the question targets.

Amazon Managed Workflows for Apache Airflow is suitable for complex data pipelines but requires provisioning and operating an Airflow environment, which adds cost and operational overhead compared to a fully serverless orchestrator.

Amazon EventBridge is an event bus and scheduler that routes events between services. While it can trigger workflows, it does not manage stateful, multi-step orchestration with retries and branches by itself.

Amazon Simple Workflow Service (SWF) is an older, lower-level workflow service and is typically not recommended for new applications compared to Step Functions, which provides higher-level constructs and broader native integrations.

Cameron’s Exam Tip

When you see keywords like stateful orchestration, multi-step workflows, retries, branching, parallel, or visual workflows, map them to Step Functions. If the prompt focuses on event routing or scheduling, think EventBridge. If it mentions managed Airflow or DAGs, that points to MWAA. If you see legacy workflow or lower-level APIs, consider SWF but prefer Step Functions for modern serverless designs.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and cloud computing space. His growing YouTube channel, which trains devs in Java, Spring, AI and ML, has well over 30,000 subscribers.