AWS Developer Certification Practice Exams

All questions come from my Udemy AWS Developer Exams course and certificationexams.pro
Why Get AWS Developer Certified?
Over the past few months, I have been helping developers and cloud professionals prepare for careers that thrive in the AWS ecosystem. The goal is simple: to help you build, deploy, and manage real-world applications using the same cloud services trusted by millions of organizations worldwide.
A key milestone in that journey is earning your AWS Certified Developer Associate credential. This certification demonstrates your ability to develop, test, and deploy modern applications on AWS using best practices for scalability, security, and performance.
Whether you’re a software developer, DevOps engineer, solutions architect, or backend specialist, the AWS Developer Associate certification gives you a strong foundation for cloud native development. You will learn how to integrate with key services like Lambda, DynamoDB, S3, SQS, and API Gateway while mastering tools like CloudFormation, CodePipeline, and CloudWatch.
In the fast-paced world of cloud computing, understanding how to build and maintain applications on AWS is no longer optional. Every developer should be familiar with SDKs, IAM roles, serverless architecture, CI/CD pipelines, and event-driven design.
That is exactly what the AWS Developer Associate Certification Exam measures. It validates your knowledge of AWS SDKs, application security, error handling, deployment automation, and how to write efficient, scalable, and cost-optimized code for the cloud.
AWS Developer Practice Questions and Exam Simulators
Through my Udemy courses on AWS certifications and through the free question banks at certificationexams.pro, I have seen where learners struggle most. That experience led to the creation of realistic AWS Developer Practice Questions that mirror the format, difficulty, and nuance of the real test.
You will also find AWS Developer Exam Sample Questions and full AWS Developer Practice Tests designed to help you identify your strengths and weaknesses. Each AWS Developer Question and Answer set includes detailed explanations, helping you learn not just what the correct answer is, but why it is correct.
These materials go far beyond simple memorization. The goal is to help you reason through real AWS development scenarios, such as deploying Lambda functions, optimizing API Gateway configurations, and managing data with DynamoDB, just as you would in production.
Real Exam Readiness
If you are looking for Real AWS Developer Exam Questions, this collection provides authentic examples of what to expect. Each one has been carefully crafted to reflect the depth and logic of the actual exam without crossing ethical boundaries. These are not AWS Developer Exam Dumps or copied content. They are original practice resources built to help you learn.
Our AWS Developer Exam Simulator replicates the pacing and difficulty of the live exam, so by the time you sit for the test, you will be confident and ready. If you prefer to study in smaller chunks, you can explore curated AWS Developer Exam Dumps and AWS Developer Braindump-style study sets that focus on one topic at a time, such as IAM, CI/CD, or serverless architecture.
Each AWS Developer Practice Test is designed to challenge you slightly more than the real exam. That is deliberate. If you can perform well here, you will be more than ready when it counts.
Learn & Succeed as an AWS Developer
The purpose of these AWS Developer Exam Questions is not just to help you pass. It is to help you grow as a professional who can design and deliver production grade applications on AWS. You will gain confidence in your ability to build resilient, efficient, and maintainable solutions using the full power of the AWS platform.
So dive into the AWS Developer Practice Questions, test your knowledge with the AWS Developer Exam Simulator, and see how well you can perform under real exam conditions.
Good luck, and remember, every great cloud development career begins with mastering the tools and services that power the AWS cloud.
Question 1
A developer at Scrumtuous Inc is sending custom Amazon CloudWatch metrics from multiple Amazon EC2 instances using the AWS CLI. They want the data to be organized so dashboards and alarms can filter by the specific instance ID and by the Auto Scaling group that launched the instance. When running aws cloudwatch put-metric-data, which parameter should be included to add this contextual grouping?
-
❏ A. --namespace
-
❏ B. --statistic-values
-
❏ C. --dimensions
-
❏ D. --unit
Question 2
Which approach best ensures web session state is preserved for 60 minutes across Auto Scaling EC2 instances behind an Application Load Balancer so sessions survive instance replacement?
-
❏ A. Store session files on Amazon EFS
-
❏ B. Persist sessions in Amazon DynamoDB via a session library
-
❏ C. Enable stickiness on the ALB target group
-
❏ D. Use Amazon ElastiCache for Redis as the session store
Question 3
A city records office is digitizing paper archives and moving to AWS. They will start by uploading 30 TB of historical documents to Amazon S3, and the files are expected to be retrieved very rarely over many years. They require secure and highly durable object storage at the lowest possible cost. Which S3 storage class should they choose?
-
❏ A. Amazon S3 Standard-Infrequent Access
-
❏ B. Amazon S3 Glacier Deep Archive
-
❏ C. Amazon S3 One Zone-IA
-
❏ D. Amazon S3 Glacier Flexible Retrieval
Question 4
Which method should you use to restrict a public Amazon API Gateway so that only browser clients from approved origins can invoke it?
-
❏ A. Require API keys with usage plans
-
❏ B. Enable CORS with only allowed origins
-
❏ C. AWS WAF
-
❏ D. API Gateway account-level throttling
Question 5
A developer at McKenzie Books is implementing an audit trail for a DynamoDB table named AuditedInvoices. DynamoDB Streams is enabled, and a Lambda function is triggered through an event source mapping. When an item is modified, the function must store only the item’s previous attribute values in an S3 bucket while keeping the updated values in the table. Which StreamViewType setting should be configured to meet this requirement?
-
❏ A. NEW_AND_OLD_IMAGES
-
❏ B. KEYS_ONLY
-
❏ C. OLD_IMAGE
-
❏ D. NEW_IMAGE
Question 6
What is the most direct way to trigger a Lambda function when an EC2 instance enters the running state?
-
❏ A. CloudTrail RunInstances metric filter with CloudWatch alarm to Lambda
-
❏ B. EC2 Auto Scaling lifecycle hook
-
❏ C. Amazon EventBridge rule for EC2 state change to running with Lambda target
-
❏ D. AWS Config rule evaluating EC2 instance state
Question 7
Springfield Foods is migrating its customer storefront from a company data center to AWS. The site handles about 180,000 user sessions per day and must remain highly available with seamless scaling during peak events. If any single component experiences an outage, customers should not be logged out or lose their carts. Which approach should be used to manage session state in the new architecture?
-
❏ A. Amazon S3
-
❏ B. Enable sticky sessions on the Application Load Balancer
-
❏ C. Amazon ElastiCache for Redis
-
❏ D. Amazon CloudFront
Question 8
In an AWS CodeDeploy in-place deployment to EC2 or on-premises instances, what is the correct order of lifecycle event hooks?
-
❏ A. BeforeAllowTraffic → AllowTraffic → AfterAllowTraffic
-
❏ B. ApplicationStop → BeforeInstall → ValidateService → ApplicationStart
-
❏ C. ApplicationStop → BeforeInstall → ApplicationStart → ValidateService
-
❏ D. ApplicationStop → AfterInstall → ApplicationStart → ValidateService
Question 9
A developer at Scrumtuous Analytics is building a PDF rendering microservice on AWS Lambda. The function relies on several native and third‑party libraries that are not included in the managed runtime. How should the developer construct the Lambda deployment package so the libraries are available when the function is invoked?
-
❏ A. Ship only the source code in a ZIP file along with a script that installs dependencies when the function starts
-
❏ B. Upload a ZIP that includes an appspec.yml listing libraries, store it in Amazon S3, and roll out with AWS CloudFormation
-
❏ C. Bundle the handler code and every required third-party library into a single ZIP archive
-
❏ D. Publish the native libraries as a Lambda layer and deploy a ZIP that contains only the function code
Question 10
Which AWS feature provides a stable cross device user identity for a mobile app that can be used to obtain AWS credentials?
-
❏ A. Amazon Cognito User Pools only
-
❏ B. Cognito Identity Pools with developer authenticated identities
-
❏ C. Create an IAM user per app customer
-
❏ D. Amazon Pinpoint endpoint IDs
Question 11
Engineers at McKenzie Analytics use AWS CodeBuild to produce container images and push them to a private Amazon ECR repository, tagging each image with a build number such as build-9021. Their laptops already have the AWS CLI configured. Which commands should they run locally to retrieve one of these images from the private repository?
-
❏ A. Use aws ecr-public get-login-password, then run docker pull REPOSITORY_URI:TAG
-
❏ B. Run docker pull REPOSITORY_URI:TAG
-
❏ C. Pipe the output of aws ecr get-login-password to docker login for the registry, then run docker pull REPOSITORY_URI:TAG
-
❏ D. Execute aws ecr get-login-password and afterward run docker pull REPOSITORY_URI:TAG
Question 12
Which AWS service provides an auditable history of AWS API calls across services, including who made the calls, when they occurred, and the source of the calls?
-
❏ A. AWS X-Ray
-
❏ B. CloudTrail
-
❏ C. Amazon CloudWatch
-
❏ D. AWS Config
Question 13
A nonprofit publisher runs a production WordPress site on PHP in a single AWS Elastic Beanstalk environment and needs to roll out a new application version. The team cannot accept any outage if the update fails, and they want the smallest possible impact with an immediate rollback if needed. Which Elastic Beanstalk deployment policy should they choose?
-
❏ A. Rolling
-
❏ B. Immutable
-
❏ C. All at once
-
❏ D. Rolling with additional batch
Question 14
Which DynamoDB primary key design guarantees uniqueness within each vendor and allows querying a vendor’s purchases in time order using vendorId and purchaseTimestamp?
-
❏ A. vendorId (partition), productName (sort)
-
❏ B. purchaseTimestamp (partition), vendorId (sort)
-
❏ C. vendorId (partition), pricePerUnit (sort)
-
❏ D. vendorId (partition), purchaseTimestamp (sort)
Question 15
NorthPeak Robotics keeps separate AWS accounts for Sandbox, QA, and Production. A developer who signs in as an IAM user in the Sandbox account must be able to create and manage resources in the Production and QA accounts without issuing new long term credentials. What is the most efficient way to provide this access?
-
❏ A. Create a unique IAM user in each account and have the developer sign in to each account separately
-
❏ B. Define IAM roles with required permissions in the Production and QA accounts and allow the Sandbox user to assume those roles
-
❏ C. Create IAM groups in the Production and QA accounts and add the Sandbox IAM user to those groups
-
❏ D. Use AWS Organizations service control policies to grant the Sandbox user permissions in the other accounts
Question 16
An EC2 instance was terminated and within three minutes another instance of the same type was automatically launched without manual intervention. Which configuration most likely caused this replacement?
-
❏ A. EC2 Auto Recovery
-
❏ B. EC2 Auto Scaling group with desired capacity
-
❏ C. Application Load Balancer
-
❏ D. EC2 Spot Fleet with maintain target capacity
Question 17
A developer at Harbor Utilities runs a telemetry ingestion pipeline for thousands of smart water meters in a coastal region. The application writes events to an Amazon Kinesis Data Streams stream that currently has 6 shards, and a pool of Amazon EC2 instances using the Kinesis Client Library consumes and processes the records. CloudWatch shows high iterator age and CPU near 95 percent on the consumers, and IncomingBytes is approaching shard limits, indicating both the stream and the consumers are bottlenecked. What should the developer do to relieve the throughput and processing constraints?
-
❏ A. Increase only the shard count in the Kinesis stream
-
❏ B. Scale out the EC2 Auto Scaling group until the instance count matches the shard count
-
❏ C. Upgrade the EC2 instance types and add more shards by resharding
-
❏ D. Use larger EC2 instances only
Question 18
Which AWS service is best suited to store session state for up to 30 minutes with about three million concurrent logins while providing scalable, low latency access and resilience across Availability Zones?
-
❏ A. ALB session stickiness
-
❏ B. Amazon S3
-
❏ C. Amazon DynamoDB with TTL
-
❏ D. Amazon ElastiCache for Redis
Question 19
Orion Retail Labs has migrated from an on-premises data center to AWS and is setting up separate AWS Elastic Beanstalk environments for production, development, and QA. The production environment already uses a rolling deployment to keep the application available during releases. For development and QA, the team wants the fastest possible deployments and is comfortable with brief service interruptions. Which deployment policy should they use for those nonproduction environments?
-
❏ A. Immutable update
-
❏ B. All-at-once deployment
-
❏ C. Blue/green environment swap
-
❏ D. Rolling deployment
Question 20
Where should temporary scratch files be written in AWS Lambda so they do not persist after the function finishes executing?
-
❏ A. Amazon S3
-
❏ B. /tmp in the Lambda environment
-
❏ C. Amazon EFS
-
❏ D. /opt in Lambda runtime
AWS Developer Practice Exam Answers

All questions come from my Udemy AWS Developer Exams course and certificationexams.pro
Question 1
A developer at Scrumtuous Inc is sending custom Amazon CloudWatch metrics from multiple Amazon EC2 instances using the AWS CLI. They want the data to be organized so dashboards and alarms can filter by the specific instance ID and by the Auto Scaling group that launched the instance. When running aws cloudwatch put-metric-data, which parameter should be included to add this contextual grouping?
-
✓ C. --dimensions
The correct way to attach resource context to custom CloudWatch metrics is to include dimensions. With --dimensions, you add name-value pairs like InstanceId=i-abc123 and AutoScalingGroupName=my-asg to each datapoint so they can be grouped and filtered in dashboards, queries, and alarms. CloudWatch supports up to 30 dimensions per metric, which is ample for tagging metrics with resource identifiers.
--namespace only defines the logical container for metrics and does not categorize them by resource attributes, so it will not enable per-instance or per-ASG filtering.
--statistic-values is used to publish aggregated statistic sets for a metric and does not provide any labeling or grouping context.
--unit defines the unit of measure for datapoints and similarly does not attach resource identifiers for grouping.
Cameron’s Exam Tip
Use dimensions to add resource context (for example, InstanceId, AutoScalingGroupName); namespace groups metrics at a high level, and statistic-values and unit do not provide filtering context.
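For reference, here is a hedged example of publishing one datapoint with both dimensions attached (the namespace, metric name, and values are placeholders, not from the exam scenario):

    aws cloudwatch put-metric-data \
      --namespace "ScrumtuousApp" \
      --metric-name RequestLatency \
      --unit Milliseconds \
      --value 42 \
      --dimensions InstanceId=i-abc123,AutoScalingGroupName=my-asg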
Question 2
Which approach best ensures web session state is preserved for 60 minutes across Auto Scaling EC2 instances behind an Application Load Balancer so sessions survive instance replacement?
-
✓ B. Persist sessions in Amazon DynamoDB via a session library
Persist sessions in Amazon DynamoDB via a session library is correct because DynamoDB provides a durable, multi-AZ replicated, highly available key-value store that all EC2 instances can access. Sessions remain available even if any instance is terminated or replaced, and DynamoDB’s performance and scalability characteristics suit high-concurrency web workloads.
Store session files on Amazon EFS is not ideal because NFS-based session files introduce latency, file-locking, and scaling bottlenecks at high request rates. Although EFS is regional and multi-AZ, it is not the recommended pattern for scalable session state.
Enable stickiness on the ALB target group only keeps a client bound to one instance; it does not replicate or persist session data, so a terminated or failing instance still loses sessions.
Use Amazon ElastiCache for Redis as the session store can work with proper configuration, but it is memory-resident and may risk data loss on failover without persistence settings. Even with AOF/RDB, it is generally less durable than DynamoDB for strict session durability requirements.
Cameron’s Exam Tip
For stateful web apps behind load balancers and Auto Scaling, prefer a shared, durable, highly available store for session data. DynamoDB is a common best-practice answer. ALB stickiness addresses routing, not persistence. EFS and Redis can work but have trade-offs in durability, scalability, or operational complexity that make them less optimal for guaranteed session survival.
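A minimal sketch of the DynamoDB pattern, assuming a hypothetical Sessions table that uses an expiresAt attribute (Unix epoch seconds) as its TTL so sessions expire after 60 minutes:

    # One-time setup: expire items automatically via the expiresAt attribute
    aws dynamodb update-time-to-live \
      --table-name Sessions \
      --time-to-live-specification Enabled=true,AttributeName=expiresAt
    # Write a session item that any instance behind the ALB can read
    # (expiresAt value is a placeholder epoch roughly 60 minutes ahead)
    aws dynamodb put-item \
      --table-name Sessions \
      --item '{"sessionId": {"S": "sess-abc123"}, "userId": {"S": "u-42"}, "expiresAt": {"N": "1767225600"}}'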
Question 3
A city records office is digitizing paper archives and moving to AWS. They will start by uploading 30 TB of historical documents to Amazon S3, and the files are expected to be retrieved very rarely over many years. They require secure and highly durable object storage at the lowest possible cost. Which S3 storage class should they choose?
-
✓ B. Amazon S3 Glacier Deep Archive
Amazon S3 Glacier Deep Archive is the most cost-effective S3 storage class for data that is rarely accessed and must be retained for many years, while still meeting high durability and security requirements.
Amazon S3 Standard-Infrequent Access costs more than archival classes and is intended for data that needs millisecond access when retrieved, not the cheapest cold archive.
Amazon S3 One Zone-IA stores objects in a single Availability Zone, which lowers resilience compared to multi-AZ classes and is a poor fit for long-term compliance or critical archives.
Amazon S3 Glacier Flexible Retrieval offers faster retrieval options but at a higher storage price than Deep Archive, making it less optimal for the lowest-cost, rarely accessed data.
Cameron’s Exam Tip
When the requirement stresses lowest cost and rarely accessed archival data with strong durability, choose S3 Glacier Deep Archive; if faster restores are needed, consider Glacier Flexible Retrieval instead.
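As a quick illustration, objects can be written directly into Deep Archive at upload time (bucket and file names are placeholders):

    aws s3 cp batch-0001.tar s3://city-records-archive/batch-0001.tar --storage-class DEEP_ARCHIVE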
Question 4
Which method should you use to restrict a public Amazon API Gateway so that only browser clients from approved origins can invoke it?
-
✓ B. Enable CORS with only allowed origins
The correct choice is Enable CORS with only allowed origins. CORS is enforced by browsers, and when configured on API Gateway to return Access-Control-Allow-Origin only for approved domains, unapproved web apps will be blocked by the browser from making successful cross-origin calls.
Require API keys with usage plans is incorrect because API keys govern usage tracking and throttling, not origin-based access; any client with a key can call the API regardless of origin.
AWS WAF is incorrect because it does not implement CORS. While you can filter by headers or IPs, it cannot reliably enforce browser-origin restrictions or handle CORS preflight behavior.
API Gateway account-level throttling is incorrect because it only limits request rates and bursts and cannot block based on the caller’s origin.
Cameron’s Exam Tip
When a requirement mentions restricting calls from browser-based clients by origin, think CORS. If the goal is to block non-browser clients or enforce real security, consider authorization (IAM, JWT authorizers, Cognito) rather than CORS. Use WAF for IP/pattern filtering and throttling/usage plans for rate control, not origin control.
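As one hedged illustration, an API Gateway HTTP API can restrict allowed origins directly (the API ID and origin are placeholders; REST APIs instead configure CORS through method and integration responses):

    aws apigatewayv2 update-api \
      --api-id a1b2c3d4e5 \
      --cors-configuration '{"AllowOrigins": ["https://app.example.com"], "AllowMethods": ["GET", "POST"]}'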
Question 5
A developer at McKenzie Books is implementing an audit trail for a DynamoDB table named AuditedInvoices. DynamoDB Streams is enabled, and a Lambda function is triggered through an event source mapping. When an item is modified, the function must store only the item’s previous attribute values in an S3 bucket while keeping the updated values in the table. Which StreamViewType setting should be configured to meet this requirement?
-
✓ C. OLD_IMAGE
The requirement is to forward only the state of the item prior to the update into S3. OLD_IMAGE emits the entire item as it existed before the modification, which lets the Lambda function write just the previous values to S3 while the updated values remain in DynamoDB.
NEW_AND_OLD_IMAGES includes both versions, which would deliver the new values as well and is more data than needed.
KEYS_ONLY omits attribute values entirely, making it impossible to reconstruct the previous item.
NEW_IMAGE provides only the post-update state, which does not meet the requirement to capture the old values.
Cameron’s Exam Tip
When selecting a DynamoDB Streams view type, align it with exactly what your consumer needs. If you see language like only the previous values, think OLD_IMAGE; for minimizing payload and cost, avoid sending extra images you will not use.
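A sketch of enabling the stream with the required view type on the table from the scenario:

    aws dynamodb update-table \
      --table-name AuditedInvoices \
      --stream-specification StreamEnabled=true,StreamViewType=OLD_IMAGE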
Question 6
What is the most direct way to trigger a Lambda function when an EC2 instance enters the running state?
-
✓ C. Amazon EventBridge rule for EC2 state change to running with Lambda target
Amazon EventBridge rule for EC2 state change to running with Lambda target is correct because EC2 emits native state-change events that EventBridge can match (state = running) and invoke Lambda directly with low latency and minimal configuration. This provides a simple, scalable, and near real-time trigger path for per-instance launches.
The option CloudTrail RunInstances metric filter with CloudWatch alarm to Lambda is inferior because it introduces delivery delays and extra plumbing (logs, metric filters, alarms) compared to native state-change events.
The option EC2 Auto Scaling lifecycle hook is scope-limited to Auto Scaling groups only and will not capture launches outside ASGs, so it is not a general solution.
The option AWS Config rule evaluating EC2 instance state is intended for configuration compliance evaluations and is not optimized for immediate event-driven invocation on each launch.
Cameron’s Exam Tip
When you see “trigger Lambda on EC2 instance start” or “state change to running,” think EventBridge service events with a rule filtering on the EC2 state and a Lambda target. Avoid designs that rely on CloudTrail + alarms for this use case unless the question explicitly requires API-call auditing-driven triggers.
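A hedged sketch of the rule and target (rule name, region, account ID, and function name are placeholders; the function also needs a resource-based permission allowing events.amazonaws.com to invoke it):

    aws events put-rule \
      --name ec2-instance-running \
      --event-pattern '{"source": ["aws.ec2"], "detail-type": ["EC2 Instance State-change Notification"], "detail": {"state": ["running"]}}'
    aws events put-targets \
      --rule ec2-instance-running \
      --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:111122223333:function:on-instance-running'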
Question 7
Springfield Foods is migrating its customer storefront from a company data center to AWS. The site handles about 180,000 user sessions per day and must remain highly available with seamless scaling during peak events. If any single component experiences an outage, customers should not be logged out or lose their carts. Which approach should be used to manage session state in the new architecture?
-
✓ C. Amazon ElastiCache for Redis
The most resilient pattern is to externalize session data from the web tier into a managed, distributed store so that scaling out or replacing instances does not disrupt users. This makes the application stateless at the compute layer and maintains availability during failures.
Amazon ElastiCache for Redis fits these requirements by providing an in-memory, highly available store with replica support and clustering for horizontal scale, delivering the low latency and fault tolerance needed for session management.
Amazon S3 is durable but is not designed for frequent, low-latency session mutations and lacks in-memory operations required for efficient session handling.
Enable sticky sessions on the Application Load Balancer binds users to a specific instance, which undermines fault tolerance and elasticity because an instance failure or scale-in event can drop user sessions.
Amazon CloudFront accelerates content delivery but is not a database or cache for session state and cannot maintain per-user session data.
Cameron’s Exam Tip
When the requirement emphasizes no lost sessions and seamless scaling, think of externalizing state to a highly available, managed in-memory data store like Redis rather than relying on sticky sessions or object storage.
Question 8
In an AWS CodeDeploy in-place deployment to EC2 or on-premises instances, what is the correct order of lifecycle event hooks?
-
✓ C. ApplicationStop → BeforeInstall → ApplicationStart → ValidateService
ApplicationStop → BeforeInstall → ApplicationStart → ValidateService is correct because, in an EC2/On-Premises in-place CodeDeploy deployment, the application is stopped, pre-install tasks run, the application is started, and then the service is validated. This matches the documented hook sequence for in-place deployments.
The option ApplicationStop → BeforeInstall → ValidateService → ApplicationStart is incorrect because validation must occur after the application is started, not before.
The option ApplicationStop → AfterInstall → ApplicationStart → ValidateService is incorrect because it omits the BeforeInstall hook and misorders the early steps.
The option BeforeAllowTraffic → AllowTraffic → AfterAllowTraffic is incorrect because these hooks apply to blue/green traffic-shifting scenarios (such as with load balancers, ECS, or Lambda), not EC2 in-place deployments.
Cameron’s Exam Tip
Distinguish in-place hooks from blue/green traffic hooks. For in-place EC2/On-Prem, remember the flow: stop, pre-install, start, then validate. For blue/green, think in terms of Before/AfterAllowTraffic and load balancer cutover.
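To make the ordering concrete, a minimal appspec.yml hooks section for an EC2/On-Premises in-place deployment might look like the following sketch (script paths and destination are placeholders):

    version: 0.0
    os: linux
    files:
      - source: /
        destination: /var/www/app
    hooks:
      ApplicationStop:
        - location: scripts/stop_server.sh
      BeforeInstall:
        - location: scripts/install_dependencies.sh
      ApplicationStart:
        - location: scripts/start_server.sh
      ValidateService:
        - location: scripts/validate_service.sh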
Question 9
A developer at Scrumtuous Analytics is building a PDF rendering microservice on AWS Lambda. The function relies on several native and third-party libraries that are not included in the managed runtime. How should the developer construct the Lambda deployment package so the libraries are available when the function is invoked?
-
✓ C. Bundle the handler code and every required third-party library into a single ZIP archive
The Lambda deployment package for ZIP-based functions must contain your function code and any libraries that the runtime does not provide. Packaging dependencies with the code guarantees they are available without network installation at cold start.
Bundle the handler code and every required third-party library into a single ZIP archive is correct because it creates a self-contained artifact so the Lambda execution environment has everything needed at invocation time.
Ship only the source code in a ZIP file along with a script that installs dependencies when the function starts is wrong because installing at runtime increases latency and can fail due to lack of build toolchains or outbound access.
Upload a ZIP that includes an appspec.yml listing libraries, store it in Amazon S3, and roll out with AWS CloudFormation is incorrect since AppSpec is used by CodeDeploy and does not manage Lambda dependency packaging.
Publish the native libraries as a Lambda layer and deploy a ZIP that contains only the function code is a plausible alternative, but the question specifically asks how to build the deployment package; layers are not the deployment package and only apply if you choose to externalize dependencies.
Cameron’s Exam Tip
For ZIP-based Lambda functions, place all non-runtime dependencies in the package or use a Lambda layer; avoid installing at invocation due to cold-start impact and reliability risks, and read the question carefully for clues like deployment package versus layer.
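A hedged packaging sketch (the function name and requirements file are placeholders; native libraries should be built on an Amazon Linux compatible environment so compiled binaries match the Lambda runtime):

    # Vendor dependencies next to the handler, then zip everything together
    pip install -r requirements.txt --target package/
    cd package && zip -r ../function.zip . && cd ..
    zip function.zip lambda_function.py
    aws lambda update-function-code --function-name pdf-renderer --zip-file fileb://function.zip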
Question 10
Which AWS feature provides a stable cross device user identity for a mobile app that can be used to obtain AWS credentials?
-
✓ B. Cognito Identity Pools with developer authenticated identities
Cognito Identity Pools with developer authenticated identities is correct because Identity Pools create a persistent Cognito identity ID that remains the same across sessions and devices when you present the same developer provider and user identifier. It also enables retrieving AWS credentials for accessing resources securely.
The option Amazon Cognito User Pools only is not sufficient because while User Pools issues a stable sub claim for the user, it does not provide the Cognito identity ID or AWS credentials used for federated access via STS.
The option Create an IAM user per app customer is inappropriate for mobile app end users. It adds significant operational burden, scales poorly, and violates best practices.
The option Amazon Pinpoint endpoint IDs is incorrect because endpoint IDs represent individual devices or channels and are not a person-level identifier across multiple devices.
Cameron’s Exam Tip
Watch for cues like cross-device identity, persistent user ID, and obtaining AWS credentials. Those terms point to Identity Pools. When the question focuses on sign-up, sign-in, and tokens, think User Pools. Avoid using IAM users for consumer identities.
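A sketch of the developer-authenticated flow, called from your backend with its own AWS credentials and the same stable user identifier every time (the pool ID, provider name, and user ID are placeholders):

    # Returns a persistent IdentityId plus a token the client can exchange for AWS credentials
    aws cognito-identity get-open-id-token-for-developer-identity \
      --identity-pool-id us-east-1:12345678-aaaa-bbbb-cccc-123456789012 \
      --logins login.mycompany.app=user-8675309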
Question 11
Engineers at McKenzie Analytics use AWS CodeBuild to produce container images and push them to a private Amazon ECR repository, tagging each image with a build number such as build-9021. Their laptops already have the AWS CLI configured. Which commands should they run locally to retrieve one of these images from the private repository?
-
✓ C. Pipe the output of aws ecr get-login-password to docker login for the registry, then run docker pull REPOSITORY_URI:TAG
Pipe the output of aws ecr get-login-password to docker login for the registry, then run docker pull REPOSITORY_URI:TAG is correct because Docker must first be authenticated against the private Amazon ECR registry using the CLI v2 password, after which the image can be pulled successfully.
Run docker pull REPOSITORY_URI:TAG is incorrect because a private ECR repository requires prior authentication and a direct pull will fail with an authorization error.
Execute aws ecr get-login-password and afterward run docker pull REPOSITORY_URI:TAG is incorrect since merely running the command prints a password; you must pass that output to docker login to establish a session.
Use aws ecr-public get-login-password, then run docker pull REPOSITORY_URI:TAG is incorrect because this authenticates to ECR Public, not the private ECR registry described in the scenario.
Cameron’s Exam Tip
For private ECR pulls, first authenticate Docker: aws ecr get-login-password --region REGION | docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com, then run docker pull REPOSITORY_URI:TAG.
Question 12
Which AWS service provides an auditable history of AWS API calls across services, including who made the calls, when they occurred, and the source of the calls?
-
✓ B. CloudTrail
CloudTrail is correct because it captures and stores AWS API events (management and data events), including the caller identity, timestamp, and source IP, providing a verifiable audit trail across AWS accounts and services. It supports organization trails, event history, and CloudTrail Lake for deeper analysis, which aligns directly with audit requirements.
The option AWS X-Ray is incorrect because it traces application requests within distributed systems and services for performance troubleshooting, not AWS API control-plane activity.
The option Amazon CloudWatch is incorrect because it handles metrics, alarms, and log ingestion but does not natively record a complete history of AWS API calls.
The option AWS Config is incorrect because it records resource configuration changes and compliance states, not the full set of API events across services and principals.
Cameron’s Exam Tip
When you see keywords like audit trail, API activity, who/when/source IP, and across services, think CloudTrail. Map similar distractors quickly: resource configuration timeline → AWS Config, metrics/alarms → CloudWatch, request traces → X-Ray.
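For example, recent API activity can be queried straight from the CLI (the event name shown is just one example):

    aws cloudtrail lookup-events \
      --lookup-attributes AttributeKey=EventName,AttributeValue=TerminateInstances \
      --max-results 10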
Question 13
A nonprofit publisher runs a production WordPress site on PHP in a single AWS Elastic Beanstalk environment and needs to roll out a new application version. The team cannot accept any outage if the update fails, and they want the smallest possible impact with an immediate rollback if needed. Which Elastic Beanstalk deployment policy should they choose?
-
✓ B. Immutable
The safest way to avoid downtime while enabling the fastest rollback in a single Elastic Beanstalk environment is Immutable. It launches a new set of instances in a separate Auto Scaling group with the new version, verifies health, then switches traffic, and rollback is simply swapping back.
All at once updates every instance at the same time, which risks an outage and makes rollback a full redeploy.
Rolling replaces instances in batches and can temporarily reduce capacity, and rollback requires redeploying the previous version across multiple batches.
Rolling with additional batch preserves capacity but still performs phased replacements and does not provide the near-instant, safe cutover and rollback that Immutable offers.
Cameron’s Exam Tip
If a question emphasizes zero downtime and fast rollback within a single Elastic Beanstalk environment, pick Immutable. If it explicitly mentions two environments and a CNAME swap, think blue green.
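A hedged sketch of switching the policy and deploying a new version (the environment and version names are placeholders):

    aws elasticbeanstalk update-environment \
      --environment-name wordpress-prod \
      --version-label v2025-01-15 \
      --option-settings Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Immutable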
Question 14
Which DynamoDB primary key design guarantees uniqueness within each vendor and allows querying a vendor’s purchases in time order using vendorId and purchaseTimestamp?
-
✓ D. vendorId (partition), purchaseTimestamp (sort)
The correct choice is vendorId (partition), purchaseTimestamp (sort). Using vendorId as the partition key groups all a vendor’s orders together, and using purchaseTimestamp as the sort key allows efficient time-ordered queries within that vendor partition. The composite key vendorId plus purchaseTimestamp provides per-vendor uniqueness and supports range queries by time. If exact timestamp collisions are possible, a common pattern is to append a tiebreaker such as an increment or UUID to the sort key, while still anchoring the design on vendorId and purchaseTimestamp.
The option vendorId (partition), productName (sort) is incorrect because products repeat, so it neither guarantees uniqueness nor provides chronological ordering.
The option purchaseTimestamp (partition), vendorId (sort) is incorrect because partitioning by time causes write hot spots and prevents efficient per-vendor queries, since Query requires the partition key.
The option vendorId (partition), pricePerUnit (sort) is incorrect because prices are reused and provide no reliable ordering or uniqueness.
Cameron’s Exam Tip
When a question asks for time-ordered queries within an entity group, pick the entity identifier as the partition key and the time attribute as the sort key. Use Query with KeyConditionExpression on the partition key and a time range on the sort key. Avoid using time as the partition key due to hot partitions and poor query ergonomics. If uniqueness within a timestamp could collide, compose the sort key as timestamp plus a tiebreaker while preserving time-based ordering.
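A sketch of the time-range query this key design enables (table name, vendor ID, and timestamps are placeholders):

    aws dynamodb query \
      --table-name VendorPurchases \
      --key-condition-expression "vendorId = :v AND purchaseTimestamp BETWEEN :start AND :end" \
      --expression-attribute-values '{":v": {"S": "vendor-42"}, ":start": {"S": "2025-01-01T00:00:00Z"}, ":end": {"S": "2025-01-31T23:59:59Z"}}'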
Question 15
NorthPeak Robotics keeps separate AWS accounts for Sandbox, QA, and Production. A developer who signs in as an IAM user in the Sandbox account must be able to create and manage resources in the Production and QA accounts without issuing new long term credentials. What is the most efficient way to provide this access?
-
✓ B. Define IAM roles with required permissions in the Production and QA accounts and allow the Sandbox user to assume those roles
The most efficient solution is to use cross account roles. Create IAM roles in the Production and QA accounts with the necessary permissions and a trust policy that allows the Sandbox account to assume them. The developer then uses STS AssumeRole to obtain temporary credentials and launch resources. That is why Define IAM roles with required permissions in the Production and QA accounts and allow the Sandbox user to assume those roles is correct.
Create a unique IAM user in each account and have the developer sign in to each account separately increases operational overhead and credential sprawl, and it does not leverage temporary credentials.
Create IAM groups in the Production and QA accounts and add the Sandbox IAM user to those groups is not possible because IAM users from another account cannot be members of local IAM groups.
Use AWS Organizations service control policies to grant the Sandbox user permissions in the other accounts is incorrect because SCPs only define permission boundaries and do not grant permissions to principals, so they cannot authorize cross account actions by a specific user.
Cameron’s Exam Tip
When you see cross account access for an existing IAM user, look for assume role with a trust policy that names the external account, and avoid solutions that create duplicate users or rely on SCPs, which do not grant permissions.
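A sketch of the developer's side of the flow once the trust policy is in place (the account ID and role name are placeholders):

    # Returns temporary AccessKeyId, SecretAccessKey, and SessionToken
    aws sts assume-role \
      --role-arn arn:aws:iam::111122223333:role/ProductionDeveloperAccess \
      --role-session-name sandbox-dev-session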
Question 16
An EC2 instance was terminated and within three minutes another instance of the same type was automatically launched without manual intervention. Which configuration most likely caused this replacement?
-
✓ B. EC2 Auto Scaling group with desired capacity
The most likely cause is an EC2 Auto Scaling group with desired capacity. Auto Scaling groups continuously enforce their desired capacity. When an instance in the group is terminated, the group automatically launches a replacement instance using its launch template or configuration.
The EC2 Auto Recovery option is incorrect because it restarts the same instance after certain failures and does not create a new instance when a user terminates one.
The Application Load Balancer option is incorrect because load balancers only route traffic and do not create instances.
The EC2 Spot Fleet with maintain target capacity option is an advanced distractor; while Spot Fleets do replace terminated Spot instances to maintain capacity, the default and more general mechanism for automatic instance replacement after termination in typical AWS setups is an Auto Scaling group.
Cameron’s Exam Tip
When you see automatic replacement of instances after termination, think Auto Scaling group and the phrase maintain desired capacity. Load balancers do not launch instances, and Auto Recovery restarts the same instance rather than creating a new one. Spot-specific constructs apply only when Spot capacity is explicitly in use.
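As an illustration, the replacement behavior comes from the group's capacity settings, which Auto Scaling continuously reconciles against running instances (group name and sizes are placeholders):

    aws autoscaling update-auto-scaling-group \
      --auto-scaling-group-name web-asg \
      --min-size 2 --max-size 6 --desired-capacity 3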
Question 17
A developer at Harbor Utilities runs a telemetry ingestion pipeline for thousands of smart water meters in a coastal region. The application writes events to an Amazon Kinesis Data Streams stream that currently has 6 shards, and a pool of Amazon EC2 instances using the Kinesis Client Library consumes and processes the records. CloudWatch shows high iterator age and CPU near 95 percent on the consumers, and IncomingBytes is approaching shard limits, indicating both the stream and the consumers are bottlenecked. What should the developer do to relieve the throughput and processing constraints?
-
✓ C. Upgrade the EC2 instance types and add more shards by resharding
The stream shows shard-level saturation and the consumers are CPU bound, so the correct remediation is to increase capacity on both sides. Upgrade the EC2 instance types and add more shards by resharding raises consumer processing power and increases Kinesis stream throughput and parallelism, which together reduce iterator age and backlog.
Increase only the shard count in the Kinesis stream addresses producer and read bandwidth limits but leaves the EC2 consumers CPU constrained, so lag would remain.
Scale out the EC2 Auto Scaling group until the instance count matches the shard count is not required because one host can run multiple KCL workers and it does not increase stream capacity.
Use larger EC2 instances only ignores the shard bottleneck, so the stream would still throttle or fall behind.
Cameron’s Exam Tip
For Kinesis pipelines, scale along two dimensions: shard count for stream throughput and consumer compute for processing capacity. Watch metrics such as IncomingBytes, WriteProvisionedThroughputExceeded, and GetRecords.IteratorAgeMilliseconds to decide whether to scale shards, consumers, or both.
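A hedged example of the resharding half of the fix, doubling the scenario's 6 shards (the stream name is a placeholder):

    aws kinesis update-shard-count \
      --stream-name water-meter-telemetry \
      --target-shard-count 12 \
      --scaling-type UNIFORM_SCALING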
Question 18
Which AWS service is best suited to store session state for up to 30 minutes with about three million concurrent logins while providing scalable, low latency access and resilience across Availability Zones?
-
✓ D. Amazon ElastiCache for Redis
Amazon ElastiCache for Redis is the best fit for storing session state at massive concurrency because it is an in-memory data store that delivers sub-millisecond latency, very high throughput, built-in TTLs for expiring sessions, horizontal scaling via sharding, and Multi-AZ replication with automatic failover for resilience to an Availability Zone outage.
ALB session stickiness is not a session store; it only keeps clients routed to the same target. If the instance or AZ fails or the target group scales in, sessions are lost because there is no shared state.
Amazon S3 is durable object storage with much higher latency and request patterns unsuitable for high-frequency session reads and writes. It is not designed as a session store.
Amazon DynamoDB with TTL can scale and expire items, but it is not in-memory and typically has higher latency than Redis for per-request session access. Hot keys and very high request rates for session reads and writes make ElastiCache a better choice for this use case.
Cameron’s Exam Tip
For ephemeral, high-throughput session state, choose an in-memory cache like ElastiCache for Redis with Multi-AZ. Avoid confusing load balancer stickiness with state storage. Use TTLs to auto-expire sessions. Be cautious with durable stores like S3 or DynamoDB when ultra-low latency and extremely high RPS are required.
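A minimal sketch of a session write with a 30-minute TTL, assuming a hypothetical ElastiCache for Redis endpoint:

    # EX 1800 expires the key after 30 minutes
    redis-cli -h my-sessions.abc123.use1.cache.amazonaws.com -p 6379 \
      SET session:user-42 '{"loggedIn": true}' EX 1800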
Question 19
Orion Retail Labs has migrated from an on-premises data center to AWS and is setting up separate AWS Elastic Beanstalk environments for production, development, and QA. The production environment already uses a rolling deployment to keep the application available during releases. For development and QA, the team wants the fastest possible deployments and is comfortable with brief service interruptions. Which deployment policy should they use for those nonproduction environments?
-
✓ B. All-at-once deployment
The right choice for nonproduction when speed matters and brief outages are acceptable is All-at-once deployment. This policy updates all instances in the environment simultaneously, which is the fastest method and intentionally allows a short period of unavailability.
Immutable update provisions a parallel Auto Scaling group and shifts traffic only after health checks pass, which is safer but slower and unnecessary when downtime is acceptable.
Blue/green environment swap is a zero-downtime strategy that requires a second environment and a CNAME swap, which adds complexity and time, and it is not a Beanstalk deployment policy option inside a single environment.
Rolling deployment updates instances in batches to maintain service availability, which reduces risk but is slower than all-at-once and not aligned with the goal of fastest possible pushes.
Cameron’s Exam Tip
When you see keywords like fastest deployment and downtime is acceptable, think All-at-once. If the requirement emphasizes no downtime or safe rollback, prefer Rolling, Rolling with additional batches, Immutable, or a Blue/green approach.
Question 20
Where should temporary scratch files be written in AWS Lambda so they do not persist after the function finishes executing?
-
✓ B. /tmp in the Lambda environment
The correct choice is /tmp in the Lambda environment. This is the only writable, ephemeral directory available to a Lambda function by default. Files written there are tied to a specific execution environment and disappear when that environment is recycled, so nothing carries over into fresh cold-start environments.
The option Amazon S3 is incorrect because S3 is durable storage; objects persist until explicitly deleted.
The option Amazon EFS is incorrect because EFS is a persistent network file system shared across invocations and functions.
The option /opt in Lambda runtime is incorrect because it is generally used for Lambda layers and is read-only during execution.
Cameron’s Exam Tip
Remember that Lambda has a single writable path: /tmp. Paths tied to the code package or layers are read-only, and external services like S3 or EFS persist data. If you see “temporary” or “scratch” in a Lambda question, think /tmp.
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, and Solutions Architect, and the author of many popular books in the software development and cloud computing space. His growing YouTube channel, which trains developers in Java, Spring, AI, and ML, has well over 30,000 subscribers.