Certified AWS DevOps Engineer Exam Dump and Braindump

These questions all come from certificationexams.pro and my Certified DevOps Engineer Udemy course.
AWS DevOps Certification Exam Simulator
Despite the title of this article, this is not an AWS DevOps Engineer Professional braindump in the traditional sense.
I don’t believe in cheating.
Traditionally, the word “braindump” referred to someone taking an exam, memorizing the questions, and posting them online for others to use. That practice is unethical and violates the AWS certification agreement. It offers no integrity, no skill development, and no lasting value.
Better than braindumps & exam dumps
This is not an AWS braindump.
All of these questions come from my AWS DevOps Engineer Professional Udemy course and from the certificationexams.pro website, which offers hundreds of free AWS DevOps Engineer Practice Questions.
Each question has been carefully written to match the official AWS Certified DevOps Engineer Professional exam topics. They mirror the tone, logic, and technical depth of real AWS scenarios, but none are copied from the actual test. Every question is designed to help you learn, reason, and master automation, monitoring, CI/CD, and operational excellence the right way.
If you can answer these questions and understand why the incorrect options are wrong, you will not only pass the real AWS DevOps Engineer Professional exam, you will gain a deep understanding of how to build, deploy, monitor, and optimize resilient and automated systems on AWS.
So if you want to call this your AWS DevOps Engineer Braindump, that’s fine, but know that every question here is built to teach, not to cheat.
Each item includes detailed explanations, practical examples, and tips that help you think like an AWS DevOps professional during the exam.
Study thoroughly, practice often, and approach your certification with integrity. Success in AWS DevOps comes not from memorizing answers but from understanding how automation, monitoring, and continuous delivery come together to build the modern cloud.
AWS DevOps Engineer Sample Questions
Question 1
Scrumtuous Publishing, a media analytics startup, runs a Kinesis Client Library application on a single Amazon EC2 instance to consume from an Amazon Kinesis Data Streams stream that uses 8 shards. Under peak traffic the consumer falls behind and some records expire out of a 36-hour retention window before processing completes. You must improve reliability with the fewest changes to code and architecture. What should you do? (Choose 2)
-
❏ A. Increase the stream’s data retention to 72 hours
-
❏ B. Run the KCL app in an Auto Scaling group of EC2 instances and scale on the MillisBehindLatest CloudWatch metric
-
❏ C. Add more shards to the Kinesis stream to boost throughput
-
❏ D. Migrate the consumer to AWS Lambda
-
❏ E. Replace the stream with Amazon SQS
Question 2
Which control prevents IAM user creation across all organization accounts while allowing only approved principals to create users?
-
❏ A. Identity-based IAM policies in each account
-
❏ B. Organizations SCP denying iam:CreateUser except approved principals via aws:PrincipalArn condition
-
❏ C. EventBridge rule with Lambda to remove unauthorized users
-
❏ D. Organizations SCP denying iam:CreateLoginProfile
Question 3
After a security incident at McKenzie Pay, the payments API running in an EC2 Auto Scaling group must enforce that any production instance receiving an interactive SSH session is automatically shut down. All instances already forward logs to Amazon CloudWatch Logs using the CloudWatch agent. What is the most automated approach to accomplish this requirement?
-
❏ A. Create a CloudWatch Alarm based on a metric filter for SSH logins that publishes to an SNS topic; on-call engineers subscribe and terminate the reported instance manually
-
❏ B. Configure a CloudWatch Alarm for SSH login events to send notifications to an SQS queue; run a pool of EC2 worker instances that poll the queue and terminate the indicated instances
-
❏ C. Create a CloudWatch Logs subscription filter that invokes an AWS Lambda function to tag the source instance with MARK_FOR_TERMINATION when an SSH login is detected, and use an Amazon EventBridge scheduled rule to run a second Lambda every 45 minutes to terminate all instances carrying that tag
-
❏ D. Wire a CloudWatch Logs subscription directly to AWS Step Functions to attach a FOR_DELETION tag to the instance, and schedule an EventBridge rule at 01:30 UTC each day to trigger a Lambda that terminates tagged instances
Question 4
Which AWS Systems Manager actions enable centralized patching for EC2 instances and on-premises servers while limiting patching to 10 PM to 6 AM local time? (Choose 3)
-
❏ A. Create an EventBridge schedule to start patching nightly
-
❏ B. Register on-prem servers with Systems Manager using Hybrid Activations
-
❏ C. Attach an IAM instance profile to EC2 for Systems Manager access
-
❏ D. Use Amazon Inspector to automatically apply patches
-
❏ E. Configure a Systems Manager Maintenance Window for off-hours
-
❏ F. Generate long-lived IAM access keys on on-prem hosts for SSM
Question 5
At Asteria Robotics, about 300 nightly batch computations that cannot be containerized must run on Amazon EC2. Capacity may be reclaimed with only a two minute notice, so the jobs must resume from the most recent checkpoint. The team requires a shared, persistent file system that many instances can mount for scratch data and checkpoints while keeping compute costs low. Which solution best meets these requirements?
-
❏ A. Amazon FSx for Lustre with EC2 On-Demand Instances
-
❏ B. Amazon Elastic File System with EC2 Spot Instances
-
❏ C. Amazon EBS with EC2 Reserved Instances
-
❏ D. Amazon S3 with EC2 On-Demand Instances
Question 6
An EC2 instance receives an AccessDenied error when attempting to retrieve an object from a restricted S3 bucket. Which configurations should you check to determine the cause? (Choose 2)
-
❏ A. S3 Block Public Access settings
-
❏ B. Instance profile IAM role permissions/trust
-
❏ C. S3 Object Lock
-
❏ D. Bucket policy principals/conditions
-
❏ E. S3 VPC endpoint policy
Question 7
Northwind Mutual is moving its customer self-service portal to AWS. The application must run on AWS Elastic Beanstalk and connect to an Amazon RDS for MySQL database configured for Multi-AZ. Leadership requires deployments to maintain 100% capacity by launching a separate set of instances, provide an easy rollback path, and avoid problems from partially completed rolling updates. While a new release is being deployed, user experience must remain unaffected. What is the most cost-effective deployment approach to meet these goals?
-
❏ A. Elastic Beanstalk with an external Amazon RDS for MySQL Multi-AZ, blue/green to a parallel environment with a CNAME swap, and keep the previous environment running as a standby
-
❏ B. AWS CodeDeploy
-
❏ C. Elastic Beanstalk with an external Amazon RDS for MySQL Multi-AZ and immutable deployments
-
❏ D. Elastic Beanstalk with an RDS instance created inside the environment set to Multi-AZ, using Rolling with additional batch
Question 8
When deploying a Lambda function with CodeDeploy what approach allows you to run readiness checks and block traffic to the new version until it is ready to avoid transient errors?
-
❏ A. Enable Lambda Provisioned Concurrency
-
❏ B. Use BeforeAllowTraffic hook in the Lambda AppSpec
-
❏ C. CodeDeploy canary: 10% traffic for 15 minutes
-
❏ D. Use AfterAllowTraffic hook
Question 9
A DevOps team at a digital ticketing startup uses AWS CloudFormation to deploy a Lambda function. After AWS CodePipeline uploads a new zip to Amazon S3 at s3://build-artifacts-123/lambda/payments-handler-v2.zip and triggers a stack update, the stack completes but the function code does not change. What should you do to ensure the new package is picked up quickly? (Choose 3)
-
❏ A. Enable the AWS SAM transform in the template
-
❏ B. Upload each release with a new object key in the same S3 bucket
-
❏ C. Turn on S3 versioning for the artifacts bucket and reference the S3ObjectVersion in CloudFormation
-
❏ D. Add an extra wait or manual approval to delay the stack update for 5 seconds
-
❏ E. Use AWS CodeDeploy for Lambda deployments
-
❏ F. Push the package to a new S3 bucket name on every deployment and update the template accordingly
Question 10
Which AWS solution provides a fully serverless publicly accessible static website with global content acceleration and protection against common web exploits?
-
❏ A. AWS Lambda + API Gateway + GuardDuty
-
❏ B. Amazon S3 + Amazon CloudFront + AWS Shield Standard
-
❏ C. S3 static hosting with CloudFront and an AWS WAF web ACL
-
❏ D. Amazon EC2 + ElastiCache Redis + AWS Shield Advanced

All questions come from certificationexams.pro and my AWS DevOps Engineer Udemy course.
Question 11
A national retailer, Aurora Outfitters, runs its order management platform on an Amazon EC2 Auto Scaling group behind an Application Load Balancer, with purchases stored in a DynamoDB table named OrderLedger. For analytics, six AWS Lambda functions subscribe to the table’s DynamoDB Streams to compute product counts, track stock levels, generate summaries, and send new records to Amazon Data Firehose. During heavy traffic, the functions encounter stream throttling and some invocations fail, reducing reporting throughput. What is the most scalable and cost-effective way to address this with minimal operational effort?
-
❏ A. Add a new GSI to the OrderLedger table, increase provisioned RCU and WCU, disable DynamoDB Streams, and refactor the Lambda reporting functions to query the table directly
-
❏ B. Migrate processing to Amazon ECS with Fargate and use an AWS Glue streaming job to read from the table, while moving interactive analytics to Amazon Managed Service for Apache Flink Studio
-
❏ C. Use the DynamoDB Streams Kinesis Adapter with the Kinesis Client Library to scale out consumption across shards and offload complex aggregations to Amazon Managed Service for Apache Flink Studio
-
❏ D. Keep Lambda event source mappings and raise reserved concurrency, set ParallelizationFactor to 10, increase batch size, and switch the table to on-demand capacity
Question 12
How can you automatically start and stop EC2 instances and a single RDS DB instance only while CodePipeline test stages run, given that the test stages last about four hours and occur roughly three times per week, without changing the existing architecture?
-
❏ A. Use CodeBuild pre/post steps with AWS CLI to start and stop resources
-
❏ B. AWS Instance Scheduler with a weekly cron schedule
-
❏ C. EventBridge CodePipeline state-change events invoking Systems Manager Automation to start/stop EC2 and RDS
-
❏ D. Migrate to Aurora Serverless, add an ALB, and use Lambda to toggle power
Question 13
A national nonprofit health directory operated by the Maple Health Alliance runs a public website listing clinics, hospitals, medical specialists, and related services. The organization also provides several public REST APIs that partners use to search this data, which are implemented with AWS Lambda. All records are stored in an Amazon DynamoDB table named CareDirectory, and search indexes are kept in an Amazon OpenSearch Service domain called care-search-v2. Leadership has asked a DevOps engineer to implement a deployment approach that results in zero downtime if a release fails and that stops any subsequent releases after a failure. During updates, the platform must retain 100% capacity with no temporary reduction to avoid performance degradation. What is the most efficient way to meet these requirements?
-
❏ A. Create DynamoDB, Lambda, and Amazon OpenSearch Service with AWS CloudFormation; use AWS CodeDeploy blue/green for releases; host the website on Amazon S3 with cross-Region replication
-
❏ B. Create DynamoDB, Lambda, and Amazon OpenSearch Service with AWS CloudFormation; use AWS CodeDeploy blue/green for releases; run the web tier on AWS Elastic Beanstalk with the All at once policy
-
❏ C. Create DynamoDB, Lambda, and Amazon OpenSearch Service with AWS CloudFormation; use AWS CodeDeploy blue/green for releases; run the web tier on AWS Elastic Beanstalk with the Immutable policy
-
❏ D. Create DynamoDB, Lambda, and Amazon OpenSearch Service with AWS CloudFormation; use AWS CodeDeploy in-place for releases; run the web tier on AWS Elastic Beanstalk with the Rolling with additional batch policy
Question 14
How can you automatically update an S3 static status page when EC2 Auto Scaling instances launch or terminate while maintaining a durable searchable log of the scaling events?
-
❏ A. Create an EventBridge rule that delivers events to CloudWatch Logs and periodically export to S3
-
❏ B. Configure an EventBridge rule with two targets, S3 and CloudWatch Logs
-
❏ C. Use an EventBridge rule for Auto Scaling instance launch and terminate to invoke Lambda; Lambda updates the S3 page and writes events to CloudWatch Logs
-
❏ D. Run a scheduled EventBridge rule every 15 minutes to have Lambda scan the Auto Scaling group and update S3, logging to S3
Question 15
A platform engineer at Orion Retail launched an Amazon EC2 instance in a private subnet inside an Amazon VPC. The instance uses an instance profile to fetch an object from a tightly restricted Amazon S3 bucket, but the request returns a 403 Access Denied response. What are two plausible reasons for this failure? (Choose 2)
-
❏ A. Default encryption is enabled on the S3 bucket
-
❏ B. The S3 bucket policy does not grant the required permission
-
❏ C. The EC2 instance security group blocks outbound HTTPS traffic
-
❏ D. S3 versioning is enabled on the bucket
-
❏ E. There is a misconfiguration in the instance profile IAM role
Question 16
In an AWS CodeBuild buildspec.yml file how should you configure pushing a Docker image to a private Amazon ECR repository so that the image is pushed only after a successful build?
-
❏ A. Use a finally block in post_build to push the image
-
❏ B. AWS CodePipeline
-
❏ C. Use post_build commands to push the image to ECR
-
❏ D. Use pre_build commands to push the image
Question 17
Nova Mobility, a transportation analytics company, plans to launch a real-time road-conditions web portal on AWS. Leadership requires an application stack that remains available during failures and minimizes downtime. They also want a native capability that continuously scans workloads for vulnerabilities, unintended exposure, and drift from AWS security best practices. Which architecture best satisfies these requirements?
-
❏ A. Use Amazon Macie for automated security assessment to help improve the security and compliance of your applications, store data in Amazon DynamoDB, place an Auto Scaling group of EC2 instances across four Availability Zones behind an Application Load Balancer, and in Amazon Route 53 create a non-alias A record at the root domain pointing to the load balancer
-
❏ B. Use Amazon Inspector to automatically evaluate applications for exposure, vulnerabilities, and deviations from AWS best practices, run an EC2 Auto Scaling group spread across three Availability Zones behind an Application Load Balancer, use Amazon Aurora for the database tier, and in Amazon Route 53 create an alias record at the zone apex that targets the load balancer
-
❏ C. Use Amazon GuardDuty for automated security assessment to help improve the security and compliance of your applications, add Amazon ElastiCache for application caching, launch an Auto Scaling group of EC2 instances in two Availability Zones and attach it to an Application Load Balancer, deploy Amazon RDS for MySQL with Multi-AZ and read replicas, and in Amazon Route 53 create a CNAME for the root domain to the load balancer
-
❏ D. Use AWS Shield for automated security assessment to help improve the security and compliance of your applications, run an Auto Scaling group of EC2 instances across three Availability Zones with an Application Load Balancer in front, use Amazon RDS for MySQL with Multi-AZ, and in Amazon Route 53 create a non-alias A record for the apex zone to point to the load balancer
Question 18
Which AWS tool can discover a legacy Java application, containerize it for deployment to Amazon ECS or Amazon EKS, and bootstrap an automated CI/CD pipeline?
-
❏ A. AWS Copilot CLI for ECS with CI/CD
-
❏ B. AWS App2Container with CodeBuild/CodeDeploy
-
❏ C. AWS Proton service templates
-
❏ D. AWS Elastic Beanstalk Docker with CodePipeline
Question 19
A media intelligence startup named NovaSight plans to deploy a self-managed Apache Kafka cluster with 9 brokers spread across 3 Availability Zones. Kafka is stateful and each broker must keep its data on its own EBS volume. The team needs automatic recovery when a broker fails and must guarantee that any replacement broker reattaches the correct EBS volume in the same Availability Zone. What is the most reliable CloudFormation-based approach to meet these requirements?
-
❏ A. Create 9 EC2 instances with EBS volumes and explicit attachments in a single CloudFormation stack, and rely on stack updates to recreate instances if they are terminated
-
❏ B. Create an Auto Scaling group with desired capacity 9 across 3 AZs and provision 9 EBS volumes across those AZs, then use user data to claim any free volume in the same AZ at launch
-
❏ C. Use a nested CloudFormation template that defines an Auto Scaling group with min and max 1 and one EBS volume tagged to match that group, include a bootstrap script to attach the tagged volume at boot, and reference the nested stack nine times
-
❏ D. Create 9 EC2 instances with EBS volumes and define the attachments, and on termination trigger drift detection and attempt remediation to restore the attachment
Question 20
Which AWS service provides continuous threat detection across an account by analyzing CloudTrail events, VPC flow logs, and DNS logs?
-
❏ A. Amazon Inspector
-
❏ B. Amazon GuardDuty
-
❏ C. Amazon Macie
-
❏ D. Amazon Detective
Question 21
A platform engineer at Borealis Health is moving a Dockerized service from a company server room to AWS. The team wants the runtime to require as little infrastructure management as possible. They also need encrypted connectivity between the on-premises network and AWS so the application can reach internal systems securely during operation. What should the engineer implement to meet these goals? (Choose 2)
-
❏ A. Migrate the containers to Amazon ECS on the EC2 launch type in a dedicated VPC
-
❏ B. Front the service with a Network Load Balancer using IP targets and an HTTPS listener
-
❏ C. Provision AWS Direct Connect for the on-premises link and use it without additional encryption
-
❏ D. Configure an AWS Site-to-Site VPN between the data center and the VPC
-
❏ E. Run the workload on Amazon ECS with the Fargate launch type in a dedicated VPC
Question 22
How should a Lambda function retrieve Amazon RDS credentials at runtime while ensuring the database password is automatically rotated every 30 days?
-
❏ A. AWS Systems Manager Parameter Store SecureString
-
❏ B. Amazon RDS IAM authentication tokens
-
❏ C. AWS Key Management Service
-
❏ D. AWS Secrets Manager with rotation enabled
Question 23
A national health insurer runs its member portal on AWS and stores protected documents in a server-side encrypted Amazon S3 bucket. The DevOps team configured SAML federation from an on-premises Active Directory group to grant controlled access to the bucket. Compliance now requires automatically detecting any change to the bucket access configuration or the federated role and having the ability to roll back unintended admin edits. What is the quickest way to detect such configuration changes?
-
❏ A. Integrate Amazon EventBridge with an AWS Lambda function to run a 20-minute schedule that scans the IAM policy attached to the federated role and reverts unexpected changes
-
❏ B. Create an AWS Config rule with a periodic trigger that evaluates the S3 bucket every 45 minutes and use a Lambda remediation to roll back changes
-
❏ C. Configure an AWS Config rule with a configuration change trigger scoped to the S3 bucket and federated role, and invoke an AWS Systems Manager Automation runbook via Lambda to restore approved settings
-
❏ D. Use Amazon EventBridge with CloudTrail events to invoke a Lambda function whenever API calls occur and attempt to detect and revert changes
Question 24
ECS Fargate tasks running in private subnets without a NAT gateway fail with the error ‘ResourceInitializationError unable to pull secrets or registry auth’. What actions will allow the tasks to pull container images and retrieve secrets so they can start successfully? (Choose 2)
-
❏ A. Run tasks in public subnets with public IPs
-
❏ B. Create VPC endpoints: ECR API, ECR DKR, and S3
-
❏ C. Add a Secrets Manager VPC endpoint only
-
❏ D. Enable awsvpc networking on tasks
-
❏ E. Fix the task execution IAM role permissions for ECR and Secrets Manager
Question 25
Marlowe Analytics has a fleet of Amazon EC2 instances that engineers currently reach via SSH, and keys are rotated when staff depart. The CTO has directed the team to retire EC2 key pairs and move all access to AWS Systems Manager Session Manager, and sessions must traverse only private connectivity inside the VPC with no internet exposure. What actions should a DevOps engineer implement to satisfy these requirements? (Choose 2)
-
❏ A. Launch a new EC2 instance as a bastion host for administrative access
-
❏ B. Provision interface VPC endpoints for Systems Manager in the VPC to keep Session Manager traffic private
-
❏ C. Create an interface VPC endpoint for Amazon EC2
-
❏ D. Associate an IAM instance profile with each instance that includes permissions such as AmazonSSMManagedInstanceCore
-
❏ E. Allow inbound TCP 22 from the VPC CIDR in every instance security group
AWS DevOps Engineer Practice Test Answers
Question 1
Scrumtuous Publishing, a media analytics startup, runs a Kinesis Client Library application on a single Amazon EC2 instance to consume from an Amazon Kinesis Data Streams stream that uses 8 shards. Under peak traffic the consumer falls behind and some records expire out of a 36-hour retention window before processing completes. You must improve reliability with the fewest changes to code and architecture. What should you do? (Choose 2)
-
✓ A. Increase the stream’s data retention to 72 hours
-
✓ B. Run the KCL app in an Auto Scaling group of EC2 instances and scale on the MillisBehindLatest CloudWatch metric
The most direct, low-impact fixes are to scale out the consumer and give it more time to clear backlogs. Running multiple KCL workers across instances lets the library assign processors per shard across hosts and reduce lag. Scaling the Auto Scaling group on the KCL metric MillisBehindLatest adds capacity only when needed, which improves reliability while keeping changes minimal.
Increasing the stream retention window provides a safety buffer so records are still available while the consumer catches up. Extending retention reduces the chance that items age out before processing, making Increase the stream’s data retention to 72 hours an effective complement to scaling consumers.
Add more shards to the Kinesis stream to boost throughput primarily helps producer/write throughput and potential consumer parallelism, but it does not fix a single underpowered consumer and may increase coordination overhead; without adding more workers it is not the minimal or most effective change.
Migrate the consumer to AWS Lambda requires non-trivial refactoring and does not guarantee elimination of lag; it is not the smallest change.
Replace the stream with Amazon SQS is a wholesale architectural swap that forfeits shard ordering semantics and is not justified for this problem.
Cameron’s Exam Tip
For Kinesis consumer backlogs, scale KCL workers using the MillisBehindLatest CloudWatch metric and extend retention to protect against data loss; think add consumers, then add time before reshaping producers or rewriting services.
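As a minimal sketch of both fixes, assuming a stream named orders-stream and an Auto Scaling group named kcl-workers (both hypothetical names), the retention extension and a lag-driven scaling policy could look like this with boto3; the CloudWatch alarm on MillisBehindLatest that triggers the policy would be created separately:

```python
import boto3

kinesis = boto3.client("kinesis")
autoscaling = boto3.client("autoscaling")

# Give unread records more time to survive a backlog (hypothetical stream name).
kinesis.increase_stream_retention_period(
    StreamName="orders-stream",
    RetentionPeriodHours=72,
)

# Add KCL workers when the consumer falls behind; a CloudWatch alarm on
# MillisBehindLatest would invoke this simple scaling policy.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="kcl-workers",
    PolicyName="scale-out-on-lag",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)
```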
Question 2
Which control prevents IAM user creation across all organization accounts while allowing only approved principals to create users?
-
✓ B. Organizations SCP denying iam:CreateUser except approved principals via aws:PrincipalArn condition
Organizations SCP denying iam:CreateUser except approved principals via aws:PrincipalArn condition is correct because service control policies filter permissions at the organization boundary, even for root and administrators in member accounts. By placing an explicit Deny on iam:CreateUser and scoping an allow list using a condition on aws:PrincipalArn (or principal tags), only approved principals can perform the action anywhere in the organization. This is a true preventive control that cannot be bypassed by identity policies in member accounts.
The option Identity-based IAM policies in each account is incorrect because per-account policies are mutable by local admins, do not restrict the account root, and lack centralized, enforceable prevention across all accounts.
The option EventBridge rule with Lambda to remove unauthorized users is incorrect because it is reactive; the CreateUser call already succeeds and there is a window for misuse and race conditions.
The option Organizations SCP denying iam:CreateLoginProfile is incorrect because it targets password creation for existing users and does not block creating users at all.
Cameron’s Exam Tip
When a question emphasizes organization-wide and preventive enforcement, prefer SCPs with explicit Deny over reactive detection or cleanup. For CreateUser controls, target the caller identity using keys like aws:PrincipalArn or principal tags rather than the target user name. Avoid solutions that depend on event-driven remediation when the requirement is to prevent the action.
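A minimal sketch of such an SCP created with boto3; the allow-listed role ARN is a placeholder, and the policy would still need to be attached to the organization root or the relevant OUs:

```python
import json
import boto3

org = boto3.client("organizations")

# Deny iam:CreateUser for every principal except an approved provisioning role
# (the role ARN below is a placeholder).
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCreateUserExceptApproved",
        "Effect": "Deny",
        "Action": "iam:CreateUser",
        "Resource": "*",
        "Condition": {
            "ArnNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/ApprovedUserProvisioningRole"
            }
        }
    }]
}

org.create_policy(
    Name="deny-iam-createuser",
    Description="Block IAM user creation except for approved principals",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
```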
Question 3
After a security incident at McKenzie Pay, the payments API running in an EC2 Auto Scaling group must enforce that any production instance receiving an interactive SSH session is automatically shut down. All instances already forward logs to Amazon CloudWatch Logs using the CloudWatch agent. What is the most automated approach to accomplish this requirement?
-
✓ C. Create a CloudWatch Logs subscription filter that invokes an AWS Lambda function to tag the source instance with MARK_FOR_TERMINATION when an SSH login is detected, and use an Amazon EventBridge scheduled rule to run a second Lambda every 45 minutes to terminate all instances carrying that tag
The most hands-off design is Create a CloudWatch Logs subscription filter that invokes an AWS Lambda function to tag the source instance with MARK_FOR_TERMINATION when an SSH login is detected, and use an Amazon EventBridge scheduled rule to run a second Lambda every 45 minutes to terminate all instances carrying that tag. A subscription filter can match SSH login entries and invoke Lambda immediately, and a scheduled Lambda safely enforces termination of tagged instances, removing the need for persistent workers.
Create a CloudWatch Alarm based on a metric filter for SSH logins that publishes to an SNS topic; on-call engineers subscribe and terminate the reported instance manually is not fully automated because it depends on humans taking action after notifications.
Configure a CloudWatch Alarm for SSH login events to send notifications to an SQS queue; run a pool of EC2 worker instances that poll the queue and terminate the indicated instances is both overengineered and incorrect because CloudWatch Alarms publish to SNS, not SQS, and a worker fleet is unnecessary when Lambda can process events.
Wire a CloudWatch Logs subscription directly to AWS Step Functions to attach a FOR_DELETION tag to the instance, and schedule an EventBridge rule at 01:30 UTC each day to trigger a Lambda that terminates tagged instances is invalid because CloudWatch Logs subscriptions can target Lambda, Kinesis Data Streams, or Kinesis Data Firehose, but not Step Functions.
Cameron’s Exam Tip
For event-driven reactions to log content, think CloudWatch Logs subscription filter to Lambda; remember that CloudWatch Alarms publish to SNS (not SQS) and that Step Functions is not a direct subscription target.
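A minimal sketch of the first Lambda, assuming the CloudWatch agent writes each instance’s auth log to a log stream named after the instance ID (a common convention, not stated in the scenario):

```python
import base64
import gzip
import json

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # CloudWatch Logs subscription payloads arrive base64-encoded and gzipped.
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )

    # Assumption: the log stream name is the EC2 instance ID.
    instance_id = payload["logStream"]

    # Mark the instance; a scheduled Lambda terminates tagged instances later.
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "MARK_FOR_TERMINATION", "Value": "true"}],
    )
```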
Question 4
Which AWS Systems Manager actions enable centralized patching for EC2 instances and on-premises servers while limiting patching to 10 PM to 6 AM local time? (Choose 3)
-
✓ B. Register on-prem servers with Systems Manager using Hybrid Activations
-
✓ C. Attach an IAM instance profile to EC2 for Systems Manager access
-
✓ E. Configure a Systems Manager Maintenance Window for off-hours
The correct actions are Register on-prem servers with Systems Manager using Hybrid Activations, Attach an IAM instance profile to EC2 for Systems Manager access, and Configure a Systems Manager Maintenance Window for off-hours. Hybrid Activations brings on-prem systems under Systems Manager as managed instances, the EC2 instance profile enables the SSM Agent to communicate with the service without static credentials, and Maintenance Windows enforce the 10 PM to 6 AM local-time patching schedule when running Patch Manager tasks.
Create an EventBridge schedule to start patching nightly is not ideal because Systems Manager Maintenance Windows provide native, enforceable scheduling, time zone handling, concurrency, and safe cutoffs for patching.
Use Amazon Inspector to automatically apply patches is incorrect because Inspector detects vulnerabilities; patch application is handled by SSM Patch Manager, typically invoked within a Maintenance Window.
Generate long-lived IAM access keys on on-prem hosts for SSM is insecure and not the supported method; on-prem onboarding should use Hybrid Activations with temporary activation codes.
Cameron’s Exam Tip
Look for cues about managing both EC2 and on-prem systems—this points to Systems Manager Hybrid Activations. Scheduling patching within a specific local-time window maps to Systems Manager Maintenance Windows, not generic event scheduling. For EC2, remember the SSM Agent needs an IAM instance profile. Keywords to watch: centralized patching, on-prem and EC2, and off-hours/local-time window, which collectively imply SSM Patch Manager with Maintenance Windows and proper instance enrollment.
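A minimal sketch of the maintenance window itself, assuming a single time zone for the fleet (the zone string is a placeholder); targets and Patch Manager tasks would be registered against the window separately:

```python
import boto3

ssm = boto3.client("ssm")

# Nightly window opening at 10 PM local time. Duration is in hours, and the
# one-hour cutoff stops new patch tasks from starting so work ends by 6 AM.
ssm.create_maintenance_window(
    Name="nightly-patching",
    Schedule="cron(0 22 * * ? *)",
    ScheduleTimezone="America/New_York",  # placeholder; use the fleet's local zone
    Duration=8,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)
```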
Question 5
At Asteria Robotics, about 300 nightly batch computations that cannot be containerized must run on Amazon EC2. Capacity may be reclaimed with only a two minute notice, so the jobs must resume from the most recent checkpoint. The team requires a shared, persistent file system that many instances can mount for scratch data and checkpoints while keeping compute costs low. Which solution best meets these requirements?
-
✓ B. Amazon Elastic File System with EC2 Spot Instances
The best fit is Amazon Elastic File System with EC2 Spot Instances. EFS offers a fully managed, shared NFS file system with POSIX semantics that multiple EC2 instances can mount concurrently, which is ideal for shared checkpoints and intermediate data. Spot Instances significantly reduce compute cost for interruption-tolerant workloads, and checkpointing allows safe resumption after a reclaimed instance event.
Amazon FSx for Lustre with EC2 On-Demand Instances delivers a performant shared file system, but On-Demand compute does not minimize spend for a workload designed to tolerate interruptions.
Amazon EBS with EC2 Reserved Instances is unsuitable because EBS volumes are not a multi-writer shared file system across many instances, and Reserved Instances target steady-state usage rather than interruption-tolerant cost optimization.
Amazon S3 with EC2 On-Demand Instances is not appropriate since S3 is object storage and does not provide a POSIX file system interface needed by many applications, and On-Demand pricing undercuts the cost-saving goal.
Cameron’s Exam Tip
When a workload can checkpoint and resume, pair EC2 Spot for cost savings with a shared POSIX file system like EFS; avoid S3 for file system semantics and remember EBS is per-instance, not concurrently shared.
Question 6
An EC2 instance receives an AccessDenied error when attempting to retrieve an object from a restricted S3 bucket. Which configurations should you check to determine the cause? (Choose 2)
-
✓ B. Instance profile IAM role permissions/trust
-
✓ D. Bucket policy principals/conditions
Instance profile IAM role permissions/trust and Bucket policy principals/conditions are the primary configurations that determine whether an EC2 instance can read from a private S3 bucket. AccessDenied indicates an authorization failure, so you must verify that the instance profile role has s3:GetObject access to the object/key and that the bucket policy does not explicitly deny or conditionally block that principal.
S3 Block Public Access settings is a common red herring; it prevents public ACLs and broad public bucket policies but does not interfere with authorized role-based access.
S3 Object Lock controls retention and legal holds and does not prevent reads by authorized principals.
S3 VPC endpoint policy can deny requests only when traffic goes through a configured S3 VPC endpoint; absent that, it does not apply, and even when present it is secondary to IAM and bucket policies for basic AccessDenied troubleshooting.
Cameron’s Exam Tip
When you see 403 AccessDenied to S3, first inspect the identity-based policy on the caller (instance profile role), then the resource-based policy (bucket policy). Look for explicit Deny statements and condition keys like aws:PrincipalArn, aws:SourceIp, aws:SourceVpce, s3:prefix, or s3:ExistingObjectTag. Use the IAM Policy Simulator and S3 request context to validate effective permissions. Remember that Block Public Access affects only public access paths, and storage features like Object Lock or default encryption do not change read authorization for properly permitted principals.
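To check the identity side quickly, the IAM policy simulator can evaluate the role against the failing action; a minimal sketch with placeholder role and object ARNs:

```python
import boto3

iam = boto3.client("iam")

# Simulate the instance profile role's permission for the failing request
# (role and object ARNs are placeholders).
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/AppInstanceRole",
    ActionNames=["s3:GetObject"],
    ResourceArns=["arn:aws:s3:::restricted-bucket/reports/latest.csv"],
)

for evaluation in result["EvaluationResults"]:
    print(evaluation["EvalActionName"], evaluation["EvalDecision"])
```

A decision of allowed here shifts suspicion to the bucket policy or an endpoint policy rather than the role.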
Question 7
Northwind Mutual is moving its customer self-service portal to AWS. The application must run on AWS Elastic Beanstalk and connect to an Amazon RDS for MySQL database configured for Multi-AZ. Leadership requires deployments to maintain 100% capacity by launching a separate set of instances, provide an easy rollback path, and avoid problems from partially completed rolling updates. While a new release is being deployed, user experience must remain unaffected. What is the most cost-effective deployment approach to meet these goals?
-
✓ C. Elastic Beanstalk with an external Amazon RDS for MySQL Multi-AZ and immutable deployments
Elastic Beanstalk with an external Amazon RDS for MySQL Multi-AZ and immutable deployments is the best fit. Immutable updates create a separate Auto Scaling group of new instances at full capacity, run health checks before cutover, and roll back cleanly by terminating the new group if issues arise. This avoids inconsistent states from partially completed rolling deployments and limits extra spend to the temporary capacity during the deployment.
Elastic Beanstalk with an external Amazon RDS for MySQL Multi-AZ, blue/green to a parallel environment with a CNAME swap, and keep the previous environment running as a standby achieves zero downtime and easy rollback but is not the most cost-effective because it retains duplicate environments after the cutover. Deleting the old environment would reduce cost, but the option as stated keeps it running.
AWS CodeDeploy is not aligned with the requirement to host on Elastic Beanstalk and would require additional integration work, making it an unnecessary and incorrect choice here.
Elastic Beanstalk with an RDS instance created inside the environment set to Multi-AZ, using Rolling with additional batch can leave the environment in a partially updated state and complicates rollback. Embedding RDS in the EB environment also ties database lifecycle to the application environment, which is discouraged for production.
Cameron’s Exam Tip
When the requirement is full capacity during deploys, simple rollback, and avoiding partial updates, think immutable deployments on Elastic Beanstalk and keep the production database external to the EB environment for independent lifecycle management.
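A minimal sketch of switching an existing environment to immutable deployments through its option settings (the environment name is a placeholder):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Set the deployment policy for the environment (placeholder name) to Immutable.
eb.update_environment(
    EnvironmentName="portal-prod",
    OptionSettings=[{
        "Namespace": "aws:elasticbeanstalk:command",
        "OptionName": "DeploymentPolicy",
        "Value": "Immutable",
    }],
)
```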
Question 8
When deploying a Lambda function with CodeDeploy what approach allows you to run readiness checks and block traffic to the new version until it is ready to avoid transient errors?
-
✓ B. Use BeforeAllowTraffic hook in the Lambda AppSpec
The correct choice is Use BeforeAllowTraffic hook in the Lambda AppSpec. In AWS CodeDeploy for Lambda, the BeforeAllowTraffic lifecycle hook lets you run pre-traffic validation or warmup logic and gate the alias shift until dependencies (such as configuration, caches, or schema updates) are ready. This prevents transient errors during cutover by ensuring the new version only receives traffic once readiness checks pass.
The option Enable Lambda Provisioned Concurrency focuses solely on cold-start latency and does not coordinate deployment-time readiness or block the traffic shift.
The option CodeDeploy canary: 10% traffic for 15 minutes reduces risk but still sends some traffic before readiness is proven, so transient errors can still occur.
The option Use AfterAllowTraffic hook runs post-shift and is too late to prevent initial errors during the transition.
Cameron’s Exam Tip
Memorize CodeDeploy lifecycle events for Lambda.
BeforeAllowTraffic gates the shift and is ideal for validation and warmups.
AfterAllowTraffic is for post-shift checks.
Canary or linear traffic shifting limits blast radius but does not guarantee readiness.
Provisioned Concurrency addresses cold starts only. When the question mentions preventing errors at cutover by running checks before the new version receives traffic, look for the pre-traffic hook.
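A minimal sketch of what a pre-traffic hook Lambda might look like; the readiness check itself is a placeholder, and the hook reports its result back to CodeDeploy so the traffic shift proceeds only on success:

```python
import boto3

codedeploy = boto3.client("codedeploy")

def handler(event, context):
    deployment_id = event["DeploymentId"]
    hook_execution_id = event["LifecycleEventHookExecutionId"]

    # Placeholder readiness check: warm caches, verify config, probe dependencies.
    ready = True

    # CodeDeploy only shifts traffic to the new version if this reports Succeeded.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status="Succeeded" if ready else "Failed",
    )
```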
Question 9
A DevOps team at a digital ticketing startup uses AWS CloudFormation to deploy a Lambda function. After AWS CodePipeline uploads a new zip to Amazon S3 at s3://build-artifacts-123/lambda/payments-handler-v2.zip and triggers a stack update, the stack completes but the function code does not change. What should you do to ensure the new package is picked up quickly? (Choose 3)
-
✓ B. Upload each release with a new object key in the same S3 bucket
-
✓ C. Turn on S3 versioning for the artifacts bucket and reference the S3ObjectVersion in CloudFormation
-
✓ F. Push the package to a new S3 bucket name on every deployment and update the template accordingly
CloudFormation only updates a Lambda function’s code when it detects a change in the S3 location or version of the artifact. If the same object key is overwritten, CloudFormation may not register a change and will skip updating the function. The fastest fixes are to change one of the code location identifiers or provide an explicit version.
Upload each release with a new object key in the same S3 bucket works because a new key alters the S3Key, which CloudFormation treats as a code change. Turn on S3 versioning for the artifacts bucket and reference the S3ObjectVersion in CloudFormation is robust and precise, as CloudFormation will fetch and deploy the specified object version. Push the package to a new S3 bucket name on every deployment and update the template accordingly also forces a redeploy by changing S3Bucket, though it is less operationally convenient.
Enable the AWS SAM transform in the template is not a quick remediation, since it implies refactoring the template and deployment method without addressing the core detection requirement.
Add an extra wait or manual approval to delay the stack update for 5 seconds does nothing to change the S3Bucket, S3Key, or S3ObjectVersion, and because S3 provides strong consistency, waiting does not help.
Use AWS CodeDeploy for Lambda deployments is a different deployment approach that requires additional integration and does not resolve CloudFormation’s need for a changed key or version.
Cameron’s Exam Tip
When updating Lambda code with CloudFormation, remember that a deployment occurs only if S3Bucket, S3Key, or S3ObjectVersion changes; the most reliable pattern is to enable S3 versioning and pass S3ObjectVersion in the template.
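A minimal sketch of the versioned-artifact pattern; the stack name and the CodeObjectVersion parameter are hypothetical, and the template is assumed to pass that parameter into the function’s S3ObjectVersion property:

```python
import boto3

s3 = boto3.client("s3")
cfn = boto3.client("cloudformation")

# With bucket versioning enabled, each upload returns a unique VersionId.
with open("payments-handler.zip", "rb") as artifact:
    upload = s3.put_object(
        Bucket="build-artifacts-123",
        Key="lambda/payments-handler.zip",
        Body=artifact,
    )

# Passing the new version forces CloudFormation to see a code change and
# redeploy the function (parameter name is hypothetical).
cfn.update_stack(
    StackName="payments-stack",
    UsePreviousTemplate=True,
    Parameters=[{
        "ParameterKey": "CodeObjectVersion",
        "ParameterValue": upload["VersionId"],
    }],
    Capabilities=["CAPABILITY_IAM"],
)
```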
Question 10
Which AWS solution provides a fully serverless publicly accessible static website with global content acceleration and protection against common web exploits?
-
✓ C. S3 static hosting with CloudFront and an AWS WAF web ACL
The correct choice is S3 static hosting with CloudFront and an AWS WAF web ACL. Amazon S3 provides fully serverless static website hosting, Amazon CloudFront delivers global edge caching for fast performance, and AWS WAF adds rule-based protection against common web attacks such as SQLi and XSS.
The option AWS Lambda + API Gateway + GuardDuty is unnecessary for a static site and lacks edge caching; GuardDuty is threat detection, not an inline web firewall.
The option Amazon S3 + Amazon CloudFront + AWS Shield Standard covers CDN and DDoS basics but omits AWS WAF, so it does not address common application-layer exploits.
The option Amazon EC2 + ElastiCache Redis + AWS Shield Advanced is not serverless and misuses Redis for static content; Shield Advanced focuses on DDoS rather than OWASP-style threats.
Cameron’s Exam Tip
Map the keywords serverless static to S3, global acceleration to CloudFront, and protect against common web exploits to AWS WAF. Remember that GuardDuty is for detection, not blocking, and Shield is for DDoS, not general web filtering.
Question 11
A national retailer, Aurora Outfitters, runs its order management platform on an Amazon EC2 Auto Scaling group behind an Application Load Balancer, with purchases stored in a DynamoDB table named OrderLedger. For analytics, six AWS Lambda functions subscribe to the table’s DynamoDB Streams to compute product counts, track stock levels, generate summaries, and send new records to Amazon Data Firehose. During heavy traffic, the functions encounter stream throttling and some invocations fail, reducing reporting throughput. What is the most scalable and cost-effective way to address this with minimal operational effort?
-
✓ C. Use the DynamoDB Streams Kinesis Adapter with the Kinesis Client Library to scale out consumption across shards and offload complex aggregations to Amazon Managed Service for Apache Flink Studio
The most scalable and low-ops fix is to adopt the Kinesis Client Library patterns for DynamoDB Streams using the DynamoDB Streams Kinesis Adapter with the KCL. This allows coordinated, high-throughput consumption across shards, efficient checkpointing, and the ability to scale out consumers at massive scale. Pairing this with Amazon Managed Service for Apache Flink Studio provides a managed path for real-time transformations and analytics.
Add a new GSI to the OrderLedger table, increase provisioned RCU and WCU, disable DynamoDB Streams, and refactor the Lambda reporting functions to query the table directly is incorrect because indexes and capacity do not address stream consumer throttling and moving to direct queries increases cost and operational complexity without fixing the root cause.
Migrate processing to Amazon ECS with Fargate and use an AWS Glue streaming job to read from the table, while moving interactive analytics to Amazon Managed Service for Apache Flink Studio is incorrect because this adds unnecessary infrastructure and Glue is not a native DynamoDB Streams consumer, leading to more overhead and integration work.
Keep Lambda event source mappings and raise reserved concurrency, set ParallelizationFactor to 10, increase batch size, and switch the table to on-demand capacity is incorrect because while tuning may help somewhat, it cannot overcome shard-level limits and contention when multiple consumers read the same stream, so throttling can persist.
Cameron’s Exam Tip
When multiple consumers of DynamoDB Streams hit throttling, think KCL patterns via the DynamoDB Streams Kinesis Adapter rather than throwing capacity, indexes, or more Lambda concurrency at the problem.
Question 12
How can you automatically start and stop EC2 instances and a single RDS DB instance only while CodePipeline test stages run, given that the test stages last about four hours and occur roughly three times per week, without changing the existing architecture?
-
✓ C. EventBridge CodePipeline state-change events invoking Systems Manager Automation to start/stop EC2 and RDS
The best choice is EventBridge CodePipeline state-change events invoking Systems Manager Automation to start/stop EC2 and RDS. EventBridge can listen for CodePipeline execution and stage state changes and trigger a Systems Manager Automation runbook. The runbook can centrally and idempotently start EC2 instances and start/stop the RDS DB instance when tests begin and end. This solution is event-driven, inexpensive, and requires no architectural changes to the workload.
The option Use CodeBuild pre/post steps with AWS CLI to start and stop resources depends on build steps executing successfully and may fail to stop resources on errors or cancellations. It also introduces custom scripts and credentials management, increasing operational risk compared to a managed runbook.
The option AWS Instance Scheduler with a weekly cron schedule is time-based rather than event-driven. It can power on environments when no tests are running or miss ad hoc runs, defeating the goal of aligning runtime strictly to pipeline test windows.
The option Migrate to Aurora Serverless, add an ALB, and use Lambda to toggle power changes the architecture without necessity. Aurora Serverless and an ALB are unrelated to the core requirement of event-driven start/stop based on pipeline stages, adding cost and complexity.
Cameron’s Exam Tip
Prefer event-driven designs that directly integrate with pipeline lifecycle via EventBridge when requirements specify “only while tests run.” Use Systems Manager Automation for multi-resource operational tasks like coordinated EC2 and RDS start/stop. Be cautious of time-based schedules for pipelines and avoid architecture changes when the prompt says not to redesign.
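A minimal sketch of the start-side EventBridge rule; the pipeline name, stage name, and rule name are placeholders, and a mirror rule matching SUCCEEDED or FAILED states would trigger the stop runbook:

```python
import json
import boto3

events = boto3.client("events")

# Fire when the Test stage of the pipeline (placeholder name) starts running.
events.put_rule(
    Name="start-test-env-on-test-stage",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Stage Execution State Change"],
        "detail": {
            "pipeline": ["orders-pipeline"],
            "stage": ["Test"],
            "state": ["STARTED"],
        },
    }),
    State="ENABLED",
)
```

The rule’s target would then be registered with put_targets, pointing at the Systems Manager Automation runbook that starts the EC2 instances and the RDS DB instance.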
Question 13
A national nonprofit health directory operated by the Maple Health Alliance runs a public website listing clinics, hospitals, medical specialists, and related services. The organization also provides several public REST APIs that partners use to search this data, which are implemented with AWS Lambda. All records are stored in an Amazon DynamoDB table named CareDirectory, and search indexes are kept in an Amazon OpenSearch Service domain called care-search-v2. Leadership has asked a DevOps engineer to implement a deployment approach that results in zero downtime if a release fails and that stops any subsequent releases after a failure. During updates, the platform must retain 100% capacity with no temporary reduction to avoid performance degradation. What is the most efficient way to meet these requirements?
-
✓ C. Create DynamoDB, Lambda, and Amazon OpenSearch Service with AWS CloudFormation; use AWS CodeDeploy blue/green for releases; run the web tier on AWS Elastic Beanstalk with the Immutable policy
The most efficient pattern is Create DynamoDB, Lambda, and Amazon OpenSearch Service with AWS CloudFormation; use AWS CodeDeploy blue/green for releases; run the web tier on AWS Elastic Beanstalk with the Immutable policy. Elastic Beanstalk immutable deployments create a parallel Auto Scaling group with the new version, validate health, and only then swap, which provides zero downtime and an easy, automatic rollback if health checks fail. CodeDeploy blue/green for Lambda shifts traffic via aliases safely, and a failed deployment status prevents further releases in a typical pipeline.
Create DynamoDB, Lambda, and Amazon OpenSearch Service with AWS CloudFormation; use AWS CodeDeploy blue/green for releases; run the web tier on AWS Elastic Beanstalk with the All at once policy is wrong because all-at-once replaces every instance at the same time, causing an interruption and not maintaining availability.
Create DynamoDB, Lambda, and Amazon OpenSearch Service with AWS CloudFormation; use AWS CodeDeploy in-place for releases; run the web tier on AWS Elastic Beanstalk with the Rolling with additional batch policy still lacks full isolation and atomic rollback; while capacity can be preserved, rolling updates can leave partially updated fleets if failures occur, and in-place is not the appropriate mode for Lambda traffic shifting.
Create DynamoDB, Lambda, and Amazon OpenSearch Service with AWS CloudFormation; use AWS CodeDeploy blue/green for releases; host the website on Amazon S3 with cross-Region replication does not fit a dynamic web tier and does not provide a robust zero-downtime deployment mechanism for the application servers.
Cameron’s Exam Tip
For strict zero downtime with full capacity and safe rollback on EC2/Elastic Beanstalk, think Immutable. For Lambda, think CodeDeploy blue/green with aliases. Keywords like “no capacity reduction” and “failed deploy must block further releases” point away from rolling and all-at-once strategies.
Question 14
How can you automatically update an S3 static status page when EC2 Auto Scaling instances launch or terminate while maintaining a durable searchable log of the scaling events?
-
✓ C. Use an EventBridge rule for Auto Scaling instance launch and terminate to invoke Lambda; Lambda updates the S3 page and writes events to CloudWatch Logs
The best solution is Use an EventBridge rule for Auto Scaling instance launch and terminate to invoke Lambda; Lambda updates the S3 page and writes events to CloudWatch Logs. Amazon EventBridge can capture EC2 Auto Scaling instance lifecycle events in near real time and trigger an AWS Lambda function. Lambda can update the S3-hosted page or a backing JSON file to reflect the current fleet immediately, and it can write a structured log entry to Amazon CloudWatch Logs, which is durable and searchable.
The option Create an EventBridge rule that delivers events to CloudWatch Logs and periodically export to S3 does not update the S3 page in real time and requires manual or scheduled exports, missing the immediate status requirement.
The option Configure an EventBridge rule with two targets, S3 and CloudWatch Logs is invalid because S3 is not a supported direct EventBridge target, so you cannot push updates to S3 that way.
The option Run a scheduled EventBridge rule every 15 minutes to have Lambda scan the Auto Scaling group and update S3, logging to S3 is periodic rather than event driven, introduces lag, and S3 logs are not inherently searchable without additional services such as Athena.
Cameron’s Exam Tip
For event driven automation, pair EventBridge with Lambda. Remember that S3 is not an EventBridge target, so updates to S3 require an intermediary such as Lambda. Prefer CloudWatch Logs for durable, searchable operational logs. For real time page updates on S3 static hosting, have Lambda write an object (for example JSON) that the page reads on load or via client side polling.
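A minimal sketch of such a handler; the bucket name and the status.json layout are placeholders, and anything the function prints lands in its CloudWatch Logs log group, which provides the durable, searchable record:

```python
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    detail = event["detail"]

    # Structured log line; Lambda stdout is captured in CloudWatch Logs.
    print(json.dumps({
        "event": event["detail-type"],
        "instance": detail.get("EC2InstanceId"),
        "asg": detail.get("AutoScalingGroupName"),
    }))

    # Rewrite the small JSON document that the static page loads client side.
    s3.put_object(
        Bucket="status-page-bucket",
        Key="status.json",
        Body=json.dumps({
            "lastEvent": event["detail-type"],
            "instance": detail.get("EC2InstanceId"),
        }),
        ContentType="application/json",
    )
```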
Question 15
A platform engineer at Orion Retail launched an Amazon EC2 instance in a private subnet inside an Amazon VPC. The instance uses an instance profile to fetch an object from a tightly restricted Amazon S3 bucket, but the request returns a 403 Access Denied response. What are two plausible reasons for this failure? (Choose 2)
-
✓ B. The S3 bucket policy does not grant the required permission
-
✓ E. There is a misconfiguration in the instance profile IAM role
A 403 Access Denied from S3 almost always indicates an authorization problem. The most common causes in this scenario are a restrictive resource policy or an identity policy issue on the role the instance uses. Therefore, The S3 bucket policy does not grant the required permission and There is a misconfiguration in the instance profile IAM role are valid causes.
Default encryption is enabled on the S3 bucket is not a cause because S3 auto-decrypts for authorized principals; encryption alone does not deny access unless KMS permissions are missing, which is not stated here.
The EC2 instance security group blocks outbound HTTPS traffic would prevent network connectivity and typically lead to timeouts or connection errors rather than an S3 403, which is an authorization error.
S3 versioning is enabled on the bucket does not impact permissions and therefore would not trigger a 403 Access Denied.
Cameron’s Exam Tip
When you see an S3 403 Access Denied, think policies first: identity policy on the IAM role, bucket policy (including conditions like aws:SourceVpce or explicit denies), and the role’s trust policy linking the instance profile to the role.
Question 16
In an AWS CodeBuild buildspec.yml file how should you configure pushing a Docker image to a private Amazon ECR repository so that the image is pushed only after a successful build?
-
✓ C. Use post_build commands to push the image to ECR
The correct approach is to configure Use post_build commands to push the image to ECR. The post_build phase is the designated place for actions that should follow the build, such as docker push; because post_build can still run after a failed build phase, guard the push with the CODEBUILD_BUILD_SUCCEEDING environment variable so the image is published only when the build succeeded. Avoid using the finally sequence because it runs regardless of success or failure.
The option Use a finally block in post_build to push the image is incorrect because finally always runs, which can push even on failed builds.
AWS CodePipeline is not correct here because while it can orchestrate stages, the requirement is about ensuring the push happens conditionally within the build job; this is best handled directly in buildspec.yml.
Use pre_build commands to push the image is incorrect because it executes before the build completes, potentially pushing artifacts from a failed build.
Cameron’s Exam Tip
Learn the buildspec phase semantics. Use pre_build for steps like ECR login, build for compilation and image build, and post_build for success-only actions like pushing to ECR. Do not use finally when you need actions to run only on success.
Question 17
Nova Mobility, a transportation analytics company, plans to launch a real-time road-conditions web portal on AWS. Leadership requires an application stack that remains available during failures and minimizes downtime. They also want a native capability that continuously scans workloads for vulnerabilities, unintended exposure, and drift from AWS security best practices. Which architecture best satisfies these requirements?
-
✓ B. Use Amazon Inspector to automatically evaluate applications for exposure, vulnerabilities, and deviations from AWS best practices, run an EC2 Auto Scaling group spread across three Availability Zones behind an Application Load Balancer, use Amazon Aurora for the database tier, and in Amazon Route 53 create an alias record at the zone apex that targets the load balancer
The correct design is Use Amazon Inspector to automatically evaluate applications for exposure, vulnerabilities, and deviations from AWS best practices, run an EC2 Auto Scaling group spread across three Availability Zones behind an Application Load Balancer, use Amazon Aurora for the database tier, and in Amazon Route 53 create an alias record at the zone apex that targets the load balancer. Amazon Inspector continuously scans for software vulnerabilities and unintended network exposure, while the ALB and Multi-AZ Auto Scaling group provide resiliency. Aurora adds a highly available managed database layer, and a Route 53 alias record is required to map the root domain to an ALB target.
Use Amazon Macie for automated security assessment to help improve the security and compliance of your applications, store data in Amazon DynamoDB, place an Auto Scaling group of EC2 instances across four Availability Zones behind an Application Load Balancer, and in Amazon Route 53 create a non-alias A record at the root domain pointing to the load balancer is incorrect because Macie is for sensitive data discovery and classification, not application vulnerability scanning, and a non-alias A record cannot target an ALB at the zone apex.
Use Amazon GuardDuty for automated security assessment to help improve the security and compliance of your applications, add Amazon ElastiCache for application caching, launch an Auto Scaling group of EC2 instances in two Availability Zones and attach it to an Application Load Balancer, deploy Amazon RDS for MySQL with Multi-AZ and read replicas, and in Amazon Route 53 create a CNAME for the root domain to the load balancer is incorrect because GuardDuty is a threat detection service rather than an automated application assessment tool, and a CNAME cannot be used at the root domain.
Use AWS Shield for automated security assessment to help improve the security and compliance of your applications, run an Auto Scaling group of EC2 instances across three Availability Zones with an Application Load Balancer in front, use Amazon RDS for MySQL with Multi-AZ, and in Amazon Route 53 create a non-alias A record for the apex zone to point to the load balancer is incorrect because Shield focuses on DDoS protection, not vulnerability assessment, and a non-alias A record cannot reference an ALB at the apex.
Cameron’s Exam Tip
For automated application vulnerability scanning, choose Amazon Inspector, not Macie, GuardDuty, or Shield. When pointing a root domain to an ALB, use a Route 53 alias record since ALBs do not have static IPs and CNAMEs are not allowed at the apex.
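As a reference, here is a hedged boto3 sketch of creating an apex alias record that targets an ALB; the hosted zone IDs, domain, and ALB DNS name are placeholders:

```python
import boto3

route53 = boto3.client("route53")

# UPSERT an alias A record at the zone apex pointing to the ALB.
# Alias records reference the ALB's own canonical hosted zone ID, not the domain's.
route53.change_resource_record_sets(
    HostedZoneId="Z_EXAMPLE_DOMAIN_ZONE",  # placeholder: your Route 53 hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",  # zone apex (root domain)
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z_EXAMPLE_ALB_ZONE",  # placeholder: the ALB's hosted zone ID
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com.",  # placeholder
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```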
Question 18
Which AWS tool can discover a legacy Java application, containerize it for deployment to Amazon ECS or Amazon EKS, and bootstrap an automated CI/CD pipeline?
-
✓ B. AWS App2Container with CodeBuild/CodeDeploy
AWS App2Container with CodeBuild/CodeDeploy is correct because it specifically targets existing Java applications, performs automated discovery, builds container images and task or pod definitions for ECS or EKS, and can generate a working CI/CD pipeline using CodeBuild and CodeDeploy (and optionally CodePipeline). This directly covers discovery, containerization, and automated delivery with minimal refactoring.
The option AWS Copilot CLI for ECS with CI/CD is incorrect because Copilot focuses on deploying new containerized services and pipelines but does not analyze or containerize applications running on existing servers.
The option AWS Proton service templates is incorrect because Proton standardizes and manages infrastructure and service templates across environments but does not transform legacy apps into containers or generate app-specific containers from existing VMs.
The option AWS Elastic Beanstalk Docker with CodePipeline is incorrect because while Beanstalk can host Docker and integrate with pipelines, you still need to manually containerize the app; it does not discover and containerize an existing Java workload.
Cameron’s Exam Tip
When a question emphasizes discovering an existing on-prem or legacy application, automatically containerizing it, and scaffolding CI/CD, look for App2Container. Tools like Copilot and Proton are powerful for new container workloads or standardized templating but do not perform legacy workload discovery and conversion. If the scenario mentions ECS/EKS targets and minimal refactoring, App2Container is the best fit.
Question 19
A media intelligence startup named NovaSight plans to deploy a self-managed Apache Kafka cluster with 9 brokers spread across 3 Availability Zones. Kafka is stateful and each broker must keep its data on its own EBS volume. The team needs automatic recovery when a broker fails and must guarantee that any replacement broker reattaches the correct EBS volume in the same Availability Zone. What is the most reliable CloudFormation-based approach to meet these requirements?
-
✓ C. Use a nested CloudFormation template that defines an Auto Scaling group with min and max 1 and one EBS volume tagged to match that group, include a bootstrap script to attach the tagged volume at boot, and reference the nested stack nine times
The most resilient pattern for stateful EC2 workloads like Kafka is one Auto Scaling group per node with a fixed size of one, paired with a persistent EBS volume that is selected and attached by tags at boot. Use a nested CloudFormation template that defines an Auto Scaling group with min and max 1 and one EBS volume tagged to match that group, include a bootstrap script to attach the tagged volume at boot, and reference the nested stack nine times is correct because it provides health-based replacement and guarantees the correct volume is reattached in the same Availability Zone.
Create 9 EC2 instances with EBS volumes and explicit attachments in a single CloudFormation stack, and rely on stack updates to recreate instances if they are terminated is unreliable because CloudFormation will not automatically recreate instances terminated outside of a stack operation, so the stateful mapping can be lost.
Create an Auto Scaling group with desired capacity 9 across 3 AZs and provision 9 EBS volumes across those AZs, then use user data to claim any free volume in the same AZ at launch risks AZ imbalance and volume mismatch; EBS volumes are AZ-scoped and a single ASG can launch replacements in an AZ without an available matching volume.
Create 9 EC2 instances with EBS volumes and define the attachments, and on termination trigger drift detection and attempt remediation to restore the attachment is insufficient because drift detection is read-only and does not enforce automatic recreation or correct reattachment behavior.
Cameron’s Exam Tip
For stateful EC2 services that must survive instance replacement, think one-instance ASG + EBS volume tagged per node + user data to reattach, and remember that EBS volumes are AZ-scoped so designs must avoid cross-AZ reassignment.
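A hedged Python sketch of what such a bootstrap script might do at boot; the tag key and value, device name, and broker identifier are illustrative assumptions:

```python
import urllib.request
import boto3

IMDS = "http://169.254.169.254/latest"

def imds(path: str, token: str) -> str:
    # Read a value from the instance metadata service using an IMDSv2 token
    req = urllib.request.Request(IMDS + path, headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req).read().decode()

# IMDSv2: fetch a short-lived session token, then read identity and placement
token_req = urllib.request.Request(
    IMDS + "/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_req).read().decode()

instance_id = imds("/meta-data/instance-id", token)
az = imds("/meta-data/placement/availability-zone", token)

ec2 = boto3.client("ec2", region_name=az[:-1])

# Find this broker's dedicated volume by tag in the same AZ (tag key/value are placeholders)
volumes = ec2.describe_volumes(Filters=[
    {"Name": "tag:kafka-broker-id", "Values": ["broker-3"]},
    {"Name": "availability-zone", "Values": [az]},
    {"Name": "status", "Values": ["available"]},
])["Volumes"]

if volumes:
    # Attach the persistent data volume before the Kafka process starts
    ec2.attach_volume(VolumeId=volumes[0]["VolumeId"], InstanceId=instance_id, Device="/dev/xvdf")
```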
Question 0
Which AWS service provides continuous threat detection across an account by analyzing CloudTrail events, VPC flow logs, and DNS logs?
-
✓ B. Amazon GuardDuty
Amazon GuardDuty is the correct choice because it continuously analyzes AWS CloudTrail events, VPC flow logs, and DNS logs to detect compromised resources and suspicious behavior, producing actionable security findings across the account.
Amazon Inspector is incorrect because it focuses on vulnerability and exposure assessments for workloads and does not analyze CloudTrail, VPC flow logs, or DNS logs for real-time threat detection.
Amazon Macie is incorrect as it specializes in discovering and protecting sensitive data in S3, not in detecting threats from account activity.
Amazon Detective is incorrect because it is designed for investigation and root-cause analysis of findings, not for continuous detection.
Cameron’s Exam Tip
When you see keywords like continuous account-level threat detection and data sources such as CloudTrail + VPC Flow Logs + DNS, think GuardDuty. If the prompt emphasizes vulnerability scanning, choose Inspector; for S3 sensitive data discovery, choose Macie; for investigation/graph analysis, choose Detective; for findings aggregation, think Security Hub.
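For reference, a minimal boto3 sketch of how GuardDuty is enabled and its findings are read; the triage logic is illustrative only:

```python
import boto3

guardduty = boto3.client("guardduty")

# Enable GuardDuty (one detector per account per Region); it then continuously
# analyzes CloudTrail events, VPC Flow Logs, and DNS logs with nothing to deploy.
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Later, list and fetch findings for triage or forwarding to Security Hub.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)["Findings"]
    for finding in findings:
        print(finding["Type"], finding["Severity"])
```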
Question 1
A platform engineer at Borealis Health is moving a Dockerized service from a company server room to AWS. The team wants the runtime to require as little infrastructure management as possible. They also need encrypted connectivity between the on-premises network and AWS so the application can reach internal systems securely during operation. What should the engineer implement to meet these goals? (Choose 2)
-
✓ D. Configure an AWS Site-to-Site VPN between the data center and the VPC
-
✓ E. Run the workload on Amazon ECS with the Fargate launch type in a dedicated VPC
The least-managed way to run containers is Run the workload on Amazon ECS with the Fargate launch type in a dedicated VPC, because Fargate abstracts the EC2 hosts, patching, and scaling so you only define tasks and services. For encrypted, private connectivity to on-premises, Configure an AWS Site-to-Site VPN between the data center and the VPC provides IPsec-based encryption over the internet with managed endpoints.
Migrate the containers to Amazon ECS on the EC2 launch type in a dedicated VPC is heavier to operate since you must manage the EC2 instances, which conflicts with the goal of minimal overhead.
Front the service with a Network Load Balancer using IP targets and an HTTPS listener is unsuitable because NLB does not offer an HTTPS (Layer 7) listener and a load balancer does not address on-premises link encryption.
Provision AWS Direct Connect for the on-premises link and use it without additional encryption does not meet the encryption requirement because DX is not encrypted by default and would still need IPsec or MACsec.
Cameron’s Exam Tip
When you see minimal management for containers, think Fargate. For secure on-premises-to-AWS connectivity, the default answer is often Site-to-Site VPN unless the question explicitly requires private, dedicated bandwidth or MACsec with Direct Connect.
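To show how little infrastructure the Fargate launch type leaves to manage, here is a hedged boto3 sketch of registering a task definition and service; the cluster, image, subnets, security group, and role ARN are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Fargate requires the awsvpc network mode and task-level CPU/memory sizing
task_def = ecs.register_task_definition(
    family="borealis-service",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/borealis:latest",  # placeholder
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    }],
)

# The service runs in private subnets; on-premises traffic arrives over the Site-to-Site VPN
ecs.create_service(
    cluster="borealis-cluster",  # placeholder
    serviceName="borealis-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0aaa", "subnet-0bbb"],  # placeholders
        "securityGroups": ["sg-0ccc"],              # placeholder
        "assignPublicIp": "DISABLED",
    }},
)
```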
Question 2
How should a Lambda function retrieve Amazon RDS credentials at runtime while ensuring the database password is automatically rotated every 30 days?
-
✓ D. AWS Secrets Manager with rotation enabled
The best solution is AWS Secrets Manager with rotation enabled. Secrets Manager securely stores database credentials, lets the Lambda execution role fetch them at runtime via GetSecretValue, and supports managed rotation for Amazon RDS using an AWS Lambda rotation function. This directly satisfies the requirement for secure retrieval without embedding credentials and for automatic password rotation every 30 days.
The option AWS Systems Manager Parameter Store SecureString is not ideal because while it can securely store and retrieve values, it lacks built-in, managed rotation for RDS credentials. Meeting the rotation requirement would require building and maintaining custom rotation code and schedules.
The option Amazon RDS IAM authentication tokens uses ephemeral tokens instead of passwords, which can be secure, but it does not comply with a policy that explicitly mandates rotating a database password. If the policy were phrased to allow token-based auth without passwords, this could be acceptable, but under the stated requirement it is not.
The option AWS Key Management Service is incorrect because KMS manages encryption keys, not application secrets. It cannot store or rotate database passwords; it is typically used to encrypt secrets stored elsewhere.
Cameron’s Exam Tip
When you see requirements for secret storage plus automatic rotation of database credentials, the go-to answer is Secrets Manager. Parameter Store is great for configuration and parameters but lacks managed rotation. KMS is for key management, not secret storage. Be careful with RDS IAM authentication; it is secure and eliminates passwords, but if the question requires password rotation, it will not satisfy that compliance requirement.
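A minimal sketch of the Lambda-side retrieval and the 30-day rotation configuration; the secret name and rotation function ARN are placeholders:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # The Lambda execution role needs secretsmanager:GetSecretValue on this secret
    secret = json.loads(
        secrets.get_secret_value(SecretId="prod/rds/app-db")["SecretString"]  # placeholder name
    )
    username, password = secret["username"], secret["password"]
    # ... open the database connection with these credentials ...

if __name__ == "__main__":
    # One-time setup (run during deployment, not inside the handler):
    # enable managed rotation every 30 days using a rotation Lambda function
    secrets.rotate_secret(
        SecretId="prod/rds/app-db",
        RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-rotation",  # placeholder
        RotationRules={"AutomaticallyAfterDays": 30},
    )
```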
Question 3
A national health insurer runs its member portal on AWS and stores protected documents in a server-side encrypted Amazon S3 bucket. The DevOps team configured SAML federation from an on-premises Active Directory group to grant controlled access to the bucket. Compliance now requires automatically detecting any change to the bucket access configuration or the federated role and having the ability to roll back unintended admin edits. What is the quickest way to detect such configuration changes?
-
✓ C. Configure an AWS Config rule with a configuration change trigger scoped to the S3 bucket and federated role, and invoke an AWS Systems Manager Automation runbook via Lambda to restore approved settings
Configure an AWS Config rule with a configuration change trigger scoped to the S3 bucket and federated role, and invoke an AWS Systems Manager Automation runbook via Lambda to restore approved settings is the fastest because change-triggered AWS Config evaluations fire in near real time when the resource state changes, and they can directly invoke remediation workflows such as SSM Automation through Lambda.
Create an AWS Config rule with a periodic trigger that evaluates the S3 bucket every 45 minutes and use a Lambda remediation to roll back changes is slower by design because it only evaluates on a schedule, so drift could persist for tens of minutes before detection.
Integrate Amazon EventBridge with an AWS Lambda function to run a 20-minute schedule that scans the IAM policy attached to the federated role and reverts unexpected changes relies on polling, which delays detection and requires custom code to inspect policies, increasing complexity and reducing responsiveness.
Use Amazon EventBridge with CloudTrail events to invoke a Lambda function whenever API calls occur and attempt to detect and revert changes can capture API activity but lacks resource state evaluation and typically needs many brittle patterns to cover all relevant APIs, making it less reliable than AWS Config for rapid, state-based detection and automated remediation.
Cameron’s Exam Tip
When the question asks for the fastest detection of configuration drift on AWS resources, prefer AWS Config rules with configuration change triggers over periodic scans or custom polling via EventBridge schedules.
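A hedged sketch of registering such a change-triggered custom rule with boto3; the rule name, bucket name, and Lambda ARN are placeholders, and a second rule (or a broader scope) would cover the federated IAM role:

```python
import boto3

config = boto3.client("config")

# A custom rule evaluated by Lambda whenever the scoped resource's configuration changes
config.put_config_rule(ConfigRule={
    "ConfigRuleName": "member-portal-bucket-guard",  # placeholder
    "Scope": {
        "ComplianceResourceTypes": ["AWS::S3::Bucket"],
        "ComplianceResourceId": "member-portal-documents",  # placeholder bucket name
    },
    "Source": {
        "Owner": "CUSTOM_LAMBDA",
        "SourceIdentifier": "arn:aws:lambda:us-east-1:123456789012:function:bucket-drift-check",  # placeholder
        "SourceDetails": [{
            "EventSource": "aws.config",
            "MessageType": "ConfigurationItemChangeNotification",
        }],
    },
})
# Remediation (for example, an SSM Automation runbook) can then be attached
# to the rule with put_remediation_configurations.
```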
Question 4
ECS Fargate tasks running in private subnets without a NAT gateway fail with the error ‘ResourceInitializationError unable to pull secrets or registry auth’ What actions will allow the tasks to pull container images and retrieve secrets so they can start successfully? (Choose 2)
-
✓ B. Create VPC endpoints: ECR API, ECR DKR, and S3
-
✓ E. Fix the task execution IAM role permissions for ECR and Secrets Manager
Create VPC endpoints: ECR API, ECR DKR, and S3 and Fix the task execution IAM role permissions for ECR and Secrets Manager together address both network path and authorization needed for Fargate to pull images and retrieve secrets in private subnets without NAT. ECR requires interface endpoints for com.amazonaws.<region>.ecr.api (token retrieval) and com.amazonaws.<region>.ecr.dkr (registry), plus an S3 gateway endpoint because ECR image layers are stored in S3. The task execution role must include permissions such as ecr:GetAuthorizationToken, ecr:BatchGetImage, ecr:GetDownloadUrlForLayer, and required logs/secrets/KMS actions when used.
Run tasks in public subnets with public IPs could work by providing internet egress, but it contradicts the private/no-NAT posture and is unnecessary when VPC endpoints are configured.
Add a Secrets Manager VPC endpoint only is incomplete because registry auth and layer downloads still fail without ECR and S3 endpoints.
Enable awsvpc networking on tasks changes nothing for Fargate because awsvpc is already mandatory and does not solve registry connectivity.
Cameron’s Exam Tip
When you see Fargate start failures with ResourceInitializationError mentioning secrets or registry auth in private subnets, think: ECR API + ECR DKR interface endpoints and an S3 gateway endpoint, plus correct execution role permissions. If secrets are referenced, add Secrets Manager (and often KMS) permissions/endpoints. NAT Gateway would also restore connectivity but is not required and is less aligned with a private, endpoint-based design.
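A hedged boto3 sketch of creating the three endpoints the tasks need; the VPC, subnet, security group, and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0abc1234"                    # placeholder
SUBNETS = ["subnet-0aaa", "subnet-0bbb"]   # placeholders: the private task subnets
SG_IDS = ["sg-0ccc"]                       # placeholder: allows 443 from the task subnets
ROUTE_TABLES = ["rtb-0ddd"]                # placeholder: route tables of the private subnets

# Interface endpoints for ECR auth token retrieval (api) and the Docker registry (dkr)
for service in ("com.amazonaws.us-east-1.ecr.api", "com.amazonaws.us-east-1.ecr.dkr"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=VPC_ID,
        ServiceName=service,
        SubnetIds=SUBNETS,
        SecurityGroupIds=SG_IDS,
        PrivateDnsEnabled=True,
    )

# Gateway endpoint for S3, where ECR stores image layers
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=ROUTE_TABLES,
)
```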
Question 5
Marlowe Analytics has a fleet of Amazon EC2 instances that engineers currently reach via SSH, and keys are rotated when staff depart. The CTO has directed the team to retire EC2 key pairs and move all access to AWS Systems Manager Session Manager, and sessions must traverse only private connectivity inside the VPC with no internet exposure. What actions should a DevOps engineer implement to satisfy these requirements? (Choose 2)
-
✓ B. Provision interface VPC endpoints for Systems Manager in the VPC to keep Session Manager traffic private
-
✓ D. Associate an IAM instance profile with each instance that includes permissions such as AmazonSSMManagedInstanceCore
To remove SSH key usage and constrain access to a private path, instances must have SSM permissions and Session Manager traffic must stay inside the VPC. The option Associate an IAM instance profile with each instance that includes permissions such as AmazonSSMManagedInstanceCore provides the permissions the SSM Agent needs to register with Systems Manager and handle sessions.
Ensuring private connectivity requires Provision interface VPC endpoints for Systems Manager in the VPC to keep Session Manager traffic private, which uses AWS PrivateLink so session control and data channels do not traverse the public internet.
Launch a new EC2 instance as a bastion host for administrative access still depends on SSH and key pairs and does not meet the mandate to use Session Manager or eliminate public pathways.
Create an interface VPC endpoint for Amazon EC2 targets the EC2 API, not the Systems Manager control and message channels (ssm, ssmmessages, ec2messages) used by Session Manager, so it does not satisfy the private Session Manager requirement.
Allow inbound TCP 22 from the VPC CIDR in every instance security group is unnecessary and contrary to the goal because Session Manager does not need port 22 at all.
Cameron’s Exam Tip
For private Session Manager access, think in two steps: attach an instance profile with AmazonSSMManagedInstanceCore and create interface endpoints for the SSM services; you do not need to open SSH ports or run a bastion.
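A hedged boto3 sketch of both steps; the endpoint parameters, instance ID, and instance profile name are placeholders, while ssm, ssmmessages, and ec2messages are the services Session Manager relies on:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoints so Session Manager control and data channels stay on AWS PrivateLink
for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc1234",                        # placeholder
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        SubnetIds=["subnet-0aaa", "subnet-0bbb"],    # placeholders
        SecurityGroupIds=["sg-0ccc"],                # placeholder: allows 443 from instances
        PrivateDnsEnabled=True,
    )

# Attach an instance profile whose role includes the AmazonSSMManagedInstanceCore policy
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "SessionManagerInstanceProfile"},  # placeholder
    InstanceId="i-0123456789abcdef0",                              # placeholder
)
```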
Jira, Scrum & AI Certification |
---|
Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.