AWS DevOps Engineer Certification Practice Exams

These questions all come from certificationexams.pro and my Certified DevOps Engineer Udemy course.

Free AWS DevOps Engineer Exam Topics Tests

Over the past few months, I have been helping cloud engineers, developers, and automation professionals prepare for roles that thrive in the AWS ecosystem. The goal is simple: to help you design, automate, and manage production environments using the same cloud services trusted by leading enterprises around the world.

A key step in that journey is earning the AWS Certified DevOps Engineer Professional credential. This certification demonstrates your ability to implement continuous delivery, automate security controls, and monitor, manage, and operate systems at scale across AWS environments.

Whether you are a software developer, product owner, database administrator, or solutions architect, the AWS DevOps Engineer Professional certification gives you a solid foundation in automation and operational excellence. You will learn to build CI/CD pipelines, manage infrastructure as code, integrate with AWS services such as CodeBuild, CodeDeploy, CodePipeline, CloudFormation, CloudWatch, and Systems Manager, and maintain high availability across distributed systems.

That is exactly what the AWS DevOps Engineer Professional Certification Exam measures. It validates your expertise in automation, infrastructure management, monitoring, and incident response while ensuring you can build reliable, secure, and cost-optimized systems for production workloads.

AWS Practice Questions and Exam Simulators

Through my Udemy courses on AWS certifications and through the free question banks at certificationexams.pro, I have seen the areas where most learners struggle. That experience helped shape a complete set of AWS DevOps Engineer Practice Questions that closely match the format and challenge of the real certification exam.

You will also find AWS DevOps Engineer Sample Questions and full AWS DevOps Engineer Practice Tests to evaluate your readiness. Each AWS DevOps Engineer Exam Question and Answer set includes clear explanations that show you how to reason through automation, monitoring, and deployment scenarios.

These materials are not about memorizing answers. They teach you to think like a DevOps professional working in live AWS environments, whether you are tuning CloudWatch alarms, setting up blue/green deployments, or defining CI/CD strategies with CodePipeline.

Real AWS Exam Readiness

If you are searching for Real AWS DevOps Exam Questions, this collection provides authentic examples of what to expect on test day. Each question is written to capture the tone and depth of the real exam. These are not AWS DevOps Engineer Exam Dumps or copied content. They are original learning resources created to help you build genuine skill.

The AWS DevOps Engineer Exam Simulator replicates the timing, structure, and complexity of the official exam, giving you the confidence to perform under real testing conditions.

You can also explore focused, braindump-style study sets that organize questions by topic, such as automation, monitoring, or continuous delivery pipelines.

Every AWS DevOps Engineer Practice Test is designed to challenge you slightly beyond the real exam, preparing you to excel when it matters most.

Learn and Succeed as an AWS DevOps Engineer

The purpose of these AWS DevOps Engineer Exam Questions is not only to help you pass but to help you grow into a cloud professional who can automate, monitor, and optimize complex systems across AWS. You will gain the confidence to design and maintain scalable, secure, and efficient architectures that meet real business needs.

Start today with the AWS DevOps Engineer Practice Questions, test yourself using the AWS DevOps Engineer Exam Simulator, and see how ready you are for the next step in your cloud career.

Good luck, and remember that every successful cloud operations career begins with mastering the tools and services that drive automation and continuous delivery on AWS.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

AWS DevOps Engineer Professional Practice Exam

Question 1

NovaEdge Solutions runs a CI/CD pipeline in AWS that builds with CodeBuild and deploys to a fleet of Amazon EC2 instances through CodeDeploy. The team needs a mandatory human sign-off before any release reaches production, even when all unit and integration tests pass, and the workflow is managed by CodePipeline. What is the simplest and most cost-effective way to add this enforced approval gate?

  • ❏ A. Run the unit and integration tests with AWS Step Functions, then add a test action after the last deploy, add a manual approval with SNS notifications, and finally add a deploy action to promote to production

  • ❏ B. Use CodeBuild to execute the tests, insert a Manual approval action in CodePipeline immediately before the production CodeDeploy stage with SNS notifications to approvers, then proceed to the production deploy after approval

  • ❏ C. Use CodeBuild for tests and create a custom CodePipeline action with a bespoke job worker to perform the approval, notify through SNS, and promote on success

  • ❏ D. Perform the tests in a self-managed Jenkins or GitLab on EC2, add a test action, add a manual approval in the pipeline with SNS notifications, and then deploy to production

Question 2

Which disaster recovery pattern best meets an RTO of three hours and an RPO of about fifteen minutes at the lowest cost for an application that uses EC2 Auto Scaling and an Application Load Balancer with RDS for PostgreSQL behind Route 53?

  • ❏ A. Backup and restore with AWS Backup cross-Region copies and rebuild on failover

  • ❏ B. Pilot light in another Region with cross-Region RDS read replica, minimal app stack, Route 53 failover, and promote replica on disaster

  • ❏ C. Warm standby in another Region with scaled-down but running stack and scale out on failover

  • ❏ D. Single-Region Multi-AZ only for EC2 and RDS

Question 3

The platform engineering team at NovaCare Analytics manages more than 320 AWS accounts through AWS Organizations. Security mandates that every EC2 instance launches from a centrally approved, hardened base AMI. When a new AMI version is released, the team must ensure no new instances are started from the previous AMI, and they also need a centralized and auditable view of AMI compliance across all accounts. What approach should be implemented to meet these goals across the organization? (Choose 2)

  • ❏ A. Use AWS Systems Manager Automation distributed with AWS CloudFormation StackSets to build the AMI inside every account

  • ❏ B. Deploy an AWS Config custom rule with AWS CloudFormation StackSets to check instance AMI IDs against an approved list and aggregate results in an AWS Config aggregator in the management account

  • ❏ C. Create the AMI in a central account and copy it to each account and Region whenever a new version is published

  • ❏ D. Use AWS Systems Manager Automation to produce the AMI in a central account and share it with organizational accounts, then revoke sharing on the previous AMI and share the new one when updated

  • ❏ E. Publish the approved AMI as a product in AWS Service Catalog across the organization

Question 4

Which configuration provides near-real-time alerts for AWS Organizations when membership or account changes occur? (Choose 2)

  • ❏ A. AWS Security Hub and Amazon Detective

  • ❏ B. AWS CloudTrail organization trail with EventBridge to SNS

  • ❏ C. AWS Control Tower notifications

  • ❏ D. AWS Config organization aggregator with rules to SNS or EventBridge

  • ❏ E. Third-party SIEM with Amazon GuardDuty

Question 5

Northwind Outfitters operates a web API that fronts Amazon EC2 instances behind an Application Load Balancer by using an Amazon API Gateway REST API. The engineering team wants new releases to be rolled out with minimal user impact and with the ability to revert quickly if defects are found. What approach will achieve this with the least changes to the existing application?

  • ❏ A. Use AWS CodeDeploy blue/green with the Auto Scaling group behind the ALB and shift production traffic to the new revision

  • ❏ B. Create a parallel environment behind the ALB with the new build and configure API Gateway canary release to send a small portion of requests to it

  • ❏ C. Stand up a duplicate environment and update the Route 53 alias to the new stack

  • ❏ D. Create a new target group for the ALB and have API Gateway weight requests directly to that target group

Question 6

Which AWS service enables preventive governance by allowing users to provision only preapproved architectures through a self-service interface while enforcing tags and parameters?

  • ❏ A. AWS Organizations SCPs with tag policies

  • ❏ B. AWS Service Catalog

  • ❏ C. AWS Config

  • ❏ D. AWS CloudFormation Guard

Question 7

A platform team at a fintech startup plans to launch workloads in six Amazon VPCs across two AWS accounts. The services must have any-to-any connectivity with transitive routing among the VPCs. Leadership wants centralized administration of network traffic policies for consistent security. What architecture should the team implement to meet these needs with the least operational overhead?

  • ❏ A. Configure VPC peering between every VPC to build a full mesh and centralize WebACLs with AWS WAF

  • ❏ B. Use AWS Transit Gateway for transitive connectivity among VPCs and manage network access policies centrally with AWS Firewall Manager

  • ❏ C. Set up AWS PrivateLink endpoints between each VPC and use AWS Security Hub for centralized security policies

  • ❏ D. Establish AWS Site-to-Site VPN tunnels between each pair of VPCs and manage policies with AWS Firewall Manager

Question 8

Which solution provides the most cost-effective way to obtain per-object S3 access counts for immediate analysis?

  • ❏ A. Amazon S3 Storage Lens

  • ❏ B. Enable S3 data events in AWS CloudTrail and query with Amazon Athena

  • ❏ C. Turn on S3 server access logging and analyze logs with Amazon Athena

  • ❏ D. Use Redshift Spectrum with a Redshift cluster to query S3 access logs

Question 9

A fintech startup runs a critical service on Amazon EC2 instances in an Auto Scaling group. A lightweight health probe on each instance runs every 5 seconds to verify that the application responds. The DevOps engineer must use the probe results for monitoring and to raise an alarm when failures occur. Metrics must be captured at 1 minute intervals while keeping costs low. What should the engineer implement?

  • ❏ A. Use a default CloudWatch metric at standard resolution and add a dimension so the script can publish once every 60 seconds

  • ❏ B. Amazon CloudWatch Synthetics

  • ❏ C. Create a custom CloudWatch metric and publish statistic sets that roll up the 5 second results, sending one update every 60 seconds

  • ❏ D. Use a custom CloudWatch metric at high resolution and push data every 5 seconds

Question 10

In AWS CloudFormation which UpdatePolicy performs rolling updates on an Auto Scaling group so instances adopt a new launch configuration while keeping at least six of ten instances in service?

  • ❏ A. UpdateReplacePolicy

  • ❏ B. AutoScalingReplacingUpdate

  • ❏ C. UpdatePolicy: AutoScalingRollingUpdate

  • ❏ D. AWS CodeDeploy

Question 11

Orion Byte Labs runs several services with a MERN front end behind NGINX and uses AWS CodeDeploy to automate rollouts. The team has a QA deployment group and will add PREPROD and PRODUCTION groups later. They want the NGINX log level to be set dynamically at deploy time so each group can have different verbosity without creating separate application revisions or maintaining different scripts per environment. Which approach provides the lowest ongoing management effort and avoids multiple script variants?

  • ❏ A. Invoke a script during ApplicationStart that uses the DEPLOYMENT_GROUP_ID environment variable to detect the group and update the NGINX log level

  • ❏ B. Use a single script that reads the DEPLOYMENT_GROUP_NAME environment variable in CodeDeploy and call it in the BeforeInstall hook to set NGINX logging per group

  • ❏ C. Tag each EC2 instance with its deployment group and have a ValidateService hook script call aws ec2 describe-tags to choose the log level

  • ❏ D. Define a custom environment variable in CodeDeploy for each environment such as QA, PREPROD, and PROD, and have a ValidateService hook script read it to set the log level

Question 12

What is the quickest way to isolate an impaired Availability Zone for an Application Load Balancer while diverting traffic to healthy zones with minimal changes?

  • ❏ A. Configure Auto Scaling health checks to replace instances in that AZ

  • ❏ B. Remove the impaired AZ subnet from the ALB

  • ❏ C. Amazon Route 53 ARC zonal shift for the ALB

  • ❏ D. Route 53 weighted routing across two ALBs

Question 13

At Northwind Publishing, a CI/CD pipeline runs ephemeral integration tests using a CloudFormation template that will later be promoted to production. The template provisions an Amazon S3 bucket and an AWS Lambda function that creates thumbnail images for any photos uploaded to the bucket. The test automation uploads 15 sample images and validates that the thumbnails appear in the same bucket, after which the pipeline attempts to delete the stack and the deletion consistently fails. What is the most likely cause and how should it be fixed?

  • ❏ A. A stack policy is blocking stack deletion; remove the stack policy and try again

  • ❏ B. Add a Delete: Force setting to the S3 bucket in the template so CloudFormation auto-empties the bucket

  • ❏ C. The S3 bucket is not empty and CloudFormation cannot delete it; use a Lambda-backed custom resource to purge the bucket on stack deletion

  • ❏ D. The Lambda function is still using the bucket; add a WaitCondition to the function to allow the delete to proceed

Question 14

Which actions provide TLS for a custom domain, protection against edge DDoS attacks and web exploits, and reduced global latency for an application behind an ALB? (Choose 2)

  • ❏ A. Use AWS Global Accelerator in front of the ALB

  • ❏ B. Attach an AWS WAF web ACL to the CloudFront distribution

  • ❏ C. Deploy CloudFront in front of the ALB and use ACM for the custom domain TLS

  • ❏ D. Associate an AWS WAF web ACL with the ALB

  • ❏ E. Create a CloudFront distribution using the Auto Scaling group as the origin

Question 15

Lighthouse Media is building a customer portal with AWS CloudFormation where the data platform group manages a stack for the database layer and the application team maintains a separate stack for the web and API components. The application team must reference resources created by the data platform group while both teams preserve independent lifecycles and perform resource-level change set reviews. The application team will deploy through their CI/CD pipeline. What is the best approach to enable this cross-stack usage without coupling the teams?

  • ❏ A. Use AWS CloudFormation StackSets to share parameters across the database and application stacks

  • ❏ B. Export Outputs from the database stack and import them into the application stack with Fn::ImportValue

  • ❏ C. Convert the database template into a nested stack within the application stack

  • ❏ D. Store database resource identifiers in AWS Systems Manager Parameter Store and reference them via dynamic references in the application template

Question 16

What is the best way to minimize EC2 Auto Scaling scale-out readiness time for a stateless web tier that currently takes 27 minutes to install the application and 18 minutes to warm a local cache?

  • ❏ A. Use CodeDeploy during lifecycle hooks to install the app on launch; let the cache warm later

  • ❏ B. Golden AMI with app preinstalled; runtime config via user data

  • ❏ C. AMI with app and baked local cache; set dynamic config via user data

  • ❏ D. AMI with app; user data; invoke Lambda at boot to prefill local cache

Question 17

AuroraView, a media-streaming startup, wants to standardize provisioning with AWS CloudFormation. The platform team must restrict launches to eu-west-1 and ap-southeast-2 and require CostCenter, Environment, and DataClass tags on all resources, while engineering teams continue to deploy multiple versions of the same service. What approach will best enforce these controls and still allow developers to deploy different releases?

  • ❏ A. Launch approved CloudFormation templates through StackSets

  • ❏ B. Create AWS Trusted Advisor checks to detect unapproved StackSets

  • ❏ C. Publish vetted CloudFormation products in AWS Service Catalog

  • ❏ D. Use CloudFormation drift detection to find and remediate noncompliant stacks

Question 18

Which Elastic Beanstalk deployment policy updates instances in place using batches so the application keeps the same DNS, avoids creating new resources, and maintains availability?

  • ❏ A. Set up blue/green and swap CNAMEs

  • ❏ B. Rolling deployment with 30% batches

  • ❏ C. In-place all-at-once deployment

  • ❏ D. Rolling deployment with an additional batch

Question 19

HarborPay, a fintech startup, runs roughly 25 EC2-based microservices and each service uses its own Amazon Machine Image. Releases currently require engineers to manually bake a new AMI and roll it out. Leadership wants this image creation to be automated and triggered from the CI/CD pipeline, and the resulting AMI ID must be saved in a central location that other pipelines can read programmatically. What is the most cost-effective approach with the least operational overhead?

  • ❏ A. Run Packer from a Jenkins job on a provisioned EC2 instance and record AMI IDs in Amazon DynamoDB

  • ❏ B. Use AWS Systems Manager Automation runbooks triggered by AWS CodePipeline via Amazon EventBridge to bake AMIs and publish the AMI IDs to Systems Manager Parameter Store

  • ❏ C. Provision EC2 build workers in CodePipeline to download OVF images, customize with guestfish, import as AMIs, and save AMI IDs in Systems Manager Parameter Store

  • ❏ D. Use AWS CodePipeline to snapshot EBS-backed instances, patch via AWS Lambda, create AMIs from snapshots, and write AMI IDs to Amazon S3

Question 20

What is the most cost-effective way to block SQL injection attacks against an Amazon API Gateway API while retaining a searchable audit history of WAF rule changes?

  • ❏ A. AWS Shield Advanced with CloudTrail logs

  • ❏ B. WAF web ACL on API Gateway stage with managed SQLi rules; track via AWS Config

  • ❏ C. CloudFront before API Gateway, WAF on CloudFront, block attacker IP in VPC NACL, use AWS Config

  • ❏ D. API Gateway throttling and usage plans with CloudWatch Logs for change history

AWS DevOps Professional Practice Exam Answers

Question 1

NovaEdge Solutions runs a CI/CD pipeline in AWS that builds with CodeBuild and deploys to a fleet of Amazon EC2 instances through CodeDeploy. The team needs a mandatory human sign-off before any release reaches production, even when all unit and integration tests pass, and the workflow is managed by CodePipeline. What is the simplest and most cost-effective way to add this enforced approval gate?

  • ✓ B. Use CodeBuild to execute the tests, insert a Manual approval action in CodePipeline immediately before the production CodeDeploy stage with SNS notifications to approvers, then proceed to the production deploy after approval

The most direct and economical solution is to use CodePipeline’s native Manual approval action as the gate before production. Pairing this with SNS notifications alerts approvers, and the pipeline resumes only after explicit approval, ensuring releases cannot bypass the check even if all tests pass.

Use CodeBuild to execute the tests, insert a Manual approval action in CodePipeline immediately before the production CodeDeploy stage with SNS notifications to approvers, then proceed to the production deploy after approval is correct because the Manual approval action is built-in, incurs no additional service to manage, and integrates cleanly with SNS for notifications.
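
As a concrete illustration, here is a minimal sketch of what that approval stage could look like when spliced into an existing pipeline definition with boto3. The pipeline name, stage names, and SNS topic ARN are hypothetical placeholders.

```python
# Minimal sketch: insert a Manual approval stage before the production deploy stage.
# Pipeline name, stage names, and the SNS topic ARN are hypothetical.
import boto3

codepipeline = boto3.client("codepipeline")

approval_stage = {
    "name": "ProductionApproval",  # hypothetical stage name
    "actions": [
        {
            "name": "ManualSignOff",
            "actionTypeId": {
                "category": "Approval",
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "configuration": {
                # SNS topic that notifies the approvers (hypothetical ARN)
                "NotificationArn": "arn:aws:sns:us-east-1:111122223333:release-approvals",
                "CustomData": "Review the test results before approving this release.",
            },
            "runOrder": 1,
        }
    ],
}

# Fetch the current definition, splice the approval stage in before the
# production deploy stage, then push the updated structure back.
pipeline = codepipeline.get_pipeline(name="novaedge-release")["pipeline"]
deploy_index = next(i for i, s in enumerate(pipeline["stages"]) if s["name"] == "DeployProd")
pipeline["stages"].insert(deploy_index, approval_stage)
codepipeline.update_pipeline(pipeline=pipeline)
```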

Run the unit and integration tests with AWS Step Functions, then add a test action after the last deploy, add a manual approval with SNS notifications, and finally add a deploy action to promote to production is unnecessary and more complex than needed, since CodeBuild already handles tests and the gating requirement is met by the Manual approval action.

Use CodeBuild for tests and create a custom CodePipeline action with a bespoke job worker to perform the approval, notify through SNS, and promote on success adds development and maintenance overhead for custom workers when a managed Manual approval action already exists.

Perform the tests in a self-managed Jenkins or GitLab on EC2, add a test action, add a manual approval in the pipeline with SNS notifications, and then deploy to production increases cost and operational toil by running and maintaining third-party tooling instead of using managed AWS services.

Cameron’s Exam Tip

When a question asks for the simplest and most cost-effective human gate in a CodePipeline, choose the built-in Manual approval action with SNS notifications and avoid custom actions or self-managed CI/CD stacks.

Question 2

Which disaster recovery pattern best meets an RTO of three hours and an RPO of about fifteen minutes at the lowest cost for an application that uses EC2 Auto Scaling and an Application Load Balancer with RDS for PostgreSQL behind Route 53?

  • ✓ B. Pilot light in another Region with cross-Region RDS read replica, minimal app stack, Route 53 failover, and promote replica on disaster

Pilot light in another Region with cross-Region RDS read replica, minimal app stack, Route 53 failover, and promote replica on disaster is the best fit because it provides minutes-level RPO via continuous asynchronous replication to a cross-Region RDS read replica while keeping most compute resources powered off to minimize ongoing cost. During an event, the replica can be promoted and the minimal app stack scaled up to meet an hours-level RTO, and Route 53 failover can redirect traffic quickly.
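
For the database side of the failover, a minimal sketch of the promotion step with boto3 might look like the following. The DR Region, replica identifier, and backup retention value are hypothetical.

```python
# Minimal sketch: promote the cross-Region read replica during a DR event.
# Region name and replica identifier are hypothetical.
import boto3

rds_dr = boto3.client("rds", region_name="us-west-2")  # assumed DR Region

# Promote the replica to a standalone, writable primary.
rds_dr.promote_read_replica(
    DBInstanceIdentifier="app-db-replica",
    BackupRetentionPeriod=7,  # re-enable automated backups on the promoted instance
)

# Wait until the promoted instance is available before scaling out the app tier
# and flipping the Route 53 failover record.
waiter = rds_dr.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="app-db-replica")
```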

The option Warm standby in another Region with scaled-down but running stack and scale out on failover can achieve the RTO but keeps an always-on environment that is costlier than necessary given the cost constraint and the hours-level RTO target.

The option Backup and restore with AWS Backup cross-Region copies and rebuild on failover commonly cannot meet a 15-minute RPO and may exceed a 3-hour RTO because snapshot copy and full environment re-provisioning take longer.

The option Single-Region Multi-AZ only for EC2 and RDS does not protect against Regional failures and therefore cannot meet cross-Region disaster recovery requirements.

Cameron’s Exam Tip

Map RTO and RPO to DR patterns. For minutes-level RPO with hours-level RTO and low cost, think pilot light with cross-Region database replication and DNS failover. For near-zero RPO/RTO, think multi-site active/active at higher cost. For lowest cost but long RTO/RPO, think backup and restore. Confirm that solutions address Region-level failures, not just AZ.

Question 3

The platform engineering team at NovaCare Analytics manages more than 320 AWS accounts through AWS Organizations. Security mandates that every EC2 instance launches from a centrally approved, hardened base AMI. When a new AMI version is released, the team must ensure no new instances are started from the previous AMI, and they also need a centralized and auditable view of AMI compliance across all accounts. What approach should be implemented to meet these goals across the organization? (Choose 2)

  • ✓ B. Deploy an AWS Config custom rule with AWS CloudFormation StackSets to check instance AMI IDs against an approved list and aggregate results in an AWS Config aggregator in the management account

  • ✓ D. Use AWS Systems Manager Automation to produce the AMI in a central account and share it with organizational accounts, then revoke sharing on the previous AMI and share the new one when updated

The combination of Deploy an AWS Config custom rule with AWS CloudFormation StackSets to check instance AMI IDs against an approved list and aggregate results in an AWS Config aggregator in the management account and Use AWS Systems Manager Automation to produce the AMI in a central account and share it with organizational accounts, then revoke sharing on the previous AMI and share the new one when updated meets both enforcement and auditing needs.

Centralizing the AMI and using share and unshare enforces that new instances cannot be launched from an older AMI, because once the previous AMI is unshared it is no longer available for new launches across accounts. The AWS Config rule rolled out with StackSets and aggregated in the management account provides a single-pane view to audit AMI usage and detect noncompliant instances organization-wide.
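
The share-and-revoke step can be scripted in the central image-building account, for example with boto3 as sketched below. The AMI IDs and organization ARN are hypothetical, and the AWS Config rule rollout is handled separately through StackSets.

```python
# Minimal sketch: share the new hardened AMI with the organization and revoke
# launch permission on the previous version. IDs and ARN are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
ORG_ARN = "arn:aws:organizations::111122223333:organization/o-example12345"

NEW_AMI = "ami-0new1234567890abc"
OLD_AMI = "ami-0old1234567890abc"

# Share the newly hardened AMI with every account in the organization.
ec2.modify_image_attribute(
    ImageId=NEW_AMI,
    LaunchPermission={"Add": [{"OrganizationArn": ORG_ARN}]},
)

# Revoke launch permission on the previous version so no new instances can use it.
ec2.modify_image_attribute(
    ImageId=OLD_AMI,
    LaunchPermission={"Remove": [{"OrganizationArn": ORG_ARN}]},
)
```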

Use AWS Systems Manager Automation distributed with AWS CloudFormation StackSets to build the AMI inside every account is operationally heavy and leads to version sprawl, making it hard to retire old AMIs consistently and block new launches from them.

Create the AMI in a central account and copy it to each account and Region whenever a new version is published proliferates stale copies that remain launchable, so it does not guarantee that old AMIs are blocked.

Publish the approved AMI as a product in AWS Service Catalog across the organization may guide provisioning but does not prevent users from bypassing the catalog and launching older AMIs directly, nor does it provide organization-wide compliance reporting.

Cameron’s Exam Tip

For org-wide AMI governance, think in two steps: enforce usage through central sharing and revocation, and audit with AWS Config custom rules aggregated at the organization level.

Question 4

Which configuration provides near-real-time alerts for AWS Organizations when membership or account changes occur? (Choose 2)

  • ✓ B. AWS CloudTrail organization trail with EventBridge to SNS

  • ✓ D. AWS Config organization aggregator with rules to SNS or EventBridge

The best ways to get near-real-time alerts for AWS Organizations membership or account changes are AWS CloudTrail organization trail with EventBridge to SNS and AWS Config organization aggregator with rules to SNS or EventBridge. An organization-level CloudTrail records Organizations API activity such as invites, account creation, and handshakes. EventBridge rules can match these events and push alerts through SNS with minimal delay, which directly addresses the requirement.

AWS Config with an organization aggregator provides centralized visibility and evaluation across accounts. While it is not the primary event source for Organizations API calls, it can drive notifications when evaluated configurations indicate account-related changes across the org, complementing CloudTrail for governance monitoring.
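
A minimal sketch of the EventBridge-to-SNS alerting piece follows, assuming an existing SNS topic and an organization trail that already records Organizations API activity; the topic ARN, rule name, and event names shown are illustrative.

```python
# Minimal sketch: route Organizations membership events to SNS via EventBridge.
# Topic ARN, rule name, and the list of event names are hypothetical examples.
import json
import boto3

# Organizations control-plane activity is recorded in us-east-1.
events = boto3.client("events", region_name="us-east-1")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:org-change-alerts"

pattern = {
    "source": ["aws.organizations"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["organizations.amazonaws.com"],
        "eventName": [
            "CreateAccount",
            "InviteAccountToOrganization",
            "RemoveAccountFromOrganization",
            "LeaveOrganization",
        ],
    },
}

events.put_rule(Name="org-membership-changes", EventPattern=json.dumps(pattern), State="ENABLED")
events.put_targets(
    Rule="org-membership-changes",
    Targets=[{"Id": "sns-alert", "Arn": TOPIC_ARN}],
)
```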

The option AWS Security Hub and Amazon Detective is incorrect because these services focus on security findings and investigation rather than authoritative detection of Organizations membership changes.

AWS Control Tower notifications is incorrect because Control Tower focuses on landing zone governance and does not serve as a comprehensive eventing source for Organizations membership or invite events.

Third-party SIEM with Amazon GuardDuty is incorrect because GuardDuty provides threat detections, not explicit Organizations change events, and SIEM tooling does not inherently add that specificity.

Cameron’s Exam Tip

For near-real-time alerts on administrative changes, look for patterns that combine CloudTrail as the authoritative activity log with EventBridge for event-driven routing to alerting (SNS, Lambda). Remember that Security Hub, Detective, and GuardDuty focus on security findings, not control plane governance for Organizations. When you see “organization-wide” visibility and evaluation, consider AWS Config aggregators, but prioritize CloudTrail + EventBridge for precise API-change alerts.

Question 5

Northwind Outfitters operates a web API that fronts Amazon EC2 instances behind an Application Load Balancer by using an Amazon API Gateway REST API. The engineering team wants new releases to be rolled out with minimal user impact and with the ability to revert quickly if defects are found. What approach will achieve this with the least changes to the existing application?

  • ✓ B. Create a parallel environment behind the ALB with the new build and configure API Gateway canary release to send a small portion of requests to it

The best approach is to use Create a parallel environment behind the ALB with the new build and configure API Gateway canary release to send a small portion of requests to it. API Gateway REST API canary deployments let you gradually shift a small percentage of traffic to the new backend, observe behavior, and instantly roll back by changing or disabling the canary weight. This requires no application code changes and only minimal configuration updates at the API stage.
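
As a sketch of the mechanism, the canary can be enabled and unwound with a couple of boto3 calls. The API ID, stage name, and stage variable are hypothetical, and the parallel environment would typically be selected through a stage variable override.

```python
# Minimal sketch: enable a 10% canary on the prod stage, then roll it back.
# REST API ID, stage name, and the backendUrl stage variable are hypothetical.
import boto3

apigw = boto3.client("apigateway")
REST_API_ID = "a1b2c3d4e5"
STAGE = "prod"

# Deploy the new configuration as a canary that receives 10% of live traffic and
# points at the parallel environment through a stage variable override.
apigw.create_deployment(
    restApiId=REST_API_ID,
    stageName=STAGE,
    description="Canary for the new backend build",
    canarySettings={
        "percentTraffic": 10.0,
        "useStageCache": False,
        "stageVariableOverrides": {"backendUrl": "http://new-env-alb.internal"},  # hypothetical variable
    },
)

# Fast rollback: drop the canary traffic share back to zero so all requests hit
# the stable deployment again.
apigw.update_stage(
    restApiId=REST_API_ID,
    stageName=STAGE,
    patchOperations=[{"op": "replace", "path": "/canarySettings/percentTraffic", "value": "0.0"}],
)
```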

Use AWS CodeDeploy blue/green with the Auto Scaling group behind the ALB and shift production traffic to the new revision can work, but it adds deployment tooling, agents, and AppSpec files, which is more invasive than necessary for a deployment controlled at the API layer.

Stand up a duplicate environment and update the Route 53 alias to the new stack forces a full cutover that affects every user, lacks progressive exposure, and makes rollback a second DNS flip rather than a quick weight change.

Create a new target group for the ALB and have API Gateway weight requests directly to that target group is not feasible because API Gateway does not weight across ALB target groups; the supported mechanism for weighted exposure in API Gateway is a canary release.

Cameron’s Exam Tip

When an API Gateway REST API fronts an ALB and EC2 fleet, think canary release at the API stage for minimal-risk rollouts and fast rollback; avoid full DNS cutovers and avoid overengineering when a simple API-level weight shift will do.

Question 6

Which AWS service enables preventive governance by allowing users to provision only preapproved architectures through a self-service interface while enforcing tags and parameters?

  • ✓ B. AWS Service Catalog

AWS Service Catalog is correct because it lets administrators publish vetted CloudFormation products and apply constraints and TagOptions so users can provision only approved architectures with required tags. This delivers preventative governance at the moment of provisioning while preserving self-service.

AWS Organizations SCPs with tag policies is insufficient because tag policies primarily validate and report, and while SCPs can conditionally deny creates for missing tags in some services, they do not package or present pre-approved architectures as a curated catalog.

AWS Config provides detective controls after resources are launched, which does not prevent noncompliant resources from being created.

AWS CloudFormation Guard validates templates via policy-as-code but relies on integration in pipelines or hooks and does not provide a catalog nor enforce controls for ad hoc provisioning.

Cameron’s Exam Tip

When you see pre-approved architectures, curated products, enforced parameters or tags, and preventative control at provisioning, think Service Catalog. Distinguish preventative (Service Catalog, some SCP patterns) from detective (AWS Config). Catalog plus constraints strongly points to Service Catalog.

Question 7

A platform team at a fintech startup plans to launch workloads in six Amazon VPCs across two AWS accounts. The services must have any-to-any connectivity with transitive routing among the VPCs. Leadership wants centralized administration of network traffic policies for consistent security. What architecture should the team implement to meet these needs with the least operational overhead?

  • ✓ B. Use AWS Transit Gateway for transitive connectivity among VPCs and manage network access policies centrally with AWS Firewall Manager

The most operationally efficient solution for fully meshed, transitive connectivity across many VPCs is Use AWS Transit Gateway for transitive connectivity among VPCs and manage network access policies centrally with AWS Firewall Manager. Transit Gateway acts as a scalable hub for VPC attachments and natively provides transitive routing, which avoids the complexity of managing numerous point-to-point connections. Firewall Manager lets you centrally apply and govern network security policies across accounts and VPCs, including policies for AWS Network Firewall.

Configure VPC peering between every VPC to build a full mesh and centralize WebACLs with AWS WAF is inefficient and does not meet requirements because VPC peering does not support transitive routing, and AWS WAF is for web-layer protection, not centralized network access controls.

Set up AWS PrivateLink endpoints between each VPC and use AWS Security Hub for centralized security policies is unsuitable since PrivateLink exposes services over interface endpoints rather than providing general routing, and Security Hub aggregates findings but does not enforce network policies.

Establish AWS Site-to-Site VPN tunnels between each pair of VPCs and manage policies with AWS Firewall Manager is operationally heavy and not aligned with AWS-managed VPC-to-VPC connectivity; Site-to-Site VPN is intended for connecting on-premises networks to AWS rather than building VPC meshes.

Cameron’s Exam Tip

When you see requirements for transitive routing and centralized network policy management across many VPCs and accounts, think AWS Transit Gateway plus AWS Firewall Manager for the most scalable and operationally efficient design.

Question 8

Which solution provides the most cost-effective way to obtain per-object S3 access counts for immediate analysis?

  • ✓ C. Turn on S3 server access logging and analyze logs with Amazon Athena

The correct choice is Turn on S3 server access logging and analyze logs with Amazon Athena. S3 server access logs capture per-request details including object key, requester, operation, and bytes. Writing these logs to S3 and querying with Athena provides immediate, serverless, pay-per-query analysis to rank the most accessed objects at minimal cost.
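
Assuming the access logs have already been mapped to an Athena table using the standard server access log schema, a per-object ranking query might look like the sketch below; the table, database, and result bucket names are hypothetical.

```python
# Minimal sketch: rank the most requested objects from S3 server access logs.
# Table, database, and output bucket names are hypothetical; the "key" and
# "operation" columns follow the documented access log table layout.
import boto3

athena = boto3.client("athena")

query = """
SELECT key, COUNT(*) AS request_count
FROM s3_access_logs
WHERE operation = 'REST.GET.OBJECT'
GROUP BY key
ORDER BY request_count DESC
LIMIT 20
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "access_logs_db"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/per-object-counts/"},
)
```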

Amazon S3 Storage Lens is incorrect because it provides aggregated insights at the bucket or prefix level and on a periodic schedule, not detailed per-object request counts needed for precise ranking.

Enable S3 data events in AWS CloudTrail and query with Amazon Athena is not cost-effective at high request volumes because CloudTrail data events are billed per request and can become expensive compared to native S3 access logs.

Use Redshift Spectrum with a Redshift cluster to query S3 access logs is unnecessary for this use case since it requires provisioning a cluster and incurs higher ongoing costs; Athena is simpler and cheaper.

Cameron’s Exam Tip

When a question emphasizes per-object access insights that must be quick and low cost, prefer serverless, pay-per-query analytics on existing logs. Watch for distractors like Storage Lens (aggregated, not per-object), CloudTrail data events (accurate but costly for heavy traffic), and solutions that require provisioning clusters or complex pipelines.

Question 9

A fintech startup runs a critical service on Amazon EC2 instances in an Auto Scaling group. A lightweight health probe on each instance runs every 5 seconds to verify that the application responds. The DevOps engineer must use the probe results for monitoring and to raise an alarm when failures occur. Metrics must be captured at 1 minute intervals while keeping costs low. What should the engineer implement?

  • ✓ C. Create a custom CloudWatch metric and publish statistic sets that roll up the 5 second results, sending one update every 60 seconds

The most cost-effective way to ingest frequent health checks while storing metrics at 1 minute granularity is to aggregate locally and publish once per minute using CloudWatch statistic sets. Create a custom CloudWatch metric and publish statistic sets that roll up the 5 second results, sending one update every 60 seconds meets the monitoring and alarm needs while minimizing PutMetricData calls.
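
A minimal sketch of the aggregation loop follows; the probe implementation, namespace, and metric name are hypothetical stand-ins for the team's existing 5-second health check.

```python
# Minimal sketch: collect twelve 5-second probe results, then publish one
# statistic set per minute. Probe URL, namespace, and metric name are hypothetical.
import time
import urllib.request
import boto3

cloudwatch = boto3.client("cloudwatch")

def run_probe():
    """Hypothetical lightweight probe: 1 if the app answers locally, else 0."""
    try:
        urllib.request.urlopen("http://localhost:8080/health", timeout=2)
        return 1
    except Exception:
        return 0

def publish_minute(samples):
    cloudwatch.put_metric_data(
        Namespace="FintechApp/HealthProbe",  # hypothetical namespace
        MetricData=[{
            "MetricName": "ProbeSuccess",
            "StatisticValues": {
                "SampleCount": len(samples),
                "Sum": sum(samples),
                "Minimum": min(samples),
                "Maximum": max(samples),
            },
            "StorageResolution": 60,  # standard 1-minute resolution keeps costs low
        }],
    )

samples = []
while True:
    samples.append(run_probe())
    if len(samples) == 12:  # twelve 5-second samples = one minute
        publish_minute(samples)
        samples = []
    time.sleep(5)
```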

Use a default CloudWatch metric at standard resolution and add a dimension so the script can publish once every 60 seconds is not viable because default metrics are created by AWS services and cannot accept your application’s script output; dimensions only annotate metrics you already publish.

Amazon CloudWatch Synthetics could run external checks but it does not use the instance script and typically costs more than batching custom metrics for this requirement.

Use a custom CloudWatch metric at high resolution and push data every 5 seconds would increase ingestion cost and is unnecessary since the requirement is 1 minute granularity.

Cameron’s Exam Tip

When you collect high-frequency samples but only need 1 minute visibility, publish a custom metric using statistic sets once per minute to cut PutMetricData calls and cost; reserve high-resolution metrics for true sub-minute alarms.

Question 10

In AWS CloudFormation which UpdatePolicy performs rolling updates on an Auto Scaling group so instances adopt a new launch configuration while keeping at least six of ten instances in service?

  • ✓ C. UpdatePolicy: AutoScalingRollingUpdate

UpdatePolicy: AutoScalingRollingUpdate is correct because it is the CloudFormation UpdatePolicy that performs rolling updates for Auto Scaling groups and supports MinInstancesInService and MaxBatchSize to maintain capacity while updating instances to a new launch configuration. This matches the requirement to keep a specified number of instances in service during the rollout.
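
Since CloudFormation templates can be written in JSON, the relevant fragment is sketched below as a Python dictionary; the logical IDs, subnet references, and batch values are hypothetical examples of the pattern.

```python
# Minimal sketch of an Auto Scaling group with an AutoScalingRollingUpdate policy
# that keeps at least six of ten instances in service. All names are hypothetical.
import json

web_tier_asg = {
    "WebTierGroup": {
        "Type": "AWS::AutoScaling::AutoScalingGroup",
        "Properties": {
            "MinSize": "10",
            "MaxSize": "10",
            "DesiredCapacity": "10",
            "LaunchConfigurationName": {"Ref": "WebTierLaunchConfig"},
            "VPCZoneIdentifier": [{"Ref": "PrivateSubnetA"}, {"Ref": "PrivateSubnetB"}],
        },
        "UpdatePolicy": {
            "AutoScalingRollingUpdate": {
                "MinInstancesInService": 6,  # keep at least six of ten instances serving traffic
                "MaxBatchSize": 2,           # replace two instances per batch
                "PauseTime": "PT5M",
                "WaitOnResourceSignals": False,
            }
        },
    }
}

print(json.dumps(web_tier_asg, indent=2))
```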

AutoScalingReplacingUpdate is incorrect because it replaces all instances (or the entire group) rather than doing controlled batch updates, which risks breaching the in-service capacity target.

AWS CodeDeploy is incorrect since it is a deployment service outside of CloudFormation’s UpdatePolicy mechanics and is not used to orchestrate ASG rolling updates within a template.

UpdateReplacePolicy is incorrect because it governs whether to retain, delete, or snapshot a resource when it is replaced, not how updates roll through instances.

Cameron’s Exam Tip

On the exam, look for cues like MinInstancesInService and MaxBatchSize tied to CloudFormation. These strongly indicate UpdatePolicy with AutoScalingRollingUpdate. Be careful not to confuse UpdateReplacePolicy (retention behavior) or broad replacement mechanisms like AutoScalingReplacingUpdate with rolling updates. If the question emphasizes maintaining capacity during an ASG update from a template, prioritize the rolling update policy.

Question 11

Orion Byte Labs runs several services with a MERN front end behind NGINX and uses AWS CodeDeploy to automate rollouts. The team has a QA deployment group and will add PREPROD and PRODUCTION groups later. They want the NGINX log level to be set dynamically at deploy time so each group can have different verbosity without creating separate application revisions or maintaining different scripts per environment. Which approach provides the lowest ongoing management effort and avoids multiple script variants?

  • ✓ B. Use a single script that reads the DEPLOYMENT_GROUP_NAME environment variable in CodeDeploy and call it in the BeforeInstall hook to set NGINX logging per group

The least overhead solution is to leverage what CodeDeploy already exposes. Use a single script that reads the DEPLOYMENT_GROUP_NAME environment variable in CodeDeploy and call it in the BeforeInstall hook to set NGINX logging per group. This requires no additional AWS API calls or credentials and lets you apply configuration before services start.
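
A minimal sketch of such a hook script in Python follows, assuming appspec.yml references it in the BeforeInstall hook; the group-to-level mapping and the NGINX config path are hypothetical.

```python
#!/usr/bin/env python3
# Minimal sketch of a single BeforeInstall hook script that maps the CodeDeploy
# deployment group to an NGINX error_log level. Group names, levels, and the
# config path are hypothetical; DEPLOYMENT_GROUP_NAME is set by the CodeDeploy agent.
import os
import re

LOG_LEVELS = {"QA": "debug", "PREPROD": "info", "PRODUCTION": "warn"}

group = os.environ.get("DEPLOYMENT_GROUP_NAME", "QA")
level = LOG_LEVELS.get(group, "warn")

conf_path = "/etc/nginx/nginx.conf"
with open(conf_path) as f:
    conf = f.read()

# Rewrite the error_log directive in place with the level chosen for this group.
conf = re.sub(r"error_log\s+(\S+)\s+\w+;", rf"error_log \1 {level};", conf)

with open(conf_path, "w") as f:
    f.write(conf)
```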

Tag each EC2 instance with its deployment group and have a ValidateService hook script call aws ec2 describe-tags to choose the log level introduces unnecessary tagging management and CLI usage during deployment, and ValidateService occurs late in the lifecycle.

Define a custom environment variable in CodeDeploy for each environment such as QA, PREPROD, and PROD, and have a ValidateService hook script read it to set the log level is not viable because CodeDeploy hook scripts only get predefined variables and do not support arbitrary custom variables.

Invoke a script during ApplicationStart that uses the DEPLOYMENT_GROUP_ID environment variable to detect the group and update the NGINX log level relies on the group ID and runs later than necessary; even though the ID exists, using the group name is simpler and configuring earlier in BeforeInstall is preferred.

Cameron’s Exam Tip

Remember that CodeDeploy provides built-in environment variables like DEPLOYMENT_GROUP_NAME; use them to drive environment-specific behavior and pick the earliest suitable lifecycle event such as BeforeInstall when you need configuration changes in place before services start.

Question 12

What is the quickest way to isolate an impaired Availability Zone for an Application Load Balancer while diverting traffic to healthy zones with minimal changes?

  • ✓ C. Amazon Route 53 ARC zonal shift for the ALB

Amazon Route 53 ARC zonal shift for the ALB is correct because it provides an operational control to quickly evacuate a single Availability Zone for an existing Application Load Balancer without changing the infrastructure. Zonal shift instructs AWS to stop sending traffic to targets in the specified AZ, effectively isolating the impaired zone while keeping the stack intact.
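
Assuming the boto3 arc-zonal-shift client and a load balancer that supports zonal shift, starting the shift is a single call; the ALB ARN, zone ID, duration, and comment below are hypothetical.

```python
# Minimal sketch: shift traffic away from an impaired AZ for an existing ALB.
# The resource ARN, Availability Zone ID, and expiry are hypothetical values.
import boto3

arc = boto3.client("arc-zonal-shift")

arc.start_zonal_shift(
    resourceIdentifier="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/abc123",
    awayFrom="use1-az2",   # the impaired Availability Zone ID
    expiresIn="3h",        # the shift automatically expires after three hours
    comment="Evacuating impaired AZ during incident 4821",
)
```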

The option Configure Auto Scaling health checks to replace instances in that AZ is incorrect because replacing instances does not stop the ALB from routing traffic to the impaired zone; it does not achieve zonal isolation.

The option Remove the impaired AZ subnet from the ALB can isolate the zone but requires modifying the load balancer configuration or updating infrastructure as code, which is not the minimal, fast operational step the question seeks.

The option Route 53 weighted routing across two ALBs is incorrect because it introduces additional load balancers and DNS complexity and still does not solve zonal isolation for a single ALB.

Cameron’s Exam Tip

When you see requirements like isolate a single AZ, within one Region, and perform it quickly with minimal or no stack changes, think Route 53 ARC zonal shift. If the scenario talks about global entry points or multi-Region failover, consider alternatives like Global Accelerator or Route 53 failover across endpoints.

Question 13

At Northwind Publishing, a CI/CD pipeline runs ephemeral integration tests using a CloudFormation template that will later be promoted to production. The template provisions an Amazon S3 bucket and an AWS Lambda function that creates thumbnail images for any photos uploaded to the bucket. The test automation uploads 15 sample images and validates that the thumbnails appear in the same bucket, after which the pipeline attempts to delete the stack and the deletion consistently fails. What is the most likely cause and how should it be fixed?

  • ✓ C. The S3 bucket is not empty and CloudFormation cannot delete it; use a Lambda-backed custom resource to purge the bucket on stack deletion

The delete operation fails because the bucket still has objects from the test run, and CloudFormation cannot remove a non-empty S3 bucket. The right approach is to add a custom resource that runs on the stack’s Delete event to empty the bucket, including versions and delete markers if versioning is enabled.

The S3 bucket is not empty and CloudFormation cannot delete it; use a Lambda-backed custom resource to purge the bucket on stack deletion is correct because CloudFormation requires the bucket to be empty before deletion, and a custom resource can programmatically delete all contents during stack teardown.
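
A minimal sketch of that custom resource handler follows, assuming the function is declared inline in the template (which makes the cfnresponse helper importable) and that the bucket name arrives through the custom resource's BucketName property.

```python
# Minimal sketch of a Lambda-backed custom resource that empties the bucket on
# stack deletion. Assumes inline template code (cfnresponse available) and a
# BucketName resource property supplied by the template.
import boto3
import cfnresponse

def handler(event, context):
    try:
        if event["RequestType"] == "Delete":
            bucket_name = event["ResourceProperties"]["BucketName"]
            bucket = boto3.resource("s3").Bucket(bucket_name)
            # Remove all object versions and delete markers so CloudFormation
            # can then delete the now-empty bucket.
            bucket.object_versions.delete()
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        print(f"Bucket cleanup failed: {exc}")
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```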

A stack policy is blocking stack deletion; remove the stack policy and try again is incorrect because stack policies only control update operations and do not prevent stack deletion.

Add a Delete: Force setting to the S3 bucket in the template so CloudFormation auto-empties the bucket is incorrect because CloudFormation has no such property, and it will not automatically purge bucket contents.

The Lambda function is still using the bucket; add a WaitCondition to the function to allow the delete to proceed is incorrect because WaitCondition is not used with Lambda and would not solve the requirement that the bucket be empty before deletion.

Cameron’s Exam Tip

When stacks create S3 buckets for tests, ensure teardown empties the bucket first; use a Delete event on a custom resource to remove objects, and remember there is no force delete setting for S3 in CloudFormation.

Question 14

Which actions provide TLS for a custom domain, protection against edge DDoS attacks and web exploits, and reduced global latency for an application behind an ALB? (Choose 2)

  • ✓ B. Attach an AWS WAF web ACL to the CloudFront distribution

  • ✓ C. Deploy CloudFront in front of the ALB and use ACM for the custom domain TLS

The correct solution is to front the application with CloudFront for global performance and TLS, and enforce WAF at the edge. Attach an AWS WAF web ACL to the CloudFront distribution ensures requests are inspected and blocked at edge locations, stopping common web exploits before reaching the origin. Deploy CloudFront in front of the ALB and use ACM for the custom domain TLS provides HTTPS via ACM certificates, reduces global latency via edge caching, and includes AWS Shield Standard for L3/L4 DDoS protections.

The option Use AWS Global Accelerator in front of the ALB improves global routing and adds DDoS protections but does not provide web exploit filtering or caching, so it does not meet all requirements.

Associate an AWS WAF web ACL with the ALB secures at the regional layer but lacks edge-level filtering and does not address global latency.

Create a CloudFront distribution using the Auto Scaling group as the origin is invalid because CloudFront cannot directly target an Auto Scaling group; you must use an ALB or other supported origin.

Cameron’s Exam Tip

When you see requirements for global latency reduction and edge protections, think CloudFront. For custom domain TLS at the edge, pair CloudFront with ACM certificates (uploaded or issued in us-east-1 for CloudFront). Attach AWS WAF to CloudFront for edge inspection. Remember WAF cannot attach to an Auto Scaling group, and CloudFront cannot use an Auto Scaling group directly as an origin.

Question 15

Lighthouse Media is building a customer portal with AWS CloudFormation where the data platform group manages a stack for the database layer and the application team maintains a separate stack for the web and API components. The application team must reference resources created by the data platform group while both teams preserve independent lifecycles and perform resource-level change set reviews. The application team will deploy through their CI/CD pipeline. What is the best approach to enable this cross-stack usage without coupling the teams?

  • ✓ B. Export Outputs from the database stack and import them into the application stack with Fn::ImportValue

The most suitable pattern is cross-stack references. By using Outputs that the database stack exports and referencing them via Fn::ImportValue in the application stack, Export Outputs from the database stack and import them into the application stack with Fn::ImportValue preserves independent stacks, enables resource-level change set reviews per team, and lets the CI/CD pipeline update the application stack without redeploying the database stack.
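
In JSON template form (shown here as Python dictionaries with hypothetical logical IDs and export names), the two fragments look like this: the database stack exports the endpoint, and the application stack consumes it with Fn::ImportValue.

```python
# Minimal sketch of the two template fragments; all names are hypothetical.
import json

# Database stack: export the value the application team needs.
database_stack_outputs = {
    "Outputs": {
        "PortalDbEndpoint": {
            "Value": {"Fn::GetAtt": ["PortalDatabase", "Endpoint.Address"]},
            "Export": {"Name": "portal-db-endpoint"},
        }
    }
}

# Application stack fragment: import the exported value. CloudFormation records
# the cross-stack dependency and blocks deleting the export while it is in use.
application_stack_fragment = {
    "Resources": {
        "ApiFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Handler": "index.handler",
                "Runtime": "python3.12",
                "Role": {"Fn::GetAtt": ["ApiFunctionRole", "Arn"]},
                "Code": {"ZipFile": "def handler(event, context):\n    return {}"},
                "Environment": {
                    "Variables": {"DB_ENDPOINT": {"Fn::ImportValue": "portal-db-endpoint"}}
                },
            },
        }
    }
}

print(json.dumps(database_stack_outputs, indent=2))
print(json.dumps(application_stack_fragment, indent=2))
```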

Use AWS CloudFormation StackSets to share parameters across the database and application stacks is not appropriate because StackSets target multi-account or multi-Region governance, not sharing resources between two stacks in the same account and Region.

Convert the database template into a nested stack within the application stack would couple the stacks under one parent where change sets apply across the hierarchy, which conflicts with the requirement for independent resource-level reviews.

Store database resource identifiers in AWS Systems Manager Parameter Store and reference them via dynamic references in the application template lacks CloudFormation-managed dependencies and change set tracking, making it harder to validate changes and increasing drift risk.

Cameron’s Exam Tip

Use Exports and Fn::ImportValue for cross-stack references within the same account and Region when teams need independent lifecycles and change sets; prefer nested stacks only when a single stack hierarchy and unified reviews are acceptable, and remember StackSets are for multi-account and multi-Region propagation.

Question 16

What is the best way to minimize EC2 Auto Scaling scale-out readiness time for a stateless web tier that currently takes 27 minutes to install the application and 18 minutes to warm a local cache?

  • ✓ B. Golden AMI with app preinstalled; runtime config via user data

The best way to reduce time-to-ready is to make instances as immutable as possible. Golden AMI with app preinstalled; runtime config via user data removes the 27-minute install step entirely by pre-packing the application into the AMI. User data (or SSM Parameter Store) then supplies environment-specific configuration at boot without baking volatile values. Local caches should not be baked into images because they are ephemeral and change frequently; rely on external caches like ElastiCache and allow instances to warm quickly from that shared layer.

The option Use CodeDeploy during lifecycle hooks to install the app on launch; let the cache warm later still performs full installation during scale-out, preserving the lengthy 27-minute delay.

The choice AMI with app and baked local cache; set dynamic config via user data is brittle because cached content becomes stale and inconsistent, leading to correctness issues and frequent AMI rebuilds.

The approach AMI with app; user data; invoke Lambda at boot to prefill local cache adds coordination complexity and remains tied to dynamic, frequently changing cache data, which undermines predictability and speed.

Cameron’s Exam Tip

Look for cues like stateless web tier, Auto Scaling, and long bootstrap times. Prefer immutable images to eliminate installation steps, and separate configuration from code via user data or Systems Manager. Avoid baking volatile runtime data such as caches into AMIs. If mentioned, warm pools can complement this pattern, but they do not replace the benefits of a properly baked AMI.

Question 17

AuroraView, a media-streaming startup, wants to standardize provisioning with AWS CloudFormation. The platform team must restrict launches to eu-west-1 and ap-southeast-2 and require CostCenter, Environment, and DataClass tags on all resources, while engineering teams continue to deploy multiple versions of the same service. What approach will best enforce these controls and still allow developers to deploy different releases?

  • ✓ C. Publish vetted CloudFormation products in AWS Service Catalog

The right choice is Publish vetted CloudFormation products in AWS Service Catalog. Service Catalog allows the platform team to curate approved templates, publish multiple product versions for different releases, and apply constraints that restrict parameters and effectively scope usage to approved regions while enforcing required tags through template rules and product controls.

Launch approved CloudFormation templates through StackSets is not sufficient because StackSets orchestrates distribution but does not enforce mandatory tags or prevent developers from launching in disallowed regions.

Create AWS Trusted Advisor checks to detect unapproved StackSets is incorrect since Trusted Advisor cannot enforce CloudFormation governance or block noncompliant launches.

Use CloudFormation drift detection to find and remediate noncompliant stacks is reactive and detects differences only after deployment, so it cannot guarantee pre-deployment policy compliance.

Cameron’s Exam Tip

For governance of IaC at scale, look for curated, versioned, and constrained launches via Service Catalog; tools like StackSets solve distribution, not policy enforcement.

Question 18

Which Elastic Beanstalk deployment policy updates instances in place using batches so the application keeps the same DNS, avoids creating new resources, and maintains availability?

  • ✓ B. Rolling deployment with 30% batches

The correct choice is Rolling deployment with 30% batches. Rolling deployments update instances in-place in batches within the existing environment, which keeps the DNS endpoint unchanged, avoids standing up a new environment or Auto Scaling group, and maintains availability because some instances remain in service during each batch.
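
The policy can be set through the aws:elasticbeanstalk:command namespace, for example with boto3 as sketched below; the environment name is hypothetical.

```python
# Minimal sketch: configure rolling deployments in 30% batches on an existing
# Elastic Beanstalk environment. The environment name is hypothetical.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="orders-api-prod",
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "DeploymentPolicy", "Value": "Rolling"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSizeType", "Value": "Percentage"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSize", "Value": "30"},
    ],
)
```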

Set up blue/green and swap CNAMEs is incorrect because it provisions a separate environment before the swap, introducing new resources.

In-place all-at-once deployment is incorrect because updating every instance simultaneously causes downtime.

Rolling deployment with an additional batch is incorrect because it temporarily adds instances to increase capacity, which violates the requirement to avoid creating new resources.

Cameron’s Exam Tip

Match constraints to deployment modes. If you see keep DNS, exclude blue/green. If you see no new resources, exclude immutable and additional-batch. If you see maintain availability, exclude all-at-once. That leaves standard rolling deployments with an appropriate batch size.

Question 19

HarborPay, a fintech startup, runs roughly 25 EC2-based microservices and each service uses its own Amazon Machine Image. Releases currently require engineers to manually bake a new AMI and roll it out. Leadership wants this image creation to be automated and triggered from the CI/CD pipeline, and the resulting AMI ID must be saved in a central location that other pipelines can read programmatically. What is the most cost-effective approach with the least operational overhead?

  • ✓ B. Use AWS Systems Manager Automation runbooks triggered by AWS CodePipeline via Amazon EventBridge to bake AMIs and publish the AMI IDs to Systems Manager Parameter Store

The best choice is Use AWS Systems Manager Automation runbooks triggered by AWS CodePipeline via Amazon EventBridge to bake AMIs and publish the AMI IDs to Systems Manager Parameter Store. Systems Manager Automation provides runbooks such as AWS-UpdateLinuxAmi and AWS-UpdateWindowsAmi to standardize image baking without maintaining servers. Triggering via CodePipeline and EventBridge keeps the process event-driven, and Parameter Store offers a low-cost, centrally managed, and programmatically accessible location for AMI IDs.
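
A minimal sketch of the Parameter Store handoff follows, with hypothetical parameter names and AMI ID; the bake itself would be driven by the Automation runbook that the pipeline triggers.

```python
# Minimal sketch: publish the freshly baked AMI ID to Parameter Store and read it
# back from another pipeline. Parameter name and AMI ID are hypothetical.
import boto3

ssm = boto3.client("ssm")

# Publishing side: the automation step records the new AMI ID.
ssm.put_parameter(
    Name="/harborpay/payments-service/ami-id",
    Value="ami-0abc1234567890def",  # output of the bake step
    Type="String",
    Overwrite=True,
)

# Consuming side: any other pipeline reads the current AMI ID programmatically.
ami_id = ssm.get_parameter(Name="/harborpay/payments-service/ami-id")["Parameter"]["Value"]
print(ami_id)
```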

Run Packer from a Jenkins job on a provisioned EC2 instance and record AMI IDs in Amazon DynamoDB adds operational burden by running and patching Jenkins on EC2 and paying for always-on infrastructure, and DynamoDB is unnecessary for simple parameter retrieval compared to Parameter Store.

Provision EC2 build workers in CodePipeline to download OVF images, customize with guestfish, import as AMIs, and save AMI IDs in Systems Manager Parameter Store is overly complex and manual, requiring EC2 build capacity and VM import workflows that increase cost and time.

Use AWS CodePipeline to snapshot EBS-backed instances, patch via AWS Lambda, create AMIs from snapshots, and write AMI IDs to Amazon S3 introduces many components and uses S3 for configuration values, which is less suitable than Parameter Store and increases operational overhead.

Cameron’s Exam Tip

When you see image baking with a requirement for least overhead and a central, programmatic store for IDs, think Systems Manager Automation for AMI creation and Parameter Store for distribution, triggered by CodePipeline and EventBridge.

Question 20

What is the most cost-effective way to block SQL injection attacks against an Amazon API Gateway API while retaining a searchable audit history of WAF rule changes?

  • ✓ B. WAF web ACL on API Gateway stage with managed SQLi rules; track via AWS Config

WAF web ACL on API Gateway stage with managed SQLi rules; track via AWS Config is correct because AWS WAF integrates directly with API Gateway stages. Applying the AWS Managed Rules for SQLi provides immediate protections against common injection patterns, and a scope-down statement can target only sensitive routes (for example, login) to reduce false positives. AWS Config records WAFv2 resource configuration changes, giving a searchable, auditable history without adding unnecessary services or cost.
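
Assuming a regional web ACL containing the managed SQLi rule group already exists, associating it with the API stage is one call; both ARNs below are hypothetical.

```python
# Minimal sketch: attach a regional WAFv2 web ACL to an API Gateway stage.
# The web ACL ARN, API ID, and stage name are hypothetical.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/api-sqli-protection/aaaa-bbbb",
    # API Gateway stage ARN format: arn:aws:apigateway:{region}::/restapis/{api-id}/stages/{stage}
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod",
)
```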

The option AWS Shield Advanced with CloudTrail logs is incorrect because Shield Advanced focuses on DDoS mitigation rather than application-layer attacks like SQL injection, and CloudTrail is primarily API activity logging, not a dedicated, queryable configuration history like AWS Config provides.

The option CloudFront before API Gateway, WAF on CloudFront, block attacker IP in VPC NACL, use AWS Config is incorrect because this adds complexity and cost; NACLs do not protect API Gateway since it is not in your VPC, and placing CloudFront in front of API Gateway is unnecessary solely for SQLi protection.

The option API Gateway throttling and usage plans with CloudWatch Logs for change history is incorrect because throttling does not prevent injection payloads and CloudWatch Logs do not deliver authoritative configuration timelines like AWS Config.

Cameron’s Exam Tip

Remember that API Gateway can be directly associated with a regional AWS WAF web ACL. Map attack types to the right service: SQL injection implies WAF managed rules, while DDoS implies Shield. For searchable configuration change history, prefer AWS Config over CloudTrail. Also note that VPC NACLs and security groups protect VPC resources, not regional edge services like API Gateway.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out..

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel, which trains devs in Java, Spring, AI and ML, has well over 30,000 subscribers.