Professional DevOps Engineer Sample Questions and Answers

AWS Certified DevOps Engineer Exam Topics Test

The AWS Certified DevOps Engineer Professional exam validates your ability to automate, operate, and manage distributed applications and systems on the AWS platform. It focuses on continuous delivery, automation of security controls, governance, monitoring, metrics, and high availability strategies that support scalable and resilient cloud solutions.

To prepare effectively, explore these AWS DevOps Engineer Practice Questions designed to reflect the structure, logic, and depth of the real certification exam. You’ll find Real AWS DevOps Exam Questions that mirror complex deployment and automation scenarios, along with AWS DevOps Engineer Sample Questions covering CI/CD pipelines, CloudFormation, Elastic Beanstalk, ECS, and CodePipeline integrations.

Targeted AWS Exam Topics

Each section includes AWS DevOps Engineer Exam Questions and Answers created to teach as well as test. These scenarios challenge your understanding of how to build automated solutions using services like CloudWatch, CodeDeploy, and Systems Manager. The explanations clarify not only what the correct answers are but also why, helping you reason through real-world DevOps trade-offs.

For additional preparation, try the AWS DevOps Engineer Exam Simulator and complete AWS DevOps Engineer Practice Tests that track your progress over time. These simulated exams provide the same tone and pacing as the official test, helping you get comfortable with AWS exam timing and complexity.

If you prefer focused study materials, the AWS DevOps Engineer Exam Dump, AWS DevOps Engineer Braindump, and AWS DevOps Engineer Sample Questions & Answers collections organize hundreds of authentic practice items to strengthen your understanding of automation, monitoring, and deployment pipelines across multi-account AWS environments.

Mastering these AWS DevOps Engineer Exam Questions gives you the confidence to pass the certification and the practical skills to deliver automated, secure, and scalable solutions in real AWS production environments.

AWS Certified DevOps Engineer Sample Questions


Certification Questions and Answers

At Northwind HealthTech, an AWS CloudFormation stack provisions an S3 bucket, one EC2 instance, and a single EBS volume. A compliance request requires changing the stack name, but the live resources must continue running and must not be recreated or deleted. What is the best way to rename the stack while keeping all current resources in place?

  • ❏ A. Apply DeletionPolicy Retain to the S3 bucket and EC2 instance and DeletionPolicy Snapshot to the EBS volume, delete the old stack, create a new stack with the desired name, import the retained resources, then remove those policies

  • ❏ B. Update the existing template to set DeletionPolicy Retain on every resource, delete the stack so the resources are preserved, create a new stack with the new name, import the retained S3 bucket, EC2 instance, and EBS volume, then remove the Retain settings

  • ❏ C. Create CloudFormation registry hooks for each resource, delete the original stack, create a new stack with the target name, and rely on the hooks to bring the existing resources under management

  • ❏ D. Deploy a second stack with the new name that duplicates the resources and adds DependsOn relationships to the original stack, then delete both stacks while retaining the underlying resources

A fintech analytics startup runs a RESTful service on EC2 instances in an Auto Scaling group behind an Application Load Balancer. The application writes request data to Amazon DynamoDB, and static assets are hosted on Amazon S3. Monitoring shows roughly 90% of reads are repeated across many users. What should you implement to increase responsiveness and lower costs?

  • ❏ A. Use ElastiCache for Redis in front of DynamoDB and serve static files with CloudFront

  • ❏ B. Configure API Gateway caching for the API and enable S3 Transfer Acceleration on the bucket

  • ❏ C. Activate DynamoDB Accelerator and put CloudFront in front of the S3 origin

  • ❏ D. Enable DAX for DynamoDB and add ElastiCache Memcached to speed up S3 reads

A medical imaging startup named RadiantLabs is moving a legacy three-tier system to AWS. Due to vendor licensing, the application layer must run on a single Amazon EC2 Dedicated Instance that uses an Elastic Fabric Adapter and instance store volumes, so Auto Scaling cannot be used. The database tier will be migrated to Amazon Aurora to persist application data and transactions across at least two Availability Zones. The team needs automated healing to keep the service available if the EC2 host fails or the Aurora writer becomes unavailable while minimizing ongoing cost by avoiding always-on duplicates. What is the most cost-effective approach to meet these requirements?

  • ❏ A. Monitor the Dedicated Instance with AWS Health, allocate an Elastic IP, pre-provision a second EC2 instance in another Availability Zone, and use EventBridge with Lambda to move the EIP on failure; run a single-instance Aurora database

  • ❏ B. Use AWS Health with CloudWatch alarms to enable EC2 automatic recovery and deploy a single-instance Aurora database governed by an AWS Config conformance pack

  • ❏ C. Subscribe to AWS Health events and trigger an EventBridge rule that invokes a Lambda function to create a replacement EC2 instance in another Availability Zone upon failure, and configure an Aurora cluster with one cross-AZ Aurora Replica that can be promoted

  • ❏ D. Place the instance in an Auto Scaling group with desired capacity 1 and lifecycle hooks for recovery, and migrate the database to Aurora Serverless v2 for automatic failover

PixelBay, a retail startup, plans to roll out a global mobile commerce platform built with AWS Amplify that will serve over 30 million users. To minimize latency, the company will run its payment and order APIs in multiple AWS Regions so customers read and write to the Region nearest to them. Leadership requires that any transaction written in one Region be automatically propagated to other Regions without building custom replication code. The user base will span North America, Latin America, Europe, and Asia-Pacific. What architecture should the DevOps engineer choose to maximize scalability, availability, and cost efficiency?

  • ❏ A. Write orders to a DynamoDB table in each Region and use a Lambda function subscribed to a primary table’s stream to replay recent writes to tables in all other Regions

  • ❏ B. Deploy Amazon Aurora Global Database with multi-writer capability across the required Regions and replicate writes using Aurora replication

  • ❏ C. Create an Amazon DynamoDB global table and add replica Regions for each market; write to the local replica in each Region so DynamoDB replicates changes automatically worldwide

  • ❏ D. Create a DynamoDB global table in one Region that automatically creates replicas in every AWS Region and keeps them in sync without any configuration

Which AWS services provide centralized collection of Prometheus metrics from EKS, ECS, and on-premises Kubernetes and also offer managed dashboards and analysis?

  • ❏ A. Amazon CloudWatch agent with Athena and QuickSight

  • ❏ B. AWS Systems Manager Agent with Amazon Managed Service for Prometheus

  • ❏ C. Amazon Managed Service for Prometheus plus Amazon Managed Grafana

  • ❏ D. Amazon CloudWatch Container Insights with CloudWatch dashboards

MetroCare Alliance, a nonprofit healthcare consortium, is moving its secure document storage to AWS. They must store patient PII and sensitive billing records, encrypt data both at rest and during transmission, and maintain ongoing replication to two locations that are at least 700 miles apart. As the DevOps Engineer, which approach should you implement to meet these requirements?

  • ❏ A. Create primary and secondary S3 buckets in different Availability Zones at least 700 miles apart, require HTTPS via a bucket policy, use SSE-KMS for all objects, enable S3 Transfer Acceleration, and keep the KMS key in the primary Region

  • ❏ B. Create S3 buckets in two AWS Regions at least 700 miles apart, attach an IAM role that requires TLS-only access, configure a bucket policy for SSE-S3, and enable cross-Region replication

  • ❏ C. Create S3 buckets in two separate AWS Regions at least 700 miles apart, enforce HTTPS-only access with a bucket policy, require SSE-S3 for all objects, and enable S3 cross-Region replication

  • ❏ D. Create S3 buckets in two AWS Regions at least 700 miles apart, enforce HTTPS with a bucket policy, require SSE-S3, and use S3 Multi-Region Access Points for automatic replication without setting replication rules

EcoRide Labs, an electric mobility analytics firm, has nearly completed its AWS migration and notices that several platform engineers can remove Amazon DynamoDB tables. The operations team wants a very low-cost solution that alerts them within about a minute whenever the DynamoDB DeleteTable API is called in the account. What should they implement?

  • ❏ A. Configure a CloudTrail event selector and invoke an AWS Lambda function to publish to Amazon SNS

  • ❏ B. Enable DynamoDB Streams and trigger an AWS Lambda function to send Amazon SNS notifications

  • ❏ C. Create an AWS CloudTrail trail and add an Amazon EventBridge rule for AWS API Call via CloudTrail that matches DeleteTable and targets Amazon SNS

  • ❏ D. Create an AWS Config custom rule to detect table deletions and publish to Amazon SNS

The platform team at NovaTech Robotics shares an Amazon S3 bucket named devops-eu-artifacts-002 to store build artifacts used by multiple CI/CD pipelines across two product lines. A new engineer recently updated the bucket policy and unintentionally blocked downloads, which halted deployments. The team now wants immediate notifications whenever the bucket policy is changed so they can react quickly. What is the most appropriate solution?

  • ❏ A. Enable S3 server access logging on the bucket, create a CloudWatch metric filter for bucket policy events, and set a CloudWatch alarm

  • ❏ B. Configure S3 Event Notifications to publish bucket policy update events to an Amazon SNS topic

  • ❏ C. Create a CloudTrail trail that delivers management events to a CloudWatch Logs log group, add a metric filter for PutBucketPolicy and DeleteBucketPolicy, and configure a CloudWatch alarm to notify on matches

  • ❏ D. Create an EventBridge rule that filters S3 bucket policy events from a CloudWatch Logs group and trigger an alarm when matched

NovaVista Media deploys applications on AWS Elastic Beanstalk and manages the underlying resources with AWS CloudFormation. The operations team produces a hardened golden AMI with the latest security fixes every 10 days, and more than 150 Beanstalk environments were created from many different CloudFormation templates. You need each environment to move to the newest AMI on that cadence, but the templates are not standardized and parameter names vary across stacks. How should you implement this so the EC2 AMI used by the environments is refreshed on schedule without manually editing every template?

  • ❏ A. Store the current golden AMI ID in an S3 object and use a CloudFormation mapping in every template; schedule a Lambda function to rewrite the mapping in each template in S3 every 10 days and then call UpdateStack for all Beanstalk stacks

  • ❏ B. Publish the AMI ID in AWS AppConfig and reference it from a CloudFormation parameter; invoke a scheduled Lambda every 10 days to update all stacks

  • ❏ C. Put the AMI ID in AWS Systems Manager Parameter Store and use an SSM parameter type in CloudFormation so the value is resolved at update time; trigger a scheduled EventBridge rule every 10 days that runs a Lambda to call UpdateStack on all stacks

  • ❏ D. Keep the AMI ID in AWS Systems Manager Parameter Store but have a Lambda read it and pass it as a string parameter to UpdateStack for every stack on a 10 day schedule

An application runs on EC2 instances behind an Application Load Balancer and listens on a custom port. Which actions correctly configure health checks so that client access is restored? (Choose 2)

  • ❏ A. Change the target type to IP addresses

  • ❏ B. Set the ALB listener to TCP on the custom port

  • ❏ C. Use ELB health checks in the Auto Scaling group for the app’s port and path

  • ❏ D. Set the target group health check to the app’s custom port and health endpoint

  • ❏ E. Open the instance security group to the ALB on the custom port

All questions come from certificationexams.pro and my AWS DevOps Engineer Udemy course.

A startup named Oriole Data maintains a Java service built with Apache Maven. The source code lives in GitHub, and on each push to the main branch the team wants the project compiled, unit tests executed, and the built artifact uploaded to an Amazon S3 bucket. What combination of actions should the DevOps engineer implement to achieve this? (Choose 3)

  • ❏ A. Provision a single Amazon EC2 instance, install build tools via user data, and run the build there

  • ❏ B. Add a buildspec.yml file to the repository that defines the Maven compile, test, and packaging steps

  • ❏ C. Launch an AWS CodeDeploy application targeting the EC2/On-Premises compute platform

  • ❏ D. Configure a GitHub webhook so that each push to the repository triggers a build

  • ❏ E. Create an AWS CodeBuild project that connects to the GitHub repository as its source

  • ❏ F. Implement an AWS Lambda function invoked by an Amazon S3 event to compile the code and run tests

A fintech company, NovaLedger, must ensure its licensed middleware runs only on Amazon EC2 Dedicated Hosts to control third party license spend. The platform team needs an automated audit in one Amazon VPC that detects any EC2 instance not placed on a Dedicated Host and produces a compliance summary about every 45 days with the least ongoing administration. What should the engineer implement?

  • ❏ A. Use AWS Systems Manager Configuration Compliance with PutComplianceItems, store instance IDs in Parameter Store, and summarize with ListComplianceSummaries

  • ❏ B. AWS License Manager

  • ❏ C. Turn on AWS Config recording for EC2 instances and Dedicated Hosts with a custom AWS Config rule that invokes Lambda to evaluate host placement and mark noncompliant instances, then use AWS Config compliance reports

  • ❏ D. Use AWS CloudTrail with a Lambda function to parse EC2 events, store noncompliant instance IDs in Amazon S3, and query results with Amazon Athena

NovaCanvas, a digital illustration startup, is rolling out an API enhancement that expects a JSON field named color. The Lambda handler must interpret color set to none as traffic from legacy clients. Operations wants to maintain a single Lambda backend while serving both older and newer apps. The API is currently deployed to a stage called prod-v1, and some Android users will upgrade slowly. The solution must keep backward compatibility for several years. What approach best satisfies these requirements?

  • ❏ A. Publish a new Lambda version as a separate v2 function and create a prod-v2 API Gateway stage that invokes it; keep prod-v1 invoking the original function and add logic in the v1 function to proxy requests without color to the v2 function

  • ❏ B. Enable API Gateway caching on prod-v1 and remove the legacy Lambda; deploy a new prod-v2 API backed by a new Lambda version and toggle a stage variable to supply a default color of none

  • ❏ C. Publish a new Lambda version and expose a new API Gateway stage named prod-v2 that, along with prod-v1, invokes the same Lambda alias; on prod-v1 add a request mapping template that injects “color”: “none” into the payload

  • ❏ D. Publish a new Lambda version and keep a single prod stage by configuring an API Gateway mapping template that conditionally fills a default color of none when the attribute is missing

AuroraStream, a digital media startup, is migrating a legacy monolith to a serverless architecture on AWS to cut operational overhead. The next application version should be exposed to only a small portion of clients for early validation before a full rollout. If automated post-deployment checks fail, the team must rapidly revert with minimal disruption to production. Which deployment approach best meets these goals while reducing risk to the live environment? (Choose 2)

  • ❏ A. Create a Network Load Balancer with an Amazon API Gateway private integration and two target groups, direct 15% of requests to the new version and remove the old target group after stability

  • ❏ B. Use a single AWS Lambda alias that references the current and new versions, shift 15% of traffic to the new version and later route 100% when it proves stable

  • ❏ C. Configure Amazon Route 53 with a Failover policy to send 15% to the new endpoint and the remainder to the existing one, then switch all traffic after verification

  • ❏ D. Enable a canary deployment on Amazon API Gateway to send 15% of calls to the canary and promote it to production after automated checks succeed

  • ❏ E. Set up an Application Load Balancer behind an API Gateway private integration and choose a built-in Canary routing option to direct traffic to the new version

In an ECS blue/green deployment using CodeDeploy, which AppSpec lifecycle event should perform automated checks against the replacement task set using the test listener and enable automatic rollback before production traffic is shifted?

  • ❏ A. AppSpec AfterInstall

  • ❏ B. AppSpec AllowTestTraffic

  • ❏ C. AppSpec AfterAllowTestTraffic with Lambda

  • ❏ D. AppSpec BeforeAllowTraffic

A platform engineer at LumaHealth is releasing a single-page web application that must federate with the company’s SAML 2.0 identity provider. The app requires a branded sign-up and sign-in flow hosted within the app, and once authenticated it must obtain temporary credentials to invoke AWS services. How should the engineer implement authentication and access control for this application?

  • ❏ A. Use an Amazon API Gateway REST API with an AWS Lambda authorizer that accepts SAML assertions as bearer tokens and apply identity-based policies for backend access

  • ❏ B. Use Amazon Cognito Federated Identities with a SAML provider and call AWS STS AssumeRoleWithWebIdentity to obtain temporary credentials

  • ❏ C. AWS IAM Identity Center

  • ❏ D. Use Amazon Cognito user pools integrated with a SAML IdP, configure domain-based identifiers to route sign-ins, then use the issued tokens to exchange for short-lived AWS credentials

Aurora Parcel runs a new serverless workload made up of several AWS Lambda functions and an Amazon DynamoDB table named FleetEvents. The team’s CI/CD pipeline uses GitHub, AWS CodeBuild, and AWS CodePipeline with source, build, test, and deploy stages already in place. To reduce blast radius, the platform lead wants each release to first go to a small portion of production traffic for a short bake period, and only then roll out to everyone if no issues are detected. How should the deployment stage be updated to achieve this?

  • ❏ A. Use AWS CloudFormation to define and publish the new application version, then deploy the Lambda functions with AWS CodeDeploy using CodeDeployDefault.LambdaAllAtOnce

  • ❏ B. Publish a new version with AWS CloudFormation and add a manual approval in CodePipeline to verify the change, then have CodePipeline switch the Lambda production alias

  • ❏ C. Define and publish the serverless application version with AWS CloudFormation and deploy the Lambda updates with AWS CodeDeploy using CodeDeployDefault.LambdaCanary20Percent10Minutes

  • ❏ D. AWS AppConfig

VegaPay operates about 3,200 EBS-backed Amazon EC2 instances that power a latency-sensitive service across several Availability Zones. To minimize disruption, the operations team wants any instance targeted for an AWS-scheduled EC2 instance retirement to be automatically stopped and then started so it moves to healthy hardware before the retirement takes effect. What is the most effective way to implement this automation?

  • ❏ A. Attach the instances to an Auto Scaling group with Elastic Load Balancing health checks and rely on automatic replacement if a host degrades

  • ❏ B. Create an Amazon EventBridge rule using Amazon EC2 as the source that matches instance state changes to stopping or shutting-down and trigger an AWS Systems Manager Automation document to start the instances

  • ❏ C. Configure an Amazon EventBridge rule for AWS Health EC2 retirement scheduled events that invokes an AWS Systems Manager Automation runbook to stop and then start the impacted instances

  • ❏ D. Enable EC2 Auto Recovery by creating a CloudWatch alarm with the Recover action and schedule recovery to occur after hours

A DevOps engineer at Zephyr Retail needs to implement configuration governance across several AWS accounts. Security requires a near real-time view that shows which resources are compliant and flags any violations within three minutes of a change. Which solution best satisfies these requirements?

  • ❏ A. Use Amazon Inspector to assess compliance of workloads, send findings to Amazon CloudWatch Logs, and surface controls with a CloudWatch dashboard and custom metric filters

  • ❏ B. Apply mandatory tags to every resource and rely on AWS Trusted Advisor to flag noncompliant items, checking overall posture in the AWS Management Console

  • ❏ C. Enable AWS Config to capture configuration changes, deliver snapshots and history to Amazon S3, and build near real-time compliance visuals in Amazon QuickSight

  • ❏ D. Use AWS Systems Manager State Manager and Compliance to enforce settings and track violations, publishing metrics to an Amazon CloudWatch dashboard with Amazon SNS alerts

During a CodeDeploy blue/green deployment of a Lambda alias behind API Gateway, how can you prevent traffic shifting until the API Gateway stage live-v3 is responding?

  • ❏ A. CloudWatch alarm on the API endpoint in the deployment config

  • ❏ B. BeforeAllowTraffic hook that invokes a validator Lambda for the live-v3 stage

  • ❏ C. CodePipeline pre-deploy test stage that calls the API, then starts CodeDeploy

  • ❏ D. AfterAllowTraffic hook with a health-check Lambda

At a travel-booking startup named AeroNomad, the DevOps team built an AWS CodePipeline whose final stage uses AWS CodeDeploy to update an AWS Lambda function. As the engineering lead, you want each deployment to send a small portion of live traffic to the new version for 5 minutes, then shift all traffic to it, and you require automatic rollback if the function experiences a spike in failures. Which actions should you recommend? (Choose 2)

  • ❏ A. Choose a deployment configuration of LambdaAllAtOnce

  • ❏ B. Create a CloudWatch alarm on Lambda metrics and associate it with the CodeDeploy deployment

  • ❏ C. Create an Amazon EventBridge rule for deployment monitoring and attach it to the CodeDeploy deployment

  • ❏ D. Choose a deployment configuration of LambdaCanary10Percent5Minutes

  • ❏ E. Choose a deployment configuration of LambdaLinear10PercentEvery10Minutes

A live quiz gaming platform run by a startup in Singapore has been operating in a single AWS Region and must now serve players worldwide. The backend runs on Amazon EC2 with a requirement for very high availability and consistently low latency. Demand will be uneven, with a few countries expected to produce significantly more traffic than others. Which routing approach should be implemented to best satisfy these needs?

  • ❏ A. Utilize Route 53 latency-based routing and deploy the EC2 instances in Auto Scaling groups behind Application Load Balancers across three Regions

  • ❏ B. Create Route 53 weighted routing records for Application Load Balancers that front Auto Scaling groups of EC2 instances in three Regions

  • ❏ C. Deploy EC2 instances in Auto Scaling groups behind Application Load Balancers in three Regions and use Route 53 geoproximity routing records that point to each load balancer with adjustable bias

  • ❏ D. Deploy EC2 instances in Auto Scaling groups behind Application Load Balancers in three Regions and use Route 53 geolocation routing records that point to each load balancer

Riverton Labs uses AWS IAM Identity Center for workforce access and wants to avoid reliance on standalone IAM users. A DevOps engineer must implement automation that disables any credentials for a newly created IAM user within 90 seconds and ensures the security operations team is alerted when such creation occurs. Which combination of actions will achieve this outcome? (Choose 3)

  • ❏ A. Create an AWS Config rule that publishes to Amazon SNS when IAM user resources are modified

  • ❏ B. Build an AWS Lambda function that disables access keys and removes the console login profile for newly created IAM users, and invoke it from the EventBridge rule

  • ❏ C. Create an Amazon EventBridge rule that triggers on IAM GetLoginProfile API calls in AWS CloudTrail

  • ❏ D. Define an Amazon SNS topic as a target for an EventBridge rule and subscribe the security operations distribution list

  • ❏ E. Configure an Amazon EventBridge rule that matches IAM CreateUser API calls recorded by AWS CloudTrail

  • ❏ F. Build an AWS Lambda function that only deletes login profiles for new IAM users, and invoke it from the EventBridge rule

A data engineering team at Orion Cart runs batch jobs on an Amazon EMR cluster that scales across EC2 instances for a retail analytics platform. After five months, AWS Trusted Advisor highlighted multiple idle EMR nodes, and the team discovered there were no scale-in settings, which led to unnecessary spend. They want to receive timely notifications about Trusted Advisor findings so they can react quickly before costs grow. Which approaches would meet this requirement? (Choose 3)

  • ❏ A. Create an Amazon EventBridge rule that listens for Trusted Advisor check status updates and route matches to an Amazon SNS topic for email alerts

  • ❏ B. Enable Trusted Advisor’s built-in email summaries to receive weekly savings and checks digest

  • ❏ C. Schedule an AWS Lambda function daily to call the Trusted Advisor API, then post a message to an Amazon SNS topic to notify subscribed teams

  • ❏ D. Turn on an automatic EventBridge notification for Trusted Advisor that emails the account owner without creating any rules

  • ❏ E. Run an AWS Lambda job every day to refresh Trusted Advisor via API and push results to CloudWatch Logs; create a metric filter and a CloudWatch alarm that sends notifications

  • ❏ F. Use an Amazon EventBridge rule for Trusted Advisor and configure Amazon SES directly as the target for sending emails

Which AWS continuous integration and continuous delivery design enables near zero downtime releases and allows rollback to the previous version in under 4 minutes?

  • ❏ A. AWS CodeCommit with CodeDeploy in-place rolling across an Auto Scaling group

  • ❏ B. Single repo with develop->main via PRs, CodeBuild on commit, CodeDeploy blue/green with traffic shifting

  • ❏ C. Per-developer repos, shared develop, CodeBuild, PRs to main, CodeDeploy blue/green

  • ❏ D. AWS Elastic Beanstalk with rolling updates

A compliance audit at Solstice Analytics discovered that an AWS CodeBuild build retrieves a database seed script from an Amazon S3 bucket using an anonymous request. The security team has banned any unauthenticated access to S3 for this pipeline. What is the most secure way to remediate this?

  • ❏ A. Enable HTTPS basic authentication on the S3 bucket and use curl to pass a token for the download

  • ❏ B. Remove public access with a bucket policy and download the file using the AWS CLI with a long-lived IAM access key and secret configured in environment variables

  • ❏ C. Remove public access with a bucket policy and grant the CodeBuild service role least-privilege S3 permissions, then use the AWS CLI in the build to retrieve the object

  • ❏ D. Attach the AmazonS3FullAccess managed policy to the CodeBuild service role and keep the bucket public, then use the AWS CLI to pull the file

Riverton Analytics is modernizing a monolithic web application on AWS that currently runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The team wants to release a new build to a small slice of users first and then increase exposure to all traffic without any downtime. What approach should they use to perform this canary rollout?

  • ❏ A. Configure a private REST API in Amazon API Gateway with an Application Load Balancer integration and create a new stage for the updated build, then use API Gateway canary release to send a small portion of requests to the stage

  • ❏ B. Use Amazon Route 53 latency-based routing between two Application Load Balancers so that you can gradually move a percentage of users to the new stack

  • ❏ C. Stand up a parallel stack with an Application Load Balancer and Auto Scaling group running the new version, then use Route 53 weighted alias records to split and progressively shift traffic between the two load balancers

  • ❏ D. Perform a canary deployment with AWS CodeDeploy using the CodeDeployDefault.LambdaCanary20Percent15Minutes configuration

StratusPrint, a digital publishing firm, requires every Amazon EBS volume to be snapshotted on a biweekly schedule. An internal job reads a custom tag named BackupInterval on each volume and triggers the snapshot when the value is 14d, but an audit found several volumes were missed because the tag was not present. What is the most reliable way to enforce the tag and automatically remediate any EBS volumes in this AWS account that are missing it?

  • ❏ A. Create an Amazon EventBridge rule for CloudTrail CreateVolume events and target an AWS Systems Manager Automation runbook to add the default 14d tag

  • ❏ B. Configure AWS Config with the required-tags rule scoped to AWS::EC2::Volume and attach a Systems Manager Automation remediation to add the BackupInterval=14d tag to noncompliant volumes

  • ❏ C. Use an IAM policy with aws:RequestTag and aws:TagKeys to deny CreateVolume unless BackupInterval=14d is supplied

  • ❏ D. Set up AWS Config to evaluate AWS::EC2::Instance and remediate by tagging any attached volumes with the default 14d value

A public university runs its student records platform on roughly 180 Windows Server Amazon EC2 instances, each with multiple attached EBS volumes. The security team requires an automated process to install the latest security updates across the fleet and a control that continuously evaluates whether running instances were launched from an approved AMI list. Engineers must still be able to start instances from other AMIs for experiments, but operations need to be alerted whenever any noncompliant instance is detected in the VPC. What should you implement to satisfy these requirements without blocking launches?

  • ❏ A. Use Amazon GuardDuty to continuously audit EC2 instances for missing security patches and unapproved AMIs, and trigger Amazon CloudWatch alarms for any noncompliance

  • ❏ B. Create an IAM policy that denies launching EC2 instances from any AMI not on an approved list, and set up CloudWatch alarms to alert on violations

  • ❏ C. Define a patch baseline with AWS Systems Manager Patch Manager and use an AWS Config managed rule to evaluate instances against an approved AMI list, with CloudWatch alarms for any noncompliant resources

  • ❏ D. Use Amazon Inspector to scan the fleet for vulnerabilities and verify AMI approvals, then notify operations through CloudWatch alarms

How can you deploy EC2 Auto Scaling instances from a single reusable AMI across multiple environments while securing environment-specific secrets and reducing instance boot time?

  • ❏ A. Session Manager-baked AMI + Lambda from user data + Secrets Manager

  • ❏ B. Patch Manager AMI prep + Lambda + AppConfig

  • ❏ C. SSM Automation golden AMI + user data reads env tag + Parameter Store SecureString

  • ❏ D. CloudFormation cfn-init on boot + stack parameter env + Secrets Manager

An online payments startup named LumaPay deployed a Ruby on Rails application to AWS Elastic Beanstalk in a sandbox account to handle provisioning, load balancing, auto scaling, and health checks. The environment was launched with an Amazon RDS PostgreSQL instance that is attached to the Elastic Beanstalk environment, so terminating the environment also removes the database. The team needs to perform blue green cutovers and eventually promote the stack to production without putting the database at risk. How should the DevOps engineer separate the database lifecycle from the environment with the least chance of data loss?

  • ❏ A. Use a canary deployment, take an RDS snapshot and enable deletion protection, point a new Elastic Beanstalk environment at the same RDS instance, then delete the old environment

  • ❏ B. Run a blue green deployment, create an RDS snapshot and enable deletion protection, launch a new Elastic Beanstalk environment configured to use the existing RDS instance, and immediately delete the old environment

  • ❏ C. Use blue green, snapshot the database and turn on deletion protection, bring up a new Elastic Beanstalk environment that connects to the same RDS instance, then remove the old environment security group from the DB security group before terminating the old environment

  • ❏ D. Migrate the database to a new RDS instance with AWS Database Migration Service and then tear down the original Elastic Beanstalk environment

Northwind Studios runs a media asset management platform in its data center and connects to an AWS VPC through Direct Connect. The archive contains about 80 TB of video stored on a physical tape library. The team wants an automated index that can find people across the footage using facial recognition and store a still image for each identified person. They intend to migrate the media and the MAM workload to AWS over time, but need a solution now that adds very little operational overhead and causes minimal disruption to existing workflows. What should they do?

  • ❏ A. Ingest the MAM archive into Amazon Kinesis Video Streams, create a face collection in Amazon Rekognition, process the stream, store metadata back in MAM, and configure the stream to persist to Amazon S3

  • ❏ B. Deploy AWS Storage Gateway file gateway on-premises, have MAM read and write media through the gateway so files land in Amazon S3, and use AWS Lambda to invoke Amazon Rekognition to index faces from S3 and update the MAM catalog

  • ❏ C. Copy all media to a large Amazon EC2 instance with attached Amazon EBS, install an open-source facial recognition tool to generate metadata, update the catalog, and then move the content into Amazon S3

  • ❏ D. Stand up AWS Storage Gateway tape gateway, write media to virtual tapes from MAM, build a Rekognition face collection, and invoke Rekognition from AWS Lambda to read and analyze the tapes directly

A healthcare analytics startup, MedInsight Labs, uses AWS to capture API activity from users, services, and automation across a single account. After a breach investigation, the team identified a principal that executed risky actions and now requires absolute, cryptographic proof that the audit logs for the past 48 hours reflect the exact order of events and have not been modified. As the consultant, what should you implement to meet this requirement?

  • ❏ A. Enable AWS Config to record resource changes, store its data in Amazon S3, and enforce S3 Object Lock in compliance mode

  • ❏ B. Create an AWS CloudTrail trail that delivers to Amazon S3 and immediately transition objects to S3 Glacier with a Glacier Vault Lock policy

  • ❏ C. Configure AWS CloudTrail to deliver logs to Amazon S3 and use CloudTrail log file integrity validation to verify authenticity and ordering

  • ❏ D. Stream CloudTrail events to Amazon CloudWatch Logs and use metric filters and alarms to detect tampering

OrbitPay, a fintech startup, runs a worldwide checkout on AWS that accepts credit cards and Ethereum using Amazon EC2, Amazon DynamoDB, Amazon S3, and Amazon CloudFront. A qualified security assessor reported within the past 45 days that primary account numbers were not properly encrypted, causing a PCI DSS compliance failure. You must fix this quickly and also increase the share of viewer requests served from CloudFront edge locations instead of the origins to improve performance. What should you implement to best protect the card data while also improving the CloudFront cache hit rate?

  • ❏ A. Use CloudFront signed URLs and configure long Cache-Control max-age on origin objects

  • ❏ B. Set an origin access identity for the distribution and vary caching on User-Agent and Host headers

  • ❏ C. Enforce HTTPS to the origins and enable CloudFront field-level encryption, and have the origin send Cache-Control max-age with the longest safe value

  • ❏ D. Enable DynamoDB encryption with AWS KMS and turn on CloudFront Origin Shield

Which ALB feature provides a per-request latency breakdown across the load balancer and its targets without requiring changes to application code?

  • ❏ A. CloudWatch metrics

  • ❏ B. Turn on ALB access logs

  • ❏ C. AWS X-Ray

  • ❏ D. CloudWatch agent on EC2

AWS DevOps Certification Exam Answers

At Northwind HealthTech, an AWS CloudFormation stack provisions an S3 bucket, one EC2 instance, and a single EBS volume. A compliance request requires changing the stack name, but the live resources must continue running and must not be recreated or deleted. What is the best way to rename the stack while keeping all current resources in place?

  • ✓ B. Update the existing template to set DeletionPolicy Retain on every resource, delete the stack so the resources are preserved, create a new stack with the new name, import the retained S3 bucket, EC2 instance, and EBS volume, then remove the Retain settings

The correct choice is Update the existing template to set DeletionPolicy Retain on every resource, delete the stack so the resources are preserved, create a new stack with the new name, import the retained S3 bucket, EC2 instance, and EBS volume, then remove the Retain settings. This approach preserves the live resources while allowing you to remove the old stack and then bring those same resources under a newly named stack.

This method works because applying Retain and import prevents CloudFormation from deleting the physical resources when the original stack is deleted. After the stack is removed, the S3 bucket, the EC2 instance, and the EBS volume remain in your account, and you can create a new stack with the desired name and use the resource import feature to associate those existing resources with the new stack. That workflow avoids recreating or disrupting the running EC2 instance and keeps the bucket and volume intact.

Apply DeletionPolicy Retain to the S3 bucket and EC2 instance and DeletionPolicy Snapshot to the EBS volume, delete the old stack, create a new stack with the desired name, import the retained resources, then remove those policies is incorrect because using a Snapshot deletion policy for the EBS volume results in creating a snapshot and removing the volume so there would be no volume to import. That outcome fails the requirement to keep the live resources running.

Create CloudFormation registry hooks for each resource, delete the original stack, create a new stack with the target name, and rely on the hooks to bring the existing resources under management is wrong because hooks are designed for checks and lifecycle extensions and they do not perform retention or resource import operations that preserve and reassign existing resources when a stack is deleted.

Deploy a second stack with the new name that duplicates the resources and adds DependsOn relationships to the original stack, then delete both stacks while retaining the underlying resources is incorrect because duplicating resources will create conflicts with existing resource identifiers and DependsOn does not provide a supported mechanism to transfer or import live resources into a different stack.

Apply DeletionPolicy Retain to each resource, then delete the old stack and use the resource import workflow to bring the retained resources into a stack with the new name without recreating them.
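
A rough boto3 sketch of that rename workflow is shown below. It assumes the template already carries DeletionPolicy: Retain on every resource; the stack names, logical IDs, bucket name, instance ID, and volume ID are placeholders rather than values from the scenario.

```python
# Hypothetical sketch: delete the old stack (resources are retained), then
# import the surviving resources into a stack with the new name.
import boto3

cfn = boto3.client("cloudformation")

# 1. Delete the old stack; DeletionPolicy: Retain keeps the physical resources.
cfn.delete_stack(StackName="northwind-old-stack")
cfn.get_waiter("stack_delete_complete").wait(StackName="northwind-old-stack")

# 2. Describe the existing resources to import into the new stack.
resources_to_import = [
    {"ResourceType": "AWS::S3::Bucket",
     "LogicalResourceId": "ArtifactBucket",
     "ResourceIdentifier": {"BucketName": "northwind-artifacts"}},
    {"ResourceType": "AWS::EC2::Instance",
     "LogicalResourceId": "AppInstance",
     "ResourceIdentifier": {"InstanceId": "i-0123456789abcdef0"}},
    {"ResourceType": "AWS::EC2::Volume",
     "LogicalResourceId": "DataVolume",
     "ResourceIdentifier": {"VolumeId": "vol-0123456789abcdef0"}},
]

# 3. Create and execute an IMPORT change set against the newly named stack.
cfn.create_change_set(
    StackName="northwind-new-stack",
    ChangeSetName="import-existing-resources",
    ChangeSetType="IMPORT",
    ResourcesToImport=resources_to_import,
    TemplateBody=open("template-with-retain.yaml").read(),
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="northwind-new-stack", ChangeSetName="import-existing-resources")
cfn.execute_change_set(
    StackName="northwind-new-stack", ChangeSetName="import-existing-resources")
```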

A fintech analytics startup runs a RESTful service on EC2 instances in an Auto Scaling group behind an Application Load Balancer. The application writes request data to Amazon DynamoDB, and static assets are hosted on Amazon S3. Monitoring shows roughly 90% of reads are repeated across many users. What should you implement to increase responsiveness and lower costs?

  • ✓ C. Activate DynamoDB Accelerator and put CloudFront in front of the S3 origin

Activate DynamoDB Accelerator and put CloudFront in front of the S3 origin is the correct choice because it pairs a native DynamoDB cache with an edge cache for static assets to improve responsiveness and lower costs.

DynamoDB Accelerator, or DAX, provides a managed in-memory cache that is API compatible with DynamoDB, and it reduces read latency while lowering DynamoDB read capacity consumption. CloudFront caches S3 objects at edge locations, which reduces origin requests, improves global latency, and often lowers delivery costs compared with fetching directly from S3.

Use ElastiCache for Redis in front of DynamoDB and serve static files with CloudFront is less suitable because Redis does not offer native DynamoDB compatibility and you would have to handle cache integration and consistency yourself which adds complexity compared to DAX.

Configure API Gateway caching for the API and enable S3 Transfer Acceleration on the bucket is not a good fit because the API is deployed behind an Application Load Balancer rather than API Gateway, and S3 Transfer Acceleration optimizes long-distance uploads rather than providing edge read caching for static content.

Enable DAX for DynamoDB and add ElastiCache Memcached to speed up S3 reads is incorrect because Memcached cannot be used to cache S3 objects directly and it does not provide the global edge caching that CloudFront does for static assets.

Remember that DAX is the managed, DynamoDB-native cache for repeated reads and that CloudFront is the standard choice to cache and distribute S3 static assets at the edge.
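
The CloudFront half of this answer depends on the origin objects being cacheable. The minimal sketch below, with a hypothetical bucket and key, rewrites an S3 object's metadata so CloudFront can honor a long max-age; DAX itself is adopted by swapping in its DynamoDB-compatible client rather than by changing table code.

```python
# Illustrative only: mark an S3 origin object as cacheable so CloudFront can
# serve repeat reads from its edge locations. Bucket and key are placeholders.
import boto3

s3 = boto3.client("s3")

s3.copy_object(
    Bucket="pixel-static-assets",
    Key="css/site.css",
    CopySource={"Bucket": "pixel-static-assets", "Key": "css/site.css"},
    MetadataDirective="REPLACE",           # required to rewrite object metadata
    ContentType="text/css",
    CacheControl="public, max-age=86400",  # lets CloudFront cache for a day
)
```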

A medical imaging startup named RadiantLabs is moving a legacy three-tier system to AWS. Due to vendor licensing, the application layer must run on a single Amazon EC2 Dedicated Instance that uses an Elastic Fabric Adapter and instance store volumes, so Auto Scaling cannot be used. The database tier will be migrated to Amazon Aurora to persist application data and transactions across at least two Availability Zones. The team needs automated healing to keep the service available if the EC2 host fails or the Aurora writer becomes unavailable while minimizing ongoing cost by avoiding always-on duplicates. What is the most cost-effective approach to meet these requirements?

  • ✓ C. Subscribe to AWS Health events and trigger an EventBridge rule that invokes a Lambda function to create a replacement EC2 instance in another Availability Zone upon failure, and configure an Aurora cluster with one cross-AZ Aurora Replica that can be promoted

Subscribe to AWS Health events and trigger an EventBridge rule that invokes a Lambda function to create a replacement EC2 instance in another Availability Zone upon failure, and configure an Aurora cluster with one cross-AZ Aurora Replica that can be promoted is the correct choice because it provides event driven recovery for the constrained EC2 tier and a low cost, cross Availability Zone failover path for the database.

The EventBridge plus Lambda replacement approach lets you react only when the Dedicated Instance host fails so you avoid paying for a warm standby instance that sits idle. This method can capture AWS Health events and run the exact replacement steps needed for an instance that uses Elastic Fabric Adapter and instance store volumes. The Aurora cluster with a cross-AZ Aurora Replica supplies a promotable replica that preserves durability and enables quick writer failover without maintaining multiple active writers.

Monitor the Dedicated Instance with AWS Health, allocate an Elastic IP, pre-provision a second EC2 instance in another Availability Zone, and use EventBridge with Lambda to move the EIP on failure; run a single-instance Aurora database is less cost effective because pre-provisioning a standby means you pay for an always on host and the single-instance Aurora leaves the database as a single point of failure.

Use AWS Health with CloudWatch alarms to enable EC2 automatic recovery and deploy a single-instance Aurora database governed by an AWS Config conformance pack is unsuitable because EC2 automatic recovery does not cover instances that use Elastic Fabric Adapter and instance store volumes so host level failures may not be handled, and a single-instance Aurora configuration is not highly available.

Place the instance in an Auto Scaling group with desired capacity 1 and lifecycle hooks for recovery, and migrate the database to Aurora Serverless v2 for automatic failover is invalid given the licensing constraint that prevents Auto Scaling and it may not guarantee compatibility with EFA and instance store volumes while adding operational complexity.

When Auto Scaling is prohibited, use event-driven tooling such as EventBridge plus Lambda to launch replacements on demand, and pair that with a cross-AZ Aurora Replica to achieve failover without constant duplicate capacity.
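
A compressed sketch of the recovery Lambda is shown below, assuming the EventBridge rule passes the AWS Health event to this handler. The AMI ID, instance type, subnet, and cluster identifier are placeholders, the EFA network interface and instance store setup are omitted for brevity, and a real runbook would decide which of the two actions the event actually calls for.

```python
# Hypothetical handler invoked by an EventBridge rule for AWS Health EC2 events.
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

def handler(event, context):
    # Launch a replacement Dedicated Instance in a subnet in another AZ.
    # (EFA interfaces and instance store details are omitted in this sketch.)
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="c5n.9xlarge",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",
        Placement={"Tenancy": "dedicated"},
    )
    # If the Aurora writer is the affected component, fail the cluster over so
    # the cross-AZ replica is promoted to writer.
    rds.failover_db_cluster(DBClusterIdentifier="radiantlabs-aurora")
    return {"status": "replacement started"}
```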

PixelBay, a retail startup, plans to roll out a global mobile commerce platform built with AWS Amplify that will serve over 30 million users. To minimize latency, the company will run its payment and order APIs in multiple AWS Regions so customers read and write to the Region nearest to them. Leadership requires that any transaction written in one Region be automatically propagated to other Regions without building custom replication code. The user base will span North America, Latin America, Europe, and Asia-Pacific. What architecture should the DevOps engineer choose to maximize scalability, availability, and cost efficiency?

  • ✓ C. Create an Amazon DynamoDB global table and add replica Regions for each market; write to the local replica in each Region so DynamoDB replicates changes automatically worldwide

Create an Amazon DynamoDB global table and add replica Regions for each market; write to the local replica in each Region so DynamoDB replicates changes automatically worldwide is the correct option because it provides managed active active writes and automatic cross Region replication without the need to build custom replication code.

DynamoDB global tables are a fully managed, multi-Region, multi-writer solution that handles replication, conflict resolution, and fault tolerance for you. The service lets each Region accept local reads and writes to keep latency low, and it synchronizes changes across replicas so developers do not have to manage streams, retries, or ordering. This approach is designed to scale to very high write volumes, and it is typically more cost-efficient for key-value workloads than running distributed relational databases or building bespoke replication with serverless functions.

Write orders to a DynamoDB table in each Region and use a Lambda function subscribed to a primary table’s stream to replay recent writes to tables in all other Regions is unsuitable because Lambda based fan out creates operational complexity and requires custom handling for ordering, retries, and conflict resolution. It is difficult to scale reliably for tens of millions of users without significant engineering effort.

Deploy Amazon Aurora Global Database with multi-writer capability across the required Regions and replicate writes using Aurora replication is not ideal because Aurora Global Database uses a single writer Region with cross-Region read replicas and does not offer multi-Region multi-writer capability, and the relational model would typically add complexity and cost for this access pattern.

Create a DynamoDB global table in one Region that automatically creates replicas in every AWS Region and keeps them in sync without any configuration is incorrect because DynamoDB global tables require you to explicitly add replica Regions. The service does not automatically create replicas in every AWS Region without configuration.

For questions that ask for global, low-latency, active-active writes without custom replication code, choose DynamoDB Global Tables for key-value workloads. Verify that the service supports multi-Region writes and built-in conflict resolution.
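
A minimal sketch of that setup with boto3 follows, assuming the 2019.11.21 version of global tables; the table name, key schema, and replica Regions are placeholders.

```python
# Hypothetical sketch: create a table in the home Region, then add replica
# Regions so DynamoDB replicates writes between them automatically.
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

ddb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "orderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "orderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True,
                         "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
ddb.get_waiter("table_exists").wait(TableName="Orders")

# Each ReplicaUpdates call adds one replica Region. In practice, wait for the
# table to return to ACTIVE between replica additions.
for region in ["eu-west-1", "ap-southeast-1", "sa-east-1"]:
    ddb.update_table(
        TableName="Orders",
        ReplicaUpdates=[{"Create": {"RegionName": region}}],
    )
```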

Which AWS services provide centralized collection of Prometheus metrics from EKS, ECS, and on-premises Kubernetes and also offer managed dashboards and analysis?

  • ✓ C. Amazon Managed Service for Prometheus plus Amazon Managed Grafana

The correct choice is Amazon Managed Service for Prometheus plus Amazon Managed Grafana. This pairing centralizes Prometheus metrics from EKS, ECS, and on-premises Kubernetes, and it provides managed dashboards and analysis.

Amazon Managed Service for Prometheus is a fully managed, Prometheus-compatible ingestion and storage service that accepts Prometheus remote write from EKS, ECS, and on-premises clusters so you can centralize and scale time-series metric storage.

Amazon Managed Grafana integrates natively with AMP to provide visualization, querying, dashboards, and alerting without the operational overhead of running Grafana yourself.

Amazon CloudWatch agent with Athena and QuickSight is incorrect because it is not Prometheus native and it does not provide a Prometheus remote write endpoint. Athena and QuickSight are not optimized for low latency time series Prometheus queries and alerting.

AWS Systems Manager Agent with Amazon Managed Service for Prometheus is incorrect because the Systems Manager Agent does not scrape Prometheus metrics and AMP is a storage and query backend rather than a visualization layer so you still need a tool such as Grafana.

Amazon CloudWatch Container Insights with CloudWatch dashboards is incorrect because Container Insights sends container metrics into CloudWatch but it does not act as a Prometheus compatible remote write store and it is not ideal for aggregating native Prometheus metrics from on premises clusters.

When you see Prometheus or remote write, think of a managed Prometheus backend for ingestion and managed Grafana for dashboards and alerting.
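
As a sketch of the ingestion side, the snippet below creates a hypothetical AMP workspace with boto3 and prints the endpoint that Prometheus servers or agents would target with remote_write; the alias is a placeholder.

```python
# Hypothetical sketch: provision an Amazon Managed Service for Prometheus
# workspace and read back its ingestion endpoint.
import boto3

amp = boto3.client("amp")

workspace = amp.create_workspace(alias="platform-metrics")
ws_id = workspace["workspaceId"]

details = amp.describe_workspace(workspaceId=ws_id)["workspace"]
# remote_write typically targets <prometheusEndpoint>api/v1/remote_write and
# requires SigV4-signed requests.
print(details["prometheusEndpoint"])
```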

MetroCare Alliance, a nonprofit healthcare consortium, is moving its secure document storage to AWS. They must store patient PII and sensitive billing records, encrypt data both at rest and during transmission, and maintain ongoing replication to two locations that are at least 700 miles apart. As the DevOps Engineer, which approach should you implement to meet these requirements?

  • ✓ C. Create S3 buckets in two separate AWS Regions at least 700 miles apart, enforce HTTPS-only access with a bucket policy, require SSE-S3 for all objects, and enable S3 cross-Region replication

Create S3 buckets in two separate AWS Regions at least 700 miles apart, enforce HTTPS-only access with a bucket policy, require SSE-S3 for all objects, and enable S3 cross-Region replication is correct because it satisfies the geographic separation requirement and encrypts data both in transit and at rest while providing continuous replication.

This option works since Amazon S3 buckets are regional, and using buckets in two Regions ensures the required physical separation. A bucket policy that enforces HTTPS with the aws:SecureTransport condition provides in-transit encryption. Server-side encryption with S3 managed keys (SSE-S3) ensures objects are encrypted at rest. Cross-Region Replication provides the ongoing asynchronous copying of objects between Regions to meet the replication requirement.

Create primary and secondary S3 buckets in different Availability Zones at least 700 miles apart, require HTTPS via a bucket policy, use SSE-KMS for all objects, enable S3 Transfer Acceleration, and keep the KMS key in the primary Region is wrong because S3 buckets are regional resources and Availability Zones are not separated by hundreds of miles. Transfer Acceleration improves upload performance and does not perform replication. Keeping a KMS key only in the primary Region complicates cross Region replication when using KMS.

Create S3 buckets in two AWS Regions at least 700 miles apart, attach an IAM role that requires TLS-only access, configure a bucket policy for SSE-S3, and enable cross-Region replication is incorrect because enforcing TLS only for all requests must be done with an S3 bucket policy using aws:SecureTransport and not by attaching an IAM role to the bucket.

Create S3 buckets in two AWS Regions at least 700 miles apart, enforce HTTPS with a bucket policy, require SSE-S3, and use S3 Multi-Region Access Points for automatic replication without setting replication rules is wrong because Multi Region Access Points provide a global endpoint and routing but they do not copy objects between Regions automatically without configuring replication rules.

When a requirement specifies long physical separation, think multi-Region rather than Availability Zones, enforce HTTPS with a bucket policy using aws:SecureTransport, and use S3 CRR with SSE-S3 for a simple, compliant design.
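
A short sketch of the HTTPS-only and at-rest encryption controls from the correct answer follows; the bucket name is a placeholder, and the replication rule to the second Region would be configured separately.

```python
# Sketch: deny any request made without TLS and set SSE-S3 as the default
# encryption for the bucket. Bucket name is hypothetical.
import json
import boto3

s3 = boto3.client("s3")
bucket = "metrocare-records-primary"

tls_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(tls_only_policy))

# Default encryption with S3-managed keys (SSE-S3) for data at rest.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```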

EcoRide Labs, an electric mobility analytics firm, has nearly completed its AWS migration and notices that several platform engineers can remove Amazon DynamoDB tables. The operations team wants a very low-cost solution that alerts them within about a minute whenever the DynamoDB DeleteTable API is called in the account. What should they implement?

  • ✓ C. Create an AWS CloudTrail trail and add an Amazon EventBridge rule for AWS API Call via CloudTrail that matches DeleteTable and targets Amazon SNS

Create an AWS CloudTrail trail and add an Amazon EventBridge rule for AWS API Call via CloudTrail that matches DeleteTable and targets Amazon SNS is the correct choice because it gives near real time alerts for management API calls with very low ongoing cost and no custom log parsing.

Create an AWS CloudTrail trail and add an Amazon EventBridge rule for AWS API Call via CloudTrail that matches DeleteTable and targets Amazon SNS uses CloudTrail to capture the DeleteTable API call as a management event and uses an EventBridge rule to match that event and forward it to SNS for immediate notification. This avoids delivering and parsing raw log files and it scales with minimal infrastructure which keeps costs low.

Configure a CloudTrail event selector and invoke an AWS Lambda function to publish to Amazon SNS is not ideal because event selectors only control what CloudTrail records; they do not themselves trigger actions, and you would still need to process delivered logs, which adds latency and cost.

Enable DynamoDB Streams and trigger an AWS Lambda function to send Amazon SNS notifications is unsuitable because DynamoDB Streams report item level data changes and they do not emit administrative API calls such as DeleteTable.

Create an AWS Config custom rule to detect table deletions and publish to Amazon SNS is more complex and typically less timely because Config evaluates configuration changes and it is not focused on immediate API call detection.

For low-cost, near real-time alerts, remember to use CloudTrail for management API calls and pair it with an EventBridge rule that targets SNS for simple notifications.
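
A minimal boto3 sketch of that EventBridge rule appears below; the rule name and SNS topic ARN are placeholders, and the account still needs a CloudTrail trail recording management events plus an SNS topic policy that allows EventBridge to publish.

```python
# Sketch: match DynamoDB DeleteTable calls recorded by CloudTrail and send
# them to an SNS topic for alerting.
import json
import boto3

events = boto3.client("events")

pattern = {
    "source": ["aws.dynamodb"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["dynamodb.amazonaws.com"],
        "eventName": ["DeleteTable"],
    },
}

events.put_rule(Name="alert-on-delete-table", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="alert-on-delete-table",
    Targets=[{"Id": "notify-ops",
              "Arn": "arn:aws:sns:us-east-1:111122223333:ops-alerts"}],
)
```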

The platform team at NovaTech Robotics shares an Amazon S3 bucket named devops-eu-artifacts-002 to store build artifacts used by multiple CI/CD pipelines across two product lines. A new engineer recently updated the bucket policy and unintentionally blocked downloads, which halted deployments. The team now wants immediate notifications whenever the bucket policy is changed so they can react quickly. What is the most appropriate solution?

  • ✓ C. Create a CloudTrail trail that delivers management events to a CloudWatch Logs log group, add a metric filter for PutBucketPolicy and DeleteBucketPolicy, and configure a CloudWatch alarm to notify on matches

Create a CloudTrail trail that delivers management events to a CloudWatch Logs log group, add a metric filter for PutBucketPolicy and DeleteBucketPolicy, and configure a CloudWatch alarm to notify on matches is correct because it captures S3 control plane API calls that change bucket policies and lets you trigger an alarm as soon as those specific events occur.

CloudTrail records management events including PutBucketPolicy and DeleteBucketPolicy and forwarding those events to CloudWatch Logs lets you create a metric filter that matches the exact API calls you care about. When the filter detects a match it increments a metric and the configured CloudWatch alarm can notify the team immediately so they can respond to broken deployments.
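
A minimal boto3 sketch of that metric filter and alarm, assuming the trail already delivers management events to a CloudWatch Logs log group; the log group name, metric namespace, and topic ARN are hypothetical.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

log_group = "cloudtrail-management-events"                              # hypothetical
topic_arn = "arn:aws:sns:us-east-1:123456789012:bucket-policy-alerts"   # hypothetical

# Match PutBucketPolicy and DeleteBucketPolicy calls in the CloudTrail log stream.
logs.put_metric_filter(
    logGroupName=log_group,
    filterName="s3-bucket-policy-changes",
    filterPattern='{ ($.eventSource = "s3.amazonaws.com") && '
                  '(($.eventName = "PutBucketPolicy") || ($.eventName = "DeleteBucketPolicy")) }',
    metricTransformations=[{
        "metricName": "BucketPolicyChanges",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# Alarm as soon as a single matching event is observed.
cloudwatch.put_metric_alarm(
    AlarmName="s3-bucket-policy-changed",
    Namespace="Security",
    MetricName="BucketPolicyChanges",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[topic_arn],
)
```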

Enable S3 server access logging on the bucket, create a CloudWatch metric filter for bucket policy events, and set a CloudWatch alarm is incorrect because S3 server access logs record object level requests and access records and they do not reliably capture S3 control plane API calls such as policy updates.

Configure S3 Event Notifications to publish bucket policy update events to an Amazon SNS topic is incorrect because S3 Event Notifications only cover object level events and they do not include bucket policy change events.

Create an EventBridge rule that filters S3 bucket policy events from a CloudWatch Logs group and trigger an alarm when matched is incorrect because EventBridge cannot directly match against CloudWatch Logs content and the supported pattern is to use CloudTrail management events delivered to CloudWatch Logs with a metric filter and alarm.

For alerts on configuration changes think management events in CloudTrail sent to CloudWatch Logs with a metric filter and a CloudWatch alarm so you get fast, specific notifications.

NovaVista Media deploys applications on AWS Elastic Beanstalk and manages the underlying resources with AWS CloudFormation. The operations team produces a hardened golden AMI with the latest security fixes every 10 days, and more than 150 Beanstalk environments were created from many different CloudFormation templates. You need each environment to move to the newest AMI on that cadence, but the templates are not standardized and parameter names vary across stacks. How should you implement this so the EC2 AMI used by the environments is refreshed on schedule without manually editing every template?

  • ✓ C. Put the AMI ID in AWS Systems Manager Parameter Store and use an SSM parameter type in CloudFormation so the value is resolved at update time; trigger a scheduled EventBridge rule every 10 days that runs a Lambda to call UpdateStack on all stacks

Put the AMI ID in AWS Systems Manager Parameter Store and use an SSM parameter type in CloudFormation so the value is resolved at update time and trigger a scheduled EventBridge rule every 10 days that runs a Lambda to call UpdateStack on all stacks is correct because it centralizes the AMI reference and lets CloudFormation resolve the current AMI at stack update time while you automate the update across environments.

Put the AMI ID in AWS Systems Manager Parameter Store and use an SSM parameter type in CloudFormation so the value is resolved at update time and trigger a scheduled EventBridge rule every 10 days that runs a Lambda to call UpdateStack on all stacks works because CloudFormation supports SSM parameter types and dynamic references so templates can read the Parameter Store value when the stack updates and the scheduled EventBridge rule with Lambda simply triggers UpdateStack for each stack so heterogeneous templates do not need identical parameter names.
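
A minimal sketch of the scheduled Lambda, assuming each stack declares its AMI parameter with the AWS::SSM::Parameter::Value&lt;AWS::EC2::Image::Id&gt; type; reusing the previous template and parameter values makes CloudFormation re-resolve the Parameter Store value during the update, whatever the parameter is named in each template.

```python
import boto3
from botocore.exceptions import ClientError

cfn = boto3.client("cloudformation")

def handler(event, context):
    paginator = cfn.get_paginator("describe_stacks")
    for page in paginator.paginate():
        for stack in page["Stacks"]:
            if "IN_PROGRESS" in stack["StackStatus"]:
                continue  # skip stacks that are mid-operation
            params = [
                {"ParameterKey": p["ParameterKey"], "UsePreviousValue": True}
                for p in stack.get("Parameters", [])
            ]
            try:
                # Reuse the existing template; SSM-typed parameters are
                # re-resolved from Parameter Store during the update.
                cfn.update_stack(
                    StackName=stack["StackName"],
                    UsePreviousTemplate=True,
                    Parameters=params,
                    Capabilities=["CAPABILITY_NAMED_IAM"],
                )
            except ClientError as exc:
                # "No updates are to be performed" is expected when the AMI is unchanged.
                print(f"{stack['StackName']}: {exc}")
```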

Store the current golden AMI ID in an S3 object and use a CloudFormation mapping in every template and schedule a Lambda function to rewrite the mapping in each template in S3 every 10 days and then call UpdateStack for all Beanstalk stacks is fragile and complex because it requires programmatically editing many different templates and mapping keys may not match across stacks so it does not scale.

Publish the AMI ID in AWS AppConfig and reference it from a CloudFormation parameter and invoke a scheduled Lambda every 10 days to update all stacks is not suitable because CloudFormation does not natively read values from AppConfig so stacks cannot automatically resolve the value at update time.

Keep the AMI ID in AWS Systems Manager Parameter Store but have a Lambda read it and pass it as a string parameter to UpdateStack for every stack on a 10 day schedule is operationally impractical because it assumes uniform parameter names across templates and you would still need per template mapping for stacks that use different parameter names which defeats the goal of avoiding manual edits.

Use SSM Parameter Store dynamic references or the CloudFormation SSM parameter type and automate UpdateStack with a scheduled EventBridge rule so environments pick up new AMIs without modifying templates.

An application runs on EC2 instances behind an Application Load Balancer and listens on a custom port. Which actions correctly configure health checks so that client access is restored? (Choose 2)

  • ✓ C. Use ELB health checks in the Auto Scaling group for the app’s port and path

  • ✓ D. Set the target group health check to the app’s custom port and health endpoint

Use ELB health checks in the Auto Scaling group for the app’s port and path and Set the target group health check to the app’s custom port and health endpoint are correct because they align the load balancer and autoscaling behavior with what clients actually observe.

The Set the target group health check to the app’s custom port and health endpoint choice ensures the Application Load Balancer probes the exact port and a valid HTTP health path so targets are marked healthy only when the application is responding on its real listening port. The Use ELB health checks in the Auto Scaling group for the app’s port and path choice makes Auto Scaling base instance replacement on the load balancer observed health instead of only EC2 status, and that restores client access by replacing instances that are unhealthy from the ALB perspective.
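
A boto3 sketch of those two changes; the target group ARN, port, path, and Auto Scaling group name are hypothetical.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

target_group_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/app-tg/0123456789abcdef"  # hypothetical
)

# Probe the application's real listening port and a valid health endpoint.
elbv2.modify_target_group(
    TargetGroupArn=target_group_arn,
    HealthCheckProtocol="HTTP",
    HealthCheckPort="8081",      # hypothetical custom app port
    HealthCheckPath="/healthz",  # hypothetical health endpoint
)

# Replace instances based on the load balancer's view of health, not just EC2 status.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-asg",  # hypothetical ASG name
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```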

Change the target type to IP addresses is incorrect because switching the target type does not fix a mismatch between health check port or path and the application’s listening configuration. Changing target type is not a remedy for misconfigured health checks.

Set the ALB listener to TCP on the custom port is incorrect because an Application Load Balancer only supports HTTP and HTTPS listeners and does not use TCP listeners. TCP listeners are a feature of the Network Load Balancer and that change would not be valid for an ALB.

Open the instance security group to the ALB on the custom port is not the primary fix when health checks are wrong. Security group rules must allow ALB traffic, but opening the instance SG alone will not help if the target group health check is probing the wrong port or path. The core fix is aligning the health check and autoscaling health settings with the application.

Align the target group health check port and path with the app and enable ELB health checks on the Auto Scaling group when the app runs behind an ALB.

A startup named Oriole Data maintains a Java service built with Apache Maven. The source code lives in GitHub, and on each push to the main branch the team wants the project compiled, unit tests executed, and the built artifact uploaded to an Amazon S3 bucket. What combination of actions should the DevOps engineer implement to achieve this? (Choose 3)

  • ✓ B. Add a buildspec.yml file to the repository that defines the Maven compile, test, and packaging steps

  • ✓ D. Configure a GitHub webhook so that each push to the repository triggers a build

  • ✓ E. Create an AWS CodeBuild project that connects to the GitHub repository as its source

The correct options are Create an AWS CodeBuild project that connects to the GitHub repository as its source, Configure a GitHub webhook so that each push to the repository triggers a build, and Add a buildspec.yml file to the repository that defines the Maven compile, test, and packaging steps.

Create an AWS CodeBuild project that connects to the GitHub repository as its source gives you a fully managed build environment that can run Maven commands and unit tests and can upload build artifacts to Amazon S3. Configure a GitHub webhook so that each push to the repository triggers a build allows CodeBuild to start a build automatically on every push to main so you get continuous feedback. Add a buildspec.yml file to the repository that defines the Maven compile, test, and packaging steps tells CodeBuild the exact phases and shell commands to run and how to collect and store the artifact in S3.
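
A boto3 sketch of the CodeBuild project and webhook, assuming the GitHub connection has already been authorized for the account; all names, URLs, and ARNs are hypothetical.

```python
import boto3

codebuild = boto3.client("codebuild")

project_name = "oriole-maven-build"
repo_url = "https://github.com/example-org/oriole-service.git"
artifact_bucket = "oriole-build-artifacts"
service_role = "arn:aws:iam::123456789012:role/codebuild-oriole-service-role"

# The project runs the repository's buildspec.yml and uploads the artifact to S3.
codebuild.create_project(
    name=project_name,
    source={"type": "GITHUB", "location": repo_url, "buildspec": "buildspec.yml"},
    artifacts={"type": "S3", "location": artifact_bucket, "packaging": "ZIP"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole=service_role,
)

# Trigger a build on every push to the main branch.
codebuild.create_webhook(
    projectName=project_name,
    filterGroups=[[
        {"type": "EVENT", "pattern": "PUSH"},
        {"type": "HEAD_REF", "pattern": "refs/heads/main"},
    ]],
)
```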

Provision a single Amazon EC2 instance, install build tools via user data, and run the build there is operationally heavy and requires you to manage scaling updates and availability when a managed build service will handle those concerns. Implement an AWS Lambda function invoked by an Amazon S3 event to compile the code and run tests is unsuitable because Lambda has execution time and environment constraints that make full Maven builds and test suites unreliable. Launch an AWS CodeDeploy application targeting the EC2/On-Premises compute platform is focused on deployment to compute hosts and does not address building or running unit tests in the source repository.

When the question requires compile, test, and publish on each push, think CodeBuild plus a repository buildspec.yml and a GitHub webhook so you avoid managing servers.

A fintech company, NovaLedger, must ensure its licensed middleware runs only on Amazon EC2 Dedicated Hosts to control third party license spend. The platform team needs an automated audit in one Amazon VPC that detects any EC2 instance not placed on a Dedicated Host and produces a compliance summary about every 45 days with the least ongoing administration. What should the engineer implement?

  • ✓ C. Turn on AWS Config recording for EC2 instances and Dedicated Hosts with a custom AWS Config rule that invokes Lambda to evaluate host placement and mark noncompliant instances, then use AWS Config compliance reports

The Turn on AWS Config recording for EC2 instances and Dedicated Hosts with a custom AWS Config rule that invokes Lambda to evaluate host placement and mark noncompliant instances, then use AWS Config compliance reports option is correct because it records resource relationships and produces native compliance summaries suitable for periodic audits.

By enabling AWS Config for both EC2 instances and Dedicated Hosts the service captures the state and associations automatically. A custom AWS Config rule can invoke Lambda to compare instance placement against hosts and mark resources as NON_COMPLIANT when they are not on a Dedicated Host. AWS Config also provides built in compliance reporting so the platform team can produce the required summary about every 45 days with minimal ongoing administration.
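
A minimal sketch of the evaluation Lambda behind such a change-triggered custom rule; it marks an instance compliant only when its tenancy is host, and everything else as not applicable or noncompliant.

```python
import json
import boto3

config = boto3.client("config")

def handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    compliance = "NOT_APPLICABLE"
    if (item["resourceType"] == "AWS::EC2::Instance"
            and item["configurationItemStatus"] != "ResourceDeleted"):
        tenancy = item["configuration"].get("placement", {}).get("tenancy")
        # Instances on Dedicated Hosts report tenancy "host".
        compliance = "COMPLIANT" if tenancy == "host" else "NON_COMPLIANT"

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```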

Use AWS Systems Manager Configuration Compliance with PutComplianceItems, store instance IDs in Parameter Store, and summarize with ListComplianceSummaries is not suitable because Systems Manager Compliance is oriented to patching and state baselines and it does not natively determine Dedicated Host placement.

AWS License Manager helps track and govern software licenses but it does not evaluate or enforce whether EC2 instances are running on Dedicated Hosts so it cannot provide the placement audit required here.

Use AWS CloudTrail with a Lambda function to parse EC2 events, store noncompliant instance IDs in Amazon S3, and query results with Amazon Athena could be implemented but it forces you to build and maintain custom log parsing storage and query layers which increases operational overhead compared to using AWS Config.

For continuous relationship checks with minimal maintenance prefer AWS Config and a custom rule so you get automated compliance summaries without building a log analytics pipeline.

NovaCanvas, a digital illustration startup, is rolling out an API enhancement that expects a JSON field named color. The Lambda handler must interpret color set to none as traffic from legacy clients. Operations wants to maintain a single Lambda backend while serving both older and newer apps. The API is currently deployed to a stage called prod-v1, and some Android users will upgrade slowly. The solution must keep backward compatibility for several years. What approach best satisfies these requirements?

  • ✓ C. Publish a new Lambda version and expose a new API Gateway stage named prod-v2 that, along with prod-v1, invokes the same Lambda alias; on prod-v1 add a request mapping template that injects “color”: “none” into the payload

Publish a new Lambda version and expose a new API Gateway stage named prod-v2 that, along with prod-v1, invokes the same Lambda alias; on prod-v1 add a request mapping template that injects “color”: “none” into the payload is correct because it preserves a single Lambda backend while allowing two API stages. The legacy stage adds the missing field for old clients via a mapping template, and the new stage expects clients to send the field explicitly. This provides clear versioning, long-term support, and minimal operational overhead.

Publish a new Lambda version as a separate v2 function and create a prod-v2 API Gateway stage that invokes it; keep prod-v1 invoking the original function and add logic in the v1 function to proxy requests without color to the v2 function is wrong because it creates and manages two Lambda functions and adds redirection logic, which contradicts the requirement to manage only one function and adds latency and cost.

Enable API Gateway caching on prod-v1 and remove the legacy Lambda; deploy a new prod-v2 API backed by a new Lambda version and toggle a stage variable to supply a default color of none is incorrect since caching is unrelated to request transformation, stage variables cannot reliably inject JSON fields into the body, and removing the legacy path jeopardizes backward compatibility.

Publish a new Lambda version and keep a single prod stage by configuring an API Gateway mapping template that conditionally fills a default color of none when the attribute is missing is not the best fit because using one stage muddles API versioning and long-term governance; per-stage mappings provide a cleaner separation so legacy behavior is isolated to the legacy stage.

Exam Tip

For long-lived backward compatibility, prefer separate API Gateway stages that can apply stage-specific mapping templates while pointing to the same Lambda alias, rather than adding app logic or relying on caching or stage variables.

AuroraStream, a digital media startup, is migrating a legacy monolith to a serverless architecture on AWS to cut operational overhead. The next application version should be exposed to only a small portion of clients for early validation before a full rollout. If automated post-deployment checks fail, the team must rapidly revert with minimal disruption to production. Which deployment approach best meets these goals while reducing risk to the live environment? (Choose 2)

  • ✓ B. Use a single AWS Lambda alias that references the current and new versions, shift 15% of traffic to the new version and later route 100% when it proves stable

  • ✓ D. Enable a canary deployment on Amazon API Gateway to send 15% of calls to the canary and promote it to production after automated checks succeed

Use a single AWS Lambda alias that references the current and new versions, shift 15% of traffic to the new version and later route 100% when it proves stable and Enable a canary deployment on Amazon API Gateway to send 15% of calls to the canary and promote it to production after automated checks succeed are the correct choices for this scenario.

Both the Lambda alias weighted routing and the API Gateway canary approach let you shift a precise percentage of production traffic to the new version while keeping the rest on the stable version. You can automate post deployment checks and then either promote the canary to 100 percent or quickly roll back to the previous version with minimal disruption. Lambda alias traffic shifting operates at the function level and offers controlled gradual rollout and rollback. API Gateway canaries operate at the API level and permit routing a small fraction of calls to a new integration or stage for live validation.
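
Two short boto3 sketches of those traffic-shifting controls; the function name, version numbers, stage name, and API identifier are hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")
apigateway = boto3.client("apigateway")

# Shift 15% of alias traffic to the newly published version.
lambda_client.update_alias(
    FunctionName="media-api",                 # hypothetical
    Name="live",
    FunctionVersion="7",                      # current stable version
    RoutingConfig={"AdditionalVersionWeights": {"8": 0.15}},  # 15% to version 8
)

# Or, at the API level, send 15% of stage traffic to the canary.
# Assumes the stage already has a canary deployment configured.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",                   # hypothetical REST API id
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/canarySettings/percentTraffic", "value": "15"},
    ],
)
```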

Create a Network Load Balancer with an Amazon API Gateway private integration and two target groups, direct 15% of requests to the new version and remove the old target group after stability is incorrect because Network Load Balancers do not provide weighted target group forwarding for percentage based canaries, so you cannot reliably split traffic by exact percentages this way.

Configure Amazon Route 53 with a Failover policy to send 15% to the new endpoint and the remainder to the existing one, then switch all traffic after verification is wrong because failover routing is health based and not designed for precise percentage traffic shifting. Using weighted DNS has TTL based caching which makes fine grained shifts and near instant rollback unreliable compared with in service traffic shifting.

Set up an Application Load Balancer behind an API Gateway private integration and choose a built-in Canary routing option to direct traffic to the new version is incorrect because there is no built in canary toggle on an ALB in the way described for API Gateway canaries, so the proposed integration and workflow is not available.

Use Lambda alias weighted routing for function level control and API Gateway canary for API level control. Both give precise percentages and fast rollback which reduces blast radius.

In an ECS blue/green deployment using CodeDeploy, which AppSpec lifecycle event should perform automated checks against the replacement task set using the test listener and enable automatic rollback before production traffic is shifted?

  • ✓ C. AppSpec AfterAllowTestTraffic with Lambda

AppSpec AfterAllowTestTraffic with Lambda is correct because CodeDeploy for ECS uses a test listener to route test traffic to the replacement task set and the AfterAllowTestTraffic lifecycle event runs validations after test traffic is enabled so failures can trigger automatic rollback before production traffic is shifted.

The AppSpec AfterAllowTestTraffic with Lambda hook runs after the test listener starts sending traffic and it is the safe pre cutover point to perform live validation against the replacement task set. A Lambda invoked by this hook can execute health and functional checks and then signal success or failure to CodeDeploy so that automatic rollback can occur before any production traffic is routed to the new task set.
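
A minimal sketch of an AfterAllowTestTraffic validator Lambda; the test listener endpoint is hypothetical, and the hook reports its result back to CodeDeploy so a failure triggers the automatic rollback.

```python
import urllib.request
import boto3

codedeploy = boto3.client("codedeploy")

TEST_ENDPOINT = "http://internal-test-listener.example.com:8080/healthz"  # hypothetical

def handler(event, context):
    # CodeDeploy passes these identifiers to every lifecycle hook invocation.
    deployment_id = event["DeploymentId"]
    hook_execution_id = event["LifecycleEventHookExecutionId"]

    status = "Failed"
    try:
        # Exercise the replacement task set through the test listener.
        with urllib.request.urlopen(TEST_ENDPOINT, timeout=5) as resp:
            if resp.status == 200:
                status = "Succeeded"
    except Exception as exc:
        print(f"Validation against test listener failed: {exc}")

    # Reporting Failed here rolls the deployment back before production traffic shifts.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status=status,
    )
```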

The AppSpec AfterInstall event is incorrect because it runs before test traffic is routed to the new task set and it cannot validate behavior under live test traffic.

The AppSpec AllowTestTraffic option is incorrect because it only enables the test listener and does not itself perform validation after test traffic is active so it should not be relied on for automated checks that can trigger rollback.

The AppSpec BeforeAllowTraffic event is incorrect because it occurs immediately before production traffic is shifted and that timing is too late to perform comprehensive pre production checks and prevent cutover if failures are found.

Focus on hooks that run after the test listener is active and can signal failure to CodeDeploy. Look for test listener and automatic rollback in questions about blue green ECS deployments.

A platform engineer at LumaHealth is releasing a single-page web application that must federate with the company’s SAML 2.0 identity provider. The app requires a branded sign-up and sign-in flow hosted within the app, and once authenticated it must obtain temporary credentials to invoke AWS services. How should the engineer implement authentication and access control for this application?

  • ✓ D. Use Amazon Cognito user pools integrated with a SAML IdP, configure domain-based identifiers to route sign-ins, then use the issued tokens to exchange for short-lived AWS credentials

Use Amazon Cognito user pools integrated with a SAML IdP, configure domain-based identifiers to route sign-ins, then use the issued tokens to exchange for short-lived AWS credentials is correct because it gives a branded sign up and sign in experience hosted inside the app while supporting SAML 2.0 federation and token issuance that can be exchanged for temporary AWS credentials.

Cognito user pools provide a customizable authentication UI and native SAML federation so the application can present a consistent, branded sign in and sign up flow. After authentication the user pool issues ID and access tokens that can be mapped through a Cognito identity pool or exchanged for temporary AWS credentials to call AWS services securely. This approach separates the customer facing authentication experience from the credentials used to access AWS resources and it follows AWS best practices for short lived credentials.

Use an Amazon API Gateway REST API with an AWS Lambda authorizer that accepts SAML assertions as bearer tokens and apply identity-based policies for backend access is incorrect because this pattern does not provide the embedded sign up and sign in user experience and Lambda authorizers are not designed to perform full SAML interactive authentication flows.

Use Amazon Cognito Federated Identities with a SAML provider and call AWS STS AssumeRoleWithWebIdentity to obtain temporary credentials is incorrect because SAML federation uses AWS STS AssumeRoleWithSAML when exchanging SAML assertions, and identity pools by themselves do not deliver a registration or branded sign in experience for users.

AWS IAM Identity Center is not suitable for this use case because it targets workforce single sign on and access to AWS accounts and business applications rather than customer facing apps that need an embedded sign up and sign in flow and token exchange for application calls to AWS services.

For customer facing web apps that need a branded sign up and sign in plus SAML federation think Cognito user pools for the authentication experience and use an identity pool or STS to obtain temporary credentials for calling AWS services.

Aurora Parcel runs a new serverless workload made up of several AWS Lambda functions and an Amazon DynamoDB table named FleetEvents. The team’s CI/CD pipeline uses GitHub, AWS CodeBuild, and AWS CodePipeline with source, build, test, and deploy stages already in place. To reduce blast radius, the platform lead wants each release to first go to a small portion of production traffic for a short bake period, and only then roll out to everyone if no issues are detected. How should the deployment stage be updated to achieve this?

  • ✓ C. Define and publish the serverless application version with AWS CloudFormation and deploy the Lambda updates with AWS CodeDeploy using CodeDeployDefault.LambdaCanary20Percent10Minutes

Define and publish the serverless application version with AWS CloudFormation and deploy the Lambda updates with AWS CodeDeploy using CodeDeployDefault.LambdaCanary20Percent10Minutes is correct because it routes 20% of production traffic to the new Lambda version for 10 minutes and then shifts the remaining traffic if no alarms are triggered which provides a controlled canary and bake window with automatic rollback capability.

The CodeDeploy canary strategy for Lambda is designed to gradually shift traffic and monitor alarms during the bake period so you can detect issues before full rollout. Integrating the version published by CloudFormation with CodeDeploy lets the pipeline perform traffic shifting and automated rollback without a manual cutover.

Use AWS CloudFormation to define and publish the new application version, then deploy the Lambda functions with AWS CodeDeploy using CodeDeployDefault.LambdaAllAtOnce is wrong because it sends 100% of traffic to the new version immediately which gives no limited exposure or bake window for verification.

Publish a new version with AWS CloudFormation and add a manual approval in CodePipeline to verify the change, then have CodePipeline switch the Lambda production alias is insufficient because a human approval gate does not provide partial traffic shifting or automated rollback during a bake period and it results in an immediate cutover after approval.

AWS AppConfig is not appropriate here because AppConfig manages configuration and feature toggles rather than shifting Lambda alias traffic in a deployment workflow managed by CodeDeploy.

When a requirement calls for a small subset and a bake period and automatic rollback for Lambda think CodeDeploy canary and choose a predefined Lambda canary deployment configuration so traffic is shifted gradually and monitored automatically.

VegaPay operates about 3,200 EBS-backed Amazon EC2 instances that power a latency-sensitive service across several Availability Zones. To minimize disruption, the operations team wants any instance targeted for an AWS-scheduled EC2 instance retirement to be automatically stopped and then started so it moves to healthy hardware before the retirement takes effect. What is the most effective way to implement this automation?

  • ✓ C. Configure an Amazon EventBridge rule for AWS Health EC2 retirement scheduled events that invokes an AWS Systems Manager Automation runbook to stop and then start the impacted instances

Configure an Amazon EventBridge rule for AWS Health EC2 retirement scheduled events that invokes an AWS Systems Manager Automation runbook to stop and then start the impacted instances is correct because AWS publishes scheduled instance retirement notifications via AWS Health and an EventBridge rule can capture that notification and invoke an SSM Automation runbook to perform the required stop and start so the instance is moved to healthy hardware before the retirement takes effect.

The AWS Health signal such as AWS_EC2_INSTANCE_RETIREMENT_SCHEDULED is the precise indicator of a scheduled retirement and EventBridge can route that event to Systems Manager. Systems Manager Automation can target the affected instance ids and perform an orderly stop and start with the necessary IAM permissions and logging, and this approach reliably acts only when a retirement is scheduled.
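
A boto3 sketch of the rule; the event pattern matches the documented AWS Health retirement event, while the runbook name and role ARN shown as the target are hypothetical and must match your own stop-and-start Automation document and its input parameters.

```python
import json
import boto3

events = boto3.client("events")

# Match AWS Health notifications for scheduled EC2 instance retirement.
pattern = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
        "service": ["EC2"],
        "eventTypeCode": ["AWS_EC2_INSTANCE_RETIREMENT_SCHEDULED"],
    },
}

events.put_rule(
    Name="ec2-retirement-scheduled",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Target a custom SSM Automation runbook (hypothetical name) that stops and then
# starts the affected instance. The affected instance IDs arrive in the event's
# resources field and are mapped to the runbook's input in the target configuration.
events.put_targets(
    Rule="ec2-retirement-scheduled",
    Targets=[{
        "Id": "stop-start-runbook",
        "Arn": ("arn:aws:ssm:us-east-1:123456789012:"
                "automation-definition/StopStartRetiringInstance:$DEFAULT"),
        "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-ssm-automation",
    }],
)
```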

Attach the instances to an Auto Scaling group with Elastic Load Balancing health checks and rely on automatic replacement if a host degrades is unsuitable because ASG and ELB react to runtime health failures rather than AWS-scheduled retirements and they may replace instances instead of stopping and starting the same instance as required.

Create an Amazon EventBridge rule using Amazon EC2 as the source that matches instance state changes to stopping or shutting-down and trigger an AWS Systems Manager Automation document to start the instances is too broad because EC2 state change events indicate that a stop has already occurred, they do not convey AWS-scheduled retirement notifications, and the rule would fire on routine operational stops and cause unintended restarts.

Enable EC2 Auto Recovery by creating a CloudWatch alarm with the Recover action and schedule recovery to occur after hours is incorrect because Auto Recovery responds to system status check failures rather than scheduled retirements and CloudWatch recover actions cannot be used to perform the required stop and start to migrate an instance off retired hardware.

Use AWS Health events routed through EventBridge to trigger targeted SSM Automation runbooks when a scheduled EC2 retirement is announced.

A DevOps engineer at Zephyr Retail needs to implement configuration governance across several AWS accounts. Security requires a near real-time view that shows which resources are compliant and flags any violations within three minutes of a change. Which solution best satisfies these requirements?

  • ✓ C. Enable AWS Config to capture configuration changes, deliver snapshots and history to Amazon S3, and build near real-time compliance visuals in Amazon QuickSight

Enable AWS Config to capture configuration changes, deliver snapshots and history to Amazon S3, and build near real-time compliance visuals in Amazon QuickSight is the correct choice because it provides continuous recording and rule evaluation across AWS resources and it can power organization wide dashboards for near real time visibility.

AWS Config records resource configuration changes and evaluates them against rules so it can detect drift and flag violations quickly and it can deliver snapshots and configuration history to Amazon S3 for aggregation and analysis. You can use aggregators and Amazon QuickSight to create cross account visuals and alerts so security teams can see compliance posture and respond within the required three minute window.

Use Amazon Inspector to assess compliance of workloads, send findings to Amazon CloudWatch Logs, and surface controls with a CloudWatch dashboard and custom metric filters is not suitable because Amazon Inspector focuses on host and container vulnerability scanning and runtime assessments rather than broad configuration drift detection across AWS services.

Apply mandatory tags to every resource and rely on AWS Trusted Advisor to flag noncompliant items, checking overall posture in the AWS Management Console is inadequate because Trusted Advisor provides high level best practice checks and it does not evaluate custom configuration rules or provide near real time drift detection across all resource types.

Use AWS Systems Manager State Manager and Compliance to enforce settings and track violations, publishing metrics to an Amazon CloudWatch dashboard with Amazon SNS alerts is limited because Systems Manager Compliance applies to managed instances and specific SSM baselines and it does not natively cover the full range of AWS resource types needed for organization wide configuration governance.

When you need organization wide configuration governance and fast drift detection think AWS Config rules and export configuration history to S3 so you can build near real time dashboards with QuickSight.

During a CodeDeploy blue/green deployment of a Lambda alias behind API Gateway, how can you prevent traffic shifting until the API Gateway stage live-v3 is responding?

  • ✓ B. BeforeAllowTraffic hook that invokes a validator Lambda for the live-v3 stage

BeforeAllowTraffic hook that invokes a validator Lambda for the live-v3 stage is correct.

CodeDeploy on the Lambda compute platform exposes a pre-traffic lifecycle event named BeforeAllowTraffic that runs before any alias traffic is shifted. You can invoke a validator Lambda during that event to poll or call the API Gateway stage and return success only when the stage responds, and this effectively gates the alias shift until the API is ready.

CloudWatch alarm on the API endpoint in the deployment config is not sufficient because alarms can cause rollbacks but they do not provide a pre-traffic readiness gate to prevent the alias from shifting.

CodePipeline pre-deploy test stage that calls the API, then starts CodeDeploy can validate the API earlier in the pipeline but it cannot guarantee the API is still healthy at the exact moment of the alias shift and it does not integrate with CodeDeploy lifecycle hooks for gating.

AfterAllowTraffic hook with a health-check Lambda runs after traffic has already been shifted so it is too late to prevent production traffic from reaching an unready API.

Use the BeforeAllowTraffic pre-traffic hook to gate Lambda alias shifts with a validator Lambda and use CloudWatch alarms primarily for rollback detection rather than pre-shift readiness checks.

At a travel-booking startup named AeroNomad, the DevOps team built an AWS CodePipeline whose final stage uses AWS CodeDeploy to update an AWS Lambda function. As the engineering lead, you want each deployment to send a small portion of live traffic to the new version for 5 minutes, then shift all traffic to it, and you require automatic rollback if the function experiences a spike in failures. Which actions should you recommend? (Choose 2)

  • ✓ B. Create a CloudWatch alarm on Lambda metrics and associate it with the CodeDeploy deployment

  • ✓ D. Choose a deployment configuration of LambdaCanary10Percent5Minutes

Create a CloudWatch alarm on Lambda metrics and associate it with the CodeDeploy deployment and Choose a deployment configuration of LambdaCanary10Percent5Minutes are correct because they implement a short canary window and allow CodeDeploy to automatically roll back if alarms trigger.

Choose a deployment configuration of LambdaCanary10Percent5Minutes sends a small portion of live traffic to the new version for a five minute bake period and then shifts the remainder if no alarm fires. This matches the requirement to route a small percentage of production traffic for five minutes before moving all traffic to the new version.

Create a CloudWatch alarm on Lambda metrics and associate it with the CodeDeploy deployment lets CodeDeploy monitor error related metrics such as Errors or Throttles during the traffic shift and perform an automatic rollback when thresholds are exceeded. Associating alarms with the deployment group is the supported mechanism for automatic rollback during Lambda deployments.
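
A boto3 sketch of wiring the alarm and the canary configuration into the deployment group; the application, deployment group, function, and alarm names are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
codedeploy = boto3.client("codedeploy")

# Alarm on errors emitted by the function during the traffic shift.
cloudwatch.put_metric_alarm(
    AlarmName="bookings-lambda-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "bookings-fn"}],  # hypothetical
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)

# Canary 10% for 5 minutes, with rollback when the alarm fires.
codedeploy.update_deployment_group(
    applicationName="aeronomad-bookings",           # hypothetical
    currentDeploymentGroupName="bookings-lambda-dg",  # hypothetical
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
    alarmConfiguration={
        "enabled": True,
        "alarms": [{"name": "bookings-lambda-errors"}],
    },
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
    },
)
```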

Choose a deployment configuration of LambdaAllAtOnce is incorrect because it immediately routes 100 percent of traffic to the new version and provides no canary bake window.

Create an Amazon EventBridge rule for deployment monitoring and attach it to the CodeDeploy deployment is incorrect because CodeDeploy rollbacks are driven by CloudWatch alarms associated with the deployment group rather than EventBridge rules.

Choose a deployment configuration of LambdaLinear10PercentEvery10Minutes is incorrect because a linear pattern shifts traffic in multiple incremental steps over a longer period and does not provide the single five minute canary then full shift behavior that was requested.

Remember that CodeDeploy uses CloudWatch alarms for automated rollback and that predefined Lambda canary and linear configs describe traffic percentages and timing.

A live quiz gaming platform run by a startup in Singapore has been operating in a single AWS Region and must now serve players worldwide. The backend runs on Amazon EC2 with a requirement for very high availability and consistently low latency. Demand will be uneven, with a few countries expected to produce significantly more traffic than others. Which routing approach should be implemented to best satisfy these needs?

  • ✓ C. Deploy EC2 instances in Auto Scaling groups behind Application Load Balancers in three Regions and use Route 53 geoproximity routing records that point to each load balancer with adjustable bias

Deploy EC2 instances in Auto Scaling groups behind Application Load Balancers in three Regions and use Route 53 geoproximity routing records that point to each load balancer with adjustable bias is the correct choice because geoproximity routing directs users toward the nearest resources and the bias control lets you intentionally shift traffic toward or away from Regions to handle hotspots while maintaining low latency and high availability.

Geoproximity routing calculates distance between users and endpoints and the adjustable bias lets you expand or shrink the effective geographic coverage of a Region so you can move capacity to where demand is concentrated. Running EC2 in Auto Scaling groups behind Application Load Balancers across multiple Regions provides fault tolerance and scalable capacity so the service can keep responding quickly even when traffic is uneven.

Utilize Route 53 latency-based routing and deploy the EC2 instances in Auto Scaling groups behind Application Load Balancers across three Regions is not ideal because latency-based routing chooses endpoints by measured latency and it does not provide a means to intentionally shift load to or from specific Regions when a few locations produce disproportionate traffic.

Create Route 53 weighted routing records for Application Load Balancers that front Auto Scaling groups of EC2 instances in three Regions is suboptimal because weighted routing splits traffic by fixed percentages and it ignores user geography which can increase latency and cannot dynamically address regional demand spikes.

Deploy EC2 instances in Auto Scaling groups behind Application Load Balancers in three Regions and use Route 53 geolocation routing records that point to each load balancer is less suitable because geolocation maps users by country or continent and it lacks bias controls so you cannot finely nudge traffic between Regions to relieve hot spots.

When you need both low latency and the ability to steer traffic for regional hot spots choose geoproximity with bias and multi Region Auto Scaling behind ALBs and validate routing behavior under expected load patterns.

Riverton Labs uses AWS IAM Identity Center for workforce access and wants to avoid reliance on standalone IAM users. A DevOps engineer must implement automation that disables any credentials for a newly created IAM user within 90 seconds and ensures the security operations team is alerted when such creation occurs. Which combination of actions will achieve this outcome? (Choose 3)

  • ✓ B. Build an AWS Lambda function that disables access keys and removes the console login profile for newly created IAM users, and invoke it from the EventBridge rule

  • ✓ D. Define an Amazon SNS topic as a target for an EventBridge rule and subscribe the security operations distribution list

  • ✓ E. Configure an Amazon EventBridge rule that matches IAM CreateUser API calls recorded by AWS CloudTrail

Build an AWS Lambda function that disables access keys and removes the console login profile for newly created IAM users, and invoke it from the EventBridge rule, Define an Amazon SNS topic as a target for an EventBridge rule and subscribe the security operations distribution list, and Configure an Amazon EventBridge rule that matches IAM CreateUser API calls recorded by AWS CloudTrail are correct because they detect new IAM users, remove both programmatic and console credentials quickly, and notify the security team.

Configure an Amazon EventBridge rule that matches IAM CreateUser API calls recorded by AWS CloudTrail provides a reliable, near real time signal for user creation because CloudTrail management events record the CreateUser API and EventBridge can match that event and route it immediately to targets.

Build an AWS Lambda function that disables access keys and removes the console login profile for newly created IAM users, and invoke it from the EventBridge rule performs the required automated remediation because it can enumerate and disable any access keys and also remove the console password profile so both API and console access are neutralized within seconds.

Define an Amazon SNS topic as a target for an EventBridge rule and subscribe the security operations distribution list ensures the security operations team receives an alert when the EventBridge rule fires so analysts can investigate and verify the automated remediation if needed.
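
A minimal sketch of the remediation Lambda described above; it reads the new user name from the CloudTrail detail in the EventBridge event, deactivates any access keys, and removes the console login profile if one exists.

```python
import boto3

iam = boto3.client("iam")

def handler(event, context):
    # The EventBridge rule forwards the CloudTrail record for iam:CreateUser.
    user_name = event["detail"]["requestParameters"]["userName"]

    # Deactivate any access keys so programmatic access stops working.
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        iam.update_access_key(
            UserName=user_name,
            AccessKeyId=key["AccessKeyId"],
            Status="Inactive",
        )

    # Remove the console password if one was created.
    try:
        iam.delete_login_profile(UserName=user_name)
    except iam.exceptions.NoSuchEntityException:
        pass  # the user never had console access
```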

Create an AWS Config rule that publishes to Amazon SNS when IAM user resources are modified is not ideal because AWS Config evaluates configuration drift and may not provide the immediate, event driven timing you need for a 90 second remediation window.

Create an Amazon EventBridge rule that triggers on IAM GetLoginProfile API calls in AWS CloudTrail is insufficient because GetLoginProfile only appears when a console profile is queried and it will not detect programmatic only users that have access keys.

Build an AWS Lambda function that only deletes login profiles for new IAM users, and invoke it from the EventBridge rule is incomplete because removing only the console password leaves access keys active and does not fully neutralize programmatic credentials.

Pair CloudTrail management events with EventBridge for real time detection and invoke a Lambda that disables keys and logins then publish a notification to SNS so the security team is alerted.

A data engineering team at Orion Cart runs batch jobs on an Amazon EMR cluster that scales across EC2 instances for a retail analytics platform. After five months, AWS Trusted Advisor highlighted multiple idle EMR nodes, and the team discovered there were no scale-in settings, which led to unnecessary spend. They want to receive timely notifications about Trusted Advisor findings so they can react quickly before costs grow. Which approaches would meet this requirement? (Choose 3)

  • ✓ A. Create an Amazon EventBridge rule that listens for Trusted Advisor check status updates and route matches to an Amazon SNS topic for email alerts

  • ✓ C. Schedule an AWS Lambda function daily to call the Trusted Advisor API, then post a message to an Amazon SNS topic to notify subscribed teams

  • ✓ E. Run an AWS Lambda job every day to refresh Trusted Advisor via API and push results to CloudWatch Logs; create a metric filter and a CloudWatch alarm that sends notifications

Create an Amazon EventBridge rule that listens for Trusted Advisor check status updates and route matches to an Amazon SNS topic for email alerts, Schedule an AWS Lambda function daily to call the Trusted Advisor API, then post a message to an Amazon SNS topic to notify subscribed teams, and Run an AWS Lambda job every day to refresh Trusted Advisor via API and push results to CloudWatch Logs and create a metric filter and a CloudWatch alarm that sends notifications are correct because each approach provides automated detection and timely notification of Trusted Advisor findings so the team can react before costs grow.

Create an Amazon EventBridge rule that listens for Trusted Advisor check status updates and route matches to an Amazon SNS topic for email alerts works because EventBridge can capture Trusted Advisor status change events and route matching events to SNS for near real time email or SMS delivery without polling.

Schedule an AWS Lambda function daily to call the Trusted Advisor API, then post a message to an Amazon SNS topic to notify subscribed teams works because a scheduled Lambda can poll the Support API on a cadence you choose, aggregate or filter findings, and publish to SNS so teams receive timely alerts.

Run an AWS Lambda job every day to refresh Trusted Advisor via API and push results to CloudWatch Logs and create a metric filter and a CloudWatch alarm that sends notifications works because exporting findings to CloudWatch Logs lets you create metric filters and alarms that trigger notifications when thresholds are hit and this provides flexible, rule driven alerting based on log data.

Enable Trusted Advisor’s built-in email summaries to receive weekly savings and checks digest is not ideal because the built in summaries are delivered weekly and that cadence is too slow for operational cost control when you need prompt action.

Turn on an automatic EventBridge notification for Trusted Advisor that emails the account owner without creating any rules is incorrect because EventBridge requires explicit rules and targets to route events and there is no one click automatic email to the account owner.

Use an Amazon EventBridge rule for Trusted Advisor and configure Amazon SES directly as the target for sending emails is not appropriate because SES is not a native direct EventBridge target for simple notifications and you should route events through SNS or invoke SES from a Lambda function for email delivery.

Favor EventBridge to SNS for event driven alerts and use scheduled Lambda or CloudWatch metric filters and alarms when you need polling or aggregated thresholds.

Which AWS continuous integration and continuous delivery design enables near zero downtime releases and allows rollback to the previous version in under 4 minutes?

  • ✓ B. Single repo with develop->main via PRs, CodeBuild on commit, CodeDeploy blue/green with traffic shifting

The correct choice is Single repo with develop→main via PRs, CodeBuild on commit, CodeDeploy blue/green with traffic shifting. This approach supports near zero downtime releases and allows rollback to the previous version in under four minutes by shifting traffic between separate environments.

A Single repo with develop→main via PRs, CodeBuild on commit, CodeDeploy blue/green with traffic shifting deployment uses the blue green pattern so CodeDeploy provisions distinct target environments and the load balancer shifts traffic only when the new version is ready. Rolling back is usually a quick traffic flip back to the prior environment so there is no need to redeploy the old version to every instance. The single repo plus pull requests simplifies governance and promotions and CodeBuild on commit gives fast pipeline feedback which keeps releases predictable.

The option AWS CodeCommit with CodeDeploy in-place rolling across an Auto Scaling group is not ideal because in place rolling updates modify instances in the live fleet which can cause brief interruptions and make rollback slower since the previous version must be redeployed across the group.

The option Per-developer repos, shared develop, CodeBuild, PRs to main, CodeDeploy blue/green is not preferred because per developer repositories add coordination overhead and complexity without improving availability or rollback speed compared with a streamlined single repo workflow.

The option AWS Elastic Beanstalk with rolling updates is not the best fit because rolling updates replace instances in batches and do not offer the immediate traffic switch that blue green provides which makes rollbacks slower as another rolling operation would be required.

When the question asks for near zero downtime and very fast rollback choose a blue green deployment with traffic shifting and a clear branching model such as a single repo with PRs.

A compliance audit at Solstice Analytics discovered that an AWS CodeBuild build retrieves a database seed script from an Amazon S3 bucket using an anonymous request. The security team has banned any unauthenticated access to S3 for this pipeline. What is the most secure way to remediate this?

  • ✓ C. Remove public access with a bucket policy and grant the CodeBuild service role least-privilege S3 permissions, then use the AWS CLI in the build to retrieve the object

Remove public access with a bucket policy and grant the CodeBuild service role least-privilege S3 permissions, then use the AWS CLI in the build to retrieve the object is the correct choice because it blocks anonymous access while allowing the pipeline to obtain the file using its assigned service role.

This solution uses a bucket policy or S3 public access settings to prevent unauthenticated reads and it uses a narrowly scoped IAM policy on the CodeBuild service role to enforce least privilege. The service role provides short lived credentials to the build environment so you avoid embedding static keys and you can call the AWS CLI from the build to retrieve the object securely.
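
A boto3 sketch of that remediation; the bucket, object key, and role names are hypothetical, and the build then retrieves the object with the AWS CLI using the service role's temporary credentials rather than any static keys.

```python
import json
import boto3

s3 = boto3.client("s3")
iam = boto3.client("iam")

bucket = "solstice-seed-scripts"                    # hypothetical
role_name = "codebuild-seed-pipeline-service-role"  # hypothetical CodeBuild service role

# Block all forms of public access on the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Grant the CodeBuild service role read access to only the object it needs.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{bucket}/seed/db-seed.sql",  # hypothetical key
    }],
}
iam.put_role_policy(
    RoleName=role_name,
    PolicyName="read-seed-script",
    PolicyDocument=json.dumps(policy),
)
```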

Enable HTTPS basic authentication on the S3 bucket and use curl to pass a token for the download is incorrect because Amazon S3 does not support HTTP basic authentication and AWS access control is handled through IAM and S3 policies rather than basic auth tokens.

Remove public access with a bucket policy and download the file using the AWS CLI with a long-lived IAM access key and secret configured in environment variables is weaker because long lived credentials can be exposed or reused and they do not leverage the automatic temporary credentials that the CodeBuild role supplies.

Attach the AmazonS3FullAccess managed policy to the CodeBuild service role and keep the bucket public, then use the AWS CLI to pull the file is incorrect because it violates least privilege and it fails to remediate the audit finding about unauthenticated public access.

Use role-based temporary credentials and grant only the S3 permissions the build needs while blocking public access to buckets.

Riverton Analytics is modernizing a monolithic web application on AWS that currently runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The team wants to release a new build to a small slice of users first and then increase exposure to all traffic without any downtime. What approach should they use to perform this canary rollout?

  • ✓ C. Stand up a parallel stack with an Application Load Balancer and Auto Scaling group running the new version, then use Route 53 weighted alias records to split and progressively shift traffic between the two load balancers

The correct choice is Stand up a parallel stack with an Application Load Balancer and Auto Scaling group running the new version, then use Route 53 weighted alias records to split and progressively shift traffic between the two load balancers. This option lets you route a small percentage of users to the new build and then ramp to all traffic without downtime while you monitor health and metrics.

Stand up a parallel stack with an Application Load Balancer and Auto Scaling group running the new version, then use Route 53 weighted alias records to split and progressively shift traffic between the two load balancers is effective because you can deploy the new version behind its own ALB and then use Route 53 weighted alias records to apply precise, percentage based traffic splits. You can increase weights as confidence grows and you can roll back instantly by reducing the weight or sending traffic back to the original stack while health checks and monitoring validate the canary.
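
A boto3 sketch of the weighted alias records; the hosted zone, record name, and load balancer values are hypothetical, and the weights are adjusted over time as the canary proves healthy.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"  # hypothetical

def weighted_alias(set_id, weight, alb_zone_id, alb_dns_name):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.riverton.example.com",   # hypothetical record name
            "Type": "A",
            "SetIdentifier": set_id,
            "Weight": weight,
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,      # the ALB's canonical hosted zone id
                "DNSName": alb_dns_name,
                "EvaluateTargetHealth": True,
            },
        },
    }

# Send roughly 10% of resolutions to the new stack and 90% to the current one.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        weighted_alias("blue", 90, "Z35SXDOTRQ7X7K",
                       "blue-alb-123.us-east-1.elb.amazonaws.com"),
        weighted_alias("green", 10, "Z35SXDOTRQ7X7K",
                       "green-alb-456.us-east-1.elb.amazonaws.com"),
    ]},
)
```

Setting EvaluateTargetHealth to true also lets Route 53 stop resolving to a stack whose ALB reports no healthy targets, which adds a safety net during the shift.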

Configure a private REST API in Amazon API Gateway with an Application Load Balancer integration and create a new stage for the updated build, then use API Gateway canary release to send a small portion of requests to the stage is incorrect because API Gateway private VPC link integrations target Network Load Balancers and API Gateway canaries are designed for API stages rather than general ALB backed web applications.

Use Amazon Route 53 latency-based routing between two Application Load Balancers so that you can gradually move a percentage of users to the new stack is not suitable because latency based routing selects endpoints by measured latency and it does not provide deterministic percentage based splits required for controlled canary rollouts.

Perform a canary deployment with AWS CodeDeploy using the CodeDeployDefault.LambdaCanary20Percent15Minutes configuration is wrong because that preset is for Lambda function deployments and does not apply to EC2 Auto Scaling group or ALB based deployments.

For EC2 and ALB based rollouts use Route 53 weighted alias records to shift traffic by percentage and keep health checks and monitoring in place before increasing weight.

StratusPrint, a digital publishing firm, requires every Amazon EBS volume to be snapshotted on a biweekly schedule. An internal job reads a custom tag named BackupInterval on each volume and triggers the snapshot when the value is 14d, but an audit found several volumes were missed because the tag was not present. What is the most reliable way to enforce the tag and automatically remediate any EBS volumes in this AWS account that are missing it?

  • ✓ B. Configure AWS Config with the required-tags rule scoped to AWS::EC2::Volume and attach a Systems Manager Automation remediation to add the BackupInterval=14d tag to noncompliant volumes

The correct choice is Configure AWS Config with the required-tags rule scoped to AWS::EC2::Volume and attach a Systems Manager Automation remediation to add the BackupInterval=14d tag to noncompliant volumes. AWS Config and Systems Manager Automation together provide continuous evaluation and automatic remediation so missing tags on existing and new volumes are corrected.

Configure AWS Config with the required-tags rule scoped to AWS::EC2::Volume and attach a Systems Manager Automation remediation to add the BackupInterval=14d tag to noncompliant volumes evaluates the specific resource type and detects drift against the tagging policy. When a volume is found noncompliant Config can invoke an attached automation runbook to add the BackupInterval=14d tag which handles both preexisting untagged volumes and tags that are later removed.
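
A boto3 sketch of the rule and an attached automatic remediation; REQUIRED_TAGS is the managed rule identifier, while the runbook name TagEbsVolumeBackupInterval and its VolumeId and AutomationAssumeRole parameters are hypothetical and must match whatever Automation document you use.

```python
import json
import boto3

config = boto3.client("config")

# Managed required-tags rule scoped to EBS volumes only.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-backupinterval-tag",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({"tag1Key": "BackupInterval", "tag1Value": "14d"}),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)

# Automatic remediation that runs an SSM Automation runbook against each
# noncompliant volume, passing the volume ID from the Config evaluation.
config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "ebs-backupinterval-tag",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "TagEbsVolumeBackupInterval",  # hypothetical custom runbook
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            "VolumeId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            "AutomationAssumeRole": {"StaticValue": {"Values": [
                "arn:aws:iam::123456789012:role/config-remediation-role"  # hypothetical
            ]}},
        },
    }]
)
```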

Create an Amazon EventBridge rule for CloudTrail CreateVolume events and target an AWS Systems Manager Automation runbook to add the default 14d tag is insufficient because it only acts at creation and will not detect tag removals or fix volumes that were created before the rule existed.

Use an IAM policy with aws:RequestTag and aws:TagKeys to deny CreateVolume unless BackupInterval=14d is supplied enforces tags at request time only and does not address existing volumes or accidental tag deletion so it cannot perform automatic remediation.

Set up AWS Config to evaluate AWS::EC2::Instance and remediate by tagging any attached volumes with the default 14d value targets the wrong resource type and can miss unattached volumes. The required-tags rule should be scoped to AWS::EC2::Volume to ensure all volumes are checked.

Use AWS Config managed rules scoped to the exact resource type and attach SSM Automation remediations so the system both detects drift and automatically backfills missing tags for existing resources.

A public university runs its student records platform on roughly 180 Windows Server Amazon EC2 instances, each with multiple attached EBS volumes. The security team requires an automated process to install the latest security updates across the fleet and a control that continuously evaluates whether running instances were launched from an approved AMI list. Engineers must still be able to start instances from other AMIs for experiments, but operations need to be alerted whenever any noncompliant instance is detected in the VPC. What should you implement to satisfy these requirements without blocking launches?

  • ✓ C. Define a patch baseline with AWS Systems Manager Patch Manager and use an AWS Config managed rule to evaluate instances against an approved AMI list, with CloudWatch alarms for any noncompliant resources

Define a patch baseline with AWS Systems Manager Patch Manager and use an AWS Config managed rule to evaluate instances against an approved AMI list, with CloudWatch alarms for any noncompliant resources is the correct choice because it combines automated patching and continuous, nonblocking compliance checks with alerting.

Define a patch baseline with AWS Systems Manager Patch Manager automates scanning and installing security updates across the EC2 fleet by using baselines and maintenance windows so updates can run on a schedule that fits the university. AWS Config managed rules can continuously evaluate instance launches against an approved AMI list for example by ID or by tag and record compliance changes. You can route those compliance events into metrics and alerts with CloudWatch alarms or SNS so operations are notified whenever a noncompliant instance is detected without preventing engineers from launching experimental AMIs.

Use Amazon GuardDuty to continuously audit EC2 instances for missing security patches and unapproved AMIs, and trigger Amazon CloudWatch alarms for any noncompliance is incorrect because Amazon GuardDuty focuses on threat detection and anomalous behavior and it does not assess patch levels or validate AMI approval lists.

Create an IAM policy that denies launching EC2 instances from any AMI not on an approved list, and set up CloudWatch alarms to alert on violations is incorrect because that approach prevents launches and fails the requirement to allow engineers to start instances from other AMIs for experiments. A deny policy enforces prevention instead of providing nonblocking detection and alerting.

Use Amazon Inspector to scan the fleet for vulnerabilities and verify AMI approvals, then notify operations through CloudWatch alarms is incorrect because Amazon Inspector provides vulnerability findings and exposure assessments but it does not perform automated patch deployment and it is not designed to enforce or continuously evaluate an approved AMI list the way AWS Config managed rules do.

Use Patch Manager for automated patching and AWS Config managed rules to detect AMI compliance. Choose detection and alerting when the requirement forbids blocking so engineers can still run experiments.

How can you deploy EC2 Auto Scaling instances from a single reusable AMI across multiple environments while securing environment specific secrets and reducing instance boot time?

  • ✓ C. SSM Automation golden AMI + user data reads env tag + Parameter Store SecureString

SSM Automation golden AMI + user data reads env tag + Parameter Store SecureString is correct because it enables a single pre baked AMI to be reused across environments while injecting environment specific secrets at boot.

A golden AMI built with Systems Manager Automation reduces instance boot time because operating system hardening and application dependencies are baked into the image. A lightweight user data bootstrap reads the instance's environment tag to determine the environment and then uses the instance role to fetch KMS-encrypted secrets stored as Parameter Store SecureString parameters. This keeps secrets out of the AMI, provides auditable access controls, and preserves the single-AMI reuse pattern.
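A minimal sketch of such a bootstrap, assuming IMDSv2, an Environment tag on the instance, and a parameter path of /app/<env>/db-password (all illustrative names), could look like this:

    import urllib.request

    import boto3

    IMDS = "http://169.254.169.254/latest"

    def imds(path: str, token: str) -> str:
        req = urllib.request.Request(IMDS + path, headers={"X-aws-ec2-metadata-token": token})
        return urllib.request.urlopen(req).read().decode()

    # IMDSv2 session token, then the instance id and region.
    token_req = urllib.request.Request(
        IMDS + "/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    )
    token = urllib.request.urlopen(token_req).read().decode()
    instance_id = imds("/meta-data/instance-id", token)
    region = imds("/meta-data/placement/region", token)

    # Read the Environment tag to decide which parameters to pull.
    ec2 = boto3.client("ec2", region_name=region)
    tags = ec2.describe_tags(
        Filters=[
            {"Name": "resource-id", "Values": [instance_id]},
            {"Name": "key", "Values": ["Environment"]},
        ]
    )["Tags"]
    env = tags[0]["Value"] if tags else "dev"

    # Fetch the KMS-encrypted SecureString for this environment at boot
    # using the instance role, so no secret is baked into the AMI.
    ssm = boto3.client("ssm", region_name=region)
    db_password = ssm.get_parameter(
        Name=f"/app/{env}/db-password", WithDecryption=True
    )["Parameter"]["Value"]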

Session Manager-baked AMI + Lambda from user data + Secrets Manager is incorrect because Session Manager is intended for interactive access and does not itself produce AMIs, and invoking Lambda from user data adds network calls and complexity that do not reliably reduce boot time.

Patch Manager AMI prep + Lambda + AppConfig is incorrect because Patch Manager focuses on OS patching rather than creating a fully baked application image and AppConfig provides runtime configuration rather than a secrets store so it does not satisfy the secure secrets and fast boot requirements.

CloudFormation cfn-init on boot + stack parameter env + Secrets Manager is incorrect because performing full installation and configuration at first boot lengthens launch time, which conflicts with the goal of minimizing boot time even though Secrets Manager itself is a valid secrets store.

Look for clues like "single AMI" and "reduce boot time" and prefer a pre-baked golden AMI with a lightweight bootstrap that retrieves environment-specific secrets at launch.

An online payments startup named LumaPay deployed a Ruby on Rails application to AWS Elastic Beanstalk in a sandbox account to handle provisioning, load balancing, auto scaling, and health checks. The environment was launched with an Amazon RDS PostgreSQL instance that is attached to the Elastic Beanstalk environment, so terminating the environment also removes the database. The team needs to perform blue green cutovers and eventually promote the stack to production without putting the database at risk. How should the DevOps engineer separate the database lifecycle from the environment with the least chance of data loss?

  • ✓ C. Use blue green, snapshot the database and turn on deletion protection, bring up a new Elastic Beanstalk environment that connects to the same RDS instance, then remove the old environment security group from the DB security group before terminating the old environment

Use blue green, snapshot the database and turn on deletion protection, bring up a new Elastic Beanstalk environment that connects to the same RDS instance, then remove the old environment security group from the DB security group before terminating the old environment is the correct option because it cleanly decouples the database lifecycle from the environment lifecycle and prevents accidental deletion or orphaned security group dependencies during a blue green cutover.

Taking a snapshot and enabling deletion protection preserves a recoverable copy of the data and prevents accidental removal of the DB instance. Bringing up a new Elastic Beanstalk environment that reuses the same RDS instance lets you perform the cutover without migrating data. Before you terminate the old environment you should remove the old environment security group from the DB security group so the database is no longer referenced by the environment and so deletion and future changes are not blocked by lingering security group rules.
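The safeguards before terminating the old environment could be scripted roughly as follows; the DB identifier, snapshot name, security group IDs, and port are placeholders.

    import boto3

    rds = boto3.client("rds")
    ec2 = boto3.client("ec2")

    # 1. Preserve a recoverable copy of the data before any cutover work.
    rds.create_db_snapshot(
        DBInstanceIdentifier="lumapay-postgres",        # hypothetical DB identifier
        DBSnapshotIdentifier="lumapay-pre-bluegreen",
    )

    # 2. Prevent accidental deletion of the DB instance itself.
    rds.modify_db_instance(
        DBInstanceIdentifier="lumapay-postgres",
        DeletionProtection=True,
        ApplyImmediately=True,
    )

    # 3. After the green environment is attached to the same database, drop the
    #    old (blue) environment's security group from the DB security group so
    #    terminating the blue environment leaves no lingering references.
    ec2.revoke_security_group_ingress(
        GroupId="sg-0db11111111111111",                 # DB security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": "sg-0old2222222222222"}],  # blue env SG
        }],
    )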

Run a blue green deployment, create an RDS snapshot and enable deletion protection, launch a new Elastic Beanstalk environment configured to use the existing RDS instance, and immediately delete the old environment is close but it fails to remove the old environment security group reference from the DB security group. Leaving that reference can tie the DB to the environment and cause deletion or dependency problems.

Use a canary deployment, take an RDS snapshot and enable deletion protection, point a new Elastic Beanstalk environment at the same RDS instance, then delete the old environment is incorrect because Elastic Beanstalk does not provide a native canary strategy for environment cutovers and the option still overlooks the necessary security group cleanup.

Migrate the database to a new RDS instance with AWS Database Migration Service and then tear down the original Elastic Beanstalk environment would work but it is heavier and introduces extra moving parts. It increases complexity and risk when a snapshot, deletion protection, security group removal, and a blue green cutover will decouple the DB with less operational overhead.

Before terminating the old environment, take a snapshot and enable deletion protection and then remove the old environment security group from the DB security group so the RDS instance is safe and independent.

Northwind Studios runs a media asset management platform in its data center and connects to an AWS VPC through Direct Connect. The archive contains about 80 TB of video stored on a physical tape library. The team wants an automated index that can find people across the footage using facial recognition and store a still image for each identified person. They intend to migrate the media and the MAM workload to AWS over time, but need a solution now that adds very little operational overhead and causes minimal disruption to existing workflows. What should they do?

  • ✓ B. Deploy AWS Storage Gateway file gateway on-premises, have MAM read and write media through the gateway so files land in Amazon S3, and use AWS Lambda to invoke Amazon Rekognition to index faces from S3 and update the MAM catalog

Deploy AWS Storage Gateway file gateway on-premises, have MAM read and write media through the gateway so files land in Amazon S3, and use AWS Lambda to invoke Amazon Rekognition to index faces from S3 and update the MAM catalog is the correct choice because it provides an SMB or NFS interface backed by S3 and enables Amazon Rekognition to analyze objects with minimal changes to existing workflows and low operational overhead.

This approach lets the MAM continue to operate against local file paths while files are asynchronously persisted as S3 objects. AWS Lambda can trigger on S3 events to call Rekognition for face detection, store still images and indexing metadata back in S3, and update the MAM catalog so teams can search for people across footage without large workflow changes and while migrating content incrementally.
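A hedged Lambda sketch for the indexing step is shown below. It assumes still images or extracted frames land in the bucket behind the file gateway and that a Rekognition face collection named mam-faces already exists; raw video files would instead go through the Rekognition Video APIs such as start_face_detection.

    import boto3

    rekognition = boto3.client("rekognition")

    COLLECTION_ID = "mam-faces"   # hypothetical, pre-created face collection

    def handler(event, context):
        """Triggered by S3 ObjectCreated events from the file gateway bucket."""
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # Index any faces found in the still image object.
            response = rekognition.index_faces(
                CollectionId=COLLECTION_ID,
                Image={"S3Object": {"Bucket": bucket, "Name": key}},
                DetectionAttributes=["DEFAULT"],
            )

            # The resulting face IDs would then be written back to the MAM
            # catalog (catalog update API not shown).
            for face_record in response.get("FaceRecords", []):
                print(face_record["Face"]["FaceId"], key)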

Ingest the MAM archive into Amazon Kinesis Video Streams, create a face collection in Amazon Rekognition, process the stream, store metadata back in MAM, and configure the stream to persist to Amazon S3 is not ideal because introducing a streaming ingestion layer is unnecessary for a large file based archive and it increases operational complexity compared with exposing files as S3 objects.

Copy all media to a large Amazon EC2 instance with attached Amazon EBS, install an open-source facial recognition tool to generate metadata, update the catalog, and then move the content into Amazon S3 is not recommended because it moves responsibility for scaling, patching, and monitoring to the customer and it conflicts with the requirement for very little ongoing overhead.

Stand up AWS Storage Gateway tape gateway, write media to virtual tapes from MAM, build a Rekognition face collection, and invoke Rekognition from AWS Lambda to read and analyze the tapes directly is unsuitable because tape gateway is optimized for backup and archival workflows and it stores data on virtual tapes that end up in archival tiers so Rekognition cannot access and analyze objects directly in the same way as S3 objects.

Storage Gateway file gateway surfaces on-premises files as S3 objects, so you can pair it with managed services like Amazon Rekognition for analytics with minimal operational overhead.

A healthcare analytics startup, MedInsight Labs, uses AWS to capture API activity from users, services, and automation across a single account. After a breach investigation, the team identified a principal that executed risky actions and now requires absolute, cryptographic proof that the audit logs for the past 48 hours reflect the exact order of events and have not been modified. As the consultant, what should you implement to meet this requirement?

  • ✓ C. Configure AWS CloudTrail to deliver logs to Amazon S3 and use CloudTrail log file integrity validation to verify authenticity and ordering

The correct option is Configure AWS CloudTrail to deliver logs to Amazon S3 and use CloudTrail log file integrity validation to verify authenticity and ordering.

CloudTrail log file integrity validation generates cryptographic digest files that are signed and chained, so you can prove that log files were not altered, deleted, or inserted and that the event sequence is preserved for the past 48 hours. The digests can be validated with AWS-provided tools or CLI commands to produce verifiable evidence of provenance and ordering, while storing the original logs in S3 provides durable access for investigation.
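For reference, enabling validation on an existing trail and later checking the digests might look like this; the trail name and ARN are placeholders.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Ensure the trail writes signed, chained digest files alongside its logs.
    cloudtrail.update_trail(
        Name="org-audit-trail",        # hypothetical trail name
        EnableLogFileValidation=True,
    )

    # The digests are then verified out of band, for example with the AWS CLI:
    #   aws cloudtrail validate-logs \
    #       --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/org-audit-trail \
    #       --start-time 2025-01-01T00:00:00Z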

Enable AWS Config to record resource changes, store its data in Amazon S3, and enforce S3 Object Lock in compliance mode is incorrect because AWS Config captures configuration state and resource changes rather than all API activity across the account. S3 Object Lock enforces immutability but it does not create cryptographic proofs that the logs were produced by CloudTrail or that their ordering is intact.

Create an AWS CloudTrail trail that delivers to Amazon S3 and immediately transition objects to S3 Glacier with a Glacier Vault Lock policy is incorrect because Glacier Vault Lock provides WORM retention but it does not produce the cryptographic digests that prove provenance and ordering and immediate archival can slow validation when you need quick cryptographic evidence.

Stream CloudTrail events to Amazon CloudWatch Logs and use metric filters and alarms to detect tampering is incorrect because CloudWatch Logs and alarms can help with detection and monitoring but they do not provide tamper evident cryptographic chaining of log files that proves completeness and exact sequencing.

When the question asks for tamper-evident logs and the exact sequence of events, choose CloudTrail log file integrity validation rather than relying only on S3 WORM features.

OrbitPay, a fintech startup, runs a worldwide checkout on AWS that accepts credit cards and Ethereum using Amazon EC2, Amazon DynamoDB, Amazon S3, and Amazon CloudFront. A qualified security assessor reported within the past 45 days that primary account numbers were not properly encrypted, causing a PCI DSS compliance failure. You must fix this quickly and also increase the share of viewer requests served from CloudFront edge locations instead of the origins to improve performance. What should you implement to best protect the card data while also improving the CloudFront cache hit rate?

  • ✓ C. Enforce HTTPS to the origins and enable CloudFront field-level encryption, and have the origin send Cache-Control max-age with the longest safe value

The correct choice is Enforce HTTPS to the origins and enable CloudFront field-level encryption, and have the origin send Cache-Control max-age with the longest safe value. This option both protects primary account numbers at the edge and increases the share of requests served from CloudFront edge locations.

CloudFront field-level encryption encrypts specific form fields such as primary account numbers at the CloudFront edge with a public key, so only back-end services that hold the private key can decrypt them. Enforcing HTTPS to the origins ensures transport security from the viewer all the way to the origin. Setting a long but safe Cache-Control max-age on origin responses raises the cache hit ratio at edge locations by keeping responses in edge caches longer.
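As one small illustration of the caching half, static origin objects stored in S3 could be uploaded with an explicit long max-age; the bucket, key, and lifetime below are only examples, and the directive should never be applied to responses that contain cardholder data.

    import boto3

    s3 = boto3.client("s3")

    # Give cacheable static assets a long max-age so CloudFront edge locations
    # keep serving them without returning to the origin.
    with open("dist/app.js", "rb") as body:
        s3.put_object(
            Bucket="orbitpay-static-assets",              # hypothetical origin bucket
            Key="checkout/app.js",
            Body=body,
            ContentType="application/javascript",
            CacheControl="public, max-age=86400, immutable",  # 24 hours, tune per asset
        )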

Use CloudFront signed URLs and configure long Cache-Control max-age on origin objects helps control access to objects but it does not encrypt individual form fields so it does not address the PCI DSS failure for protecting primary account numbers in transit and at intermediate layers.

Set an origin access identity for the distribution and vary caching on User-Agent and Host headers protects S3 origins from direct access but increasing cache key variability by varying on User-Agent and Host typically reduces cache hit ratio. This approach also does not provide field level encryption for cardholder data.

Enable DynamoDB encryption with AWS KMS and turn on CloudFront Origin Shield provides encryption at rest and an extra origin caching layer but it does not encrypt user submitted fields at the edge. Origin Shield can help origin fetch efficiency but it does not replace edge caching improvements from longer Cache-Control lifetimes and it does not solve the PCI requirement for field level protection.

Use CloudFront field-level encryption for sensitive form inputs and enforce HTTPS. Prefer long Cache-Control max-age on safe, immutable responses and avoid adding extra cache key headers that lower edge hit rates.

Which ALB feature provides a per request latency breakdown across the load balancer and its targets without requiring changes to application code?

  • ✓ B. Turn on ALB access logs

The correct option is Turn on ALB access logs. Enabling these access logs gives a per request latency breakdown across the load balancer and its targets without requiring changes to application code.

Turn on ALB access logs records timing fields such as request_processing_time, target_processing_time, and response_processing_time for each request and writes the logs to Amazon S3 so you can analyze individual request latencies and identify where time is spent.
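A small parser over those fields, assuming the documented space-separated ALB log layout and a made-up sample entry, shows how the per request breakdown can be extracted:

    # The first eight ALB access log fields are space separated and unquoted,
    # so the three timing fields sit at fixed positions.
    def latency_breakdown(log_line: str) -> dict:
        fields = log_line.split(" ")
        return {
            "request_processing_time": float(fields[5]),   # ALB to target
            "target_processing_time": float(fields[6]),    # time inside the target
            "response_processing_time": float(fields[7]),  # target back to client
        }

    sample = (
        "http 2024-06-01T12:00:00.123456Z app/web-alb/50dc6c495c0c9188 "
        "203.0.113.10:42311 10.0.1.15:8080 0.001 0.245 0.000 200 200 34 366"
    )
    print(latency_breakdown(sample))
    # {'request_processing_time': 0.001, 'target_processing_time': 0.245, 'response_processing_time': 0.0}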

CloudWatch metrics is incorrect because CloudWatch provides aggregated metrics for the load balancer and targets and does not include per request timing fields that show the request and target breakdown.

AWS X-Ray is incorrect because X-Ray requires application instrumentation and SDK integration to produce traces, and it does not natively expose the ALB timing fields in the same per request log format.

CloudWatch agent on EC2 is incorrect because the agent collects host and system level telemetry from instances and it does not capture ALB per request timing details.

When a question asks for per request latency and specifies no code changes think ALB access logs because they include request and target timing fields and are delivered to S3 for analysis.


Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.