Professional DevOps Engineer Exam Dumps and AWS Braindumps
All AWS questions come from certificationexams.pro and my Certified DevOps Engineer Udemy course.
AWS DevOps Certification Exam Simulator
Despite the title of this article, this is not an AWS DevOps Engineer Professional braindump in the traditional sense.
I don’t believe in cheating.
Traditionally, the word “braindump” referred to someone taking an exam, memorizing the questions, and posting them online for others to use. That practice is unethical and violates the AWS certification agreement. It reflects no integrity, builds no skills, and offers no lasting value.
Better than braindumps & exam dumps
This is not an AWS braindump.
All of these questions come from my AWS DevOps Engineer Professional Udemy course and from the certificationexams.pro website, which offers hundreds of free AWS DevOps Engineer Practice Questions.
Each question has been carefully written to match the official AWS Certified DevOps Engineer Professional exam topics. They mirror the tone, logic, and technical depth of real AWS scenarios, but none are copied from the actual test. Every question is designed to help you learn, reason, and master automation, monitoring, CI/CD, and operational excellence the right way.
If you can answer these questions and understand why the incorrect options are wrong, you will not only pass the real AWS DevOps Engineer Professional exam but also gain a deep understanding of how to build, deploy, monitor, and optimize resilient and automated systems on AWS.
So if you want to call this your AWS DevOps Engineer Braindump, that’s fine, but know that every question here is built to teach, not to cheat.
Each item includes detailed explanations, practical examples, and tips that help you think like an AWS DevOps professional during the exam.
Study thoroughly, practice often, and approach your certification with integrity. Success in AWS DevOps comes not from memorizing answers but from understanding how automation, monitoring, and continuous delivery come together to build the modern cloud.
| Git, GitHub & GitHub Copilot Certification Made Easy |
|---|
| Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry. Get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
AWS DevOps Engineer Sample Questions
All AWS questions come from certificationexams.pro and my Certified DevOps Engineer Udemy course.
Kestrel Works has updated its security baseline to require that Amazon EBS encryption be enabled by default in every account, and any unencrypted volumes must be surfaced as noncompliant in a centralized view. The company runs many AWS accounts under AWS Organizations, and a DevOps engineer must roll out the enforcement and compliance evaluation automatically across all member accounts. What is the most efficient way to achieve this?
-
❏ A. Apply an SCP in AWS Organizations that denies launching EC2 instances whose attached EBS volumes are not encrypted, and attach the SCP to all accounts
-
❏ B. Schedule an AWS Lambda function in every account to audit EBS volume encryption, upload results to a central Amazon S3 bucket, and query with Amazon Athena
-
❏ C. Create an organization-wide AWS Config rule to evaluate EBS encryption by default and attach an SCP that prevents disabling or deleting AWS Config in any account
-
❏ D. Use AWS CloudFormation StackSets to roll out the ec2-ebs-encryption-by-default Config rule and the EBS default encryption setting to each member account
A ride-sharing startup maintains a public web portal that must fail over between AWS Regions with very little interruption. The application stores data in an Amazon Aurora cluster in the primary Region (eu-west-2), and Aurora Parallel Query is enabled to accelerate analytics-heavy requests. Amazon Route 53 is used to route users to the active Region. What should the team put in place to minimize downtime if the primary Aurora cluster fails?
-
❏ A. Use CloudWatch alarms and EventBridge with SNS to alert the ops team on Slack, direct users to an S3 static status page, manually promote the replica, verify, and then switch Route 53 to the secondary Region
-
❏ B. Create a cross-Region Aurora read replica in eu-central-1, configure RDS event notifications to an SNS topic, subscribe a Lambda function that promotes the replica on failure, and update Route 53 to direct traffic to the secondary Region
-
❏ C. Set an EventBridge scheduled rule to invoke a Lambda function every 30 minutes to probe the primary Aurora cluster; if unhealthy, promote the replica and modify Route 53 to fail over
-
❏ D. Enable automated cross-Region snapshot copies every 15 minutes and, during an outage, restore the latest snapshot in the secondary Region and point Route 53 to the new endpoint
NorthPeak Analytics is launching a new serverless backend that runs on AWS Lambda. A DevOps engineer needs to build a CI/CD pipeline that rolls out changes safely and reduces risk if a release has issues, with the ability to revert quickly. Which deployment setup best meets these goals?
-
❏ A. Use AWS CloudFormation to publish a new Lambda version on each stack update and configure the RoutingConfig of an AWS::Lambda::Alias to weight traffic toward the new version
-
❏ B. Use AWS CloudFormation to deploy the application and use AWS CodeDeploy with the AllAtOnce deployment type while observing metrics in Amazon CloudWatch
-
❏ C. Use an AWS SAM template for the serverless app and deploy with AWS CodeDeploy using the Canary5Percent15Minutes deployment type
-
❏ D. Use AWS AppConfig feature flags to control exposure while updating Lambda code through AWS CloudFormation
BluePeak Logistics needs an automated way to send near real-time, customized alerts whenever a security group in their primary AWS account permits open SSH access. The alert must include both the security group name and the group ID. The team has already turned on the AWS Config managed rule restricted-ssh and created an Amazon SNS topic with subscribers. What is the most effective approach to produce these tailored notifications?
-
❏ A. Create an Amazon EventBridge rule that matches all evaluation results for the restricted-ssh rule, use an input transformer, and rely on an SNS topic filter policy that passes only messages containing NON_COMPLIANT
-
❏ B. Create an Amazon EventBridge rule that matches NON_COMPLIANT evaluations for all AWS Config managed rules, use an input transformer, and use an SNS topic filter policy to pass only NON_COMPLIANT messages
-
❏ C. Create an Amazon EventBridge rule that filters for restricted-ssh evaluations with complianceType set to NON_COMPLIANT, add an input transformer to include the security group name and ID, and publish to the existing SNS topic
-
❏ D. Create an Amazon EventBridge rule that matches ERROR evaluations for the restricted-ssh rule and forward them to the SNS topic
An edtech company manages more than 120 AWS accounts under AWS Organizations and wants to centralize observability in a single operations and security account. The team needs to search, visualize, and troubleshoot CloudWatch metrics, CloudWatch Logs, and AWS X-Ray traces from all member accounts in one place, and any new accounts added to the organization should be included automatically. What is the best approach to meet these requirements?
-
❏ A. From the CloudWatch console, enable cross-account observability and connect each account directly to the monitoring account
-
❏ B. Configure CloudWatch Metric Streams to deliver metrics to Kinesis Data Firehose, store in Amazon S3, and build dashboards with Amazon Athena
-
❏ C. Use CloudWatch cross-account observability with AWS Organizations to designate the central ops account as the monitoring account and link all organization accounts
-
❏ D. Create CloudWatch alarms that send events to Amazon EventBridge, write data to an S3 bucket, and visualize with Athena
Objects are added directly to an S3 bucket overnight between 11 PM and 1 AM. How can you ensure an AWS Storage Gateway file share shows these new objects by 9 AM?
-
❏ A. Require uploads via AWS Transfer Family
-
❏ B. Enable automatic cache refresh on the file gateway
-
❏ C. Use EventBridge to run a Lambda that calls RefreshCache on the file share before 9 AM
-
❏ D. Configure S3 Event Notifications to make the gateway refresh its cache
Northstar Analytics provides GraphQL endpoints for institutional clients and partners to consume global equities data. The service runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer, and releases are rolled out through a CI/CD pipeline using AWS CodeDeploy. Leadership wants the DevOps engineer to detect deployment-related health regressions as quickly as possible and automatically pause the rollout if latency crosses a predefined threshold until it stabilizes. Which approach offers the fastest detection during deployments?
-
❏ A. Enable Detailed Monitoring for the Application Load Balancer in CloudWatch and use the alarm to stop the deployment
-
❏ B. Analyze ALB access logs with an AWS Lambda function to compute average latency and signal the pipeline to stop when the threshold is breached
-
❏ C. Create a CloudWatch alarm on the ALB TargetResponseTime metric and attach it to the CodeDeploy deployment group to automatically halt the rollout on breach
-
❏ D. Instrument the service with AWS X-Ray and abort the deployment when traces show increased latency
Meridian Threads, a retail startup, plans to launch a customer portal on Amazon EC2 instances behind an Application Load Balancer, backed by a managed MySQL-compatible database. The business requires a recovery point objective of 45 minutes and a recovery time objective of 12 minutes during a Regional outage. Which combination of deployment choices best meets these objectives? (Choose 2)
-
❏ A. Provision two separate Amazon Aurora clusters in different Regions and replicate changes by using AWS Database Migration Service
-
❏ B. Configure an Amazon Aurora global database spanning two Regions and promote the secondary to read/write if the primary Region fails
-
❏ C. Deploy the web tier in two Regions and use Amazon Route 53 latency-based routing to target the ALB in each Region with health checks
-
❏ D. Deploy the web tier in two Regions and use Amazon Route 53 failover routing to point to the ALBs with health checks enabled
-
❏ E. Create an Amazon Aurora multi-master cluster across multiple Regions and run the database in active-active mode
Northwind Labs is building a serverless REST API using Amazon API Gateway, AWS SAM, and AWS Lambda. They want backend Lambda deployments to run automatically whenever code is pushed to GitHub. They also require distinct pipelines for STAGING and PRODUCTION, and only STAGING should deploy without human intervention while PRODUCTION must be gated. What approach should the DevOps engineer take to implement this?
-
❏ A. Configure two AWS CodePipeline pipelines for STAGING and PRODUCTION, add a manual approval step to both, use separate GitHub repositories per environment, and deploy Lambda with AWS CloudFormation
-
❏ B. Build two AWS CodePipeline pipelines for STAGING and PRODUCTION, use a single GitHub repository with separate environment branches, trigger on branch commits, deploy via AWS CloudFormation, and require a manual approval only in the PRODUCTION pipeline
-
❏ C. Create two pipelines in AWS CodePipeline, enable automatic releases in PRODUCTION, use AWS CodeDeploy to update Lambda from independent repositories
-
❏ D. Use a single AWS CodePipeline pipeline with STAGING and PRODUCTION stages that both auto-deploy from pushes to the main branch, and publish Lambda with AWS CloudFormation
A fintech startup runs a usage-based public API behind Amazon API Gateway at https://paylearn.io/api/v2. Its static website is hosted on Amazon S3 and can surface a new endpoint at https://paylearn.io/api/v2/insights if enabled. The team wants to send only a small slice of production traffic to this new path first and validate latency and error metrics before a full release. The API integrations use AWS Lambda. What should you do to conduct this test safely?
-
❏ A. Create a Lambda alias, enable canary on the alias, deploy the new handler to the alias, send a small traffic weight to the alias, and stream metrics to Amazon OpenSearch Service
-
❏ B. Use Amazon Route 53 weighted routing between two API Gateway custom domain names to move a small percentage of requests to a canary deployment and observe results in AWS CloudTrail
-
❏ C. Enable a canary release on the API Gateway stage serving v2, deploy the updated API to that stage, direct a small percentage of traffic to the canary deployment, and monitor CloudWatch metrics
-
❏ D. Create a Lambda function alias, enable alias canary, update the API integration to the alias, set a small weight for canary, and track behavior with CloudWatch
At RivetWorks, a DevOps engineer is building a serverless workload that uses several AWS Lambda functions with Amazon DynamoDB for persistence. The team needs a lightweight front end that accepts plain HTTP traffic and directs requests to different functions based on the URL path segment. What approach should they use?
-
❏ A. Amazon API Gateway REST API configured with resources for each URL path using the ANY method and Lambda proxy integrations over an HTTP endpoint
-
❏ B. Application Load Balancer with an HTTP listener and Lambda targets in separate target groups, using path-based routing rules to map URL paths to functions
-
❏ C. Network Load Balancer with an HTTP listener and Lambda targets in distinct target groups, using IP-based rules to forward based on path values
-
❏ D. Amazon API Gateway HTTP API with routes for each path and Lambda non-proxy integrations over an HTTP endpoint
An application retrieves a database password from AWS Secrets Manager and caches it, and after enabling rotation authentication begins to fail. What is the most likely cause?
-
❏ A. KMS key policy blocks decryption of the secret
-
❏ B. No VPC endpoint for Secrets Manager
-
❏ C. Initial rotation replaced the password while the app kept a cached value
-
❏ D. GetSecretValue omitted VersionStage and read the wrong version
An online learning startup runs its web tier on Amazon EC2 instances in an Auto Scaling group. Before any instance is terminated during scale-in, the team must collect all application log files from that instance. They also need a searchable catalog to quickly find log objects by instance ID and within a specific date range. As the DevOps engineer, what architecture would you implement to meet these goals in an event-driven and fault-tolerant way? (Choose 3)
-
❏ A. Set up an Auto Scaling termination lifecycle hook that sends an EventBridge event to invoke Lambda, then use SSM Run Command to upload the logs to CloudWatch Logs and stream them through Kinesis Data Firehose to S3
-
❏ B. Create a DynamoDB table with partition key instanceId and sort key eventTime
-
❏ C. Create a DynamoDB table with partition key eventTime and sort key instanceId
-
❏ D. Configure an S3 PUT event notification to trigger a Lambda function that writes object metadata into DynamoDB
-
❏ E. Use an Auto Scaling termination lifecycle hook with an EventBridge rule that triggers Lambda to call SSM Run Command and copy the logs directly to S3
-
❏ F. Create an EventBridge rule for PutObject API calls to invoke a Lambda function that records entries in DynamoDB
Helios Media processes about 25 GB TAR archives on AWS Fargate when new objects are placed into an Amazon S3 bucket. CloudTrail data events for the bucket are already enabled. To minimize cost, the team wants a single task running only when there is new work, to start additional tasks only when more files are uploaded again, and after processing and bucket cleanup, to stop all ECS tasks. What is the simplest way to implement this?
-
❏ A. Use Amazon CloudWatch Alarms on AWS CloudTrail data events with two Lambda functions to raise or lower the ECS service desired count
-
❏ B. Create Amazon EventBridge rules for S3 object PUT to target ECS RunTask and for S3 object DELETE to invoke a Lambda that calls StopTask on all running tasks
-
❏ C. Configure an Amazon EventBridge rule to invoke a Lambda on S3 PUT that updates capacity provider settings for the ECS service, and another rule on S3 DELETE to scale in
-
❏ D. AWS Batch
An Australian retail startup uses an AWS CloudFormation template to publish a static marketing site from Amazon S3 in the Asia Pacific (Singapore) region. The stack creates an S3 bucket and a Lambda-backed custom resource that fetches files from an internal file share into the bucket. The team plans to relocate the site to the Asia Pacific (Sydney) region for lower latency, but the existing stack cannot be deleted cleanly in CloudFormation. What explains the failure and what should the engineer do to address it now and in subsequent releases?
-
❏ A. The bucket has S3 Object Lock retention enabled; clear the retention or legal hold before deleting the stack
-
❏ B. The S3 website bucket still contains objects and versions; update the Lambda custom resource to empty the bucket on the Delete event so CloudFormation can remove it
-
❏ C. The S3 bucket uses static website hosting; remove the website configuration from the template to allow deletion
-
❏ D. The bucket deletion fails due to the DeletionPolicy; set DeletionPolicy to ForceDelete so the bucket is removed even if it is not empty
All AWS questions come from certificationexams.pro and my Certified DevOps Engineer Udemy course.
A logistics SaaS company, DriftCart, is building a microservices platform that needs an ultra-low-latency in-memory datastore. The data must be kept durable across at least two Availability Zones, the system requires strong read-after-write consistency, and it should scale smoothly to tens of terabytes without re-architecting. Which AWS service should they choose?
-
❏ A. Amazon ElastiCache for Redis
-
❏ B. Amazon Managed Grafana
-
❏ C. Amazon MemoryDB for Redis
-
❏ D. Amazon ElastiCache for Memcached
Helios Media uses AWS CodePipeline to orchestrate releases for a containerized service. Deployments to Amazon ECS are performed by AWS CodeDeploy with a blue and green strategy. The team needs to run smoke tests against the green task set before any production traffic is routed, and these checks complete in under 4 minutes. If the tests fail, the deployment must automatically roll back without human action. Which approach best satisfies these requirements?
-
❏ A. Insert a stage in CodePipeline before deployment that runs AWS CodeBuild to execute the test scripts; on failures, call aws deploy stop-deployment
-
❏ B. Add hooks to the CodeDeploy AppSpec and use the AfterAllowTraffic event to run the scripts; on failure, stop the deployment with the AWS CLI
-
❏ C. Define AppSpec hooks for the ECS deployment and use the AfterAllowTestTraffic event to invoke an AWS Lambda function that runs the tests; if any test fails, return an error from the function to trigger rollback
-
❏ D. Create an extra stage in CodePipeline ahead of the deploy stage to invoke an AWS Lambda function that runs the scripts; if errors are detected, call aws deploy stop-deployment
Which AWS solution provides continuous vulnerability detection for EC2 instances and centralized auditing of operating system login events across approximately 120 instances?
-
❏ A. AWS Systems Manager Patch Manager and Kinesis Agent to Amazon S3
-
❏ B. AWS Security Hub and AWS CloudTrail
-
❏ C. Amazon GuardDuty and AWS Detective
-
❏ D. Amazon Inspector with CloudWatch agent to CloudWatch Logs
A biotech research firm restricts DevOps staff from logging directly into Amazon EC2 instances that handle regulated data except during rare break-glass events. The security team needs to be alerted within 10 minutes whenever an engineer signs in to any of these instances. Which approach delivers this with the least ongoing operational effort?
-
❏ A. Use AWS Systems Manager to run a recurring script on each EC2 instance that uploads system logs to Amazon S3, trigger an S3 event to an AWS Lambda function that parses for login entries, and notify the team with Amazon SNS
-
❏ B. Configure an AWS CloudTrail trail to deliver events to Amazon CloudWatch Logs, invoke an AWS Lambda function to look for login activity, and publish alerts via Amazon SNS
-
❏ C. Install the Amazon CloudWatch agent on the EC2 instances to stream system logs into CloudWatch Logs and create a metric filter with an alarm that sends an Amazon SNS notification when login messages are detected
-
❏ D. Install the AWS Systems Manager agent and depend on Amazon EventBridge to emit an event when a user logs in, then trigger AWS Lambda to send an Amazon SNS message
Aurora Metrics is releasing a new serverless backend that uses an Amazon API Gateway REST API in front of several AWS Lambda functions. The DevOps lead wants a deployment approach that initially exposes the new release to roughly 10% of clients, validates behavior, and then shifts all traffic when stable. Which deployment strategy should the engineer use to meet these goals?
-
❏ A. Deploy with AWS SAM using a Lambda alias, then perform a canary via an Amazon Route 53 weighted policy to shift a small percentage of users
-
❏ B. Use the AWS CDK to provision the stack and rely on an Amazon Route 53 failover policy to simulate a canary before switching all traffic
-
❏ C. Use AWS CloudFormation to manage Lambda function versions and configure API Gateway stage canary settings; update the stack to roll out the new code and promote after validation
-
❏ D. Use AWS CodeDeploy with Lambda traffic shifting to perform a canary and update the API separately afterward
An R&D team at Vantage Labs stores sensitive patent drafts in an Amazon S3 bucket in US West (Oregon). The compliance group needs a simple and low-cost way to track and search object-level actions like PutObject, GetObject, and DeleteObject for periodic audits. Which approach best meets these needs?
-
❏ A. Enable S3 server access logging to a separate bucket and index the logs with Amazon OpenSearch Service
-
❏ B. Create an Amazon EventBridge rule for S3 object-level API actions and route events directly to Amazon CloudWatch Logs
-
❏ C. Enable AWS CloudTrail data events for the bucket, store the logs in Amazon S3, and query them ad hoc with Amazon Athena
-
❏ D. Configure the S3 bucket to send object API activity directly to an Amazon CloudWatch Logs log group
You are the DevOps lead at StreamForge, a digital media subscription startup, and your production Amazon RDS for MySQL database is provisioned by AWS CloudFormation using AWS::RDS::DBInstance and configured for Multi-AZ. You must perform a major upgrade from MySQL 5.7 to 8.0 and want to keep application downtime to only a brief cutover while managing changes through CloudFormation. What approach should you take to achieve this with minimal interruption?
-
❏ A. Update the existing AWS::RDS::DBInstance in the stack by setting EngineVersion to 8.0 and run a stack update
-
❏ B. Change the DBInstance property DBEngineVersion to 8.0 in CloudFormation and update the stack
-
❏ C. Create a read replica with CloudFormation using SourceDBInstanceIdentifier, wait for it to catch up, update the replica’s EngineVersion to 8.0, promote it, then point applications to the promoted instance
-
❏ D. Use AWS Database Migration Service with ongoing replication to migrate to a new RDS MySQL 8.0 instance and cut over
A regional healthcare analytics firm with a hybrid cloud setup operates a portfolio of 42 web applications. Each application is a multi-tier workload running on Auto Scaling groups of On-Demand Amazon EC2 instances behind Application Load Balancers and uses external Amazon RDS databases. The security team requires that only the corporate data center can reach the applications, and all other external IPs must be blocked. The corporate network exits through 12 proxy appliances, each with a unique public IP address, and these IPs are rotated roughly every three weeks. The networking team uploads a CSV with the current proxy IPs to a private Amazon S3 bucket whenever the rotation occurs. What should the DevOps engineer implement to allow access solely from the corporate network in the most cost-effective way with minimal ongoing effort?
-
❏ A. Place all applications in one VPC, establish AWS Direct Connect with active and standby connections, and restrict ALB security groups to allow HTTPS only from the corporate IP addresses
-
❏ B. Create a Python utility using the AWS SDK to download the CSV and update ALB security groups, and invoke it from AWS Lambda on a one-minute Amazon EventBridge schedule
-
❏ C. Use an AWS Lambda function triggered by Amazon S3 event notifications on object updates to read the CSV and modify ALB security groups to permit HTTPS only from the listed proxy IPs
-
❏ D. Open ALB security groups to HTTPS from the internet, integrate Amazon Cognito with the company Active Directory, enable all 42 apps to use Cognito for login, log to Amazon CloudWatch Logs, and rely on AWS Config for twice-monthly changes
What is the least disruptive way to allow Lambda functions in one AWS account to mount, read from, and write to an Amazon EFS file system in a different account? (Choose 2)
-
❏ A. AWS PrivateLink for EFS
-
❏ B. Lambda role permissions and EFS access point mount
-
❏ C. Transit Gateway only, no EFS policy change
-
❏ D. Use Organizations SCPs
-
❏ E. VPC peering plus EFS file system policy
A digital payments startup recently built a Node.js application that exposes a GraphQL API to read and write payment records. The system runs on a company-managed server with a local MySQL instance. The QA and UX teams will drive frequent test iterations and feature tweaks, and the business requires that releases introduce no downtime or performance dips while builds, tests, and deployments occur. The target design must also scale quickly during traffic spikes. What approach will let the DevOps team move this workload to AWS rapidly while meeting these requirements?
-
❏ A. Move the repository to GitHub using AWS CodeStar Connections, configure AWS CodeBuild for automated unit and integration tests, create two Elastic Beanstalk environments that share an external Amazon RDS MySQL Multi-AZ database, deploy the current version to both, and have CodeBuild promote new versions to Elastic Beanstalk using in-place updates
-
❏ B. Containerize the application and push images to Amazon ECR, use AWS CodeDeploy to run tests and perform deployments to Elastic Beanstalk, and keep an external Amazon RDS Multi-AZ database while using a rolling or in-place deployment strategy
-
❏ C. Link the codebase to GitHub via AWS CodeStar Connections, use AWS CodeBuild for automated unit and functional tests, stand up two Elastic Beanstalk environments wired to a shared external Amazon RDS MySQL Multi-AZ database, deploy the current version to both, then have the pipeline deploy new builds to Elastic Beanstalk and perform a blue/green cutover
-
❏ D. Connect GitHub with AWS CodeStar Connections, run tests with AWS CodeBuild, create two Elastic Beanstalk environments each with its own Amazon RDS MySQL Multi-AZ database, deploy the current release to both, and use blue/green deployments during cutover
ClearPulse Health, a healthcare SaaS provider, has multiple product teams deploying internal APIs into separate AWS accounts. Each account’s VPC was created with the same 10.8.0.0/24 CIDR, and services run on Amazon EC2 behind public Application Load Balancers with HTTPS while interservice calls currently traverse the internet. After a security review, the company requires all service-to-service communication to stay private over HTTPS and the design must scale as additional VPCs are added over time. What long-term architecture should the solutions architect propose to meet these goals?
-
❏ A. Create a Network Load Balancer in each VPC, set up interface VPC endpoints for com.amazonaws.us-west-2.elasticloadbalancing in consuming VPCs, approve AWS PrivateLink endpoint service connections, and use the endpoint DNS names for cross-VPC calls
-
❏ B. Build full-mesh VPC peering across all accounts, add routes to each peer VPC CIDR in every route table, and integrate services using NLB DNS names
-
❏ C. Readdress the overlapping CIDRs, deploy a centralized AWS Transit Gateway shared with AWS RAM, attach each VPC and add routes to their CIDRs through the attachments, and front services with internal NLBs for private HTTPS
-
❏ D. AWS App Mesh
A subscription media startup runs a serverless stack with CloudFront, Amazon API Gateway, and AWS Lambda, and today they publish new Lambda versions with ad hoc AWS CLI scripts and manually run another script to revert if problems occur; the current approach takes about 18 minutes to roll out and 12 minutes to undo changes, and leadership wants deployments completed in under 5 minutes with rapid detection and automatic rollback when issues surface with minimal impact to live traffic; what should you implement to meet these goals?
-
❏ A. AWS AppConfig feature flags for Lambda
-
❏ B. AWS SAM with CodeDeploy traffic shifting, pre-traffic and post-traffic hooks, and CloudWatch alarm rollback
-
❏ C. Nested CloudFormation stacks with change sets and stack rollback for Lambda updates
-
❏ D. CloudFormation change sets with pre-traffic and post-traffic tests and rollback on alarms
WillowWorks Health runs a production service behind a Network Load Balancer that terminates TLS. The platform team needs detailed connection information to study client behavior and troubleshoot spikes, and the captured records must be stored with encryption at rest and readable only by the platform engineers. What should the engineer implement to satisfy these requirements?
-
❏ A. Enable NLB access logs to an Amazon S3 bucket, use SSE-KMS on the bucket, and set a bucket policy that grants write access to the AWS service account
-
❏ B. Send NLB access logs directly to Amazon CloudWatch Logs with a KMS-encrypted log group and restrict access to the team
-
❏ C. Enable NLB access logs to an S3 bucket, turn on SSE-S3, allow delivery.logs.amazonaws.com to write via bucket policy, and grant the platform team read access with IAM
-
❏ D. Enable NLB access logs to an S3 bucket with SSE-S3 and a bucket policy that only permits delivery.logs.amazonaws.com to write
All AWS questions come from certificationexams.pro and my Certified DevOps Engineer Udemy course.
A video streaming startup, LunarFlix, is rolling out its primary service on a fleet of about 36 Amazon EC2 instances across two Regions. The platform team needs a centralized way to search application logs from the instances together with the AWS account API event history. Which approach should they implement to enable unified, ad hoc querying of both data sources?
-
❏ A. Configure AWS CloudTrail to write API events to Amazon S3, use the CloudWatch Agent on EC2 to send application logs to Amazon S3, and query both with Amazon Athena
-
❏ B. Configure AWS CloudTrail to send events to Amazon CloudWatch Logs, deploy the CloudWatch Agent on EC2 to push application logs to CloudWatch Logs, and run CloudWatch Logs Insights queries across both log groups
-
❏ C. Deliver AWS CloudTrail events to Amazon Kinesis Data Streams, have the CloudWatch Agent publish instance logs to Kinesis Data Streams, and analyze both streams with Kinesis Data Analytics
-
❏ D. Send AWS CloudTrail logs to Amazon S3, forward EC2 application logs to Amazon CloudWatch Logs using the CloudWatch Agent, and use Amazon Athena to query both datasets
Which approach ensures that CI/CD build artifacts are encrypted at rest while requiring the least operational effort?
-
❏ A. Store artifacts in Amazon S3 with default bucket encryption and run Jenkins on EC2
-
❏ B. Use AWS Systems Manager to patch Jenkins instances and encrypt EBS volumes
-
❏ C. Use AWS CodeBuild with KMS-encrypted artifacts
-
❏ D. Run Jenkins on Amazon ECS and store artifacts with S3 SSE-S3
A DevOps engineer at a global logistics firm, Aurora Freight Systems, operates a hybrid environment that links its on-premises campus to AWS through an AWS Direct Connect gateway. The team needs to automate operating system updates for about 180 Windows servers running both in the data center and on Amazon EC2 by using AWS Systems Manager. What steps should be configured to enable this patching workflow across the hybrid fleet? (Choose 2)
-
❏ A. Create multiple IAM service roles for Systems Manager that use STS AssumeRoleWithSAML, register the roles to issue service tokens, and use them to run managed-instance activation
-
❏ B. Install the SSM Agent on the on-prem Windows servers using the activation code and ID, register them so they appear with an mi- prefix in the console, and apply updates with Patch Manager
-
❏ C. Install the SSM Agent and register the servers so they appear with an i- prefix, then orchestrate patches through State Manager
-
❏ D. Use AWS Quick Setup to enable Patch Manager across all Regions without creating a hybrid activation or service role, allowing Systems Manager to auto-discover on-premises servers
-
❏ E. Create a single IAM service role for Systems Manager that the service can assume with STS AssumeRole, register the role so activations can be created, and use it for managed-instance activation of the data center machines
Northwind Press runs a stateless web API on Amazon EC2 behind an Application Load Balancer in us-west-2, and DNS for shopwidget.net is hosted in Amazon Route 53 with health checks that verify availability. A second identical stack is deployed in eu-central-1. The team needs requests to go to the Region with the lowest latency and to fail over automatically to the other Region if a Regional outage occurs. What Route 53 configuration should be implemented?
-
❏ A. Create a subdomain named na.shopwidget.net with multivalue answer routing, listing the us-west-2 ALB first and the eu-central-1 ALB second; create eu.shopwidget.net similarly with the EU ALB first; then configure failover routing on shopwidget.net aliased to na.shopwidget.net and eu.shopwidget.net
-
❏ B. Create a subdomain named na.shopwidget.net with latency-based routing that targets the us-west-2 ALB first and the eu-central-1 ALB second; create eu.shopwidget.net similarly; then configure failover routing on shopwidget.net that aliases to na.shopwidget.net as primary and eu.shopwidget.net as secondary
-
❏ C. Create a subdomain named na.shopwidget.net with failover routing, setting the us-west-2 ALB as primary and the eu-central-1 ALB as secondary; create eu.shopwidget.net with failover routing, setting the eu-central-1 ALB as primary and the us-west-2 ALB as secondary; then configure latency-based alias records for shopwidget.net that point to na.shopwidget.net and eu.shopwidget.net
-
❏ D. Create a subdomain named na.shopwidget.net with weighted routing (us-west-2 weight 3, eu-central-1 weight 1) and eu.shopwidget.net with weighted routing (eu-central-1 weight 3, us-west-2 weight 1); then configure geolocation routing on shopwidget.net mapping North America to na.shopwidget.net and Europe to eu.shopwidget.net
An e-commerce analytics startup runs compute jobs on an Auto Scaling group of EC2 instances that send data to an external REST API at http://api.labexample.io as part of processing. After a code update switched the call to HTTPS, the job began failing at the external call, although the engineer can reach the API successfully using Postman from the internet and the VPC still uses the default network ACL. What is the next best step to pinpoint the root cause?
-
❏ A. Log in to the console, review application logs in CloudWatch Logs, and verify that the security group and network ACL allow outbound access to the external API
-
❏ B. Log in to the console, examine VPC Flow Logs for ACCEPT records from the Auto Scaling instances, and confirm that ingress security group rules allow traffic from the external API
-
❏ C. Log in to the console, inspect VPC Flow Logs for REJECT entries originating from the Auto Scaling instances, and validate that the security group outbound rules permit HTTPS to the API endpoint
-
❏ D. AWS Global Accelerator
A fintech startup is deploying a serverless Angular admin portal on AWS for employees only. In the CI/CD pipeline, AWS CodeBuild compiles the app and uploads the static assets to Amazon S3. The buildspec.yml shows version 0.2, an install phase using nodejs 18 with npm install -g @angular/cli, a build phase running ng build --configuration production, and a post_build command that executes aws s3 cp dist s3://harbor-city-angular-internal --acl authenticated-read. After release, the security team finds that any AWS account holder can read the objects in the bucket, even though the portal is intended to be internal. What should the DevOps engineer change to fix this in the most secure way?
-
❏ A. Add --sse AES256 to the aws s3 cp command to enable server-side encryption
-
❏ B. Create a bucket policy that limits reads to the company’s AWS accounts and remove the authenticated-read ACL from the upload step
-
❏ C. Replace --acl authenticated-read with --acl public-read
-
❏ D. Put Amazon CloudFront in front of the S3 bucket and keep the existing ACL
Riverton Analytics needs to detect suspicious behavior that suggests their Amazon EC2 instances have been breached and to receive email alerts whenever such indicators appear. What is the most appropriate solution?
-
❏ A. Configure Amazon Inspector to detect instance compromise and have Inspector publish state changes to an Amazon SNS topic
-
❏ B. Enable Amazon GuardDuty to monitor for EC2 compromise behaviors and route matching findings through an Amazon EventBridge rule to an Amazon SNS topic for email
-
❏ C. Create an AWS CloudTrail trail with metric filters and Amazon CloudWatch alarms for suspected compromise API calls and notify via Amazon SNS
-
❏ D. Use AWS Security Hub to detect EC2 compromises and send alerts via an Amazon SNS topic
DevOps Professional Mock Exam Answers
All AWS questions come from certificationexams.pro and my Certified DevOps Engineer Udemy course.
Kestrel Works has updated its security baseline to require that Amazon EBS encryption be enabled by default in every account, and any unencrypted volumes must be surfaced as noncompliant in a centralized view. The company runs many AWS accounts under AWS Organizations, and a DevOps engineer must roll out the enforcement and compliance evaluation automatically across all member accounts. What is the most efficient way to achieve this?
-
✓ C. Create an organization-wide AWS Config rule to evaluate EBS encryption by default and attach an SCP that prevents disabling or deleting AWS Config in any account
Create an organization-wide AWS Config rule to evaluate EBS encryption by default and attach an SCP that prevents disabling or deleting AWS Config in any account is correct because it provides centralized, organization-level evaluation that continuously flags noncompliant accounts and volumes, and it pairs that visibility with an enforcement control that stops tampering.
This approach uses the built-in ec2-ebs-encryption-by-default rule at the organization level so that evaluation and results are managed centrally and aggregated for all member accounts. The organization rule continuously evaluates whether EBS default encryption is enabled and surfaces unencrypted volumes as noncompliant in a centralized view. Attaching an SCP to prevent disabling or deleting AWS Config ensures the compliance engine cannot be turned off, so the organization retains visibility and enforcement of the baseline.
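As a rough illustration of how this could be wired up programmatically, the sketch below uses boto3 to create the organization-wide managed rule and an SCP that blocks Config tampering. The rule name, policy name, and organization ID context are assumptions, and the calls are expected to run from the management account or a delegated administrator.

```python
import json

import boto3

config = boto3.client("config")
orgs = boto3.client("organizations")

# Deploy the managed rule to every member account. EC2_EBS_ENCRYPTION_BY_DEFAULT
# is the identifier behind the ec2-ebs-encryption-by-default managed rule.
config.put_organization_config_rule(
    OrganizationConfigRuleName="org-ebs-encryption-by-default",
    OrganizationManagedRuleMetadata={
        "RuleIdentifier": "EC2_EBS_ENCRYPTION_BY_DEFAULT",
        "Description": "Checks that EBS encryption by default is enabled",
    },
)

# SCP that stops member accounts from turning off or deleting AWS Config.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "config:DeleteConfigRule",
                "config:DeleteConfigurationRecorder",
                "config:DeleteDeliveryChannel",
                "config:StopConfigurationRecorder",
            ],
            "Resource": "*",
        }
    ],
}

orgs.create_policy(
    Name="deny-config-tampering",
    Description="Prevent member accounts from disabling AWS Config",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
```

The SCP still has to be attached to the organization root or the relevant OUs before it takes effect.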
Apply an SCP in AWS Organizations that denies launching EC2 instances whose attached EBS volumes are not encrypted, and attach the SCP to all accounts is preventative only for new instance launches and does not assess or report on existing unencrypted volumes, so it does not satisfy the requirement to surface noncompliant volumes centrally.
Schedule an AWS Lambda function in every account to audit EBS volume encryption, upload results to a central Amazon S3 bucket, and query with Amazon Athena is possible but introduces custom code, scheduling, and maintenance in each account. This increases operational complexity and is less efficient than using native AWS Config organization rules that provide continuous evaluation and aggregation.
Use AWS CloudFormation StackSets to roll out the ec2-ebs-encryption-by-default Config rule and the EBS default encryption setting to each member account can deploy the rule and settings but adds deployment and lifecycle overhead and it does not by itself prevent disabling AWS Config. Organization-level Config rules simplify ongoing management and should be paired with SCPs for enforcement.
Use AWS Config organization rules for centralized, continuous compliance and pair them with SCPs to prevent disabling the control plane
A ride-sharing startup maintains a public web portal that must fail over between AWS Regions with very little interruption. The application stores data in an Amazon Aurora cluster in the primary Region (eu-west-2), and Aurora Parallel Query is enabled to accelerate analytics-heavy requests. Amazon Route 53 is used to route users to the active Region. What should the team put in place to minimize downtime if the primary Aurora cluster fails?
-
✓ B. Create a cross-Region Aurora read replica in eu-central-1, configure RDS event notifications to an SNS topic, subscribe a Lambda function that promotes the replica on failure, and update Route 53 to direct traffic to the secondary Region
Create a cross-Region Aurora read replica in eu-central-1, configure RDS event notifications to an SNS topic, subscribe a Lambda function that promotes the replica on failure, and update Route 53 to direct traffic to the secondary Region is correct because it provides an automated, event-driven failover path that minimizes downtime and reduces the chance of human error.
Creating a cross-Region Aurora read replica gives a ready copy of the data that keeps the RPO low and lets you promote the replica quickly. Using RDS event notifications to an SNS topic lets the system detect failure signals immediately and trigger automation. Subscribing a Lambda function to that SNS topic allows automated promotion, and updating Route 53 then directs users to the newly promoted cluster so the outage window stays small.
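A minimal sketch of the promotion Lambda is shown below, assuming boto3 and illustrative identifiers for the replica cluster, hosted zone, and DNS record. Promotion is asynchronous, so a production version would also wait for the cluster to become writable before flipping DNS.

```python
import boto3

# Secondary-Region clients; the Region and identifiers below are illustrative.
rds = boto3.client("rds", region_name="eu-central-1")
route53 = boto3.client("route53")

REPLICA_CLUSTER_ID = "portal-aurora-replica"   # hypothetical replica cluster
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"          # hypothetical hosted zone
RECORD_NAME = "db.portal.example.com"          # hypothetical CNAME used by the app

def handler(event, context):
    """Invoked by the SNS topic that receives RDS failure event notifications."""
    # Promote the cross-Region read replica to a standalone, writable cluster.
    # This call returns immediately; promotion completes asynchronously.
    rds.promote_read_replica_db_cluster(DBClusterIdentifier=REPLICA_CLUSTER_ID)

    # Look up the promoted cluster's writer endpoint.
    cluster = rds.describe_db_clusters(
        DBClusterIdentifier=REPLICA_CLUSTER_ID
    )["DBClusters"][0]

    # Point the application's database record at the secondary Region.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": RECORD_NAME,
                        "Type": "CNAME",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": cluster["Endpoint"]}],
                    },
                }
            ]
        },
    )
```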
Use CloudWatch alarms and EventBridge with SNS to alert the ops team on Slack, direct users to an S3 static status page, manually promote the replica, verify, and then switch Route 53 to the secondary Region is wrong because it relies on manual intervention and human verification. Manual steps increase recovery time and raise the risk of prolonged downtime.
Set an EventBridge scheduled rule to invoke a Lambda function every 30 minutes to probe the primary Aurora cluster; if unhealthy, promote the replica and modify Route 53 to fail over is wrong because periodic polling can delay detection for the length of the schedule. Scheduled probes also add unnecessary complexity and may miss fast failures compared to event-driven notifications.
Enable automated cross-Region snapshot copies every 15 minutes and, during an outage, restore the latest snapshot in the secondary Region and point Route 53 to the new endpoint is wrong because snapshot restore is a slow process and can take a long time to complete. Restoring from snapshots increases recovery time and can result in greater data loss versus using a continuously replicated read replica.
Choose event-driven automation with continuous cross-Region replication to minimize RTO and RPO, and avoid solutions that depend on manual cutovers or infrequent polling.
NorthPeak Analytics is launching a new serverless backend that runs on AWS Lambda. A DevOps engineer needs to build a CI/CD pipeline that rolls out changes safely and reduces risk if a release has issues, with the ability to revert quickly. Which deployment setup best meets these goals?
-
✓ C. Use an AWS SAM template for the serverless app and deploy with AWS CodeDeploy using the Canary5Percent15Minutes deployment type
The correct choice is Use an AWS SAM template for the serverless app and deploy with AWS CodeDeploy using the Canary5Percent15Minutes deployment type. This approach performs a small initial traffic shift and monitors alarms so it can automatically roll back if errors increase which reduces customer impact.
Using SAM with CodeDeploy gives a built-in deployment preference that stages releases and ties pre- and post-traffic hooks to CloudWatch alarms. The SAM template can auto-publish versions and enable the canary deployment configuration so only a small percentage of traffic hits the new code initially. If alarms detect problems, the deployment automatically reverts, which meets the requirement to reduce risk and enable quick rollbacks.
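A minimal SAM snippet illustrating the pattern follows. The function name, alarm, and hook functions are assumptions, and the snippet uses the built-in Canary10Percent15Minutes preference for illustration; a 5 percent split like the one named in this answer would typically come from a custom CodeDeploy deployment configuration.

```yaml
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
      AutoPublishAlias: live             # publish a new version and shift the alias on each deploy
      DeploymentPreference:
        Type: Canary10Percent15Minutes   # small initial shift, full cutover after the bake time
        Alarms:
          - !Ref OrdersErrorsAlarm       # a breach during the canary window triggers rollback
        Hooks:
          PreTraffic: !Ref PreTrafficHookFunction
          PostTraffic: !Ref PostTrafficHookFunction
```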
Use AWS CloudFormation to publish a new Lambda version on each stack update and configure the RoutingConfig of an AWS::Lambda::Alias to weight traffic toward the new version relies on manual alias weight changes and does not provide the integrated pre and post traffic hooks and automated rollback that CodeDeploy offers.
Use AWS CloudFormation to deploy the application and use AWS CodeDeploy with the AllAtOnce deployment type while observing metrics in Amazon CloudWatch pushes all traffic to the new version immediately which maximizes the blast radius and undermines the goal of minimizing impact from failures.
Use AWS AppConfig feature flags to control exposure while updating Lambda code through AWS CloudFormation is useful for configuration toggles but it does not manage Lambda traffic shifting or perform automatic code rollback so it does not satisfy the deployment safety requirement.
Remember that SAM deployment preferences and CodeDeploy canary or linear strategies enable small rollouts and automatic rollback with CloudWatch alarms which limits blast radius and speeds recovery.
BluePeak Logistics needs an automated way to send near real-time, customized alerts whenever a security group in their primary AWS account permits open SSH access. The alert must include both the security group name and the group ID. The team has already turned on the AWS Config managed rule restricted-ssh and created an Amazon SNS topic with subscribers. What is the most effective approach to produce these tailored notifications?
-
✓ C. Create an Amazon EventBridge rule that filters for restricted-ssh evaluations with complianceType set to NON_COMPLIANT, add an input transformer to include the security group name and ID, and publish to the existing SNS topic
Create an Amazon EventBridge rule that filters for restricted-ssh evaluations with complianceType set to NON_COMPLIANT, add an input transformer to include the security group name and ID, and publish to the existing SNS topic is the correct choice. This option uses EventBridge to catch the exact AWS Config noncompliance signal and it enriches the event so the notification contains the security group name and the group ID.
Create an Amazon EventBridge rule that filters for restricted-ssh evaluations with complianceType set to NON_COMPLIANT, add an input transformer to include the security group name and ID, and publish to the existing SNS topic provides near real-time alerts from AWS Config and minimizes noise by matching only the NON_COMPLIANT outcome for the specific managed rule. The input transformer allows extracting and formatting the resource name and resourceId so subscribers receive the exact fields they need. Publishing to the existing SNS topic leverages the already configured delivery channels and subscriber list.
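A sketch of the rule and target configuration with boto3 appears below. The event pattern follows the Config Rules Compliance Change event shape, the topic ARN is hypothetical, and the template pulls the group ID from resourceId; if the friendly group name is not present in the event, it can be added through a small enrichment step.

```python
import json

import boto3

events = boto3.client("events")

# Match only NON_COMPLIANT evaluations of the restricted-ssh managed rule.
pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "configRuleName": ["restricted-ssh"],
        "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
    },
}

events.put_rule(
    Name="restricted-ssh-noncompliant",
    EventPattern=json.dumps(pattern),
)

# Reshape the event before it reaches the SNS topic so subscribers get a
# readable message that identifies the offending security group.
events.put_targets(
    Rule="restricted-ssh-noncompliant",
    Targets=[
        {
            "Id": "sns-alerts",
            "Arn": "arn:aws:sns:us-east-1:111122223333:sg-alerts",  # hypothetical topic
            "InputTransformer": {
                "InputPathsMap": {
                    "sgId": "$.detail.resourceId",
                    "rule": "$.detail.configRuleName",
                },
                "InputTemplate": '"Security group <sgId> is NON_COMPLIANT for rule <rule> (open SSH)."',
            },
        }
    ],
)
```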
Create an Amazon EventBridge rule that matches all evaluation results for the restricted-ssh rule, use an input transformer, and rely on an SNS topic filter policy that passes only messages containing NON_COMPLIANT is not ideal because it matches compliant and other states and that increases event volume. This option also assumes filtering is applied at the topic level and it is important to remember that message filtering is configured on subscriptions.
Create an Amazon EventBridge rule that matches NON_COMPLIANT evaluations for all AWS Config managed rules, use an input transformer, and use an SNS topic filter policy to pass only NON_COMPLIANT messages is too broad because it captures noncompliance across all rules and that will generate unnecessary noise and cost. It also misuses topic level filtering instead of subscription filters and it does not focus on the specific restricted SSH condition that the team cares about.
Create an Amazon EventBridge rule that matches ERROR evaluations for the restricted-ssh rule and forward them to the SNS topic is incorrect because error evaluations indicate rule execution problems and they do not represent security policy violations. This will miss actual unrestricted SSH findings which appear as NON_COMPLIANT evaluations.
Target the NON_COMPLIANT evaluation for the specific managed rule and use an input transformer to add resource identifiers. Remember that SNS message filtering is applied on subscriptions not on the topic.
An edtech company manages more than 120 AWS accounts under AWS Organizations and wants to centralize observability in a single operations and security account. The team needs to search, visualize, and troubleshoot CloudWatch metrics, CloudWatch Logs, and AWS X-Ray traces from all member accounts in one place, and any new accounts added to the organization should be included automatically. What is the best approach to meet these requirements?
-
✓ C. Use CloudWatch cross-account observability with AWS Organizations to designate the central ops account as the monitoring account and link all organization accounts
Use CloudWatch cross-account observability with AWS Organizations to designate the central ops account as the monitoring account and link all organization accounts is correct because it provides a single monitoring account that can search, visualize, and troubleshoot CloudWatch metrics, CloudWatch Logs, and AWS X-Ray traces across all member accounts, and it supports automatic inclusion of new accounts when linked through AWS Organizations.
Use CloudWatch cross-account observability with AWS Organizations to designate the central ops account as the monitoring account and link all organization accounts centralizes queries and visualizations so the operations and security team can run Logs Insights queries, view Metrics Explorer charts, and inspect X-Ray traces from the monitoring account without building custom pipelines or manual per-account integrations, and it leverages Organizations for auto-onboarding.
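Behind the console experience, this relationship is defined with CloudWatch Observability Access Manager (OAM) sinks and links. The rough sketch below, using boto3, assumes the sink is created in the monitoring account and that links are stamped out in source accounts (for example through the Organizations integration or StackSets); the organization ID and names are placeholders.

```python
import json

import boto3

oam = boto3.client("oam")

# In the monitoring account: create the sink that receives shared telemetry.
sink_arn = oam.create_sink(Name="org-observability-sink")["Arn"]

# Allow any account in the organization to link metrics, logs, and traces.
sink_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["oam:CreateLink", "oam:UpdateLink"],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        }
    ],
}
oam.put_sink_policy(SinkIdentifier=sink_arn, Policy=json.dumps(sink_policy))

# In each source account: create the link back to the monitoring account's sink.
oam.create_link(
    LabelTemplate="$AccountName",
    ResourceTypes=[
        "AWS::CloudWatch::Metric",
        "AWS::Logs::LogGroup",
        "AWS::XRay::Trace",
    ],
    SinkIdentifier=sink_arn,
)
```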
From the CloudWatch console, enable cross-account observability and connect each account directly to the monitoring account is less suitable because it implies manual connections for each account and it does not provide automatic enrollment for new accounts added to the organization.
Configure CloudWatch Metric Streams to deliver metrics to Kinesis Data Firehose, store in Amazon S3, and build dashboards with Amazon Athena is incorrect because Metric Streams handles only metrics and does not natively include logs or X-Ray traces and it does not provide the integrated cross-account search and automatic onboarding required here.
Create CloudWatch alarms that send events to Amazon EventBridge, write data to an S3 bucket, and visualize with Athena is wrong because alarms and EventBridge are not a unified solution for searching and visualizing metrics, logs, and traces, and the pattern does not automatically bring new AWS Organizations accounts into a central observability view.
When the requirements list centralized search for metrics, logs, and traces plus automatic onboarding for new AWS Organizations accounts, pick CloudWatch cross-account observability with Organizations.
Objects are added directly to an S3 bucket overnight between 11 PM and 1 AM. How can you ensure an AWS Storage Gateway file share shows these new objects by 9 AM?
-
✓ C. Use EventBridge to run a Lambda that calls RefreshCache on the file share before 9 AM
The correct choice is Use EventBridge to run a Lambda that calls RefreshCache on the file share before 9 AM. File Gateway caches directory listings and metadata and it does not automatically discover objects written directly to the backing S3 bucket. Calling RefreshCache updates the file share view of the bucket so objects added overnight reliably appear for users by the start of the day.
A Lambda function can call the Storage Gateway RefreshCache API for the target file share and EventBridge can schedule that Lambda to run before 9 AM each day. Automating the RefreshCache call is the simplest reliable pattern to ensure consistency without manual steps and it avoids forcing changes to how objects are uploaded.
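A sketch of the scheduled handler, assuming boto3 and an illustrative file share ARN, might look like this:

```python
import boto3

storagegateway = boto3.client("storagegateway")

# Hypothetical ARN of the file share whose S3 listing should be refreshed.
FILE_SHARE_ARN = (
    "arn:aws:storagegateway:us-east-1:111122223333:share/share-EXAMPLE"
)

def handler(event, context):
    """Triggered by a scheduled EventBridge rule, e.g. cron(0 8 * * ? *) in UTC."""
    # Ask the gateway to re-list the bucket so overnight uploads become visible
    # on the file share before users arrive.
    response = storagegateway.refresh_cache(
        FileShareARN=FILE_SHARE_ARN,
        FolderList=["/"],
        Recursive=True,
    )
    return response["NotificationId"]
```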
Require uploads via AWS Transfer Family is incorrect because changing the ingestion protocol does not make the gateway refresh its cached listing and it introduces an unnecessary architectural change for this visibility problem.
Enable automatic cache refresh on the file gateway is incorrect since there is no built in automatic cache refresh that picks up direct S3 writes. You must explicitly call RefreshCache or automate that call to update the share.
Configure S3 Event Notifications to make the gateway refresh its cache is incorrect because Storage Gateway cannot directly consume S3 event notifications and you would still need an intermediary such as Lambda to invoke RefreshCache. A scheduled EventBridge rule that triggers Lambda for a daily refresh is a simpler and reliable solution for the described requirement.
Schedule a daily EventBridge rule to trigger a Lambda that calls RefreshCache shortly before users arrive to ensure the file share shows objects added directly to S3.
Northstar Analytics provides GraphQL endpoints for institutional clients and partners to consume global equities data. The service runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer, and releases are rolled out through a CI/CD pipeline using AWS CodeDeploy. Leadership wants the DevOps engineer to detect deployment-related health regressions as quickly as possible and automatically pause the rollout if latency crosses a predefined threshold until it stabilizes. Which approach offers the fastest detection during deployments?
-
✓ C. Create a CloudWatch alarm on the ALB TargetResponseTime metric and attach it to the CodeDeploy deployment group to automatically halt the rollout on breach
Create a CloudWatch alarm on the ALB TargetResponseTime metric and attach it to the CodeDeploy deployment group to automatically halt the rollout on breach is the correct choice because it gives the fastest built in detection of deployment related latency regressions.
The CloudWatch alarm observes the ALB TargetResponseTime metric, which is published at one-minute granularity, and CodeDeploy can directly associate alarms with a deployment group, so a breached alarm can immediately stop or pause the rollout. This approach avoids log delivery and processing delays and leverages native integrations to minimize detection latency during a deployment.
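As a rough sketch of the wiring with boto3 (the application name, deployment group, ALB dimension value, and latency threshold are all illustrative):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
codedeploy = boto3.client("codedeploy")

# Alarm on ALB target latency; the dimension value comes from the ALB ARN
# suffix (app/<name>/<id>) and is a placeholder here.
cloudwatch.put_metric_alarm(
    AlarmName="graphql-target-latency-high",
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/graphql-alb/0123456789abcdef"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0.5,                 # seconds; use the agreed latency threshold
    ComparisonOperator="GreaterThanThreshold",
)

# Attach the alarm to the deployment group so CodeDeploy halts the rollout
# and rolls back when the alarm fires.
codedeploy.update_deployment_group(
    applicationName="graphql-service",                 # hypothetical names
    currentDeploymentGroupName="graphql-prod",
    alarmConfiguration={
        "enabled": True,
        "alarms": [{"name": "graphql-target-latency-high"}],
    },
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_STOP_ON_ALARM"],
    },
)
```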
Enable Detailed Monitoring for the Application Load Balancer in CloudWatch and use the alarm to stop the deployment is incorrect because Application Load Balancers already publish metrics at 60 second granularity and they do not support a separate Detailed Monitoring mode. There is no faster native metric available by enabling detailed monitoring so this does not improve detection speed.
Analyze ALB access logs with an AWS Lambda function to compute average latency and signal the pipeline to stop when the threshold is breached is incorrect because access logs are delivered in batches to Amazon S3 and require processing which introduces significant delay compared to native metrics.
Instrument the service with AWS X-Ray and abort the deployment when traces show increased latency is incorrect because tracing introduces collection and analysis overhead, and X-Ray does not natively trigger CodeDeploy to halt a rollout as quickly as a CloudWatch alarm on the ALB metric.
Attach CloudWatch alarms on ALB latency directly to the CodeDeploy deployment group so rollbacks or pauses occur with minimal delay.
Meridian Threads, a retail startup, plans to launch a customer portal on Amazon EC2 instances behind an Application Load Balancer, backed by a managed MySQL-compatible database. The business requires a recovery point objective of 45 minutes and a recovery time objective of 12 minutes during a Regional outage. Which combination of deployment choices best meets these objectives? (Choose 2)
-
✓ B. Configure an Amazon Aurora global database spanning two Regions and promote the secondary to read/write if the primary Region fails
-
✓ D. Deploy the web tier in two Regions and use Amazon Route 53 failover routing to point to the ALBs with health checks enabled
Configure an Amazon Aurora global database spanning two Regions and promote the secondary to read/write if the primary Region fails and Deploy the web tier in two Regions and use Amazon Route 53 failover routing to point to the ALBs with health checks enabled are correct because they provide coordinated cross Region recovery for the database and deterministic failover for the web tier that meet the RPO and RTO targets.
Configure an Amazon Aurora global database spanning two Regions and promote the secondary to read/write if the primary Region fails delivers storage level, asynchronous cross Region replication with very low replication lag and a controlled promotion process, and this design supports a 45 minute RPO and a 12 minute RTO for the database tier when configured and tested properly.
Deploy the web tier in two Regions and use Amazon Route 53 failover routing to point to the ALBs with health checks enabled uses health checks to detect Region failure and to shift traffic to the standby ALB, and this gives a predictable and testable web tier recovery path that complements the Aurora global database promotion.
Provision two separate Amazon Aurora clusters in different Regions and replicate changes by using AWS Database Migration Service is incorrect because using AWS Database Migration Service for ongoing cross Region replication adds operational complexity and potential replication lag, and it requires a more manual cutover that risks exceeding the RTO.
Deploy the web tier in two Regions and use Amazon Route 53 latency-based routing to target the ALB in each Region with health checks is incorrect because latency based routing optimizes for the lowest latency rather than providing an automated DR failover behavior, and it does not guarantee that traffic will move away from a failing Region in the deterministic way that failover routing does.
Create an Amazon Aurora multi-master cluster across multiple Regions and run the database in active-active mode is incorrect because Aurora multi master is not supported across Regions for active active operation, and that architecture cannot be relied on for cross Region disaster recovery on current Aurora offerings.
For cross Region DR think of Aurora Global Database for the data layer and use Route 53 failover with health checks for deterministic web tier cutover.
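The web tier half of this design boils down to a pair of failover records. Here is a hedged boto3 sketch; the hosted zone ID, record name, ALB DNS names, and health check ID are placeholders, and the records are shown as CNAMEs for brevity even though alias records are the usual choice for ALBs.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "portal.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "primary-alb.us-east-1.elb.amazonaws.com"}],
                    # Health check on the primary Region decides when to fail over.
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "portal.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "standby-alb.us-west-2.elb.amazonaws.com"}],
                },
            },
        ]
    },
)
```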
Northwind Labs is building a serverless REST API using Amazon API Gateway, AWS SAM, and AWS Lambda. They want backend Lambda deployments to run automatically whenever code is pushed to GitHub. They also require distinct pipelines for STAGING and PRODUCTION, and only STAGING should deploy without human intervention while PRODUCTION must be gated. What approach should the DevOps engineer take to implement this?
-
✓ B. Build two AWS CodePipeline pipelines for STAGING and PRODUCTION, use a single GitHub repository with separate environment branches, trigger on branch commits, deploy via AWS CloudFormation, and require a manual approval only in the PRODUCTION pipeline
Build two AWS CodePipeline pipelines for STAGING and PRODUCTION, use a single GitHub repository with separate environment branches, trigger on branch commits, deploy via AWS CloudFormation, and require a manual approval only in the PRODUCTION pipeline is correct. This option provides an automated STAGING flow and a gated PRODUCTION flow which matches the requirements.
This approach uses branch based triggers so pushes to the STAGING branch drive the STAGING pipeline automatically while the PRODUCTION pipeline only progresses after a manual approval. Deploying Lambda with CloudFormation fits well with AWS SAM and keeps deployments repeatable and trackable. Having separate pipelines preserves environment isolation and lets you configure different actions and approvals for each environment.
Configure two AWS CodePipeline pipelines for STAGING and PRODUCTION, add a manual approval step to both, use separate GitHub repositories per environment, and deploy Lambda with AWS CloudFormation is not ideal because requiring manual approval in both pipelines prevents STAGING from auto deploying, and maintaining separate repositories creates unnecessary repository sprawl and management overhead.
Create two pipelines in AWS CodePipeline, enable automatic releases in PRODUCTION, use AWS CodeDeploy to update Lambda from independent repositories is incorrect because it violates the requirement to gate PRODUCTION with a human approval and using separate repositories adds complexity. AWS CodeDeploy is not the common mechanism for simple serverless Lambda deployments with SAM and CloudFormation.
Use a single AWS CodePipeline pipeline with STAGING and PRODUCTION stages that both auto-deploy from pushes to the main branch, and publish Lambda with AWS CloudFormation is wrong because a single pipeline that auto-deploys both stages would not provide the required manual gate for PRODUCTION and it reduces isolation between environments.
Use a single repository with environment branches and set the STAGING pipeline to trigger on the staging branch while adding a manual approval action only to the production pipeline to enforce gated releases.
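To make the gating concrete, this is roughly what the approval stage fragment of the PRODUCTION pipeline definition could look like when passed to codepipeline.create_pipeline; the STAGING pipeline would simply omit it. Stage and action names are placeholders.

```python
# Fragment of the PRODUCTION pipeline structure for codepipeline.create_pipeline.
production_approval_stage = {
    "name": "ApproveRelease",
    "actions": [
        {
            "name": "ManualApproval",
            "actionTypeId": {
                "category": "Approval",
                "owner": "AWS",
                "provider": "Manual",
                "version": "1",
            },
            "runOrder": 1,
        }
    ],
}
```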
A fintech startup runs a usage-based public API behind Amazon API Gateway at https://paylearn.io/api/v2. Its static website is hosted on Amazon S3 and can surface a new endpoint at https://paylearn.io/api/v2/insights if enabled. The team wants to send only a small slice of production traffic to this new path first and validate latency and error metrics before a full release. The API integrations use AWS Lambda. What should you do to conduct this test safely?
-
✓ C. Enable a canary release on the API Gateway stage serving v2, deploy the updated API to that stage, direct a small percentage of traffic to the canary deployment, and monitor CloudWatch metrics
Enable a canary release on the API Gateway stage serving v2, deploy the updated API to that stage, direct a small percentage of traffic to the canary deployment, and monitor CloudWatch metrics is the correct choice because API Gateway stage canaries let you shift a configurable percentage of live traffic to a new deployment while the rest stays on the stable version.
Enable a canary release on the API Gateway stage serving v2, deploy the updated API to that stage, direct a small percentage of traffic to the canary deployment, and monitor CloudWatch metrics is appropriate for this scenario because the new path lives under the same API stage and the stage canary will route a weighted portion of requests to the new deployment so you can measure latency and error rates before a full rollout. The canary integrates natively with Amazon CloudWatch so you can observe latency, error rate, and throughput with built in metrics and logs.
Create a Lambda alias, enable canary on the alias, deploy the new handler to the alias, send a small traffic weight to the alias, and stream metrics to Amazon OpenSearch Service is incorrect because Lambda alias weighting does not itself perform an API Gateway stage level canary for a new route and API Gateway does not send its standard metrics to OpenSearch by default. CloudWatch is the native destination for API Gateway operational metrics.
Use Amazon Route 53 weighted routing between two API Gateway custom domain names to move a small percentage of requests to a canary deployment and observe results in AWS CloudTrail is incorrect because DNS weighted routing cannot target a specific path on the same domain and CloudTrail records audit events rather than providing real time latency and error metrics for performance validation.
Create a Lambda function alias, enable alias canary, update the API integration to the alias, set a small weight for canary, and track behavior with CloudWatch is incorrect because while alias weights can split Lambda version traffic they do not offer the route scoped or stage level canary behavior that API Gateway stage canaries provide and they are less convenient when you need to exercise a new API path under the same stage.
When you need a gradual rollout for a new path under the same API, use API Gateway stage canaries and monitor with CloudWatch to validate latency and error trends before shifting all traffic.
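A minimal boto3 sketch of turning on the stage canary during deployment, assuming a hypothetical REST API ID and the v2 stage:

```python
import boto3

apigateway = boto3.client("apigateway")

# Deploy the updated API and send 10% of the stage's traffic to the canary.
apigateway.create_deployment(
    restApiId="a1b2c3d4e5",
    stageName="v2",
    canarySettings={
        "percentTraffic": 10.0,
        "useStageCache": False,
    },
)
```

After the CloudWatch metrics look healthy, the canary is promoted so the stage serves the new deployment to all traffic.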
All AWS questions come from certificationexams.pro and my Certified DevOps Engineer Udemy course.
At RivetWorks, a DevOps engineer is building a serverless workload that uses several AWS Lambda functions with Amazon DynamoDB for persistence. The team needs a lightweight front end that accepts plain HTTP traffic and directs requests to different functions based on the URL path segment. What approach should they use?
-
✓ B. Application Load Balancer with an HTTP listener and Lambda targets in separate target groups, using path-based routing rules to map URL paths to functions
The correct solution is Application Load Balancer with an HTTP listener and Lambda targets in separate target groups, using path-based routing rules to map URL paths to functions.
Application Load Balancer with an HTTP listener and Lambda targets in separate target groups, using path-based routing rules to map URL paths to functions supports plain HTTP listeners and has native integration for Lambda targets. The load balancer can inspect the URL path and apply path based routing rules to forward requests to different target groups, which lets a lightweight front end route plain HTTP requests to different Lambda functions based on the path.
Amazon API Gateway REST API configured with resources for each URL path using the ANY method and Lambda proxy integrations over an HTTP endpoint is not suitable because API Gateway endpoints require TLS and do not provide plain unencrypted HTTP access.
Amazon API Gateway HTTP API with routes for each path and Lambda non-proxy integrations over an HTTP endpoint also fails the requirement because HTTP APIs likewise enforce HTTPS for their endpoints and cannot serve plain HTTP.
Network Load Balancer with an HTTP listener and Lambda targets in distinct target groups, using IP-based rules to forward based on path values is incorrect because the Network Load Balancer operates at Layer 4 and does not evaluate URL paths so it cannot perform path based routing to different targets.
When a question requires plain HTTP plus path based routing to Lambda remember to pick ALB with Lambda targets rather than API Gateway or NLB.
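Here is a hedged boto3 sketch of the key pieces: a Lambda target group, the invoke permission for the load balancer, and a path based rule. All ARNs and names are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
lambda_client = boto3.client("lambda")

# Create a Lambda target group and allow the load balancer to invoke the function.
tg = elbv2.create_target_group(Name="orders-fn-tg", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

lambda_client.add_permission(
    FunctionName="orders-handler",
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)

elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "arn:aws:lambda:us-east-1:123456789012:function:orders-handler"}],
)

# Route /orders* on the HTTP listener to this function's target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/front/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```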
An application retrieves a database password from AWS Secrets Manager and caches it, and after enabling rotation authentication begins to fail. What is the most likely cause?
-
✓ C. Initial rotation replaced the password while the app kept a cached value
Initial rotation replaced the password while the app kept a cached value is correct because enabling rotation triggers an immediate rotation that updates the secret and changes the database password, and the application continued using the cached old password which caused authentication failures.
When rotation runs, Secrets Manager creates a new secret version and marks it AWSCURRENT while the previous version becomes AWSPREVIOUS. If the application does not retrieve the AWSCURRENT value at connection time or use the Secrets Manager caching client with automatic refresh, the app will still use the stale password and fail to authenticate. The remedy is to fetch secrets at runtime, use the official caching client that refreshes on rotation, or implement logic to refresh cached credentials when rotation occurs.
KMS key policy blocks decryption of the secret is unlikely because a KMS policy issue would have caused decryption failures before rotation was enabled and enabling rotation does not change KMS permissions.
No VPC endpoint for Secrets Manager is not specific to rotation because if the application could reach Secrets Manager before rotation then networking was functioning and rotation would not suddenly fail. Secrets Manager can be accessed via NAT or internet gateways when configured.
GetSecretValue omitted VersionStage and read the wrong version is not the likely cause because GetSecretValue returns the AWSCURRENT version by default so omitting VersionStage normally returns the current secret and not a previous version.
Watch for clues like enabling rotation and cached credentials in the question and prefer runtime retrieval or the Secrets Manager caching client to avoid stale passwords.
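A minimal sketch of runtime retrieval with boto3, assuming a hypothetical secret name; the official Secrets Manager caching client for Python is the more complete answer because it refreshes cached values automatically.

```python
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

def get_db_credentials(secret_id="prod/mysql/app"):  # secret name is a placeholder
    # Fetch the AWSCURRENT version at connection time instead of caching the
    # password indefinitely, so a rotation is picked up on the next connect.
    response = secretsmanager.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])
```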
An online learning startup runs its web tier on Amazon EC2 instances in an Auto Scaling group. Before any instance is terminated during scale-in, the team must collect all application log files from that instance. They also need a searchable catalog to quickly find log objects by instance ID and within a specific date range. As the DevOps engineer, what architecture would you implement to meet these goals in an event-driven and fault-tolerant way? (Choose 3)
-
✓ B. Create a DynamoDB table with partition key instanceId and sort key eventTime
-
✓ D. Configure an S3 PUT event notification to trigger a Lambda function that writes object metadata into DynamoDB
-
✓ E. Use an Auto Scaling termination lifecycle hook with an EventBridge rule that triggers Lambda to call SSM Run Command and copy the logs directly to S3
Create a DynamoDB table with partition key instanceId and sort key eventTime, Configure an S3 PUT event notification to trigger a Lambda function that writes object metadata into DynamoDB, and Use an Auto Scaling termination lifecycle hook with an EventBridge rule that triggers Lambda to call SSM Run Command and copy the logs directly to S3 are correct because they capture logs reliably at termination and provide a searchable index by instance and time.
Use an Auto Scaling termination lifecycle hook with an EventBridge rule that triggers Lambda to call SSM Run Command and copy the logs directly to S3 pauses instance termination and runs a remote command to copy log files to durable S3 storage before the instance is removed. The EventBridge and Lambda pieces keep this event driven and fault tolerant because the invocation can be retried and the logs land in S3 for durable retention.
Configure an S3 PUT event notification to trigger a Lambda function that writes object metadata into DynamoDB keeps an up to date index of every uploaded log object. The Lambda can record the instance id and timestamp as metadata so that objects are immediately discoverable without scanning S3.
Create a DynamoDB table with partition key instanceId and sort key eventTime aligns the primary key to the access pattern so queries by instance id and by a date range are efficient and cost effective. Partitioning by instance id groups related logs together and the sort key lets you use range queries for time windows.
Set up an Auto Scaling termination lifecycle hook that sends an EventBridge event to invoke Lambda, then use SSM Run Command to upload the logs to CloudWatch Logs and stream them through Kinesis Data Firehose to S3 is incorrect because it is more complex and optimized for continuous streaming ingestion rather than a one time drain at termination.
Create a DynamoDB table with partition key eventTime and sort key instanceId is incorrect because partitioning on time misaligns the table with instance centric lookups and can lead to hot partitions and inefficient queries when searching by instance and date range.
Create an EventBridge rule for PutObject API calls to invoke a Lambda function that records entries in DynamoDB is incorrect because that approach depends on CloudTrail data events and adds latency and operational overhead when native S3 PUT event notifications can invoke Lambda directly for object created events.
Map the DynamoDB primary key to the query pattern and use lifecycle hooks to pause termination for log collection. Prefer S3 event notifications to keep an object index in sync.
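Once the index table exists, the search requirement becomes a single DynamoDB query. A sketch, assuming a hypothetical table named InstanceLogIndex with an s3Key attribute written by the indexing Lambda:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("InstanceLogIndex")

# Find every log object uploaded for one instance within a date range.
response = table.query(
    KeyConditionExpression=Key("instanceId").eq("i-0abc1234def567890")
    & Key("eventTime").between("2025-03-01T00:00:00Z", "2025-03-07T23:59:59Z")
)
for item in response["Items"]:
    print(item["s3Key"])
```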
Helios Media processes about 25 GB TAR archives on AWS Fargate when new objects are placed into an Amazon S3 bucket. CloudTrail data events for the bucket are already enabled. To minimize cost, the team wants a single task running only when there is new work, to start additional tasks only when more files are uploaded again, and after processing and bucket cleanup, to stop all ECS tasks. What is the simplest way to implement this?
-
✓ B. Create Amazon EventBridge rules for S3 object PUT to target ECS RunTask and for S3 object DELETE to invoke a Lambda that calls StopTask on all running tasks
Create Amazon EventBridge rules for S3 object PUT to target ECS RunTask and for S3 object DELETE to invoke a Lambda that calls StopTask on all running tasks is correct because it maps S3 object events directly to on demand Fargate task launches and uses a small Lambda to stop tasks after processing so only a single task runs when there is work and additional tasks start only when new files arrive.
This solution leverages EventBridge as a native event router so an S3 PUT can trigger an ECS RunTask call to start Fargate tasks quickly and cost efficiently. The S3 DELETE event can invoke a small Lambda that calls StopTask to terminate any running tasks after the TAR archives are processed and the bucket is cleaned up. This keeps the architecture simple and minimizes running time and charges compared to keeping an always on service.
Use Amazon CloudWatch Alarms on AWS CloudTrail data events with two Lambda functions to raise or lower the ECS service desired count is not ideal because CloudWatch alarms are metric based and add complexity and delay when you need immediate, object level reaction. This approach also requires more moving parts than using EventBridge targets directly.
Configure an Amazon EventBridge rule to invoke a Lambda on S3 PUT that updates capacity provider settings for the ECS service, and another rule on S3 DELETE to scale in is incorrect because capacity providers are intended to manage backing capacity and not to toggle short lived desired count for event driven tasks. Using capacity providers for this use case misuses their purpose and increases complexity.
AWS Batch is not the simplest fit because it introduces an additional orchestration layer and it does not natively react to S3 object events without extra integration. AWS Batch is better suited for large scale batch workloads and is heavier than the direct EventBridge plus RunTask pattern for this scenario.
When S3 object level events must start Fargate tasks think EventBridge to call RunTask and use a small Lambda to call StopTask after cleanup so tasks run only while processing.
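The cleanup half of the pattern is a very small Lambda. A sketch, assuming a hypothetical cluster name:

```python
import boto3

ecs = boto3.client("ecs")

CLUSTER = "tar-processing"  # cluster name is a placeholder

def lambda_handler(event, context):
    # Invoked by the EventBridge rule for S3 object deletes; stop any
    # remaining Fargate tasks once the bucket has been cleaned up.
    task_arns = ecs.list_tasks(cluster=CLUSTER, desiredStatus="RUNNING")["taskArns"]
    for task_arn in task_arns:
        ecs.stop_task(cluster=CLUSTER, task=task_arn, reason="Bucket cleanup complete")
    return {"stopped": len(task_arns)}
```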
An Australian retail startup uses an AWS CloudFormation template to publish a static marketing site from Amazon S3 in the Asia Pacific (Singapore) region. The stack creates an S3 bucket and a Lambda-backed custom resource that fetches files from an internal file share into the bucket. The team plans to relocate the site to the Asia Pacific (Sydney) region for lower latency, but the existing stack cannot be deleted cleanly in CloudFormation. What explains the failure and what should the engineer do to address it now and in subsequent releases?
-
✓ B. The S3 website bucket still contains objects and versions; update the Lambda custom resource to empty the bucket on the Delete event so CloudFormation can remove it
The S3 website bucket still contains objects and versions and update the Lambda custom resource to empty the bucket on the Delete event so CloudFormation can remove it is correct because CloudFormation can only delete S3 buckets that are empty. The Lambda-backed custom resource needs a Delete handler that recursively removes all objects, object versions, and delete markers before returning success so the stack can be removed cleanly.
Implementing the Delete event logic means the custom resource should list and delete current objects and then list and delete versioned objects and delete markers when the bucket has versioning enabled. Testing the Delete flow in a nonproduction stack helps ensure future releases can be moved or removed without manual cleanup.
The bucket has S3 Object Lock retention enabled and clear the retention or legal hold before deleting the stack is unlikely because Object Lock is not implied by the scenario and even if Object Lock were enabled retention in compliance mode cannot be bypassed. The more common failure when CloudFormation cannot delete a bucket is that the bucket is not empty.
The S3 bucket uses static website hosting and remove the website configuration from the template to allow deletion is incorrect because website configuration does not prevent bucket deletion. If the bucket is empty CloudFormation can delete it regardless of the website hosting settings.
The bucket deletion fails due to the DeletionPolicy and set DeletionPolicy to ForceDelete so the bucket is removed even if it is not empty is wrong because ForceDelete is not a supported DeletionPolicy value. For CloudFormation S3 buckets you can set DeletionPolicy to Delete or Retain and deletion still requires the bucket to be emptied first.
Remember that S3 buckets must be empty to be deleted by CloudFormation and that versioned buckets require removing versions and delete markers. Implement and test a Delete handler in the Lambda custom resource to purge the bucket before stack deletion.
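The Delete handler itself is short. This sketch shows only the emptying logic and assumes a BucketName resource property; a real custom resource must also signal success or failure back to CloudFormation, for example with the cfnresponse module.

```python
import boto3

def handler(event, context):
    # Minimal Delete-event logic for the custom resource.
    if event.get("RequestType") == "Delete":
        bucket_name = event["ResourceProperties"]["BucketName"]  # assumed property name
        bucket = boto3.resource("s3").Bucket(bucket_name)
        # Remove current objects, then every object version and delete marker,
        # so CloudFormation can delete the now-empty bucket.
        bucket.objects.all().delete()
        bucket.object_versions.all().delete()
```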
A logistics SaaS company, DriftCart, is building a microservices platform that needs an ultra-low-latency in-memory datastore. The data must be kept durable across at least two Availability Zones, the system requires strong read-after-write consistency, and it should scale smoothly to tens of terabytes without re-architecting. Which AWS service should they choose?
-
✓ C. Amazon MemoryDB for Redis
The correct choice is Amazon MemoryDB for Redis because it meets the ultra low latency requirement while providing durable, multi Availability Zone persistence and strong read after write consistency and it can scale to tens of terabytes without re architecting.
Amazon MemoryDB for Redis uses a durable, multi AZ transactional log to preserve data across Availability Zones and it implements strong consistency semantics so reads reflect recent writes. It is Redis compatible so it delivers in memory performance for very low latency and it supports cluster scaling that accommodates multi terabyte memory footprints without redesigning the application architecture.
Amazon ElastiCache for Redis is primarily intended as a cache rather than a durable system of record and it relies on replica synchronization that can be eventually consistent. It does not provide the same log backed durability guarantees as MemoryDB and that can increase the risk of data loss during failover.
Amazon ElastiCache for Memcached is an ephemeral cache with no built in persistence or cross Availability Zone replication and it cannot satisfy the durability or strong consistency requirements of this scenario.
Amazon Managed Grafana is a visualization and monitoring service and it is not a datastore so it does not meet the platform requirements.
When questions require durable multi Availability Zone persistence and strong read after write consistency with in memory latency prefer MemoryDB instead of cache offerings.
Helios Media uses AWS CodePipeline to orchestrate releases for a containerized service. Deployments to Amazon ECS are performed by AWS CodeDeploy with a blue and green strategy. The team needs to run smoke tests against the green task set before any production traffic is routed, and these checks complete in under 4 minutes. If the tests fail, the deployment must automatically roll back without human action. Which approach best satisfies these requirements?
-
✓ C. Define AppSpec hooks for the ECS deployment and use the AfterAllowTestTraffic event to invoke an AWS Lambda function that runs the tests; if any test fails, return an error from the function to trigger rollback
The correct option is Define AppSpec hooks for the ECS deployment and use the AfterAllowTestTraffic event to invoke an AWS Lambda function that runs the tests and if any test fails return an error from the function to trigger rollback. This solution uses AfterAllowTestTraffic to validate the green task set while test traffic is routed and before any production traffic is shifted.
Using the AfterAllowTestTraffic lifecycle event lets the deployment run smoke tests in the blue green window and ensures that returning an error from the invoked Lambda causes CodeDeploy to mark the deployment as failed and automatically roll back to the previous version. The Lambda can run quick checks that complete in under four minutes and any failure is handled natively by CodeDeploy without human intervention.
Insert a stage in CodePipeline before deployment that runs AWS CodeBuild to execute the test scripts and on failures call aws deploy stop-deployment is not ideal because tests run outside the CodeDeploy blue green lifecycle and the approach depends on an out of band CLI call instead of native hook driven rollback.
Add hooks to the CodeDeploy AppSpec and use the AfterAllowTraffic event to run the scripts and on failure stop the deployment with the AWS CLI is incorrect because AfterAllowTraffic fires after production traffic has been shifted so it is too late for pre shift validation.
Create an extra stage in CodePipeline ahead of the deploy stage to invoke an AWS Lambda function that runs the scripts and if errors are detected call aws deploy stop-deployment also bypasses the purpose built AfterAllowTestTraffic hook and relies on manual stop calls, which is less reliable and does not use CodeDeploy automatic rollback behavior.
Remember to use the AfterAllowTestTraffic AppSpec hook for ECS blue green so smoke tests run against the green task set before traffic shifts and failing the hook triggers an automatic rollback.
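A minimal sketch of the hook Lambda, assuming a hypothetical health endpoint on the test listener; the important part is reporting the outcome back to CodeDeploy so a failure triggers the automatic rollback.

```python
import urllib.request
import boto3

codedeploy = boto3.client("codedeploy")

def lambda_handler(event, context):
    # Invoked by CodeDeploy for the AfterAllowTestTraffic lifecycle event.
    deployment_id = event["DeploymentId"]
    hook_execution_id = event["LifecycleEventHookExecutionId"]

    try:
        # Hypothetical smoke test against the test listener of the green task set.
        urllib.request.urlopen("http://internal-test-listener.example.com/health", timeout=5)
        status = "Succeeded"
    except Exception:
        status = "Failed"  # a failed hook makes CodeDeploy roll back automatically

    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status=status,
    )
```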
Which AWS solution provides continuous vulnerability detection for EC2 instances and centralized auditing of operating system login events across approximately 120 instances?
-
✓ D. Amazon Inspector with CloudWatch agent to CloudWatch Logs
Amazon Inspector with CloudWatch agent to CloudWatch Logs is correct because Amazon Inspector performs continuous vulnerability assessments of EC2 instances and the CloudWatch agent forwards in-guest operating system authentication and login events to CloudWatch Logs for centralized auditing and alerting.
Amazon Inspector continuously scans for software vulnerabilities and unintended network exposure and it produces prioritized findings that you can act on. The CloudWatch agent can collect OS level logs including authentication and login events and send them to CloudWatch Logs where you can query logs, create metric filters and alarms, and retain logs for auditing.
The option AWS Security Hub and AWS CloudTrail is incorrect because AWS Security Hub and AWS CloudTrail do not perform continuous in-guest vulnerability scanning. AWS Security Hub and AWS CloudTrail aggregate findings and record AWS API activity respectively and CloudTrail does not capture in-guest OS login events.
The option Amazon GuardDuty and AWS Detective is incorrect because Amazon GuardDuty and AWS Detective focus on threat detection and investigation using telemetry such as VPC Flow Logs and CloudTrail and they do not provide continuous software vulnerability scanning or collect OS login events from inside instances.
The option AWS Systems Manager Patch Manager and Kinesis Agent to Amazon S3 is incorrect because AWS Systems Manager Patch Manager and Kinesis Agent to Amazon S3 address patch compliance and remediation rather than full vulnerability assessment and routing OS logs to S3 via the Kinesis Agent makes centralized querying and alerting more complex than using the CloudWatch agent and CloudWatch Logs.
Remember to distinguish vulnerability scanning from threat detection and match Amazon Inspector with the CloudWatch agent when you need continuous vulnerability assessment plus centralized OS login auditing in CloudWatch Logs.
A biotech research firm restricts DevOps staff from logging directly into Amazon EC2 instances that handle regulated data except during rare break-glass events. The security team needs to be alerted within 10 minutes whenever an engineer signs in to any of these instances. Which approach delivers this with the least ongoing operational effort?
-
✓ C. Install the Amazon CloudWatch agent on the EC2 instances to stream system logs into CloudWatch Logs and create a metric filter with an alarm that sends an Amazon SNS notification when login messages are detected
Install the Amazon CloudWatch agent on the EC2 instances to stream system logs into CloudWatch Logs and create a metric filter with an alarm that sends an Amazon SNS notification when login messages are detected is correct because it provides a managed and continuous way to collect in guest OS logs and trigger near real time alerts with minimal operational overhead.
The CloudWatch agent streams system and authentication logs directly into CloudWatch Logs and a metric filter can match login messages to generate a custom metric. An alarm on that metric can publish to Amazon SNS within minutes, so this approach meets the ten minute alert requirement while keeping the solution simple to operate.
Configure an AWS CloudTrail trail to deliver events to Amazon CloudWatch Logs, invoke an AWS Lambda function to look for login activity, and publish alerts via Amazon SNS is incorrect because CloudTrail records AWS control plane API calls and not OS level in guest logins such as SSH or local console sign ins that occur inside the instance.
Install the AWS Systems Manager agent and depend on Amazon EventBridge to emit an event when a user logs in, then trigger AWS Lambda to send an Amazon SNS message is incorrect because Systems Manager and EventBridge do not natively emit events for operating system login activity without first collecting and parsing the instance logs, which adds complexity.
Use AWS Systems Manager to run a recurring script on each EC2 instance that uploads system logs to Amazon S3, trigger an S3 event to an AWS Lambda function that parses for login entries, and notify the team with Amazon SNS is incorrect because this solution introduces more moving parts, higher latency, and more maintenance than streaming logs to CloudWatch Logs with a metric filter and alarm.
Remember that CloudWatch agent plus CloudWatch Logs metric filters and alarms is the low maintenance pattern for in guest OS events. Use CloudTrail for API activity and not for OS logins.
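A hedged boto3 sketch of the metric filter and alarm, assuming a hypothetical log group, SNS topic, and Linux auth log format; adjust the filter pattern to the login methods you actually need to catch.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "/regulated/ec2/auth"  # log group name is a placeholder
SNS_TOPIC = "arn:aws:sns:us-east-1:123456789012:security-alerts"  # placeholder

# Emit a metric whenever a login line appears in the streamed auth logs.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="ssh-logins",
    filterPattern='"Accepted password"',  # adjust for publickey, console, etc.
    metricTransformations=[
        {
            "metricName": "InstanceLogins",
            "metricNamespace": "Security",
            "metricValue": "1",
            "defaultValue": 0,
        }
    ],
)

# Alert the security team within minutes of any login.
cloudwatch.put_metric_alarm(
    AlarmName="regulated-instance-login",
    Namespace="Security",
    MetricName="InstanceLogins",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC],
)
```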
All AWS questions come from certificationexams.pro and my Certified DevOps Engineer Udemy course.
Aurora Metrics is releasing a new serverless backend that uses an Amazon API Gateway REST API in front of several AWS Lambda functions. The DevOps lead wants a deployment approach that initially exposes the new release to roughly 10% of clients, validates behavior, and then shifts all traffic when stable. Which deployment strategy should the engineer use to meet these goals?
-
✓ C. Use AWS CloudFormation to manage Lambda function versions and configure API Gateway stage canary settings; update the stack to roll out the new code and promote after validation
Use AWS CloudFormation to manage Lambda function versions and configure API Gateway stage canary settings and update the stack to roll out the new code and promote after validation is the correct choice because it supports a controlled canary at the API stage while keeping versioned Lambda code in place for safe promotion.
The correct approach works because API Gateway stage canary settings allow you to route a specific percentage of a stage’s traffic to the new deployment and versioned Lambda functions let you run the new code alongside the stable version. Using infrastructure as code with CloudFormation keeps the Lambda versions and the API Gateway stage canary configuration in the same stack so rollouts are coordinated, reversible, and auditable while you validate the 10 percent canary and then shift full traffic.
Deploy with AWS SAM using a Lambda alias, then perform a canary via an Amazon Route 53 weighted policy to shift a small percentage of users is unsuitable because DNS based weighting cannot target a single API Gateway stage and it does not provide the stage level canary controls or immediate rollback behavior that API Gateway canaries provide.
Use the AWS CDK to provision the stack and rely on an Amazon Route 53 failover policy to simulate a canary before switching all traffic is incorrect because failover routing only swaps endpoints after a health check outcome and it cannot perform gradual percentage based traffic shifts for a canary.
Use AWS CodeDeploy with Lambda traffic shifting to perform a canary and update the API separately afterward is close but not ideal for this scenario because CodeDeploy shifts at the Lambda alias level and does not automatically coordinate API Gateway stage canary settings and API configuration changes in a single, unified canary deployment.
When a question pairs API Gateway and Lambda canaries think of API Gateway stage canary plus versioned Lambda functions managed by IaC rather than DNS based or failover routing approaches.
An R&D team at Vantage Labs stores sensitive patent drafts in an Amazon S3 bucket in US West (Oregon). The compliance group needs a simple and low-cost way to track and search object-level actions like PutObject, GetObject, and DeleteObject for periodic audits. Which approach best meets these needs?
-
✓ C. Enable AWS CloudTrail data events for the bucket, store the logs in Amazon S3, and query them ad hoc with Amazon Athena
The correct choice is Enable AWS CloudTrail data events for the bucket, store the logs in Amazon S3, and query them ad hoc with Amazon Athena. This option captures object level API calls such as GetObject, PutObject, and DeleteObject and lets you search and analyze those events on demand.
Enable AWS CloudTrail data events for the bucket, store the logs in Amazon S3, and query them ad hoc with Amazon Athena records the detailed data events that include object identifiers and caller information and delivers the logs to S3 for durable storage. You can create an Athena table over the CloudTrail data and run ad hoc SQL queries when audits are required. This approach is cost effective because you pay for S3 storage and Athena query scanned bytes only when you run queries and you avoid running persistent indexing infrastructure.
Enable S3 server access logging to a separate bucket and index the logs with Amazon OpenSearch Service is less suitable because S3 server access logs are coarser and they focus on access records rather than full API data events and indexing into OpenSearch adds ongoing cluster and operational cost.
Create an Amazon EventBridge rule for S3 object-level API actions and route events directly to Amazon CloudWatch Logs is not sufficient on its own because EventBridge does not receive raw S3 object level API events unless those events are emitted via CloudTrail first and so this alone will miss the detailed data events you need.
Configure the S3 bucket to send object API activity directly to an Amazon CloudWatch Logs log group is not supported for S3 object level API auditing because S3 does not natively stream object API data events into CloudWatch Logs and you would still rely on CloudTrail for those records.
When you need object level S3 auditing remember that CloudTrail data events provide the detailed records and that pairing them with Athena lets you run low cost ad hoc queries.
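Once the CloudTrail data events are landing in S3 and an Athena table has been created over them (here assumed to be called cloudtrail_logs in an audit database), an audit query is a one-off call. Table name, database, and output location are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Ad hoc audit query over the CloudTrail data-event table.
query = """
SELECT eventtime, eventname, useridentity.arn, requestparameters
FROM cloudtrail_logs
WHERE eventsource = 's3.amazonaws.com'
  AND eventname IN ('GetObject', 'PutObject', 'DeleteObject')
  AND eventtime BETWEEN '2025-03-01T00:00:00Z' AND '2025-03-31T23:59:59Z'
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "audit"},
    ResultConfiguration={"OutputLocation": "s3://audit-query-results/"},
)
```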
You are the DevOps lead at StreamForge, a digital media subscription startup, and your production Amazon RDS for MySQL database is provisioned by AWS CloudFormation using AWS::RDS::DBInstance and configured for Multi-AZ. You must perform a major upgrade from MySQL 5.7 to 8.0 and want to keep application downtime to only a brief cutover while managing changes through CloudFormation. What approach should you take to achieve this with minimal interruption?
-
✓ C. Create a read replica with CloudFormation using SourceDBInstanceIdentifier, wait for it to catch up, update the replica’s EngineVersion to 8.0, promote it, then point applications to the promoted instance
The correct choice is Create a read replica with CloudFormation using SourceDBInstanceIdentifier, wait for it to catch up, update the replica’s EngineVersion to 8.0, promote it, then point applications to the promoted instance because it lets you perform the major MySQL upgrade on a replica while the primary remains online and then execute a brief cutover when you promote the replica.
This pattern keeps the primary serving reads and writes while you create and synchronize the replica and then upgrade and validate the replica on MySQL 8.0 before promotion. Promotion converts the replica to a writable instance and you can repoint your application endpoint to the promoted instance for minimal downtime. Using CloudFormation to create the replica with SourceDBInstanceIdentifier lets you manage the new instance lifecycle as code and reduces manual steps.
Update the existing AWS::RDS::DBInstance in the stack by setting EngineVersion to 8.0 and run a stack update is risky because a CloudFormation update can trigger an in place engine upgrade or an instance replacement that may cause significant downtime and that violates the minimal interruption requirement.
Change the DBInstance property DBEngineVersion to 8.0 in CloudFormation and update the stack is incorrect because the property name is EngineVersion not DBEngineVersion and using the wrong property will not produce the intended change.
Use AWS Database Migration Service with ongoing replication to migrate to a new RDS MySQL 8.0 instance and cut over can achieve low downtime but it adds extra components and operational overhead, and for same engine upgrades the replica upgrade and promote pattern is usually simpler and more direct.
Follow a replica, upgrade, promote, switch workflow and remember the CloudFormation property is EngineVersion to avoid naming mistakes.
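The promotion step can equally be driven from the RDS API once the upgraded replica has been validated. A one-line boto3 sketch with a placeholder identifier:

```python
import boto3

rds = boto3.client("rds")

# Promote the upgraded replica to a standalone writable instance, then
# repoint the application at its endpoint for the brief cutover.
rds.promote_read_replica(DBInstanceIdentifier="streamforge-mysql-80-replica")
```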
A regional healthcare analytics firm with a hybrid cloud setup operates a portfolio of 42 web applications. Each application is a multi-tier workload running on Auto Scaling groups of On-Demand Amazon EC2 instances behind Application Load Balancers and uses external Amazon RDS databases. The security team requires that only the corporate data center can reach the applications, and all other external IPs must be blocked. The corporate network exits through 12 proxy appliances, each with a unique public IP address, and these IPs are rotated roughly every three weeks. The networking team uploads a CSV with the current proxy IPs to a private Amazon S3 bucket whenever the rotation occurs. What should the DevOps engineer implement to allow access solely from the corporate network in the most cost-effective way with minimal ongoing effort?
-
✓ C. Use an AWS Lambda function triggered by Amazon S3 event notifications on object updates to read the CSV and modify ALB security groups to permit HTTPS only from the listed proxy IPs
Use an AWS Lambda function triggered by Amazon S3 event notifications on object updates to read the CSV and modify ALB security groups to permit HTTPS only from the listed proxy IPs is correct because it applies an IP allow list at the network layer only when the proxy list changes and it minimizes cost and operational effort.
This Lambda and S3 event approach is event driven so it runs only on updates and it avoids frequent polling and unnecessary invocations. The solution enforces access at the load balancer security group level which matches the requirement to allow only the corporate proxy IPs and it scales easily across all 42 applications behind ALBs.
Place all applications in one VPC, establish AWS Direct Connect with active and standby connections, and restrict ALB security groups to allow HTTPS only from the corporate IP addresses is unnecessary and costly because Direct Connect is for private network connectivity and it adds ongoing expense and management when the requirement is simply to allow a set of public proxy IPs.
Create a Python utility using the AWS SDK to download the CSV and update ALB security groups, and invoke it from AWS Lambda on a one-minute Amazon EventBridge schedule is inferior because the scheduled polling increases invocations and cost and may perform updates when nothing changed. Event driven updates from S3 are simpler and cheaper.
Open ALB security groups to HTTPS from the internet, integrate Amazon Cognito with the company Active Directory, enable all 42 apps to use Cognito for login, log to Amazon CloudWatch Logs, and rely on AWS Config for twice-monthly changes does not meet the explicit network allow list requirement because it opens the apps to the internet and shifts the control to identity solutions which do not prevent unwanted IPs from reaching the applications.
Event-driven automations like S3 object notifications reduce runtime and cost compared to polling. Prefer enforcing IP-based controls at the network layer when the requirement is to restrict traffic by source IP.
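A hedged sketch of the Lambda handler, assuming one proxy IP per CSV row and a single ALB security group ID; a production version would validate the file and loop over every security group behind the 42 applications.

```python
import csv
import io
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # ALB security group, placeholder

def lambda_handler(event, context):
    # Triggered by the S3 object-created notification for the proxy IP CSV.
    record = event["Records"][0]["s3"]
    body = s3.get_object(Bucket=record["bucket"]["name"], Key=record["object"]["key"])["Body"].read()

    # Assumes one proxy IP per row in the first column of the CSV.
    proxy_ips = [row[0] for row in csv.reader(io.StringIO(body.decode("utf-8"))) if row]

    # Revoke the previous rules and allow HTTPS only from the current proxy IPs.
    current = ec2.describe_security_groups(GroupIds=[SECURITY_GROUP_ID])["SecurityGroups"][0]
    if current["IpPermissions"]:
        ec2.revoke_security_group_ingress(GroupId=SECURITY_GROUP_ID, IpPermissions=current["IpPermissions"])
    ec2.authorize_security_group_ingress(
        GroupId=SECURITY_GROUP_ID,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": f"{ip}/32"} for ip in proxy_ips],
        }],
    )
```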
What is the least disruptive way to allow Lambda functions in one AWS account to mount and read and write to an Amazon EFS file system in a different account? (Choose 2)
-
✓ B. Lambda role permissions and EFS access point mount
-
✓ E. VPC peering plus EFS file system policy
Lambda role permissions and EFS access point mount and VPC peering plus EFS file system policy are the correct choices because they together provide the required authorization and network path for a Lambda function in one AWS account to mount and read and write to an Amazon EFS file system in another account.
Using Lambda role permissions and EFS access point mount ensures the Lambda function has an execution role with the necessary IAM permissions and that the EFS access point can enforce the POSIX identity and directory permissions needed for read and write operations. Using VPC peering plus EFS file system policy supplies a simple network route between the VPCs and lets the file system resource policy explicitly allow the other account to mount and use the file system. You must also configure security group rules and run the Lambda in a VPC that can reach the EFS mount targets.
AWS PrivateLink for EFS is incorrect because Amazon EFS cannot be exposed over PrivateLink and you cannot mount EFS through an interface endpoint.
Transit Gateway only, no EFS policy change is incorrect because providing only network connectivity does not grant access and the EFS file system must explicitly allow the other account via a resource policy.
Use Organizations SCPs is incorrect because service control policies limit permissions but do not grant cross account access and they do not replace IAM roles or resource based policies that are required for mounting and accessing EFS.
Think in two layers and verify both network connectivity and resource based permissions when answering cross account storage questions.
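On the Lambda side, the mount is just configuration. A sketch that attaches a hypothetical cross account access point to an existing function; the function must already run in a VPC that can reach the mount targets over the peering connection.

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach the cross-account EFS access point to an existing function.
lambda_client.update_function_configuration(
    FunctionName="report-writer",
    FileSystemConfigs=[
        {
            "Arn": "arn:aws:elasticfilesystem:us-east-1:222222222222:access-point/fsap-0abc123def456789a",
            "LocalMountPath": "/mnt/shared",
        }
    ],
)
```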
A digital payments startup recently built a Node.js application that exposes a GraphQL API to read and write payment records. The system runs on a company-managed server with a local MySQL instance. The QA and UX teams will drive frequent test iterations and feature tweaks, and the business requires that releases introduce no downtime or performance dips while builds, tests, and deployments occur. The target design must also scale quickly during traffic spikes. What approach will let the DevOps team move this workload to AWS rapidly while meeting these requirements?
-
✓ C. Link the codebase to GitHub via AWS CodeStar Connections, use AWS CodeBuild for automated unit and functional tests, stand up two Elastic Beanstalk environments wired to a shared external Amazon RDS MySQL Multi-AZ database, deploy the current version to both, then have the pipeline deploy new builds to Elastic Beanstalk and perform a blue/green cutover
Link the codebase to GitHub via AWS CodeStar Connections, use AWS CodeBuild for automated unit and functional tests, stand up two Elastic Beanstalk environments wired to a shared external Amazon RDS MySQL Multi-AZ database, deploy the current version to both, then have the pipeline deploy new builds to Elastic Beanstalk and perform a blue/green cutover is correct because it enables zero downtime deployments while keeping the data layer decoupled from the application environments.
This approach uses blue green deployments on Elastic Beanstalk to swap traffic between two independent environments and to provide simple rollback if a release has an issue. Having the environments share an external Amazon RDS MySQL Multi AZ database preserves data continuity and high availability while avoiding database lifecycle coupling to the application platform. Integrating GitHub with AWS CodeStar Connections and automating tests with AWS CodeBuild supports rapid iterative changes and quality gates for QA and UX driven workflows.
Move the repository to GitHub using AWS CodeStar Connections, configure AWS CodeBuild for automated unit and integration tests, create two Elastic Beanstalk environments that share an external Amazon RDS MySQL Multi-AZ database, deploy the current version to both, and have CodeBuild promote new versions to Elastic Beanstalk using in-place updates is wrong because in place updates can cause brief unavailability or performance degradation during rollout. This fails the no downtime requirement for releases.
Containerize the application and push images to Amazon ECR, use AWS CodeDeploy to run tests and perform deployments to Elastic Beanstalk, and keep an external Amazon RDS Multi-AZ database while using a rolling or in-place deployment strategy is not ideal because rolling or in place strategies can still impact availability and do not provide the simple zero downtime cutover and rollback that blue green provides. Using ECR and CodeDeploy does not by itself guarantee the required zero downtime pattern.
Connect GitHub with AWS CodeStar Connections, run tests with AWS CodeBuild, create two Elastic Beanstalk environments each with its own Amazon RDS MySQL Multi-AZ database, deploy the current release to both, and use blue/green deployments during cutover is incorrect because duplicating separate databases per environment creates synchronization and migration complexity. Attaching a database per environment couples the datastore lifecycle to Elastic Beanstalk and is not recommended for blue green workflows where a shared external Multi AZ RDS provides continuity.
Use blue/green deployments on Elastic Beanstalk and keep the database external and Multi AZ to achieve zero downtime and easy rollback while enabling fast CI driven iterations.
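The cutover itself is a single CNAME swap between the two Elastic Beanstalk environments. A sketch with placeholder environment names:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Blue/green cutover: swap the environment CNAMEs after the green
# environment passes its tests; swap back to roll back.
eb.swap_environment_cnames(
    SourceEnvironmentName="payments-api-blue",
    DestinationEnvironmentName="payments-api-green",
)
```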
ClearPulse Health, a healthcare SaaS provider, has multiple product teams deploying internal APIs into separate AWS accounts. Each account’s VPC was created with the same 10.8.0.0/24 CIDR, and services run on Amazon EC2 behind public Application Load Balancers with HTTPS while interservice calls currently traverse the internet. After a security review, the company requires all service-to-service communication to stay private over HTTPS and the design must scale as additional VPCs are added over time. What long-term architecture should the solutions architect propose to meet these goals?
-
✓ C. Readdress the overlapping CIDRs, deploy a centralized AWS Transit Gateway shared with AWS RAM, attach each VPC and add routes to their CIDRs through the attachments, and front services with internal NLBs for private HTTPS
Readdress the overlapping CIDRs, deploy a centralized AWS Transit Gateway shared with AWS RAM, attach each VPC and add routes to their CIDRs through the attachments, and front services with internal NLBs for private HTTPS is the correct long term architecture because it provides a scalable hub and spoke model that removes address conflicts and enables private, encrypted service to service traffic across accounts.
Using a centralized Transit Gateway lets you attach many VPCs and advertise routes between the hub and spokes while avoiding the mesh explosion that occurs with point to point peering. Sharing the Transit Gateway with AWS Resource Access Manager simplifies cross account access and governance, and internal Network Load Balancers expose services on private IPs so HTTPS can terminate or pass through inside the VPC boundary.
Create a Network Load Balancer in each VPC, set up interface VPC endpoints for com.amazonaws.us-west-2.elasticloadbalancing in consuming VPCs, approve AWS PrivateLink endpoint service connections, and use the endpoint DNS names for cross-VPC calls keeps traffic off the public internet but requires creating per-service endpoint services and per-consumer endpoints and it does not provide transitive routing. This pattern becomes operationally heavy as the number of services and VPCs grows.
Build full-mesh VPC peering across all accounts, add routes to each peer VPC CIDR in every route table, and integrate services using NLB DNS names does not scale because the number of peering connections grows rapidly and it cannot be used when VPCs share overlapping CIDRs. The mesh is hard to manage and it fails the current addressing constraint.
AWS App Mesh provides application layer features like observability, retries, and mTLS but it does not create the underlying private network connectivity across accounts or resolve overlapping IP ranges. App Mesh can complement a network design but it does not replace a transit layer.
Before connecting many accounts resolve any overlapping IP ranges and choose Transit Gateway for hub and spoke scale. Use PrivateLink selectively for exposing individual producer services rather than for full cross VPC connectivity.
A subscription media startup runs a serverless stack with CloudFront, Amazon API Gateway, and AWS Lambda, and today they publish new Lambda versions with ad hoc AWS CLI scripts and manually run another script to revert if problems occur; the current approach takes about 18 minutes to roll out and 12 minutes to undo changes, and leadership wants deployments completed in under 5 minutes with rapid detection and automatic rollback when issues surface with minimal impact to live traffic; what should you implement to meet these goals?
-
✓ B. AWS SAM with CodeDeploy traffic shifting, pre-traffic and post-traffic hooks, and CloudWatch alarm rollback
AWS SAM with CodeDeploy traffic shifting, pre-traffic and post-traffic hooks, and CloudWatch alarm rollback is correct because it integrates Lambda versioning and alias management with CodeDeploy deployment preferences so you can shift traffic in controlled canary or linear steps, run validations before and after traffic moves, and automatically roll back when CloudWatch alarms detect problems. This approach meets the goals of completing deployments in under five minutes and performing rapid automated rollback with minimal impact to live traffic.
AWS SAM simplifies the setup by allowing AutoPublish of new versions and by declaring a DeploymentPreference that ties into CodeDeploy traffic shifting and lifecycle hooks. The pre-traffic hook runs verification logic before user traffic is routed to the new version and the post-traffic hook confirms health after the shift. CloudWatch alarm integration triggers an automatic rollback if errors or latency exceed thresholds so mean time to detect and recover is much lower than with manual scripts.
AWS AppConfig feature flags for Lambda is incorrect because AppConfig controls configuration and progressive exposure of features and it does not publish Lambda versions or shift alias traffic with lifecycle hooks, so it cannot provide the automated deployment and rollback behavior required.
Nested CloudFormation stacks with change sets and stack rollback for Lambda updates is incorrect because change sets only preview and apply resource changes and stack rollback handles infrastructure failures, but they do not perform runtime traffic shifting or run pre and post deployment validation hooks, so they will not meet the fast detection and minimal-impact rollback requirement.
CloudFormation change sets with pre-traffic and post-traffic tests and rollback on alarms is incorrect because CloudFormation change sets do not provide Lambda lifecycle test hooks or traffic shifting for aliases as CodeDeploy does, and the specific test hook capabilities needed for safe Lambda deployments are provided by CodeDeploy integration.
When a question asks for safe Lambda deployments with fast automatic rollback look for CodeDeploy traffic shifting and a SAM DeploymentPreference that uses pre and post traffic hooks and CloudWatch alarms.
WillowWorks Health runs a production service behind a Network Load Balancer that terminates TLS. The platform team needs detailed connection information to study client behavior and troubleshoot spikes, and the captured records must be stored with encryption at rest and readable only by the platform engineers. What should the engineer implement to satisfy these requirements?
-
✓ C. Enable NLB access logs to an S3 bucket, turn on SSE-S3, allow delivery.logs.amazonaws.com to write via bucket policy, and grant the platform team read access with IAM
Enable NLB access logs to an S3 bucket, turn on SSE-S3, allow delivery.logs.amazonaws.com to write via bucket policy, and grant the platform team read access with IAM is the correct choice because it delivers Network Load Balancer access logs to Amazon S3 with server side encryption at rest and enforces least privilege for write and read access.
This option is correct because NLB access logs are delivered to S3 and turning on SSE-S3 ensures objects are encrypted at rest without extra KMS configuration. The bucket policy must allow the logging service principal delivery.logs.amazonaws.com to put objects so the service can write logs. Granting the platform engineers read access via IAM limits human access to only those who need it and meets the requirement that only the team can read the captured records.
Enable NLB access logs to an Amazon S3 bucket, use SSE-KMS on the bucket, and set a bucket policy that grants write access to the AWS service account is incorrect because it does not address granting the platform team read permissions and using a KMS customer managed key would also require updating the key policy to allow the ELB service to use the key which adds complexity.
Send NLB access logs directly to Amazon CloudWatch Logs with a KMS-encrypted log group and restrict access to the team is incorrect because NLB access logs are delivered only to Amazon S3 and cannot be sent directly to CloudWatch Logs.
Enable NLB access logs to an S3 bucket with SSE-S3 and a bucket policy that only permits delivery.logs.amazonaws.com to write is incorrect because it omits granting read permissions to the platform engineers via IAM and the bucket policy alone prevents authorized humans from accessing the logs.
Remember that NLB access logs go only to S3 and that the service principal delivery.logs.amazonaws.com must be allowed to write while human read access is granted via IAM.
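A hedged boto3 sketch of the two pieces the correct option describes, with placeholder bucket, account ID, and load balancer ARN; SSE-S3 default encryption can be set on the bucket with put_bucket_encryption if it is not already in effect, and the platform team's read access would be granted through their IAM policies.

```python
import json
import boto3

elbv2 = boto3.client("elbv2")
s3 = boto3.client("s3")

BUCKET = "willowworks-nlb-logs"  # bucket name is a placeholder

# Allow the log delivery service to write into the bucket.
s3.put_bucket_policy(
    Bucket=BUCKET,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AWSLogDeliveryWrite",
                "Effect": "Allow",
                "Principal": {"Service": "delivery.logs.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/123456789012/*",
            },
            {
                "Sid": "AWSLogDeliveryAclCheck",
                "Effect": "Allow",
                "Principal": {"Service": "delivery.logs.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{BUCKET}",
            },
        ],
    }),
)

# Turn on access logging for the NLB.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/prod-nlb/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": BUCKET},
    ],
)
```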
A video streaming startup, LunarFlix, is rolling out its primary service on a fleet of about 36 Amazon EC2 instances across two Regions. The platform team needs a centralized way to search application logs from the instances together with the AWS account API event history. Which approach should they implement to enable unified, ad hoc querying of both data sources?
-
✓ B. Configure AWS CloudTrail to send events to Amazon CloudWatch Logs, deploy the CloudWatch Agent on EC2 to push application logs to CloudWatch Logs, and run CloudWatch Logs Insights queries across both log groups
Configure AWS CloudTrail to send events to Amazon CloudWatch Logs, deploy the CloudWatch Agent on EC2 to push application logs to CloudWatch Logs, and run CloudWatch Logs Insights queries across both log groups is correct because it centralizes both API event history and instance application logs inside CloudWatch Logs so you can run unified, ad hoc queries with CloudWatch Logs Insights.
Configure AWS CloudTrail to send events to Amazon CloudWatch Logs, deploy the CloudWatch Agent on EC2 to push application logs to CloudWatch Logs, and run CloudWatch Logs Insights queries across both log groups places CloudTrail records and application logs in the same log service which removes the need to move data between different analytics tools. This lets CloudWatch Logs Insights search and correlate multiple log groups with a single query interface and reduces operational overhead for a small fleet running across two Regions.
Configure AWS CloudTrail to write API events to Amazon S3, use the CloudWatch Agent on EC2 to send application logs to Amazon S3, and query both with Amazon Athena is incorrect because the CloudWatch Agent does not write directly to S3 and you cannot rely on this agent to land both datasets in S3 without additional forwarding components.
Deliver AWS CloudTrail events to Amazon Kinesis Data Streams, have the CloudWatch Agent publish instance logs to Kinesis Data Streams, and analyze both streams with Kinesis Data Analytics is incorrect because CloudTrail does not natively deliver to Kinesis Data Streams and the CloudWatch Agent does not publish to Kinesis Data Streams, so the described pipeline is not supported without extra custom integration.
Send AWS CloudTrail logs to Amazon S3, forward EC2 application logs to Amazon CloudWatch Logs using the CloudWatch Agent, and use Amazon Athena to query both datasets is incorrect because Athena queries data on S3 and cannot directly query CloudWatch Logs, so this option would split analytics across two different query tools rather than providing a single unified query layer.
Put both API events and instance logs into the same log store so you can run single ad hoc queries. Use CloudWatch Logs Insights when you need to search and correlate multiple log groups.
All AWS questions come from certificationexams.pro and my Certified DevOps Engineer Udemy course.
Which approach ensures that CI/CD build artifacts are encrypted at rest while requiring the least operational effort?
-
✓ C. Use AWS CodeBuild with KMS-encrypted artifacts
Use AWS CodeBuild with KMS-encrypted artifacts is correct because it is a fully managed build service that natively integrates with AWS KMS to encrypt artifacts at rest while removing the need to operate build hosts and perform patching or scaling tasks.
Use AWS CodeBuild with KMS-encrypted artifacts meets both goals of strong encryption and minimal operational effort because KMS provides customer controlled keys and policies and CodeBuild handles the build infrastructure for you. This combination gives you key management features like access controls and rotation while avoiding ongoing administration of CI servers.
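The sketch below shows roughly how that looks when creating a CodeBuild project with boto3. The project name, repository URL, artifact bucket, service role, and KMS key ARN are all placeholders.

```python
import boto3

codebuild = boto3.client("codebuild")

# All names and ARNs below are hypothetical
codebuild.create_project(
    name="artifact-encryption-demo",
    source={"type": "GITHUB", "location": "https://github.com/example/app.git"},
    artifacts={
        "type": "S3",
        "location": "example-build-artifacts",  # destination bucket for build output
        "packaging": "ZIP",
    },
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::111122223333:role/codebuild-service-role",
    # Customer managed KMS key used to encrypt the build artifacts at rest
    encryptionKey="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
```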
Store artifacts in Amazon S3 with default bucket encryption and run Jenkins on EC2 does provide encryption at rest for the stored objects, but it requires you to operate and maintain the Jenkins EC2 instances which increases ongoing operational work.
Use AWS Systems Manager to patch Jenkins instances and encrypt EBS volumes improves instance security and disk encryption, but it does not directly enforce artifact encryption for build outputs and it still leaves you with substantial management of build servers.
Run Jenkins on Amazon ECS and store artifacts with S3 SSE-S3 reduces some operational overhead compared to raw EC2, yet you still run and manage Jenkins, and SSE-S3 uses S3-managed keys, which give you less control than KMS keys. This option therefore does not minimize operations or provide the same key management capabilities as a KMS-integrated, fully managed build service.
Minimize operational effort by choosing fully managed CI services that integrate with AWS KMS for artifact encryption so you get both strong key control and reduced maintenance.
A DevOps engineer at a global logistics firm, Aurora Freight Systems, operates a hybrid environment that links its on-premises campus to AWS through an AWS Direct Connect gateway. The team needs to automate operating system updates for about 180 Windows servers running both in the data center and on Amazon EC2 by using AWS Systems Manager. What steps should be configured to enable this patching workflow across the hybrid fleet? (Choose 2)
-
✓ B. Install the SSM Agent on the on-prem Windows servers using the activation code and ID, register them so they appear with an mi- prefix in the console, and apply updates with Patch Manager
-
✓ E. Create a single IAM service role for Systems Manager that the service can assume with STS AssumeRole, register the role so activations can be created, and use it for managed-instance activation of the data center machines
Install the SSM Agent on the on-prem Windows servers using the activation code and ID, register them so they appear with an mi- prefix in the console, and apply updates with Patch Manager and Create a single IAM service role for Systems Manager that the service can assume with STS AssumeRole, register the role so activations can be created, and use it for managed-instance activation of the data center machines are correct because together they enroll on premises Windows servers as hybrid managed instances and provide the IAM trust and permissions that Systems Manager needs to run Patch Manager across the hybrid fleet.
Install the SSM Agent on the on-prem Windows servers using the activation code and ID, register them so they appear with an mi- prefix in the console, and apply updates with Patch Manager is correct because hybrid servers must run the SSM Agent and be registered through a hybrid activation so they appear as managed instances with the mi- prefix. Patch Manager is the Systems Manager capability designed to scan, approve, and apply OS patches using patch baselines and maintenance windows so it automates Windows updates at scale.
Create a single IAM service role for Systems Manager that the service can assume with STS AssumeRole, register the role so activations can be created, and use it for managed-instance activation of the data center machines is correct because hybrid activations require an IAM role that grants the managed instance permissions and that Systems Manager can assume when issuing activations. A single properly configured service role with the managed instance policy simplifies activation creation and lets the service manage on premises machines securely.
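For context, creating the hybrid activation itself is a single API call once the service role exists. A minimal boto3 sketch, with a hypothetical role name, looks like this; the returned code and ID are then used when installing and registering the SSM Agent on each Windows server.

```python
import boto3

ssm = boto3.client("ssm")

# The role name is hypothetical. The role trusts ssm.amazonaws.com and has the
# AmazonSSMManagedInstanceCore managed policy attached.
activation = ssm.create_activation(
    Description="Aurora Freight on-prem Windows servers",
    IamRole="SSMServiceRoleForHybridInstances",
    RegistrationLimit=180,
    DefaultInstanceName="aurora-onprem-windows",
)

# These values are supplied to the SSM Agent during registration, after which
# the machines appear in the console with an mi- prefix.
print("Activation Code:", activation["ActivationCode"])
print("Activation ID:", activation["ActivationId"])
```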
Create multiple IAM service roles for Systems Manager that use STS AssumeRoleWithSAML, register the roles to issue service tokens, and use them to run managed-instance activation is wrong because hybrid activations do not use AssumeRoleWithSAML or multiple roles and SSM expects a service role trusted for AssumeRole to create activations.
Install the SSM Agent and register the servers so they appear with an i- prefix, then orchestrate patches through State Manager is wrong because on premises managed instances register with the mi- prefix not i- and Patch Manager is the designated feature for OS patching rather than State Manager.
Use AWS Quick Setup to enable Patch Manager across all Regions without creating a hybrid activation or service role, allowing Systems Manager to auto-discover on-premises servers is wrong because Quick Setup does not auto discover on premises servers and hybrid activations plus the service role are required to register and manage data center machines.
Remember that hybrid managed instances use IDs that start with mi- and that on premises servers require a hybrid activation plus a single IAM service role that uses AssumeRole. Use Patch Manager to automate OS updates across the hybrid fleet.
Northwind Press runs a stateless web API on Amazon EC2 behind an Application Load Balancer in us-west-2, and DNS for shopwidget.net is hosted in Amazon Route 53 with health checks that verify availability. A second identical stack is deployed in eu-central-1. The team needs requests to go to the Region with the lowest latency and to fail over automatically to the other Region if a Regional outage occurs. What Route 53 configuration should be implemented?
-
✓ C. Create a subdomain named na.shopwidget.net with failover routing, setting the us-west-2 ALB as primary and the eu-central-1 ALB as secondary; create eu.shopwidget.net with failover routing, setting the eu-central-1 ALB as primary and the us-west-2 ALB as secondary; then configure latency-based alias records for shopwidget.net that point to na.shopwidget.net and eu.shopwidget.net
Create a subdomain named na.shopwidget.net with failover routing, setting the us-west-2 ALB as primary and the eu-central-1 ALB as secondary; create eu.shopwidget.net with failover routing, setting the eu-central-1 ALB as primary and the us-west-2 ALB as secondary; then configure latency-based alias records for shopwidget.net that point to na.shopwidget.net and eu.shopwidget.net is correct because it combines regional health checks and failover with an apex routing policy that selects the lowest latency Region for users.
This approach uses failover records at the Region subdomain level so each Region can fail over to the other when its health check fails. The apex record uses latency-based routing to pick the Region with the best performance for the client while still allowing the subdomain-level failover to redirect traffic to the other Region during a Regional outage.
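The sketch below shows two of the records in boto3 form: the PRIMARY failover record for na.shopwidget.net and the latency-based apex alias that points at it. The hosted zone IDs, ALB DNS name, and set identifiers are placeholders, and the matching SECONDARY and eu.shopwidget.net records would follow the same pattern with the Regions reversed.

```python
import boto3

route53 = boto3.client("route53")

ZONE_ID = "Z0000000EXAMPLE"  # hypothetical hosted zone for shopwidget.net
ALB_US_WEST_2 = {
    "HostedZoneId": "Z1H1FL5HABSF5",  # placeholder for the ALB's canonical hosted zone ID
    "DNSName": "api-usw2.us-west-2.elb.amazonaws.com",
    "EvaluateTargetHealth": True,
}

# 1) Failover record at the Region subdomain: the us-west-2 ALB is PRIMARY for na.shopwidget.net
route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "na.shopwidget.net",
            "Type": "A",
            "SetIdentifier": "na-primary",
            "Failover": "PRIMARY",
            "AliasTarget": ALB_US_WEST_2,
        },
    }]},
)

# 2) Latency-based alias at the apex that steers clients to the Region subdomain
route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "shopwidget.net",
            "Type": "A",
            "SetIdentifier": "latency-us-west-2",
            "Region": "us-west-2",
            "AliasTarget": {
                "HostedZoneId": ZONE_ID,  # alias to a record in the same hosted zone
                "DNSName": "na.shopwidget.net",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
```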
Create a subdomain named na.shopwidget.net with multivalue answer routing, listing the us-west-2 ALB first and the eu-central-1 ALB second; create eu.shopwidget.net similarly with the EU ALB first; then configure failover routing on shopwidget.net aliased to na.shopwidget.net and eu.shopwidget.net is incorrect because multivalue answer routing returns multiple healthy answers and does not prioritize lowest latency or provide deterministic regional failover sequencing.
Create a subdomain named na.shopwidget.net with latency-based routing that targets the us-west-2 ALB first and the eu-central-1 ALB second; create eu.shopwidget.net similarly; then configure failover routing on shopwidget.net that aliases to na.shopwidget.net as primary and eu.shopwidget.net as secondary is incorrect because it applies failover at the apex instead of at the Region-specific records and that reverses the necessary layering for reliable regional failover combined with latency steering.
Create a subdomain named na.shopwidget.net with weighted routing (us-west-2 weight 3, eu-central-1 weight 1) and eu.shopwidget.net with weighted routing (eu-central-1 weight 3, us-west-2 weight 1); then configure geolocation routing on shopwidget.net mapping North America to na.shopwidget.net and Europe to eu.shopwidget.net is incorrect because weighted and geolocation policies do not guarantee the lowest latency for clients and they do not provide the simple health-checked cross-Region failover that failover records provide.
Layer routing policies so that failover is applied per Region with health checks and the apex uses latency-based routing to select the best Region.
An e-commerce analytics startup runs compute jobs on an Auto Scaling group of EC2 instances that send data to an external REST API at http://api.labexample.io as part of processing. After a code update switched the call to HTTPS, the job began failing at the external call, although the engineer can reach the API successfully using Postman from the internet and the VPC still uses the default network ACL. What is the next best step to pinpoint the root cause?
-
✓ C. Log in to the console, inspect VPC Flow Logs for REJECT entries originating from the Auto Scaling instances, and validate that the security group outbound rules permit HTTPS to the API endpoint
Log in to the console, inspect VPC Flow Logs for REJECT entries originating from the Auto Scaling instances, and validate that the security group outbound rules permit HTTPS to the API endpoint is correct because the failure began after switching from HTTP to HTTPS and that change requires outbound TCP 443 to be allowed. Checking Flow Logs and egress rules directly targets the likely network denial that would break HTTPS calls.
Log in to the console, inspect VPC Flow Logs for REJECT entries originating from the Auto Scaling instances, and validate that the security group outbound rules permit HTTPS to the API endpoint helps isolate the problem because VPC Flow Logs record whether traffic from the instances is accepted or rejected at the network interface. The security group controls egress from the instances and a missing rule for TCP 443 will cause HTTPS requests to fail even if HTTP previously worked.
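A quick boto3 sketch of that investigation might look like the following, assuming the flow logs are delivered to a CloudWatch Logs group in the default format and using hypothetical names for the log group and security group.

```python
import boto3

logs = boto3.client("logs")
ec2 = boto3.client("ec2")

FLOW_LOG_GROUP = "/vpc/flow-logs"            # hypothetical flow log destination
SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # security group on the Auto Scaling instances

# 1) Look for rejected outbound traffic to port 443 in the default flow log format
resp = logs.filter_log_events(
    logGroupName=FLOW_LOG_GROUP,
    filterPattern='[version, account, eni, src, dst, srcport, dstport=443, protocol, packets, bytes, start, end, action="REJECT", status]',
)
for event in resp.get("events", []):
    print(event["message"])

# 2) Confirm the security group actually allows egress on TCP 443
sg = ec2.describe_security_groups(GroupIds=[SECURITY_GROUP_ID])["SecurityGroups"][0]
for rule in sg["IpPermissionsEgress"]:
    print(rule.get("IpProtocol"), rule.get("FromPort"), rule.get("ToPort"), rule.get("IpRanges"))
```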
Log in to the console, review application logs in CloudWatch Logs, and verify that the security group and network ACL allow outbound access to the external API is not the best first step because application logs may not show low level network rejects and the default network ACL is typically permissive so it is unlikely to be the immediate cause. CloudWatch Logs can help later but it is slower to expose a network egress denial than Flow Logs.
Log in to the console, examine VPC Flow Logs for ACCEPT records from the Auto Scaling instances, and confirm that ingress security group rules allow traffic from the external API is incorrect because the instances initiate the connection and ingress rules on the instances are not relevant for outbound requests. Looking for ACCEPT entries will not reveal packets that were rejected and it can miss egress rule denials.
AWS Global Accelerator is incorrect because that service improves inbound client performance and resiliency, and it does not address outbound connections from EC2 instances to third-party APIs.
When an EC2 call worked over HTTP but fails over HTTPS, first check the egress security group rules for TCP 443 and use VPC Flow Logs to find REJECT entries that show dropped traffic.
A fintech startup is deploying a serverless Angular admin portal on AWS for employees only. In the CI/CD pipeline, AWS CodeBuild compiles the app and uploads the static assets to Amazon S3. The buildspec.yml shows version 0.2, an install phase using nodejs 18 with npm install -g @angular/cli, a build phase running ng build --configuration production, and a post_build command that executes aws s3 cp dist s3://harbor-city-angular-internal --acl authenticated-read. After release, the security team finds that any AWS account holder can read the objects in the bucket, even though the portal is intended to be internal. What should the DevOps engineer change to fix this in the most secure way?
-
✓ B. Create a bucket policy that limits reads to the company’s AWS accounts and remove the authenticated-read ACL from the upload step
Create a bucket policy that limits reads to the company’s AWS accounts and remove the authenticated-read ACL from the upload step is the correct choice because the pipeline currently applies a canned ACL that grants broad access and that must be replaced by an explicit resource policy that limits principals to the organization.
The pipeline flag authenticated-read maps to the AuthenticatedUsers canned ACL and it grants read access to any AWS principal with valid credentials. A bucket policy scoped to your account IDs or to specific IAM roles gives explicit and auditable access control and it avoids relying on ACLs for sensitive assets. You can also enable S3 Object Ownership to disable ACLs and ensure that bucket policies are the single source of truth for access.
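A minimal sketch of the fix in boto3 form, assuming a hypothetical organization ID of o-exampleorgid, could look like this. The pipeline's aws s3 cp step would also drop the --acl flag entirely.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "harbor-city-angular-internal"

# Disable ACLs so the bucket policy becomes the single source of truth for access
s3.put_bucket_ownership_controls(
    Bucket=BUCKET,
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)

# Deny reads from any principal outside the company's AWS Organization (ID is hypothetical)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyReadsOutsideOrg",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringNotEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Once ACLs are disabled, an upload that still specifies a canned ACL such as authenticated-read fails, which surfaces the pipeline misconfiguration immediately.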
Add --sse AES256 to the aws s3 cp command to enable server-side encryption is incorrect because it only encrypts objects at rest and does not change who can read them, so it does not fix the exposure.
Replace --acl authenticated-read with --acl public-read is incorrect because it would make the files readable by the entire internet, which is far less secure than the existing configuration.
Put Amazon CloudFront in front of the S3 bucket and keep the existing ACL does not solve the underlying ACL problem because other AWS accounts could still read the objects unless you also lock down S3 with a policy or use origin access control.
Prefer managing S3 access with bucket policies or Object Ownership and avoid canned ACLs in CI pipelines to prevent accidental wide access.
Riverton Analytics needs to detect suspicious behavior that suggests their Amazon EC2 instances have been breached and to receive email alerts whenever such indicators appear. What is the most appropriate solution?
-
✓ B. Enable Amazon GuardDuty to monitor for EC2 compromise behaviors and route matching findings through an Amazon EventBridge rule to an Amazon SNS topic for email
Enable Amazon GuardDuty to monitor for EC2 compromise behaviors and route matching findings through an Amazon EventBridge rule to an Amazon SNS topic for email is correct because it provides continuous threat detection across network and account telemetry and lets you route findings to an SNS topic for email alerts.
Amazon GuardDuty continuously analyzes VPC Flow Logs, DNS logs, and CloudTrail events to surface indicators of instance compromise. Using EventBridge rules to match those findings and forward them to an Amazon SNS topic provides a reliable pipeline for timely email notifications and automated responses.
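In boto3 terms, the alerting path is roughly the following. The topic ARN and rule name are placeholders, the filter on resourceType is an assumption about which findings the team cares about, and the SNS topic needs a resource policy that lets events.amazonaws.com publish to it.

```python
import json
import boto3

events = boto3.client("events")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"  # hypothetical SNS topic with an email subscription

# EventBridge rule that matches GuardDuty findings for EC2 instance resources
events.put_rule(
    Name="guardduty-ec2-findings",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"resource": {"resourceType": ["Instance"]}},
    }),
    State="ENABLED",
)

# Forward matching findings to the SNS topic that emails the security team
events.put_targets(
    Rule="guardduty-ec2-findings",
    Targets=[{"Id": "sns-email", "Arn": TOPIC_ARN}],
)
```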
Configure Amazon Inspector to detect instance compromise and have Inspector publish state changes to an Amazon SNS topic is incorrect because Amazon Inspector is focused on vulnerability scanning and host assessment rather than continuous runtime threat detection.
Create an AWS CloudTrail trail with metric filters and Amazon CloudWatch alarms for suspected compromise API calls and notify via Amazon SNS is incorrect since AWS CloudTrail with CloudWatch metric filters can detect suspicious API activity but it does not capture the network, DNS, and runtime signals that Amazon GuardDuty uses to identify compromises.
Use AWS Security Hub to detect EC2 compromises and send alerts via an Amazon SNS topic is incorrect because AWS Security Hub aggregates and normalizes findings from primary detectors like Amazon GuardDuty and it is not the primary threat detection engine itself.
Pair GuardDuty findings with EventBridge rules and an SNS topic to get timely email alerts when suspected EC2 compromises are detected.
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
