Sample Questions for the AWS DevOps Engineer Certification

All questions come from certificationexams.pro and my AWS DevOps Engineer Udemy course.
AWS Certified DevOps Engineer Exam Topics Test
The AWS Certified DevOps Engineer Professional exam validates your ability to automate, operate, and manage distributed applications and systems on the AWS platform. It focuses on continuous delivery, automation of security controls, governance, monitoring, metrics, and high availability strategies that support scalable and resilient cloud solutions.
To prepare effectively, explore these AWS DevOps Engineer Practice Questions designed to reflect the structure, logic, and depth of the real certification exam. You’ll find Real AWS DevOps Exam Questions that mirror complex deployment and automation scenarios, along with AWS DevOps Engineer Sample Questions covering CI/CD pipelines, CloudFormation, Elastic Beanstalk, ECS, and CodePipeline integrations.
Targeted AWS Exam Topics
Each section includes AWS DevOps Engineer Exam Questions and Answers created to teach as well as test. These scenarios challenge your understanding of how to build automated solutions using services like CloudWatch, CodeDeploy, and Systems Manager. The explanations clarify not only what the correct answers are, but why, helping you reason through real-world DevOps trade-offs.
For additional preparation, try the AWS DevOps Engineer Exam Simulator and complete AWS DevOps Engineer Practice Tests that track your progress over time. These simulated exams provide the same tone and pacing as the official test, helping you get comfortable with AWS exam timing and complexity.
If you prefer focused study materials, the AWS DevOps Engineer Exam Dump, AWS DevOps Engineer Braindump, and AWS DevOps Engineer Sample Questions & Answers collections organize hundreds of authentic practice items to strengthen your understanding of automation, monitoring, and deployment pipelines across multi-account AWS environments.
Mastering these AWS DevOps Engineer Exam Questions gives you the confidence to pass the certification and the practical skills to deliver automated, secure, and scalable solutions in real AWS production environments.
AWS Certified DevOps Engineer Sample Questions
Question 1
A media analytics startup, NovaStream Media, runs its customer portal on Amazon EC2 with deployments handled by AWS CodeDeploy. The application uses Amazon RDS for PostgreSQL for transactional data and Amazon DynamoDB to persist user session state. As the platform engineer, how should the application obtain secure access to both the RDS database and DynamoDB?
❏ A. Put both the RDS password and supposed DynamoDB credentials in AWS Secrets Manager and grant the EC2 instance role permission to read the secrets
❏ B. Keep IAM user access keys and the RDS password in Secrets Manager and let the EC2 instance role retrieve them
❏ C. Store the RDS database credentials in AWS Secrets Manager and attach an EC2 instance profile that can read that secret and call DynamoDB APIs
❏ D. Use AWS Systems Manager Parameter Store SecureString for both the RDS credentials and DynamoDB, and allow the instance role to read those parameters
Question 2
How can CloudFormation stacks automatically use the latest approved hardened AMI in a scalable and decoupled way?
❏ A. Tag the latest AMI in EC2 Image Builder and have CloudFormation look it up by tag
❏ B. Write the current AMI ID to SSM Parameter Store and reference it via CloudFormation SSM dynamic reference
❏ C. Store the AMI ID in S3 and read it via a cross-stack export
❏ D. Use AWS Service Catalog with a Launch Template referencing $Latest
Question 3
At BrightWave Logistics, a platform engineer must deploy an application using AWS CloudFormation, but their IAM permissions do not allow creating several resources defined in the template. What approach enables the engineer to launch the stack with the minimum necessary privileges?
❏ A. Create an AWS CloudFormation service role with the necessary permissions and set it on the stack; use it for deployments
❏ B. Create an AWS CloudFormation service role with full administrative permissions, attach it to the stack, and allow iam:PassRole only when a ResourceTag matches
❏ C. Create an AWS CloudFormation service role scoped to the needed actions, associate it with the stack, and grant the engineer iam:PassRole to let CloudFormation assume that role during deployments
❏ D. Create an AWS CloudFormation service role with required permissions and add an aws:SourceIp condition listing developer IPs; associate it to the stack and grant iam:PassRole
Question 4
How can you automate failover of an Aurora MySQL cluster across regions so that a replica is promoted and a central endpoint is updated for client reconnection while achieving an RTO under 10 minutes?
❏ A. Aurora Global Database managed failover
❏ B. Amazon Route 53 DNS failover between writer and reader endpoints
❏ C. EventBridge rule triggers Lambda to promote the replica and update Systems Manager Parameter Store
❏ D. Amazon RDS Proxy for multi-Region failover with a stable endpoint
Question 5
NorthPoint Media, a global streaming firm, uses a hub-and-spoke multi-account model on AWS where a shared production account hosts Amazon EC2 instances for several internal divisions. A single division might operate two or more member accounts that interact with resources in the production account. Over the past 90 days, engineers from one division accidentally terminated EC2 instances that belonged to another division. The platform team needs a multi-account governance approach so only the division that owns a resource can terminate its own EC2 instances and related assets. What should they implement?
❏ A. Use a centralized AWS Config aggregator with AWS Control Tower and Customizations for AWS Control Tower to restrict EC2 termination per division
❏ B. Enable EC2 termination protection on all instances and route termination requests through AWS Systems Manager Change Manager approvals
❏ C. Use AWS Organizations with OUs and a per business unit IAM role in the production account that allows TerminateInstances only on resources it owns, assumed via cross-account trust
❏ D. Create an SCP in the production account via AWS Service Catalog that permits business-unit-specific termination actions and attach it to the appropriate OUs
Question 6
When managing an Amazon RDS for MySQL instance with CloudFormation how can you perform a major engine version upgrade while minimizing downtime?
❏ A. Enable AutoMinorVersionUpgrade and update the stack
❏ B. Use AWS Database Migration Service
❏ C. Update EngineVersion with AllowMajorVersionUpgrade for an in-place change
❏ D. Provision a read replica, upgrade the replica to the target major version, promote it, then cut over
Question 7
A regional insurance firm operates an Oracle Real Application Clusters database in its data center and plans to move it to AWS. The platform lead asked the DevOps team to automate operating system patching on the servers that will run the database and to implement scheduled backups with roughly 60 days of retention to meet disaster recovery objectives. What is the simplest approach to achieve these goals with minimal engineering effort?
❏ A. Migrate the database to Amazon Aurora, enable automated backups, and rely on Aurora maintenance windows for patching
❏ B. Rehost the Oracle RAC database on EBS-backed Amazon EC2, install the SSM agent, use AWS Systems Manager Patch Manager for OS patches, and configure Amazon Data Lifecycle Manager to schedule EBS snapshots
❏ C. Move the RAC database to Amazon EC2 and trigger CreateSnapshot with an AWS Lambda function on an Amazon EventBridge schedule, and use AWS CodeDeploy and AWS CodePipeline to manage patching
❏ D. Move the on-premises database to Amazon RDS for Oracle with Multi-AZ and let RDS handle backups and host patching
Question 8
How can a company centrally collect operating system and application logs from on-premises servers and Amazon EC2 instances and run low-cost ad hoc queries with minimal setup?
❏ A. CloudWatch Logs + Logs Insights
❏ B. CloudWatch agent to CloudWatch Logs, Firehose to S3, query with Athena
❏ C. CloudWatch agent to CloudWatch Logs, stream to Amazon OpenSearch Service
❏ D. AWS Security Lake
Question 9
The platform team at a global sportswear marketplace is rolling out its primary web service to an EC2 Auto Scaling group using AWS CodeDeploy with an in-place, batched deployment. When the rollout finishes, the group has six instances, where four serve the new build and two still run the previous version, yet CodeDeploy marks the deployment as successful. What is the most likely cause of this situation?
❏ A. A CloudWatch alarm fired during the rollout
❏ B. Two instances lacked IAM permissions to retrieve the revision from Amazon S3
❏ C. An Auto Scaling scale-out event occurred during the deployment, so the new instances launched with the last successfully deployed revision
❏ D. The Auto Scaling group is using an outdated launch template or launch configuration version
Question 10
How can CloudFormation automatically resolve the latest golden AMI ID to use for EC2 instances when a stack is updated?
❏ A. EventBridge rule invoking Lambda to update template with newest AMI
❏ B. CloudFormation dynamic reference to SSM Parameter Store for AMI ID
❏ C. EC2 Image Builder pipeline automatically chosen by CloudFormation
❏ D. AWS Service Catalog

Question 11
BlueWave Labs runs all development for several independent teams in one AWS account in a single Region. The DevOps engineer needs an easy way to alert the operations manager when new resource provisioning is getting close to account service quotas. Which approach requires the least custom development effort?
❏ A. Build a Lambda function to enumerate account resources and compare usage against published service quotas, trigger it on a schedule with EventBridge, and publish alerts to an SNS topic
❏ B. Create an AWS Config custom rule that evaluates service quota consumption and posts to an SNS topic, with a Lambda subscriber that notifies the operations manager
❏ C. Schedule a Lambda to refresh AWS Trusted Advisor service quota checks with EventBridge and add another EventBridge rule that matches Trusted Advisor limit events and publishes to an SNS topic subscribed by the operations manager
❏ D. Create a Lambda function to refresh AWS Health checks on a timer and configure an EventBridge rule that matches Trusted Advisor events and sends notifications to an SNS topic
Question 12
How can you automatically remediate IAM access keys that AWS Health reports as exposed by deleting the key, generating a summary of recent CloudTrail activity, and notifying the team?
❏ A. Amazon GuardDuty and Amazon Macie with Step Functions
❏ B. EventBridge rule for aws.health AWS_RISK_CREDENTIALS_EXPOSED to Step Functions
❏ C. AWS Config multi-account rule invoking Step Functions
❏ D. Amazon Detective with EventBridge automation
Question 13
A travel-tech firm plans to move its marketing website to AWS across three accounts in a landing zone. The existing platform runs Windows IIS with Microsoft SQL Server on premises. They need elastic scaling and must capture ad click attributes from the site, delivering events to an Amazon S3 bucket for billing and into Amazon Redshift for analytics within a few minutes. Which architecture should they implement to meet these objectives while keeping the web tier stateless?
❏ A. Run the site on stateless EC2 in an Auto Scaling group and move SQL Server to Amazon RDS; publish click data with Amazon Kinesis Data Streams directly to Amazon S3 for billing and send another stream to Amazon Redshift
❏ B. Run the site on stateless EC2 in an Auto Scaling group and move SQL Server to Amazon RDS; use Amazon Athena to push click logs to Amazon S3 and also load them into Amazon Redshift
❏ C. Run the site on stateless EC2 in an Auto Scaling group and move SQL Server to Amazon RDS; use Amazon Kinesis Data Firehose to deliver click events to Amazon S3 for billing and a second Firehose delivery stream to load Amazon Redshift
❏ D. Run the site on stateless EC2 in an Auto Scaling group and move SQL Server to Amazon RDS; send click events to Amazon MSK and rely on an AWS Glue crawler and catalog to populate Amazon Redshift
Question 14
How does the AWS Config rule cloudformation-stack-drift-detection-check treat CloudFormation custom resources for drift and what can cause the rule to report NON_COMPLIANT while the CloudFormation console shows the stack as IN_SYNC? (Choose 2)
❏ A. Use CloudFormation change sets to detect drift
❏ B. Drift detection does not evaluate custom resources
❏ C. Provide a cloudformationRoleArn with broader permissions to fix the mismatch
❏ D. Define all custom resource properties so drift can be detected
❏ E. The rule calls DetectStackDrift and treats API throttling or failures as NON_COMPLIANT
Question 15
A regional logistics startup is moving its Docker workloads from its data center to AWS. They will run several services on Amazon ECS using the EC2 launch type behind an Application Load Balancer. The operations team needs the platform to automatically aggregate all container and load balancer logs and deliver them to an Amazon S3 bucket for near-real-time analysis, targeting about a two-minute end-to-end delay. How should the team configure the ECS environment to achieve this? (Choose 3)
❏ A. Enable access logging on the Application Load Balancer and configure the destination to the specified S3 bucket
❏ B. Set up Amazon Macie to scan the S3 bucket and provide near-real-time analysis of the access logs
❏ C. Use the awslogs log driver in ECS task definitions, install the CloudWatch Logs agent on the container instances, and grant the needed permissions on the instance role
❏ D. Create a CloudWatch Logs subscription filter to an Amazon Kinesis Data Firehose delivery stream that writes continuously to the S3 bucket
❏ E. Turn on Detailed Monitoring in CloudWatch for the load balancer to store access logs in S3
❏ F. Use an AWS Lambda function triggered by an Amazon EventBridge schedule every 2 minutes to call CreateLogGroup and CreateExportTask to move logs to S3
Question 16
Which actions provide near real time alerts for Trusted Advisor findings that a security group has SSH port 22 open to 0.0.0.0/0 and enforce automatic restriction to a specified IP? (Choose 3)
❏ A. Custom AWS Config remediation that invokes Lambda to edit security groups
❏ B. EventBridge rule and Lambda to refresh Trusted Advisor via Support API every 10 minutes and publish to SNS
❏ C. AWS Config rule flagging SGs with port 22 open to 0.0.0.0/0 and notifying via SNS
❏ D. Amazon GuardDuty to detect open security groups and auto-remediate
❏ E. AWS Config auto-remediation using Systems Manager Automation to restrict SG port 22 to a specified IP
❏ F. AWS Security Hub with Foundational Security Best Practices to auto-remediate Trusted Advisor findings
Question 17
A ticketing startup operates a Node.js web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. During abrupt traffic surges, some scale-out events fail and intermittent errors appear. CloudWatch logs show messages like: ‘Instance did not finish the user’s lifecycle action; lifecycle action with token <abc-123> was abandoned due to heartbeat timeout.’ What should a DevOps engineer do to capture logs from all impacted instances and preserve them for later analysis? (Choose 3)
❏ A. Enable VPC Flow Logs for the subnets hosting the Auto Scaling group and export them to Amazon S3
❏ B. Configure an Auto Scaling lifecycle hook on instance termination and use an Amazon EventBridge rule to trigger an AWS Lambda function that runs AWS Systems Manager Run Command to pull application logs and upload them to Amazon S3
❏ C. Update the Auto Scaling group health check to point to the correct application port and protocol
❏ D. Turn on access logging at the target group and send logs to an Amazon S3 bucket
❏ E. Adjust the AWS CodeDeploy deployment group to resolve account deployment concurrency limits that have been reached
❏ F. Use Amazon Athena to query the logs directly from the Amazon S3 bucket
Question 18
What is the most operationally efficient method to send an email to a user after a successful sign-in to an Amazon Cognito User Pool?
❏ A. Use Amazon EventBridge with CloudTrail sign-in events to trigger Lambda and send via Amazon SES
❏ B. Use the Cognito Post Authentication trigger to invoke Lambda that emails via Amazon SES
❏ C. Call an Amazon API Gateway endpoint after login that invokes Lambda to send an SES email
❏ D. Use the Cognito Custom Message trigger with Lambda and Amazon SES for the login email
Question 19
A streaming media startup runs a serverless backend that handles tens of thousands of API calls with AWS Lambda and stores state in Amazon DynamoDB. Clients invoke a Lambda function through an Amazon API Gateway HTTP API to read large batches from the DynamoDB PlaybackSessions table. Although the table uses DynamoDB Accelerator, users still see cold-start delays of roughly 8–12 seconds during afternoon surges. Traffic reliably peaks from 3 PM to 6 PM and tapers after 9 PM. What Lambda configuration change should a DevOps engineer make to keep latency consistently low at all times?
❏ A. Configure reserved concurrency for the function and use Application Auto Scaling to set reserved concurrency to roughly half of observed peak traffic
❏ B. Enable provisioned concurrency for the function and configure Application Auto Scaling with a minimum of 2 and a maximum of 120 provisioned instances
❏ C. Increase the Lambda memory size to 8,192 MB to reduce initialization and execution time
❏ D. Set the function’s ephemeral storage to 10,240 MB to cache data in /tmp between invocations
Question 20
What is the best way to automate cross-account remediation of Amazon EC2 instances with persistently low utilization across 10 AWS accounts while filtering actions by tags using native AWS services?
❏ A. Amazon CloudWatch metrics with EventBridge and Lambda to stop or terminate by tag
❏ B. AWS Cost Anomaly Detection with EventBridge and Lambda to remove idle instances
❏ C. AWS Trusted Advisor Low Utilization EC2 check via EventBridge with a Lambda remediation filtered by tags
❏ D. Enroll in AWS Compute Optimizer and auto shutdown via EventBridge and Lambda
AWS DevOps Sample Exam Answers

Question 1
A media analytics startup, NovaStream Media, runs its customer portal on Amazon EC2 with deployments handled by AWS CodeDeploy. The application uses Amazon RDS for PostgreSQL for transactional data and Amazon DynamoDB to persist user session state. As the platform engineer, how should the application obtain secure access to both the RDS database and DynamoDB?
✓ C. Store the RDS database credentials in AWS Secrets Manager and attach an EC2 instance profile that can read that secret and call DynamoDB APIs
The most secure and AWS-aligned approach is to store the RDS database credentials in AWS Secrets Manager and attach an EC2 instance profile that can read that secret and call DynamoDB APIs. Secrets Manager provides encrypted storage and native rotation for RDS credentials, and the EC2 instance role supplies temporary credentials to retrieve the secret and access DynamoDB via IAM policies without hardcoding keys.
Put both the RDS password and supposed DynamoDB credentials in AWS Secrets Manager and grant the EC2 instance role permission to read the secrets is incorrect because DynamoDB does not rely on database-style credentials; access is granted with IAM, so there is nothing meaningful to store for DynamoDB.
Keep IAM user access keys and the RDS password in Secrets Manager and let the EC2 instance role retrieve them is not recommended because long-lived IAM user keys should not be used on EC2; instance profiles provide short-lived credentials and eliminate the need to distribute user keys.
Use AWS Systems Manager Parameter Store SecureString for both the RDS credentials and DynamoDB, and allow the instance role to read those parameters is weaker than the correct design since Parameter Store does not provide native RDS credential rotation, and DynamoDB access should be controlled with IAM rather than stored secrets.
Cameron’s Exam Tip
For applications on EC2, use IAM roles to access AWS services and store database passwords in AWS Secrets Manager with rotation. Remember that DynamoDB uses IAM and does not require stored credentials.
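To make the pattern concrete, here is a minimal sketch in Python with boto3. The secret name, table name, and Region are illustrative placeholders, not values from the question; the point is that every credential comes from the instance profile or Secrets Manager, never from code.

    import json
    import boto3

    # Credentials come from the EC2 instance profile; nothing is hardcoded.
    secrets = boto3.client("secretsmanager", region_name="us-east-1")
    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

    # Fetch the rotated RDS credentials stored in Secrets Manager.
    secret = json.loads(
        secrets.get_secret_value(SecretId="prod/portal/rds")["SecretString"]
    )
    db_user, db_password = secret["username"], secret["password"]

    # DynamoDB needs no stored credentials at all; IAM authorizes the call.
    dynamodb.Table("UserSessions").put_item(
        Item={"session_id": "abc-123", "user": "demo"}
    )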
Question 2
How can CloudFormation stacks automatically use the latest approved hardened AMI in a scalable and decoupled way?
✓ B. Write the current AMI ID to SSM Parameter Store and reference it via CloudFormation SSM dynamic reference
Write the current AMI ID to SSM Parameter Store and reference it via CloudFormation SSM dynamic reference is correct because CloudFormation dynamic references (ssm or ssm-secure) fetch the current parameter value at deploy time, letting teams publish a new AMI ID centrally while stacks automatically consume the latest approved image without template changes or tight coupling.
The option Tag the latest AMI in EC2 Image Builder and have CloudFormation look it up by tag is incorrect because CloudFormation cannot natively resolve an AMI by tag. This requires a custom resource, adding complexity and operational overhead.
Store the AMI ID in S3 and read it via a cross-stack export is incorrect since S3 files and cross-stack exports are not purpose-built for parameter discovery, complicate governance, and lack built-in secure resolution semantics compared to Parameter Store dynamic references.
Use AWS Service Catalog with a Launch Template referencing $Latest is incorrect because Launch Templates require an explicit AMI ID and do not inherently track the newest AMI. Service Catalog does not solve the dynamic AMI resolution for CloudFormation and introduces additional management layers.
Cameron’s Exam Tip
Prefer dynamic references to SSM Parameter Store when a template must pull the freshest configuration at deploy time. Recognize that tags are not a native CloudFormation lookup for AMIs. Avoid S3 files for configuration pointers when a parameter store exists. Look for wording emphasizing decoupling, scalability, and automatic pickup of the latest value to steer you toward SSM Parameter Store dynamic references.
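A minimal sketch of the publish side, assuming a hypothetical parameter name /golden/ami (Python with boto3). The comment at the end shows how a template would then consume the value through a dynamic reference without any template change.

    import boto3

    ssm = boto3.client("ssm")

    # The image pipeline publishes each newly approved AMI ID here.
    ssm.put_parameter(
        Name="/golden/ami",
        Value="ami-0123456789abcdef0",  # placeholder AMI ID
        Type="String",
        Overwrite=True,
    )

    # A stack then references the parameter at deploy time, e.g.:
    #   ImageId: '{{resolve:ssm:/golden/ami}}'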
Question 3
At BrightWave Logistics, a platform engineer must deploy an application using AWS CloudFormation, but their IAM permissions do not allow creating several resources defined in the template. What approach enables the engineer to launch the stack with the minimum necessary privileges?
✓ C. Create an AWS CloudFormation service role scoped to the needed actions, associate it with the stack, and grant the engineer iam:PassRole to let CloudFormation assume that role during deployments
Create an AWS CloudFormation service role scoped to the needed actions, associate it with the stack, and grant the engineer iam:PassRole to let CloudFormation assume that role during deployments is correct because CloudFormation can then use that role’s permissions to create and manage stack resources on the user’s behalf, while the role policy itself enforces least privilege.
Create an AWS CloudFormation service role with the necessary permissions and set it on the stack; use it for deployments is insufficient because the user must also have iam:PassRole to allow CloudFormation to assume the role during operations.
Create an AWS CloudFormation service role with full administrative permissions, attach it to the stack, and allow iam:PassRole only when a ResourceTag matches is not appropriate since it violates least-privilege and ResourceTag conditions are not supported to restrict iam:PassRole for roles.
Create an AWS CloudFormation service role with required permissions and add an aws:SourceIp condition listing developer IPs; associate it to the stack and grant iam:PassRole is ineffective because CloudFormation uses its own IP addresses for downstream calls, so SourceIp conditions won’t apply to those requests.
Cameron’s Exam Tip
When a user lacks direct permissions to create stack resources, use a CloudFormation service role with least privilege and grant the user iam:PassRole so CloudFormation can assume it; avoid attempting to control PassRole with ResourceTag or using aws:SourceIp for CloudFormation calls.
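For illustration, a least-privilege PassRole grant for the engineer might look like the following sketch (Python with boto3; the account ID, role name, and user name are placeholders). The iam:PassedToService condition keeps the role passable only to CloudFormation.

    import json
    import boto3

    iam = boto3.client("iam")

    pass_role_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/CfnDeployRole",
            # Only CloudFormation may receive this role.
            "Condition": {
                "StringEquals": {"iam:PassedToService": "cloudformation.amazonaws.com"}
            },
        }],
    }

    iam.put_user_policy(
        UserName="platform-engineer",
        PolicyName="AllowPassCfnDeployRole",
        PolicyDocument=json.dumps(pass_role_policy),
    )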
Question 4
How can you automate failover of an Aurora MySQL cluster across regions so that a replica is promoted and a central endpoint is updated for client reconnection while achieving an RTO under 10 minutes?
✓ C. EventBridge rule triggers Lambda to promote the replica and update Systems Manager Parameter Store
EventBridge rule triggers Lambda to promote the replica and update Systems Manager Parameter Store is correct because EventBridge can capture RDS and Aurora failure or failover events, invoke a Lambda function to promote the cross-Region replica to writer, and then update a Parameter Store key that applications read during reconnect. This decouples the connection string from code and enables fast, automated endpoint rotation to meet a tight RTO.
Aurora Global Database managed failover is not sufficient because while it can orchestrate a role switch between Regions, it does not automatically detect failures nor update application endpoints; you still need automation to redirect clients.
Amazon Route 53 DNS failover between writer and reader endpoints cannot promote an Aurora replica to writer, and Aurora endpoint names can change during promotion, so DNS health checks alone do not complete the recovery workflow.
Amazon RDS Proxy for multi-Region failover with a stable endpoint is incorrect because RDS Proxy is scoped to a single Region and cluster and does not perform cross-Region promotion or endpoint management.
Cameron’s Exam Tip
For the exam, look for solutions that combine event detection, automation, and centralized configuration. Use EventBridge for detection, Lambda for orchestration, and a configuration store like Parameter Store or Secrets Manager for indirection. DNS health checks help with routing but cannot perform database role changes. Prefer approaches that let clients fetch the current endpoint at reconnect rather than hardcoding it.
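A compressed sketch of the Lambda orchestration (Python with boto3); the DR Region, cluster identifier, and parameter name are assumptions for illustration.

    import boto3

    rds = boto3.client("rds", region_name="us-west-2")   # DR Region
    ssm = boto3.client("ssm", region_name="us-west-2")

    def handler(event, context):
        # Promote the cross-Region Aurora replica cluster to standalone writer.
        rds.promote_read_replica_db_cluster(DBClusterIdentifier="portal-dr")

        # Publish the writer endpoint; clients read this key on reconnect.
        endpoint = rds.describe_db_clusters(
            DBClusterIdentifier="portal-dr"
        )["DBClusters"][0]["Endpoint"]
        ssm.put_parameter(
            Name="/portal/db/endpoint", Value=endpoint,
            Type="String", Overwrite=True,
        )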
Question 5
NorthPoint Media, a global streaming firm, uses a hub-and-spoke multi-account model on AWS where a shared production account hosts Amazon EC2 instances for several internal divisions. A single division might operate two or more member accounts that interact with resources in the production account. Over the past 90 days, engineers from one division accidentally terminated EC2 instances that belonged to another division. The platform team needs a multi-account governance approach so only the division that owns a resource can terminate its own EC2 instances and related assets. What should they implement?
✓ C. Use AWS Organizations with OUs and a per business unit IAM role in the production account that allows TerminateInstances only on resources it owns, assumed via cross-account trust
The scalable approach is to use AWS Organizations for account grouping and to enforce least privilege with cross-account IAM roles scoped by resource-level permissions. In the shared production account, create one IAM role per division with a trust policy that allows only that division’s member accounts to assume it, and attach an IAM policy that permits ec2:TerminateInstances solely on instances the division owns, typically enforced with conditions such as ec2:ResourceTag or account-scoped ARNs. This design ensures that developers can manage only their own EC2 resources while preventing other divisions from terminating them, which makes Use AWS Organizations with OUs and a per business unit IAM role in the production account that allows TerminateInstances only on resources it owns, assumed via cross-account trust the correct choice.
Use a centralized AWS Config aggregator with AWS Control Tower and Customizations for AWS Control Tower to restrict EC2 termination per division is incorrect because Config aggregators collect configuration and compliance data and Control Tower sets guardrails and baselines, but neither natively enforces per-instance termination rights at the IAM resource level.
Enable EC2 termination protection on all instances and route termination requests through AWS Systems Manager Change Manager approvals is incorrect since termination protection blocks all terminations until manually disabled and Change Manager adds process controls, but this does not implement fine-grained, owner-based authorization or scale across accounts.
Create an SCP in the production account via AWS Service Catalog that permits business-unit-specific termination actions and attach it to the appropriate OUs is incorrect because SCPs do not grant permissions and cannot be targeted to specific resources like individual EC2 instances, and AWS Service Catalog is unrelated to defining cross-account authorization boundaries.
Cameron’s Exam Tip
For cross-account governance, think Organizations + OUs for grouping and IAM roles with resource-level conditions for enforcement; remember that SCPs never grant permissions and services like Config or Control Tower do not replace IAM for per-resource authorization.
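As an illustration, the per-division role's permission policy might be expressed as below; the account ID, Region, and the tag key Division are assumptions, not values from the question.

    # Sketch of the per-division role's permission policy.
    division_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/*",
            # Only instances tagged as owned by this division qualify.
            "Condition": {"StringEquals": {"ec2:ResourceTag/Division": "streaming"}},
        }],
    }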
Question 6
When managing an Amazon RDS for MySQL instance with CloudFormation how can you perform a major engine version upgrade while minimizing downtime?
✓ D. Provision a read replica, upgrade the replica to the target major version, promote it, then cut over
The best approach for minimal downtime is to avoid an in-place major upgrade on the primary. With Amazon RDS for MySQL, you can create a read replica, upgrade the replica to the target major version, promote it, and then switch application traffic to the new endpoint. This keeps the primary serving reads/writes until the brief cutover window. Therefore, Provision a read replica, upgrade the replica to the target major version, promote it, then cut over is correct.
The option Enable AutoMinorVersionUpgrade and update the stack is incorrect because AutoMinorVersionUpgrade only applies minor versions, not major upgrades.
The option Use AWS Database Migration Service is a valid migration path that can reduce downtime, but it introduces additional moving parts and is not an in-place CloudFormation-driven upgrade of the existing RDS instance, which the question emphasizes.
The option Update EngineVersion with AllowMajorVersionUpgrade for an in-place change is incorrect for minimal downtime because in-place major upgrades restart the engine and can incur noticeable downtime even with Multi-AZ.
Cameron’s Exam Tip
Remember that AutoMinorVersionUpgrade is only for minor releases. Major version upgrades require AllowMajorVersionUpgrade and generally incur downtime if performed in place. For minimal downtime, think in terms of replica–promote cutover or other blue/green style patterns. Use CloudFormation to define read replicas (SourceDBInstanceIdentifier), apply the upgrade to the replica, and then promote and redirect traffic for a fast cutover.
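A boto3 sketch of the replica-upgrade-promote flow; the instance identifiers and target engine version are placeholders, and production code would wait for each step to finish (waiters omitted for brevity).

    import boto3

    rds = boto3.client("rds")

    # 1. Create a read replica of the current primary.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica",
        SourceDBInstanceIdentifier="app-db",
    )

    # 2. Upgrade only the replica to the target major version.
    rds.modify_db_instance(
        DBInstanceIdentifier="app-db-replica",
        EngineVersion="8.0.36",            # placeholder target version
        AllowMajorVersionUpgrade=True,
        ApplyImmediately=True,
    )

    # 3. Promote the replica, then point the application at its endpoint.
    rds.promote_read_replica(DBInstanceIdentifier="app-db-replica")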
Question 7
A regional insurance firm operates an Oracle Real Application Clusters database in its data center and plans to move it to AWS. The platform lead asked the DevOps team to automate operating system patching on the servers that will run the database and to implement scheduled backups with roughly 60 days of retention to meet disaster recovery objectives. What is the simplest approach to achieve these goals with minimal engineering effort?
✓ B. Rehost the Oracle RAC database on EBS-backed Amazon EC2, install the SSM agent, use AWS Systems Manager Patch Manager for OS patches, and configure Amazon Data Lifecycle Manager to schedule EBS snapshots
Rehost the Oracle RAC database on EBS-backed Amazon EC2, install the SSM agent, use AWS Systems Manager Patch Manager for OS patches, and configure Amazon Data Lifecycle Manager to schedule EBS snapshots is correct because Oracle RAC is only supported on Amazon EC2, AWS Systems Manager Patch Manager provides automated OS patching at scale, and Amazon Data Lifecycle Manager offers native, low-effort scheduling and retention for EBS snapshots.
Migrate the database to Amazon Aurora, enable automated backups, and rely on Aurora maintenance windows for patching is incorrect since Aurora does not support Oracle RAC, so it cannot host the workload.
Move the on-premises database to Amazon RDS for Oracle with Multi-AZ and let RDS handle backups and host patching is incorrect because RDS for Oracle does not support RAC, making this migration path infeasible.
Move the RAC database to Amazon EC2 and trigger CreateSnapshot with an AWS Lambda function on an Amazon EventBridge schedule, and use AWS CodeDeploy and AWS CodePipeline to manage patching is incorrect because CodeDeploy and CodePipeline are CI/CD services rather than patch automation tools, and DLM provides a simpler native mechanism for scheduled EBS snapshots than custom Lambda orchestration.
Cameron’s Exam Tip
When you see Oracle RAC, think EC2, not RDS or Aurora. For OS patching at scale, use Systems Manager Patch Manager, and for scheduled EBS backups with retention, use Amazon Data Lifecycle Manager for the least operational effort.
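A sketch of the DLM side for roughly 60 days of daily snapshots (Python with boto3); the target tag, role name, and schedule time are assumptions.

    import boto3

    dlm = boto3.client("dlm")

    dlm.create_lifecycle_policy(
        ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
        Description="Daily EBS snapshots for the Oracle RAC hosts",
        State="ENABLED",
        PolicyDetails={
            "ResourceTypes": ["VOLUME"],
            "TargetTags": [{"Key": "Backup", "Value": "oracle-rac"}],
            "Schedules": [{
                "Name": "daily-0300",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                # One snapshot per day, keep 60 => roughly 60 days of retention.
                "RetainRule": {"Count": 60},
            }],
        },
    )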
Question 8
How can a company centrally collect operating system and application logs from on-premises servers and Amazon EC2 instances and run low-cost ad hoc queries with minimal setup?
✓ B. CloudWatch agent to CloudWatch Logs, Firehose to S3, query with Athena
CloudWatch agent to CloudWatch Logs, Firehose to S3, query with Athena is the best fit. The unified CloudWatch agent runs on both on-prem servers and EC2 to send OS and application logs into CloudWatch Logs. A subscription filter streams those logs via Kinesis Data Firehose into a centralized Amazon S3 bucket. S3 provides low-cost, durable storage suitable for long-term retention, and Amazon Athena enables serverless, ad hoc SQL queries over that data with virtually no infrastructure to manage. This pattern minimizes setup and ongoing operations while meeting centralized collection and audit-friendly querying.
The option CloudWatch Logs + Logs Insights is workable but is not the lowest cost for large volumes or long retention because CloudWatch Logs storage and per-GB query pricing can exceed S3 plus Athena for audit workloads.
The option CloudWatch agent to CloudWatch Logs, stream to Amazon OpenSearch Service provides powerful search and dashboards but introduces cluster management and higher ongoing costs compared to serverless S3 plus Athena, which the question emphasizes as low cost and minimal setup.
The option AWS Security Lake targets security telemetry in OCSF format and is not designed for general OS or application logs from arbitrary servers, so it does not align with the requirement.
Cameron’s Exam Tip
Look for keywords such as hybrid collection, low-cost storage, and minimal setup. Prefer serverless, managed data paths. The unified CloudWatch agent covers both on-prem and EC2. For long-term audits, S3 plus Athena is typically favored over CloudWatch Logs Insights or OpenSearch due to cost and operational simplicity.
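The streaming leg of that pipeline can be wired up roughly like this (Python with boto3); the log group, delivery stream, and role names are placeholders.

    import boto3

    logs = boto3.client("logs")

    # Stream everything in the log group to Firehose, which batches into S3.
    logs.put_subscription_filter(
        logGroupName="/hybrid/system",
        filterName="all-to-firehose",
        filterPattern="",  # an empty pattern forwards every event
        destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/logs-to-s3",
        roleArn="arn:aws:iam::111122223333:role/CWLToFirehose",
    )

    # Later, Athena queries the S3 data in place, e.g.:
    #   SELECT * FROM logs_db.system_logs WHERE level = 'ERROR' LIMIT 100;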
Question 9
The platform team at a global sportswear marketplace is rolling out its primary web service to an EC2 Auto Scaling group using AWS CodeDeploy with an in-place, batched deployment. When the rollout finishes, the group has six instances, where four serve the new build and two still run the previous version, yet CodeDeploy marks the deployment as successful. What is the most likely cause of this situation?
✓ C. An Auto Scaling scale-out event occurred during the deployment, so the new instances launched with the last successfully deployed revision
The most plausible cause is An Auto Scaling scale-out event occurred during the deployment, so the new instances launched with the last successfully deployed revision. During an in-place rollout, if the Auto Scaling group scales out mid-deployment, the newly launched instances install the most recent successful revision, not the revision currently being deployed, which can leave the fleet with mixed versions while CodeDeploy still reports success.
A CloudWatch alarm fired during the rollout is unlikely because alarms do not inherently pin subsets of instances to older revisions or cause mixed versions in a successful deployment.
Two instances lacked IAM permissions to retrieve the revision from Amazon S3 would cause those instances to fail lifecycle events, which would make the deployment fail rather than succeed.
The Auto Scaling group is using an outdated launch template or launch configuration version would influence how instances launch, but CodeDeploy targets instances in the deployment group and would attempt to update them, so this does not align with a successful deployment that leaves only a subset updated.
To prevent this, pause scale-out processes during deployments or use a blue/green strategy; afterward, redeploy the latest revision to normalize versions across the group.
Cameron’s Exam Tip
If you see mixed app versions with a reported success during an in-place CodeDeploy to an Auto Scaling group, think scale-out during deployment, where new instances pull the last successful revision instead of the in-progress one.
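A sketch of pausing scale-out around a deployment (Python with boto3; the group name is a placeholder). Suspending the Launch process prevents new instances from appearing mid-rollout with the previous revision.

    import boto3

    asg = boto3.client("autoscaling")

    # Suspend launches so mid-deployment scale-out cannot pull the old revision.
    asg.suspend_processes(
        AutoScalingGroupName="web-asg", ScalingProcesses=["Launch"]
    )

    # ... run the CodeDeploy in-place deployment here ...

    # Resume normal scaling once the rollout completes.
    asg.resume_processes(
        AutoScalingGroupName="web-asg", ScalingProcesses=["Launch"]
    )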
Question 10
How can CloudFormation automatically resolve the latest golden AMI ID to use for EC2 instances when a stack is updated?
✓ B. CloudFormation dynamic reference to SSM Parameter Store for AMI ID
The correct approach is to use CloudFormation dynamic reference to SSM Parameter Store for AMI ID. Publish the latest golden AMI ID to a Systems Manager Parameter in each Region and reference it in the template via a dynamic reference or the AWS::SSM::Parameter::Value<AWS::EC2::Image::Id> parameter type. CloudFormation resolves the parameter at create or update time so stacks pick up the current AMI without template edits.
The option EventBridge rule invoking Lambda to update template with newest AMI is a custom workaround that introduces brittle template rewriting and extra operational burden.
The option EC2 Image Builder pipeline automatically chosen by CloudFormation is incorrect because CloudFormation does not natively pull the latest AMI from an Image Builder pipeline; you must publish AMI IDs to SSM or use a custom resource.
The option AWS Service Catalog does not make CloudFormation auto-resolve AMIs and is unnecessary for this use case.
Cameron’s Exam Tip
Watch for keywords like automatically resolve latest AMI and keep using CloudFormation. Prefer native mechanisms such as SSM Parameter Store dynamic references or the AWS::SSM::Parameter::Value<AWS::EC2::Image::Id> parameter type. For multi-Region deployments, ensure the parameter exists per Region or use public SSM AMI parameters. Avoid solutions that rewrite templates or require custom code unless explicitly asked.
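For instance, the public Amazon Linux 2 parameter can be resolved like this (boto3 sketch); the same path works in a template parameter of type AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>.

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    # AWS publishes the latest Amazon Linux 2 AMI ID under this public path.
    latest_ami = ssm.get_parameter(
        Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
    )["Parameter"]["Value"]
    print(latest_ami)  # e.g. ami-0abc...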
Question 11
BlueWave Labs runs all development for several independent teams in one AWS account in a single Region. The DevOps engineer needs an easy way to alert the operations manager when new resource provisioning is getting close to account service quotas. Which approach requires the least custom development effort?
✓ C. Schedule a Lambda to refresh AWS Trusted Advisor service quota checks with EventBridge and add another EventBridge rule that matches Trusted Advisor limit events and publishes to an SNS topic subscribed by the operations manager
Schedule a Lambda to refresh AWS Trusted Advisor service quota checks with EventBridge and add another EventBridge rule that matches Trusted Advisor limit events and publishes to an SNS topic subscribed by the operations manager is best because Trusted Advisor already includes service quota checks and integrates with EventBridge, so you only need a lightweight refresh and a rule-to-SNS pipeline.
Build a Lambda function to enumerate account resources and compare usage against published service quotas, trigger it on a schedule with EventBridge, and publish alerts to an SNS topic requires extensive custom logic to inventory resources and map them to limits, which is higher effort.
Create an AWS Config custom rule that evaluates service quota consumption and posts to an SNS topic, with a Lambda subscriber that notifies the operations manager still needs custom evaluation of quotas and usage, making it more complex than using Trusted Advisor’s native checks.
Create a Lambda function to refresh AWS Health checks on a timer and configure an EventBridge rule that matches Trusted Advisor events and sends notifications to an SNS topic is ineffective because AWS Health does not track or alert on service quota usage and the mixed configuration is inconsistent.
Cameron’s Exam Tip
For near-limit alerts with the least effort, think Trusted Advisor service quota checks plus EventBridge for events and SNS for notifications; avoid building custom inventory or Config rules, and remember AWS Health is not for service quota monitoring.
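A sketch of the scheduled refresh Lambda and the pattern the second rule would match (Python with boto3). Assumptions: a Business or Enterprise Support plan, since the Support API requires it, and the API is called in us-east-1.

    import boto3

    # The AWS Support API is only available in us-east-1.
    support = boto3.client("support", region_name="us-east-1")

    def handler(event, context):
        # Refresh every Trusted Advisor check in the service limits category.
        # (Some checks refresh automatically and may reject a manual refresh.)
        for check in support.describe_trusted_advisor_checks(language="en")["checks"]:
            if check["category"] == "service_limits":
                support.refresh_trusted_advisor_check(checkId=check["id"])

    # The second EventBridge rule matches refreshed results, e.g.:
    # {
    #   "source": ["aws.trustedadvisor"],
    #   "detail-type": ["Trusted Advisor Check Item Refresh Notification"],
    #   "detail": {"status": ["WARN", "ERROR"]}
    # }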
Question 12
How can you automatically remediate IAM access keys that AWS Health reports as exposed by deleting the key, generating a summary of recent CloudTrail activity, and notifying the team?
✓ B. EventBridge rule for aws.health AWS_RISK_CREDENTIALS_EXPOSED to Step Functions
The correct choice is EventBridge rule for aws.health AWS_RISK_CREDENTIALS_EXPOSED to Step Functions. AWS Health emits an event when AWS detects an exposed IAM access key on public sources such as GitHub. You can create an EventBridge rule with source aws.health and event code AWS_RISK_CREDENTIALS_EXPOSED to trigger a Step Functions state machine. That workflow can run Lambda tasks to deactivate and delete the key, query CloudTrail for recent activity tied to the access key, and publish an Amazon SNS notification to the team. This pattern is the native and scalable way to integrate AWS Health with automated remediation.
The option Amazon GuardDuty and Amazon Macie with Step Functions is incorrect because these services do not monitor public code hosting for IAM keys nor emit the AWS Health event for exposed credentials.
The option AWS Config multi-account rule invoking Step Functions is incorrect because AWS Config evaluates configuration state and does not process AWS Health events.
The option Amazon Detective with EventBridge automation is incorrect because Detective is for security investigations and graph analysis, not for detecting exposed keys or emitting the required Health event.
Cameron’s Exam Tip
When you see leaked or exposed AWS access keys, associate detection with the AWS Health event AWS_RISK_CREDENTIALS_EXPOSED and routing via EventBridge source aws.health. Choose Step Functions for multi-step remediation, Lambda for discrete actions like IAM key deletion and CloudTrail lookups, and SNS for alerts. Remember that the Personal Health Dashboard is a console view; automated actions come from EventBridge rules, not a PHD rule.
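A sketch wiring the Health event to the state machine (Python with boto3; the rule name and ARNs are placeholders).

    import json
    import boto3

    events = boto3.client("events")

    events.put_rule(
        Name="exposed-access-key",
        EventPattern=json.dumps({
            "source": ["aws.health"],
            "detail-type": ["AWS Health Event"],
            "detail": {"eventTypeCode": ["AWS_RISK_CREDENTIALS_EXPOSED"]},
        }),
    )

    events.put_targets(
        Rule="exposed-access-key",
        Targets=[{
            "Id": "remediation-sfn",
            "Arn": "arn:aws:states:us-east-1:111122223333:stateMachine:KeyRemediation",
            # Role that lets EventBridge start the state machine.
            "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeToSfn",
        }],
    )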
Question 13
A travel-tech firm plans to move its marketing website to AWS across three accounts in a landing zone. The existing platform runs Windows IIS with Microsoft SQL Server on premises. They need elastic scaling and must capture ad click attributes from the site, delivering events to an Amazon S3 bucket for billing and into Amazon Redshift for analytics within a few minutes. Which architecture should they implement to meet these objectives while keeping the web tier stateless?
✓ C. Run the site on stateless EC2 in an Auto Scaling group and move SQL Server to Amazon RDS; use Amazon Kinesis Data Firehose to deliver click events to Amazon S3 for billing and a second Firehose delivery stream to load Amazon Redshift
The web tier should be stateless behind Auto Scaling, and ad click events need a managed path to land in S3 for billing and be loaded into Redshift with minimal custom code. Run the site on stateless EC2 in an Auto Scaling group and move SQL Server to Amazon RDS; use Amazon Kinesis Data Firehose to deliver click events to Amazon S3 for billing and a second Firehose delivery stream to load Amazon Redshift is best because Firehose reliably buffers, batches, and delivers data to S3 and supports loading Redshift using an S3 staging bucket with minimal operations.
Run the site on stateless EC2 in an Auto Scaling group and move SQL Server to Amazon RDS; publish click data with Amazon Kinesis Data Streams directly to Amazon S3 for billing and send another stream to Amazon Redshift is incorrect since Kinesis Data Streams does not write to S3 or Redshift without consumer applications or Firehose.
Run the site on stateless EC2 in an Auto Scaling group and move SQL Server to Amazon RDS; use Amazon Athena to push click logs to Amazon S3 and also load them into Amazon Redshift is incorrect because Athena is a query service and not a data ingestion mechanism.
Run the site on stateless EC2 in an Auto Scaling group and move SQL Server to Amazon RDS; send click events to Amazon MSK and rely on an AWS Glue crawler and catalog to populate Amazon Redshift is incorrect since a Glue crawler only catalogs data and MSK would still require additional ETL to load Redshift, adding complexity and latency.
Cameron’s Exam Tip
For streaming delivery with minimal code, prefer Amazon Kinesis Data Firehose to load into Amazon S3 and Amazon Redshift; remember that Kinesis Data Streams needs consumers, and Athena queries data in S3 rather than ingesting it.
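The web tier's producer side is then a single call per click event (boto3 sketch; the stream name is a placeholder). Firehose handles the buffering, batching, and delivery to S3 and Redshift.

    import json
    import boto3

    firehose = boto3.client("firehose")

    def record_click(click):
        # Firehose buffers and batches; newline-delimited JSON loads cleanly.
        firehose.put_record(
            DeliveryStreamName="ad-clicks-to-s3",
            Record={"Data": (json.dumps(click) + "\n").encode("utf-8")},
        )

    record_click({"campaign": "spring-sale", "page": "/pricing"})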
Question 14
How does the AWS Config rule cloudformation-stack-drift-detection-check treat CloudFormation custom resources for drift and what can cause the rule to report NON_COMPLIANT while the CloudFormation console shows the stack as IN_SYNC? (Choose 2)
✓ B. Drift detection does not evaluate custom resources
✓ E. The rule calls DetectStackDrift and treats API throttling or failures as NON_COMPLIANT
The correct choices are Drift detection does not evaluate custom resources and The rule calls DetectStackDrift and treats API throttling or failures as NON_COMPLIANT. CloudFormation’s drift detection explicitly excludes custom resources, so they are not evaluated, which answers how custom resources are handled. The AWS Config managed rule cloudformation-stack-drift-detection-check invokes the DetectStackDrift API. If that API is throttled or fails, the rule defaults the evaluation to NON_COMPLIANT even when the CloudFormation console reports IN_SYNC, causing the observed mismatch. Re-running the evaluation or addressing API throttling typically resolves the discrepancy.
The option Use CloudFormation change sets to detect drift is incorrect because change sets preview proposed updates rather than detect out-of-band changes.
Provide a cloudformationRoleArn with broader permissions to fix the mismatch is incorrect because permission problems usually raise explicit access errors; they are not the cause of the rule’s default-to-NON_COMPLIANT behavior on DetectStackDrift failures.
Define all custom resource properties so drift can be detected is wrong because custom resources are unsupported for drift detection regardless of property definitions.
Cameron’s Exam Tip
When you see custom resources combined with drift detection, remember that custom resources are not checked. For AWS Config managed rules that wrap service APIs, a mismatch like IN_SYNC in the console versus NON_COMPLIANT in the rule often points to API throttling or transient failures. Re-evaluate the rule, check service quotas and CloudWatch metrics for throttling, and verify recent API errors in logs.
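The rule's underlying calls can be reproduced for troubleshooting with a short boto3 sketch (the stack name is a placeholder).

    import boto3

    cfn = boto3.client("cloudformation")

    # Kick off drift detection; this is the API the Config rule depends on.
    detection_id = cfn.detect_stack_drift(StackName="app-stack")["StackDriftDetectionId"]

    # Poll the status; throttling or failures here are what the rule
    # reports as NON_COMPLIANT even when the console shows IN_SYNC.
    status = cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id
    )
    print(status["DetectionStatus"], status.get("StackDriftStatus"))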
Question 15
A regional logistics startup is moving its Docker workloads from its data center to AWS. They will run several services on Amazon ECS using the EC2 launch type behind an Application Load Balancer. The operations team needs the platform to automatically aggregate all container and load balancer logs and deliver them to an Amazon S3 bucket for near-real-time analysis, targeting about a two-minute end-to-end delay. How should the team configure the ECS environment to achieve this? (Choose 3)
✓ A. Enable access logging on the Application Load Balancer and configure the destination to the specified S3 bucket
✓ C. Use the awslogs log driver in ECS task definitions, install the CloudWatch Logs agent on the container instances, and grant the needed permissions on the instance role
✓ D. Create a CloudWatch Logs subscription filter to an Amazon Kinesis Data Firehose delivery stream that writes continuously to the S3 bucket
To satisfy near-real-time delivery of container and ALB logs into S3, send container STDOUT/STDERR to CloudWatch Logs from ECS, stream those logs to S3 with a subscription and Kinesis Data Firehose, and enable ALB access logging directly to S3. This creates an automated pipeline with low latency and complete coverage of service and load balancer logs.
Enable access logging on the Application Load Balancer and configure the destination to the specified S3 bucket is correct because ALB can write detailed request logs straight to S3, covering client requests, latencies, and responses.
Use the awslogs log driver in ECS task definitions, install the CloudWatch Logs agent on the container instances, and grant the needed permissions on the instance role is correct because the awslogs driver routes container logs to CloudWatch Logs, and with appropriate IAM and the agent on EC2-backed clusters, this centralizes logs without filling instance disks.
Create a CloudWatch Logs subscription filter to an Amazon Kinesis Data Firehose delivery stream that writes continuously to the S3 bucket is correct because subscription filters provide a near-real-time stream from CloudWatch Logs to S3 via Firehose, meeting the latency target.
Set up Amazon Macie to scan the S3 bucket and provide near-real-time analysis of the access logs is incorrect because Macie focuses on sensitive data discovery and classification rather than streaming log ingestion or real-time analytics.
Turn on Detailed Monitoring in CloudWatch for the load balancer to store access logs in S3 is incorrect because Detailed Monitoring increases metric resolution but does not produce or store access logs.
Use an AWS Lambda function triggered by an Amazon EventBridge schedule every 2 minutes to call CreateLogGroup and CreateExportTask to move logs to S3 is incorrect because CloudWatch Logs export tasks are batch, not continuous, and are unsuitable for near-real-time delivery.
Cameron’s Exam Tip
For near-real-time log delivery from CloudWatch Logs to S3, think subscription filter + Kinesis Data Firehose; for ALB requests, enable ALB access logging; and remember that CloudWatch Detailed Monitoring is metrics-only while ExportTask is batch.
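The container half of the pipeline starts in the task definition; here is a trimmed sketch (Python with boto3; the family, image, log group, and Region are placeholders).

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="web-service",
        containerDefinitions=[{
            "name": "web",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            "memory": 512,
            # Route container STDOUT/STDERR into CloudWatch Logs.
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/web-service",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "web",
                },
            },
        }],
    )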
Question 16
Which actions provide near real time alerts for Trusted Advisor findings that a security group has SSH port 22 open to 0.0.0.0/0 and enforce automatic restriction to a specified IP? (Choose 3)
✓ B. EventBridge rule and Lambda to refresh Trusted Advisor via Support API every 10 minutes and publish to SNS
✓ C. AWS Config rule flagging SGs with port 22 open to 0.0.0.0/0 and notifying via SNS
✓ E. AWS Config auto-remediation using Systems Manager Automation to restrict SG port 22 to a specified IP
EventBridge rule and Lambda to refresh Trusted Advisor via Support API every 10 minutes and publish to SNS delivers near real-time alerts by programmatically refreshing Trusted Advisor checks and forwarding results to Amazon SNS. AWS Config rule flagging SGs with port 22 open to 0.0.0.0/0 and notifying via SNS continuously evaluates security groups for the risky SSH rule and alerts on noncompliance. AWS Config auto-remediation using Systems Manager Automation to restrict SG port 22 to a specified IP enforces the desired state by automatically updating noncompliant security groups to allow SSH only from the approved source.
The option Custom AWS Config remediation that invokes Lambda to edit security groups is incorrect because native Config remediation uses AWS Systems Manager Automation documents, not direct Lambda calls.
The option Amazon GuardDuty to detect open security groups and auto-remediate is incorrect since GuardDuty focuses on threat detection, not configuration compliance or Trusted Advisor findings.
The option AWS Security Hub with Foundational Security Best Practices to auto-remediate Trusted Advisor findings is incorrect because Security Hub does not ingest Trusted Advisor findings and cannot directly remediate them.
Cameron’s Exam Tip
On the exam, tie Trusted Advisor near real-time alerts to scheduling a refresh via the AWS Support API using EventBridge and Lambda. Map configuration compliance to AWS Config managed rules (for SSH, look for INCOMING_SSH_DISABLED) and map automatic fixes to Config remediation with SSM Automation, not direct Lambda. Watch for distractors that mix Security Hub or GuardDuty with Trusted Advisor-specific requirements.
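For context, the SSM Automation remediation effectively performs something like the following against each noncompliant group (boto3 sketch; the approved CIDR is a placeholder, not from the question).

    import boto3

    ec2 = boto3.client("ec2")

    def restrict_ssh(group_id, approved_cidr="203.0.113.10/32"):
        ssh_rule = {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22}

        # Remove the world-open SSH rule...
        ec2.revoke_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{**ssh_rule, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
        )
        # ...and allow SSH only from the approved source.
        ec2.authorize_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{**ssh_rule, "IpRanges": [{"CidrIp": approved_cidr}]}],
        )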
Question 17
A ticketing startup operates a Node.js web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. During abrupt traffic surges, some scale-out events fail and intermittent errors appear. CloudWatch logs show messages like: ‘Instance did not finish the user’s lifecycle action; lifecycle action with token <abc-123> was abandoned due to heartbeat timeout.’ What should a DevOps engineer do to capture logs from all impacted instances and preserve them for later analysis? (Choose 3)
✓ B. Configure an Auto Scaling lifecycle hook on instance termination and use an Amazon EventBridge rule to trigger an AWS Lambda function that runs AWS Systems Manager Run Command to pull application logs and upload them to Amazon S3
✓ E. Adjust the AWS CodeDeploy deployment group to resolve account deployment concurrency limits that have been reached
✓ F. Use Amazon Athena to query the logs directly from the Amazon S3 bucket
The heartbeat timeout during a lifecycle action indicates the instance did not complete the hook processing in time, which often happens when related automation is throttled or deployment concurrency limits are hit.
Configure an Auto Scaling lifecycle hook on instance termination and use an Amazon EventBridge rule to trigger an AWS Lambda function that runs AWS Systems Manager Run Command to pull application logs and upload them to Amazon S3 is correct because it reliably retrieves logs before the instance is terminated, providing a deterministic path to gather app logs from every affected instance.
When CodeDeploy reaches account or deployment group concurrency limits, hooks may stall and lead to abandoned lifecycle actions. Selecting Adjust the AWS CodeDeploy deployment group to resolve account deployment concurrency limits that have been reached addresses this systemic cause so the log-collection workflow can complete consistently during bursts.
Once logs are in S3, Use Amazon Athena to query the logs directly from the Amazon S3 bucket is correct because it enables quick, serverless analysis using SQL without additional data movement.
Enable VPC Flow Logs for the subnets hosting the Auto Scaling group and export them to Amazon S3 is not suitable because flow logs capture network metadata, not application logs.
Update the Auto Scaling group health check to point to the correct application port and protocol does not collect or store logs and there is no evidence of a health check misconfiguration.
Turn on access logging at the target group and send logs to an Amazon S3 bucket is not applicable for application log capture; even load balancer access logs would not include the instance-level application logs needed for troubleshooting.
Cameron’s Exam Tip
For ephemeral instances, use lifecycle hooks to run last-mile tasks such as log export before termination, store logs in S3, and analyze at scale with Athena; do not confuse network or access logs with application logs.
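A compact sketch of the EventBridge-invoked Lambda (Python with boto3); the bucket and log path are placeholders, and a production version would wait for the Run Command to finish before completing the lifecycle action.

    import boto3

    ssm = boto3.client("ssm")
    asg = boto3.client("autoscaling")

    def handler(event, context):
        detail = event["detail"]
        instance_id = detail["EC2InstanceId"]

        # Copy application logs off the instance before it is terminated.
        ssm.send_command(
            InstanceIds=[instance_id],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": [
                f"aws s3 cp /var/log/app s3://ops-log-archive/{instance_id}/ --recursive"
            ]},
        )

        # Let Auto Scaling proceed with termination.
        asg.complete_lifecycle_action(
            LifecycleHookName=detail["LifecycleHookName"],
            AutoScalingGroupName=detail["AutoScalingGroupName"],
            LifecycleActionToken=detail["LifecycleActionToken"],
            LifecycleActionResult="CONTINUE",
        )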
Question 18
What is the most operationally efficient method to send an email to a user after a successful sign-in to an Amazon Cognito User Pool?
✓ B. Use the Cognito Post Authentication trigger to invoke Lambda that emails via Amazon SES
The most operationally efficient approach is Use the Cognito Post Authentication trigger to invoke Lambda that emails via Amazon SES. This leverages a native Cognito User Pool trigger that fires immediately after a successful sign-in, requires no changes to the client application flow, and avoids introducing extra services or pipelines. A small Lambda function can send the email through Amazon SES directly.
The option Use Amazon EventBridge with CloudTrail sign-in events to trigger Lambda and send via Amazon SES is incorrect because Cognito User Pool end-user sign-ins are not recorded in CloudTrail. CloudTrail captures management events for Cognito, not user authentication events, so this path is unreliable for sign-in notifications.
The option Call an Amazon API Gateway endpoint after login that invokes Lambda to send an SES email works but is less efficient. It requires client changes and adds extra managed services when Cognito can natively invoke Lambda via a trigger.
The option Use the Cognito Custom Message trigger with Lambda and Amazon SES for the login email is incorrect because the Custom Message trigger is designed for verification codes, MFA, and password reset messages. It does not run after a successful authentication.
Cameron’s Exam Tip
Look for native Amazon Cognito triggers to keep solutions simple and event-driven. For events that occur immediately after authentication, think Post Authentication trigger. Be cautious with answers that route through CloudTrail or streaming pipelines for user sign-in events; Cognito User Pool sign-ins are not CloudTrail management events.
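A minimal Post Authentication handler (Python with boto3; the sender address is a placeholder). Note that Cognito triggers must return the event object unchanged.

    import boto3

    ses = boto3.client("ses")

    def handler(event, context):
        email = event["request"]["userAttributes"].get("email")
        if email:
            ses.send_email(
                Source="no-reply@example.com",
                Destination={"ToAddresses": [email]},
                Message={
                    "Subject": {"Data": "New sign-in to your account"},
                    "Body": {"Text": {"Data": "We noticed a successful sign-in just now."}},
                },
            )
        # Cognito triggers must return the event object.
        return event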
Question 19
A streaming media startup runs a serverless backend that handles tens of thousands of API calls with AWS Lambda and stores state in Amazon DynamoDB. Clients invoke a Lambda function through an Amazon API Gateway HTTP API to read large batches from the DynamoDB PlaybackSessions table. Although the table uses DynamoDB Accelerator, users still see cold-start delays of roughly 8–12 seconds during afternoon surges. Traffic reliably peaks from 3 PM to 6 PM and tapers after 9 PM. What Lambda configuration change should a DevOps engineer make to keep latency consistently low at all times?
✓ B. Enable provisioned concurrency for the function and configure Application Auto Scaling with a minimum of 2 and a maximum of 120 provisioned instances
The best way to eliminate cold-start latency for predictable spikes is Enable provisioned concurrency for the function and configure Application Auto Scaling with a minimum of 2 and a maximum of 120 provisioned instances. Provisioned concurrency keeps a pool of pre-initialized environments ready to serve requests immediately, and scaling it with predictable traffic windows ensures consistently low latency.
Configure reserved concurrency for the function and use Application Auto Scaling to set reserved concurrency to roughly half of observed peak traffic controls maximum concurrency and isolates capacity, but it does not pre-initialize runtimes, so cold starts can still occur.
Increase the Lambda memory size to 8,192 MB to reduce initialization and execution time can reduce duration, but memory alone cannot consistently remove cold-start penalties during bursts.
Set the function’s ephemeral storage to 10,240 MB to cache data in /tmp between invocations affects temporary storage capacity, not initialization, so it will not address cold-start latency.
Cameron’s Exam Tip
When you see predictable traffic spikes and complaints about cold starts, think provisioned concurrency; reserved concurrency caps throughput, memory tuning speeds execution, and ephemeral storage helps with caching, but none of these eliminate cold starts.
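A sketch of the configuration (Python with boto3; the function name, alias, and target value are assumptions). Application Auto Scaling then tracks utilization between the 2 and 120 bounds.

    import boto3

    lam = boto3.client("lambda")
    aas = boto3.client("application-autoscaling")

    # Keep pre-initialized execution environments warm on the alias.
    lam.put_provisioned_concurrency_config(
        FunctionName="playback-reader",
        Qualifier="live",
        ProvisionedConcurrentExecutions=2,
    )

    # Scale provisioned concurrency between 2 and 120 with demand.
    aas.register_scalable_target(
        ServiceNamespace="lambda",
        ResourceId="function:playback-reader:live",
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        MinCapacity=2,
        MaxCapacity=120,
    )
    aas.put_scaling_policy(
        PolicyName="pc-utilization",
        ServiceNamespace="lambda",
        ResourceId="function:playback-reader:live",
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 0.7,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
            },
        },
    )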
Question 20
What is the best way to automate cross-account remediation of Amazon EC2 instances with persistently low utilization across 10 AWS accounts while filtering actions by tags using native AWS services?
✓ C. AWS Trusted Advisor Low Utilization EC2 check via EventBridge with a Lambda remediation filtered by tags
The best approach is AWS Trusted Advisor Low Utilization EC2 check via EventBridge with a Lambda remediation filtered by tags. Trusted Advisor provides a native, organization-wide check for consistently underutilized EC2 instances when using a Business or Enterprise Support plan. Integrating Trusted Advisor with EventBridge allows you to react to check item changes, and a Lambda function can evaluate resource tags (for shared environments or cost centers) before stopping or terminating instances. This delivers automated, tag-aware, cross-account remediation using managed services without building custom data pipelines.
The option Amazon CloudWatch metrics with EventBridge and Lambda to stop or terminate by tag is not ideal because it relies on manual metric alarms and dashboards rather than the built-in cost-optimization signal provided by Trusted Advisor. While possible, it is more operationally heavy and less purpose-built for cross-account EC2 cost optimization.
The option AWS Cost Anomaly Detection with EventBridge and Lambda to remove idle instances is incorrect because it detects spend anomalies, not sustained low utilization of specific instances. It will not reliably identify which EC2 instances are persistently underutilized.
The option Enroll in AWS Compute Optimizer and auto shutdown via EventBridge and Lambda is not correct for automatic remediation. Compute Optimizer offers rightsizing recommendations but does not natively emit events for automatic shutdown and is intended for informed human decision-making rather than immediate termination workflows.
Cameron’s Exam Tip
Look for services that directly surface low-utilization EC2 signals across accounts. Trusted Advisor plus EventBridge is a common remediation pattern on this exam. Watch for distractors that focus on cost anomalies or general monitoring dashboards, which do not inherently target persistent low utilization. When you see tag-based actions across many accounts, think about centralized checks and event-driven remediation with EventBridge and Lambda.
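The remediation Lambda's core tag filtering can be as simple as the following sketch (the tag key and value are assumptions; for cross-account use, the function would first assume a role in each member account via STS and build the client from those credentials).

    import boto3

    ec2 = boto3.client("ec2")  # cross-account: build from sts.assume_role credentials

    def stop_idle_tagged_instances(instance_ids):
        # instance_ids come from the Trusted Advisor low-utilization finding.
        resp = ec2.describe_instances(
            InstanceIds=instance_ids,
            Filters=[{"Name": "tag:AutoStop", "Values": ["true"]}],
        )
        targets = [
            i["InstanceId"]
            for r in resp["Reservations"]
            for i in r["Instances"]
        ]
        if targets:
            ec2.stop_instances(InstanceIds=targets)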
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.