Five Star AWS DevOps Certification Study Guide – ★ ★ ★ ★ ★

AWS DevOps Professional Certification Book


The AWS DevOps Engineer Professional Book of Exam Questions

The AWS Certified DevOps Engineer Professional Book of Exam Questions and Answers is a focused guide for DOP-C02 that reads like a lab partner and a coach.

It targets the real skills you will use when you design pipelines, harden delivery, and keep production stable on AWS.

The coverage fits cleanly within the broader AWS certification path and complements earlier tracks such as Cloud Practitioner, Developer Associate, and Solutions Architect Associate.

Why this AWS DevOps book stands out

The review sets mirror what you will face on test day. Scenarios force you to weigh tradeoffs across automation, resilience, observability, and security while staying inside cost and governance boundaries. Each item ties back to the DOP-C02 blueprint on SDLC automation, configuration management and IaC, resilient cloud solutions, monitoring and logging, incident and event response, and security and compliance. If you are mapping multi-cert goals, the patterns also reinforce choices found in SAP-C02, Security, and Data Engineer.

Teaching and learning AWS DevOps

  1. Build and secure CI/CD with CodePipeline, CodeBuild, CodeDeploy, Amazon ECR, and container targets like Amazon ECS and Amazon EKS, which aligns with automation themes you also see in GCP DevOps Engineer.
  2. Harden delivery using blue green and canary strategies, protect rollouts with health checks, and use metrics to trigger rollback, a mindset that translates well to SAA-C03 design prompts.
  3. Express infrastructure with AWS CloudFormation, AWS CDK, and AWS SAM, then promote through multiple accounts with StackSets and guardrails from Organizations and SCPs.
  4. Operate with visibility using Amazon CloudWatch metrics, Logs Insights, dashboards, and distributed tracing through AWS X-Ray, plus event routing with Amazon EventBridge and notifications through Amazon SNS.
  5. Design for reliability with Multi-AZ and multi-Region patterns, Auto Scaling, and routing through Amazon Route 53, which echoes disaster recovery ideas you will also see on Architect Professional.
  6. Apply security at scale with AWS IAM, Secrets Manager, KMS, AWS Config, Security Hub, and managed network protections, which supports outcomes on Security and GCP Security Engineer.

How it aligns

Each question includes a clear explanation of why the correct answer wins and why the distractors fail. That back and forth is the secret to performance on scenario heavy certifications like Machine Learning Specialty and SAP-C02. You learn to spot cues such as “least operational effort,” “roll back safely,” “encrypt at rest and in transit,” or “centralize governance,” then map them to the right AWS service or pattern. If you prefer to preview that style, watch the study walkthrough on YouTube and compare notes with practice sets on Udemy.

Who will benefit?

This book suits engineers who already ship on AWS and want a structured push to DOP-C02, as well as team leads who want a common toolbox for reviews and game days. It also helps multi-cloud learners who cross reference with GCP certifications such as Solutions Architect Professional, Developer Professional, and ML Engineer Pro.

How to Study

  • Skim the official exam objectives and tag each chapter to a domain. Track coverage using a simple checklist.
  • Answer a block of questions, then read every explanation, even when you were right. Note the telltale phrases that map to services or deployment strategies.
  • Translate explanations into micro-labs. For example, build a canary with CodeDeploy or route events with EventBridge. That habit strengthens recall for scenario prompts.
  • Rotate in adjacent blueprints like CLF-C02, DVA-C02, SAA-C03, and DEA-C01 to reinforce fundamentals that resurface on DOP-C02.
  • If your roadmap includes AI delivery, pair your prep with AI Practitioner and Generative AI Leader since CI/CD and guardrails are shared concerns.

Verdict?

The AWS Certified DevOps Engineer Professional Book of Exam Questions and Answers delivers targeted practice that builds judgment, not guesswork. It matches the DOP-C02 domains, strengthens instincts for automation and reliability, and prepares you for real operations in the AWS Cloud. If your goal is to add DevOps Professional to a stack of credentials like Solutions Architect Associate, Architect Professional, ML Specialty, or Security, this book earns a strong recommendation.


AWS DevOps Study Guide Excerpt

How can you automatically start and stop EC2 instances and a single RDS DB only while CodePipeline test stages run, which last about four hours and occur roughly three times per week, without changing the existing architecture?

  • ❏ A. Use CodeBuild pre/post steps with AWS CLI to start and stop resources

  • ❏ B. AWS Instance Scheduler with a weekly cron schedule

  • ❏ C. EventBridge CodePipeline state-change events invoking Systems Manager Automation to start/stop EC2 and RDS

  • ❏ D. Migrate to Aurora Serverless, add an ALB, and use Lambda to toggle power

How can EC2 instances running in private subnets securely retrieve user data and bootstrap packages without any Internet egress?

  • ❏ A. Attach an egress-only internet gateway

  • ❏ B. Mirror dependencies to S3 and fetch via an S3 gateway VPC endpoint using an instance role

  • ❏ C. Route outbound traffic through a NAT gateway

  • ❏ D. Create a VPC endpoint to Amazon Linux repositories

Which deployment strategy provides zero downtime releases with immediate rollback, stops further deployments on failure, and avoids any capacity reduction for a stack using Lambda, DynamoDB, and OpenSearch?

  • ❏ A. CloudFormation; CodeDeploy in-place; Elastic Beanstalk Rolling with additional batch

  • ❏ B. CloudFormation; CodeDeploy blue/green (Lambda); Elastic Beanstalk Immutable

  • ❏ C. CloudFormation; CodeDeploy blue/green; Elastic Beanstalk All at once

  • ❏ D. CloudFormation; CodeDeploy canary (Lambda); Elastic Beanstalk Rolling

How can you automatically update an S3 static status page when EC2 Auto Scaling instances launch or terminate while maintaining a durable searchable log of the scaling events?

  • ❏ A. Create an EventBridge rule that delivers events to CloudWatch Logs and periodically export to S3

  • ❏ B. Configure an EventBridge rule with two targets, S3 and CloudWatch Logs

  • ❏ C. Use an EventBridge rule for Auto Scaling instance launch and terminate to invoke Lambda; Lambda updates the S3 page and writes events to CloudWatch Logs

  • ❏ D. Run a scheduled EventBridge rule every 15 minutes to have Lambda scan the Auto Scaling group and update S3, logging to S3

Which approach most simply ensures that each of 15 EC2 instances retains the same private IP address and hostname across reboots and replacements by reusing preallocated Elastic Network Interfaces?

  • ❏ A. Develop a custom AWS CLI or SDK script to launch and replace instances, attaching preallocated ENIs

  • ❏ B. CloudFormation nested stacks with one-instance Auto Scaling groups that reattach designated ENIs and set hostnames at boot

  • ❏ C. AWS Elastic Beanstalk

  • ❏ D. AWS CDK with Lambda custom resources to manage ENI attachment

An EC2 instance that uses an instance profile receives a 403 Access Denied error when it attempts to call S3 GetObject on a restricted bucket. What are two likely causes? (Choose 2)

  • ❏ A. The EC2 instance security group blocks outbound HTTPS traffic

  • ❏ B. Bucket policy denies or omits required access

  • ❏ C. Default encryption is enabled on the S3 bucket

  • ❏ D. IAM role for the instance lacks or is prevented from s3:GetObject

  • ❏ E. S3 Block Public Access is enabled on the bucket

In an AWS CodeBuild buildspec.yml file how should you configure pushing a Docker image to a private Amazon ECR repository so that the image is pushed only after a successful build?

  • ❏ A. Use a finally block in post_build to push the image

  • ❏ B. AWS CodePipeline

  • ❏ C. Use post_build commands to push the image to ECR

  • ❏ D. Use pre_build commands to push the image

Which AWS native solution provides real time alerts for each sts AssumeRole action that targets a specific IAM role?

  • ❏ A. CloudTrail Insights via EventBridge to SNS

  • ❏ B. EventBridge rule on CloudTrail sts:AssumeRole for the role, invoke Lambda to SNS

  • ❏ C. EventBridge rule for console sign-in; Lambda to SNS

  • ❏ D. CloudTrail Lake scheduled query alerts to SNS

Which AWS architecture ensures multi-AZ high availability, continuous vulnerability and exposure scanning, and correct Route 53 apex mapping to an ALB?

  • ❏ A. GuardDuty, EC2 Auto Scaling in two AZs behind ALB, RDS MySQL Multi-AZ with read replicas, Route 53 CNAME at apex

  • ❏ B. Inspector, EC2 Auto Scaling across four AZs behind ALB, Aurora, Route 53 alias apex to ALB

  • ❏ C. Security Hub, EC2 Auto Scaling across three AZs behind ALB, Aurora, Route 53 alias apex to ALB

  • ❏ D. Macie, DynamoDB, EC2 Auto Scaling across two AZs behind ALB, Route 53 non-alias A apex

Which AWS tool can discover a legacy Java application, containerize it for deployment to Amazon ECS or Amazon EKS, and bootstrap an automated CI/CD pipeline?

  • ❏ A. AWS Copilot CLI for ECS with CI/CD

  • ❏ B. AWS App2Container with CodeBuild/CodeDeploy

  • ❏ C. AWS Proton service templates

  • ❏ D. AWS Elastic Beanstalk Docker with CodePipeline

When using CodeDeploy with ALB traffic shifting how can you automatically roll back if EC2 maximum CPU usage spikes during live traffic?

  • ❏ A. CloudWatch alarm on ALB 5XXErrorRate tied to CodeDeploy rollback

  • ❏ B. CloudWatch alarm on EC2 Max CPU with CodeDeploy automatic rollback enabled

  • ❏ C. Auto Scaling target tracking on CPU plus ALB health checks

  • ❏ D. ValidateService hook samples CPU for 10 minutes and fails to trigger rollback

For a stateful Apache Kafka deployment with 12 brokers distributed across three Availability Zones which CloudFormation pattern ensures failed brokers are automatically replaced and the replacement instance in the same AZ reattaches its dedicated EBS volume?

  • ❏ A. EC2 Auto Scaling with Launch Template and EBS BlockDeviceMappings set DeleteOnTermination=false

  • ❏ B. Create 12 EC2 instances with attached EBS volumes in one stack and rely on stack updates for recovery

  • ❏ C. Single Auto Scaling group across 3 AZs with preprovisioned EBS volumes and user data to claim any free volume

  • ❏ D. Nested stacks: one-instance ASG per broker with a dedicated, tagged EBS volume and boot-time reattach

Which AWS service provides continuous threat detection across an account by analyzing CloudTrail events, VPC flow logs, and DNS logs?

  • ❏ A. Amazon Inspector

  • ❏ B. Amazon GuardDuty

  • ❏ C. Amazon Macie

  • ❏ D. Amazon Detective

How can you deploy to an Auto Scaling group behind a load balancer so that only one EC2 instance is updated at a time and the deployment automatically rolls back if CPU utilization exceeds 95%?

  • ❏ A. Elastic Beanstalk rolling update with batch size 1 and CPU alarm rollback

  • ❏ B. EC2 Auto Scaling Instance Refresh with 100% minimum healthy and rollback on CPU alarm

  • ❏ C. AWS CodeDeploy with Auto Scaling, OneAtATime, CloudWatch CPU alarm at 95%, auto rollback

  • ❏ D. AWS Systems Manager Automation with lifecycle hooks and Step Functions

Which choices provide a serverless container runtime and encrypted connectivity from on premises to a VPC for a Docker service migrating to AWS? (Choose 2)

  • ❏ A. AWS PrivateLink

  • ❏ B. ECS with Fargate launch type

  • ❏ C. AWS Direct Connect without VPN

  • ❏ D. Site-to-Site VPN over IPsec

  • ❏ E. Amazon EKS on EC2

How should a Lambda function retrieve Amazon RDS credentials at runtime while ensuring the database password is automatically rotated every 30 days?

  • ❏ A. AWS Systems Manager Parameter Store SecureString

  • ❏ B. Amazon RDS IAM authentication tokens

  • ❏ C. AWS Key Management Service

  • ❏ D. AWS Secrets Manager with rotation enabled

Which approach allows implementing a canary deployment that shifts approximately 15 percent of traffic before completing a full rollout for a Lambda API behind API Gateway?

  • ❏ A. AWS AppConfig feature flags for 15% exposure

  • ❏ B. Manage Lambda versions/aliases with CloudFormation and use API Gateway stage canary

  • ❏ C. AWS CDK with Route 53 failover routing

  • ❏ D. CodeDeploy all-at-once for Lambda with Route 53 simple routing

Which approach detects changes to an S3 bucket or a federated IAM role in near real time and triggers automated remediation?

  • ❏ A. EventBridge scheduled Lambda every 15 minutes to scan IAM and S3 policies

  • ❏ B. AWS CloudFormation drift detection daily with Lambda remediation

  • ❏ C. AWS Config change-triggered rule for the S3 bucket and federated role invoking SSM Automation via Lambda to roll back

  • ❏ D. EventBridge with CloudTrail API events to invoke Lambda on policy updates

ECS Fargate tasks running in private subnets without a NAT gateway fail with the error ‘ResourceInitializationError unable to pull secrets or registry auth’. What actions will allow the tasks to pull container images and retrieve secrets so they can start successfully? (Choose 2)

  • ❏ A. Run tasks in public subnets with public IPs

  • ❏ B. Create VPC endpoints: ECR API, ECR DKR, and S3

  • ❏ C. Add a Secrets Manager VPC endpoint only

  • ❏ D. Enable awsvpc networking on tasks

  • ❏ E. Fix the task execution IAM role permissions for ECR and Secrets Manager

Which solution enables end-to-end tracing for AWS Lambda that includes per-request subsegments, automatic anomaly detection, and notifications delivered within two minutes?

  • ❏ A. Internal Lambda extension with X-Ray, active tracing, and CloudWatch Logs Insights alerts

  • ❏ B. External Lambda extension emitting X-Ray segments/subsegments, active tracing, X-Ray groups and Insights, alerts via EventBridge and CloudWatch

  • ❏ C. AWS Distro for OpenTelemetry for Lambda with CloudWatch ServiceLens and Contributor Insights

  • ❏ D. Merge workflows into one function, enable X-Ray active tracing and Insights, add CloudWatch alarms

How can you replace SSH access to EC2 instances with AWS Systems Manager Session Manager while ensuring all session traffic stays private within the VPC and does not traverse the internet? (Choose 2)

  • ❏ A. Launch a bastion host

  • ❏ B. Create interface VPC endpoints for Systems Manager

  • ❏ C. Route Session Manager traffic via a NAT gateway

  • ❏ D. Attach IAM instance profile with AmazonSSMManagedInstanceCore

  • ❏ E. Create an interface VPC endpoint for Amazon EC2

What is the simplest way to obtain per-object unique user counts and total request counts for an S3 bucket in 12 hour windows while minimizing operational overhead?

  • ❏ A. Load S3 access logs into Amazon Redshift and run SQL

  • ❏ B. Enable S3 server access logging and query with Amazon Athena

  • ❏ C. Amazon S3 Storage Lens

  • ❏ D. Stream S3 data events via AWS CloudTrail to CloudWatch Logs and aggregate with Lambda

In AWS CloudFormation what stack structure allows teams to deploy independently while keeping cross stack dependencies manageable and templates maintained in source control?

  • ❏ A. Parent stack with nested stacks; update parent on any change

  • ❏ B. CloudFormation StackSets from a central account for all components

  • ❏ C. Decoupled stacks per domain with Export/ImportValue and separate versioning

  • ❏ D. Single monolithic template deployed on every change

How can you enforce TLS for data in transit and ensure encryption at rest while replicating Amazon S3 objects to a second Region on another continent at about 20,000 objects per second without causing throughput bottlenecks?

  • ❏ A. Deny non-TLS with aws:SecureTransport, use SSE-KMS with multi-Region keys, and enable S3 Cross-Region Replication

  • ❏ B. Block non-TLS via aws:SecureTransport, set default SSE-S3, and use S3 Cross-Region Replication

  • ❏ C. Use S3 Multi-Region Access Points to route to two buckets, enforce TLS, and use SSE-KMS

  • ❏ D. Use S3 Replication Time Control with SSE-KMS and rely on client retries

How can you trigger immediate email alerts whenever a CloudWatch Logs stream contains log entries marked CRITICAL?

  • ❏ A. AWS Firewall Manager with EventBridge rule

  • ❏ B. CloudWatch Logs Insights scheduled query to SNS

  • ❏ C. CloudWatch Logs metric filter with CloudWatch alarm to SNS

  • ❏ D. CloudWatch Synthetics canary and CloudWatch alarm

What approach lets you query using the same partition key with an alternate sort key in DynamoDB while still supporting strongly consistent reads?

  • ❏ A. Add a Global Secondary Index with the same partition key and a different sort key

  • ❏ B. Build a new table with an LSI that reuses the partition key and adds a new sort key; migrate data

  • ❏ C. Add a Local Secondary Index to the existing table

  • ❏ D. Use DynamoDB Accelerator (DAX) for strong read consistency

How can you make only the catalog table globally available while keeping the accounts and view_history tables regional in Aurora and requiring minimal application changes?

  • ❏ A. Use DynamoDB Global Tables for catalog; keep accounts and view_history in Aurora

  • ❏ B. Provision Aurora Global Database for catalog; keep other tables regional in Aurora

  • ❏ C. Migrate all tables to DynamoDB Global Tables

  • ❏ D. Create cross-Region Aurora read replicas for the entire cluster

During an AWS CodeDeploy blue green deployment of EC2 instances behind an Application Load Balancer the AllowTraffic step fails but CodeDeploy logs show no errors. What is the most likely cause?

  • ❏ A. AWS WAF rules are blocking ALB health checks

  • ❏ B. ALB target group health checks are misconfigured

  • ❏ C. Auto Scaling scale-in removed instances during deployment

  • ❏ D. CodeDeploy service role lacks Elastic Load Balancing permissions

What is the most cost-effective method to trigger an alert within 120 seconds when the DynamoDB DeleteTable API is called?

  • ❏ A. Use AWS Config with a custom rule to notify SNS on table deletion

  • ❏ B. Create an EventBridge rule on DynamoDB service events for DeleteTable and send to SNS

  • ❏ C. Enable CloudTrail management events and add an EventBridge rule filtering dynamodb DeleteTable to SNS

  • ❏ D. Stream CloudTrail to CloudWatch Logs with a metric filter on DeleteTable and trigger Lambda to publish to SNS

How should EC2 instances securely retrieve database credentials at runtime across development, test, and production environments?

  • ❏ A. Store the password in Amazon S3 encrypted with SSE-KMS and download at boot with an instance role

  • ❏ B. Bake the password into the AMI and use an instance profile only for other services

  • ❏ C. Use static access keys on instances to read a SecureString from AWS Systems Manager Parameter Store

  • ❏ D. Use an instance profile and retrieve credentials from AWS Secrets Manager

How can you declaratively enable HTTP to HTTPS redirects on an Elastic Beanstalk application load balancer so CodePipeline applies the change without making direct environment modifications?

  • ❏ A. Add a CloudFormation stage to modify the ALB listener

  • ❏ B. Commit .ebextensions/alb.config with option_settings for aws:elbv2:listener:default redirect

  • ❏ C. Use EB CLI saved_configs to push listener rules

  • ❏ D. Use container_commands to call ALB APIs from instances

For a multi Region serverless API using Amazon API Gateway and AWS Lambda, which routing method best directs users to the lowest latency regional endpoint while providing automatic failover?

  • ❏ A. Route 53 geolocation routing to regional API Gateway with health checks; DynamoDB global tables

  • ❏ B. Route 53 latency-based routing to regional API Gateway with health checks; in-Region Lambda; DynamoDB global tables

  • ❏ C. CloudFront in front of a single regional API Gateway

  • ❏ D. Route 53 failover routing with Application Recovery Controller across two API Gateways

In CodeDeploy for EC2 instances with an Application Load Balancer which lifecycle event should run live health checks so a rolling deployment can automatically roll back on failure?

  • ❏ A. Integrate the deployment group with an ALB and rely on target health checks to roll back

  • ❏ B. Configure a CloudWatch alarm for the deployment to trigger rollback when it goes to ALARM

  • ❏ C. Run health checks in the ValidateService hook in appspec.yml and enable automatic rollback on failures

  • ❏ D. Use EventBridge to invoke a Lambda that stops the deployment and redeploys the previous version on failed probes

What is the most efficient way to automate consistent, isolated deployments of an application to five additional AWS Regions?

  • ❏ A. AWS CodePipeline with cross-Region actions

  • ❏ B. Use AWS CloudFormation change sets from an administrator account across Regions

  • ❏ C. AWS CloudFormation StackSets with a delegated admin to deploy identical stacks to targeted Regions

  • ❏ D. AWS CDK Pipelines across Regions

Which AWS service enables low-latency local writes while providing global reads with multi-active replication across six Regions?

  • ❏ A. Amazon DynamoDB Accelerator (DAX)

  • ❏ B. Amazon Aurora Global Database

  • ❏ C. Amazon DynamoDB Global Tables

  • ❏ D. Amazon RDS with cross-Region read replicas

AWS DevOps Practice Test Answers

How can you automatically start and stop EC2 instances and a single RDS DB only while CodePipeline test stages run, which last about four hours and occur roughly three times per week, without changing the existing architecture?

  • ✓ C. EventBridge CodePipeline state-change events invoking Systems Manager Automation to start/stop EC2 and RDS

The best choice is EventBridge CodePipeline state-change events invoking Systems Manager Automation to start/stop EC2 and RDS. This option uses EventBridge to detect pipeline and stage state changes and then invokes a Systems Manager Automation runbook that starts and stops the EC2 instances and the single RDS DB only while tests run.

This approach is event driven and centrally managed. It lets you create an idempotent Automation runbook that starts the required EC2 instances and the RDS DB when the test stage begins and stops them when tests finish. This keeps costs low and preserves the existing architecture because it does not require changing the workload to a different database engine or adding load balancers.

The option Use CodeBuild pre/post steps with AWS CLI to start and stop resources is risky because it places the start and stop logic inside build steps. Those steps may not run or may fail on pipeline cancellations and they require managing scripts and credentials, which increases operational complexity and failure modes.

The option AWS Instance Scheduler with a weekly cron schedule is wrong because it is strictly time based. A weekly cron starts or stops instances at fixed times, so it can leave resources running when no tests are active or miss ad hoc and shifted pipeline runs, and it does not align resource runtime to actual pipeline test windows.

The option Migrate to Aurora Serverless, add an ALB, and use Lambda to toggle power is incorrect because it requires redesigning the architecture. Migrating the database and adding an ALB are unnecessary changes for the stated requirement and they introduce extra cost and complexity when an event driven Automation runbook can meet the goal without rearchitecting.

Use event driven triggers from EventBridge to invoke Systems Manager Automation runbooks for coordinated start and stop actions so resources run only during pipeline test stages and the architecture remains unchanged.
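
To make the wiring concrete, here is a minimal boto3 sketch of the rule and target, assuming a hypothetical pipeline named orders-pipeline, a stage named Test, a runbook called StartStopTestResources, and an EventBridge role that can start Automation executions.

    import json
    import boto3

    events = boto3.client("events")

    # Fire when the Test stage of this pipeline starts or finishes.
    pattern = {
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Stage Execution State Change"],
        "detail": {
            "pipeline": ["orders-pipeline"],
            "stage": ["Test"],
            "state": ["STARTED", "SUCCEEDED", "FAILED"],
        },
    }

    events.put_rule(
        Name="test-stage-power-schedule",
        EventPattern=json.dumps(pattern),
        State="ENABLED",
    )

    # Target an Automation runbook that starts or stops the EC2 instances and the RDS DB.
    # The runbook (not shown) reads the stage state from the event and acts accordingly.
    events.put_targets(
        Rule="test-stage-power-schedule",
        Targets=[{
            "Id": "start-stop-runbook",
            "Arn": "arn:aws:ssm:us-east-1:111122223333:automation-definition/StartStopTestResources:$DEFAULT",
            "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeAutomationRole",
        }],
    )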

How can EC2 instances running in private subnets securely retrieve user data and bootstrap packages without any Internet egress?

  • ✓ B. Mirror dependencies to S3 and fetch via an S3 gateway VPC endpoint using an instance role

Mirror dependencies to S3 and fetch via an S3 gateway VPC endpoint using an instance role is correct because it lets EC2 instances in private subnets retrieve user data and bootstrap packages without any Internet egress.

This approach keeps traffic on the AWS network so there is no public path to the Internet. Using an instance role avoids embedding credentials in images or scripts and lets IAM enforce least privilege. You can further restrict access by adding an S3 bucket policy that uses aws:SourceVpce to allow only the specific VPC endpoint and by enabling S3 Block Public Access to prevent accidental exposure.

Attach an egress-only internet gateway is incorrect because it still provides outbound Internet connectivity and it only works for IPv6. That behavior violates a strict no Internet egress requirement.

Route outbound traffic through a NAT gateway is wrong because a NAT gateway explicitly enables Internet egress for private subnets and therefore does not meet the isolation requirement.

Create a VPC endpoint to Amazon Linux repositories is not viable because public operating system repositories do not offer PrivateLink VPC endpoints. To avoid Internet access you must mirror those repositories into a private store such as S3 or run an internal package cache.

When a question specifies no Internet egress look for solutions that use VPC endpoints and private mirrors such as S3 and enforce access with instance roles and bucket policies.
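
A minimal sketch of that lockdown follows, assuming a hypothetical mirror bucket and VPC endpoint ID; adapt the policy before applying it anywhere real.

    import json
    import boto3

    BUCKET = "bootstrap-mirror"

    # Deny any request that does not arrive through the gateway endpoint.
    # Note that this Deny also applies to console and admin calls made outside the VPC,
    # so scope it carefully on a real bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowOnlyFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc1234567890def"}},
        }],
    }

    s3 = boto3.client("s3")
    s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

    # On the instance the role credentials are picked up automatically, so the
    # bootstrap script can fetch packages without any embedded keys.
    s3.download_file(BUCKET, "packages/app.rpm", "/tmp/app.rpm")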

Which deployment strategy provides zero downtime releases with immediate rollback, stops further deployments on failure, and avoids any capacity reduction for a stack using Lambda, DynamoDB, and OpenSearch?

  • ✓ B. CloudFormation; CodeDeploy blue/green (Lambda); Elastic Beanstalk Immutable

The correct choice is CloudFormation; CodeDeploy blue/green (Lambda); Elastic Beanstalk Immutable.

CodeDeploy blue/green for Lambda uses aliases to shift traffic, and it supports automatic rollback on failed health checks, which also stops downstream pipeline stages. Elastic Beanstalk Immutable provisions a parallel Auto Scaling group and validates the new instances before switching traffic, which provides true zero downtime and allows a fast, clean rollback without reducing capacity. CloudFormation ties the stack resources together so it can orchestrate the deployment across the Lambda, DynamoDB, and OpenSearch environments.

CloudFormation; CodeDeploy in-place; Elastic Beanstalk Rolling with additional batch is wrong because in place updates lack environment isolation and they make rollback slower and riskier even if capacity can be preserved during parts of the update.

CloudFormation; CodeDeploy blue/green; Elastic Beanstalk All at once is wrong because an all at once policy replaces every instance at the same time and that causes a visible outage which fails the zero downtime requirement.

CloudFormation; CodeDeploy canary (Lambda); Elastic Beanstalk Rolling is wrong because canary deployments improve Lambda safety but a rolling update for the web tier can reduce capacity during the update and does not guarantee zero downtime or an immediate, isolated rollback.

When the question demands zero downtime and instant rollback remember to pick blue/green for Lambda and Immutable for instance based web tiers.

How can you automatically update an S3 static status page when EC2 Auto Scaling instances launch or terminate while maintaining a durable searchable log of the scaling events?

  • ✓ C. Use an EventBridge rule for Auto Scaling instance launch and terminate to invoke Lambda; Lambda updates the S3 page and writes events to CloudWatch Logs

The best solution is Use an EventBridge rule for Auto Scaling instance launch and terminate to invoke Lambda; Lambda updates the S3 page and writes events to CloudWatch Logs. This approach uses EventBridge to capture Auto Scaling lifecycle events and invokes Lambda to make immediate updates to the S3 hosted status page and to record each event in CloudWatch Logs.

EventBridge delivers near real time event detection for instance launch and terminate and a Lambda function can update either the rendered page or a backing JSON file on S3 so the site reflects current fleet state immediately. The same Lambda can write structured entries to CloudWatch Logs so you have a durable searchable audit trail without additional export steps.

The option Create an EventBridge rule that delivers events to CloudWatch Logs and periodically export to S3 is not ideal because it does not provide real time updates to the S3 page and it requires scheduled exports which add delay and operational overhead.

The option Configure an EventBridge rule with two targets, S3 and CloudWatch Logs is incorrect because S3 is not a supported direct EventBridge target so you cannot push objects to S3 without an intermediary such as Lambda.

The option Run a scheduled EventBridge rule every 15 minutes to have Lambda scan the Auto Scaling group and update S3, logging to S3 is weaker because periodic scans introduce lag compared to event driven triggers and storing logs only in S3 makes them less immediately searchable without services like Athena.

Use EventBridge to trigger Lambda on Auto Scaling lifecycle events and send logs to CloudWatch Logs for durable searchable audit trails
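
The Lambda in this flow can stay very small. The following sketch assumes a hypothetical bucket and a status.json key; it writes the page and emits a structured log line that lands in CloudWatch Logs.

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "fleet-status-site"

    def handler(event, context):
        detail = event.get("detail", {})
        entry = {
            "time": event.get("time"),
            "event": event.get("detail-type"),      # e.g. "EC2 Instance Launch Successful"
            "instance": detail.get("EC2InstanceId"),
            "asg": detail.get("AutoScalingGroupName"),
        }

        # print() lands in CloudWatch Logs, which becomes the durable searchable record.
        print(json.dumps(entry))

        # Overwrite the JSON document the static page reads so it reflects the new fleet state.
        s3.put_object(
            Bucket=BUCKET,
            Key="status.json",
            Body=json.dumps(entry).encode("utf-8"),
            ContentType="application/json",
        )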

Which approach most simply ensures that each of 15 EC2 instances retains the same private IP address and hostname across reboots and replacements by reusing preallocated Elastic Network Interfaces?

  • ✓ B. CloudFormation nested stacks with one-instance Auto Scaling groups that reattach designated ENIs and set hostnames at boot

CloudFormation nested stacks with one-instance Auto Scaling groups that reattach designated ENIs and set hostnames at boot is correct because it most simply ensures each EC2 instance keeps the same private IP and hostname while still allowing automatic replacement and recovery.

Preallocating a dedicated ENI per node preserves the private IP address when an instance is replaced and lets you consistently apply hostname and DNS configuration at boot. Using an Auto Scaling group sized to one provides self healing because the ASG can relaunch and reattach the designated ENI for that node. Modeling the pattern with CloudFormation nested stacks makes it easy to replicate the setup for all 15 nodes without extensive custom scripting.

Develop a custom AWS CLI or SDK script to launch and replace instances, attaching preallocated ENIs is operationally brittle and manual because it lacks built in self healing and requires ongoing maintenance to cover failure modes and edge cases.

AWS Elastic Beanstalk abstracts the infrastructure and does not provide deterministic control over reusing specific ENIs or enforcing per instance static hostnames so it is not suitable for stable network identities.

AWS CDK with Lambda custom resources to manage ENI attachment can accomplish ENI reuse but it adds custom code and event handling which increases complexity and operational burden compared with a straightforward CloudFormation nested stack pattern.

When the requirement is persistent private IPs per EC2 instance think preallocated ENIs and use one instance Auto Scaling groups managed by CloudFormation for self healing and minimal custom code.
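
As a rough illustration, a boot time script run from user data could reattach the node's designated ENI like this; the tag key NodeName, the node name, and the device index are assumptions.

    import urllib.request
    import boto3

    def imds(path):
        # IMDSv2: fetch a session token, then read the metadata path.
        token_req = urllib.request.Request(
            "http://169.254.169.254/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
        token = urllib.request.urlopen(token_req).read().decode()
        req = urllib.request.Request(
            "http://169.254.169.254/latest/meta-data/" + path,
            headers={"X-aws-ec2-metadata-token": token})
        return urllib.request.urlopen(req).read().decode()

    instance_id = imds("instance-id")
    ec2 = boto3.client("ec2", region_name=imds("placement/region"))

    # Find the ENI preallocated for this node and attach it as a secondary interface.
    eni = ec2.describe_network_interfaces(
        Filters=[{"Name": "tag:NodeName", "Values": ["node-07"]},
                 {"Name": "status", "Values": ["available"]}]
    )["NetworkInterfaces"][0]

    ec2.attach_network_interface(
        NetworkInterfaceId=eni["NetworkInterfaceId"],
        InstanceId=instance_id,
        DeviceIndex=1,
    )

    # A follow-on step (not shown) would set the hostname to match the node name.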

An EC2 instance that uses an instance profile receives a 403 Access Denied error when it attempts to call S3 GetObject on a restricted bucket. What are two likely causes? (Choose 2)

  • ✓ B. Bucket policy denies or omits required access

  • ✓ D. IAM role for the instance lacks or is prevented from s3:GetObject

Bucket policy denies or omits required access and IAM role for the instance lacks or is prevented from s3:GetObject are the correct answers.

The option Bucket policy denies or omits required access causes S3 to return 403 Access Denied when the policy does not include an Allow for the caller, or when it contains an explicit Deny or a condition, such as a required source VPC endpoint, that the request does not meet. Bucket policies are evaluated together with IAM permissions, and an explicit Deny overrides any Allow.

The option IAM role for the instance lacks or is prevented from s3:GetObject also produces 403 when the instance profile role does not grant s3:GetObject or when the role is blocked by a permissions boundary or an organization service control policy. Objects encrypted with a customer managed KMS key will also yield 403 if the role lacks the KMS decrypt permission even when S3 permissions are present.

The option The EC2 instance security group blocks outbound HTTPS traffic is incorrect because network egress blocks normally lead to connection failures or timeouts rather than an S3 403 authorization error.

The option Default encryption is enabled on the S3 bucket is incorrect because S3 transparently decrypts for authorized principals and default encryption alone does not deny access unless KMS key permissions are missing.

The option S3 Block Public Access is enabled on the bucket is incorrect because Block Public Access prevents public ACLs and policies from granting public access but it does not block requests from an authenticated IAM role that has the proper permissions.

When you see a 403 from S3 check bucket policies IAM role permissions and KMS key grants before troubleshooting network connectivity

In an AWS CodeBuild buildspec.yml file how should you configure pushing a Docker image to a private Amazon ECR repository so that the image is pushed only after a successful build?

  • ✓ C. Use post_build commands to push the image to ECR

Use post_build commands to push the image to ECR is correct because the push belongs after the build and test commands have finished, so the image is published only once the build output is known to be good.

Practically, you should perform the ECR login and setup in pre_build, build and tag the image in build, and run the push in post_build. Guarding the push with the CODEBUILD_BUILD_SUCCEEDING environment variable keeps it conditional on a successful build and prevents publishing images from failed runs.

Use a finally block in post_build to push the image is incorrect because a finally section runs regardless of success or failure and it can push images even when the build has failed.

AWS CodePipeline is not the correct answer for this specific question because the requirement is about configuring the buildspec so the push happens only after a successful build inside the build job. CodePipeline can orchestrate stages but it does not replace the need to put a conditional push in the buildspec.

Use pre_build commands to push the image is incorrect because pre_build runs before the build steps and it could push incomplete or failing artifacts.

Remember that CodeBuild phases run sequentially and stop on failure so use pre_build for login, build for creating the image and post_build for pushing the image only on success.

Which AWS native solution provides real time alerts for each sts AssumeRole action that targets a specific IAM role?

  • ✓ B. EventBridge rule on CloudTrail sts:AssumeRole for the role, invoke Lambda to SNS

EventBridge rule on CloudTrail sts:AssumeRole for the role, invoke Lambda to SNS is correct because you can create an EventBridge rule on the default event bus that matches the AWS API Call via CloudTrail event type and filters on detail.eventSource = "sts.amazonaws.com", detail.eventName = "AssumeRole", and detail.requestParameters.roleArn for the specific role. This approach provides deterministic real time alerts and keeps CloudTrail as the audit source of record.

Using an EventBridge rule matched to CloudTrail API events lets EventBridge deliver each matching event immediately and then invoke a Lambda function to format and publish a message to SNS. The Lambda gives you flexible formatting and enrichment before the notification so recipients get actionable details right away.

CloudTrail Insights via EventBridge to SNS is incorrect because Insights flags anomalous patterns and not every occurrence of a specific API call against a given role. Insights can miss routine AssumeRole calls that are not statistically anomalous.

EventBridge rule for console sign-in; Lambda to SNS is incorrect because ConsoleLogin events do not show subsequent sts:AssumeRole calls and this method would miss CLI and programmatic role assumptions.

CloudTrail Lake scheduled query alerts to SNS is incorrect because CloudTrail Lake queries are batch oriented and cannot deliver immediate per event notifications.

When the requirement is a precise near real time alert, look for EventBridge event patterns on AWS API Call via CloudTrail with content based filters on fields such as eventName, eventSource, and requestParameters, and avoid anomaly detection or batch query services when the prompt stresses immediate, deterministic, per event alerts.
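
A boto3 sketch of that rule follows, with the role ARN and the Lambda target as placeholders.

    import json
    import boto3

    # Match every AssumeRole call recorded by CloudTrail that targets the sensitive role.
    pattern = {
        "source": ["aws.sts"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["sts.amazonaws.com"],
            "eventName": ["AssumeRole"],
            "requestParameters": {
                "roleArn": ["arn:aws:iam::111122223333:role/SensitiveAdminRole"],
            },
        },
    }

    events = boto3.client("events")
    events.put_rule(
        Name="alert-on-assume-sensitive-role",
        EventPattern=json.dumps(pattern),
        State="ENABLED",
    )

    # The Lambda formats the event and publishes to SNS. It also needs a resource policy
    # (lambda add_permission) that lets EventBridge invoke it.
    events.put_targets(
        Rule="alert-on-assume-sensitive-role",
        Targets=[{
            "Id": "format-and-notify",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:format-and-publish",
        }],
    )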

Which AWS architecture ensures multi-AZ high availability, continuous vulnerability and exposure scanning, and correct Route 53 apex mapping to an ALB?

  • ✓ B. Inspector, EC2 Auto Scaling across four AZs behind ALB, Aurora, Route 53 alias apex to ALB

The correct choice is Inspector, EC2 Auto Scaling across four AZs behind ALB, Aurora, Route 53 alias apex to ALB. This option aligns the required continuous vulnerability and exposure scanning with infrastructure that spans multiple Availability Zones and it maps the zone apex correctly to an Application Load Balancer.

Inspector provides native continuous vulnerability and exposure scanning for EC2 instances and for container images stored in ECR and for Lambda code so it meets the ongoing assessment requirement. Deploying EC2 Auto Scaling across four AZs behind ALB gives high availability by spreading instances across multiple fault domains and by using an Application Load Balancer to distribute traffic. Using Aurora provides a resilient database tier and using a Route 53 alias apex to ALB ensures the root domain can point to the load balancer which is not possible with a plain CNAME.

The option GuardDuty, EC2 Auto Scaling in two AZs behind ALB, RDS MySQL Multi-AZ with read replicas, Route 53 CNAME at apex is incorrect because GuardDuty focuses on threat detection and findings rather than continuous vulnerability scanning and a CNAME cannot be used at the zone apex to map to an ALB.

The option Security Hub, EC2 Auto Scaling across three AZs behind ALB, Aurora, Route 53 alias apex to ALB is incorrect because Security Hub aggregates and normalizes security findings and standards but it does not itself perform the vulnerability and exposure scanning required by the question.

The option Macie, DynamoDB, EC2 Auto Scaling across two AZs behind ALB, Route 53 non-alias A apex is incorrect because Macie is for sensitive data discovery and classification rather than vulnerability scanning and a non-alias A record cannot directly target an ALB at the zone apex.

Match each requirement to the service capability and remember that a Route 53 apex must use an alias record for an ALB and that Inspector is the AWS service for continuous vulnerability and exposure scanning.
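
For reference, the apex alias record can be expressed as in the sketch below; the hosted zone ID, ALB DNS name, and the ALB's canonical hosted zone ID are placeholders you would look up for your own load balancer.

    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z0EXAMPLE12345",                 # the example.com public hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",                # the zone apex, where a CNAME is not allowed
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ALB's canonical hosted zone ID
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": True,
                },
            },
        }]},
    )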

Which AWS tool can discover a legacy Java application, containerize it for deployment to Amazon ECS or Amazon EKS, and bootstrap an automated CI/CD pipeline?

  • ✓ B. AWS App2Container with CodeBuild/CodeDeploy

AWS App2Container with CodeBuild/CodeDeploy is correct because it is designed to discover existing Java applications running on servers, produce container images and task or pod definitions for deployment to Amazon ECS or Amazon EKS, and generate a working CI/CD pipeline that uses CodeBuild and CodeDeploy.

AWS App2Container with CodeBuild/CodeDeploy automates application discovery and containerization so you do not need to manually refactor the application to create a container image. It also scaffolds deployment artifacts and CI/CD integration so you can move a legacy Java workload to ECS or EKS with minimal manual steps.

AWS Copilot CLI for ECS with CI/CD is incorrect because Copilot is focused on deploying and managing new containerized services and their pipelines rather than analyzing and containerizing applications that are already running on servers.

AWS Proton service templates is incorrect because Proton standardizes infrastructure and service templates across teams rather than transforming legacy applications into containers or performing discovery on existing VMs.

AWS Elastic Beanstalk Docker with CodePipeline is incorrect because Elastic Beanstalk can run Dockerized applications and integrate with pipelines but it does not provide automated discovery and containerization of an existing Java workload.

When the question highlights discovery of an existing application plus automated containerization and CI/CD scaffolding, choose App2Container.

When using CodeDeploy with ALB traffic shifting how can you automatically roll back if EC2 maximum CPU usage spikes during live traffic?

  • ✓ B. CloudWatch alarm on EC2 Max CPU with CodeDeploy automatic rollback enabled

CloudWatch alarm on EC2 Max CPU with CodeDeploy automatic rollback enabled is correct because CodeDeploy supports attaching CloudWatch alarms to a deployment group and will automatically rollback a deployment if a monitored alarm breaches while traffic is being shifted by an ALB.

Create an alarm that monitors the EC2 metric CPUUtilization with the statistic Maximum and enable automatic rollback on the deployment group so CodeDeploy watches that alarm during live traffic and reverts the deployment if the alarm breaches.

CloudWatch alarm on ALB 5XXErrorRate tied to CodeDeploy rollback is not appropriate because the scenario asks to react to high EC2 CPU and an ALB 5XX error rate may not increase when instances are CPU constrained but still returning responses.

Auto Scaling target tracking on CPU plus ALB health checks is incorrect because scaling policies and health checks manage capacity and instance health and they do not cause CodeDeploy to roll back a deployment.

ValidateService hook samples CPU for 10 minutes and fails to trigger rollback is brittle and is not the supported automatic rollback mechanism during ALB traffic shifting so it does not provide a reliable rollback on live CPU spikes.

For automatic rollback during ALB traffic shifting choose a CloudWatch alarm attached to the deployment group and enable automatic rollback so the rollback triggers on the exact metric that matches the failure.
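
A rough sketch of the alarm and deployment group wiring follows; the Auto Scaling group, application, deployment group, and threshold values are assumptions.

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="deploy-max-cpu-spike",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        Statistic="Maximum",              # watch the worst instance, not the average
        Period=60,
        EvaluationPeriods=3,
        Threshold=95.0,
        ComparisonOperator="GreaterThanThreshold",
    )

    codedeploy = boto3.client("codedeploy")
    codedeploy.update_deployment_group(
        applicationName="web-app",
        currentDeploymentGroupName="prod",
        alarmConfiguration={"enabled": True, "alarms": [{"name": "deploy-max-cpu-spike"}]},
        autoRollbackConfiguration={
            "enabled": True,
            "events": ["DEPLOYMENT_STOP_ON_ALARM", "DEPLOYMENT_FAILURE"],
        },
    )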

For a stateful Apache Kafka deployment with 12 brokers distributed across three Availability Zones which CloudFormation pattern ensures failed brokers are automatically replaced and the replacement instance in the same AZ reattaches its dedicated EBS volume?

  • ✓ D. Nested stacks: one-instance ASG per broker with a dedicated, tagged EBS volume and boot-time reattach

Nested stacks: one-instance ASG per broker with a dedicated, tagged EBS volume and boot-time reattach is correct because it gives each broker its own min=1 max=1 Auto Scaling group and a dedicated AZ local volume that a replacement instance can identify and reattach at boot.

This pattern works because a single instance ASG performs health based replacement and the tagged volume stays in the same Availability Zone, so a new instance launched into that AZ can find the volume by tag and attach it using user data or a lifecycle hook. Using nested stacks makes it easy to repeat the same per broker pattern across all twelve brokers while keeping templates maintainable.

EC2 Auto Scaling with Launch Template and EBS BlockDeviceMappings set DeleteOnTermination=false is wrong because preventing deletion only leaves detached volumes behind and it does not provide a mechanism for the replacement instance to automatically find and reattach the prior broker volume.

Create 12 EC2 instances with attached EBS volumes in one stack and rely on stack updates for recovery is wrong because CloudFormation will not recreate instances that are terminated outside a stack operation and it does not handle stateful reattachment of existing volumes for replacements.

Single Auto Scaling group across 3 AZs with preprovisioned EBS volumes and user data to claim any free volume is wrong because a single ASG can launch instances in any AZ and EBS volumes are AZ scoped so volumes cannot move between AZs. Relying on a generic claim process cannot reliably match a broker to its correct disk.

Stateful nodes need one-to-one ASGs and AZ local volumes, so prefer per node ASGs with tagged volumes and boot time reattach logic when the question stresses AZ affinity and automatic recovery.
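
The boot time claim logic for one broker can be as small as the sketch below; the tag key KafkaBroker, the broker name, and the device name are assumptions, and the instance ID would normally come from instance metadata.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0abc1234567890def"   # in practice read from instance metadata

    # The volume was preprovisioned in this broker's AZ and tagged with its broker number.
    volume = ec2.describe_volumes(
        Filters=[{"Name": "tag:KafkaBroker", "Values": ["broker-04"]},
                 {"Name": "status", "Values": ["available"]}]
    )["Volumes"][0]

    ec2.attach_volume(VolumeId=volume["VolumeId"], InstanceId=instance_id, Device="/dev/xvdf")
    # The rest of the boot script would mount the filesystem and start the broker process.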

Which AWS service provides continuous threat detection across an account by analyzing CloudTrail events, VPC flow logs, and DNS logs?

  • ✓ B. Amazon GuardDuty

Amazon GuardDuty is the correct choice because it continuously analyzes AWS CloudTrail events, VPC Flow Logs, and DNS logs to detect compromised resources and suspicious activity across an account and it produces prioritized, actionable findings.

Amazon GuardDuty ingests those log sources and applies threat intelligence, anomaly detection, and machine learning to surface threats without requiring you to deploy sensors or agents. The service runs continuously and integrates findings with other AWS services for alerting and automated response.

Amazon Inspector is incorrect because it focuses on vulnerability and configuration assessments of workloads and it does not provide continuous account level detection from CloudTrail, VPC flow, or DNS logs.

Amazon Macie is incorrect because it specializes in discovering and protecting sensitive data in Amazon S3 and it does not analyze account activity logs for threat detection.

Amazon Detective is incorrect because it is intended for investigation and root cause analysis by building relationship graphs from findings such as those produced by Amazon GuardDuty and it does not perform the initial continuous threat detection across CloudTrail and VPC logs.

Continuous detection across CloudTrail, VPC Flow Logs, and DNS logs is a strong clue to choose the managed threat detection service. If the scenario emphasizes vulnerability scanning pick Inspector and if it emphasizes S3 sensitive data pick Macie.

How can you deploy to an Auto Scaling group behind a load balancer so that only one EC2 instance is updated at a time and the deployment automatically rolls back if CPU utilization exceeds 95%?

  • ✓ C. AWS CodeDeploy with Auto Scaling, OneAtATime, CloudWatch CPU alarm at 95%, auto rollback

AWS CodeDeploy with Auto Scaling, OneAtATime, CloudWatch CPU alarm at 95%, auto rollback is correct because CodeDeploy can perform one-instance-at-a-time deployments to instances in an Auto Scaling group and it can bind CloudWatch alarms to a deployment so that a CPU breach triggers an automatic stop and rollback.

CodeDeploy provides a OneAtATime deployment configuration that removes a single instance from the load balancer, updates it, and then returns it to service while other instances continue serving traffic. You can target an Auto Scaling group with a deployment group and associate CloudWatch alarms with the deployment so that a CPU alarm exceeding the 95 percent threshold will stop the deployment and initiate rollback automatically.

Elastic Beanstalk rolling update with batch size 1 and CPU alarm rollback is incorrect because Elastic Beanstalk supports rolling updates and health checks but it does not offer the same native per-deployment, CloudWatch-metric-driven automatic rollback behavior as CodeDeploy.

EC2 Auto Scaling Instance Refresh with 100% minimum healthy and rollback on CPU alarm is incorrect because Instance Refresh focuses on replacing instances based on health checks and does not natively use a CloudWatch CPU alarm to trigger rollback during the refresh process.

AWS Systems Manager Automation with lifecycle hooks and Step Functions is incorrect because that solution requires a custom orchestration and it lacks the built-in deployment semantics and automatic CloudWatch alarm rollback that CodeDeploy provides.

Remember to think CodeDeploy OneAtATime plus an attached CloudWatch alarm when the exam asks for single-instance updates behind a load balancer with automatic rollback on a metric breach.

Which choices provide a serverless container runtime and encrypted connectivity from on premises to a VPC for a Docker service migrating to AWS? (Choose 2)

  • ✓ B. ECS with Fargate launch type

  • ✓ D. Site-to-Site VPN over IPsec

The correct choices are ECS with Fargate launch type and Site-to-Site VPN over IPsec. ECS with Fargate launch type provides a serverless container runtime and Site-to-Site VPN over IPsec provides encrypted connectivity from on premises to a VPC.

ECS with Fargate launch type runs containers without requiring you to manage servers or EC2 capacity. This reduces operational overhead and meets the requirement to minimize infrastructure management for a Docker service migration.

Site-to-Site VPN over IPsec creates an encrypted IPsec tunnel between your on premises network and the AWS VPC. This satisfies the requirement for in transit encryption for traffic between the on premises environment and the service hosted in AWS.

AWS PrivateLink is not suitable because it provides private access to specific AWS services through interface endpoints inside a VPC and it does not act as a general routed encrypted path from on premises to the VPC.

AWS Direct Connect without VPN is not sufficient because Direct Connect by itself does not provide encryption by default and you would need to add a VPN or other encryption to meet the encrypted connectivity requirement.

Amazon EKS on EC2 is not ideal because running EKS on EC2 requires managing worker nodes, patching, capacity and scaling which increases management overhead and conflicts with the goal of minimizing infrastructure management.

Think serverless for container runtimes when the goal is minimal management and think Site-to-Site VPN for straightforward encrypted on premises to VPC connectivity.

How should a Lambda function retrieve Amazon RDS credentials at runtime while ensuring the database password is automatically rotated every 30 days?

  • ✓ D. AWS Secrets Manager with rotation enabled

The best solution is AWS Secrets Manager with rotation enabled. This service allows a Lambda function to retrieve the database credentials at runtime and it supports managed rotation so the password can be changed automatically on a 30 day schedule.

AWS Secrets Manager with rotation enabled stores credentials securely and integrates with Amazon RDS. Lambda can call the GetSecretValue API to obtain the secret at execution time and Secrets Manager can deploy a rotation Lambda that updates the RDS password and the stored secret on the schedule you specify, which meets the automatic rotation requirement and avoids embedding credentials in code.

AWS Systems Manager Parameter Store SecureString can hold encrypted values but it lacks built in, managed rotation for RDS credentials. Achieving a 30 day password rotation with Parameter Store would require building and running custom rotation code and schedules.

Amazon RDS IAM authentication tokens use short lived tokens instead of a static password and they remove the need to store a password, but they do not satisfy a requirement that explicitly mandates rotating a database password. If the policy allowed token based authentication then tokens could be acceptable.

AWS Key Management Service manages encryption keys and not application secrets. KMS is commonly used to encrypt secrets that are stored elsewhere, but it does not itself store or rotate database passwords.

When a question asks for secret storage plus automatic rotation of database passwords choose Secrets Manager over Parameter Store or KMS. Remember that RDS IAM tokens are token based and do not perform password rotation.
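
A small sketch of both halves follows, with the secret name and rotation Lambda ARN as placeholders; Secrets Manager can also provision the RDS rotation function for you when rotation is enabled through the console or infrastructure as code.

    import json
    import boto3

    secrets = boto3.client("secretsmanager")

    # Inside the application Lambda: fetch credentials at runtime instead of hardcoding them.
    creds = json.loads(secrets.get_secret_value(SecretId="prod/orders-db")["SecretString"])
    # creds["username"], creds["password"], and creds["host"] feed the database connection.

    # One-time setup: turn on managed rotation on a 30 day schedule.
    secrets.rotate_secret(
        SecretId="prod/orders-db",
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rds-rotation",
        RotationRules={"AutomaticallyAfterDays": 30},
    )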

Which approach allows implementing a canary deployment that shifts approximately 15 percent of traffic before completing a full rollout for a Lambda API behind API Gateway?

  • ✓ B. Manage Lambda versions/aliases with CloudFormation and use API Gateway stage canary

Manage Lambda versions/aliases with CloudFormation and use API Gateway stage canary is correct because it lets you shift approximately 15 percent of traffic to a new Lambda version before promoting the change to 100 percent.

Lambda aliases support weighted routing between versions and API Gateway stages provide a canary setting to divert a controlled percentage of requests to the new version. Using CloudFormation keeps the alias weights and stage canary configuration declarative and repeatable so the rollout can be automated and rolled back by pipelines if needed.

AWS AppConfig feature flags for 15% exposure is incorrect because AppConfig controls configuration and feature flags rather than HTTP request routing or Lambda traffic splitting. It can gate features per user or environment but it does not perform API Gateway or Lambda percentage based traffic shifting.

AWS CDK with Route 53 failover routing is incorrect because failover routing is a binary active or passive mechanism and does not support controlled percentage based canaries. The CDK is an infrastructure as code framework and it does not change the fact that Route 53 failover cannot split traffic by percent.

CodeDeploy all-at-once for Lambda with Route 53 simple routing is incorrect because an all at once deployment immediately shifts 100 percent of traffic and Route 53 simple routing offers no weighting. That combination provides no gradual exposure window for canary testing or easy rollback during the rollout.

Look for keywords such as percentage-based canary, Lambda aliases, and API Gateway stage canary when the question asks about shifting a specific portion of traffic.
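
Both levers can be driven from code, as in the sketch below; the function, alias, version numbers, API ID, and stage name are placeholders, and the point is the 15 percent weight on the new version.

    import boto3

    # Send roughly 15% of alias invocations to the new Lambda version.
    boto3.client("lambda").update_alias(
        FunctionName="orders-api",
        Name="live",
        FunctionVersion="7",                                      # stable version
        RoutingConfig={"AdditionalVersionWeights": {"8": 0.15}},  # canary version and weight
    )

    # Or shift 15% of stage traffic at the API Gateway layer, assuming the stage
    # already has canary settings pointing at the new deployment.
    boto3.client("apigateway").update_stage(
        restApiId="a1b2c3d4e5",
        stageName="prod",
        patchOperations=[{"op": "replace",
                          "path": "/canarySettings/percentTraffic",
                          "value": "15"}],
    )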

Which approach detects changes to an S3 bucket or a federated IAM role in near real time and triggers automated remediation?

  • ✓ C. AWS Config change-triggered rule for the S3 bucket and federated role invoking SSM Automation via Lambda to roll back

The correct choice is AWS Config change-triggered rule for the S3 bucket and federated role invoking SSM Automation via Lambda to roll back. This approach is designed to detect configuration changes in near real time and to trigger an automated remediation workflow.

A change-triggered AWS Config rule evaluates resource state on configuration change events and provides a canonical, state-based view of resources. It integrates with Systems Manager Automation so remediations are standardized and auditable, and invoking SSM via Lambda enables automated rollback of unintended edits quickly.

The option EventBridge scheduled Lambda every 15 minutes to scan IAM and S3 policies is periodic and introduces latency. It requires custom scanning and diff logic and it cannot provide the near real time, state-aware detection that Config rules do.

The option AWS CloudFormation drift detection daily with Lambda remediation runs infrequently and only covers stack-managed resources. It can miss changes made outside CloudFormation and it does not provide immediate event-driven remediation.

The option EventBridge with CloudTrail API events to invoke Lambda on policy updates is event-driven but it tracks API calls rather than the authoritative resource state. That approach needs careful filtering and can miss state changes or produce duplicate handling compared with AWS Config’s resource evaluations.

Favor state-based, change-triggered AWS Config rules with SSM Automation for near real time, auditable remediation. Remember that CloudTrail and EventBridge show API activity and scheduled scans add latency.

ECS Fargate tasks running in private subnets without a NAT gateway fail with the error ‘ResourceInitializationError unable to pull secrets or registry auth’. What actions will allow the tasks to pull container images and retrieve secrets so they can start successfully? (Choose 2)

  • ✓ B. Create VPC endpoints: ECR API, ECR DKR, and S3

  • ✓ E. Fix the task execution IAM role permissions for ECR and Secrets Manager

The correct choices are Create VPC endpoints: ECR API, ECR DKR, and S3 and Fix the task execution IAM role permissions for ECR and Secrets Manager. Together these provide the private network paths and the required authorization for Fargate tasks to pull images and retrieve secrets when they run in private subnets without a NAT gateway.

Creating the VPC endpoints gives the task a route to ECR and to S3 without internet egress. The ECR interface endpoints cover the registry and token endpoints and the S3 gateway endpoint is needed because image layers are stored in S3. With these endpoints configured the Fargate control plane and the task can reach the registry and download layers from inside the VPC.

Fixing the task execution IAM role ensures the ECS agent can call ECR and Secrets Manager APIs. The execution role needs permissions such as ecr:GetAuthorizationToken, ecr:BatchGetImage, and ecr:GetDownloadUrlForLayer, plus permissions to read the secrets and to use KMS for decryption when the secrets are encrypted with a customer managed key. Without those permissions the pull and secret retrieval actions fail and the task reports a ResourceInitializationError.
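
As an illustrative sketch of both fixes with boto3 (the VPC, subnet, security group, route table, and role names are placeholders), the endpoints and the execution role permissions could be created roughly as follows:

```python
import json
import boto3

REGION = "us-east-1"
ec2 = boto3.client("ec2", region_name=REGION)
iam = boto3.client("iam")

VPC_ID = "vpc-0123456789abcdef0"             # placeholder
SUBNET_IDS = ["subnet-0aaa", "subnet-0bbb"]  # placeholders
SG_IDS = ["sg-0ccc"]                         # placeholder
ROUTE_TABLE_IDS = ["rtb-0ddd"]               # placeholder

# Interface endpoints for the ECR registry and token APIs.
for service in ("ecr.api", "ecr.dkr"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.{REGION}.{service}",
        SubnetIds=SUBNET_IDS,
        SecurityGroupIds=SG_IDS,
        PrivateDnsEnabled=True,
    )

# Gateway endpoint for S3, where ECR stores image layers.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName=f"com.amazonaws.{REGION}.s3",
    RouteTableIds=ROUTE_TABLE_IDS,
)

# Inline policy on the task execution role covering image pulls and secret reads.
execution_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchGetImage",
                "ecr:GetDownloadUrlForLayer",
            ],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue", "kms:Decrypt"],
            "Resource": "*",  # scope to specific secret and key ARNs in practice
        },
    ],
}
iam.put_role_policy(
    RoleName="ecsTaskExecutionRole",  # placeholder role name
    PolicyName="fargate-pull-and-secrets",
    PolicyDocument=json.dumps(execution_policy),
)
```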

Run tasks in public subnets with public IPs can provide internet egress and would allow pulls to succeed, but it violates the requirement to keep tasks in private subnets and is unnecessary if proper VPC endpoints are in place.

Add a Secrets Manager VPC endpoint only is incomplete because registry authentication and image layer downloads still require ECR and S3 endpoints, and the execution role must still have the right API permissions.

Enable awsvpc networking on tasks does not address the problem because Fargate already uses awsvpc mode and that setting does not create the network or IAM permissions needed to pull images or retrieve secrets.

Remember to check both the VPC endpoints for ECR and S3 and the task execution role permissions when Fargate tasks in private subnets fail with registry auth or secrets errors.

Which solution enables end-to-end tracing for AWS Lambda that includes per-request subsegments, automatic anomaly detection, and notifications delivered within two minutes?

  • ✓ B. External Lambda extension emitting X-Ray segments/subsegments, active tracing, X-Ray groups and Insights, alerts via EventBridge and CloudWatch

External Lambda extension emitting X-Ray segments/subsegments, active tracing, X-Ray groups and Insights, alerts via EventBridge and CloudWatch is correct because it enables end to end tracing with per request subsegments and it supports automatic trace based anomaly detection plus rapid notifications.

The external extension can use the Lambda Telemetry API to buffer and reliably emit X-Ray segments and subsegments from inside the function while active tracing propagates context across services so traces span multiple components. X-Ray groups aggregate related traces and X-Ray Insights provides automatic anomaly detection on trace patterns. EventBridge can route X-Ray Insights events for fast notifications and CloudWatch alarms can alert on metrics like error rate and duration to meet near real time notification goals.
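
For example, the active tracing piece of this setup can be switched on per function with a single configuration call (the function name is a placeholder); the extension, X-Ray groups, and Insights configuration would layer on top of this:

```python
import boto3

lambda_client = boto3.client("lambda")

# Turn on X-Ray active tracing so the function records a segment for each invocation.
lambda_client.update_function_configuration(
    FunctionName="payments-handler",  # placeholder function name
    TracingConfig={"Mode": "Active"},
)
```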

Internal Lambda extension with X-Ray, active tracing, and CloudWatch Logs Insights alerts is incorrect because internal extensions are less suitable for exporting telemetry outside the execution environment and CloudWatch Logs Insights analyzes logs rather than trace data so it cannot provide the trace level anomaly detection that X-Ray Insights offers.

AWS Distro for OpenTelemetry for Lambda with CloudWatch ServiceLens and Contributor Insights is incorrect because ADOT can emit traces but CloudWatch ServiceLens is mainly a visualization layer and Contributor Insights focuses on log pattern analysis rather than detecting anomalies in traces and fine grained subsegments.

Merge workflows into one function, enable X-Ray active tracing and Insights, add CloudWatch alarms is incorrect because collapsing workflows reduces modularity and removes cross function visibility. Distributed tracing should span multiple functions and services, so merging functions does not improve trace correlation and is unnecessary when X-Ray can already correlate traces across components.

When a question asks for per request subsegments and automatic trace based anomaly detection, think X-Ray Insights and active tracing with an external Lambda extension that uses the Telemetry API. Use EventBridge for Insights events and CloudWatch alarms for metric alerts.

How can you replace SSH access to EC2 instances with AWS Systems Manager Session Manager while ensuring all session traffic stays private within the VPC and does not traverse the internet? (Choose 2)

  • ✓ B. Create interface VPC endpoints for Systems Manager

  • ✓ D. Attach IAM instance profile with AmazonSSMManagedInstanceCore

The correct choices are Create interface VPC endpoints for Systems Manager and Attach IAM instance profile with AmazonSSMManagedInstanceCore. Together these allow AWS Systems Manager Session Manager to operate without exposing session traffic to the public internet.

Create interface VPC endpoints for Systems Manager provides private control and data channels for Session Manager by using the ssm, ec2messages, and ssmmessages interface endpoints inside the VPC, and you should enable Private DNS so the service names resolve to the private endpoints. These endpoints keep Session Manager traffic on the AWS network and prevent sessions from traversing the internet.

Attach IAM instance profile with AmazonSSMManagedInstanceCore gives the SSM Agent the permissions it needs to register the instance, communicate with the SSM service, and start or proxy sessions. The role must be attached to the instance and the SSM Agent must be installed and running so Session Manager can establish sessions.
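
A rough sketch of both pieces with boto3 (the VPC, subnet, security group, and role names are placeholders):

```python
import boto3

REGION = "us-east-1"
ec2 = boto3.client("ec2", region_name=REGION)
iam = boto3.client("iam")

VPC_ID = "vpc-0123456789abcdef0"  # placeholder
SUBNET_IDS = ["subnet-0aaa"]      # placeholder
SG_IDS = ["sg-0bbb"]              # placeholder

# The three interface endpoints Session Manager needs, with Private DNS enabled.
for service in ("ssm", "ec2messages", "ssmmessages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.{REGION}.{service}",
        SubnetIds=SUBNET_IDS,
        SecurityGroupIds=SG_IDS,
        PrivateDnsEnabled=True,
    )

# Grant the instance role the managed policy the SSM Agent requires.
iam.attach_role_policy(
    RoleName="app-instance-role",  # placeholder; referenced by the instance profile
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)
```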

Launch a bastion host is incorrect because it still relies on SSH access and introduces public or perimeter access that Session Manager is intended to remove.

Route Session Manager traffic via a NAT gateway is incorrect because routing via a NAT sends traffic to public SSM endpoints over the internet and that violates a strict private only requirement.

Create an interface VPC endpoint for Amazon EC2 is incorrect because the EC2 API endpoint does not carry Session Manager message channels and it does not provide the private session data path that SSM uses.

Remember you need the three interface endpoints ssm, ec2messages, and ssmmessages with Private DNS enabled and an instance role with AmazonSSMManagedInstanceCore plus a running SSM Agent to keep sessions fully private.

What is the simplest way to obtain per-object unique user counts and total request counts for an S3 bucket in 12 hour windows while minimizing operational overhead?

  • ✓ B. Enable S3 server access logging and query with Amazon Athena

Enable S3 server access logging and query with Amazon Athena is correct because it provides per-request records and allows serverless, low operations querying to compute total requests and distinct requesters per object in 12 hour windows.

S3 server access logs contain one record per request and you can partition those logs by date and hour so queries run efficiently. Athena lets you query the logs in place with standard SQL and compute COUNT(*) and COUNT(DISTINCT requester) per object key for any 12 hour window without provisioning or managing servers.
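
A sketch of the kind of query Athena could run (the database, table, and output location are placeholders, and the column names assume a table created from the documented S3 server access log format):

```python
import boto3

athena = boto3.client("athena")

# Column names such as key, requester, and requestdatetime follow the
# documented access-log schema; adjust to match your table definition.
QUERY = """
SELECT
    key,
    COUNT(*) AS total_requests,
    COUNT(DISTINCT requester) AS unique_users
FROM s3_access_logs_db.app_bucket_logs
WHERE parse_datetime(requestdatetime, 'dd/MMM/yyyy:HH:mm:ss Z')
      BETWEEN parse_datetime('2024-06-01:00:00:00', 'yyyy-MM-dd:HH:mm:ss')
          AND parse_datetime('2024-06-01:12:00:00', 'yyyy-MM-dd:HH:mm:ss')
GROUP BY key
ORDER BY total_requests DESC
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "s3_access_logs_db"},          # placeholder
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},  # placeholder
)
```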

The option Load S3 access logs into Amazon Redshift and run SQL is functional but it requires extra ETL work and ongoing cluster management and cost which contradicts the minimal operations requirement.

The option Amazon S3 Storage Lens gives aggregated storage and activity insights but it does not provide per-object request counts or distinct-user metrics needed for this task.

The option Stream S3 data events via AWS CloudTrail to CloudWatch Logs and aggregate with Lambda is more complex and can become costly at high event volumes and it introduces unnecessary operational overhead compared with querying access logs with Athena.

Favor serverless solutions when the question asks for minimal operational overhead and log analysis. Query S3 access logs directly with Athena to get per-object and distinct-user counts without managing infrastructure.

In AWS CloudFormation what stack structure allows teams to deploy independently while keeping cross stack dependencies manageable and templates maintained in source control?

  • ✓ C. Decoupled stacks per domain with Export/ImportValue and separate versioning

Decoupled stacks per domain with Export/ImportValue and separate versioning is the correct choice because it supports independent team deployments while keeping cross stack dependencies explicit and manageable and it lets each template be versioned and stored in source control.

Using separate templates per logical domain reduces blast radius and avoids unnecessary coordination across teams. Exports and ImportValue calls provide stable, explicit references between stacks and let each stack evolve on its own release cadence. Managing each template in source control with its own versioning aligns with CloudFormation best practices and supports independent testing and rollbacks.
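
A minimal illustration of the pattern, using two JSON template fragments deployed as separate stacks (the stack names, export name, and resources are placeholders):

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# Stack owned by the networking team: exports the VPC ID for other stacks.
network_template = {
    "Resources": {
        "AppVpc": {"Type": "AWS::EC2::VPC", "Properties": {"CidrBlock": "10.0.0.0/16"}}
    },
    "Outputs": {
        "VpcId": {
            "Value": {"Ref": "AppVpc"},
            "Export": {"Name": "network-vpc-id"},  # stable, explicit cross-stack reference
        }
    },
}

# Stack owned by an application team: imports the exported VPC ID.
app_template = {
    "Resources": {
        "AppSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "App tier",
                "VpcId": {"Fn::ImportValue": "network-vpc-id"},
            },
        }
    }
}

cfn.create_stack(StackName="network", TemplateBody=json.dumps(network_template))
# Wait for the network stack to finish before creating the dependent app stack.
cfn.create_stack(StackName="app", TemplateBody=json.dumps(app_template))
```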

Parent stack with nested stacks is not ideal because nested stacks share the parent lifecycle and you must update the parent for changes in nested components. This increases coordination and the risk that an unrelated change affects other parts of the system.

CloudFormation StackSets from a central account for all components is useful for multi account or regional standardization and for pushing consistent stacks out at scale. It centralizes deployment cadence and is not the best fit when teams need fine grained, independent releases and direct cross stack dependencies.

Single monolithic template deployed on every change tightly couples all resources and increases the blast radius of changes. This pattern prevents teams from deploying independently and makes source control and versioning for individual domains harder to manage.

Look for phrases like independent deployments, manageable dependencies, and source control. Prefer domain scoped stacks with Export/ImportValue for explicit cross stack references and separate versioning.

How can you enforce TLS for data in transit and ensure encryption at rest while replicating Amazon S3 objects to a second Region on another continent at about 20,000 objects per second without causing throughput bottlenecks?

  • ✓ B. Block non-TLS via aws:SecureTransport, set default SSE-S3, and use S3 Cross-Region Replication

Block non-TLS via aws:SecureTransport, set default SSE-S3, and use S3 Cross-Region Replication is correct because it enforces encryption in transit while avoiding KMS per-request limits and still provides asynchronous replication to another Region.

Using a bucket policy that denies requests when aws:SecureTransport is false enforces HTTPS for data in transit. Setting default SSE-S3 provides server side encryption at rest without invoking AWS KMS for every PUT so you avoid KMS requests per second becoming a throughput bottleneck at very high ingest rates. Enabling S3 Cross-Region Replication copies objects asynchronously to the target Region so you meet the cross-continent replication requirement without introducing synchronous write latency.
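
A sketch of the two bucket-level controls with boto3 (the bucket name is a placeholder and the replication configuration itself is omitted):

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "ingest-bucket-example"  # placeholder

# Deny any request that does not arrive over TLS.
tls_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(tls_only_policy))

# Default SSE-S3 encryption, which avoids a KMS call on every PUT.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```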

Deny non-TLS with aws:SecureTransport, use SSE-KMS with multi-Region keys, and enable S3 Cross-Region Replication is not optimal because each PUT encrypted with SSE-KMS invokes KMS and KMS requests per second can throttle at very high object rates unless you raise quotas significantly.

Use S3 Multi-Region Access Points to route to two buckets, enforce TLS, and use SSE-KMS is incorrect because Multi-Region Access Points provide global routing and failover rather than automatic object replication, and using SSE-KMS still risks KMS throttling at extreme PUT rates.

Use S3 Replication Time Control with SSE-KMS and rely on client retries is incorrect because Replication Time Control does not remove KMS per-request limits and relying on client retries does not address enforcing TLS at the bucket level.

Prefer SSE-S3 for extremely high PUT rates to avoid KMS TPS limits and enforce TLS with a bucket policy that denies requests when aws:SecureTransport is false.

How can you trigger immediate email alerts whenever a CloudWatch Logs stream contains log entries marked CRITICAL?

  • ✓ C. CloudWatch Logs metric filter with CloudWatch alarm to SNS

The correct choice is CloudWatch Logs metric filter with CloudWatch alarm to SNS.

CloudWatch Logs metric filter with CloudWatch alarm to SNS uses a metric filter to match the text “CRITICAL” in incoming log events and converts those matches into a custom CloudWatch metric. A CloudWatch alarm watches that metric and can immediately publish to an Amazon SNS topic which sends email to subscribed addresses once the subscription is confirmed. This approach provides near real time notification and uses built in CloudWatch features so it is simple and reliable.
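
A sketch of wiring the pieces together with boto3 (the log group name, metric namespace, and topic ARN are placeholders):

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "/app/backend"  # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:critical-alerts"  # placeholder

# Turn matching log lines into a custom metric.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="critical-entries",
    filterPattern="CRITICAL",
    metricTransformations=[
        {
            "metricName": "CriticalLogCount",
            "metricNamespace": "App/Logs",
            "metricValue": "1",
        }
    ],
)

# Alarm as soon as a CRITICAL entry appears and notify the SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="critical-log-entries",
    Namespace="App/Logs",
    MetricName="CriticalLogCount",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[TOPIC_ARN],
)
```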

AWS Firewall Manager with EventBridge rule is incorrect because Firewall Manager focuses on managing and enforcing security policies across accounts and it does not parse CloudWatch Logs for specific severity strings or produce per line alerts.

CloudWatch Logs Insights scheduled query to SNS is incorrect because Logs Insights is intended for interactive queries and scheduled analysis and it does not provide continuous per line alerting to SNS without additional custom components such as Lambda or other automation.

CloudWatch Synthetics canary and CloudWatch alarm is incorrect because CloudWatch Synthetics runs synthetic tests against endpoints and APIs and it does not scan existing logs for string matches like “CRITICAL”.

When you need immediate alerts on specific log text use a CloudWatch Logs metric filter to create a metric and attach a CloudWatch alarm that publishes to an Amazon SNS topic so email is delivered promptly.

What approach lets you query using the same partition key with an alternate sort key in DynamoDB while still supporting strongly consistent reads?

  • ✓ B. Build a new table with an LSI that reuses the partition key and adds a new sort key; migrate data

The correct choice is Build a new table with an LSI that reuses the partition key and adds a new sort key, then migrate the data. A Local Secondary Index keeps the same partition key while providing an alternate sort key and it supports strongly consistent reads.

Local Secondary Indexes live with the base table and share the same partition key so they can be read with strong consistency. Because LSIs must be defined when the table is created you need to create a new table that includes the LSI and then migrate your items into that table to use the alternate sort key with strong reads.
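
A sketch of the new table definition and a strongly consistent query against the index (the table, attribute, and index names are placeholders):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# New table: same partition key, with the LSI adding an alternate sort key.
dynamodb.create_table(
    TableName="orders-v2",  # placeholder
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
        {"AttributeName": "order_total", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_date", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "by-order-total",
            "KeySchema": [
                {"AttributeName": "customer_id", "KeyType": "HASH"},
                {"AttributeName": "order_total", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)

# Strongly consistent read using the same partition key with the alternate sort key.
dynamodb.query(
    TableName="orders-v2",
    IndexName="by-order-total",
    KeyConditionExpression="customer_id = :c",
    ExpressionAttributeValues={":c": {"S": "cust-123"}},
    ConsistentRead=True,
)
```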

Add a Global Secondary Index with the same partition key and a different sort key is incorrect because global secondary indexes do not support strongly consistent reads and are eventually consistent for indexed queries.

Add a Local Secondary Index to the existing table is incorrect because you cannot add an LSI after table creation since LSIs must be specified when the table is created.

Use DynamoDB Accelerator (DAX) for strong read consistency is incorrect because DAX is a caching layer that provides cached, eventually consistent responses and it does not guarantee strong consistency from the underlying table.

Keep in mind that an LSI uses the same partition key with an alternate sort key, supports strongly consistent reads, and must be created with the table.

How can you make only the catalog table globally available while keeping the accounts and view_history tables regional in Aurora and requiring minimal application changes?

  • ✓ B. Provision Aurora Global Database for catalog; keep other tables regional in Aurora

Provision Aurora Global Database for catalog; keep other tables regional in Aurora is the correct choice because isolating the catalog into its own Aurora cluster and making that cluster an Aurora Global Database provides low latency global reads and fast cross Region replication while preserving SQL compatibility and existing Aurora drivers.

Using Aurora Global Database for just the catalog lets you make that dataset global without moving the accounts and view_history tables. Those tables remain regional in their own Aurora clusters which preserves locality and keeps application changes minimal.
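
A hedged sketch of promoting only the catalog cluster to a global database with boto3 (the cluster identifiers, Regions, and engine are placeholders, and the engine version of the secondary must match the primary):

```python
import boto3

# Promote the existing regional catalog cluster to a global database.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="catalog-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:catalog",  # placeholder ARN
)

# Add a read-only secondary cluster in another Region. The accounts and
# view_history clusters are left untouched and remain regional.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="catalog-eu",
    Engine="aurora-mysql",  # placeholder; must match the primary cluster's engine and version
    GlobalClusterIdentifier="catalog-global",
)
```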

Use DynamoDB Global Tables for catalog; keep accounts and view_history in Aurora is incorrect because it mixes NoSQL and relational systems and would force changes to data models, queries, and drivers which violates the minimal changes requirement.

Migrate all tables to DynamoDB Global Tables is incorrect because it requires a full migration from relational to NoSQL and a major application rewrite that is unnecessary when only one table needs to be global.

Create cross-Region Aurora read replicas for the entire cluster is incorrect because cluster level replication would replicate every table and cannot keep only the catalog global while leaving accounts and view_history regional and it also increases operational complexity.

When the question mentions relational data and minimal changes prefer making the global subset its own Aurora cluster and enable Aurora Global Database rather than mixing data models or moving the whole cluster.

During an AWS CodeDeploy blue green deployment of EC2 instances behind an Application Load Balancer the AllowTraffic step fails but CodeDeploy logs show no errors. What is the most likely cause?

  • ✓ B. ALB target group health checks are misconfigured

The failure is most likely due to ALB target group health checks are misconfigured. In a blue/green deployment CodeDeploy updates the Application Load Balancer listeners to route traffic to the new target group and then waits for targets to pass health checks before allowing traffic.

If the health check path, port, protocol, or expected success codes are wrong, the new instances will never reach healthy status, and that misconfiguration causes the AllowTraffic step to time out or fail while CodeDeploy logs may not show an application error. You should also confirm that security groups and network ACLs allow the ALB to reach targets and check the target health page for reason codes and recent failures.
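
For instance, correcting the health check settings on the replacement target group might look like this sketch (the target group ARN, path, and thresholds are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Align the health check with the endpoint the application actually serves.
elbv2.modify_target_group(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "targetgroup/green/abc123"  # placeholder ARN
    ),
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/healthz",      # placeholder path
    HealthCheckPort="traffic-port",
    HealthyThresholdCount=2,
    Matcher={"HttpCode": "200"},
)
```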

The option AWS WAF rules are blocking ALB health checks is incorrect because AWS WAF filters client requests at the listener level and it does not block the ALB from performing its own health probes to targets.

The option Auto Scaling scale-in removed instances during deployment is unlikely because scale in events produce instance terminations and lifecycle messages and you would see instance health or lifecycle failures rather than a consistent AllowTraffic timeout with otherwise clean CodeDeploy logs.

The option CodeDeploy service role lacks Elastic Load Balancing permissions is incorrect because missing IAM permissions normally generate explicit authorization errors in the deployment output and logs instead of a silent health check based failure.

Check the target group health checks first and verify the health check path, port, protocol, and success codes when AllowTraffic fails with an ALB. Also review the target health page for reason codes and ensure security groups allow ALB probes to targets.

What is the most cost-effective method to trigger an alert within 120 seconds when the DynamoDB DeleteTable API is called?

  • ✓ C. Enable CloudTrail management events and add an EventBridge rule filtering dynamodb DeleteTable to SNS

Enable CloudTrail management events and add an EventBridge rule filtering dynamodb DeleteTable to SNS is the correct choice because it captures management API calls and routes them to SNS in near real time while keeping costs low and architecture simple.

Enable CloudTrail management events and add an EventBridge rule filtering dynamodb DeleteTable to SNS works because CloudTrail records management API calls such as DeleteTable and Amazon EventBridge can pattern match those CloudTrail events and deliver them directly to an SNS topic. This flow avoids continuous log ingestion and extra Lambda execution costs and it meets the sub two minute alerting requirement with managed services and minimal components.
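
A sketch of the rule and target with boto3 (the rule name and topic ARN are placeholders, and CloudTrail management events are assumed to be enabled):

```python
import json
import boto3

events = boto3.client("events")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:table-deletion-alerts"  # placeholder

# Match the CloudTrail management event emitted for DynamoDB DeleteTable calls.
pattern = {
    "source": ["aws.dynamodb"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["dynamodb.amazonaws.com"],
        "eventName": ["DeleteTable"],
    },
}

events.put_rule(Name="alert-on-delete-table", EventPattern=json.dumps(pattern))

# The SNS topic's access policy must allow events.amazonaws.com to publish.
events.put_targets(
    Rule="alert-on-delete-table",
    Targets=[{"Id": "sns-alert", "Arn": TOPIC_ARN}],
)
```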

Use AWS Config with a custom rule to notify SNS on table deletion is not ideal because AWS Config focuses on configuration state and compliance evaluations and those evaluations can introduce delays and additional costs compared to a CloudTrail driven event route.

Create an EventBridge rule on DynamoDB service events for DeleteTable and send to SNS is incorrect because DynamoDB does not emit direct service events for management API calls and those API calls are delivered via CloudTrail rather than native DynamoDB service events.

Stream CloudTrail to CloudWatch Logs with a metric filter on DeleteTable and trigger Lambda to publish to SNS can work but it is less cost effective due to log ingestion, custom metric, and Lambda invocation charges when compared to routing CloudTrail management events through EventBridge to SNS.

For a low cost near real time alert on specific API calls remember that management API events come from CloudTrail and that EventBridge can route those events directly to SNS with minimal components.

How should EC2 instances securely retrieve database credentials at runtime across development, test, and production environments?

  • ✓ D. Use an instance profile and retrieve credentials from AWS Secrets Manager

The correct choice is Use an instance profile and retrieve credentials from AWS Secrets Manager.

The recommended pattern gives each EC2 instance an instance profile so the instance receives temporary AWS credentials and does not need long lived keys. The pattern stores the database password in AWS Secrets Manager so the secret is encrypted, managed, and auditable. Secrets Manager also supports automatic rotation of database credentials and integrates with IAM and CloudTrail so you can separate development, test, and production secrets and maintain a clear audit trail.
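
On the instance, retrieval then needs no stored keys because boto3 picks up the instance profile credentials automatically. A minimal sketch (the secret name and Region are placeholders, and the secret is assumed to be a JSON username/password pair):

```python
import json
import boto3

# No access keys are configured anywhere; credentials come from the instance profile.
secrets = boto3.client("secretsmanager", region_name="us-east-1")

response = secrets.get_secret_value(SecretId="prod/app/database")  # placeholder, one per environment
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]
```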

The option Store the password in Amazon S3 encrypted with SSE-KMS and download at boot with an instance role is not ideal because S3 is not a dedicated secret store and it does not provide native secret rotation workflows. That approach increases operational and audit complexity for credential lifecycle management.

The option Bake the password into the AMI and use an instance profile only for other services is insecure and operationally brittle because secrets embedded in images are hard to rotate and they can proliferate across image copies which raises the risk of unintended disclosure.

The option Use static access keys on instances to read a SecureString from AWS Systems Manager Parameter Store is weak because placing long lived keys on instances is risky and undermines credential hygiene. While Parameter Store SecureString encrypts values it lacks the native database credential rotation features that Secrets Manager provides and relying on static keys increases exposure.

Favor instance profiles for temporary AWS credentials and use a managed secrets service with rotation when the question emphasizes passwords rotation or auditability.

How can you declaratively enable HTTP to HTTPS redirects on an Elastic Beanstalk application load balancer so CodePipeline applies the change without making direct environment modifications?

  • ✓ B. Commit .ebextensions/alb.config with option_settings for aws:elbv2:listener:default redirect

The correct choice is Commit .ebextensions/alb.config with option_settings for aws:elbv2:listener:default redirect. This declares the ALB listener redirect as part of the Elastic Beanstalk application configuration so CodePipeline deploys the change without making direct environment modifications.

Commit .ebextensions/alb.config with option_settings for aws:elbv2:listener:default redirect leverages the aws:elbv2 namespaces and option_settings to configure listener behavior declaratively. Elastic Beanstalk reconciles and manages its load balancer resources so the redirect is applied at the ALB layer and remains controlled by Beanstalk during pipeline driven deployments.

Add a CloudFormation stage to modify the ALB listener is incorrect because Elastic Beanstalk owns and reconciles the ALB and out of band CloudFormation changes can be reverted. A separate CloudFormation update is fragile and usually requires broader permissions and coordination that can conflict with Beanstalk.

Use EB CLI saved_configs to push listener rules is incorrect since that approach requires direct environment access and bypasses the pipeline. It does not meet the requirement that CodePipeline should apply the change declaratively through source control.

Use container_commands to call ALB APIs from instances is incorrect because instance level scripts are imperative and brittle and they are not an idempotent or supported mechanism for managing ALB listener rules in a Beanstalk managed setup.

Use .ebextensions option_settings to make ALB listener redirects declarative so CodePipeline can deploy the change and Elastic Beanstalk will manage the ALB consistently.

For a multi Region serverless API using Amazon API Gateway and AWS Lambda, which routing method best directs users to the lowest latency regional endpoint while providing automatic failover?

  • ✓ B. Route 53 latency-based routing to regional API Gateway with health checks; in-Region Lambda; DynamoDB global tables

Route 53 latency-based routing to regional API Gateway with health checks; in-Region Lambda; DynamoDB global tables is correct because it steers each client to the lowest latency regional API endpoint and uses health checks to fail over automatically while keeping compute and data local for low latency.

The key piece is latency-based routing which evaluates DNS responses to send clients to the Region with the best observed latency and health checks which detect an unhealthy regional API so traffic can fail over. Running Lambdas in-Region keeps execution near the client and using DynamoDB global tables provides active active replication so reads and writes remain low latency across Regions.
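
A sketch of two latency records pointing at regional API Gateway domain names (the hosted zone ID, record name, endpoint domain names, and health check IDs are all placeholders):

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0EXAMPLE"  # placeholder public hosted zone


def latency_record(region, target_dns, health_check_id):
    """Build one latency-routed CNAME toward a regional API Gateway domain name."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "CNAME",
            "SetIdentifier": region,
            "Region": region,
            "TTL": 60,
            "HealthCheckId": health_check_id,  # placeholder health check ID
            "ResourceRecords": [{"Value": target_dns}],
        },
    }


route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            latency_record("us-east-1", "abc123.execute-api.us-east-1.amazonaws.com", "hc-use1"),
            latency_record("eu-west-1", "def456.execute-api.eu-west-1.amazonaws.com", "hc-euw1"),
        ]
    },
)
```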

Route 53 geolocation routing to regional API Gateway with health checks; DynamoDB global tables is incorrect because geolocation routing routes by the requester location rather than the measured network latency and it can send clients to a slower Region under real network conditions.

CloudFront in front of a single regional API Gateway is incorrect because CloudFront primarily provides edge caching and TLS offload and it does not perform per request multi Region origin selection by latency for a single dynamic API origin.

Route 53 failover routing with Application Recovery Controller across two API Gateways is incorrect because failover routing is an active passive disaster recovery pattern and it is not intended for normal active active low latency steering across multiple Regions.

Latency-based routing plus DNS health checks is the usual exam answer for lowest latency multi Region APIs. Remember to pair regional Lambdas with DynamoDB global tables for active active data access.

In CodeDeploy for EC2 instances with an Application Load Balancer which lifecycle event should run live health checks so a rolling deployment can automatically roll back on failure?

  • ✓ C. Run health checks in the ValidateService hook in appspec.yml and enable automatic rollback on failures

Run health checks in the ValidateService hook in appspec.yml and enable automatic rollback on failures is correct because it lets CodeDeploy perform live endpoint verification after the application has started and before the load balancer routes production traffic to the instance, so a failing check is treated as a deployment failure and will trigger rollback when automatic rollback is enabled.

ValidateService is the service verification lifecycle event for EC2 or On-Premises deployments and it runs after ApplicationStart, before traffic is allowed back to the instance. A failing verification in this hook causes CodeDeploy to mark the deployment as failed and CodeDeploy will automatically roll back when the deployment group is configured to do so. Placing runtime health checks here provides rolling updates that can revert safely if runtime checks fail.

Integrate the deployment group with an ALB and rely on target health checks to roll back is insufficient because ALB target health status alone does not instruct CodeDeploy to roll back. CodeDeploy requires a failing lifecycle event or a configured alarm to consider the deployment failed.

Configure a CloudWatch alarm for the deployment to trigger rollback when it goes to ALARM can be used to stop and roll back a deployment but it does not address the lifecycle hook requirement. The exam expects you to place live application checks in a hook such as ValidateService for rolling deployments.

Use EventBridge to invoke a Lambda that stops the deployment and redeploys the previous version on failed probes adds unnecessary custom orchestration because CodeDeploy already supports rollback on failed hooks or alarms and you do not need extra Lambda orchestration for this pattern.

Place runtime endpoint checks in ValidateService and enable automatic rollback in the deployment group so failed checks automatically trigger rollback during rolling updates.

What is the most efficient way to automate consistent, isolated deployments of an application to five additional AWS Regions?

  • ✓ C. AWS CloudFormation StackSets with a delegated admin to deploy identical stacks to targeted Regions

AWS CloudFormation StackSets with a delegated admin to deploy identical stacks to targeted Regions is the correct option for automating consistent, isolated deployments to five additional AWS Regions.

AWS CloudFormation StackSets with a delegated admin to deploy identical stacks to targeted Regions is purpose built to centrally manage identical CloudFormation stacks across multiple Regions and accounts. Each Region receives its own independent stack which preserves isolation, and the delegated administrator can orchestrate rollouts, targeted deployments, updates, and drift detection at scale. For repeatable, consistent provisioning across Regions the native targeting and fleet level controls in StackSets make it the most efficient choice.
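
A sketch of creating the stack set and fanning it out to the additional Regions (the template file, account ID, and Region list are placeholders, and the StackSets administration and execution roles are assumed to be in place):

```python
import boto3

cfn = boto3.client("cloudformation")

TARGET_REGIONS = ["us-west-2", "eu-west-1", "eu-central-1", "ap-southeast-1", "ap-northeast-1"]

with open("app-stack.yaml") as f:  # placeholder template file
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="app-baseline",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# One independent stack instance per Region keeps each regional deployment isolated.
cfn.create_stack_instances(
    StackSetName="app-baseline",
    Accounts=["111122223333"],  # placeholder account
    Regions=TARGET_REGIONS,
)
```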

AWS CodePipeline with cross-Region actions is incorrect because it is primarily a workflow orchestrator and it still relies on separate regional actions. It can move artifacts and run tasks in multiple Regions, but it is not as direct or scalable for uniform infrastructure replication as StackSets.

Use AWS CloudFormation change sets from an administrator account across Regions is incorrect because change sets only preview changes to a single stack. Change sets do not provide multi-Region orchestration or automated replication across Regions.

AWS CDK Pipelines across Regions is incorrect because it can be made to work but it adds extra complexity and it lacks the native fleet level multi-Region management, targeting, and drift controls that StackSets provides.

Read the question for keywords such as multi-Region, consistent provisioning, central administration, and isolation per Region and then map those keywords to a service that manages identical stacks across Regions.

Which AWS service enables low-latency local writes while providing global reads with multi-active replication across six Regions?

  • ✓ C. Amazon DynamoDB Global Tables

The correct choice is Amazon DynamoDB Global Tables. This service provides true multi Region multi active replication so each Region can accept low latency local writes while changes are propagated so reads are available globally which meets the six Region multi active requirement.

The Amazon DynamoDB Global Tables feature implements multi master replication across Regions so data is stored locally for fast writes and updates are asynchronously replicated to other Regions for global read availability. This design avoids a single write bottleneck and is the standard AWS solution for low latency local writes with global reads across multiple Regions.
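
For example, with the newer version of global tables, replicas can be added to an existing table Region by Region (the table name and Regions are placeholders, and the table is assumed to meet the global tables prerequisites such as having streams enabled):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica Region so it can accept local writes; repeat for additional
# Regions once the table returns to ACTIVE status after each update.
dynamodb.update_table(
    TableName="sessions",  # placeholder table
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```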

The option Amazon DynamoDB Accelerator (DAX) is incorrect because DAX is a regional in memory cache for speeding reads and it does not provide cross Region replication or multi active writes.

The option Amazon Aurora Global Database is incorrect because Aurora Global Database uses a single primary write Region with secondary read only Regions and it does not enable multi master writes across Regions.

The option Amazon RDS with cross-Region read replicas is incorrect because RDS read replicas in other Regions are read only and the architecture relies on a single writable primary which prevents local writes outside the primary Region.

Focus on keywords like multi Region, multi active, and local writes to spot that DynamoDB Global Tables is the correct answer.



Want more study material and walkthroughs?

Watch the companion video guide on YouTube, compare domain weights across the AWS catalog, and explore adjacent paths like Machine Learning on AWS or GCP Developer Professional.

When you are ready to drill, add timed sets from Udemy and tighten your weaker domains before exam day.

Other AWS Certification Books