Certified AWS Security Specialist Exam Dumps and Braindumps

AWS Certified Security Specialty Badge and Logo Credly

All questions come from certificationexams.pro and my Udemy AWS Security course.

Free AWS Security Specialty Exam Topics Tests

Despite the title of this article, this is not an AWS Security Exam Braindump in the traditional sense. I do not believe in cheating. Traditionally, a braindump meant someone memorized exam questions and shared them online, which is unethical and violates the AWS certification agreement. It offers no real learning or growth.

This is not a Certified Security Engineer Braindump. All of these questions are honestly sourced, as they come from my AWS Security Specialty Udemy course and the certificationexams.pro website, which provides hundreds of free Certified Security Engineer Practice Questions.

Security Engineer Exam Simulator

Each question has been carefully written to align with the official AWS Certified Security Specialty exam objectives. They match the tone, logic, and technical depth of real AWS scenarios but none are copied from the actual test. Every question is designed to help you learn, reason, and master core AWS security concepts including encryption, IAM management, incident response, monitoring, and compliance.

If you can answer these questions and understand why the incorrect options are wrong, you will not only pass the AWS exam but also gain the ability to secure and monitor AWS workloads effectively. You can call this an AWS Exam Dump if you like, but remember that every question here is built to teach, not to cheat. Each item includes detailed explanations, realistic examples, and insights that help you think like a security specialist during the exam.

Study diligently, practice regularly, and approach your certification with integrity. Success as an AWS Security professional comes from understanding how IAM, encryption, and governance work together to protect and strengthen cloud operations. Use the AWS Security Specialty Exam Simulator and Practice Tests to prepare effectively and move closer to earning your security certification.


AWS Security Specialty Exam Dump Questions

 

Question 1

 

Solstice Mutual enforces access to every Amazon S3 bucket strictly through identity-based policies. The security team needs an immediate alert if anyone applies or updates a bucket policy on any S3 bucket. What is the most efficient way to achieve this?

  • ❏ A. Use AWS Config to react to S3 bucket configuration changes, invoke an AWS Lambda function to parse the event, and send a notification with Amazon SNS

  • ❏ B. Create an Amazon EventBridge rule that filters CloudTrail events for s3:PutBucketPolicy and publish notifications to Amazon SNS

  • ❏ C. Use Amazon Macie to watch for S3 bucket policy changes and alert through Amazon SNS

  • ❏ D. Create an Amazon EventBridge rule using the CloudWatch Alarm State Change event source with an s3:PutBucketPolicy pattern and send to Amazon SNS

Question 2

 

Which AWS services and configuration record all console and API user activity across an organization, create alarms for specific user actions, and deliver notifications to on-call within 30 seconds? (Choose 2)

  • ❏ A. Deliver organization CloudTrail files only to a central S3 bucket for archival

  • ❏ B. Send CloudTrail events to EventBridge rules that route matching events to SNS

  • ❏ C. Stream an organization CloudTrail trail into a central CloudWatch Logs log group

  • ❏ D. Run scheduled Athena queries on CloudTrail files in S3 to detect actions and send SNS alerts

  • ❏ E. Use CloudWatch Logs metric filters and CloudWatch alarms that publish to an SNS topic

Question 3

 

At Cobalt Retail, a security engineer deployed an AWS Firewall Manager policy that provisions an AWS WAF web ACL. After about 45 minutes, several in-scope Application Load Balancers and Amazon CloudFront distributions still do not show the web ACL attached. What is the most likely reason for the missing associations?

  • ❏ A. With only auto remediation enabled, Firewall Manager replaces any existing web ACLs on in-scope resources with the policy web ACL

  • ❏ B. When auto remediation and replace web ACLs are both enabled, web ACLs created by AWS Shield Advanced policies cannot be replaced

  • ❏ C. Auto remediation for noncompliant resources is disabled in the Firewall Manager policy, so the web ACL is not associated automatically

  • ❏ D. An AWS Organizations service control policy is preventing the association calls

Question 4

 

How do you ensure an EC2 instance in a private subnet sends DynamoDB traffic over the Gateway VPC Endpoint instead of the NAT gateway?

  • ❏ A. Create an Interface VPC Endpoint for DynamoDB

  • ❏ B. Associate the gateway endpoint with the private subnet’s route table

  • ❏ C. Update the gateway endpoint policy to list the instance private IP

  • ❏ D. Add a route for the DynamoDB service prefix that points to the NAT gateway

Question 5

 

A digital media analytics firm uses AWS Systems Manager Session Manager to administer Amazon EC2 instances in the eu-west-2 Region. A newly launched Amazon Linux 2023 instance in a private subnet of a brand-new VPC cannot be reached using Session Manager. The instance profile on the instance has been verified as correct. What is the most likely root cause?

  • ❏ A. No interface VPC endpoint for com.amazonaws.eu-west-2.ssmmessages exists in the VPC

  • ❏ B. The security group is missing an inbound rule for SSH port 22

  • ❏ C. A bastion host is not deployed in the VPC

  • ❏ D. The IAM user does not have permission to start a Session Manager session

Question 6

 

Which KMS key approach lets you perform cryptographic erasure by removing key material to render encrypted data immediately unusable within 30 minutes?

  • ❏ A. Use a KMS customer-managed key and schedule key deletion

  • ❏ B. Import and delete your own key material in KMS to render keys unusable

  • ❏ C. Create a KMS key in a custom key store backed by CloudHSM

  • ❏ D. Use an AWS owned KMS key that AWS manages

Question 7

 

Hawthorne Robotics uses a SAML 2.0 enterprise identity provider federated with AWS IAM, and employees sign in to the AWS Management Console through single sign-on. Last week, an Amazon EC2 instance was terminated and the security team must quickly determine which federated identity performed the action. What is the fastest way to identify the user who terminated the instance?

  • ❏ A. Run an SQL query in Amazon Athena over CloudTrail logs to find TerminateInstances, note the IAM user ARN, then query for AssumeRoleWithSAML events that include that user ARN

  • ❏ B. Use the CloudTrail console to locate the TerminateInstances event, capture the assumed role ARN, then find the matching AssumeRoleWithSAML event for that role to read the federated username

  • ❏ C. Search CloudTrail for the TerminateInstances event, identify the IAM user ARN, then look up the AssumeRoleWithWebIdentity event that includes that user ARN

  • ❏ D. Query CloudTrail with Amazon Athena for TerminateInstances, record the role ARN, then search for AssumeRoleWithWebIdentity events that reference that role ARN

Question 8

 

How can AWS WAF Web ACL request logs be reliably delivered to a specified S3 bucket? (Choose 2)

  • ❏ A. Send WAF data to a Kinesis Data Streams stream and persist to S3 with custom consumers.

  • ❏ B. Enable WAF logging and route the Web ACL log stream to a Kinesis Data Firehose delivery stream.

  • ❏ C. Enable CloudFront access logging to write standard access logs to the S3 bucket.

  • ❏ D. Create a Kinesis Data Firehose delivery stream in the same region and set the S3 bucket as the Firehose destination.

  • ❏ E. Configure WAF to publish request logs directly to CloudWatch Logs.

Question 9

 

A security operations team at TrailGrid, a logistics startup, runs roughly 350 EC2-based microservices that emit high-volume application logs. They need a managed approach that ingests streaming events, supports real-time analysis with dashboards, allows replaying past events for reprocessing, and keeps the logs stored durably for investigations. Which AWS services should they use together to meet these needs? (Choose 2)

  • ❏ A. Amazon SQS

  • ❏ B. Amazon OpenSearch Service

  • ❏ C. Amazon ElastiCache

  • ❏ D. Amazon Kinesis Data Streams

  • ❏ E. Amazon Athena

Question 10

 

How can an automated, reversible isolation of a compromised EC2 instance be implemented while preserving the instance for forensic analysis?

  • ❏ A. Use EventBridge to route findings to a Kinesis Data Stream consumed by a Lambda that updates the instance security group to remove ingress and egress rules

  • ❏ B. Trigger a Lambda from EventBridge that runs a Systems Manager Run Command to execute a quarantine script on the host

  • ❏ C. Have EventBridge invoke a Lambda that updates subnet Network ACLs to block the compromised instance’s traffic

  • ❏ D. Send findings through Kinesis Data Analytics which transforms the stream and then updates the instance security group rules

Question 11

 

An online ticketing firm NovaTix stores all container images in Amazon Elastic Container Registry. Security requires vulnerability checks for both operating system layers and application language packages, but each image should be evaluated only once at the time it is pushed, not on an ongoing basis. The company manages 14 repositories and wants to target specific repositories using policy-style matching. What should they configure in Amazon ECR to meet these needs?

  • ❏ A. Enable enhanced scanning with a filter for continuous scanning

  • ❏ B. Use basic scanning with scan on push

  • ❏ C. Enable enhanced scanning and apply a scan-on-push repository filter

  • ❏ D. AWS Security Hub

Question 12

 

How can you configure a customer-managed AWS KMS key so it can be used only by Amazon S3?

  • ❏ A. Use kms:GranteePrincipal to allow grants only for the S3 service principal

  • ❏ B. Use kms:ViaService in the key policy to allow key use only when called through the S3 service endpoint

  • ❏ C. Require kms:RequestAlias so the key is only usable when callers reference a particular KMS alias

  • ❏ D. Add key policy conditions requiring aws:SourceAccount or aws:SourceArn to limit calls to a specific S3 bucket

Question 13

 

A digital ticketing startup operates a three-tier application across two Availability Zones with distinct subnets for the web, application, and database layers. The security team needs continuous detection of risky network exposure on EC2-hosted web servers and an automated alert whenever instances become reachable on nonapproved ports that violate policy. Which AWS services should be combined to implement automated notifications with the least custom development effort? (Choose 2)

  • ❏ A. AWS Shield

  • ❏ B. Amazon SNS

  • ❏ C. VPC Flow Logs

  • ❏ D. Amazon GuardDuty

  • ❏ E. Amazon Inspector

Question 14

 

How can an IAM policy ensure a role can create and manage EC2 instances only when those instances are launched in a specific VPC and have required tag key/value pairs?

  • ❏ A. Use an Organizations Service Control Policy to block launches outside the VPC and enforce tags.

  • ❏ B. Attach an IAM policy that requires aws:RequestTag on RunInstances, uses ec2:ResourceTag for other EC2 actions, and adds a condition on ec2:Subnet or ec2:Vpc to restrict the VPC.

  • ❏ C. Use an AWS Config rule to detect missing tags or wrong VPC and auto-remediate with Lambda.

  • ❏ D. Use ec2:CreateTags in a policy condition to restrict which instances can be created.

Question 15

 

An analytics workload runs on an Amazon EC2 instance in a development account (Acct Dev01). It must write objects to an Amazon S3 bucket it owns in the same account and must also read, but not change, objects from an S3 bucket in a separate data account (Acct Lake02). The organization uses a multi-account model where the Platform Governance team enforces security and guardrails across all accounts, the App Engineering team operates the workload in Acct Dev01, and the Data Platform team manages the lake in Acct Lake02. Governance requires that all AWS API calls across every account use encrypted transport and that member accounts cannot leave the organization, and they also require least-privilege cross-account access from Acct Dev01 to the lake bucket in Acct Lake02. Which set of controls should you implement to meet these requirements while keeping access scoped appropriately? (Choose 2)

  • ❏ A. Attach a bucket policy in Acct Lake02 that grants full control to the role in Acct Dev01

  • ❏ B. Apply an organization-wide permission boundary that blocks non-TLS requests and stops accounts from leaving the organization

  • ❏ C. Have the app team create an IAM role in Acct Dev01 for the EC2 instance and add a bucket policy on the Lake02 bucket that allows read-only access to that role

  • ❏ D. Use an S3 bucket policy with aws:PrincipalOrgID to restrict access to your organization and rely on VPC endpoint policies to require TLS for all API calls

  • ❏ E. Create a Service Control Policy that denies any requests not using SSL and disallows member accounts from leaving the organization, and attach it to the organization root

Question 16

 

Which approach provides interactive shell access to EC2 without opening inbound ports or managing SSH keys, while ensuring audited, encrypted session logs?

  • ❏ A. Use AWS Systems Manager Run Command to run commands and log outputs to CloudWatch with encryption.

  • ❏ B. Use AWS Systems Manager Session Manager with session logging and encryption enabled.

  • ❏ C. Use AWS CloudShell from the console to SSH into instances in the VPC.

  • ❏ D. Deploy an AWS Client VPN and then SSH into instances while sending logs to CloudWatch with encryption.

Question 17

 

A regional insurance cooperative is updating its AWS environment to comply with a security standard that requires using the company’s own imported key material for customer managed KMS keys used by AWS services. The policy also requires rotating these encryption keys every 15 months. What is the best way to implement this while minimizing application changes?

  • ❏ A. Enable automatic key rotation for the KMS key that uses imported key material

  • ❏ B. Schedule deletion of the current KMS key and immediately create a replacement with the same name, then import the new material

  • ❏ C. Create a new KMS key, import the company-supplied key material into it, and repoint the existing alias from the old key to the new key

  • ❏ D. Use a KMS custom key store backed by AWS CloudHSM and rely on HSM rotation features

Question 18

 

Which controls protect EC2 and RDS data at rest and in transit and provide automatic rotation of database credentials?

  • ❏ A. Encrypt RDS and EBS with KMS, provision TLS, and store secrets in Systems Manager Parameter Store SecureString expecting it to auto-rotate.

  • ❏ B. Encrypt EBS and RDS with KMS, enforce TLS, store DB credentials in AWS Secrets Manager and enable automatic rotation.

  • ❏ C. Use CloudHSM-backed keys for RDS and snapshots, enforce TLS, store credentials in KMS and rotate yearly.

  • ❏ D. Use IAM database authentication for RDS, encrypt storage with KMS, and avoid managing passwords.

Question 19

 

At Helios Mobility, a departing contractor committed dozens of AWS access key IDs to a public Git repository, and the security team now has a spreadsheet listing 58 exposed key IDs across a multi-account AWS Organizations setup. The team must quickly determine which IAM users in each account own these keys so they can rotate the credentials immediately. What is the most effective approach?

  • ❏ A. Generate IAM Access Analyzer findings in every account in the organization, then combine the results to identify the users and rotate the keys

  • ❏ B. Run an AWS CloudTrail Lake query from the management account across all accounts to search for those access key IDs and map them to principals, then rotate the keys

  • ❏ C. Generate an IAM credential report in each member account in AWS Organizations, aggregate the reports to map access key IDs to IAM users, and immediately rotate the keys

  • ❏ D. Generate an IAM credential report only in the management account for AWS Organizations, identify the users, and rotate the keys

Question 20

 

How should permissions and ownership for customer-managed KMS keys be granted and managed? (Choose 2)

  • ❏ A. The account root user automatically has full access to any KMS key.

  • ❏ B. The IAM identity that creates a key does not automatically become the key owner; explicit key policy, IAM policy, or grant is required.

  • ❏ C. A service control policy can grant principals direct cross-account usage of a KMS key.

  • ❏ D. An identity with kms:CreateKey can supply the initial key policy when creating the key and include statements that allow that identity to administer or use the key.

  • ❏ E. IAM policies alone can grant other accounts use of a KMS key without changes to the key policy.

Question 21

 

Northbridge Mutual, a regional insurer, has stored sensitive customer records on an on-premises system using encryption. Initially it used one key, then moved to multiple keys by splitting data into four segments. The company plans to migrate about 26 TB of files into a single Amazon S3 bucket and wants each object encrypted with a distinct key for strong protection, without reintroducing any logical partitioning or extra data-splitting processes. What should the team implement?

  • ❏ A. Place each data category into separate Amazon S3 buckets and enable server-side encryption with Amazon S3 managed keys (SSE-S3)

  • ❏ B. Use client-side encryption with the AWS SDK S3 Encryption Client backed by AWS KMS Multi-Region keys so each file is encrypted before upload

  • ❏ C. Use a single Amazon S3 bucket and enable server-side encryption with Amazon S3 managed keys (SSE-S3)

  • ❏ D. Use a single Amazon S3 bucket with server-side encryption using AWS KMS (SSE-KMS) and rely on an encryption context to force a unique key per object

Question 22

 

Which AWS services provide a complete audit trail of resource configuration changes and API activity across Regions?

  • ❏ A. Enable AWS CloudTrail for API and user activity and use Amazon EventBridge to route events

  • ❏ B. Enable AWS Config for configuration history and AWS CloudTrail for API and user activity across Regions

  • ❏ C. Use AWS Config with IAM Access Analyzer to record who called APIs

  • ❏ D. Send CloudTrail logs to CloudWatch Logs and rely on CloudWatch for configuration history

Question 23

 

At Peregrine Logistics, an IAM role used by a nightly export job must be allowed to upload objects to Amazon S3 only from May 15, 2024 through June 14, 2024 in UTC, inclusive. Which identity-based policy enforces this requirement?

  • ❏ A. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:PutObject", "Principal": "*", "Condition": { "DateGreaterThan": {"aws:CurrentTime": "2024-05-15T00:00:00Z"}, "DateLessThan": {"aws:CurrentTime": "2024-06-14T23:59:59Z"} } } ] }

  • ❏ B. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:PutObject", "Resource": "*", "Condition": { "DateGreaterThan": {"aws:CurrentTime": "2024-05-15T00:00:00Z"}, "DateLessThan": {"aws:CurrentTime": "2024-06-14T23:59:59Z"} } } ] }

  • ❏ C. { "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Action": "s3:PutObject", "Resource": "*", "Condition": { "DateGreaterThan": {"{aws:logintime}": "2024-05-15T00:00:00Z"}, "DateLessThan": {"{aws:logintime}": "2024-06-14T23:59:59Z"} } } ] }

  • ❏ D. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:PutObject", "Resource": "*", "Condition": { "DateGreaterThan": {"aws:CurrentTime": "2024-06-14T23:59:59Z"}, "DateLessThan": {"aws:CurrentTime": "2024-05-15T00:00:00Z"} } } ] }

Question 24

 

A Firewall Manager web ACL was created but is not being associated with in-scope resources. What is the most likely reason?

  • ❏ A. Firewall Manager cannot replace web ACLs that were originally associated by AWS Shield Advanced

  • ❏ B. An unrelated AWS Config noncompliance is blocking the web ACL association

  • ❏ C. Auto-remediation was not enabled, so Firewall Manager created the web ACL but did not attach it

  • ❏ D. The target resource type is not eligible for managed WAF association

Question 25

 

Northstar Media Group operates roughly 40 AWS accounts under AWS Organizations, grouped into four organizational units that mirror departments such as finance, product, operations, and shared services. The security team needs a preventative control that stops any principal from deleting Amazon S3 buckets in any member account. What should they implement?

  • ❏ A. Create an AWS Config rule across the organization to detect and auto-remediate S3 bucket deletions

  • ❏ B. Amazon S3 Object Lock

  • ❏ C. Attach a Service Control Policy to each OU that explicitly denies s3:DeleteBucket

  • ❏ D. Attach an IAM policy with a Deny on s3:DeleteBucket to every user and role in all accounts

AWS Security Specialty Braindumps Answered

 

Question 1

 

Solstice Mutual enforces access to every Amazon S3 bucket strictly through identity-based policies. The security team needs an immediate alert if anyone applies or updates a bucket policy on any S3 bucket. What is the most efficient way to achieve this?

  • ✓ B. Create an Amazon EventBridge rule that filters CloudTrail events for s3:PutBucketPolicy and publish notifications to Amazon SNS

The most efficient solution is to listen for the exact API call that adds or updates an S3 bucket policy. CloudTrail records these events, and EventBridge can filter them in real time. Using a direct filter for s3:PutBucketPolicy keeps the design minimal and fast, then SNS delivers the alert.

Create an Amazon EventBridge rule that filters CloudTrail events for s3:PutBucketPolicy and publish notifications to Amazon SNS is correct because EventBridge can natively match CloudTrail API calls without intermediate processing, and SNS provides the notification channel.
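For illustration, an EventBridge event pattern along these lines would match the bucket policy call recorded by CloudTrail. This is a minimal sketch following the standard AWS API Call via CloudTrail event shape, with the SNS topic configured as the rule target:

```json
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutBucketPolicy"]
  }
}
```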

Use AWS Config to react to S3 bucket configuration changes, invoke an AWS Lambda function to parse the event, and send a notification with Amazon SNS is inefficient because EventBridge can match the CloudTrail event directly, making Config and Lambda unnecessary for this use case.

Use Amazon Macie to watch for S3 bucket policy changes and alert through Amazon SNS is not suitable because Macie is for sensitive data discovery and does not monitor bucket policy change API events.

Create an Amazon EventBridge rule using the CloudWatch Alarm State Change event source with an s3:PutBucketPolicy pattern and send to Amazon SNS is incorrect because CloudWatch Alarm State Change is not the right event source for API calls; the correct source is CloudTrail API activity.

When you need alerts for specific API actions, think CloudTrail event → EventBridge rule → SNS for the leanest path, and avoid inserting Lambda or Config unless you need evaluation logic or remediation.

Question 2

 

Which AWS services and configuration record all console and API user activity across an organization, create alarms for specific user actions, and deliver notifications to on-call within 30 seconds? (Choose 2)

  • ✓ C. Stream an organization CloudTrail trail into a central CloudWatch Logs log group

  • ✓ E. Use CloudWatch Logs metric filters and CloudWatch alarms that publish to an SNS topic

The correct solution is to deploy an organization-level CloudTrail that streams events into CloudWatch Logs and then use CloudWatch Logs metric filters with CloudWatch alarms that publish to an SNS topic. Specifically, Stream an organization CloudTrail trail into a central CloudWatch Logs log group provides immediate delivery of management events into a searchable log stream and Use CloudWatch Logs metric filters and CloudWatch alarms that publish to an SNS topic converts matching user-action patterns into metrics and alerts so notifications reach on-call quickly. The combination meets all requirements: continuous organization-wide recording, pattern matching for specific user actions, and near-real-time notification delivery.

The option Deliver organization CloudTrail files only to a central S3 bucket for archival is inappropriate because S3 storage is archival and relies on batch processing for detection.

The option Send CloudTrail events to EventBridge rules that route matching events to SNS is plausible but less ideal because EventBridge focuses on event routing and may need additional configuration to ensure all management events are captured and filtered at the same fidelity as CloudWatch Logs metric filters.

The option Run scheduled Athena queries on CloudTrail files in S3 to detect actions and send SNS alerts is not suitable for sub-minute alerts since Athena queries are batch and scheduled.

When a question requires continuous organization-wide logging plus sub-minute alerts, prefer an organization CloudTrail that streams to CloudWatch Logs, then use metric filters and CloudWatch alarms to trigger SNS. Remember that CloudWatch Logs metric filters produce metrics that support alarm thresholds and sub-minute evaluation, which is why this pattern is commonly tested. References: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-organization-trail.html https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html
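As a sketch of that pattern in CloudFormation, assuming a log group named org-cloudtrail-logs, a hypothetical filter on failed console logins, and a placeholder SNS topic ARN:

```json
{
  "Resources": {
    "FailedConsoleLoginFilter": {
      "Type": "AWS::Logs::MetricFilter",
      "Properties": {
        "LogGroupName": "org-cloudtrail-logs",
        "FilterPattern": "{ ($.eventName = \"ConsoleLogin\") && ($.errorMessage = \"Failed authentication\") }",
        "MetricTransformations": [
          {
            "MetricNamespace": "Security",
            "MetricName": "FailedConsoleLogins",
            "MetricValue": "1"
          }
        ]
      }
    },
    "FailedConsoleLoginAlarm": {
      "Type": "AWS::CloudWatch::Alarm",
      "Properties": {
        "Namespace": "Security",
        "MetricName": "FailedConsoleLogins",
        "Statistic": "Sum",
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:on-call-topic"]
      }
    }
  }
}
```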

Question 3

 

At Cobalt Retail, a security engineer deployed an AWS Firewall Manager policy that provisions an AWS WAF web ACL. After about 45 minutes, several in-scope Application Load Balancers and Amazon CloudFront distributions still do not show the web ACL attached. What is the most likely reason for the missing associations?

  • ✓ C. Auto remediation for noncompliant resources is disabled in the Firewall Manager policy, so the web ACL is not associated automatically

The correct answer is Auto remediation for noncompliant resources is disabled in the Firewall Manager policy, so the web ACL is not associated automatically. If auto remediation is off, Firewall Manager can evaluate compliance but will not attach the policy-managed web ACL to the in-scope resources.

Firewall Manager’s association behavior hinges on two settings: auto remediation and the replace setting. With auto remediation off, no attachments occur. With auto remediation on, Firewall Manager associates the web ACL, and the replace setting determines whether it will overwrite an existing web ACL.

The option With only auto remediation enabled, Firewall Manager replaces any existing web ACLs on in-scope resources with the policy web ACL is wrong because auto remediation alone does not replace existing web ACLs. Replacement requires the explicit replace setting.

The statement When auto remediation and replace web ACLs are both enabled, web ACLs created by AWS Shield Advanced policies cannot be replaced is incorrect because the replace setting is not categorically blocked by the origin of the existing web ACL.

The choice An AWS Organizations service control policy is preventing the association calls is a less likely root cause for the described symptoms. While possible, the behavior maps directly to auto remediation being disabled, which is the most probable explanation.

When troubleshooting AWS Firewall Manager and AWS WAF, first verify whether auto remediation is enabled and whether replace web ACLs is selected. Auto remediation controls whether associations happen, and the replace setting controls whether existing web ACLs are overwritten.

Question 4

 

How do you ensure an EC2 instance in a private subnet sends DynamoDB traffic over the Gateway VPC Endpoint instead of the NAT gateway?

  • ✓ B. Associate the gateway endpoint with the private subnet’s route table

The correct action is to Associate the gateway endpoint with the private subnet’s route table. A DynamoDB Gateway VPC Endpoint works by creating a route for the DynamoDB service prefix in the subnet route table so traffic to DynamoDB is directed to the endpoint and stays on the AWS network. The other choices are incorrect for these reasons.

Create an Interface VPC Endpoint for DynamoDB is incorrect because DynamoDB is accessed via a gateway endpoint in typical configurations rather than an interface endpoint backed by AWS PrivateLink.

Update the gateway endpoint policy to list the instance private IP is incorrect because endpoint policies affect which principals can call the service, not how the subnet routes traffic.

Add a route for the DynamoDB service prefix that points to the NAT gateway is incorrect because that would explicitly force traffic to the NAT gateway and take it off the private AWS network instead of sending it to the gateway endpoint.

When VPC endpoint traffic is still going via NAT, verify route table associations for the endpoint and confirm that a route exists for the DynamoDB service prefix (for example com.amazonaws.<region>.dynamodb) pointing to the endpoint. Ensure the subnet uses the associated route table and that no higher-priority route sends the same prefix to the NAT gateway. For authoritative guidance, see the AWS VPC endpoints documentation and the DynamoDB gateway endpoint page.

Question 5

 

A digital media analytics firm uses AWS Systems Manager Session Manager to administer Amazon EC2 instances in the eu-west-2 Region. A newly launched Amazon Linux 2023 instance in a private subnet of a brand-new VPC cannot be reached using Session Manager. The instance profile on the instance has been verified as correct. What is the most likely root cause?

  • ✓ A. No interface VPC endpoint for com.amazonaws.eu-west-2.ssmmessages exists in the VPC

The most probable cause is that the private subnet cannot reach the Session Manager control plane because there is no interface endpoint for the service. For private instances without internet egress, Session Manager needs PrivateLink endpoints.

No interface VPC endpoint for com.amazonaws.eu-west-2.ssmmessages exists in the VPC is correct because the SSM Agent uses the ssmmessages endpoint to establish and maintain session channels. Without this endpoint in a private VPC, sessions cannot be initiated.

The security group is missing an inbound rule for SSH port 22 is incorrect because Session Manager does not use inbound SSH; it uses the SSM Agent over HTTPS to the SSM, SSMMessages, and EC2Messages endpoints.

A bastion host is not deployed in the VPC is incorrect since Session Manager eliminates the need for bastion hosts entirely.

The IAM user does not have permission to start a Session Manager session is unlikely in this context; missing IAM permission would produce an authorization error, not a connectivity issue from a private subnet lacking the necessary endpoints.

For private instances with no internet or NAT, ensure interface VPC endpoints for ssm, ssmmessages, and ec2messages. Remember that Session Manager does not require SSH, bastion hosts, or key pairs.

Question 6

 

Which KMS key approach lets you perform cryptographic erasure by removing key material to render encrypted data immediately unusable within 30 minutes?

  • ✓ B. Import and delete your own key material in KMS to render keys unusable

The correct choice is Import and delete your own key material in KMS to render keys unusable. Importing your own key material into a KMS customer key allows you to remove that key material on demand, and once the imported material is deleted the KMS key can no longer perform cryptographic operations, which is the practical definition of cryptographic erasure. This satisfies a requirement to render encrypted data unusable within a short timeframe.

The option Use a KMS customer-managed key and schedule key deletion is incorrect because scheduled deletion enforces a minimum waiting period (seven days) and cannot meet an immediate or minutes-level erasure SLA.

The option Create a KMS key in a custom key store backed by CloudHSM is incorrect because, while custom key stores place key material in CloudHSM, they do not provide a documented KMS-level mechanism to instantly remove key material via KMS to achieve immediate cryptographic erasure in the same way imported material can be wiped.

The option Use an AWS owned KMS key that AWS manages is incorrect because AWS owned keys are controlled by AWS and customers cannot delete or manage the underlying material to force erasure.

When the requirement is to be able to instantly render encrypted data unusable, think about who controls the key material. If you must trigger erasure yourself, importing key material is the supported KMS approach. Be careful not to confuse scheduling key deletion (which is delayed) with immediate deletion of imported material. References: https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys.html https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html

Question 7

 

Hawthorne Robotics uses a SAML 2.0 enterprise identity provider federated with AWS IAM, and employees sign in to the AWS Management Console through single sign-on. Last week, an Amazon EC2 instance was terminated and the security team must quickly determine which federated identity performed the action. What is the fastest way to identify the user who terminated the instance?

  • ✓ B. Use the CloudTrail console to locate the TerminateInstances event, capture the assumed role ARN, then find the matching AssumeRoleWithSAML event for that role to read the federated username

AWS CloudTrail records both the EC2 API action and the STS event that created the federated session. For SAML federation, the actor will assume an IAM role via AssumeRoleWithSAML, and the EC2 TerminateInstances event will show the assumed role ARN that performed the action.

Use the CloudTrail console to locate the TerminateInstances event, capture the assumed role ARN, then find the matching AssumeRoleWithSAML event for that role to read the federated username is correct because it uses CloudTrail’s event history for the quickest correlation and looks for the correct SAML-based STS call tied to the same role ARN.
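To make the correlation concrete, here is a trimmed sketch of what the userIdentity block of the TerminateInstances event might look like (the ARNs and session name are hypothetical); the session name portion of the assumed-role ARN often reveals the federated username directly:

```json
{
  "eventSource": "ec2.amazonaws.com",
  "eventName": "TerminateInstances",
  "userIdentity": {
    "type": "AssumedRole",
    "arn": "arn:aws:sts::111122223333:assumed-role/SAMLOperatorRole/jane.doe",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "arn": "arn:aws:iam::111122223333:role/SAMLOperatorRole"
      }
    }
  }
}
```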

Run an SQL query in Amazon Athena over CloudTrail logs to find TerminateInstances, note the IAM user ARN, then query for AssumeRoleWithSAML events that include that user ARN is wrong because SAML federation assumes roles rather than users and Athena adds time compared to the CloudTrail console.

Search CloudTrail for the TerminateInstances event, identify the IAM user ARN, then look up the AssumeRoleWithWebIdentity event that includes that user ARN is incorrect since the principal is not an IAM user and the web identity STS call does not apply to SAML.

Query CloudTrail with Amazon Athena for TerminateInstances, record the role ARN, then search for AssumeRoleWithWebIdentity events that reference that role ARN is invalid due to using the wrong STS event and because Athena is not the fastest route versus CloudTrail event history.

For federated access, trace actions in CloudTrail by correlating the service event with the assumed role ARN and the AssumeRoleWithSAML STS event. Remember that SAML uses roles not IAM users, and the fastest path is often the CloudTrail console event history rather than running Athena queries.

Question 8

 

How can AWS WAF Web ACL request logs be reliably delivered to a specified S3 bucket? (Choose 2)

  • ✓ B. Enable WAF logging and route the Web ACL log stream to a Kinesis Data Firehose delivery stream.

  • ✓ D. Create a Kinesis Data Firehose delivery stream in the same region and set the S3 bucket as the Firehose destination.

The reliable approach is to provision a Kinesis Data Firehose delivery stream pointed at the target S3 bucket and enable AWS WAF logging for the Web ACL to send its log stream to that Firehose. Specifically, Enable WAF logging and route the Web ACL log stream to a Kinesis Data Firehose delivery stream is required because AWS WAF supports delivering full Web ACL evaluation logs only to a Kinesis Data Firehose delivery stream. Equally important is to Create a Kinesis Data Firehose delivery stream in the same region and set the S3 bucket as the Firehose destination, since the Firehose must exist and be configured to write to the preconfigured S3 bucket before associating WAF logging. The incorrect options are wrong for these reasons.

Send WAF data to a Kinesis Data Streams stream and persist to S3 with custom consumers is incorrect because Kinesis Data Streams is not a native WAF logging target and would require custom consumers and additional operational complexity to achieve S3 delivery.

Enable CloudFront access logging to write standard access logs to the S3 bucket is incorrect because CloudFront access logs do not contain the full WAF evaluation payload and WAF metadata that Web ACL logs provide.

Configure WAF to publish request logs directly to CloudWatch Logs is incorrect because AWS WAF does not directly publish Web ACL request logs to CloudWatch Logs; the supported target for WAF logging is Kinesis Data Firehose.

Remember that AWS WAF Web ACL logging supports only Kinesis Data Firehose as the direct delivery target. You must create and configure the Firehose delivery stream in the same Region with the S3 bucket as the destination, then enable WAF logging and associate the Web ACL with that Firehose. Be cautious with distractors that mention CloudTrail, CloudFront access logs, or Kinesis Data Streams; these services capture related activity or request-level information but do not replace WAF Web ACL logging to Firehose for the full WAF evaluation payload. References: https://docs.aws.amazon.com/waf/latest/developerguide/waf-logging.html https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html
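A sketch of the logging configuration as it might be passed to the PutLoggingConfiguration API (both ARNs are placeholders; note that WAF requires the Firehose stream name to begin with aws-waf-logs-):

```json
{
  "LoggingConfiguration": {
    "ResourceArn": "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-web-acl/a1b2c3d4-example",
    "LogDestinationConfigs": [
      "arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-app"
    ]
  }
}
```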

Question 9

 

A security operations team at TrailGrid, a logistics startup, runs roughly 350 EC2-based microservices that emit high-volume application logs. They need a managed approach that ingests streaming events, supports real-time analysis with dashboards, allows replaying past events for reprocessing, and keeps the logs stored durably for investigations. Which AWS services should they use together to meet these needs? (Choose 2)

  • ✓ B. Amazon OpenSearch Service

  • ✓ D. Amazon Kinesis Data Streams

Amazon Kinesis Data Streams is designed for high-throughput ingestion with ordered shards and configurable retention, enabling consumers to process events in real time and to replay records by re-reading from a specific sequence number or timestamp.

Amazon OpenSearch Service provides durable indexing, search, and visualization, making it a strong destination for log analytics and persistent storage of operational logs.

Amazon SQS is a queueing service for decoupling and lacks stream processing semantics, rich replay, and integrated analytics or search features.

Amazon ElastiCache is an in-memory cache and is not a persistent datastore or analytics engine for logs.

Amazon Athena queries data in S3 using SQL and does not handle real-time ingestion or message replay workflows.

When you see requirements for real-time stream analytics, replay, and durable log storage, think of combining a streaming ingestion service with a search/analytics store such as Kinesis Data Streams plus OpenSearch.

Question 10

 

How can an automated, reversible isolation of a compromised EC2 instance be implemented while preserving the instance for forensic analysis?

  • ✓ A. Use EventBridge to route findings to a Kinesis Data Stream consumed by a Lambda that updates the instance security group to remove ingress and egress rules

Correct implementation: Use EventBridge to route findings to a Kinesis Data Stream consumed by a Lambda that updates the instance security group is correct because it provides automated, reversible, instance-scoped network isolation while preserving the instance for forensic analysis. This approach separates detection (GuardDuty or Security Hub findings routed through EventBridge) from buffering (Kinesis) and remediation (Lambda), and it updates the security group attached to the instance, which is a first-class, reversible control-plane change that does not modify evidence on the host. Security groups are applied at the instance ENI level, can be reverted or replaced to restore connectivity, and do not require access to the guest OS.

The EventBridge → Lambda → Systems Manager Run Command approach depends on the SSM Agent and requires executing code inside the instance, which can alter logs and forensic artifacts and may fail if the agent is unavailable.

The EventBridge → Lambda → modify subnet Network ACLs approach affects all hosts in the subnet rather than isolating a single instance, and NACLs are stateless and harder to manage for per-instance reversibility.

The Kinesis Data Analytics → update security group idea is incorrect because Kinesis Data Analytics is for stream processing and does not perform control-plane modifications; you would still need a Lambda or another service to enact security group changes.

Look for solutions that avoid executing commands inside the compromised host to preserve forensic evidence, prefer control-plane changes that are reversible (modifying and later reverting security groups), and choose mechanisms that scope remediation to the individual instance rather than subnet-level controls. Favor designs that use EventBridge for findings routing, an event buffer such as Kinesis when you need durability or fan-out, and a Lambda remediation function that performs a clear, reversible action. Also note that any host-dependent approach relies on SSM, and that dependency can be a weakness. References: https://docs.aws.amazon.com/eventbridge/latest/userguide/what-is-amazon-eventbridge.html https://docs.aws.amazon.com/streams/latest/dev/kinesis-intro.html https://docs.aws.amazon.com/lambda/latest/dg/welcome.html https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
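For instance, a hedged sketch of an EventBridge event pattern that could start this flow by matching higher-severity GuardDuty findings (the severity threshold is an assumption):

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "severity": [{ "numeric": [">=", 7] }]
  }
}
```

The rule's target would be the Kinesis data stream, with the Lambda consumer swapping the instance's security groups for an empty quarantine group that can later be reverted.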

Question 11

 

An online ticketing firm NovaTix stores all container images in Amazon Elastic Container Registry. Security requires vulnerability checks for both operating system layers and application language packages, but each image should be evaluated only once at the time it is pushed, not on an ongoing basis. The company manages 14 repositories and wants to target specific repositories using policy-style matching. What should they configure in Amazon ECR to meet these needs?

  • ✓ C. Enable enhanced scanning and apply a scan-on-push repository filter

The correct choice is Enable enhanced scanning and apply a scan-on-push repository filter. Enhanced scanning integrates Amazon ECR with Amazon Inspector to identify vulnerabilities in both operating system layers and application package ecosystems. Using a scan-on-push repository filter ensures each image is scanned once at push and allows you to target specific repositories without enabling continuous rescans.

Enable enhanced scanning with a filter for continuous scanning is incorrect because it continually rescans images, which violates the one-time-on-push requirement.

Use basic scanning with scan on push is incorrect because basic scanning does not include programming language package vulnerability analysis, only OS-level CVEs via Clair.

AWS Security Hub is incorrect because it aggregates and normalizes findings across services but does not scan ECR images for vulnerabilities.

When you need both OS and language package findings, choose enhanced scanning. Use the scan on push filter for a one-time scan at image push; pick continuous scanning only when you require ongoing updates as new CVEs are published. Remember that manual enhanced scans are not supported.
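As an illustration, the registry scanning configuration might look like the following JSON, as accepted by the ECR PutRegistryScanningConfiguration API (the wildcard filter value is a placeholder):

```json
{
  "scanType": "ENHANCED",
  "rules": [
    {
      "scanFrequency": "SCAN_ON_PUSH",
      "repositoryFilters": [
        { "filter": "ticketing-*", "filterType": "WILDCARD" }
      ]
    }
  ]
}
```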

Question 12

 

How can you configure a customer-managed AWS KMS key so it can be used only by Amazon S3?

  • ✓ B. Use kms:ViaService in the key policy to allow key use only when called through the S3 service endpoint

The correct approach is to add a key policy condition that uses kms:ViaService so the key accepts requests only when they are invoked through the S3 service endpoint. Using kms:ViaService is the intended way to bind a KMS key to a specific AWS service because KMS evaluates that condition when a service calls KMS on behalf of an operation (for example, when S3 performs server-side encryption with a customer-managed key). This ensures the key cannot be used directly by other services or by principals that are not invoking KMS via the S3 service endpoint.
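A minimal sketch of such a key policy statement, assuming a placeholder account ID and the us-east-1 S3 endpoint:

```json
{
  "Sid": "AllowUseOnlyThroughS3",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
  "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
  "Resource": "*",
  "Condition": {
    "StringEquals": { "kms:ViaService": "s3.us-east-1.amazonaws.com" }
  }
}
```

The other choices are incorrect for these reasons.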

Use kms:GranteePrincipal to allow grants only for the S3 service principal is incorrect because kms:GranteePrincipal applies to CreateGrant requests naming a grantee and does not universally stop other KMS operations invoked outside of grants.

Require kms:RequestAlias so the key is only usable when callers reference a particular KMS alias is incorrect because alias-based restrictions control how the key is referenced but do not prevent other services from invoking the key through their endpoints.

Add key policy conditions requiring aws:SourceAccount or aws:SourceArn to limit calls to a specific S3 bucket is misleading because while SourceAccount/SourceArn can constrain the source of requests, they do not ensure the call was made via the S3 service endpoint; therefore they don’t reliably prevent other services within the same account from using the key.

When the question asks to restrict a KMS key to a specific AWS service, look for the kms:ViaService condition in the key policy. Remember that S3 (and other services) invoke KMS on behalf of operations, and kms:ViaService ties allowed requests to that service endpoint. References: https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html and https://docs.aws.amazon.com/kms/latest/developerguide/kms-policies.html

Question 13

 

A digital ticketing startup operates a three-tier application across two Availability Zones with distinct subnets for the web, application, and database layers. The security team needs continuous detection of risky network exposure on EC2-hosted web servers and an automated alert whenever instances become reachable on nonapproved ports that violate policy. Which AWS services should be combined to implement automated notifications with the least custom development effort? (Choose 2)

  • ✓ B. Amazon SNS

  • ✓ E. Amazon Inspector

The right pairing with minimal custom work is Amazon Inspector and Amazon SNS. Amazon Inspector can evaluate EC2 network reachability and produce findings when instances are exposed on restricted ports, and Amazon SNS can immediately notify the security team by publishing those findings to subscribed endpoints.

AWS Shield is designed for DDoS mitigation and does not assess instance port exposure or generate compliance findings.

Amazon GuardDuty focuses on threat detection and anomaly signals from logs and telemetry, not policy-driven checks of which ports are reachable on EC2.

VPC Flow Logs record traffic metadata but would require building and maintaining custom analytics to infer exposure and trigger alerts, which is more effort than using Inspector findings with SNS.

When you see a need to detect EC2 port exposure with the least effort, think Amazon Inspector for findings and Amazon SNS for notifications; options like VPC Flow Logs, CloudWatch, or GuardDuty imply more custom logic or different detection goals.
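A sketch of the glue between the two services, as an EventBridge event pattern matching Amazon Inspector findings routed to an SNS topic (the severity filter is an assumption):

```json
{
  "source": ["aws.inspector2"],
  "detail-type": ["Inspector2 Finding"],
  "detail": {
    "severity": ["HIGH", "CRITICAL"]
  }
}
```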

Question 14

 

How can an IAM policy ensure a role can create and manage EC2 instances only when those instances are launched in a specific VPC and have required tag key/value pairs?

  • ✓ B. Attach an IAM policy that requires aws:RequestTag on RunInstances, uses ec2:ResourceTag for other EC2 actions, and adds a condition on ec2:Subnet or ec2:Vpc to restrict the VPC.

The correct choice is Attach an IAM policy that requires aws:RequestTag on RunInstances, uses ec2:ResourceTag for other EC2 actions, and adds a condition on ec2:Subnet or ec2:Vpc to restrict the VPC. This approach enforces tag requirements at creation time by using the aws:RequestTag condition key on RunInstances, so the API call is denied unless the required tags are present, and it enforces ongoing management restrictions with ec2:ResourceTag, so other EC2 actions succeed only for resources that carry the required tag key/value pairs. Constraining the target VPC is done by adding a condition on ec2:Subnet or ec2:Vpc (or by whitelisting allowed subnet IDs) so the policy only permits launches into the intended VPC; a sketch appears after this explanation.

Use an Organizations Service Control Policy to block launches outside the VPC and enforce tags is incorrect because SCPs apply at the account/OU level and cannot reliably enforce resource-level tag requirements at request time.

Use an AWS Config rule to detect missing tags or wrong VPC and auto-remediate with Lambda. is incorrect because AWS Config is detective and remediative after resource creation and cannot prevent the initial unauthorized RunInstances call.

Use ec2:CreateTags in a policy condition to restrict which instances can be created. is incorrect because ec2:CreateTags is an API action name, not a condition key; to require tags at launch you must use aws:RequestTag and to restrict subsequent actions use ec2:ResourceTag.

Remember the distinction between aws:RequestTag (forces tags on resource creation) and ec2:ResourceTag (limits actions to already-tagged resources), and recall that SCPs and AWS Config are not preventive controls for request-time enforcement. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-policies-for-amazon-ec2.html
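Here is the promised sketch of such an identity policy. The tag key/value, VPC ARN, and action list are placeholders, and a production policy would scope the RunInstances conditions to the resource types they apply to (for example, the VPC condition on the subnet resource and the tag condition on the instance resource):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LaunchOnlyInApprovedVpcWithTags",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:RequestTag/Project": "analytics" },
        "ArnEquals": { "ec2:Vpc": "arn:aws:ec2:us-east-1:111122223333:vpc/vpc-0abc1234" }
      }
    },
    {
      "Sid": "ManageOnlyTaggedInstances",
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances"],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/Project": "analytics" }
      }
    }
  ]
}
```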

Question 15

 

An analytics workload runs on an Amazon EC2 instance in a development account (Acct Dev01). It must write objects to an Amazon S3 bucket it owns in the same account and must also read, but not change, objects from an S3 bucket in a separate data account (Acct Lake02). The organization uses a multi-account model where the Platform Governance team enforces security and guardrails across all accounts, the App Engineering team operates the workload in Acct Dev01, and the Data Platform team manages the lake in Acct Lake02. Governance requires that all AWS API calls across every account use encrypted transport and that member accounts cannot leave the organization, and they also require least-privilege cross-account access from Acct Dev01 to the lake bucket in Acct Lake02. Which set of controls should you implement to meet these requirements while keeping access scoped appropriately? (Choose 2)

  • ✓ C. Have the app team create an IAM role in Acct Dev01 for the EC2 instance and add a bucket policy on the Lake02 bucket that allows read-only access to that role

  • ✓ E. Create a Service Control Policy that denies any requests not using SSL and disallows member accounts from leaving the organization, and attach it to the organization root

The correct choices are Have the app team create an IAM role in Acct Dev01 for the EC2 instance and add a bucket policy on the Lake02 bucket that allows read-only access to that role and Create a Service Control Policy that denies any requests not using SSL and disallows member accounts from leaving the organization, and attach it to the organization root. The first enforces least privilege for cross-account S3 reads via a role in the application account and a resource-based bucket policy in the data account. The second applies organization-wide guardrails to require TLS for AWS APIs and to stop accounts from leaving the organization.

Attach a bucket policy in Acct Lake02 that grants full control to the role in Acct Dev01 is excessive and violates least privilege by enabling write and delete actions.

Apply an organization-wide permission boundary that blocks non-TLS requests and stops accounts from leaving the organization is not feasible because permissions boundaries apply to IAM principals within a single account and cannot be attached at the org root.

Use an S3 bucket policy with aws:PrincipalOrgID to restrict access to your organization and rely on VPC endpoint policies to require TLS for all API calls does not meet the governance mandate because it cannot prevent accounts from leaving the organization and does not enforce TLS across all accounts for all APIs.

Use SCPs for organization-wide guardrails and use resource-based policies with targeted IAM roles for least-privilege cross-account access.
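A minimal sketch of such an SCP (statement IDs are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyNonTLSRequests",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    },
    {
      "Sid": "DenyLeavingTheOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```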

Question 16

 

Which approach provides interactive shell access to EC2 without opening inbound ports or managing SSH keys, while ensuring audited, encrypted session logs?

  • ✓ B. Use AWS Systems Manager Session Manager with session logging and encryption enabled.

The correct choice is Use AWS Systems Manager Session Manager with session logging and encryption enabled. Session Manager gives interactive, shell-style access to instances without opening inbound network ports and without requiring SSH keypairs or bastion hosts. It integrates with AWS CloudTrail and can stream or store session activity to CloudWatch Logs or S3 with server-side encryption enabled, meeting audit and encryption requirements. The other choices are incorrect for these reasons.

Use AWS Systems Manager Run Command to run commands and log outputs to CloudWatch with encryption. runs commands non-interactively and cannot provide a realtime, interactive session stream suitable for audit of administrator activity.

Use AWS CloudShell from the console to SSH into instances in the VPC. CloudShell is a client environment in the console and does not automatically provide private VPC access to EC2; using it to reach instances still requires network configuration and SSH credentials.

Deploy an AWS Client VPN and then SSH into instances while sending logs to CloudWatch with encryption. Client VPN grants network connectivity but does not eliminate SSH key management or the need to open SSH access, and it does not inherently provide session-level auditing of administrator commands.

Look for answers that remove inbound ports and SSH key management while explicitly mentioning session auditing or logging; that pattern usually indicates Session Manager. Also remember Session Manager can log to CloudWatch Logs or S3 and integrates with CloudTrail for session metadata. References: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-auditing.html
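For reference, session preferences live in an SSM document; a hedged sketch of the inputs that turn on encrypted CloudWatch Logs delivery (the log group name and key alias are placeholders):

```json
{
  "schemaVersion": "1.0",
  "description": "Session Manager preferences",
  "sessionType": "Standard_Stream",
  "inputs": {
    "cloudWatchLogGroupName": "session-manager-audit-logs",
    "cloudWatchEncryptionEnabled": true,
    "kmsKeyId": "alias/session-manager-key"
  }
}
```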

Question 17

 

A regional insurance cooperative is updating its AWS environment to comply with a security standard that requires using the company’s own imported key material for customer managed KMS keys used by AWS services. The policy also requires rotating these encryption keys every 15 months. What is the best way to implement this while minimizing application changes?

  • ✓ C. Create a new KMS key, import the company-supplied key material into it, and repoint the existing alias from the old key to the new key

Create a new KMS key, import the company-supplied key material into it, and repoint the existing alias from the old key to the new key is correct because imported key material cannot be automatically rotated and you cannot swap key material on an existing KMS key; manual rotation is achieved by creating a new key and moving the alias so applications seamlessly use the new key.

Enable automatic key rotation for the KMS key that uses imported key material is wrong because KMS auto-rotation only applies to AWS-generated symmetric customer managed keys, not to imported, asymmetric, HMAC, or custom key store keys.

Schedule deletion of the current KMS key and immediately create a replacement with the same name, then import the new material is incorrect because KMS requires a 7–30 day waiting period before deletion completes, preventing immediate recreation and risking data inaccessibility.

Use a KMS custom key store backed by AWS CloudHSM and rely on HSM rotation features is incorrect since KMS keys in custom key stores do not support automatic rotation and this approach does not fulfill the requirement to use imported key material for KMS keys.

For KMS keys with imported material, think manual rotation via aliases: create a new key, import the new material, and retarget the alias; automatic rotation is only for AWS-generated symmetric keys.
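
A rough boto3 sketch of that manual rotation flow follows. The alias name and wrapping parameters are illustrative assumptions, and the offline step of wrapping the company-supplied key material is elided.

import boto3

kms = boto3.client("kms")

# 1. Create a new key shell that expects imported (EXTERNAL) key material.
new_key = kms.create_key(Origin="EXTERNAL", Description="Rotated imported key")
key_id = new_key["KeyMetadata"]["KeyId"]

# 2. Fetch the public wrapping key and import token used to protect the
#    key material in transit.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# 3. Wrap the key material offline with params["PublicKey"], then:
# kms.import_key_material(
#     KeyId=key_id,
#     ImportToken=params["ImportToken"],
#     EncryptedKeyMaterial=wrapped_material,
#     ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
# )

# 4. Repoint the existing alias so applications pick up the new key
#    without code changes. The alias name is a placeholder.
kms.update_alias(AliasName="alias/customer-data", TargetKeyId=key_id)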

Question 18

 

Which controls protect EC2 and RDS data at rest and in transit and provide automatic rotation of database credentials?

  • ✓ B. Encrypt EBS and RDS with KMS, enforce TLS, store DB credentials in AWS Secrets Manager and enable automatic rotation.

The best choice is Encrypt EBS and RDS with KMS, enforce TLS, store DB credentials in AWS Secrets Manager and enable automatic rotation. AWS Secrets Manager is purpose-built to store database credentials encrypted with KMS and to perform automatic rotations using managed rotation Lambda functions for supported database engines, which minimizes operational effort. Encrypting EBS and the RDS instance with KMS CMKs ensures data at rest protection, and enforcing TLS for database connections ensures data in transit is protected. The Secrets Manager rotation feature can rotate the database user password automatically and update the stored secret, removing manual rotation tasks.

The option Encrypt RDS and EBS with KMS, provision TLS, and store secrets in Systems Manager Parameter Store SecureString expecting it to auto-rotate. is incorrect because Parameter Store SecureString is encrypted with KMS but does not provide built-in automatic rotation for DB credentials; rotation would require a custom solution.

The option Use CloudHSM-backed keys for RDS and snapshots, enforce TLS, store credentials in KMS and rotate yearly. is incorrect because CloudHSM or KMS protect keys, but KMS is not a secret management system and a yearly rotation cadence does not satisfy an automatic, frequent rotation requirement.

The option Use IAM database authentication for RDS, encrypt storage with KMS, and avoid managing passwords. is attractive but incorrect here because IAM DB authentication issues temporary tokens and does not provide the same built-in, automated credential rotation for DB user accounts that Secrets Manager does for stored credentials; it also has limitations depending on engine and client support.

When the question asks for automatic credential rotation with minimal operational effort, choose the service designed for secret storage and rotation (Secrets Manager) and combine it with KMS-backed encryption and TLS to cover data at rest and in transit. References: https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
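
As a hedged illustration, enabling rotation on an existing secret is a single boto3 call. The secret name and rotation function ARN below are placeholder assumptions; supported RDS engines can use the AWS-provided rotation function templates.

import boto3

secrets = boto3.client("secretsmanager")

# Secret name and Lambda ARN are placeholders for this sketch.
secrets.rotate_secret(
    SecretId="prod/rds/app-db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)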

Question 19

 

At Helios Mobility, a departing contractor committed dozens of AWS access key IDs to a public Git repository, and the security team now has a spreadsheet listing 58 exposed key IDs across a multi-account AWS Organizations setup. The team must quickly determine which IAM users in each account own these keys so they can rotate the credentials immediately. What is the most effective approach?

  • ✓ C. Generate an IAM credential report in each member account in AWS Organizations, aggregate the reports to map access key IDs to IAM users, and immediately rotate the keys

The fastest reliable method is to pull IAM credential reports from each member account, then correlate each user's access keys with the leaked list and rotate the affected keys. Credential reports are account-scoped and enumerate IAM users along with their access key status, and no organization-level credential report exists.

The correct choice is Generate an IAM credential report in each member account in AWS Organizations, aggregate the reports to map access key IDs to IAM users, and immediately rotate the keys because this directly provides a definitive mapping of keys to IAM users across accounts.

Generate IAM Access Analyzer findings in every account in the organization, then combine the results to identify the users and rotate the keys is wrong because Access Analyzer identifies external sharing and policy issues, not key-to-user mappings.

Run an AWS CloudTrail Lake query from the management account across all accounts to search for those access key IDs and map them to principals, then rotate the keys is insufficient since CloudTrail only shows activity for keys that were recently used and may miss inactive or never-used keys.

Generate an IAM credential report only in the management account for AWS Organizations, identify the users, and rotate the keys is incorrect because there is no org-wide credential report; reports must be generated per account.

When asked to quickly map access keys to IAM users across many accounts, think IAM credential report and remember it is per account, not organization-wide.
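
A sketch of the cross-account sweep might look like the following; the OrgAuditRole name and the leaked key IDs are placeholder assumptions. Because the credential report summarizes key status and age rather than handing you raw key IDs in every tooling workflow, this sketch pairs per-account user enumeration with list_access_keys to retrieve the IDs for matching.

import boto3

sts = boto3.client("sts")
leaked = {"AKIAEXAMPLEKEYID001", "AKIAEXAMPLEKEYID002"}  # from the spreadsheet

def find_owners(account_id, role_name="OrgAuditRole"):
    # Assume an audit role in the member account; the role name is an
    # assumption for this sketch.
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{role_name}",
        RoleSessionName="key-audit",
    )["Credentials"]
    iam = boto3.client(
        "iam",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # Enumerate every IAM user and compare their key IDs to the leak list.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                if key["AccessKeyId"] in leaked:
                    print(account_id, user["UserName"], key["AccessKeyId"])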

Question 20

 

How should permissions and ownership for customer-managed KMS keys be granted and managed? (Choose 2)

  • ✓ B. The IAM identity that creates a key does not automatically become the key owner; explicit key policy, IAM policy, or grant is required.

  • ✓ D. An identity with kms:CreateKey can supply the initial key policy when creating the key and include statements that allow that identity to administer or use the key.

Correct choices are The IAM identity that creates a key does not automatically become the key owner; explicit key policy, IAM policy, or grant is required and An identity with kms:CreateKey can supply the initial key policy when creating the key and include statements that allow that identity to administer or use the key. The first point emphasizes that creating a customer managed key does not implicitly grant ownership or usage rights to the creator; access is governed by the key policy, by IAM policies (when the key policy allows them), or by explicit grants. The second point clarifies that whoever has permission to call kms:CreateKey can provide the initial key policy at creation time and thereby establish who can administer and use the key.

The account root user automatically has full access to any KMS key is incorrect because the root user has no automatic resource-level privileges unless granted in the key policy; resource policies control KMS access.

A service control policy can grant principals direct cross-account usage of a KMS key is incorrect because SCPs only limit or allow IAM actions at the account level and cannot assign resource permissions on a KMS key.

IAM policies alone can grant other accounts use of a KMS key without changes to the key policy is misleading because IAM policy-based permissions require that the key policy permit IAM principals to be authorized; without the appropriate key policy statements, IAM policies cannot grant access.

Always check the key policy first when evaluating KMS permissions; remember that key policies are resource-based and are the primary control for CMKs, and that the initial key policy is set at creation time. Use grants for temporary or delegated cryptographic operations. For exam scenarios, assume least privilege, prefer explicit key policy statements for cross-account access, and verify whether the key policy delegates permission to use IAM policies. References: https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html
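
To illustrate the second correct point, here is a minimal boto3 sketch that supplies the initial key policy at creation time. The account ID and role name are placeholders, and the sketch assumes it runs as the KeyAdminRole principal, since KMS rejects a key policy that would lock out the caller unless the policy lockout safety check is explicitly bypassed.

import json
import boto3

kms = boto3.client("kms")

# Placeholder account ID and role; the creating identity writes itself
# into the initial key policy as the key administrator.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/KeyAdminRole"},
            "Action": "kms:*",
            "Resource": "*",
        }
    ],
}

key = kms.create_key(Policy=json.dumps(policy), Description="App data key")
print(key["KeyMetadata"]["Arn"])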

Question 21

 

Northbridge Mutual, a regional insurer, has stored sensitive customer records on an on-premises system using encryption. Initially it used one key, then moved to multiple keys by splitting data into four segments. The company plans to migrate about 26 TB of files into a single Amazon S3 bucket and wants each object encrypted with a distinct key for strong protection, without reintroducing any logical partitioning or extra data-splitting processes. What should the team implement?

  • ✓ C. Use a single Amazon S3 bucket and enable server-side encryption with Amazon S3 managed keys (SSE-S3)

The correct choice is Use a single Amazon S3 bucket and enable server-side encryption with Amazon S3 managed keys (SSE-S3). SSE-S3 performs envelope encryption and automatically uses a distinct data key for every object, delivering strong isolation without any need to split or reorganize data. Since 2023, S3 also applies SSE-S3 by default to new uploads at no extra cost.

Place each data category into separate Amazon S3 buckets and enable server-side encryption with Amazon S3 managed keys (SSE-S3) is unnecessary because creating multiple buckets does not improve object-level key uniqueness and adds avoidable operational overhead.

Use client-side encryption with the AWS SDK S3 Encryption Client backed by AWS KMS Multi-Region keys so each file is encrypted before upload can achieve encryption but increases complexity and key management effort, and Multi-Region keys are unrelated to S3’s per-object data key behavior.

Use a single Amazon S3 bucket with server-side encryption using AWS KMS (SSE-KMS) and rely on an encryption context to force a unique key per object is misguided because encryption context does not create new keys; while SSE-KMS can work, it adds KMS costs and request quotas that are not needed to satisfy this requirement.

When the requirement is unique keys per object with minimal effort, think SSE-S3. Remember that encryption context does not generate keys and that SSE-KMS adds cost and limits; client-side encryption adds operational complexity.
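
A one-call boto3 sketch makes the bucket-level default explicit; the bucket name is a placeholder. Because S3 now applies SSE-S3 by default, this call mainly documents the intended setting.

import boto3

s3 = boto3.client("s3")

# Placeholder bucket name. SSE-S3 (AES256) envelope-encrypts every
# object with its own data key, so no partitioning is needed.
s3.put_bucket_encryption(
    Bucket="example-migrated-records",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)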

Question 22

 

Which AWS services provide a complete audit trail of resource configuration changes and API activity across Regions?

  • ✓ B. Enable AWS Config for configuration history and AWS CloudTrail for API and user activity across Regions

The correct choice is Enable AWS Config for configuration history and AWS CloudTrail for API and user activity across Regions. AWS Config provides time-ordered snapshots and change records for supported resources so auditors can see what changed and when. AWS CloudTrail records management and data API calls, identity information, and delivers logs across Regions so you can see who performed which API calls. The combination meets both requirements for configuration history and detailed API access logging.

The option Enable AWS CloudTrail for API and user activity and use Amazon EventBridge to route events is incorrect because EventBridge is an event routing and pattern-matching service and does not replace CloudTrail or Config as the audit sources.

The option Use AWS Config with IAM Access Analyzer to record who called APIs is incorrect because IAM Access Analyzer is for analyzing resource policies and external access, not for recording API call history.

The option Send CloudTrail logs to CloudWatch Logs and rely on CloudWatch for configuration history is incorrect because CloudWatch Logs can store and analyze logs but does not provide resource configuration snapshots and change history; AWS Config is required for that.

For the exam and for practical deployments: enable CloudTrail as a multi-Region or organization trail to capture all Regions and AWS accounts. Send CloudTrail logs to a centralized S3 bucket and optionally to CloudWatch Logs for analysis. Enable AWS Config recording in all Regions and use an AWS Config aggregator or the AWS Organizations integration to centralize configuration data. Ensure the S3 buckets and logs are protected with appropriate access controls and encryption. References: https://docs.aws.amazon.com/config/latest/developerguide/ https://docs.aws.amazon.com/awscloudtrail/latest/userguide/
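
As a sketch, creating a multi-Region trail takes two boto3 calls. The trail and bucket names are placeholders, and the bucket must already carry a policy that allows CloudTrail delivery.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Placeholder names; the S3 bucket policy must permit CloudTrail writes.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-central-trail-logs",
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)
cloudtrail.start_logging(Name="org-audit-trail")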

Question 23

 

At Peregrine Logistics, an IAM role used by a nightly export job must be allowed to upload objects to Amazon S3 only from May 15, 2024 through June 14, 2024 in UTC, inclusive. Which identity-based policy enforces this requirement?

  • ✓ B. The following identity-based policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "*",
      "Condition": {
        "DateGreaterThan": {"aws:CurrentTime": "2024-05-15T00:00:00Z"},
        "DateLessThan": {"aws:CurrentTime": "2024-06-14T23:59:59Z"}
      }
    }
  ]
}

The policy that allows s3:PutObject with aws:CurrentTime using DateGreaterThan and DateLessThan for 2024-05-15T00:00:00Z through 2024-06-14T23:59:59Z correctly enforces a time-bound permission in an identity-based policy. Using aws:CurrentTime evaluates the time of the request in UTC, which is the intended control.

The policy that includes a Principal element is invalid for identity-based policies because Principal is only used in resource-based policies.

The policy using Deny with {aws:logintime} is incorrect because policy variables like {aws:logintime} are not supported with Date operators and Deny would block uploads during the allowed window.

The policy with reversed start and end timestamps fails because the conditions never match the desired window.

For time-limited access in identity-based policies, use aws:CurrentTime with DateGreaterThan and DateLessThan, avoid Principal in identity-based policies, and express times in UTC.
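
A hedged boto3 sketch that attaches the time-bound statement inline to the job's role follows; the role and policy names are placeholder assumptions.

import json
import boto3

iam = boto3.client("iam")

# Placeholder role and policy names for this sketch.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "*",
            "Condition": {
                "DateGreaterThan": {"aws:CurrentTime": "2024-05-15T00:00:00Z"},
                "DateLessThan": {"aws:CurrentTime": "2024-06-14T23:59:59Z"},
            },
        }
    ],
}

iam.put_role_policy(
    RoleName="nightly-export-role",
    PolicyName="TimeBoundS3Put",
    PolicyDocument=json.dumps(policy),
)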

Question 24

 

A Firewall Manager web ACL was created but is not being associated with in-scope resources. What is the most likely reason?

  • ✓ C. Auto-remediation was not enabled, so Firewall Manager created the web ACL but did not attach it

The correct answer is Auto-remediation was not enabled, so Firewall Manager created the web ACL but did not attach it. Firewall Manager can create centralized web ACLs as part of a policy evaluation, but it only automatically associates those web ACLs with in-scope resources when the policy is configured to perform auto-remediation. If auto-remediation is disabled, Firewall Manager records noncompliance and creates the configuration artifacts but does not perform the attach or replace actions.

Firewall Manager cannot replace web ACLs that were originally associated by AWS Shield Advanced is not the most likely cause because Shield Advanced associations can affect replacement in particular circumstances but do not broadly prevent Firewall Manager from attaching web ACLs when remediation is enabled.

An unrelated AWS Config noncompliance is blocking the web ACL association is incorrect because Firewall Manager uses its own compliance evaluation and does not wait for unrelated AWS Config rules to be compliant before associating web ACLs.

The target resource type is not eligible for managed WAF association is a possible cause in some cases, but the more common and likely reason in practice is that auto-remediation was not enabled in the Firewall Manager policy.

On the exam, watch for keywords like auto-remediate, remediation, attach, or replace web ACL. If a question describes a policy that created a resource but did not apply it to targets, consider whether an automatic remediation or enforcement toggle is disabled. Verify supported resource types only when the question calls out nonstandard or unsupported targets. For policies that replace existing ACLs, ensure remediation is enabled and check for special-case protections such as Shield Advanced only when the scenario explicitly mentions them. References: https://docs.aws.amazon.com/waf/latest/developerguide/fms-policy.html https://docs.aws.amazon.com/waf/latest/developerguide/firewall-manager.html
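
A sketch of the corresponding Firewall Manager API call follows. The policy name, resource type, and ManagedServiceData payload are illustrative assumptions; the decisive line is RemediationEnabled set to True, without which Firewall Manager creates the web ACL but never attaches it.

import json
import boto3

fms = boto3.client("fms")

# Names and the ManagedServiceData payload below are assumptions for
# this sketch; consult the WAFV2 policy schema for a real deployment.
fms.put_policy(
    Policy={
        "PolicyName": "central-waf-policy",
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            "ManagedServiceData": json.dumps({
                "type": "WAFV2",
                "preProcessRuleGroups": [],
                "postProcessRuleGroups": [],
                "defaultAction": {"type": "ALLOW"},
                "overrideCustomerWebACLAssociation": False,
            }),
        },
        "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,  # enables automatic web ACL association
    }
)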

Question 25

 

Northstar Media Group operates roughly 40 AWS accounts under AWS Organizations, grouped into four organizational units that mirror departments such as finance, product, operations, and shared services. The security team needs a preventative control that stops any principal from deleting Amazon S3 buckets in any member account. What should they implement?

  • ✓ C. Attach a Service Control Policy to each OU that explicitly denies s3:DeleteBucket

The most effective organization-wide preventative control is an SCP. By applying an SCP with an explicit Deny on s3:DeleteBucket to the OUs, the organization enforces a deny that blocks the DeleteBucket action across all member accounts, regardless of IAM policies or who makes the call. This is a true preventive guardrail at the Organizations layer, which is exactly what the requirement demands. Therefore, Attach a Service Control Policy to each OU that explicitly denies s3:DeleteBucket is correct.

Create an AWS Config rule across the organization to detect and auto-remediate S3 bucket deletions is incorrect because Config is a detective control and any remediation runs after the fact; it does not block the DeleteBucket API in real time.

Amazon S3 Object Lock is incorrect because Object Lock primarily prevents object deletions via retention and legal holds; it is not an Organizations-level control and does not centrally stop bucket deletions across all accounts.

Attach an IAM policy with a Deny on s3:DeleteBucket to every user and role in all accounts is incorrect because IAM policies are difficult to manage consistently at scale and can be circumvented by principals not covered by the policy; SCPs provide the necessary top-level boundary for all principals, including root.

For organization-wide preventive controls across many accounts, think SCP with explicit Deny. Tools like AWS Config are detective, and IAM is per-principal, not a global guardrail.
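
To make this concrete, a minimal boto3 sketch that creates and attaches such an SCP follows; the policy name and OU IDs are placeholder assumptions.

import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBucketDeletion",
            "Effect": "Deny",
            "Action": "s3:DeleteBucket",
            "Resource": "*",
        }
    ],
}

# Create the SCP once, then attach it to each department OU. The OU IDs
# below are placeholders for the real ou-xxxx identifiers.
created = org.create_policy(
    Content=json.dumps(scp),
    Name="deny-s3-bucket-deletion",
    Description="Prevent S3 bucket deletion in all member accounts",
    Type="SERVICE_CONTROL_POLICY",
)
policy_id = created["Policy"]["PolicySummary"]["Id"]
for ou_id in ["ou-fin0-example1", "ou-prd0-example2", "ou-ops0-example3", "ou-shr0-example4"]:
    org.attach_policy(PolicyId=policy_id, TargetId=ou_id)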

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel, which trains developers in Java, Spring, AI and ML, has well over 30,000 subscribers.