AWS Security Specialty Practice Exams

All questions come from certificationexams.pro and my Udemy AWS Security course.
Free AWS Security Exam Topics Tests
Over the past few months, I have been helping cloud architects, security engineers, and DevOps professionals prepare for the AWS Certified Security Specialty exam.
This certification validates your expertise in designing and implementing secure workloads and applications on AWS. The goal is to help you understand and apply security principles that protect data, manage access, and maintain compliance across AWS environments.
Achieving the AWS Security Specialty credential demonstrates your ability to use AWS security services, identify and mitigate threats, and implement effective monitoring and incident response strategies.
The certification measures your knowledge across key domains such as identity and access management, data protection, incident response, infrastructure security, and logging and monitoring.
These are the skills organizations value when building secure, scalable, and compliant cloud solutions.
AWS Security Specialty Exam Simulator
Through my Udemy courses on AWS certifications and the free AWS Security Specialty Practice Questions available at certificationexams.pro, I have identified common areas where candidates need deeper understanding. That insight helped shape a comprehensive set of AWS Security Specialty Questions and Answers that closely match the tone, logic, and challenge of the official AWS exam.
You can also explore AWS Security Specialty Sample Questions and AWS Security Specialty Practice Tests to measure your readiness.
Each question includes a detailed explanation that reinforces key concepts such as encryption, KMS management, IAM policies, network security, and data protection best practices.
These materials are not about memorization; they focus on building analytical and practical skills that prepare you to secure AWS environments with confidence.
Real AWS Security Specialty Exam Questions
If you are looking for Real Security Engineer Exam Questions, this resource provides authentic, instructor-created scenarios that reflect the structure and complexity of the real test.
These are not AWS Security exam dumps or copied content.
Each scenario challenges you to design secure architectures, troubleshoot security incidents, and apply AWS-recommended practices. The AWS Security Specialty Exam Simulator recreates the pacing and experience of the real certification exam, helping you build confidence under realistic test conditions.
For focused learning, you can also explore topic-specific sets similar to braindump-style security exam question groupings that reinforce your understanding through repetition and applied problem solving.
Every AWS Security Specialty Practice Test is designed to help you think like a cloud security engineer.
These exercises prepare you to secure applications, manage risks, and maintain compliance in AWS environments of any scale. Study consistently, practice thoughtfully, and you will join the community of AWS professionals trusted to keep enterprise workloads safe.
AWS Security Specialty Practice Exam Questions
Question 1
A video streaming startup operates multiple Amazon EC2 fleets that are repeatedly probed by bot traffic. The security team wants an automated workflow that can cut off traffic from confirmed malicious IPs within about 45 seconds, and they will coordinate the response with AWS Step Functions. Which design should the security engineer implement?
-
❏ A. Use AWS WAF to detect hostile requests, store the source IPs in Amazon DynamoDB, and have AWS Lambda in an AWS Step Functions workflow update the table and modify an AWS Network Firewall rule group to drop the traffic
-
❏ B. Use Amazon GuardDuty to flag suspicious sources, write the IPs to DynamoDB, and have Step Functions invoke Lambda to update the table and change Security Group rules to block those IPs
-
❏ C. Use Amazon GuardDuty findings to identify malicious sources, persist those IPs in DynamoDB, and have Step Functions run Lambda functions that update the table and push the IP set into an AWS Network Firewall rule group to immediately block the traffic
-
❏ D. Use AWS CloudTrail to watch for malicious activity, store the IPs in DynamoDB, and use Step Functions with Lambda to update the table and add a deny rule to an AWS WAF web ACL
Question 2
How do you immediately encrypt existing EBS volumes attached to Auto Scaling instances and ensure all future EBS volumes are created encrypted?
-
❏ A. Deploy an AWS Config rule with automated remediation that snapshots, copies with encryption, and reattaches volumes.
-
❏ B. Enable EBS encryption by default for the account/region and run an Auto Scaling instance refresh to replace instances so attached volumes are created encrypted.
-
❏ C. Create a new launch template with encrypted volumes and update the Auto Scaling group, relying on passive instance replacement over time.
-
❏ D. Create a new launch template version with encrypted volumes and perform a rolling update of the Auto Scaling group to replace instances.
Question 3
An operations team at Northwind Outfitters can no longer sign in to a Windows Amazon EC2 instance because the local Administrator password was misplaced. You are asked to regain access by resetting the password using EC2Launch v2 on a supported Windows AMI. What actions should you perform to complete this task? (Choose 3)
-
❏ A. Download and run EC2Rescue for Windows Server to reset the Administrator password
-
❏ B. Confirm the EC2Launch v2 service is running and detach the EBS root volume from the affected instance
-
❏ C. Launch a temporary instance using the same Windows version to prevent disk signature conflicts
-
❏ D. Start a helper instance in the same Availability Zone, attach the detached volume as a secondary disk, and delete %ProgramData%\Amazon\EC2Launch\state\.run-once
-
❏ E. Reattach the volume as the root device on the original instance, boot it, and use the instance key pair to retrieve the new Administrator password via the current public DNS
-
❏ F. Use AWS Systems Manager Session Manager to open a session and reset the password without stopping the instance
Question 4
How can CodeBuild API calls from within a VPC be kept off the public internet?
-
❏ A. Route API calls through a NAT gateway.
-
❏ B. Use AWS Direct Connect from on-premises.
-
❏ C. Create an interface VPC endpoint (AWS PrivateLink) for CodeBuild.
-
❏ D. Use a gateway VPC endpoint for CodeBuild.
Question 5
A security engineer at HarborPay needs a consolidated alert for all imported ACM certificates across every AWS Region that are due to expire within 45 days, and the alert must email the security owner while also recording the details in Security Hub for centralized tracking. What approach should be implemented to meet these requirements with minimal operational overhead?
-
❏ A. Use the ACM Certificate Expiration event in Amazon EventBridge to invoke a Lambda function that forwards each expiring certificate as a finding to Security Hub and publishes to an SNS topic, allowing an ITSM system to open a ticket from SNS
-
❏ B. Create an EventBridge schedule to run a Lambda function that queries ACM’s DaysToExpiry metric across Regions, aggregates certificates expiring within 45 days, writes findings to Security Hub in a designated Region, and publishes one SNS message to the security contact
-
❏ C. Rely on Security Hub to natively monitor ACM certificate expirations and configure it to trigger an SNS notification 45 days before expiration
-
❏ D. Use the AWS Config managed rule ACM_CERTIFICATE_EXPIRATION_CHECK to automatically renew imported ACM certificates and send rule evaluation notifications to SNS
Question 6
How can legacy clients that only support TLS 1.0 or 1.1 retrieve objects from a private S3 bucket while keeping the bucket protected and S3 using modern TLS?
-
❏ A. Use an EC2 reverse proxy to terminate older TLS and proxy requests to S3
-
❏ B. Use the S3 website endpoint as a custom origin so legacy clients can fetch objects
-
❏ C. Use a CloudFront distribution with Origin Access Control so clients connect to CloudFront while CloudFront uses modern TLS to access the private S3 bucket
-
❏ D. Add a bucket policy to allow legacy TLS connections to S3
Question 7
At Asteria Finance, the SecOps team must observe every egress flow from a specific VPC and any traffic that originates from outside that VPC. The objective is to feed a packet inspection appliance for deep analysis, troubleshooting, and threat detection. Which AWS approach best meets these needs?
-
❏ A. Enable VPC Flow Logs for the VPC and stream to CloudWatch Logs for analysis and alerts
-
❏ B. AWS Network Firewall
-
❏ C. Configure a traffic mirroring target for the inspection appliance and use a mirror filter that rejects outbound packets with a destination in the VPC CIDR and rejects inbound packets with a source in the VPC CIDR while allowing other traffic
-
❏ D. Configure a traffic mirroring target for the monitoring appliance and create filters that mirror only inbound TCP and inbound UDP traffic
Question 8
What change is required so CloudTrail can deliver log objects to an S3 bucket using a specified object prefix?
-
❏ A. Add an aws:SourceArn condition to the S3 bucket policy to restrict CloudTrail deliveries
-
❏ B. Allow the CloudTrail service principal s3:PutObject on the entire bucket without a prefix condition
-
❏ C. Update the S3 bucket policy to allow the intended object prefix and configure the same prefix in the trail
-
❏ D. Use a Lambda to rename CloudTrail objects after delivery to add prefixes
Question 9
Riverton Energy runs an AWS Organizations environment with 64 member accounts. The company has AWS Security Hub configured with a delegated administrator that consolidates findings from all accounts. The security operations team wants instant email notifications whenever Amazon Inspector generates a high-severity finding in any account. What should they implement?
-
❏ A. Enable an organization trail in AWS CloudTrail and create an Amazon EventBridge rule that triggers on Amazon Inspector ListFindings API calls, then send notifications through an Amazon SNS topic
-
❏ B. In the Security Hub delegated administrator account, create an Amazon EventBridge rule that matches high severity Amazon Inspector findings from all linked accounts and publish to an Amazon SNS topic subscribed by the SOC email list
-
❏ C. In every member account, configure an Amazon EventBridge rule for high severity Amazon Inspector findings and route to a local Amazon SNS topic with the SOC email addresses subscribed
-
❏ D. Enable Amazon GuardDuty across the organization and notify high severity findings to email using Amazon SNS
Question 10
How can a hub account host a central VPC so member accounts can launch resources into its subnets without creating VPCs or modifying other accounts’ resources?
-
❏ A. Attach a Transit Gateway in the hub and route member-account traffic to hub subnets.
-
❏ B. Publish Service Catalog products so teams provision resources into the central VPC.
-
❏ C. Expose hub services via AWS PrivateLink and have member accounts consume them.
-
❏ D. Share specific VPC subnets with member accounts using AWS Resource Access Manager.

Question 11
Orion Logistics has consolidated 16 AWS accounts into a single organization with AWS Organizations to centralize governance. The security team must record every user and API action, define alarms for specific high-risk activities, and deliver immediate notifications when they occur. Which combination of services most effectively meets these needs? (Choose 2)
-
❏ A. AWS Security Hub
-
❏ B. Implement an AWS CloudTrail organization trail and stream events to an Amazon CloudWatch Logs log group
-
❏ C. Configure the organization trail to write CloudTrail files only to an Amazon S3 bucket
-
❏ D. Use Amazon Athena to query CloudTrail logs in S3 and publish results to Amazon SNS
-
❏ E. In Amazon CloudWatch Logs, create metric filters for the targeted user actions and configure Amazon CloudWatch alarms to notify through Amazon SNS
Question 12
How can you remove public network exposure for a Lambda function that reads and writes a DynamoDB table? (Choose 2)
-
❏ A. Create a DynamoDB interface VPC endpoint
-
❏ B. Run the Lambda in private VPC subnets
-
❏ C. Attach a resource-based policy to Lambda to block outbound internet
-
❏ D. Provision a gateway VPC endpoint for DynamoDB
-
❏ E. Add an IAM condition on the Lambda role to restrict access by source IP
Question 13
After a compliance audit, a fintech startup found that several customer managed AWS KMS keys allow broader access than intended. The security lead wants to ensure that only principals from accounts that are part of their AWS Organizations can use these keys, and the solution should automatically keep working as accounts are added or removed without enumerating each account. What should the team implement?
-
❏ A. Use the aws:PrincipalIsAWSService global condition key in a KMS key policy and enumerate each AWS account ID in the Condition
-
❏ B. Attach a service control policy at the organization root that denies KMS actions for any principal outside the organization
-
❏ C. Configure the KMS key policy to use the aws:PrincipalOrgID global condition key with a resource-based Principal and set the Condition to your Organization ID
-
❏ D. Use aws:PrincipalOrgID to restrict an AWS service principal from accessing the key
Question 14
How can you ensure artifacts previously signed by a departed engineer are no longer accepted for Lambda execution?
-
❏ A. Rotate the AWS KMS customer master key that encrypts the deployment packages.
-
❏ B. Revoke all versions of the signing profile that were used by the departed engineer.
-
❏ C. Remove the departed engineer’s IAM permissions to AWS Signer so they cannot sign new artifacts.
-
❏ D. Delete the signed artifacts from the deployment storage (for example, S3) to prevent their use.
Question 15
Finley Retail Group uses AWS Firewall Manager with a policy that targets every account and resource in its organization. After a 90 day consolidation across 64 accounts, some load balancers and instances moved out of the policy scope. What does Firewall Manager do by default when resources leave the scope of the policy?
-
❏ A. Any protected resource that leaves the policy scope is automatically disassociated and removed from protection
-
❏ B. An Application Load Balancer associated with a web ACL is removed from the web ACL when it leaves scope while the protection continues
-
❏ C. The associated AWS Config managed rules are deleted, and any AWS WAF web ACLs with no associated resources are removed
-
❏ D. Any AWS WAF web ACL with no resources is deleted, and an EC2 instance is automatically disassociated from the replicated security group when it leaves scope
Question 16
After baking an AMI that had the CloudWatch unified agent installed, instances launched from it send no logs to CloudWatch Logs. What could cause the agent to fail to send logs? (Choose 2)
-
❏ A. VPC must use a VPC endpoint; public egress is not allowed for CloudWatch Logs
-
❏ B. AMI contains baked instance-specific agent state that breaks the agent on new instances
-
❏ C. Agent must forward logs via Kinesis Data Firehose instead of writing directly to CloudWatch Logs
-
❏ D. Instance IAM role lacks CloudWatch Logs permissions (create groups/streams, PutLogEvents)
-
❏ E. Agent must run as root and will fail when run under other users
Question 17
A digital publishing startup, Redwood Press, moved most workloads to AWS and runs Windows applications on Amazon EC2. The applications still rely on Active Directory hosted in the company datacenter. GuardDuty has been enabled across the environment, but the security team notices there are no DNS-related findings for these instances. What change should be made so GuardDuty can evaluate DNS activity for these workloads?
-
❏ A. Enable Route 53 Resolver query logging and confirm the destination configuration
-
❏ B. Add broader permissions to the GuardDuty service-linked role so it can read DNS logs
-
❏ C. Repoint the Windows instances to the VPC’s AmazonProvidedDNS resolver rather than a custom or on-prem resolver
-
❏ D. GuardDuty does not evaluate DNS traffic and only uses VPC Flow Logs, CloudTrail, and Kubernetes audit logs
Question 18
How can a company centrally restrict accounts to specific AWS Regions and a subset of AWS services while minimizing administrative overhead?
-
❏ A. Use AWS Config rules with automated remediation to prevent out-of-scope resources.
-
❏ B. Use AWS Identity Center permission sets scoped per account to enforce Region and service limits.
-
❏ C. Apply organization-level Service Control Policies scoped by Organizational Unit to restrict Regions and services.
-
❏ D. Create team-specific IAM roles and attach fine-grained policies in every account to limit Regions and services.
Question 19
AWS Trust & Safety reports that malicious content is being distributed from a company account. The security lead at Solara Retail Analytics investigates and identifies an unapproved Amazon S3 bucket created by an unknown party. Which actions should be taken immediately to contain the incident and reduce impact? (Choose 3)
-
❏ A. Remove any resources that were created without authorization
-
❏ B. Enable Amazon GuardDuty
-
❏ C. Rotate and then remove all existing root and IAM access keys
-
❏ D. Sign in as the root user and delete every IAM user
-
❏ E. Enable AWS Shield Advanced
-
❏ F. Remove any unauthorized IAM users
Question 20
Which ECR feature scans OS and language-package vulnerabilities once when an image is pushed?
-
❏ A. Enable ECR basic image scanning with scan-on-push
-
❏ B. Use an EventBridge-triggered Lambda to start Amazon Inspector scans after push
-
❏ C. Enable Amazon ECR enhanced image scanning (Inspector integration) and set scan-on-push
-
❏ D. Enable enhanced scanning with continuous re-scans
AWS Security Specialty Practice Exam Answers

Question 1
A video streaming startup operates multiple Amazon EC2 fleets that are repeatedly probed by bot traffic. The security team wants an automated workflow that can cut off traffic from confirmed malicious IPs within about 45 seconds, and they will coordinate the response with AWS Step Functions. Which design should the security engineer implement?
-
✓ C. Use Amazon GuardDuty findings to identify malicious sources, persist those IPs in DynamoDB, and have Step Functions run Lambda functions that update the table and push the IP set into an AWS Network Firewall rule group to immediately block the traffic
Use Amazon GuardDuty findings to identify malicious sources, persist those IPs in DynamoDB, and have Step Functions run Lambda functions that update the table and push the IP set into an AWS Network Firewall rule group to immediately block the traffic is correct because GuardDuty continuously surfaces malicious IPs and behaviors, while Network Firewall can enforce rapid, scalable network-level blocks across VPC traffic. Step Functions can orchestrate the Lambda automation to achieve sub-minute response.
Use AWS WAF to detect hostile requests, store the source IPs in Amazon DynamoDB, and have AWS Lambda in an AWS Step Functions workflow update the table and modify an AWS Network Firewall rule group to drop the traffic is not ideal because WAF detection is focused on layer 7 web traffic and does not provide broad host/network threat intelligence for EC2 fleets.
Use Amazon GuardDuty to flag suspicious sources, write the IPs to DynamoDB, and have Step Functions invoke Lambda to update the table and change Security Group rules to block those IPs is wrong because security groups are allow-only and cannot explicitly deny specific IPs, making them unsuitable for IP blocking at scale.
Use AWS CloudTrail to watch for malicious activity, store the IPs in DynamoDB, and use Step Functions with Lambda to update the table and add a deny rule to an AWS WAF web ACL is incorrect because CloudTrail captures API calls, not packet-level or flow-based malicious traffic, and WAF web ACLs target HTTP/S, not general EC2 ingress.
For automated IP blocking on EC2 networks, think detection with Amazon GuardDuty and enforcement with AWS Network Firewall; avoid using security groups for explicit denies.
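To make the enforcement step concrete, the Lambda in the winning design could build a stateless Network Firewall rule group body from the IPs persisted in DynamoDB and hand it to update_rule_group. The sketch below is illustrative only: it constructs the payload, makes no AWS calls, and the IP list is sample data.

```python
def build_block_rule_group(malicious_ips):
    """Build the RuleGroup payload for a stateless AWS Network Firewall
    rule group that drops packets from the given source IPs."""
    rules = []
    # Stateless rules need unique priorities, so number the sorted list.
    for priority, ip in enumerate(sorted(malicious_ips), start=1):
        rules.append({
            "Priority": priority,
            "RuleDefinition": {
                "MatchAttributes": {
                    "Sources": [{"AddressDefinition": f"{ip}/32"}],
                },
                "Actions": ["aws:drop"],
            },
        })
    return {
        "RulesSource": {
            "StatelessRulesAndCustomActions": {"StatelessRules": rules}
        }
    }
```

In the real workflow a Lambda would pass this dict as the RuleGroup argument to network-firewall update_rule_group along with the rule group ARN and the current update token.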
Question 2
How do you immediately encrypt existing EBS volumes attached to Auto Scaling instances and ensure all future EBS volumes are created encrypted?
-
✓ B. Enable EBS encryption by default for the account/region and run an Auto Scaling instance refresh to replace instances so attached volumes are created encrypted.
The correct choice is Enable EBS encryption by default for the account/region and run an Auto Scaling instance refresh to replace instances so attached volumes are created encrypted. Enabling EBS encryption by default ensures every new EBS volume in that account and region is encrypted with the chosen KMS key automatically, and performing an Auto Scaling instance refresh replaces the running instances so their root and attached volumes are created under that encryption setting.
The option Deploy an AWS Config rule with automated remediation that snapshots, copies with encryption, and reattaches volumes. is possible but more complex because it requires snapshot, copy, detach/attach workflows and may need instance downtime or careful automation.
The option Create a new launch template with encrypted volumes and update the Auto Scaling group, relying on passive instance replacement over time. is insufficient on its own because Auto Scaling will not automatically replace existing instances without an explicit refresh or update strategy.
The option Create a new launch template version with encrypted volumes and perform a rolling update of the Auto Scaling group to replace instances. will work to replace instances but is more manual and doesn’t enforce encryption for other future volume creation sources across the account.
Focus on account-level defaults for prevention and use an Auto Scaling instance refresh or rolling update to remediate existing instances. Remember there is no in-place toggle to retroactively encrypt a live EBS volume; existing data can only be encrypted through a snapshot and an encrypted copy. Consider scoping a customer managed KMS key for the default encryption setting, and test the instance refresh in a nonproduction environment first.
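The two-step remediation can be sketched in Python. The helper below picks out instances that still carry unencrypted volumes (from simplified records, not real describe output), and the dict shows the parameters a start_instance_refresh call would take. The group name web-asg and the preference values are placeholders.

```python
def instances_needing_replacement(instances):
    """Return IDs of instances that still have at least one unencrypted
    EBS volume attached, i.e. candidates for the instance refresh."""
    return [
        inst["InstanceId"]
        for inst in instances
        if any(not vol["Encrypted"] for vol in inst["Volumes"])
    ]

# Parameters an autoscaling start_instance_refresh call would take;
# "web-asg" and the preference values are illustrative.
refresh_request = {
    "AutoScalingGroupName": "web-asg",
    "Preferences": {"MinHealthyPercentage": 90, "InstanceWarmup": 120},
}
```

Once EBS encryption by default is enabled, every replacement instance launched by the refresh gets encrypted volumes automatically, so no per-volume work is needed.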
Question 3
An operations team at Northwind Outfitters can no longer sign in to a Windows Amazon EC2 instance because the local Administrator password was misplaced. You are asked to regain access by resetting the password using EC2Launch v2 on a supported Windows AMI. What actions should you perform to complete this task? (Choose 3)
-
✓ B. Confirm the EC2Launch v2 service is running and detach the EBS root volume from the affected instance
-
✓ D. Start a helper instance in the same Availability Zone, attach the detached volume as a secondary disk, and delete %ProgramData%\Amazon\EC2Launch\state\.run-once
-
✓ E. Reattach the volume as the root device on the original instance, boot it, and use the instance key pair to retrieve the new Administrator password via the current public DNS
The correct workflow with EC2Launch v2 is to detach the root volume, modify it on a helper instance, and then reattach it to trigger a new password. Confirm the EC2Launch v2 service is running and detach the EBS root volume from the affected instance is required because EC2Launch v2 cannot reset the password while the volume is still attached as the boot volume.
Start a helper instance in the same Availability Zone, attach the detached volume as a secondary disk, and delete %ProgramData%\Amazon\EC2Launch\state\.run-once is essential since deleting the .run-once file causes EC2Launch v2 to re-execute one-time tasks, including setting a fresh Administrator password.
Reattach the volume as the root device on the original instance, boot it, and use the instance key pair to retrieve the new Administrator password via the current public DNS completes the process by allowing you to decrypt and view the newly generated password from the console.
Download and run EC2Rescue for Windows Server to reset the Administrator password applies to EC2Rescue workflows, not the EC2Launch v2 method required here.
Launch a temporary instance using the same Windows version to prevent disk signature conflicts is incorrect because to avoid disk signature collisions you should use a different Windows version for the helper instance.
Use AWS Systems Manager Session Manager to open a session and reset the password without stopping the instance might work if preconfigured, but it does not perform the EC2Launch v2 reset steps requested in this scenario.
For EC2Launch v2 password resets, remember the sequence: detach root, mount as secondary on a helper instance, delete .run-once, then reattach and retrieve the password with the key pair. If the question specifies EC2Launch v2, avoid EC2Rescue steps.
Question 4
How can CodeBuild API calls from within a VPC be kept off the public internet?
-
✓ C. Create an interface VPC endpoint (AWS PrivateLink) for CodeBuild.
The correct configuration is to create an interface VPC endpoint (AWS PrivateLink) for CodeBuild because interface endpoints provision elastic network interfaces in the VPC so service API calls stay on the AWS network and do not traverse the public internet.
The choice Route API calls through a NAT gateway. is incorrect because NAT gateways send outbound traffic over the public internet and therefore do not meet the requirement to keep API calls inside AWS.
The choice Use AWS Direct Connect from on-premises. is incorrect because Direct Connect provides private connectivity between on-premises networks and AWS but does not change how API calls that originate inside a VPC are routed to AWS services; it is not a supported method for keeping CodeBuild API traffic within AWS.
The choice Use a gateway VPC endpoint for CodeBuild. is incorrect because gateway endpoints support only S3 and DynamoDB and cannot be used for CodeBuild.
On the exam, remember that gateway endpoints serve S3 and DynamoDB, interface endpoints (PrivateLink) serve most other AWS service APIs including CodeBuild, and Direct Connect is for on-premises connectivity rather than VPC-to-service private routing. To validate in practice, check the VPC Endpoints console for an interface endpoint in the VPC, confirm the endpoint ENIs exist in the subnets, and confirm DNS resolves to the endpoint.
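As an illustration, the parameters an ec2 create_vpc_endpoint call would take for a CodeBuild interface endpoint can be built as below. The VPC, subnet, and security group IDs are placeholders, and no AWS call is made.

```python
def codebuild_endpoint_request(vpc_id, subnet_ids, sg_ids, region="us-east-1"):
    """Parameters for ec2.create_vpc_endpoint so CodeBuild API calls
    stay on the AWS network via PrivateLink."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.codebuild",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        # With private DNS, the default codebuild endpoint name resolves
        # to the endpoint ENIs inside the VPC.
        "PrivateDnsEnabled": True,
    }
```

With private DNS enabled, existing SDK and CLI clients in the VPC need no configuration change; their calls to the regional CodeBuild endpoint resolve to the endpoint's ENIs automatically.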
Question 5
A security engineer at HarborPay needs a consolidated alert for all imported ACM certificates across every AWS Region that are due to expire within 45 days, and the alert must email the security owner while also recording the details in Security Hub for centralized tracking. What approach should be implemented to meet these requirements with minimal operational overhead?
-
✓ B. Create an EventBridge schedule to run a Lambda function that queries ACM’s DaysToExpiry metric across Regions, aggregates certificates expiring within 45 days, writes findings to Security Hub in a designated Region, and publishes one SNS message to the security contact
The best solution is Create an EventBridge schedule to run a Lambda function that queries ACM’s DaysToExpiry metric across Regions, aggregates certificates expiring within 45 days, writes findings to Security Hub in a designated Region, and publishes one SNS message to the security contact.
This approach uses a scheduled rule to batch evaluate all ACM certificates, consolidates those within the 45-day window, and produces a single SNS message to the security owner. The Lambda can also upsert findings into Security Hub in a chosen aggregation Region to centralize visibility.
Use the ACM Certificate Expiration event in Amazon EventBridge to invoke a Lambda function that forwards each expiring certificate as a finding to Security Hub and publishes to an SNS topic, allowing an ITSM system to open a ticket from SNS is suboptimal because it emits one event per certificate, leading to multiple notifications rather than a single consolidated alert.
Rely on Security Hub to natively monitor ACM certificate expirations and configure it to trigger an SNS notification 45 days before expiration is incorrect because Security Hub does not natively monitor ACM certificate expirations and is a Regional service, making cross-Region consolidation nontrivial without custom integration.
Use the AWS Config managed rule ACM_CERTIFICATE_EXPIRATION_CHECK to automatically renew imported ACM certificates and send rule evaluation notifications to SNS is incorrect because AWS Config cannot automatically renew imported ACM certificates; ACM only auto-renews eligible ACM-issued certificates.
When you need a single consolidated alert and centralized findings across Regions, prefer a scheduled aggregation workflow with Lambda and SNS, and publish Security Hub findings in one Region for a unified view.
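The aggregation step the scheduled Lambda performs reduces to a pure filtering function. This sketch assumes certificate records shaped loosely like ACM describe_certificate output, with a Type field and a NotAfter timestamp; the field names are the real ones, but the records here are sample data.

```python
from datetime import datetime, timedelta, timezone

def expiring_soon(certificates, days=45, now=None):
    """Return imported certificates whose NotAfter falls within `days`
    of `now` (already-expired certificates are excluded)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now + timedelta(days=days)
    return [
        cert for cert in certificates
        if cert.get("Type") == "IMPORTED" and now <= cert["NotAfter"] <= cutoff
    ]
```

In the full workflow, the Lambda would run this filter per Region, merge the results, upsert one Security Hub finding per certificate in the aggregation Region, and publish a single consolidated SNS message.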
Question 6
How can legacy clients that only support TLS 1.0 or 1.1 retrieve objects from a private S3 bucket while keeping the bucket protected and S3 using modern TLS?
-
✓ C. Use a CloudFront distribution with Origin Access Control so clients connect to CloudFront while CloudFront uses modern TLS to access the private S3 bucket
The correct solution is Use a CloudFront distribution with Origin Access Control so clients connect to CloudFront while CloudFront uses modern TLS to access the private S3 bucket. CloudFront terminates TLS for client connections at edge locations and can be configured to accept older TLS versions from legacy clients, while CloudFront itself uses modern TLS when connecting to the S3 origin. Origin Access Control lets CloudFront sign its origin requests so the bucket can remain private and serve content only to the distribution.
The choice Use an EC2 reverse proxy to terminate older TLS and proxy requests to S3 is technically possible but operationally heavy because it requires you to manage TLS termination, scaling, patching, and credentials yourself.
The choice Use the S3 website endpoint as a custom origin so legacy clients can fetch objects is insecure for private content because the website endpoint requires public access and does not support OAC or similar mechanisms to restrict direct bucket access.
The choice Add a bucket policy to allow legacy TLS connections to S3 is invalid because bucket policies cannot change the TLS versions supported by AWS service endpoints.
Prefer managed AWS services that provide secure access controls when an architecture requires protocol translation or client compatibility. Look for answers that use CloudFront OAC or signed origin requests to keep S3 private rather than exposing the bucket or introducing unmanaged proxies.
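For reference, the bucket policy that pairs with an OAC-enabled distribution follows a well-documented shape: allow the cloudfront.amazonaws.com service principal to read objects, conditioned on the distribution's ARN. The sketch below only builds that JSON; the bucket name and distribution ARN in the test are placeholders.

```python
import json

def oac_bucket_policy(bucket, distribution_arn):
    """Bucket policy letting only the given CloudFront distribution
    (via Origin Access Control) read objects from a private bucket."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFrontOAC",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Scope the service principal down to one distribution.
            "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
        }],
    })
```

The SourceArn condition is what prevents any other CloudFront distribution, in any account, from reading the bucket through the shared service principal.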
Question 7
At Asteria Finance, the SecOps team must observe every egress flow from a specific VPC and any traffic that originates from outside that VPC. The objective is to feed a packet inspection appliance for deep analysis, troubleshooting, and threat detection. Which AWS approach best meets these needs?
-
✓ C. Configure a traffic mirroring target for the inspection appliance and use a mirror filter that rejects outbound packets with a destination in the VPC CIDR and rejects inbound packets with a source in the VPC CIDR while allowing other traffic
The right choice is Configure a traffic mirroring target for the inspection appliance and use a mirror filter that rejects outbound packets with a destination in the VPC CIDR and rejects inbound packets with a source in the VPC CIDR while allowing other traffic. VPC Traffic Mirroring provides full packet copies to an out-of-band appliance and, with accept and reject rules, can exclude intra VPC traffic while including egress traffic and traffic sourced externally for deep inspection.
Enable VPC Flow Logs for the VPC and stream to CloudWatch Logs for analysis and alerts is insufficient because Flow Logs capture flow metadata only and cannot deliver packet payloads for content inspection or feed a packet analysis appliance.
AWS Network Firewall is an inline, managed firewall that requires routing through the service and does not natively mirror packets to an external tool, so it does not directly satisfy the out-of-band inspection requirement described.
Configure a traffic mirroring target for the monitoring appliance and create filters that mirror only inbound TCP and inbound UDP traffic fails to capture all required flows, notably egress, and will miss other protocols that may require inspection.
Use VPC Traffic Mirroring when you need full packet copies for out-of-band analysis or IDS/IPS tools; use VPC Flow Logs for flow metadata and analytics. To mirror only egress and external-source traffic, apply mirror filters that reject intra VPC CIDR traffic and accept everything else.
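The filter described above can be sketched as the rule set below, assuming a hypothetical VPC CIDR of 10.0.0.0/16 and filter ID. Each dict mirrors the parameters of `ec2.create_traffic_mirror_filter_rule` in boto3; lower rule numbers are evaluated first.

```python
import json

# Hypothetical values for illustration only.
VPC_CIDR = "10.0.0.0/16"
FILTER_ID = "tmf-0123456789abcdef0"

mirror_rules = [
    # Reject outbound packets whose destination is inside the VPC CIDR
    # (intra-VPC traffic is excluded from mirroring).
    dict(TrafficMirrorFilterId=FILTER_ID, TrafficDirection="egress",
         RuleNumber=10, RuleAction="reject",
         SourceCidrBlock="0.0.0.0/0", DestinationCidrBlock=VPC_CIDR),
    # Reject inbound packets whose source is inside the VPC CIDR.
    dict(TrafficMirrorFilterId=FILTER_ID, TrafficDirection="ingress",
         RuleNumber=10, RuleAction="reject",
         SourceCidrBlock=VPC_CIDR, DestinationCidrBlock="0.0.0.0/0"),
    # Accept everything else: all egress flows and externally sourced traffic.
    dict(TrafficMirrorFilterId=FILTER_ID, TrafficDirection="egress",
         RuleNumber=100, RuleAction="accept",
         SourceCidrBlock="0.0.0.0/0", DestinationCidrBlock="0.0.0.0/0"),
    dict(TrafficMirrorFilterId=FILTER_ID, TrafficDirection="ingress",
         RuleNumber=100, RuleAction="accept",
         SourceCidrBlock="0.0.0.0/0", DestinationCidrBlock="0.0.0.0/0"),
]
print(json.dumps(mirror_rules, indent=2))
```

The mirrored packets then flow to the traffic mirror target that fronts the inspection appliance.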
Question 8
What change is required so CloudTrail can deliver log objects to an S3 bucket using a specified object prefix?
-
✓ C. Update the S3 bucket policy to allow the intended object prefix and configure the same prefix in the trail
The correct remediation is the action described in Update the S3 bucket policy to allow the intended object prefix and configure the same prefix in the trail. CloudTrail includes the configured object prefix when it issues s3:PutObject requests, and the S3 bucket policy must permit those PutObject requests for that key prefix. If the bucket policy does not explicitly allow the prefix, the console shows “There is a problem with the bucket policy” and delivery is blocked. Allowing the prefix in the policy and configuring the identical prefix on the trail resolves the delivery error and keeps permissions scoped to only the keys CloudTrail will write to.
Add an aws:SourceArn condition to the S3 bucket policy to restrict CloudTrail deliveries is incorrect because while aws:SourceArn (and aws:SourceAccount) can be used to restrict which trail or account may deliver objects, those conditions do not replace the requirement to allow PutObject for the specific object key prefix CloudTrail uses. The prefix must still be permitted in the policy.
Allow the CloudTrail service principal s3:PutObject on the entire bucket without a prefix condition is incorrect because it is unnecessarily broad and does not follow least privilege best practices. It may allow delivery, but the console validation expects the prefix to be permitted and this approach removes scoping and introduces an avoidable security risk.
Use a Lambda to rename CloudTrail objects after delivery to add prefixes is incorrect because the delivery failure happens at PutObject time; a Lambda cannot run until after delivery succeeds. If the policy blocks the PutObject for the prefixed key, Lambda cannot be used to intercept or rename the object to satisfy the console validation or the delivery attempt.

Tips for the exam: When CloudTrail writes to S3 with a custom object prefix, think of the S3 bucket policy as needing to explicitly allow s3:PutObject for that key prefix. The console performs a validation against the bucket policy and will surface errors if the configured prefix is not permitted. For tighter restriction, combine prefix allowance with aws:SourceArn and aws:SourceAccount conditions so the policy is both specific to the keys and limited to the intended trail/account.
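A sketch of the PutObject statement the bucket policy needs, assuming a hypothetical bucket, account ID, and object prefix (CloudTrail also requires a separate GetBucketAcl statement, omitted here):

```python
import json

# Hypothetical values for illustration only.
BUCKET = "example-trail-logs"
ACCOUNT = "111122223333"
PREFIX = "org-trail"

# When a prefix is configured on the trail, CloudTrail writes objects
# under <prefix>/AWSLogs/<account-id>/..., so the Resource ARN must
# include that prefix in the key path.
put_statement = {
    "Sid": "AWSCloudTrailWrite",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "s3:PutObject",
    "Resource": f"arn:aws:s3:::{BUCKET}/{PREFIX}/AWSLogs/{ACCOUNT}/*",
    "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
}
print(json.dumps(put_statement, indent=2))
```

If the trail's configured prefix and the prefix in this Resource ARN differ, the console validation fails and delivery is blocked.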
Question 9
Riverton Energy runs an AWS Organizations environment with 64 member accounts. The company has AWS Security Hub configured with a delegated administrator that consolidates findings from all accounts. The security operations team wants instant email notifications whenever Amazon Inspector generates a high-severity finding in any account. What should they implement?
-
✓ B. In the Security Hub delegated administrator account, create an Amazon EventBridge rule that matches high severity Amazon Inspector findings from all linked accounts and publish to an Amazon SNS topic subscribed by the SOC email list
The best solution is to centralize event detection where findings are already aggregated. The option In the Security Hub delegated administrator account, create an Amazon EventBridge rule that matches high severity Amazon Inspector findings from all linked accounts and publish to an Amazon SNS topic subscribed by the SOC email list ensures immediate, organization-wide alerts with minimal operational overhead.
Enable an organization trail in AWS CloudTrail and create an Amazon EventBridge rule that triggers on Amazon Inspector ListFindings API calls, then send notifications through an Amazon SNS topic is incorrect because it relies on API activity rather than the findings events themselves and is not intended for real-time alerting on Inspector findings.
In every member account, configure an Amazon EventBridge rule for high severity Amazon Inspector findings and route to a local Amazon SNS topic with the SOC email addresses subscribed is suboptimal since it requires deploying and managing rules and subscriptions in each account, which is complex and error-prone at scale.
Enable Amazon GuardDuty across the organization and notify high severity findings to email using Amazon SNS is incorrect because GuardDuty is a different service with different findings and would not satisfy the requirement to alert specifically on Amazon Inspector high-severity findings.
For organization-wide security alerts, prefer centralized aggregation and event-driven notifications from the delegated administrator account using EventBridge and SNS, rather than per-account rules or API-call triggers.
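The centralized rule described above can be sketched as the event pattern below. Field names follow the AWS Security Finding Format as surfaced by Security Hub; treat the exact pattern as an assumption to verify against your aggregated findings.

```python
import json

# EventBridge event pattern for the delegated administrator account:
# match Security Hub findings imported from Amazon Inspector with a
# HIGH severity label, across all linked accounts.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "ProductName": ["Inspector"],
            "Severity": {"Label": ["HIGH"]},
        }
    },
}
print(json.dumps(event_pattern, indent=2))
```

The rule's target would be the SNS topic that the SOC email list subscribes to, completing the alerting path.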
Question 10
How can a hub account host a central VPC so member accounts can launch resources into its subnets without creating VPCs or modifying other accounts’ resources?
-
✓ D. Share specific VPC subnets with member accounts using AWS Resource Access Manager.
The correct approach is to use Share specific VPC subnets with member accounts using AWS Resource Access Manager. AWS RAM VPC sharing allows the hub account to own and manage the VPC while participant accounts create and manage application resources (for example EC2 instances, ENIs, or RDS instances) directly in the shared subnets. This satisfies the requirements that teams cannot create their own VPCs and cannot change other teams’ application resources because subnet ownership and VPC-level controls remain with the hub account. The other choices are incorrect for these reasons.
Attach a Transit Gateway in the hub and route member-account traffic to hub subnets provides network connectivity and routing but does not permit cross-account placement of resources into hub-owned subnets.
Publish Service Catalog products so teams provision resources into the central VPC helps standardize provisioning and enforce templates but does not itself enable cross-account subnet sharing or transfer subnet ownership.
Expose hub services via AWS PrivateLink makes services available privately to member accounts but does not allow those accounts to launch resources inside the hub’s subnets.

Tips for the exam: Identify whether the requirement is for cross-account resource placement into the same subnet (which implies VPC sharing with AWS RAM) versus simply connecting VPCs or exposing services (which implies Transit Gateway, VPC Peering, or PrivateLink).
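The subnet share described above can be sketched as the parameters for `ram.create_resource_share` in boto3. All ARNs and IDs below are hypothetical placeholders.

```python
# Share two hub-owned subnets with the whole organization so member
# accounts can launch resources into them.
share_params = dict(
    name="central-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0aaa1111bbb22222c",
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0ddd3333eee44444f",
    ],
    # Sharing with the organization ARN means new member accounts are
    # covered automatically.
    principals=["arn:aws:organizations::111122223333:organization/o-example12345"],
    allowExternalPrincipals=False,  # keep the share inside the organization
)
print(share_params["name"])
```

Sharing with the organization (rather than listing account IDs) keeps the hub account in control of the VPC while participants manage only their own resources in the shared subnets.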
Question 11
Orion Logistics has consolidated 16 AWS accounts into a single organization with AWS Organizations to centralize governance. The security team must record every user and API action, define alarms for specific high-risk activities, and deliver immediate notifications when they occur. Which combination of services most effectively meets these needs? (Choose 2)
-
✓ B. Implement an AWS CloudTrail organization trail and stream events to an Amazon CloudWatch Logs log group
-
✓ E. In Amazon CloudWatch Logs, create metric filters for the targeted user actions and configure Amazon CloudWatch alarms to notify through Amazon SNS
The best solution is to combine Implement an AWS CloudTrail organization trail and stream events to an Amazon CloudWatch Logs log group with In Amazon CloudWatch Logs, create metric filters for the targeted user actions and configure Amazon CloudWatch alarms to notify through Amazon SNS. CloudTrail at the organization level guarantees that every account’s API activity is captured, and streaming to CloudWatch Logs enables near real-time processing. Metric filters detect specific high-risk actions, and CloudWatch alarms can immediately notify responders via SNS.
AWS Security Hub centralizes and prioritizes findings, but it does not directly create real-time alerts from arbitrary CloudTrail events, so it does not meet the immediate, event-driven alarm requirement.
Configure the organization trail to write CloudTrail files only to an Amazon S3 bucket is useful for archival and audits but lacks near real-time detection and alerting without additional services.
Use Amazon Athena to query CloudTrail logs in S3 and publish results to Amazon SNS is better for batch or scheduled analytics and is not ideal for prompt alerts on specific user actions.
For real-time detection and notification of specific API actions across many accounts, think CloudTrail organization trail to CloudWatch Logs, then metric filters + CloudWatch alarms + SNS. Use S3 and Athena for historical analysis, not immediate alerts.
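The metric-filter-plus-alarm pipeline above can be sketched as the two parameter sets below, one for `logs.put_metric_filter` and one for `cloudwatch.put_metric_alarm` in boto3. The log group name, namespace, and SNS topic ARN are hypothetical, and the filter pattern watches two illustrative high-risk actions.

```python
# Metric filter: count StopLogging/DeleteTrail calls in the CloudTrail
# log group streamed from the organization trail.
metric_filter = dict(
    logGroupName="/org/cloudtrail",           # hypothetical log group
    filterName="HighRiskTrailChanges",
    filterPattern='{ ($.eventName = "StopLogging") || ($.eventName = "DeleteTrail") }',
    metricTransformations=[{
        "metricName": "HighRiskTrailChanges",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# Alarm: fire on the first matching event and notify the SOC via SNS.
alarm = dict(
    AlarmName="HighRiskTrailChanges",
    MetricName="HighRiskTrailChanges",
    Namespace="Security",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:soc-alerts"],
)
print(metric_filter["filterName"], alarm["AlarmName"])
```

The filter pattern syntax queries the JSON fields of CloudTrail events, so any high-risk API name can be substituted into the same structure.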
Question 12
How can you remove public network exposure for a Lambda function that reads and writes a DynamoDB table? (Choose 2)
-
✓ B. Run the Lambda in private VPC subnets
-
✓ D. Provision a gateway VPC endpoint for DynamoDB
The correct approach is to run the function inside the VPC and use a DynamoDB gateway endpoint so all traffic stays on private network paths. Running the function in private subnets and provisioning a gateway VPC endpoint together remove public network exposure because the function’s ENIs communicate via VPC routing and the gateway endpoint routes DynamoDB traffic over private addresses. Specifically, Run the Lambda in private VPC subnets prevents default internet egress unless you deliberately add a NAT, and Provision a gateway VPC endpoint for DynamoDB allows access to DynamoDB without touching public endpoints. The other choices are incorrect.
Create a DynamoDB interface VPC endpoint is wrong because DynamoDB uses a gateway endpoint rather than an interface endpoint.
Attach a resource-based policy to Lambda to block outbound internet is wrong because resource-based policies control invocation and management, not network egress.
Add an IAM condition on the Lambda role to restrict access by source IP is unreliable for Lambda since source IPs are not stable or meaningful for identifying the function’s VPC traffic.
Separate network controls from IAM controls when you read the question. If the goal is to remove public network exposure, think VPC placement and VPC endpoints first, then IAM for access control. Remember the specific endpoint type: DynamoDB uses a gateway VPC endpoint, while many other services use interface endpoints.
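The gateway endpoint half of the solution can be sketched as the parameters for `ec2.create_vpc_endpoint` in boto3. The VPC ID, route table ID, and Region are hypothetical.

```python
# Create a DynamoDB *gateway* endpoint and attach it to the route table
# used by the Lambda function's private subnets.
endpoint_params = dict(
    VpcId="vpc-0aaa1111bbb22222c",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",                 # DynamoDB uses a gateway endpoint
    RouteTableIds=["rtb-0ccc3333ddd44444e"],   # private subnets' route table
)
print(endpoint_params["ServiceName"])
```

Once the endpoint is associated with the route table, DynamoDB traffic from the function's ENIs is routed privately with no NAT or internet gateway involved.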
Question 13
After a compliance audit, a fintech startup found that several customer managed AWS KMS keys allow broader access than intended. The security lead wants to ensure that only principals from accounts that are part of their AWS Organizations can use these keys, and the solution should automatically keep working as accounts are added or removed without enumerating each account. What should the team implement?
-
✓ C. Configure the KMS key policy to use the aws:PrincipalOrgID global condition key with a resource-based Principal and set the Condition to your Organization ID
The correct approach is to use Configure the KMS key policy to use the aws:PrincipalOrgID global condition key with a resource-based Principal and set the Condition to your Organization ID. In a KMS key policy, you can add a Condition that checks aws:PrincipalOrgID equals your organization ID. This allows any principal from accounts in your organization and automatically adapts as accounts join or leave, without listing account IDs.
Use the aws:PrincipalIsAWSService global condition key in a KMS key policy and enumerate each AWS account ID in the Condition is incorrect because aws:PrincipalIsAWSService only indicates whether the caller is an AWS service principal and does not verify organization membership; enumerating accounts is fragile and unnecessary when aws:PrincipalOrgID exists.
Attach a service control policy at the organization root that denies KMS actions for any principal outside the organization is incorrect because SCPs only limit what principals in your org can do; they do not control access by principals outside your org to a specific KMS key, which is governed by the key policy.
Use aws:PrincipalOrgID to restrict an AWS service principal from accessing the key is incorrect because service principals do not belong to your AWS Organizations and aws:PrincipalOrgID does not evaluate true for them.
When you see a requirement to allow access only to identities in your AWS Organizations across many accounts, think resource-based policy with aws:PrincipalOrgID and avoid listing account IDs.
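A minimal sketch of such a key-policy statement, assuming a hypothetical organization ID and an illustrative set of key-usage actions:

```python
import json

# KMS key-policy statement: allow key use from any principal, but only
# if the caller's account belongs to this organization.
org_statement = {
    "Sid": "AllowUseFromOrgAccountsOnly",
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
    "Resource": "*",
    "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-example12345"}},
}
print(json.dumps(org_statement, indent=2))
```

Because membership is evaluated at request time, no policy change is needed when accounts join or leave the organization.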
Question 14
How can you ensure artifacts previously signed by a departed engineer are no longer accepted for Lambda execution?
-
✓ B. Revoke all versions of the signing profile that were used by the departed engineer.
Revoke all versions of the signing profile that were used by the departed engineer is correct because AWS Signer provides a revocation mechanism that marks signing profile versions as revoked; when Lambda code signing verification and revocation checks are enforced, Lambda refuses to run artifacts signed by those revoked profile versions. This invalidates previously issued signatures without needing to locate every signed file.

The other choices are incorrect for these reasons. Rotate the AWS KMS customer master key that encrypts the deployment packages is incorrect because signature verification is separate from the encryption of stored artifacts; rotating a CMK does not change or invalidate existing cryptographic signatures.
Remove the departed engineer’s IAM permissions to AWS Signer so they cannot sign new artifacts is incorrect because it prevents future signing but does nothing to invalidate signatures already produced.
Delete the signed artifacts from the deployment storage (for example, S3) to prevent their use is operationally fragile because copies or backups may exist, and it does not revoke the signatures themselves.
Focus on mechanisms that explicitly invalidate signatures (revocation) or change the set of allowed publishers in the Lambda code signing configuration. Be prepared to distinguish actions that prevent new keys/signatures from being created from actions that revoke or blacklist existing signatures. Check the Lambda code signing configuration to ensure it enforces publisher restrictions and revocation checks after revoking profiles.
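The revocation step can be sketched as the parameters for `signer.revoke_signing_profile` in boto3. The profile name and version are hypothetical, and each version the engineer used must be revoked in turn; treat the exact parameter set as an assumption to verify against the AWS Signer API reference.

```python
from datetime import datetime, timezone

# Revoke one signing profile version; signatures produced by it stop
# validating once Lambda's revocation checks are enforced.
revoke_params = dict(
    profileName="lambda_code_signing",      # hypothetical profile
    profileVersion="AbCdEf12Gh",            # hypothetical version to invalidate
    reason="Signer left the organization",
    effectiveTime=datetime(2024, 6, 1, tzinfo=timezone.utc),
)
print(revoke_params["profileVersion"])
```

After revoking, confirm the Lambda code signing configuration has its untrusted-artifact policy set to enforce (block deployment and execution) rather than warn.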
Question 15
Finley Retail Group uses AWS Firewall Manager with a policy that targets every account and resource in its organization. After a 90 day consolidation across 64 accounts, some load balancers and instances moved out of the policy scope. What does Firewall Manager do by default when resources leave the scope of the policy?
-
✓ C. The associated AWS Config managed rules are deleted, and any AWS WAF web ACLs with no associated resources are removed
The associated AWS Config managed rules are deleted, and any AWS WAF web ACLs with no associated resources are removed is correct because when resources leave the Firewall Manager policy scope, the service cleans up its AWS Config managed rules and deletes empty WAF web ACLs, but it does not strip existing protections from resources that were previously associated.
Any protected resource that leaves the policy scope is automatically disassociated and removed from protection is incorrect because resources such as ALBs or API Gateway stages remain associated with their web ACLs by default even after leaving scope.
An Application Load Balancer associated with a web ACL is removed from the web ACL when it leaves scope while the protection continues is incorrect because the ALB remains associated with the web ACL; protection is not automatically removed or changed.
Any AWS WAF web ACL with no resources is deleted, and an EC2 instance is automatically disassociated from the replicated security group when it leaves scope is incorrect because instances are not automatically detached from replicated security groups by default; only Config managed rules and unused web ACLs are cleaned up.
When you see resources moving out of a Firewall Manager policy scope, remember that associations remain while Firewall Manager cleans up only its AWS Config managed rules and deletes unused WAF web ACLs.
Question 16
After baking an AMI that had the CloudWatch unified agent installed, instances launched from it send no logs to CloudWatch Logs. What could cause the agent to fail to send logs? (Choose 2)
-
✓ B. AMI contains baked instance-specific agent state that breaks the agent on new instances
-
✓ D. Instance IAM role lacks CloudWatch Logs permissions (create groups/streams, PutLogEvents)
The correct causes are that the AMI baked instance-specific agent state and that the instance IAM role lacks CloudWatch Logs permissions. Specifically, AMI contains baked instance-specific agent state that breaks the agent on new instances is correct because creating an AMI with the CloudWatch agent already installed can capture local agent state, saved credentials, instance IDs, or file permissions that are invalid on derived instances; AWS recommends cleaning agent state or reconfiguring the agent at first boot. The other correct cause is Instance IAM role lacks CloudWatch Logs permissions (create groups/streams, PutLogEvents) since the agent requires permissions such as logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents (or the managed CloudWatchAgentServerPolicy) to publish logs.

The incorrect options are as follows. VPC must use a VPC endpoint; public egress is not allowed for CloudWatch Logs is wrong because the CloudWatch Logs public endpoints are reachable over internet egress whenever the VPC has a NAT gateway, internet gateway, or other appropriate route; a VPC endpoint is optional and is used only to avoid internet egress.
Agent must forward logs via Kinesis Data Firehose instead of writing directly to CloudWatch Logs is incorrect because the unified agent can write directly to CloudWatch Logs without Firehose.
Agent must run as root and will fail when run under other users is also incorrect because the agent can run as a non-root user if that user has the necessary file and path permissions.
First check the agent logs on the instance (for example, /var/log/amazon/amazon-cloudwatch-agent/amazon-cloudwatch-agent.log) and use the agent control tool to check status. Verify the instance IAM role has CloudWatch Logs permissions and confirm network egress or a VPC endpoint if the instance cannot reach public endpoints. When creating AMIs, remove instance-specific agent state or use instance bootstrap (user data or SSM) to reconfigure the agent at launch.
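A minimal IAM policy covering the CloudWatch Logs permissions listed above (the managed CloudWatchAgentServerPolicy is broader and also covers metrics):

```python
import json

# Instance-role policy granting just the log-publishing permissions the
# unified CloudWatch agent needs.
agent_logs_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
            "logs:DescribeLogStreams",
        ],
        "Resource": "arn:aws:logs:*:*:*",
    }],
}
print(json.dumps(agent_logs_policy, indent=2))
```

In production you would typically scope the Resource ARN down to the specific log groups the agent writes to.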
Question 17
A digital publishing startup, Redwood Press, moved most workloads to AWS and runs Windows applications on Amazon EC2. The applications still rely on Active Directory hosted in the company datacenter. GuardDuty has been enabled across the environment, but the security team notices there are no DNS-related findings for these instances. What change should be made so GuardDuty can evaluate DNS activity for these workloads?
-
✓ C. Repoint the Windows instances to the VPC’s AmazonProvidedDNS resolver rather than a custom or on-prem resolver
The correct answer is Repoint the Windows instances to the VPC’s AmazonProvidedDNS resolver rather than a custom or on-prem resolver. GuardDuty consumes DNS query telemetry from the AWS-managed AmazonProvidedDNS in your VPC. If instances use an on-premises DNS or another custom resolver, GuardDuty cannot access that DNS data and therefore produces no DNS findings.
Enable Route 53 Resolver query logging and confirm the destination configuration is incorrect because GuardDuty does not read from Route 53 Resolver query logs. GuardDuty uses a separate internal stream for DNS telemetry and is not affected by query logging settings.
Add broader permissions to the GuardDuty service-linked role so it can read DNS logs is incorrect because IAM permissions are not the blocker. DNS visibility in GuardDuty depends on using the AWS VPC resolver, not additional roles or policies.
GuardDuty does not evaluate DNS traffic and only uses VPC Flow Logs, CloudTrail, and Kubernetes audit logs is incorrect because GuardDuty does analyze DNS activity, but only when queries go through the AmazonProvidedDNS resolvers.
For DNS findings, ensure instances use AmazonProvidedDNS in the VPC. Route 53 Resolver query logging and extra IAM permissions do not enable GuardDuty DNS visibility.
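Repointing the instances can be done by associating the VPC with a DHCP option set that names AmazonProvidedDNS as the resolver, sketched below as parameters for `ec2.create_dhcp_options` in boto3:

```python
# DHCP option set that hands out the VPC's AmazonProvidedDNS resolver;
# associate it with the VPC afterwards so GuardDuty sees the DNS queries.
dhcp_params = dict(
    DhcpConfigurations=[
        {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
    ]
)
print(dhcp_params["DhcpConfigurations"][0]["Values"][0])
```

In hybrid Active Directory designs, a common pattern is to keep AmazonProvidedDNS as the instance resolver and use Route 53 Resolver forwarding rules for only the on-premises AD domains, preserving both name resolution and GuardDuty DNS visibility; verify this fits your directory setup.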
Question 18
How can a company centrally restrict accounts to specific AWS Regions and a subset of AWS services while minimizing administrative overhead?
-
✓ C. Apply organization-level Service Control Policies scoped by Organizational Unit to restrict Regions and services.
The best solution is to use Apply organization-level Service Control Policies scoped by Organizational Unit to restrict Regions and services. Service Control Policies (SCPs) set centralized maximum permissions at the organization or OU level so disallowed API calls are blocked regardless of account-level IAM permissions, which keeps governance scalable and reduces per-account administrative work.
The option Use AWS Config rules with automated remediation to prevent out-of-scope resources is incorrect because AWS Config primarily detects, and can remediate, after resources are created; it does not inherently prevent API calls at creation time.
The option Use AWS Identity Center permission sets scoped per account to enforce Region and service limits is incorrect because Identity Center controls how permissions are granted to identities but does not centrally enforce organization-wide maximum permissions for Regions or services; it still relies on the underlying IAM role policies in each account.
The option Create team-specific IAM roles and attach fine-grained policies in every account to limit Regions and services is incorrect because this approach requires managing many policies across accounts, which is operationally heavy and prone to drift.

For exam strategy, look for answers that mention centralized enforcement, organizational scope, and maximum-permission controls; SCPs are a common exam keyword for such scenarios. Remember that SCPs do not grant permissions themselves but limit what permissions IAM can grant, so they must be used together with IAM policies.
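A common sketch of such an SCP denies requests outside an approved Region list while exempting global services that must stay reachable. The Region list and the exempted services below are illustrative assumptions to adapt to your organization.

```python
import json

# SCP: deny any request made outside the approved Regions, except for
# global service actions (which are always served from one Region).
region_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
            }
        },
    }],
}
print(json.dumps(region_scp, indent=2))
```

A companion allow-list of approved services can be expressed the same way with a Deny on NotAction over the permitted service prefixes.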
Question 19
AWS Trust & Safety reports that malicious content is being distributed from a company account. The security lead at Solara Retail Analytics investigates and identifies an unapproved Amazon S3 bucket created by an unknown party. Which actions should be taken immediately to contain the incident and reduce impact? (Choose 3)
-
✓ A. Remove any resources that were created without authorization
-
✓ C. Rotate and then remove all existing root and IAM access keys
-
✓ F. Remove any unauthorized IAM users
When AWS Trust & Safety flags abuse such as malware hosting, treat it as an account compromise. Immediate containment centers on removing attacker access and dismantling any rogue infrastructure. The key steps are to revoke credentials, remove unauthorized identities, and delete resources the attacker stood up.
Remove any resources that were created without authorization is correct because eliminating rogue assets such as the unapproved S3 bucket halts ongoing malicious activity and limits further exposure.
Enable Amazon GuardDuty is not the best immediate containment step because it provides threat detection, not rapid removal of attacker access or assets; it is valuable, but it does not minimize current consequences as directly as revoking credentials and deleting rogue entities.
Rotate and then remove all existing root and IAM access keys is correct because access key rotation and revocation cut off compromised credentials, which is one of the first actions recommended for suspected account compromise.
Sign in as the root user and delete every IAM user is incorrect because it is unnecessarily destructive and disrupts legitimate operations; you should remove only unauthorized principals and reset or rotate credentials for legitimate ones.
Enable AWS Shield Advanced is incorrect because it focuses on DDoS protection, which does not remediate a compromised account that is already hosting malware.
Remove any unauthorized IAM users is correct because deleting attacker-created identities removes their persistence mechanisms and reduces the blast radius.
For account compromise scenarios, prioritize immediate containment: revoke credentials, remove unauthorized identities, and delete rogue resources before considering longer-term improvements like enabling new detection services.
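The credential-revocation step can be sketched as below. The `iam` argument is expected to be a boto3 IAM client (`boto3.client("iam")`); the calls used are the standard ListAccessKeys and UpdateAccessKey APIs, and the user name is hypothetical.

```python
def deactivate_user_keys(iam, user_name):
    """Set all of the user's access keys to Inactive and return their IDs.

    Deactivating first (rather than deleting immediately) cuts off the
    attacker while preserving key IDs for the investigation.
    """
    keys = iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]
    for key in keys:
        iam.update_access_key(
            UserName=user_name,
            AccessKeyId=key["AccessKeyId"],
            Status="Inactive",
        )
    return [k["AccessKeyId"] for k in keys]
```

Root access keys cannot be handled this way; they must be deactivated and deleted from the root user's Security Credentials page.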
Question 20
Which ECR feature scans OS and language-package vulnerabilities once when an image is pushed?
-
✓ C. Enable Amazon ECR enhanced image scanning (Inspector integration) and set scan-on-push
The correct choice is Enable Amazon ECR enhanced image scanning (Inspector integration) and set scan-on-push.
This is the native ECR capability that integrates with Amazon Inspector to assess both operating-system and language-package vulnerabilities, and it can be configured to run a single scan when an image is pushed to the repository.

The other choices are incorrect. Enable ECR basic image scanning with scan-on-push only covers OS-level CVEs via Clair and does not detect language-package issues.
The Use an EventBridge-triggered Lambda to start Amazon Inspector scans after push approach is a custom solution that increases operational overhead and can produce duplicate or delayed scans instead of a single native scan at push time.
The Enable enhanced scanning with continuous re-scans option intentionally re-scans images over time or on new findings, which violates the requirement to scan each image only once at push.
Look for keywords like enhanced image scanning, Inspector integration, and scan-on-push as indicators of the native single-scan-on-push solution. Be cautious of answers that imply continuous rescans or require custom orchestration when the question asks for a single, native scan at push time.
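The registry-level setting can be sketched as the parameters for `ecr.put_registry_scanning_configuration` in boto3, applying enhanced scan-on-push to every repository via a wildcard filter; treat the exact enum values as an assumption to verify against the ECR API reference.

```python
# Enable enhanced (Inspector-backed) scanning, triggered once per image push,
# for all repositories in the registry.
scan_config = dict(
    scanType="ENHANCED",
    rules=[{
        "scanFrequency": "SCAN_ON_PUSH",
        "repositoryFilters": [{"filter": "*", "filterType": "WILDCARD"}],
    }],
)
print(scan_config["scanType"])
```

Using `CONTINUOUS_SCAN` instead of `SCAN_ON_PUSH` would re-scan images as new CVE data arrives, which is exactly what the question's single-scan requirement rules out.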
Jira, Scrum & AI Certification
Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified, and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML, and DevOps technologies. Advance your career today.
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect, and author of many popular books in the software development and cloud computing space. His growing YouTube channel, which trains developers in Java, Spring, AI, and ML, has well over 30,000 subscribers.