Sample Questions for the AWS Certified Security Specialty Certification

AWS Certified Security Specialty Badge and Logo Credly

All questions come from certificationexams.pro and my Udemy AWS Security course.

AWS Security Specialty Exam Topics

The AWS Certified Security Specialty exam validates your ability to design, manage, and protect AWS workloads that handle sensitive data and mission-critical operations. It focuses on core domains such as data protection, identity and access management, incident response, infrastructure security, and logging and monitoring.

To prepare effectively, start by studying AWS Security Specialty Practice Questions. Find questions that mirror the tone, reasoning, and structure of the real AWS exam. This will help you master Amazon’s security best practices and examination style.

You can also explore Real AWS Security Specialty Exam Questions for authentic, scenario-based challenges that simulate real security engineering tasks. For focused study, review AWS Security Specialty Sample Questions covering encryption management, key rotation, access policies, VPC security, and logging configurations.

AWS Security Specialty Exam Simulator

Each section of the AWS Security Specialty Questions and Answers collection is designed to teach as well as test.

These materials reinforce essential AWS security concepts and explain why specific responses are correct, helping you think like an experienced cloud security professional.

For complete readiness, use the AWS Security Specialty Exam Simulator and take full-length AWS Security Specialty Practice Tests. These simulations replicate the structure, time constraints, and complexity of the actual AWS certification exam.

If you prefer focused study sessions, explore collections such as the AWS Security Specialty Exam Dump, Security Braindump, and AWS Security Specialty Questions and Answers by topic.

These organize questions by key security areas such as IAM, encryption, data governance, network isolation, and threat detection. Working through these exam questions helps you develop the analytical and practical skills required to protect AWS environments and ensure compliance.

Prepare today with the official-style AWS Security Specialty Practice Questions and measure your progress with full-length exams.

Train consistently and approach the security engineer exam with confidence.


AWS Security Specialty Sample Questions

Question 1

At Orion Health, a network security engineer needs to set up an interface VPC endpoint in a production VPC so workloads can call a private REST API hosted on Amazon API Gateway in a different AWS account. Which considerations should be applied when configuring the interface endpoint in the VPC? (Choose 2)

  • ❏ A. You must select subnets in multiple Availability Zones when creating the interface endpoint for resiliency

  • ❏ B. The security groups attached to the endpoint network interfaces must allow inbound TCP 443 from your VPC CIDR or from a VPC security group

  • ❏ C. To reach public APIs through a VPC endpoint, enable private DNS on the VPC endpoint

  • ❏ D. Enabling private DNS on the interface endpoint causes execute-api DNS queries from inside the VPC to resolve to the endpoint, which blocks access to public API Gateway endpoints

  • ❏ E. You cannot access the private API from on-premises over AWS Direct Connect using the public execute-api DNS names

Question 2

If the same IPv4 address is in both a GuardDuty trusted IP list and a threat intelligence list, how is it evaluated and what IAM permissions are needed to fully manage these lists? (Choose 2)

  • ❏ A. Attach only the AmazonGuardDutyFullAccess managed policy; no extra IAM permissions are required.

  • ❏ B. Trusted list entries take precedence so an IP present in both lists is treated as trusted and will not generate findings.

  • ❏ C. Grant SecurityAudit plus modify S3 bucket policies for uploads to obtain full list management.

  • ❏ D. Attach AmazonGuardDutyFullAccess and allow iam:PutRolePolicy and iam:DeleteRolePolicy on arn:aws:iam::123456789012:role/aws-service-role/guardduty.amazonaws.com/AWSServiceRoleForAmazonGuardDuty.

  • ❏ E. Grant iam:PassRole to the principal so they can pass the service-linked role and thereby manage lists.

Question 3

NovaStream, a digital media startup, runs dozens of microservices on Amazon EC2 across four AWS accounts, and each service emits logs with latency and resource metrics. The company wants to consolidate these logs into a single logging account to feed a real-time anomaly detection pipeline. What is the most effective way to aggregate and stream logs from all accounts into the central processing system in near real time?

  • ❏ A. Configure the CloudWatch Agent on all EC2 instances to send logs to an Amazon S3 bucket in the logging account, then invoke an AWS Lambda function on object creation to push data to the detection system

  • ❏ B. Enable AWS Config in every account, aggregate to a centralized aggregator, and use AWS Lambda to transform and forward the results to the anomaly detection pipeline

  • ❏ C. Create CloudWatch Logs subscription filters in each account that deliver to a cross-account Amazon Kinesis Data Firehose delivery stream in the logging account, which then forwards records to the processing system

  • ❏ D. Set up an Amazon EventBridge event bus in the logging account and create cross-account rules to route all logs to the bus, then forward them to the detection system

Question 4

Which controls should be combined so a deployment pipeline can create IAM principals and policies while ensuring created resources cannot exceed centrally enforced minimum privileges? (Choose 2)

  • ❏ A. Use an AWS Organizations Service Control Policy to centrally block role permissions.

  • ❏ B. Attach an IAM permissions boundary to limit privileges of roles the pipeline creates.

  • ❏ C. Require developers to attach their own permissions boundaries to roles created by the pipeline.

  • ❏ D. Give the pipeline a single, dedicated IAM execution role it assumes to provision resources.

  • ❏ E. Validate deployment templates with IAM Access Analyzer or CloudFormation Guard before deployment.

Question 5

A security analyst at Alpine Retail configured an allow list of trusted IP addresses for Amazon GuardDuty using recommended practices, but GuardDuty is still reporting findings that involve those same addresses. What checks should be performed to verify the configuration so GuardDuty behaves as expected? (Choose 2)

  • ❏ A. Verify that in an AWS Organizations multi-account setup, findings for member accounts are evaluated against the administrator account’s trusted IP list

  • ❏ B. Confirm that the trusted IP list contains only publicly routable IPv4 addresses

  • ❏ C. Make sure an IP address is not present in both the trusted IP list and a threat list because threat list precedence will still cause a finding

  • ❏ D. Configure more than one trusted IP list per account in each Region

  • ❏ E. Upload the trusted IP list in the same Region where GuardDuty is generating findings

Question 6

How do you centrally collect CloudTrail logs for all current and future accounts while preventing member accounts from modifying the trail?

  • ❏ A. Deploy individual trails in each account and use cross-account S3 permissions

  • ❏ B. Use an AWS Config aggregator to collect API activity into the central bucket

  • ❏ C. Create an organization-wide CloudTrail in the management account delivering logs to a centralized S3 bucket

  • ❏ D. Enforce CloudTrail with an Organization SCP from the management account

Question 7

A fintech startup has begun building on AWS. Early workloads run in Europe (Frankfurt), and a CloudTrail trail is configured to record API activity to an S3 bucket in that Region. The security team now requires that API events from every Region are collected and archived in one centrally managed Region. What is the simplest way to meet this requirement?

  • ❏ A. Create a new trail that applies to all Regions and deliver events to different S3 buckets for each Region

  • ❏ B. Modify the existing single Region trail to log all Regions and write to one S3 bucket in the central Region

  • ❏ C. Configure one trail in every Region and point all of them to the same centralized S3 bucket

  • ❏ D. CloudWatch Logs

Question 8

Which AWS services and features best mitigate layer 3 and layer 4 volumetric DDoS attacks? (Choose 3)

  • ❏ A. Elastic Load Balancing distributing incoming traffic

  • ❏ B. AWS Web Application Firewall rules engine

  • ❏ C. AWS Shield managed DDoS protection service

  • ❏ D. Amazon GuardDuty continuous threat detection

  • ❏ E. Amazon Route 53 authoritative DNS service

  • ❏ F. AWS Global Accelerator

Question 9

A streaming analytics firm named Quartz Wave runs most of its workloads in containers and stores its images in Amazon Elastic Container Registry. The security group needs an automated approach that detects vulnerabilities in both operating systems and application language packages. Every image pushed to ECR must be continually assessed, and any change in findings should trigger notifications to the DevOps and SecOps teams. What should they implement?

  • ❏ A. AWS Security Hub

  • ❏ B. Enable enhanced scanning for the Amazon ECR registry and use Amazon EventBridge to route scan update events to the required teams

  • ❏ C. Enable basic scanning for repositories with scan on push and configure EventBridge rules to notify the teams

  • ❏ D. Amazon GuardDuty

Question 10

How can an analyst remain in the main operations group while restricting their permissions to DynamoDB, RDS, and CloudWatch?

  • ❏ A. Place the analyst in another AWS account and require assuming a cross-account role limited to DynamoDB, RDS, and CloudWatch.

  • ❏ B. Add the analyst to the operations group and apply a permissions boundary that limits permissions to DynamoDB, RDS, and CloudWatch, updating it later as needed.

  • ❏ C. Use an AWS Organizations Service Control Policy on the account to block access to services except DynamoDB, RDS, and CloudWatch.

  • ❏ D. Create a separate IAM group for the analyst with policies granting only DynamoDB, RDS, and CloudWatch, and move the user into that group.


Question 11

A health-tech analytics provider runs an Amazon RDS for PostgreSQL database in a VPC that has no internet gateway or NAT due to strict isolation controls. The team wants AWS Secrets Manager to rotate the database credentials every 45 days, but company policy prohibits using the managed rotation Lambda. An engineer deployed a custom Lambda function in private subnets to perform rotation and allowed database access from the function’s security group. When executed, the function cannot reach Secrets Manager and the rotation fails. What should the engineer do to enable rotation without exposing the VPC to the internet?

  • ❏ A. Deploy a NAT gateway in a public subnet and add a default route from the function subnets so the function can call Secrets Manager over the internet

  • ❏ B. Create a VPC peering connection to the Secrets Manager service and update routes

  • ❏ C. Add an interface VPC endpoint for AWS Secrets Manager and place the function in subnets that can resolve and reach the endpoint

  • ❏ D. Set up an AWS Direct Connect link from the VPC to Secrets Manager and route the function’s traffic through it

Question 12

Cross-region S3 replication of objects encrypted with different customer-managed KMS keys fails; what IAM/KMS permission and key policy changes allow encrypted objects to replicate? (Choose 3)

  • ❏ A. Grant the replication role kms:GenerateDataKey and kms:Encrypt on the destination CMK only

  • ❏ B. Grant the replication role kms:Decrypt, kms:ReEncryptFrom, kms:Encrypt, and kms:ReEncryptTo on both CMKs

  • ❏ C. Add s3:GetObjectVersion and s3:GetObjectVersionTagging to the replication role

  • ❏ D. Update the destination CMK key policy to allow the replication role kms:Encrypt

  • ❏ E. Switch the destination bucket to S3-managed encryption (SSE-S3)

  • ❏ F. Modify the source CMK key policy to permit the replication role kms:Decrypt

Question 13

An international edtech platform stores documents, media, and logs in Amazon S3 across hundreds of buckets with millions of objects. The security analytics team needs to identify the 60 most frequently requested objects, the 25 largest downloads, and items with the slowest transfer times. They plan to run SQL queries and present the results in an interactive dashboard for stakeholders. Which services should a security engineer combine to meet these requirements? (Choose 2)

  • ❏ A. Enable Amazon S3 server access logging on the buckets

  • ❏ B. AWS Lambda with Amazon DynamoDB

  • ❏ C. Amazon Athena with Amazon QuickSight

  • ❏ D. Amazon GuardDuty

  • ❏ E. Amazon CloudWatch Logs Insights

Question 14

An interface VPC endpoint to a Network Load Balancer in another account shows NLB outbound bytes but the client receives no traffic. What troubleshooting step helps identify and resolve the issue?

  • ❏ A. Enable VPC Flow Logs on the subnet and ENI to inspect traffic metadata.

  • ❏ B. Enable NLB access logs to inspect connection attempts and source IPs hitting the load balancer.

  • ❏ C. Replace PrivateLink with a Transit Gateway for cross-account routing.

  • ❏ D. Verify security group rules and network ACLs in both VPCs allow required ports and protocols between the endpoint ENI and the NLB.

Question 15

Helios Analytics has introduced a security policy that requires the security response team to receive an alert every time the AWS account root user signs in to the AWS Management Console. What is the most efficient way to implement this requirement?

  • ❏ A. Store CloudTrail logs in S3, query them with Athena using a scheduled Lambda, and publish to an SNS topic when a root console sign-in is found

  • ❏ B. Create a CloudWatch Events rule that matches any account root user API activity and invoke a Lambda function that posts to an SNS topic

  • ❏ C. Use an EventBridge rule to filter ConsoleLogin events where the user identity is Root and send notifications to an SNS topic

  • ❏ D. Forward VPC Flow Logs to SQS, process with a Lambda function, and notify an SNS topic if a root login is detected

Question 16

Which CloudFront configuration gives strong protection against man-in-the-middle attacks with minimal operational overhead?

  • ❏ A. Attach an ACM TLS certificate and redirect HTTP to HTTPS

  • ❏ B. Use AWS WAF to enforce TLS and add security response headers

  • ❏ C. Apply the managed SecurityHeadersPolicy response headers policy to CloudFront

  • ❏ D. Add HSTS headers via a Lambda@Edge function

Question 17

An analytics service at Nebula Retail Group runs on Amazon EC2 instances in two private subnets and communicates with one Amazon SQS queue and a DynamoDB table that carry highly sensitive records. The security team requires that traffic between the instances and these AWS services remain on the AWS private network rather than the public internet. What actions should the engineer implement to satisfy this requirement? (Choose 3)

  • ❏ A. Create an interface VPC endpoint for Amazon SQS in the subnets hosting the instances

  • ❏ B. Establish an AWS Direct Connect link to Amazon S3 for private service access

  • ❏ C. Apply restrictive endpoint policies on the VPC endpoints that allow only the application’s DynamoDB table and SQS queue ARNs

  • ❏ D. Route traffic through a NAT gateway so instances can reach SQS and DynamoDB without public exposure

  • ❏ E. Provision a gateway VPC endpoint for DynamoDB in the VPC

  • ❏ F. Update the EC2 instance IAM role to allow outbound access to the interface endpoints

Question 18

Which IAM policy must be provided so an EC2 instance can assume an IAM role?

  • ❏ A. Permissions policy attached to the role granting S3 read/write

  • ❏ B. Role trust policy permitting the EC2 service principal to assume the role

  • ❏ C. Instance profile association linking the role to the EC2 instance

  • ❏ D. IAM permission boundary that limits role permissions

Question 19

A fintech startup is launching a payments platform that will use AWS KMS for encryption. The team will create customer managed keys in KMS using AWS-generated key material and will not import any external key material. They require the keys to rotate every 365 days. What is the most appropriate approach to achieve this?

  • ❏ A. AWS CloudHSM

  • ❏ B. Enable automatic annual rotation on each customer managed KMS key

  • ❏ C. Use AWS managed KMS keys so AWS performs rotation for you

  • ❏ D. Modify the customer managed key policy to turn on automatic rotation

Question 20

What VPC instance setting must be changed so a virtual firewall can forward traffic when it is not the packet source or destination?

  • ❏ A. Use VPC Traffic Mirroring to send copies of traffic to the firewall

  • ❏ B. Add subnet routes pointing to the firewall ENI without disabling source/destination check

  • ❏ C. Place the firewall in a public subnet and route via the internet gateway

  • ❏ D. Disable the instance source/destination check on the firewall ENI

AWS Security Specialty Sample Questions Answered


Question 1

At Orion Health, a network security engineer needs to set up an interface VPC endpoint in a production VPC so workloads can call a private REST API hosted on Amazon API Gateway in a different AWS account. Which considerations should be applied when configuring the interface endpoint in the VPC? (Choose 2)

  • ✓ B. The security groups attached to the endpoint network interfaces must allow inbound TCP 443 from your VPC CIDR or from a VPC security group

  • ✓ D. Enabling private DNS on the interface endpoint causes execute-api DNS queries from inside the VPC to resolve to the endpoint, which blocks access to public API Gateway endpoints

The security groups attached to the endpoint network interfaces must allow inbound TCP 443 from your VPC CIDR or from a VPC security group is correct because interface VPC endpoints for API Gateway are reached over HTTPS, and the endpoint ENIs must accept inbound 443 from your VPC sources to allow the connections.

Enabling private DNS on the interface endpoint causes execute-api DNS queries from inside the VPC to resolve to the endpoint, which blocks access to public API Gateway endpoints is also correct. When private DNS is on, the default execute-api.region.amazonaws.com name resolves to the endpoint’s private IPs inside the VPC, and since the endpoint only forwards to private APIs, attempts to reach public APIs fail.

You must select subnets in multiple Availability Zones when creating the interface endpoint for resiliency is incorrect because this is a best practice for availability, not a strict requirement when creating the endpoint.

To reach public APIs through a VPC endpoint, enable private DNS on the VPC endpoint is incorrect since VPC endpoints cannot be used to access public API Gateway endpoints; private DNS only affects name resolution within the VPC.

You cannot access the private API from on-premises over AWS Direct Connect using the public execute-api DNS names is incorrect. With private DNS enabled on the endpoint and Route 53 Resolver configured to forward queries so on-premises resolves the execute-api name to the endpoint IPs, the private API can be accessed over Direct Connect.

Remember: enabling private DNS for an API Gateway interface endpoint hijacks execute-api resolution inside the VPC, breaking access to public APIs. Use custom domains or split-horizon DNS if you need both private and public access, and always allow inbound TCP 443 to the endpoint ENIs from your VPC.
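The security group requirement from the correct options can be sketched as the ingress rule below, expressed as boto3 authorize_security_group_ingress parameters. The group ID and CIDR are hypothetical placeholders, not values from the question.

```python
# Sketch of the ingress rule the endpoint ENIs need (hypothetical IDs/CIDR).
# The interface endpoint for API Gateway is reached over HTTPS, so the
# security group on its network interfaces must accept TCP 443 from VPC sources.
endpoint_sg_ingress = {
    "GroupId": "sg-0123456789abcdef0",  # SG attached to the endpoint ENIs
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 443,  # execute-api is HTTPS only
            "ToPort": 443,
            "IpRanges": [
                {
                    "CidrIp": "10.20.0.0/16",  # the production VPC CIDR
                    "Description": "HTTPS from workloads in the VPC",
                }
            ],
        }
    ],
}
# Applied with: boto3.client("ec2").authorize_security_group_ingress(**endpoint_sg_ingress)
```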

Question 2

If the same IPv4 address is in both a GuardDuty trusted IP list and a threat intelligence list, how is it evaluated and what IAM permissions are needed to fully manage these lists? (Choose 2)

  • ✓ B. Trusted list entries take precedence so an IP present in both lists is treated as trusted and will not generate findings.

  • ✓ D. Attach AmazonGuardDutyFullAccess and allow iam:PutRolePolicy and iam:DeleteRolePolicy on arn:aws:iam::123456789012:role/aws-service-role/guardduty.amazonaws.com/AWSServiceRoleForAmazonGuardDuty.

The correct behavior and permissions are twofold. First, Trusted list entries take precedence so an IP present in both lists is treated as trusted and will not generate findings. GuardDuty evaluates trusted IP lists before threat intelligence lists to avoid false positives from known trusted addresses. Second, Attach AmazonGuardDutyFullAccess and allow iam:PutRolePolicy and iam:DeleteRolePolicy on arn:aws:iam::123456789012:role/aws-service-role/guardduty.amazonaws.com/AWSServiceRoleForAmazonGuardDuty. The managed policy grants GuardDuty API permissions, while iam:PutRolePolicy and iam:DeleteRolePolicy are required to add or remove inline policies on the service-linked role that GuardDuty uses for list uploads and related operations.

Why the incorrect options are wrong: Attach only the AmazonGuardDutyFullAccess managed policy; no extra IAM permissions are required. is incorrect because the managed policy alone does not allow modifying inline policies on the service-linked role, which is needed for full list management.

Grant SecurityAudit plus modify S3 bucket policies for uploads to obtain full list management. is incorrect because SecurityAudit is read-only and S3 policy changes do not substitute for the GuardDuty and IAM actions required to manage lists.

Grant iam:PassRole to the principal so they can pass the service-linked role and thereby manage lists. is incorrect because iam:PassRole only permits passing a role to a service and does not permit adding or removing inline policies or performing the list management operations.

Remember that GuardDuty trusted lists suppress findings and are evaluated first; when permissions reference the service-linked role, look for iam:PutRolePolicy and iam:DeleteRolePolicy as the extra actions required to modify list-related inline policies; always combine the appropriate managed GuardDuty policy with those IAM actions rather than assuming a single managed policy is sufficient; apply least privilege by scoping the role ARN.
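The extra IAM statement described above can be sketched as the policy document below, to be attached alongside AmazonGuardDutyFullAccess. The account ID mirrors the placeholder from the question, not a real account.

```python
# Sketch of the additional permissions needed on top of AmazonGuardDutyFullAccess
# to fully manage GuardDuty IP lists. The account ID is the question's placeholder.
GUARDDUTY_SLR_ARN = (
    "arn:aws:iam::123456789012:role/aws-service-role/"
    "guardduty.amazonaws.com/AWSServiceRoleForAmazonGuardDuty"
)

list_management_extras = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageGuardDutySLRInlinePolicies",
            "Effect": "Allow",
            # Needed to add/remove inline policies on the service-linked role
            "Action": ["iam:PutRolePolicy", "iam:DeleteRolePolicy"],
            "Resource": GUARDDUTY_SLR_ARN,  # least privilege: scoped to the SLR only
        }
    ],
}
```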

Question 3

NovaStream, a digital media startup, runs dozens of microservices on Amazon EC2 across four AWS accounts, and each service emits logs with latency and resource metrics. The company wants to consolidate these logs into a single logging account to feed a real-time anomaly detection pipeline. What is the most effective way to aggregate and stream logs from all accounts into the central processing system in near real time?

  • ✓ C. Create CloudWatch Logs subscription filters in each account that deliver to a cross-account Amazon Kinesis Data Firehose delivery stream in the logging account, which then forwards records to the processing system

The most suitable approach is to use CloudWatch Logs subscriptions to stream data as it is ingested. Create CloudWatch Logs subscription filters in each account that deliver to a cross-account Amazon Kinesis Data Firehose delivery stream in the logging account, which then forwards records to the processing system enables near real-time, scalable, and centralized delivery with minimal operational overhead.

Configure the CloudWatch Agent on all EC2 instances to send logs to an Amazon S3 bucket in the logging account, then invoke an AWS Lambda function on object creation to push data to the detection system adds S3 write and Lambda trigger delays, making it less suitable for real-time streaming needs.

Enable AWS Config in every account, aggregate to a centralized aggregator, and use AWS Lambda to transform and forward the results to the anomaly detection pipeline is incorrect because AWS Config captures resource configuration changes and compliance data, not application log entries for performance and utilization.

Set up an Amazon EventBridge event bus in the logging account and create cross-account rules to route all logs to the bus, then forward them to the detection system is not appropriate since EventBridge does not subscribe to and stream CloudWatch Logs records; CloudWatch Logs subscription filters should be used for this purpose.

For cross-account, near real-time log aggregation, think CloudWatch Logs subscription filters to Kinesis Data Firehose in a central account; S3-based pipelines or configuration services like AWS Config are not optimized for streaming application logs.
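A per-account subscription filter can be sketched as the boto3 put_subscription_filter parameters below. Cross-account delivery to the logging account's Firehose stream typically goes through a CloudWatch Logs destination created in that account; the log group, destination name, and account ID are all hypothetical.

```python
# Sketch of the per-account subscription filter. Cross-account delivery is
# assumed to target a CloudWatch Logs destination in the logging account that
# fronts the Kinesis Data Firehose delivery stream. Names/IDs are hypothetical.
subscription_filter_params = {
    "logGroupName": "/novastream/checkout-service",
    "filterName": "to-central-firehose",
    "filterPattern": "",  # empty pattern forwards every log event
    "destinationArn": (
        "arn:aws:logs:us-east-1:111122223333:destination:central-logs"
    ),  # Logs destination in the logging account wrapping the Firehose stream
}
# Applied with: boto3.client("logs").put_subscription_filter(**subscription_filter_params)
```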

Question 4

Which controls should be combined so a deployment pipeline can create IAM principals and policies while ensuring created resources cannot exceed centrally enforced minimum privileges? (Choose 2)

  • ✓ B. Attach an IAM permissions boundary to limit privileges of roles the pipeline creates.

  • ✓ D. Give the pipeline a single, dedicated IAM execution role it assumes to provision resources.

The correct combination is to use a dedicated pipeline execution role together with a permissions boundary. Using Attach an IAM permissions boundary to limit privileges of roles the pipeline creates. ensures any role the pipeline creates cannot exceed a predefined maximum permission set, providing an enforceable upper bound. Using Give the pipeline a single, dedicated IAM execution role it assumes to provision resources. centralizes who can create and modify resources, simplifies auditability, and allows security to apply and monitor the permissions boundary or other controls on that single identity.

Use an AWS Organizations Service Control Policy to centrally block role permissions. is incorrect because SCPs require accounts to be in an organization and operate at the account/OU level; they are broader account-level controls and may not be a practical replacement for per-role enforcement inside a single account.

Require developers to attach their own permissions boundaries to roles created by the pipeline. is incorrect because delegating boundary creation to developers defeats centralized enforcement; developers could misconfigure or omit boundaries, which undermines the security guardrail.

Validate deployment templates with IAM Access Analyzer or CloudFormation Guard before deployment. is incorrect as a standalone solution because template validation detects or flags risky permissions but does not guarantee enforcement at creation time; it should be used as a complementary validation step, not the primary guardrail.

Tips for the exam: look for answers that provide enforceable, centralized controls rather than optional or developer-controlled mechanisms. Permissions boundaries and a single, auditable pipeline role enforce guardrails at creation time and simplify governance. Pre-deployment validation and organization policies are useful supplements but are not substitutes for an enforceable boundary applied at role creation.
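One common way to make the boundary enforceable is to condition the pipeline role's iam:CreateRole permission on the iam:PermissionsBoundary key, so roles can only be created with the approved boundary attached. The sketch below uses hypothetical account IDs, paths, and policy names.

```python
# Sketch: the pipeline execution role may create roles only when the approved
# permissions boundary is attached. Account ID, role path, and policy name
# are hypothetical placeholders.
BOUNDARY_ARN = "arn:aws:iam::111122223333:policy/CentralPermissionsBoundary"

pipeline_create_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateRolesOnlyWithBoundary",
            "Effect": "Allow",
            "Action": "iam:CreateRole",
            "Resource": "arn:aws:iam::111122223333:role/app/*",
            "Condition": {
                # CreateRole calls without this exact boundary are implicitly denied
                "StringEquals": {"iam:PermissionsBoundary": BOUNDARY_ARN}
            },
        }
    ],
}
```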

Question 5

A security analyst at Alpine Retail configured an allow list of trusted IP addresses for Amazon GuardDuty using recommended practices, but GuardDuty is still reporting findings that involve those same addresses. What checks should be performed to verify the configuration so GuardDuty behaves as expected? (Choose 2)

  • ✓ B. Confirm that the trusted IP list contains only publicly routable IPv4 addresses

  • ✓ E. Upload the trusted IP list in the same Region where GuardDuty is generating findings

The correct checks are Confirm that the trusted IP list contains only publicly routable IPv4 addresses and Upload the trusted IP list in the same Region where GuardDuty is generating findings. GuardDuty evaluates trusted and threat lists only for publicly routable IPv4 addresses, and lists are Region-scoped to the GuardDuty detector that analyzes events.

Verify that in an AWS Organizations multi-account setup, findings for member accounts are evaluated against the administrator account’s trusted IP list is wrong because trusted IP lists do not trigger or drive findings; they are used to suppress findings and are not evaluated across accounts to generate alerts.

Make sure an IP address is not present in both the trusted IP list and a threat list because threat list precedence will still cause a finding is incorrect because GuardDuty gives precedence to the trusted list; if an IP appears in both, the trusted list wins and the IP will not generate a finding.

Configure more than one trusted IP list per account in each Region is not valid because GuardDuty supports only a single trusted IP list per account per Region.

For GuardDuty lists, remember two keys: Region scope and public IPv4 only. Also, the trusted list overrides the threat list; if the same IP is on both, no finding is generated.
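Uploading a trusted IP list can be sketched as the boto3 create_ip_set parameters below. The detector ID, bucket, and file name are hypothetical; note the two pitfalls from this answer baked into the comments.

```python
# Sketch of uploading a trusted IP list (boto3 create_ip_set parameters).
# Detector ID, bucket, and key are hypothetical. Two pitfalls from the answer:
# the client must target the Region whose detector generates the findings, and
# the list file must contain only publicly routable IPv4 addresses.
trusted_ip_set_params = {
    "DetectorId": "12abc34d567e8fa901bc2d34e56789f0",  # detectors are Region-scoped
    "Name": "alpine-retail-trusted-ips",
    "Format": "TXT",  # plain text, one IP or CIDR per line
    "Location": "https://s3.amazonaws.com/alpine-security-lists/trusted.txt",
    "Activate": True,
}
# Applied with:
# boto3.client("guardduty", region_name="eu-west-1").create_ip_set(**trusted_ip_set_params)
```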

Question 6

How do you centrally collect CloudTrail logs for all current and future accounts while preventing member accounts from modifying the trail?

  • ✓ C. Create an organization-wide CloudTrail in the management account delivering logs to a centralized S3 bucket

Correct answer justification: Creating an organization-level CloudTrail from the management account and delivering logs to a centralized S3 bucket ensures events from all current and future member accounts are captured and sent to the central bucket and that member accounts cannot modify the organization trail. This is represented by Create an organization-wide CloudTrail in the management account delivering logs to a centralized S3 bucket.

Why the incorrect choices are wrong: Deploy individual trails in each account and use cross-account S3 permissions is incorrect because it requires complex cross-account permissions and still permits local administrators to modify or delete their account trails.

Use an AWS Config aggregator to collect API activity into the central bucket is incorrect because AWS Config records resource configuration and compliance changes, not CloudTrail event logs.

Enforce CloudTrail with an Organization SCP from the management account is incorrect because SCPs control permissions but do not create resources or guarantee centralized log delivery; they cannot force creation of a centralized trail.

Look for phrasing that mentions an organization or organization-wide trail, the management (formerly master) account creating the trail, centralized S3 delivery, and preventing member modification. Remember AWS Config is not a replacement for CloudTrail event logging and SCPs manage permissions but do not provision or centralize logs.
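The organization trail can be sketched as the boto3 create_trail parameters below, run from the management account. The trail and bucket names are hypothetical, and the bucket policy must separately grant cloudtrail.amazonaws.com write access.

```python
# Sketch of creating the organization trail from the management account.
# Trail and bucket names are hypothetical; the centralized bucket's policy
# must also allow the CloudTrail service principal to write to it.
org_trail_params = {
    "Name": "org-central-trail",
    "S3BucketName": "org-cloudtrail-central-logs",
    "IsOrganizationTrail": True,  # covers all current and future member accounts
    "IsMultiRegionTrail": True,   # capture API events from every Region
}
# Applied with: boto3.client("cloudtrail").create_trail(**org_trail_params)
```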

Question 7

A fintech startup has begun building on AWS. Early workloads run in Europe (Frankfurt), and a CloudTrail trail is configured to record API activity to an S3 bucket in that Region. The security team now requires that API events from every Region are collected and archived in one centrally managed Region. What is the simplest way to meet this requirement?

  • ✓ B. Modify the existing single Region trail to log all Regions and write to one S3 bucket in the central Region

The simplest solution is to use a single CloudTrail trail configured to log all Regions and deliver to one S3 bucket. You can edit the current trail to enable logging for all Regions and set the destination to a central S3 bucket in any Region.

Modify the existing single Region trail to log all Regions and write to one S3 bucket in the central Region is correct because CloudTrail supports a multi Region trail that consolidates events from every Region into one S3 bucket, and the bucket can be in any Region.

Create a new trail that applies to all Regions and deliver events to different S3 buckets for each Region is unnecessary since an all Regions trail specifies a single S3 bucket.

Configure one trail in every Region and point all of them to the same centralized S3 bucket increases operational overhead and is not the simplest approach.

Delivering events to CloudWatch Logs is not a replacement for centralized CloudTrail S3 delivery and does not meet the archival requirement.

When you see a requirement to capture API activity from every Region and store it centrally, look for a single CloudTrail trail configured for all Regions with delivery to one S3 bucket.
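A minimal sketch of the change, assuming the trail is updated via the CloudTrail API: the parameters below are what you would pass to an `update_trail` call. The trail and bucket names are illustrative, not from the question.

```python
# Hypothetical parameters for converting the existing Frankfurt trail
# into a multi-Region trail delivering to one central bucket.
update_trail_params = {
    "Name": "frankfurt-trail",                # existing trail (placeholder name)
    "IsMultiRegionTrail": True,               # capture events from every Region
    "S3BucketName": "central-audit-archive",  # single bucket in the central Region
}

# With boto3 this would be passed as:
# boto3.client("cloudtrail").update_trail(**update_trail_params)
```

The key point is that one trail with `IsMultiRegionTrail` enabled replaces per-Region trails entirely, which is why the single-trail option is the simplest answer.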

Question 8

Which AWS services and features best mitigate layer 3 and layer 4 volumetric DDoS attacks? (Choose 3)

  • ✓ A. Elastic Load Balancing distributing incoming traffic

  • ✓ C. AWS Shield managed DDoS protection service

  • ✓ E. Amazon Route 53 authoritative DNS service

The best combination is to use traffic-distribution and DDoS protection together.

Use Elastic Load Balancing distributing incoming traffic to spread and absorb large connection and packet loads across targets and AZs.

Use AWS Shield managed DDoS protection service to detect and automatically mitigate network- and transport-layer attacks and to provide advanced mitigation and response options when needed.

Use Amazon Route 53 authoritative DNS service to leverage its anycast network, high query capacity, and shuffle sharding patterns to reduce DNS blast radius and help maintain availability.

The three listed solutions work together to absorb, deflect, and limit the impact of volumetric L3/L4 floods. The other choices are incorrect for these reasons. The AWS Web Application Firewall rules engine operates at layer 7 and inspects HTTP traffic, so it does not stop raw L3/L4 floods. The Amazon GuardDuty continuous threat detection service monitors and detects threats but does not provide real-time network traffic absorption or mitigation.

The AWS Global Accelerator can improve performance and provide static IPs and edge routing but is not a primary volumetric DDoS mitigation control by itself.

Favor network-level mitigations for L3/L4 attacks such as Shield and distribution mechanisms like ELB and Route 53; remember WAF is for L7 and GuardDuty is detection, not traffic mitigation. Consider that AWS Shield Standard protections are applied automatically to many AWS services and Shield Advanced adds additional mitigation capacity and DDoS response support.

Question 9

A streaming analytics firm named Quartz Wave runs most of its workloads in containers and stores its images in Amazon Elastic Container Registry. The security group needs an automated approach that detects vulnerabilities in both operating systems and application language packages. Every image pushed to ECR must be continually assessed, and any change in findings should trigger notifications to the DevOps and SecOps teams. What should they implement?

  • ✓ B. Enable enhanced scanning for the Amazon ECR registry and use Amazon EventBridge to route scan update events to the required teams

The right choice is Enable enhanced scanning for the Amazon ECR registry and use Amazon EventBridge to route scan update events to the required teams. Enhanced scanning uses Amazon Inspector to automatically assess images for operating system and programming language package vulnerabilities, supports continuous monitoring with the default Lifetime duration, and publishes scan events to Amazon EventBridge for notifications.

AWS Security Hub is a findings aggregator and compliance dashboard; it does not perform image vulnerability scanning and would still require a scanner like Amazon Inspector to generate findings.

Enable basic scanning for repositories with scan on push and configure EventBridge rules to notify the teams is insufficient because basic scanning only checks OS CVEs at push time and does not continuously rescan or analyze language package vulnerabilities.

Amazon GuardDuty focuses on threat detection using data sources like CloudTrail, VPC Flow Logs, and DNS logs; it does not perform container image vulnerability scanning for ECR.

Link ECR enhanced scanning to Amazon Inspector in your mind: enhanced = Inspector, covers OS and language packages, supports continuous monitoring, and emits events to EventBridge for notifications.
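As a sketch of the notification wiring, an EventBridge rule for enhanced-scanning findings can match on the Amazon Inspector event source. The `source`, `detail-type`, and resource-type values below reflect my understanding of Inspector's finding events and should be verified against the current EventBridge documentation.

```python
# Assumed EventBridge event pattern for Amazon Inspector (enhanced scanning)
# findings on ECR container images. Verify field values against AWS docs.
ecr_finding_pattern = {
    "source": ["aws.inspector2"],
    "detail-type": ["Inspector2 Finding"],
    "detail": {
        # Restrict matches to findings on ECR container images
        "resources": {"type": ["AWS_ECR_CONTAINER_IMAGE"]},
    },
}
```

The rule's targets would then be the SNS topics (or other destinations) that notify the DevOps and SecOps teams whenever findings change.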

Question 10

How can an analyst remain in the main operations group while restricting their permissions to DynamoDB, RDS, and CloudWatch?

  • ✓ B. Add the analyst to the operations group and apply a permissions boundary that limits permissions to DynamoDB, RDS, and CloudWatch, updating it later as needed.

Add the analyst to the operations group and apply a permissions boundary that limits permissions to DynamoDB, RDS, and CloudWatch, updating it later as needed is correct because a permissions boundary defines the maximum permissions an identity-based policy can grant to a principal. This allows the analyst to remain in the shared operations group (reducing administrative overhead) while ensuring their effective permissions cannot exceed the boundary that limits actions to DynamoDB, RDS, and CloudWatch. The boundary can be updated later to broaden privileges as responsibilities expand.

Place the analyst in another AWS account and require assuming a cross-account role limited to DynamoDB, RDS, and CloudWatch is not ideal because cross-account isolation adds significant operational complexity, requires additional trust setup and role-assumption workflows, and is overkill for restricting a single user within the same account.

Use an AWS Organizations Service Control Policy on the account to block access to services except DynamoDB, RDS, and CloudWatch is incorrect because SCPs apply at the account or organizational-unit level and affect all principals in the account, making them a blunt instrument for limiting one user and potentially impacting other teams.

Create a separate IAM group for the analyst with policies granting only DynamoDB, RDS, and CloudWatch, and move the user into that group is less desirable because it increases group proliferation and requires moving users between groups as their duties change, whereas permissions boundaries let group membership stay stable.

Tips for the exam: identify whether the requirement is to restrict a single principal while keeping admin simplicity; that pattern suggests permissions boundaries rather than SCPs or cross-account isolation. Remember that permissions boundaries limit the maximum permissions a principal can obtain from identity policies, and that they do not replace identity policies but work in conjunction with them. Also remember that SCPs affect entire accounts/organizational units and are not scoped to individual IAM users in the same way.
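A minimal sketch of the boundary document itself: it caps the analyst's effective permissions at the three services regardless of what the operations group's identity policies grant. The statement ID is illustrative.

```python
# Permissions boundary: the MAXIMUM the analyst can do, regardless of
# what policies the shared operations group attaches.
permissions_boundary = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AnalystCeiling",  # illustrative Sid
            "Effect": "Allow",
            "Action": ["dynamodb:*", "rds:*", "cloudwatch:*"],
            "Resource": "*",
        }
    ],
}
```

Note the boundary grants nothing by itself; the analyst's effective permissions are the intersection of this document and the identity policies inherited from the group.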

Question 11

A health-tech analytics provider runs an Amazon RDS for PostgreSQL database in a VPC that has no internet gateway or NAT due to strict isolation controls. The team wants AWS Secrets Manager to rotate the database credentials every 45 days, but company policy prohibits using the managed rotation Lambda. An engineer deployed a custom Lambda function in private subnets to perform rotation and allowed database access from the function’s security group. When executed, the function cannot reach Secrets Manager and the rotation fails. What should the engineer do to enable rotation without exposing the VPC to the internet?

  • ✓ C. Add an interface VPC endpoint for AWS Secrets Manager and place the function in subnets that can resolve and reach the endpoint

The correct approach is to use Add an interface VPC endpoint for AWS Secrets Manager and place the function in subnets that can resolve and reach the endpoint. An interface endpoint powered by AWS PrivateLink exposes a private IP in your subnets for the Secrets Manager API, allowing the Lambda function to communicate with the service entirely over the AWS network without an internet gateway or NAT.

Deploy a NAT gateway in a public subnet and add a default route from the function subnets so the function can call Secrets Manager over the internet is not appropriate because it requires public egress, which conflicts with the organization’s no-internet policy and adds unnecessary exposure.

Create a VPC peering connection to the Secrets Manager service and update routes is invalid because peering connects VPCs, not VPCs to AWS services, so it cannot provide private access to Secrets Manager.

Set up an AWS Direct Connect link from the VPC to Secrets Manager and route the function’s traffic through it is incorrect since Direct Connect connects on-premises networks to AWS and does not provide a private path from a VPC to an AWS service endpoint.

When a function in private subnets must call an AWS service without internet access, think interface VPC endpoints (AWS PrivateLink) rather than NAT gateways, VPC peering, or Direct Connect.
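As a sketch, the parameters below are what an `ec2.create_vpc_endpoint` call for Secrets Manager might look like. The Region, VPC, subnet, and security group IDs are placeholders; the service-name format (`com.amazonaws.<region>.secretsmanager`) is the standard PrivateLink naming convention.

```python
# Hypothetical parameters for an interface VPC endpoint to Secrets Manager.
# All resource IDs are placeholders.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "ServiceName": "com.amazonaws.eu-central-1.secretsmanager",
    "VpcId": "vpc-0123456789abcdef0",
    "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    # Private DNS lets the SDK's default endpoint
    # (secretsmanager.<region>.amazonaws.com) resolve to the endpoint's
    # private IPs, so the Lambda code needs no changes.
    "PrivateDnsEnabled": True,
}

# With boto3: boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```

Also ensure the endpoint's security group allows inbound HTTPS (443) from the Lambda function's security group, or the calls will still time out.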

Question 12

Cross-region S3 replication of objects encrypted with different customer-managed KMS keys fails; what IAM/KMS permission and key policy changes allow encrypted objects to replicate? (Choose 3)

  • ✓ B. Grant the replication role kms:Decrypt, kms:ReEncryptFrom, kms:Encrypt, and kms:ReEncryptTo on both CMKs

  • ✓ D. Update the destination CMK key policy to allow the replication role kms:Encrypt

  • ✓ F. Modify the source CMK key policy to permit the replication role kms:Decrypt

The correct combination is to grant the replication role kms:Decrypt, kms:ReEncryptFrom, kms:Encrypt, and kms:ReEncryptTo on both CMKs, update the destination CMK key policy to allow the replication role kms:Encrypt, and modify the source CMK key policy to permit the replication role kms:Decrypt.

These changes allow the replication workflow to decrypt source objects, re-encrypt them for the target, and ensure the destination CMK permits the role to encrypt the replicated objects. The replication role must have the necessary KMS actions on both the source and destination CMKs and each CMK’s key policy must explicitly allow the role.

In addition, while S3 permissions such as s3:GetObjectVersion and s3:GetObjectVersionTagging may be required for replication to read object metadata, they do not fix KMS permission failures.

Explanation of incorrect choices: Grant the replication role kms:GenerateDataKey and kms:Encrypt on the destination CMK only is insufficient because the role still cannot decrypt the source ciphertext or perform re-encrypt operations on the source CMK.

Add s3:GetObjectVersion and s3:GetObjectVersionTagging to the replication role provides necessary S3 read access but does not grant KMS rights required to decrypt and re-encrypt KMS-encrypted objects.

Switch the destination bucket to S3-managed encryption (SSE-S3) avoids using a customer-managed CMK at the destination but may violate security requirements and still does not remove the need to decrypt the source if it is KMS-encrypted.

Focus on the replication workflow: replication must be able to decrypt the source object and then encrypt or re-encrypt to the destination key, so check both IAM role policies and each CMK’s key policy for the required KMS actions (kms:Decrypt, kms:ReEncryptFrom, kms:Encrypt, kms:ReEncryptTo). Also remember that CMKs are regional and key policies must explicitly allow cross-account or role access as appropriate.
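A sketch of the two key policy statements involved, assuming a replication role named `s3-replication-role` in account 111122223333 (both are placeholders): the source key must let the role decrypt, and the destination key must let it encrypt.

```python
REPLICATION_ROLE = "arn:aws:iam::111122223333:role/s3-replication-role"  # placeholder

# Statement added to the SOURCE key's policy: the role must read ciphertext
source_key_statement = {
    "Sid": "AllowReplicationRoleDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": REPLICATION_ROLE},
    "Action": ["kms:Decrypt", "kms:ReEncryptFrom"],
    "Resource": "*",  # in a key policy, "*" means this key
}

# Statement added to the DESTINATION key's policy: the role must write ciphertext
destination_key_statement = {
    "Sid": "AllowReplicationRoleEncrypt",
    "Effect": "Allow",
    "Principal": {"AWS": REPLICATION_ROLE},
    "Action": ["kms:Encrypt", "kms:ReEncryptTo"],
    "Resource": "*",
}
```

The replication role's own IAM policy needs the matching KMS actions scoped to both key ARNs; both sides (identity policy and key policy) must allow the call for it to succeed.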

Question 13

An international edtech platform stores documents, media, and logs in Amazon S3 across hundreds of buckets with millions of objects. The security analytics team needs to identify the 60 most frequently requested objects, the 25 largest downloads, and items with the slowest transfer times. They plan to run SQL queries and present the results in an interactive dashboard for stakeholders. Which services should a security engineer combine to meet these requirements? (Choose 2)

  • ✓ A. Enable Amazon S3 server access logging on the buckets

  • ✓ C. Amazon Athena with Amazon QuickSight

The required data must first be captured from S3 request activity and then queried with SQL and visualized. Enable Amazon S3 server access logging on the buckets to generate request logs that include object keys, bytes sent, and request timing, which are essential for finding the most accessed objects, the largest downloads, and the slowest transfers.

Use Amazon Athena with Amazon QuickSight to meet the analytics and dashboard needs. Athena can run standard SQL on the S3 log files without loading them elsewhere, and QuickSight turns Athena query results into interactive dashboards.

AWS Lambda with Amazon DynamoDB could transform or aggregate logs, but it does not provide SQL querying or BI-style dashboards by itself, making it a less suitable fit.

Amazon GuardDuty is for continuous threat detection and findings, not for detailed access pattern metrics or interactive dashboards based on SQL queries.

Amazon CloudWatch Logs Insights is designed for querying logs in CloudWatch Logs; S3 access logs do not land there by default, and this path is not the recommended solution for S3 access analytics.

When you see a need for SQL on S3 data plus an interactive dashboard, think S3 access logging for data capture, then Athena for queries and QuickSight to visualize.
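As a sketch of the Athena side, the query below finds the 60 most frequently requested objects, assuming an external table named `s3_access_logs` has already been defined over the server access log files (the table and column names follow the documented access-log fields but are assumptions here).

```python
# Assumed Athena query over an external table mapping the S3 server
# access log format; "s3_access_logs" and its columns are hypothetical.
top_requested_query = """
SELECT key, COUNT(*) AS request_count
FROM s3_access_logs
WHERE operation = 'REST.GET.OBJECT'
GROUP BY key
ORDER BY request_count DESC
LIMIT 60
"""
```

Analogous queries ordering by bytes sent (`LIMIT 25`) or by turn-around time cover the largest-download and slowest-transfer requirements, and QuickSight can use any of these Athena queries as a dataset for the dashboard.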

Question 14

An interface VPC endpoint to a Network Load Balancer in another account shows NLB outbound bytes but the client receives no traffic. What troubleshooting step helps identify and resolve the issue?

  • ✓ D. Verify security group rules and network ACLs in both VPCs allow required ports and protocols between the endpoint ENI and the NLB.

The correct troubleshooting step is Verify security group rules and network ACLs in both VPCs allow required ports and protocols between the endpoint ENI and the NLB. This is the most likely cause when the NLB shows outbound bytes but the client never receives packets because return traffic is commonly blocked by security groups or NACLs. Security groups are stateful so an incorrect rule on either side can prevent responses from flowing back, and NACLs are stateless so they must explicitly permit both directions.

Enable VPC Flow Logs on the subnet and ENI to inspect traffic metadata is useful to see whether traffic is observed on the ENI or subnet, but VPC Flow Logs may not always show full packet contents and there are documented PrivateLink edge cases; they are diagnostic but do not directly change network behavior.

Enable NLB access logs to inspect connection attempts and source IPs hitting the load balancer can show whether the load balancer is receiving and accepting connections and provide client IPs and connect times, but access logs do not reveal whether security groups or NACLs are blocking return traffic between the endpoint ENI and backend targets.

Replace PrivateLink with a Transit Gateway for cross-account routing is an architectural alternative, not a focused troubleshooting step. Migrating to a Transit Gateway is a major change and does not address immediate connectivity misconfigurations with PrivateLink and NLB.

Start with simple network-policy checks first—confirm security groups and NACLs on both sides permit the required ports and that rules reference the correct CIDR ranges or security group IDs. Use VPC Flow Logs and NLB access logs to gather evidence of where packets are seen. Remember that security groups are stateful and NACLs are stateless, so both ingress and egress rules matter for NACLs. Avoid assuming control-plane logs like CloudTrail contain packet-level network information; CloudTrail records API calls, not network flows.

Question 15

Helios Analytics has introduced a security policy that requires the security response team to receive an alert every time the AWS account root user signs in to the AWS Management Console. What is the most efficient way to implement this requirement?

  • ✓ C. Use an EventBridge rule to filter ConsoleLogin events where the user identity is Root and send notifications to an SNS topic

The most direct and minimal solution is Use an EventBridge rule to filter ConsoleLogin events where the user identity is Root and send notifications to an SNS topic. CloudTrail management events deliver ConsoleLogin records that EventBridge can match in near real time, and SNS fans out alerts to the security team. Ensure CloudTrail management events are enabled so the sign-in event reaches EventBridge.

Store CloudTrail logs in S3, query them with Athena using a scheduled Lambda, and publish to an SNS topic when a root console sign-in is found is operationally heavier and slower, adding data-at-rest analytics where simple event matching suffices.

Create a CloudWatch Events rule that matches any account root user API activity and invoke a Lambda function that posts to an SNS topic is broader than needed and adds Lambda; EventBridge is the recommended path and you only need the specific root ConsoleLogin event.

Forward VPC Flow Logs to SQS, process with a Lambda function, and notify an SNS topic if a root login is detected cannot work because VPC Flow Logs do not contain identity or console sign-in data.

For sign-in alerts, think CloudTrail management events feeding EventBridge with a pattern for ConsoleLogin and userIdentity.type = Root, then notify via SNS; avoid batch analytics when a simple event rule will do.
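The event pattern for such a rule can be sketched as below; the `source` and `detail-type` values match how console sign-in events are delivered via CloudTrail, though you should confirm them against the current EventBridge documentation.

```python
# EventBridge event pattern matching root-user console sign-ins.
root_login_pattern = {
    "source": ["aws.signin"],
    "detail-type": ["AWS Console Sign In via CloudTrail"],
    "detail": {
        "eventName": ["ConsoleLogin"],
        "userIdentity": {"type": ["Root"]},
    },
}
```

Attaching an SNS topic as the rule's target completes the alerting path with no Lambda or batch analytics required.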


Question 16

Which CloudFront configuration gives strong protection against man-in-the-middle attacks with minimal operational overhead?

  • ✓ C. Apply the managed SecurityHeadersPolicy response headers policy to CloudFront

The correct choice is Apply the managed SecurityHeadersPolicy response headers policy to CloudFront because it bundles HSTS and other security headers and can be attached to a distribution with minimal configuration and no custom edge code, delivering strong protection against man-in-the-middle attacks while keeping operational overhead low.

Attach an ACM TLS certificate and redirect HTTP to HTTPS secures transport but does not automatically add HSTS or other security headers for long-lived client enforcement.

Use AWS WAF to enforce TLS and add security response headers is incorrect because AWS WAF inspects and blocks requests but does not perform TLS termination, redirects, or injection of response headers.

Add HSTS headers via a Lambda@Edge function works technically but is less desirable because it requires developing, deploying, and maintaining edge functions and can add complexity and potential latency.

On questions asking for the strongest protection with the least operational overhead, prefer managed features that provide built-in protection (for example managed response headers policies) over custom code. Also distinguish between enforcing TLS (ACM + redirects) and publishing long-lived client directives (HSTS headers) and remember that WAF is for filtering rather than response modification.

Question 17

An analytics service at Nebula Retail Group runs on Amazon EC2 instances in two private subnets and communicates with one Amazon SQS queue and a DynamoDB table that carry highly sensitive records. The security team requires that traffic between the instances and these AWS services remain on the AWS private network rather than the public internet. What actions should the engineer implement to satisfy this requirement? (Choose 3)

  • ✓ A. Create an interface VPC endpoint for Amazon SQS in the subnets hosting the instances

  • ✓ C. Apply restrictive endpoint policies on the VPC endpoints that allow only the application’s DynamoDB table and SQS queue ARNs

  • ✓ E. Provision a gateway VPC endpoint for DynamoDB in the VPC

Use VPC endpoints to keep service traffic private. Create an interface VPC endpoint for Amazon SQS in the subnets hosting the instances ensures the EC2-to-SQS path stays on the AWS backbone via PrivateLink. Provision a gateway VPC endpoint for DynamoDB in the VPC provides private connectivity from the VPC to DynamoDB without internet routing. To enforce least privilege, Apply restrictive endpoint policies on the VPC endpoints that allow only the application’s DynamoDB table and SQS queue ARNs so only the required resources are reachable.

Establish an AWS Direct Connect link to Amazon S3 for private service access is irrelevant to SQS and DynamoDB and does not address VPC-to-service privacy for these services.

Route traffic through a NAT gateway so instances can reach SQS and DynamoDB without public exposure still relies on internet egress and fails the private connectivity requirement.

Update the EC2 instance IAM role to allow outbound access to the interface endpoints changes permissions, not network paths; endpoint policies and the endpoints themselves control the traffic.

Remember: S3 and DynamoDB use gateway endpoints; most other AWS services, including SQS, use interface endpoints via PrivateLink. Apply endpoint policies to restrict access to specific resource ARNs.
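A sketch of a restrictive endpoint policy for the SQS interface endpoint: only the application's queue is reachable through the endpoint. The queue ARN, Region, and account ID are placeholders.

```python
# Hypothetical endpoint policy attached to the SQS interface VPC endpoint.
# Only the named queue can be used through this endpoint.
sqs_endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",  # any principal, but only against this resource
            "Action": [
                "sqs:SendMessage",
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
            ],
            "Resource": "arn:aws:sqs:eu-west-1:111122223333:orders-queue",
        }
    ],
}
```

An analogous policy on the DynamoDB gateway endpoint would name only the application's table ARN, completing the least-privilege posture across both endpoints.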

Question 18

Which IAM policy must be provided so an EC2 instance can assume an IAM role?

  • ✓ B. Role trust policy permitting the EC2 service principal to assume the role

The correct answer is the Role trust policy permitting the EC2 service principal to assume the role. A role trust policy (also called the assume-role policy document) explicitly lists which principals are allowed to assume the role, for example the EC2 service principal (ec2.amazonaws.com) for EC2 instance roles. Without this trust policy granting the principal permission to assume the role, the instance cannot obtain temporary credentials and assume the role.

The Permissions policy attached to the role granting S3 read/write is incorrect because permissions policies only define what the role can do after it is assumed; they do not allow a principal to assume the role.

The Instance profile association linking the role to the EC2 instance is incorrect because an instance profile is simply the container used to attach a role to an instance; the assume-role permission still comes from the role trust policy.

The IAM permission boundary that limits role permissions is incorrect because permission boundaries restrict the maximum permissions a role can have but do not enable or grant assume-role capability.

When a question asks which document allows a principal or service to assume an IAM role, think “trust policy” or “assume-role policy document”. Remember the separation of concerns: trust policy controls who can assume the role; permissions policies control what the role can do after being assumed.
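The trust policy in question is the standard document for EC2 instance roles and can be sketched as:

```python
# Trust policy (assume-role policy document) for an EC2 instance role:
# it names WHO may assume the role, not what the role may do.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
```

Permissions policies attached to the same role then govern what the assumed credentials can do, which is the separation of concerns the tip describes.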

Question 19

A fintech startup is launching a payments platform that will use AWS KMS for encryption. The team will create customer managed keys in KMS using AWS-generated key material and will not import any external key material. They require the keys to rotate every 365 days. What is the most appropriate approach to achieve this?

  • ✓ B. Enable automatic annual rotation on each customer managed KMS key

The correct approach is to turn on KMS automatic rotation for the customer managed keys. Enable automatic annual rotation on each customer managed KMS key ensures KMS generates new backing key material every year while retaining the same logical key and its aliases, grants, and key ID.

AWS CloudHSM is unrelated to KMS automatic key rotation and does not configure rotation for KMS customer managed keys.

Use AWS managed KMS keys so AWS performs rotation for you conflicts with the requirement to use customer managed keys and you cannot change rotation configuration for AWS managed keys even though they rotate annually.

Modify the customer managed key policy to turn on automatic rotation is wrong because policies control authorization, not key rotation; rotation is toggled as a KMS key property.

For customer managed KMS keys, enable automatic key rotation in KMS. AWS managed keys rotate automatically but you cannot enable or disable that setting, and key policies do not control rotation.
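As a sketch, enabling rotation is a single API call per key; the key ARN below is a placeholder. KMS's automatic rotation period defaults to annual, which satisfies the 365-day requirement.

```python
# Hypothetical parameters for enabling automatic rotation on a
# customer managed key; the key ARN is a placeholder.
rotation_params = {
    "KeyId": "arn:aws:kms:eu-west-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
}

# With boto3:
#   kms = boto3.client("kms")
#   kms.enable_key_rotation(**rotation_params)   # rotation is a key property,
#   kms.get_key_rotation_status(**rotation_params)  # not a key policy setting
```

Note the call works only for customer managed keys with AWS-generated key material, which matches the scenario's constraints.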

Question 20

What VPC instance setting must be changed so a virtual firewall can forward traffic when it is not the packet source or destination?

  • ✓ D. Disable the instance source/destination check on the firewall ENI

Disabling the instance source/destination check is required because EC2 instances by default drop packets that are not addressed to the instance, so the instance cannot act as a router or inline firewall until that check is disabled. Disable the instance source/destination check on the firewall ENI is correct because it permits the instance to receive and forward transit traffic when it is neither the source nor the destination.

Use VPC Traffic Mirroring to send copies of traffic to the firewall is wrong because mirroring only provides copied packets for analysis and does not enable the appliance to forward the original flows.

Add subnet routes pointing to the firewall ENI without disabling source/destination check is wrong because route table entries alone do not override the instance source/destination enforcement; the check must be disabled for forwarding to work.

Place the firewall in a public subnet and route via the internet gateway is wrong because public subnet placement or an internet gateway does not enable internal VPC forwarding by itself.

Tip: For questions about making an EC2 instance forward packets that are not its own, think of the EC2 source/destination check setting, and check route tables and security group/NACL rules. Use the AWS Console, CLI (modify-instance-attribute or modify-network-interface-attribute), or SDK to disable the check.
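A sketch of the API call to disable the check on the firewall instance; the instance ID is a placeholder.

```python
# Hypothetical parameters for EC2 ModifyInstanceAttribute to let the
# firewall instance forward traffic it did not originate or terminate.
source_dest_params = {
    "InstanceId": "i-0123456789abcdef0",   # placeholder instance ID
    "SourceDestCheck": {"Value": False},   # disable the source/destination check
}

# With boto3: boto3.client("ec2").modify_instance_attribute(**source_dest_params)
```

For a multi-ENI appliance, the equivalent per-interface call is `modify_network_interface_attribute` with the same `SourceDestCheck` structure against the forwarding ENI.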

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.