It confirms that you can explain the value of AWS, understand the shared responsibility model, follow security best practices, navigate billing and pricing, and recognize appropriate AWS services for common use cases.
The AWS Cloud Practitioner exam includes multiple-choice and multiple-response questions like the ones simulated later in this article. Your result appears as a scaled score between 100 and 1,000, and the minimum passing score is 700, which is lower than the 720 required on the AWS Associate exams and the 750 needed on the AWS Professional and Specialty exams.
The scoring model is compensatory, which means you pass based on your overall performance rather than on each section individually. In addition to scored questions the exam may include unscored items that AWS uses to evaluate future content.
The CLF-C02 exam is organized into four domains. If you want to pass the exam, you need to get comfortable with all of the AWS Cloud Practitioner exam topics, especially cloud concepts.
Cloud Concepts accounts for 24 percent, Security and Compliance for 30 percent, Cloud Technology and Services for 34 percent, and Billing, Pricing, and Support for 12 percent. You may even see a few references to AWS AI and ML technologies, although not nearly as much as you would on the AWS AI Practitioner exam.
A regional credit union uses a SAML 2.0 identity provider to give employees single sign-on to multiple third-party SaaS applications. The organization wants to move this workforce sign-in and federation capability to AWS with a fully managed service that centralizes access to external apps. Which AWS service should they use?
A digital media startup is implementing an event-driven, serverless pipeline that invokes several AWS services in both sequence and parallel. The team needs a visual designer that shows real-time status and history for roughly 30 stages and provides built-in retries and error handling to coordinate the flow. Which AWS service should they use to orchestrate this workflow?
An ad-tech startup runs fault-tolerant analytics on Amazon EC2 and wants the lowest possible cost, even if AWS may briefly reclaim the capacity. Which EC2 purchasing option can be interrupted by AWS when capacity is needed elsewhere?
A regional media startup plans to move its applications from its own server room to AWS. Which statement best captures a primary cost benefit they would gain by adopting the AWS Cloud?
❏ A. Shift to a capital expenditure purchasing model
❏ B. Pay only for the resources you actually consume
❏ C. Rely on commitment-based discounts such as Savings Plans as the default pricing model
❏ D. Transfer all IT operations and responsibilities to AWS
An architecture firm needs a reliable private network link from its corporate data center to workloads running in AWS. The team wants predictable throughput and steady latency for ongoing replication and daily operations. Which AWS service meets these requirements?
An engineer at BrightCart Analytics needs an automated assessment that scans Amazon EC2 instances across two VPCs for unintended network exposure and known software vulnerabilities, and then produces a consolidated report. Which AWS service should be used to generate this assessment?
An early-stage genomics startup plans to run short, bursty data processing on Amazon EC2 where tasks can be stopped or reclaimed by AWS without affecting results. Which purchasing option should they select to achieve the lowest compute cost for these interruption-tolerant workloads?
A streaming media startup hosts static images and file downloads in an Amazon S3 bucket and serves customers across several continents. Which AWS service should be used so users around the world receive this content with the lowest latency?
A regional health clinic licenses a browser-based appointment system that the vendor operates from end to end. Staff simply sign in and use the features without managing servers, operating systems, databases, or upgrades. Which cloud computing model does this service represent?
A systems administrator at a regional hospital is standardizing multi-factor sign-in for AWS IAM users. They want an approach that uses a physical key that plugs into a computer’s USB port and is tapped after entering the username and password. Which authentication method should they choose?
A reliability team at a ride-sharing startup needs to ingest application and system logs and search them within seconds to troubleshoot incidents, create operational dashboards, and monitor performance. Which AWS service is the best fit for this type of operational analytics?
A nonprofit arts network is launching a new web portal in one AWS Region to serve visitors across six continents. Which AWS services should the team use to reduce latency and boost transfer speeds for this global audience? (Choose 2)
NorthStar Clinics stores about 45 TB of patient records across more than 80 Amazon S3 buckets and needs an automated, fully managed way to discover, classify, and help protect sensitive data such as PII at scale. Which AWS service should the security team use?
Polaris Learning, an edtech startup, is moving a prototype service to AWS to shorten release cycles and test features quickly. Which AWS capabilities most directly accelerate their ability to experiment and deliver changes faster? (Choose 2)
A regional e-commerce marketplace operates multiple production relational databases on Amazon RDS. The database engine vendor releases security updates roughly every quarter, and the team wants these updates applied with minimal manual work and predictable timing. What is the most efficient way to ensure the databases receive these security patches?
❏ A. Configure an AWS Config rule to check each RDS instance for the required patch level
❏ B. Sign in to each DB instance every quarter and manually download and install the vendor’s patches
❏ C. Enable automatic minor version updates and define a maintenance window in Amazon RDS
❏ D. Use AWS Systems Manager Patch Manager to schedule database engine patching across the RDS fleet
Riverton Analytics is reviewing AWS Support tiers for a new workload. Under the Basic Support plan that is included for all AWS accounts, which features are available to customers? (Choose 2)
❏ A. Infrastructure Event Management
❏ B. Access to the AWS Service Health Dashboard
❏ C. Direct, one-on-one help for account and billing inquiries
❏ D. Prescriptive use-case guidance for architectures
❏ E. AWS Support client-side diagnostic tools
An engineer at Helios Motors needs to balance large volumes of TCP traffic and requires an Elastic Load Balancer that works at the OSI Layer 4 connection level. Which load balancer type should they choose?
A developer at Northwind Robotics is configuring a build server to run automation using the AWS CLI and SDKs against multiple AWS services. Which credential type should be set up so the scripts can authenticate programmatic API requests?
A fast-growing edtech startup runs a read-heavy API on Amazon DynamoDB and wants to add an in-memory layer to speed up repeated reads without changing the application’s data model. Which AWS service should they choose to add this cache to DynamoDB?
NovaRetail, an online marketplace, is deciding whether to host a third-party analytics suite on Amazon EC2 instances or adopt an AWS fully managed alternative. What is the primary advantage of choosing a fully managed service in this case?
❏ A. Greater control and flexibility
❏ B. Backups are unnecessary
❏ C. Lower operational burden
❏ D. Automatic multi-Region replication by default
NorthBridge Capital wants a managed way to ingest system and application log files from about 50 Amazon EC2 instances and several on-premises servers, search those logs, create metric filters, and trigger alarms on patterns within roughly one minute to speed up incident diagnosis. Which AWS service should they use?
At Pine Harbor Robotics, a new administrator needs to understand which credentials can be directly associated with an IAM user for day-to-day access. Which credentials can be provided to the IAM user? (Choose 2)
❏ A. An Amazon EC2 key pair
❏ B. An access key ID with its matching secret access key
❏ C. A password for signing in to the AWS Management Console
❏ D. An AWS Certificate Manager (ACM) certificate
❏ E. A local Linux server login password
BlueWave Robotics wants to lower Amazon EC2 costs by finding instances that have shown consistently low CPU use and little network traffic during the past 30 days. Which AWS service or feature can quickly highlight these underutilized instances to guide cost savings?
An online education startup named Kestrel Learning uploads high-definition lecture videos and audio files up to 25 GB each to a single Amazon S3 bucket in us-east-2 from instructors in 14 countries. Which AWS solution should the company use to consistently accelerate these long-distance uploads to the bucket?
A startup called BlueLark Media is building a microservices application that must notify customers by both SMS text messages and email from the same managed service. Which AWS service should the team choose to send these notifications?
An engineer at BlueWave Analytics needs to enable programmatic access to AWS using the CLI and SDKs. They plan to create an Access Key ID and a Secret Access Key for ongoing use. These credentials are linked to which IAM entity?
Which cost-related benefits does moving to AWS provide for a mid-size media company that wants to manage monthly expenses efficiently without long-term contracts? (Choose 2)
❏ A. Granular, pay-as-you-go billing for only the usage you consume
❏ B. Itemized electricity or power charges appear separately on AWS bills
❏ C. Ability to shut down resources when idle so you do not pay while they are off
❏ D. One-time flat fees for on-demand capacity instead of recurring charges
❏ E. AWS Cost Explorer
A genomics startup stores rarely accessed research archives in Amazon S3 Glacier and occasionally needs to pull a small archive urgently so the data becomes available in about 1 to 5 minutes. Which retrieval tier should they use?
NovaPixel Games wants to track AWS spending trends and obtain Savings Plans recommendations derived from roughly the last 90 days of usage to choose an optimal commitment. As the Cloud Practitioner, which AWS service should be used to analyze costs and get those recommendations?
A mobile augmented reality startup needs to run compute and store data within 5G carrier networks so that users on cellular connections experience single digit millisecond latency near edge locations. Which AWS service should they choose?
A logistics startup runs EC2 instances in a private subnet that need to read and write to a DynamoDB table named Orders2025. To follow best practices and avoid storing long-term credentials on the servers, which AWS identity should the instances use to obtain permissions to the table?
NorthBay Data is moving about 18 internal services into Docker containers. They want to orchestrate these containers while maintaining full control of and SSH access to the EC2 instances that run them. Which AWS service should they use to satisfy these needs?
❏ A. AWS Fargate
❏ B. Amazon Elastic Container Registry (Amazon ECR)
❏ C. Amazon Elastic Container Service (Amazon ECS)
A media tech startup named VistaPixels wants to add image analysis using an AWS service that is delivered as a complete, provider-managed application accessed through an API, so the team does not operate servers or maintain runtimes. Which AWS service best fits the Software as a Service model for this need?
A regional news streaming startup is designing a highly available web tier on AWS and is evaluating services that enable scalable application architectures. Which statement accurately describes Elastic Load Balancing in this context?
❏ A. Automatically scales the fleet of Amazon EC2 instances to meet fluctuating demand
❏ B. Provides a dedicated private network connection from an on-premises facility to AWS without traversing the public internet
❏ C. Distributes incoming client traffic across multiple targets such as Amazon EC2 instances, containers, and IP addresses
❏ D. Offers a highly available and scalable Domain Name System service
Answers to the Certification Exam Simulator Questions
A regional credit union uses a SAML 2.0 identity provider to give employees single sign-on to multiple third-party SaaS applications. The organization wants to move this workforce sign-in and federation capability to AWS with a fully managed service that centralizes access to external apps. Which AWS service should they use?
✓ B. AWS IAM Identity Center
The correct choice is AWS IAM Identity Center. It provides a fully managed workforce single sign on service that supports SAML 2.0 and centralizes access to external SaaS applications so employees can sign in once and access multiple third party apps and AWS accounts.
AWS IAM Identity Center works as a central identity broker and can integrate with external identity providers or manage identities within AWS. It includes built in SAML support and connectors that make it the appropriate service for moving workforce federation and centralized access to third party SaaS tools to AWS.
Amazon Cognito is focused on customer and application end user authentication for web and mobile applications and it is not intended to be a centralized workforce SSO gateway for many external SaaS applications.
AWS Identity and Access Management (IAM) is used to control permissions and roles for AWS resources and it does not serve as a managed SSO solution for federating users into third party business applications.
AWS Command Line Interface (AWS CLI) is a client tool for interacting with AWS APIs from a local environment and it does not provide identity federation or single sign on functionality.
When a question asks about workforce single sign on to external SaaS using SAML choose AWS IAM Identity Center and use Amazon Cognito when the focus is on customer or app end user authentication.
A digital media startup is implementing an event-driven, serverless pipeline that invokes several AWS services in both sequence and parallel. The team needs a visual designer that shows real-time status and history for roughly 30 stages and provides built-in retries and error handling to coordinate the flow. Which AWS service should they use to orchestrate this workflow?
✓ C. AWS Step Functions
The correct choice is AWS Step Functions. It provides a visual workflow designer that shows execution status and history for each stage and it supports parallel branches while offering built in retries and error handling to coordinate complex serverless pipelines.
AWS Step Functions lets you model the workflow as a state machine so each step is tracked and visible in the console. The service integrates natively with many AWS services and it supports Parallel states for concurrent work along with Retry and Catch configurations for error handling and automatic retries.
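To make this concrete, here is a minimal sketch of an Amazon States Language definition with a Parallel state and a Retry rule, created through boto3. The function and role ARNs and all names are placeholders, not part of the scenario:

```python
import json
import boto3

# Two Lambda tasks run as parallel branches; the first retries on
# transient task failures before the workflow is marked failed.
definition = {
    "StartAt": "ProcessInParallel",
    "States": {
        "ProcessInParallel": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "ResizeImage", "States": {"ResizeImage": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ResizeImage",  # placeholder
                    "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}],
                    "End": True}}},
                {"StartAt": "ExtractMetadata", "States": {"ExtractMetadata": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ExtractMetadata",  # placeholder
                    "End": True}}},
            ],
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
machine = sfn.create_state_machine(
    name="media-pipeline",                                       # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsRole",  # placeholder role
)
print(machine["stateMachineArn"])
```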
Amazon EventBridge is an event bus for routing and filtering events and it does not provide a visual, stateful orchestrator that shows step by step history for a multi stage workflow.
AWS Lambda runs code without provisioning servers and it is not a workflow orchestration tool. Lambda lacks a native visual designer and does not track step level state or history across a multi step process.
Amazon SQS is a messaging queue used to decouple components and buffer messages and it does not orchestrate multi step workflows or provide built in retries and visual execution history.
Look for phrases like visual workflow or step history with built in retries to identify a workflow orchestrator rather than an event bus, function runtime, or message queue.
An ad-tech startup runs fault-tolerant analytics on Amazon EC2 and wants the lowest possible cost, even if AWS may briefly reclaim the capacity. Which EC2 purchasing option can be interrupted by AWS when capacity is needed elsewhere?
✓ C. Spot Instances
Spot Instances is the correct choice because these instances run on spare Amazon EC2 capacity at steep discounts and can be interrupted by AWS when that capacity is needed elsewhere, usually after a short notice period.
Spot Instances are designed for fault tolerant and flexible workloads such as batch processing and analytics where brief interruptions are acceptable. AWS reclaims the underlying capacity when demand rises and provides a short warning so you can checkpoint work or handle termination according to your chosen interruption behavior.
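As an illustration, a Spot Instance can be requested through the standard RunInstances API by adding market options. This is a minimal boto3 sketch; the AMI ID and instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Request spare capacity at the Spot price and terminate on interruption.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"InstanceInterruptionBehavior": "terminate"},
    },
)
print(response["Instances"][0]["InstanceId"])
```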
On-Demand Instances are not interrupted by AWS once they are running and you control when to stop or terminate them. They provide predictable availability and you pay for compute by the hour or second depending on the instance.
Standard Reserved Instances are a pricing option that gives a discount for committed usage and they do not change how or when AWS interrupts instances. They affect billing and not instance reclamation.
Convertible Reserved Instances offer discounted pricing with the flexibility to change instance attributes within certain rules and they are likewise a billing construct. They do not cause AWS to reclaim or interrupt your running instances.
When the question emphasizes lowest cost and possible interruption think Spot Instances. If the scenario needs predictable availability think On-Demand or reserved pricing instead.
A regional media startup plans to move its applications from its own server room to AWS. Which statement best captures a primary cost benefit they would gain by adopting the AWS Cloud?
✓ B. Pay only for the resources you actually consume
The correct choice is Pay only for the resources you actually consume. This option best captures the primary cost benefit the regional media startup would gain by moving applications to AWS.
With a pay-as-you-go model, costs scale with actual demand and the company avoids buying excess capacity upfront. AWS bills many services by usage, such as compute hours, storage bytes, and data transfer, so expenses align with consumption and you can scale resources up or down to match traffic patterns.
Shift to a capital expenditure purchasing model is incorrect because adopting cloud typically shifts spending from capital expenditure to operating expenditure and not the other way around. Purchasing more hardware up front is not a cloud advantage.
Rely on commitment-based discounts such as Savings Plans as the default pricing model is incorrect because commitment discounts can reduce cost but they are optional optimizations and not the fundamental benefit of the cloud. Savings Plans are a tool to lower long term spend when usage is predictable.
Transfer all IT operations and responsibilities to AWS is incorrect because AWS uses a shared responsibility model and customers continue to manage many aspects such as their applications data and configurations.
An architecture firm needs a reliable private network link from its corporate data center to workloads running in AWS. The team wants predictable throughput and steady latency for ongoing replication and daily operations. Which AWS service meets these requirements?
✓ B. AWS Direct Connect
AWS Direct Connect is the correct option because it provides a dedicated private connection between the corporate data center and AWS that supports predictable throughput and steady latency for replication and daily operations.
AWS Direct Connect bypasses the public internet to deliver consistent bandwidth and lower, more stable latency. It supports dedicated and partner hosted connections and link aggregation so you can provision and manage capacity for production replication and ongoing traffic.
AWS VPN uses encrypted tunnels over the public internet so performance can vary and bandwidth is not guaranteed. That variability makes it less suitable when consistent throughput and steady latency are required.
Amazon CloudFront is a content delivery network that accelerates delivery at the edge and it does not establish a private on premises to AWS network link for replication or steady latency.
Amazon Connect is a cloud contact center service and it does not provide hybrid network connectivity between a corporate data center and AWS.
When a question requires a private link with predictable performance choose AWS Direct Connect. If the scenario highlights quick setup over the internet or encrypted tunnels without strict performance needs then consider AWS VPN.
An engineer at BrightCart Analytics needs an automated assessment that scans Amazon EC2 instances across two VPCs for unintended network exposure and known software vulnerabilities, and then produces a consolidated report. Which AWS service should be used to generate this assessment?
✓ C. Amazon Inspector
The correct choice is Amazon Inspector because it is designed to perform automated assessments of Amazon EC2 instances for known software vulnerabilities and to evaluate network reachability across VPCs so it can identify unintended exposure and produce consolidated findings and reports.
Amazon Inspector runs vulnerability scans that detect common vulnerabilities and exposures and it analyzes reachable network paths to show which services or ports are exposed. It aggregates findings into actionable reports so teams can prioritize remediation across multiple VPCs and instances.
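Once Inspector is enabled, findings can also be pulled programmatically. A minimal boto3 sketch, filtering to active high and critical severity findings:

```python
import boto3

inspector = boto3.client("inspector2")

# List active findings of high or critical severity.
response = inspector.list_findings(
    filterCriteria={
        "findingStatus": [{"comparison": "EQUALS", "value": "ACTIVE"}],
        "severity": [
            {"comparison": "EQUALS", "value": "HIGH"},
            {"comparison": "EQUALS", "value": "CRITICAL"},
        ],
    },
    maxResults=50,
)
for finding in response["findings"]:
    print(finding["severity"], finding["title"])
```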
AWS Config evaluates and records resource configurations and checks compliance against rules and baselines. It does not perform host level vulnerability scanning or network reachability analysis of running EC2 instances.
Amazon GuardDuty analyzes telemetry such as VPC Flow Logs and CloudTrail to detect suspicious or malicious activity. It is a threat detection service and not a vulnerability scanner that produces host vulnerability and reachability reports.
Amazon Macie focuses on discovering and protecting sensitive data in Amazon S3. It does not assess EC2 instances for software vulnerabilities or unintended network exposure.
When a question pairs EC2 vulnerability scanning with network reachability across VPCs think Inspector. Remember that GuardDuty is for threat detection, Macie is for S3 data discovery, and AWS Config is for configuration compliance.
Within AWS’s global network, what best describes an Edge location used to deliver content to viewers?
✓ C. A CloudFront CDN point of presence that caches content near viewers
The correct option is A CloudFront CDN point of presence that caches content near viewers. An edge location is a CloudFront point of presence that caches and delivers content closer to end users to reduce latency.
Edge locations function as cache points in CloudFront’s global network and they serve static and dynamic content from the POP nearest the viewer so requests have lower latency and place less load on the origin. When you see the term edge location you should associate it with CloudFront POPs and caching rather than general network connectivity.
An AWS Direct Connect location is incorrect because it provides dedicated private network connections between your on prem network and AWS and it is not a CloudFront POP or a caching service.
A public Amazon S3 endpoint is incorrect because it provides access to S3 objects and does not act as a CDN cache that places content near viewers.
A virtual private gateway for a Site-to-Site VPN is incorrect because it is a VPN termination component used for VPC connectivity and it is unrelated to content delivery or caching.
Edge location almost always refers to a CloudFront POP used for caching near users so choose the CloudFront CDN option when questions ask about edge locations and content delivery.
An early-stage genomics startup plans to run short, bursty data processing on Amazon EC2 where tasks can be stopped or reclaimed by AWS without affecting results. Which purchasing option should they select to achieve the lowest compute cost for these interruption-tolerant workloads?
✓ B. Spot Instance
The correct choice is Spot Instance. Spot Instance provides the lowest compute cost for interruption tolerant workloads because AWS sells unused EC2 capacity at deep discounts and can reclaim instances when needed.
Spot Instance is ideal for short lived, bursty jobs that can be stopped and restarted without impacting results. These instances let you reduce compute spend by using spare capacity and you can combine them with Auto Scaling, Spot Fleets, or fault tolerant application design to maintain throughput despite interruptions.
Reserved Instance (RI) is not appropriate because it requires a long term commitment and it targets predictable, steady state usage rather than transient, interruptible runs.
On-Demand Instance is not optimal because it provides flexible pay as you go pricing but it does not offer the deep discounts that Spot Instance provides for interruption tolerant tasks and it will cost significantly more for the same compute.
Dedicated Host is not suitable because it allocates a physical server for compliance and licensing needs, it carries higher cost, and it is unnecessary for ephemeral, fault tolerant processing.
Look for words like interruptible, flexible, or fault-tolerant and choose Spot Instances to minimize cost for those workloads. Use RIs or Savings Plans for predictable long term demand and Dedicated Hosts only for strict licensing or compliance.
A streaming media startup hosts static images and file downloads in an Amazon S3 bucket and serves customers across several continents. Which AWS service should be used so users around the world receive this content with the lowest latency?
✓ C. Amazon CloudFront
Amazon CloudFront is the correct choice because it is a global content delivery network that caches objects at edge locations near users and it can use an Amazon S3 bucket as the origin for static images and file downloads.
CloudFront reduces latency by serving cached copies from edge locations around the world and it fetches content from the S3 origin when needed, which makes static assets load faster for users across continents.
AWS Elastic Beanstalk is not suitable because it is a platform for deploying and managing application environments and it does not provide a global edge cache for static S3 content.
AWS Lambda provides serverless compute and can run backend code but it does not provide edge caching or native CDN distribution for static files.
AWS Global Accelerator improves network performance by routing traffic over the AWS global network to regional endpoints but it does not cache S3 objects at edge locations and it is not used to distribute static website assets from S3.
For static files in S3 that must be fast worldwide think Amazon CloudFront for edge caching and use AWS Global Accelerator when you need faster routing to regional application endpoints without caching.
A regional health clinic licenses a browser-based appointment system that the vendor operates from end to end. Staff simply sign in and use the features without managing servers, operating systems, databases, or upgrades. Which cloud computing model does this service represent?
✓ B. Software as a Service (SaaS)
The correct choice is Software as a Service (SaaS).
Software as a Service (SaaS) delivers a complete, ready to use application that the vendor hosts and manages and the clinic accesses through a browser. The provider is responsible for servers, operating systems, databases, scaling, security updates and application upgrades so staff only sign in and use the service.
Infrastructure as a Service (IaaS) is incorrect because IaaS supplies virtualized compute, storage and networking while customers install and manage operating systems, middleware and applications. That model requires administrative responsibility for the OS and application stack which does not match a fully managed browser application.
Platform as a Service (PaaS) is incorrect because PaaS provides a managed runtime and services for deploying applications while customers still develop, deploy and operate their own apps. The clinic is consuming a finished vendor application so PaaS does not apply.
Function as a Service (FaaS) is incorrect because FaaS focuses on running discrete, event driven functions rather than delivering a full end user application. FaaS abstracts servers for code execution but it does not equate to a ready to use browser based appointment system.
Look for phrases like fully managed and access via browser to identify SaaS. Confirm whether customers manage the OS or application to rule out IaaS and PaaS.
A systems administrator at a regional hospital is standardizing multi-factor sign-in for AWS IAM users. They want an approach that uses a physical key that plugs into a computer’s USB port and is tapped after entering the username and password. Which authentication method should they choose?
✓ B. FIDO U2F security key
The correct choice is FIDO U2F security key. This option matches a physical key that plugs into a computer USB port and is tapped after entering the username and password.
FIDO U2F security key implements the FIDO U2F standard, a predecessor to FIDO2 and WebAuthn, and provides a phishing-resistant second factor. It uses a hardware security key that performs cryptographic authentication with a button press or tap and does not require typing time-based codes.

Hardware MFA device is a standalone token that displays 6 digit time based codes which the user must type into the sign in page. It is not a USB tap key so it does not match the described workflow.
Virtual MFA application runs on a smartphone or device and generates time based codes in an app. It requires entering a code and does not involve plugging in or tapping a USB device.
SMS text message MFA sends a one time code by text message which is not delivered via a USB key and is less resistant to interception or SIM based attacks compared with a security key.
A reliability team at a ride-sharing startup needs to ingest application and system logs and search them within seconds to troubleshoot incidents, create operational dashboards, and monitor performance. Which AWS service is the best fit for this type of operational analytics?
✓ B. Amazon OpenSearch Service
Amazon OpenSearch Service is correct because it is a managed and scalable search and analytics engine that supports near real time log ingestion, indexing, and fast querying for troubleshooting and operational dashboards.
OpenSearch provides full text search, aggregations, and visualization through OpenSearch Dashboards and it integrates with ingestion services such as Amazon Kinesis Data Firehose and CloudWatch Logs. These capabilities make OpenSearch well suited to ingest streaming application and system logs and to deliver low latency ad hoc searches and operational dashboards for observability and incident response.
Amazon QuickSight focuses on business intelligence and reporting and it does not provide the log indexing or fast search primitives needed for near real time operational log analysis.
Amazon EMR is designed for batch big data processing with frameworks such as Spark and Hadoop and it is not optimized for low latency interactive searches on streaming logs.
Amazon Athena runs interactive SQL queries over data stored in Amazon S3 and it is not intended for near real time log ingestion and subsecond search and aggregation workflows.
When a question emphasizes near real time log ingestion and fast search choose Amazon OpenSearch Service rather than tools built for BI or batch processing.
A nonprofit arts network is launching a new web portal in one AWS Region to serve visitors across six continents. Which AWS services should the team use to reduce latency and boost transfer speeds for this global audience? (Choose 2)
✓ B. AWS Global Accelerator
✓ D. Amazon CloudFront
AWS Global Accelerator and Amazon CloudFront are the correct options for reducing latency and boosting transfer speeds for a web portal hosted in a single AWS Region that must serve users across six continents.
Amazon CloudFront distributes and caches content at edge locations around the world which reduces the distance between users and content and lowers latency for static assets and many dynamic responses. It also reduces origin load and improves perceived performance by serving content from nearby edges.
AWS Global Accelerator assigns static anycast IP addresses and routes user traffic over the AWS global backbone to the closest healthy regional endpoint which improves latency consistency and failover behavior for traffic destined to a single Region. It complements CloudFront when you need optimized routing for noncacheable or interactive traffic.
Amazon S3 Transfer Acceleration is focused on accelerating uploads and downloads to and from S3 buckets and does not speed general web application traffic to compute or load balancer endpoints.
AWS Transit Gateway is built to connect VPCs and on premises networks at scale and it is not intended to optimize user facing latency for global web traffic.
AWS Direct Connect provides private dedicated links between on premises locations and AWS and it does not accelerate public internet access for a worldwide audience.
NorthStar Clinics stores about 45 TB of patient records across more than 80 Amazon S3 buckets and needs an automated, fully managed way to discover, classify, and help protect sensitive data such as PII at scale. Which AWS service should the security team use?
✓ C. Amazon Macie
The correct choice is Amazon Macie because it is a fully managed service that discovers, classifies, and helps protect sensitive data such as personally identifiable information stored in Amazon S3 at scale.
Amazon Macie uses machine learning and pattern matching to identify PII and other sensitive data across many S3 buckets and it provides automated workflows and reporting to help secure that data. This makes it suitable for a 45 TB repository spread across more than 80 buckets where automated, scalable discovery and classification are required.
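A one-time discovery job can be started with a few lines of boto3. This sketch assumes hypothetical bucket names and a placeholder account ID:

```python
import boto3

macie = boto3.client("macie2")

# Kick off a one-time sensitive data discovery job over example buckets.
job = macie.create_classification_job(
    jobType="ONE_TIME",
    name="phi-discovery-job",   # hypothetical job name
    s3JobDefinition={
        "bucketDefinitions": [{
            "accountId": "111122223333",                              # placeholder
            "buckets": ["patient-records-01", "patient-records-02"],  # placeholders
        }]
    },
)
print(job["jobId"])
```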
AWS Key Management Service (AWS KMS) manages encryption keys and controls access to those keys but it does not scan or classify S3 objects so it cannot discover PII by itself.
AWS Secrets Manager securely stores and rotates secrets such as database credentials and API keys and it is not designed to inspect stored data for sensitive content.
Amazon GuardDuty focuses on threat detection and anomaly monitoring across your AWS environment and it does not perform content discovery or classification of S3 objects.
Polaris Learning, an edtech startup, is moving a prototype service to AWS to shorten release cycles and test features quickly. Which AWS capabilities most directly accelerate their ability to experiment and deliver changes faster? (Choose 2)
✓ A. Rapid provisioning of resources
✓ C. Elastic, on-demand compute capacity
Rapid provisioning of resources and Elastic, on-demand compute capacity are correct because they most directly accelerate Polaris Learning’s ability to experiment and deliver changes faster.
Rapid provisioning of resources lets teams create and tear down development, test, and staging environments in minutes which removes hardware and manual setup delays and shortens feedback loops for prototypes.
Elastic, on-demand compute capacity enables immediate right-sizing and scaling so experiment workloads can grow or shrink without procurement delays which supports rapid iteration and controlled cost during trials.
AWS Direct Connect provides dedicated private network connectivity to on-premises locations and it improves performance and consistency but it does not directly speed development cycles or simplify creating test environments.
Lower total cost of ownership is a financial benefit that may result from cloud use and it is valuable but it is not an operational mechanism that by itself makes teams deliver features faster.
Globally secured data centers improve security and compliance and they are important for scaling and trust yet they do not directly accelerate the pace of experimentation or deployments.
When a question emphasizes speed and agility choose options that mention rapid provisioning or elastic compute rather than answers focused on cost or security.
A regional e-commerce marketplace operates multiple production relational databases on Amazon RDS. The database engine vendor releases security updates roughly every quarter, and the team wants these updates applied with minimal manual work and predictable timing. What is the most efficient way to ensure the databases receive these security patches?
✓ C. Enable automatic minor version updates and define a maintenance window in Amazon RDS
Enable automatic minor version updates and define a maintenance window in Amazon RDS is the correct option because it lets Amazon RDS apply engine security patches automatically during a scheduled maintenance window so updates occur with minimal manual effort and predictable timing.
Enable automatic minor version updates and define a maintenance window in Amazon RDS delegates patching to the managed service so AWS performs minor engine upgrades and security fixes for supported database engines within the maintenance window you set. This reduces operational overhead and gives you control over when reboots or brief disruptions may occur.
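Both settings can be applied with a single API call. A minimal boto3 sketch, using a placeholder instance identifier and an early Sunday morning window (UTC):

```python
import boto3

rds = boto3.client("rds")

# Enable automatic minor version upgrades and pin them to a weekly window.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db-prod",   # placeholder identifier
    AutoMinorVersionUpgrade=True,
    PreferredMaintenanceWindow="sun:04:00-sun:04:30",
    ApplyImmediately=False,                  # changes take effect at the next window
)
```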
Configure an AWS Config rule to check each RDS instance for the required patch level is incorrect because AWS Config only evaluates and records configuration compliance and it does not deploy or install engine patches on RDS instances.
Sign in to each DB instance every quarter and manually download and install the vendor’s patches is incorrect because this approach is inefficient and unnecessary for managed RDS databases and it increases operational risk and workload.
Use AWS Systems Manager Patch Manager to schedule database engine patching across the RDS fleet is incorrect because Patch Manager is intended for operating system patching on EC2 and hybrid instances and it does not perform RDS managed database engine minor version upgrades.
For questions about RDS patching remember to prefer the native managed features and think maintenance window and automatic minor version updates rather than manual or Config based solutions.
Riverton Analytics is reviewing AWS Support tiers for a new workload. Under the Basic Support plan that is included for all AWS accounts, which features are available to customers? (Choose 2)
✓ B. Access to the AWS Service Health Dashboard
✓ C. Direct, one-on-one help for account and billing inquiries
Direct, one-on-one help for account and billing inquiries and Access to the AWS Service Health Dashboard are included with the Basic support plan. These items cover nontechnical customer service and public service status information that every AWS account receives at no additional cost.
Direct, one-on-one help for account and billing inquiries is available around the clock for account and billing questions and it covers nontechnical issues handled by customer service. This support does not provide technical troubleshooting or architectural advice.
Access to the AWS Service Health Dashboard gives customers the ability to view current and historical availability and service events for AWS offerings. This is public status information rather than a personalized technical incident response service.
Infrastructure Event Management is not included with Basic and it is offered only to customers on Enterprise On-Ramp and Enterprise plans for launch and large scale event support.
Prescriptive use-case guidance for architectures is provided by paid plans such as Business and above and is not part of the Basic tier.
AWS Support client-side diagnostic tools are not available with Basic and require at least Developer, Business, Enterprise On-Ramp, or Enterprise support levels.
An engineer at Helios Motors needs to balance large volumes of TCP traffic and requires an Elastic Load Balancer that works at the OSI Layer 4 connection level. Which load balancer type should they choose?
✓ C. Network Load Balancer (NLB)
The Network Load Balancer (NLB) is the correct choice for balancing large volumes of TCP traffic because it operates at the OSI Layer 4 and makes routing decisions based on connection information rather than application payload.
The NLB is optimized for high throughput and low latency and it can scale to handle millions of TCP connections. The NLB routes on TCP or UDP connection details and can preserve client source IP addresses which helps with stateful workloads and direct client identification.
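Creating an NLB with a TCP listener takes two calls. A minimal boto3 sketch; the subnet IDs and target group ARN are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create the Layer 4 load balancer...
lb = elbv2.create_load_balancer(
    Name="helios-tcp-nlb",                           # hypothetical name
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0abc1234", "subnet-0def5678"],  # placeholder subnets
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# ...and a TCP listener that forwards connections to a target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="TCP",
    Port=443,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/helios-targets/0123456789abcdef",  # placeholder
    }],
)
```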
The Application Load Balancer (ALB) is not correct because it operates at OSI Layer 7 and routes based on HTTP and HTTPS request attributes such as headers and paths which do not meet a raw TCP balancing requirement.
The Classic Load Balancer (CLB) is not preferred because it is a legacy option and while it can handle both Layer 4 and Layer 7 traffic it lacks many of the newer features and optimizations that make the NLB the recommended Layer 4 choice for new architectures.
The AWS Global Accelerator is not an Elastic Load Balancer because it provides global anycast and traffic acceleration and routing rather than per connection Layer 4 load balancing within a region so it does not replace the NLB for this use case.
A developer at Northwind Robotics is configuring a build server to run automation using the AWS CLI and SDKs against multiple AWS services. Which credential type should be set up so the scripts can authenticate programmatic API requests?
✓ B. IAM access keys
The correct option is IAM access keys. They provide the access key ID and secret access key pair that scripts use to authenticate and sign programmatic API requests made with the AWS CLI or SDKs.
IAM access keys are the mechanism the CLI and SDKs expect when performing signed API calls. You can associate access keys with an IAM user or you can avoid long lived keys by using roles and temporary credentials for automation which improves security.
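In practice the SDK picks up keys from the environment rather than from code. A short sketch of both patterns, using the AWS documentation example keys as placeholders:

```python
import boto3

# Preferred: the SDK resolves AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# from the environment or the shared credentials file automatically.
s3 = boto3.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])

# Possible but discouraged: passing keys explicitly. Never commit real keys.
session = boto3.Session(
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",                          # placeholder
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # placeholder
)
```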
AWS Certificate Manager is incorrect because it provisions and manages SSL and TLS certificates to secure network traffic and it does not supply credentials for AWS API calls.
SSH public keys are incorrect because they authenticate SSH sessions to instances or services that accept SSH and they are not used to sign generic AWS service API requests.
Username and password are incorrect because those are used for console sign in and cannot be used by the CLI or SDKs to sign programmatic requests. For programmatic access you must use access keys or temporary credentials issued by roles.
When a question mentions programmatic access or the CLI think of access keys and remember that roles and temporary credentials are the more secure choice for automation.
A fast-growing edtech startup runs a read-heavy API on Amazon DynamoDB and wants to add an in-memory layer to speed up repeated reads without changing the application’s data model. Which AWS service should they choose to add this cache to DynamoDB?
✓ B. Amazon DynamoDB DAX
Amazon DynamoDB DAX is the correct choice for adding an in memory cache to DynamoDB because it is purpose built to provide microsecond read latency and it integrates with DynamoDB so the application data model and client code do not need to change.
Amazon DynamoDB DAX is a fully managed, in memory cache that sits in front of DynamoDB and it speaks the same DynamoDB API so reads can be served from the cache without rewriting access logic. This reduces DynamoDB read capacity usage and improves response times for repeated reads while keeping the data model intact.
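The drop-in nature is visible in code: the DAX client (from the amazon-dax-client package) accepts the same calls as the DynamoDB client. A sketch with a placeholder cluster endpoint and a hypothetical table:

```python
import boto3
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

# Same GetItem call, two clients: plain DynamoDB and DAX in front of it.
ddb = boto3.client("dynamodb")
dax = AmazonDaxClient(
    endpoints=["my-dax.abc123.dax-clusters.us-east-1.amazonaws.com:8111"]  # placeholder
)

for client in (ddb, dax):
    response = client.get_item(
        TableName="Sessions",                 # hypothetical table
        Key={"SessionId": {"S": "abc-123"}},
    )
    print(response.get("Item"))
```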
Amazon ElastiCache is a general purpose in memory caching service that supports Redis and Memcached, but it is not a drop in cache for DynamoDB and using it typically requires application changes to implement caching logic, so it does not meet the requirement of avoiding code or data model changes.
AWS Key Management Service (KMS) handles encryption key management and it provides no caching capabilities, so it cannot be used to speed up reads for DynamoDB.
Amazon Simple Queue Service (SQS) is a message queuing service for decoupling components and it does not cache data or reduce read latency for DynamoDB queries.
When a question asks for an in memory cache that is a drop in for DynamoDB and requires no client code changes think DAX rather than a general cache like ElastiCache.
NovaRetail, an online marketplace, is deciding whether to host a third-party analytics suite on Amazon EC2 instances or adopt an AWS fully managed alternative. What is the primary advantage of choosing a fully managed service in this case?
✓ C. Lower operational burden
Lower operational burden is the correct option because a fully managed AWS service shifts routine operational tasks off your team and onto the provider.
A managed service removes responsibilities such as infrastructure maintenance, operating system and software patching, automatic scaling, and many aspects of availability and fault handling, so your engineers can focus on the analytics workload and business logic rather than server upkeep.
Backups are unnecessary is incorrect because managed services do not eliminate the need for data protection and recovery planning and you still must enable and test backups and restores as part of your responsibilities.
Greater control and flexibility is incorrect since self-managing on EC2 usually gives more low level configurability and tuning, and managed services trade some of that fine grained control for operational simplicity.
Automatic multi-Region replication by default is incorrect because most AWS managed services are regional and require explicit setup or additional features and cost to replicate data across regions.
When a question contrasts EC2 with an AWS managed alternative choose the managed option when the focus is on reducing operational work and maintenance and remember that backup and security responsibilities still apply.
NorthBridge Capital wants a managed way to ingest system and application log files from about 50 Amazon EC2 instances and several on-premises servers, search those logs, create metric filters, and trigger alarms on patterns within roughly one minute to speed up incident diagnosis. Which AWS service should they use?
✓ B. Amazon CloudWatch Logs
The correct choice is Amazon CloudWatch Logs. It is a managed service that can ingest system and application log files from Amazon EC2 instances and on prem servers, allow you to search log events, create metric filters, and drive CloudWatch alarms for near real time alerting to speed incident diagnosis.
Amazon CloudWatch Logs centralizes logs and supports agents and native integrations to collect logs from EC2 and on prem hosts. You can define metric filters that convert log patterns into CloudWatch metrics and then create alarms on those metrics so that alerts are triggered within roughly one minute for rapid investigation.
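The metric filter to alarm chain looks like this in boto3. The log group, namespace, and threshold are hypothetical:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Turn "ERROR" lines in a log group into a custom metric...
logs.put_metric_filter(
    logGroupName="/app/orders-service",       # hypothetical log group
    filterName="error-count",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "ErrorCount",
        "metricNamespace": "OrdersService",   # hypothetical namespace
        "metricValue": "1",
    }],
)

# ...then alarm when errors exceed a threshold within one minute.
cloudwatch.put_metric_alarm(
    AlarmName="orders-service-errors",
    Namespace="OrdersService",
    MetricName="ErrorCount",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)
```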
AWS CloudTrail records AWS API calls for auditing and compliance and does not serve as a general system or application log aggregation service or provide metric filters on arbitrary log content.
AWS X-Ray provides distributed request tracing and service maps to help debug latency and errors in applications and it is not designed for bulk log aggregation search or for creating CloudWatch metric filters from arbitrary logs.
Amazon OpenSearch Service is a powerful analytics and search engine for logs and metrics visualization but it requires an ingestion pipeline such as CloudWatch Logs subscriptions, Kinesis, or Firehose to receive logs. It does not by itself provide the native CloudWatch metric filter to alarm workflow that CloudWatch Logs does.
Keywords such as log ingestion, metric filters, and CloudWatch alarms usually point to the service that directly integrates with CloudWatch metrics and alarms.
At Pine Harbor Robotics, a new administrator needs to understand which credentials can be directly associated with an IAM user for day-to-day access. Which credentials can be provided to the IAM user? (Choose 2)
✓ B. An access key ID with its matching secret access key
✓ C. A password for signing in to the AWS Management Console
An access key ID with its matching secret access key and A password for signing in to the AWS Management Console are the credentials that can be directly associated with an IAM user. These two provide the programmatic and human sign in methods that IAM manages for individual users.
The access key ID and its matching secret access key provide programmatic access to AWS APIs and SDKs and are created and rotated under the IAM user's security credentials. These access keys are used by the AWS CLI and application code to authenticate as that IAM user.
The console password lets a person sign in to the web console as the IAM user and can be required, allowed, or disabled by account policies and by federated or single sign-on setups. The console password can be combined with MFA for stronger protection.
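Both credential types are created against the user through the IAM API. A minimal sketch for a hypothetical user named jsmith:

```python
import boto3

iam = boto3.client("iam")

# Console password for human sign-in, forced to rotate on first use.
iam.create_login_profile(
    UserName="jsmith",               # hypothetical user
    Password="TemporaryP@ssw0rd!",   # placeholder initial password
    PasswordResetRequired=True,
)

# Access key pair for programmatic access; the secret is shown only once.
key = iam.create_access_key(UserName="jsmith")
print(key["AccessKey"]["AccessKeyId"])
```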
An Amazon EC2 key pair is not an IAM user credential because it is used to authenticate to EC2 instances over SSH or RDP and it is part of the compute access model rather than the IAM identity model.
An AWS Certificate Manager (ACM) certificate is used to provision and manage TLS certificates for endpoints and load balancers and is not issued as a login credential for an IAM user.
A local Linux server login password is an operating system credential that applies to a specific server and is unrelated to AWS IAM user credentials.
Remember that access keys are for programmatic API and CLI use and a console password is for human sign in. Confirm whether a user needs one or both before assigning credentials.
BlueWave Robotics wants to lower Amazon EC2 costs by finding instances that have shown consistently low CPU use and little network traffic during the past 30 days. Which AWS service or feature can quickly highlight these underutilized instances to guide cost savings?
✓ B. AWS Trusted Advisor
AWS Trusted Advisor is correct because it provides automated best practice recommendations and includes a Low Utilization Amazon EC2 Instances check that reviews recent activity and flags instances with consistently low CPU and minimal network traffic to help reduce EC2 costs.
The AWS Trusted Advisor low utilization check analyzes CPU and network usage over multiple days and highlights candidates for stop, termination, or downsizing so you can quickly act on cost savings. It surfaces concise recommendations across accounts and removes the need to manually scan raw metrics when the question asks for an automated tool that pinpoints underutilized instances.
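For reference, Trusted Advisor checks can also be read programmatically through the AWS Support API, which itself requires a Business, Enterprise On-Ramp, or Enterprise support plan. A sketch of pulling the low utilization check:

```python
import boto3

# The Support API is served from us-east-1 and needs a paid support plan.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
low_util = next(c for c in checks if "Low Utilization" in c["name"])

result = support.describe_trusted_advisor_check_result(checkId=low_util["id"])
for resource in result["result"]["flaggedResources"]:
    print(resource.get("metadata"))
```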
AWS Cost Explorer is focused on spending analysis and cost trends and it does not automatically analyze instance performance to identify underused EC2 instances.
Amazon CloudWatch provides detailed metrics and alarms for CPU and network but it requires you to interpret those metrics and it does not itself provide a built in underutilization recommendation for EC2.
AWS Budgets monitors and alerts on budget thresholds and forecasts and it is not designed to give resource efficiency or right sizing insights.
When a question asks for an automated check that identifies low CPU and low network EC2 instances think AWS Trusted Advisor. If the question asks about more detailed right sizing recommendations also consider AWS Compute Optimizer.
An online education startup named Kestrel Learning uploads high-definition lecture videos and audio files up to 25 GB each to a single Amazon S3 bucket in us-east-2 from instructors in 14 countries. Which AWS solution should the company use to consistently accelerate these long-distance uploads to the bucket?
✓ C. Amazon S3 Transfer Acceleration
Amazon S3 Transfer Acceleration is the correct choice to consistently accelerate long distance uploads from instructors in many countries into a single S3 bucket in us-east-2.
It leverages AWS edge locations and the AWS global network so clients upload to the nearest edge and the data is then carried over the optimized AWS backbone into the target bucket which reduces latency and variability for large objects and improves throughput for files such as 25 GB lecture videos.
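Acceleration is enabled once on the bucket and then requested per client. A minimal boto3 sketch with a hypothetical bucket and file:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: turn acceleration on for the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="kestrel-lecture-media",   # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploaders then route through the accelerate endpoint via client config.
accelerated = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accelerated.upload_file("lecture-01.mp4", "kestrel-lecture-media", "videos/lecture-01.mp4")
```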
Amazon CloudFront is incorrect because it is designed to cache and deliver content to end users and it does not provide an optimized upload path from global clients into S3.
AWS DataSync is incorrect because it targets agent based migrations and scheduled or automated transfers from on premises storage or other file systems and it is not intended for many ad hoc uploads from distributed individual instructors.
AWS Direct Connect is incorrect because it provides a dedicated private network connection that must be provisioned for a specific network and it cannot accelerate uploads from many independent users in different countries.
When a question describes many remote clients uploading large files to S3 remember to favor S3 Transfer Acceleration because it uses edge locations and the AWS backbone to speed uploads rather than caching or dedicated links.
A startup called BlueLark Media is building a microservices application that must notify customers by both SMS text messages and email from the same managed service. Which AWS service should the team choose to send these notifications?
✓ C. Amazon SNS
The correct choice is Amazon SNS because it supports multiple delivery protocols such as SMS and email and it lets a single publish and subscribe service push notifications to end users from distributed applications.
Amazon SNS provides native transports for SMS and email and it supports fan out to multiple endpoints so a microservices publisher can send one message and let SNS handle delivery, retries, and filtering. It also integrates with other AWS services so applications can publish events and rely on SNS to route them to phones and inboxes.
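A sketch of this fan-out pattern in boto3, with a hypothetical topic, email address, and phone number:

```python
import boto3

sns = boto3.client("sns")

# One topic, two delivery protocols.
topic_arn = sns.create_topic(Name="order-notifications")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="customer@example.com")
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550123")

# A single publish fans out to every confirmed subscriber.
sns.publish(TopicArn=topic_arn, Message="Your order has shipped.")
```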
Amazon EventBridge is built for event routing and integration across AWS services and SaaS partners but it does not natively send SMS or email to end recipients.
Amazon SQS is a message queuing service that decouples components and buffers messages but it does not deliver messages directly as SMS or email to users.
Amazon SES is focused on high volume and transactional email and it does not provide SMS delivery so it cannot satisfy a requirement to send both SMS and email from the same managed service.
When a question requires one managed service to send both SMS and email think Amazon SNS and look for wording about multiple delivery protocols, pub or sub, or fan out.
An engineer at BlueWave Analytics needs to enable programmatic access to AWS using the CLI and SDKs. They plan to create an Access Key ID and a Secret Access Key for ongoing use. These credentials are linked to which IAM entity?
✓ C. IAM user
The Access Key ID and Secret Access Key are long term credentials that belong to an identity and are issued to an IAM user or the AWS account root user for signing programmatic requests to the AWS CLI and SDKs.
These keys are long lived and are attached directly to the principal that owns them so an IAM user receives static access keys that persist until rotated or deleted and this makes them suitable for CLI and SDK use when ongoing programmatic access is required.
IAM role is incorrect because roles are assumed and provide temporary credentials from AWS STS and you do not create static access keys for a role.
IAM policy is incorrect because a policy is a JSON permissions document and does not contain or hold credentials.
IAM user group is incorrect because groups only aggregate users for permissions and you cannot attach long term access keys to a group.
When you see Access Key ID and Secret Access Key remember they are long term credentials for an IAM user or the root user and not for roles which use temporary STS credentials.
Which cost-related benefits does moving to AWS provide for a mid-size media company that wants to manage monthly expenses efficiently without long-term contracts? (Choose 2)
✓ A. Granular, pay-as-you-go billing for only the usage you consume
✓ C. Ability to shut down resources when idle so you do not pay while they are off
Granular, pay-as-you-go billing for only the usage you consume and Ability to shut down resources when idle so you do not pay while they are off are correct for a mid-size media company that wants to manage monthly expenses efficiently without long term contracts.
The Granular, pay-as-you-go billing for only the usage you consume option describes AWS metering that charges for actual usage of services so costs scale with consumption and you avoid paying for idle capacity. This model lets you provision resources for peaks and then reduce usage when demand falls so monthly bills reflect real consumption.
The Ability to shut down resources when idle so you do not pay while they are off option captures the elasticity that lets you stop or terminate compute and other billable resources when they are not needed. Shutting down or scaling down resources reduces ongoing charges and gives operational control over monthly spend without committing to long term contracts.
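For example, a nightly job could stop tagged non-production instances so they stop accruing compute charges. A sketch assuming a hypothetical Environment=dev tagging convention:

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged as dev...
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},   # hypothetical tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

# ...and stop them; stopped instances do not accrue compute charges.
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
```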
Itemized electricity or power charges appear separately on AWS bills is incorrect because AWS does not bill customers separately for electricity or power. Energy and infrastructure costs are included in the service pricing and you will not see a distinct line item for power on standard bills.
One-time flat fees for on-demand capacity instead of recurring charges is incorrect because on demand pricing is usage based and not a one time flat fee. On demand costs accrue as you consume resources and are charged per use rather than as a single upfront payment.
AWS Cost Explorer is incorrect as a cost benefit in this context because it is a reporting and analysis tool that helps you understand and visualize spending. It does not itself provide a pricing model or automatic discounts though it can help you identify opportunities to save.
In cost questions choose answers that describe pay as you go metering or the ability to stop resources to avoid charges and avoid options that claim separate power line items or one time on demand fees.
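The second benefit is easy to act on programmatically. This minimal sketch stops a hypothetical EC2 instance so per second compute charges stop accruing while it is off; the instance ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Stopping an instance halts its compute charges. Attached EBS volumes
# continue to accrue storage charges while the instance is stopped.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])
```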
A genomics startup stores rarely accessed research archives in Amazon S3 Glacier and occasionally needs to pull a small archive urgently so the data becomes available in about 1 to 5 minutes. Which retrieval tier should they use?
✓ B. Expedited retrieval
The correct option is Expedited retrieval because it is the only S3 Glacier retrieval tier that returns data in about one to five minutes for urgent, occasional access to smaller archives.
Expedited retrieval is designed for scenarios that require rapid access and it is optimized for smaller archives where speed matters more than the lowest cost. Use this tier when you need data available within minutes rather than hours.
Standard retrieval is incorrect because it typically takes about three to five hours and it serves use cases where moderate latency is acceptable in exchange for lower cost compared to expedited.
Bulk retrieval is incorrect because it is the lowest cost retrieval option and it usually requires about five to twelve hours, so it is intended for large, nonurgent restores.
Amazon S3 Transfer Acceleration is incorrect because it is a network acceleration feature for speeding transfers to and from S3 and it is not a Glacier retrieval tier.
Remember the retrieval timings when answering speed questions. Expedited is minutes while Standard and Bulk take hours. Distinguish retrieval tiers from network features.
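A minimal boto3 sketch of an expedited restore follows, using placeholder bucket and key names. The Days value controls how long the temporary copy stays readable in S3.

```python
import boto3

s3 = boto3.client("s3")

# Request an Expedited restore so the archive is typically readable in 1 to 5 minutes.
s3.restore_object(
    Bucket="research-archives",
    Key="experiments/run-0042.tar.gz",
    RestoreRequest={
        "Days": 1,  # keep the restored copy available for one day
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```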
NovaPixel Games wants to track AWS spending trends and obtain Savings Plans recommendations derived from roughly the last 90 days of usage to choose an optimal commitment. As the Cloud Practitioner, which AWS service should be used to analyze costs and get those recommendations?
✓ C. AWS Cost Explorer
The correct choice is AWS Cost Explorer. It provides interactive visualizations of historical spend and usage and surfaces Savings Plans purchase recommendations at the account and organization levels to help you choose an optimal commitment based on recent usage.
AWS Cost Explorer analyzes past cost and usage patterns and generates Savings Plans recommendations. It offers filters and charts so you can inspect usage by service and linked accounts and compare recommendation scenarios before you commit.
AWS Budgets is designed to set cost and usage thresholds and send alerts when budgets are breached and it does not generate Savings Plans recommendations from historical usage.
AWS Cost & Usage Report (AWS CUR) delivers the most granular billing records to an S3 bucket for detailed analysis and long term storage and it requires you to process the raw data yourself so it does not natively recommend Savings Plans.
AWS Pricing Calculator is intended for forward looking cost estimates of proposed architectures and it is not used to analyze past usage or produce Savings Plans recommendations.
When the requirement is Savings Plans recommendations from recent usage pick AWS Cost Explorer and remember Budgets is for alerts while CUR is raw billing data.
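The same recommendations are available programmatically through the Cost Explorer API. A hedged sketch is below; note the API's lookback options top out at sixty days, so it approximates the roughly 90 day window in the scenario.

```python
import boto3

ce = boto3.client("ce")

# Ask Cost Explorer for Compute Savings Plans recommendations based on
# recent usage. SIXTY_DAYS is the longest lookback the API accepts.
resp = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="SIXTY_DAYS",
)
details = resp["SavingsPlansPurchaseRecommendation"].get(
    "SavingsPlansPurchaseRecommendationDetails", []
)
for d in details:
    print(d.get("HourlyCommitmentToPurchase"), d.get("EstimatedSavingsAmount"))
```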
A mobile augmented reality startup needs to run compute and store data within 5G carrier networks so that users on cellular connections experience single digit millisecond latency near edge locations. Which AWS service should they choose?
✓ B. AWS Wavelength
The correct option is AWS Wavelength because it embeds AWS compute and storage directly within telecommunications providers’ 5G networks so mobile users on cellular connections can achieve single digit millisecond latency near edge locations.
AWS Wavelength places compute and storage at the edge of the carrier network so packets travel fewer hops and the mobile traffic remains inside the telecom domain, which is essential for ultra low latency augmented reality experiences.
AWS Outposts runs AWS infrastructure on customer premises and is not hosted inside mobile carrier networks, so it does not keep traffic within the telco network and does not meet the 5G carrier edge requirement.
AWS Snowball Edge is a portable appliance for data transfer and local processing in remote or disconnected environments and it is not a managed service embedded in telecom networks, so it cannot provide the carrier network latency characteristics needed.
AWS Local Zones extend AWS infrastructure closer to metro areas to reduce latency for nearby users but traffic typically exits the carrier network, so they will not match the ultra low latency of telco embedded solutions like Wavelength.
A logistics startup runs EC2 instances in a private subnet that need to read and write to a DynamoDB table named Orders2025. To follow best practices and avoid storing long-term credentials on the servers, which AWS identity should the instances use to obtain permissions to the table?
✓ B. IAM role with an instance profile
The correct choice is IAM role with an instance profile. An EC2 instance uses the attached IAM role with an instance profile to obtain temporary credentials so it can read and write the Orders2025 DynamoDB table without long term secrets on the server.
An IAM role with an instance profile is attached to the instance, and the EC2 instance retrieves temporary credentials from the instance metadata service which your application uses to call DynamoDB. You then scope an IAM policy to the specific Orders2025 table so permissions follow the principle of least privilege and you avoid embedding static credentials.
AWS Key Management Service (KMS) manages encryption keys and does not grant an EC2 instance permission to call DynamoDB APIs so it is not the correct identity mechanism for access control.
Amazon Cognito issues identities and credentials for mobile and web application users and it is not typically used for backend EC2 instances to access DynamoDB.
AWS IAM user access keys are long term credentials that must be stored and rotated on the instance and they do not follow best practices when an instance role with temporary credentials is available.
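On an instance with the role attached, no credential configuration is needed in code. This minimal sketch assumes the role’s policy allows actions on the Orders2025 table and that OrderId is the table’s partition key.

```python
import boto3

# No access keys appear anywhere in this code. On EC2, boto3 automatically
# fetches temporary credentials for the attached role from the instance
# metadata service and refreshes them before they expire.
table = boto3.resource("dynamodb").Table("Orders2025")

table.put_item(Item={"OrderId": "o-1001", "Status": "PACKED"})
item = table.get_item(Key={"OrderId": "o-1001"}).get("Item")
print(item)
```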
NorthBay Data is moving about 18 internal services into Docker containers. They want to orchestrate these containers while maintaining full control of and SSH access to the EC2 instances that run them. Which AWS service should they use to satisfy these needs?
✓ C. Amazon Elastic Container Service (Amazon ECS)
Amazon Elastic Container Service (Amazon ECS) is correct because it supports the EC2 launch type which lets you run containers on EC2 instances that you provision and manage while ECS provides orchestration features.
Using the EC2 launch type with ECS means tasks and services are scheduled onto your own hosts, so you retain full control and can SSH into instances for debugging, performance tuning, or custom configuration while still benefiting from ECS features such as task definitions, service scheduling, and integration with other AWS services.
AWS Fargate is incorrect because it is a serverless container runtime that abstracts away the hosts and does not give you SSH access to underlying instances.
Amazon Elastic Container Registry (Amazon ECR) is incorrect because it is only a managed container image registry and does not orchestrate or run containers on servers.
AWS Lambda is incorrect because it is a serverless function service and although it can accept container images it is not intended for long running containerized services or for providing host level access.
When a scenario requires keeping host level access look for ECS with EC2 or self managed EKS nodes. If the goal is to avoid managing servers look for Fargate.
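A hedged sketch of scheduling a task onto self managed container instances follows; the cluster and task definition names are hypothetical, and the container instances must already be registered with the cluster.

```python
import boto3

ecs = boto3.client("ecs")

# launchType="EC2" places the task on container instances you provision,
# so you keep SSH access to the hosts for debugging and tuning.
ecs.run_task(
    cluster="internal-services",
    launchType="EC2",
    taskDefinition="orders-api:3",  # family:revision of a registered task definition
)
```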
A media tech startup named VistaPixels wants to add image analysis using an AWS service that is delivered as a complete, provider-managed application accessed through an API, so the team does not operate servers or maintain runtimes. Which AWS service best fits the Software as a Service model for this need?
✓ C. Amazon Rekognition
The correct choice is Amazon Rekognition. It is consumed as a fully managed application through APIs so VistaPixels does not operate servers or maintain runtimes and this aligns with the Software as a Service model.
Amazon Rekognition provides prebuilt image and video analysis features that you call via API, and AWS handles the underlying infrastructure, the models, and the runtime environment, so the startup only sends images and receives analysis results. This removes the need to provision instances, manage operating systems, or patch runtimes, which matches SaaS expectations.
AWS Elastic Beanstalk is not correct because it is a Platform as a Service that orchestrates infrastructure and application components while you still deploy and manage your application code and environment settings.
Amazon Elastic Compute Cloud (Amazon EC2) is not correct because it is Infrastructure as a Service where you control the virtual servers the operating system and any installed software and updates.
AWS Lambda is not correct because it is serverless compute or Function as a Service where you provide function code and manage integrations and configuration rather than consuming a complete provider operated software product.
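The SaaS style of consumption looks like this in practice: one API call with an image in and labels out, with no infrastructure to manage. Bucket and object names below are placeholders.

```python
import boto3

rek = boto3.client("rekognition")

# A single API call against a fully managed service; there are no servers,
# runtimes, or models for the caller to operate.
resp = rek.detect_labels(
    Image={"S3Object": {"Bucket": "vistapixels-media", "Name": "frames/frame-001.jpg"}},
    MaxLabels=10,
)
for label in resp["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```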
A regional news streaming startup is designing a highly available web tier on AWS and is evaluating services that enable scalable application architectures. Which statement accurately describes Elastic Load Balancing in this context?
✓ C. Distributes incoming client traffic across multiple targets such as Amazon EC2 instances, containers, and IP addresses
Distributes incoming client traffic across multiple targets such as Amazon EC2 instances, containers, and IP addresses is correct because Elastic Load Balancing distributes client requests across registered targets to improve availability and scalability.
Elastic Load Balancing performs health checks on targets and routes traffic only to healthy endpoints across multiple Availability Zones which increases resilience and helps applications handle variable load.
Automatically scales the fleet of Amazon EC2 instances to meet fluctuating demand is incorrect because that capability belongs to EC2 Auto Scaling which adjusts instance count based on scaling policies and not to the load balancer.
Provides a dedicated private network connection from an on-premises facility to AWS without traversing the public internet is incorrect because that describes AWS Direct Connect which provides private connectivity and does not distribute application traffic.
Offers a highly available and scalable Domain Name System service is incorrect because that refers to Amazon Route 53 which handles DNS resolution and does not itself balance traffic across application targets.
Match the phrase distribute traffic across targets to Elastic Load Balancing and match add or remove instances to EC2 Auto Scaling when you see similar options on the exam.
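The health check behavior is visible through the API. This minimal sketch lists target health for a hypothetical target group; the load balancer only forwards requests to targets reporting healthy.

```python
import boto3

elbv2 = boto3.client("elbv2")

# The target group ARN below is a placeholder for your own target group.
health = elbv2.describe_target_health(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/web/0123456789abcdef"
    )
)
for desc in health["TargetHealthDescriptions"]:
    print(desc["Target"]["Id"], desc["TargetHealth"]["State"])
```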
Domain 1: Cloud Concepts
Benefits of the AWS Cloud
You should understand the value proposition of AWS and how the global infrastructure supports speed of deployment, global reach, high availability, elasticity, agility, and sustainability outcomes.
Well-Architected Design Principles
You should recognize the pillars of the AWS Well-Architected Framework. These pillars are operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability. You should be able to tell how the pillars differ and why each matters.
Migration Strategies and Adoption
You should know cloud adoption strategies and resources that guide migration. You should understand the AWS Cloud Adoption Framework and how it supports reduced business risk, better ESG performance, increased revenue, and higher operational efficiency. You should be able to identify basic migration patterns such as database replication and transfer options like AWS Snowball.
Cloud Economics
You should understand fixed costs compared with variable costs, costs present in on premises environments, rightsizing practices, automation benefits, licensing approaches such as bring your own license compared with included licenses, and economies of scale that drive savings.
Domain 2: Security and Compliance
Shared Responsibility Model
You should recognize what AWS secures and what the customer secures. You should know where responsibilities can shift based on the service model, such as Amazon EC2, Amazon RDS, or AWS Lambda.
Security, Governance, and Compliance Concepts
You should know the benefits of cloud security and the basics of encryption in transit and at rest. You should know where to find compliance information such as AWS Artifact and how to address geographic or industry requirements. You should be familiar with services that support governance and auditing such as Amazon CloudWatch, AWS CloudTrail, AWS Audit Manager, and AWS Config, as well as reporting tools like access reports.
Access Management
You should understand IAM users, groups, roles, and policies following least privilege. You should know root user protections and tasks reserved for the root user. You should understand authentication methods such as multi factor authentication, IAM Identity Center, cross account roles, and safe credential storage using services like AWS Secrets Manager and AWS Systems Manager.
Security Tooling and Resources
You should be able to describe services that protect workloads such as AWS WAF, AWS Shield, AWS Firewall Manager, Amazon GuardDuty, and Amazon Inspector. You should know that third party options are available through AWS Marketplace and where to find security information such as the AWS Security Blog, the AWS Knowledge Center, and the AWS Security Center. You should recognize how AWS Trusted Advisor helps identify issues and opportunities.
Domain 3: Cloud Technology and Services
Deploying and Operating on AWS
You should know the common ways to access and manage AWS such as the Management Console, APIs, SDKs, the CLI, and infrastructure as code. You should evaluate when to run one time operations and when to create repeatable processes. You should recognize deployment models such as cloud, hybrid, and on premises.
AWS Global Infrastructure
You should understand Regions, Availability Zones, and edge locations and how they relate. You should know how multiple Availability Zones support high availability and that they do not share single points of failure. You should know when to use multiple Regions for disaster recovery, business continuity, data residency, or lower latency.
Compute Services
You should recognize when to use different EC2 instance families, how auto scaling supports elasticity, and the basic purposes of load balancers. You should know container options such as Amazon ECS and Amazon EKS and serverless options such as AWS Fargate and AWS Lambda.
Database Services
You should know when to choose managed databases over self managed EC2 hosted databases. You should identify relational options such as Amazon RDS and Amazon Aurora, NoSQL options such as Amazon DynamoDB, in memory options such as Amazon ElastiCache, and migration tools such as AWS Database Migration Service and AWS Schema Conversion Tool.
Network Services
You should understand Amazon VPC components such as subnets and gateways and security controls such as security groups and network ACLs. You should know the purpose of Amazon Route 53 and connectivity choices like AWS VPN and AWS Direct Connect.
Storage Services
You should identify use cases for Amazon S3 and its storage classes. You should recognize block storage options such as Amazon EBS and instance store and file storage options such as Amazon EFS and Amazon FSx. You should understand where cached file systems fit using AWS Storage Gateway and how lifecycle policies and AWS Backup support data management.
AI and ML plus Analytics
You should understand the basic tasks that AI and ML services perform using services like Amazon SageMaker, Amazon Lex, and Amazon Kendra. You should recognize data analytics services such as Amazon Athena, Amazon Kinesis, AWS Glue, and Amazon QuickSight.
Other In Scope Service Categories
You should be able to choose application integration services such as Amazon EventBridge, Amazon SNS, and Amazon SQS. You should know business application services such as Amazon Connect and Amazon SES. You should recognize customer enablement through AWS Support, developer tools such as AWS CodeBuild, AWS CodePipeline, and AWS X-Ray, end user computing options like Amazon WorkSpaces, Amazon WorkSpaces Secure Browser, and Amazon AppStream 2.0, frontend and mobile services such as AWS Amplify and AWS AppSync, and IoT services such as AWS IoT Core.
Domain 4: Billing, Pricing, and Support
Pricing Models
You should identify when to use On Demand Instances, Reserved Instances, Spot Instances, Savings Plans, Dedicated Hosts or Instances, and Capacity Reservations. You should understand how Reserved Instances behave in AWS Organizations and how data transfer charges can vary within or across Regions. You should recognize pricing options for storage services and tiers.
Cost and Budget Resources
You should understand the capabilities of AWS Budgets and AWS Cost Explorer and when to use the AWS Pricing Calculator. You should know how AWS Organizations supports consolidated billing and how to use cost allocation tags and the AWS Cost and Usage Report.
Support and Technical Resources
You should be able to locate AWS whitepapers, documentation, blogs, and guidance resources such as AWS Prescriptive Guidance, the AWS Knowledge Center, and AWS re:Post. You should know the AWS Support plan options and the roles of the AWS Support Center, Trusted Advisor, the AWS Health Dashboard and API, and the AWS Trust and Safety team. You should recognize how AWS Partners and AWS Marketplace can help and the benefits of being a partner.
Out of Scope Tasks
The exam does not expect you to code, design full cloud architectures, troubleshoot complex issues, implement full solutions, or perform load and performance testing. You do not need to know how to integrate with other clouds such as Google Cloud Platform. The assessment focuses on understanding concepts and recognizing appropriate services and practices.
How to Prepare
Begin by studying the official certification exam topics until you have a clear picture of what is covered. Find a reliable set of practice exams so you can better understand how questions are framed and which topics matter most.
Use generative AI and ChatGPT as your personal trainer by asking for tutorials that teach the tasks and ideas listed in the objectives and request walkthroughs that build your skills step by step. Return to the mock exams for another round of review and tighten up any gaps before you sit the real exam.
A regional credit union uses a SAML 2.0 identity provider to give employees single sign-on to multiple third-party SaaS applications. The organization wants to move this workforce sign-in and federation capability to AWS with a fully managed service that centralizes access to external apps. Which AWS service should they use?
A digital media startup is implementing an event-driven, serverless pipeline that invokes several AWS services in both sequence and parallel. The team needs a visual designer that shows real-time status and history for roughly 30 stages and provides built-in retries and error handling to coordinate the flow. Which AWS service should they use to orchestrate this workflow?
An ad-tech startup runs fault-tolerant analytics on Amazon EC2 and wants the lowest possible cost, even if AWS may briefly reclaim the capacity. Which EC2 purchasing option can be interrupted by AWS when capacity is needed elsewhere?
A regional media startup plans to move its applications from its own server room to AWS. Which statement best captures a primary cost benefit they would gain by adopting the AWS Cloud?
❏ A. Shift to a capital expenditure purchasing model
❏ B. Pay only for the resources you actually consume
❏ C. Rely on commitment-based discounts such as Savings Plans as the default pricing model
❏ D. Transfer all IT operations and responsibilities to AWS
An architecture firm needs a reliable private network link from its corporate data center to workloads running in AWS. The team wants predictable throughput and steady latency for ongoing replication and daily operations. Which AWS service meets these requirements?
An engineer at BrightCart Analytics needs an automated assessment that scans Amazon EC2 instances across two VPCs for unintended network exposure and known software vulnerabilities, and then produces a consolidated report. Which AWS service should be used to generate this assessment?
An early-stage genomics startup plans to run short, bursty data processing on Amazon EC2 where tasks can be stopped or reclaimed by AWS without affecting results. Which purchasing option should they select to achieve the lowest compute cost for these interruption-tolerant workloads?
A streaming media startup hosts static images and file downloads in an Amazon S3 bucket and serves customers across several continents. Which AWS service should be used so users around the world receive this content with the lowest latency?
A regional health clinic licenses a browser-based appointment system that the vendor operates from end to end. Staff simply sign in and use the features without managing servers, operating systems, databases, or upgrades. Which cloud computing model does this service represent?
A systems administrator at a regional hospital is standardizing multi-factor sign-in for AWS IAM users. They want an approach that uses a physical key that plugs into a computer’s USB port and is tapped after entering the username and password. Which authentication method should they choose?
A reliability team at a ride-sharing startup needs to ingest application and system logs and search them within seconds to troubleshoot incidents, create operational dashboards, and monitor performance. Which AWS service is the best fit for this type of operational analytics?
A nonprofit arts network is launching a new web portal in one AWS Region to serve visitors across six continents. Which AWS services should the team use to reduce latency and boost transfer speeds for this global audience? (Choose 2)
NorthStar Clinics stores about 45 TB of patient records across more than 80 Amazon S3 buckets and needs an automated, fully managed way to discover, classify, and help protect sensitive data such as PII at scale. Which AWS service should the security team use?
Polaris Learning, an edtech startup, is moving a prototype service to AWS to shorten release cycles and test features quickly. Which AWS capabilities most directly accelerate their ability to experiment and deliver changes faster? (Choose 2)
A regional e-commerce marketplace operates multiple production relational databases on Amazon RDS. The database engine vendor releases security updates roughly every quarter, and the team wants these updates applied with minimal manual work and predictable timing. What is the most efficient way to ensure the databases receive these security patches?
❏ A. Configure an AWS Config rule to check each RDS instance for the required patch level
❏ B. Sign in to each DB instance every quarter and manually download and install the vendor’s patches
❏ C. Enable automatic minor version updates and define a maintenance window in Amazon RDS
❏ D. Use AWS Systems Manager Patch Manager to schedule database engine patching across the RDS fleet
Riverton Analytics is reviewing AWS Support tiers for a new workload. Under the Basic Support plan that is included for all AWS accounts, which features are available to customers? (Choose 2)
❏ A. Infrastructure Event Management
❏ B. Access to the AWS Service Health Dashboard
❏ C. Direct, one-on-one help for account and billing inquiries
❏ D. Prescriptive use-case guidance for architectures
An engineer at Helios Motors needs to balance large volumes of TCP traffic and requires an Elastic Load Balancer that works at the OSI Layer 4 connection level. Which load balancer type should they choose?
A developer at Northwind Robotics is configuring a build server to run automation using the AWS CLI and SDKs against multiple AWS services. Which credential type should be set up so the scripts can authenticate programmatic API requests?
A fast-growing edtech startup runs a read-heavy API on Amazon DynamoDB and wants to add an in-memory layer to speed up repeated reads without changing the application’s data model. Which AWS service should they choose to add this cache to DynamoDB?
NovaRetail, an online marketplace, is deciding whether to host a third-party analytics suite on Amazon EC2 instances or adopt an AWS fully managed alternative. What is the primary advantage of choosing a fully managed service in this case?
❏ A. Greater control and flexibility
❏ B. Backups are unnecessary
❏ C. Lower operational burden
❏ D. Automatic multi-Region replication by default
NorthBridge Capital wants a managed way to ingest system and application log files from about 50 Amazon EC2 instances and several on-premises servers, search those logs, create metric filters, and trigger alarms on patterns within roughly one minute to speed up incident diagnosis. Which AWS service should they use?
At Pine Harbor Robotics, a new administrator needs to understand which credentials can be directly associated with an IAM user for day-to-day access. Which credentials can be provided to the IAM user? (Choose 2)
❏ A. An Amazon EC2 key pair
❏ B. An access key ID with its matching secret access key
❏ C. A password for signing in to the AWS Management Console
BlueWave Robotics wants to lower Amazon EC2 costs by finding instances that have shown consistently low CPU use and little network traffic during the past 30 days. Which AWS service or feature can quickly highlight these underutilized instances to guide cost savings?
An online education startup named Kestrel Learning uploads high-definition lecture videos and audio files up to 25 GB each to a single Amazon S3 bucket in us-east-2 from instructors in 14 countries. Which AWS solution should the company use to consistently accelerate these long-distance uploads to the bucket?
A startup called BlueLark Media is building a microservices application that must notify customers by both SMS text messages and email from the same managed service. Which AWS service should the team choose to send these notifications?
An engineer at BlueWave Analytics needs to enable programmatic access to AWS using the CLI and SDKs. They plan to create an Access Key ID and a Secret Access Key for ongoing use. These credentials are linked to which IAM entity?
Which cost-related benefits does moving to AWS provide for a mid-size media company that wants to manage monthly expenses efficiently without long-term contracts? (Choose 2)
❏ A. Granular, pay-as-you-go billing for only the usage you consume
❏ B. Itemized electricity or power charges appear separately on AWS bills
❏ C. Ability to shut down resources when idle so you do not pay while they are off
❏ D. One-time flat fees for on-demand capacity instead of recurring charges
A genomics startup stores rarely accessed research archives in Amazon S3 Glacier and occasionally needs to pull a small archive urgently so the data becomes available in about 1 to 5 minutes. Which retrieval tier should they use?
NovaPixel Games wants to track AWS spending trends and obtain Savings Plans recommendations derived from roughly the last 90 days of usage to choose an optimal commitment. As the Cloud Practitioner, which AWS service should be used to analyze costs and get those recommendations?
A mobile augmented reality startup needs to run compute and store data within 5G carrier networks so that users on cellular connections experience single digit millisecond latency near edge locations. Which AWS service should they choose?
A logistics startup runs EC2 instances in a private subnet that need to read and write to a DynamoDB table named Orders2025. To follow best practices and avoid storing long-term credentials on the servers, which AWS identity should the instances use to obtain permissions to the table?
NorthBay Data is moving about 18 internal services into Docker containers. They want to orchestrate these containers while maintaining full control of and SSH access to the EC2 instances that run them. Which AWS service should they use to satisfy these needs?
❏ A. AWS Fargate
❏ B. Amazon Elastic Container Registry (Amazon ECR)
❏ C. Amazon Elastic Container Service (Amazon ECS)
A media tech startup named VistaPixels wants to add image analysis using an AWS service that is delivered as a complete, provider-managed application accessed through an API, so the team does not operate servers or maintain runtimes. Which AWS service best fits the Software as a Service model for this need?
A regional news streaming startup is designing a highly available web tier on AWS and is evaluating services that enable scalable application architectures. Which statement accurately describes Elastic Load Balancing in this context?
❏ A. Automatically scales the fleet of Amazon EC2 instances to meet fluctuating demand
❏ B. Provides a dedicated private network connection from an on-premises facility to AWS without traversing the public internet
❏ C. Distributes incoming client traffic across multiple targets such as Amazon EC2 instances, containers, and IP addresses
❏ D. Offers a highly available and scalable Domain Name System service
Answers to the Certification Exam Simulator Questions
A regional credit union uses a SAML 2.0 identity provider to give employees single sign-on to multiple third-party SaaS applications. The organization wants to move this workforce sign-in and federation capability to AWS with a fully managed service that centralizes access to external apps. Which AWS service should they use?
✓ B. AWS IAM Identity Center
The correct choice is AWS IAM Identity Center. It provides a fully managed workforce single sign on service that supports SAML 2.0 and centralizes access to external SaaS applications so employees can sign in once and access multiple third party apps and AWS accounts.
AWS IAM Identity Center works as a central identity broker and can integrate with external identity providers or manage identities within AWS. It includes built in SAML support and connectors that make it the appropriate service for moving workforce federation and centralized access to third party SaaS tools to AWS.
Amazon Cognito is focused on customer and application end user authentication for web and mobile applications and it is not intended to be a centralized workforce SSO gateway for many external SaaS applications.
AWS Identity and Access Management (IAM) is used to control permissions and roles for AWS resources and it does not serve as a managed SSO solution for federating users into third party business applications.
AWS Command Line Interface (AWS CLI) is a client tool for interacting with AWS APIs from a local environment and it does not provide identity federation or single sign on functionality.
When a question asks about workforce single sign on to external SaaS using SAML choose AWS IAM Identity Center and use Amazon Cognito when the focus is on customer or app end user authentication.
A digital media startup is implementing an event-driven, serverless pipeline that invokes several AWS services in both sequence and parallel. The team needs a visual designer that shows real-time status and history for roughly 30 stages and provides built-in retries and error handling to coordinate the flow. Which AWS service should they use to orchestrate this workflow?
✓ C. AWS Step Functions
The correct choice is AWS Step Functions. It provides a visual workflow designer that shows execution status and history for each stage and it supports parallel branches while offering built in retries and error handling to coordinate complex serverless pipelines.
AWS Step Functions lets you model the workflow as a state machine so each step is tracked and visible in the console. The service integrates natively with many AWS services and it supports Parallel states for concurrent work along with Retry and Catch configurations for error handling and automatic retries.
Amazon EventBridge is an event bus for routing and filtering events and it does not provide a visual, stateful orchestrator that shows step by step history for a multi stage workflow.
AWS Lambda runs code without provisioning servers and it is not a workflow orchestration tool. Lambda lacks a native visual designer and does not track step level state or history across a multi step process.
Amazon SQS is a messaging queue used to decouple components and buffer messages and it does not orchestrate multi step workflows or provide built in retries and visual execution history.
Look for phrases like visual workflow or step history with built in retries to identify a workflow orchestrator rather than an event bus, function runtime, or message queue.
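As a concrete illustration, here is a minimal state machine definition with a Parallel state and a Retry policy, registered through boto3. All ARNs and names are placeholders, and the role must be assumable by Step Functions.

```python
import json
import boto3

# A two-branch Parallel state with a Retry policy on one task.
definition = {
    "StartAt": "ProcessMedia",
    "States": {
        "ProcessMedia": {
            "Type": "Parallel",
            "Branches": [
                {
                    "StartAt": "Transcode",
                    "States": {
                        "Transcode": {
                            "Type": "Task",
                            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:transcode",
                            "Retry": [{"ErrorEquals": ["States.ALL"],
                                       "IntervalSeconds": 5, "MaxAttempts": 3}],
                            "End": True,
                        }
                    },
                },
                {
                    "StartAt": "GenerateThumbnails",
                    "States": {
                        "GenerateThumbnails": {
                            "Type": "Task",
                            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:thumbnails",
                            "End": True,
                        }
                    },
                },
            ],
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="media-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-exec",  # placeholder role
)
```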
An ad-tech startup runs fault-tolerant analytics on Amazon EC2 and wants the lowest possible cost, even if AWS may briefly reclaim the capacity. Which EC2 purchasing option can be interrupted by AWS when capacity is needed elsewhere?
✓ C. Spot Instances
Spot Instances is the correct choice because these instances run on spare Amazon EC2 capacity at steep discounts and can be interrupted by AWS when that capacity is needed elsewhere, usually after a short notice period.
Spot Instances are designed for fault tolerant and flexible workloads such as batch processing and analytics where brief interruptions are acceptable. AWS reclaims the underlying capacity when demand rises and provides a short warning so you can checkpoint work or handle termination according to your chosen interruption behavior.
On-Demand Instances are not interrupted by AWS once they are running and you control when to stop or terminate them. They provide predictable availability and you pay for compute by the hour or second depending on the instance.
Standard Reserved Instances are a pricing option that gives a discount for committed usage and they do not change how or when AWS interrupts instances. They affect billing and not instance reclamation.
Convertible Reserved Instances offer discounted pricing with the flexibility to change instance attributes within certain rules and they are likewise a billing construct. They do not cause AWS to reclaim or interrupt your running instances.
When the question emphasizes lowest cost and possible interruption think Spot Instances. If the scenario needs predictable availability think On-Demand or reserved pricing instead.
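Requesting Spot capacity is a small variation on a normal instance launch. This sketch uses placeholder AMI and instance type values.

```python
import boto3

ec2 = boto3.client("ec2")

# Request Spot capacity through the standard run_instances call.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
```

When AWS reclaims the capacity, the instance receives a two minute interruption notice through instance metadata, which fault tolerant workloads can use to checkpoint.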
A regional media startup plans to move its applications from its own server room to AWS. Which statement best captures a primary cost benefit they would gain by adopting the AWS Cloud?
✓ B. Pay only for the resources you actually consume
The correct choice is Pay only for the resources you actually consume. This option best captures the primary cost benefit the regional media startup would gain by moving applications to AWS.
With a pay as you go model, costs scale with actual demand and the company avoids buying excess capacity upfront. AWS bills many services by usage, such as compute hours, storage bytes, and data transfer, so expenses align with consumption and you can scale resources up or down to match traffic patterns.
Shift to a capital expenditure purchasing model is incorrect because adopting cloud typically shifts spending from capital expenditure to operating expenditure and not the other way around. Purchasing more hardware up front is not a cloud advantage.
Rely on commitment-based discounts such as Savings Plans as the default pricing model is incorrect because commitment discounts can reduce cost but they are optional optimizations and not the fundamental benefit of the cloud. Savings Plans are a tool to lower long term spend when usage is predictable.
Transfer all IT operations and responsibilities to AWS is incorrect because AWS uses a shared responsibility model and customers continue to manage many aspects such as their applications data and configurations.
An architecture firm needs a reliable private network link from its corporate data center to workloads running in AWS. The team wants predictable throughput and steady latency for ongoing replication and daily operations. Which AWS service meets these requirements?
✓ B. AWS Direct Connect
AWS Direct Connect is the correct option because it provides a dedicated private connection between the corporate data center and AWS that supports predictable throughput and steady latency for replication and daily operations.
AWS Direct Connect bypasses the public internet to deliver consistent bandwidth and lower, more stable latency. It supports dedicated and partner hosted connections and link aggregation so you can provision and manage capacity for production replication and ongoing traffic.
AWS VPN uses encrypted tunnels over the public internet so performance can vary and bandwidth is not guaranteed. That variability makes it less suitable when consistent throughput and steady latency are required.
Amazon CloudFront is a content delivery network that accelerates delivery at the edge and it does not establish a private on premises to AWS network link for replication or steady latency.
Amazon Connect is a cloud contact center service and it does not provide hybrid network connectivity between a corporate data center and AWS.
When a question requires a private link with predictable performance choose AWS Direct Connect. If the scenario highlights quick setup over the internet or encrypted tunnels without strict performance needs then consider AWS VPN.
An engineer at BrightCart Analytics needs an automated assessment that scans Amazon EC2 instances across two VPCs for unintended network exposure and known software vulnerabilities, and then produces a consolidated report. Which AWS service should be used to generate this assessment?
✓ C. Amazon Inspector
The correct choice is Amazon Inspector because it is designed to perform automated assessments of Amazon EC2 instances for known software vulnerabilities and to evaluate network reachability across VPCs so it can identify unintended exposure and produce consolidated findings and reports.
Amazon Inspector runs vulnerability scans that detect common vulnerabilities and exposures and it analyzes reachable network paths to show which services or ports are exposed. It aggregates findings into actionable reports so teams can prioritize remediation across multiple VPCs and instances.
AWS Config evaluates and records resource configurations and checks compliance against rules and baselines. It does not perform host level vulnerability scanning or network reachability analysis of running EC2 instances.
Amazon GuardDuty analyzes telemetry such as VPC Flow Logs and CloudTrail to detect suspicious or malicious activity. It is a threat detection service and not a vulnerability scanner that produces host vulnerability and reachability reports.
Amazon Macie focuses on discovering and protecting sensitive data in Amazon S3. It does not assess EC2 instances for software vulnerabilities or unintended network exposure.
When a question pairs EC2 vulnerability scanning with network reachability across VPCs think Inspector. Remember that GuardDuty is for threat detection, Macie is for S3 data discovery, and AWS Config is for configuration compliance.
Within AWS’s global network, what best describes an Edge location used to deliver content to viewers?
✓ C. A CloudFront CDN point of presence that caches content near viewers
The correct option is A CloudFront CDN point of presence that caches content near viewers. An edge location is a CloudFront point of presence that caches and delivers content closer to end users to reduce latency.
Edge locations function as cache points in CloudFront’s global network and they serve static and dynamic content from the POP nearest the viewer so requests have lower latency and place less load on the origin. When you see the term edge location you should associate it with CloudFront POPs and caching rather than general network connectivity.
An AWS Direct Connect location is incorrect because it provides dedicated private network connections between your on prem network and AWS and it is not a CloudFront POP or a caching service.
A public Amazon S3 endpoint is incorrect because it provides access to S3 objects and does not act as a CDN cache that places content near viewers.
A virtual private gateway for a Site-to-Site VPN is incorrect because it is a VPN termination component used for VPC connectivity and it is unrelated to content delivery or caching.
Edge location almost always refers to a CloudFront POP used for caching near users so choose the CloudFront CDN option when questions ask about edge locations and content delivery.
An early-stage genomics startup plans to run short, bursty data processing on Amazon EC2 where tasks can be stopped or reclaimed by AWS without affecting results. Which purchasing option should they select to achieve the lowest compute cost for these interruption-tolerant workloads?
✓ B. Spot Instance
The correct choice is Spot Instance. Spot Instance provides the lowest compute cost for interruption tolerant workloads because AWS sells unused EC2 capacity at deep discounts and can reclaim instances when needed.
Spot Instance is ideal for short lived, bursty jobs that can be stopped and restarted without impacting results. These instances let you reduce compute spend by using spare capacity and you can combine them with Auto Scaling, Spot Fleets, or fault tolerant application design to maintain throughput despite interruptions.
Reserved Instance (RI) is not appropriate because it requires a long term commitment and it targets predictable, steady state usage rather than transient, interruptible runs.
On-Demand Instance is not optimal because it provides flexible pay as you go pricing but it does not offer the deep discounts that Spot Instance provides for interruption tolerant tasks and it will cost significantly more for the same compute.
Dedicated Host is not suitable because it allocates a physical server for compliance and licensing needs, it carries higher cost, and it is unnecessary for ephemeral, fault tolerant processing.
Look for words like interruptible, flexible, or fault-tolerant and choose Spot Instances to minimize cost for those workloads. Use RIs or Savings Plans for predictable long term demand and Dedicated Hosts only for strict licensing or compliance.
A streaming media startup hosts static images and file downloads in an Amazon S3 bucket and serves customers across several continents. Which AWS service should be used so users around the world receive this content with the lowest latency?
✓ C. Amazon CloudFront
Amazon CloudFront is the correct choice because it is a global content delivery network that caches objects at edge locations near users and it can use an Amazon S3 bucket as the origin for static images and file downloads.
CloudFront reduces latency by serving cached copies from edge locations around the world and it fetches content from the S3 origin when needed, which makes static assets load faster for users across continents.
AWS Elastic Beanstalk is not suitable because it is a platform for deploying and managing application environments and it does not provide a global edge cache for static S3 content.
AWS Lambda provides serverless compute and can run backend code but it does not provide edge caching or native CDN distribution for static files.
AWS Global Accelerator improves network performance by routing traffic over the AWS global network to regional endpoints but it does not cache S3 objects at edge locations and it is not used to distribute static website assets from S3.
For static files in S3 that must be fast worldwide think Amazon CloudFront for edge caching and use AWS Global Accelerator when you need faster routing to regional application endpoints without caching.
A regional health clinic licenses a browser-based appointment system that the vendor operates from end to end. Staff simply sign in and use the features without managing servers, operating systems, databases, or upgrades. Which cloud computing model does this service represent?
✓ B. Software as a Service (SaaS)
The correct choice is Software as a Service (SaaS).
Software as a Service (SaaS) delivers a complete, ready to use application that the vendor hosts and manages and that the clinic accesses through a browser. The provider is responsible for servers, operating systems, databases, scaling, security updates, and application upgrades so staff only sign in and use the service.
Infrastructure as a Service (IaaS) is incorrect because IaaS supplies virtualized compute, storage and networking while customers install and manage operating systems, middleware and applications. That model requires administrative responsibility for the OS and application stack which does not match a fully managed browser application.
Platform as a Service (PaaS) is incorrect because PaaS provides a managed runtime and services for deploying applications while customers still develop, deploy and operate their own apps. The clinic is consuming a finished vendor application so PaaS does not apply.
Function as a Service (FaaS) is incorrect because FaaS focuses on running discrete, event driven functions rather than delivering a full end user application. FaaS abstracts servers for code execution but it does not equate to a ready to use browser based appointment system.
Look for phrases like fully managed and access via browser to identify SaaS. Confirm whether customers manage the OS or application to rule out IaaS and PaaS.
A systems administrator at a regional hospital is standardizing multi-factor sign-in for AWS IAM users. They want an approach that uses a physical key that plugs into a computer’s USB port and is tapped after entering the username and password. Which authentication method should they choose?
✓ B. FIDO U2F security key
The correct choice is FIDO U2F security key. This option matches a physical key that plugs into a computer USB port and is tapped after entering the username and password.
FIDO U2F security key is built on the open FIDO authentication standards, of which FIDO2 with WebAuthn is the current generation, and it provides a phishing resistant second factor. It uses a hardware security key that performs cryptographic authentication with a button press or tap and does not require typing time based codes.
Hardware MFA device is a standalone token that displays 6 digit time based codes which the user must type into the sign in page. It is not a USB tap key so it does not match the described workflow.
Virtual MFA application runs on a smartphone or device and generates time based codes in an app. It requires entering a code and does not involve plugging in or tapping a USB device.
SMS text message MFA sends a one time code by text message which is not delivered via a USB key and is less resistant to interception or SIM based attacks compared with a security key.
A reliability team at a ride-sharing startup needs to ingest application and system logs and search them within seconds to troubleshoot incidents, create operational dashboards, and monitor performance. Which AWS service is the best fit for this type of operational analytics?
✓ B. Amazon OpenSearch Service
Amazon OpenSearch Service is correct because it is a managed and scalable search and analytics engine that supports near real time log ingestion, indexing, and fast querying for troubleshooting and operational dashboards.
OpenSearch provides full text search, aggregations, and visualization through OpenSearch Dashboards and it integrates with ingestion services such as Amazon Kinesis Data Firehose and CloudWatch Logs. These capabilities make OpenSearch well suited to ingest streaming application and system logs and to deliver low latency ad hoc searches and operational dashboards for observability and incident response.
Amazon QuickSight focuses on business intelligence and reporting and it does not provide the log indexing or fast search primitives needed for near real time operational log analysis.
Amazon EMR is designed for batch big data processing with frameworks such as Spark and Hadoop and it is not optimized for low latency interactive searches on streaming logs.
Amazon Athena runs interactive SQL queries over data stored in Amazon S3 and it is not intended for near real time log ingestion and subsecond search and aggregation workflows.
When a question emphasizes near real time log ingestion and fast search choose Amazon OpenSearch Service rather than tools built for BI or batch processing.
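A hedged sketch of the search side follows, using a hypothetical domain endpoint and index name. Real calls must satisfy the domain’s access policy, typically by SigV4 signing the request, which is omitted here for brevity.

```python
import requests

# Hypothetical OpenSearch Service domain endpoint and index name.
ENDPOINT = "https://search-ops-logs-abc123.us-east-1.es.amazonaws.com"
query = {"size": 20, "query": {"match": {"message": "timeout"}}}

# Full text search across ingested log documents in the app-logs index.
resp = requests.get(f"{ENDPOINT}/app-logs/_search", json=query, timeout=10)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"].get("message"))
```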
A nonprofit arts network is launching a new web portal in one AWS Region to serve visitors across six continents. Which AWS services should the team use to reduce latency and boost transfer speeds for this global audience? (Choose 2)
✓ B. AWS Global Accelerator
✓ D. Amazon CloudFront
AWS Global Accelerator and Amazon CloudFront are the correct options for reducing latency and boosting transfer speeds for a web portal hosted in a single AWS Region that must serve users across six continents.
Amazon CloudFront distributes and caches content at edge locations around the world which reduces the distance between users and content and lowers latency for static assets and many dynamic responses. It also reduces origin load and improves perceived performance by serving content from nearby edges.
AWS Global Accelerator assigns static anycast IP addresses and routes user traffic over the AWS global backbone to the closest healthy regional endpoint which improves latency consistency and failover behavior for traffic destined to a single Region. It complements CloudFront when you need optimized routing for noncacheable or interactive traffic.
Amazon S3 Transfer Acceleration is focused on accelerating uploads and downloads to and from S3 buckets and does not speed general web application traffic to compute or load balancer endpoints.
AWS Transit Gateway is built to connect VPCs and on premises networks at scale and it is not intended to optimize user facing latency for global web traffic.
AWS Direct Connect provides private dedicated links between on premises locations and AWS and it does not accelerate public internet access for a worldwide audience.
NorthStar Clinics stores about 45 TB of patient records across more than 80 Amazon S3 buckets and needs an automated, fully managed way to discover, classify, and help protect sensitive data such as PII at scale. Which AWS service should the security team use?
✓ C. Amazon Macie
The correct choice is Amazon Macie because it is a fully managed service that discovers, classifies, and helps protect sensitive data such as personally identifiable information stored in Amazon S3 at scale.
Amazon Macie uses machine learning and pattern matching to identify PII and other sensitive data across many S3 buckets and it provides automated workflows and reporting to help secure that data. This makes it suitable for a 45 TB repository spread across more than 80 buckets where automated, scalable discovery and classification are required.
AWS Key Management Service (AWS KMS) manages encryption keys and controls access to those keys but it does not scan or classify S3 objects so it cannot discover PII by itself.
AWS Secrets Manager securely stores and rotates secrets such as database credentials and API keys and it is not designed to inspect stored data for sensitive content.
Amazon GuardDuty focuses on threat detection and anomaly monitoring across your AWS environment and it does not perform content discovery or classification of S3 objects.
Polaris Learning, an edtech startup, is moving a prototype service to AWS to shorten release cycles and test features quickly. Which AWS capabilities most directly accelerate their ability to experiment and deliver changes faster? (Choose 2)
✓ A. Rapid provisioning of resources
✓ C. Elastic, on-demand compute capacity
Rapid provisioning of resources and Elastic, on-demand compute capacity are correct because they most directly accelerate Polaris Learning’s ability to experiment and deliver changes faster.
Rapid provisioning of resources lets teams create and tear down development, test, and staging environments in minutes which removes hardware and manual setup delays and shortens feedback loops for prototypes.
Elastic, on-demand compute capacity enables immediate right-sizing and scaling so experiment workloads can grow or shrink without procurement delays which supports rapid iteration and controlled cost during trials.
AWS Direct Connect provides dedicated private network connectivity to on-premises locations and it improves performance and consistency but it does not directly speed development cycles or simplify creating test environments.
Lower total cost of ownership is a financial benefit that may result from cloud use and it is valuable but it is not an operational mechanism that by itself makes teams deliver features faster.
Globally secured data centers improve security and compliance and they are important for scaling and trust yet they do not directly accelerate the pace of experimentation or deployments.
When a question emphasizes speed and agility choose options that mention rapid provisioning or elastic compute rather than answers focused on cost or security.
A regional e-commerce marketplace operates multiple production relational databases on Amazon RDS. The database engine vendor releases security updates roughly every quarter, and the team wants these updates applied with minimal manual work and predictable timing. What is the most efficient way to ensure the databases receive these security patches?
✓ C. Enable automatic minor version updates and define a maintenance window in Amazon RDS
Enable automatic minor version updates and define a maintenance window in Amazon RDS is the correct option because it lets Amazon RDS apply engine security patches automatically during a scheduled maintenance window so updates occur with minimal manual effort and predictable timing.
Enable automatic minor version updates and define a maintenance window in Amazon RDS delegates patching to the managed service so AWS performs minor engine upgrades and security fixes for supported database engines within the maintenance window you set. This reduces operational overhead and gives you control over when reboots or brief disruptions may occur.
Configure an AWS Config rule to check each RDS instance for the required patch level is incorrect because AWS Config only evaluates and records configuration compliance and it does not deploy or install engine patches on RDS instances.
Sign in to each DB instance every quarter and manually download and install the vendor’s patches is incorrect because this approach is inefficient and unnecessary for managed RDS databases and it increases operational risk and workload.
Use AWS Systems Manager Patch Manager to schedule database engine patching across the RDS fleet is incorrect because Patch Manager is intended for operating system patching on EC2 and hybrid instances and it does not perform RDS managed database engine minor version upgrades.
For questions about RDS patching remember to prefer the native managed features and think maintenance window and automatic minor version updates rather than manual or Config based solutions.
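Both settings can be applied with one API call. This minimal sketch uses a placeholder instance identifier; the window is expressed in UTC.

```python
import boto3

rds = boto3.client("rds")

# Turn on automatic minor version upgrades and pin the weekly window in
# which AWS may apply them.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    AutoMinorVersionUpgrade=True,
    PreferredMaintenanceWindow="sun:04:00-sun:04:30",  # UTC
    ApplyImmediately=False,  # defer the setting change to the window as well
)
```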
Riverton Analytics is reviewing AWS Support tiers for a new workload. Under the Basic Support plan that is included for all AWS accounts, which features are available to customers? (Choose 2)
✓ B. Access to the AWS Service Health Dashboard
✓ C. Direct, one-on-one help for account and billing inquiries
Direct, one-on-one help for account and billing inquiries and Access to the AWS Service Health Dashboard are included with the Basic support plan. These items cover nontechnical customer service and public service status information that every AWS account receives at no additional cost.
Direct, one-on-one help for account and billing inquiries is available around the clock and covers nontechnical issues handled by customer service. This support does not provide technical troubleshooting or architectural advice.
Access to the AWS Service Health Dashboard gives customers the ability to view current and historical availability and service events for AWS offerings. This is public status information rather than a personalized technical incident response service.
Infrastructure Event Management is not included with Basic and it is offered only to customers on Enterprise On-Ramp and Enterprise plans for launch and large scale event support.
Prescriptive use-case guidance for architectures is provided by paid plans such as Business and above and is not part of the Basic tier.
AWS Support client-side diagnostic tools are not available with Basic and require at least Developer, Business, Enterprise On-Ramp, or Enterprise support levels.
An engineer at Helios Motors needs to balance large volumes of TCP traffic and requires an Elastic Load Balancer that works at the OSI Layer 4 connection level. Which load balancer type should they choose?
✓ C. Network Load Balancer (NLB)
The Network Load Balancer (NLB) is the correct choice for balancing large volumes of TCP traffic because it operates at the OSI Layer 4 and makes routing decisions based on connection information rather than application payload.
The NLB is optimized for high throughput and low latency and can scale to handle millions of TCP connections. It routes on TCP or UDP connection details and can preserve client source IP addresses, which helps with stateful workloads and direct client identification.
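A minimal boto3 sketch, with placeholder subnet and VPC IDs, shows the Layer 4 nature of the NLB, since the load balancer, target group, and listener all speak TCP rather than HTTP:

import boto3

elbv2 = boto3.client("elbv2")

# Create the Network Load Balancer (Layer 4).
lb = elbv2.create_load_balancer(
    Name="tcp-ingest-nlb",                 # hypothetical name
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0abc1234def567890"],  # placeholder subnet ID
)

# Targets sit behind a TCP target group, with no HTTP routing rules.
tg = elbv2.create_target_group(
    Name="tcp-ingest-targets",
    Protocol="TCP",
    Port=5000,
    VpcId="vpc-0abc1234def567890",         # placeholder VPC ID
    TargetType="instance",
)

# The listener forwards raw TCP connections to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=5000,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)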
The Application Load Balancer (ALB) is not correct because it operates at OSI Layer 7 and routes based on HTTP and HTTPS request attributes such as headers and paths which do not meet a raw TCP balancing requirement.
The Classic Load Balancer (CLB) is not preferred because it is a legacy option, and while it can handle both Layer 4 and Layer 7 traffic, it lacks many of the newer features and optimizations that make the NLB the recommended Layer 4 choice for new architectures.
AWS Global Accelerator is not an Elastic Load Balancer because it provides global anycast traffic acceleration and routing rather than per-connection Layer 4 load balancing within a Region, so it does not replace the NLB for this use case.
A developer at Northwind Robotics is configuring a build server to run automation using the AWS CLI and SDKs against multiple AWS services. Which credential type should be set up so the scripts can authenticate programmatic API requests?
✓ B. IAM access keys
The correct option is IAM access keys. They provide the access key ID and secret access key pair that scripts use to authenticate and sign programmatic API requests made with the AWS CLI or SDKs.
IAM access keys are the mechanism the CLI and SDKs expect when signing API calls. You can associate access keys with an IAM user, or you can avoid long-lived keys altogether by using roles and temporary credentials for automation, which improves security.
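As an illustration, boto3 resolves access keys from the environment or the shared credentials file, so automation rarely needs to pass them in code; the sketch below, with obviously fake key values, shows both patterns:

import boto3

# Preferred: the SDK reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# from the environment or from ~/.aws/credentials.
s3 = boto3.client("s3")

# Possible but discouraged: embedding long-lived keys in code.
session = boto3.Session(
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",                          # fake value
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # fake value
)
s3_explicit = session.client("s3")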
AWS Certificate Manager is incorrect because it provisions and manages SSL and TLS certificates to secure network traffic and it does not supply credentials for AWS API calls.
SSH public keys are incorrect because they authenticate SSH sessions to instances or services that accept SSH and they are not used to sign generic AWS service API requests.
Username and password are incorrect because those are used for console sign in and cannot be used by the CLI or SDKs to sign programmatic requests. For programmatic access you must use access keys or temporary credentials issued by roles.
When a question mentions programmatic access or the CLI, think of access keys and remember that roles and temporary credentials are the more secure choice for automation.
A fast-growing edtech startup runs a read-heavy API on Amazon DynamoDB and wants to add an in-memory layer to speed up repeated reads without changing the application’s data model. Which AWS service should they choose to add this cache to DynamoDB?
✓ B. Amazon DynamoDB DAX
Amazon DynamoDB DAX is the correct choice for adding an in-memory cache to DynamoDB because it is purpose-built to provide microsecond read latency and it integrates with DynamoDB so the application data model and client code do not need to change.
Amazon DynamoDB DAX is a fully managed, in-memory cache that sits in front of DynamoDB and speaks the same DynamoDB API, so reads can be served from the cache without rewriting access logic. This reduces DynamoDB read capacity usage and improves response times for repeated reads while keeping the data model intact.
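A minimal sketch, assuming the amazon-dax-client package and a hypothetical cluster endpoint; the point to notice is that the DAX client answers the same get_item call as the regular DynamoDB client, which is why no data model or access logic changes are required:

import botocore.session
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

session = botocore.session.get_session()

# The DAX client mirrors the low-level DynamoDB client API.
dax = AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],  # hypothetical
)

# Served from the in-memory cache when possible, from DynamoDB otherwise.
resp = dax.get_item(
    TableName="Sessions",             # hypothetical table
    Key={"pk": {"S": "user#42"}},
)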
Amazon ElastiCache is a general purpose in memory caching service that supports Redis and Memcached, but it is not a drop in cache for DynamoDB and using it typically requires application changes to implement caching logic, so it does not meet the requirement of avoiding code or data model changes.
AWS Key Management Service (KMS) handles encryption key management and it provides no caching capabilities, so it cannot be used to speed up reads for DynamoDB.
Amazon Simple Queue Service (SQS) is a message queuing service for decoupling components and it does not cache data or reduce read latency for DynamoDB queries.
When a question asks for an in-memory cache that is a drop-in for DynamoDB and requires no client code changes, think DAX rather than a general cache like ElastiCache.
NovaRetail, an online marketplace, is deciding whether to host a third-party analytics suite on Amazon EC2 instances or adopt an AWS fully managed alternative. What is the primary advantage of choosing a fully managed service in this case?
✓ C. Lower operational burden
Lower operational burden is the correct option because a fully managed AWS service shifts routine operational tasks off your team and onto the provider.
A managed service removes responsibilities such as infrastructure maintenance, operating system and software patching, automatic scaling, and many aspects of availability and fault handling, so your engineers can focus on the analytics workload and business logic rather than server upkeep.
Backups are unnecessary is incorrect because managed services do not eliminate the need for data protection and recovery planning and you still must enable and test backups and restores as part of your responsibilities.
Greater control and flexibility is incorrect since self-managing on EC2 usually gives more low-level configurability and tuning, and managed services trade some of that fine-grained control for operational simplicity.
Automatic multi-Region replication by default is incorrect because most AWS managed services are regional and require explicit setup or additional features and cost to replicate data across regions.
When a question contrasts EC2 with an AWS managed alternative choose the managed option when the focus is on reducing operational work and maintenance and remember that backup and security responsibilities still apply.
NorthBridge Capital wants a managed way to ingest system and application log files from about 50 Amazon EC2 instances and several on-premises servers, search those logs, create metric filters, and trigger alarms on patterns within roughly one minute to speed up incident diagnosis. Which AWS service should they use?
✓ B. Amazon CloudWatch Logs
The correct choice is Amazon CloudWatch Logs. It is a managed service that can ingest system and application log files from Amazon EC2 instances and on-premises servers, let you search log events, create metric filters, and drive CloudWatch alarms for near real-time alerting to speed incident diagnosis.
Amazon CloudWatch Logs centralizes logs and supports agents and native integrations to collect logs from EC2 and on-premises hosts. You can define metric filters that convert log patterns into CloudWatch metrics and then create alarms on those metrics so that alerts trigger within roughly one minute for rapid investigation.
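For example, with an illustrative log group and namespace, the metric filter and a 60-second alarm could be created like this:

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Convert matching log events into a custom CloudWatch metric.
logs.put_metric_filter(
    logGroupName="/app/orders",        # hypothetical log group
    filterName="error-count",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "OrderErrors",
        "metricNamespace": "NorthBridge/App",
        "metricValue": "1",
    }],
)

# Alarm on the metric with a 60-second period for near real-time alerting.
cloudwatch.put_metric_alarm(
    AlarmName="order-errors-high",
    Namespace="NorthBridge/App",
    MetricName="OrderErrors",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
)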
AWS CloudTrail records AWS API calls for auditing and compliance and does not serve as a general system or application log aggregation service or provide metric filters on arbitrary log content.
AWS X-Ray provides distributed request tracing and service maps to help debug latency and errors in applications and it is not designed for bulk log aggregation search or for creating CloudWatch metric filters from arbitrary logs.
Amazon OpenSearch Service is a powerful analytics and search engine for log and metrics visualization, but it requires an ingestion pipeline such as CloudWatch Logs subscriptions, Kinesis, or Firehose to receive logs. It does not by itself provide the native metric-filter-to-alarm workflow that CloudWatch Logs does.
Keywords such as log ingestion, metric filters, and CloudWatch alarms usually point to Amazon CloudWatch Logs, the service that integrates directly with CloudWatch metrics and alarms.
At Pine Harbor Robotics, a new administrator needs to understand which credentials can be directly associated with an IAM user for day-to-day access. Which credentials can be provided to the IAM user? (Choose 2)
✓ B. An access key ID with its matching secret access key
✓ C. A password for signing in to the AWS Management Console
An access key ID with its matching secret access key and A password for signing in to the AWS Management Console are the credentials that can be directly associated with an IAM user. These two provide the programmatic and human sign-in methods that IAM manages for individual users.
An access key ID with its matching secret access key provides programmatic access to AWS APIs and SDKs and is created and rotated under the IAM user's security credentials. These access keys are used by the AWS CLI and application code to authenticate as that IAM user.
A password for signing in to the AWS Management Console lets a person sign in to the web console as the IAM user and can be required, allowed, or disabled by account policies and by federated or single sign-on setups. The console password can be combined with MFA for stronger protection.
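Both credentials are created on the IAM user itself, as this boto3 sketch with a hypothetical user name shows:

import boto3

iam = boto3.client("iam")

# Programmatic credentials: an access key pair for the user.
key = iam.create_access_key(UserName="dev-admin")      # hypothetical user
print(key["AccessKey"]["AccessKeyId"])                 # the secret is returned only once

# Human sign-in: a console password (login profile) for the same user.
iam.create_login_profile(
    UserName="dev-admin",
    Password="TempPassw0rd!ChangeMe",                  # placeholder, rotate immediately
    PasswordResetRequired=True,
)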
An Amazon EC2 key pair is not an IAM user credential because it is used to authenticate to EC2 instances over SSH or RDP and it is part of the compute access model rather than the IAM identity model.
An AWS Certificate Manager (ACM) certificate is used to provision and manage TLS certificates for endpoints and load balancers and is not issued as a login credential for an IAM user.
A local Linux server login password is an operating system credential that applies to a specific server and is unrelated to AWS IAM user credentials.
Remember that access keys are for programmatic API and CLI use and a console password is for human sign in. Confirm whether a user needs one or both before assigning credentials.
BlueWave Robotics wants to lower Amazon EC2 costs by finding instances that have shown consistently low CPU use and little network traffic during the past 30 days. Which AWS service or feature can quickly highlight these underutilized instances to guide cost savings?
✓ B. AWS Trusted Advisor
AWS Trusted Advisor is correct because it provides automated best practice recommendations and includes a Low Utilization Amazon EC2 Instances check that reviews recent activity and flags instances with consistently low CPU and minimal network traffic to help reduce EC2 costs.
The AWS Trusted Advisor low utilization check analyzes CPU and network usage over multiple days and highlights candidates for stopping, terminating, or downsizing so you can act on cost savings quickly. It surfaces concise recommendations across accounts and removes the need to manually scan raw metrics when an automated tool is needed to pinpoint underutilized instances.
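The same check can be read programmatically through the AWS Support API, which requires a Business, Enterprise On-Ramp, or Enterprise support plan; a sketch that looks the check up by name rather than hard-coding its ID:

import boto3

# The Support API is served from the us-east-1 endpoint.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
low_util = next(c for c in checks
                if c["name"] == "Low Utilization Amazon EC2 Instances")

result = support.describe_trusted_advisor_check_result(
    checkId=low_util["id"], language="en"
)
for resource in result["result"]["flaggedResources"]:
    print(resource["metadata"])   # instance details and estimated savings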
AWS Cost Explorer is focused on spending analysis and cost trends and it does not automatically analyze instance performance to identify underused EC2 instances.
Amazon CloudWatch provides detailed metrics and alarms for CPU and network but it requires you to interpret those metrics and it does not itself provide a built in underutilization recommendation for EC2.
AWS Budgets monitors and alerts on budget thresholds and forecasts and it is not designed to give resource efficiency or right sizing insights.
When a question asks for an automated check that identifies low CPU and low network EC2 instances, think AWS Trusted Advisor. If the question asks about more detailed right-sizing recommendations, also consider AWS Compute Optimizer.
An online education startup named Kestrel Learning uploads high-definition lecture videos and audio files up to 25 GB each to a single Amazon S3 bucket in us-east-2 from instructors in 14 countries. Which AWS solution should the company use to consistently accelerate these long-distance uploads to the bucket?
✓ C. Amazon S3 Transfer Acceleration
Amazon S3 Transfer Acceleration is the correct choice to consistently accelerate long distance uploads from instructors in many countries into a single S3 bucket in us-east-2.
It leverages AWS edge locations and the AWS global network: clients upload to the nearest edge, and the data is then carried over the optimized AWS backbone into the target bucket, which reduces latency and variability for large objects and improves throughput for files such as 25 GB lecture videos.
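Acceleration is a bucket-level switch plus a client-side endpoint choice; a boto3 sketch with a hypothetical bucket:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time setup: enable acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="kestrel-lectures",     # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploaders then route their transfers through the accelerate endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("lecture-01.mp4", "kestrel-lectures", "lecture-01.mp4")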
Amazon CloudFront is incorrect because it is designed to cache and deliver content to end users and it does not provide an optimized upload path from global clients into S3.
AWS DataSync is incorrect because it targets agent-based migrations and scheduled or automated transfers from on-premises storage or other file systems, and it is not intended for many ad hoc uploads from distributed individual instructors.
AWS Direct Connect is incorrect because it provides a dedicated private network connection that must be provisioned for a specific network and it cannot accelerate uploads from many independent users in different countries.
When a question describes many remote clients uploading large files to S3, favor S3 Transfer Acceleration because it uses edge locations and the AWS backbone to speed uploads rather than caching or dedicated links.
A startup called BlueLark Media is building a microservices application that must notify customers by both SMS text messages and email from the same managed service. Which AWS service should the team choose to send these notifications?
✓ C. Amazon SNS
The correct choice is Amazon SNS because it supports multiple delivery protocols such as SMS and email and it lets a single publish and subscribe service push notifications to end users from distributed applications.
Amazon SNS provides native transports for SMS and email and supports fan-out to multiple endpoints, so a microservices publisher can send one message and let SNS handle delivery, retries, and filtering. It also integrates with other AWS services so applications can publish events and rely on SNS to route them to phones and inboxes.
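A single topic can fan out to both channels, as in this sketch with placeholder endpoints:

import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="customer-alerts")["TopicArn"]

# Same topic, two delivery protocols. Email endpoints must confirm
# the subscription before they receive messages.
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="user@example.com")
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550100")

# One publish reaches both the inbox and the phone.
sns.publish(TopicArn=topic_arn, Message="Your order has shipped.")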
Amazon EventBridge is built for event routing and integration across AWS services and SaaS partners but it does not natively send SMS or email to end recipients.
Amazon SQS is a message queuing service that decouples components and buffers messages but it does not deliver messages directly as SMS or email to users.
Amazon SES is focused on high volume and transactional email and it does not provide SMS delivery so it cannot satisfy a requirement to send both SMS and email from the same managed service.
When a question requires one managed service to send both SMS and email, think Amazon SNS and look for wording about multiple delivery protocols, pub/sub, or fan-out.
An engineer at BlueWave Analytics needs to enable programmatic access to AWS using the CLI and SDKs. They plan to create an Access Key ID and a Secret Access Key for ongoing use. These credentials are linked to which IAM entity?
✓ C. IAM user
The Access Key ID and Secret Access Key are long-term credentials that belong to an identity and are issued to an IAM user or the AWS account root user for signing programmatic requests from the AWS CLI and SDKs.
These keys are long-lived and attached directly to the principal that owns them, so an IAM user receives static access keys that persist until rotated or deleted, which makes them suitable for CLI and SDK use when ongoing programmatic access is required.
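A quick way to see which principal a set of keys belongs to is to call STS; with IAM user keys the returned ARN ends in user/<name>, while role credentials show an assumed-role ARN:

import boto3

sts = boto3.client("sts")
identity = sts.get_caller_identity()

# Example shape: arn:aws:iam::123456789012:user/build-bot for user keys.
print(identity["Arn"])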
IAM role is incorrect because roles are assumed and provide temporary credentials from AWS STS and you do not create static access keys for a role.
IAM policy is incorrect because a policy is a JSON permissions document and does not contain or hold credentials.
IAM user group is incorrect because groups only aggregate users for permissions and you cannot attach long term access keys to a group.
When you see Access Key ID and Secret Access Key, remember they are long-term credentials for an IAM user or the root user, not for roles, which use temporary STS credentials.
Which cost-related benefits does moving to AWS provide for a mid-size media company that wants to manage monthly expenses efficiently without long-term contracts? (Choose 2)
✓ A. Granular, pay-as-you-go billing for only the usage you consume
✓ C. Ability to shut down resources when idle so you do not pay while they are off
Granular, pay-as-you-go billing for only the usage you consume and Ability to shut down resources when idle so you do not pay while they are off are correct for a mid-size media company that wants to manage monthly expenses efficiently without long-term contracts.
The Granular, pay-as-you-go billing for only the usage you consume option describes AWS metering that charges for actual usage of services, so costs scale with consumption and you avoid paying for idle capacity. This model lets you provision resources for peaks and then reduce usage when demand falls, so monthly bills reflect real consumption.
The Ability to shut down resources when idle so you do not pay while they are off option captures the elasticity that lets you stop or terminate compute and other billable resources when they are not needed. Shutting down or scaling down resources reduces ongoing charges and gives operational control over monthly spend without committing to long term contracts.
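As a small illustration of that second point, stopping an instance outside working hours ends its compute charges immediately (the instance ID below is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# A stopped instance accrues no compute charges; only persistent
# resources such as attached EBS volumes continue to bill.
ec2.stop_instances(InstanceIds=["i-0abc1234def567890"])  # placeholder ID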
Itemized electricity or power charges appear separately on AWS bills is incorrect because AWS does not bill customers separately for electricity or power. Energy and infrastructure costs are included in the service pricing and you will not see a distinct line item for power on standard bills.
One-time flat fees for on-demand capacity instead of recurring charges is incorrect because on-demand pricing is usage based, not a one-time flat fee. On-demand costs accrue as you consume resources and are charged per use rather than as a single upfront payment.
AWS Cost Explorer is incorrect as a cost benefit in this context because it is a reporting and analysis tool that helps you understand and visualize spending. It does not itself provide a pricing model or automatic discounts though it can help you identify opportunities to save.
In cost questions, choose answers that describe pay-as-you-go metering or the ability to stop resources to avoid charges, and avoid options that claim separate power line items or one-time on-demand fees.
A genomics startup stores rarely accessed research archives in Amazon S3 Glacier and occasionally needs to pull a small archive urgently so the data becomes available in about 1 to 5 minutes. Which retrieval tier should they use?
✓ B. Expedited retrieval
The correct option is Expedited retrieval because it is the only S3 Glacier retrieval tier that returns data in about one to five minutes for urgent, occasional access to smaller archives.
Expedited retrieval is designed for scenarios that require rapid access and it is optimized for smaller archives where speed matters more than the lowest cost. Use this tier when you need data available within minutes rather than hours.
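The tier is chosen per restore request, as in this boto3 sketch with a hypothetical archived object:

import boto3

s3 = boto3.client("s3")

# Request an expedited restore so the object is readable in about 1 to 5 minutes.
s3.restore_object(
    Bucket="genomics-archives",          # hypothetical bucket
    Key="run-2025-03/sample-17.bam",     # hypothetical key
    RestoreRequest={
        "Days": 2,                       # keep the restored copy available 2 days
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)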
Standard retrieval is incorrect because it typically takes about three to five hours and it serves use cases where moderate latency is acceptable in exchange for lower cost compared to expedited.
Bulk retrieval is incorrect because it is the lowest cost retrieval option and it usually requires about five to twelve hours, so it is intended for large, nonurgent restores.
Amazon S3 Transfer Acceleration is incorrect because it is a network acceleration feature for speeding transfers to and from S3 and it is not a Glacier retrieval tier.
Remember the retrieval timings when answering speed questions. Expedited is minutes while Standard and Bulk take hours. Distinguish retrieval tiers from network features.
NovaPixel Games wants to track AWS spending trends and obtain Savings Plans recommendations derived from roughly the last 90 days of usage to choose an optimal commitment. As the Cloud Practitioner, which AWS service should be used to analyze costs and get those recommendations?
✓ C. AWS Cost Explorer
The correct choice is AWS Cost Explorer. It provides interactive visualizations of historical spend and usage and surfaces Savings Plans purchase recommendations at the account and organization levels to help you choose an optimal commitment based on recent usage.
AWS Cost Explorer analyzes past cost and usage patterns and generates Savings Plans recommendations. It offers filters and charts so you can inspect usage by service and linked accounts and compare recommendation scenarios before you commit.
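The same recommendations are available programmatically through the Cost Explorer API; note that the API's lookback parameter currently tops out at sixty days, so the values below are illustrative rather than an exact match for a 90-day window:

import boto3

ce = boto3.client("ce")

rec = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="SIXTY_DAYS",   # longest lookback the API accepts
)
summary = rec["SavingsPlansPurchaseRecommendation"].get(
    "SavingsPlansPurchaseRecommendationSummary", {})
print(summary)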
AWS Budgets is designed to set cost and usage thresholds and send alerts when budgets are breached and it does not generate Savings Plans recommendations from historical usage.
AWS Cost & Usage Report (AWS CUR) delivers the most granular billing records to an S3 bucket for detailed analysis and long term storage and it requires you to process the raw data yourself so it does not natively recommend Savings Plans.
AWS Pricing Calculator is intended for forward looking cost estimates of proposed architectures and it is not used to analyze past usage or produce Savings Plans recommendations.
When the requirement is Savings Plans recommendations from recent usage, pick AWS Cost Explorer, and remember that Budgets is for alerts while CUR is raw billing data.
A mobile augmented reality startup needs to run compute and store data within 5G carrier networks so that users on cellular connections experience single digit millisecond latency near edge locations. Which AWS service should they choose?
✓ B. AWS Wavelength
The correct option is AWS Wavelength because it embeds AWS compute and storage directly within telecommunications providers’ 5G networks, so mobile users on cellular connections can achieve single-digit millisecond latency at the network edge.
AWS Wavelength places compute and storage at the edge of the carrier network so packets travel fewer hops and the mobile traffic remains inside the telecom domain, which is essential for ultra-low-latency augmented reality experiences.
AWS Outposts runs AWS infrastructure on customer premises and is not hosted inside mobile carrier networks, so it does not keep traffic within the telco network and does not meet the 5G carrier edge requirement.
AWS Snowball Edge is a portable appliance for data transfer and local processing in remote or disconnected environments and it is not a managed service embedded in telecom networks, so it cannot provide the carrier network latency characteristics needed.
AWS Local Zones extend AWS infrastructure closer to metro areas to reduce latency for nearby users, but traffic typically exits the carrier network, so they will not match the ultra-low latency of telco-embedded solutions like Wavelength.
A logistics startup runs EC2 instances in a private subnet that need to read and write to a DynamoDB table named Orders2025. To follow best practices and avoid storing long-term credentials on the servers, which AWS identity should the instances use to obtain permissions to the table?
✓ B. IAM role with an instance profile
The correct choice is IAM role with an instance profile. An EC2 instance uses the attached IAM role with an instance profile to obtain temporary credentials so it can read and write the Orders2025 DynamoDB table without long term secrets on the server.
An IAM role with an instance profile is attached to the instance, and the EC2 instance retrieves temporary credentials from the instance metadata service, which your application uses to call DynamoDB. You then scope an IAM policy to the specific Orders2025 table so permissions follow the principle of least privilege and you avoid embedding static credentials.
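A sketch of that setup with boto3, using hypothetical names and a placeholder account ID in the table ARN:

import json
import boto3

iam = boto3.client("iam")

# Trust policy: EC2 is allowed to assume this role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "ec2.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}
iam.create_role(RoleName="orders-app-role",
                AssumeRolePolicyDocument=json.dumps(trust))

# Least privilege: read and write on the one table only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                              "dynamodb:UpdateItem", "dynamodb:Query"],
                   "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders2025"}],
}
iam.put_role_policy(RoleName="orders-app-role",
                    PolicyName="orders-table-access",
                    PolicyDocument=json.dumps(policy))

# The instance profile is the container EC2 attaches to instances.
iam.create_instance_profile(InstanceProfileName="orders-app-profile")
iam.add_role_to_instance_profile(InstanceProfileName="orders-app-profile",
                                 RoleName="orders-app-role")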
AWS Key Management Service (KMS) manages encryption keys and does not grant an EC2 instance permission to call DynamoDB APIs so it is not the correct identity mechanism for access control.
Amazon Cognito issues identities and credentials for mobile and web application users and it is not typically used for backend EC2 instances to access DynamoDB.
AWS IAM user access keys are long-term credentials that must be stored and rotated on the instance, and they do not follow best practices when an instance role with temporary credentials is available.
NorthBay Data is moving about 18 internal services into Docker containers. They want to orchestrate these containers while maintaining full control of and SSH access to the EC2 instances that run them. Which AWS service should they use to satisfy these needs?
✓ C. Amazon Elastic Container Service (Amazon ECS)
Amazon Elastic Container Service (Amazon ECS) is correct because it supports the EC2 launch type which lets you run containers on EC2 instances that you provision and manage while ECS provides orchestration features.
Using the EC2 launch type with ECS means tasks and services are scheduled onto your own hosts, so you retain full control and can SSH into instances for debugging, performance tuning, or custom configuration while still benefiting from ECS features such as task definitions, service scheduling, and integration with other AWS services.
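The host-level control shows up directly in the launch type; a sketch in which the cluster, image, and counts are illustrative:

import boto3

ecs = boto3.client("ecs")

# Register a task definition that must run on EC2 container instances.
ecs.register_task_definition(
    family="inventory-service",   # hypothetical service
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "inventory",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/inventory:latest",
        "memory": 512,
        "essential": True,
    }],
)

# The EC2 launch type schedules tasks onto instances you own and can SSH into.
ecs.create_service(
    cluster="northbay-cluster",
    serviceName="inventory-service",
    taskDefinition="inventory-service",
    desiredCount=2,
    launchType="EC2",
)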
AWS Fargate is incorrect because it is a serverless container runtime that abstracts away the hosts and does not give you SSH access to underlying instances.
Amazon Elastic Container Registry (Amazon ECR) is incorrect because it is only a managed container image registry and does not orchestrate or run containers on servers.
AWS Lambda is incorrect because it is a serverless function service and although it can accept container images it is not intended for long running containerized services or for providing host level access.
When a scenario requires keeping host-level access, look for ECS with EC2 or self-managed EKS nodes. If the goal is to avoid managing servers, look for Fargate.
A media tech startup named VistaPixels wants to add image analysis using an AWS service that is delivered as a complete, provider-managed application accessed through an API, so the team does not operate servers or maintain runtimes. Which AWS service best fits the Software as a Service model for this need?
✓ C. Amazon Rekognition
The correct choice is Amazon Rekognition. It is consumed as a fully managed application through APIs so VistaPixels does not operate servers or maintain runtimes and this aligns with the Software as a Service model.
Amazon Rekognition provides prebuilt image and video analysis features that you call via API, and AWS handles the underlying infrastructure, the models, and the runtime environment, so the startup only sends images and receives analysis results. This removes the need to provision instances, manage operating systems, or patch runtimes, which matches SaaS expectations.
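The SaaS-style consumption is visible in the call pattern, which is one API request with nothing to host (bucket and key are placeholders):

import boto3

rekognition = boto3.client("rekognition")

# Send an image reference, receive managed analysis results.
resp = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "vistapixels-uploads",   # hypothetical bucket
                        "Name": "catalog/shoe-001.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)
for label in resp["Labels"]:
    print(label["Name"], label["Confidence"])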
AWS Elastic Beanstalk is not correct because it is a Platform as a Service that orchestrates infrastructure and application components while you still deploy and manage your application code and environment settings.
Amazon Elastic Compute Cloud (Amazon EC2) is not correct because it is Infrastructure as a Service where you control the virtual servers the operating system and any installed software and updates.
AWS Lambda is not correct because it is serverless compute or Function as a Service where you provide function code and manage integrations and configuration rather than consuming a complete provider operated software product.
A regional news streaming startup is designing a highly available web tier on AWS and is evaluating services that enable scalable application architectures. Which statement accurately describes Elastic Load Balancing in this context?
✓ C. Distributes incoming client traffic across multiple targets such as Amazon EC2 instances, containers, and IP addresses
Distributes incoming client traffic across multiple targets such as Amazon EC2 instances, containers, and IP addresses is correct because Elastic Load Balancing distributes client requests across registered targets to improve availability and scalability.
Elastic Load Balancing performs health checks on targets and routes traffic only to healthy endpoints across multiple Availability Zones, which increases resilience and helps applications handle variable load.
Automatically scales the fleet of Amazon EC2 instances to meet fluctuating demand is incorrect because that capability belongs to EC2 Auto Scaling which adjusts instance count based on scaling policies and not to the load balancer.
Provides a dedicated private network connection from an on-premises facility to AWS without traversing the public internet is incorrect because that describes AWS Direct Connect which provides private connectivity and does not distribute application traffic.
Offers a highly available and scalable Domain Name System service is incorrect because that refers to Amazon Route 53 which handles DNS resolution and does not itself balance traffic across application targets.
Match the phrase distribute traffic across targets to Elastic Load Balancing and match add or remove instances to EC2 Auto Scaling when you see similar options on the exam.
A great way to secure your employment or even open the door to new opportunities is to get certified. If you’re interested in AWS products, here are a few great resources to help you get Cloud Practitioner, Solutions Architect, Machine Learning, and DevOps certified from AWS: