AWS Cloud Practitioner Practice Test Questions
AWS Practice Tests & Exam Topics
When I prepared for my AWS Cloud Practitioner certification, I didn’t just want to pass; I wanted to walk into the exam room knowing exactly what to expect.
I wanted the same confidence I had when I passed the Scrum Master and Product Owner exams with near-perfect scores.
Over time, I’ve developed a repeatable strategy that has helped me pass multiple IT certifications, including the AWS Cloud Practitioner exam. If you want to pass the Practitioner exam yourself, here’s a five-step strategy that will help you do it:
- Thoroughly read the stated exam objectives and tailor your study to them
- Do AWS Practitioner practice exams before you even begin your study
- Take a course from a reputable trainer
- Do some simple, hands-on projects
- Spend the weekend before you sit the exam doing more AWS practice tests
Add on a sensible exam-day strategy, and you’ll greatly enhance your chances of passing your AWS certification exam on the first try.
AWS Cloud Practitioner Practice Test
As I said, practicing with real AWS Practitioner practice exam questions and answers (not AWS exam dumps or cloud practitioner braindumps) is the best way to prepare. Here are 35 sample exam questions from Cameron McKenzie’s Udemy course to help you prepare.
More practice AWS exam questions can be found in the AWS Practitioner Udemy course and certificationexams.pro
A global payments company is migrating sensitive workloads to AWS. Auditors require that all encryption and key generation occur only within dedicated hardware security modules controlled by the company and validated to FIPS 140-2 Level 3. Which AWS service should the company use to satisfy this compliance requirement?
- ❏ A. AWS Key Management Service (AWS KMS)
- ❏ B. AWS Nitro Enclaves
- ❏ C. AWS CloudHSM
- ❏ D. AWS Secrets Manager
An operations engineer at a fintech startup wants to standardize infrastructure creation and automate application rollout using AWS-managed services while minimizing manual steps across environments. Which services should they choose to automate provisioning and deployments? (Choose 2)
- ❏ A. Elastic Load Balancing
- ❏ B. AWS CloudFormation
- ❏ C. AWS Lambda
- ❏ D. Amazon Elastic File System (EFS)
- ❏ E. AWS Elastic Beanstalk
After moving its workloads to AWS last quarter, the CTO at Alpine Transit, a regional logistics startup, wants to review spending trends over time, break down costs by service and linked account, and identify the main cost drivers. Which AWS tool should be used?
- ❏ A. AWS Pricing Calculator
- ❏ B. AWS Cost Explorer
- ❏ C. AWS CloudTrail
- ❏ D. AWS Organizations consolidated billing
A geospatial analytics startup runs large-scale simulations on Amazon EC2 instances. Each instance requires very fast local scratch space for per-instance caches that can be discarded when the instance stops or is terminated. Which EC2 storage option best meets this need?
- ❏ A. Amazon Elastic File System (Amazon EFS)
- ❏ B. Amazon FSx for Lustre
- ❏ C. Instance Store
- ❏ D. Amazon Elastic Block Store (Amazon EBS)
For bursty workloads averaging 15% utilization, which AWS Cloud capability helps avoid paying for idle capacity?
- ❏ A. Scalability
- ❏ B. Resource elasticity
- ❏ C. Fault tolerance
- ❏ D. High availability
BluePine Labs plans to launch 75 additional Amazon EC2 instances across two Regions next month and wants to know if they are nearing existing service quotas so they can file a quota increase request early. Which AWS tool should they use to get this guidance?
- ❏ A. AWS Health Dashboard
- ❏ B. AWS Cost Explorer
- ❏ C. AWS Trusted Advisor
- ❏ D. Amazon CloudWatch
A fintech startup migrating multiple workloads to AWS wants clarity on the AWS Shared Responsibility Model to satisfy an internal audit. Which task is the customer responsible for under this model?
- ❏ A. Administering the underlying servers and OS for Amazon DynamoDB
- ❏ B. Physical security controls in AWS data centers
- ❏ C. Configuring security groups and network settings for Amazon EC2 instances
- ❏ D. Securing AWS edge locations
A junior cloud administrator at Aurora Pet Supplies is preparing a quick-reference chart that maps AWS storage offerings to their foundational storage models. Which statement correctly associates each service with the appropriate storage type?
- ❏ A. Amazon Simple Storage Service (Amazon S3) is file storage, Amazon Elastic Block Store (Amazon EBS) is block storage, and Amazon Elastic File System (Amazon EFS) is object storage
- ❏ B. Amazon Simple Storage Service (Amazon S3) provides object storage, Amazon Elastic Block Store (Amazon EBS) offers block storage, and Amazon Elastic File System (Amazon EFS) delivers file storage
- ❏ C. Amazon Simple Storage Service (Amazon S3) offers object storage, Amazon Elastic Block Store (Amazon EBS) is file storage, and Amazon Elastic File System (Amazon EFS) is block storage
- ❏ D. Amazon Simple Storage Service (Amazon S3) is block storage, Amazon Elastic Block Store (Amazon EBS) is object storage, and Amazon Elastic File System (Amazon EFS) is file storage
A training coordinator at Northvale Analytics is reviewing core characteristics of cloud adoption and wants to spot the statement that is inaccurate. Which statement does not correctly describe cloud computing?
- ❏ A. Cloud computing provides on-demand access to compute resources
- ❏ B. Using the cloud helps teams move faster and become more agile
- ❏ C. Cloud computing replaces variable operating spend with capital expenditure
- ❏ D. Cloud adoption enables organizations to achieve large economies of scale
How is compute usage metered for Amazon EC2 On-Demand instances running Windows?
- ❏ A. Per vCPU-hour
- ❏ B. By the second
- ❏ C. Per minute
- ❏ D. Per hour
A digital media startup wants AWS capabilities that run close to viewers worldwide to minimize latency and add built-in DDoS defense. Which services operate from AWS Edge locations to accomplish this? (Choose 2)
- ❏ A. AWS Config
- ❏ B. AWS Shield
- ❏ C. Amazon EBS
- ❏ D. Amazon CloudFront
- ❏ E. AWS Direct Connect
A mid-size fintech, HarborBank Labs, is conducting a security audit of its AWS environment. Under the AWS Shared Responsibility Model, which of the following are the company’s responsibilities related to AWS Identity and Access Management when controlling access to its resources? (Choose 2)
- ❏ A. Manage and secure the global AWS network infrastructure
- ❏ B. Regularly review IAM policies and usage to enforce least privilege
- ❏ C. AWS Organizations
- ❏ D. Require multi-factor authentication for all identities, including the root user
- ❏ E. Hardening and vulnerability scanning of AWS-managed hypervisors and facility systems
BrightCart Labs, a retail analytics startup, wants to lower costs and right-size its AWS workloads by continuously observing the operational health, performance metrics, and utilization of all resources across 5 AWS accounts and 3 Regions. Which AWS service should the company use to meet these goals?
- ❏ A. AWS Control Tower
- ❏ B. Amazon CloudWatch
- ❏ C. AWS CloudTrail
- ❏ D. AWS Config
A global research nonprofit named Vireo Institute recently centralized its systems on AWS. Staff frequently work out of 12 offices across several continents and need to sign in from any location to manage resources deployed in multiple AWS Regions. How should the organization configure accounts and access to meet this need?
- ❏ A. Create separate IAM users for every person in each AWS Region
- ❏ B. No special regional configuration is required because AWS Identity and Access Management is a global service
- ❏ C. Allow employees to use a coworker’s credentials when traveling
- ❏ D. Use AWS Organizations to create a dedicated AWS account for each user in every Region
Which AWS service provides manual and automated testing on a wide range of real mobile devices without building a device lab?
- ❏ A. AWS Amplify
- ❏ B. AWS CodePipeline
- ❏ C. AWS Device Farm
- ❏ D. AWS IoT Device Tester
A regional design firm, BlueCanvas Studio, is training new staff on core traits of cloud adoption. Which statement does not accurately describe cloud computing?
- ❏ A. You can deploy applications worldwide in a short time frame
- ❏ B. Cloud services deliver computing resources on demand
- ❏ C. Cloud adoption replaces usage-based operational costs with large up-front capital purchases
- ❏ D. Customers can leverage significant economies of scale
After moving most workloads to AWS last month, a fintech startup called Harbor Ledger wants executives to continuously track and visualize spending and usage trends across all linked accounts. Which AWS service should they use?
- ❏ A. AWS Budgets
- ❏ B. AWS Cost Explorer
- ❏ C. AWS Pricing Calculator
- ❏ D. AWS CloudTrail
A regional electric utility must keep meter audit records for 12 years to meet regulatory requirements, expects almost no access to the data, and wants the lowest possible Amazon S3 storage cost. Which storage class should they choose?
- ❏ A. Amazon S3 Intelligent-Tiering
- ❏ B. Amazon S3 Glacier Deep Archive
- ❏ C. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
- ❏ D. Amazon S3 Glacier Flexible Retrieval
A security team at NorthPeak Finance is writing AWS IAM policies for a new internal reporting application and wants a guiding principle to ensure each user or role receives only the specific permissions required to perform its tasks. Which IAM best practice aligns with this approach?
- ❏ A. Enforce MFA for administrator accounts
- ❏ B. Use IAM roles to delegate permissions
- ❏ C. Apply least-privilege access when authoring policies
- ❏ D. AWS Organizations service control policies
Which AWS service lets you create and share interactive BI dashboards on web and mobile without managing servers?
- ❏ A. Amazon Athena
- ❏ B. QuickSight
- ❏ C. Amazon Redshift
- ❏ D. Amazon Managed Grafana
A fintech startup is moving its customer portal to AWS to avoid large capital purchases and align spending with actual usage. Which advantage of cloud adoption directly supports this objective?
- ❏ A. Give up elasticity to maximize raw performance
- ❏ B. Shift up-front capital spending to pay-as-you-go operating costs
- ❏ C. AWS Budgets
- ❏ D. Exchange stronger security controls for faster scaling
A media analytics startup runs its web tier on Amazon EC2 in a VPC across three Availability Zones. The team wants higher resilience to instance failures without changing the application code and needs inbound traffic automatically spread across healthy instances. Which AWS service should they add?
- ❏ A. AWS Global Accelerator
- ❏ B. Elastic Load Balancing
- ❏ C. Amazon DynamoDB
- ❏ D. Amazon ElastiCache
Riverstone Analytics is onboarding engineers to AWS and wants to align with IAM security guidance while keeping permissions manageable. Which actions represent recommended practices for setting up user access? (Choose 2)
- ❏ A. Store static access keys inside application code
- ❏ B. Use AWS Secrets Manager to keep long-lived access keys for EC2 applications instead of IAM roles
- ❏ C. Assign permissions to users through IAM groups
- ❏ D. Prefer inline policies rather than customer managed policies
- ❏ E. Create a separate IAM user for each individual
A media startup wants a managed AWS service that can analyze images immediately without building custom models. They are considering Amazon Rekognition and want to know which built-in capability it provides out of the box. Which feature is available?
- ❏ A. Convert speech in videos to text
- ❏ B. Detect objects and scenes in images
- ❏ C. Automatically resize or compress images
- ❏ D. Estimate full-body human pose
Under standard Amazon S3 pricing, which usage patterns incur no charge? (Choose 2)
- ❏ A. S3 cross-Region replication traffic
- ❏ B. S3 to EC2 transfer in the same Region
- ❏ C. Data transfer out to the internet
- ❏ D. S3 Standard storage
- ❏ E. S3 data transfer in from the internet
A streaming media startup serves users on multiple continents and needs to improve end-to-end availability and latency for TCP and UDP traffic to endpoints in two AWS Regions by leveraging the AWS global network. The team also requires static anycast IP addresses and fast health-based failover between Regions. Which AWS service should they use?
- ❏ A. Amazon Route 53
- ❏ B. AWS Global Accelerator
- ❏ C. Elastic Load Balancing (ELB)
- ❏ D. Amazon CloudFront
Riverbend Outfitters, a regional e-commerce brand, is moving from its colocated servers to AWS to better handle unpredictable shopping spikes while keeping costs aligned to usage. Which architectural characteristic of cloud computing provides the ability to automatically match capacity to fluctuating demand?
- ❏ A. Vertical scaling
- ❏ B. AWS Auto Scaling
- ❏ C. Elasticity
- ❏ D. Monolithic architecture
Orion Health Labs is reviewing the shared responsibility model before launching a new analytics workload on AWS. Which activities must the customer handle to meet their security obligations in this model? (Choose 2)
- ❏ A. Encrypting stored application data
- ❏ B. Controlling physical access to AWS facilities
- ❏ C. Training the company’s workforce on secure use of AWS services
- ❏ D. Decommissioning and destroying retired AWS hardware
- ❏ E. Patching and maintaining the EC2 host hypervisor layer
A healthcare analytics startup must retain compliance records for 9 years with very rare access. Retrieval within a few hours is acceptable and keeping storage costs low is the top priority. Which Amazon S3 storage class should be selected for this archival requirement?
- ❏ A. Amazon S3 Intelligent-Tiering
- ❏ B. Amazon S3 Glacier Flexible Retrieval
- ❏ C. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
- ❏ D. Amazon S3 Standard
With consolidated billing in AWS Organizations, there are eight matching Reserved Instances and 10 matching instances running across linked accounts in one hour. Which statements correctly describe how charges apply and when a linked account can use a zonal RI? (Choose 2)
- ❏ A. All 10 instances receive RI pricing
- ❏ B. Eight instance-hours at RI rate; two On-Demand
- ❏ C. No discount for linked accounts without their own RIs
- ❏ D. RI sharing works only if in the same Region
- ❏ E. Zonal RI usable across accounts only with the same AZ label
HarborPoint Retail plans to migrate its legacy customer management system that currently runs on MySQL to a fully managed AWS database. The team wants a relational service that preserves MySQL compatibility while offering better performance and managed high availability. Which AWS database service should they choose?
- ❏ A. Amazon Neptune
- ❏ B. Amazon DynamoDB
- ❏ C. Amazon Aurora
- ❏ D. Amazon ElastiCache
A media analytics startup is adopting AWS and wants to distinguish what AWS secures versus what the customer must secure under the Shared Responsibility Model. Which activity is AWS responsible for from a security and compliance perspective?
- ❏ A. Configuring IAM users, roles, and policies
- ❏ B. Operating and securing edge locations
- ❏ C. Protecting and classifying customer data
- ❏ D. Choosing and managing server-side encryption settings
A data analytics startup wants to launch virtual servers on AWS that they can sign in to and manage the operating system directly. They also prefer elastic capacity that can scale quickly with pricing charged by the second. Which AWS service should they use?
- ❏ A. Amazon Lightsail
- ❏ B. Amazon Elastic Compute Cloud (Amazon EC2)
- ❏ C. AWS Lambda
- ❏ D. Amazon Elastic Container Service (Amazon ECS)
A fintech startup is preparing a mission‑critical application and wants to build for resilience using AWS’s global footprint. Which description best explains how AWS enables high availability and fault tolerance?
- ❏ A. The infrastructure is composed of subnets and VPCs that serve as the primary physical isolation boundaries for availability
- ❏ B. High availability is primarily delivered by distributing content through Amazon CloudFront edge locations worldwide
- ❏ C. AWS organizes independent Regions, each comprising multiple physically separate Availability Zones connected with low-latency networks and redundant power, which supports resilient architectures
- ❏ D. Customers can choose individual data centers within Regions to deploy workloads in different facilities for failover
Which Amazon RDS actions can scale a database using built-in features without application changes? (Choose 2)
- ❏ A. Use Amazon RDS Proxy to scale
- ❏ B. Increase allocated storage for the DB instance
- ❏ C. Enable Multi-AZ for scaling
- ❏ D. Scale up to a larger DB instance class
- ❏ E. Use AWS Auto Scaling to resize the DB class
AWS Practice Exam Answers and Explanations
A global payments company is migrating sensitive workloads to AWS. Auditors require that all encryption and key generation occur only within dedicated hardware security modules controlled by the company and validated to FIPS 140-2 Level 3. Which AWS service should the company use to satisfy this compliance requirement?
- ✓ C. AWS CloudHSM
The correct choice is AWS CloudHSM. This service provides dedicated, single-tenant hardware security modules that you control and that are validated to FIPS 140-2 Level 3, which satisfies the auditor requirement that key generation and encryption occur within company-controlled HSMs.
AWS CloudHSM performs key generation and cryptographic operations inside dedicated HSM appliances so key material never leaves the validated hardware boundary. This gives the company exclusive control over HSM access and key lifecycle which is the usual interpretation of a requirement that keys be generated and managed on customer controlled, FIPS 140-2 Level 3 devices.
AWS Key Management Service (AWS KMS) is a managed key service that uses HSMs under the hood but it is not a customer dedicated HSM offering and it does not give customers exclusive control of the physical HSM appliances. This means it often does not meet controls that explicitly require customer managed, dedicated HSMs.
AWS Nitro Enclaves isolates compute and memory for sensitive processing but it is not an HSM and it does not provide dedicated hardware HSM appliances for key generation or storage. Nitro Enclaves can protect in memory secrets but it does not replace a FIPS 140-2 Level 3 validated HSM.
AWS Secrets Manager is intended for storing and rotating credentials and secrets and it does not perform hardware based cryptographic operations on dedicated HSM devices. It is not suitable when the control requirement mandates keys be generated and kept inside customer controlled HSM appliances.
When an audit mandates customer-controlled, FIPS 140-2 Level 3 HSMs, choose CloudHSM. If the requirement is for managed keys without a dedicated hardware mandate, choose KMS.
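For readers who want to see what provisioning a dedicated HSM cluster looks like, here is a minimal boto3 sketch. The subnet IDs are placeholders, and the call assumes a suitable VPC already exists; treat it as illustrative rather than a production setup.

```python
import boto3

# Hypothetical sketch: create a CloudHSM cluster in an existing VPC.
# Replace the placeholder subnet IDs with subnets in different AZs.
hsm = boto3.client("cloudhsmv2")

cluster = hsm.create_cluster(
    HsmType="hsm1.medium",  # dedicated, single-tenant HSM type
    SubnetIds=["subnet-0abc1234", "subnet-0def5678"],
)
print(cluster["Cluster"]["ClusterId"], cluster["Cluster"]["State"])
```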
An operations engineer at a fintech startup wants to standardize infrastructure creation and automate application rollout using AWS-managed services while minimizing manual steps across environments. Which services should they choose to automate provisioning and deployments? (Choose 2)
- ✓ B. AWS CloudFormation
- ✓ E. AWS Elastic Beanstalk
The services that meet the requirement are AWS CloudFormation and AWS Elastic Beanstalk.
AWS CloudFormation provides infrastructure as code so you can define and version stacks of AWS resources in templates and deploy identical environments automatically across accounts and regions. It supports change sets and stack updates so provisioning is repeatable and auditable and this reduces manual steps.
AWS Elastic Beanstalk focuses on application deployment and lifecycle management by orchestrating environment provisioning, capacity scaling, load balancing, health monitoring, and application updates. Elastic Beanstalk leverages CloudFormation under the hood which lets it simplify application rollout while still allowing customization of the underlying resources.
Elastic Load Balancing only distributes incoming traffic across targets and does not provide orchestration or templated resource provisioning so it does not address the need to automate infrastructure creation and application rollout.
AWS Lambda is a serverless compute service for running code in response to events and it can be part of automation workflows but it is not the managed service that standardizes full infrastructure provisioning and end to end deployment across environments.
Amazon Elastic File System (EFS) is a managed file storage service and it does not handle orchestration, deployments, or infrastructure as code so it is not suitable for the stated goal.
When exam language mentions infrastructure as code or automate deployments choose services built for orchestration such as CloudFormation or Elastic Beanstalk rather than picking compute, storage, or networking services.
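To make the infrastructure-as-code idea concrete, here is a minimal sketch that deploys a CloudFormation stack from an inline template. The stack name and the single S3 bucket resource are illustrative only, not part of the exam question.

```python
import boto3

# Minimal illustrative template: one S3 bucket defined as code.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ReportsBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")

# Creating the stack provisions every resource in the template automatically.
cfn.create_stack(StackName="demo-reports-stack", TemplateBody=TEMPLATE)

# Block until provisioning completes before using the resources.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-reports-stack")
```

The same template can be deployed unchanged to every environment, which is exactly the "minimize manual steps" property the question describes.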
After moving its workloads to AWS last quarter, the CTO at Alpine Transit, a regional logistics startup, wants to review spending trends over time, break down costs by service and linked account, and identify the main cost drivers. Which AWS tool should be used?
- ✓ B. AWS Cost Explorer
AWS Cost Explorer is the correct choice because it provides interactive visualizations and reports that let Alpine Transit review spending trends over time, break down costs by AWS service and by linked account, and identify the primary cost drivers across the environment.
AWS Cost Explorer exposes historical charge data with filtering and grouping by service, account, and tags and it supports trend charts, forecasting, and detailed breakdowns so teams can pinpoint which services or accounts are causing the most spend.
AWS Pricing Calculator is not correct because it is intended for estimating future costs before deployment and it does not analyze actual past charges or provide interactive trend visualizations.
AWS CloudTrail is not correct because it records API activity for auditing and security and it does not provide consolidated cost analytics or spending breakdowns.
AWS Organizations consolidated billing is not correct because it centralizes and aggregates billing across accounts for payment and account management and it does not offer the interactive cost reporting and driver analysis that Cost Explorer delivers.
Focus on keywords like visualizing trends and identifying cost drivers when a question asks about past spending and choose AWS Cost Explorer. Remember that AWS Pricing Calculator is for pre deployment estimates and AWS CloudTrail is for API audit logs.
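The same breakdowns are available programmatically through the Cost Explorer API. The sketch below (the date range is just an example) pulls monthly cost grouped by service, mirroring the "break down costs by service" requirement.

```python
import boto3

# The Cost Explorer API is served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print cost per service per month to spot the main cost drivers.
for month in resp["ResultsByTime"]:
    for group in month["Groups"]:
        print(month["TimePeriod"]["Start"], group["Keys"][0],
              group["Metrics"]["UnblendedCost"]["Amount"])
```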
A geospatial analytics startup runs large-scale simulations on Amazon EC2 instances. Each instance requires very fast local scratch space for per-instance caches that can be discarded when the instance stops or is terminated. Which EC2 storage option best meets this need?
- ✓ C. Instance Store
The correct choice is Instance Store. This option provides ephemeral local block storage that is attached to the host and is intended for fast per instance scratch space and caches that can be discarded when the instance stops or is terminated.
Instance Store delivers very low latency and high throughput because the storage is physically local to the host. That makes it ideal for temporary caches and scratch files that do not need to survive instance stops or terminations. The ephemeral nature aligns with workloads that can tolerate data loss on instance failure or shutdown.
Amazon Elastic File System (Amazon EFS) is a managed network file system that is designed for shared, elastic file storage across multiple instances and not for ultra low latency local caching on a single instance.
Amazon FSx for Lustre provides high performance for compute intensive workloads and can integrate with object storage, but it is network attached and does not offer the ephemeral, physically local disk tied to a specific instance lifecycle.
Amazon Elastic Block Store (Amazon EBS) is durable network attached block storage that persists independently of the instance and supports snapshots and recovery, which makes it unnecessary overhead for short lived per instance caches that can be discarded.
If a workload can accept data loss on stop, choose the ephemeral local disk for the fastest scratch performance. If you need durability, choose EBS, and if you need shared file access, choose EFS or FSx.
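Not every instance type includes instance store volumes, so before launching it can help to check which types do. The sketch below uses the documented `instance-storage-supported` filter; treat the output handling as illustrative.

```python
import boto3

ec2 = boto3.client("ec2")

# List a few instance types that ship with local instance store volumes.
resp = ec2.describe_instance_types(
    Filters=[{"Name": "instance-storage-supported", "Values": ["true"]}],
    MaxResults=5,
)
for it in resp["InstanceTypes"]:
    info = it["InstanceStorageInfo"]
    print(it["InstanceType"], f'{info["TotalSizeInGB"]} GB local storage')
```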
For bursty workloads averaging 15% utilization, which AWS Cloud capability helps avoid paying for idle capacity?
- ✓ B. Resource elasticity
Resource elasticity is correct because it adjusts capacity to match demand so you do not pay for idle resources when workloads are bursty and average low utilization.
Resource elasticity dynamically adds and removes compute and other resources in response to changes in demand and it is the capability that enables AWS features like Auto Scaling and serverless billing models to right size capacity and reduce cost.
Scalability describes the ability to grow to meet increased demand and it does not necessarily imply scaling down to cut idle costs so it is not the best choice.
Fault tolerance focuses on surviving component or system failures and it does not address matching capacity to variable utilization so it does not solve paying for idle capacity.
High availability emphasizes redundancy and uptime and it often requires extra capacity for failover which can increase unused resources rather than reduce them.
Watch for clues like idle capacity or bursty as they usually indicate that elasticity is the right concept to choose on the exam.
BluePine Labs plans to launch 75 additional Amazon EC2 instances across two Regions next month and wants to know if they are nearing existing service quotas so they can file a quota increase request early. Which AWS tool should they use to get this guidance?
- ✓ C. AWS Trusted Advisor
AWS Trusted Advisor is correct because it runs Service Limits checks that show when your EC2 usage is approaching current quotas so you can request increases before launching the planned additional instances.
AWS Trusted Advisor compares your resource usage to service quotas and highlights when you are approaching limits, providing links and guidance to request quota increases for EC2 and other services. This proactive guidance is why it is the right tool for assessing whether additional instances will exceed existing quotas.
AWS Health Dashboard is focused on service health and account specific events and it does not monitor service quota usage or warn when you approach limits.
AWS Cost Explorer analyzes spending patterns and forecasts costs and it does not evaluate or report on service quota limits.
Amazon CloudWatch collects metrics, logs, and alarms for performance and operational health, but it does not determine how close your resources are to EC2 service quotas.
When a question mentions service limits or quotas and asks for proactive guidance, map the answer to AWS Trusted Advisor.
A fintech startup migrating multiple workloads to AWS wants clarity on the AWS Shared Responsibility Model to satisfy an internal audit. Which task is the customer responsible for under this model?
- ✓ C. Configuring security groups and network settings for Amazon EC2 instances
The correct customer responsibility under the AWS Shared Responsibility Model is Configuring security groups and network settings for Amazon EC2 instances. This option is what the customer must manage within their AWS account.
The Shared Responsibility Model separates duties so that AWS handles the security of the cloud infrastructure while customers handle security in the cloud. That means Configuring security groups and network settings for Amazon EC2 instances is a customer task because customers control instance level network access rules, firewalls, and related configuration for their compute resources.
Administering the underlying servers and OS for Amazon DynamoDB is not the customer responsibility because Amazon DynamoDB is a fully managed service and AWS operates the servers and operating systems for that service.
Physical security controls in AWS data centers are managed by AWS because they secure the facilities, hardware, and physical access that run cloud services.
Securing AWS edge locations is also an AWS responsibility since edge locations and the global content delivery network are part of AWS managed infrastructure.
Remember that Security OF the cloud is AWS and Security IN the cloud is your responsibility. Focus on tasks that require configuring resources inside your account when choosing customer responsibilities.
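As a concrete example of this customer-side duty, the sketch below opens inbound HTTPS from one CIDR block on a security group. The group ID and CIDR are placeholders; the point is that you, not AWS, decide these rules.

```python
import boto3

ec2 = boto3.client("ec2")

# Customer responsibility in action: you control what traffic reaches
# your instances. This allows inbound HTTPS from one office CIDR only.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "HQ office"}],
    }],
)
```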
A junior cloud administrator at Aurora Pet Supplies is preparing a quick-reference chart that maps AWS storage offerings to their foundational storage models. Which statement correctly associates each service with the appropriate storage type?
- ✓ B. Amazon Simple Storage Service (Amazon S3) provides object storage, Amazon Elastic Block Store (Amazon EBS) offers block storage, and Amazon Elastic File System (Amazon EFS) delivers file storage
Amazon Simple Storage Service (Amazon S3) provides object storage, Amazon Elastic Block Store (Amazon EBS) offers block storage, and Amazon Elastic File System (Amazon EFS) delivers file storage is correct because these services correspond directly to the object, block, and file storage models used in cloud architectures.
Amazon S3 stores data as objects in buckets, so each object carries its data and metadata, and it is ideal for unstructured data, archives, and media. Amazon EBS provides raw block devices that attach to EC2 instances and are suitable for operating system volumes and databases. Amazon EFS offers a managed POSIX-style file system that multiple instances can mount for shared file access and typical file workloads.
Amazon Simple Storage Service (Amazon S3) is file storage, Amazon Elastic Block Store (Amazon EBS) is block storage, and Amazon Elastic File System (Amazon EFS) is object storage is wrong because it treats S3 as a file system and it claims EFS is an object store which is not correct.
Amazon Simple Storage Service (Amazon S3) offers object storage, Amazon Elastic Block Store (Amazon EBS) is file storage, and Amazon Elastic File System (Amazon EFS) is block storage is incorrect because it swaps the roles of EBS and EFS and EBS does not provide a file system and EFS does not provide raw block devices.
Amazon Simple Storage Service (Amazon S3) is block storage, Amazon Elastic Block Store (Amazon EBS) is object storage, and Amazon Elastic File System (Amazon EFS) is file storage is invalid because it reverses the responsibilities of S3 and EBS and S3 is not block storage and EBS is not an object store.
Remember the simple mapping: S3 = object, EBS = block, and EFS = file. Use workload examples to recall each service quickly.
A training coordinator at Northvale Analytics is reviewing core characteristics of cloud adoption and wants to spot the statement that is inaccurate. Which statement does not correctly describe cloud computing?
- ✓ C. Cloud computing replaces variable operating spend with capital expenditure
The inaccurate statement is Cloud computing replaces variable operating spend with capital expenditure.
Cloud computing is fundamentally pay as you go and operational in nature. You do not buy and depreciate servers when you use cloud services. Even when organizations prepay for options like reserved capacity they still do not own the assets and the costs are treated as operating expenses rather than capital purchases.
Cloud computing provides on-demand access to compute resources is correct because on demand provisioning of compute and storage is a core characteristic of cloud services and it enables rapid scaling.
Using the cloud helps teams move faster and become more agile is correct because managed services automation and self service provisioning reduce operational overhead and let teams iterate and deploy more quickly.
Cloud adoption enables organizations to achieve large economies of scale is accurate because large cloud providers spread fixed costs across many customers and operate at scale which lowers unit costs for customers.
When a question mentions costs watch for reversed phrasing. Cloud typically shifts spending from CapEx to OpEx and exam distractors often flip that relationship.
How is compute usage metered for Amazon EC2 On-Demand instances running Windows?
- ✓ B. By the second
The correct choice is By the second. Amazon EC2 On-Demand instances running Windows are billed at per-second granularity which provides finer accuracy than coarser time units.
This per-second billing applies to EC2 On-Demand compute usage, so charges accrue by the second subject to a 60-second minimum. Per-second billing reduces cost for short-lived instances and aligns charges more closely with actual runtime compared with hourly blocks.
Per hour is incorrect because EC2 moved away from strict hourly metering for On-Demand instances and now uses finer granularity for most instance types and operating systems.
Per minute is incorrect because AWS does not meter On-Demand EC2 compute in minute increments for billing.
Per vCPU-hour is incorrect because EC2 pricing is not based on a vCPU-hour billing unit for On-Demand instances.
When you see questions about EC2 On-Demand metering, remember the answer is the most granular time unit supported: AWS uses per-second billing with a 60-second minimum rather than per-minute or per-hour metering.
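To see why per-second granularity matters, the quick calculation below compares per-second and hourly metering for a short run. The $0.10 hourly rate is a made-up number for illustration, not an actual AWS price.

```python
# Illustrative only: compare per-second vs hourly billing for a 14-minute run.
HOURLY_RATE = 0.10                      # hypothetical On-Demand rate, USD/hour
run_seconds = 14 * 60                   # instance ran for 14 minutes
billed_seconds = max(run_seconds, 60)   # per-second billing has a 60-second minimum

per_second_cost = billed_seconds * (HOURLY_RATE / 3600)
hourly_block_cost = HOURLY_RATE         # old hourly metering billed a full hour

print(f"per-second: ${per_second_cost:.4f} vs hourly block: ${hourly_block_cost:.2f}")
```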
A digital media startup wants AWS capabilities that run close to viewers worldwide to minimize latency and add built-in DDoS defense. Which services operate from AWS Edge locations to accomplish this? (Choose 2)
- ✓ B. AWS Shield
- ✓ D. Amazon CloudFront
Amazon CloudFront and AWS Shield are correct because they operate from AWS Edge locations to deliver content close to viewers and to provide integrated DDoS protection at the network edge.
Amazon CloudFront distributes and caches content across a global network of Edge locations so viewers receive data from nearby endpoints which reduces latency and improves streaming and download performance.
AWS Shield offers always-on DDoS mitigation that integrates with the Edge network, so attacks can be absorbed or mitigated before reaching the origin infrastructure, and it is particularly effective when combined with CloudFront for traffic that flows through those Edge locations.
AWS Config operates at the regional level to record and evaluate resource configurations and it is not delivered from Edge locations so it does not provide edge content delivery or DDoS defense.
Amazon EBS is block storage provisioned within Availability Zones and it is not an edge service so it cannot reduce viewer latency by serving data from Edge locations.
AWS Direct Connect provides dedicated network links between customer sites and AWS Regions and it uses physical connection facilities rather than running services from CloudFront Edge locations so it is not considered an Edge location service.
Edge locations usually indicate services that serve or protect traffic at the network edge so choose CloudFront for content delivery and Shield for integrated DDoS protection when a question mentions low latency and built in DDoS defense.
A mid-size fintech, HarborBank Labs, is conducting a security audit of its AWS environment. Under the AWS Shared Responsibility Model, which of the following are the company’s responsibilities related to AWS Identity and Access Management when controlling access to its resources? (Choose 2)
- ✓ B. Regularly review IAM policies and usage to enforce least privilege
- ✓ D. Require multi-factor authentication for all identities, including the root user
Regularly review IAM policies and usage to enforce least privilege and Require multi-factor authentication for all identities, including the root user are correct because they are explicit customer responsibilities under the AWS Shared Responsibility Model when controlling access to resources.
Regularly review IAM policies and usage to enforce least privilege is correct because customers define, apply, and audit identity and access controls. Regular reviews of roles, policies, and usage help remove excessive permissions and reduce blast radius from compromised credentials.
Require multi-factor authentication for all identities, including the root user is correct because enabling and enforcing MFA is a configuration and operational choice that the customer must make to strengthen account security and protect high privilege identities.
Manage and secure the global AWS network infrastructure is incorrect because AWS is responsible for securing the global network and data centers that run the cloud services.
AWS Organizations is incorrect because it is an AWS service and not a responsibility. Using it is optional and it does not change the division of duties in the shared responsibility model.
Hardening and vulnerability scanning of AWS-managed hypervisors and facility systems is incorrect because AWS secures the underlying compute, storage, networking, and physical infrastructure that customers do not operate.
When you see IAM questions think of customer actions that are about configuration, review, and enforcement. Focus on least privilege and requiring MFA for high privilege accounts when selecting answers.
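One common way to enforce the MFA requirement is a deny-unless-MFA policy. The document below is a simplified version of the pattern AWS publishes; attach it with care, because as written it blocks every action for sessions without MFA.

```python
import json
import boto3

# Simplified deny-unless-MFA guardrail: any request made without an MFA
# session is denied. Real-world versions usually carve out the IAM actions
# a user needs to enroll an MFA device in the first place.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWithoutMFA",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="RequireMFA", PolicyDocument=json.dumps(policy))
```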
BrightCart Labs, a retail analytics startup, wants to lower costs and right-size its AWS workloads by continuously observing the operational health, performance metrics, and utilization of all resources across 5 AWS accounts and 3 Regions. Which AWS service should the company use to meet these goals?
- ✓ B. Amazon CloudWatch
Amazon CloudWatch is correct because it collects and visualizes metrics and logs, sets alarms, and provides dashboards that reveal performance, utilization, and error trends across accounts and Regions, so teams can continuously observe operational health and identify right-sizing opportunities.
Amazon CloudWatch aggregates metrics and logs from AWS services and custom sources and lets you build dashboards and alarms for proactive alerting and automated responses. These capabilities enable BrightCart Labs to monitor utilization across five accounts and three Regions, identify idle or underutilized resources, and make data-driven decisions to lower costs.
AWS Control Tower is focused on establishing and governing a secure multi account landing zone and on provisioning and governance guardrails rather than on collecting performance metrics or providing dashboards and alarms for utilization monitoring.
AWS CloudTrail records API calls and user activity for auditing and security investigations and it does not provide continuous performance metrics dashboards or alarming features needed for right sizing resources.
AWS Config tracks resource configurations and compliance state and it is useful for auditing configuration drift and compliance checks but it does not deliver the metrics based monitoring and alarm features required for real time health and utilization analysis.
When a question mentions metrics, logs, dashboards, or alarms for health and utilization, think Amazon CloudWatch, and avoid confusing monitoring needs with audit or governance services.
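Right-sizing usually starts with an alarm on low utilization. This sketch flags an instance whose average CPU stays under 10% for an hour; the instance ID and alarm name are placeholders.

```python
import boto3

cw = boto3.client("cloudwatch")

# Flag a likely over-provisioned instance: average CPU below 10% for
# twelve consecutive 5-minute periods (one hour).
cw.put_metric_alarm(
    AlarmName="low-cpu-rightsizing-candidate",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=12,
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
)
```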
A global research nonprofit named Vireo Institute recently centralized its systems on AWS. Staff frequently work out of 12 offices across several continents and need to sign in from any location to manage resources deployed in multiple AWS Regions. How should the organization configure accounts and access to meet this need?
- ✓ B. No special regional configuration is required because AWS Identity and Access Management is a global service
No special regional configuration is required because AWS Identity and Access Management is a global service is correct because IAM identities and roles are global and staff can sign in from any office to manage resources deployed in any AWS Region.
IAM is not tied to a specific Region so a single IAM user or role can be granted policies that permit actions on resources across Regions. This avoids duplicating identities and lets administrators control access centrally with policies and roles.
Create separate IAM users for every person in each AWS Region is wrong because IAM identities are global and creating per Region users adds unnecessary administrative overhead and risks inconsistent permissions.
Allow employees to use a coworker’s credentials when traveling is wrong because credential sharing breaks least privilege, prevents reliable auditing, and increases security risk.
Use AWS Organizations to create a dedicated AWS account for each user in every Region is wrong because Organizations is for account governance and isolation rather than per user per Region access and creating one account per user would cause excessive cost and complexity.
Keep in mind that IAM is global and you should not share credentials. Use policies and roles to grant access across Regions and accounts.
Which AWS service provides manual and automated testing on a wide range of real mobile devices without building a device lab?
- ✓ C. AWS Device Farm
AWS Device Farm is the correct option because it provides both manual and automated testing on a wide range of real mobile devices without requiring you to build or maintain a physical device lab.
AWS Device Farm gives on-demand access to many real smartphones and tablets running different operating systems and versions and it supports automated test suites as well as interactive manual sessions so you can validate app behavior on actual hardware rather than emulators or simulators.
AWS Amplify is incorrect because it focuses on building, configuring, and hosting web and mobile applications and it does not provide access to physical devices for test execution.
AWS CodePipeline is incorrect because it orchestrates continuous integration and delivery workflows and it does not supply a pool of real mobile devices for running tests.
AWS IoT Device Tester is incorrect because it is intended to validate IoT devices and firmware for AWS IoT programs and it is not designed for mobile app testing across phones and tablets.
When a question mentions real devices and both manual and automated testing, choose the service that provides device access rather than CI/CD or IoT tools.
A regional design firm, BlueCanvas Studio, is training new staff on core traits of cloud adoption. Which statement does not accurately describe cloud computing?
- ✓ C. Cloud adoption replaces usage-based operational costs with large up-front capital purchases
The statement that does not accurately describe cloud computing is Cloud adoption replaces usage-based operational costs with large up-front capital purchases.
Cloud computing typically moves spending away from capital expenditure and toward operating expense and customers pay for resource consumption as they use it. Even options that offer discounts for prepayment or reserved capacity remain treated as operating costs and users do not take ownership of the underlying hardware, so the idea of replacing variable OpEx with large upfront CapEx is incorrect.
You can deploy applications worldwide in a short time frame is correct because major cloud providers offer global regions and services that enable rapid, multi region deployment.
Cloud services deliver computing resources on demand is correct because on demand, self service provisioning and rapid scalability are core characteristics of cloud models.
Customers can leverage significant economies of scale is correct because providers aggregate demand across many customers and optimize operations to reduce costs per user.
Focus on cost model language and remember that cloud usually converts CapEx into variable OpEx so watch for statements that reverse that relationship.
After moving most workloads to AWS last month, a fintech startup called Harbor Ledger wants executives to continuously track and visualize spending and usage trends across all linked accounts. Which AWS service should they use?
- ✓ B. AWS Cost Explorer
AWS Cost Explorer is the correct choice for Harbor Ledger because it provides interactive visualizations and reports that enable executives to continuously track spending and usage trends across all linked accounts.
AWS Cost Explorer offers time series charts, filters by account and tags, saved reports and basic forecasting so leadership can analyze trends after migration and identify cost drivers over time.
AWS Budgets is useful for setting targets and sending alerts when thresholds are exceeded but it is not intended for rich, exploratory visualization and trend analysis across many accounts.
AWS Pricing Calculator is designed for estimating prospective costs before deployment and it does not operate on your organization billed usage data.
AWS CloudTrail captures API activity for auditing and security investigations and it does not provide cost reporting or spend visualization.
Match the action words in the question to the service: visualize or analyze points to Cost Explorer, while alerts points to Budgets.
A regional electric utility must keep meter audit records for 12 years to meet regulatory requirements, expects almost no access to the data, and wants the lowest possible Amazon S3 storage cost. Which storage class should they choose?
- ✓ B. Amazon S3 Glacier Deep Archive
The correct choice is Amazon S3 Glacier Deep Archive because it delivers the lowest Amazon S3 storage price for data retained for many years with almost no expected access.
Amazon S3 Glacier Deep Archive is built for long term regulatory archives and is optimized for minimal storage cost at the expense of retrieval speed. It is appropriate when records must be kept for a decade or more and restores are expected to be extremely rare.
Amazon S3 Intelligent-Tiering is wrong because it optimizes for changing access patterns and includes monitoring and automation charges, and it is not the cheapest option for data that will almost never be accessed over many years.
Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is wrong because it offers faster retrieval but has higher storage rates than deep archive classes, and that makes it more expensive for decade-long rarely accessed archives.
Amazon S3 Glacier Flexible Retrieval is wrong because it provides faster restore options than Deep Archive and is useful for occasional quicker restores, but its storage price is higher than Amazon S3 Glacier Deep Archive, so it is not the minimum-cost choice for this compliance use case.
When the requirements state long term retention with very rare access choose the deepest archival S3 class to minimize storage cost.
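In practice you would either upload directly into the archival class or let a lifecycle rule transition data there. Both variants are sketched below with placeholder bucket, key, and prefix names.

```python
import boto3

s3 = boto3.client("s3")

# Option 1: write the audit record straight into Deep Archive.
s3.put_object(
    Bucket="meter-audit-records",  # placeholder bucket name
    Key="2025/meter-0042.csv",
    Body=b"reading,timestamp\n",
    StorageClass="DEEP_ARCHIVE",
)

# Option 2: transition existing objects after 30 days via a lifecycle rule.
s3.put_bucket_lifecycle_configuration(
    Bucket="meter-audit-records",
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-audit-records",
        "Status": "Enabled",
        "Filter": {"Prefix": "2025/"},
        "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
    }]},
)
```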
A security team at NorthPeak Finance is writing AWS IAM policies for a new internal reporting application and wants a guiding principle to ensure each user or role receives only the specific permissions required to perform its tasks. Which IAM best practice aligns with this approach?
- ✓ C. Apply least-privilege access when authoring policies
Apply least-privilege access when authoring policies is correct because it requires granting only the specific actions and resources that a user or role needs to perform their tasks and this reduces the blast radius of compromised credentials or misconfiguration.
Applying least privilege means scoping permissions to exact actions and ARNs and adding conditions when possible. You implement this by narrowing allowed actions, restricting resource ARNs, using condition keys, and testing policies with tools such as the IAM policy simulator or IAM Access Analyzer before deployment.
Use IAM roles to delegate permissions is incorrect because roles are a mechanism for granting or assuming permissions and do not by themselves define which actions or resources should be allowed. You still must apply least privilege when you write the role policies.
Enforce MFA for administrator accounts is incorrect because multifactor authentication strengthens how identities authenticate but it does not limit the scope of permissions granted in IAM policies.
AWS Organizations service control policies is incorrect because service control policies set organization level guardrails and can deny broad classes of actions across accounts but they do not replace the practice of granting only the precise permissions to each identity and they operate at a different control layer.
When exam language mentions minimum necessary permissions or only required actions map those phrases to the principle of least privilege and choose the option that limits permissions per identity.
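Here is what least privilege looks like in a policy document: one action on one bucket prefix and nothing else. The bucket name and prefix are illustrative.

```python
import json

# A least-privilege policy: the reporting app may only read objects under
# one prefix of one bucket. No writes, no other buckets, no other services.
least_privilege = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::internal-reports/monthly/*",
    }],
}
print(json.dumps(least_privilege, indent=2))
```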
Which AWS service lets you create and share interactive BI dashboards on web and mobile without managing servers?
- ✓ B. QuickSight
QuickSight is correct because it is AWS’s fully managed business intelligence service that lets you create publish and share interactive dashboards accessible on web and mobile without provisioning or managing servers.
QuickSight provides features such as SPICE for fast in-memory analytics and built-in sharing and embedding, so authors can build dashboards and distribute them to users and applications while AWS handles the underlying infrastructure and scaling.
Amazon Athena is incorrect because it is a serverless query service for analyzing data in Amazon S3 and it does not include dashboard authoring and publishing capabilities by itself.
Amazon Redshift is incorrect because it is a cloud data warehouse that can serve as a source for BI tools but it is not the service used to create and share interactive dashboards.
Amazon Managed Grafana is incorrect because it focuses on observability and operational metrics dashboards and it integrates with monitoring data sources rather than providing general purpose business intelligence reporting.
When the question mentions fully managed or serverless BI and interactive dashboards for web or mobile pick the managed BI service instead of a warehouse or query engine.
A fintech startup is moving its customer portal to AWS to avoid large capital purchases and align spending with actual usage. Which advantage of cloud adoption directly supports this objective?
- ✓ B. Shift up-front capital spending to pay-as-you-go operating costs
The correct choice is Shift up-front capital spending to pay-as-you-go operating costs. This option directly aligns with the startup goal of avoiding large capital purchases and matching costs to actual usage.
Shift up-front capital spending to pay-as-you-go operating costs is correct because cloud services let organizations convert heavy up-front capital expenditures into flexible operating expenses, scale resources up and down with demand, and pay only for the capacity they consume.
Give up elasticity to maximize raw performance is incorrect because cloud providers deliver elasticity and strong performance and you do not have to sacrifice elasticity to get good performance.
AWS Budgets is incorrect because it is a cost management and forecasting tool rather than the underlying business model that enables pay as you go.
Exchange stronger security controls for faster scaling is incorrect because AWS emphasizes security and does not require trading off security controls to achieve scalability.
For cost related questions look for wording about pay-as-you-go or shifting from capital expenses to operating expenses. Management tools are helpful but they are not the core cost model.
A media analytics startup runs its web tier on Amazon EC2 in a VPC across three Availability Zones. The team wants higher resilience to instance failures without changing the application code and needs inbound traffic automatically spread across healthy instances. Which AWS service should they add?
- ✓ B. Elastic Load Balancing
Elastic Load Balancing is the correct choice because it automatically routes inbound traffic across healthy EC2 instances in multiple Availability Zones and it increases application resilience without requiring any changes to the application code.
Elastic Load Balancing performs health checks and forwards traffic only to healthy targets and it spreads requests across instances in different AZs so the web tier remains available when individual instances fail. The load balancer sits in front of the instances and handles distribution and failover so application code does not need to be modified.
AWS Global Accelerator improves global availability and performance by using static anycast IPs and routing to regional endpoints and it is typically used in front of regional load balancers rather than directly balancing traffic across EC2 instances inside a VPC.
Amazon DynamoDB is a fully managed NoSQL database and it does not distribute inbound client requests to EC2 web servers or provide compute layer fault tolerance and therefore it does not meet the requirement to spread traffic across instances.
Amazon ElastiCache offers in-memory caching to improve performance and it does not act as a request load balancer so it will not distribute inbound client traffic across application servers or add the desired resilience to the compute layer.
When a question mentions spreading requests across instances or increasing resilience without code changes think Elastic Load Balancing rather than databases or caches.
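A minimal sketch of the pieces involved, assuming the VPC, subnets, and security group already exist (all IDs are placeholders): a target group with a health check, the load balancer spanning the three Availability Zones, and a listener tying them together.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group: health checks decide which instances receive traffic.
tg = elbv2.create_target_group(
    Name="web-tg", Protocol="HTTP", Port=80,
    VpcId="vpc-0123456789abcdef0", HealthCheckPath="/healthz",
)

# Application Load Balancer spanning subnets in the three AZs.
lb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-aaa1111", "subnet-bbb2222", "subnet-ccc3333"],
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Listener: forward inbound HTTP to the healthy targets.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```

Note that the application code never changes; instances simply register with the target group and the load balancer handles distribution and failover.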
Riverstone Analytics is onboarding engineers to AWS and wants to align with IAM security guidance while keeping permissions manageable. Which actions represent recommended practices for setting up user access? (Choose 2)
- ✓ C. Assign permissions to users through IAM groups
- ✓ E. Create a separate IAM user for each individual
Assign permissions to users through IAM groups and Create a separate IAM user for each individual are correct because they enable scalable permission administration and clear accountability for actions by named individuals.
Assign permissions to users through IAM groups simplifies management by letting you grant, update, and audit permissions at the group level rather than per user. This approach supports the principle of least privilege and makes it easier to apply managed policies consistently across similar roles.
Create a separate IAM user for each individual gives each engineer a unique identity for authentication and auditing. Individual users let you require multi factor authentication and revoke or adjust access for a single person without impacting others.
Store static access keys inside application code is insecure because embedded keys can be leaked from source control or logs and they are hard to rotate. The recommended approach is to use temporary credentials from roles instead of hard coding secrets.
Use AWS Secrets Manager to keep long-lived access keys for EC2 applications instead of IAM roles is not recommended because storing long lived keys still creates exposure and rotation burden. EC2 instances should use IAM roles with instance profiles to obtain short lived credentials automatically.
Prefer inline policies rather than customer managed policies is discouraged because inline policies are attached to a single principal and they are harder to reuse and audit. Customer managed or AWS managed policies are easier to maintain and review.
Favor solutions that support least privilege and centralized management. Avoid choices that suggest embedding long lived keys in code or keeping static credentials where rotation is difficult.
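The group-based pattern looks like this in code, using a real AWS managed policy ARN; the group and user names are placeholders.

```python
import boto3

iam = boto3.client("iam")

# Manage permissions at the group level, then add individuals to the group.
iam.create_group(GroupName="Engineers")
iam.attach_group_policy(
    GroupName="Engineers",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # AWS managed policy
)

# One IAM user per person keeps actions attributable to an individual.
iam.create_user(UserName="jsmith")
iam.add_user_to_group(GroupName="Engineers", UserName="jsmith")
```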
A media startup wants a managed AWS service that can analyze images immediately without building custom models. They are considering Amazon Rekognition and want to know which built-in capability it provides out of the box. Which feature is available?
- ✓ B. Detect objects and scenes in images
The correct answer is Detect objects and scenes in images. Amazon Rekognition provides built-in label detection that can identify objects scenes and activities in images immediately without requiring you to build custom models.
Rekognition’s label detection returns categories and confidence scores for items it recognizes so you can quickly catalog or filter visual content. The service also includes ready-made features for face detection, facial analysis, text-in-image detection, content moderation, and PPE detection, which makes it suitable for common vision tasks out of the box.
Convert speech in videos to text is incorrect because that capability belongs to Amazon Transcribe rather than Rekognition.
Automatically resize or compress images is incorrect since Rekognition performs visual analysis and not image editing or transformation.
Estimate full-body human pose is incorrect because Rekognition does not offer a general full-body pose estimation feature and pose analysis is typically handled with specialized models or other tools.
Match services to their primary function and eliminate choices that describe image editing or speech features when the question names a vision analysis service. Focus on what the managed service analyzes rather than what it cannot change.
Under standard Amazon S3 pricing, which usage patterns incur no charge? (Choose 2)
-
✓ B. S3 to EC2 transfer in the same Region
-
✓ E. S3 data transfer in from the internet
The correct options are S3 data transfer in from the internet and S3 to EC2 transfer in the same Region.
AWS does not charge for data uploaded into S3 from the internet so S3 data transfer in from the internet is free under standard S3 pricing. Moving data from S3 to EC2 within the same Region is treated as internal regional transfer and is priced at $0.00 per GB so S3 to EC2 transfer in the same Region is also free.
S3 cross-Region replication traffic is billed because replication across Regions incurs inter-Region data transfer fees and replication request costs.
Data transfer out to the internet is standard egress and will incur internet data transfer charges.
S3 Standard storage accrues per gigabyte per month storage fees so storing objects in the Standard storage class is not free.
On the exam, remember that data transfer in to S3 and S3-to-EC2 transfer in the same Region are usually free, while storage classes and data egress are charged.
More practice AWS exam questions can be found in the AWS Practitioner Udemy course and certificationexams.pro
A streaming media startup serves users on multiple continents and needs to improve end-to-end availability and latency for TCP and UDP traffic to endpoints in two AWS Regions by leveraging the AWS global network. The team also requires static anycast IP addresses and fast health-based failover between Regions. Which AWS service should they use?
-
✓ B. AWS Global Accelerator
AWS Global Accelerator is the correct choice because it leverages the AWS global network and provides static anycast IP addresses to route TCP and UDP traffic to the nearest healthy endpoint across Regions and it delivers rapid health based failover to improve both availability and latency.
AWS Global Accelerator uses the AWS edge network as a global entry point and then routes packets over the AWS backbone to application endpoints, which reduces internet variability for transport layer traffic and supports both TCP and UDP. It also offers continuous health checks and automatic failover so traffic is directed away from unhealthy endpoints quickly and across Regions.
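As a rough sketch, creating an accelerator and reading back its static anycast IPs might look like this with boto3. The name is a placeholder, and listeners plus endpoint groups for the two Regions would still need to be added:

```python
import boto3

# The Global Accelerator API is served from the us-west-2 endpoint
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="media-accelerator",  # placeholder name
    IpAddressType="IPV4",
    Enabled=True,
)

# AWS assigns static anycast IP addresses as the global entry points
for ip_set in accelerator["Accelerator"]["IpSets"]:
    print(ip_set["IpAddresses"])
```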
Amazon Route 53 is DNS based and can perform latency or geolocation routing, but it does not accelerate packet delivery over the AWS data plane nor provide static anycast IP addresses that act as global entry points for TCP and UDP traffic.
Amazon CloudFront is a content delivery network designed for HTTP and HTTPS caching and delivery, and it does not provide global anycast entry points or native acceleration for arbitrary TCP and UDP application endpoints.
Elastic Load Balancing (ELB) distributes traffic across targets within a Region or across Availability Zones and it does not offer global anycast IP addresses or cross Region acceleration over the AWS backbone.
Focus on whether the requirement is for transport layer acceleration and static anycast IPs for TCP and UDP when selecting a service. If you see those terms think about a global network data plane solution rather than a CDN or DNS only approach.
Riverbend Outfitters, a regional e-commerce brand, is moving from its colocated servers to AWS to better handle unpredictable shopping spikes while keeping costs aligned to usage. Which architectural characteristic of cloud computing provides the ability to automatically match capacity to fluctuating demand?
-
✓ C. Elasticity
The correct option is Elasticity. Elasticity is the architectural characteristic that enables infrastructure to expand automatically during traffic spikes and to contract when demand falls so capacity matches fluctuating demand and costs align with usage.
Elasticity works by allowing resources to be provisioned and released dynamically so the system remains resilient under bursty traffic while avoiding unnecessary expense. Cloud providers expose services and APIs that implement this principle so teams can focus on application behavior rather than manual capacity planning.
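For example, here is a minimal boto3 sketch of elasticity in practice, assuming a launch template named web-template already exists; all names and subnet IDs are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# An Auto Scaling group defines the bounds within which capacity can flex
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholder subnets
)

# Target tracking adds instances when average CPU rises and removes them when it falls
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="target-cpu-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```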
Vertical scaling refers to increasing the size of a single server. It is not unique to the cloud and has practical limits, which makes it unsuitable as the general cloud characteristic being asked for.
AWS Auto Scaling is a specific service that can implement the architectural concept, but the question asks for the characteristic rather than a particular tool, so the service answer is not correct here.
Monolithic architecture is an application design style and it is not a cloud characteristic. Cloud native designs usually favor modular or microservices approaches to enable horizontal scaling and agility.
When a question asks for a cloud characteristic choose conceptual answers like Elasticity rather than naming a specific service.
Orion Health Labs is reviewing the shared responsibility model before launching a new analytics workload on AWS. Which activities must the customer handle to meet their security obligations in this model? (Choose 2)
-
✓ A. Encrypting stored application data
-
✓ C. Training the company’s workforce on secure use of AWS services
The correct customer responsibilities are Encrypting stored application data and Training the company’s workforce on secure use of AWS services.
Encrypting stored application data is a customer task because customers decide which data to encrypt, how to manage keys, and which services or client-side methods to use. Customers must configure encryption at rest and manage access to keys and encryption settings to meet compliance and confidentiality needs.
Training the company’s workforce on secure use of AWS services is also a customer obligation because secure configurations and proper use of IAM and service features depend on user actions. Educating staff reduces misconfiguration risks and enforces least privilege and secure data handling practices.
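As one illustration of the encryption duty, here is a minimal boto3 sketch that sets default encryption at rest on a bucket using a customer-chosen KMS key. The bucket name and key alias are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Every new object in the bucket is now encrypted at rest with the chosen KMS key
s3.put_bucket_encryption(
    Bucket="orion-analytics-data",  # placeholder bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/app-data-key",  # placeholder key alias
                }
            }
        ]
    },
)
```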
Controlling physical access to AWS facilities is incorrect because AWS is responsible for physical security of its data centers and the controls around facility access.
Decommissioning and destroying retired AWS hardware is incorrect because AWS performs hardware disposal and media sanitization under its infrastructure responsibilities.
Patching and maintaining the EC2 host hypervisor layer is incorrect because AWS manages the underlying host operating systems and hypervisor as part of securing the cloud infrastructure.
Remember that AWS secures the infrastructure while the customer secures data, identities, and user practices, and look for clues like encryption and training to spot customer duties.
A healthcare analytics startup must retain compliance records for 9 years with very rare access. Retrieval within a few hours is acceptable and keeping storage costs low is the top priority. Which Amazon S3 storage class should be selected for this archival requirement?
-
✓ B. Amazon S3 Glacier Flexible Retrieval
The best choice is Amazon S3 Glacier Flexible Retrieval.
Amazon S3 Glacier Flexible Retrieval is built for long term archival storage with infrequent access and it provides very low storage costs while allowing retrievals that complete in minutes to hours. This fits the requirement to retain compliance records for nine years with very rare access and with retrieval within a few hours being acceptable. The class also supports lifecycle transitions and cost effective bulk retrievals which keep ongoing expenses low.
Amazon S3 Intelligent-Tiering is designed for datasets with unpredictable access patterns and it automatically moves objects between tiers. It can include archive tiers but it is generally not as straightforward or as cost optimized as choosing a dedicated archive class when long term retention is known.
Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) stores data in a single Availability Zone and it lowers cost for infrequently accessed but quickly needed data. It is not ideal for durable regulatory archives because it lacks multi Availability Zone resilience.
Amazon S3 Standard is optimized for frequent access and low latency and it carries a higher storage cost than archival classes. It is not cost effective for records that are rarely accessed over nine years.
Match storage class to both retrieval time and cost goals and use lifecycle policies to automatically transition records to Glacier Flexible Retrieval when they enter long term retention.
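Putting that tip into practice, here is a minimal boto3 sketch of a lifecycle rule that transitions objects to Glacier Flexible Retrieval after 30 days and expires them after roughly nine years. The bucket name, prefix, and day counts are illustrative:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="compliance-records",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "records/"},
                # GLACIER is the lifecycle storage class for Glacier Flexible Retrieval
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 3285},  # roughly 9 years (9 x 365)
            }
        ]
    },
)
```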
With consolidated billing in AWS Organizations, there are eight matching Reserved Instances and 10 matching instances running across linked accounts in one hour. Which statements correctly describe how charges apply and when a linked account can use a zonal RI? (Choose 2)
-
✓ B. Eight instance-hours at RI rate; two On-Demand
-
✓ E. Zonal RI usable across accounts only with the same AZ label
Eight instance-hours at RI rate; two On-Demand and Zonal RI usable across accounts only with the same AZ label are correct because consolidated billing shares RI pricing benefits across linked accounts when usage matches the RI attributes.
The option Eight instance-hours at RI rate; two On-Demand is correct because with eight applicable Reserved Instances and 10 matching instance-hours, AWS applies the RI rate up to the number of RIs and charges any additional matching usage at the On-Demand rate. Consolidated billing lets the pricing benefit be applied across linked accounts, so the count of RIs determines how many instance-hours receive the discount.
The option Zonal RI usable across accounts only with the same AZ label is correct because zonal Reserved Instances are scoped to a specific Availability Zone label. For the pricing benefit to apply across accounts the running instances must match that exact AZ label. The zonal RI capacity reservation is not shared across accounts even though the discount can apply when usage aligns with the AZ label.
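To make the arithmetic concrete, here is a tiny worked example in Python; the hourly rates are hypothetical and chosen only for illustration:

```python
# Hypothetical rates for illustration only -- real rates vary by instance type
ri_rate = 0.06         # effective Reserved Instance rate per instance-hour
on_demand_rate = 0.10  # On-Demand rate per instance-hour

reserved = 8   # matching RIs owned across the organization
running = 10   # matching instance-hours used this hour

discounted_hours = min(reserved, running)     # 8 hours billed at the RI rate
on_demand_hours = max(running - reserved, 0)  # 2 hours billed On-Demand

cost = discounted_hours * ri_rate + on_demand_hours * on_demand_rate
print(f"Hourly charge: ${cost:.2f}")  # 8*0.06 + 2*0.10 = $0.68
```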
The option All 10 instances receive RI pricing is wrong because you cannot apply RI pricing to more instance-hours than the number of applicable RIs. Any excess usage beyond the eight RIs is billed at the On Demand rate.
The option No discount for linked accounts without their own RIs is wrong because RI discount sharing under consolidated billing allows linked accounts to benefit from RIs purchased by the payer account when their usage matches the RI attributes.
The option RI sharing works only if in the same Region is misleading because regional RIs do require a Region match but zonal RIs require the more specific Availability Zone label match. This question is about zonal behavior so an AZ label match is required rather than only a Region match.
Count RIs and compare to matching usage and remember that regional discounts apply across a Region while zonal RIs require the exact AZ label to get the benefit.
HarborPoint Retail plans to migrate its legacy customer management system that currently runs on MySQL to a fully managed AWS database. The team wants a relational service that preserves MySQL compatibility while offering better performance and managed high availability. Which AWS database service should they choose?
-
✓ C. Amazon Aurora
Amazon Aurora is the correct choice because it is a fully managed relational database that preserves MySQL compatibility while offering higher performance and managed high availability.
Amazon Aurora commonly outperforms standard MySQL and accepts MySQL clients and tools, so applications usually need minimal changes. It provides automated backups, read replicas, and Multi-AZ durability, which meet the need for managed high availability and reduced operational overhead.
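As a rough sketch of how a migration target gets provisioned, this boto3 example creates a MySQL-compatible Aurora cluster and one instance in it. The identifiers, instance class, and credentials are placeholders, and a real password should come from a secrets store:

```python
import boto3

rds = boto3.client("rds")

# Aurora separates the cluster (shared storage) from the DB instances (compute)
rds.create_db_cluster(
    DBClusterIdentifier="customers-cluster",
    Engine="aurora-mysql",                      # MySQL-compatible edition
    MasterUsername="admin",
    MasterUserPassword="change-me-before-use",  # placeholder only
)

rds.create_db_instance(
    DBInstanceIdentifier="customers-instance-1",
    DBClusterIdentifier="customers-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```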
Amazon Neptune is a purpose built graph database and it is not intended for relational or MySQL compatible workloads so it would not be a suitable replacement.
Amazon DynamoDB is a NoSQL key value and document database and migrating to it would require rearchitecting relational schemas and SQL queries so it does not fit when MySQL compatibility is required.
Amazon ElastiCache provides in-memory caching to complement databases. It does not serve as persistent relational primary storage, so it cannot replace a MySQL database for primary data storage.
Match the data model first and then prefer services that preserve compatibility when migrating relational databases to managed offerings.
A media analytics startup is adopting AWS and wants to distinguish what AWS secures versus what the customer must secure under the Shared Responsibility Model. Which activity is AWS responsible for from a security and compliance perspective?
-
✓ B. Operating and securing edge locations
The correct choice is Operating and securing edge locations. Under the Shared Responsibility Model, AWS is responsible for the security of the cloud, which includes the physical facilities, hardware, and global infrastructure such as Regions, Availability Zones, and edge locations.
AWS manages and secures the underlying infrastructure and the network that supports services and edge locations so customers do not control those physical and network components. This means AWS operates and hardens edge sites to provide availability and integrity of the delivery network while customers configure and protect the resources they run on AWS.
Configuring IAM users, roles, and policies is the customer’s responsibility because identity and access management within an AWS account is part of security in the cloud, and customers must set permissions, monitor access, and apply least privilege.
Protecting and classifying customer data is owned by the customer, who must decide how to classify data, apply encryption controls, and manage backups and retention to meet compliance requirements.
Choosing and managing server-side encryption settings is also a customer task: customers select encryption options, manage keys, and configure service settings, although they can choose AWS managed keys for some services, which changes who manages the key material.
Remember that AWS secures the underlying infrastructure and customers secure their data and configurations in the cloud. Look for terms like edge locations, Regions, and physical facilities to identify AWS responsibility.
A data analytics startup wants to launch virtual servers on AWS that they can sign in to and manage the operating system directly. They also prefer elastic capacity that can scale quickly with pricing charged by the second. Which AWS service should they use?
-
✓ B. Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Elastic Compute Cloud (Amazon EC2) is the correct choice because it provides virtual servers you can sign in to and it supports elastic capacity with pricing that can be charged by the second.
Amazon EC2 gives you on-demand, resizable compute instances with full operating system access so you can manage users, install software, and perform OS-level configuration. EC2 supports a wide range of instance types and Auto Scaling so capacity can grow quickly to meet demand and the billing model includes per-second charges for flexible cost control.
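Here is a minimal boto3 sketch of launching such an instance. The AMI ID is a placeholder; a real one would come from the AMI catalog in your Region:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a single small instance you can sign in to and manage at the OS level
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```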
Amazon Lightsail is not ideal because it targets simplified, fixed-price bundles and it does not offer the same granularity of instance types or per-second billing that the startup requested.
AWS Lambda is not suitable because it is a serverless compute model that abstracts away servers and it does not provide OS-level access or a persistent VM that you can sign in to.
Amazon Elastic Container Service (Amazon ECS) is focused on running and orchestrating containers and it does not itself provide a VM to sign in to unless you run containers on EC2 instances which adds complexity compared with launching EC2 directly.
When the requirement calls for direct OS control and quickly scalable virtual machines with per-second billing choose Amazon EC2 as the primary service to match those needs.
More practice AWS exam questions can be found in the AWS Practitioner Udemy course and certificationexams.pro
A fintech startup is preparing a mission‑critical application and wants to build for resilience using AWS’s global footprint. Which description best explains how AWS enables high availability and fault tolerance?
-
✓ C. AWS organizes independent Regions, each comprising multiple physically separate Availability Zones connected with low-latency networks and redundant power, which supports resilient architectures
AWS organizes independent Regions, each comprising multiple physically separate Availability Zones connected with low-latency networks and redundant power, which supports resilient architectures is correct because it explains how AWS separates faults and provides redundancy across AZs and Regions so customers can build highly available and fault tolerant systems.
Regions are isolated from each other, and each Region contains multiple Availability Zones that are distinct facilities with independent power and networking connected by low-latency links. This allows architects to deploy services across AZs for high availability and to replicate across Regions for disaster recovery.
The infrastructure is composed of subnets and VPCs that serve as the primary physical isolation boundaries for availability is incorrect because subnets and VPCs are logical networking constructs and they do not provide the physical separation that Availability Zones and Regions deliver.
High availability is primarily delivered by distributing content through Amazon CloudFront edge locations worldwide is incorrect because CloudFront improves latency and caching at the edge and it does not replace multi-AZ or multi-Region architectures for compute and data fault tolerance.
Customers can choose individual data centers within Regions to deploy workloads in different facilities for failover is incorrect because AWS does not permit selection of specific data centers and customers achieve facility-level resilience by deploying across Availability Zones.
When designing for resilience choose multi-AZ deployments for high availability and use multi-Region strategies when you need isolation from Region‑level failures.
Which Amazon RDS actions can scale a database using built-in features without application changes? (Choose 2)
-
✓ B. Increase allocated storage for the DB instance
-
✓ D. Scale up to a larger DB instance class
The correct options are Increase allocated storage for the DB instance and Scale up to a larger DB instance class.
Increase allocated storage for the DB instance increases the disk allocation and can raise baseline IOPS for storage types that scale with size, and this adjustment can be applied without changing application code.
Scale up to a larger DB instance class adds CPU and memory which improves throughput and concurrency, and changing the instance class is a native RDS modification that does not require application changes.
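Both adjustments are single API calls against the existing instance, as in this minimal boto3 sketch. The identifier, instance class, and storage size are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Scale up compute and grow storage in one modification -- no application changes
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",  # placeholder identifier
    DBInstanceClass="db.m6g.xlarge",   # larger instance class for more CPU/memory
    AllocatedStorage=500,              # new storage allocation in GiB
    ApplyImmediately=True,             # apply now instead of the maintenance window
)
```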
Use Amazon RDS Proxy to scale is incorrect because RDS Proxy improves connection management and resilience and it does not add compute or storage capacity.
Enable Multi-AZ for scaling is incorrect because Multi-AZ is intended for high availability and synchronous standby for failover and it does not provide performance scaling.
Use AWS Auto Scaling to resize the DB class is incorrect because AWS Auto Scaling does not automatically change RDS instance classes and instance class changes are performed through RDS instance modifications or scheduled operations.
Vertical scaling via storage increases or instance class upgrades is the fastest way to gain capacity without changing the application.
More on your AWS Practitioner Exam strategy
Here’s how I tailored this strategy for the AWS Practitioner exam.

Step 1: Read the exam objectives
Before you dive into study materials, start with the official exam objectives.
The exam objectives document tells you exactly what AWS intends to test you on. It lays out the domains, the weighting of each section, and which services are in scope.
Too many people skip this step, but I found it gave me clarity and direction. Without it, you risk wasting time on areas that won’t even be tested.
Step 2: Do practice exams before studying
This may sound counterintuitive, but I always recommend doing practice questions right at the start, even before studying.
Why?
Because it reveals how AWS frames its questions and highlights your blind spots.
You’ll quickly see which topics come up most often and where you need to focus. It also primes your brain.
Later, when you’re watching a video or reading notes, you’ll recognize concepts that appeared in those early practice exams, and they’ll stick much better.
Step 3: Take a course
Once you know your weak areas, commit to a structured course. AWS Skill Builder and AWS Academy both offer free foundational training, which is a great starting point.
If you want a deeper dive, platforms like Udemy have excellent paid courses that go step by step through the content.
Personally, I like pairing one free, online course or a YouTube playlist with one paid course so I can learn the official material and also get a different perspective.
Step 4: Do simple hands-on projects in the AWS console
Reading and watching videos only get you so far. To really understand AWS, you need to do. I recommend spinning up small, inexpensive projects in the AWS Management Console that connect directly to exam topics. For example:
-
Create an S3 bucket and configure it with versioning, lifecycle policies, and encryption (a minimal sketch of this one follows the list).
-
Launch an EC2 instance, connect via SSH, and practice stopping, starting, and terminating it.
-
Deploy a simple website with AWS Amplify or S3 static website hosting.
-
Experiment with IAM roles and policies, such as giving a user read-only access to S3.
-
Set up a DynamoDB table and perform basic CRUD operations.
These small exercises not only make the services stick in your mind but also give you the kind of practical intuition AWS loves to test for.
It’ll prepare you well for prototypical questions like “Which service is best for this scenario with the least effort and cost?”
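If you would rather script that first project than click through the console, here is a minimal boto3 sketch. The bucket name is a placeholder and must be globally unique, and it assumes the us-east-1 Region, where no location constraint is needed:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-practice-bucket-20250101"  # placeholder; names are globally unique

# Assumes us-east-1; other Regions need a CreateBucketConfiguration
s3.create_bucket(Bucket=bucket)

# Keep every version of every object so deletes and overwrites are recoverable
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Encrypt new objects at rest by default
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```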
Step 5: Get serious about mock exams
When you feel like you’ve studied enough, it’s time to pressure test yourself. I dedicated entire days just to doing mock exams, reviewing answers, and then doing more.
You can buy practice exams from various providers, or if you’re on a budget, scour forums, blogs, and free resources online. The goal isn’t just to memorize answers but to practice reading AWS-style questions until the patterns become second nature.

Your exam-day strategy
By the time exam day comes around, you should feel ready. These strategies helped me stay sharp under pressure:
-
Read the questions carefully and pay close attention to keywords like “least effort,” “lowest cost,” or “most secure.”
-
Eliminate distractors quickly, since two of the options are usually wrong right away, leaving you with a narrower set to choose from.
-
Choose managed services when possible, because AWS almost always prefers the option that requires less setup and administration.
-
Make a first pass through the exam answering what you know, then review flagged questions in a second round.
-
Never leave a question unanswered, since even a guess gives you a chance to score points.
-
Keep track of time, aiming to finish the first pass with at least 20 minutes left so you can review thoroughly without rushing.
-
Use later questions as clues, because sometimes wording in a later scenario will jog your memory or clarify an earlier one.
By pacing myself with this strategy, I was able to make two full passes through the exam, catch mistakes I almost missed, and walk out with confidence.
There are always unforeseen variables that impact whether you will pass an exam or not, but I’ve found these steps greatly minimize risk and enhance your odds of passing your exam on the first try.
Hopefully you’ll be able to use this strategy to pass the AWS Cloud Practitioner exam on your first try, just like me.
Darcy DeClute is a Certified Cloud Practitioner and author of the Scrum Master Certification Guide. Popular both on Udemy and social media, Darcy’s @Scrumtuous account has well over 250K followers on Twitter/X.