AWS Cloud Practitioner Exam Dumps and Braindumps
AWS Practice Tests & Exam Topics
Despite the title of this article, this is not an AWS Cloud Practitioner braindump in the traditional sense.
I do not believe in cheating.
Traditionally, the term “braindump” referred to someone taking an exam, memorizing the questions, and sharing them online for others to use. That practice is unethical and violates certification agreements. It undermines integrity and delivers no real learning or professional growth.
This is not an AWS Cloud Practitioner braindump.
All of these questions come from my AWS Cloud Practitioner Udemy Course and from the certificationexams.pro website, which offers hundreds of free AWS Cloud Practitioner practice questions.
AWS Certification Exam Simulator
Each question has been carefully written to align with the official AWS Certified Cloud Practitioner exam objectives. They mirror the tone, logic, and technical depth of real AWS exam scenarios, but none are copied from the actual test.
Every question is designed to help you learn, reason, and master AWS Cloud Practitioner concepts such as core AWS services, global infrastructure, billing and pricing, security and compliance, shared responsibility, and basic cloud architecture best practices.
If you can answer these questions and understand why the incorrect options are wrong, you will not only be ready for the real AWS Cloud Practitioner exam but also gain a solid understanding of how to choose the right AWS services, estimate costs, and design simple, secure solutions.
So if you want to call this your AWS Cloud Practitioner exam dump, that is fine, but remember that every question here is built to teach, not to cheat.
Each item includes clear explanations, realistic examples, and insights that help you think like an entry level cloud professional during the exam. Study with focus, practice consistently, and approach your certification with integrity.
Certification Braindump Questions
All AWS exam questions come from the AWS Practitioner Udemy course and certificationexams.pro
A regional healthcare provider is migrating applications to AWS and is reviewing the shared responsibility model. Which statements accurately reflect the division of duties between the customer and AWS? (Choose 2)
- ❏ A. Customers are responsible for protecting their network traffic, such as using encryption, security groups, and network ACLs
- ❏ B. Customers administer the physical compute hardware that runs AWS services
- ❏ C. AWS manages the physical networking infrastructure inside its Regions and Availability Zones
- ❏ D. AWS configures a customer’s VPC security groups and network ACL rules
- ❏ E. AWS automatically encrypts all application network traffic for customers without any setup
A retail startup is reviewing how AWS bills for network egress and ingress. What is the accurate statement about charges for data moving into an AWS Region from the internet and data leaving a Region to the internet?
- ❏ A. Neither inbound nor outbound data transfer is billed
- ❏ B. Both inbound and outbound data transfer are billed at tiered rates
- ❏ C. Only data transferred out to the public internet is charged
- ❏ D. Only inbound traffic into a Region incurs charges
A fintech startup in Berlin is onboarding new engineers to AWS. Which statements correctly describe how AWS Regions relate to Availability Zones? (Choose 2)
- ❏ A. AWS uses the term Region for a grouping of logical data centers
- ❏ B. A Region is a distinct geographic area that contains multiple, isolated, physically separate Availability Zones
- ❏ C. An Availability Zone is a single data center that shares power and networking with other AZs in the Region
- ❏ D. Encryption between Availability Zones must be enabled manually in the console
- ❏ E. Network traffic that traverses the AWS backbone between Availability Zones in the same Region is encrypted by default
NovaCart, an e-commerce startup, plans to place an Elastic Load Balancer in front of its Amazon EC2 instances running in three Availability Zones to improve resiliency and minimize service disruption. Which outcomes are direct benefits provided by Elastic Load Balancing in this design? (Choose 2)
- ❏ A. Lower infrastructure cost
- ❏ B. Agility
- ❏ C. Fault tolerance
- ❏ D. Amazon S3
- ❏ E. High availability
A biomedical research organization must satisfy regulatory rules that require keeping at least two copies of data in Regions that are hundreds of miles apart. The organization uses Amazon Simple Storage Service as its main data store. Which approach provides the most resource-efficient way to meet this requirement?
- ❏ A. AWS DataSync
- ❏ B. Enable S3 cross-Region replication (CRR) between buckets in different AWS Regions
- ❏ C. S3 same-Region replication (SRR)
- ❏ D. Run a nightly job on an Amazon EC2 instance to copy objects to a bucket in another Region
A regional logistics company runs automation scripts from its corporate data center that use the AWS CLI and SDK to call AWS APIs over the internet. Which requirement must be in place to authenticate these programmatic requests?
- ❏ A. Amazon API Gateway
- ❏ B. An Amazon EC2 key pair
- ❏ C. An AWS access key ID and matching secret access key
- ❏ D. AWS Direct Connect
A design team at a boutique retailer stores product photos in Amazon S3 and plans to analyze these images using an AWS managed computer vision service. Which capability is available when using Amazon Rekognition with image files?
- ❏ A. Extracts tables and form fields from scanned documents
- ❏ B. Detects objects and scenes in a picture
- ❏ C. Resizes images to new dimensions
- ❏ D. Compresses image files to reduce storage
Aster Grove Outfitters needs to attribute AWS charges to teams using cost allocation tags and view them in Cost Explorer during their 90 day cost reviews. Which statements about configuring and using these tags are correct? (Choose 2)
- ❏ A. Tags are required to run any cost or usage report
- ❏ B. For a given resource, a tag key must be unique and can map to only one value
- ❏ C. Only user defined tags need activation to appear in Cost Explorer
- ❏ D. Both AWS generated and user defined cost allocation tags must be activated separately before they appear in Cost Explorer or cost allocation reports
- ❏ E. For a resource, a tag key must be unique but can have multiple values
Employees at Orion Design Studio work remotely and need centrally managed Windows desktop environments in AWS, plus a secure network path so those desktops can access internal applications hosted on premises. Which AWS services together meet these needs? (Choose 2)
- ❏ A. AWS Direct Connect
- ❏ B. Amazon WorkSpaces
- ❏ C. Amazon Connect
- ❏ D. AWS Site-to-Site VPN
- ❏ E. Amazon AppStream 2.0
A media analytics startup needs to create 12 separate AWS accounts for different product teams, automate the setup, and have all costs roll up to a single payer. Which AWS service should they use to accomplish this quickly?
- ❏ A. AWS CloudFormation
- ❏ B. AWS Organizations
- ❏ C. AWS IAM
- ❏ D. AWS Billing Conductor
A retail analytics startup is planning its network layout on AWS and needs to identify which items are native components that exist within an Amazon VPC. Which choices represent VPC components? (Choose 2)
- ❏ A. AWS Storage Gateway
- ❏ B. Internet gateway
- ❏ C. Subnet
- ❏ D. Amazon API Gateway
- ❏ E. S3 bucket
A fintech startup needs to execute a log archive job every Tuesday at 1:30 AM, and the task usually completes in about 7 minutes. As a Cloud Practitioner, which AWS services should be combined to implement a fully serverless solution for this requirement? (Choose 2)
- ❏ A. AWS Step Functions
- ❏ B. Amazon EC2
- ❏ C. AWS Lambda
- ❏ D. Amazon EventBridge Scheduler
- ❏ E. AWS Batch
A technology lead at Riverbend Health is preparing to move several internal systems to AWS within the next 45 days and must review official AWS SOC and PCI compliance documentation for the vendor risk assessment. What is the appropriate way for the team to obtain these reports?
- ❏ A. Create a case with AWS Support
- ❏ B. AWS Secrets Manager
- ❏ C. AWS Artifact
- ❏ D. Contact the AWS Compliance team
BrightFin Labs is choosing managed AWS databases for a new microservices platform and needs nonrelational engines to support graph and document data models. Which services should they use? (Choose 2)
- ❏ A. Amazon RDS
- ❏ B. Amazon Neptune
- ❏ C. AWS Storage Gateway
- ❏ D. Amazon DocumentDB
- ❏ E. Amazon Aurora
An analytics team at Riverbend Manufacturing configures an Amazon Aurora MySQL cluster to run across three Availability Zones with automatic failover so the application stays available during an AZ disruption. Which pillar of the AWS Well-Architected Framework does this design primarily support?
- ❏ A. Security
- ❏ B. Reliability
- ❏ C. Cost optimization
- ❏ D. Performance efficiency
A travel booking startup named NovaTrips operates identical stacks in three AWS Regions and wants DNS to direct each user to the Region that delivers the quickest connection for that user. Which Amazon Route 53 routing policy should they use to maximize performance?
- ❏ A. Failover routing policy
- ❏ B. Geolocation routing policy
- ❏ C. Latency-based routing policy
- ❏ D. Weighted routing policy
Aurora Pixel, a 30-person design startup, wants the lowest paid AWS Support plan that lets them open cases by email to AWS Cloud Support Associates during business hours and does not include phone or chat with support engineers. Which plan should they choose?
- ❏ A. Enterprise
- ❏ B. Developer
- ❏ C. Business
- ❏ D. Basic
Northwind Health Research stores lab reports in an Amazon S3 bucket and wants a simple way to recover objects if someone accidentally deletes or overwrites them without changing how files are uploaded. Which AWS feature should they enable to meet this requirement?
- ❏ A. AWS Backup
- ❏ B. Amazon S3 Versioning
- ❏ C. Amazon S3 Transfer Acceleration
- ❏ D. Amazon S3 Lifecycle configuration
BrightPay, a digital payments startup, needs to obtain AWS Payment Card Industry Data Security Standard reports for an upcoming external audit. Which AWS resource should the team use to download these compliance documents?
- ❏ A. AWS Audit Manager
- ❏ B. AWS Artifact
- ❏ C. AWS Trusted Advisor
- ❏ D. AWS Cost & Usage Report (AWS CUR)
A home automation vendor, BrightNest Labs, needs to onboard about 75,000 smart thermostats that must securely exchange messages with cloud applications and other devices in near real time. Which AWS service provides this managed device connectivity and messaging?
- ❏ A. Amazon WorkSpaces
- ❏ B. AWS IoT Core
- ❏ C. AWS Directory Service
- ❏ D. AWS IoT Greengrass
A mapping startup named Horizon Maps needs a service that is provided at a global scope by default rather than being confined to a single AWS Region. Which service should they choose?
- ❏ A. Amazon Elastic File System (Amazon EFS)
- ❏ B. Amazon WorkSpaces
- ❏ C. AWS Snowball
- ❏ D. Amazon Simple Storage Service (Amazon S3) buckets
A fintech company named Aurora Ledger plans to run several Amazon EC2 instances in a VPC and wants an instance-scoped virtual firewall that can apply the same inbound and outbound rules across multiple instances. Which AWS feature should they choose?
- ❏ A. Network Access Control Lists (ACL)
- ❏ B. Amazon EC2 security groups
- ❏ C. Route table
- ❏ D. Virtual private gateways (VPG)
HarborPay, a fintech startup, wants to reduce the effort of building and patching golden server images that are used to launch Amazon EC2 instances and also deployed to on-premises VMware servers. Which AWS service should the team use to automate image creation, testing, and distribution?
- ❏ A. AWS Systems Manager
- ❏ B. Amazon Machine Image (AMI)
- ❏ C. Amazon EC2 Image Builder
- ❏ D. AWS CloudFormation
A small fintech startup plans to run a pilot analytics API on Amazon EC2 for the next 45 days and wants a pricing choice that keeps costs predictable without any upfront payment or long term commitment while ensuring AWS will not interrupt the running instances. Which EC2 purchasing option should they select?
- ❏ A. Spot Instances
- ❏ B. EC2 On-Demand Instances
- ❏ C. Reserved Instances
- ❏ D. Compute Savings Plans
A platform team at Klamath Robotics is evaluating AWS offerings to run application code and large-scale processing jobs. Which services from the list are considered compute services in AWS? (Choose 2)
- ❏ A. Amazon S3
- ❏ B. AWS Elastic Beanstalk
- ❏ C. AWS CloudTrail
- ❏ D. AWS Batch
- ❏ E. Amazon EFS
Kora Metrics, a retail analytics startup, runs several AWS accounts with a mix of On-Demand and Reserved Instances. One member account is accruing many unused Reserved Instance hours each month. How can the company lower its overall spend by ensuring these unused discounts benefit workloads in the other accounts? (Choose 2)
- ❏ A. Switch the fleet to Amazon EC2 Spot Instances
- ❏ B. Enable consolidated billing in AWS Organizations so RI discounts are shared across accounts
- ❏ C. Place the EC2 instances into a cluster placement group
- ❏ D. Move all accounts under a single AWS Organizations management account to link them
- ❏ E. Sell the idle Reserved Instances on the Reserved Instance Marketplace
A fast-growing online learning platform plans to launch in 45 countries and wants to lower latency when serving images, downloads, and other static web content to viewers worldwide without changing its application servers. Which AWS service should the team choose to achieve this?
- ❏ A. Amazon Lightsail
- ❏ B. Amazon Route 53
- ❏ C. Amazon CloudFront
- ❏ D. Amazon ElastiCache
An e-commerce startup deploys its web tier across several Availability Zones behind a load balancer so users can still access the site even if one zone experiences an outage. Which cloud design principle does this approach demonstrate?
- ❏ A. Support elasticity
- ❏ B. Plan for failure
- ❏ C. Automate operations
- ❏ D. Prioritize agility
A travel booking startup runs roughly 24 on-premises application servers that simultaneously mount an NFS share. They plan to shift the applications to Amazon EC2 and want to keep a shared, POSIX-compliant file system in AWS with minimal changes so all instances can read and write concurrently. Which AWS storage service should they choose?
- ❏ A. AWS Storage Gateway
- ❏ B. Amazon EFS
- ❏ C. Amazon S3
- ❏ D. Amazon EBS
A small charity named RiverHope is evaluating AWS Support tiers before launching a data portal. Which feature is included in every AWS Support plan and provides account-specific visibility into events that could affect their resources?
- ❏ A. AWS Support API
- ❏ B. AWS Health Dashboard (Your account health)
- ❏ C. Unlimited technical support cases and unlimited contacts
- ❏ D. Full suite of AWS Trusted Advisor best practice checks
A learning technology company needs to provide external auditors with AWS security attestations and certifications, such as SOC 2 Type II and ISO/IEC 27001, for the past 12 months. Which AWS resource lets the team obtain these AWS compliance documents on demand?
- ❏ A. AWS Organizations
- ❏ B. AWS Artifact
- ❏ C. AWS IAM
- ❏ D. AWS Audit Manager
A small analytics startup runs Amazon EC2 instances with EBS volumes and schedules nightly point in time backups for compliance. In which AWS storage service are these EBS snapshots actually stored?
- ❏ A. Amazon EFS
- ❏ B. Amazon EBS
- ❏ C. Amazon S3
- ❏ D. EC2 instance store
A finance director at Meridian Retail wants to know how AWS is able to keep lowering the per-unit cost of many services over time without reducing service quality. What primary factor enables these ongoing price reductions?
- ❏ A. Pay-as-you-go pricing
- ❏ B. Savings Plans
- ❏ C. Economies of scale
- ❏ D. Amazon EC2 Auto Scaling
A manufacturing firm, Meridian Tools, currently reaches AWS from its primary data center over the public internet. The network team wants a dedicated, private link to AWS to build a hybrid environment that does not traverse the internet. Which AWS service should they choose?
- ❏ A. AWS Transit Gateway
- ❏ B. AWS Direct Connect
- ❏ C. AWS VPC Endpoint
- ❏ D. AWS Site-to-Site VPN
A fintech company needs its applications running in an Amazon VPC to communicate with Amazon SQS using private IP addresses only, avoiding traversal over the public internet. Which AWS capability should they implement to achieve this connectivity?
- ❏ A. AWS Direct Connect
- ❏ B. VPC Interface Endpoint
- ❏ C. NAT Gateway
- ❏ D. VPC Gateway Endpoint
Cloud Practitioner Exam Questions Answered
A regional healthcare provider is migrating applications to AWS and is reviewing the shared responsibility model. Which statements accurately reflect the division of duties between the customer and AWS? (Choose 2)
- ✓ A. Customers are responsible for protecting their network traffic, such as using encryption, security groups, and network ACLs
- ✓ C. AWS manages the physical networking infrastructure inside its Regions and Availability Zones
Both statements are correct: customers are responsible for protecting their network traffic, such as using encryption, security groups, and network ACLs, and AWS manages the physical networking infrastructure inside its Regions and Availability Zones.
The first correct statement is true because customers control and must configure how their data moves and is protected in the cloud and that includes choosing and implementing TLS, managing security group rules, and using network ACLs to control traffic flows. These controls are part of the customer responsibilities for securing resources and data in their account.
The second correct statement is true because AWS operates and secures the underlying physical network devices and the connectivity that form each Region and Availability Zone, and AWS is responsible for the physical infrastructure layer of the cloud.
Customers administer the physical compute hardware that runs AWS services is incorrect because customers do not manage the data center hosts or server hardware, and AWS owns and maintains the physical compute resources that support managed services.
AWS configures a customer’s VPC security groups and network ACL rules is incorrect because VPC firewall rules are defined and maintained by the customer as part of security in the cloud and AWS does not set those account level configurations for you.
AWS automatically encrypts all application network traffic for customers without any setup is incorrect because encryption in transit generally requires the customer to enable or configure protocols such as TLS or to use managed features that must be chosen and set up by the customer.
Remember that AWS secures the infrastructure and the customer secures the resources and data. Ask whether a task involves physical hardware or account level configuration to decide which side owns it.
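To make the customer side of the model concrete, a security group ingress rule is something the customer defines, never AWS. The sketch below shows the shape of the parameters you might pass to the EC2 `authorize_security_group_ingress` API (for example via boto3); the group ID and CIDR range are hypothetical placeholders.

```python
# Sketch of customer-defined security group ingress parameters.
# The group ID and CIDR below are hypothetical placeholders.
ingress_params = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 443,  # allow HTTPS only
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "203.0.113.0/24", "Description": "clinic office range"}
            ],
        }
    ],
}
# AWS never writes these rules for you; this configuration is owned
# by the customer under the shared responsibility model.
```

Whether rules like this exist, and what they allow, is entirely an account-level decision, which is the test the exam tip above suggests applying.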
A retail startup is reviewing how AWS bills for network egress and ingress. What is the accurate statement about charges for data moving into an AWS Region from the internet and data leaving a Region to the internet?
- ✓ C. Only data transferred out to the public internet is charged
The correct choice is Only data transferred out to the public internet is charged.
Inbound data arriving from the internet into an AWS Region is generally not billed, while data that leaves a Region to the public internet is charged. Egress is priced on a Region-specific, tiered basis, and the effective cost depends on volume and destination, so you should design to minimize unnecessary outbound data when possible.
Neither inbound nor outbound data transfer is billed is incorrect because outbound traffic to the internet is typically chargeable.
Both inbound and outbound data transfer are billed at tiered rates is incorrect because inbound data from the public internet is generally free and only outbound is commonly billed.
Only inbound traffic into a Region incurs charges is incorrect because the billed direction is outbound to the internet and inbound is usually not charged.
Remember that ingress from the public internet is usually free and you are charged for egress to the internet. Read question wording carefully when it says traffic to the internet versus traffic within or between AWS.
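The free-ingress, tiered-egress model can be sketched as a small calculator. The tier boundaries and per-GB rates below are illustrative numbers only, not actual AWS prices.

```python
# Hypothetical tiered egress rates in USD per GB.
# These are illustrative numbers, NOT actual AWS prices.
TIERS = [
    (10_000, 0.09),        # first 10 TB
    (50_000, 0.085),       # next 40 TB
    (float("inf"), 0.07),  # everything beyond
]

def egress_cost(gb_out: float) -> float:
    """Bill outbound data band by band across the tiers."""
    cost, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        if gb_out <= prev_cap:
            break
        band = min(gb_out, cap) - prev_cap
        cost += band * rate
        prev_cap = cap
    return round(cost, 2)

def ingress_cost(gb_in: float) -> float:
    """Inbound data from the public internet is generally not billed."""
    return 0.0
```

For example, under these made-up rates, 15,000 GB out would bill 10,000 GB at the first-tier rate and the remaining 5,000 GB at the second-tier rate, while any amount of ingress costs nothing.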
A fintech startup in Berlin is onboarding new engineers to AWS. Which statements correctly describe how AWS Regions relate to Availability Zones? (Choose 2)
- ✓ B. A Region is a distinct geographic area that contains multiple, isolated, physically separate Availability Zones
- ✓ E. Network traffic that traverses the AWS backbone between Availability Zones in the same Region is encrypted by default
The correct statements are A Region is a distinct geographic area that contains multiple, isolated, physically separate Availability Zones and Network traffic that traverses the AWS backbone between Availability Zones in the same Region is encrypted by default. Regions are separate geographic locations, and each one includes multiple AZs that are independent for power, cooling, and security, and AWS encrypts traffic flowing between AZs by default.
AWS uses the term Region for a grouping of logical data centers is wrong because AWS uses Availability Zone to refer to a group of one or more data centers, while a Region is the broader geographic construct.
An Availability Zone is a single data center that shares power and networking with other AZs in the Region is incorrect since AZs can be one or more data centers and are purposefully isolated for power, networking, and connectivity.
Encryption between Availability Zones must be enabled manually in the console is incorrect because AWS automatically encrypts inter-AZ traffic by default and does not require manual enablement for the backbone encryption.
Exam Tip
Remember: Region equals geographic area containing multiple AZs; AZ equals one or more data centers with independent power and networking; and inter-AZ traffic on the AWS network is encrypted by default.
NovaCart, an e-commerce startup, plans to place an Elastic Load Balancer in front of its Amazon EC2 instances running in three Availability Zones to improve resiliency and minimize service disruption. Which outcomes are direct benefits provided by Elastic Load Balancing in this design? (Choose 2)
- ✓ C. Fault tolerance
- ✓ E. High availability
The correct outcomes are High availability and Fault tolerance.
Elastic Load Balancing continuously monitors instance health and routes traffic only to healthy targets. It distributes incoming requests across multiple Availability Zones, which helps the application remain reachable if some instances or an entire Availability Zone fail, and thus provides both High availability and Fault tolerance.
Lower infrastructure cost is not guaranteed because a load balancer is a charged managed service and it does not inherently reduce compute or storage expenses.
Agility is a general cloud advantage and it is not a capability that the load balancer itself directly delivers.
Amazon S3 is an object storage service and it is unrelated to the direct operational benefits that a load balancer provides.
When you see Elastic Load Balancing on an exam pick High availability and Fault tolerance because health checks and distribution across multiple Availability Zones are the key ELB features to look for.
A biomedical research organization must satisfy regulatory rules that require keeping at least two copies of data in Regions that are hundreds of miles apart. The organization uses Amazon Simple Storage Service as its main data store. Which approach provides the most resource-efficient way to meet this requirement?
- ✓ B. Enable S3 cross-Region replication (CRR) between buckets in different AWS Regions
The correct option is Enable S3 cross-Region replication (CRR) between buckets in different AWS Regions. This approach provides automatic, managed replication across Regions and satisfies the requirement to retain at least two copies of data in geographically distant locations.
S3 cross-Region replication or S3 CRR runs inside Amazon S3 and replicates objects asynchronously after they are created or updated. It integrates with S3 versioning and replication rules so you avoid building and maintaining custom copy logic or compute resources, which reduces operational overhead and overall cost compared with self managed solutions.
AWS DataSync is intended for efficient transfers and migrations and it requires configuring tasks and agents. It is not the native mechanism for continuous object level replication between S3 buckets and so it is less suited for this regulatory requirement.
S3 same-Region replication (SRR) replicates objects only within the same AWS Region and therefore does not meet the rule that copies must be in Regions hundreds of miles apart.
Run a nightly job on an Amazon EC2 instance to copy objects to a bucket in another Region can work but it is operationally inefficient. Managing instances, scripts, schedules and error handling increases cost and failure surface compared with using S3 CRR.
S3 CRR is the preferred choice when an exam scenario requires copies in different Regions. Favor managed S3 replication features over custom compute to minimize operational burden.
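A CRR rule is just a replication configuration applied to the source bucket. The sketch below shows the general shape of a V2 configuration as you might pass it to `put_bucket_replication`; the role ARN and bucket names are hypothetical, and both buckets would need versioning enabled for CRR to work.

```python
# Sketch of a V2 S3 replication configuration (hypothetical ARNs/names).
# Both source and destination buckets must have versioning enabled.
replication_config = {
    "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
    "Rules": [
        {
            "ID": "crr-to-dr-region",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},  # empty prefix replicates every object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::lab-reports-replica"},
        }
    ],
}
```

Once the rule is in place, S3 replicates new and updated objects to the destination Region asynchronously with no custom compute to maintain, which is exactly why CRR beats the nightly EC2 copy job above.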
A regional logistics company runs automation scripts from its corporate data center that use the AWS CLI and SDK to call AWS APIs over the internet. Which requirement must be in place to authenticate these programmatic requests?
- ✓ C. An AWS access key ID and matching secret access key
An AWS access key ID and matching secret access key are the correct credentials required to authenticate AWS CLI and SDK calls that originate from the corporate data center and traverse the public internet.
These credentials are long term access keys that the CLI and SDK use to sign API requests and establish identity and permissions. The CLI and SDK include the access key ID in each request and use the secret access key to compute the signature so AWS can authenticate and authorize the call. When services run inside AWS you can use IAM roles and temporary credentials instead, but external programmatic access requires an access key pair or an equivalent signed credential mechanism.
Amazon API Gateway is a managed service for publishing and operating custom APIs and it is not needed to invoke native AWS service APIs with the CLI or SDK.
An Amazon EC2 key pair is for SSH or RDP access to virtual machines and it does not provide authentication for AWS API calls.
AWS Direct Connect provides private network connectivity and it does not replace credential based request signing for programmatic access.
Remember to use access key ID and secret access key for programmatic CLI and SDK requests and do not confuse these with instance SSH keys or networking services.
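To illustrate how the secret access key is actually used, the CLI and SDKs derive a signing key from it through the standard AWS Signature Version 4 HMAC chain (date, then Region, then service, then the fixed string `aws4_request`). The secret below is a placeholder, never a real credential.

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Standard AWS Signature Version 4 signing-key derivation chain."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# Placeholder secret; never hard-code real credentials in scripts.
signing_key = derive_signing_key("EXAMPLE-SECRET-KEY", "20240115", "eu-central-1", "s3")
```

The resulting key signs the request's canonical form, which is why possession of the access key ID and secret access key is what authenticates a CLI or SDK call, not an SSH key pair or a network link.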
A design team at a boutique retailer stores product photos in Amazon S3 and plans to analyze these images using an AWS managed computer vision service. Which capability is available when using Amazon Rekognition with image files?
- ✓ B. Detects objects and scenes in a picture
The correct option is Detects objects and scenes in a picture. Amazon Rekognition is an image analysis service that identifies objects and scenes in photos as a primary capability.
Rekognition can detect many visual elements including common objects, scenes, faces, text in images and it can also perform celebrity recognition and content moderation. Those analysis features are why the highlighted option is correct and why Rekognition is used for computer vision tasks rather than image editing.
The option Extracts tables and form fields from scanned documents is incorrect because extracting structured data from documents is the domain of Amazon Textract which is designed for form and table extraction.
The option Resizes images to new dimensions is incorrect because resizing is an image processing operation that must be done by application logic or image processing tools rather than by Rekognition.
The option Compresses image files to reduce storage is incorrect because Rekognition does not perform file compression and storage optimization is handled by storage services and image processing utilities.
Match each AWS service to its primary function and remember that Rekognition analyzes visual content while Textract extracts structured data from documents.
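In practice, a Rekognition label-detection result is a list of names with confidence scores that application code then filters. The sketch below parses a response shaped like `detect_labels` output; the labels and scores are made-up sample values, not real service output.

```python
# A dict shaped like Rekognition detect_labels output.
# The labels and confidence scores are made-up sample values.
sample_response = {
    "Labels": [
        {"Name": "Shoe", "Confidence": 98.1},
        {"Name": "Clothing", "Confidence": 91.4},
        {"Name": "Person", "Confidence": 72.0},
    ]
}

def confident_labels(response: dict, threshold: float = 90.0) -> list:
    """Keep only objects and scenes detected above a confidence threshold."""
    return [
        label["Name"]
        for label in response["Labels"]
        if label["Confidence"] >= threshold
    ]
```

Filtering on confidence this way is typical when tagging a product-photo catalog, and it underlines that Rekognition analyzes images rather than resizing or compressing them.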
Aster Grove Outfitters needs to attribute AWS charges to teams using cost allocation tags and view them in Cost Explorer during their 90 day cost reviews. Which statements about configuring and using these tags are correct? (Choose 2)
- ✓ B. For a given resource, a tag key must be unique and can map to only one value
- ✓ D. Both AWS generated and user defined cost allocation tags must be activated separately before they appear in Cost Explorer or cost allocation reports
The correct statements are For a given resource, a tag key must be unique and can map to only one value and Both AWS generated and user defined cost allocation tags must be activated separately before they appear in Cost Explorer or cost allocation reports.
Tags are simple key value pairs and for any given resource you can assign a tag key only once so the key holds a single value. This is why the first correct statement is true and why you cannot associate multiple values with the same key on the same resource.
The second correct statement refers to activation of cost allocation tags. Both AWS generated tags and user defined tags must be turned on in the Billing or Cost Allocation Tags settings before they are available in Cost Explorer and in cost allocation reports. Activating makes the tag keys visible for grouping and filtering in those cost tools.
Tags are required to run any cost or usage report is incorrect because you can generate cost and usage reports without tags. Tags help with grouping and filtering but they are not mandatory to produce reports.
Only user defined tags need activation to appear in Cost Explorer is incorrect because AWS generated tags also require activation before they show up in Cost Explorer or cost allocation reports.
For a resource, a tag key must be unique but can have multiple values is incorrect because a resource cannot have the same tag key assigned with more than one value.
Before a cost review verify that both AWS generated and user defined cost allocation tags are activated and that each resource tag key has a single value so Cost Explorer can group costs correctly.
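The one-value-per-key rule behaves exactly like a dictionary: writing the same key again replaces the earlier value rather than adding a second one. A minimal sketch, using hypothetical team and environment tags:

```python
def to_tag_dict(tag_list: list) -> dict:
    """Collapse a tag list into a mapping. A later entry for the same key
    overwrites the earlier one, mirroring the rule that a tag key maps to
    exactly one value per resource."""
    return {tag["Key"]: tag["Value"] for tag in tag_list}

# Hypothetical tags; note "team" appears twice.
tags = to_tag_dict([
    {"Key": "team", "Value": "payments"},
    {"Key": "env", "Value": "prod"},
    {"Key": "team", "Value": "analytics"},  # replaces "payments"
])
```

Only one value per key survives, which is what lets Cost Explorer group charges unambiguously once the tag keys are activated.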
Employees at Orion Design Studio work remotely and need centrally managed Windows desktop environments in AWS, plus a secure network path so those desktops can access internal applications hosted on premises. Which AWS services together meet these needs? (Choose 2)
- ✓ B. Amazon WorkSpaces
- ✓ D. AWS Site-to-Site VPN
Amazon WorkSpaces and AWS Site-to-Site VPN are the correct choices for this scenario because WorkSpaces supplies centrally managed, persistent Windows desktops for remote employees and Site-to-Site VPN provides a secure IPsec tunnel from the AWS VPC to the on premises network so those desktops can reach internal applications.
Amazon WorkSpaces delivers full Windows desktop sessions that are managed from AWS and can be provisioned with persistent user volumes and group policies. This makes it suitable when users need a centralized Windows desktop experience rather than just streamed applications.
AWS Site-to-Site VPN creates an encrypted connection between the VPC and the corporate network so virtual desktops in AWS can access internal resources as if they were on the corporate LAN. The VPN option is appropriate when the requirement calls for secure connectivity without requiring dedicated physical circuits.
Amazon AppStream 2.0 is not correct because it streams individual applications rather than providing a full, persistent Windows desktop environment. It does not meet the managed desktop requirement in this scenario.
AWS Direct Connect is not the best choice for this use case because it provides a dedicated private network connection and does not include encryption by default. It is aimed at high throughput and low latency connections and is not targeted at providing secure tunnels for remote desktop sessions from individual users.
Amazon Connect is not relevant because it is a cloud contact center service and does not provide virtual desktop delivery or VPN connectivity to on premises resources.
For questions about remote desktops match a full, persistent desktop service such as Amazon WorkSpaces with a secure network link like AWS Site-to-Site VPN so applications on the corporate network remain reachable.
A media analytics startup needs to create 12 separate AWS accounts for different product teams, automate the setup, and have all costs roll up to a single payer. Which AWS service should they use to accomplish this quickly?
-
✓ B. AWS Organizations
AWS Organizations is the correct choice because it lets you programmatically create multiple AWS accounts and centralize billing so all costs roll up to a single payer account. It is built for multi account management and automation which matches the startup requirement to create 12 accounts quickly.
AWS Organizations provides APIs such as CreateAccount to automate provisioning and it supports consolidated billing under a payer account so costs aggregate in one place. It also enables governance features like service control policies to enforce policies across accounts while still allowing automated account creation and onboarding.
AWS CloudFormation is incorrect because CloudFormation orchestrates resources inside an existing account and does not natively create new AWS accounts, so it cannot meet the account provisioning requirement.
AWS IAM is incorrect because IAM manages users, roles, and permissions within a single account and it cannot be used to provision or manage multiple AWS accounts.
AWS Billing Conductor is incorrect because Billing Conductor helps customize billing views and perform chargebacks across accounts but it does not handle the creation or automated provisioning of AWS accounts.
When a question mentions multiple accounts or consolidated billing pick the service that provisions accounts and centralizes billing in a payer account.
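The billing side of that answer is simple arithmetic: under consolidated billing, every member account's charges roll up to the single payer invoice. A minimal sketch, with made-up account names and amounts:

```python
# Illustrative consolidated-billing rollup (account names and charges invented).
# Under one AWS Organizations payer, member account charges aggregate
# onto a single bill for the management (payer) account.
member_charges = {
    "team-a": 1240.50,
    "team-b": 310.25,
    "team-c": 89.10,
}
payer_total = round(sum(member_charges.values()), 2)
# One invoice covers all member accounts; volume discounts and Reserved
# Instance benefits are also computed across the combined usage.
```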
All AWS exam questions come from the AWS Practitioner Udemy course and certificationexams.pro
A retail analytics startup is planning its network layout on AWS and needs to identify which items are native components that exist within an Amazon VPC. Which choices represent VPC components? (Choose 2)
-
✓ B. Internet gateway
-
✓ C. Subnet
Subnet and Internet gateway are the correct VPC components in this question.
A Subnet defines a CIDR range of IP addresses inside a VPC and is where you place EC2 instances and other resources. An Internet gateway attaches to the VPC and provides a route target that enables traffic to and from the internet for resources that have public routes.
AWS Storage Gateway is a hybrid storage appliance and service that connects on premises environments to AWS and it is not a VPC networking primitive.
Amazon API Gateway is a managed API hosting service and it does not live as a native VPC component unless you use specific private integrations or VPC links, so it should not be selected as a VPC primitive.
S3 bucket belongs to Amazon S3 which is global object storage and it is not a VPC component despite being accessible from VPC resources.
Focus on networking primitives such as subnets, route tables, and gateways when identifying VPC components, and exclude managed services like S3 or API Gateway.
A fintech startup needs to execute a log archive job every Tuesday at 1:30 AM, and the task usually completes in about 7 minutes. As a Cloud Practitioner, which AWS services should be combined to implement a fully serverless solution for this requirement? (Choose 2)
-
✓ C. AWS Lambda
-
✓ D. Amazon EventBridge Scheduler
AWS Lambda and Amazon EventBridge Scheduler are the correct choices for a fully serverless solution to run a weekly log archive job on Tuesdays at 1:30 AM.
Amazon EventBridge Scheduler provides a managed way to trigger actions at a specific weekday and time and it can invoke the compute you need. AWS Lambda runs your archive code without provisioning servers and Lambda supports a maximum execution time that comfortably covers a typical 7 minute job.
AWS Step Functions is designed to coordinate multi step workflows and it can be started by EventBridge but it is unnecessary for a single short task and it does not natively provide the schedule by itself.
Amazon EC2 requires you to provision and maintain servers and to run any scheduling logic on those instances so it does not meet the requirement to be fully serverless.
AWS Batch is intended for running large scale batch workloads on managed compute environments and containers and it adds unnecessary complexity for a simple scheduled function.
For simple timed jobs choose serverless services by using EventBridge Scheduler to trigger and AWS Lambda to run the code and remember Lambda can execute for up to 15 minutes.
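The schedule itself is just a cron expression. A small sketch of building the expression for "every Tuesday at 1:30 AM" (the variable names are illustrative; the field order is EventBridge's minute, hour, day-of-month, month, day-of-week, year format):

```python
# EventBridge cron fields: minute hour day-of-month month day-of-week year.
# "?" means "no specific value" for day-of-month when day-of-week is set.
minute, hour, day_of_week = 30, 1, "TUE"
schedule_expression = f"cron({minute} {hour} ? * {day_of_week} *)"
# This string would be supplied as the ScheduleExpression of an
# EventBridge Scheduler schedule whose target is the Lambda function.
```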
A technology lead at Riverbend Health is preparing to move several internal systems to AWS within the next 45 days and must review official AWS SOC and PCI compliance documentation for the vendor risk assessment. What is the appropriate way for the team to obtain these reports?
-
✓ C. AWS Artifact
AWS Artifact is the correct option for obtaining AWS SOC and PCI compliance reports for a vendor risk assessment.
AWS Artifact is a self service portal that provides on demand access to AWS security and compliance reports and certain agreements so teams can download official SOC and PCI documentation directly from their account without opening support tickets or emailing AWS personnel. The portal is intended to supply the official vendor documents auditors expect and it centralizes report access for accounts and organizations.
Create a case with AWS Support is unnecessary because standard compliance reports are distributed through the Artifact portal and do not require a support case to obtain.
AWS Secrets Manager is unrelated because it is a service for storing and rotating credentials and it does not host or distribute compliance reports.
Contact the AWS Compliance team is not required for routine downloads because the reports are already available to account holders in Artifact and contacting a team is usually only needed for unusual or account specific questions.
When an exam question asks how to get AWS compliance documentation think self service and choose Artifact rather than options that require support or outreach.
BrightFin Labs is choosing managed AWS databases for a new microservices platform and needs nonrelational engines to support graph and document data models. Which services should they use? (Choose 2)
-
✓ B. Amazon Neptune
-
✓ D. Amazon DocumentDB
Amazon Neptune and Amazon DocumentDB are the correct choices for nonrelational engines to support graph and document data models on AWS.
Amazon Neptune is a managed graph database built for highly connected data and it supports both property graph and RDF models with Gremlin and SPARQL queries which makes it well suited for graph workloads such as relationship analysis and knowledge graphs.
Amazon DocumentDB is a managed document database that stores JSON like documents and it offers MongoDB compatible APIs which makes it appropriate for document oriented microservices and applications that use flexible JSON schemas.
Amazon RDS is a managed service for relational database engines such as MySQL, PostgreSQL, Oracle, and SQL Server and it does not provide native document or graph capabilities so it does not meet the NoSQL requirement.
Amazon Aurora is a high performance relational database compatible with MySQL and PostgreSQL and it is optimized for transactional relational workloads rather than document or graph models.
AWS Storage Gateway is a hybrid cloud storage service that connects on premises environments to cloud storage and it is not a database so it does not satisfy the requirement for nonrelational engines.
Nonrelational usually means document or graph engines so pick DocumentDB for JSON document workloads and Neptune for graph queries when you see NoSQL on the exam.
An analytics team at Riverbend Manufacturing configures an Amazon Aurora MySQL cluster to run across three Availability Zones with automatic failover so the application stays available during an AZ disruption. Which pillar of the AWS Well-Architected Framework does this design primarily support?
-
✓ B. Reliability
The correct option is Reliability. Deploying an Amazon Aurora MySQL cluster across multiple Availability Zones with automatic failover is an availability and fault tolerance design that keeps the application running during an AZ disruption.
By placing instances and storage across AZs and using automatic failover, Aurora improves fault tolerance and enables rapid recovery so services remain available when an AZ fails. These capabilities map to the Reliability pillar which emphasizes recovering from failures, meeting availability targets, and automating restoration of service.
Security focuses on protecting data and systems with controls such as identity and access management, encryption, and monitoring. Multi-AZ failover does not primarily address those protection controls so it is not the best match.
Cost optimization concentrates on controlling and reducing spend through right sizing, pricing choices, and eliminating waste. Adding Multi-AZ replicas typically increases cost to gain resilience so it does not align with cost optimization.
Performance efficiency is about selecting the right resources and scaling to meet performance requirements. Multi-AZ deployments improve availability and fault tolerance more than they directly improve raw performance so they align less with performance efficiency.
Map automatic failover, redundancy, and rapid recovery patterns to the Reliability pillar on the exam.
A travel booking startup named NovaTrips operates identical stacks in three AWS Regions and wants DNS to direct each user to the Region that delivers the quickest connection for that user. Which Amazon Route 53 routing policy should they use to maximize performance?
-
✓ C. Latency-based routing policy
The correct choice is Latency-based routing policy. This policy directs each user to the AWS Region that returns the lowest round trip time so it gives the quickest connection for the requester.
With the Latency-based routing policy, Route 53 evaluates latency to available Regions and returns the record for the Region with the lowest measured latency. For NovaTrips running identical stacks in three Regions this behavior ensures users are routed automatically to the Region that is likely to provide the best responsiveness.
Failover routing policy is intended for active passive failover and uses health checks to route traffic to a standby endpoint when a primary is unhealthy. It does not optimize for latency.
Weighted routing policy distributes queries according to percentages that you assign and does not use real time network performance to choose endpoints.
Geolocation routing policy routes based on the geographic origin of the DNS query and does not guarantee the lowest network latency because geographic proximity does not always equate to the fastest network path.
If the question asks for the fastest or lowest latency option choose latency-based routing. If it mentions percentages choose weighted and if it implies primary and standby choose failover.
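The selection logic behind latency-based routing can be pictured as a simple minimum over measured round-trip times. The sketch below is a toy model with invented latency values, not how Route 53 is implemented internally:

```python
# Toy model of latency-based routing: given round-trip times measured
# from a viewer's resolver to each Region (values invented), the record
# for the lowest-latency Region is returned.
measured_latency_ms = {
    "us-east-1": 82,
    "eu-west-1": 21,
    "ap-southeast-2": 240,
}
chosen_region = min(measured_latency_ms, key=measured_latency_ms.get)
# A viewer near Ireland would be routed to eu-west-1
```

This also illustrates why geolocation routing is the wrong answer here: geographic proximity is not an input to this decision, only measured latency is.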
Aurora Pixel, a 30-person design startup, wants the lowest paid AWS Support plan that lets them open cases by email to AWS Cloud Support Associates during business hours and does not include phone or chat with support engineers. Which plan should they choose?
-
✓ B. Developer
The Developer support plan is the correct choice because it is the lowest paid tier that provides business hours email access to AWS Cloud Support Associates and it does not include phone or chat.
The Developer plan is designed for early stage or nonproduction use and it offers technical case submission by email during business hours with Cloud Support Associates. This matches Aurora Pixel's requirements by limiting access to email only and by avoiding the higher cost and expanded channels of phone and chat support.
The Basic plan is free but it does not include technical case support by email so it cannot meet the requirement to open cases with Cloud Support Associates.
The Business plan includes 24×7 phone, chat, and email access to Cloud Support Engineers, which is more than requested, and it is a higher cost option than the Developer plan.
The Enterprise plan also provides 24×7 phone, chat, and email and it adds advanced features and account management so it is not the lowest cost fit for the stated needs.
On exams look for the phrase email only or business-hours email to point to the Developer plan and look for 24×7 phone or chat to indicate Business or Enterprise.
Northwind Health Research stores lab reports in an Amazon S3 bucket and wants a simple way to recover objects if someone accidentally deletes or overwrites them without changing how files are uploaded. Which AWS feature should they enable to meet this requirement?
-
✓ B. Amazon S3 Versioning
Amazon S3 Versioning is correct because it preserves prior iterations of objects so you can recover files that were accidentally deleted or overwritten without changing how uploads are performed.
Amazon S3 Versioning stores multiple versions of an object in the same bucket so a delete creates a delete marker and previous versions remain available for restore. Enabling versioning on the bucket is an operational change on the bucket but it does not require altering client upload workflows, and it provides a quick, built-in way to roll back accidental changes.
AWS Backup can be used to create backups of S3 data but it introduces extra cost and operational steps and it is not the simple, immediate safeguard for accidental deletes or overwrites that versioning provides.
Amazon S3 Transfer Acceleration is designed to improve transfer speeds over long distances and it does not offer recovery or versioning features for accidental deletion or overwrite scenarios.
Amazon S3 Lifecycle configuration manages transitions and expirations to reduce storage costs and it can even delete objects on a schedule so it does not prevent accidental deletion and may remove older versions if configured to expire them.
When you need to recover accidental deletes or overwrites in S3 think Versioning and remember that Object Lock is for regulatory immutability.
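The delete-marker behavior is the part of versioning that trips people up, so here is a deliberately simplified model of the semantics (this is not the S3 API; the class and method names are invented for illustration):

```python
# Simplified model of S3 Versioning semantics: overwrites stack new
# versions, and a simple DELETE adds a delete marker instead of
# erasing data. None stands in for a delete marker.
class VersionedBucket:
    def __init__(self):
        self.versions = {}  # key -> list of object versions, oldest first

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        # A DELETE without a version ID inserts a delete marker;
        # all prior versions remain stored.
        self.versions.setdefault(key, []).append(None)

    def get(self, key):
        stack = self.versions.get(key, [])
        return stack[-1] if stack else None  # None reads as "not found"

    def restore_latest(self, key):
        # Recovery: remove the delete marker to expose the prior version.
        if self.versions.get(key) and self.versions[key][-1] is None:
            self.versions[key].pop()

bucket = VersionedBucket()
bucket.put("report.pdf", b"v1")
bucket.put("report.pdf", b"v2")      # overwrite keeps v1 as an old version
bucket.delete("report.pdf")          # delete marker hides the object
bucket.restore_latest("report.pdf")  # v2 is visible again
```

Note how the upload path (`put`) never changes, which is exactly why versioning satisfies the "without changing how files are uploaded" constraint.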
BrightPay, a digital payments startup, needs to obtain AWS Payment Card Industry Data Security Standard reports for an upcoming external audit. Which AWS resource should the team use to download these compliance documents?
-
✓ B. AWS Artifact
AWS Artifact is the correct option because it provides on demand access to AWS compliance reports and certifications that auditors can download directly.
AWS Artifact hosts AWS provided compliance documents including PCI DSS and SOC reports and it is designed for customers and auditors to retrieve formal evidence of AWS compliance. This makes AWS Artifact the proper source when an external audit requires AWS generated PCI documentation.
AWS Audit Manager is not correct because it helps you automate evidence collection and map controls for your own assessments and audits rather than serving as the repository for AWS issued compliance reports.
AWS Trusted Advisor is not correct because it provides best practice checks and operational recommendations for cost, security, and performance and it does not provide formal compliance artifacts for download.
AWS Cost & Usage Report (AWS CUR) is not correct because it delivers billing and usage data for cost analysis and forecasting and it is unrelated to obtaining PCI or other compliance reports.
For questions about where to download AWS compliance reports remember to choose AWS Artifact because it is the official repository for AWS provided compliance documentation.
A home automation vendor, BrightNest Labs, needs to onboard about 75,000 smart thermostats that must securely exchange messages with cloud applications and other devices in near real time. Which AWS service provides this managed device connectivity and messaging?
-
✓ B. AWS IoT Core
AWS IoT Core is the correct service for this scenario because it provides managed, secure device connectivity and messaging that can support large fleets and near real time exchanges between thermostats and cloud applications.
AWS IoT Core supports MQTT, WebSockets, and HTTP and it handles device authentication, authorization, and message routing at scale. The service offers a rules engine and device shadows so devices can publish and subscribe to topics and cloud applications can process or forward messages, which meets the requirement to securely exchange messages among 75,000 smart thermostats and other systems.
Amazon WorkSpaces is a cloud desktop service and it does not provide IoT device connectivity or messaging, so it is not suitable for this use case.
AWS Directory Service provides managed Active Directory for identity and access management and it is not designed for connecting or messaging IoT devices, so it does not meet the requirement.
AWS IoT Greengrass focuses on edge compute and local messaging for devices that need to run code or operate while offline and it complements rather than replaces AWS IoT Core for cloud connectivity.
For questions about secure, scalable device messaging choose AWS IoT Core and reserve AWS IoT Greengrass for scenarios that require local processing or offline operation.
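The publish/subscribe model mentioned above rests on MQTT topic filters, where `+` matches one topic level and `#` matches everything that remains. A hedged sketch of that matching rule (a simplified reimplementation for illustration, not AWS IoT Core's code):

```python
# Simplified MQTT topic-filter matching as used by IoT subscriptions:
# '+' matches exactly one level, '#' matches the rest of the topic.
def topic_matches(filter_str, topic):
    f, t = filter_str.split("/"), topic.split("/")
    for i, part in enumerate(f):
        if part == "#":
            return True
        if i >= len(t) or (part != "+" and part != t[i]):
            return False
    return len(f) == len(t)

# A backend subscribed to every thermostat's temperature readings:
matched = topic_matches("devices/+/temperature", "devices/thermo-42/temperature")
```

With 75,000 devices each publishing under its own topic level, a single `devices/+/temperature` subscription lets the cloud application receive all readings without enumerating device IDs.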
A mapping startup named Horizon Maps needs a service that is provided at a global scope by default rather than being confined to a single AWS Region. Which service should they choose?
-
✓ B. Amazon WorkSpaces
The correct choice is Amazon WorkSpaces. This service is offered by AWS as a managed Desktop as a Service solution that is delivered globally and it enables organizations to provision and manage virtual desktops for users around the world.
Amazon WorkSpaces is designed as a globally delivered desktop service with centralized management and user access across many locations. That global delivery model is what makes it the proper selection when the requirement calls for a service provided at a global scope by default.
Amazon Elastic File System (Amazon EFS) is regional because each file system exists within a specific AWS Region and operations target that regional resource.
AWS Snowball is a physical data transfer appliance and associated service that is used per job and per participating Region rather than being provided as a global service by default.
Amazon Simple Storage Service (Amazon S3) buckets are created in a particular Region and data residency is tied to that Region even though some S3 features operate globally. This regional characteristic makes S3 buckets an incorrect choice for a service required to be global by default.
When a question asks for a service that is provided at a global scope look for offerings described as globally delivered or with a worldwide control plane. Global usually means the service itself is managed across regions rather than you creating resources in a specific Region.
A fintech company named Aurora Ledger plans to run several Amazon EC2 instances in a VPC and wants an instance-scoped virtual firewall that can apply the same inbound and outbound rules across multiple instances. Which AWS feature should they choose?
-
✓ B. Amazon EC2 security groups
Amazon EC2 security groups is the correct option because they provide a stateful instance level virtual firewall that you can attach to multiple instances so the same inbound and outbound rules apply across them.
Security groups are stateful so return traffic is automatically allowed when outgoing traffic is permitted and they operate at the instance level so you can reuse the same group for many instances to ensure consistent enforcement.
Network Access Control Lists (NACLs) are subnet scoped and stateless and they are not attached to individual instances so they do not meet the instance scoped requirement.
Virtual private gateways (VGW) are used for VPN connectivity into a VPC and they do not act as firewalls for EC2 instances.
Route table controls traffic routing inside the VPC and it does not provide packet filtering or firewall capabilities.
When a question asks for an instance scoped firewall remember that security groups are instance level and stateful and NACLs are subnet level and stateless. Match the scope in the scenario to the control type.
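The stateful-versus-stateless distinction is easiest to see with return traffic. The toy functions below contrast the two behaviors; the rule shapes are heavily simplified and the names are invented:

```python
# Toy contrast between a stateful security group and a stateless NACL.
def sg_allows(inbound_rules, port, established_outbound=False):
    # Stateful: return traffic for an already-allowed outbound flow is
    # permitted automatically, with no matching inbound rule required.
    return established_outbound or port in inbound_rules

def nacl_allows(inbound_rules, port):
    # Stateless: every packet in every direction is checked against
    # explicit rules, including responses on ephemeral ports.
    return port in inbound_rules

# An HTTPS response arriving on an ephemeral port:
sg_ok = sg_allows(inbound_rules={443}, port=49152, established_outbound=True)
nacl_ok = nacl_allows(inbound_rules={443}, port=49152)
# sg_ok is True; nacl_ok is False without an explicit ephemeral-port rule
```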
HarborPay, a fintech startup, wants to reduce the effort of building and patching golden server images that are used to launch Amazon EC2 instances and also deployed to on-premises VMware servers. Which AWS service should the team use to automate image creation, testing, and distribution?
-
✓ C. Amazon EC2 Image Builder
Amazon EC2 Image Builder is the correct option because it provides managed pipelines that automatically build, test, secure, and distribute server images for Amazon EC2 and for on premises VMware environments.
Amazon EC2 Image Builder can create and harden images, run validation tests, apply patches, and publish artifacts such as AMIs and VM images so teams do not have to script and maintain their own image pipelines.
Amazon Machine Image (AMI) is only the image artifact used to launch EC2 instances and it is not a managed service that orchestrates building, testing, patching, or distributing images.
AWS CloudFormation focuses on provisioning infrastructure resources from templates and it does not create or patch operating system images.
AWS Systems Manager can patch and manage running instances, and Image Builder can leverage Systems Manager under the hood, but Systems Manager by itself does not provide end-to-end image creation, testing, and distribution pipelines.
When the requirement is to build, test, harden, and distribute images for EC2 and on premises choose the managed image pipeline service rather than a single image artifact or an infrastructure provisioning tool.
A small fintech startup plans to run a pilot analytics API on Amazon EC2 for the next 45 days and wants a pricing choice that keeps costs predictable without any upfront payment or long term commitment while ensuring AWS will not interrupt the running instances. Which EC2 purchasing option should they select?
-
✓ B. EC2 On-Demand Instances
The correct choice is EC2 On-Demand Instances. On-Demand pricing requires no upfront payment and no long term commitment and instances are not subject to AWS price-driven interruptions so it provides predictable costs and continuous availability for a 45 day pilot.
On-Demand billing is usage based so you pay only for the instance hours or seconds you consume and you can stop the instances at the end of the pilot to avoid further charges. Because AWS does not reclaim On-Demand instances for price reasons this option meets the requirement that the startup will not be interrupted.
Spot Instances are the least expensive option but they can be reclaimed by AWS with little notice so they do not satisfy a requirement to avoid interruptions during the pilot.
Reserved Instances require a one or three year commitment or an upfront payment and they are therefore unsuitable when the scenario specifies no long term commitment even though they provide cost savings for steady long term workloads.
Compute Savings Plans provide billing discounts in exchange for a one or three year commitment to a consistent spend rate and they do not change instance interruption behavior or provide capacity guarantees that would remove the risk of interruptions.
When a question states no upfront and no long term commitment and requires continuous availability prefer On-Demand rather than Spot or commitment-based discounts.
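Because On-Demand billing is purely usage based, estimating the pilot is straightforward multiplication. The numbers below are placeholders, not quoted AWS prices:

```python
# Back-of-envelope On-Demand estimate for the 45-day pilot.
hourly_rate = 0.0416      # hypothetical On-Demand $/hour, not a real price
instance_count = 2        # assumed pilot fleet size
hours = 45 * 24           # pilot runs continuously for 45 days
estimated_cost = round(hourly_rate * instance_count * hours, 2)
# Usage based: stop the instances when the pilot ends and billing stops too.
```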
A platform team at Klamath Robotics is evaluating AWS offerings to run application code and large-scale processing jobs. Which services from the list are considered compute services in AWS? (Choose 2)
-
✓ B. AWS Elastic Beanstalk
-
✓ D. AWS Batch
AWS Batch and AWS Elastic Beanstalk are the correct compute services from the list because they provision and run workloads on AWS compute resources and enable you to execute jobs or deploy applications without managing the underlying infrastructure directly.
AWS Batch is built to run large scale batch computing jobs and it handles job scheduling, resource provisioning, and scaling so you can run containerized or array jobs at scale without managing individual instances.
AWS Elastic Beanstalk simplifies application deployment and environment management by handling provisioning, scaling, load balancing, and health monitoring so you can focus on your application code rather than the infrastructure.
Amazon S3 is object storage for durable scalable data storage and it does not execute application code or run compute jobs so it is not a compute service.
AWS CloudTrail records API activity for auditing and governance and it does not run workloads or schedule jobs so it is not a compute service.
Amazon EFS provides shared file storage for instances and containers and it supports compute workloads but it does not itself run code so it is not a compute service.
When deciding if a service is compute ask whether it actually runs code or schedules jobs. If the service mainly stores data or records activity it is likely not compute.
Kora Metrics, a retail analytics startup, runs several AWS accounts with a mix of On-Demand and Reserved Instances. One member account is accruing many unused Reserved Instance hours each month. How can the company lower its overall spend by ensuring these unused discounts benefit workloads in the other accounts? (Choose 2)
-
✓ B. Enable consolidated billing in AWS Organizations so RI discounts are shared across accounts
-
✓ D. Move all accounts under a single AWS Organizations management account to link them
Enable consolidated billing in AWS Organizations so RI discounts are shared across accounts and Move all accounts under a single AWS Organizations management account to link them are correct because they allow unused Reserved Instance discounts in one account to reduce On Demand charges in other linked accounts.
Enable consolidated billing in AWS Organizations so RI discounts are shared across accounts aggregates usage across the payer family so unused Reserved Instance hours in a member account can automatically offset matching instance usage in other accounts and lower overall spend.
Move all accounts under a single AWS Organizations management account to link them is the required operational step to create the payer family and enable consolidated billing and RI discount sharing across those accounts. Accounts must be linked under the same management account for the discounts to flow across members.
Switch the fleet to Amazon EC2 Spot Instances can reduce costs for interruptible workloads but it does not apply existing Reserved Instance discounts that are already purchased in a member account.
Place the EC2 instances into a cluster placement group only affects placement and network performance characteristics and does not change billing or enable Reserved Instance sharing across accounts.
Sell the idle Reserved Instances on the Reserved Instance Marketplace might recover some spend for that specific account but it does not immediately apply discounts to other accounts and it is not the simplest way to lower overall cross-account spend when sharing via Organizations is available.
Enable consolidated billing and link member accounts under one management account so unused Reserved Instance discounts automatically apply across the payer family.
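The discount-sharing effect can be shown with simple arithmetic. All figures below are invented for illustration; the point is that unused RI hours in one member account offset matching On-Demand usage in another before any On-Demand rate is charged:

```python
# Illustrative RI-sharing math under consolidated billing (numbers invented).
ri_hours_unused_in_account_a = 300
on_demand_hours_in_account_b = 500   # matching instance type and Region
on_demand_rate = 0.10                # hypothetical $/hour

shared = min(ri_hours_unused_in_account_a, on_demand_hours_in_account_b)
billable_on_demand_hours = on_demand_hours_in_account_b - shared
cost_after_sharing = billable_on_demand_hours * on_demand_rate
# Only 200 hours remain billable at the On-Demand rate instead of 500.
```

Sharing only applies when the usage matches the Reserved Instance's attributes, which is why the accounts must sit in the same payer family for the aggregation to happen.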
A fast-growing online learning platform plans to launch in 45 countries and wants to lower latency when serving images, downloads, and other static web content to viewers worldwide without changing its application servers. Which AWS service should the team choose to achieve this?
-
✓ C. Amazon CloudFront
The correct choice is Amazon CloudFront because it is a global content delivery network that caches static assets at edge locations around the world and brings content physically closer to users to reduce latency without changing the origin application servers.
Amazon CloudFront places copies of images, downloads, and other static web content at edge locations so requests are served from nearby caches instead of the origin. This reduces round trip time and lowers load on the application servers, and it integrates with Amazon S3 and custom origins and supports HTTPS, compression, and cache invalidation to manage content delivery efficiently.
Amazon Lightsail is a simplified virtual server and application hosting service and it does not provide worldwide edge caching for static web assets so it will not by itself reduce global delivery latency.
Amazon Route 53 is a DNS and traffic management service and it can route users to low latency endpoints but it does not cache or accelerate content at edge locations so it will not directly speed up static asset delivery.
Amazon ElastiCache provides in-memory caching for databases and application data within a region and it is not designed to distribute static web assets to a global audience so it is not the right choice for reducing worldwide latency for downloads and images.
When a question describes serving static assets globally with lower latency without changing servers think CDN and pick Amazon CloudFront.
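The edge-caching behavior that makes a CDN effective can be reduced to a miss-then-hit pattern. The sketch below is a minimal model (the function names and paths are invented), not CloudFront's implementation:

```python
# Minimal CDN edge-cache model: the first request for a path is a miss
# that goes to the origin; later requests are served from the edge copy.
origin_fetches = []

def fetch_from_origin(path):
    origin_fetches.append(path)          # track every origin round trip
    return f"<bytes of {path}>"

edge_cache = {}

def serve(path):
    if path not in edge_cache:           # cache miss -> contact origin
        edge_cache[path] = fetch_from_origin(path)
    return edge_cache[path]              # cache hit -> nearby edge copy

serve("/img/logo.png")
serve("/img/logo.png")
serve("/img/logo.png")
# Three viewer requests, but the origin was contacted only once.
```

This is also why the application servers need no changes: they remain the origin, and the edge layer absorbs repeat traffic in front of them.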
An e-commerce startup deploys its web tier across several Availability Zones behind a load balancer so users can still access the site even if one zone experiences an outage. Which cloud design principle does this approach demonstrate?
-
✓ B. Plan for failure
The correct choice is Plan for failure because the deployment spreads the web tier across multiple Availability Zones and places it behind a load balancer so users can still reach the site when a single zone goes down.
Plan for failure is appropriate because the architecture assumes components may fail and builds redundancy and failover into the design. Distributing instances across AZs and using a load balancer increases fault tolerance and supports high availability by reducing single points of failure.
Support elasticity is incorrect because elasticity focuses on matching capacity to demand with mechanisms such as Auto Scaling rather than on cross zone resilience.
Automate operations is incorrect because placing resources in multiple Availability Zones and using a load balancer does not by itself automate deployment or operational tasks.
Prioritize agility is incorrect because agility refers to rapid development and iteration practices and not specifically to designing for availability and fault tolerance.
When you see multi Availability Zone deployments and a load balancer associate them with designing for failure and high availability. Map Auto Scaling to elasticity and map Infrastructure as Code to automation.
A travel booking startup runs roughly 24 on-premises application servers that simultaneously mount an NFS share. They plan to shift the applications to Amazon EC2 and want to keep a shared, POSIX-compliant file system in AWS with minimal changes so all instances can read and write concurrently. Which AWS storage service should they choose?
-
✓ B. Amazon EFS
The correct choice is Amazon EFS. Amazon EFS provides a managed, elastic NFS file system that many EC2 instances can mount at the same time while preserving POSIX file semantics and requiring minimal application changes.
Amazon EFS supports concurrent read and write access from multiple EC2 instances and scales automatically to match workload demands. This makes it suitable for lifting and shifting applications that previously used an on premises NFS share because the same POSIX behaviors are available and parallel access is supported.
AWS Storage Gateway is intended for hybrid scenarios where on premises systems need cloud backed storage and it does not serve as a native shared NFS filesystem for EC2 instances in AWS.
Amazon S3 is object storage and it does not provide POSIX semantics so it cannot act as a drop in shared NFS file system for concurrent read and write by multiple servers.
Amazon EBS is block storage that is normally attached to a single instance and it does not offer a general multi writer POSIX file system across many servers. The limited multi attach capability is not a substitute for a true shared NFS filesystem.
For shared POSIX file access across many EC2 instances, choose Amazon EFS. Use Amazon EBS for single-instance block volumes and Amazon S3 for object storage.
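To see how little changes in a lift and shift like this, here is a hedged sketch of mounting an EFS file system on an instance. The file system ID, Region, and mount path are placeholders, and amazon-efs-utils is the AWS-provided mount helper for Amazon Linux:

```shell
# Install the EFS mount helper (Amazon Linux).
sudo yum install -y amazon-efs-utils

# Mount the shared file system. Every instance that runs this
# sees the same POSIX file tree, just like the old NFS share.
sudo mkdir -p /mnt/shared
sudo mount -t efs fs-0123456789abcdef0:/ /mnt/shared

# Equivalent plain NFSv4.1 mount if the helper is not installed:
# sudo mount -t nfs4 -o nfsvers=4.1 \
#   fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/shared
```

Running the same mount command on all 24 instances gives concurrent read and write access with no application changes, which is the scenario the question describes.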
A small charity named RiverHope is evaluating AWS Support tiers before launching a data portal. Which feature is included in every AWS Support plan and provides account-specific visibility into events that could affect their resources?
-
✓ B. AWS Health Dashboard (Your account health)
AWS Health Dashboard (Your account health) is the correct option because it is included with every AWS Support plan and it gives account specific visibility into events that could affect RiverHope’s resources.
AWS Health Dashboard (Your account health) provides personalized notifications and guidance for scheduled and unplanned AWS events that may impact services in a given account. It surfaces operational and account level issues so a small charity can see only the events that apply to their resources and respond appropriately.
AWS Support API is not available on all tiers and requires at least Business Support for programmatic access so it is not a feature present in every support plan.
Unlimited technical support cases and unlimited contacts are benefits tied to Business and Enterprise plans and therefore they are not universal across all support levels.
Full suite of AWS Trusted Advisor best practice checks is restricted to Business and Enterprise plans while Basic and Developer receive only core checks so the full suite is not included with every plan.
Remember that the AWS Health Dashboard shows account specific events for all support plans while programmatic APIs and full Trusted Advisor checks require higher tier support.
A learning technology company needs to provide external auditors with AWS security attestations and certifications, such as SOC 2 Type II and ISO/IEC 27001, for the past 12 months. Which AWS resource lets the team obtain these AWS compliance documents on demand?
-
✓ B. AWS Artifact
AWS Artifact is the correct option because it provides on demand access to AWS compliance reports and certifications such as SOC 2 Type II and ISO/IEC 27001 for requested time frames like the past twelve months.
AWS Artifact is the AWS service that publishes AWS owned compliance reports and agreements so you can download or share them with external auditors. The service exposes third party audit reports and AWS certifications that document how AWS controls its infrastructure and services which auditors require for vendor attestations.
AWS Organizations centralizes multi account governance and policy management but it does not host or distribute AWS compliance artifacts for download.
AWS IAM handles identity and access management for users, roles, and permissions which is unrelated to obtaining AWS compliance reports.
AWS Audit Manager helps you automate evidence collection and assess your own environment but it does not serve as the repository for AWS’s own compliance certifications and third party audit reports.
When the question asks for AWS provided compliance reports for auditors remember to look in AWS Artifact and not in services that manage accounts, identities, or your own evidence.
A small analytics startup runs Amazon EC2 instances with EBS volumes and schedules nightly point in time backups for compliance. In which AWS storage service are these EBS snapshots actually stored?
-
✓ C. Amazon S3
Amazon S3 is the correct option because EBS snapshots are stored in S3 as incremental point in time backups managed by the EBS service and users do not interact with them as ordinary S3 objects.
The EBS service creates block level, incremental snapshots and writes the snapshot data to Amazon S3 behind the scenes to provide durable and replicated storage. The snapshot lifecycle is managed through the EBS APIs and console so you request restores and copies without directly handling S3 objects.
Amazon EBS is incorrect because snapshots are not kept on EBS volumes themselves. EBS volumes are the live block devices and their snapshots are moved to S3 for durability and efficient incremental storage.
Amazon EFS is incorrect because it is a shared network file system for EC2 and it does not act as the backend for EBS snapshot storage.
EC2 instance store is incorrect because instance store provides ephemeral local host storage and it cannot hold persistent EBS snapshots.
Remember that snapshots are managed by the EBS service but the backed up data is stored in S3. Watch for choices that confuse live volumes with snapshot storage.
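A hedged CLI sketch of the nightly backup described above follows. The volume ID is a placeholder, and the point to notice is that you only ever call EC2/EBS APIs even though the snapshot data lands in S3:

```shell
# Create an incremental, point in time snapshot of an EBS volume.
# The snapshot data is stored durably in Amazon S3 behind the scenes.
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "nightly compliance backup"

# List your snapshots through the EC2/EBS API. The underlying
# S3 storage is never exposed as ordinary S3 objects.
aws ec2 describe-snapshots --owner-ids self
```

This is why the question's answer is S3 for storage but EBS for management: the CLI surface is entirely EC2/EBS, while durability comes from S3.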
A finance director at Meridian Retail wants to know how AWS is able to keep lowering the per-unit cost of many services over time without reducing service quality. What primary factor enables these ongoing price reductions?
-
✓ C. Economies of scale
The correct answer is Economies of scale.
As AWS aggregates usage across a global customer base, it can purchase hardware and network capacity more efficiently and spread fixed costs over many customers, which reduces the provider's unit cost. Those savings come from purchasing power, operational automation, and accumulated experience, and AWS can pass a portion of those lower costs to customers as lower prices over time.
Pay-as-you-go pricing describes how customers are billed based on consumption and it does not explain why the provider can lower list prices across the board over time.
Savings Plans give customers lower rates when they commit to usage and they are a purchasing option rather than the underlying reason AWS can reduce its standard prices.
Amazon EC2 Auto Scaling helps individual customers match capacity to demand and control their own costs but it does not create provider-level cost reductions that drive lower per-unit prices for all customers.
When a question asks why cloud provider prices fall choose economies of scale as the provider level reason rather than customer billing models or commitment discounts.
A manufacturing firm, Meridian Tools, currently reaches AWS from its primary data center over the public internet. The network team wants a dedicated, private link to AWS to build a hybrid environment that does not traverse the internet. Which AWS service should they choose?
-
✓ B. AWS Direct Connect
The correct choice is AWS Direct Connect because it provides a dedicated private network connection from the on premises data center into AWS that does not traverse the public internet.
AWS Direct Connect establishes a physical or hosted connection to AWS which delivers more consistent bandwidth and lower latency than going over the internet, and it keeps traffic on a private path into AWS rather than using public networks.
AWS VPC Endpoint allows private access to supported AWS services from resources inside a VPC but it does not link an on premises network to AWS so it cannot provide the required dedicated private circuit.
AWS Site-to-Site VPN uses encrypted tunnels over the public internet so it provides secure connectivity but it still traverses the internet and therefore is not a private physical link.
AWS Transit Gateway functions as a central routing hub to connect multiple VPCs and on premises networks but it does not by itself create a private circuit and it depends on attachments such as Direct Connect or VPN for the actual link.
Look for the phrase dedicated private link that does not traverse the internet and choose the service that provides a physical connection into AWS.
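For orientation, here is a minimal sketch of requesting a dedicated connection with the AWS CLI. The location code and connection name are placeholders, and provisioning also requires a physical cross-connect at the Direct Connect facility before any traffic flows:

```shell
# Request a dedicated 1 Gbps Direct Connect port at a given
# Direct Connect location (location code is a placeholder).
aws directconnect create-connection \
  --location EqDC2 \
  --bandwidth 1Gbps \
  --connection-name meridian-dc-link

# Check the provisioning state. The physical cross-connect must
# still be completed at the facility before the link is usable.
aws directconnect describe-connections
```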
A fintech company needs its applications running in an Amazon VPC to communicate with Amazon SQS using private IP addresses only, avoiding traversal over the public internet. Which AWS capability should they implement to achieve this connectivity?
-
✓ B. VPC Interface Endpoint
The correct choice is VPC Interface Endpoint. VPC Interface Endpoint uses AWS PrivateLink to place an elastic network interface with a private IP in your subnets so that applications in the VPC can reach Amazon SQS entirely over the AWS network without traversing the public internet.
With a VPC Interface Endpoint AWS provisions an ENI in your subnet that has a private IP and a private route to the service. Traffic to SQS stays on the AWS network and does not require internet gateways or NAT devices. This satisfies the requirement to use private IP addresses only.
AWS Direct Connect provides dedicated connectivity between on premises locations and AWS and it does not create a private endpoint inside the VPC for SQS. It is not used to expose SQS via a private IP inside your VPC.
NAT Gateway provides outbound internet access for resources in private subnets and will route traffic to services over the public internet via an internet gateway. It does not provide a private endpoint to SQS and so it will not keep SQS traffic entirely on private IP addresses.
VPC Gateway Endpoint is incorrect because gateway endpoints support only Amazon S3 and DynamoDB. They cannot be created for SQS, so they do not satisfy this requirement.
Remember that Interface Endpoints use AWS PrivateLink to keep traffic private and that Gateway Endpoints only support S3 and DynamoDB.
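A hedged sketch of creating such an interface endpoint with the AWS CLI follows. All resource IDs and the Region in the service name are placeholders:

```shell
# Create an interface endpoint for Amazon SQS backed by AWS PrivateLink.
# An ENI with a private IP is placed in the listed subnets, so traffic
# to SQS stays on the AWS network and never crosses the public internet.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.sqs \
  --subnet-ids subnet-0aaa1111bbbb2222c \
  --security-group-ids sg-0f1e2d3c4b5a69788 \
  --private-dns-enabled
```

With private DNS enabled, the standard SQS endpoint name resolves to the endpoint's private IP inside the VPC, so applications need no code changes to use the private path.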
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
