Free Sample Questions for the AWS Developer Certification
 These exam questions come from the Udemy AWS Developer Practice course and from certificationexams.pro
AWS Developer Associate Sample Questions & Answers
The AWS Certified Developer Associate exam tests your ability to design, build, and deploy applications that make full use of the AWS ecosystem.
It focuses on key services like Lambda for serverless computing, S3 for object storage, DynamoDB and RDS for databases, and SQS or SNS for event-driven messaging. Understanding how these services integrate is essential to passing the exam and becoming an effective cloud developer.
To help you prepare, this guide provides AWS Developer Practice Questions that mirror the structure and difficulty of the real test. You will find Real AWS Developer Exam Questions and AWS Developer Exam Sample Questions that cover core objectives such as writing and deploying code using the AWS SDK, implementing security with IAM roles, managing queues and streams, and optimizing data access with S3 and DynamoDB.
Targeted Developer Exam Topics
Each section includes AWS Developer Questions and Answers written to teach as well as test, giving you insight into how to reason through real world scenarios. For additional preparation, you can explore the AWS Developer Exam Simulator and full AWS Developer Practice Tests to assess your readiness.
These materials are not AWS Developer Exam Dumps or copied content. They are original study resources built to strengthen your understanding of AWS architecture, improve your test-taking skills, and give you the confidence to succeed. Whether you are reviewing an AWS Developer Braindump-style summary or tackling full-length practice exams, these resources will help you master the knowledge areas required to pass the AWS Developer Certification exam.
Certification Sample Questions
A digital mapping startup runs a serverless backend where geospatial tile generation and compute intensive calculations execute in AWS Lambda. After reviewing 30 days of Amazon CloudWatch metrics, the team observes increased function durations during CPU bound steps and wants to improve throughput without rearchitecting. What change should they make to best handle these compute intensive workloads?
- ❏ A. Enable provisioned concurrency for the functions
- ❏ B. Increase the memory size configured for the Lambda functions to allocate more CPU
- ❏ C. Invoke the functions asynchronously using an event source such as Amazon SQS
- ❏ D. Move the compute-heavy tasks to AWS Fargate
 
A retail analytics startup named LumaRetail runs nearly all internal services on an AWS serverless stack and must deliver a new capability in under 10 days by reusing prebuilt serverless components instead of writing code from scratch. Which AWS service offers the most straightforward way to find and deploy ready-made serverless applications?
- ❏ A. AWS Service Catalog
- ❏ B. AWS Serverless Application Repository (SAR)
- ❏ C. AWS Proton
- ❏ D. AWS Marketplace
 
A media analytics startup runs an Amazon ElastiCache for Redis layer in front of an Amazon Aurora MySQL cluster. They want to lower costs by ensuring the cache only holds entries after clients request them at least once. Which caching approach should they use?
- ❏ A. Set short TTLs for cached entries
- ❏ B. Adopt a lazy-loading cache pattern
- ❏ C. Switch to a write-through strategy
- ❏ D. Amazon RDS Proxy
 
You are helping Veridian Labs stand up infrastructure with the AWS Cloud Development Kit. You will scaffold a project, define constructs and stacks in code, generate templates, and roll them out to an AWS account. In what sequence should this CDK workflow be executed to follow best practice?
- ❏ A. Initialize the app with an AWS CDK template → Add stack code → Synthesize the app → Deploy stacks → Build the project
- ❏ B. Initialize the app with an AWS CloudFormation template → Add stack code → Build the project (optional) → Synthesize → Deploy
- ❏ C. Initialize the app with an AWS CDK template → Add stack code → Build the project (optional) → Synthesize the stacks → Deploy to the account
- ❏ D. Initialize using an AWS CloudFormation template → Add stack code → Synthesize the stacks → Deploy → Build the project
 
An engineer at Helios Analytics configures an Amazon ECS service with a task placement strategy that lists two entries in order: type “spread” on field “attribute:ecs.availability-zone” and then type “spread” on field “instanceId”. When the service schedules tasks, what behavior should be expected?
- ❏ A. It spreads tasks across Availability Zones, then randomly assigns tasks to instances inside each zone
- ❏ B. It spreads tasks across Availability Zones, then packs tasks based on lowest available memory per zone
- ❏ C. It spreads tasks across Availability Zones, then guarantees each task lands on a different instance within the zone
- ❏ D. It spreads tasks across Availability Zones, then distributes tasks evenly across instances within each zone
 
A development group at HarborPeak Studios needs engineers to access a member account within an AWS Organizations setup. The administrator of the management account wants to strictly limit which AWS services, specific resources, and API operations are usable in that member account. What should the administrator apply?
- ❏ A. Organizational Unit
- ❏ B. Attach a Service Control Policy (SCP)
- ❏ C. IAM permission boundary
- ❏ D. Tag Policy
 
Orion Outfitters processes about 240,000 online purchase records each day and stores them in an Amazon S3 bucket for auditing. Company policy mandates that every new object written to this bucket is encrypted at the time of upload. What should a developer implement to guarantee that only encrypted objects can be stored in the bucket?
- ❏ A. Enable default bucket encryption using SSE-S3 on the Amazon S3 bucket
- ❏ B. Use the AWS Config managed rule s3-bucket-server-side-encryption-enabled to detect noncompliant uploads
- ❏ C. Attach an S3 bucket policy that rejects PutObject requests missing the x-amz-server-side-encryption header
- ❏ D. Create an Amazon CloudWatch alarm that alerts when unencrypted objects are added to the bucket
 
A biotech analytics company runs Node.js microservices on Amazon ECS with Fargate. The AWS X-Ray daemon is deployed as a sidecar container listening on 10.0.2.25:3000 instead of the default address. To make sure the application’s X-Ray SDK discovers the daemon correctly, which environment variable should be configured in the task definition?
- ❏ A. AWS_XRAY_TRACING_NAME
- ❏ B. AWS_XRAY_DAEMON_ADDRESS
- ❏ C. AWS Distro for OpenTelemetry
- ❏ D. AWS_XRAY_CONTEXT_MISSING
 
You are the lead engineer at Medora Health responsible for NoSQL-backed services. A new patient lookup API must sustain 18 strongly consistent reads per second, and each item is approximately 9 KB in size. How many read capacity units should be provisioned on the DynamoDB table to satisfy this requirement?
- ❏ A. 72
- ❏ B. 27
- ❏ C. 54
- ❏ D. 18
 
A ride-sharing startup uses an Amazon SQS Standard queue to decouple trip events. A worker service polls the queue about every 3 seconds and cannot pick a specific message, only the maximum batch size for each ReceiveMessage request. What is the largest number of messages it can ask SQS to return in a single call?
- ❏ A. 20
- ❏ B. 10
- ❏ C. 5
- ❏ D. 25
 
A developer at Helios Travel is building a REST API with Amazon API Gateway and needs a Lambda authorizer for a custom authentication mechanism. The authorizer must read client-provided values from HTTP headers and query string parameters. Which approach should the developer choose to meet this requirement?
- ❏ A. IAM authorization
- ❏ B. TOKEN Lambda authorizer
- ❏ C. REQUEST Lambda authorizer
- ❏ D. Amazon Cognito user pool authorizer
 
A developer at a logistics startup needs an Auto Scaling group of Amazon EC2 instances to scale automatically based on the average memory utilization across the fleet. The team wants a supported and maintainable approach that makes scaling decisions from a memory metric. What should they implement?
- ❏ A. Enable detailed monitoring for EC2 and the Auto Scaling group, then create a CloudWatch alarm on memory utilization
- ❏ B. Use AWS Auto Scaling target tracking with the predefined metric ASGAverageCPUUtilization to approximate memory needs
- ❏ C. Publish a custom CloudWatch metric for memory from each instance using PutMetricData and create a scaling alarm on that metric
- ❏ D. AWS Lambda
 
Riverton Health Tech exposes public REST endpoints through Amazon API Gateway, and stage caching is enabled with a 180-second TTL to reduce backend load. Several partners want a self-service way to force a fresh response for a single call without changing the API or requiring the provider to clear the entire cache. How can callers trigger a one-time cache refresh on demand while keeping the existing API unchanged?
- ❏ A. Tell clients to invoke a special AWS-managed endpoint that clears the API Gateway cache
- ❏ B. Have clients append a query string parameter named REFRESH_CACHE to each request
- ❏ C. Instruct clients to send the HTTP header Cache-Control: max-age=0 with the request
- ❏ D. Give clients IAM credentials so they can call the API Gateway FlushStageCache action
 
A media startup named AuroraStream runs a Python-based AWS Lambda function that generates a temporary file during execution and needs to store it in an Amazon S3 bucket. The team wants to add the upload step while making the fewest possible edits to the current codebase. Which approach is the most appropriate?
- ❏ A. Bundle the AWS SDK for Python with the Lambda deployment package
- ❏ B. Attach a Lambda layer that provides the AWS SDK for Python
- ❏ C. Use the AWS SDK for Python that is preinstalled in the Lambda Python runtime
- ❏ D. Use the AWS CLI in the Lambda execution environment
 
A software engineer at BlueHarbor Analytics is setting up a CI/CD workflow and needs to automatically deploy application packages to Amazon EC2 instances and to about 14 on-premises virtual machines in the company data center. Which AWS service provides a managed way to orchestrate these deployments to both environments?
- ❏ A. AWS Systems Manager
- ❏ B. AWS CodeBuild
- ❏ C. AWS CodeDeploy
- ❏ D. AWS CodePipeline
 
StoneRiver Media uses a GitLab CI/CD pipeline and needs one automated process to deploy the same application package to about 30 Amazon EC2 instances and to a group of on-premises VMware virtual machines in its data center. Which AWS service should the developer choose to perform these deployments across both environments?
- ❏ A. AWS CodeBuild
- ❏ B. AWS Elastic Beanstalk
- ❏ C. AWS CodeDeploy
- ❏ D. AWS CodePipeline
 
A biotech analytics firm needs to encrypt approximately 80 TB of research data, and its compliance policy mandates that the data encryption keys be created inside customer-controlled, single-tenant, tamper-resistant hardware. Which AWS service should the team use to satisfy this requirement?
- ❏ A. AWS Certificate Manager
- ❏ B. AWS KMS
- ❏ C. AWS CloudHSM
- ❏ D. AWS Nitro Enclaves
 
A product team at a digital publishing startup is writing an AWS CloudFormation template to launch an EC2 based content service. The stack will be rolled out in several Regions, and a Mappings section links each Region and platform key to the correct base AMI. Which YAML syntax should be used to call FindInMap to retrieve the AMI from this two level mapping?
- ❏ A. !FindInMap [ MapName, TopLevelKey ]
- ❏ B. Fn::FindInMap: [ MapName, TopLevelKey, SecondLevelKey ]
- ❏ C. !FindInMap [ MapName ]
- ❏ D. !FindInMap [ MapName, TopLevelKey, SecondLevelKey, ThirdLevelKey ]
 
Northline Auto runs a nationwide appointment portal where customers book service slots at multiple garages. Bookings are stored in an Amazon DynamoDB table named ServiceBookings with DynamoDB Streams enabled. An Amazon EventBridge schedule triggers an AWS Lambda function every 30 hours to read the stream and export a summary to an Amazon S3 bucket. The team observes many updates never make it to S3 even though there are no logged errors. What change should they make to ensure all updates are captured?
- ❏ A. Set the DynamoDB Streams StreamViewType to NEW_IMAGE
- ❏ B. Reduce the EventBridge schedule so the Lambda runs at least every 24 hours
- ❏ C. Increase the schedule so the Lambda runs every 48 hours
- ❏ D. Configure DynamoDB Streams to retain records for 72 hours
 
A company runs a serverless API with Amazon API Gateway in front of AWS Lambda functions and uses a Lambda authorizer for authentication. Following a production incident, a developer needs to trace each client request from the API Gateway entry point through the downstream Lambda and any invoked services to identify latency and errors. Which AWS service should the developer use to achieve end-to-end request tracing and analysis across these components?
- ❏ A. Amazon CloudWatch
- ❏ B. AWS X-Ray
- ❏ C. VPC Flow Logs
- ❏ D. Amazon Inspector
 
BlueOrbit, a retail startup, exposes roughly 12 HTTPS endpoints through Amazon API Gateway that trigger a Lambda backend. The team needs to secure these routes using an access control mechanism that API Gateway supports natively. Which option is not supported by API Gateway for request authorization?
- ❏ A. IAM authorization with Signature Version 4
- ❏ B. AWS Security Token Service (STS)
- ❏ C. Lambda authorizer
- ❏ D. Amazon Cognito user pools
 
A photo-sharing startup, Northwind Lens, is moving its application to AWS. Today, the app saves user-uploaded media into a local folder on the server. The company will run the service in an Auto Scaling group that can grow to 18 instances across three Availability Zones. Every upload must persist and be immediately accessible to any existing or new instance in the fleet with minimal operational overhead. What should the developer implement?
- ❏ A. Use Amazon EBS volumes and bake a golden AMI from a snapshot so new instances start with the same files
- ❏ B. Configure Amazon EBS Multi-Attach to mount a single volume to all instances in the Auto Scaling group
- ❏ C. Refactor the application to write and read uploads from a single Amazon S3 bucket
- ❏ D. Store files on instance store and replicate to other nodes using file synchronization tooling
- ❏ E. Keep files on an attached Amazon EBS volume and synchronize changes between instances with custom scripts
 
PixelWave Studios runs an event-driven application on AWS Lambda behind Amazon API Gateway with tracing enabled for AWS X-Ray across every function. The team reports that nearly all requests are being traced and their X-Ray costs have surged. They still need to track latency and error patterns but want to lower spend with minimal changes to the workload. What should the developer do?
- ❏ A. Use X-Ray filter expressions in the console to narrow displayed traces
- ❏ B. Turn off X-Ray tracing and rely on Amazon CloudWatch metrics and logs
- ❏ C. Configure AWS X-Ray sampling rules to capture a representative subset of requests
- ❏ D. Reduce AWS X-Ray trace retention to 14 days
 
A software engineer at Aurora Retail is packaging a web app for AWS Elastic Beanstalk and must apply custom load balancer settings by adding a file named lb-settings.config. Where in the application source bundle should this file be placed so Elastic Beanstalk reads it during deployment?
- ❏ A. Inside the .platform/hooks directory
- ❏ B. In a bin directory within the project
- ❏ C. In the .ebextensions folder at the root of the source bundle
- ❏ D. Directly in the top-level of the source bundle
 
An engineer at Nova Courier is modeling a parcel-tracking workflow using AWS Step Functions. Several states are defined in the Amazon States Language. Which state definition represents a single unit of work executed by the state machine?
- ❏ A. "pause_for_event": { "Type": "Wait", "Timestamp": "2021-10-05T08:30:00Z", "Next": "Continue" }
- ❏ B. "StashPayload": { "Type": "Pass", "Result": { "status": "ok", "attempt": 3 }, "ResultPath": "$.meta", "Next": "Finalize" }
- ❏ C. "ProcessParcel": { "Type": "Task", "Resource": "arn:aws:lambda:us-west-2:210987654321:function:ProcessParcelFn", "Next": "AfterProcess", "Comment": "Invoke the parcel processor Lambda" }
- ❏ D. "Abort": { "Type": "Fail", "Cause": "Invalid status", "Error": "StateError" }
 
Riverbend Retail hosts an application on AWS Elastic Beanstalk using the Java 8 platform. A new build requires upgrading to the Java 11 platform. All traffic must shift to the new release at once, and the team must be able to revert quickly if problems occur. What is the most appropriate way to perform this platform upgrade?
- ❏ A. Update the Elastic Beanstalk environment to the Java 11 platform version
- ❏ B. Perform a Blue/Green deployment and swap environment URLs
- ❏ C. Use an immutable deployment in Elastic Beanstalk
- ❏ D. Use Elastic Beanstalk traffic splitting for the deployment
 
MetroSight Analytics operates a live transit telemetry pipeline that ingests events into Amazon Kinesis Data Streams, and a fleet of consumers on Amazon ECS processes the records. During rush hours the producers spike from about 2,000 to 7,500 records per second, so the team routinely changes the shard layout to match demand. Which statements about Kinesis resharding should the team rely on when planning capacity and protecting data integrity? (Choose 2)
- ❏ A. To reduce overall capacity, you should split underutilized cold shards
- ❏ B. Splitting shards increases the stream’s ingest and read capacity
- ❏ C. During resharding, data from parent shards is discarded
- ❏ D. Amazon Kinesis Data Firehose can automatically split and merge shards in a Kinesis Data Stream
- ❏ E. Merging shards reduces the stream’s capacity and cost
 
A developer at BlueLedger, a fintech startup, is updating an AWS Lambda function that invokes a third-party GraphQL API and must now persist results in an Amazon RDS database within a VPC. The function has been placed in two private subnets with a security group and can connect to the database, but during testing it can no longer reach the external GraphQL endpoint. What should the developer change to restore outbound internet access from the function while maintaining database connectivity? (Choose 2)
- ❏ A. Attach an internet gateway to the VPC and run the function in a public subnet
- ❏ B. Verify the Lambda function’s security group allows outbound traffic to 0.0.0.0/0
- ❏ C. Create a NAT gateway in the VPC and route 0.0.0.0/0 from the Lambda private subnets to the NAT gateway
- ❏ D. Submit a request to increase the function’s reserved concurrency limit
- ❏ E. Create an interface VPC endpoint for the external GraphQL API
 
Orion Logistics is moving its internal platforms to AWS and needs a fully managed private PKI built from native services. The solution must allow IAM policy controls, capture API activity in AWS CloudTrail for auditing, issue private X.509 certificates, and support a hierarchy with subordinate certificate authorities. Which AWS offering best meets these needs?
- ❏ A. AWS Key Management Service
- ❏ B. AWS Private Certificate Authority
- ❏ C. AWS Certificate Manager
- ❏ D. AWS Secrets Manager
 
At Horizon HealthTech, an engineer is experimenting with Amazon SQS in a sandbox account. After running tests, they need to remove a test queue so the queue itself and any messages it contains are permanently gone. Which SQS API action should they use?
- ❏ A. RemovePermission
- ❏ B. PurgeQueue
- ❏ C. DeleteQueue
- ❏ D. DeleteMessageBatch
 
A developer at Helios Fitness is building an AWS Lambda consumer for an Amazon Kinesis Data Streams stream named activity-events-v4. The code parses each JSON record and throws an error when the required attribute memberId is absent. After deployment, the downstream application starts seeing the same records more than once, but when the team inspects records directly in the Kinesis console there are no duplicates. What most likely explains the repeated records?
- ❏ A. The Lambda function is lagging behind the incoming stream volume
- ❏ B. The event source mapping used asynchronous invocation for the Kinesis trigger
- ❏ C. The function raised an exception, so the Lambda poller retried the same batch from the shard, which led to duplicate processing
- ❏ D. The function did not update the Kinesis sequence pointer to skip the bad record
 
A logistics startup named Northwind Haulage runs a Node.js microservice on Amazon ECS with AWS X-Ray enabled as a sidecar. The service uses Amazon RDS for PostgreSQL in a Multi-AZ configuration with a single read replica. Compliance asks for end-to-end trace data for outbound calls to RDS and to internal and third-party HTTP endpoints. Engineers must be able to search in the X-Ray console using filter expressions, and the traces should capture the exact SQL statements sent by the application. What should you implement to fulfill these requirements?
- ❏ A. Add metadata to the subsegment that records the database and HTTP calls
- ❏ B. Add annotations to the top-level segment document
- ❏ C. Add annotations to the subsegment that records each downstream call, including the SQL text
- ❏ D. Add metadata fields to the segment document
 
A retail startup runs a microservice on Amazon ECS with data in an Amazon DynamoDB table and uses an Amazon ElastiCache for Redis cluster to cache lookups. The application currently adds items to the cache only after a miss, which leads to outdated entries when records are updated in the table. The platform team wants the cache to always reflect the latest writes and also wants unused keys to be removed automatically to avoid wasting memory. What should the developer implement to meet these goals?
- ❏ A. Use a write-through caching strategy
- ❏ B. Use a lazy loading caching strategy
- ❏ C. Implement write-through in the application and configure TTL on ElastiCache keys
- ❏ D. Enable an allkeys-lru eviction policy and continue using lazy loading
 
A developer at Nimbus Robotics is building a CLI tool that uses the AWS SDK with long-term IAM user keys. The IAM policy enforces that every programmatic request must be authenticated with MFA using a six-digit code from a virtual device. Which STS API action should the developer call to obtain temporary credentials that satisfy the MFA requirement for these SDK requests?
- ❏ A. GetCallerIdentity
- ❏ B. GetSessionToken
- ❏ C. DecodeAuthorizationMessage
- ❏ D. GetFederationToken
 
A team at Northwind Fitness is building a RESTful API with Amazon API Gateway integrated to AWS Lambda. They need to run v1, v2, and a sandbox build at the same time so clients and testers can call specific versions using stable URLs. What is the most effective way to publish and manage these concurrent API versions?
- ❏ A. Use a Lambda authorizer to direct callers to the correct version
- ❏ B. Publish each version to its own API Gateway stage with distinct invoke URLs and use stage variables for backend context
- ❏ C. Rely on Lambda function versions and aliases behind a single API stage to represent API versions
- ❏ D. Attach an API Gateway resource policy to separate versions and pass context to Lambda
 
AWS Certification Sample Questions Answered
A digital mapping startup runs a serverless backend where geospatial tile generation and compute intensive calculations execute in AWS Lambda. After reviewing 30 days of Amazon CloudWatch metrics, the team observes increased function durations during CPU bound steps and wants to improve throughput without rearchitecting. What change should they make to best handle these compute intensive workloads?
- ✓ B. Increase the memory size configured for the Lambda functions to allocate more CPU
 
Increase the memory size configured for the Lambda functions to allocate more CPU is the correct choice because AWS Lambda assigns more CPU and other resources when you raise the function memory setting so each invocation gains more compute power and runs faster for CPU bound work.
Raising the memory setting increases the CPU share available to an invocation and often shortens the duration of CPU intensive steps without changing code or architecture. This makes it the most direct way to improve throughput for tile generation and heavy calculations when monitoring shows longer durations during CPU bound phases.
Enable provisioned concurrency for the functions targets cold start latency by keeping execution environments initialized, and it does not increase the CPU allocated to each invocation so it will not reduce compute time for CPU bound operations.
Invoke the functions asynchronously using an event source such as Amazon SQS can smooth spikes and decouple producers from consumers, and it helps with throughput management, but it does not make a single invocation finish faster because the per invocation CPU remains the same.
Move the compute-heavy tasks to AWS Fargate can provide finer control over CPU and memory and may suit long running or specialized workloads, but it requires rearchitecture and operational overhead so it is not the quickest improvement when simply increasing Lambda memory already boosts per invocation compute.
Remember that Lambda CPU scales with memory and increasing memory is the fastest way to speed up CPU bound functions without redesigning the application.
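As a rough illustration of the correct option, the boto3 sketch below raises the memory setting on an existing function. The function name and memory value are placeholders rather than values from the scenario.

```python
import boto3

lambda_client = boto3.client("lambda")

# Raising MemorySize also raises the CPU share for each invocation,
# which is what shortens CPU-bound durations without code changes.
response = lambda_client.update_function_configuration(
    FunctionName="tile-generation-fn",  # placeholder function name
    MemorySize=3008,                    # MB; Lambda allocates CPU in proportion to memory
)
print(response["MemorySize"], response["LastUpdateStatus"])
```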
A retail analytics startup named LumaRetail runs nearly all internal services on an AWS serverless stack and must deliver a new capability in under 10 days by reusing prebuilt serverless components instead of writing code from scratch. Which AWS service offers the most straightforward way to find and deploy ready-made serverless applications?
- ✓ B. AWS Serverless Application Repository (SAR)
 
AWS Serverless Application Repository (SAR) is the correct choice for this scenario because it provides a managed catalog of ready to deploy serverless applications that can be discovered and launched quickly without building packaging pipelines.
SAR hosts community and AWS published serverless components that you can deploy directly into your account or fork and customize. It removes the need to create CI pipelines for packaging and publishing when you must assemble functionality fast for a startup under a tight deadline.
AWS Service Catalog focuses on curated and internally approved products with governance and lifecycle controls and it is aimed at managing internal portfolios rather than providing a broad public repository of reusable serverless apps for rapid development.
AWS Proton helps platform teams define, provision, and manage standardized templates for infrastructure and services and it is designed for lifecycle and template management. It does not supply ready-made application packages that you can deploy out of the box.
AWS Marketplace lists third party software and managed services across many categories and it is not specifically focused on reusable serverless application blueprints in the way that SAR is. It is therefore a less direct fit for quickly assembling serverless features.
When you see prebuilt or ready made serverless applications choose the Serverless Application Repository. Remember that Service Catalog maps to internal portfolios and governance and Proton maps to platform templates and lifecycle management.
A media analytics startup runs an Amazon ElastiCache for Redis layer in front of an Amazon Aurora MySQL cluster. They want to lower costs by ensuring the cache only holds entries after clients request them at least once. Which caching approach should they use?
- ✓ B. Adopt a lazy-loading cache pattern
 
The correct choice is Adopt a lazy-loading cache pattern because it inserts items into the cache only after a read miss and ensures the cache holds data that clients actually request which helps control memory usage and cost.
With Adopt a lazy-loading cache pattern the application checks Redis on each read and only queries the Aurora MySQL cluster when the key is missing and then populates the cache after the database returns the value. This behavior prevents the cache from being filled with entries that are never requested and it directly reduces the memory footprint and costs for ElastiCache.
Set short TTLs for cached entries is insufficient because TTL only affects how long an item stays after it is cached and it does not stop unneeded keys from being created in the first place which can still waste memory and cause churn.
Switch to a write-through strategy is counter to the goal because it causes the cache to be populated on every database write and that can store items that clients never read which increases cache usage and cost.
Amazon RDS Proxy improves database connection management and pooling but it does not change how or when the cache is populated so it does not meet the requirement to only hold entries after clients request them at least once.
When a question specifies caching only after items are requested look for lazy loading rather than TTLs or write-through strategies and remember that RDS Proxy is about connections not cache population.
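A minimal lazy-loading sketch in Python is shown below, assuming the redis client library and a DB-API connection to the Aurora MySQL cluster. The cache endpoint, key format, and table are illustrative only.

```python
import json
import redis

cache = redis.Redis(host="my-cache.example.com", port=6379)  # placeholder endpoint
TTL_SECONDS = 300  # optional: age out keys that stop being requested

def get_report(report_id, db_connection):
    """Lazy loading: check Redis first and only query Aurora on a miss."""
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit, no database query

    # Cache miss: read from the database, then populate the cache.
    with db_connection.cursor() as cur:
        cur.execute("SELECT payload FROM reports WHERE id = %s", (report_id,))
        row = cur.fetchone()
    value = {"id": report_id, "payload": row[0] if row else None}
    cache.setex(key, TTL_SECONDS, json.dumps(value))
    return value
```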
You are helping Veridian Labs stand up infrastructure with the AWS Cloud Development Kit. You will scaffold a project, define constructs and stacks in code, generate templates, and roll them out to an AWS account. In what sequence should this CDK workflow be executed to follow best practice?
- ✓ C. Initialize the app with an AWS CDK template → Add stack code → Build the project (optional) → Synthesize the stacks → Deploy to the account
 
The correct sequence is Initialize the app with an AWS CDK template → Add stack code → Build the project (optional) → Synthesize the stacks → Deploy to the account. This option matches the intended CDK workflow of scaffolding a CDK app then authoring constructs and stacks before producing CloudFormation templates and applying them to an account.
This flow is correct because you start by creating a CDK project and then add stack code. The optional build step ensures compiled languages and generated assets are ready before synthesis. Synthesis produces CloudFormation templates and asset manifests which the deploy step consumes to create or update resources in the target account. Running synth before deploy helps catch definition errors and prevents deploying incomplete artifacts.
Initialize the app with an AWS CDK template → Add stack code → Synthesize the app → Deploy stacks → Build the project is incorrect because it places the build step after deployment. Building must happen before synth and deploy to ensure compiled code and assets are included in the generated templates.
Initialize the app with an AWS CloudFormation template → Add stack code → Build the project (optional) → Synthesize → Deploy is incorrect because you do not start a CDK project from a raw CloudFormation template. CDK apps are initialized with CDK project templates which include the app entry point and language scaffolding.
Initialize using an AWS CloudFormation template → Add stack code → Synthesize the stacks → Deploy → Build the project is incorrect for two reasons. It uses the wrong initializer and it defers build until after deployment which breaks the synthesis and deployment process.
Answer CDK ordering questions by remembering the sequence init then code then optional build then synth then deploy. Anything that builds after deploy or starts from CloudFormation is likely wrong.
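The Python sketch below shows the authoring step that sits between `cdk init` and `cdk synth`/`cdk deploy`, assuming the AWS CDK v2 libraries are installed. The stack and bucket names are made up for illustration.

```python
# After `cdk init app --language python` scaffolds the project, you author
# stacks like this one, then `cdk synth` emits the CloudFormation templates
# and `cdk deploy` applies them to the target account.
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class VeridianStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A single construct is enough to show the stack-authoring step.
        s3.Bucket(self, "UploadsBucket", versioned=True)

app = App()
VeridianStack(app, "VeridianStack")
app.synth()  # the step that `cdk synth` drives to produce the templates
```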
An engineer at Helios Analytics configures an Amazon ECS service with a task placement strategy that lists two entries in order: type “spread” on field “attribute:ecs.availability-zone” and then type “spread” on field “instanceId”. When the service schedules tasks, what behavior should be expected?
- ✓ D. It spreads tasks across Availability Zones, then distributes tasks evenly across instances within each zone
 
It spreads tasks across Availability Zones, then distributes tasks evenly across instances within each zone is correct.
ECS evaluates placement strategies in the order they are listed and applies the first rule then the next. A spread on attribute ecs.availability-zone balances tasks evenly across Availability Zones and a subsequent spread on instanceId balances tasks across container instances inside each zone so tasks end up distributed evenly across instances within each zone.
It spreads tasks across Availability Zones, then randomly assigns tasks to instances inside each zone is incorrect because the spread strategy creates even distribution across the chosen dimension rather than performing random placement.
It spreads tasks across Availability Zones, then packs tasks based on lowest available memory per zone is incorrect because packing by memory or CPU requires the binpack strategy rather than spread.
It spreads tasks across Availability Zones, then guarantees each task lands on a different instance within the zone is incorrect because guaranteeing one task per instance requires a placement constraint such as distinctInstance rather than using spread.
Read placement strategies left to right and remember that spread evens distribution while binpack consolidates and use placement constraints like distinctInstance when you need uniqueness.
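For reference, a boto3 sketch of a service created with the ordered spread strategies is shown below. The cluster, service, and task definition names are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="analytics-cluster",
    serviceName="telemetry-service",
    taskDefinition="telemetry-task:7",
    desiredCount=6,
    launchType="EC2",  # placement strategies apply to EC2-backed tasks
    placementStrategy=[
        # Evaluated in order: balance across Availability Zones first...
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        # ...then balance across container instances inside each zone.
        {"type": "spread", "field": "instanceId"},
    ],
)
```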
A development group at HarborPeak Studios needs engineers to access a member account within an AWS Organizations setup. The administrator of the management account wants to strictly limit which AWS services, specific resources, and API operations are usable in that member account. What should the administrator apply?
- ✓ B. Attach a Service Control Policy (SCP)
 
The correct option is Attach a Service Control Policy (SCP). This option lets the management account enforce account wide guardrails that limit which AWS services, specific resources, and API operations are available in a member account.
SCPs define the maximum permissions for identities in member accounts and are evaluated together with IAM policies. A deny in an SCP blocks access even when an IAM policy in the member account allows the action, and this behavior makes SCPs ideal for centrally enforcing organization wide restrictions across accounts and organizational units.
Organizational Unit is only a grouping mechanism that helps target policies to a set of accounts and it does not by itself enforce restrictions. You must attach an SCP to an OU or account for controls to take effect.
IAM permission boundary constrains the maximum permissions that a single principal can obtain within a single account and it cannot be centrally applied across accounts from the management account. That makes it unsuitable for organization wide guardrails.
Tag Policy helps standardize tagging conventions across accounts and it does not restrict which services, resources, or API operations can be used.
Remember that SCPs set the permission ceiling across member accounts and a deny in an SCP overrides an allow in an IAM policy.
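The sketch below creates and attaches an SCP from the management account with boto3. The allowed service list and the member account ID are placeholders for illustration, not a recommended policy.

```python
import json
import boto3

org = boto3.client("organizations")  # run with management-account credentials

# Example guardrail: deny everything except the listed services.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotAction": ["ec2:*", "s3:*", "dynamodb:*", "cloudwatch:*"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="LimitMemberAccountServices",
    Description="Restrict usable services in the engineering member account",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",  # member account ID (placeholder)
)
```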
Orion Outfitters processes about 240,000 online purchase records each day and stores them in an Amazon S3 bucket for auditing. Company policy mandates that every new object written to this bucket is encrypted at the time of upload. What should a developer implement to guarantee that only encrypted objects can be stored in the bucket?
- ✓ C. Attach an S3 bucket policy that rejects PutObject requests missing the x-amz-server-side-encryption header
 
Attach an S3 bucket policy that rejects PutObject requests missing the x-amz-server-side-encryption header is correct because it enforces encryption at upload and prevents any PutObject request that does not include the required server side encryption header from being accepted.
A deny bucket policy can require the request to include the header x-amz-server-side-encryption with values such as AES256 or aws:kms and it is evaluated during the request so unencrypted uploads are refused before S3 stores the object. This real time enforcement guarantees that only objects uploaded with server side encryption are accepted into the bucket.
Enable default bucket encryption using SSE-S3 on the Amazon S3 bucket configures a default encryption method for objects stored without a specified encryption option and it helps protect data at rest but it does not by itself require clients to send the encryption header and it does not block noncompliant uploads at request time.
Use the AWS Config managed rule s3-bucket-server-side-encryption-enabled to detect noncompliant uploads provides detection and optional automated remediation after an object exists and it cannot prevent a nonencrypted PutObject request from succeeding at upload time.
Create an Amazon CloudWatch alarm that alerts when unencrypted objects are added to the bucket only provides notification and visibility and it does not enforce encryption or stop uploads when the write occurs.
Enforce encryption at upload by using a deny bucket policy that requires the x-amz-server-side-encryption header so noncompliant PutObject requests are blocked in real time.
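A minimal bucket policy of this shape can be applied with boto3 as sketched below. The bucket name is a placeholder, and the condition simply denies PutObject when the encryption header is absent.

```python
import json
import boto3

BUCKET = "orion-purchase-audit"  # placeholder bucket name

# Deny any PutObject request that does not carry the
# x-amz-server-side-encryption header, so unencrypted uploads are rejected.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```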
A biotech analytics company runs Node.js microservices on Amazon ECS with Fargate. The AWS X-Ray daemon is deployed as a sidecar container listening on 10.0.2.25:3000 instead of the default address. To make sure the application’s X-Ray SDK discovers the daemon correctly, which environment variable should be configured in the task definition?
- ✓ B. AWS_XRAY_DAEMON_ADDRESS
 
The correct environment variable to configure is AWS_XRAY_DAEMON_ADDRESS. This variable tells the X Ray SDK where to send traces so it can discover the daemon running at a non default host and port such as 10.0.2.25 on port 3000.
When the SDK needs to emit segments it reads AWS_XRAY_DAEMON_ADDRESS to obtain the daemon host and port and this overrides the default of 127.0.0.1 on port 2000. Setting this variable in the task definition ensures the Node.js microservice sidecar pattern on ECS with Fargate routes trace traffic to the correct address.
AWS_XRAY_TRACING_NAME is not used to locate the daemon and only sets the service or tracing name that appears on segments.
AWS Distro for OpenTelemetry is a separate observability distribution and not an environment variable for the X Ray SDK so it does not control daemon discovery.
AWS_XRAY_CONTEXT_MISSING configures SDK behavior when no segment context is present and is typically set to values such as LOG_ERROR. It does not configure where the daemon runs or how the SDK connects to it.
When running a daemon as a sidecar set the AWS_XRAY_DAEMON_ADDRESS environment variable in the task definition to the daemon host and port so the SDK sends traces to the correct place.
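A trimmed task definition sketch is shown below. The image names, sizes, and ports are placeholders, and the only detail that matters here is the AWS_XRAY_DAEMON_ADDRESS environment entry on the application container.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="biotech-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/biotech-api:latest",
            "essential": True,
            "environment": [
                # Point the X-Ray SDK at the sidecar's non-default address.
                {"name": "AWS_XRAY_DAEMON_ADDRESS", "value": "10.0.2.25:3000"}
            ],
        },
        {
            "name": "xray-daemon",
            "image": "amazon/aws-xray-daemon",
            "essential": False,
            "portMappings": [{"containerPort": 3000, "protocol": "udp"}],
        },
    ],
)
```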
You are the lead engineer at Medora Health responsible for NoSQL-backed services. A new patient lookup API must sustain 18 strongly consistent reads per second, and each item is approximately 9 KB in size. How many read capacity units should be provisioned on the DynamoDB table to satisfy this requirement?
- ✓ C. 54
 
54 is correct because a 9 KB item consumes three read capacity units for a strongly consistent read and 18 such reads per second require 54 RCUs.
DynamoDB charges one RCU per 4 KB or fraction thereof for strongly consistent reads and a 9 KB item therefore rounds up to three 4 KB chunks. Multiplying three RCUs per read by 18 reads per second yields 54 RCUs so that is the amount to provision.
72 is incorrect because it assumes four RCUs per 9 KB read which overestimates the rounded up chunk count.
27 is incorrect because that value reflects halving RCUs for eventual consistency and that rule does not apply to strongly consistent reads.
18 is incorrect because it ignores item size and treats each read as one RCU regardless of kilobytes.
Round the item size up to 4 KB increments and then multiply by the read rate for strongly consistent reads to get required RCUs.
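The arithmetic can be checked in a few lines of Python.

```python
import math

ITEM_SIZE_KB = 9
READS_PER_SECOND = 18

# Strongly consistent reads cost 1 RCU per 4 KB (rounded up) per read.
rcu_per_read = math.ceil(ITEM_SIZE_KB / 4)        # ceil(9 / 4) = 3
required_rcus = rcu_per_read * READS_PER_SECOND   # 3 * 18 = 54
print(required_rcus)                              # 54

# Eventually consistent reads would need half as many, which is where 27 comes from.
print(required_rcus // 2)                         # 27
```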
A ride-sharing startup uses an Amazon SQS Standard queue to decouple trip events. A worker service polls the queue about every 3 seconds and cannot pick a specific message, only the maximum batch size for each ReceiveMessage request. What is the largest number of messages it can ask SQS to return in a single call?
- ✓ B. 10
 
The correct choice is 10. The ReceiveMessage API for Amazon SQS uses the MaxNumberOfMessages parameter which accepts values from 1 through 10 and so a single ReceiveMessage request can return at most 10 messages. Consumers cannot pick a specific message and they only control the maximum batch size per request.
The 10 limit is consistent with SQS batch operations which also use 10 entry limits for operations that process multiple messages at once. This makes batching predictable and it is the documented hard cap for a single API call.
The option 20 is incorrect because SQS does not return 20 messages in one ReceiveMessage call and the service enforces the maximum of 10.
The option 5 is incorrect because although requesting five messages is valid it is not the maximum you can ask for and you may request up to 10.
The option 25 is incorrect because it exceeds the documented service limit and SQS will not return that many messages in a single ReceiveMessage request.
On the exam remember to check SQS documented limits and that ReceiveMessage uses a fixed small batch size. Focus on the documented quotas when selecting numeric answers.
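A boto3 sketch of the worker's poll is shown below. The queue URL is a placeholder and the long-poll wait time is an illustrative choice.

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/trip-events"  # placeholder

# MaxNumberOfMessages accepts 1 through 10, so 10 is the most a single call can return.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=5,  # long polling reduces empty responses between polls
)
for message in response.get("Messages", []):
    print(message["MessageId"], message["Body"])
```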
A developer at Helios Travel is building a REST API with Amazon API Gateway and needs a Lambda authorizer for a custom authentication mechanism. The authorizer must read client-provided values from HTTP headers and query string parameters. Which approach should the developer choose to meet this requirement?
- ✓ C. REQUEST Lambda authorizer
 
The correct choice is REQUEST Lambda authorizer because it can read values from HTTP headers and query string parameters and it therefore meets the requirement.
The REQUEST Lambda authorizer receives the full request context so the Lambda function can inspect headers, query string parameters, stage variables and the $context object to perform custom authentication and authorization logic. This capability makes it suitable when the identity information is spread across headers and query parameters rather than provided as a single token.
TOKEN Lambda authorizer is not appropriate because it expects a single bearer token such as a JWT typically provided in an Authorization header and it does not parse multiple headers and query string parameters for identity.
Amazon Cognito user pool authorizer is not suitable because it validates JWTs issued by Cognito user pools and it does not implement a custom scheme that reads arbitrary headers and query parameters.
IAM authorization is incorrect because it relies on SigV4 signing and IAM credentials and policies rather than custom request parameter values for authentication.
When a question requires reading headers or query strings choose a REQUEST Lambda authorizer and when a question requires a single bearer token choose a TOKEN authorizer.
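A minimal REQUEST authorizer handler might look like the sketch below. The header name, query parameter, and token check are assumptions standing in for the custom scheme.

```python
def lambda_handler(event, context):
    # A REQUEST authorizer receives the full request context, so it can
    # combine values from headers and query string parameters.
    headers = event.get("headers") or {}
    params = event.get("queryStringParameters") or {}

    api_key = headers.get("x-client-key")     # hypothetical header
    tenant = params.get("tenant")             # hypothetical query parameter
    allowed = api_key == "expected-demo-key" and tenant is not None

    return {
        "principalId": tenant or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow" if allowed else "Deny",
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```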
A developer at a logistics startup needs an Auto Scaling group of Amazon EC2 instances to scale automatically based on the average memory utilization across the fleet. The team wants a supported and maintainable approach that makes scaling decisions from a memory metric. What should they implement?
- ✓ C. Publish a custom CloudWatch metric for memory from each instance using PutMetricData and create a scaling alarm on that metric
 
Publish a custom CloudWatch metric for memory from each instance using PutMetricData and create a scaling alarm on that metric is the correct choice because EC2 does not emit memory utilization by default and you must supply a memory metric to drive scaling decisions.
You implement Publish a custom CloudWatch metric for memory from each instance using PutMetricData and create a scaling alarm on that metric by having each instance publish memory usage into a common namespace and dimensions. You can use the CloudWatch agent or a small script that calls PutMetricData from each instance, then create an aggregated alarm on the metric and attach that alarm to an Auto Scaling policy so the group scales on the fleet memory average.
Enable detailed monitoring for EC2 and the Auto Scaling group, then create a CloudWatch alarm on memory utilization is incorrect because detailed monitoring only increases the frequency of default metrics and it does not add memory metrics for EC2.
Use AWS Auto Scaling target tracking with the predefined metric ASGAverageCPUUtilization to approximate memory needs is incorrect because CPU metrics do not reliably indicate memory pressure and using CPU as a proxy can cause inappropriate scaling when memory is the limiting resource.
AWS Lambda is incorrect because Lambda is a serverless execution service and it does not provide the per instance memory metrics or direct Auto Scaling integration needed to make memory driven scaling decisions for EC2 instances.
Do not assume EC2 publishes memory metrics by default. Publish a custom CloudWatch metric from each instance using the CloudWatch agent or PutMetricData and base your scaling policy on that metric.
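A per-instance publisher could look like the sketch below. The psutil library, the namespace, and the Auto Scaling group name are assumptions, and the CloudWatch agent can publish an equivalent metric without custom code.

```python
import boto3
import psutil  # assumption: installed on each instance to read memory usage

cloudwatch = boto3.client("cloudwatch")

# Every instance publishes into the same metric (same namespace, name, and
# dimension), so an alarm on the Average statistic sees the fleet-wide value.
cloudwatch.put_metric_data(
    Namespace="Custom/Fleet",
    MetricData=[
        {
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": "logistics-asg"}],
            "Value": psutil.virtual_memory().percent,
            "Unit": "Percent",
        }
    ],
)
```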
Riverton Health Tech exposes public REST endpoints through Amazon API Gateway, and stage caching is enabled with a 180-second TTL to reduce backend load. Several partners want a self-service way to force a fresh response for a single call without changing the API or requiring the provider to clear the entire cache. How can callers trigger a one-time cache refresh on demand while keeping the existing API unchanged?
- ✓ C. Instruct clients to send the HTTP header Cache-Control: max-age=0 with the request
 
The correct choice is Instruct clients to send the HTTP header Cache-Control: max-age=0 with the request.
When a caller includes Cache-Control: max-age=0 API Gateway treats that request as requiring a fresh response so it forwards the request to the integration and returns the new response. API Gateway then updates the cached value for that cache key with the fresh response so subsequent requests receive the updated data until the configured TTL elapses.
Tell clients to invoke a special AWS-managed endpoint that clears the API Gateway cache is wrong because API Gateway does not expose a customer facing invalidation endpoint to clear individual cached entries on demand.
Have clients append a query string parameter named REFRESH_CACHE to each request is incorrect because API Gateway will not automatically use a custom query parameter as a cache invalidation signal unless you change the API or the cache key configuration to include that parameter.
Give clients IAM credentials so they can call the API Gateway FlushStageCache action is not appropriate because FlushStageCache is an administrative action that clears the entire stage cache and granting those privileges to external callers is insecure and unnecessary for a single request refresh.
Use Cache-Control: max-age=0 to force a one time refresh without changing the API or granting admin rights. Verify the behavior with a test client and confirm which cache key fields are used before rolling out to partners.
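From the caller's side the refresh is just an extra header, as in this Python sketch. The endpoint URL is a placeholder and the requests library is assumed.

```python
import requests  # assumption: the partner client is written in Python

API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/schedules"  # placeholder

# A normal call is served from the stage cache until the 180-second TTL expires.
cached = requests.get(API_URL)

# Adding Cache-Control: max-age=0 forces API Gateway to fetch a fresh response
# for this one request and refresh the cached entry for that cache key.
fresh = requests.get(API_URL, headers={"Cache-Control": "max-age=0"})
print(cached.status_code, fresh.status_code)
```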
A media startup named AuroraStream runs a Python-based AWS Lambda function that generates a temporary file during execution and needs to store it in an Amazon S3 bucket. The team wants to add the upload step while making the fewest possible edits to the current codebase. Which approach is the most appropriate?
- ✓ C. Use the AWS SDK for Python that is preinstalled in the Lambda Python runtime
 
The correct option is Use the AWS SDK for Python that is preinstalled in the Lambda Python runtime. This lets the function import boto3 and call Amazon S3 APIs without changing the deployment package or adding new components.
The Python Lambda runtimes include boto3 and botocore so you can write the temporary file to the Lambda /tmp directory and then upload it to S3 using boto3 methods such as upload_file or put_object. This meets the requirement for the fewest possible edits because you only add a few lines that call the existing SDK and you do not modify build or deployment workflows.
Bundle the AWS SDK for Python with the Lambda deployment package is not ideal because it requires changing the build and packaging process and increases deployment size. This approach conflicts with the goal of making minimal edits to the current codebase.
Attach a Lambda layer that provides the AWS SDK for Python is unnecessary when the runtime already supplies boto3 and it adds operational steps to manage and publish a layer. A layer is useful when you need a specific or locked SDK version but it is not the minimal-change solution.
Use the AWS CLI in the Lambda execution environment is not feasible in the managed runtime because the AWS CLI is not provided by default and installing it would require creating a layer or using a custom runtime or container image. That would contradict the requirement to make the fewest possible edits.
When a question asks for the least change prefer using libraries already present in the managed runtime and only package or layer dependencies if you need a specific version or feature.
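The added upload step can stay within a few lines of the existing handler, as sketched below. The bucket name, key, and file contents are placeholders.

```python
import boto3  # boto3 is preinstalled in the Lambda Python runtimes

s3 = boto3.client("s3")

def lambda_handler(event, context):
    temp_path = "/tmp/summary.csv"  # /tmp is the writable scratch space in Lambda
    with open(temp_path, "w") as handle:
        handle.write("order_id,total\n1001,42.50\n")

    # Upload the temporary file without adding any new dependencies.
    s3.upload_file(temp_path, "aurorastream-exports", "summaries/summary.csv")
    return {"uploaded": True}
```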
A software engineer at BlueHarbor Analytics is setting up a CI/CD workflow and needs to automatically deploy application packages to Amazon EC2 instances and to about 14 on-premises virtual machines in the company data center. Which AWS service provides a managed way to orchestrate these deployments to both environments?
- ✓ C. AWS CodeDeploy
 
The correct choice is AWS CodeDeploy. AWS CodeDeploy is the managed service designed to orchestrate application deployments to Amazon EC2 instances and to on‑premises servers or virtual machines by using a CodeDeploy agent on target hosts.
AWS CodeDeploy supports in place and blue/green deployment strategies and it provides health checks and automated rollback capabilities so releases can be coordinated across cloud and data center environments. It integrates with CI stages and artifact producers so you can trigger deployments from a pipeline and manage lifecycle hooks and deployment groups for staged rollouts.
AWS CodeBuild focuses on building and testing source code and producing artifacts and it does not coordinate deployments across fleets of servers.
AWS CodePipeline is a CI CD orchestrator that wires stages together and invokes deployment providers such as AWS CodeDeploy to perform the actual rollout so by itself it does not deploy to EC2 or on‑prem targets.
AWS Systems Manager can run commands and distribute packages and it can manage configuration but it is not purpose built for application release orchestration with built in lifecycle hooks and automated rollback in the same way that AWS CodeDeploy is. For unified application deployments to EC2 and on‑prem hosts, CodeDeploy is the preferred managed option.
When the exam scenario requires deploying the same application to EC2 and on‑prem machines remember to pick CodeDeploy for deployment, use CodeBuild to produce artifacts and use CodePipeline to orchestrate the stages.
StoneRiver Media uses a GitLab CI/CD pipeline and needs one automated process to deploy the same application package to about 30 Amazon EC2 instances and to a group of on-premises VMware virtual machines in its data center. Which AWS service should the developer choose to perform these deployments across both environments?
- ✓ C. AWS CodeDeploy
 
AWS CodeDeploy is the correct choice because it supports automated deployments to both Amazon EC2 instances and on premises VMware virtual machines by using an agent on each target, which lets you run the same deployment process across hybrid environments.
AWS CodeDeploy installs a lightweight agent on EC2 instances and on premises servers and orchestrates application updates, which lets a single deployment pipeline push the same package to all targets and handle lifecycle hooks and rollbacks as needed.
AWS CodeBuild is incorrect because it is a build and test service and it does not perform deployments to servers or on premises systems.
AWS Elastic Beanstalk is incorrect because it manages application environments inside AWS and does not provide native deployment support to on premises VMware hosts.
AWS CodePipeline is incorrect because it is a release orchestration service and it relies on a deployment provider such as AWS CodeDeploy to actually push artifacts to EC2 or on premises targets.
When a question asks about deploying the same package to EC2 and on premises think hybrid deployments and choose the service that uses an agent on targets to perform the install.
A biotech analytics firm needs to encrypt approximately 80 TB of research data, and its compliance policy mandates that the data encryption keys be created inside customer-controlled, single-tenant, tamper-resistant hardware. Which AWS service should the team use to satisfy this requirement?
- ✓ C. AWS CloudHSM
 
AWS CloudHSM is correct because it provides dedicated, single-tenant hardware security modules that let you generate and store encryption keys inside customer-controlled, tamper-resistant hardware to meet strict compliance requirements.
AWS CloudHSM gives you HSM appliances that are dedicated to your account and under your management, and the devices are FIPS validated and designed to resist tampering so keys are created and protected inside hardware you control. This approach aligns with policies that mandate customer-controlled, single-tenant key generation and is suitable for large volumes of encrypted research data.
AWS KMS is not ideal for this requirement because KMS uses AWS managed, multi-tenant HSMs by default and does not by itself provide customer-dedicated hardware. You can integrate KMS with a custom key store backed by CloudHSM to get dedicated HSMs but KMS alone does not satisfy a strict single-tenant, tamper-resistant hardware mandate.
AWS Certificate Manager is focused on provisioning and managing TLS certificates and it does not generate or store general data encryption keys in dedicated HSM hardware so it does not meet the compliance requirement.
AWS Nitro Enclaves provides isolated compute environments for secure processing and it can help protect secrets in memory while processing, but it is not an HSM service and it does not offer tamper-resistant, single-tenant hardware for key generation and storage so it cannot fulfill the stated requirement.
When a policy calls for customer-controlled, single-tenant, tamper-resistant key generation choose a dedicated HSM solution and when the exam describes managed, shared HSMs think of managed key services.
A product team at a digital publishing startup is writing an AWS CloudFormation template to launch an EC2 based content service. The stack will be rolled out in several Regions, and a Mappings section links each Region and platform key to the correct base AMI. Which YAML syntax should be used to call FindInMap to retrieve the AMI from this two level mapping?
- ✓ B. Fn::FindInMap: [ MapName, TopLevelKey, SecondLevelKey ]
 
The correct syntax for a two level mapping is Fn::FindInMap: [ MapName, TopLevelKey, SecondLevelKey ] and this long form explicitly names the intrinsic function while providing the mapping name and the two required keys.
This long form requires three elements, which are the mapping name, a top level key such as a Region, and a second level key such as an architecture or platform label. You can use the YAML short alias in templates but the function still needs exactly three inputs to resolve an AMI from a two level mapping.
!FindInMap [ MapName, TopLevelKey ] is incomplete because it omits the second level key and CloudFormation cannot resolve the AMI entry from the mapping.
!FindInMap [ MapName ] is invalid because it lacks both required keys and cannot address a nested mapping entry.
!FindInMap [ MapName, TopLevelKey, SecondLevelKey, ThirdLevelKey ] is wrong because FindInMap supports two level mappings and does not accept an extra key.
Remember to supply exactly three inputs to FindInMap for a two level mapping and use AWS::Region as the common top level key when mapping AMIs by Region.
Northline Auto runs a nationwide appointment portal where customers book service slots at multiple garages. Bookings are stored in an Amazon DynamoDB table named ServiceBookings with DynamoDB Streams enabled. An Amazon EventBridge schedule triggers an AWS Lambda function every 30 hours to read the stream and export a summary to an Amazon S3 bucket. The team observes many updates never make it to S3 even though there are no logged errors. What change should they make to ensure all updates are captured?
-  
✓ B. Reduce the EventBridge schedule so the Lambda runs at least every 24 hours
 
Reduce the EventBridge schedule so the Lambda runs at least every 24 hours is correct because DynamoDB Streams retains change records for a maximum of 24 hours and a 30 hour schedule allows many records to expire before the Lambda reads them.
Because Streams only keep records for 24 hours, the scheduled Lambda must run within that retention window to consume all changes. Running the processor at least once every 24 hours ensures records are not trimmed before export to S3. You can also consider using an event source mapping for near real time processing to reduce latency and improve reliability.
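As an illustration of the event source mapping alternative, the sketch below wires the ServiceBookings stream directly to the export function with boto3. The stream ARN and function name are hypothetical placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Let Lambda poll the stream continuously instead of relying on a 30 hour schedule
# (the stream ARN and function name below are hypothetical placeholders)
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/ServiceBookings/stream/2025-01-01T00:00:00.000",
    FunctionName="export-bookings-to-s3",
    StartingPosition="TRIM_HORIZON",
    BatchSize=100,
)
```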
Set the DynamoDB Streams StreamViewType to NEW_IMAGE is incorrect because StreamViewType only controls whether new or old images are included in each record and it does not affect retention or delivery timing.
Increase the schedule so the Lambda runs every 48 hours is incorrect because lengthening the interval makes more records expire before the consumer runs and it increases the chance of missing updates.
Configure DynamoDB Streams to retain records for 72 hours is incorrect because DynamoDB Streams retention is fixed at 24 hours and cannot be extended to 72 hours.
Keep scheduled consumers inside the 24 hour DynamoDB Streams retention window or use an event source mapping for near real time processing.
A company runs a serverless API with Amazon API Gateway in front of AWS Lambda functions and uses a Lambda authorizer for authentication. Following a production incident, a developer needs to trace each client request from the API Gateway entry point through the downstream Lambda and any invoked services to identify latency and errors. Which AWS service should the developer use to achieve end-to-end request tracing and analysis across these components?
-  
✓ B. AWS X-Ray
 
The correct option is AWS X-Ray. It provides distributed tracing that follows a request from Amazon API Gateway through AWS Lambda and into any downstream services so you can identify latency and errors across the full request path.
AWS X-Ray records segments and subsegments for each service and builds a service map that shows latency breakdowns and error counts. You can enable tracing on API Gateway and on Lambda so traces include the API entry point, the authorizer, the Lambda execution, and any calls the function makes to other AWS services or HTTP endpoints. The visual traces and analytics make it straightforward to isolate slow components and to inspect exceptions and stack traces per request.
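As a minimal sketch, the calls below turn on active tracing for a Lambda function and an existing API Gateway stage with boto3; the function name, API id, and stage name are hypothetical.

```python
import boto3

# Enable active tracing on the Lambda function (name is hypothetical)
boto3.client("lambda").update_function_configuration(
    FunctionName="orders-api-handler",
    TracingConfig={"Mode": "Active"},
)

# Enable X-Ray tracing on the API Gateway stage (API id and stage are hypothetical)
boto3.client("apigateway").update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[{"op": "replace", "path": "/tracingEnabled", "value": "true"}],
)
```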
Amazon CloudWatch is useful for metrics, logs, dashboards, and alarms, but it does not provide an end to end distributed trace of a single request across multiple managed services. You should use CloudWatch for logs and metrics and pair it with tracing for request level analysis.
VPC Flow Logs records network flow metadata at the ENI level and helps with IP traffic analysis and connectivity checks. It does not capture application layer traces or the internal function calls that X-Ray shows.
Amazon Inspector is a vulnerability assessment service that finds security issues and unintended exposure. It does not provide request tracing or latency analysis and is not suitable for debugging runtime performance problems.
Enable tracing on both API Gateway and Lambda before a test so X-Ray can produce end to end traces and a service map during troubleshooting.
BlueOrbit, a retail startup, exposes roughly 12 HTTPS endpoints through Amazon API Gateway that trigger a Lambda backend. The team needs to secure these routes using an access control mechanism that API Gateway supports natively. Which option is not supported by API Gateway for request authorization?
-  
✓ B. AWS Security Token Service (STS)
 
The correct choice is AWS Security Token Service (STS). API Gateway does not accept STS as a direct authorizer because STS issues temporary credentials that clients use to sign requests rather than acting as an authorization mechanism attached to routes or methods.
AWS Security Token Service (STS) is a credential service that returns temporary AWS credentials and it is useful for granting time limited access to AWS resources. API Gateway can receive requests signed with those temporary credentials but it cannot be configured to call STS itself as an authorizer on a method.
IAM authorization with Signature Version 4 is supported. API Gateway can accept SigV4 signed requests and evaluate them against IAM policies to permit or deny access.
Lambda authorizer is supported. API Gateway can invoke a Lambda function to run custom authorization logic and return allow or deny decisions for requests.
Amazon Cognito user pools is supported. API Gateway can validate JWTs issued by Cognito user pools and use those tokens to authorize access to routes.
Remember that API Gateway authorizers are IAM SigV4, Lambda authorizers, and Cognito user pools and that AWS STS issues temporary credentials but is not configured as an authorizer.
A photo-sharing startup, Northwind Lens, is moving its application to AWS. Today, the app saves user-uploaded media into a local folder on the server. The company will run the service in an Auto Scaling group that can grow to 18 instances across three Availability Zones. Every upload must persist and be immediately accessible to any existing or new instance in the fleet with minimal operational overhead. What should the developer implement?
-  
✓ C. Refactor the application to write and read uploads from a single Amazon S3 bucket
 
Refactor the application to write and read uploads from a single Amazon S3 bucket is correct because Amazon S3 provides durable, highly available object storage that is shared across Availability Zones and is immediately accessible to any instance in the Auto Scaling group.
Refactor the application to write and read uploads from a single Amazon S3 bucket removes the need to manage replication or custom synchronization logic, and it scales automatically as the fleet grows. Using S3 also lets you leverage IAM for access control and presigned URLs or direct uploads to reduce server load while ensuring uploaded objects are available to all instances right away.
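For example, a minimal presigned URL sketch with boto3 lets clients upload directly to the shared bucket; the bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Short-lived URL the client can use to PUT the file straight into the shared bucket
# (bucket and key names are hypothetical)
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "northwind-lens-media", "Key": "uploads/photo-123.jpg"},
    ExpiresIn=900,  # 15 minutes
)
print(upload_url)
```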
Use Amazon EBS volumes and bake a golden AMI from a snapshot so new instances start with the same files is incorrect because snapshots are point in time and will not include uploads that happen after the snapshot. That approach leads to instances becoming out of sync soon after launch.
Configure Amazon EBS Multi-Attach to mount a single volume to all instances in the Auto Scaling group is incorrect since Multi-Attach is limited to specific volume types, only works within the same Availability Zone, and requires a cluster aware file system to avoid data corruption. Those constraints do not suit a multi AZ Auto Scaling fleet.
Store files on instance store and replicate to other nodes using file synchronization tooling is incorrect because instance store is ephemeral and data is lost when an instance stops or terminates. Relying on custom replication adds operational complexity and a higher risk of data loss.
Keep files on an attached Amazon EBS volume and synchronize changes between instances with custom scripts is incorrect because EBS is not a shared filesystem across instances and ad hoc synchronization is brittle, hard to scale, and prone to consistency problems.
For shared user uploads choose Amazon S3 and have the application put and get objects directly so files are durable, multi AZ, and immediately accessible to all instances.
PixelWave Studios runs an event-driven application on AWS Lambda behind Amazon API Gateway with tracing enabled for AWS X-Ray across every function. The team reports that nearly all requests are being traced and their X-Ray costs have surged. They still need to track latency and error patterns but want to lower spend with minimal changes to the workload. What should the developer do?
-  
✓ C. Configure AWS X-Ray sampling rules to capture a representative subset of requests
 
Configure AWS X-Ray sampling rules to capture a representative subset of requests is the correct choice because it reduces the number of traces sent to X-Ray while preserving the ability to observe latency and error trends.
Using sampling rules trims the volume of segments and traces that are ingested and billed. The sampling approach can be configured centrally and tuned to provide a representative subset of traffic so latency and error patterns remain visible with minimal disruption to the workload.
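As a rough sketch, a sampling rule like the one below keeps one trace per second as a baseline plus about five percent of additional requests; the rule name and wildcard matchers are illustrative assumptions.

```python
import boto3

xray = boto3.client("xray")

# Trace 1 request per second plus roughly 5% of any additional traffic
# (rule name and wildcard matchers below are illustrative)
xray.create_sampling_rule(
    SamplingRule={
        "RuleName": "api-baseline",
        "Priority": 100,
        "FixedRate": 0.05,
        "ReservoirSize": 1,
        "ServiceName": "*",
        "ServiceType": "*",
        "Host": "*",
        "HTTPMethod": "*",
        "URLPath": "*",
        "ResourceARN": "*",
        "Version": 1,
    }
)
```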
Use X-Ray filter expressions in the console to narrow displayed traces only changes which traces you view in the console and does not reduce the traces that are collected or billed.
Turn off X-Ray tracing and rely on Amazon CloudWatch metrics and logs would remove end to end trace context and reduce the ability to analyze distributed latency and errors, so it does not meet the requirement to keep tracing trends with minimal change.
Reduce AWS X-Ray trace retention to 14 days is not a mechanism to lower ingestion charges and changing retention does not reduce the volume of traces sent to the service.
When X-Ray costs rise, prefer sampling to lower ingestion while keeping representative traces for latency and error analysis.
A software engineer at Aurora Retail is packaging a web app for AWS Elastic Beanstalk and must apply custom load balancer settings by adding a file named lb-settings.config. Where in the application source bundle should this file be placed so Elastic Beanstalk reads it during deployment?
-  
✓ C. In the .ebextensions folder at the root of the source bundle
 
In the .ebextensions folder at the root of the source bundle is correct because Elastic Beanstalk reads and applies .config option files that are placed in that directory during deployment.
Elastic Beanstalk scans the root .ebextensions directory for files ending in .config and uses those files to set environment options and create or modify resources. Placing your lb-settings.config under .ebextensions ensures the platform processes the load balancer settings when it provisions the environment.
Inside the .platform/hooks directory is incorrect because the .platform/hooks area is intended for lifecycle scripts and platform customization and it does not serve as the location Elastic Beanstalk scans for option .config files.
In a bin directory within the project is incorrect because Elastic Beanstalk does not read deployment option files from arbitrary project folders such as a bin folder.
Directly in the top-level of the source bundle is incorrect because option .config files must live under .ebextensions at the bundle root to be recognized and applied by Elastic Beanstalk.
Place any option files with a .config extension inside .ebextensions at the bundle root so Elastic Beanstalk will apply them during deployment.
An engineer at Nova Courier is modeling a parcel-tracking workflow using AWS Step Functions. Several states are defined in the Amazon States Language. Which state definition represents a single unit of work executed by the state machine?
-  
✓ C. "ProcessParcel": { "Type": "Task", "Resource": "arn:aws:lambda:us-west-2:210987654321:function:ProcessParcelFn", "Next": "AfterProcess", "Comment": "Invoke the parcel processor Lambda" }
 
"ProcessParcel": { "Type": "Task", "Resource": "arn:aws:lambda:us-west-2:210987654321:function:ProcessParcelFn", "Next": "AfterProcess", "Comment": "Invoke the parcel processor Lambda" } is correct because it defines a Task state which represents a single unit of work executed by the state machine.
ProcessParcel is a Task state that invokes a Lambda function through the Resource field and this is how Step Functions hands off processing to external compute. Task states perform work by calling Lambda functions, activities, or supported service integrations and they are the appropriate choice when the state must execute a unit of work.
"pause_for_event": { "Type": "Wait", "Timestamp": "2021-10-05T08:30:00Z", "Next": "Continue" } is not correct because a Wait state only delays execution until a specified time or condition and it does not perform work.
"StashPayload": { "Type": "Pass", "Result": { "status": "ok", "attempt": 3 }, "ResultPath": "$.meta", "Next": "Finalize" } is not correct because a Pass state only injects or reshapes data within the state machine and it does not invoke external compute.
"Abort": { "Type": "Fail", "Cause": "Invalid status", "Error": "StateError" } is not correct because a Fail state simply ends the execution with an error and it does not execute any work.
Scan the Type field to identify the state role and pick Task when the question asks for a unit of work.
Riverbend Retail hosts an application on AWS Elastic Beanstalk using the Java 8 platform. A new build requires upgrading to the Java 11 platform. All traffic must shift to the new release at once, and the team must be able to revert quickly if problems occur. What is the most appropriate way to perform this platform upgrade?
-  
✓ B. Perform a Blue/Green deployment and swap environment URLs
 
The correct option is Perform a Blue/Green deployment and swap environment URLs. This approach lets you build a separate Elastic Beanstalk environment on Java 11 validate it and then swap the environment CNAMEs so all traffic moves to the new release at once while keeping the old environment available for an immediate rollback.
A Blue/Green deployment isolates the platform upgrade so you can run acceptance tests, health checks, and any required smoke tests without impacting users. After validation you swap CNAMEs for an instant cutover, and you can reverse the change just as quickly by swapping back. This meets the requirement to shift all traffic at once and to revert rapidly if problems occur.
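For reference, the cutover itself is a single API call once the Java 11 environment is healthy; the environment names below are hypothetical, and running the same call again rolls traffic back.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Swap CNAMEs so all traffic moves to the Java 11 environment at once
# (environment names are hypothetical); repeat the call to roll back
eb.swap_environment_cnames(
    SourceEnvironmentName="riverbend-java8-prod",
    DestinationEnvironmentName="riverbend-java11-prod",
)
```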
Update the Elastic Beanstalk environment to the Java 11 platform version performs an in place platform change which can cause brief unavailability and offers limited rollback safety for a major runtime change. It is better suited to minor platform patches than to switching major runtime versions.
Use an immutable deployment in Elastic Beanstalk provisions new instances inside the same environment and is intended for safe application version deployments on the existing platform. It does not create a separate runtime environment so it is not the recommended method for upgrading from Java 8 to Java 11.
Use Elastic Beanstalk traffic splitting for the deployment routes a percentage of traffic to the new version for canary style rollouts and is therefore unsuitable when you must move all traffic at once. Traffic splitting delays full cutover and complicates the requirement for an immediate switch.
For major runtime upgrades favor Blue/Green with a CNAME swap so you get an instant cutover and a simple rollback path.
MetroSight Analytics operates a live transit telemetry pipeline that ingests events into Amazon Kinesis Data Streams, and a fleet of consumers on Amazon ECS processes the records. During rush hours the producers spike from about 2,000 to 7,500 records per second, so the team routinely changes the shard layout to match demand. Which statements about Kinesis resharding should the team rely on when planning capacity and protecting data integrity? (Choose 2)
-  
✓ B. Splitting shards increases the stream’s ingest and read capacity
 -  
✓ E. Merging shards reduces the stream’s capacity and cost
 
The correct statements are Splitting shards increases the stream’s ingest and read capacity and Merging shards reduces the stream’s capacity and cost.
Splitting shards increases the stream’s ingest and read capacity is right because each new shard adds to the stream’s aggregate write and read throughput, so you scale out by adding shards. Merging shards reduces the stream’s capacity and cost is right because merging reduces the number of shards, which lowers the stream’s provisioned throughput and cost when demand is lower.
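In practice the team can reshard with a single UpdateShardCount call rather than issuing individual splits and merges; the stream name and shard count below are hypothetical.

```python
import boto3

kinesis = boto3.client("kinesis")

# Scale out before rush hour; call again with a lower count to merge back down
# (stream name and target count are hypothetical)
kinesis.update_shard_count(
    StreamName="transit-telemetry",
    TargetShardCount=8,
    ScalingType="UNIFORM_SCALING",
)
```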
To reduce overall capacity, you should split underutilized cold shards is incorrect because splitting increases capacity and it is merging that you should perform to reduce capacity and cost when shards are underutilized.
During resharding, data from parent shards is discarded is incorrect because parent shards remain readable until they are closed and records are retained according to the stream retention period so resharding does not drop existing data immediately.
Amazon Kinesis Data Firehose can automatically split and merge shards in a Kinesis Data Stream is incorrect because Firehose does not manage shard counts on Kinesis Data Streams and resharding is performed on the stream itself.
Watch per-shard write and read metrics and scale by splitting to increase throughput and by merging to lower cost. Remember that resharding does not immediately discard existing records while parent shards remain readable within retention.
A developer at BlueLedger, a fintech startup, is updating an AWS Lambda function that invokes a third-party GraphQL API and must now persist results in an Amazon RDS database within a VPC. The function has been placed in two private subnets with a security group and can connect to the database, but during testing it can no longer reach the external GraphQL endpoint. What should the developer change to restore outbound internet access from the function while maintaining database connectivity? (Choose 2)
-  
✓ B. Verify the Lambda function’s security group allows outbound traffic to 0.0.0.0/0
 -  
✓ C. Create a NAT gateway in the VPC and route 0.0.0.0/0 from the Lambda private subnets to the NAT gateway
 
Verify the Lambda function’s security group allows outbound traffic to 0.0.0.0/0 and Create a NAT gateway in the VPC and route 0.0.0.0/0 from the Lambda private subnets to the NAT gateway are correct because the function is running in private subnets and requires both a routed egress path and permissive security group egress to reach external endpoints while still connecting to the RDS database.
When a Lambda function is placed in private subnets, the ENIs receive only private IP addresses and cannot reach the internet directly. A NAT gateway in a public subnet, with a 0.0.0.0/0 route from the Lambda private subnets pointing to it, provides a managed translation hop so outbound requests to the third party GraphQL API succeed without exposing the function to inbound internet traffic.
A restrictive security group egress rule can still block outbound connections even with a NAT gateway in place, so you must also verify that the Lambda function’s security group allows outbound traffic to 0.0.0.0/0 or at least to the specific destination ports and IP ranges required by the GraphQL endpoint.
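A minimal sketch of the networking change with boto3 is shown below; every resource id is a hypothetical placeholder, and the sketch assumes the public subnet already routes to an internet gateway.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the NAT gateway in a public subnet (ids below are hypothetical placeholders)
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0publicaaa111",
    AllocationId="eipalloc-0abc1234567890",  # Elastic IP for the NAT gateway
)["NatGateway"]

# Send internet-bound traffic from the Lambda private subnets through the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0private1111",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)
```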
Attach an internet gateway to the VPC and run the function in a public subnet is incorrect because Lambda ENIs created for functions in a VPC do not receive public IP addresses so moving them to a public subnet with an internet gateway will not, by itself, enable outbound internet access.
Submit a request to increase the function’s reserved concurrency limit is incorrect because reserved concurrency controls how many concurrent executions are allowed and it does not affect network routing or internet connectivity.
Create an interface VPC endpoint for the external GraphQL API is incorrect because interface endpoints support specific AWS services and PrivateLink integrated services and cannot be used to reach arbitrary public internet endpoints unless the API provider offers a PrivateLink endpoint.
Keep Lambda in private subnets for DB connectivity but ensure a NAT gateway is reachable from the subnet route table and confirm the security group egress allows the required outbound traffic such as 0.0.0.0/0 or specific API ranges.
Orion Logistics is moving its internal platforms to AWS and needs a fully managed private PKI built from native services. The solution must allow IAM policy controls, capture API activity in AWS CloudTrail for auditing, issue private X.509 certificates, and support a hierarchy with subordinate certificate authorities. Which AWS offering best meets these needs?
-  
✓ B. AWS Private Certificate Authority
 
The correct choice is AWS Private Certificate Authority. It is a managed private CA that integrates with IAM for access control and records API activity in AWS CloudTrail for auditing. It issues private X.509 certificates and supports hierarchical trust with subordinate certificate authorities.
AWS Private Certificate Authority is a fully managed service that removes the operational burden of running your own PKI. It lets you use IAM policies to control who can create and manage CAs and certificates. It also emits CloudTrail events for API calls so you can audit certificate issuance and configuration changes.
AWS Certificate Manager is not sufficient by itself because it provisions and manages certificates for AWS services and integrated endpoints and it does not act as a private CA or create subordinate CAs without the private CA service.
AWS Key Management Service manages cryptographic keys and supports signing and encryption operations and it does not issue X.509 certificates or provide CA lifecycle and hierarchy features.
AWS Secrets Manager is designed to store and rotate application secrets and credentials and it is not a PKI or certificate issuance service.
When requirements include private certificates, subordinate CAs, IAM controls and CloudTrail auditing focus on AWS Private Certificate Authority rather than on certificate management or key storage services.
At Horizon HealthTech, an engineer is experimenting with Amazon SQS in a sandbox account. After running tests, they need to remove a test queue so the queue itself and any messages it contains are permanently gone. Which SQS API action should they use?
-  
✓ C. DeleteQueue
 
The correct action is DeleteQueue. This API deletes the queue identified by its QueueUrl and any messages in the queue are no longer available after the deletion completes.
DeleteQueue removes the queue resource itself so the queue cannot be used after the delete finishes. SQS can take up to 60 seconds to finalize the deletion, and requests sent during that window may still appear to succeed.
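A minimal sketch with boto3, assuming a hypothetical queue name:

```python
import boto3

sqs = boto3.client("sqs")

# Look up the sandbox queue by name (hypothetical) and delete it along with its messages
queue_url = sqs.get_queue_url(QueueName="sandbox-test-queue")["QueueUrl"]
sqs.delete_queue(QueueUrl=queue_url)
```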
PurgeQueue only empties all messages from an existing queue and leaves the queue resource intact. That action does not satisfy the requirement to remove the queue itself.
RemovePermission changes access permissions by revoking a labeled permission and it does not delete the queue or its messages.
DeleteMessageBatch deletes specific messages supplied by receipt handle in bulk and it is useful for consumers but it cannot remove the queue resource.
Remember that DeleteQueue removes the queue and its contents while PurgeQueue only clears messages. Choose DeleteQueue when you need the queue removed entirely.
A developer at Helios Fitness is building an AWS Lambda consumer for an Amazon Kinesis Data Streams stream named activity-events-v4. The code parses each JSON record and throws an error when the required attribute memberId is absent. After deployment, the downstream application starts seeing the same records more than once, but when the team inspects records directly in the Kinesis console there are no duplicates. What most likely explains the repeated records?
-  
✓ C. The function raised an exception, so the Lambda poller retried the same batch from the shard, which led to duplicate processing
 
The function raised an exception, so the Lambda poller retried the same batch from the shard, which led to duplicate processing is correct because Lambda polls Kinesis shards and delivers records with at least once delivery so failed batches are retried until they succeed or records expire.
This behavior occurs because Lambda only checkpoints a shard after a successful batch. When the function raises an exception, the poller retries the batch, so the entire batch is redelivered instead of being skipped, which can produce duplicates in downstream systems.
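One way to limit the blast radius of a bad record is a partial batch response, assuming ReportBatchItemFailures is enabled on the event source mapping; the handler below is a hedged sketch of that pattern.

```python
import base64
import json

def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            if "memberId" not in payload:
                raise ValueError("memberId missing")
            # ... process the valid record here ...
        except Exception:
            # Report only the failed record so the rest of the batch is not retried
            failures.append({"itemIdentifier": record["kinesis"]["sequenceNumber"]})
    return {"batchItemFailures": failures}
```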
The Lambda function is lagging behind the incoming stream volume is incorrect because lag increases latency and backlog but does not itself cause previously delivered and checkpointed records to be reprocessed.
The event source mapping used asynchronous invocation for the Kinesis trigger is incorrect because Kinesis integrations use a synchronous poller model to invoke Lambda and do not rely on asynchronous invocation semantics for delivery.
The function did not update the Kinesis sequence pointer to skip the bad record is incorrect because application code does not manage sequence pointers when using Lambda. The Lambda service controls checkpointing and it intentionally does not advance the pointer on error so it can retry the batch.
For stream triggers make your handlers idempotent and handle bad records or use partial batch response to avoid reprocessing entire batches.
A logistics startup named Northwind Haulage runs a Node.js microservice on Amazon ECS with AWS X-Ray enabled as a sidecar. The service uses Amazon RDS for PostgreSQL in a Multi-AZ configuration with a single read replica. Compliance asks for end-to-end trace data for outbound calls to RDS and to internal and third-party HTTP endpoints. Engineers must be able to search in the X-Ray console using filter expressions, and the traces should capture the exact SQL statements sent by the application. What should you implement to fulfill these requirements?
-  
✓ C. Add annotations to the subsegment that records each downstream call, including the SQL text
 
The correct choice is Add annotations to the subsegment that records each downstream call, including the SQL text. This ensures that the exact SQL statements and outbound HTTP details are attached to the subsegments that represent those operations so the X-Ray console can index and find them.
Annotations are indexed by AWS X-Ray, so they can be used in filter expressions to search traces. Subsegments represent downstream database and HTTP calls, and placing the SQL text as an annotation on each subsegment captures the exact statements and makes them queryable in the console.
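The service in the question is Node.js, but the pattern is the same across X-Ray SDKs; here is a minimal sketch using the AWS X-Ray SDK for Python, with the subsegment name and query text as illustrative assumptions.

```python
from aws_xray_sdk.core import xray_recorder

sql_text = "SELECT * FROM shipments WHERE shipment_id = %s"  # illustrative query

# Record the database call as its own subsegment and index the SQL as an annotation
# (assumes an open segment, for example one created by the SDK middleware)
subsegment = xray_recorder.begin_subsegment("postgres-query")
try:
    subsegment.put_annotation("sql", sql_text)
    subsegment.put_annotation("target", "orders-db")  # another searchable key
    # ... run the query against RDS here ...
finally:
    xray_recorder.end_subsegment()
```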
Add metadata to the subsegment that records the database and HTTP calls is wrong because metadata is stored for viewing but it is not indexed and cannot be used in X-Ray filter expressions to find traces.
Add annotations to the top-level segment document is not ideal because top level annotations apply to the whole trace and they do not precisely tag individual downstream calls so filtering for specific SQL or HTTP operations will be less accurate.
Add metadata fields to the segment document is incorrect because metadata on the segment is not searchable and is intended for storing detailed payloads for inspection rather than for queryable trace keys.
Use annotations for data you need to search with filter expressions and attach them to the subsegments that represent each outbound database or HTTP operation so queries return precise traces.
A retail startup runs a microservice on Amazon ECS with data in an Amazon DynamoDB table and uses an Amazon ElastiCache for Redis cluster to cache lookups. The application currently adds items to the cache only after a miss, which leads to outdated entries when records are updated in the table. The platform team wants the cache to always reflect the latest writes and also wants unused keys to be removed automatically to avoid wasting memory. What should the developer implement to meet these goals?
-  
✓ C. Implement write-through in the application and configure TTL on ElastiCache keys
 
Implement write-through in the application and configure TTL on ElastiCache keys is the correct choice because it ensures the cache is updated on every write and also removes unused entries automatically.
Implement write-through in the application and configure TTL on ElastiCache keys works by writing data to the database and the cache in the same operation so the cache always reflects the latest DynamoDB writes. The TTL setting causes keys that are not accessed to expire after a configured time so memory is not wasted on never-read entries.
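A minimal write-through sketch, assuming a hypothetical DynamoDB table name, Redis endpoint, and cache key format:

```python
import json

import boto3
import redis  # redis-py client

table = boto3.resource("dynamodb").Table("CatalogItems")  # hypothetical table name
cache = redis.Redis(host="lookups.abc123.use1.cache.amazonaws.com", port=6379)  # hypothetical endpoint

def save_item(item: dict) -> None:
    # Write-through: persist to DynamoDB and refresh the cache in the same code path
    table.put_item(Item=item)
    # TTL removes keys that are never read again, so memory is not wasted
    cache.set(f"item:{item['id']}", json.dumps(item), ex=3600)
```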
Use a write-through caching strategy is incomplete because write-through keeps data fresh but does not by itself remove unused keys automatically. That option omits the TTL component that prevents wasted cache memory.
Use a lazy loading caching strategy is incorrect because lazy loading populates the cache only on misses and does not update the cache on writes. That leads to stale data after updates and fails the freshness requirement.
Enable an allkeys-lru eviction policy and continue using lazy loading is not sufficient because LRU eviction only triggers under memory pressure and lazy loading still allows stale entries until a miss forces refresh. That combination does not guarantee immediate freshness nor proactive removal of never-read keys.
Map requirements to patterns and pick the combined solution that covers both needs. If the exam asks for freshness and automatic cleanup choose write-through plus TTL rather than a single pattern alone.
A developer at Nimbus Robotics is building a CLI tool that uses the AWS SDK with long-term IAM user keys. The IAM policy enforces that every programmatic request must be authenticated with MFA using a six-digit code from a virtual device. Which STS API action should the developer call to obtain temporary credentials that satisfy the MFA requirement for these SDK requests?
-  
✓ B. GetSessionToken
 
The correct STS action is GetSessionToken. It issues temporary security credentials for an IAM user and accepts MFA parameters such as SerialNumber and TokenCode so SDK requests can satisfy an IAM policy that requires MFA.
When you call GetSessionToken you provide the virtual MFA device serial number and the six digit token code and receive temporary credentials that include an AccessKeyId, a SecretAccessKey, and a SessionToken, along with an expiration time. You then use those temporary credentials in the AWS SDK so programmatic calls comply with the MFA requirement.
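A minimal sketch with boto3, where the MFA device ARN and token code are hypothetical placeholders:

```python
import boto3

sts = boto3.client("sts")

# SerialNumber is the virtual MFA device ARN and TokenCode is the current 6-digit code
# (both values below are hypothetical placeholders)
creds = sts.get_session_token(
    SerialNumber="arn:aws:iam::123456789012:mfa/cli-user",
    TokenCode="123456",
    DurationSeconds=3600,
)["Credentials"]

# Use the temporary credentials for subsequent SDK calls
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```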
GetFederationToken is intended for federated sessions issued by an identity broker and is not the right choice for an existing IAM user who must present MFA.
GetCallerIdentity only returns the caller ARN account ID and user ID and does not create temporary credentials to use for SDK requests.
DecodeAuthorizationMessage is used to decode encoded authorization error messages and is unrelated to acquiring session credentials.
When a question requires programmatic MFA for an IAM user think GetSessionToken and include the MFA device serial and six digit code to obtain temporary credentials the SDK can use.
A team at Northwind Fitness is building a RESTful API with Amazon API Gateway integrated to AWS Lambda. They need to run v1, v2, and a sandbox build at the same time so clients and testers can call specific versions using stable URLs. What is the most effective way to publish and manage these concurrent API versions?
-  
✓ B. Publish each version to its own API Gateway stage with distinct invoke URLs and use stage variables for backend context
 
Publish each version to its own API Gateway stage with distinct invoke URLs and use stage variables for backend context is correct because it provides separate, stable invoke URLs for v1 v2 and a sandbox so clients and testers can call specific versions directly.
API Gateway stages create isolated deployments and unique invoke URLs so each version can be managed and deployed independently. Stage variables let you map a stage to a specific Lambda alias or function name so the same API configuration can call different backends per stage and you can update code without changing client URLs.
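A rough sketch of publishing a version to its own stage with boto3, where the API id, stage name, and stage variable name are hypothetical:

```python
import boto3

apigw = boto3.client("apigateway")

# Deploy the current API configuration to a dedicated v2 stage and point it at a
# matching Lambda alias through a stage variable (id and names are hypothetical)
apigw.create_deployment(
    restApiId="a1b2c3d4e5",
    stageName="v2",
    variables={"lambdaAlias": "v2"},
)
# The Lambda integration URI can then reference ${stageVariables.lambdaAlias}
```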
Use a Lambda authorizer to direct callers to the correct version is incorrect because authorizers are for authentication and authorization and they do not perform routing or create separate versioned endpoints.
Rely on Lambda function versions and aliases behind a single API stage to represent API versions is incorrect because aliases control which function code is invoked but a single API stage yields one invoke URL so you cannot expose multiple concurrent, predictable API version endpoints with that approach.
Attach an API Gateway resource policy to separate versions and pass context to Lambda is incorrect because resource policies manage access control and do not create isolated deployments or distinct invoke URLs for different API versions.
When you need concurrent, predictable API versions, use API Gateway stages with per-stage variables that point to Lambda aliases so each environment keeps its own stable URL.
 Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
