Certified AWS Developer Associate Exam Dump and Braindump

These AWS questions come from the Udemy AWS Developer Practice course and from certificationexams.pro
AWS Certified Developer Exam Simulator
Despite the title of this article, this is not an AWS Developer Braindump in the traditional sense.
I do not believe in shortcuts or cheating.
In the past, the word braindump referred to collections of real exam questions that people memorized and shared online. That violates the AWS certification agreement and does nothing to build genuine skill or understanding. There is no professional value in that.
This collection is different. These AWS Developer Exam Questions are original and created from real study experience, drawn from my AWS Developer Associate Udemy course and the certificationexams.pro website, which hosts hundreds of free AWS Developer Practice Questions and AWS Certified Developer Practice Tests.
AWS Developer Exam Topic-Based Questions
Each question is written to align with the official AWS Developer Associate exam topics and objectives.
They are designed to match the tone, depth, and complexity of Real AWS Developer Exam Questions without ever copying them. You will find realistic AWS Developer Exam Sample Questions that strengthen your ability to work with services like Lambda, DynamoDB, S3, and SQS, and help you understand how they interact in real application workflows.
These are not AWS Developer Exam Dumps. Each item in this AWS Developer Exam Simulator is written to teach you how to reason through scenarios such as event-driven processing, data persistence, and CI/CD automation on AWS.
If you can answer these AWS Developer Questions and Answers and explain why the wrong choices are incorrect, you will not just pass the exam but gain the confidence to build secure, efficient, and maintainable applications on AWS.
So if you want to call this your Certified AWS Developer Exam Dump, that is fine, but remember that every question here was built to help you learn, not to cheat.
Study seriously, think like an AWS developer, and approach the AWS Developer Certification exam with confidence and integrity.
Question 1
Scrumtuous Vitals, a health analytics startup, is launching a REST API with Amazon API Gateway. Approximately 60 partner applications will call this API. Each request must send custom HTTP headers named XClientId and XUserId, and those values must be validated against an AuthRegistry table in Amazon DynamoDB before any backend is invoked. What is the best way to implement this authentication in API Gateway?
❏ A. Configure an API Gateway usage plan with API keys and require clients to pass a key
❏ B. Define an API Gateway model that requires the headers and allow API Gateway to read AuthRegistry in DynamoDB
❏ C. Create a request-based Lambda authorizer that validates the XClientId and XUserId by querying the AuthRegistry table in DynamoDB
❏ D. Use Amazon Cognito user pools configured to reference the AuthRegistry table in DynamoDB
Question 2
Which AWS service should Amazon ECS tasks running on Fargate use to centrally retrieve secrets and a non-secret endpoint at runtime while providing strong security and requiring minimal code changes?
❏ A. AWS AppConfig
❏ B. Systems Manager Parameter Store
❏ C. Secrets Manager
❏ D. AWS Key Management Service (KMS)
Question 3
McKenzie Care Telehealth stores PHI in an encrypted Amazon RDS for MySQL instance. The team needs to speed up read-heavy endpoints by caching the hottest rows for about 45 minutes and also requires native support to maintain ranked lists and leaderboards within the cache. Which approach will achieve this while ensuring the PHI remains encrypted at all times?
❏ A. Store hot items in Amazon ElastiCache for Memcached with encryption enabled for data in transit and at rest
❏ B. Use Amazon ElastiCache for Redis with encryption in transit and at rest to cache data and manage rankings
❏ C. Place Amazon RDS Proxy in front of the MySQL database to improve performance while keeping encryption
❏ D. Use DynamoDB Accelerator (DAX) with encryption for caching items from the MySQL workload
Question 4
Which command will authenticate to Amazon ECR in the us-east-2 region and then pull the appsvc:stable image from 098765432109.dkr.ecr.us-east-2.amazonaws.com? (Choose 2)
❏ A. docker build -t 098765432109.dkr.ecr.us-east-2.amazonaws.com/appsvc:stable
❏ B. aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 098765432109.dkr.ecr.us-east-2.amazonaws.com
❏ C. docker pull appsvc:stable
❏ D. docker pull 098765432109.dkr.ecr.us-east-2.amazonaws.com/appsvc:stable
❏ E. aws ecr get-authorization-token --region us-east-2
Question 5
Scrumtuous Training runs a web application on Elastic Beanstalk where requests can trigger heavy jobs such as document rendering and media processing that may run for 10 to 20 minutes, causing pages to become unresponsive as the work blocks the request lifecycle. The team wants a managed approach that moves these operations to background workers, scales automatically, and integrates cleanly with their existing Elastic Beanstalk deployment with minimal changes. What should the team implement to restore responsiveness?
❏ A. Run a background worker within each EC2 instance in the Elastic Beanstalk web tier to process the jobs asynchronously
❏ B. Launch an Amazon ECS service on AWS Fargate to pull tasks and run them asynchronously
❏ C. Create an Elastic Beanstalk worker environment that reads from an Amazon SQS queue and handles the jobs outside the web tier
❏ D. Build an AWS Step Functions state machine that triggers AWS Lambda functions to execute the long-running workloads
Question 6
How can custom fields such as orderId or buildVersion be made searchable in AWS X-Ray filter expressions?
❏ A. AWS X-Ray SDK plugins
❏ B. Add annotations to segments or subsegments
❏ C. Define custom attributes in sampling rules
❏ D. Add metadata key-value pairs
Question 7
A development team at Springfield Air is implementing an Amazon SQS FIFO queue to preserve booking events in strict order per customer_id. They also need the producer to prevent duplicate payloads from being delivered by having SQS treat repeated sends as the same message during the deduplication window. Which message parameter should be configured to achieve this?
❏ A. ReceiveRequestAttemptId
❏ B. MessageGroupId
❏ C. MessageDeduplicationId
❏ D. ContentBasedDeduplication
Question 8
Which option provides attachable block storage for an EC2 instance and offers both encryption at rest and protection of data in transit?
❏ A. Amazon S3
❏ B. Amazon EBS volume with encryption enabled at creation
❏ C. Amazon EFS with TLS enabled
❏ D. EC2 instance store
Question 9
At Scrumtuous Tech Health, an event-driven pipeline uses five AWS Lambda functions that pull tasks from Amazon SQS and write results into an Amazon RDS for PostgreSQL database. Compliance requires all functions to use the same database connection string that is centrally managed, encrypted at rest, and accessed securely at runtime with fine-grained control. How should you protect and share these credentials across the Lambda functions?
❏ A. Amazon RDS Proxy
❏ B. Use AWS Systems Manager Parameter Store with a SecureString parameter
❏ C. Configure AWS Lambda environment variables for the credentials and enable KMS encryption
❏ D. Enable IAM database authentication on the RDS cluster and have each Lambda function use IAM auth tokens
Question 10
How can a Lambda function store five secret values in environment variables so they remain hidden in the Lambda console and API responses even from identities authorized to use the KMS key while minimizing runtime latency?
❏ A. AWS Secrets Manager
❏ B. Lambda console encryption helpers
❏ C. Customer managed KMS key for env var encryption
❏ D. AWS Systems Manager Parameter Store SecureString
Question 11
A regional publisher, Aurora Daily, exposes a CMS through an Amazon API Gateway REST API that posts events to Amazon EventBridge in a centralized AWS account. A rule in that account governs which items are syndicated. Leadership now wants the same updates to flow to eight partner AWS accounts in real time. How can the developer enable this expansion without changing the CMS configuration?
❏ A. Create separate API Gateway REST API endpoints in each partner account and update the CMS to call those endpoints directly
❏ B. Configure cross-account routing with Amazon EventBridge by creating a custom event bus in each partner account, adding a resource policy that allows the central account to put events, and setting a rule in the central account to target those buses
❏ C. Use AWS Resource Access Manager to share the central EventBridge event bus with the partner accounts
❏ D. Deploy additional CMS instances, one per partner AWS account, connected to their own EventBridge rules
Question 12
Which pseudocode implements write-through caching with a TTL so that writes update both the database and the cache and cached entries expire automatically?
❏ A. put(id, v): ttl=600 cache.set(id, v, ttl) return ok
❏ B. put(id, v): ttl=600 cache.set(id, v, ttl) async queue.send(Update{id,v}) return ok
❏ C. put(id, v): db.exec("UPDATE items SET v=? WHERE id=?", v, id) ttl=600 cache.set(id, v, ttl) return ok
❏ D. put(id, v): db.exec("UPDATE items SET v=? WHERE id=?", v, id) cache.delete(id) return ok
Question 13
A developer at HelioPay, a fintech startup, is building a service that ingests about 900 confidential documents per day. Each document must be encrypted within the application before any storage occurs, and every document must use its own distinct encryption key. How should the developer implement this?
❏ A. Upload the files to Amazon S3 using server-side encryption with AWS KMS keys (SSE-KMS)
❏ B. Call AWS KMS GenerateDataKey for each file, use the returned plaintext data key to encrypt the file, then store the encrypted file and the encrypted data key together
❏ C. Store a unique encryption key per file in AWS Secrets Manager and use those keys to encrypt the files in the application
❏ D. Use the AWS KMS Encrypt API directly on each file and store the encrypted output
Question 14
In the Lambda Invoke API which setting makes an SDK invocation asynchronous and behave as a fire and forget call?
❏ A. InvokeAsync API
❏ B. Amazon SNS
❏ C. Lambda Invoke with InvocationType='Event'
❏ D. Lambda Invoke with RequestResponse
Question 15
At FintechNova, several squads are building a platform with Amazon API Gateway. The product lead wants each team to work independently by returning canned responses from methods that represent services owned by other teams so development can proceed without live backends. Which API integration type should they choose to simulate these dependencies?
❏ A. HTTP_PROXY
❏ B. AWS_PROXY
❏ C. MOCK
❏ D. VPC_LINK
Question 16
Which of the following statements are valid considerations for AWS Lambda performance? (Choose 2)
❏ A. You must run the X-Ray daemon inside the function to enable tracing
❏ B. Account concurrency is a Region-wide sum across all functions
❏ C. Provisioned concurrency increases the function’s memory allocation
❏ D. Increasing memory also increases CPU proportionally per invocation
❏ E. Lambda automatically assigns Elastic IPs for private VPC access
Question 17
An engineer at Orion Outfitters needs to review how a planned update will affect resources in an existing AWS CloudFormation stack named retail-prod before any changes are applied in the production account. What is the best way to preview the modifications safely?
❏ A. Run drift detection
❏ B. Perform a direct stack update
❏ C. Create a change set
❏ D. Use AWS CloudFormation StackSets
Question 18
Which Amazon ECS configuration allows two containers in a single AWS Fargate task to share a persistent /var/log/app directory that remains available across task restarts?
❏ A. Task with an ECS ephemeral volume mounted by both containers
❏ B. Task with a shared Amazon EFS volume mounted at /var/log/app in both containers
❏ C. Run the containers in separate tasks that mount the same EFS path
❏ D. Send logs to CloudWatch Logs for sharing
Question 19
A media analytics startup runs a serverless pipeline with twelve AWS Lambda functions that query an Amazon RDS database. All functions must use the same database connection string, which needs to be centrally managed and stored in encrypted form. What is the most secure way to achieve this?
❏ A. Attach an IAM execution role with RDS permissions to every Lambda function
❏ B. Use AWS Lambda environment variables encrypted with KMS and copy the same value into each function
❏ C. Create a SecureString parameter in AWS Systems Manager Parameter Store and have each function read it at runtime
❏ D. AWS AppConfig
Question 20
How should a Lambda function that runs for about nine minutes be invoked so the client receives an immediate response while the processing continues in the most cost effective way?
❏ A. Publish jobs to SQS and trigger Lambda from the queue
❏ B. Enable Lambda provisioned concurrency
❏ C. Invoke Lambda with asynchronous InvocationType 'Event'
❏ D. Use AWS Step Functions for asynchronous coordination
Question 21
A ticketing startup, Pinnacle Tickets, plans to move its platform to AWS to handle a projected threefold surge in demand next quarter. Today a single web server hosts the app and keeps user sessions in memory, while a separate host runs MySQL for booking data. During large on-sales, memory on the web server climbs to 95 percent and the site slows noticeably. As part of the migration, the team will run the web tier on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. What additional change should the developer make to improve performance and scale?
❏ A. Persist both session state and application records in a MySQL database that you run on a single Amazon EC2 instance
❏ B. Keep session state on the EC2 instance store and move application data to Amazon RDS for MySQL
❏ C. Use Amazon ElastiCache for Memcached for centralized session storage and store application data in Amazon RDS for MySQL
❏ D. Put both session state and application data in Amazon ElastiCache for Memcached
Question 22
An application is incurring high Amazon SQS costs because many ReceiveMessage calls return no messages. What change reduces cost while still allowing messages to be retrieved quickly?
❏ A. Raise MaxNumberOfMessages to 8 on ReceiveMessage
❏ B. Enable SQS long polling with a 20-second wait time
❏ C. Increase the visibility timeout to 180 seconds
❏ D. Enable short polling with 0-second wait time
Question 23
At Aurora Logistics, a developer is building an inventory service backed by Amazon DynamoDB. The service must apply a set of Put, Update, and Delete changes across several items as a single atomic action so that either all modifications succeed together or none are applied. Which DynamoDB operation should be used to achieve this behavior?
❏ A. PutItem
❏ B. BatchWriteItem
❏ C. TransactWriteItems
❏ D. Scan
Question 24
How can instances in private subnets access Amazon S3 and Amazon DynamoDB privately when they have no public IP addresses and there is no internet gateway or NAT available?
❏ A. Deploy a NAT gateway and route S3 and DynamoDB traffic through it
❏ B. Use VPC gateway endpoints for S3 and DynamoDB; add routes in private route tables
❏ C. Create interface endpoints for S3 and DynamoDB
❏ D. Create a gateway endpoint for S3 and an interface endpoint for DynamoDB
Question 25
A retail analytics startup, Lattice Mart, runs REST endpoints in its own data center and documents them with OpenAPI (Swagger). The team is moving to AWS with Amazon API Gateway and AWS Lambda and has provided an OpenAPI 3.0 file named store-api-v3.json. You need to create a new API in API Gateway and automatically generate its resources and methods from this definition. What is the simplest way to proceed?
❏ A. Manually create request and response models and mapping templates to match the specification
❏ B. Use AWS SAM to deploy the API from the OpenAPI file
❏ C. Import the OpenAPI or Swagger definition into Amazon API Gateway using the console
❏ D. AWS AppSync
AWS Developer Exam Dump Questions Answered

Question 1
NovaVitals, a health analytics startup, is launching a REST API with Amazon API Gateway. Approximately 60 partner applications will call this API. Each request must send custom HTTP headers named XClientId and XUserId, and those values must be validated against an AuthRegistry table in Amazon DynamoDB before any backend is invoked. What is the best way to implement this authentication in API Gateway?
✓ C. Create a request-based Lambda authorizer that validates the XClientId and XUserId by querying the AuthRegistry table in DynamoDB
The most appropriate approach is to use a custom authorizer that can inspect request headers and validate them against a data store before the backend executes. Create a request-based Lambda authorizer that validates the XClientId and XUserId by querying the AuthRegistry table in DynamoDB fulfills this by reading the custom headers, looking up the values in DynamoDB, and returning an IAM policy to allow or deny the request.
Configure an API Gateway usage plan with API keys and require clients to pass a key is not correct because API keys and usage plans provide client identification and throttling, not user authentication or per-request validation against DynamoDB.
Define an API Gateway model that requires the headers and allow API Gateway to read AuthRegistry in DynamoDB is incorrect because models only enforce request schema and cannot perform authorization or run database queries.
Use Amazon Cognito user pools configured to reference the AuthRegistry table in DynamoDB is not suitable since Cognito user pools issue tokens for authorization and do not directly validate custom headers against your DynamoDB table; this scenario requires custom logic.
Cameron’s Exam Tip
When you see requirements for custom header checks and a data store lookup before invoking the backend, think API Gateway Lambda authorizer, specifically the REQUEST authorizer that can read headers, query DynamoDB, and return an IAM policy.
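The REQUEST authorizer pattern described in this tip can be sketched as a small handler. This is a minimal illustration under stated assumptions, not production code: the in-memory AUTH_REGISTRY set stands in for a boto3 GetItem call against the AuthRegistry table in DynamoDB, and the allowed header pairs are invented.

```python
# Sketch of a REQUEST-type Lambda authorizer that checks two custom
# headers against a registry before allowing the call. AUTH_REGISTRY is
# a stand-in for a DynamoDB lookup; a real handler would use boto3.
AUTH_REGISTRY = {("client-42", "user-7")}  # hypothetical allowed pairs

def build_policy(principal_id, effect, method_arn):
    """Return the IAM policy document API Gateway expects from an authorizer."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }

def handler(event, context):
    headers = event.get("headers") or {}
    client_id = headers.get("XClientId")
    user_id = headers.get("XUserId")
    if (client_id, user_id) in AUTH_REGISTRY:
        return build_policy(client_id, "Allow", event["methodArn"])
    return build_policy(client_id or "anonymous", "Deny", event["methodArn"])
```

API Gateway can cache the returned policy for the authorizer's TTL, so the DynamoDB lookup does not have to run on every request.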
Question 2
Which AWS service should Amazon ECS tasks running on Fargate use to centrally retrieve secrets and a non-secret endpoint at runtime while providing strong security and requiring minimal code changes?
✓ C. Secrets Manager
Secrets Manager is the best choice because it natively stores and manages secrets, supports automatic rotation, and integrates directly with ECS task definitions to inject values as environment variables or mounted files at runtime. This meets the goals of strong security and minimal code changes since the ECS agent retrieves the secrets and provides them to the container without custom retrieval logic.
The option Systems Manager Parameter Store can securely store configuration (including SecureString values) and is integrated with ECS, but it does not provide built-in rotation for many secret types and is less focused on secret lifecycle. In contrast, Secrets Manager is designed specifically for secrets with rotation workflows and auditing.
The option AWS AppConfig is designed for application configuration and feature flags and is well-suited for non-secret settings, but it is not a secrets store and typically requires an agent or SDK integration, which can increase code or operational changes.
The option AWS Key Management Service (KMS) handles encryption keys, not storage of secrets or parameters. You would still need a store such as Secrets Manager or Parameter Store, so using KMS alone would not surface values to tasks.
Cameron’s Exam Tip
Remember that ECS task definitions can reference both Secrets Manager and Parameter Store to inject values at runtime with no code changes. Prioritize Secrets Manager when the requirement emphasizes secret rotation and secret lifecycle. Use Parameter Store or AppConfig for non-secret configuration where cost or change frequency may be considerations. Look for keywords like rotation, secrets, and minimal code changes to favor Secrets Manager.
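As a concrete illustration of the no-code-change injection the tip describes, an ECS container definition can reference a Secrets Manager ARN under `secrets` while the non-secret endpoint stays in plain `environment`. The names and ARN below are hypothetical:

```json
{
  "containerDefinitions": [{
    "name": "api",
    "image": "example/api:latest",
    "environment": [
      { "name": "SERVICE_ENDPOINT", "value": "https://internal.example.com" }
    ],
    "secrets": [
      {
        "name": "DB_PASSWORD",
        "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-abc123"
      }
    ]
  }]
}
```

The ECS agent resolves `valueFrom` at task start and exposes the value as an ordinary environment variable, so application code reads DB_PASSWORD as usual.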
Question 3
OrchidCare Telehealth stores PHI in an encrypted Amazon RDS for MySQL instance. The team needs to speed up read-heavy endpoints by caching the hottest rows for about 45 minutes and also requires native support to maintain ranked lists and leaderboards within the cache. Which approach will achieve this while ensuring the PHI remains encrypted at all times?
✓ B. Use Amazon ElastiCache for Redis with encryption in transit and at rest to cache data and manage rankings
The best choice is Use Amazon ElastiCache for Redis with encryption in transit and at rest to cache data and manage rankings. Redis provides in-memory performance, supports encryption at rest and in transit in ElastiCache, and natively offers sorted sets, which are ideal for leaderboards and ranked lists while caching PHI securely.
Store hot items in Amazon ElastiCache for Memcached with encryption enabled for data in transit and at rest is not suitable because Memcached lacks advanced data structures like sorted sets, making ranking difficult without custom code.
Use DynamoDB Accelerator (DAX) with encryption for caching items from the MySQL workload is incorrect since DAX only works with DynamoDB and cannot cache data from RDS for MySQL.
Place Amazon RDS Proxy in front of the MySQL database to improve performance while keeping encryption does not meet the requirement because RDS Proxy improves connection management but does not cache data or provide ranking features.
Cameron’s Exam Tip
When a scenario calls for caching plus native ranking or sorting, think of Redis sorted sets and ensure both encryption in transit and encryption at rest are enabled for PHI and other sensitive data.
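To make the sorted-set idea concrete, the sketch below mimics in plain Python what the Redis commands ZADD (upsert a member's score) and ZREVRANGE (read the top N, highest score first) do. Against ElastiCache for Redis you would issue the same operations through a client such as redis-py over TLS; the member names here are invented.

```python
# Plain-Python stand-in for one Redis sorted set, to illustrate the
# leaderboard semantics. Redis keeps this sorted by score internally.
leaderboard = {}  # member -> score

def zadd(member, score):
    # ZADD upserts the member's score by default
    leaderboard[member] = score

def zrevrange(top_n):
    # ZREVRANGE 0 top_n-1 WITHSCORES: highest score first
    return sorted(leaderboard.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

zadd("clinic-a", 120)
zadd("clinic-b", 340)
zadd("clinic-c", 205)
```

In Redis the sort is maintained incrementally, so top-N reads are fast even for large sets, which is exactly why sorted sets fit ranked lists.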
Question 4
Which command will authenticate to Amazon ECR in the us-east-2 region and then pull the appsvc:stable image from 098765432109.dkr.ecr.us-east-2.amazonaws.com? (Choose 2)
✓ B. aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 098765432109.dkr.ecr.us-east-2.amazonaws.com
✓ D. docker pull 098765432109.dkr.ecr.us-east-2.amazonaws.com/appsvc:stable
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 098765432109.dkr.ecr.us-east-2.amazonaws.com is the recommended way to authenticate Docker to a private Amazon ECR registry using AWS CLI v2. After authentication, use docker pull 098765432109.dkr.ecr.us-east-2.amazonaws.com/appsvc:stable to download the specified image and tag.
docker build -t 098765432109.dkr.ecr.us-east-2.amazonaws.com/appsvc:stable is incorrect because it builds an image locally rather than pulling from ECR.
docker pull appsvc:stable is incorrect because it omits the ECR registry and will target Docker Hub by default.
aws ecr get-authorization-token –region us-east-2 only returns a token; it does not log Docker in or pull the image.
Cameron’s Exam Tip
Authenticate to the exact ECR registry URI and then pull using the full image URI with repository and tag. Prefer get-login-password with CLI v2. If you see the older aws ecr get-login --no-include-email on the exam, know that it is a deprecated CLI v1 approach and less likely to appear on newer versions of the exam, but it follows the same two-step pattern: log in, then pull.
Question 5
VegaMetrics runs a web application on Elastic Beanstalk where requests can trigger heavy jobs such as document rendering and media processing that may run for 10 to 20 minutes, causing pages to become unresponsive as the work blocks the request lifecycle. The team wants a managed approach that moves these operations to background workers, scales automatically, and integrates cleanly with their existing Elastic Beanstalk deployment with minimal changes. What should the team implement to restore responsiveness?
✓ C. Create an Elastic Beanstalk worker environment that reads from an Amazon SQS queue and handles the jobs outside the web tier
The best way to keep the web tier responsive is to decouple long-running work from request handling and run it in a managed background tier. Create an Elastic Beanstalk worker environment that reads from an Amazon SQS queue and handles the jobs outside the web tier provides native Beanstalk integration via the worker daemon, automatic scaling, retries, and back-pressure through SQS.
Run a background worker within each EC2 instance in the Elastic Beanstalk web tier to process the jobs asynchronously ties background work to the same instances serving user traffic, which can starve the web tier and make scaling uneven under bursty loads.
Launch an Amazon ECS service on AWS Fargate to pull tasks and run them asynchronously could work but adds orchestration, queue wiring, and deployment complexity and lacks the built-in Elastic Beanstalk worker/SQS integration the team wants.
Build an AWS Step Functions state machine that triggers AWS Lambda functions to execute the long-running workloads is not suitable because Lambda’s max duration is 15 minutes and the tasks can take up to 20 minutes, and it does not provide the seamless Elastic Beanstalk integration required.
Cameron’s Exam Tip
For Elastic Beanstalk apps with long or blocking work, look for Elastic Beanstalk worker environments with SQS to decouple requests from processing; remember that Lambda has a 15-minute max duration, which can rule it out for very long tasks.
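For context on how the worker environment hands messages to your code: the managed sqsd daemon polls the SQS queue and POSTs each message body to the worker application over local HTTP, where a 200 response deletes the message and any other status returns it to the queue for retry. A minimal, hypothetical handler for that POST might look like the sketch below (render_document is a placeholder for the long-running job):

```python
import json

def render_document(doc_id):
    """Placeholder for the 10-20 minute rendering job."""
    pass

def handle_worker_post(body_bytes):
    """Hypothetical handler for the HTTP path sqsd POSTs messages to.
    Returning 200 acknowledges the message so sqsd deletes it; raising
    or returning a non-2xx status would make it visible again for retry."""
    job = json.loads(body_bytes)
    render_document(job["document_id"])
    return 200
```

Because the queue and daemon sit between the web tier and the worker, the web requests return immediately and the worker environment scales on queue depth.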
Question 6
How can custom fields such as orderId or buildVersion be made searchable in AWS X-Ray filter expressions?
✓ B. Add annotations to segments or subsegments
The correct choice is Add annotations to segments or subsegments. In AWS X-Ray, annotations are indexed key-value pairs attached to segments and subsegments. Because they are indexed, they can be used in filter expressions to search and group traces by custom fields such as orderId or buildVersion.
The option AWS X-Ray SDK plugins is incorrect because plugins enrich environment context (for example, EC2 metadata) but do not create searchable, indexed custom attributes.
The option Define custom attributes in sampling rules is incorrect because sampling rules determine which requests are sampled; they do not index or expose arbitrary custom fields for search.
The option Add metadata key-value pairs is incorrect because metadata is unindexed and therefore not usable in filter expressions.
Cameron’s Exam Tip
Remember that annotations are indexed and searchable, while metadata is not. Look for keywords like “filter expressions,” “searchable,” or “indexed” to map directly to annotations. In code, use the SDK’s putAnnotation methods on segments or subsegments to add these fields.
Question 7
A development team at Aurora Air is implementing an Amazon SQS FIFO queue to preserve booking events in strict order per customer_id. They also need the producer to prevent duplicate payloads from being delivered by having SQS treat repeated sends as the same message during the deduplication window. Which message parameter should be configured to achieve this?
✓ C. MessageDeduplicationId
The correct choice is MessageDeduplicationId. For FIFO queues, this per-message token is used by SQS to detect duplicates; messages sent with the same deduplication ID within the deduplication window are accepted but not delivered again.
MessageGroupId is about ordering messages within a group and does not deduplicate messages, so it will not prevent re-delivery of identical sends.
ReceiveRequestAttemptId deduplicates ReceiveMessage calls to return the same set of messages after retrying a receive, not to deduplicate sends, so it is irrelevant to producer-side duplicate suppression.
ContentBasedDeduplication is a queue setting, not a message parameter; while enabling it lets SQS compute the deduplication ID from the message body, the question asks for the message parameter to set explicitly.
Cameron’s Exam Tip
For FIFO deduplication, remember: use MessageDeduplicationId per message, MessageGroupId for ordering scope, and ReceiveRequestAttemptId only for receive-call retries; ContentBasedDeduplication is a queue attribute, not a message field.
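A producer-side sketch of these parameters, under the assumption that the deduplication ID is derived by hashing the canonical message body (which mirrors what ContentBasedDeduplication computes server-side). The function only builds the dict that would be passed to an SQS SendMessage call; the queue URL in any usage is hypothetical.

```python
import hashlib
import json

def fifo_send_params(queue_url, payload, customer_id):
    """Build SendMessage parameters for a FIFO queue. MessageGroupId
    scopes strict ordering per customer; MessageDeduplicationId makes
    repeated identical sends collapse inside the 5-minute window."""
    body = json.dumps(payload, sort_keys=True)  # canonical form
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": customer_id,
        # SHA-256 hex is 64 chars, well under the 128-char limit
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
    }
```

Two sends of the same payload for the same customer produce the same deduplication ID, so SQS accepts the second send but does not deliver a duplicate.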
Question 8
Which option provides attachable block storage for an EC2 instance and offers both encryption at rest and protection of data in transit?
✓ B. Amazon EBS volume with encryption enabled at creation
The correct choice is Amazon EBS volume with encryption enabled at creation. EBS provides persistent, attachable block storage for EC2. Enabling EBS encryption at creation uses AWS KMS keys to encrypt data at rest, and for encrypted EBS on Nitro-based instances, data moving between the instance and the volume is also encrypted, satisfying at-rest and in-transit protection requirements.
Amazon S3 is object storage and cannot be attached as a block device to an EC2 instance, so it does not meet the attachable block storage requirement.
Amazon EFS with TLS enabled supports encryption at rest and in transit but is a network file system, not block storage; the question asks for attachable block storage to an instance.
EC2 instance store is ephemeral and data is lost on stop or terminate, which fails the persistence and compliance needs.
Cameron’s Exam Tip
Map keywords to storage types. Attach, block, and persistent point to EBS. File system and NFS point to EFS. Object storage points to S3. Ephemeral indicates instance store. For compliance cues like encrypt at rest and in transit, remember EBS encryption at creation covers both data at rest and the path between the instance and the encrypted volume on Nitro. If you see automatic encryption wording, verify whether encryption must be explicitly enabled versus relying on account defaults.
Question 9
At LumaTech Health, an event-driven pipeline uses five AWS Lambda functions that pull tasks from Amazon SQS and write results into an Amazon RDS for PostgreSQL database. Compliance requires all functions to use the same database connection string that is centrally managed, encrypted at rest, and accessed securely at runtime with fine-grained control. How should you protect and share these credentials across the Lambda functions?
✓ B. Use AWS Systems Manager Parameter Store with a SecureString parameter
Use AWS Systems Manager Parameter Store with a SecureString parameter best fits the requirement to centrally manage one database connection string, encrypt it with KMS, and control runtime access with IAM. Multiple Lambda functions can read the same parameter by name, and you can version, audit, and rotate the value without redeploying code.
Amazon RDS Proxy is valuable for connection pooling and improving database scalability for Lambda, but it is not a secrets management solution and does not itself provide a centralized encrypted credential to share; it typically integrates with a separate secrets store.
Configure AWS Lambda environment variables for the credentials and enable KMS encryption encrypts per-function copies of the secret, which complicates centralized sharing and coordinated rotation across several functions.
Enable IAM database authentication on the RDS cluster and have each Lambda function use IAM auth tokens controls authentication without storing a shared credential, and while useful for MySQL and PostgreSQL, it does not meet the requirement to share the same connection string as a centrally managed secret.
Cameron’s Exam Tip
When multiple compute components must read the same secret, think of a centralized, KMS-encrypted store with IAM policy controls. For configuration and simple secrets at scale, consider AWS Systems Manager Parameter Store SecureString; ensure your Lambdas retrieve at runtime and cache minimally to ease rotation.
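The retrieval pattern described above can be sketched in a few lines of Python. This is a minimal sketch, not code the exam requires: the parameter name /lumatech/db/connstring is a hypothetical example, and in a real function you would create the boto3 client once outside the handler and cache the value briefly to ease rotation.

```python
# Sketch: reading a shared SecureString parameter at Lambda runtime.
# The parameter name is a hypothetical example.
def get_connection_string(ssm_client, name="/lumatech/db/connstring"):
    """Fetch a SecureString parameter, decrypted via KMS in one call."""
    resp = ssm_client.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]

# In a real Lambda you would typically do, outside the handler:
#   import boto3
#   ssm = boto3.client("ssm")
#   CONN = get_connection_string(ssm)
```

Because every function reads the same parameter by name, rotating the value means updating one parameter rather than redeploying five functions.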
Question 10
How can a Lambda function store five secret values in environment variables so that they remain hidden in the Lambda console and API responses, even from identities authorized to use the KMS key, while minimizing runtime latency?
-
✓ B. Lambda console encryption helpers
Lambda console encryption helpers are best because they encrypt values client-side before saving, so only ciphertext is stored as environment variables. The Lambda console and API do not auto-decrypt these helpers’ ciphertext, keeping secrets masked even for identities that could use the KMS key, and avoiding extra network calls that add latency.
The option AWS Secrets Manager is not ideal here because it moves secrets out of environment variables and adds retrieval latency, despite being good for rotation.
The option Customer managed KMS key for env var encryption simply sets which key protects values at rest; the console/API can still decrypt for authorized users, so secrets are visible.
The option AWS Systems Manager Parameter Store SecureString also requires retrieval calls and additional logic or caching, increasing latency and complexity while not keeping the values as environment variables.
Cameron’s Exam Tip
When you see requirements to keep environment variable secrets masked in the console or API and to avoid extra latency, think client-side encryption with the Lambda encryption helpers. If the question emphasizes rotation and centralized secret management, think Secrets Manager or Parameter Store, but watch for hints about avoiding runtime calls and keeping values as env vars.
Question 11
A regional publisher, Aurora Daily, exposes a CMS through an Amazon API Gateway REST API that posts events to Amazon EventBridge in a centralized AWS account. A rule in that account governs which items are syndicated. Leadership now wants the same updates to flow to eight partner AWS accounts in real time. How can the developer enable this expansion without changing the CMS configuration?
-
✓ B. Configure cross-account routing with Amazon EventBridge by creating a custom event bus in each partner account, adding a resource policy that allows the central account to put events, and setting a rule in the central account to target those buses
The simplest way to fan out events to multiple AWS accounts without touching the producer is to use native cross-account capabilities in EventBridge. Create a custom event bus in each partner account, attach a resource policy that authorizes the central account to put events, and configure a rule in the central account to target those external event buses. This preserves the existing CMS integration and syndication logic.
Configure cross-account routing with Amazon EventBridge by creating a custom event bus in each partner account, adding a resource policy that allows the central account to put events, and setting a rule in the central account to target those buses is correct because EventBridge supports cross-account delivery via resource-based policies and rule targets, enabling a clean, scalable fan-out with no changes to the CMS.
Create separate API Gateway REST API endpoints in each partner account and update the CMS to call those endpoints directly is incorrect because it requires modifying the CMS and managing many endpoints, which conflicts with the requirement and adds operational complexity.
Use AWS Resource Access Manager to share the central EventBridge event bus with the partner accounts is incorrect because EventBridge does not use AWS RAM to share event buses; you must use event bus resource policies and cross-account rule targets.
Deploy additional CMS instances, one per partner AWS account, connected to their own EventBridge rules is incorrect because it needlessly duplicates infrastructure and increases cost and maintenance without solving the distribution requirement efficiently.
Cameron’s Exam Tip
When you see a requirement to distribute events across accounts without changing the producer, think EventBridge cross-account with event bus resource policies and cross-account rule targets, not custom fan-out via Lambda or extra API endpoints.
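The partner-side piece of this pattern is a resource policy on each partner bus. A rough sketch, where the account IDs, Region, and bus name syndication-bus are all placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCentralAccountPutEvents",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "events:PutEvents",
      "Resource": "arn:aws:events:us-east-1:222222222222:event-bus/syndication-bus"
    }
  ]
}
```

The central account's rule then lists each partner bus ARN as a target, so fan-out to all eight accounts happens inside EventBridge with no producer changes.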
Question 12
Which pseudocode implements write-through caching with a TTL so that writes update both the database and the cache and cached entries expire automatically?
-
✓ C. put(id, v): db.exec("UPDATE items SET v=? WHERE id=?", v, id) ttl=600 cache.set(id, v, ttl) return ok
put(id, v): db.exec("UPDATE items SET v=? WHERE id=?", v, id) ttl=600 cache.set(id, v, ttl) return ok is correct because it implements write-through caching with TTL. It updates the database (the source of truth) first to ensure durability and then immediately writes the same value to the cache with a time-to-live, keeping the cache consistent on writes and automatically expiring entries to avoid stale buildup.
The option that writes only to the cache put(id, v): ttl=600 cache.set(id, v, ttl) return ok is wrong because the database is never updated, so the cache and persistent store can diverge. The write-behind option put(id, v): ttl=600 cache.set(id, v, ttl) async queue.send(Update{id,v}) return ok is wrong because it introduces a window where the cache differs from the database until the async update completes, violating immediate consistency on writes. The cache-aside invalidation option put(id, v): db.exec("UPDATE items SET v=? WHERE id=?", v, id) cache.delete(id) return ok is also wrong for this requirement because it removes the entry but does not repopulate the cache or use TTL to age entries automatically, so it won’t keep the cache warm and can still accumulate unrelated old keys.
Cameron’s Exam Tip
Know the differences between write-through (DB update then cache set), cache-aside (read-through; invalidate or update on write), and write-behind (async persistence). For questions requiring immediate consistency on writes plus automatic aging, look for the sequence “update DB, then cache.set with TTL”. Be cautious of options that only invalidate or that use asynchronous persistence, which fail the “consistent on write” requirement.
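The correct option's behavior can be made concrete with a runnable sketch that uses plain dictionaries as stand-ins for both the database and the cache:

```python
# Runnable sketch of write-through caching with TTL. DB and CACHE are
# in-memory stand-ins for the relational store and the cache cluster.
import time

DB = {}      # source of truth
CACHE = {}   # maps key -> (value, expiry_timestamp)
TTL_SECONDS = 600

def put(item_id, value, now=None):
    now = now if now is not None else time.time()
    DB[item_id] = value                           # 1. update the source of truth
    CACHE[item_id] = (value, now + TTL_SECONDS)   # 2. write through with TTL
    return "ok"

def get(item_id, now=None):
    now = now if now is not None else time.time()
    entry = CACHE.get(item_id)
    if entry is not None:
        value, expires_at = entry
        if now < expires_at:
            return value                          # cache hit, not yet expired
        del CACHE[item_id]                        # expired entry: evict
    value = DB.get(item_id)                       # miss: read through from DB
    if value is not None:
        CACHE[item_id] = (value, now + TTL_SECONDS)
    return value
```

Note the ordering on the write path: the database is updated first, then the cache, so a cache read never returns a value the database has not durably accepted.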
Question 13
A developer at HelioPay, a fintech startup, is building a service that ingests about 900 confidential documents per day. Each document must be encrypted within the application before any storage occurs, and every document must use its own distinct encryption key. How should the developer implement this?
-
✓ B. Call AWS KMS GenerateDataKey for each file, use the returned plaintext data key to encrypt the file, then store the encrypted file and the encrypted data key together
The best pattern is client-side envelope encryption using data keys. The application requests a unique data key for each document, uses the plaintext data key to encrypt the content, then discards it and stores the encrypted data key alongside the ciphertext for later decryption.
Call AWS KMS GenerateDataKey for each file, use the returned plaintext data key to encrypt the file, then store the encrypted file and the encrypted data key together is correct because GenerateDataKey returns both a plaintext data key for immediate use and a CiphertextBlob of that key protected by a KMS key, which matches per-file unique key requirements and client-side encryption.
Upload the files to Amazon S3 using server-side encryption with AWS KMS keys (SSE-KMS) is wrong because the requirement is to encrypt in the application before storage; SSE-KMS performs server-side encryption after upload.
Store a unique encryption key per file in AWS Secrets Manager and use those keys to encrypt the files in the application is not appropriate since Secrets Manager manages application secrets and does not generate or cryptographically protect ephemeral per-file data keys like KMS; it also adds unnecessary operational overhead and exposure.
Use the AWS KMS Encrypt API directly on each file and store the encrypted output is unsuitable because Encrypt has a 4 KB plaintext size limit and does not provide per-file data key generation, so it cannot handle full-file encryption.
Cameron’s Exam Tip
For client-side encryption where each object needs a distinct key, think envelope encryption with KMS GenerateDataKey; avoid using Encrypt for large payloads and remember SSE options are server-side, not client-side.
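The envelope-encryption flow can be sketched as below. This is an assumption-laden outline rather than production crypto: key_id is a placeholder, and the local AES-GCM step is left as a comment because symmetric encryption comes from a crypto library, not the Python standard library.

```python
# Sketch of per-document envelope encryption with a unique data key.
def encrypt_document(kms_client, key_id, document_bytes):
    resp = kms_client.generate_data_key(KeyId=key_id, KeySpec="AES_256")
    plaintext_key = resp["Plaintext"]       # use immediately, never persist
    encrypted_key = resp["CiphertextBlob"]  # safe to store with the file
    # ciphertext = aes_gcm_encrypt(plaintext_key, document_bytes)  # via a crypto lib
    # Discard plaintext_key from memory once encryption is done, then
    # store ciphertext and encrypted_key side by side. To decrypt later,
    # call kms_client.decrypt(CiphertextBlob=encrypted_key) to recover
    # the data key.
    return encrypted_key
```

Each of the roughly 900 daily documents gets its own data key this way, and KMS never sees the document contents.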
Question 14
In the Lambda Invoke API which setting makes an SDK invocation asynchronous and behave as a fire and forget call?
-
✓ C. Lambda Invoke with InvocationType='Event'
Lambda Invoke with InvocationType='Event' is correct because setting InvocationType to 'Event' makes the SDK call asynchronous, returning immediately with a 202 status and not waiting for the function to complete or return a payload. This is the recommended approach for fire-and-forget Lambda invocations via the SDK.
The option InvokeAsync API is incorrect because it is deprecated and not recommended for new development; AWS advises using Invoke with InvocationType='Event' instead.
The option Amazon SNS is incorrect because publishing to SNS changes the architecture to an event fan-out pattern and is not a direct Lambda SDK invocation.
The option Lambda Invoke with RequestResponse is incorrect because RequestResponse is the synchronous mode that waits for execution to finish and returns the payload.
Cameron’s Exam Tip
Remember the three InvocationType values: RequestResponse (sync), Event (async), and DryRun (permission check only). For async flows, configure retries and failure handling with EventInvokeConfig (DLQ, on-failure destinations). Watch for distractors that suggest deprecated InvokeAsync or alternate eventing services (SNS, EventBridge) when the question explicitly asks for a direct SDK invocation.
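A minimal SDK sketch of the fire-and-forget call; the function name is a placeholder and the client would normally come from boto3:

```python
# Sketch: asynchronous ("Event") invocation via the SDK.
import json

def invoke_async(lambda_client, function_name, payload):
    resp = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="Event",            # async: return as soon as queued
        Payload=json.dumps(payload).encode(),
    )
    return resp["StatusCode"]              # 202 when the event is accepted
```

With RequestResponse the same call would instead block until the function finished and would carry the function's return payload.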
Question 15
At FintechNova, several squads are building a platform with Amazon API Gateway. The product lead wants each team to work independently by returning canned responses from methods that represent services owned by other teams so development can proceed without live backends. Which API integration type should they choose to simulate these dependencies?
-
✓ C. MOCK
MOCK is the right choice because API Gateway can generate a response entirely within the gateway, enabling teams to isolate development and test contracts without any running backend services.
HTTP_PROXY simply forwards calls to an actual HTTP endpoint, so it requires the real downstream service and does not simulate anything.
AWS_PROXY (Lambda proxy) still invokes a live Lambda or AWS backend, which prevents true isolation and stubbing.
VPC_LINK targets private resources behind a Network Load Balancer, which is useful for private connectivity but unrelated to returning canned responses.
Cameron’s Exam Tip
When you see keywords like mock, stub, simulate, or no backend with API Gateway, the answer is usually the MOCK integration.
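A MOCK method is wired with two mapping templates: an integration request template that sets a status code, and an integration response template that returns the canned body. A rough sketch using the OpenAPI integration extension, where the stub payload is purely illustrative:

```json
{
  "x-amazon-apigateway-integration": {
    "type": "mock",
    "requestTemplates": {
      "application/json": "{\"statusCode\": 200}"
    },
    "responses": {
      "default": {
        "statusCode": "200",
        "responseTemplates": {
          "application/json": "{\"orderId\": \"stub-123\", \"status\": \"CONFIRMED\"}"
        }
      }
    }
  }
}
```

Because the response is generated entirely inside API Gateway, the owning team can change the stub payload without any backend deployment.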
Question 16
Which of the following statements are valid considerations for AWS Lambda performance? (Choose 2)
-
✓ B. Account concurrency is a Region-wide sum across all functions
-
✓ D. Increasing memory also increases CPU proportionally per invocation
Account concurrency is a Region-wide sum across all functions is correct because AWS enforces an account-level concurrency limit per Region that is shared by all Lambda functions in that Region. This directly impacts scaling behavior and potential throttling when the total concurrent executions reach the limit.
Increasing memory also increases CPU proportionally per invocation is correct because Lambda allocates CPU power and some other resources in proportion to the memory setting. Increasing memory therefore increases available CPU, which often reduces execution time and improves throughput.
You must run the X-Ray daemon inside the function to enable tracing is incorrect. Lambda has native integration with AWS X-Ray. You enable active tracing in the function configuration; there is no need to run an in-function daemon.
Provisioned concurrency increases the function’s memory allocation is incorrect. Provisioned concurrency pre-initializes execution environments to reduce cold starts but does not alter memory or CPU. Memory is an independent configuration that you set per function or version/alias.
Lambda automatically assigns Elastic IPs for private VPC access is incorrect. With VPC access, Lambda attaches ENIs to your subnets. It does not assign Elastic IPs; outbound internet access from private subnets requires a NAT gateway or NAT instance.
Cameron’s Exam Tip
Remember that CPU scales with memory in Lambda. Keep in mind that account concurrency is regional and shared across functions, and use reserved or provisioned concurrency when you need isolation and warm starts. For VPC-enabled functions, plan ENI creation and NAT paths carefully to avoid cold-start and egress surprises. For tracing, enable active tracing instead of attempting to run a daemon in your code.
Question 17
An engineer at Orion Outfitters needs to review how a planned update will affect resources in an existing AWS CloudFormation stack named retail-prod before any changes are applied in the production account. What is the best way to preview the modifications safely?
-
✓ C. Create a change set
The right approach is to use a preview mechanism that summarizes modifications without touching production. Create a change set shows which resources will be added, modified, or replaced so you can review the impact before execution.
Run drift detection checks whether deployed resources differ from the recorded template state, but it does not show what will change from a new template.
Perform a direct stack update immediately applies changes to the stack, which defeats the purpose of previewing safely.
Use AWS CloudFormation StackSets is for orchestrating stacks across multiple accounts and Regions and does not provide a change preview for a single stack update.
Cameron’s Exam Tip
Watch for keywords like preview, proposed changes, or without applying; these point to CloudFormation change sets rather than drift detection or direct updates.
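The change-set workflow can also be driven from the SDK; the change set name and template URL below are placeholders:

```python
# Sketch: create a change set for an existing stack, then summarize the
# proposed resource actions without applying anything.
def preview_stack_changes(cfn_client, stack_name, template_url):
    cfn_client.create_change_set(
        StackName=stack_name,
        ChangeSetName="preview-update",
        TemplateURL=template_url,
        ChangeSetType="UPDATE",          # previewing an update, not a create
    )
    # Once the change set reaches CREATE_COMPLETE, describe it to review
    # the Add / Modify / Remove actions before deciding to execute it.
    resp = cfn_client.describe_change_set(
        StackName=stack_name, ChangeSetName="preview-update"
    )
    return [c["ResourceChange"]["Action"] for c in resp["Changes"]]
```

Nothing in retail-prod changes until someone explicitly executes the change set, which is the safety property the question asks for.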
Question 18
Which Amazon ECS configuration allows two containers in a single AWS Fargate task to share a persistent /var/log/app directory that remains available across task restarts?
-
✓ B. Task with a shared Amazon EFS volume mounted at /var/log/app in both containers
Task with a shared Amazon EFS volume mounted at /var/log/app in both containers is correct because EFS provides a network filesystem that persists independently of the task lifecycle and can be mounted by multiple containers in the same ECS task. Defining an EFS volume in the task definition and mounting it into each container at /var/log/app enables shared, persistent logs across restarts on Fargate.
Task with an ECS ephemeral volume mounted by both containers is incorrect because the data is lost when the task stops or is replaced; it does not meet the persistence requirement.
Run the containers in separate tasks that mount the same EFS path is not the recommended pattern for a sidecar and adds unnecessary orchestration complexity. While both tasks could access the same EFS directory, the exam expects the simpler, co-located containers approach using a single task with a shared volume.
Send logs to CloudWatch Logs for sharing is incorrect because CloudWatch Logs is not a POSIX filesystem and cannot be mounted at /var/log/app for file reads and writes.
Cameron’s Exam Tip
When you see requirements to share files between containers and persist across task restarts on Fargate, think EFS-backed task volumes. Fargate does not support host bind mounts. Ephemeral task storage is suitable for temporary data only. Services like CloudWatch Logs or FireLens are for log streaming, not shared directories.
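A trimmed task-definition sketch showing the shared volume; the file system ID and container names are placeholders:

```json
{
  "volumes": [
    {
      "name": "app-logs",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-0123456789abcdef0",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "mountPoints": [{ "sourceVolume": "app-logs", "containerPath": "/var/log/app" }]
    },
    {
      "name": "log-sidecar",
      "mountPoints": [{ "sourceVolume": "app-logs", "containerPath": "/var/log/app" }]
    }
  ]
}
```

Both containers see the same /var/log/app directory, and because the data lives in EFS rather than task storage, it survives task replacement.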
Question 19
A media analytics startup runs a serverless pipeline with twelve AWS Lambda functions that query an Amazon RDS database. All functions must use the same database connection string, which needs to be centrally managed and stored in encrypted form. What is the most secure way to achieve this?
-
✓ C. Create a SecureString parameter in AWS Systems Manager Parameter Store and have each function read it at runtime
The Create a SecureString parameter in AWS Systems Manager Parameter Store and have each function read it at runtime approach provides centralized, encrypted storage using KMS, fine-grained IAM access control, and versioning. This allows multiple Lambda functions to consume the same secret without duplicating values, while enabling rotation and auditing.
Attach an IAM execution role with RDS permissions to every Lambda function only grants permissions and does not store or encrypt the connection string, so it does not meet the secrecy requirement.
Use AWS Lambda environment variables encrypted with KMS and copy the same value into each function encrypts at rest but spreads secrets across functions, making rotation and governance harder and less secure than a centralized store.
AWS AppConfig is designed for application configuration and feature flags, and it typically integrates with Parameter Store or Secrets Manager for sensitive values; it is not the right place to store database secrets directly.
Cameron’s Exam Tip
For shared, encrypted configuration across multiple Lambdas, prefer a centralized secrets/config store. Use Parameter Store SecureString or AWS Secrets Manager for secrets, and avoid duplicating sensitive values in per-function environment variables.
Question 20
How should a Lambda function that runs for about nine minutes be invoked so the client receives an immediate response while the processing continues, in the most cost-effective way?
-
✓ C. Invoke Lambda with asynchronous InvocationType 'Event'
Invoke Lambda with asynchronous InvocationType 'Event' is correct because async invocation returns immediately (HTTP 202), allowing the client to stay responsive while the function executes in the background. It is the simplest and most cost-effective approach for a single long-running Lambda within the 15-minute timeout limit. Async invocation also supports retries and failure handling via destinations or DLQs, avoiding extra infrastructure.
Publish jobs to SQS and trigger Lambda from the queue is unnecessary for this single-step flow and adds queue costs and operational overhead. While valid for decoupling, buffering, or fan-out, native async Lambda invocation already covers the core need here.
Enable Lambda provisioned concurrency does not make the call non-blocking. It only reduces cold-start latency. With synchronous RequestResponse, the client still waits for completion.
Use AWS Step Functions for asynchronous coordination is overkill for one task and increases cost and complexity. Step Functions is better when orchestrating multiple steps, retries, or branches.
Cameron’s Exam Tip
When you see keywords like return immediately, long-running (within 15 minutes), and cost-effective, think async Lambda invocation with InvocationType "Event". Reserve SQS for workload smoothing, many producers, or fan-out patterns, and Step Functions for multi-step orchestration. Know the difference between RequestResponse (sync) and Event (async), plus async features like retries and destinations.
Question 21
A ticketing startup, Pinnacle Tickets, plans to move its platform to AWS to handle a projected threefold surge in demand next quarter. Today a single web server hosts the app and keeps user sessions in memory, while a separate host runs MySQL for booking data. During large on-sales, memory on the web server climbs to 95 percent and the site slows noticeably. As part of the migration, the team will run the web tier on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. What additional change should the developer make to improve performance and scale?
-
✓ C. Use Amazon ElastiCache for Memcached for centralized session storage and store application data in Amazon RDS for MySQL
The web tier will scale out behind an Application Load Balancer, so sessions must not live on individual instances. Offloading sessions to a fast, shared, in-memory store removes the RAM bottleneck on each host and enables truly stateless web servers.
Use Amazon ElastiCache for Memcached for centralized session storage and store application data in Amazon RDS for MySQL is best because Memcached provides low-latency session storage across multiple instances, and RDS for MySQL handles durable relational data with managed backups, scaling, and maintenance.
Persist both session state and application records in a MySQL database that you run on a single Amazon EC2 instance concentrates load on one server, is harder to scale, and mixes transient session data with durable data.
Keep session state on the EC2 instance store and move application data to Amazon RDS for MySQL risks losing sessions during instance replacement or scaling because instance store is ephemeral.
Put both session state and application data in Amazon ElastiCache for Memcached misuses the cache for primary storage; Memcached is not durable and is suited for transient data such as sessions or hot cache, not system of record.
Cameron’s Exam Tip
When using Auto Scaling and a load balancer, make the web tier stateless: store sessions in a shared cache like ElastiCache and keep durable relational data in RDS or Aurora.
Question 22
An application is incurring high Amazon SQS costs because many ReceiveMessage calls return no messages. What change reduces cost while still allowing messages to be retrieved quickly?
-
✓ B. Enable SQS long polling with a 20-second wait time
The correct choice is Enable SQS long polling with a 20-second wait time. Long polling reduces the number of empty ReceiveMessage responses by waiting for messages to arrive (up to the wait time) before returning. This lowers per-request charges and still returns immediately when a message becomes available, maintaining quick retrieval.
The option Raise MaxNumberOfMessages to 8 on ReceiveMessage does not solve empty receives; it only increases batch size when messages are present.
The option Increase the visibility timeout to 180 seconds affects post-receive processing time, not the frequency of receive calls or empty responses.
The option Enable short polling with 0-second wait time increases the number of empty responses and cost, especially on low-traffic queues.
Cameron’s Exam Tip
When you see many empty ReceiveMessage calls and higher cost on SQS with low message rates, think enable long polling. Configure WaitTimeSeconds up to 20 seconds at the queue level or per request. Short polling (0 seconds) increases API calls, while MaxNumberOfMessages and visibility timeout do not address empty receive costs.
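A minimal sketch of the corrected receive call; the queue URL is a placeholder:

```python
# Sketch: long polling so empty ReceiveMessage responses are rare.
def receive_with_long_poll(sqs_client, queue_url):
    resp = sqs_client.receive_message(
        QueueUrl=queue_url,
        WaitTimeSeconds=20,        # long poll: wait up to 20s for messages
        MaxNumberOfMessages=10,    # batch up to 10 when they are available
    )
    return resp.get("Messages", [])
```

The call still returns immediately when a message is already waiting, so retrieval latency for real traffic is unchanged; only the empty-response churn (and its cost) drops.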
Question 23
At Aurora Logistics, a developer is building an inventory service backed by Amazon DynamoDB. The service must apply a set of Put, Update, and Delete changes across several items as a single atomic action so that either all modifications succeed together or none are applied. Which DynamoDB operation should be used to achieve this behavior?
-
✓ C. TransactWriteItems
The correct choice is TransactWriteItems because DynamoDB transactions provide ACID guarantees for groups of writes so that either every action in the set commits or none do. It supports Put, Update, Delete, and ConditionCheck across items and even across tables in the same account and Region.
BatchWriteItem is not atomic, which means some requests can succeed while others fail, and it does not support UpdateItem at all. This violates the all-or-nothing and update requirements.
PutItem can only write a single item and cannot group multiple item changes into one atomic operation.
Scan is a read-only operation that iterates over items; it cannot make writes or enforce transactional behavior.
Cameron’s Exam Tip
When you see multi-item, all-or-nothing writes or the need to combine Put, Update, and Delete atomically, think DynamoDB transactions with TransactWriteItems; if you see bulk writes without atomicity, that points to BatchWriteItem.
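A sketch of an all-or-nothing write mixing Put and Update; the table names, keys, and condition expression are illustrative, not from the question:

```python
# Sketch: record an order and decrement stock atomically. If the stock
# condition fails, neither the Put nor the Update is applied.
def place_order(dynamodb_client, order_id, sku):
    dynamodb_client.transact_write_items(
        TransactItems=[
            {"Put": {
                "TableName": "Orders",
                "Item": {"pk": {"S": order_id}, "status": {"S": "PLACED"}},
            }},
            {"Update": {
                "TableName": "Inventory",
                "Key": {"pk": {"S": sku}},
                "UpdateExpression": "SET stock = stock - :one",
                "ConditionExpression": "stock >= :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }},
        ]
    )
```

A failed condition raises a TransactionCanceledException and rolls back the whole set, which is exactly the all-or-nothing behavior the scenario demands.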
Question 24
How can instances in private subnets access Amazon S3 and Amazon DynamoDB privately when they have no public IP addresses and there is no internet gateway or NAT available?
-
✓ B. Use VPC gateway endpoints for S3 and DynamoDB; add routes in private route tables
The correct choice is Use VPC gateway endpoints for S3 and DynamoDB; add routes in private route tables. Amazon S3 and Amazon DynamoDB both support VPC gateway endpoints, which provide private connectivity entirely over the AWS network without requiring public IPs, an internet gateway, or a NAT device. You attach these endpoints to your route tables so traffic to the service prefixes is routed through the endpoint.
The option Deploy a NAT gateway and route S3 and DynamoDB traffic through it is wrong because NAT requires an internet gateway and egresses to the public internet, which violates the requirement to keep traffic private and avoid public IPs.
The option Create interface endpoints for S3 and DynamoDB is not the expected answer: the long-standing, no-cost pattern for DynamoDB is a gateway endpoint, which is what this scenario targets. Interface endpoint support for DynamoDB is a newer addition, and gateway endpoints remain free and require only route table entries, so the gateway pair is the right choice here.
The option Create a gateway endpoint for S3 and an interface endpoint for DynamoDB likewise misses the expected pattern of pairing both services with gateway endpoints.
Cameron’s Exam Tip
Memorize which AWS services use gateway endpoints versus interface endpoints. On the exam, if the services are S3 and DynamoDB and the requirement is to avoid public IPs, NAT, and internet gateways, choose gateway endpoints. For most other AWS services, think interface endpoints (PrivateLink).
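A sketch of creating both gateway endpoints and associating them with the private route tables; the VPC ID, route table IDs, and Region are placeholders:

```python
# Sketch: gateway endpoints for S3 and DynamoDB. Passing RouteTableIds
# makes AWS add the service-prefix routes to those tables automatically.
def add_gateway_endpoints(ec2_client, vpc_id, route_table_ids, region="us-east-1"):
    endpoint_ids = []
    for service in ("s3", "dynamodb"):
        resp = ec2_client.create_vpc_endpoint(
            VpcEndpointType="Gateway",
            VpcId=vpc_id,
            ServiceName=f"com.amazonaws.{region}.{service}",
            RouteTableIds=route_table_ids,
        )
        endpoint_ids.append(resp["VpcEndpoint"]["VpcEndpointId"])
    return endpoint_ids
```

After this, instances in the private subnets reach both services over the AWS network with no public IPs, internet gateway, or NAT in the path.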
Question 25
A retail analytics startup, Lattice Mart, runs REST endpoints in its own data center and documents them with OpenAPI (Swagger). The team is moving to AWS with Amazon API Gateway and AWS Lambda and has provided an OpenAPI 3.0 file named store-api-v3.json. You need to create a new API in API Gateway and automatically generate its resources and methods from this definition. What is the simplest way to proceed?
-
✓ C. Import the OpenAPI or Swagger definition into Amazon API Gateway using the console
The most direct method is to use the API Gateway import capability. API Gateway can ingest OpenAPI v2 or v3 definitions and generate the REST API’s resources and methods in one step. Using the console to import the file is fast and requires minimal setup.
Import the OpenAPI or Swagger definition into Amazon API Gateway using the console is correct because API Gateway natively supports importing OpenAPI files to create or overwrite an API with its resources and methods.
Use AWS SAM to deploy the API from the OpenAPI file is not the simplest because it introduces template authoring, packaging, and deployment steps, which are more complex than a one-time console import.
Manually create request and response models and mapping templates to match the specification is time-consuming and does not automatically build the API from the definition, so it fails the simplicity requirement.
AWS AppSync is for GraphQL APIs and does not apply to importing or hosting REST APIs defined with OpenAPI.
Cameron’s Exam Tip
When a question emphasizes easiest or quickest, prefer console-based imports or built-in wizards; when it emphasizes repeatability or automation, prefer infrastructure as code such as SAM or CloudFormation.
Jira, Scrum & AI Certification
Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.