AZ-204 Azure Developer Practice Exams

Microsoft AZ-204 Certification Exam Topics

If you want to get certified in the Microsoft Azure Developer Associate (AZ-204) exam, you need more than just study materials. You need to practice by completing AZ-204 practice exams, reviewing Azure development sample questions, and spending time with a reliable AZ-204 certification exam simulator.

In this quick AZ-204 practice test tutorial, we provide a carefully written set of AZ-204 exam questions and answers. These questions mirror the tone and difficulty of the actual AZ-204 exam, helping you understand how prepared you truly are.

AZ-204 Developer Associate Practice Questions

Study thoroughly, practice consistently, and gain hands-on familiarity with Azure Functions, microservices, REST APIs, identity, storage, and messaging services. With consistent preparation, you will be ready to pass the AZ-204 certification exam with confidence.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Certification Practice Exam Questions

A retail analytics company named Harbor Systems is building an application to transfer files from an on-premises NAS to Azure Blob Storage. The application stores keys, secrets, and certificates in Azure Key Vault and uses the Key Vault APIs. You need to configure the environment so that an accidental deletion of the key vault or its objects can be recovered for 60 days after deletion. What should you do?

  • ❏ A. Assign a user assigned managed identity and grant it Key Vault access via role based access control

  • ❏ B. Run the Add-AzKeyVaultKey PowerShell cmdlet to add a key to the vault

  • ❏ C. Enable soft delete only by running az keyvault update --enable-soft-delete true

  • ❏ D. Run az keyvault update --enable-soft-delete true --enable-purge-protection true to enable both soft delete and purge protection

A retail technology company named Meridian Apps is deploying a customer portal to an Azure App Service Plan and it must scale automatically to absorb sudden spikes in traffic without downtime. Which metric can be used to trigger autoscale actions based on application performance?

  • ❏ A. Average Response Time

  • ❏ B. Memory Percentage

  • ❏ C. HTTP Queue Length

  • ❏ D. CPU Percentage

A development team at NimbusSoft is building a .NET service that must upload a local file to Azure Blob Storage using the Azure Storage SDK for .NET. Which code approach correctly uploads the file using the Azure.Storage.Blobs library?

  • ❏ A. An implementation that parses a connection string into a CloudStorageAccount then creates a CloudBlobClient and a CloudBlockBlob and calls UploadFromStream with a file stream

  • ❏ B. A pattern that creates a BlobServiceClient then attempts to call UploadAsync with an uploadFileStream parameter that is not declared in the snippet

  • ❏ C. A snippet that constructs a BlobClient from the connection string for the “docs-archive” container and the “report.txt” blob then opens a FileStream and calls Upload with overwrite enabled

  • ❏ D. A flow that creates a BlobServiceClient then obtains a BlobContainerClient and BlobClient and calls Upload passing an opened FileStream without specifying overwrite behavior

A small firm called Clearwater has an App Service plan named planB that is currently on the Free pricing tier and they intend to host an Azure Function that is triggered by a storage queue while keeping costs as low as possible. Which setting should be selected for the Azure App Service feature to satisfy the requirement?

  • ❏ A. Continuous deployment

  • ❏ B. System assigned managed identity

  • ❏ C. Always On

A developer at Northbridge Systems is building a B2B web portal that uses Azure B2B collaboration for sign in. Trial users may register with any email address and they initially authenticate by using a one-time passcode. When a trial account is upgraded to a paid subscription the user data must remain intact and the user must sign in using federation with their corporate identity provider. Which Microsoft Graph API property should be set to force the guest to re-accept their invitation and convert authentication from one-time passcode to federation?

  • ❏ A. userFlowType

  • ❏ B. invitedUser

  • ❏ C. resetRedemption

  • ❏ D. Cloud Identity

You are deploying three microservices named paySvc sessionSvc and archiveSvc to a new Azure Container Apps environment called production hub. Each service must persist data to storage. paySvc must keep data only visible to its container and the storage must be limited to the container local disk. sessionSvc must retain data for the lifetime of a replica and allow multiple containers within the same replica to mount the same storage location. archiveSvc must preserve data beyond the replica lifetime allow multiple containers to access the same storage and support per object permissions. Which storage type should you choose for archiveSvc?

  • ❏ A. Azure Files

  • ❏ B. Container file system

  • ❏ C. Azure Blob Storage

  • ❏ D. Ephemeral volume

A retail analytics company called Skyline Finance uses an HTTP triggered Azure Function to process data and then write results to Azure Blob Storage with a blob output binding. The function is invoked by clients and the execution stops after three minutes. The function must complete processing the blob data without timing out. Would adopting the Durable Functions asynchronous pattern meet this requirement?

  • ❏ A. No

  • ❏ B. Yes

A regional clinic group called Harborcare is building a web portal to store scanned patient registration forms. The saved forms must remain confidential even if an outside party downloads them from storage. The proposed solution is to create an Azure Cosmos DB account with Storage Service Encryption enabled and then store the scanned files in that Cosmos DB account. Will this implementation meet the confidentiality requirement?

  • ❏ A. Yes this approach satisfies the confidentiality requirement

  • ❏ B. No this design does not satisfy the confidentiality requirement

An engineer at BlueWave Systems needs to launch a container with Azure Container Instances and provide persistent storage from a specified Azure Storage account. Which configuration will allow the container instance to access that Azure Storage account after it is deployed?

  • ❏ A. Provide the storage account connection string as an environment variable when creating the container instance

  • ❏ B. Set the restart policy to Always and pass the storage account connection string as a command line argument when launching the container instance

  • ❏ C. Mount an Azure Files share from the target storage account as a volume into the container instance during creation

  • ❏ D. Assign a system assigned managed identity to the container instance and grant it a Data Storage role on the storage account so the container can authenticate without keys

A team at Silverline deployed a .NET web application to Azure App Service and they will monitor it using Application Insights. Which Application Insights feature should they enable to record performance traces while keeping the impact on end users minimal?

  • ❏ A. Multistep web test

  • ❏ B. Live Metrics Stream

  • ❏ C. Profiler

  • ❏ D. Snapshot Debugger

A logistics startup named BlueHaven stores several terabytes across dozens of containers in an Azure storage account and they must copy every blob to a new storage account. The transfer must be automated, require minimal operator intervention, and be able to resume if interrupted. Which tool should they choose?

  • ❏ A. Azure Data Factory

  • ❏ B. Azure Storage Explorer

  • ❏ C. AzCopy

  • ❏ D. .NET Storage Client Library

At Bluewave Systems a scheduled function is configured with the timer expression “0 5 * * * *” in its settings. How frequently will this function execute?

  • ❏ A. Every five minutes throughout each hour

  • ❏ B. Once daily at five in the morning

  • ❏ C. At five minutes past every hour of every day

  • ❏ D. The schedule string is invalid so the function will not run

StreamHaven operates a web application that delivers live video streams to customers and the pipeline uses automated build and deployment workflows. You need to ensure the service remains highly available and that viewers get a steady playback experience. You also want data to be stored in the geographic region closest to each viewer. Would adding Azure Cache for Redis to the architecture satisfy these requirements?

  • ❏ A. Yes

  • ❏ B. No

AcmeSoft runs a customer API on Azure App Service and it needs to choose between vertical scaling and horizontal scaling. What does vertical scaling of the application do?

  • ❏ A. Increase the number of instances of the web application so additional copies run concurrently

  • ❏ B. Upgrade the App Service plan to a larger pricing tier which provides extra CPU memory disk and premium features

  • ❏ C. Deploy another instance of the app in a different region to improve latency for remote users

  • ❏ D. Use Cloud Load Balancing to distribute traffic across endpoints

A solution architect is building an Azure pipeline to collect point of sale telemetry for a retail chain named Harbor Foods. The deployment will cover about 2,400 outlets worldwide. Each device produces about 2.4 megabytes of data every 24 hours. Each location has between one and six devices that send data. All device data must be stored in Azure Blob storage and must be joinable by a device identifier. The chain expects to add more locations over time. The proposed ingestion uses Azure Event Grid with the device identifier as the partition key and with capture enabled. Does this design meet the requirements?

  • ❏ A. Yes the design meets the requirements

  • ❏ B. No the design does not meet the requirements

A regional finance firm runs a Microsoft Entra ID tenant and employees sometimes sign in from public internet IPs. You must ensure that when users authenticate from an unfamiliar IP address they are automatically prompted to change their passwords. You decide to enable Microsoft Entra Privileged Identity Management. Will this approach satisfy the requirement?

  • ❏ A. Yes configuring Privileged Identity Management will fulfill the requirement

  • ❏ B. No configuring Privileged Identity Management will not meet the requirement

Which REST API endpoint should you call to upload a ZIP archive to a web application through the Kudu SCM service on Contoso Cloud?

  • ❏ A. PUT /upload/storage/v1/b/my-bucket/o

  • ❏ B. POST /api/deploy

  • ❏ C. PUT /api/zip/{targetPath}/

  • ❏ D. PUT /api/deployments/{id}

Your team manages a Service Bus namespace that hosts a topic named OrdersTopic. You plan to add a subscription named PrioritySub to OrdersTopic and you want to filter messages by their system properties and then apply an action that tags each matched message. Which filter type should you configure?

  • ❏ A. Correlation filter

  • ❏ B. Boolean filter

  • ❏ C. SQL filter

A team at Meridian Climate Systems is building an ASP.NET Core web application that uses Azure Front Door to deliver custom climate CSV datasets to academic users. The datasets are regenerated every 12 hours and specific files must be invalidated in the Front Door cache according to response header values. Which cache purge method should be used to remove individual assets from the Front Door cache?

  • ❏ A. Wildcard path purge

  • ❏ B. Root domain purge

  • ❏ C. Single path purge

How would you describe an App Service Plan in Contoso Cloud and what function does it serve for web applications?

  • ❏ A. It is a managed serverless execution environment similar to Cloud Run

  • ❏ B. It is a group of compute resources allocated for hosting a web application

  • ❏ C. It is a single container image running inside a virtual machine

  • ❏ D. It is a dedicated physical isolation zone with exclusive networking for only your tenant

A cloud startup named Northpoint Solutions enables Geo Redundant Storage for an Azure storage account and requires clarity on redundancy. How many total replicas of the stored data does Azure retain?

  • ❏ A. Three replicas

  • ❏ B. Six replicas

  • ❏ C. One replica

  • ❏ D. Two replicas

You deploy a new Azure App Service web app for a client named Northbridge Solutions and you must require users to sign in with Microsoft Entra ID. You need to configure the app’s authentication and authorization. What configuration step should you perform first?

  • ❏ A. Assign a custom domain name

  • ❏ B. Create a new app configuration setting

  • ❏ C. Register an identity provider

  • ❏ D. Upload a private SSL certificate

  • ❏ E. Enable a system assigned managed identity

You are building a web application for Meridian Software that allows authenticated users from the Contoso Identity service to read their own profile attributes givenName and identities. Which minimal permission must be granted to the application registration object associated with the app?

  • ❏ A. Delegated Permissions with User.ReadWrite

  • ❏ B. Application Permissions with User.Read

  • ❏ C. Delegated Permissions with User.Read

  • ❏ D. Application Permissions with User.Read.All

A development team at Alderbrook Systems is building a service that stores its data in Azure Cosmos DB. The application requires minimal read and write latency and it must remain operational even if an entire region becomes unavailable. Which Cosmos DB configuration will satisfy these constraints?

  • ❏ A. Configure Cosmos DB for single region write operations and choose Eventual consistency

  • ❏ B. Enable multi-region write replication and require Strong consistency

  • ❏ C. Enable multi-region write replication and use Session consistency

  • ❏ D. Configure single region write mode and set Bounded staleness consistency

A development group at CedarStream is deploying several microservices on Azure Container Apps and they need to observe and troubleshoot them. Which capability should they use to perform live debugging from inside a running container?

  • ❏ A. Azure Monitor Log Analytics

  • ❏ B. Interactive container shell access

  • ❏ C. Azure Monitor metrics

  • ❏ D. Azure Container Registry

Atlas Retail is building an Azure design to collect inventory snapshots from more than 3,500 retail sites worldwide and each site will upload inventory files every 90 minutes to an Azure Blob storage account for processing. The architecture must begin processing when new blobs are created, filter data by store region, invoke an Azure Logic App to persist the results into Azure Cosmos DB, provide high availability with geographic distribution, allow up to 48 hours for retry attempts, and implement exponential back off for processing retries. Which technology should be used for the “Event Handler”?

  • ❏ A. Azure Event Grid

  • ❏ B. Azure Blob Storage

  • ❏ C. Azure Service Bus

  • ❏ D. Azure Functions

  • ❏ E. Azure Event Hubs

  • ❏ F. Azure Logic App

You publish an Azure App Service API to a Windows based deployment slot named DevSlot and you add two more slots called Staging and Live. You turn on auto swap for the Live slot and you need the application to run initialization scripts and to have resources ready before the swap runs. The proposed solution is to add the applicationInitialization element to the application’s web.config and point it to custom initialization endpoints that execute the scripts. Does this solution meet the requirement?

  • ❏ A. Yes it will satisfy the requirement

  • ❏ B. No it will not satisfy the requirement

Your engineering group deployed an Azure App Service web application to a client production slot and turned on Always On and the Application Insights extension. After releasing a code change you start seeing a large number of failed requests and exceptions. You must inspect performance metrics and failure counts almost in real time. Which Application Insights feature will provide that capability?

  • ❏ A. Application Map

  • ❏ B. Snapshot Debugger

  • ❏ C. Live Metrics Stream

  • ❏ D. Profiler

  • ❏ E. Smart Detection

A development group at Bluepeak Solutions plans to run a container on Azure Container Instances for a pilot project and the container must retain data using a mounted volume. Which type of Azure storage can be attached to an Azure Container Instance as a mount point?

  • ❏ A. Azure Managed Disks

  • ❏ B. Azure Files

  • ❏ C. Azure Blob Storage

  • ❏ D. Azure Cosmos DB

A development team at a mid-sized firm needs to provision a new Azure App Service web application and then upload a deployment package by using the Azure CLI. Which command sequence accomplishes that task?

  • ❏ A. az appservice create --name siteapp --resource-group myResourceGroup && az appservice deploy --name siteapp --resource-group myResourceGroup --src /home/deploy/app.zip

  • ❏ B. az webapp up --name siteapp --resource-group myResourceGroup --sku F1 --location eastus --src-path /home/deploy/app.zip

  • ❏ C. az webapp create --name siteapp --resource-group myResourceGroup && az webapp deployment source config-zip --name siteapp --resource-group myResourceGroup --src /home/deploy/app.zip

  • ❏ D. az create webapp --name siteapp --resource-group myResourceGroup && az deploy webapp --name siteapp --resource-group myResourceGroup --src /home/deploy/app.zip

A global retail operator called Meridian Markets needs an Azure solution to ingest point of sale terminal telemetry from 2,400 locations worldwide. Each terminal generates about 3 megabytes of data every 24 hours. Each location has between one and six terminals that send data. The incoming records must be stored in Azure Blob storage and they must be correlated by terminal identifier. The operator expects to open more stores in the future. The proposed solution is to provision an Azure Event Hubs namespace and to use the terminal identifier as the partition key while enabling capture to Blob storage. Does this approach satisfy the requirements?

  • ❏ A. No

  • ❏ B. Yes

A community healthcare provider named Harborpoint Health is building a web portal to archive scanned patient admission forms. The portal must ensure that if a third party downloads a stored form the integrity and confidentiality of the form remain intact. You plan to save the scanned admission forms as Azure Key Vault secrets. Does this approach satisfy the requirement?

  • ❏ A. No

  • ❏ B. Yes

A development team at Northbridge Systems is deploying an Azure Resource Manager template that creates a virtual network named corpNet and multiple subnet resources. The team wants the subnets to be deployed only after the virtual network finishes deploying. Which ARM template feature enforces that dependency?

  • ❏ A. reference()

  • ❏ B. condition

  • ❏ C. dependsOn

  • ❏ D. resourceId()

You are building three microservices named appAlpha appBeta and appGamma and you deploy them to a Contoso Container Apps environment. Each service requires persistent storage and the following constraints apply. appAlpha must keep data visible only to its container and must be limited to the container’s available disk. appBeta must retain data for the duration of the replica and allow several containers in the same replica to mount the identical storage. appGamma must preserve data beyond replica termination while permitting multiple containers to access the data and support per object access controls. Which storage type should you choose for appBeta?

  • ❏ A. Azure Blob Storage

  • ❏ B. Azure Files Storage

  • ❏ C. Ephemeral volume

  • ❏ D. Container file system

Contoso Digital has an App Service plan named AppSvcPlanA configured on the Basic B2 pricing tier and it hosts an App Service web app named SiteA. The team intends to enable scheduled autoscaling for AppSvcPlanA and they must minimize the monthly cost for SiteA. The proposed solution is to scale out AppSvcPlanA. Does this solution meet the cost minimization goal?

  • ❏ A. Upgrade AppSvcPlanA to Standard tier and configure schedule-based autoscale

  • ❏ B. Scale up AppSvcPlanA to a larger instance size

  • ❏ C. No

  • ❏ D. Yes

Which type of storage container should a developer at Silverleaf Technologies provision to collect diagnostic logs and performance metrics from multiple Azure services so that Azure Monitor can analyze them?

  • ❏ A. Azure Event Hubs

  • ❏ B. Azure Storage account with append blobs

  • ❏ C. Log Analytics workspace

  • ❏ D. Azure Monitor resource

A regional retail startup stores user uploads in Azure Blob Storage and applies a lifecycle rule that moves objects to the archive tier after 45 days. Clients have requested a service level agreement for viewing content older than 45 days. What minimum data recovery SLA should you document?

  • ❏ A. At least 24 hours for retrieval

  • ❏ B. One to fifteen hours for standard rehydration

  • ❏ C. Between zero and sixty minutes for high priority rehydration

  • ❏ D. At least 48 hours for guaranteed availability

Infrastructure templates use a declarative syntax rather than procedural scripts. Why does that style remove the need to manually order resource creation when provisioning cloud services?

  • ❏ A. Terraform

  • ❏ B. You do not have to sequence resource creation manually because the template engine determines dependencies and deploys related resources in the correct order

  • ❏ C. Declarative templates integrate well with version control systems like GitHub

  • ❏ D. Declarative templates typically only need small manual edits before each deployment to avoid naming conflicts

A regional finance startup named Northwind Analytics needs a messaging architecture that uses a publish subscribe pattern and avoids continuous polling. Which Azure services can satisfy this requirement? (Choose 2)

  • ❏ A. Event Hubs

  • ❏ B. Azure Service Bus

  • ❏ C. Azure Queue Storage

  • ❏ D. Event Grid

A developer at Horizon Freight needs to securely obtain a database connection string that is stored in Azure Key Vault for a .NET service. Which step is required to retrieve the secret by using the Azure SDK for .NET?

  • ❏ A. Register Microsoft.Extensions.Configuration.AzureKeyVault in the application configuration so secrets are available during startup

  • ❏ B. Instantiate the legacy KeyVaultClient and attempt to call it without supplying credentials

  • ❏ C. Acquire a managed identity token and invoke the Key Vault REST API directly instead of using the Azure SDK

  • ❏ D. Authenticate using Azure.Identity.DefaultAzureCredential and then use SecretClient to fetch the secret

A development team at Contoso Cloud runs a web app on Azure App Service and they need to open the Kudu management console for their app. Which URL should they use to reach the Kudu companion application?

What factors influence the price you pay for an Azure Cache for Redis deployment?

  • ❏ A. Charges per operation or per transaction

  • ❏ B. Region, allocated memory and pricing tier

  • ❏ C. Region and pricing tier

  • ❏ D. Region, pricing tier and hourly billing

A serverless function at Nimbus Apps reads messages from an Azure Storage queue named orders queue and it must handle messages that repeatedly fail processing so they can be examined separately. Which configuration should you apply to ensure repeatedly failing messages are moved to a separate queue for later investigation?

  • ❏ A. Switch to Azure Service Bus and enable dead lettering

  • ❏ B. Implement custom code that moves recurring failures to a separate investigation queue

  • ❏ C. Configure maxDequeueCount in host.json to a value greater than 2

  • ❏ D. Increase visibilityTimeout on the queue trigger binding

VitaCrunch Foods is creating a rewards system. When a customer buys a snack at any of 125 affiliated outlets the purchase event is sent to Azure Event Hubs. Each outlet is assigned a unique outlet identifier that serves as the rewards program key. Outlets must be able to be enrolled or removed at any time. Outlets must only be able to submit transactions for their own identifier. How should you allow outlets to send their purchase events to Event Hubs?

  • ❏ A. Create a partition for each outlet

  • ❏ B. Provision a separate namespace for every outlet

  • ❏ C. Use publisher policies for outlets

Refer to the Northfield Systems case study at the link provided below and answer the following question. Open the link in a new tab and do not close the test tab. https://example.com/doc/9f8Kx7 Which types of availability tests can you use to check the company website? (Choose 2)

  • ❏ A. Multi step web test

  • ❏ B. Standard availability test

  • ❏ C. Custom testing via the TrackAvailability API

  • ❏ D. URL ping test

Tailwind Software hosts an ASP.NET Core API in an Azure Web App and it requires custom claims from its Entra ID tenant for authorization. The custom claims must be removed automatically when the app registration is deleted. You need those custom claims to be present in the user access token. What action will achieve this?

  • ❏ A. Add the groups to the groupMembershipClaims attribute in the application manifest

  • ❏ B. Require the https://graph.microsoft.com/.default scope during sign in

  • ❏ C. Add roles to the appRoles attribute in the application manifest

  • ❏ D. Implement custom middleware to query the identity provider for role claims at runtime

  • ❏ E. Use the OAuth 2.0 authorization code flow for the web app

Greenfield Studios is building a platform to archive millions of photographs in Azure Blob Storage. The platform must store camera Exchangeable Image File Format metadata as blob metadata when images are uploaded and it must read that metadata while minimizing bandwidth and processing overhead using the REST API. Which HTTP verb should the platform use to retrieve Exif metadata from a blob?

  • ❏ A. GET

  • ❏ B. PUT

  • ❏ C. POST

  • ❏ D. HEAD

Your team at Northgate Software is building an Azure hosted web application that must support user sign in and you plan to rely on Microsoft Entra ID for the identity provider. Which approach should you implement to provide user authentication for the application?

  • ❏ A. Use Microsoft Entra External ID formerly known as Azure AD B2C with a custom policy

  • ❏ B. Deploy a self hosted OpenID Connect provider and perform authentication against it

  • ❏ C. Configure the application to use Microsoft Entra ID as an identity provider using OpenID Connect or OAuth 2.0

  • ❏ D. Call the Microsoft Graph API to try to validate user credentials at runtime

A cloud engineering team at Northgate Systems maintains an Azure subscription that contains a storage account, a resource group, a blob container, and a file share. A colleague named Alex Palmer deployed a virtual machine and a storage account by using an Azure Resource Manager template. You must locate the ARM template that Alex used. Solution: You access the ‘Resource Group’ blade. Does this solution meet the requirement?

  • ❏ A. No

  • ❏ B. Yes

A retail startup named NimbusShop is building an order routing system using Azure Service Bus topics and it will publish order messages to a topic. Each message will set OrderNumber as the MessageId and include a Quantity application property plus a Priority property that can be High, Medium, or Low. One subscription must receive every order and another subscription must receive only messages whose Priority equals High while a third subscription must accept orders with a Quantity of at least 60. Which filter type should the subscription that only receives High priority orders use?

  • ❏ A. CorrelationFilter

  • ❏ B. TrueFilter

  • ❏ C. SqlFilter

AZ-204 Practice Exam Questions Answered

A retail analytics company named Harbor Systems is building an application to transfer files from an on-premises NAS to Azure Blob Storage. The application stores keys, secrets, and certificates in Azure Key Vault and uses the Key Vault APIs. You need to configure the environment so that an accidental deletion of the key vault or its objects can be recovered for 60 days after deletion. What should you do?

  • ✓ D. Run az keyvault update --enable-soft-delete true --enable-purge-protection true to enable both soft delete and purge protection

Run az keyvault update --enable-soft-delete true --enable-purge-protection true to enable both soft delete and purge protection is correct.

Enabling soft delete causes deleted key vault objects and the vault itself to be retained in a recoverable state for the soft delete retention window and enabling purge protection prevents those deleted items from being permanently purged before the retention window ends. Together these two settings ensure an accidental deletion can be recovered for the required 60 day period by keeping the deleted resources recoverable and blocking premature permanent deletion.

Once purge protection is enabled it cannot be turned off and that ensures administrators or scripts cannot remove a deleted vault or its objects permanently until the retention period has passed.

Assign a user assigned managed identity and grant it Key Vault access via role based access control is incorrect because identities and access control govern who can access or manage Key Vault but they do not provide retention or recovery of deleted vaults or keys.

Run the Add-AzKeyVaultKey PowerShell cmdlet to add a key to the vault is incorrect because adding a key does not configure deletion recovery or retention settings and it does not protect the vault from permanent deletion.

Enable soft delete only by running az keyvault update --enable-soft-delete true is incorrect because soft delete alone preserves deleted items but it does not stop an authorized user or process from purging those deleted items. Purge protection is required to prevent permanent deletion during the retention window.

When a question asks about protecting Key Vault objects from permanent deletion think of both soft delete and purge protection and choose the option that enables both features.

A retail technology company named Meridian Apps is deploying a customer portal to an Azure App Service Plan and it must scale automatically to absorb sudden spikes in traffic without downtime. Which metric can be used to trigger autoscale actions based on application performance?

  • ✓ D. CPU Percentage

The correct option is CPU Percentage.

CPU Percentage measures the processor load on App Service plan instances and is a built in metric that Azure Monitor autoscale can use to trigger scale out and scale in rules. Using CPU percentage lets the platform add instances when CPU usage climbs during sudden traffic spikes and then remove instances when demand falls to avoid downtime and control costs.

Average Response Time is not a built in autoscale metric for App Service. Response time is typically collected by Application Insights and you would need to create a custom metric or alert to drive autoscale from that signal.

Memory Percentage is not the primary platform autoscale signal for App Service plans. Memory metrics may not be exposed in the same way across all App Service environments and using them usually requires custom metrics or additional configuration.

HTTP Queue Length refers to queued requests at the web server level and it is not the standard autoscale trigger for App Service. Some platforms or custom scenarios may expose queue length but it is not the default, reliable autoscale metric used to absorb sudden spikes in App Service plans.

When choices include both platform provided metrics and application level signals pick a built in resource pressure metric such as CPU Percentage. Remember that things like response time often require Application Insights or custom metrics to use for autoscale.

A development team at NimbusSoft is building a .NET service that must upload a local file to Azure Blob Storage using the Azure Storage SDK for .NET. Which code approach correctly uploads the file using the Azure.Storage.Blobs library?

  • ✓ C. A snippet that constructs a BlobClient from the connection string for the “docs-archive” container and the “report.txt” blob then opens a FileStream and calls Upload with overwrite enabled

A snippet that constructs a BlobClient from the connection string for the “docs-archive” container and the “report.txt” blob then opens a FileStream and calls Upload with overwrite enabled is correct.

This approach directly uses the Azure.Storage.Blobs client model and it is the simplest pattern for a single blob upload. Creating a BlobClient for the specific container and blob and then opening a FileStream to call Upload with overwrite enabled matches the Azure.Storage.Blobs API and it prevents failures when the blob already exists by replacing it.
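
For reference, a minimal sketch of that pattern with the Azure.Storage.Blobs package. The environment variable name and local file path are placeholders rather than values taken from the question.

    using System;
    using System.IO;
    using Azure.Storage.Blobs;

    class UploadExample
    {
        static void Main()
        {
            // Placeholder environment variable that holds the storage connection string.
            string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");

            // BlobClient addresses a single blob in the "docs-archive" container.
            var blobClient = new BlobClient(connectionString, "docs-archive", "report.txt");

            // Open the local file and upload it, replacing any existing blob.
            using FileStream fileStream = File.OpenRead("report.txt");
            blobClient.Upload(fileStream, overwrite: true);
        }
    }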

An implementation that parses a connection string into a CloudStorageAccount then creates a CloudBlobClient and a CloudBlockBlob and calls UploadFromStream with a file stream is incorrect because it uses the older WindowsAzure.Storage types. That legacy library used CloudStorageAccount and CloudBlockBlob and the method names differ from the newer Azure.Storage.Blobs library. The question asks specifically about Azure.Storage.Blobs so the legacy pattern does not apply and is less likely on newer exams.

A pattern that creates a BlobServiceClient then attempts to call UploadAsync with an uploadFileStream parameter that is not declared in the snippet is incorrect because the snippet is incomplete and references a variable that is not defined. Calling UploadAsync requires a valid Stream or properly constructed parameters and the shown code would not compile or run as is.

A flow that creates a BlobServiceClient then obtains a BlobContainerClient and BlobClient and calls Upload passing an opened FileStream without specifying overwrite behavior is incorrect because calling Upload without setting overwrite may throw an exception if the target blob already exists. You must explicitly allow overwrite or handle the existing blob case when using that overload.

When you see code that targets a single blob prefer using BlobClient and check the Upload overloads for an overwrite option so you avoid runtime conflicts with existing blobs.

A small firm called Clearwater has an App Service plan named planB that is currently on the Free pricing tier and they intend to host an Azure Function that is triggered by a storage queue while keeping costs as low as possible. Which setting should be selected for the Azure App Service feature to satisfy the requirement?

  • ✓ C. Always On

The correct option is Always On.

Always On keeps the App Service process loaded so background triggers such as a storage queue can be processed reliably instead of the app going idle. When you host an Azure Function on an App Service plan you need the app to stay running for queue triggers to fire consistently and enabling Always On achieves that. Note that Always On is not available on the Free tier so the plan must be moved to a paid tier such as Basic or higher to use this setting.

Continuous deployment is focused on automatically deploying code from a source repository and it does not keep the app process running to handle background queue triggers.

System assigned managed identity provides an identity for the app to authenticate to other Azure resources and it does not affect whether the app stays active or processes queue triggers.

When a question asks how to keep background functions running on an App Service plan remember that Always On keeps the process loaded and that you must not be on the Free tier to enable it.

A developer at Northbridge Systems is building a B2B web portal that uses Azure B2B collaboration for sign in. Trial users may register with any email address and they initially authenticate by using a one time passcode. When a trial account is upgraded to a paid subscription the user data must remain intact and the user must sign in using federation with their corporate identity provider. Which Microsoft Graph API property should be set to force the guest to re accept their invitation and convert authentication from one time passcode to federation?

  • ✓ C. resetRedemption

The correct option is resetRedemption.

Setting the resetRedemption property on the Microsoft Graph invitation resource forces the guest to redeem the invitation again and accept it anew. This forces Azure AD to prompt the guest to sign in again and allows the authentication method to change from a one time passcode to federation when the guest is converted to use their corporate identity provider.

userFlowType is an Azure AD B2C concept that controls user flows and policies and it is not a Microsoft Graph invitation property used to force a guest to accept an invitation again.

invitedUser refers to invited user details and attributes but it does not trigger a new redemption and it does not change the authentication method by itself.

Cloud Identity is not a Microsoft Graph invitation property and it refers to broader identity concepts or other products so it is not the setting used to force a re acceptance or convert authentication to federation.

When a question asks how to force a guest to re accept an invite think about the Microsoft Graph invitation resource and the resetRedemption property because that is the documented way to require a new redemption and change the sign in experience.

You are deploying three microservices named paySvc sessionSvc and archiveSvc to a new Azure Container Apps environment called production hub. Each service must persist data to storage. paySvc must keep data only visible to its container and the storage must be limited to the container local disk. sessionSvc must retain data for the lifetime of a replica and allow multiple containers within the same replica to mount the same storage location. archiveSvc must preserve data beyond the replica lifetime allow multiple containers to access the same storage and support per object permissions. Which storage type should you choose for archiveSvc?

  • ✓ C. Azure Blob Storage

Azure Blob Storage is the correct choice for archiveSvc.

Azure Blob Storage is designed for durable object storage and it preserves data beyond the lifetime of replicas. It allows multiple containers and replicas to access the same objects concurrently through REST APIs or SDKs and it provides fine grained, per object access controls with shared access signatures and Azure AD based permissions which are important for archive scenarios.
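
As an illustration of the per object access model, here is a hedged sketch that issues a read-only SAS URL for a single archived blob with the Azure.Storage.Blobs SDK. The container and blob names are placeholders, and the client must be built from a connection string that includes the account key for SAS generation to succeed.

    using System;
    using Azure.Storage.Blobs;
    using Azure.Storage.Sas;

    class ArchiveSasExample
    {
        static void Main()
        {
            // Placeholder connection string that includes the storage account key.
            string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");

            // Address one archived object in a placeholder container.
            var blobClient = new BlobClient(connectionString, "archive", "2024/inventory-0001.json");

            // Grant read access to just this blob for one hour.
            Uri sasUri = blobClient.GenerateSasUri(BlobSasPermissions.Read, DateTimeOffset.UtcNow.AddHours(1));
            Console.WriteLine(sasUri);
        }
    }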

Azure Files is not the best fit because it is a network file share that can be mounted by multiple containers but it provides file share semantics rather than native object storage features and it does not offer the same object level access model and archival lifecycle features that blob storage provides.

Container file system refers to the local filesystem inside a container and it is limited to that container instance. It does not persist data beyond the container or replica lifecycle and it cannot be shared across replicas so it fails the archive requirements.

Ephemeral volume is intentionally short lived and tied to the lifecycle of the replica. It is useful for fast temporary storage but it does not preserve data beyond replica termination and it is not suitable for archival persistence.

When a question emphasizes preserving data beyond replica lifetime and requiring per object permissions think object storage such as Azure Blob Storage rather than local or ephemeral volumes.

A retail analytics company called Skyline Finance uses an HTTP triggered Azure Function to process data and then write results to Azure Blob Storage with a blob output binding. The function is invoked by clients and the execution stops after three minutes. The function must complete processing the blob data without timing out. Would adopting the Durable Functions asynchronous pattern meet this requirement?

  • ✓ B. Yes

Yes is correct. Using the Durable Functions asynchronous pattern allows the HTTP-triggered function to start a durable orchestration and return immediately while the orchestration continues processing the blob data so the work can complete without the client request timing out.

The asynchronous pattern uses an HTTP starter function to invoke a durable orchestrator function. The starter returns a 202 Accepted response with status and management URLs and the orchestrator runs one or more activity functions to do the long running work. The orchestrator persists state and checkpoints progress so the workflow can continue for minutes or hours and it can write results to Blob Storage using output bindings or the SDK.

The durable approach prevents the HTTP request from blocking until the blob processing finishes. The client can poll the provided status URL or use the provided callbacks to get the final result while the actual processing continues server side, and this avoids the three minute execution timeout seen with a simple synchronous HTTP function.
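
A hedged sketch of the pattern using the in-process Durable Functions extension (Microsoft.Azure.WebJobs.Extensions.DurableTask). The function names, input value, and blob path are illustrative and not taken from the question.

    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.DurableTask;
    using Microsoft.Azure.WebJobs.Extensions.Http;

    public static class BlobProcessing
    {
        // HTTP starter returns 202 Accepted with status URLs and does not wait for the work.
        [FunctionName("ProcessBlob_HttpStart")]
        public static async Task<HttpResponseMessage> HttpStart(
            [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
            [DurableClient] IDurableOrchestrationClient starter)
        {
            string instanceId = await starter.StartNewAsync("ProcessBlob_Orchestrator", null);
            return starter.CreateCheckStatusResponse(req, instanceId);
        }

        // Orchestrator coordinates the long running work and checkpoints its progress.
        [FunctionName("ProcessBlob_Orchestrator")]
        public static async Task RunOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            await context.CallActivityAsync("ProcessBlob_Activity", "input-data");
        }

        // Activity performs the processing and writes the result through a blob output binding.
        [FunctionName("ProcessBlob_Activity")]
        public static void ProcessBlob(
            [ActivityTrigger] string input,
            [Blob("results/output.txt", FileAccess.Write)] out string outputBlob)
        {
            outputBlob = $"processed: {input}";
        }
    }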

No is incorrect because a plain HTTP triggered function by itself will be subject to the host and plan timeout and it cannot reliably complete long running blob processing within the single request. Durable Functions exist specifically to enable long running asynchronous workflows and they are the appropriate solution for this scenario.

When a question mentions an HTTP request that must not block while long processing continues think Durable Functions asynchronous pattern. Look for language about returning status URLs or continuing work after the initial request.

A regional clinic group called Harborcare is building a web portal to store scanned patient registration forms. The saved forms must remain confidential even if an outside party downloads them from storage. The proposed solution is to create an Azure Cosmos DB account with Storage Service Encryption enabled and then store the scanned files in that Cosmos DB account. Will this implementation meet the confidentiality requirement?

  • ✓ B. No this design does not satisfy the confidentiality requirement

No this design does not satisfy the confidentiality requirement.

The proposed design relies on Azure Cosmos DB with Storage Service Encryption enabled, and that only provides encryption at rest that is managed by the service. Encryption at rest protects data on disk and helps against physical theft of storage media, but it does not ensure that objects remain unreadable if an outside party can download them. The service will decrypt data for authorized access and storage service encryption by itself does not provide end to end confidentiality if the attacker can obtain the file through normal service access or via a leaked credential.

To meet the requirement that files remain confidential even when downloaded by an outside party you need encryption where only the client controls the keys. That can be done with client side encryption or with customer managed keys combined with strict access controls, and Cosmos DB also offers client side encryption features through its SDKs for this purpose. Also storing large blobs in Cosmos DB is not the usual pattern and using Blob Storage with client side encryption is often a better fit for scanned files.

Yes this approach satisfies the confidentiality requirement is incorrect because Storage Service Encryption alone does not prevent decrypted content from being returned to a downloader who has access through the service. The encryption is transparent to the service and does not replace the need for encryption that the client controls when the threat model includes an outside downloader.

On the exam look for the difference between encryption at rest managed by the cloud provider and client side or customer key encryption that prevents readable downloads. If the requirement is confidentiality against a downloader then choose a solution that keeps keys on the client or uses customer managed keys with strict access controls.

An engineer at BlueWave Systems needs to launch a container with Azure Container Instances and provide persistent storage from a specified Azure Storage account. Which configuration will allow the container instance to access that Azure Storage account after it is deployed?

  • ✓ C. Mount an Azure Files share from the target storage account as a volume into the container instance during creation

The correct option is Mount an Azure Files share from the target storage account as a volume into the container instance during creation.

Azure Container Instances support mounting Azure Files shares as volumes when you create the container group. Mounting an Azure Files share provides persistent file storage that is mounted into the container filesystem and remains available across restarts. To set this up you specify the file share and supply the storage account credentials or a SAS token at creation time so the share is mounted for the container.

Provide the storage account connection string as an environment variable when creating the container instance is incorrect because handing the connection string to the container does not create a filesystem mount. The application would need to use that string and implement file I/O or SDK calls and this approach exposes credentials in the environment rather than providing a mounted persistent volume.

Set the restart policy to Always and pass the storage account connection string as a command line argument when launching the container instance is incorrect because the restart policy only controls container lifecycle behavior and does not provision persistent storage. Passing the connection string on the command line also does not mount a share and it exposes credentials in a non persistent way.

Assign a system assigned managed identity to the container instance and grant it a Data Storage role on the storage account so the container can authenticate without keys is incorrect for mounting Azure Files in ACI. Managed identities can let the container call Azure APIs or obtain tokens for programmatic access, but ACI does not use a managed identity to perform the SMB mount of an Azure Files share at creation time. In practice you must still provide mount credentials or a SAS token when creating the volume, or access storage programmatically from inside the container using the identity.

When a question asks about persistent storage for Azure Container Instances look for an answer that mentions mounting an Azure Files share as a volume. Answers that only pass connection strings or change restart policies are usually not providing a mounted persistent volume.

A team at Silverline deployed a .NET web application to Azure App Service and they will monitor it using Application Insights. Which Application Insights feature should they enable to record performance traces while keeping the impact on end users minimal?

  • ✓ C. Profiler

The correct option is Profiler.

Profiler is built to record performance traces by sampling CPU usage and call stacks in short collection sessions so it captures detailed method level performance data while keeping overhead low. It integrates with Azure App Service and Application Insights so teams can identify hotspots and slow requests without continuous heavy instrumentation that would affect end users.

Multistep web test is focused on availability and functional checks by running scripted steps from external locations and it does not provide low level performance profiling of the application runtime.

Live Metrics Stream gives near real time telemetry for monitoring and quick diagnostics but it is not a sampled profiler and it does not produce the same detailed method level traces for performance analysis over time.

Snapshot Debugger captures snapshots of the application state at exceptions or on demand for debugging specific issues and it is not intended as a low overhead continuous profiler for recording performance traces.

When a question highlights minimal impact and performance tracing think about sampling based tools and choose Profiler because it is designed to collect useful performance profiles with low overhead.

A logistics startup named BlueHaven stores several terabytes across dozens of containers in an Azure storage account and they must copy every blob to a new storage account. The transfer must be automated require minimal operator intervention and be able to resume if interrupted. Which tool should they choose?

  • ✓ C. AzCopy

The correct option is AzCopy.

AzCopy is a command line tool that is designed for large scale blob transfers. It can be scripted for automated workflows with minimal operator intervention and it supports resumable transfers so a multi‑terabyte copy across many containers can continue after interruptions without manual restart. It also uses parallelism and other optimizations to achieve high throughput when moving data between storage accounts.

Azure Data Factory can perform copy activities and is useful for orchestrated ETL and scheduled jobs but it requires more configuration and operational overhead. It is not the simplest, most direct tool for a straightforward, resumable bulk blob copy.

Azure Storage Explorer is a graphical tool that is convenient for exploring storage and doing manual file or blob operations. It is not intended for fully automated, unattended bulk transfers across dozens of containers and it lacks the same resumable, high performance capabilities that AzCopy provides.

.NET Storage Client Library gives full programmatic control and you could implement an automated, resumable transfer yourself. That approach requires custom development and effort to handle resume logic and parallel performance. It is therefore less appropriate when an off the shelf, scriptable tool like AzCopy already meets the requirements.

When an exam scenario describes large, unattended blob moves look for a scriptable and resumable tool. Use AzCopy for automated bulk transfers because it is built for high throughput and interruption recovery.

At Bluewave Systems a scheduled function is configured with the timer expression “0 5 * * * *” in its settings. How frequently will this function execute?

  • ✓ C. At five minutes past every hour of every day

The correct answer is At five minutes past every hour of every day.

The expression uses six space separated fields where the first field is seconds and the second field is minutes. With 0 in the seconds field and 5 in the minutes field the trigger fires at second 0 of minute 5 of every hour. The remaining fields are wildcards so it runs every hour of every day.
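
For reference, a minimal sketch of a C# timer triggered function that uses this expression. The function name and log message are illustrative.

    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class HourlyJob
    {
        // Six-field NCRONTAB schedule: {second} {minute} {hour} {day} {month} {day-of-week}.
        [FunctionName("HourlyJob")]
        public static void Run([TimerTrigger("0 5 * * * *")] TimerInfo timer, ILogger log)
        {
            log.LogInformation("Fired at five minutes past the hour.");
        }
    }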

Every five minutes throughout each hour is incorrect because that meaning would require a step value such as */5 in the minutes or seconds field and not a literal 5 in the minute position.

Once daily at five in the morning is incorrect because that schedule would set the hour field to 5 and typically set minutes to 0. The given expression sets minutes to 5 and leaves the hour as a wildcard so it fires every hour instead of once per day.

The schedule string is invalid so the function will not run is incorrect because many timer implementations accept six field cron expressions that include seconds and this string follows that valid format.

When you see a cron with six fields check whether the first field is seconds and the second is minutes. Parse by position rather than assuming a five field pattern.

StreamHaven operates a web application that delivers live video streams to customers and the pipeline uses automated build and deployment workflows. You need to ensure the service remains highly available and that viewers get a steady playback experience. You also want data to be stored in the geographic region closest to each viewer. Would adding Azure Cache for Redis to the architecture satisfy these requirements?

  • ✓ B. No

The correct option is No.

Azure Cache for Redis is an in memory distributed cache that improves application latency for frequently accessed data and session state, and it can provide regional high availability through clustering and zone redundancy. It is not designed to act as a global content delivery mechanism that stores and serves large video files from the geographic region closest to each viewer. For steady playback and geo proximity you would use a content delivery network and regional object storage or streaming services while Redis can still be useful for caching metadata, tokens, or session state.

Yes is incorrect because adding Azure Cache for Redis does not guarantee that video content will be served from the edge or from the region closest to each viewer, and it does not replace a CDN or multi region storage solution for global video delivery and edge caching.

When a question focuses on global low latency delivery of large media think CDN and regional object storage rather than an in memory cache.

AcmeSoft runs a customer API on Azure App Service and it needs to choose between vertical scaling and horizontal scaling. What does vertical scaling of the application do?

  • ✓ B. Upgrade the App Service plan to a larger pricing tier which provides extra CPU memory disk and premium features

Upgrade the App Service plan to a larger pricing tier which provides extra CPU memory disk and premium features is correct.

Upgrading the App Service plan to a larger pricing tier which provides extra CPU memory disk and premium features is vertical scaling because it increases the resources available to the same application instance or instances rather than increasing the number of copies. Vertical scaling adds more vCPU and memory and can provide larger disk and premium platform features so the app can handle heavier workloads on each instance.

Increase the number of instances of the web application so additional copies run concurrently is incorrect because that describes horizontal scaling. Scaling out adds more instances to run concurrently and distribute load rather than increasing CPU memory or disk for each instance.

Deploy another instance of the app in a different region to improve latency for remote users is incorrect because deploying to another region is about geo distribution and reducing latency for remote users. That approach creates additional geographically separated instances and is not the same as increasing the size of an instance.

Use Cloud Load Balancing to distribute traffic across endpoints is incorrect because load balancing distributes traffic across multiple endpoints and supports horizontal scaling and fault tolerance. It does not increase the CPU memory or disk of an existing App Service instance so it does not represent vertical scaling.

When deciding between vertical and horizontal scaling ask whether you need larger resources per instance or more instances. Think size for vertical scaling and count for horizontal scaling.

A solution architect is building an Azure pipeline to collect point of sale telemetry for a retail chain named Harbor Foods. The deployment will cover about 2,400 outlets worldwide. Each device produces about 2.4 megabytes of data every 24 hours. Each location has between one and six devices that send data. All device data must be stored in Azure Blob storage and must be joinable by a device identifier. The chain expects to add more locations over time. The proposed ingestion uses Azure Event Grid with the device identifier as the partition key and with capture enabled. Does this design meet the requirements?

  • ✓ B. No the design does not meet the requirements

The correct answer is No the design does not meet the requirements.

Azure Event Grid is an event routing service and it does not provide the built in capture capability that automatically writes partitioned event batches to Azure Blob storage. Event Grid also does not expose the partition key semantics you need to guarantee grouping or ordering by device identifier so that data can be joined by that key after storage. Those two features are central to the requirement to store all device data in blob storage and to make the blobs joinable by device id.

Azure Event Hubs offers the features you need because it supports a partition key and it provides a managed capture feature that writes event data to Blob storage in time ordered files. Using Event Hubs with the device identifier as the partition key and with capture enabled will produce partitioned files that you can reliably join by device id. For IoT scenarios you can also use Azure IoT Hub which integrates device identity and can route traffic into Event Hubs or storage as needed.
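
A hedged sketch of sending events with the terminal identifier as the partition key using the Azure.Messaging.EventHubs SDK. The connection string variable, hub name, and device id are placeholders, and capture to Blob storage is enabled on the Event Hub itself rather than in client code.

    using System;
    using System.Text;
    using System.Threading.Tasks;
    using Azure.Messaging.EventHubs;
    using Azure.Messaging.EventHubs.Producer;

    class TelemetrySender
    {
        static async Task Main()
        {
            // Placeholder connection string and device identifier.
            string connectionString = Environment.GetEnvironmentVariable("EVENTHUB_CONNECTION_STRING");
            string deviceId = "store-0042-device-3";

            await using var producer = new EventHubProducerClient(connectionString, "pos-telemetry");

            // Every event in this batch shares the device id as the partition key,
            // so the captured blobs can later be joined by that identifier.
            using EventDataBatch batch = await producer.CreateBatchAsync(
                new CreateBatchOptions { PartitionKey = deviceId });

            batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("{\"sale\":12.50}")));
            await producer.SendAsync(batch);
        }
    }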

Yes the design meets the requirements is incorrect because it assumes Event Grid can act like Event Hubs for capture and partitioning. Event Grid can route events to many endpoints but it does not create the partitioned, captured blobs tied to a partition key that the solution requires.

When a question calls out capture and partition key you should think Event Hubs or IoT Hub rather than Event Grid.

A regional finance firm runs a Microsoft Entra ID tenant and employees sometimes sign in from public internet IPs. You must ensure that when users authenticate from an unfamiliar IP address they are automatically prompted to change their passwords. You decide to enable Microsoft Entra Privileged Identity Management. Will this approach satisfy the requirement?

  • ✓ B. No configuring Privileged Identity Management will not meet the requirement

The correct answer is No configuring Privileged Identity Management will not meet the requirement.

Privileged Identity Management focuses on managing elevated roles and just in time activation for privileged accounts. It is not designed to evaluate sign in risk or to force users to change their passwords when they authenticate from an unfamiliar IP address. To require password changes based on risky or unfamiliar sign ins you would use Microsoft Entra Identity Protection sign in risk policies or Conditional Access rules that leverage Identity Protection to require a password reset for risky users.

Yes configuring Privileged Identity Management will fulfill the requirement is incorrect because Privileged Identity Management does not perform sign in risk detection or automated password reset actions. Privileged Identity Management is about role elevation workflows and approvals and it cannot trigger a password change when an authentication originates from an unfamiliar public IP.

When a question asks to force password changes for unfamiliar or risky sign ins think of Identity Protection and sign in risk policies rather than Privileged Identity Management.

Which REST API endpoint should you call to upload a ZIP archive to a web application through the Kudu SCM service on Contoso Cloud?

  • ✓ C. PUT /api/zip/{targetPath}/

PUT /api/zip/{targetPath}/ is correct because the Kudu SCM service provides a zip API that accepts a PUT of a ZIP archive and extracts its contents into the specified target path on the web app.

The Kudu endpoint at /api/zip/{targetPath}/ is the standard mechanism for performing a zip deploy to an App Service site. A client sends the ZIP as the request body to that endpoint and Kudu unpacks the archive into the target folder so the application files are updated in place.
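
As a rough illustration, with the app name, deployment credentials, and target path shown here being placeholders, a client can push a ZIP archive to the Kudu zip API with a plain HTTP PUT.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var client = new HttpClient();

// Basic authentication with the app's deployment credentials (placeholders).
var token = Convert.ToBase64String(
    Encoding.ASCII.GetBytes("$deployment-user:deployment-password"));
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

// PUT the archive to /api/zip/{targetPath}/ and Kudu extracts it in place.
using var zip = File.OpenRead("app.zip");
using var response = await client.PutAsync(
    "https://contoso-app.scm.azurewebsites.net/api/zip/site/wwwroot/",
    new StreamContent(zip));
response.EnsureSuccessStatusCode();
```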

PUT /upload/storage/v1/b/my-bucket/o is incorrect because that path is the Google Cloud Storage JSON upload endpoint and it is unrelated to Azure Kudu or App Service deployments.

POST /api/deploy is incorrect because it is not the Kudu API used to upload and extract a ZIP archive. The Kudu zip upload uses the /api/zip pattern rather than a generic /api/deploy POST for this purpose.

PUT /api/deployments/{id} is incorrect because that endpoint addresses deployment records or metadata and it is not the endpoint used to send a ZIP archive for extraction into the app files.

Read the URL pattern and the service name to narrow choices. If the path contains /api/zip it is likely the Kudu zip deploy endpoint and is a strong signal that it is the correct answer. If the path looks like a cloud storage REST path you can usually eliminate it for an Azure Kudu question.

Your team manages a Service Bus namespace that hosts a topic named OrdersTopic. You plan to add a subscription named PrioritySub to OrdersTopic and you want to filter messages by their system properties and then apply an action that tags each matched message. Which filter type should you configure?

  • ✓ C. SQL filter

The correct answer is SQL filter.

SQL filter lets you write SQL like expressions that evaluate system properties and user properties so you can select messages based on those values and then pair the filter with a SQL rule action to tag or modify each matched message. A SQL filter is the only option here that both evaluates properties with expressions and supports actions that add or set message properties.
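
A brief sketch along these lines, where the connection string, rule name, and property values are illustrative, shows a SQL filter paired with a SQL rule action when the rule is created on the subscription.

```csharp
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<service-bus-connection-string>");

// Match on system properties with a SQL filter and tag each matched message
// with a custom application property through a SQL rule action.
var rule = new CreateRuleOptions
{
    Name = "TagPriorityOrders",
    Filter = new SqlRuleFilter("sys.Label = 'order' AND sys.CorrelationId IS NOT NULL"),
    Action = new SqlRuleAction("SET Priority = 'high'")
};

await admin.CreateRuleAsync("OrdersTopic", "PrioritySub", rule);
```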

Correlation filter is wrong because it only matches a fixed set of properties using equality conditions and it does not support applying SQL rule actions to tag or modify messages.

Boolean filter is wrong because Azure Service Bus does not provide a general Boolean expression filter that can both evaluate system properties and perform rule actions. The platform offers TrueFilter and FalseFilter for unconditional matching and those cannot tag messages.

If the scenario requires both conditional matching and changing or tagging messages pick a SQL filter plus a SQL rule action. Focus on whether the question asks to modify messages as that usually means SQL filters are required.

A team at Meridian Climate Systems is building an ASP.NET Core web application that uses Azure Front Door to deliver custom climate CSV datasets to academic users. The datasets are regenerated every 12 hours and specific files must be invalidated in the Front Door cache according to response header values. Which cache purge method should be used to remove individual assets from the Front Door cache?

  • ✓ C. Single path purge

The correct option is Single path purge.

Single path purge is the right choice because it lets you invalidate individual files by specifying their exact URL path. This method matches the requirement to remove only specific CSV assets when they are regenerated every 12 hours and when the decision to purge depends on response header values. You can call the Front Door purge API to target each updated file and avoid evicting unrelated cached content.

Wildcard path purge is incorrect because it would remove multiple objects that match a pattern and is too broad for selectively invalidating single CSV files. Using a wildcard could evict unrelated content and create unnecessary load on the origin when many files are refreshed.

Root domain purge is incorrect because it clears the entire host or domain and is excessive when only specific assets need invalidation. Purging the root would cause many cache misses across the site and is not appropriate for targeted cache removal.

When a question asks about removing specific cached files pick the option that targets exact paths and remember that the Front Door management API can be used to automate single path purges after data updates.

How would you describe an App Service Plan in Contoso Cloud and what function does it serve for web applications?

  • ✓ B. It is a group of compute resources allocated for hosting a web application

It is a group of compute resources allocated for hosting a web application is correct. An App Service Plan represents the compute capacity where web applications run and therefore describes the amount and type of resources allocated to your apps.

App Service Plan determines the number of instances, the VM size or SKU, the billing tier, and the scaling behavior that apply to any web app hosted on that plan. Choosing the right plan affects performance, isolation, and cost for your web applications.

It is a managed serverless execution environment similar to Cloud Run is incorrect because a serverless execution environment abstracts away dedicated compute and typically scales to zero and bills per request. An App Service Plan allocates persistent compute capacity and does not behave like a serverless container runtime such as Cloud Run.

It is a single container image running inside a virtual machine is incorrect because an App Service Plan is not a single container instance. A plan can host many apps and can run apps in a variety of modes including multiple instances, and it is about the compute tier rather than a single container image.

It is a dedicated physical isolation zone with exclusive networking for only your tenant is incorrect because a plan defines virtual compute resources and tiered isolation but it does not by itself provide exclusive physical isolation or a private networking enclave. Dedicated physical isolation is typically provided by specific dedicated hosting or isolated network offerings.

When you see a question about hosting capacity think about the plan or SKU that defines compute size, instance count, scaling, and cost rather than a serverless or single container concept.

A cloud startup named Northpoint Solutions enables Geo Redundant Storage for an Azure storage account and requires clarity on redundancy. How many total replicas of the stored data does Azure retain?

  • ✓ B. Six replicas

The correct option is Six replicas.

Azure Geo Redundant Storage keeps three copies of your data in the primary region and replicates those copies asynchronously to a paired secondary region where it keeps three additional copies. This pairing results in a total of six replicas and it provides protection against a full regional outage.

Three replicas is incorrect because that would describe only the copies stored within the primary region and it does not include the three copies stored in the paired secondary region under Geo Redundant Storage.

One replica is incorrect because Azure storage redundancy always maintains multiple copies for durability and a single replica would not provide protection against hardware failures.

Two replicas is incorrect because Azure does not use a two copy model for Geo Redundant Storage and two copies would not meet the durability guarantees that GRS is designed to provide.

When a question asks about Azure redundancy match the named redundancy to its region strategy and number of copies. Remember that Geo Redundant Storage keeps three copies in the primary region and three in the paired secondary region for a total of six.

You deploy a new Azure App Service web app for a client named Northbridge Solutions and you must require users to sign in with Microsoft Entra ID. You need to configure the app’s authentication and authorization. What configuration step should you perform first?

  • ✓ C. Register an identity provider

Register an identity provider is the correct configuration step to perform first when you require users to sign in with Microsoft Entra ID.

You must register or configure an identity provider in the App Service Authentication and Authorization settings before the app can redirect users to Microsoft Entra ID and validate authentication tokens. Performing Register an identity provider establishes the client identifier and secret or the platform connection that enables the sign in flow and token issuance.

Assign a custom domain name is not required to enable user sign in with Microsoft Entra ID. A custom domain is only needed if you want a branded URL or to use certificates for that domain and it does not affect the initial identity provider configuration.

Create a new app configuration setting is not the first step for enabling sign in. App settings may store values such as client ids or secrets but you must first register the identity provider so those values exist and the sign in flow can be established.

Upload a private SSL certificate is unrelated to the basic user sign in setup. Private certificates are used for securing custom domains or for client certificate authentication and they are not needed before registering the identity provider.

Enable a system assigned managed identity is used for the app to authenticate to Azure resources and not for end user sign in. A managed identity does not replace the need to register Microsoft Entra ID as an identity provider for user authentication.

When setting up App Service authentication always start by registering the identity provider in the Authentication section so the app can obtain the client id and secret or use the platform flow before you add settings or certificates.

You are building a web application for Meridian Software that allows authenticated users from the Contoso Identity service to read their own profile attributes givenName and identities. Which minimal permission must be granted to the application registration object associated with the app?

  • ✓ C. Delegated Permissions with User.Read

Delegated Permissions with User.Read is correct. This is the minimal delegated permission that allows an authenticated Contoso user to consent and let the application read their own profile attributes such as givenName and identities.

Delegated permissions let the app act on behalf of the signed in user and Delegated Permissions with User.Read specifically grants basic read access to the signed in user profile without giving extra write or tenant wide rights. That makes it the least privilege option and it typically does not require administrator consent for standard user profiles.
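
As a hedged example, with the client and tenant identifiers as placeholders, an application requests the delegated User.Read scope on behalf of the signed in user with MSAL.NET.

```csharp
using System;
using Microsoft.Identity.Client;

var app = PublicClientApplicationBuilder
    .Create("<application-client-id>")
    .WithTenantId("<tenant-id>")
    .WithRedirectUri("http://localhost")
    .Build();

// The signed in user consents to the delegated User.Read scope, so the
// resulting token only allows reading that user's own profile.
AuthenticationResult result = await app
    .AcquireTokenInteractive(new[] { "User.Read" })
    .ExecuteAsync();

Console.WriteLine(result.Account.Username);
```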

Delegated Permissions with User.ReadWrite is incorrect because it grants write access that is not needed to read profile attributes. Choosing a write permission would violate the principle of least privilege for this scenario.

Application Permissions with User.Read is incorrect because application permissions are app only and are not used for actions taken on behalf of a signed in user. Also the User.Read permission is normally a delegated permission so this combination is not the right fit for a user delegated scenario.

Application Permissions with User.Read.All is incorrect because it grants tenant wide read access to all users and requires administrator consent. That is far more privilege than required to let an individual user read their own profile.

When a question asks about an app acting for a signed in user prefer delegated permissions and pick the permission that follows the principle of least privilege.

A development team at Alderbrook Systems is building a service that stores its data in Azure Cosmos DB. The application requires minimal read and write latency and it must remain operational even if an entire region becomes unavailable. Which Cosmos DB configuration will satisfy these constraints?

  • ✓ C. Enable multi-region write replication and use Session consistency

The correct answer is Enable multi-region write replication and use Session consistency.

Enable multi-region write replication allows the application to accept writes in multiple regions which reduces write latency by letting clients write to a nearby region and it helps keep the service operational if an entire region becomes unavailable. Session consistency gives a useful balance between low latency and predictable behavior because it provides read your writes and monotonic reads for a client session while avoiding the high coordination overhead of the strongest consistency modes.
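
A minimal sketch, assuming the Microsoft.Azure.Cosmos SDK and placeholder account details and regions, shows a client configured to prefer nearby regions with Session consistency once multi region writes are enabled on the account.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

// With multi region writes enabled on the account, reads and writes go to the
// closest listed region while Session consistency preserves read your writes
// guarantees within a client session.
var client = new CosmosClient(
    "https://<account>.documents.azure.com:443/",
    "<account-key>",
    new CosmosClientOptions
    {
        ApplicationPreferredRegions = new List<string> { Regions.WestEurope, Regions.NorthEurope },
        ConsistencyLevel = ConsistencyLevel.Session
    });
```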

Configure Cosmos DB for single region write operations and choose Eventual consistency is incorrect because a single write region means writes will not be possible if that region fails and eventual consistency does not address the availability requirement across region failures.

Enable multi-region write replication and require Strong consistency is incorrect because requiring strong consistency across regions forces cross region coordination which increases latency and undermines the requirement for minimal read and write latency.

Configure single region write mode and set Bounded staleness consistency is incorrect because single region writes still create a single point of failure for writes and bounded staleness affects read freshness but does not provide the multi region write availability needed when an entire region becomes unavailable.

When a question emphasizes both minimal latency and continued operation after a region outage look for multi region writes for availability and a consistency level like Session that preserves client guarantees while keeping latency low.

A development group at CedarStream is deploying several microservices on Azure Container Apps and they need to observe and troubleshoot them. Which capability should they use to perform live debugging from inside a running container?

  • ✓ B. Interactive container shell access

The correct option is Interactive container shell access.

This capability lets developers open a live shell session inside a running container to run diagnostic commands inspect files view environment variables and observe running processes in real time. That direct access is what you need for hands on debugging and troubleshooting of a microservice while it is running.

Azure Monitor Log Analytics collects and queries log data for analysis and alerting and it does not provide an interactive shell into a running container.

Azure Monitor metrics provides numeric telemetry such as CPU memory and request counts for monitoring and scaling and it cannot execute commands inside a container for live debugging.

Azure Container Registry is a private image repository used to store and manage container images and it does not offer runtime access to containers.

When a question asks about live debugging choose the option that gives an interactive shell or exec session rather than services that only provide logs metrics or image storage.

Atlas Retail is building an Azure design to collect inventory snapshots from more than 3,500 retail sites worldwide and each site will upload inventory files every 90 minutes to an Azure Blob storage account for processing. The architecture must begin processing when new blobs are created, filter data by store region, invoke an Azure Logic App to persist the results into Azure Cosmos DB, provide high availability with geographic distribution, allow up to 48 hours for retry attempts, and implement exponential back off for processing retries. Which technology should be used for the “Event Handler”?

  • ✓ F. Azure Logic App

The correct option is Azure Logic App.

Azure Logic App is the right choice because it implements managed workflow orchestration with built in connectors so it can be triggered by blob creation, apply filtering by store region, and then call downstream services such as Cosmos DB.

Azure Logic App supports configurable retry policies and exponential backoff on actions and it can be configured to allow extended retry windows to satisfy requirements that span many hours. It is a managed, highly available service that can be deployed across regions and integrated with event routing to provide geographic distribution and durable processing.

Azure Event Grid is incorrect because it is an event routing and delivery service rather than the component that performs workflow processing. Event Grid is used to notify subscribers but it does not itself implement the multi step processing, filtering logic, and connectors required to persist results into Cosmos DB.

Azure Blob Storage is incorrect because it is the source of the files and not an event handler. The storage account holds the uploaded inventory files but it does not perform orchestration or retries for processing workflows.

Azure Service Bus is incorrect because it is a messaging broker for decoupling producers and consumers. It is useful for queued delivery and at least once processing but it does not provide the built in orchestration, connector ecosystem, and long running workflow semantics that the scenario requires.

Azure Functions is incorrect in this scenario because functions are best for short lived compute and custom code handlers. Durable Functions can add orchestration but the question calls for a managed workflow with connectors and built in retry configuration, which is a better fit for Logic Apps.

Azure Event Hubs is incorrect because it is a high throughput telemetry ingestion service. It is optimized for streaming scenarios and not for handling discrete blob created events with filtered workflow orchestration and direct integration to Cosmos DB.

When a question combines event based triggers, long running retries, exponential backoff, and many built in integrations think of managed workflow services such as Logic Apps rather than only event routers or simple compute functions.

You publish an Azure App Service API to a Windows based deployment slot named DevSlot and you add two more slots called Staging and Live. You turn on auto swap for the Live slot and you need the application to run initialization scripts and to have resources ready before the swap runs. The proposed solution is to add the applicationInitialization element to the application’s web.config and point it to custom initialization endpoints that execute the scripts. Does this solution meet the requirement?

  • ✓ B. No it will not satisfy the requirement

The correct answer is No it will not satisfy the requirement.

The No it will not satisfy the requirement choice is correct because the IIS applicationInitialization element in web.config is an application start and warm up mechanism that runs when the site or app pool is started and when Always On triggers a warm up. Auto swap in Azure App Service follows its own warm up and swap workflow and does not rely on the web.config applicationInitialization callbacks to guarantee that custom initialization scripts run before the swap completes. You should use the App Service slot warm up features such as the swap warm up ping path or swap with preview to make sure endpoints are called and resources are ready before the final swap.

The Yes it will satisfy the requirement option is incorrect. Adding applicationInitialization to web.config alone does not ensure that Azure App Service will execute those initialization endpoints as part of the auto swap operation. Auto swap uses its own pre-swap warm up steps and configuration and you must configure those App Service specific settings to guarantee pre-swap initialization.

When a question asks about warming up an App Service slot before swap look for App Service specific features such as the WEBSITE_SWAP_WARMUP_PING_PATH app setting or the swap with preview workflow rather than relying solely on IIS web.config settings.

Your engineering group deployed an Azure App Service web application to a client production slot and turned on Always On and the Application Insights extension. After releasing a code change you start seeing a large number of failed requests and exceptions. You must inspect performance metrics and failure counts almost in real time. Which Application Insights feature will provide that capability?

  • ✓ C. Live Metrics Stream

The correct answer is Live Metrics Stream.

Live Metrics Stream provides near real time telemetry from Application Insights so you can inspect request rates, failure counts, exceptions, and response times almost instantly. It streams small samples of live telemetry to the portal which makes it ideal for investigating a spike in failed requests right after a deployment.

Application Map shows dependencies and topology between services and components which helps locate where a problem may originate but it does not provide the near real time failure counts that the live metrics stream gives.

Snapshot Debugger captures snapshots of application state at the moment an exception occurs so you can debug specific errors and inspect variables, but it does not stream aggregated performance metrics in real time.

Profiler records performance traces and CPU usage over time to identify slow code paths and hot methods, but it is focused on profiling and sampling rather than delivering immediate failure counts.

Smart Detection uses heuristics and anomaly detection to surface potential problems and it can generate alerts, but it is not a live streaming view and it may take some time to detect anomalies compared with the immediate visibility of live metrics.

When a question asks about near real time or live visibility choose the feature that explicitly mentions streaming or live metrics in its name.

A development group at Bluepeak Solutions plans to run a container on Azure Container Instances for a pilot project and the container must retain data using a mounted volume. Which type of Azure storage can be attached to an Azure Container Instance as a mount point?

  • ✓ B. Azure Files

The correct option is Azure Files.

Azure Files provides managed file shares that support SMB and NFS protocols and can be mounted directly into an Azure Container Instance as a persistent volume. This allows containers to retain and share data across restarts and between instances without relying on the container filesystem.

Azure Managed Disks are block storage designed to attach to virtual machines and they are not supported as mountable volumes for Azure Container Instances.

Azure Blob Storage is object storage accessed via REST APIs and client libraries and it is not a native file share that ACI can mount directly. Access to blobs from a container requires specific tools or libraries and is not a built in mount the way Azure Files is.

Azure Cosmos DB is a globally distributed NoSQL database service and it is not a filesystem so it cannot be attached as a volume to a container instance.

When a question asks which storage can be mounted into Azure Container Instances remember that Azure Files is the native file share option that ACI supports for persistent mounts.

A development team at a mid sized firm needs to provision a new Azure App Service web application and then upload a deployment package by using the Azure CLI. Which command sequence accomplishes that task?

  • ✓ C. az webapp create --name siteapp --resource-group myResourceGroup && az webapp deployment source config-zip --name siteapp --resource-group myResourceGroup --src /home/deploy/app.zip

The correct option is az webapp create --name siteapp --resource-group myResourceGroup && az webapp deployment source config-zip --name siteapp --resource-group myResourceGroup --src /home/deploy/app.zip.

The az webapp create command provisions an Azure App Service web application in the specified resource group. The az webapp deployment source config-zip command uploads a zip archive to that web app and triggers a deployment so the package is applied to the newly created app.

az appservice create --name siteapp --resource-group myResourceGroup && az appservice deploy --name siteapp --resource-group myResourceGroup --src /home/deploy/app.zip is incorrect because the Azure CLI does not use an az appservice command group for web apps and there is no standard az appservice deploy subcommand for uploading zip packages. The proper group is az webapp and deployment is handled by the deployment source commands.

az webapp up --name siteapp --resource-group myResourceGroup --sku F1 --location eastus --src-path /home/deploy/app.zip is incorrect because az webapp up is a convenience command for quick local deployments and it does not provide the same explicit zip upload behavior as the dedicated zip deployment command. For reliable zip deployments use az webapp deployment source config-zip.

az create webapp --name siteapp --resource-group myResourceGroup && az deploy webapp --name siteapp --resource-group myResourceGroup --src /home/deploy/app.zip is incorrect because those command forms are not valid Azure CLI syntax for provisioning and deploying web apps. The correct approach uses the az webapp group and the supported deployment source commands.

When the task requires both provisioning and uploading a zip think about two clear steps. Use az webapp create to create the app and then use az webapp deployment source config-zip to upload the package.

A global retail operator called Meridian Markets needs an Azure solution to ingest point of sale terminal telemetry from 2,400 locations worldwide. Each terminal generates about 3 megabytes of data every 24 hours. Each location has between one and six terminals that send data. The incoming records must be stored in Azure Blob storage and they must be correlated by terminal identifier. The operator expects to open more stores in the future. The proposed solution is to provision an Azure Event Hubs namespace and to use the terminal identifier as the partition key while enabling capture to Blob storage. Does this approach satisfy the requirements?

  • ✓ B. Yes

The correct answer is Yes.

Using Azure Event Hubs with the terminal identifier as the partition key and enabling capture to Blob storage meets the requirements. Event Hubs can ingest telemetry at global scale and the capture feature writes incoming event batches directly to Azure Blob storage for retention and downstream processing.

Setting the terminal identifier as the partition key ensures that all events from a given terminal are routed to the same partition so they can be correlated by terminal identifier. This preserves grouping by terminal and supports expected growth because Event Hubs is designed for high throughput and horizontal scaling.

Be aware that capture writes files per partition and not per partition key so a single capture blob may contain events from multiple terminals that hash to the same partition. You can still correlate events by terminal identifier when processing those blobs because events for a terminal always land in the same partition. If you require a separate blob per terminal you would need an additional processing step to reorganize the captured data.

No is incorrect because the proposed configuration does satisfy the storage and correlation requirements when you use the partition key to route terminal events and enable capture to persist the data to Blob storage.

Use partition key to route related events to the same partition and use Event Hubs capture to land raw events in Blob storage. Plan the number of partitions when you create the event hub because partitions are fixed.

A community healthcare provider named Harborpoint Health is building a web portal to archive scanned patient admission forms. The portal must ensure that if a third party downloads a stored form the integrity and confidentiality of the form remain intact. You plan to save the scanned admission forms as Azure Key Vault secrets. Does this approach satisfy the requirement?

  • ✓ A. No

No is correct because Azure Key Vault secrets are not meant to be used as a general file store for scanned documents and they do not provide the appropriate storage, size, and download semantics required to guarantee integrity and confidentiality for third party downloads.

Key Vault secrets are designed to hold small sensitive values such as passwords and connection strings, and each secret value is limited to 25 KB, which makes them unsuitable for storing scanned admission forms. Secrets are returned as plain values to any caller that has access, so confidentiality depends entirely on access control rather than on file storage features. For durability, large binary content handling, controlled downloads, and built in integrity checks you should use a storage service such as Azure Blob Storage. Protect confidentiality with encryption and key management and protect integrity with checksums or digital signatures. You can manage encryption keys with Key Vault while keeping the actual files in Blob Storage to meet both confidentiality and integrity requirements.

Yes is incorrect because saying yes assumes that Key Vault secrets can function as a secure document archive for files and that they can safely support third party file downloads with intact integrity and confidentiality. That is not true because secrets are for small values and do not offer the file storage features, size support, or download controls that Blob Storage provides.

Remember that Azure Key Vault is for small secrets like keys and connection strings and not for storing documents. For file archives think about Azure Blob Storage with encryption and managed keys instead.

A development team at Northbridge Systems is deploying an Azure Resource Manager template that creates a virtual network named corpNet and multiple subnet resources. The team wants the subnets to be deployed only after the virtual network finishes deploying. Which ARM template feature enforces that dependency?

  • ✓ C. dependsOn

The correct option is dependsOn.

You use the dependsOn property in an ARM template to declare that subnet resources must wait until the virtual network has finished deploying. This property creates an explicit deployment dependency so the platform provisions the virtual network first and then the subnets.

The dependsOn array can contain resource names or resource identifiers and it can list multiple resources when you need more complex ordering. You can also include the resourceId() function inside a dependsOn entry to reference a resource by id when that helps build the dependency.
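
A trimmed subnet resource along these lines, with an illustrative API version and address prefix, shows the pattern.

```json
{
  "type": "Microsoft.Network/virtualNetworks/subnets",
  "apiVersion": "2023-09-01",
  "name": "corpNet/frontend",
  "dependsOn": [
    "[resourceId('Microsoft.Network/virtualNetworks', 'corpNet')]"
  ],
  "properties": {
    "addressPrefix": "10.0.1.0/24"
  }
}
```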

reference() is incorrect because it is a function used to read runtime properties of an already deployed resource and it does not establish deployment order.

condition is incorrect because it controls whether a resource is deployed or skipped and it does not force one resource to wait for another.

resourceId() is incorrect as the answer because it is a function that returns a resource identifier and it does not by itself enforce deployment sequencing. It can be used inside dependsOn but it is not the mechanism that enforces the dependency.

When asked about ordering in ARM templates look for the dependsOn property because functions like reference() and resourceId() are for values and not for declaring resource dependencies.

You are building three microservices named appAlpha appBeta and appGamma and you deploy them to a Contoso Container Apps environment. Each service requires persistent storage and the following constraints apply. appAlpha must keep data visible only to its container and must be limited to the container’s available disk. appBeta must retain data for the duration of the replica and allow several containers in the same replica to mount the identical storage. appGamma must preserve data beyond replica termination while permitting multiple containers to access the data and support per object access controls. Which storage type should you choose for appBeta?

  • ✓ C. Ephemeral volume

The correct option is Ephemeral volume.

Ephemeral volume fits appBeta because it provides storage that lasts only for the lifetime of the replica and it can be mounted by multiple containers within the same replica. The data is stored on the node and is discarded when the replica terminates or is moved, so it meets the requirement to retain data only for the duration of the replica while allowing shared mounts for containers in that replica.

Azure Blob Storage is object storage that persists independently of replicas and is intended for durable, long lived storage of objects, so it does not meet the requirement that data be tied to the replica lifecycle.

Azure Files Storage provides a persistent network file share that remains available beyond replica termination and can be mounted across replicas and instances, so it does not satisfy the need for data that is only retained for the replica lifetime.

Container file system refers to the writable layer local to a single container and cannot be shared across multiple containers in the same replica, so it fails the requirement that several containers in the same replica mount the identical storage.

When you see a requirement about storage lifetime think about whether the data must live only as long as a replica or beyond that life. Ephemeral or pod scoped volumes are the right choice for replica tied data and shared mounts within the same replica, while blob and file services are for durable, cross replica storage.

Contoso Digital has an App Service plan named AppSvcPlanA configured on the Basic B2 pricing tier and it hosts an App Service web app named SiteA. The team intends to enable scheduled autoscaling for AppSvcPlanA and they must minimize the monthly cost for SiteA. The proposed solution is to scale out AppSvcPlanA. Does this solution meet the cost minimization goal?

  • ✓ C. No

The correct answer is No.

No is correct because scaling out the existing Basic B2 App Service plan does not provide schedule-based autoscale and it increases the number of instances which raises the monthly cost. The Basic tier does not support scheduled autoscale and to enable schedule-based rules you must move the plan to the Standard tier or higher, so simply scaling out the current Basic B2 plan will not meet the cost minimization goal.

Upgrade AppSvcPlanA to Standard tier and configure schedule-based autoscale is not the selected answer for the proposed solution because it describes a different change than scaling out. This option would enable scheduled autoscale and is the recommended way to support schedule-based scaling, but it is not the action described by the proposed solution and so it is not the correct choice in this question.

Scale up AppSvcPlanA to a larger instance size is incorrect because scaling up increases the instance size which typically increases cost and it does not enable schedule-based autoscale. Scaling up does not meet the requirement to have scheduled autoscale while minimizing monthly cost.

Yes is incorrect because the proposed solution to scale out the Basic B2 plan will not implement scheduled autoscaling and will increase costs rather than minimize them.

Check the feature matrix for the service tier before answering and confirm whether the proposed action matches the required change. Remember that scheduled autoscale requires the Standard tier or higher and that scaling out increases instance count and monthly cost.

Which type of storage container should a developer at Silverleaf Technologies provision to collect diagnostic logs and performance metrics from multiple Azure services so that Azure Monitor can analyze them?

  • ✓ C. Log Analytics workspace

Log Analytics workspace is the correct option.

Log Analytics workspace is the central repository that Azure Monitor uses to collect and analyze diagnostic logs and performance metrics from multiple Azure services. It ingests data from diagnostic settings and agents and provides a query language with alerting and analytics, so it is the proper storage container for analysis.

Azure Event Hubs is an event ingestion and streaming service that routes telemetry to downstream systems and custom analytics. It is not the workspace where Azure Monitor stores and queries logs for built in analysis.

Azure Storage account with append blobs can be used to archive logs and to store raw log files for retention. It does not provide the integrated query, analytics, and alerting capabilities that Azure Monitor uses for direct analysis.

Azure Monitor resource is not a storage container and is not the specific place where logs are stored for analysis. Azure Monitor stores log data in a Log Analytics workspace and data collection settings route telemetry into that workspace.

When you see a question about where Azure Monitor analyzes logs pick the service that acts as the integrated log store and query engine such as Log Analytics workspace.

A regional retail startup stores user uploads in Azure Blob Storage and applies a lifecycle rule that moves objects to the archive tier after 45 days. Clients have requested a service level agreement for viewing content older than 45 days. What minimum data recovery SLA should you document?

  • ✓ B. One to fifteen hours for standard rehydration

One to fifteen hours for standard rehydration is correct.

Azure archive tier stores data offline and requires a rehydration operation before objects can be read. Standard rehydration is the documented baseline and typically completes within one to fifteen hours, so that is the minimum recovery window you should use when promising an SLA.
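
As an illustration, with the connection string, container, and blob names as placeholders, a standard priority rehydration is started by changing the blob tier and its progress can be checked from the blob properties.

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var blob = new BlobClient("<storage-connection-string>", "user-uploads", "scan-2023-114.jpg");

// Changing the tier of an archived blob starts rehydration. Standard priority
// is the baseline and typically completes within the one to fifteen hour window.
await blob.SetAccessTierAsync(AccessTier.Hot, rehydratePriority: RehydratePriority.Standard);

// The archive status reports progress, for example rehydrate-pending-to-hot.
BlobProperties properties = await blob.GetPropertiesAsync();
Console.WriteLine(properties.ArchiveStatus);
```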

Azure also provides expedited options but those are not the baseline and so they should not be used as the minimum guaranteed window.

At least 24 hours for retrieval is incorrect because the documented standard rehydration window is shorter at one to fifteen hours and 24 hours overstates the typical recovery time.

Between zero and sixty minutes for high priority rehydration is incorrect because this describes the expedited high priority option and not the standard baseline. High priority can be faster but it is an optional expedited path and it is not the minimum SLA you should document.

At least 48 hours for guaranteed availability is incorrect because it is much longer than Azure’s published rehydration ranges and does not reflect the actual documented recovery times for archived blobs.

When asked about SLAs for archived data use the default service behavior. Rely on standard rehydration as the baseline unless the scenario specifically allows paying for or depending on expedited retrieval.

Infrastructure templates use a declarative syntax rather than procedural scripts. Why does that style remove the need to manually order resource creation when provisioning cloud services?

  • ✓ B. You do not have to sequence resource creation manually because the template engine determines dependencies and deploys related resources in the correct order

You do not have to sequence resource creation manually because the template engine determines dependencies and deploys related resources in the correct order.

Declarative templates describe the desired end state and the relationships between resources so the engine can construct a dependency graph. The engine then plans the create, update, and delete operations and it executes them in an order that respects those dependencies while running independent actions in parallel when safe. This behavior removes the need for a human to hand order each resource and it reduces errors from missing or incorrect sequencing.

Terraform is not the correct choice because simply naming a tool does not answer why manual ordering is unnecessary. Terraform is an example of a declarative tool that also computes dependencies, but the underlying reason is the declarative style and dependency resolution provided by the template engine.

Declarative templates integrate well with version control systems like GitHub is true in the sense that templates are easy to store and track, but integration with version control does not explain why you do not need to manually sequence resource creation. Version control helps with change management and collaboration rather than execution order.

Declarative templates typically only need small manual edits before each deployment to avoid naming conflicts is incorrect because declarative templates are meant to be parameterized and repeatable. Naming conflicts are normally handled with variables, unique identifiers, or automated naming schemes rather than manual edits before every deployment.

When the question contrasts declarative and procedural approaches look for wording about declaring desired state or dependency resolution to identify answers that explain why manual sequencing is not required.

A regional finance startup named Northwind Analytics needs a messaging architecture that uses a publish subscribe pattern and avoids continuous polling. Which Azure services can satisfy this requirement? (Choose 2)

  • ✓ B. Azure Service Bus

  • ✓ D. Event Grid

Azure Service Bus and Event Grid are correct.

Azure Service Bus supports a true publish subscribe pattern through topics and subscriptions and it provides reliable message delivery and advanced messaging features that suit enterprise scenarios where ordered or transactional delivery is needed.

Event Grid is an event routing service that delivers events to many subscribers with push delivery to endpoints such as webhooks or Azure Functions so consumers do not need to continuously poll for new events.

Event Hubs is not the best choice here because it is optimized for high throughput telemetry and streaming ingestion and consumers typically read from partitions using offsets rather than using a brokered topic and subscription model for classic pub sub.

Azure Queue Storage is also not ideal because it is a simple queue service that lacks native topics and subscriptions, and clients must poll the queue to retrieve messages rather than receiving pushed events.

When a question asks about publish subscribe and avoiding continuous polling look for services that provide topics or push-based event routing such as Service Bus topics or Event Grid and rule out simple queue or streaming ingestion services.

A developer at Horizon Freight needs to securely obtain a database connection string that is stored in Azure Key Vault for a .NET service. Which step is required to retrieve the secret by using the Azure SDK for .NET?

  • ✓ D. Authenticate using Azure.Identity.DefaultAzureCredential and then use SecretClient to fetch the secret

Authenticate using Azure.Identity.DefaultAzureCredential and then use SecretClient to fetch the secret. This is the correct approach for a .NET service to obtain a secret from Azure Key Vault when using the Azure SDK.

DefaultAzureCredential is the recommended authentication chain in the Azure Identity library because it automatically tries a set of credential sources and picks the appropriate one for the environment. It will use a managed identity when running in Azure and developer credentials when you run locally. Using DefaultAzureCredential means you do not need to hard code keys or manually obtain tokens for the most common scenarios.

SecretClient is the modern client type in the Azure.Security.KeyVault.Secrets package that provides simple methods such as GetSecretAsync to read secrets. Using SecretClient together with DefaultAzureCredential is the supported and secure way to fetch a connection string with the Azure SDK for .NET.
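
A minimal sketch, with the vault name and secret name as placeholders, looks like this.

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential picks a managed identity in Azure and developer
// credentials locally, so no secret material is embedded in the code.
var client = new SecretClient(
    new Uri("https://<vault-name>.vault.azure.net/"),
    new DefaultAzureCredential());

KeyVaultSecret secret = await client.GetSecretAsync("SqlConnectionString");
string connectionString = secret.Value;
Console.WriteLine(connectionString.Length); // avoid printing the secret itself
```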

Register Microsoft.Extensions.Configuration.AzureKeyVault in the application configuration so secrets are available during startup is not the required step for the Azure SDK approach. That older configuration provider was used in legacy patterns and newer applications generally use the Azure SDK clients or the Azure.Extensions.AspNetCore.Configuration.Secrets provider instead.

Instantiate the legacy KeyVaultClient and attempt to call it without supplying credentials is incorrect because the legacy KeyVaultClient belongs to older libraries and you must supply credentials. The modern SDKs use Azure.Identity and SecretClient and the legacy client is not the recommended path on current exams or in new code.

Acquire a managed identity token and invoke the Key Vault REST API directly instead of using the Azure SDK is technically possible but it is not the expected step when the question asks about using the Azure SDK for .NET. Calling the REST API directly requires manual token management and more boilerplate when the SDK provides a simpler and supported alternative.

When an exam question mentions the Azure SDK for .NET, look for answers that use DefaultAzureCredential with the modern client libraries such as SecretClient because that pattern covers both local development and managed identities in Azure.

A development team at Contoso Cloud runs a web app on Azure App Service and they need to open the Kudu management console for their app. Which URL should they use to reach the Kudu companion application?

  • ✓ https://(app-name).scm.azurewebsites.net

https://(app-name).scm.azurewebsites.net is the App Service Advanced Tools endpoint that hosts the Kudu management console. The .scm subdomain provides the Kudu console for deployments, the web-based console, and diagnostic features and you can open it directly or via the Advanced Tools blade in the Azure Portal.

https://(app-name).kudu.azurewebsites.net is incorrect because Kudu for App Service does not use a .kudu subdomain. The companion application is exposed on the .scm subdomain.

Azure Portal is not the direct Kudu URL. The portal can launch Advanced Tools which opens Kudu but the question asks for the specific URL for the companion application so the .scm address is required.

https://(app-name).azurewebsites.net is the public endpoint for the web app and it serves application traffic. It does not host the Kudu management console which lives on the .scm subdomain.

When you need the Kudu or Advanced Tools URL remember to use the .scm subdomain or open Advanced Tools from the Azure Portal to launch the Kudu console.

What factors influence the price you pay for an Azure Cache for Redis deployment?

  • ✓ D. Region, pricing tier and hourly billing

The correct answer is Region, pricing tier and hourly billing.

Specifically, Region, pricing tier and hourly billing is correct because Azure Cache for Redis pricing varies by geographic region and by the SKU or tier you choose, and the service is metered on a time basis so costs are calculated hourly for provisioned instances.

Charges per operation or per transaction is incorrect because Azure Cache for Redis is not billed per individual cache operation or transaction. The service is provisioned as an instance and pricing is based on the instance tier and time rather than per request.

Region, allocated memory and pricing tier is incorrect because although memory capacity is related to the chosen tier, billing is expressed by the tier and by time. This option omits the important hourly billing dimension that determines how charges accumulate.

Region and pricing tier is incorrect because it leaves out the billing period. Region and tier affect price levels but the actual charges are applied on an hourly basis for the provisioned instance.

When a question asks about cloud pricing focus on the resource dimensions such as region, pricing tier, and the billing period. For Azure Cache for Redis remember that costs are tied to the selected tier and charged by time so look for answers that include hourly billing.

A serverless function at Nimbus Apps reads messages from an Azure Storage queue named orders queue and it must handle messages that repeatedly fail processing so they can be examined separately. Which configuration should you apply to ensure repeatedly failing messages are moved to a separate queue for later investigation?

  • ✓ C. Configure maxDequeueCount in host.json to a value greater than 2

The correct answer is Configure maxDequeueCount in host.json to a value greater than 2.

The Configure maxDequeueCount in host.json to a value greater than 2 setting controls how many times the Functions runtime will retry a failed Azure Storage queue message before it moves that message to a separate poison queue for later investigation. Adjusting Configure maxDequeueCount in host.json to a value greater than 2 lets you set the retry threshold so that repeatedly failing messages are isolated automatically without custom code.
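
A minimal host.json sketch with illustrative values looks like the following, and once a message exceeds maxDequeueCount dequeues the runtime moves it to a poison queue named after the original queue with a -poison suffix.

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxDequeueCount": 5,
      "visibilityTimeout": "00:00:30"
    }
  }
}
```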

Switch to Azure Service Bus and enable dead lettering is not the best answer because it suggests moving to a different messaging service. Service Bus does support dead lettering but the question is about handling failures on an Azure Storage queue and the Functions runtime already provides poison queue handling for that scenario.

Implement custom code that moves recurring failures to a separate investigation queue is unnecessary because Azure Functions and the WebJobs SDK provide built in support to move messages to a poison queue once they exceed the retry threshold. Custom code adds extra complexity and can duplicate functionality the runtime already handles.

Increase visibilityTimeout on the queue trigger binding is incorrect because visibility timeout only changes how long a message remains invisible while it is being processed. Changing that value affects retry timing and concurrent processing but it does not cause repeatedly failing messages to be moved to a separate investigation queue.

When a question mentions repeated failures for Storage queue triggers think about the runtime retry settings such as maxDequeueCount in host.json and the automatic poison queue behavior.

VitaCrunch Foods is creating a rewards system. When a customer buys a snack at any of 125 affiliated outlets the purchase event is sent to Azure Event Hubs. Each outlet is assigned a unique outlet identifier that serves as the rewards program key. Outlets must be able to be enrolled or removed at any time. Outlets must only be able to submit transactions for their own identifier. How should you allow outlets to send their purchase events to Event Hubs?

  • ✓ C. Use publisher policies for outlets

The correct option is Use publisher policies for outlets.

Use publisher policies for outlets is the right choice because Event Hubs publisher policies provide per-publisher credentials and allow you to register each outlet as a publisher and revoke or add publishers dynamically. This approach lets all outlets send to the same Event Hub while ensuring that each outlet can only publish using its assigned publisher identity so transactions are tied to the outlet identifier.
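
As a rough sketch, with the namespace, event hub, outlet identifier, and SAS token as placeholders, each outlet sends to its own publisher endpoint over the Event Hubs REST API and presents a token scoped to that publisher.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var client = new HttpClient();

// SAS token scoped to this outlet's publisher so it cannot impersonate others.
client.DefaultRequestHeaders.TryAddWithoutValidation(
    "Authorization", "<sas-token-scoped-to-outlet-0117>");

var body = new StringContent("{\"amount\": 12.50}", Encoding.UTF8);
// Content type documented for the Send Event REST operation.
body.Headers.ContentType =
    MediaTypeHeaderValue.Parse("application/atom+xml;type=entry;charset=utf-8");

// The publisher segment of the URL carries the outlet identifier.
using var response = await client.PostAsync(
    "https://vitacrunch-ns.servicebus.windows.net/purchases/publishers/outlet-0117/messages",
    body);
response.EnsureSuccessStatusCode();
```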

Create a partition for each outlet is incorrect because partitions are a throughput and ordering mechanism and not an access control feature. You cannot practically create a dedicated partition per outlet at scale and partitions do not prevent one sender from using another outlet identifier.

Provision a separate namespace for every outlet is incorrect because namespaces are heavy weight and intended for isolation at a higher level. Creating one namespace per outlet would be expensive and hard to manage and it is unnecessary when publisher policies provide per-outlet credentials and scoping.

When a question asks for per sender identity and the ability to add or revoke senders quickly look for answers that provide per sender credentials and scoped permissions such as publisher policies rather than structural changes like partitions or separate namespaces.

Refer to the Northfield Systems case study at the link provided below and answer the following question. Open the link in a new tab and do not close the test tab. https://example.com/doc/9f8Kx7 Which types of availability tests can you use to check the company website? (Choose 2)

  • ✓ B. Standard availability test

  • ✓ C. Custom testing via the TrackAvailability API

Standard availability test and Custom testing via the TrackAvailability API are correct.

Standard availability test is the built in HTTP or HTTPS uptime check that monitors a website URL for response status and latency from multiple locations and it is the typical method to verify the company website is reachable.

Custom testing via the TrackAvailability API is correct because it allows you to implement custom or scripted checks and to track complex sequences of requests when a single HTTP request is not sufficient for validating site functionality.
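
A hedged sketch, where the connection string and the probe itself are placeholders, reports a custom availability result with the TrackAvailability API, for example from a timer triggered job.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

var configuration = TelemetryConfiguration.CreateDefault();
configuration.ConnectionString = "<application-insights-connection-string>";
var telemetryClient = new TelemetryClient(configuration);

var started = DateTimeOffset.UtcNow;
bool success = await ProbeAsync();

// Report the custom check as an availability result in Application Insights.
telemetryClient.TrackAvailability(
    "HomePageCheck", started, DateTimeOffset.UtcNow - started, "custom-runner", success);
telemetryClient.Flush();

// Hypothetical probe that decides whether the site is healthy.
static async Task<bool> ProbeAsync()
{
    using var http = new HttpClient();
    using var response = await http.GetAsync("https://www.example.com/");
    return response.IsSuccessStatusCode;
}
```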

Multi step web test is incorrect because multi step web tests are a deprecated Application Insights feature, and scenarios that need a scripted sequence of requests are expected to use custom testing through the TrackAvailability API instead.

URL ping test is incorrect because, despite its name, it uses HTTP rather than ICMP, and it is a classic test type that is being retired in favor of the standard availability test, so it is not the recommended approach on newer exams.

Read each option for whether it mentions HTTP or an explicit API and eliminate choices that refer to ICMP ping or vague legacy features to focus on standard uptime checks or custom API based testing.

Tailwind Software hosts an ASP.NET Core API in an Azure Web App and it requires custom claims from its Entra ID tenant for authorization. The custom claims must be removed automatically when the app registration is deleted. You need those custom claims to be present in the user access token. What action will achieve this?

  • ✓ C. Add roles to the appRoles attribute in the application manifest

The correct option is Add roles to the appRoles attribute in the application manifest.

Defining roles in the appRoles section of the application manifest creates application roles that Azure AD includes as the roles claim in user access tokens when a user is assigned to those roles. These roles are part of the app registration metadata so they are automatically removed if the app registration is deleted, which meets the requirement that the custom claims disappear with the app registration.
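
A trimmed manifest entry along these lines, where the role name, description, and identifier are placeholders, is enough for assigned users to receive a roles claim containing the value in their access tokens.

```json
"appRoles": [
  {
    "allowedMemberTypes": [ "User" ],
    "description": "Reviewers can approve submitted claims",
    "displayName": "Claims Reviewer",
    "id": "00000000-0000-0000-0000-000000000001",
    "isEnabled": true,
    "value": "Claims.Reviewer"
  }
]
```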

Add the groups to the groupMembershipClaims attribute in the application manifest is incorrect because the groupMembershipClaims setting emits group identifiers and not custom application roles. That setting will not produce the custom role claims you need in the token.

Require the https://graph.microsoft.com/.default scope during sign in is incorrect because the /.default scope is used for acquiring application permissions to call Microsoft Graph and it does not cause Azure AD to add custom app role claims into the user access token for your API.

Implement custom middleware to query the identity provider for role claims at runtime is incorrect because fetching roles at runtime would not make the roles part of the issued access token and it would not ensure the claims are removed automatically when the app registration is deleted. The appRoles manifest entry is the supported way to have roles emitted in tokens.

Use the OAuth 2.0 authorization code flow for the web app is incorrect because the authorization code flow is an authentication and token acquisition mechanism and it does not by itself add custom application role claims to the token. The roles must be defined on the app registration to appear in the access token.

For questions about token contents remember to check how the application registration is configured. Defining roles in the app manifest will cause Azure AD to emit a roles claim in the access token and those roles are tied to the registration so they are removed if the registration is deleted.

Greenfield Studios is building a platform to archive millions of photographs in Azure Blob Storage. The platform must store camera Exchangeable Image File Format (Exif) metadata as blob metadata when images are uploaded and it must read that metadata while minimizing bandwidth and processing overhead using the REST API. Which HTTP verb should the platform use to retrieve Exif metadata from a blob?

  • ✓ D. HEAD

The correct option is HEAD.

HEAD reads only the blob headers and the metadata without returning the blob body. This behavior lets the platform retrieve Exchangeable Image File Format information stored as blob metadata while minimizing bandwidth and processing overhead when using the REST API.
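As an illustration, the Azure.Storage.Blobs call below maps to the Get Blob Properties REST operation, which is sent as a HEAD request, so only headers and metadata come back. The container name, blob name and metadata keys are placeholders.

```csharp
// Minimal sketch: read blob metadata without downloading the blob body.
// GetPropertiesAsync maps to the Get Blob Properties REST operation, which is
// issued as a HEAD request, so only headers and metadata travel over the wire.
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

string connectionString = "<storage-connection-string>";
var blobClient = new BlobClient(connectionString, "photo-archive", "IMG_0042.jpg");

// Returns system properties and user defined metadata only.
BlobProperties properties = await blobClient.GetPropertiesAsync();

foreach (var pair in properties.Metadata)
{
    // Exif values written as metadata at upload time, for example ExposureTime.
    Console.WriteLine($"{pair.Key} = {pair.Value}");
}
```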

GET is incorrect because it returns the full blob content along with headers and metadata. Retrieving the body would waste bandwidth and increase processing when only Exif metadata is needed.

PUT is incorrect because it is used to upload or replace blobs and not to retrieve blob metadata. Using PUT would modify storage rather than perform a read of metadata.

POST is incorrect because Azure Blob Storage does not use POST to read blob properties or metadata. POST is not the REST verb for retrieving headers only and it is not appropriate for this read only metadata access pattern.

When you need only metadata use HEAD to fetch headers only and avoid downloading blob content. This reduces bandwidth and speeds up metadata lookups.

Your team at Northgate Software is building an Azure hosted web application that must support user sign in and you plan to rely on Microsoft Entra ID for the identity provider. Which approach should you implement to provide user authentication for the application?

  • ✓ C. Configure the application to use Microsoft Entra ID as an identity provider using OpenID Connect or OAuth 2.0

Configure the application to use Microsoft Entra ID as an identity provider using OpenID Connect or OAuth 2.0 is the correct choice.

Using Microsoft Entra ID with OpenID Connect or OAuth 2.0 allows the web application to delegate authentication to a managed identity service and receive standardized tokens for secure access. The platform handles sign in flows, token issuance, single sign on and optional multi factor authentication so your app does not need to manage user credentials directly.

The Microsoft identity platform and libraries such as MSAL integrate with Microsoft Entra ID and simplify acquiring and validating tokens in your application. This approach follows cloud security best practices and reduces operational overhead when hosting a web app in Azure.
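As a minimal sketch of that setup, the snippet below uses Microsoft.Identity.Web to wire up OpenID Connect sign in for an ASP.NET Core web app, assuming an AzureAd configuration section holds the tenant and client registration values.

```csharp
// Minimal sketch: delegate web app sign in to Microsoft Entra ID over
// OpenID Connect using Microsoft.Identity.Web. The "AzureAd" section holds
// placeholder tenant and client registration settings.
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;

var builder = WebApplication.CreateBuilder(args);

// Registers the OpenID Connect middleware and token validation against the
// Entra ID tenant described in configuration.
builder.Services
    .AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));

// Adds the built in sign in and sign out pages provided by the library.
builder.Services.AddRazorPages().AddMicrosoftIdentityUI();
builder.Services.AddAuthorization();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapRazorPages();
app.Run();
```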

Use Microsoft Entra External ID formerly known as Azure AD B2C with a custom policy is not the best choice for standard enterprise sign in. Entra External ID is targeted at consumer and external identity scenarios and for custom user journeys and social identity providers so it is more complex than needed for typical organization user authentication.

Deploy a self hosted OpenID Connect provider and perform authentication against it is incorrect because running your own identity provider adds significant operational and security responsibility and duplicates functionality that Microsoft Entra ID already provides as a managed service. For most Azure hosted apps the managed identity platform is the preferred option.

Call the Microsoft Graph API to try to validate user credentials at runtime is wrong because Microsoft Graph is not an authentication endpoint and it does not accept raw user credentials for sign in. Authentication must be performed using OAuth 2.0 or OpenID Connect flows so that your app receives and validates tokens.

When a question asks about web app sign in on Azure choose the option that uses the platform identity service and standard protocols. Look for OpenID Connect or OAuth 2.0 in the correct answer.

A cloud engineering team at Northgate Systems maintains an Azure subscription that contains a storage account, a resource group, a blob container and a file share. A colleague named Alex Palmer deployed a virtual machine and a storage account by using an Azure Resource Manager template. You must locate the ARM template that Alex used. Solution: You access the ‘Resource Group’ blade. Does this solution meet the requirement?

  • ✓ A. No

The correct answer is No.

Accessing the Resource Group blade by itself does not meet the requirement because the ARM template is not shown simply by opening that blade. To locate the template you need to view the deployment history for the resources. In the Azure portal you must open the specific resource group and then view its Deployments entry and select the relevant deployment to view the template that was used.

If the deployment was performed at subscription scope or the template file was stored externally, the Resource Group blade alone will not show the template and you must check the subscription Deployments view or the original repository or storage location.

Yes is incorrect because simply accessing the Resource Group blade without viewing the Deployments does not guarantee that you will find the ARM template. The portal requires you to inspect the deployment record to retrieve the template or to check the proper deployment scope.

When asked where to find a deployed ARM template, first open the specific resource group and then open its Deployments history. If nothing appears check the subscription deployments or the source repository where the template may have been stored.

A retail startup named NimbusShop is building an order routing system using Azure Service Bus topics and it will publish order messages to a topic. Each message will set OrderNumber as the MessageId and include a Quantity application property plus a Priority property that can be High, Medium or Low. One subscription must receive every order, another subscription must receive only messages whose Priority equals High, and a third subscription must accept orders with a Quantity of at least 60. Which filter type should the subscription that only receives High priority orders use?

  • ✓ C. SqlFilter

The correct option is SqlFilter.

SqlFilter lets you write SQL like expressions that evaluate application properties on messages so you can filter for Priority = ‘High’ and only receive high priority orders. It supports string equality checks and other comparisons and logical operators which makes it the flexible choice for property based filtering.
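As a sketch, the current Azure.Messaging.ServiceBus.Administration library exposes this filter type as SqlRuleFilter, and a rule like the one below would limit the subscription to high priority orders. The topic, subscription and rule names are placeholders, and the default rule is removed so that only the SQL filter applies.

```csharp
// Minimal sketch: restrict a subscription to high priority orders with a SQL
// filter. In Azure.Messaging.ServiceBus the SqlFilter concept is exposed as
// SqlRuleFilter. Topic, subscription and rule names are placeholders.
using Azure.Messaging.ServiceBus.Administration;

string connectionString = "<service-bus-connection-string>";
var adminClient = new ServiceBusAdministrationClient(connectionString);

// Remove the default rule, which is a TrueFilter that matches every message.
await adminClient.DeleteRuleAsync("orders", "high-priority-orders", "$Default");

// Add a SQL filter that evaluates the Priority application property.
await adminClient.CreateRuleAsync(
    "orders",
    "high-priority-orders",
    new CreateRuleOptions("HighPriorityOnly", new SqlRuleFilter("Priority = 'High'")));
```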

CorrelationFilter is not the best fit because it is designed for simple equality matching on a few well known message header fields such as MessageId or CorrelationId and it does not provide the full SQL like expression support needed to filter an arbitrary application property like Priority.

TrueFilter is incorrect because it matches every message and would deliver all orders to the subscription instead of restricting delivery to only high priority messages.

When a question asks about filtering on message properties or doing comparisons pick SqlFilter since it supports SQL like expressions and is the most flexible filter type.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.