Free AZ-204 Practice Tests on Azure Developer Exam Topics

Microsoft AZ-204 Certification Exam Topics

Want to pass the AZ-204 certification exam on your first attempt? You are in the right place. We have collected a set of AZ-204 exam questions that will help you understand key concepts and prepare for the real AZ-204 test.

All sample questions come from my Azure and cloud practice courses and the certificationexams.pro website.

AZ-204 Developer Associate Practice Questions

These are not AZ-204 exam dumps or braindumps. They are legitimate practice questions designed to help you learn and gain real competency with Azure development services.

Good luck on these practice questions, and even better luck on the official AZ-204 exam.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML and DevOps technologies of the day? The GitHub certification exams are a fast way to prove those skills.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

AZ-204 Sample Questions

BrightMart is hosting a retail web application on Azure App Service and they will store authentication secrets in Azure Key Vault while using App Service authentication with Microsoft Entra ID. What setting must you apply to the retail App Service so the application can retrieve Key Vault secrets during user sign in?

  • ❏ A. Use the Azure CLI az keyvault secret command to set a secret in Key Vault

  • ❏ B. Register an application and create a Microsoft Entra ID service principal for the app

  • ❏ C. Enable a system assigned managed identity for the App Service

  • ❏ D. Enable Microsoft Entra ID Connect on the App Service

At Meridian Freight you are building an Azure Functions service that must capture comprehensive execution telemetry to help with debugging and performance troubleshooting. Which approach should you use to implement logging in the Azure Function?

  • ❏ A. Pass an ILogger parameter into the function method and write structured logs using that interface

  • ❏ B. Use Console.WriteLine to emit log statements from the function code

  • ❏ C. Persist runtime log files to an Azure Blob Storage container for later inspection

  • ❏ D. Configure Application Insights and use the TelemetryClient to send custom events and trace messages

A retail technology company called HarborLane runs six web apps within a single App Service plan and each app uses a production slot and a staging slot. The App Service plan is scaled out to four instances. How many virtual machines are running to support these applications?

  • ❏ A. Twenty four virtual machines

  • ❏ B. Four virtual machines

  • ❏ C. Twelve virtual machines

  • ❏ D. One virtual machine

A development team at Nimbus Analytics stores user documents in Azure Blob Storage and they need an automated rule that removes blobs that have not been modified for more than 45 days. Which Azure Blob Storage capability should they use to satisfy this requirement?

  • ❏ A. Azure Blob Snapshots

  • ❏ B. Azure Blob Lifecycle Management

  • ❏ C. Azure Blob Change Feed

  • ❏ D. Azure Blob Soft Delete

A regional e commerce company called Meridian Retail runs an Azure App Service that consistently receives a traffic surge every Monday morning compared with the rest of the week. What scaling approach should you apply to handle that predictable weekly spike?

  • ❏ A. Configure CPU based autoscale rules that only evaluate metrics during Mondays

  • ❏ B. Schedule App Service autoscale rules to increase instance count on Mondays at 9 AM and scale down on Tuesdays at 9 AM

  • ❏ C. Keep the application on a single instance to reduce costs during quieter weekdays

  • ❏ D. Manually increase the number of App Service instances every Monday morning to absorb the load

Your team is building an ASP.NET Core web application that will be hosted on Azure App Service. The application needs a centralized session store that can also cache entire HTTP responses for frequent reuse. Which Azure service should you choose?

  • ❏ A. Azure Functions

  • ❏ B. Azure Storage Account

  • ❏ C. Azure Logic Apps

  • ❏ D. Azure Cache for Redis

You are building a web portal that will be hosted as an Azure App Service. Users will sign in with their Contoso Identity credentials. You plan to assign each user one of three permission tiers, admin, standard, or viewer, and a user must receive the correct tier based on their Contoso Identity group membership. The suggested configuration is to register a new Contoso Identity application, add application roles to the application manifest that correspond to the required tiers, assign each Contoso Identity group to the appropriate role, and finally have the portal read the roles claim from the JWT to determine access. Does this solution meet the goal?

  • ❏ A. No

  • ❏ B. Yes

When configuring an Azure Function using the function.json definition file what values can be assigned to the direction property?

  • ❏ A. blob, queue, table, file

  • ❏ B. stderr, stdout, stdin

  • ❏ C. left and right

  • ❏ D. in, out and inout

Why does an in memory caching layer such as Memcached return data much faster than a disk based relational database like PostgreSQL?

  • ❏ A. It stores data on high performance NVMe solid state drives

  • ❏ B. It retains frequently used data entirely in memory for immediate retrieval

  • ❏ C. Cloud Memorystore

  • ❏ D. The cache runs on the same physical host as the application

A retail technology firm named RetailNex runs a web app called PortalApp2 in Azure App Service and a function app named WorkerFunc2. The web app reports telemetry to an Application Insights instance named insightsPortal2 where a synthetic web test and an alert rule are configured and each alert currently sends an email to your mailbox. You must ensure every alert also invokes WorkerFunc2. The proposed solution is to enable an Application Insights smart detection. Does this solution meet the requirement?

  • ❏ A. Yes

  • ❏ B. No

A software team at Novasoft deployed a scheduled Azure Function using the cron expression “0 0 0 1 1 *” and they documented that the pattern follows the {second} {minute} {hour} {day} {month} {day-of-week} layout. How often will the function execute?

  • ❏ A. Triggers daily at 12:01 AM

  • ❏ B. Runs at midnight on January first each year

  • ❏ C. Executes every Monday at 1:00 AM

  • ❏ D. Fires every day at 1:01 AM

A startup named LeisureStays is building a .NET 6 MVC web application that lets guests find independent vacation rentals and the team plans to use Azure Cognitive Search to query an index by several criteria and they want to support regular expression matching in user queries. What change should they make to the search request to enable regular expression queries?

  • ❏ A. Configure the “Filter” property of the “SearchParameters” class

  • ❏ B. Configure the “QueryType” property of the “SearchParameters” class

  • ❏ C. Configure the “SearchMode” property of the “SearchParameters” class

  • ❏ D. Configure the “Facets” property of the “SearchParameters” class

Evaluate whether the proposed approach satisfies the requirements. A software team at HarborSoft is deploying four ASP.NET web applications to Azure App Service and they need to persist session state and complete HTTP responses. The storage solution must allow session state to be shared across all web applications. It must support controlled concurrent access to the same session data with many readers and a single writer. It must also save full HTTP responses for concurrent requests. The proposed approach is to enable Application Request Routing on the App Service. Does this approach meet the requirements?

  • ❏ A. No

  • ❏ B. Yes

Your team builds integrations for BlueRiver Tech that connect to an Entra directory and you plan to classify permissions within that directory. You must select which permission type to include in the classification. Which permission type should you choose?

  • ❏ A. App only permissions that must be approved by an administrator

  • ❏ B. Delegated permissions obtained via end user consent

  • ❏ C. App only permissions that claim to be grantable by users

  • ❏ D. Delegated permissions that require administrator approval

Review the Northbridge Logistics case study using the document link provided and answer the following question. Open the document in a separate browser tab and keep the test tab active. https://example.com/documents/2bXyZ9AbCcdE Which API setup should the Azure Function use to retrieve delivery driver profile information?

  • ❏ A. Microsoft Identity Platform

  • ❏ B. Microsoft Entra ID Graph

  • ❏ C. Microsoft Graph

A developer is building a mobile application that stores data in an Azure SQL Database named AzuraDB. The database contains a table named Clients in the sales schema and the table includes a column named contact_email. The developer executes the following Transact SQL statement ALTER TABLE [sales].[AzuraDB].[Clients] ALTER COLUMN [contact_email] ADD MASKED WITH (FUNCTION = 'email()') in order to enable dynamic data masking. Does this statement meet the requirement?

  • ❏ A. Yes it applies dynamic data masking

  • ❏ B. No the statement does not implement the masking

A development group at Harborview Systems needs to allow customers to upload large files into a specific blob container in Azure Blob Storage. Which Azure SDK method should they call to upload a local file into a blob?

  • ❏ A. PutBlockBlobAsync

  • ❏ B. CreateBlobAsync

  • ❏ C. UploadBlobAsync

  • ❏ D. UploadFromFileAsync

Your team uses an Azure Container Registry named scrumtuous.azurecr.io and you organized repositories into namespaces for sales marketing technology and customerservice. The website project is stored in the marketing namespace. Which docker pull command will download the website image from the registry?

  • ❏ A. docker pull marketing.scrumtuous.azurecr.io/website

  • ❏ B. docker pull scrumtuous.azurecr.io -location marketing/website

  • ❏ C. docker pull scrumtuous.azurecr.io/marketing/website

  • ❏ D. docker pull scrumtuous.azurecr.io -path marketing -project website

You are creating nested Azure Resource Manager templates to provision several Azure resources for Nova Systems and you must test the templates before they run in production. You want a method to preview and validate the exact changes a deployment will apply to the subscription. Which tool should you use to preview and validate the expected modifications?

  • ❏ A. Parameters file

  • ❏ B. User defined functions in the template

  • ❏ C. Azure Deployment Manager

  • ❏ D. Template analyzer tool

  • ❏ E. Built in template functions

  • ❏ F. What if preview operation

A development team at Meridian Apps is instrumenting an Azure web service with Azure Application Insights to gather telemetry and they need to pinpoint which requests and external calls are introducing latency so they can resolve performance bottlenecks. Which Application Insights capability helps surface slow requests and the dependent components that cause them?

  • ❏ A. Live Metrics Stream

  • ❏ B. Application Map

  • ❏ C. Performance Counters

  • ❏ D. Analytics Query

What advantage does hosting applications in an App Service Environment provide for an organization running sensitive workloads?

  • ❏ A. It provides direct integration with Azure Front Door

  • ❏ B. It removes all network setup requirements when publishing apps

  • ❏ C. It guarantees infinite autoscaling with zero performance impact

  • ❏ D. It offers a dedicated and isolated hosting environment within a virtual network for stronger security and compliance

Review the Bayside Retail case study at https://example.com/casestudy and keep this testing tab open while you consult the document. The retailer is experiencing lost and overwritten store location records and you must enable the appropriate Azure Blob storage capabilities so administrators can restore blob data to a previous point in time. Which three Azure Blob storage features should you enable to allow point in time restore and recovery? (Choose 3)

  • ❏ A. Change feed

  • ❏ B. Immutability policies

  • ❏ C. Soft delete

  • ❏ D. Object replication

  • ❏ E. Versioning

  • ❏ F. Snapshots

A development team at Meridian Bank is building a service that stores cryptographic keys in Azure Key Vault. The team must require a specific cryptographic algorithm and a set key length for any keys placed in the vault. Which Azure capability should they use?

  • ❏ A. Secret versioning

  • ❏ B. Access policies

  • ❏ C. Azure Policy

  • ❏ D. Azure Blueprints

A small firm called North Harbor Apps is building a web portal that uses the Microsoft identity platform for user sign in. The engineering team needs to retrieve a single immutable claim that will uniquely identify each user within the directory and remain stable across applications. Which claim type should they read from the token?

  • ❏ A. nonce claim

  • ❏ B. idp claim

  • ❏ C. aud claim

  • ❏ D. oid claim

You work as a developer at Summit Analytics and your application consumes messages from an external system using an Azure Service Bus queue with several workers processing those messages. Recently a rare issue has caused some messages to be processed twice because a worker completed the work but crashed before it could remove the message from the queue. Management prefers a behavior that avoids duplicate processing even if that means some messages may be missed. How should you change the system so messages are never processed more than once even if some are occasionally lost?

  • ❏ A. Move the messaging workflow to Cloud Pub/Sub with exactly once delivery

  • ❏ B. Record each message identifier in a database table called processed_messages after finishing processing and have consumers check this table before handling a message

  • ❏ C. Change the Service Bus queue to use “at most once” delivery semantics

  • ❏ D. Add robust error handling that sends an SMS alert to developers when message processing fails

A virtual machine named VMProd is launched into the virtual network CorpNet and placed in the subnet AppSubnet. Which private IP address will the virtual machine receive?

  • ❏ A. A public IP from Azure public address pool

  • ❏ B. The next free private IP automatically assigned from the AppSubnet range

  • ❏ C. 192.168.5.2 is always the first address given on a new virtual network

  • ❏ D. An unpredictable address that cannot be determined in advance

A development team at Northwind Commerce deployed an Azure Cosmos DB for NoSQL account named appdata1 using the default consistency level and they intend to set consistency on a per request basis while requiring consistent prefix behavior for both reads and writes to appdata1. Which consistency level should be configured for read operations?

  • ❏ A. Session

  • ❏ B. Consistent prefix

  • ❏ C. Strong

Review the Birchwood Orchards web application case study at https://example.com/case-study-birchwood and decide how to resolve repeated HTTP 503 errors that indicate the site is running out of CPU and memory resources. Open the linked case study in a new tab and keep this test tab open. Which solution should you implement?

  • ❏ A. Create an Azure Content Delivery Network endpoint

  • ❏ B. Enable the App Service Local Cache feature

  • ❏ C. Scale the App Service plan up to a higher SKU such as Premium V3

  • ❏ D. Add an App Service staging deployment slot

A development group at Solstice Systems wants to keep infrastructure definitions in source control and deploy environments using repeatable declarative files. Which Azure feature enables defining infrastructure as code?

  • ❏ A. GitHub Actions

  • ❏ B. Terraform

  • ❏ C. Azure Automation Accounts

  • ❏ D. Azure Resource Manager template files

A regional retailer named Meridian Books has an Azure Cosmos DB for NoSQL account configured with session consistency and it accepts writes in a single Azure region while the data is replicated for reads to four regions. An application that will access the Cosmos DB container through an SDK requires that container items must never be removed automatically. What change should you make to the Azure Cosmos DB for NoSQL account to meet this requirement?

  • ❏ A. Apply a resource lock to the container

  • ❏ B. Set the Time to Live property on items to -1

  • ❏ C. Set the Time to Live property on items to 0

  • ❏ D. Apply a resource lock to the database account

Review the Millerton Retail case study hosted at https://example.com/doc/2bTzQx and answer the prompt provided in the linked document. Keep the exam tab open and view the case details in a separate tab. You need to build the store locator Azure Function for the retailer. Which binding direction should you choose for the Cosmos DB binding?

  • ❏ A. Input

  • ❏ B. Output

A mobile team is building an app that connects to an Azure SQL Database named ‘Aquila’. The database contains a table called ‘Clients’ and that table has a column named ‘contact_email’. The team wants to apply dynamic data masking to hide the contact_email values and they propose executing the PowerShell command Set-AzSqlDatabaseDataMaskingPolicy with the DatabaseName parameter set to ‘Aquila’. Will this approach achieve the masking requirement?

  • ❏ A. No

  • ❏ B. Yes

You are building an Azure solution to ingest point of sale device telemetry from 2,500 retail locations worldwide and each device generates about 2.5 megabytes of data every 24 hours. Each location has between one and six devices that transmit data and the data must be stored in Azure Blob storage. Device records must be correlated by a device identifier and more locations will be added in the future. The proposed solution provisions an Azure Notification Hub and registers all devices with the hub. Does this solution meet the requirement?

  • ❏ A. Yes the proposed design satisfies the requirement

  • ❏ B. No the proposed design does not satisfy the requirement

A regional lender uses Azure Durable Functions to automate an auto loan approval workflow at Harbor Lending. The process requires several ordered steps and one step performs a credit verification that can take up to four days to finish. Which Azure Durable Functions function type should coordinate the entire loan workflow?

  • ❏ A. client

  • ❏ B. activity

  • ❏ C. entity

  • ❏ D. orchestrator

A team is building a telemetry backend for courier drivers at ParcelWave that records driver first name, driver last name, total packages, parcel id, and current GPS coordinates in Azure Cosmos DB. You must select an Azure Cosmos DB partition key that provides even distribution and supports efficient retrieval of single delivery records. Which field should be used as the Azure Cosmos DB partition key?

  • ❏ A. totalPackages

  • ❏ B. driverFirstName

  • ❏ C. parcelId

  • ❏ D. driverLastName

Read the Northgate Retail case study at https://example.com/northgate-case and keep this test tab open. You need to route events for incoming retail store location files so that processing is triggered. Which source configuration should be used?

  • ❏ A. Azure Event Hub

  • ❏ B. Azure Service Bus

  • ❏ C. Azure Event Grid

  • ❏ D. Azure Blob Storage

You manage a web application through Azure Front Door and you expect incoming files to be served with Brotli compression. You observe that incoming XML files that are 25 megabytes in size are not being compressed. You need to determine the root cause. Must the edge nodes be purged of all cached content?

  • ❏ A. Yes

  • ❏ B. No

A developer is building a service that uses Contoso Cache for Redis and must choose the appropriate Redis structure for a requirement. Which Redis data type would you select to implement a Publish/Subscribe messaging pattern?

  • ❏ A. stream

  • ❏ B. list

  • ❏ C. channel

A team at Northshire deployed a container group to an Azure Container Instance and set the DNS label “appinstance”. What is the publicly reachable fully qualified domain name for that container group?

  • ❏ A. Cloud Run

  • ❏ B. azurecontainer.example.com/appinstance

  • ❏ C. appinstance.westus2.azurecontainer.example.com

  • ❏ D. appinstance.azurecontainer.example.com

Which kinds of item operations does the Azure Cosmos DB change feed capture?

  • ❏ A. Inserts updates and deletes

  • ❏ B. Updates only

  • ❏ C. Inserts and updates

  • ❏ D. Inserts only

A software team at Contoso Dev is building a REST API hosted in Azure App Service that is called by a companion web application. The API must read and update user profile attributes stored in the company’s Azure Active Directory tenant. Which two technologies should the team configure so the API can perform these updates? (Choose 2)

  • ❏ A. Azure Key Vault SDK

  • ❏ B. Microsoft Graph API

  • ❏ C. Azure API Management

  • ❏ D. Microsoft Authentication Library MSAL

A scheduled timer function at NovaWorks uses the cron expression “0 12,24,36 0 * * *” for its trigger schedule. When will the function execute?

  • ❏ A. Every 12 seconds continuously throughout the day

  • ❏ B. At 12 minutes 24 minutes and 36 minutes past every hour of the day

  • ❏ C. At 00:12, 00:24, and 00:36 each day

  • ❏ D. Every 12 minutes during every hour of the day

Your team operates an Azure Cosmos DB for NoSQL instance for a retail startup called Soluna Tech. You plan to build two services named ReaderService and PushService that will consume the change feed to detect updates to containers. ReaderService will pull changes by polling and PushService will receive events using the push style. Which component should PushService use to record the most recently processed change for each partition?

  • ❏ A. Continuation token

  • ❏ B. Integrated cache

  • ❏ C. Lease container

A retail technology team is preparing to deploy an Azure Function App and they have implemented the logic in a runtime that the Functions host does not support natively, and the runtime can handle HTTP requests, so they must select a Publish option when deploying to production. Which Publish setting should they choose?

  • ❏ A. Custom Handler

  • ❏ B. Code

  • ❏ C. Docker Container

A developer created an HTTP triggered Azure Function for MeridianRetail to process files stored in Azure Storage blobs and the function uses an output binding to update the blob. The function repeatedly times out after three minutes when handling large blob content and it must finish processing the blob data. The team proposes enqueuing the incoming HTTP payload into an Azure Service Bus queue so that a separate queue triggered function will perform the work while the HTTP endpoint returns an immediate success response. Does this approach meet the objective?

  • ❏ A. No this design does not meet the objective

  • ❏ B. Yes this approach meets the objective

You are building an ASP.NET Core Web API for a logistics startup that uses Azure Application Insights for telemetry and dependency monitoring. The API stores data in a non Microsoft SQL Server database. You need to ensure outgoing calls to that external datastore are tracked as dependencies. Which two dependency telemetry properties should you set? (Choose 2)

  • ❏ A. Telemetry.Context.Cloud.RoleInstance

  • ❏ B. Telemetry.Id

  • ❏ C. Telemetry.Context.Operation.ParentId

  • ❏ D. Telemetry.Context.Operation.Id

  • ❏ E. Telemetry.Name

A development team at LunarSoft is building a customer portal that relies on the Microsoft Identity platform for user sign in and API access. The portal calls several REST endpoints and you must implement authentication and authorization flows that validate the claims carried in the authentication token. Which token type should you use to identify users for the portal by using a JWT that contains claims?

  • ❏ A. Refresh token

  • ❏ B. Access token

  • ❏ C. SAML token

  • ❏ D. ID token

A small telemedicine company called Meridian Health is building a web service that needs to retrieve sensitive settings such as database connection strings and third party API keys at runtime and the team plans to store these values in Azure Key Vault. What should they do to allow the service to retrieve the Key Vault secrets in a secure manner?

  • ❏ A. Register a service principal and embed its client secret inside the application code

  • ❏ B. Store Key Vault secrets in the application source repository

  • ❏ C. Assign a managed identity to the application and grant it access to Key Vault secrets

  • ❏ D. Use a storage account shared access signature to access Key Vault

Your operations group is responsible for 120 Azure virtual machines that each have a system assigned managed identity enabled. You must obtain the objectId value for every managed identity attached to these VMs. Which two commands will return that information? (Choose 2)

  • ❏ A. az ad sp credential list

  • ❏ B. Get-AzVM

  • ❏ C. Get-AzureADUser

  • ❏ D. az ad signed-in-user list-owned-objects

  • ❏ E. Get-AzResource

  • ❏ F. Get-AzureADUserOwnedObject

Review the CoveTech Ltd case study in the linked document and then answer the question. Open the reference in a separate browser tab and keep this exam tab open. You must deploy the corporate website. Which Azure service should you choose?

  • ❏ A. Azure Blob Storage static website

  • ❏ B. Azure Static Web Apps

  • ❏ C. Azure App Service Web App

  • ❏ D. Azure Functions

Azure Developer Certification Questions Answered

BrightMart is hosting a retail web application on Azure App Service and they will store authentication secrets in Azure Key Vault while using App Service authentication with Microsoft Entra ID. What setting must you apply to the retail App Service so the application can retrieve Key Vault secrets during user sign in?

  • ✓ C. Enable a system assigned managed identity for the App Service

Enable a system assigned managed identity for the App Service is correct.

Enabling a system assigned managed identity gives the App Service an identity in Microsoft Entra ID that the app can use to authenticate to Azure Key Vault without storing client secrets. After you enable the managed identity you grant that identity access to the Key Vault with an access policy or Key Vault role so the application can retrieve secrets during user sign in.
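
A minimal sketch of the runtime side, assuming the Azure.Identity and Azure.Security.KeyVault.Secrets packages and placeholder vault and secret names. DefaultAzureCredential resolves to the system assigned managed identity automatically when the code runs in App Service, and the one time access grant can be done with az keyvault set-policy or an RBAC role assignment on the vault.

    using Azure.Identity;
    using Azure.Security.KeyVault.Secrets;

    // DefaultAzureCredential picks up the App Service managed identity at runtime.
    var client = new SecretClient(
        new Uri("https://brightmart-vault.vault.azure.net/"),   // placeholder vault URI
        new DefaultAzureCredential());

    // Works once the identity has been granted secret get permission on the vault.
    KeyVaultSecret secret = await client.GetSecretAsync("AuthClientSecret");
    string value = secret.Value;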

Use the Azure CLI az keyvault secret command to set a secret in Key Vault is wrong because creating or updating a secret does not grant the App Service permission to read secrets. That CLI command only manages secret values and does not configure the app identity or access control.

Register an application and create a Microsoft Entra ID service principal for the app is not the best choice for this scenario because App Service can use a managed identity instead of requiring you to register an app and manage credentials. A service principal would work but it requires extra credential handling that managed identities avoid.

Enable Microsoft Entra ID Connect on the App Service is incorrect because Entra ID Connect is an on premises synchronization technology and not a setting on App Service that grants Key Vault access. App Service built in authentication and managed identities are the relevant features for this requirement.

When an app running in App Service needs to access Key Vault at runtime prefer using managed identities and then grant the identity Key Vault access rather than embedding client secrets in configuration.

At Meridian Freight you are building an Azure Functions service that must capture comprehensive execution telemetry to help with debugging and performance troubleshooting. Which approach should you use to implement logging in the Azure Function?

  • ✓ D. Configure Application Insights and use the TelemetryClient to send custom events and trace messages

Configure Application Insights and use the TelemetryClient to send custom events and trace messages is the correct option.

Configure Application Insights and use the TelemetryClient to send custom events and trace messages provides a full observability solution that captures structured events, traces, metrics, and custom properties and it also correlates those signals with automatic telemetry such as requests and dependencies for Azure Functions.

Using Configure Application Insights and use the TelemetryClient to send custom events and trace messages lets you perform rich queries in Logs, view distributed traces in Application Map, and attach custom dimensions that help with debugging and performance analysis.
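
A short sketch of what that can look like in an in-process C# function, assuming the Microsoft.ApplicationInsights package and an Application Insights connection string configured on the function app. The ProcessOrder name and the orders queue are illustrative.

    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Threading.Tasks;
    using Microsoft.ApplicationInsights;
    using Microsoft.ApplicationInsights.Extensibility;
    using Microsoft.Azure.WebJobs;

    public class OrderProcessor
    {
        private readonly TelemetryClient _telemetry;

        public OrderProcessor(TelemetryConfiguration configuration)
        {
            _telemetry = new TelemetryClient(configuration);
        }

        [FunctionName("ProcessOrder")]
        public async Task Run([QueueTrigger("orders")] string message)
        {
            // Custom event with a property that becomes a queryable dimension in Logs.
            _telemetry.TrackEvent("OrderReceived",
                new Dictionary<string, string> { ["queue"] = "orders" });

            var watch = Stopwatch.StartNew();
            await Task.Delay(10);   // stand-in for the real processing work
            watch.Stop();

            // Trace and metric telemetry for debugging and performance analysis.
            _telemetry.TrackTrace($"Order processed in {watch.ElapsedMilliseconds} ms");
            _telemetry.TrackMetric("OrderProcessingMs", watch.ElapsedMilliseconds);
        }
    }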

Pass an ILogger parameter into the function method and write structured logs using that interface is not the best answer because ILogger is useful for structured logging and it can be routed to Application Insights but it does not by itself provide the same breadth of telemetry, custom metrics, and explicit correlation features that the TelemetryClient and full Application Insights integration provide.

Use Console.WriteLine to emit log statements from the function code is incorrect because Console.WriteLine produces unstructured output that is meant for simple local debugging and it will not give you searchable, correlated telemetry or the metrics and diagnostics features of Application Insights.

Persist runtime log files to an Azure Blob Storage container for later inspection is not suitable for comprehensive telemetry because storing raw log files is manual and inefficient for querying, correlation, and real time troubleshooting compared with an instrumented monitoring service like Application Insights.

For questions about deep telemetry prefer solutions that provide structured, correlated traces and metrics such as Application Insights and look for the use of the SDK or TelemetryClient for custom events.

A retail technology company called HarborLane runs six web apps within a single App Service plan and each app uses a production slot and a staging slot. The App Service plan is scaled out to four instances. How many virtual machines are running to support these applications?

  • ✓ B. Four virtual machines

The correct answer is Four virtual machines.

The App Service plan is scaled out to four instances so there are four underlying worker virtual machines hosting the apps. Multiple web apps and their production and staging slots share those instances so neither the six apps nor the staging slots increase the number of virtual machines beyond the number of instances.

Twenty four virtual machines is incorrect because that option multiplies apps and slots and then assumes separate VMs per slot which is not how App Service scaling works.

Twelve virtual machines is incorrect because it assumes each app slot runs on its own VM. Deployment slots do not create separate VMs and the 12 slots are hosted on the four instances.

One virtual machine is incorrect because the plan is explicitly scaled out to four instances so there are four worker VMs rather than a single VM.

When you see questions about how many VMs are running remember to count the number of scaled instances for the App Service plan and do not count the number of apps or deployment slots as additional virtual machines.

A development team at Nimbus Analytics stores user documents in Azure Blob Storage and they need an automated rule that removes blobs that have not been modified for more than 45 days. Which Azure Blob Storage capability should they use to satisfy this requirement?

  • ✓ B. Azure Blob Lifecycle Management

The correct option is Azure Blob Lifecycle Management.

Azure Blob Lifecycle Management provides a policy based engine that lets you define automatic, time based rules to transition or delete blobs. You can create a rule that deletes blobs that have not been modified for more than 45 days by using the delete action with a condition based on days since last modification and optional prefix or blob type filters.
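
A lifecycle policy sketch for this rule, assuming block blobs under an illustrative userdocs/ prefix. The policy can be applied in the portal or passed to az storage account management-policy create.

    {
      "rules": [
        {
          "name": "delete-stale-user-documents",
          "enabled": true,
          "type": "Lifecycle",
          "definition": {
            "filters": {
              "blobTypes": [ "blockBlob" ],
              "prefixMatch": [ "userdocs/" ]
            },
            "actions": {
              "baseBlob": {
                "delete": { "daysAfterModificationGreaterThan": 45 }
              }
            }
          }
        }
      ]
    }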

Azure Blob Snapshots are point in time copies used for backup or restore scenarios, and they do not provide automated lifecycle policies to remove blobs based on age.

Azure Blob Change Feed records a chronological log of changes to blobs for auditing or processing, and it does not perform automatic deletion of blobs after a set period.

Azure Blob Soft Delete protects blobs from permanent deletion by retaining deleted data for a retention period so you can recover it, and it is not a mechanism for automatically removing older blobs.

When the requirement is an automated, time based deletion look for features that mention lifecycle rules or policies and remember that Lifecycle Management is the Azure feature designed for those use cases.

A regional e commerce company called Meridian Retail runs an Azure App Service that consistently receives a traffic surge every Monday morning compared with the rest of the week. What scaling approach should you apply to handle that predictable weekly spike?

  • ✓ B. Schedule App Service autoscale rules to increase instance count on Mondays at 9 AM and scale down on Tuesdays at 9 AM

The correct answer is Schedule App Service autoscale rules to increase instance count on Mondays at 9 AM and scale down on Tuesdays at 9 AM.

Scheduled autoscaling works best when you have a predictable, recurring traffic pattern like a weekly Monday morning surge. Setting a schedule lets you increase instance count before the spike and reduce it after the peak which provides consistent capacity while controlling cost.

Azure App Service and the Autoscale feature support scheduled actions so you can specify the exact days and times and the desired instance counts or scaling behaviour. This is more reliable and operationally efficient for a known weekly pattern than relying on manual steps or ad hoc metric evaluation during the event.
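
A sketch of a recurring profile in the Azure Monitor autoscale settings format, with illustrative instance counts and time zone. A recurring profile stays in effect until another profile takes over, so a second profile scheduled for Tuesday morning would handle the scale down after the peak.

    {
      "name": "monday-morning-peak",
      "capacity": { "minimum": "4", "maximum": "8", "default": "4" },
      "rules": [],
      "recurrence": {
        "frequency": "Week",
        "schedule": {
          "timeZone": "Eastern Standard Time",
          "days": [ "Monday" ],
          "hours": [ 9 ],
          "minutes": [ 0 ]
        }
      }
    }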

Configure CPU based autoscale rules that only evaluate metrics during Mondays is not ideal because metric based rules are reactive and may not respond quickly enough at the start of a surge, and limiting metric evaluation to a single day is awkward when a scheduled rule directly expresses the expected pattern.

Keep the application on a single instance to reduce costs during quieter weekdays is incorrect because a single instance will likely be overwhelmed during the surge which can cause degraded performance or outages, and scheduled scaling allows cost control without sacrificing availability.

Manually increase the number of App Service instances every Monday morning to absorb the load is not recommended because manual scaling is error prone and labour intensive, and it cannot guarantee timely or consistent scaling in case of schedule changes or human error.

When the workload is predictable and repeats on a schedule choose scheduled autoscale. If the workload is unpredictable choose metric-based autoscale. On exam questions watch for words like “predictable” or “regularly” to guide you to scheduled scaling.

Your team is building an ASP.NET Core web application that will be hosted on Azure App Service. The application needs a centralized session store that can also cache entire HTTP responses for frequent reuse. Which Azure service should you choose?

  • ✓ D. Azure Cache for Redis

Azure Cache for Redis is the correct choice.

Azure Cache for Redis provides a centralized in memory cache that can store session state and also cache entire HTTP responses for frequent reuse. It integrates with ASP.NET Core through the IDistributedCache implementation and the session middleware so multiple App Service instances share the same session store and can serve cached responses with low latency.
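
A minimal wiring sketch for a .NET 6 web project with implicit usings, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package and a Redis connection string in configuration. The same IDistributedCache registration backs the session middleware and can also hold serialized HTTP responses.

    var builder = WebApplication.CreateBuilder(args);

    // Register Azure Cache for Redis as the shared IDistributedCache implementation.
    builder.Services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = builder.Configuration.GetConnectionString("Redis");
        options.InstanceName = "portal:";   // illustrative key prefix
    });

    // Session state goes to the distributed cache so every instance sees the same data.
    builder.Services.AddSession(options => options.IdleTimeout = TimeSpan.FromMinutes(20));

    var app = builder.Build();
    app.UseSession();
    app.Run();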

As a managed offering Azure Cache for Redis supports TTL based eviction replication and scaling that suit high throughput scenarios and short lived response caches. Those capabilities make it far better for fast shared session and response caching than general purpose storage or workflow services.

Azure Functions is a serverless compute platform and not a centralized cache or session store. You could run code that uses a cache from a function but the function service itself does not provide the required shared in memory caching capability.

Azure Storage Account offers durable blob table and queue storage and is optimized for persistence rather than low latency in memory caching. It is not ideal for session state or caching whole HTTP responses that need millisecond access times.

Azure Logic Apps is a workflow and integration service and it is not designed to act as a session store or a response cache. Logic Apps automate orchestrations between services but do not provide the in memory caching features required here.

When a question asks for a centralized session store and fast reuse of full HTTP responses look for a managed in memory cache service such as Azure Cache for Redis rather than a storage account or serverless compute.

You are building a web portal that will be hosted as an Azure App Service. Users will sign in with their Contoso Identity credentials. You plan to assign each user one of these permission tiers admin standard viewer. A user must receive the correct tier based on their Contoso Identity group membership. The suggested configuration is to register a new Contoso Identity application then add application roles to the application manifest that correspond to the required tiers and assign each Contoso Identity group to the appropriate role and finally have the portal read the roles claim from the JWT to determine access. Does this solution meet the goal?

  • ✓ B. Yes

The correct answer is Yes.

Registering a Contoso Identity application and adding application roles in the application manifest supports defining roles such as admin, standard, and viewer. You can assign each Contoso Identity group to the appropriate app role on the application’s service principal in the Enterprise Applications view. When a user signs in the roles they hold are emitted in the JWT as the roles claim and the portal can read that claim to grant the correct permission tier.

App roles are designed for this scenario because they allow you to map Azure AD groups to application roles and have those roles appear directly in the token. Make sure group assignments are applied to the service principal or enterprise application object so the role assertions are included in tokens that the portal receives and validates.
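
A sketch of the appRoles section of the application manifest for two of the tiers. The GUIDs are placeholders that must be unique, and the value field is what shows up in the roles claim of the JWT.

    "appRoles": [
      {
        "allowedMemberTypes": [ "User" ],
        "displayName": "Admin",
        "description": "Full administrative access to the portal",
        "value": "admin",
        "id": "11111111-1111-1111-1111-111111111111",
        "isEnabled": true
      },
      {
        "allowedMemberTypes": [ "User" ],
        "displayName": "Viewer",
        "description": "Read only access to the portal",
        "value": "viewer",
        "id": "22222222-2222-2222-2222-222222222222",
        "isEnabled": true
      }
    ]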

No is incorrect because the proposed configuration does meet the requirement of assigning users the correct tier based on their Contoso Identity group membership and having the portal enforce access by reading the roles claim from the JWT.

When you implement this pattern assign roles to groups on the application’s service principal in Enterprise Applications and verify that the roles claim is present in the token your app receives.

When configuring an Azure Function using the function.json definition file what values can be assigned to the direction property?

  • ✓ D. in, out and inout

in, out and inout is the correct option for the direction property in the function.json file.

The direction property accepts in for input bindings, out for output bindings and inout for bindings that can act as both input and output. The runtime uses these values to determine the data flow for each binding listed in the function.json file.
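
A function.json sketch that shows both directions, with illustrative queue and blob names. The trigger binding uses in and the blob output binding uses out.

    {
      "bindings": [
        {
          "name": "orderItem",
          "type": "queueTrigger",
          "direction": "in",
          "queueName": "incoming-orders",
          "connection": "AzureWebJobsStorage"
        },
        {
          "name": "processedBlob",
          "type": "blob",
          "direction": "out",
          "path": "processed/{rand-guid}.json",
          "connection": "AzureWebJobsStorage"
        }
      ]
    }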

blob, queue, table, file are examples of binding types or storage resources and not valid values for the direction property because the property expects flow semantics rather than resource names.

stderr, stdout, stdin refer to standard IO streams and are unrelated to Azure Functions binding directions so they are not valid direction values.

left and right are not meaningful in Azure Functions configuration and are not valid options for the direction property.

When you encounter binding questions look for terms that describe data flow and choose values like in, out or inout rather than resource names or unrelated terms.

Why does an in memory caching layer such as Memcached return data much faster than a disk based relational database like PostgreSQL?

  • ✓ B. It retains frequently used data entirely in memory for immediate retrieval

The correct answer is It retains frequently used data entirely in memory for immediate retrieval.

This option is correct because an in memory cache keeps data in RAM so reads are served without disk access and that removes the latency associated with disk operations. Memory access is orders of magnitude faster than disk access and that is the primary reason a cache can return data much faster than a disk based relational database.

In addition caches perform simple key value lookups and avoid the overhead of complex query planning, transactional durability, and frequent writes to persistent storage. Those design differences reduce CPU and I/O work which further improves latency compared with a relational database that is optimized for consistency and persistence.

It stores data on high performance NVMe solid state drives is incorrect because typical caching systems like Memcached store data in RAM rather than on SSDs. Even NVMe SSDs have higher latency than RAM and the cache advantage comes from memory resident data.

Cloud Memorystore is incorrect because it is a product name for a managed caching service and not an explanation of why an in memory cache is faster. While Cloud Memorystore can provide in memory caching, the performance reason is the data being kept in memory rather than the service name.

The cache runs on the same physical host as the application is incorrect because caches do not need to be co located to be fast and the defining factor for speed is avoiding disk I/O by serving from RAM. Co locating a cache can reduce network latency but it is not the fundamental reason an in memory cache is faster than a disk based database.

When you see performance comparison questions focus on the underlying mechanism such as in memory access versus disk I/O and not on specific product names.

A retail technology firm named RetailNex runs a web app called PortalApp2 in Azure App Service and a function app named WorkerFunc2. The web app reports telemetry to an Application Insights instance named insightsPortal2 where a synthetic web test and an alert rule are configured and each alert currently sends an email to your mailbox. You must ensure every alert also invokes WorkerFunc2. The proposed solution is to enable an Application Insights smart detection. Does this solution meet the requirement?

  • ✓ B. No

No is correct because enabling Application Insights smart detection will not ensure every alert invokes WorkerFunc2.

Application Insights smart detection automatically analyzes telemetry and raises incidents for anomalous behavior. It is a diagnostic feature that creates its own insights and potential alerts but it does not attach actions to your existing alert rules or configure a callback to an Azure Function.

To guarantee that every alert invokes WorkerFunc2 you must update the alerting pipeline to include an action that calls the function. In practice you add or update an Action Group to include the Azure Function or a webhook that targets WorkerFunc2 and then associate that Action Group with the relevant alert rules so that alerts trigger the function.

Also be aware that newer exam scenarios expect you to use Azure Monitor alerts and Action Groups for automated responses rather than relying on detection features alone.

Yes is incorrect because simply enabling smart detection does not configure per alert actions and will not cause existing alerts or synthetic web test alerts to invoke WorkerFunc2.

When a question is about triggering a resource on alert think about configuring an Action Group that calls a webhook or Azure Function. Features that only detect anomalies do not automatically perform actions.

A software team at Novasoft deployed a scheduled Azure Function using the cron expression “0 0 0 1 1 *” and they documented that the pattern follows the {second} {minute} {hour} {day} {month} {day-of-week} layout. How often will the function execute?

  • ✓ B. Runs at midnight on January first each year

The correct answer is Runs at midnight on January first each year.

The cron expression is given in the order {second} {minute} {hour} {day} {month} {day-of-week}. The values 0 0 0 1 1 * therefore mean second zero, minute zero, hour zero on day one of month one, with any day-of-week. That schedule corresponds to a single execution each year at 00:00 on January first which is midnight on January first.
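
The same schedule in a C# timer trigger, shown as a sketch with an illustrative function name. TimerInfo.ScheduleStatus reports the next planned occurrence, which is a quick way to confirm how the runtime parsed the expression.

    using System;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class YearlyJob
    {
        // {second} {minute} {hour} {day} {month} {day-of-week}
        // "0 0 0 1 1 *" fires once a year at 00:00 on January 1.
        [FunctionName("YearlyJob")]
        public static void Run([TimerTrigger("0 0 0 1 1 *")] TimerInfo timer, ILogger log)
        {
            log.LogInformation($"Fired at {DateTime.UtcNow:u}, next run {timer.ScheduleStatus?.Next}");
        }
    }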

Triggers daily at 12:01 AM is incorrect because the minute field in the expression is zero not one and the day and month fields restrict the run to January first rather than every day.

Executes every Monday at 1:00 AM is incorrect because the hour field is zero not one and the day-of-week field is a wildcard so the job is not limited to Mondays.

Fires every day at 1:01 AM is incorrect because both the hour and minute fields do not match one and the expression restricts execution to the first day of January rather than every day.

When you see a cron question map each position to its field carefully and remember that Azure Functions cron includes the seconds field at the start which shifts every position compared to some other cron formats.

A startup named LeisureStays is building a .NET 6 MVC web application that lets guests find independent vacation rentals and the team plans to use Azure Cognitive Search to query an index by several criteria and they want to support regular expression matching in user queries. What change should they make to the search request to enable regular expression queries?

  • ✓ B. Configure the “QueryType” property of the “SearchParameters” class

Configure the “QueryType” property of the “SearchParameters” class is correct.

Setting QueryType to the full Lucene mode tells Azure Cognitive Search to interpret the query text using Lucene query syntax and that allows regular expression patterns to be used in user queries.

The regular expression support comes from the Lucene query parser so you must enable the full query parsing behavior by setting QueryType appropriately rather than using the default simple parser.
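
A sketch with the classic Microsoft.Azure.Search SDK that exposes the SearchParameters class named in the question. The service, index, and key values are placeholders, and the newer Azure.Search.Documents SDK expresses the same idea with SearchOptions.QueryType = SearchQueryType.Full.

    using Microsoft.Azure.Search;
    using Microsoft.Azure.Search.Models;

    var indexClient = new SearchIndexClient(
        "leisurestays-search", "rentals", new SearchCredentials("<query-key>"));

    var parameters = new SearchParameters
    {
        QueryType = QueryType.Full   // switch from the simple parser to full Lucene syntax
    };

    // Regular expressions are written between forward slashes in Lucene query syntax.
    var results = indexClient.Documents.Search("/seas(ide|hore)/", parameters);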

Configure the “Filter” property of the “SearchParameters” class is incorrect because filters use OData style expressions for structured filtering and they do not enable Lucene or regex parsing.

Configure the “SearchMode” property of the “SearchParameters” class is incorrect because search mode only controls whether all terms or any term must match and it does not change the query language or add regex support.

Configure the “Facets” property of the “SearchParameters” class is incorrect because facets control aggregation and categorization of results and they do not affect how queries are parsed or whether regular expressions are supported.

When a question mentions advanced query capabilities like regular expressions look for options that change the query parser and not options that change filtering or result presentation and remember to set QueryType to the full Lucene mode.

Evaluate whether the proposed approach satisfies the requirements. A software team at HarborSoft is deploying four ASP.NET web applications to Azure App Service and they need to persist session state and complete HTTP responses. The storage solution must allow session state to be shared across all web applications. It must support controlled concurrent access to the same session data with many readers and a single writer. It must also save full HTTP responses for concurrent requests. The proposed approach is to enable Application Request Routing on the App Service. Does this approach meet the requirements?

  • ✓ A. No

The correct answer is No.

Enabling Application Request Routing on App Service does not provide a shared, persistent session store or a mechanism to save full HTTP responses across multiple web applications. ARR is a routing and proxy feature and it does not implement distributed session state or durable response caching that can be accessed by several different apps.

The scenario requires a storage layer that supports sharing session state across all applications and that can enforce controlled concurrent access with many readers and a single writer. It also requires a way to persist complete HTTP responses for concurrent requests. These needs point to a distributed cache or shared store and concurrency controls such as locks or atomic operations that ARR does not provide.

Yes is incorrect because simply enabling Application Request Routing does not create a cross-application session store or provide the concurrency guarantees and full-response persistence the team requires. ARR may help route traffic and enable affinity in some deployments but it is not a substitute for a distributed cache or storage solution that supports shared sessions and controlled concurrent access.

When a question asks about shared session state across multiple apps think of a distributed cache or shared data store rather than routing features. Routing can direct traffic but it does not provide durable, concurrent session storage.

Your team builds integrations for BlueRiver Tech that connect to an Entra directory and you plan to classify permissions within that directory. You must select which permission type to include in the classification. Which permission type should you choose?

  • ✓ B. Delegated permissions obtained via end user consent

The correct option is Delegated permissions obtained via end user consent.

Delegated permissions obtained via end user consent are the permissions an application has when it acts on behalf of a signed in user and they are granted through the user’s consent. These permissions are the ones you classify when you need to identify permissions that users can approve without administrator involvement.

App only permissions that must be approved by an administrator is incorrect because application permissions are granted to the app itself and they require administrator consent rather than end user consent.

App only permissions that claim to be grantable by users is incorrect because app only permissions cannot be granted by regular users and any statement that users can grant them is incorrect.

Delegated permissions that require administrator approval is incorrect because delegated permissions are normally consented to by users and only a subset of high privilege delegated permissions require administrator consent. The question specifically targets permissions obtained via end user consent.

When you see consent related choices think about who grants the permission and whether the app acts on behalf of a user or as itself. Pay attention to end user consent versus administrator consent.

Review the Northbridge Logistics case study using the document link provided and answer the following question. Open the document in a separate browser tab and keep the test tab active. https://example.com/documents/2bXyZ9AbCcdE Which API setup should the Azure Function use to retrieve delivery driver profile information?

  • ✓ C. Microsoft Graph

The correct option is Microsoft Graph.

Microsoft Graph is the unified REST API for accessing user and directory information across Microsoft services and it is the appropriate choice for retrieving delivery driver profile information from Azure AD or Microsoft 365. It exposes people and user profile endpoints and it supports application and delegated permissions that an Azure Function can use to query profile attributes securely.

Microsoft Graph also supports query features such as filtering and selecting specific attributes so the function can request only the fields needed and reduce latency and payload size.
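
A request sketch. The user object id and the selected properties are illustrative, and the bearer token would be acquired with the function's managed identity or app registration.

    GET https://graph.microsoft.com/v1.0/users/{driver-object-id}?$select=displayName,mail,mobilePhone
    Authorization: Bearer <access-token>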

Microsoft Identity Platform is the authentication and authorization system that issues tokens and manages sign in and consent. It is not the primary API used to query user profile data so it is not the correct choice for retrieving driver profiles.

Microsoft Entra ID Graph refers to APIs that have been superseded and in many cases deprecated. Current guidance directs developers to use Microsoft Graph instead so Entra ID Graph is not the recommended API for new implementations and is less likely to be correct on newer exams.

When a question asks which API to use for user or profile data choose Microsoft Graph and distinguish between authentication services and data APIs. Identify whether the option is for token issuance or for querying directory data.

A developer is building a mobile application that stores data in an Azure SQL Database named AzuraDB. The database contains a table named Clients in the sales schema and the table includes a column named contact_email. The developer executes the following Transact SQL statement ALTER TABLE [sales].[AzuraDB].[Clients] ALTER COLUMN [contact_email] ADD MASKED WITH (FUNCTION = ’email()’) in order to enable dynamic data masking. Does this statement meet the requirement?

  • ✓ B. No the statement does not implement the masking

No the statement does not implement the masking.

The ALTER statement in the question is incorrect because the object name is misordered and it does not target the table properly. The dynamic data masking clause ADD MASKED WITH (FUNCTION = 'email()') is the correct form to mask an email column, but the table should be referenced as [sales].[Clients] or fully qualified as [AzuraDB].[sales].[Clients]. With the proper table identifier the ALTER COLUMN ... ADD MASKED WITH (FUNCTION = 'email()') command will enable masking for the contact_email column.
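
The corrected statement looks like this sketch, assuming it is run in the context of the AzuraDB database.

    ALTER TABLE [sales].[Clients]
    ALTER COLUMN [contact_email] ADD MASKED WITH (FUNCTION = 'email()');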

Yes it applies dynamic data masking is wrong because the provided statement will not correctly apply the mask due to the misplaced database identifier. The masking clause itself is valid, but the object naming prevents the command from targeting the intended column.

When you review T SQL on the exam check the order of fully qualified names which is database.schema.table and make sure the ALTER TABLE target uses the correct identifier before judging whether the syntax will apply the change.

A development group at Harborview Systems needs to allow customers to upload large files into a specific blob container in Azure Blob Storage. Which Azure SDK method should they call to upload a local file into a blob?

  • ✓ D. UploadFromFileAsync

The correct option is UploadFromFileAsync.

UploadFromFileAsync is the convenience method provided by the Azure Storage client libraries that accepts a local file path and uploads that file as a blob in the target container. It handles reading the file and performing the upload and the client libraries manage chunking and retries so it is appropriate for large file uploads.
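
A sketch with the classic Microsoft.Azure.Storage.Blob client that exposes this method. The connection string, container name, and file path are placeholders, and the current Azure.Storage.Blobs SDK covers the same scenario with BlobClient.UploadAsync(path).

    using Microsoft.Azure.Storage;
    using Microsoft.Azure.Storage.Blob;

    var account = CloudStorageAccount.Parse("<storage-connection-string>");
    var container = account.CreateCloudBlobClient().GetContainerReference("customer-uploads");
    CloudBlockBlob blob = container.GetBlockBlobReference("large-report.pdf");

    // Reads the local file and uploads it in blocks, handling chunking for large files.
    await blob.UploadFromFileAsync(@"C:\uploads\large-report.pdf");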

PutBlockBlobAsync is not the correct high level method for uploading a local file because put block operations are low level and are used to upload individual blocks and then commit them, which requires manual block management.

CreateBlobAsync is not a standard SDK method for this purpose and SDKs normally create blobs by calling upload methods rather than a method with that name.

UploadBlobAsync may exist in some client APIs to upload content from a stream or binary data but it is not the direct convenience method that takes a file path, so it is not the best answer when the question specifically asks about uploading a local file.

When a question asks about uploading a local file choose the method name that includes FromFile because that usually indicates the method accepts a file path and handles the file transfer for you.

Your team uses an Azure Container Registry named scrumtuous.azurecr.io and you organized repositories into namespaces for sales marketing technology and customerservice. The website project is stored in the marketing namespace. Which docker pull command will download the website image from the registry?

  • ✓ C. docker pull scrumtuous.azurecr.io/marketing/website

The correct answer is docker pull scrumtuous.azurecr.io/marketing/website.

Using docker pull scrumtuous.azurecr.io/marketing/website follows the expected registry format where the registry login server comes first and the namespace and repository follow as the path. The marketing namespace is therefore specified as part of the repository path and the command pulls the website image from that location.

You must authenticate to the private Azure Container Registry before pulling when it is not public and you can include a tag such as latest by appending it to the repository name when you want a specific image version.
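
A sketch of the full pull flow, assuming the caller has pull rights on the registry and wants the latest tag.

    az acr login --name scrumtuous
    docker pull scrumtuous.azurecr.io/marketing/website:latest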

docker pull marketing.scrumtuous.azurecr.io/website is incorrect because it places the namespace at the start of the hostname which makes the registry host invalid. Docker expects the registry login server first followed by the repository path.

docker pull scrumtuous.azurecr.io -location marketing/website is incorrect because the docker pull command does not support a -location flag and the repository path must be provided directly after the registry host.

docker pull scrumtuous.azurecr.io -path marketing -project website is incorrect because docker does not use -path or -project options for pulls and those arguments will not select a repository. The repository path must follow the registry host in the command.

When you see questions about pulling images from a registry look for the format registry/repository and remember that docker pull takes the repository path directly after the registry host rather than using extra flags.

You are creating nested Azure Resource Manager templates to provision several Azure resources for Nova Systems and you must test the templates before they run in production. You want a method to preview and validate the exact changes a deployment will apply to the subscription. Which tool should you use to preview and validate the expected modifications?

  • ✓ F. What if preview operation

What if preview operation is correct because it is the feature that previews and validates the exact changes a deployment will apply to your subscription before any resources are modified.

The What if preview operation performs a dry run of an Azure Resource Manager template deployment and returns a resource level list of planned actions such as create update or delete without applying them. You can run the preview from the Azure portal the Azure CLI or PowerShell and use the output to confirm intent and avoid unintended deletions or updates.
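
A typical invocation from the Azure CLI looks roughly like this, where the resource group and file names are placeholders:

```bash
# Preview the changes a template deployment would make without applying them
az deployment group what-if \
  --resource-group rg-nova-prod \
  --template-file main.json \
  --parameters @main.parameters.json
```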

Parameters file is incorrect because a parameters file only supplies values for template parameters and it does not provide a change preview or an analysis of what resources will be created modified or removed.

User defined functions in the template is incorrect because user defined functions provide reusable logic and expression capabilities inside the template and they do not produce a deployment preview of the changes to the subscription.

Azure Deployment Manager is incorrect because that service is used to orchestrate and coordinate complex or multi step deployments across groups and regions and it does not serve as the built in preview tool that shows resource level diffs for a single template deployment.

Template analyzer tool is incorrect because the analyzer checks templates for best practices style and potential issues but it performs static analysis and does not simulate or list the exact resource changes that a deployment will apply.

Built in template functions is incorrect because those functions are evaluated at deployment time to compute values and perform logic and they do not provide an overall preview of the resulting subscription modifications.

When the question asks how to see exactly what will change before deploying think of a dry run or preview and choose the What if functionality because it shows planned actions without applying them.

A development team at Meridian Apps is instrumenting an Azure web service with Azure Application Insights to gather telemetry and they need to pinpoint which requests and external calls are introducing latency so they can resolve performance bottlenecks, which Application Insights capability helps surface slow requests and the dependent components that cause them?

  • ✓ B. Application Map

The correct option is Application Map.

Application Map visualizes your application topology and the dependencies between components so you can quickly see which requests are slow and which external calls are contributing to latency. The map highlights nodes and edges with high response times and failure rates so you can prioritize where to drill into traces and request details to find the root cause.

Live Metrics Stream provides real time charts of metrics such as request rate and failure rate and it is useful for immediate diagnostics. It does not produce the dependency graph that surfaces which downstream components are causing latency like the Application Map does.

Performance Counters collect operating system and process level metrics such as CPU memory and disk usage. Those counters can indicate resource pressure but they do not map requests to dependent services to show which calls introduce latency.

Analytics Query lets you run powerful Kusto queries to analyze telemetry and you can use it to find slow requests manually. It is not the built in visual dependency map and it requires crafting queries to correlate requests and dependencies unlike the Application Map.

Open the Application Map first to locate hotspots visually and then use Analytics Query or end to end traces to investigate individual slow requests.

What advantage does hosting applications in an App Service Environment provide for an organization running sensitive workloads?

  • ✓ D. It offers a dedicated and isolated hosting environment within a virtual network for stronger security and compliance

It offers a dedicated and isolated hosting environment within a virtual network for stronger security and compliance is the correct option.

An App Service Environment gives an organization a fully dedicated instance of the App Service platform that is deployed into a virtual network so workloads run in isolation from other tenants. This isolated deployment lets you apply standard network controls and compliance measures and it supports internal load balancer deployments so applications can remain private to your network.

It provides direct integration with Azure Front Door is incorrect because Front Door is an external global edge service and integration is not an inherent, exclusive property of an App Service Environment. You can place Front Door in front of many App Service configurations but that is a separate architectural choice.

It removes all network setup requirements when publishing apps is incorrect because an App Service Environment increases network configuration options rather than removing them. You must plan virtual network layout, subnets, security rules, and optionally private endpoints or ILB configurations to use an ASE securely.

It guarantees infinite autoscaling with zero performance impact is incorrect because autoscaling is subject to resource limits and capacity constraints. An ASE can scale and offer strong performance characteristics but it does not provide unlimited scaling or eliminate performance trade offs.

Look for answers that focus on isolation and network control when a question mentions sensitive workloads. Emphasize the ability to deploy into a virtual network and to apply network security controls when you choose the most secure App Service option.

Review the Bayside Retail case study at https://example.com/casestudy and keep this testing tab open while you consult the document. The retailer is experiencing lost and overwritten store location records and you must enable the appropriate Azure Blob storage capabilities so administrators can restore blob data to a previous point in time. Which three Azure Blob storage features should you enable to allow point in time restore and recovery? (Choose 3)

  • ✓ A. Change feed

  • ✓ C. Soft delete

  • ✓ E. Versioning

The correct options are Change feed, Soft delete, and Versioning.

Soft delete protects blobs and blob versions from accidental deletion by retaining deleted data for a configurable retention period so administrators can restore deleted items within that window.

Versioning keeps prior versions of a blob when it is overwritten so you can revert to an earlier state and recover data that was changed or replaced.

Change feed provides an ordered, durable log of all blob create, update, and delete events so you can identify which objects changed and when and then use versions and deleted items to reconstruct storage to a past point in time.

Immutability policies are designed for compliance and retention by preventing modification or deletion of data but they do not provide the operational point in time restore workflow needed to recover overwritten blobs.

Object replication copies blobs between storage accounts for disaster recovery or data distribution but it does not maintain historical versions or a change log that lets you restore all objects to a prior point in time.

Snapshots are per-blob read only copies and can be used for manual recovery in some scenarios but they are not part of the automated point in time restore capability and are largely superseded by versioning for this purpose.

When you see a question about restoring to a past point in time think about features that record changes and features that retain older data. Enable versioning and change feed and set retention with soft delete to cover both overwritten and deleted blobs.
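
As an illustration, all of the related protections can be switched on with the Azure CLI roughly like this. The account name, resource group, and retention windows are placeholders, and the restore window must be shorter than the soft delete retention.

```bash
# Enable versioning, change feed, soft delete, and point-in-time restore
az storage account blob-service-properties update \
  --account-name baysidestore \
  --resource-group rg-bayside \
  --enable-versioning true \
  --enable-change-feed true \
  --enable-delete-retention true \
  --delete-retention-days 14 \
  --enable-restore-policy true \
  --restore-days 7
```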

A development team at Meridian Bank is building a service that stores cryptographic keys in Azure Key Vault. The team must require a specific cryptographic algorithm and a set key length for any keys placed in the vault. Which Azure capability should they use?

  • ✓ C. Azure Policy

The correct answer is Azure Policy.

Azure Policy lets you create rules that evaluate resource properties and enforce allowed configurations across a scope such as a subscription or resource group. You can write or assign a policy that checks Key Vault key properties and denies or modifies requests that do not match the required algorithm or key length, so it is the right tool to enforce cryptographic algorithms and key sizes for keys placed in a vault.

Secret versioning is not correct because that feature manages different versions of stored secrets and does not control cryptographic algorithm or key length for keys.

Access policies are used to grant or restrict who can perform operations on Key Vault secrets keys and certificates and they do not enforce the cryptographic algorithm or key length constraints when keys are created or updated.

Azure Blueprints is not the best choice because blueprints orchestrate the deployment of resources and can include policy assignments but they are a packaging and deployment mechanism rather than the primary enforcement tool for ongoing configuration rules.

When a question asks about enforcing specific resource settings across subscriptions or resource groups think Azure Policy because it evaluates and enforces resource properties while access controls only grant permissions.

A small firm called North Harbor Apps is building a web portal that uses the Microsoft identity platform for user sign in. The engineering team needs to retrieve a single immutable claim that will uniquely identify each user within the directory and remain stable across applications. Which claim type should they read from the token?

  • ✓ D. oid claim

The correct answer is oid claim.

The oid claim contains the Azure Active Directory object identifier for the user. It is an immutable directory level identifier that uniquely identifies the user within the tenant and remains stable across different applications. This permanence makes it the proper choice when you need a single unchanging claim to map users in your portal and across other apps.
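
A small sketch of reading that claim from the signed in user in ASP.NET Core. Depending on claim mapping it can surface as the short name or the full schema URI, so both are checked.

```csharp
using System.Security.Claims;

public static class UserIdentity
{
    // Returns the immutable directory object id for the signed in user,
    // or null if the claim is not present in the token.
    public static string GetObjectId(ClaimsPrincipal user) =>
        user.FindFirst("oid")?.Value
        ?? user.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;
}
```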

The nonce claim is not correct because it is a value used to mitigate token replay and to correlate authentication requests with responses. It is a transient value generated per authentication and is not a persistent user identifier.

The idp claim is not correct because it indicates the identity provider that authenticated the user. It tells you where the authentication came from and not which user to uniquely identify across applications.

The aud claim is not correct because it specifies the intended audience or client id for the token. It is used to ensure the token was issued for a particular application and does not identify the user.

When you need a stable, cross application user identifier choose the oid claim. Keep in mind that claims like aud and nonce serve different purposes and are not suitable as persistent user ids.

You work as a developer at Summit Analytics and your application consumes messages from an external system using an Azure Service Bus queue with several workers processing those messages. Recently a rare issue has caused some messages to be processed twice because a worker completed the work but crashed before it could remove the message from the queue. Management prefers a behavior that avoids duplicate processing even if that means some messages may be missed. How should you change the system so messages are never processed more than once even if some are occasionally lost?

  • ✓ C. Change the Service Bus queue to use “at most once” delivery semantics

Change the Service Bus queue to use “at most once” delivery semantics is correct.

Choosing at most once delivery ensures the broker removes the message when it is delivered so a worker crash after removal cannot cause the same message to be processed again. This directly matches the management preference to avoid duplicate processing even if that means some messages are occasionally lost.

In Azure Service Bus this behavior is achieved by using the receive and delete mode instead of peek and lock. Receive and delete removes the message as soon as it is handed to a consumer so duplicate processing is avoided. The trade off is that if the consumer fails after deletion the message is lost and cannot be retried.
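
A minimal sketch with the Azure.Messaging.ServiceBus library, assuming a queue named orders and a connection string supplied by the caller:

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class OrderWorker
{
    public static async Task ProcessNextAsync(string connectionString)
    {
        await using var client = new ServiceBusClient(connectionString);

        // ReceiveAndDelete removes the message from the queue as soon as it is
        // delivered, so a crash after this point cannot cause a redelivery.
        ServiceBusReceiver receiver = client.CreateReceiver(
            "orders",
            new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.ReceiveAndDelete });

        ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
        if (message != null)
        {
            // Do the work here; if it fails, the message is already gone (at most once).
            System.Console.WriteLine(message.Body.ToString());
        }
    }
}
```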

Move the messaging workflow to Cloud Pub/Sub with exactly once delivery is incorrect because migrating platforms is a large change and exactly once delivery is not a simple drop in guarantee for every configuration. Relying on a platform migration to solve a local delivery semantics choice is unnecessary and may not eliminate all duplication scenarios without careful configuration.

Record each message identifier in a database table called processed_messages after finishing processing and have consumers check this table before handling a message is incorrect because the described flow still allows a race where two consumers check the table, both process the message, and only then one writes the record. To make this approach safe you must perform an atomic check and claim before processing or use transactional deduplication, and that extra complexity is not part of the proposed option.

Add robust error handling that sends an SMS alert to developers when message processing fails is incorrect because alerts do not change delivery semantics. Notifying developers helps incident response but it does not prevent a message from being delivered more than once or stop duplicates when a worker crashes after completing work.

When a question values avoiding duplicates over guaranteed delivery pick at most once semantics and be prepared to describe the trade off that messages can be lost.

A virtual machine named VMProd is launched into the virtual network CorpNet and placed in the subnet AppSubnet. Which private IP address will the virtual machine receive?

  • ✓ B. The next free private IP automatically assigned from the AppSubnet range

The next free private IP automatically assigned from the AppSubnet range is correct because when a virtual machine is deployed into a subnet and no static private address is configured the platform assigns the next available private IP from that subnet range to the VM’s network interface.

By default Azure assigns a private IP from the subnet to the VM. The assignment is made from the subnet address space and the platform reserves a small set of addresses for infrastructure, so the actual first usable address can be influenced by those reservations.

A public IP from Azure public address pool is incorrect because a public IP is not automatically assigned from the public pool unless you explicitly attach a Public IP resource to the VM or its network interface.

192.168.5.2 is always the first address given on a new virtual network is incorrect because there is no fixed first address that applies to all virtual networks. The assigned private IP depends on the subnet CIDR and on which addresses are already reserved or in use.

An unpredictable address that cannot be determined in advance is incorrect because the address is chosen from the known subnet range and you can predict or control it by checking the subnet allocation state or by assigning a static private IP when you create the network interface.

Remember that Azure VMs receive a private IP from the subnet by default. You can make the IP predictable by assigning a static private IP or by reserving the address in the network interface configuration.

A development team at Northwind Commerce deployed an Azure Cosmos DB for NoSQL account named appdata1 using the default consistency level and they intend to set consistency on a per request basis while requiring consistent prefix behavior for both reads and writes to appdata1. Which consistency level should be configured for read operations?

  • ✓ B. Consistent prefix

Consistent prefix is the correct option for the read operations in this scenario because it guarantees that reads will never see out of order writes and thus provides the consistent prefix behavior required for both reads and writes.

The Consistent prefix level ensures that if a sequence of updates was applied in order A then B then C a reader will never observe B before A or C before B. This matches the requirement to guarantee prefix ordering across replicas while allowing some replication lag, and you can request this level on a per request basis using the Cosmos DB per request consistency override.
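
A sketch of a point read that relaxes the account default on a per request basis with the .NET SDK. The container, id, and partition key values are placeholders.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ProfileReader
{
    public static async Task<dynamic> ReadAsync(Container container, string id, string partitionKey)
    {
        // Override the account default for this read only. Consistent Prefix
        // guarantees the reader never observes writes out of order.
        var options = new ItemRequestOptions
        {
            ConsistencyLevel = ConsistencyLevel.ConsistentPrefix
        };

        ItemResponse<dynamic> response =
            await container.ReadItemAsync<dynamic>(id, new PartitionKey(partitionKey), options);
        return response.Resource;
    }
}
```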

Session is incorrect because session consistency provides read your writes and monotonic reads within a single client session. That makes it stronger in some client-scoped scenarios but it does not represent the explicit global prefix guarantee the question asks for across reads and writes to the account.

Strong is incorrect because strong consistency provides linearizability and a single global order which is stronger than the requested prefix guarantee. Strong also has higher latency and availability trade offs and is not necessary when the requirement is specifically to ensure consistent prefix behavior.

Remember that Cosmos DB consistency levels control the behavior of reads and you can override the account default on a per request basis by setting the request consistency header in your client calls.

Review the Birchwood Orchards web application case study at https://example.com/case-study-birchwood and decide how to resolve repeated HTTP 503 errors that indicate the site is running out of CPU and memory resources. Open the linked case study in a new tab and keep this test tab open. Which solution should you implement?

  • ✓ C. Scale the App Service plan up to a higher SKU such as Premium V3

The correct option is Scale the App Service plan up to a higher SKU such as Premium V3.

Scale the App Service plan up to a higher SKU such as Premium V3 is the right choice because the repeated HTTP 503 errors point to the application exhausting CPU and memory on the existing App Service plan. Moving to a higher SKU provides more vCPUs and more memory per instance and often faster underlying hardware, which directly addresses resource exhaustion on each instance without changing the application architecture.
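
For example, the plan can be moved to a Premium V3 SKU with a command along these lines, where the plan and resource group names are placeholders:

```bash
# Scale the App Service plan up so each instance gets more CPU and memory
az appservice plan update \
  --name birchwood-plan \
  --resource-group rg-birchwood \
  --sku P1V3
```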

Create an Azure Content Delivery Network endpoint is incorrect because a CDN mainly caches static content and reduces bandwidth and latency for clients. A CDN will not increase CPU or memory available to the web application processes on the origin App Service and so it will not resolve 503s caused by instance resource exhaustion.

Enable the App Service Local Cache feature is incorrect because Local Cache creates a read only copy of site content on the instance local storage to reduce file I/O against the shared content store and improve restart behavior. It does not add CPU cores or increase the memory available to the App Service process in a way that resolves sustained CPU or memory exhaustion across the application workload.

Add an App Service staging deployment slot is incorrect because deployment slots provide deployment and testing benefits and can help with warm up before swapping. A staging slot runs in the same App Service plan and shares the same instance resource limits unless you move it to a different plan. Creating a slot therefore does not increase the CPU or memory available to the production application and will not fix 503s caused by resource limits.

Before answering check the App Service metrics and Application Insights to confirm CPU and memory are saturated. Choose scaling up when single instance resources are the bottleneck and choose scaling out when you need more instances.

A development group at Solstice Systems wants to keep infrastructure definitions in source control and deploy environments using repeatable declarative files. Which Azure feature enables defining infrastructure as code?

  • ✓ D. Azure Resource Manager template files

The correct option is Azure Resource Manager template files.

Azure Resource Manager template files are Azure native, declarative infrastructure as code files that let you describe resources, properties, dependencies, and parameters so deployments are predictable and repeatable. You store these templates in source control and deploy them with the Azure portal, the Azure CLI, PowerShell, or CI and CD pipelines to provision consistent environments across subscriptions and resource groups. The templates are idempotent and support parameters and outputs so the same template can be used to create identical infrastructure repeatedly.
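
A minimal template sketch that could live in source control and be deployed repeatedly. The parameter name, resource, location expression, and API version are illustrative.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```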

Terraform is a powerful infrastructure as code tool but it is not an Azure feature. It is a third party product that manages Azure resources through a provider, so it is not the Azure native solution the question asks for.

GitHub Actions provides automation for building and deploying code and it can run IaC deployments, but it is a CI and CD orchestration service rather than a declarative infrastructure definition format. It does not itself define Azure resources in the way ARM templates do.

Azure Automation Accounts offers runbooks and process automation and it includes configuration management features, but it is focused on operational automation rather than serving as the declarative infrastructure as code format used to define and deploy Azure resources repeatedly.

Look for Azure native IaC names when the question asks for an Azure feature such as Azure Resource Manager template files or Bicep. Remember that tools like Terraform are IaC but they are third party and not an Azure feature.

A regional retailer named Meridian Books has an Azure Cosmos DB for NoSQL account configured with session consistency and it accepts writes in a single Azure region while the data is replicated for reads to four regions. An application that will access the Cosmos DB container through an SDK requires that container items must never be removed automatically. What change should you make to the Azure Cosmos DB for NoSQL account to meet this requirement?

  • ✓ B. Set the Time to Live property on items to -1

The correct option is Set the Time to Live property on items to -1.

Setting the Time to Live property on items to -1 disables automatic expiry for those items so they are never removed by the Cosmos DB TTL process. Azure Cosmos DB supports a TTL value at the item level, and a value of -1 explicitly marks an item to never expire, while other TTL values can cause automatic deletion after the configured lifetime.
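
A sketch of an item model that opts out of expiry at the item level. The class and property names are illustrative, and the SDK serializes the ttl value by its JSON property name.

```csharp
using Newtonsoft.Json;

public class StoreLocation
{
    [JsonProperty("id")]
    public string Id { get; set; }

    // -1 tells Cosmos DB this item never expires, even when the container
    // has a default time to live configured.
    [JsonProperty("ttl")]
    public int TimeToLive { get; set; } = -1;
}
```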

Apply a resource lock to the container is incorrect because resource locks apply to management operations and they prevent accidental deletion or modification of ARM resources. A lock does not change data plane behaviors and it will not stop the Cosmos DB TTL mechanism from expiring items.

Set the Time to Live property on items to 0 is incorrect because a TTL value of 0 causes items to expire immediately. That setting would result in items being removed rather than preserved.

Apply a resource lock to the database account is incorrect because locking the account affects Azure resource management actions only and does not alter how Cosmos DB automatically deletes items based on TTL.

When an item must never be auto deleted set its TTL to -1 at the item level and remember that resource locks do not change TTL behavior.

Review the Millerton Retail case study hosted at https://example.com/doc/2bTzQx and answer the prompt provided in the linked document. Keep the exam tab open and view the case details in a separate tab. You need to build the store locator Azure Function for the retailer. Which binding direction should you choose for the Cosmos DB binding?

  • ✓ B. Output

The correct option is Output.

You choose an Output Cosmos DB binding when the Azure Function needs to persist or update documents in the database from within the function. An output binding lets the function write objects directly to a collection by returning data or by using an output parameter. This approach avoids explicit SDK calls for simple insert or update scenarios.

For the Millerton Retail store locator the function must write store location records or update lookup results for downstream processing, so the function should emit documents to Cosmos DB and therefore use an Output binding.
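
A sketch of an HTTP triggered function with a Cosmos DB output binding, assuming the in-process model and the 3.x extension attribute names. The database, collection, and connection setting names are placeholders.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class SaveStoreLocation
{
    [FunctionName("SaveStoreLocation")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [CosmosDB(
            databaseName: "retail",
            collectionName: "storeLocations",
            ConnectionStringSetting = "CosmosConnection")] out dynamic document)
    {
        // Whatever is assigned to the output parameter is written to the container.
        document = new { id = System.Guid.NewGuid().ToString(), name = "Store 42" };
        return new OkResult();
    }
}
```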

Input is used when the function only needs to read or query documents from Cosmos DB. It would be appropriate if the function only performed lookups and did not need to persist any results, but it is not correct for scenarios where the function must write data.

Determine whether the function must write or read data before choosing a binding. Pick Output for write scenarios and Input for read only scenarios.

A mobile team is building an app that connects to an Azure SQL Database named ‘Aquila’. The database contains a table called ‘Clients’ and that table has a column named ‘contact_email’. The team wants to apply dynamic data masking to hide the contact_email values and they propose executing the PowerShell command Set-AzSqlDatabaseDataMaskingPolicy with the DatabaseName parameter set to ‘Aquila’. Will this approach achieve the masking requirement?

  • ✓ A. No

No is correct. Running Set-AzSqlDatabaseDataMaskingPolicy with the DatabaseName set to ‘Aquila’ will not by itself create a mask on the Clients.contact_email column.

The Set-AzSqlDatabaseDataMaskingPolicy cmdlet configures the database level dynamic data masking policy and its overall state, but it does not add per column masking rules. To hide values in the contact_email column you must create a column level masking rule that specifies the schema, table, column, and masking function. You can add that rule with the Azure PowerShell command that manages masking rules or by using T SQL to define a mask on the specific column.
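
A sketch of that rule-level command with the Az.Sql module. The resource group and server names are placeholders and the schema is assumed to be dbo since the question does not name one.

```powershell
# Create a column-level dynamic data masking rule for the contact_email column
New-AzSqlDatabaseDataMaskingRule `
    -ResourceGroupName "rg-mobile" `
    -ServerName "sql-mobile-prod" `
    -DatabaseName "Aquila" `
    -SchemaName "dbo" `
    -TableName "Clients" `
    -ColumnName "contact_email" `
    -MaskingFunction "Email"
```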

Yes is incorrect because enabling or setting the database policy alone does not create the per column rule needed to mask the contact_email values. Without a column level rule the email data will remain unmasked.

When an exam question mentions a specific cmdlet check whether it manages the overall policy or whether it creates individual rules. Azure PowerShell typically uses separate commands for policy configuration and for column level data masking rules.

You are building an Azure solution to ingest point of sale device telemetry from 2,500 retail locations worldwide and each device generates about 2.5 megabytes of data every 24 hours. Each location has between one and six devices that transmit data and the data must be stored in Azure Blob storage. Device records must be correlated by a device identifier and more locations will be added in the future. The proposed solution provisions an Azure Notification Hub and registers all devices with the hub. Does this solution meet the requirement?

  • ✓ B. No the proposed design does not satisfy the requirement

The correct choice is No the proposed design does not satisfy the requirement.

The proposed design relies on Azure Notification Hubs, which is a service for sending push notifications to mobile and other client platforms. Notification Hubs does not provide a device telemetry ingestion pipeline, per device identities, or built in routing to Azure Blob Storage, and those capabilities are required to collect device telemetry and correlate records by device identifier. For telemetry ingestion and device correlation you would use services such as Azure IoT Hub or Azure Event Hubs which provide device identities, secure device authentication, scalable device-to-cloud ingestion, and message routing to Blob storage or other sinks.

Notification Hubs is optimized for many to many push messaging and not for device to cloud telemetry. It does not offer the per-device authentication, device twin or direct message routing features that make it suitable for correlating and reliably storing telemetry from thousands of devices.

The answer Yes the proposed design satisfies the requirement is incorrect because Notification Hubs cannot fulfill the ingestion, per-device identity, and storage routing requirements described in the scenario. Using Notification Hubs would not meet the need to reliably collect, correlate, and persist device telemetry to Azure Blob Storage as the question requires.

When a question mentions device telemetry and per-device correlation think about services that provide device identity and routing. Consider Azure IoT Hub for direct device telemetry ingestion and message routing to storage, or Event Hubs when you need high throughput ingestion without per-device identity.

A regional lender uses Azure Durable Functions to automate an auto loan approval workflow at Harbor Lending. The process requires several ordered steps and one step performs a credit verification that can take up to four days to finish. Which Azure Durable Functions function type should coordinate the entire loan workflow?

  • ✓ D. orchestrator

The correct option is orchestrator.

An orchestrator function defines and coordinates the overall workflow and enforces the order of steps. An orchestrator calls other functions to perform work and it can wait for long running operations by using durable timers or external events so it can handle a credit verification that takes up to four days without tying up a compute thread.

The orchestration is deterministic so the long running or blocking work itself should run in worker functions, while the orchestrator manages retries, sequencing, and durable waiting across the entire loan approval process.
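
A sketch of an orchestrator that sequences the steps and durably waits for the verification result, assuming Durable Functions 2.x in-process. The activity and event names are illustrative.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class LoanApprovalOrchestration
{
    [FunctionName("LoanApprovalOrchestration")]
    public static async Task<bool> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var application = context.GetInput<string>();

        // Ordered steps; activity functions do the actual work.
        await context.CallActivityAsync("ValidateApplication", application);

        // Start the credit check, then durably wait for the result, which may
        // arrive up to four days later, without holding a compute thread.
        await context.CallActivityAsync("RequestCreditVerification", application);
        bool creditApproved = await context.WaitForExternalEvent<bool>("CreditVerificationCompleted");

        if (creditApproved)
        {
            await context.CallActivityAsync("IssueApproval", application);
        }

        return creditApproved;
    }
}
```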

client is incorrect because the client is merely the starter or caller that triggers an orchestration and it does not coordinate the ordered steps of a workflow.

activity is incorrect because activity functions perform the discrete work units such as calling a credit bureau, and they should not be used to control or wait for the overall workflow for days.

entity is incorrect because entity functions model small stateful objects and are not intended to orchestrate a multi step, ordered workflow across long running operations.

When you see a question about ordering steps and long waits think orchestration and not the worker that performs tasks. Remember that orchestrator manages flow while activity does the actual work.

A team is building a telemetry backend for courier drivers at ParcelWave that records driver first name driver last name total packages parcel id and current GPS coordinates in Azure Cosmos DB. You must select an Azure Cosmos DB partition key that provides even distribution and supports efficient retrieval of single delivery records. Which field should be used as the Azure Cosmos DB partition key?

  • ✓ C. parcelId

The correct answer is parcelId.

Using parcelId as the Azure Cosmos DB partition key gives you very high cardinality and helps distribute items evenly across physical partitions. This supports scalable writes and avoids hot partitions, and it also enables efficient single item lookups when you perform point reads by specifying the partition key and the item id.

The parcelId value is typically unique per delivery and is stable over the lifetime of the item, so it avoids costly partition key migrations or reshuffles and maps naturally to retrieving a single delivery record.

totalPackages is incorrect because it has low cardinality and many items will share the same value which leads to uneven distribution and hotspots. It also does not let you directly identify a single delivery record.

driverFirstName is incorrect because first names are not unique and have low cardinality which causes poor partition distribution and inefficient queries for single parcels. Names can also be nonunique and subject to change which makes them a poor partition key.

driverLastName is incorrect for the same reasons as first names. Last names are shared by many drivers and they do not provide the uniqueness or distribution needed for efficient point reads and balanced partitions.

Pick a partition key with high cardinality and stability that you will use for point reads. Think about how you will retrieve single items and prefer a unique, immutable identifier.

Read the Northgate Retail case study at https://example.com/northgate-case and keep this test tab open. You need to route events for incoming retail store location files so that processing is triggered. Which source configuration should be used?

  • ✓ D. Azure Blob Storage

The correct option is Azure Blob Storage.

Azure Blob Storage is the right source configuration because the scenario describes routing events for incoming retail store location files that arrive as files in object storage. Blob Storage natively emits creation events when a new blob is uploaded and those events can trigger downstream processing without polling.

With Azure Blob Storage you can subscribe to blob created events or connect the storage account to eventing and serverless handlers so processing is started automatically when new files arrive.

Azure Event Hub is a high throughput telemetry ingestion service for streaming data and not the direct source for file arrival events in object storage. It is not the appropriate source configuration for incoming store files.

Azure Service Bus is a message broker for reliable messaging between applications and not a file storage source. You would still need a storage event or a producer to push notifications so it does not fit the described source configuration.

Azure Event Grid is an eventing and routing service that can deliver events from sources like Blob Storage but it is not the source itself. The source configuration should be the storage account so Event Grid can receive and route those blob events.

When a question asks for a source choose the service that actually holds or emits the raw events and not the service that only routes or brokers those events. Pay attention to whether the files land in storage or are pushed to a messaging system.

You manage a web application through Azure Front Door and you expect incoming files to be served with Brotli compression. You observe that incoming XML files that are 25 megabytes in size are not being compressed. You need to determine the root cause and decide whether the edge nodes must be purged of all cached content?

  • ✓ B. No

No is correct because Azure Front Door will not Brotli compress responses that exceed the service compression size limits so a 25 megabyte XML response was simply not eligible for compression and you do not need to purge edge caches to force compression.

Azure Front Door applies response compression such as Brotli only when the client sends the appropriate Accept-Encoding header and when the response content type and size fall within the supported ranges. Large payloads are commonly excluded from on-the-fly edge compression to protect resource usage and throughput so a 25 megabyte file will typically be served uncompressed even though the feature is enabled.

Because the XML was not compressed at the edge in the first place purging cached content will not change that behavior. Compression is determined at response time based on headers and size so you can either enable compression at the origin for large files or check the documented size limits and content type rules if you need them to be compressed by the CDN edge.

Yes is incorrect because removing cached objects from the edge will not cause Azure Front Door to compress responses that are too large. Purging only clears stored content and does not alter the compression eligibility checks that prevented Brotli from being applied to the 25 megabyte XML.

When a large response is not compressed check the Accept-Encoding request header and the origin response headers and then verify the product documentation for the compressor size limits. Purging the cache is only useful for replacing cached content and not for changing compression rules.

A developer is building a service that uses Contoso Cache for Redis and must choose the appropriate Redis structure for a requirement. Which Redis data type would you select to implement a Publish/Subscribe messaging pattern?

  • ✓ C. channel

channel is the correct option for implementing a Publish/Subscribe messaging pattern in Redis.

Redis implements Pub/Sub by sending messages to named channel destinations so that publishers can publish and multiple subscribers can receive those messages in real time without persistence. Subscribers register interest in a channel name and they will receive each message published to that channel while they are connected.
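
The raw Redis commands make the pattern clear. The channel name and message are illustrative.

```text
# Connection 1: register interest in a channel
SUBSCRIBE orders

# Connection 2: broadcast a message; every connected subscriber receives it
PUBLISH orders "order 4821 shipped"
```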

stream is not correct because Redis Streams are a log style, persistent message structure that supports consumer groups and durable processing rather than the transient broadcast semantics of Pub/Sub. Streams are intended for reliable consumption and replay and do not provide the same simple publish to many subscribers behavior as a Pub/Sub channel.

list is not correct because lists are ordered collections used for push and pop operations and they can implement basic queues but they do not offer the broadcast delivery model of Pub/Sub. Lists require consumers to pop messages and do not natively broadcast a single published message to many subscribers the way a channel does.

Remember that Redis Pub/Sub uses channels for transient broadcasts and messages are not persisted. If you need durability or replayability then consider using Streams instead.

A team at Northshire deployed a container group to an Azure Container Instance and set the DNS label “appinstance”. What is the publicly reachable fully qualified domain name for that container group?

  • ✓ C. appinstance.westus2.azurecontainer.example.com

The correct answer is appinstance.westus2.azurecontainer.example.com.

When you set a DNS label for an Azure Container Instance that label becomes the leftmost subdomain and Azure includes the region as the next subdomain to form the fully qualified domain name. In this case the DNS label appinstance combined with the westus2 region yields the host name appinstance.westus2.azurecontainer.example.com.

Cloud Run is incorrect because Cloud Run is a Google Cloud product for running containers and it does not define the Azure Container Instance naming scheme.

azurecontainer.example.com/appinstance is incorrect because that shows the DNS label as a path component rather than a subdomain and Azure Container Instances expose the label as part of the host name not as a URL path.

appinstance.azurecontainer.example.com is incorrect because it omits the region segment that Azure includes in the FQDN for container instances.

When identifying the ACI FQDN remember the pattern uses the DNS label as the leftmost subdomain and the region as the following subdomain in the host name.

Which kinds of item operations does the Azure Cosmos DB change feed capture?

  • ✓ C. Inserts and updates

The correct answer is Inserts and updates.

Azure Cosmos DB change feed provides an ordered stream of item changes that reflects newly created documents and documents that have been modified. It exposes inserts and updates so downstream processors can react to new items and changes in near real time. The standard change feed does not emit delete operations.

The option Inserts updates and deletes is incorrect because the standard change feed does not include deletes. There is a separate Full Fidelity change feed mode that can capture deletes but that is an opt in capability and not the default behavior.

The option Updates only is incorrect because it omits newly inserted documents which the change feed also surfaces.

The option Inserts only is incorrect because the change feed also includes updates to existing items.

On the exam keep in mind that the change feed’s default behavior surfaces inserts and updates only. If you need delete events look for the Full Fidelity change feed which must be enabled explicitly.

A software team at Contoso Dev is building a REST API hosted in Azure App Service that is called by a companion web application. The API must read and update user profile attributes stored in the company’s Azure Active Directory tenant. Which two technologies should the team configure so the API can perform these updates? (Choose 2)

  • ✓ B. Microsoft Graph API

  • ✓ D. Microsoft Authentication Library MSAL

The correct answers are Microsoft Authentication Library MSAL and Microsoft Graph API.

Microsoft Authentication Library MSAL is used by the API to obtain and validate Azure AD tokens. The API uses MSAL to accept caller tokens and to perform the on-behalf-of flow so it can acquire access tokens to call Microsoft Graph on behalf of the signed in user.

Microsoft Graph API is the REST interface that exposes Azure AD user objects and profile attributes. The API must call Microsoft Graph API with an access token that has the appropriate permissions to read and update user profile attributes.
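
A sketch of the on-behalf-of flow with MSAL followed by a Graph call, assuming a confidential client registration. The tenant, client id, secret, and incoming token are placeholders supplied by the caller.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

public static class ProfileService
{
    public static async Task<string> GetProfileAsync(
        string tenantId, string clientId, string clientSecret, string incomingAccessToken)
    {
        IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
            .Create(clientId)
            .WithClientSecret(clientSecret)
            .WithTenantId(tenantId)
            .Build();

        // Exchange the caller's token for a Graph token on behalf of the user.
        AuthenticationResult result = await app
            .AcquireTokenOnBehalfOf(
                new[] { "https://graph.microsoft.com/User.ReadWrite.All" },
                new UserAssertion(incomingAccessToken))
            .ExecuteAsync();

        // Call Microsoft Graph with the acquired access token.
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", result.AccessToken);
        return await http.GetStringAsync("https://graph.microsoft.com/v1.0/me");
    }
}
```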

Azure Key Vault SDK is for storing and retrieving secrets keys and certificates and it does not provide APIs to manage Azure AD user profiles. You might use Key Vault to protect credentials but it does not perform directory updates.

Azure API Management is for publishing securing and managing APIs and it does not itself change Azure AD user attributes. API Management can front the API for routing and security but it is not the mechanism that performs updates in the directory.

When you see a question about updating Azure AD objects think about two parts. Use MSAL or another identity library to obtain the right token and use Microsoft Graph to make the actual directory changes.

A scheduled timer function at NovaWorks uses the cron expression “0 12,24,36 0 * * *” for its trigger schedule. When will the function execute?

  • ✓ C. At 00:12, 00:24, and 00:36 each day

The correct answer is At 00:12, 00:24, and 00:36 each day.

The cron expression is evaluated left to right and the fields are seconds then minutes then hours then day of month then month then day of week. The first field is 0 so the job runs at the zero second of the minute. The minutes field is 12,24,36 so it runs at those specific minutes. The hours field is 0 so it runs only during the midnight hour. Putting those parts together yields executions at 00:12, 00:24, and 00:36 each day.

Every 12 seconds continuously throughout the day is incorrect because the seconds field is 0 rather than an interval like */12, so the schedule does not fire every 12 seconds.

At 12 minutes 24 minutes and 36 minutes past every hour of the day is incorrect because the hours field is 0 not a wildcard, so the schedule is limited to the midnight hour and does not run past every hour.

Every 12 minutes during every hour of the day is incorrect because the minutes are a specific list of three values rather than an every 12 minute pattern and the hours value restricts execution to hour 0 only.

Count the fields to see if a seconds field is present and read them left to right. Pay attention to comma separated lists like 12,24,36 for exact minutes and to a single hour value like 0 which means midnight only.

Your team operates an Azure Cosmos DB for NoSQL instance for a retail startup called Soluna Tech. You plan to build two services named ReaderService and PushService that will consume the change feed to detect updates to containers. ReaderService will pull changes by polling and PushService will receive events using the push style. Which component should PushService use to record the most recently processed change for each partition?

  • ✓ C. Lease container

The correct option is Lease container.

A push style consumer such as PushService uses the Change Feed Processor pattern that relies on a dedicated container to store per partition leases and checkpoints, and that dedicated container is the Lease container. The processor writes lease documents that record ownership and the most recently processed position for each partition, and it updates those lease documents as it processes changes. Using the Lease container lets multiple instances coordinate work and resume from the correct point after failover or restart.
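
A sketch of wiring up the processor with a lease container using the .NET SDK. The processor name, instance name, and container references are placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class PushServiceHost
{
    public static async Task<ChangeFeedProcessor> StartAsync(
        Container monitoredContainer, Container leaseContainer)
    {
        ChangeFeedProcessor processor = monitoredContainer
            .GetChangeFeedProcessorBuilder<dynamic>("pushService", HandleChangesAsync)
            .WithInstanceName("push-host-1")
            // The lease container stores per partition ownership and the last
            // processed position so processing can resume after a restart.
            .WithLeaseContainer(leaseContainer)
            .Build();

        await processor.StartAsync();
        return processor;
    }

    private static Task HandleChangesAsync(
        IReadOnlyCollection<dynamic> changes, CancellationToken cancellationToken)
    {
        Console.WriteLine($"Received {changes.Count} change(s)");
        return Task.CompletedTask;
    }
}
```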

The Continuation token is used when a service polls the change feed to resume from a specific position and it is appropriate for pull style consumers, but it is not the mechanism the Change Feed Processor uses to coordinate multiple hosts or to persist leased ownership for push style processing.

The Integrated cache is an in-memory cache hosted in the Azure Cosmos DB dedicated gateway that reduces read latency and request charges, and it does not provide durable checkpointing or partition lease management, so it is not suitable for recording the most recently processed change for each partition.

When a question contrasts push versus pull consumers remember that a push implementation typically uses the Change Feed Processor and stores checkpoints in a lease container, while a pull implementation uses continuation tokens.

A retail technology team is preparing to deploy an Azure Function App and they have implemented the logic in a runtime that the Functions host does not support natively, and the runtime can handle HTTP requests, so they must select a Publish option when deploying to production. Which Publish setting should they choose?

  • ✓ C. Docker Container

The correct choice is Docker Container.

Choose Docker Container when your application uses a runtime that the Functions host does not support because a container image lets you package the exact runtime and all dependencies and run the function app as a self contained environment. The container approach lets you run any HTTP capable server inside the image so Azure hosts the container rather than trying to run unsupported code inside the Functions host.

Using Docker Container provides consistent local and production environments and supports native binaries and nonstandard runtimes. It is the publish option intended for bringing your own runtime to Azure Functions in production.

Custom Handler is not the right publish setting in this question. Custom handlers let you integrate an external process with the Functions host and are used when you want the host to route triggers to your handler. They do not package the full runtime as a container and they do not replace the hosting environment in the same way as a container image.

Code is incorrect because that option deploys source code and relies on the Functions host to run supported languages. If the host does not natively support your runtime then publishing code will not provide the necessary runtime environment and dependencies.

When the Functions host does not support your runtime choose Docker Container so you can package the runtime and all dependencies into an image and keep production behavior consistent with local testing.

A developer created an HTTP triggered Azure Function for MeridianRetail to process files stored in Azure Storage blobs and the function uses an output binding to update the blob. The function repeatedly times out after three minutes when handling large blob content and it must finish processing the blob data. The team proposes enqueuing the incoming HTTP payload into an Azure Service Bus queue so that a separate queue triggered function will perform the work while the HTTP endpoint returns an immediate success response. Does this approach meet the objective?

  • ✓ B. Yes this approach meets the objective

Yes this approach meets the objective is correct.

Enqueuing the incoming HTTP payload into Azure Service Bus decouples the short lived HTTP request from the long running blob processing so the HTTP endpoint can return an immediate success while a separate queue triggered function performs the heavy work.

The queue triggered function can fetch the blob by reference and then use its output binding to update the blob after processing which avoids the three minute HTTP timeout constraint. It is best to put a blob reference or a SAS token in the queue message rather than the entire large payload when possible so you avoid message size limits, and you should design for idempotency and poison message handling in the queue consumer.

No this design does not meet the objective is incorrect because moving the work to a queue and processing it with a queue triggered function specifically addresses the timeout problem by removing the long running work from the HTTP request path.

When an HTTP triggered function may time out think about using a queue to decouple the request from processing so the HTTP call returns quickly and a queue triggered function can complete the long running task.

You are building an ASP.NET Core Web API for a logistics startup that uses Azure Application Insights for telemetry and dependency monitoring. The API stores data in a non Microsoft SQL Server database. You need to ensure outgoing calls to that external datastore are tracked as dependencies. Which two dependency telemetry properties should you set? (Choose 2)

  • ✓ D. Telemetry.Context.Operation.Id

  • ✓ E. Telemetry.Name

Telemetry.Context.Operation.Id and Telemetry.Name are the correct options.

Telemetry.Context.Operation.Id is the property that ties all telemetry to a single operation or request so that outgoing calls are correlated with the originating request and appear in distributed traces in Application Insights.

Telemetry.Name is the property that records the logical name of the dependency call and it is what shows up in dependency lists and charts so setting it makes the outgoing datastore call identifiable as a dependency.
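
A sketch of manually tracking such a call with the TelemetryClient. The operation id would normally come from the current request context, and the names and type value here are placeholders.

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class DatastoreTelemetry
{
    public static void TrackDatastoreCall(
        TelemetryClient telemetryClient, string operationId, TimeSpan duration)
    {
        var dependency = new DependencyTelemetry
        {
            // Name is what appears in the dependency lists and charts.
            Name = "OrdersDb query",
            Type = "SQL",
            Duration = duration,
            Timestamp = DateTimeOffset.UtcNow,
            Success = true
        };

        // Operation.Id correlates this call with the request that triggered it.
        dependency.Context.Operation.Id = operationId;

        telemetryClient.TrackDependency(dependency);
    }
}
```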

Telemetry.Context.Cloud.RoleInstance is incorrect because that field identifies the role instance that emitted the telemetry and it does not mark a specific outgoing call as a dependency.

Telemetry.Id is incorrect because it is the unique id for a single telemetry item and it does not establish the cross item correlation needed to group dependencies under an operation.

Telemetry.Context.Operation.ParentId is incorrect because ParentId points to an immediate parent item and it is not the primary property used to correlate the whole operation. ParentId is often set automatically when child telemetry is created from a parent context.

When you need dependencies to appear in Application Insights focus on setting the operation correlation and a readable name. Set Operation.Id to group related telemetry and set Name so the dependency is visible in dependency charts.

A development team at LunarSoft is building a customer portal that relies on the Microsoft Identity platform for user sign in and API access. The portal calls several REST endpoints and you must implement authentication and authorization flows that validate the claims carried in the authentication token. Which token type should you use to identify users for the portal by using a JWT that contains claims?

  • ✓ D. ID token

ID token is correct because it is a JSON Web Token issued by the Microsoft Identity platform that contains identity claims about the signed in user and is intended to identify the user to the client application.

The ID token is returned during an OpenID Connect authentication flow and contains claims such as the user’s unique identifier name and email. The portal should validate the token signature audience issuer and relevant claims to authenticate the user and to make authorization decisions within the application.

Refresh token is incorrect because it is used to obtain new access or id tokens without re prompting the user. It is a credential for renewing tokens and it is not used to present user identity claims in requests to identify the user to the portal.

Access token is incorrect because it is intended to grant access to protected APIs and it may be opaque or scoped to a resource. It is not the primary token used by the client to prove the user’s identity to the application in OpenID Connect sign in flows.

SAML token is incorrect because it is an XML based assertion used in the SAML protocol rather than a JWT. It is not the JWT based identity token that OpenID Connect issues for user identification in modern web applications.

When the question asks which token identifies a user with claims in a JWT pick ID token. Remember that access tokens are for API authorization and refresh tokens are for renewing tokens without user interaction.

A small telemedicine company called Meridian Health is building a web service that needs to retrieve sensitive settings such as database connection strings and third party API keys at runtime and the team plans to store these values in Azure Key Vault. What should they do to allow the service to retrieve the Key Vault secrets in a secure manner?

  • ✓ C. Assign a managed identity to the application and grant it access to Key Vault secrets

Assign a managed identity to the application and grant it access to Key Vault secrets is correct.

Using a managed identity lets the application authenticate to Azure Active Directory without any credentials stored in code or configuration files. When you grant the managed identity appropriate access to Key Vault the application can obtain short lived access tokens and retrieve secrets at runtime which eliminates the need to manage or rotate embedded secrets manually.
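
A sketch using the Azure.Identity and Azure.Security.KeyVault.Secrets libraries. The vault URI and secret name are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class AppSecrets
{
    public static async Task<string> GetConnectionStringAsync()
    {
        // DefaultAzureCredential picks up the managed identity when the app runs in Azure.
        var client = new SecretClient(
            new Uri("https://meridian-vault.vault.azure.net/"),
            new DefaultAzureCredential());

        KeyVaultSecret secret = await client.GetSecretAsync("DatabaseConnectionString");
        return secret.Value;
    }
}
```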

Register a service principal and embed its client secret inside the application code is wrong because embedding a client secret in code creates a long lived credential that can be leaked from source repositories or runtime environments. A service principal can be used securely if its secret or certificate is stored and rotated safely but embedding the secret is not secure.

Store Key Vault secrets in the application source repository is wrong because putting secrets in source control exposes them to anyone with repository access and defeats the purpose of a secrets store. Key Vault is intended to keep secrets out of code and repositories and to provide controlled access at runtime.

Use a storage account shared access signature to access Key Vault is wrong because shared access signatures are specific to Azure Storage and do not provide a mechanism to authenticate to Key Vault. Using a SAS in place of proper identity based access would not give secure, auditable access to Key Vault secrets.

For authentication questions prefer answers that avoid embedded credentials. Look for options that use managed identities or short lived tokens so you can eliminate secret management from the application.

Your operations group is responsible for 120 Azure virtual machines that each have a system assigned managed identity enabled. You must obtain the objectId value for every managed identity attached to these VMs. Which two commands will return that information? (Choose 2)

  • ✓ B. Get-AzVM

  • ✓ E. Get-AzResource

The correct options are Get-AzVM and Get-AzResource.

Get-AzVM returns the virtual machine resource and includes the identity properties for the VM. The system assigned managed identity appears as the Identity.PrincipalId on the VM object and that PrincipalId is the Azure AD objectId you need, so you can query or select that field for each VM with PowerShell.

Get-AzResource can also retrieve the VM resource and its properties so you can inspect the identity block and read identity.principalId there. This command is useful when you are enumerating resources and need to extract the managed identity object id from the resource properties.
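
A sketch that lists every VM together with its identity object id, assuming the Get-AzVM list output includes the Identity block as described above:

```powershell
# List each VM with the objectId (PrincipalId) of its system assigned managed identity
Get-AzVM | Select-Object Name, @{ Name = 'ObjectId'; Expression = { $_.Identity.PrincipalId } }
```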

az ad sp credential list is incorrect because it lists credentials for service principals and does not directly enumerate the system assigned managed identity objectId for VM resources. The az ad namespace is tied to Azure AD Graph style operations which are being replaced by Microsoft Graph for newer tooling.

Get-AzureADUser is incorrect because it returns Azure AD user objects and not managed identities attached to VMs. The AzureAD PowerShell module that provides this cmdlet is also being deprecated in favor of Microsoft Graph PowerShell so it is less likely to be the intended approach on newer exams.

az ad signed-in-user list-owned-objects is incorrect because it lists objects owned by the signed in user and will not reliably return managed identities for every VM. It is focused on user ownership rather than the VM resource identity.

Get-AzureADUserOwnedObject is incorrect because it targets objects owned by a user and not VM managed identities. This cmdlet is part of the AzureAD module which is being deprecated and so it is not the recommended method for retrieving VM identity object ids on updated platforms.

When you need the object id for a system assigned managed identity query the VM resource identity and look for the principalId value. Use resource oriented commands like Get-AzVM or Get-AzResource rather than user or directory focused cmdlets.

Review the CoveTech Ltd case study in the linked document and then answer the question. Open the reference in a separate browser tab and keep this exam tab open. You must deploy the corporate website. Which Azure service should you choose?

  • ✓ B. Azure Static Web Apps

The correct answer is Azure Static Web Apps.

Azure Static Web Apps is designed to host modern static front end applications and it provides built in CI/CD from GitHub and Azure DevOps, automatic global content distribution, and easy configuration for custom domains with free TLS.

Azure Static Web Apps also integrates with Azure Functions to provide serverless APIs when the site needs dynamic behavior while keeping the front end hosted and served as static assets, which makes it a good fit for deploying a corporate website.

Azure Blob Storage static website can host static files and it is a low cost option but it lacks the same built in deployment workflows and integrated serverless API wiring and it does not provide the same developer experience as Static Web Apps.

Azure App Service Web App is designed for full stack or dynamic web applications and it supports many runtimes and containers but it is typically heavier and more complex than needed for a static corporate site.

Azure Functions is a serverless compute platform for event driven code and APIs and it is not intended to host a static front end site by itself.

When the question asks about hosting a corporate website check whether the site is primarily static and whether you need built in CI/CD and integrated serverless APIs. Match those needs to the service features rather than picking a more general purpose product.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.