AZ-204 Certified Azure Developer Questions and Answers
All Azure questions come from my AZ-204 Udemy course and certificationexams.pro
Microsoft AZ-204 Certification Exam Topics
Over the past few months, I have been helping software developers, solutions architects, DevOps engineers, and even Scrum Masters learn Azure development skills and prepare for cloud-focused certifications.
One of the most respected Azure developer certifications available is the Microsoft Azure Developer Associate (AZ-204).
So how do you pass the AZ-204 certification? You practice by using AZ-204 exam simulators, reviewing AZ-204 test questions, and taking online AZ-204 practice exams like this one.
AZ-204 Developer Associate Practice Questions
These practice questions help address commonly misunderstood AZ-204 concepts. If you can answer these correctly, you are well on your way to passing the certification.
One important note: these are not AZ-204 exam dumps. They reflect the style and difficulty of real exam questions but are not taken from the actual exam.
Now here are the AZ-204 practice questions and answers. Good luck!
| Git, GitHub & GitHub Copilot Certification Made Easy |
|---|
| Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry. Get certified in the latest AI, ML, and DevOps technologies and advance your career today. |
Certification Sample Questions
A development team at Meridian Retail is building an Azure pipeline to collect inventory snapshots from thousands of stores worldwide and each store will upload inventory files every 45 minutes to an Azure Blob storage account for downstream processing. The design must start processing when a blob is written to storage and it must filter events by store region metadata. The solution must invoke an Azure Logic App to transform data for Azure Cosmos DB and it must provide high availability across multiple regions. The system must allow up to 48 hours for retry attempts and implement exponential backoff between retries. Which Azure service should act as the event receiver?
-
❏ A. Azure Event Hubs
-
❏ B. Azure Blob Storage
-
❏ C. Azure Service Bus
-
❏ D. Azure Logic Apps
-
❏ E. Azure Event Grid
-
❏ F. Azure Functions
Which Azure CLI command should a developer run to stream an App Service application log file in real time?
-
❏ A. az webapp download
-
❏ B. Get-AzAppServiceLog
-
❏ C. az webapp log -all
-
❏ D. az webapp log tail
A development group at Nimbus Digital is running multiple ASP.NET web apps on Azure App Service, and they must persist session state and store complete HTML responses. The storage mechanism must allow session data to be shared across every application. It must allow controlled concurrent access with many readers and a single writer to the same session record. It must also save full HTTP responses for concurrent requests. The team has proposed deploying Azure Database for PostgreSQL and updating the applications. Which storage option best meets these needs?
-
❏ A. Azure Blob Storage
-
❏ B. Azure Cache for Redis
-
❏ C. Azure Database for PostgreSQL
A retail startup called BlueHarbor is building an Azure Function named FileProcessor that must access a Blob Storage account without provisioning or rotating any secrets and the function must block JavaScript running in browsers on untrusted origins from calling it via CORS. Which authentication configuration should be used to grant the function access to the storage account?
-
❏ A. Service principal using a certificate
-
❏ B. User-assigned managed identity
-
❏ C. System-assigned managed identity
A development team is designing an Azure Cosmos DB solution with the SQL API for a retail insights service called MarketView. The dataset holds several million JSON documents and each document may include several hundred attributes. No single attribute provides sufficiently distinct values for partitioning. Containers must be able to scale independently so the application can maintain an even workload distribution across partitions over time. Which two partition key strategies would satisfy this requirement? (Choose 2)
-
❏ A. A single property whose values recur often across documents
-
❏ B. A field that stores the container name for each document
-
❏ C. A composite key formed by concatenating several attribute values and appending a random suffix
-
❏ D. A single attribute that is present in only a small subset of documents
-
❏ E. Appending a hash suffix to a property value to increase entropy
Nimbus Apps will publish an ASP.NET Core site to Azure App Service and they want to capture performance and usage telemetry for the application. Which actions should they take to collect that telemetry? (Choose 3)
-
❏ A. Create a Data Collection Rule
-
❏ B. Enable Application Insights integration for the App Service
-
❏ C. Install a site extension on the App Service
-
❏ D. Create an Application Insights resource in the subscription
-
❏ E. Instrument the ASP.NET application to send telemetry to Application Insights
In a C# ASP.NET web application running in a cloud App Service, how do you emit a diagnostics entry that only appears when the logging level is set to warnings?
-
❏ A. Trace.TraceInformation("message")
-
❏ B. Trace.WriteLine("message")
-
❏ C. Trace.TraceWarning("message")
-
❏ D. Console.WriteLine("message")
How does strong consistency behave in Azure Cosmos DB for an application that is distributed across multiple geographic regions?
-
❏ A. Strong consistency requires that you commit to using Cosmos DB as your only storage solution
-
❏ B. Under strong consistency two readers in separate regions might read different versions of the same record at the same time
-
❏ C. Strong consistency ensures that any reader in any region always sees the most recently committed version of an item
-
❏ D. Strong consistency implies that when two clients write the same item the write that arrives last always wins
A startup named NovaStream is building a near real time notification service using Azure Event Grid. The solution must deliver events to over 8,000 clients that cover about 350 different event types. Events must be filtered by type before they are processed. Authentication and authorization must use Microsoft Entra ID. All event senders must publish to a single endpoint. Would publishing events to an event domain and creating a separate custom topic for each client satisfy these requirements?
-
❏ A. No
-
❏ B. Yes
You deploy an Azure App Service API to a Windows slot named DevEnv and you add two extra slots named QA and Live. You enable auto swap on the Live slot. You need to make sure initialization scripts run and dependent resources are ready before a swap happens. The proposed solution adds a new endpoint named warmupCheck to the app to execute the scripts and then updates the app settings WEBSITE_SWAP_WARMUP_PING_PATH and WEBSITE_SWAP_WARMUP_PING_STATUSES with the path to that endpoint and the expected HTTP response codes. Does this solution meet the requirement?
-
❏ A. No the change will not guarantee warmup before swap
-
❏ B. Yes the approach will trigger warmup checks before the auto swap
How can you make sure an Azure Cache for Redis instance removes entries proactively instead of waiting until memory is fully used?
-
❏ A. Enable an eviction policy such as allkeys-lru for the cache
-
❏ B. Periodically flush the entire cache
-
❏ C. Record last access timestamps and run a background job to prune idle entries
-
❏ D. Assign time to live values to keys so they expire automatically
You build and deploy an Azure Logic workflow that invokes an Azure Function app. The function publishes an OpenAPI swagger description and it interacts with an Azure Blob storage account. All components authenticate with Microsoft Entra ID. The Logic workflow must access the Blob storage securely and Microsoft Entra ID identities must persist if the Logic workflow gets removed. What should you configure?
-
❏ A. Create an Azure Key Vault and issue a client certificate
-
❏ B. Create a system assigned managed identity and provision a client certificate
-
❏ C. Register an Azure AD application and grant it the Storage Blob Data Contributor role
-
❏ D. Create a custom Microsoft Entra ID role and assign it to the storage account
-
❏ E. Create a user assigned managed identity and assign role based access controls to the storage account
You maintain Azure App Service sites for a coastal safety company named AquaDive Services and regulations require each diver to complete a health questionnaire every 30 days after a dive begins. You need the App Service to scale out while divers are filling the questionnaire and scale in after they finish. You must configure autoscaling. Which two autoscale configurations can meet this requirement? (Choose 2)
-
❏ A. Predictive autoscaling
-
❏ B. Scheduled recurrence profile
-
❏ C. Fixed date profile
-
❏ D. CPU usage based autoscaling
You have published an ASP.NET web application to Meridian App Service hosted by Contoso Cloud and you must monitor its health using Application Insights. Which Application Insights feature will automatically notify you about emerging performance degradations and anomalous failures in the web application?
-
❏ A. Application Insights Profiler
-
❏ B. Multi-step availability test
-
❏ C. Smart Detection
-
❏ D. Snapshot Debugger
Bluewave Hosting provides managed web app plans and offers both scale up and scale out choices for applications. What happens when you scale out a web app?
-
❏ A. Deploy the app to another region for global routing
-
❏ B. Upgrade the App Service Plan to a higher tier such as moving from S1 to S2
-
❏ C. Start additional VM instances to host the application
-
❏ D. Create extra app replicas within the same VM instance
A startup named BlueLantern Systems is building an application that must enqueue messages for later processing and requires that messages are consumed in the order they were produced, while the queue will remain under 95 GB in size. Which Azure messaging solution should the engineering team select?
-
❏ A. Azure Event Hubs
-
❏ B. Azure Storage queues
-
❏ C. Azure Event Grid
-
❏ D. Azure Service Bus queues
You administer an Azure Cosmos DB NoSQL account named ledgerAcct that contains a database named salesdb and a container named ordersColl. The account is set to session consistency and you are developing a service named OrderService that will run on several instances. Each instance must perform reads and writes and the instances must participate in the same session so they can share the session token. Which object should you use to share the session token between the nodes?
-
❏ A. Feed options
-
❏ B. Document response
-
❏ C. Request options
-
❏ D. Connection policy
A development team at Meridian Apps uses Azure Cosmos DB and they are trying to determine which indexing modes the database supports for automatic indexing. Which indexing modes does Azure Cosmos DB provide? (Choose 2)
-
❏ A. Composite
-
❏ B. Strong
-
❏ C. Consistent
-
❏ D. None
A payments startup called Meridian Cloud is building a Microsoft Azure hosted REST API for partner integrations and they require that every request be protected and that client applications do not provide or persist credentials that are passed to the API. Which authentication method should they implement?
-
❏ A. Azure Active Directory OAuth2 client credentials
-
❏ B. Client certificate authentication
-
❏ C. Managed identity
-
❏ D. HTTP Basic authentication
A team at BlueHarbor Technologies runs a web application in Azure and they use deployment slots. They need a PowerShell line that exchanges the preprod slot with the live slot for the app named webPortal in the resource group rgAppServices. Which PowerShell command performs the slot swap?
-
❏ A. Move-AzWebAppSlot -ResourceGroupName "rgAppServices" -Name "webPortal" -FromSlot "preprod" -ToSlot "live"
-
❏ B. Set-AzWebAppSlot -ResourceGroupName "rgAppServices" -Name "webPortal" -SourceSlot "preprod" -DestinationSlot "live"
-
❏ C. Swap-AzWebAppSlot -ResourceGroupName "rgAppServices" -Name "webPortal" -SourceSlotName "preprod" -DestinationSlotName "live"
-
❏ D. Switch-AzWebAppSlot -ResourceGroupName "rgAppServices" -AppName "webPortal" -Source "preprod" -Target "live"
A regional retailer operates an Azure Cosmos DB NoSQL account named dbx1. Multiple on-premises instances of a service called serviceA query data from dbx1. The team intends to enable integrated cache for the connections from serviceA. You must configure the “Connectivity mode” and the maximum consistency level for serviceA. Which value should be set for the “Connectivity mode” setting?
-
❏ A. gateway mode with the standard gateway
-
❏ B. direct mode
-
❏ C. gateway mode with a dedicated gateway
You are building a policy enforcement service for a midsize technology firm called Meridian Systems and you implement a stateful ASP.NET Core 3.1 web application named GovernanceService that runs in an Azure App Service instance. The GovernanceService consumes Azure Event Grid events to evaluate authentication related activity and take policy actions. You must ensure that every authentication event is processed by GovernanceService and that sign out events are handled with the lowest possible latency. What should you do?
-
❏ A. Create distinct Azure Event Grid topics and subscriptions for sign in and sign out events
-
❏ B. Add a subject prefix to sign out events and configure an Event Grid subscription that uses subjectBeginsWith filter
-
❏ C. Create a single Event Grid subscription for all authentication events and process sign outs in the same handler
-
❏ D. Deploy separate Azure Functions with dedicated Event Grid triggers for sign in and sign out events
A development team hosts a Python application in a Contoso App Service on Azure and needs to ensure the app runs on Python 3.10. Where in the Azure Portal do you configure the App Service runtime to choose the Python version?
-
❏ A. Settings then Properties
-
❏ B. App Service Plan then Apps
-
❏ C. Configuration then General Settings
-
❏ D. Deployment then Deployment Center
A software vendor called Ridgeview runs a public web application and needs to observe near real time request latencies and failure counts. Which Application Insights feature provides live metrics for performance and error counts?
-
❏ A. Smart Detection
-
❏ B. Profiler
-
❏ C. Live Metrics Stream
-
❏ D. Snapshot Debugger
A development team at mcnz.com is building an Azure application that uses Azure Cosmos DB with the latest SDK. They add a change feed processor to a new items container and attempt to read a batch of 80 documents but the operation fails when a single document cannot be read. They need to monitor the change feed processor progress on the new container while ensuring that one unreadable document does not cause the processor to retry the entire batch. Which feature should they implement to avoid retrying the whole batch when one document fails to be read?
-
❏ A. Change feed estimator
-
❏ B. Checkpointing
-
❏ C. Dead letter queue
-
❏ D. Lease container
A fintech startup named NovaPayments wants to run nightly batch jobs on Azure Spot Virtual Machines in the Europe North region. What is the primary risk of relying on Spot VMs compared with regularly provisioned pay as you go virtual machines?
-
❏ A. They may sometimes cost more than a standard VM
-
❏ B. There is no uptime SLA for Spot VMs
-
❏ C. Potential eviction when Azure reclaims the VM
-
❏ D. Quota limits on how many spot VMs you can run in a region
Review the Cedar Ridge Preserves case study at https://docs.example.com/document/d/2Kx9pQZ45/edit?usp=sharing in a separate browser tab and examine the security requirements. You need to harden the corporate web site to meet those security and traffic handling requirements. What action should you take?
-
❏ A. Azure Cache for Redis
-
❏ B. Azure Front Door
-
❏ C. Azure Application Gateway with Web Application Firewall and end to end TLS encryption
-
❏ D. Azure App Service on a Standard plan with a custom domain and TLS certificate
Your team at CedarTech is implementing an Azure Function that must accept a WebHook call which will read an image from Azure Blob Storage and then insert a new document into Azure Cosmos DB. Which trigger type should you configure for the Function app?
-
❏ A. Azure Event Grid
-
❏ B. Blob storage trigger
-
❏ C. Timer trigger
-
❏ D. HTTP trigger
You are building Azure solutions for a global ecommerce startup named Pioneer Retail and you must connect to a globally distributed NoSQL store by using the .NET SDK. You need to instantiate an object that will configure client options and send requests to the database. Which code snippet should you use?
-
❏ A. new DocumentClient(serviceEndpoint, authKey)
-
❏ B. new Container(serviceEndpoint, authKey)
-
❏ C. new CosmosClient(serviceEndpoint, authKey)
-
❏ D. new Database(serviceEndpoint, authKey)
Your operations team must ensure that every storage account in the enterprise subscription requires a minimum of TLS version 1.2 without checking each account individually. What is the most effective method to enforce this across the subscription?
-
❏ A. Deploy an Azure Automation runbook that updates all storage accounts to require TLS 1.2
-
❏ B. Manually change the TLS minimum setting for each storage account using the Azure portal
-
❏ C. Create and assign an Azure Policy that audits and enforces a minimum TLS 1.2 requirement on storage accounts
-
❏ D. Apply a Network Security Group rule to allow only TLS 1.2 traffic
A developer at Meridian Apps integrates the Acme Identity service to authenticate users and secure resources for a web portal that calls several REST APIs, and the team must validate the claims inside authentication tokens to determine granted API permissions. Which token type should be used when the token is a JWT that contains claims?
-
❏ A. Refresh token
-
❏ B. SAML assertion
-
❏ C. Access token
-
❏ D. ID token
A cloud team at BrightWave wants to get an email each time a new Azure Container Registry instance is created in their subscription. Which Azure approach will produce that email notification?
-
❏ A. Set up an Event Grid subscription on the subscription scope filtered to Microsoft.ContainerRegistry registries and invoke an Azure Function
-
❏ B. Create an Activity Log alert in Azure Monitor for the Create or Update Container Registry operation and attach an action group that sends an email
-
❏ C. Deploy an Azure Automation Runbook that polls the subscription every 10 minutes for new resources and emails the team
-
❏ D. Rely on Azure Service Health to watch the tenant and notify when resources are added
A software engineer is creating a customer portal for a startup called Northfield and will host the site on Azure App Service Web Apps. The application must keep sensitive items like database connection strings and external API keys out of the code repository. Which approach should be used to securely store and manage those secrets?
-
❏ A. Azure Key Vault
-
❏ B. Azure Blob Storage
-
❏ C. Azure App Service Application Settings
-
❏ D. Local configuration file in the application directory
A retail technology firm called Meridian Retail manages an Azure Blob Storage account named storageacct02 and they plan to grant blob access using a shared access signature together with a stored access policy. The administrators require the stored access policy to control the SAS token validity period and you must finish configuring the stored access policy and then generate the SAS token. Which type of SAS token should you create?
-
❏ A. User delegation shared access signature
-
❏ B. Account level shared access signature
-
❏ C. Container level shared access signature
A national fashion chain operates an online storefront and multiple physical shops and it keeps real time inventory to rebalance stock between locations. You build an Azure Event Grid solution to consume events from the inventory microservice running in Azure. You need a subscription filter that can adjust automatically as seasonal customer demand fluctuates. Which event filter should you implement?
-
❏ A. A prefix filter on the eventType field that matches the active season name
-
❏ B. An advanced filter that applies a Boolean expression across multiple event data fields including a season attribute
-
❏ C. A label based filter that routes events that are tagged with seasonal promotion labels
-
❏ D. A static subject filter that selects events whose subject ends with “/promo/seasonal_stock”
What benefit do Contoso Spot Virtual Machines offer compared with regularly provisioned virtual machines when cost and the risk of eviction are taken into account?
-
❏ A. They are specialized virtual machines optimized for GPU workloads
-
❏ B. They are intended for long running production services that must never be interrupted
-
❏ C. They are available at a significantly reduced cost
-
❏ D. They deliver burstable CPU performance for workloads with sporadic high CPU demand
You are building a cloud service that authenticates users through Microsoft Entra ID and you need to restrict certain endpoints so that only users assigned specific Entra ID roles can call them. Which application component should you configure to enforce those role assignments?
-
❏ A. Entra ID groups for assigning users to roles
-
❏ B. Application permissions configured in Entra ID
-
❏ C. Define application level roles and enforce role based access control inside the app
-
❏ D. App roles declared in the application registration manifest
You are deploying a static website for North Harbor Digital and you store the site files in Azure Blob Storage with static website hosting enabled. The site needs to present content on a branded domain using a privately provided SSL certificate. What component should you configure to enable HTTPS with your own certificate for the static website?
-
❏ A. Blob index tags
-
❏ B. Azure Front Door
-
❏ C. Azure Content Delivery Network (CDN)
-
❏ D. Cross-Origin Resource Sharing (CORS)
A development team at Norwood Systems has an Azure Web App that stores data in Cosmos DB, and they run a PowerShell script to create a container. The script sets $resourceGroupName = 'devResourceGroup', $accountName = 'norwoodCosmosAcct', $databaseName = 'staffDatabase', $containerName = 'staffContainer', $partitionKeyPath = '/EmpId', and $autoscaleMaxThroughput = 6000, and then calls New-AzCosmosDBSqlContainer with those values. They then run these queries against the container: SELECT * FROM c WHERE c.EmpId > '67890' and SELECT * FROM c WHERE c.UserID = '67890'. Is the first query scoped to a single partition?
-
❏ A. Yes it is an in partition query
-
❏ B. No it does not target a single partition
A development team at Bluewave Systems deploys a Java service to Azure. The application is instrumented with the Application Insights SDK. The telemetry must have extra properties added or an existing property replaced before it is sent to the Application Insights endpoint. Which Application Insights SDK feature should be used to add properties or to override an existing property?
-
❏ A. Telemetry processor
-
❏ B. Telemetry channel
-
❏ C. Sampling
-
❏ D. Telemetry initializer
A marketing technology firm needs to set up an Azure Function with a cron schedule so that it runs at five minutes after each hour on every day of the week. Which cron expression achieves this schedule?
-
❏ A. 0/5 * * * *
-
❏ B. 5 * * * *
-
❏ C. 5 * * * 1-5
-
❏ D. 0 5 * * *
A development team at Meridian Systems is building an API that is hosted in Azure. The API must authenticate when it calls other Azure services. Every incoming request must be authenticated and external clients must not supply credentials to the API. Which authentication mechanism should they implement?
-
❏ A. Service principal
-
❏ B. Anonymous
-
❏ C. Managed identity
-
❏ D. Client certificate
A fintech startup named Marlowe Systems publishes an API through Azure API Management and it validates requests using a JWT. The team must enable gateway side response caching that keys entries by the caller id and by the requested location so each user receives their own cached results. You will insert a cache-store-value policy into the policy file. Which policy section should contain the cache-store-value policy?
-
❏ A. On error
-
❏ B. Inbound
-
❏ C. Outbound
Your team manages an Azure Kubernetes Service cluster from an Entra ID joined workstation. The cluster is located in a resource group named apps-resource-group. Developers produced an application called InventoryService and packaged it as a container image. You must apply the application YAML manifest to the cluster. The proposed approach is to install Docker on the workstation and then run an interactive container from the mcr.microsoft.com/azure-cli:1.5.0 image. Will this approach accomplish the deployment?
-
❏ A. Yes this approach will accomplish the deployment
-
❏ B. No this approach will not meet the requirement
A development team at BlueRiver Apps is building a .NET application that must read a secret stored in Azure Key Vault. The team uses the Azure SDK for .NET and needs to retrieve a secret called “AppToken”. Which code example correctly fetches the secret from the key vault?
-
❏ A. // csharp var client = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(…)) var secret = client.GetSecretAsync("https://securevault-example.vault.azure.net/secrets/AppToken").Result
-
❏ B. // csharp var secret = new SecretClient("securevault-example", new DefaultAzureCredential()) .GetSecret("AppToken").Value
-
❏ C. // csharp var client = new SecretClient(new Uri("https://securevault-example.vault.azure.net/"), new DefaultAzureCredential()) KeyVaultSecret secret = client.GetSecret("AppToken")
-
❏ D. // csharp var secretClient = new SecretClient(new Uri("securevault-example"), new DefaultAzureCredential()) var secret = secretClient.GetSecretAsync("AppToken").GetAwaiter().GetResult()
Review the Riverton Technologies case study at the link provided and then answer the question that follows. Open the document in a new browser tab and keep the exam tab open. https://example.com/documents/2kLmN8pQ4R7S Which App Service plan should be chosen to host the public corporate website that requires managed SSL, global content distribution, GitHub-based CI/CD, monitoring, and a robust uptime guarantee?
-
❏ A. Premium
-
❏ B. Free
-
❏ C. Standard
-
❏ D. Isolated
LensWorks runs a hosted image service and users upload pictures through a web API that saves files to an Azure Storage blob container inside a Standard general purpose v2 account. Each uploaded picture must trigger creation of a mobile optimized image and the processing must start within 50 seconds of upload. You plan to achieve this by converting the storage account to a BlockBlobStorage account. Will that change ensure the processing begins within the required time frame?
-
❏ A. Yes converting the storage account to BlockBlobStorage will meet the start time requirement
-
❏ B. No converting the account to BlockBlobStorage will not ensure the processing starts within 50 seconds
An engineering group at Northbridge Solutions is composing nested Azure Resource Manager templates to provision a suite of resources for a pilot initiative. The team needs to run validations that check the templates against best practice guidance before any deployment. Which tool should they use to validate the templates against recommended practices?
-
❏ A. What if operation
-
❏ B. Parameter file
-
❏ C. ARM Template Test Toolkit
-
❏ D. Azure Policy
-
❏ E. Template functions
-
❏ F. Azure Deployment Manager
You are deploying a static website from Azure Blob Storage for a small retail catalog. You provisioned a storage account and enabled static website hosting. The website requires that every HTTP response include specific custom header values. What configuration should you apply?
-
❏ A. Cross Origin Resource Sharing CORS
-
❏ B. Blob index tags
-
❏ C. Azure Front Door
-
❏ D. Azure Content Delivery Network CDN
You deployed a customer facing web application to Azure App Service on a Basic tier in a single region for Meridian Analytics. Users report the site is sluggish and you need to capture full call stacks that are correlated across instances to diagnose code performance while keeping cost and user impact to a minimum. What actions should you take to collect the needed telemetry? (Choose 3)
-
❏ A. Enable Always On
-
❏ B. Enable Profiler
-
❏ C. Restart all apps in the App Service plan
-
❏ D. Enable Snapshot Debugger
-
❏ E. Enable Application Insights site extension
-
❏ F. Upgrade the App Service plan to Premium
-
❏ G. Enable remote debugging
Certification Sample Questions Answered
A development team at Meridian Retail is building an Azure pipeline to collect inventory snapshots from thousands of stores worldwide and each store will upload inventory files every 45 minutes to an Azure Blob storage account for downstream processing. The design must start processing when a blob is written to storage and it must filter events by store region metadata. The solution must invoke an Azure Logic App to transform data for Azure Cosmos DB and it must provide high availability across multiple regions. The system must allow up to 48 hours for retry attempts and implement exponential backoff between retries. Which Azure service should act as the event receiver?
-
✓ E. Azure Event Grid
The correct answer is Azure Event Grid. It is the appropriate event receiver for this design because it is a managed event routing service that can subscribe to blob created events and forward only the matching events for processing.
Azure Event Grid supports advanced filtering on event payload and custom metadata which lets you filter events by store region so only the relevant uploads trigger downstream processing. It also has native integration with Azure Logic Apps so a matching event can directly invoke the Logic App that transforms data for Azure Cosmos DB.
Azure Event Grid is offered as a globally available managed service with regional redundancy and built in delivery semantics. It provides reliable delivery with exponential backoff retries and supports deadlettering to storage which lets you retain and reprocess undelivered events and meet long retry windows required by the scenario.
Azure Event Hubs is a high throughput telemetry ingestion service for streaming scenarios and it is not intended as an event router for blob created notifications or for fine grained metadata filtering and direct Logic Apps invocation.
Azure Blob Storage is the event source in this scenario and not the event receiver. Blob Storage will emit events but it does not provide the subscription routing, filtering, or managed delivery semantics that an eventing service provides.
Azure Service Bus is a messaging broker that offers queues and topics and it is useful for decoupled messaging. It is not the natural receiver for blob created events and it does not natively provide the same event subscription model or direct Logic Apps integration that Event Grid provides.
Azure Logic Apps is the workflow engine that should be invoked to transform and load data and it acts as a handler rather than the central event router. Logic Apps can be triggered by Event Grid but they are not the recommended primary event receiver for global event routing at scale.
Azure Functions can act as an event processor and be triggered by events but they are compute endpoints and not a routing service. Functions do not replace Event Grid when you need native event subscription management, metadata filtering, and managed delivery with long retry and deadlettering support.
When a question asks for large scale event routing with metadata filtering and native invocation of Logic Apps think of Azure Event Grid first because it is designed for subscription based event delivery, filtering, and reliable retries.
Which Azure CLI command should a developer run to stream an App Service application log file in real time?
-
✓ D. az webapp log tail
az webapp log tail is correct because it is the Azure CLI command that streams an App Service application log file in real time.
The command opens a live stream to the App Service and displays application and web server logs as they are generated so you can observe requests and errors immediately. You invoke it against the target web app and resource group and the stream continues until you stop it.
az webapp download is incorrect because that command is for retrieving app content or deployment artifacts and it does not open a live log stream.
Get-AzAppServiceLog is incorrect because it represents a PowerShell style cmdlet rather than the Azure CLI command that provides the live tail feature, and the exam asked for an Azure CLI command.
az webapp log -all is incorrect because that is not the proper syntax for streaming logs and it does not correspond to the live tail operation which uses the explicit az webapp log tail command.
When a question asks about live or real time output look for verbs like tail that imply streaming and check whether the option is an Azure CLI command or a PowerShell cmdlet.
A development group at Nimbus Digital is running multiple ASP.NET web apps on Azure App Service, and they must persist session state and store complete HTML responses. The storage mechanism must allow session data to be shared across every application. It must allow controlled concurrent access with many readers and a single writer to the same session record. It must also save full HTTP responses for concurrent requests. The team has proposed deploying Azure Database for PostgreSQL and updating the applications. Which storage option best meets these needs?
-
✓ B. Azure Cache for Redis
Azure Cache for Redis is the correct option.
The managed cache provides a shared in-memory store that all of the App Service applications can use for session state and for storing full HTTP responses. It delivers low latency and high throughput while supporting TTLs and data structures that make storing session entries and responses efficient. The service also supports atomic operations, server-side scripts, and optimistic techniques that let you implement a many-readers, single-writer pattern and distributed locking without costly relational locks. Managed persistence, replication, and scaling options let you retain durability and scale reads when needed.
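For context, here is a minimal C# sketch of that many-readers, single-writer pattern using the StackExchange.Redis client. The REDIS_CONNECTION setting, session key, payload, and lock timeout are assumptions made for the example rather than details from the scenario.

```csharp
// Minimal sketch, assuming the StackExchange.Redis package and a REDIS_CONNECTION app setting.
using StackExchange.Redis;

var redis = await ConnectionMultiplexer.ConnectAsync(
    Environment.GetEnvironmentVariable("REDIS_CONNECTION")!);
var cache = redis.GetDatabase();

// Any app instance can read the shared session record.
RedisValue session = await cache.StringGetAsync("session:42");
Console.WriteLine(session.HasValue ? "session found" : "session missing");

// A short-lived lock lets a single instance write while other instances keep reading.
string lockKey = "lock:session:42";
string lockToken = Guid.NewGuid().ToString();
if (await cache.LockTakeAsync(lockKey, lockToken, TimeSpan.FromSeconds(5)))
{
    try
    {
        await cache.StringSetAsync("session:42", "{\"cartItems\":3}",
            expiry: TimeSpan.FromMinutes(20));   // TTL keeps abandoned sessions from piling up
    }
    finally
    {
        await cache.LockReleaseAsync(lockKey, lockToken);
    }
}
```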
Azure Blob Storage can hold complete responses and is accessible across applications, but it is not ideal for low latency session state and controlled concurrent access patterns. Conditional writes based on ETags exist, but blob operations are slower and they lack the atomic in memory primitives and native TTL semantics that make caches suitable for high read volumes and session storage.
Azure Database for PostgreSQL could be used to persist sessions and responses, but it introduces relational overhead and higher latency for every read and write. Enforcing a many-readers, single-writer pattern would require transactions or locks that can reduce concurrency and performance. Storing full HTTP responses in a relational database also increases I/O and cost and makes horizontal scaling more complex compared with a distributed in-memory cache designed for this use case.
When a question links session state with high concurrency and low latency think about using an in-memory distributed cache first because it provides fast reads, TTLs, and atomic operations that make single writer multi reader patterns easier to implement.
A retail startup called BlueHarbor is building an Azure Function named FileProcessor that must access a Blob Storage account without provisioning or rotating any secrets and the function must block JavaScript running in browsers on untrusted origins from calling it via CORS. Which authentication configuration should be used to grant the function access to the storage account?
-
✓ C. System-assigned managed identity
The correct answer is System-assigned managed identity.
A system-assigned managed identity allows the Azure Function to obtain Azure AD access tokens to call Blob Storage without any client secrets to provision or rotate. The identity is created and tied to the lifecycle of the Function which simplifies management and reduces operational risk.
This method also fits the CORS requirement because the Function performs authenticated server side calls to Storage with AAD tokens and browser calls remain subject to CORS restrictions so untrusted origins can be blocked from invoking the Function.
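As a rough illustration, the storage call inside the function could look like the sketch below, which assumes the Azure.Identity and Azure.Storage.Blobs packages and invents the account URL, container, and blob names. It also assumes the identity has been granted an RBAC role such as Storage Blob Data Reader on the account.

```csharp
// Minimal sketch of reading a blob with the function's system-assigned managed identity.
using Azure.Identity;
using Azure.Storage.Blobs;

// DefaultAzureCredential resolves to the managed identity when running in Azure,
// so there is no key or connection string to provision or rotate.
var blobService = new BlobServiceClient(
    new Uri("https://blueharborstorage.blob.core.windows.net"),   // assumed account name
    new DefaultAzureCredential());

BlobClient blob = blobService
    .GetBlobContainerClient("uploads")            // assumed container
    .GetBlobClient("incoming/report.csv");        // assumed blob path

var download = await blob.DownloadContentAsync();
Console.WriteLine(download.Value.Content.ToString());
```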
Service principal using a certificate is not ideal because it still requires creating and managing an app registration and a certificate lifecycle which introduces provisioning and rotation tasks that the question forbids.
User-assigned managed identity does remove the need for secrets but it requires creating and managing a separate identity resource and assigning it to the Function. That adds management overhead and it is not as tightly bound to the Function lifecycle as a system-assigned identity which is the best fit for the scenario.
When a question says no secrets or no rotation prefer managed identities and choose system-assigned if the identity should be automatically tied to the resource lifecycle.
A development team is designing an Azure Cosmos DB solution with the SQL API for a retail insights service called MarketView. The dataset holds several million JSON documents and each document may include several hundred attributes. No single attribute provides sufficiently distinct values for partitioning. Containers must be able to scale independently so the application can maintain an even workload distribution across partitions over time. Which two partition key strategies would satisfy this requirement? (Choose 2)
-
✓ C. A composite key formed by concatenating several attribute values and appending a random suffix
-
✓ E. Appending a hash suffix to a property value to increase entropy
The correct answers are A composite key formed by concatenating several attribute values and appending a random suffix and Appending a hash suffix to a property value to increase entropy.
A composite key formed by concatenating several attribute values and appending a random suffix increases partition key cardinality by combining multiple attributes and the random suffix further spreads writes and reads across partitions. This reduces the likelihood of hot partitions and helps the container maintain an even workload distribution as it scales.
Appending a hash suffix to a property value to increase entropy raises the effective uniqueness of otherwise low-entropy values by adding pseudo random variation. The hash suffix prevents a few popular values from concentrating load on a small set of partitions and supports balanced throughput across the container.
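One way to picture this is a small C# helper that builds the synthetic key. The property names, the 16-bucket suffix range, and the choice of SHA-256 are illustrative assumptions, not requirements from the scenario.

```csharp
// Minimal sketch of computing a synthetic partition key with a hash-based suffix.
using System.Security.Cryptography;
using System.Text;

static string BuildPartitionKey(string storeId, string category, string itemId)
{
    // Hash the item id so the same document always maps to the same bucket while
    // popular store and category combinations are spread across 16 logical partitions.
    byte[] hash = SHA256.HashData(Encoding.UTF8.GetBytes(itemId));
    int suffix = BitConverter.ToUInt16(hash, 0) % 16;

    return $"{storeId}-{category}-{suffix}";
}

// The returned value is what the application would write to the partition key property.
Console.WriteLine(BuildPartitionKey("store0042", "grocery", "sku-998172"));
```

Reads that know the store, category, and item id can recompute the same key, while fan-out queries can target all sixteen suffix values.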
A single property whose values recur often across documents is wrong because recurring values produce low cardinality and create hot partitions that prevent even distribution.
A field that stores the container name for each document is wrong because that yields predictable and low variety values and does not help distribute load within a container or enable per-container independent scaling.
A single attribute that is present in only a small subset of documents is wrong because missing or sparse attributes lead to many documents sharing the same partition key value such as null, which creates imbalanced partitions and poor scalability.
Choose partition keys with high cardinality and stable access patterns. When no single attribute suffices, combine attributes or add a hash or random suffix to increase entropy and avoid hot partitions.
Nimbus Apps will publish an ASP.NET Core site to Azure App Service and they want to capture performance and usage telemetry for the application. Which actions should they take to collect that telemetry? (Choose 3)
-
✓ B. Enable Application Insights integration for the App Service
-
✓ D. Create an Application Insights resource in the subscription
-
✓ E. Instrument the ASP.NET application to send telemetry to Application Insights
The correct answers are Enable Application Insights integration for the App Service, Create an Application Insights resource in the subscription, and Instrument the ASP.NET application to send telemetry to Application Insights.
Enable Application Insights integration for the App Service configures the platform to surface App Service metrics and to link the running app to an Application Insights resource. Turning on integration in the portal or via deployment templates makes it easy to route App Service level telemetry and to inject the connection string into the app settings.
Create an Application Insights resource in the subscription is required because Application Insights is the service that stores, analyzes, and visualizes telemetry. The resource provides the connection string or instrumentation key that the integration or the SDK uses to send data to Azure Monitor.
Instrument the ASP.NET application to send telemetry to Application Insights means adding the Application Insights SDK or using the built in ASP.NET Core telemetry features so requests, exceptions, dependencies, and custom events are emitted. Platform integration can capture some metrics but application level traces and dependency data require proper instrumentation.
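A minimal Program.cs sketch of that instrumentation, assuming the Microsoft.ApplicationInsights.AspNetCore package and that App Service integration supplies the APPLICATIONINSIGHTS_CONNECTION_STRING setting, looks like this:

```csharp
// Program.cs sketch for an ASP.NET Core web project with implicit usings enabled.
var builder = WebApplication.CreateBuilder(args);

// Registers collection of requests, dependencies, exceptions, and custom telemetry.
// The SDK reads the connection string from configuration at startup.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();
app.MapGet("/", () => "Hello from Nimbus Apps");
app.Run();
```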
Create a Data Collection Rule is not correct because data collection rules are for the Azure Monitor agent and scenarios like virtual machines and Kubernetes. They are not the standard mechanism for collecting telemetry from App Service hosted ASP.NET Core apps.
Install a site extension on the App Service is not required for current Application Insights collection. Older site extensions existed but the supported approaches are to enable App Service integration or to instrument the application with the SDK, so the site extension is not the recommended path for modern deployments.
When you see App Service telemetry questions remember to create an Application Insights resource and then either enable App Service integration or instrument the app. Check the App Settings for the connection string to confirm telemetry is wired correctly.
In a C# ASP.NET web application running in a cloud App Service, how do you emit a diagnostics entry that only appears when the logging level is set to warnings?
-
✓ C. Trace.TraceWarning("message")
The correct option is Trace.TraceWarning("message").
Trace.TraceWarning emits a trace at the warning level which the tracing infrastructure and App Service diagnostic listeners recognize. That message will be recorded when the application’s logging level is set to include warnings so it appears only at the warning threshold and above.
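As a quick illustration, with App Service application logging set to the Warning level the severity methods behave as the comments below describe. The messages themselves are invented.

```csharp
// Illustrative sketch of trace severities in a C# ASP.NET application.
using System.Diagnostics;

Trace.TraceInformation("Checkout started");        // hidden when the level is Warning
Trace.TraceWarning("Inventory lookup is slow");    // recorded at Warning and above
Trace.TraceError("Payment provider unreachable");  // recorded at Warning and Error levels
```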
Trace.TraceInformation("message") is an information level trace and it will not be shown when logging is configured to only include warnings and errors because it is a lower severity than a warning.
Trace.WriteLine("message") writes a general trace string through trace listeners but it does not declare a warning level. It is treated as an untyped trace and will not be filtered or guaranteed to appear as a warning message.
Console.WriteLine("message") writes to standard output and is not tied to the trace severity levels. App Service may not capture or filter console output the same way as trace warnings so it will not reliably appear only when the logging level is set to warnings.
When you want messages controlled by the configured logging level prefer using Trace.TraceWarning or the corresponding trace level methods and avoid relying on Console.WriteLine for level filtered diagnostics.
How does strong consistency behave in Azure Cosmos DB for an application that is distributed across multiple geographic regions?
-
✓ C. Strong consistency ensures that any reader in any region always sees the most recently committed version of an item
The correct answer is Strong consistency ensures that any reader in any region always sees the most recently committed version of an item.
With strong consistency Azure Cosmos DB provides linearizability so any read anywhere will reflect the latest committed write across the account. This guarantees that readers in different regions do not observe older versions once a write is committed. Keep in mind that strong consistency has higher latency and availability trade offs when you distribute data globally and it is not available if you enable multi master writes on the account.
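A short sketch with the Microsoft.Azure.Cosmos SDK shows how the account-level setting surfaces in code. The COSMOS_ENDPOINT and COSMOS_KEY environment variable names are assumptions, and the account default is taken to be Strong.

```csharp
// Minimal sketch, assuming the Microsoft.Azure.Cosmos package.
using Microsoft.Azure.Cosmos;

string endpoint = Environment.GetEnvironmentVariable("COSMOS_ENDPOINT")!;
string key = Environment.GetEnvironmentVariable("COSMOS_KEY")!;

// This client inherits the account default, which is Strong in this scenario.
var strongClient = new CosmosClient(endpoint, key);

// A client may relax consistency below the account default, but it can never strengthen it.
var relaxedClient = new CosmosClient(endpoint, key, new CosmosClientOptions
{
    ConsistencyLevel = ConsistencyLevel.Session
});
```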
Strong consistency requires that you commit to using Cosmos DB as your only storage solution is incorrect because consistency levels define read and write semantics and do not force you to use Cosmos DB as the sole storage system in your architecture.
Under strong consistency two readers in separate regions might read different versions of the same record at the same time is incorrect because strong consistency prevents that behavior and ensures readers see the most recently committed version.
Strong consistency implies that when two clients write the same item the write that arrives last always wins is incorrect because strong consistency defines read visibility and ordering guarantees and it does not by itself specify conflict resolution policies such as last writer wins in multi master scenarios.
When answering consistency questions focus on the guarantee the level provides and the trade offs it brings. Remember that strong means global linearizability but usually increases latency and restricts multi region write options.
A startup named NovaStream is building a near real time notification service using Azure Event Grid. The solution must deliver events to over 8,000 clients that cover about 350 different event types. Events must be filtered by type before they are processed. Authentication and authorization must use Microsoft Entra ID. All event senders must publish to a single endpoint. Would publishing events to an event domain and creating a separate custom topic for each client satisfy these requirements?
-
✓ B. Yes
The correct option is Yes.
Using an Event Grid event domain with a separate custom topic for each client satisfies the requirements because a domain provides a single ingress endpoint for all publishers while hosting many topics that can represent individual clients.
Event Grid supports event subscriptions with type based and advanced filters so events can be filtered by event type before they are processed and you can subscribe to only the event types each client needs.
Authentication and authorization can use Microsoft Entra ID because Event Grid integrates with Azure Active Directory and Azure role based access control so publishers and subscribers can be authenticated and authorized by identity.
Event domains are designed to scale to thousands of topics and high throughput, so modeling each of the 8,000 clients as its own topic and using subscriptions and filters to cover roughly 350 event types is a practical and supported approach.
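To make the single-endpoint pattern concrete, here is a sketch of publishing through a domain endpoint with Microsoft Entra ID authentication using the Azure.Messaging.EventGrid and Azure.Identity packages. The domain URL, event type, and per-client topic name are invented for the example.

```csharp
// Minimal sketch of publishing to an Event Grid domain endpoint.
using Azure.Identity;
using Azure.Messaging.EventGrid;

var publisher = new EventGridPublisherClient(
    new Uri("https://novastream-domain.westus2-1.eventgrid.azure.net/api/events"),
    new DefaultAzureCredential());   // Microsoft Entra ID authentication

var notification = new EventGridEvent(
    subject: "notifications/orders",
    eventType: "NovaStream.Order.Shipped",
    dataVersion: "1.0",
    data: new { orderId = 1234 })
{
    // On a domain endpoint the Topic property routes the event to one client's topic.
    Topic = "client-contoso"
};

await publisher.SendEventAsync(notification);
```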
No is incorrect because it contradicts the capabilities of an event domain and the question requirement for a single endpoint for all senders combined with per client filtering which the domain plus per client topics pattern provides.
When a scenario describes many publishers hitting one endpoint and many consumers needing selective events think about Event Grid domains for single ingress and event subscriptions with filters to route only the needed event types.
You deploy an Azure App Service API to a Windows slot named DevEnv and you add two extra slots named QA and Live. You enable auto swap on the Live slot. You need to make sure initialization scripts run and dependent resources are ready before a swap happens. The proposed solution adds a new endpoint named warmupCheck to the app to execute the scripts and then updates the app settings WEBSITE_SWAP_WARMUP_PING_PATH and WEBSITE_SWAP_WARMUP_PING_STATUSES with the path to that endpoint and the expected HTTP response codes. Does this solution meet the requirement?
-
✓ B. Yes the approach will trigger warmup checks before the auto swap
The correct option is Yes the approach will trigger warmup checks before the auto swap.
Configuring the app to expose a warmup endpoint and then setting the WEBSITE_SWAP_WARMUP_PING_PATH and WEBSITE_SWAP_WARMUP_PING_STATUSES environment variables tells Azure App Service to call that endpoint on the target slot during an auto swap. The service will poll the specified path and wait for one of the listed HTTP status codes before completing the swap so initialization scripts and dependent resources can finish starting up.
Make sure the warmup endpoint performs the necessary initialization and returns the expected status codes when ready. Also ensure the settings are applied to the slot that will receive the traffic so the platform can execute the warmup process prior to swapping.
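For reference, a warmupCheck endpoint along these lines could be a minimal ASP.NET Core route. The readiness helper below is hypothetical and stands in for whatever initialization scripts and dependency checks the app actually needs.

```csharp
// Program.cs sketch for the assumed /warmupCheck endpoint in an ASP.NET Core web project.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// WEBSITE_SWAP_WARMUP_PING_PATH would point at /warmupCheck and
// WEBSITE_SWAP_WARMUP_PING_STATUSES would list 200 so the swap waits on this route.
app.MapGet("/warmupCheck", async () =>
{
    bool ready = await RunInitializationScriptsAsync();   // hypothetical helper
    return ready ? Results.Ok("warm") : Results.StatusCode(503);
});

app.Run();

// Placeholder for the real warmup work such as priming caches and opening connections.
static Task<bool> RunInitializationScriptsAsync() => Task.FromResult(true);
```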
No the change will not guarantee warmup before swap is incorrect because Azure App Service supports the WEBSITE_SWAP_WARMUP_PING_PATH and WEBSITE_SWAP_WARMUP_PING_STATUSES settings and will use them to verify readiness. The only time the warmup would not occur is if the endpoint is misconfigured, blocked by network rules, or fails to return the expected status codes within the platform timeout.
When you see questions about slot swaps look for environment variables like WEBSITE_SWAP_WARMUP_PING_PATH and WEBSITE_SWAP_WARMUP_PING_STATUSES and remember to validate the warmup endpoint returns the expected status codes before the swap.
How can you make sure an Azure Cache for Redis instance removes entries proactively instead of waiting until memory is fully used?
-
✓ D. Assign time to live values to keys so they expire automatically
The correct answer is Assign time to live values to keys so they expire automatically.
Setting expirations on keys ensures that entries are removed automatically when their time to live elapses and memory is reclaimed without waiting for the cache to fill completely.
Redis supports key expiry and it removes expired keys lazily when they are accessed and it also runs an active expiration cycle in the background so expired keys are purged and memory is freed periodically rather than only under memory pressure.
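A short sketch with the StackExchange.Redis client shows both ways of assigning a TTL. The key names, values, and the REDIS_CONNECTION setting are assumptions for the example.

```csharp
// Minimal sketch of proactive expiry with time-to-live values.
using StackExchange.Redis;

var redis = await ConnectionMultiplexer.ConnectAsync(
    Environment.GetEnvironmentVariable("REDIS_CONNECTION")!);
var cache = redis.GetDatabase();

// Set the time to live when the value is written so the entry expires on its own.
await cache.StringSetAsync("catalog:featured", "[\"sku-1\",\"sku-2\"]",
    expiry: TimeSpan.FromMinutes(30));

// A TTL can also be added or shortened later on an existing key.
await cache.KeyExpireAsync("catalog:featured", TimeSpan.FromMinutes(5));
```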
Enable an eviction policy such as allkeys-lru for the cache is not the best answer because eviction policies are reactive and run only when the cache nears its memory limit. They do not proactively remove entries at a specified time to live.
Periodically flush the entire cache is incorrect because flushing removes all keys and causes widespread cache misses and it is a blunt instrument rather than a targeted, proactive expiry of stale entries.
Record last access timestamps and run a background job to prune idle entries is incorrect because it requires extra application logic and storage and it adds overhead. Redis already offers native TTL support so this custom pruning is usually unnecessary and more complex.
For cache expiry questions think about whether you need time based removal or memory pressure removal. Use TTL for scheduled expiry and use eviction policies only when you need behavior under low memory.
You build and deploy an Azure Logic workflow that invokes an Azure Function app. The function publishes an OpenAPI swagger description and it interacts with an Azure Blob storage account. All components authenticate with Microsoft Entra ID. The Logic workflow must access the Blob storage securely and Microsoft Entra ID identities must persist if the Logic workflow gets removed. What should you configure?
-
✓ E. Create a user assigned managed identity and assign role based access controls to the storage account
The correct option is Create a user assigned managed identity and assign role based access controls to the storage account.
A user assigned managed identity is a standalone Microsoft Entra ID principal that you can attach to your Logic workflow and other resources. It persists independently of the workflow so the identity remains if the Logic workflow is removed. You assign RBAC roles such as Storage Blob Data Contributor to that identity on the storage account so the workflow can access blobs securely without any secrets.
Create a system assigned managed identity and provision a client certificate is incorrect because a system assigned managed identity is tied to the lifecycle of the resource and is deleted when the workflow is removed. That fails the persistence requirement.
Create an Azure Key Vault and issue a client certificate is incorrect because issuing and managing certificates adds secret management overhead and does not provide the native, persistent Microsoft Entra ID principal behavior that a user assigned managed identity provides.
Register an Azure AD application and grant it the Storage Blob Data Contributor role is incorrect because although an app registration would persist it requires you to manage credentials or certificates for the app. That approach is less secure and less integrated than using a user assigned managed identity.
Create a custom Microsoft Entra ID role and assign it to the storage account is incorrect because roles are granted to principals not assigned to storage accounts by themselves. Built in storage roles are normally sufficient and creating a custom role is unnecessary for the scenario and does not address the identity persistence requirement.
When a question requires an identity to persist after a resource is deleted look for user assigned managed identity because it exists independently of the resource. Also prefer RBAC with managed identities to avoid handling secrets.
You maintain Azure App Service sites for a coastal safety company named AquaDive Services and regulations require each diver to complete a health questionnaire every 30 days after a dive begins. You need the App Service to scale out while divers are filling the questionnaire and scale in after they finish. You must configure autoscaling. Which two autoscale configurations can meet this requirement? (Choose 2)
-
✓ B. Scheduled recurrence profile
-
✓ D. CPU usage based autoscaling
The correct options are Scheduled recurrence profile and CPU usage based autoscaling.
Scheduled recurrence profile is correct because it lets you scale an App Service on a repeating schedule so you can plan an automatic scale out at the start of each 30 day questionnaire period and scale in when the period ends.
CPU usage based autoscaling is correct because metric driven autoscaling will respond to the actual load when divers are filling the questionnaire and it can scale out as CPU or other metrics rise and scale in when the load drops after they finish.
Predictive autoscaling is not appropriate here because predictive features aim to forecast future demand rather than enforce a recurring 30 day window and predictive autoscale is generally focused on different resource types and scenarios than a simple scheduled or metric based App Service configuration.
Fixed date profile is not suitable because a fixed date schedule applies to a one time date and time and does not provide a convenient recurring 30 day pattern for ongoing questionnaires.
When a workload needs scaling at regular intervals use a scheduled recurrence profile and use metric based autoscale for variable load. Read the scenario carefully to decide if the requirement is periodic or driven by real time load.
You have published an ASP.NET web application to Meridian App Service hosted by Contoso Cloud and you must monitor its health using Application Insights. Which Application Insights feature will automatically notify you about emerging performance degradations and anomalous failures in the web application?
-
✓ C. Smart Detection
The correct option is Smart Detection.
Smart Detection uses built in anomaly detection and heuristics to continuously analyze telemetry so it can automatically surface emerging performance degradations and anomalous failures. It raises proactive alerts when it finds statistically significant deviations so you are notified without having to write custom queries or tests.
Application Insights Profiler collects detailed performance traces and method level profiling to help diagnose slow requests and CPU hotspots. It is focused on capturing call stacks for slow operations and it does not provide the automatic, cross telemetry anomaly detection that Smart Detection offers.
Multi-step availability test executes scripted web transactions to verify multi page flows and it reports availability and response time for those scripts. It is useful for synthetic monitoring and targeted alerts but it does not automatically detect broad emerging performance degradations across all application telemetry.
Snapshot Debugger captures snapshots of application state at the moment exceptions occur so developers can inspect variables and call stacks. It is a powerful debugging aid and it does not perform automated anomaly detection or proactively notify you about emerging performance degradations like Smart Detection.
When a question asks about automatic anomaly notifications look for features described as using automatic detection or machine learning. Exclude tools that are explicitly for profiling or manual debugging such as Application Insights Profiler and Snapshot Debugger.
Bluewave Hosting provides managed web app plans and offers both scale up and scale out choices for applications. What happens when you scale out a web app?
-
✓ C. Start additional VM instances to host the application
Start additional VM instances to host the application is correct when you scale out a web app.
Scaling out increases the number of running instances that handle traffic so the application can serve more requests and improve availability. The platform provisions additional virtual machines or workers and spreads load across them to increase capacity and fault tolerance.
Deploy the app to another region for global routing is incorrect because scaling out does not automatically move or replicate apps across geographic regions. Multi region deployments and global routing require separate configuration and are not the same as adding more instances.
Upgrade the App Service Plan to a higher tier such as moving from S1 to S2 is incorrect because that action is called scaling up and it increases the size or power of the existing machines rather than the number of instances.
Create extra app replicas within the same VM instance is incorrect because scale out creates additional VM instances or workers. Creating more replicas inside a single VM does not provide the same isolation or capacity benefits as adding full instances.
When a question contrasts scale up and scale out remember that scale up changes the machine size and scale out adds more instances. Look for keywords like instance count or region to guide your answer.
A startup named BlueLantern Systems is building an application that must enqueue messages for later processing and requires that messages are consumed in the order they were produced, and the queue will remain under 95 GB in size. Which Azure messaging solution should the engineering team select?
-
✓ D. Azure Service Bus queues
The correct option is Azure Service Bus queues.
Azure Service Bus queues provides ordered and reliable point to point messaging and includes features such as message sessions and sequencing that let consumers process messages in the order they were produced. Sessions allow related messages to be locked to a single consumer so FIFO ordering is preserved for that session. The service also provides quotas and tiers that can accommodate queues under 95 GB while offering advanced capabilities like dead lettering and transactions for reliable processing.
Azure Event Hubs is built for high throughput event streaming and preserves ordering only within a partition. It is not a point to point queue that guarantees global ordered consumption across all messages so it does not meet the strict FIFO requirement.
Azure Storage queues are a simple storage backed queue and they do not guarantee strict ordering or provide the session based ordering features that Service Bus does. They are suitable for basic buffering but they are not the right choice when ordered delivery must be ensured.
Azure Event Grid is an event routing and delivery service for reactive architectures and it does not act as a durable FIFO queue with ordered consumption guarantees. It is intended for event distribution rather than ordered message processing.
Look for the phrase ordered or FIFO in the question. That almost always points to a messaging service with session or sequencing features such as Service Bus rather than Event Hubs Event Grid or Storage queues.
You administer an Azure Cosmos DB NoSQL account named ledgerAcct that contains a database named salesdb and a container named ordersColl. The account is set to session consistency and you are developing a service named OrderService that will run on several instances. Each instance must perform reads and writes and the instances must participate in the same session so they can share the session token. Which object should you use to share the session token between the nodes?
-
✓ C. Request options
The correct option is Request options.
With session consistency the client must send the session token with subsequent requests and the SDK exposes a SessionToken property that you attach via Request options so all instances can reuse the same token when making reads and writes.
Using Request options you explicitly set the session token on each outgoing call and that is the supported way to continue a session across multiple service instances.
Feed options applies to query and feed enumeration settings and it does not provide the mechanism to carry or set the session token for requests.
Document response is an SDK response type that may contain the session token after an operation but it is not the object you attach to outgoing calls to propagate the session. You must extract the token from the response and place it into Request options for subsequent requests.
Connection policy configures connection mode timeouts and endpoints and it does not carry session tokens between requests or nodes.
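As a rough illustration of the pattern, here is a minimal C# sketch that assumes the Azure Cosmos DB .NET SDK v3 and a container partitioned on /id. The Order type and the id values are hypothetical.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical item type for illustration only.
public record Order(string id, string region);

public static class SessionSharingExample
{
    public static async Task<string> WriteAndCaptureTokenAsync(Container ordersColl, Order order)
    {
        // Write on one instance and capture the session token from the response headers.
        ItemResponse<Order> write = await ordersColl.CreateItemAsync(order, new PartitionKey(order.id));
        return write.Headers.Session;
    }

    public static async Task<Order> ReadWithSharedTokenAsync(Container ordersColl, string id, string sessionToken)
    {
        // Another instance attaches the shared token through request options so its read
        // participates in the same session as the original write.
        ItemResponse<Order> read = await ordersColl.ReadItemAsync<Order>(
            id,
            new PartitionKey(id),
            new ItemRequestOptions { SessionToken = sessionToken });
        return read.Resource;
    }
}
```

The token returned from the write can be placed in a shared cache so every OrderService instance can attach it to its own reads and writes.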
Remember to store and set the session token in RequestOptions.SessionToken on each outgoing request when multiple instances must participate in the same session.
A development team at Meridian Apps uses Azure Cosmos DB and they are trying to determine which indexing modes the database supports for automatic indexing. Which indexing modes does Azure Cosmos DB provide? (Choose 2)
-
✓ C. Consistent
-
✓ D. None
The correct options are Consistent and None.
Consistent is an Azure Cosmos DB indexing mode that keeps the index updated synchronously with document writes so queries see the latest data immediately. This is the default automatic indexing mode and it ensures that query results reflect recent writes without delay.
None disables automatic indexing so documents are not indexed unless you create indexes manually. This mode is useful when you want to maximize write throughput or when you only perform point reads and do not need automatic queryable indexes.
Composite is incorrect because it refers to composite indexes that define multi-property index paths for sorting and filtering and it is not an indexing mode.
Strong is incorrect because it is a consistency level that controls read guarantees rather than an indexing mode.
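As a hedged sketch of how the mode is selected with the .NET SDK, the snippet below creates a container with Consistent indexing, and switching the property to IndexingMode.None would disable automatic indexing. The container id and partition key path are hypothetical.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class IndexingModeExample
{
    public static async Task<Container> CreateContainerAsync(Database database)
    {
        var properties = new ContainerProperties(id: "itemsColl", partitionKeyPath: "/categoryId")
        {
            // Consistent keeps the index updated synchronously with writes and is the default.
            // IndexingMode.None turns off automatic indexing entirely.
            IndexingPolicy = new IndexingPolicy { IndexingMode = IndexingMode.Consistent }
        };
        return await database.CreateContainerIfNotExistsAsync(properties);
    }
}
```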
When answering questions about indexing or consistency read the wording carefully and remember that Consistent and None refer to indexing modes while Strong is a consistency level.
A payments startup called Meridian Cloud is building a Microsoft Azure hosted REST API for partner integrations and they require that every request be protected and that client applications do not provide or persist credentials that are passed to the API. Which authentication method should they implement?
-
✓ C. Managed identity
The correct option is Managed identity.
Managed identity gives an Azure resource an identity in Azure Active Directory so the resource can obtain access tokens without client applications shipping or persisting credentials. The platform issues and rotates the credentials and the application code requests tokens from Azure AD at runtime, which meets the requirement that every request be protected and that clients do not provide or store secrets.
Azure Active Directory OAuth2 client credentials is incorrect because the client credentials flow requires the application to hold a client secret or certificate and present those credentials to obtain tokens. That requires the client to possess and potentially persist credentials which violates the requirement.
Client certificate authentication is incorrect because it requires the client to provision and present a certificate to the API. The certificate must be stored or managed by the client and that means credentials are held by the client application.
HTTP Basic authentication is incorrect because it sends a username and password with each request and typically requires clients to store credentials. It is also less secure than token based approaches and does not satisfy the requirement to avoid client-side credential persistence.
When an app runs on Azure prefer Managed identities to avoid storing secrets and to let Azure issue and rotate credentials automatically.
A team at BlueHarbor Technologies runs a web application in Azure and they use deployment slots. They need a PowerShell line that exchanges the preprod slot with the live slot for the app named webPortal in the resource group rgAppServices. Which PowerShell command performs the slot swap?
-
✓ C. Swap-AzWebAppSlot -ResourceGroupName “rgAppServices” -Name “webPortal” -SourceSlotName “preprod” -DestinationSlotName “live”
The correct answer is Swap-AzWebAppSlot -ResourceGroupName “rgAppServices” -Name “webPortal” -SourceSlotName “preprod” -DestinationSlotName “live”.
The Swap-AzWebAppSlot cmdlet is the Azure PowerShell command designed specifically to swap deployment slots for an App Service. It uses the ResourceGroupName and Name parameters to locate the app and the SourceSlotName and DestinationSlotName parameters to specify which slots to exchange, so the shown command will exchange the preprod slot with the live slot.
The Swap-AzWebAppSlot operation swaps the content and most configuration settings between the two slots in a controlled way, which is the intended behavior for promoting a preproduction slot to live without manual file moves.
Move-AzWebAppSlot -ResourceGroupName “rgAppServices” -Name “webPortal” -FromSlot “preprod” -ToSlot “live” is incorrect because there is no standard Azure PowerShell cmdlet named Move-AzWebAppSlot for performing slot swaps and the parameter names do not match the documented swap cmdlet.
Set-AzWebAppSlot -ResourceGroupName “rgAppServices” -Name “webPortal” -SourceSlot “preprod” -DestinationSlot “live” is incorrect because Set-AzWebAppSlot is used to configure or update slot settings rather than to perform a swap operation and the parameter names shown are not the ones used for swapping.
Switch-AzWebAppSlot -ResourceGroupName “rgAppServices” -AppName “webPortal” -Source “preprod” -Target “live” is incorrect because Switch-AzWebAppSlot is not a valid Azure PowerShell cmdlet and the parameter names do not follow Azure PowerShell conventions for slot operations.
When you need to swap App Service slots look for the verb Swap and the parameters SourceSlotName and DestinationSlotName in the Azure PowerShell cmdlet names and arguments.
A regional retailer operates an Azure Cosmos DB NoSQL account named dbx1. Multiple on-premises instances of a service called serviceA query data from dbx1. The team intends to enable integrated cache for the connections from serviceA. You must configure the “Connectivity mode” and the maximum consistency level for serviceA. Which value should be set for the “Connectivity mode” setting?
-
✓ C. gateway mode with a dedicated gateway
The correct setting for the Connectivity mode is gateway mode with a dedicated gateway.
gateway mode with a dedicated gateway is the right choice because integrated cache is implemented at the gateway layer and the dedicated gateway provides the caching capability. This lets on premises instances of serviceA route queries through the gateway so they can benefit from cached query results and reduced request unit consumption and latency.
gateway mode with the standard gateway is incorrect because the standard gateway does not offer the integrated cache feature and only proxies requests without providing the dedicated caching capabilities that the dedicated gateway provides.
direct mode is incorrect because direct mode connects clients straight to the data nodes and bypasses the gateway layer where integrated cache operates. Integrated cache cannot be applied when clients use direct mode so they would not receive the caching benefits.
When a question mentions a gateway level feature such as integrated cache pick the option that explicitly references a gateway designed for caching. Look for the word dedicated in the gateway choice.
You are building a policy enforcement service for a midsize technology firm called Meridian Systems and you implement a stateful ASP.NET Core 3.1 web application named GovernanceService that runs in an Azure App Service instance. The GovernanceService consumes Azure Event Grid events to evaluate authentication related activity and take policy actions. You must ensure that every authentication event is processed by GovernanceService and that sign out events are handled with the lowest possible latency. What should you do?
-
✓ A. Create distinct Azure Event Grid topics and subscriptions for sign in and sign out events
Create distinct Azure Event Grid topics and subscriptions for sign in and sign out events is correct because isolating sign in and sign out streams lets you guarantee delivery and route sign out events to a low latency handler while still ensuring GovernanceService receives every authentication event.
Creating distinct Azure Event Grid topics and subscriptions for sign in and sign out events lets you give each event class its own delivery endpoint, retry policy, and scaling path. This separation reduces contention and queuing for high priority sign out events, and it makes it simpler to route sign out deliveries to the lowest latency webhook or instance of GovernanceService. Event Grid provides at least once delivery and subscription-level retries so using separate subscriptions improves predictability and helps ensure every authentication event is processed.
Add a subject prefix to sign out events and configure an Event Grid subscription that uses subjectBeginsWith filter is not the best choice because subject filters operate on subscriptions but still share the same topic and overall throughput characteristics. A subject filter can route events to a subscription but it does not provide the same isolation, independent retry settings, or scaling boundaries that separate topics and subscriptions provide.
Create a single Event Grid subscription for all authentication events and process sign outs in the same handler is incorrect because a single subscription and handler can introduce processing delays and backpressure. Combining all event types in one handler makes it harder to guarantee low latency for sign out handling and increases the risk that failures or slow processing for other events will delay sign out processing.
Deploy separate Azure Functions with dedicated Event Grid triggers for sign in and sign out events is incorrect in this scenario because the requirement is to ensure events are processed by the stateful GovernanceService running in Azure App Service. Sending events to separate Functions would bypass GovernanceService and change the processing model. Functions could reduce latency but they do not meet the stated architectural requirement to have GovernanceService handle every authentication event.
When a question asks about guaranteed delivery and the lowest latency for a specific event type prefer architectural isolation with separate topics or subscriptions and think about independent retry and scaling settings. Keep in mind that filters are useful but they do not replace isolation when low latency and delivery guarantees matter.
A development team hosts a Python application in a Contoso App Service on Azure and needs to ensure the app runs on Python 3.10. Where in the Azure Portal do you configure the App Service runtime to choose the Python version?
-
✓ C. Configuration then General Settings
The correct answer is Configuration then General Settings.
Open your App Service in the Azure portal and use the Configuration blade. The General Settings area inside Configuration is where you choose the runtime stack and version so you can select Python 3.10 for that app. This setting applies to the individual App Service and causes the platform to provide the chosen Python interpreter for your application.
Settings then Properties is incorrect because the Properties page shows basic resource metadata such as region and subscription and it does not allow you to change the runtime version.
App Service Plan then Apps is incorrect because the App Service Plan controls compute resources and scaling and it is not where you configure an individual app’s runtime stack. You may see apps hosted in a plan there but you cannot set the Python version for a specific app from that view.
Deployment then Deployment Center is incorrect because the Deployment Center is used to configure source control and deployment pipelines and it does not set the platform runtime or the Python version for the app.
When you need to confirm or change the Python version go to the app’s Configuration blade and check the General Settings tab rather than looking in deployment or plan pages.
A software vendor called Ridgeview runs a public web application and needs to observe near real time request latencies and failure counts. Which Application Insights feature provides live metrics for performance and error counts?
-
✓ C. Live Metrics Stream
The correct answer is Live Metrics Stream.
Live Metrics Stream provides near real time telemetry and aggregated metrics such as request latency and failure counts by opening a live connection to your application. It displays updating charts and counters so you can observe performance and error trends as they occur and quickly respond to issues.
Smart Detection analyzes telemetry to surface anomalies and potential problems over time but it does not give a continuously updating live stream of request latencies and error counts.
Profiler captures detailed CPU and performance traces to help diagnose hotspots and long running operations but it is not intended to show live aggregated metrics for latency and failures.
Snapshot Debugger captures snapshots of exceptions and request state for debugging specific failures in production but it does not provide a near real time metrics stream of latency and error counts.
When a question asks for near real time visibility into performance and errors choose the feature that streams telemetry live rather than the tools that profile or capture debug snapshots.
A development team at mcnz.com is building an Azure application that uses Azure Cosmos DB with the latest SDK. They add a change feed processor to a new items container and attempt to read a batch of 80 documents but the operation fails when a single document cannot be read. They need to monitor the change feed processor progress on the new container while ensuring that one unreadable document does not cause the processor to retry the entire batch. Which feature should they implement to avoid retrying the whole batch when one document fails to be read?
-
✓ C. Dead letter queue
Dead letter queue is the correct option because it lets the processor isolate unreadable or poison documents so the rest of the batch can be considered handled.
Dead letter queue is a pattern where the change feed handler captures items that fail to be deserialized or processed and writes them to a separate container or store for later inspection. This allows the processor to acknowledge or checkpoint progress for the successfully processed items in the batch and avoid retrying the entire batch because a single bad document caused an error.
Change feed estimator is incorrect because it only provides an estimate of change feed lag and backlog and it does not provide a mechanism to handle per item failures or to isolate unreadable documents.
Checkpointing is incorrect as a sole solution because checkpoints record progress but do not by themselves handle a single unreadable document inside a batch. If you checkpoint only after the whole batch succeeds then one failing item will still force retries unless you implement logic to handle that item separately.
Lease container is incorrect because leases are used to distribute partition ownership and manage concurrency for the change feed processor. Leases do not address individual document processing failures or prevent a bad document from causing a batch retry.
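A minimal C# sketch of the dead letter pattern inside a change feed handler is shown below. It assumes the .NET SDK v3 change feed processor, hypothetical container and type names, and a dead letter container partitioned on /id.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical document type for illustration only.
public record ItemDoc(string id);

public static class ChangeFeedDeadLetterExample
{
    public static ChangeFeedProcessor Build(Container monitored, Container leases, Container deadLetter)
    {
        return monitored
            .GetChangeFeedProcessorBuilder<ItemDoc>("itemsProcessor", async (changes, cancellationToken) =>
            {
                foreach (ItemDoc doc in changes)
                {
                    try
                    {
                        await ProcessAsync(doc);
                    }
                    catch (Exception ex)
                    {
                        // Divert the failing item to the dead letter container so the
                        // rest of the batch is not retried because of one bad document.
                        await deadLetter.UpsertItemAsync(new { id = doc.id, error = ex.Message, item = doc });
                    }
                }
            })
            .WithInstanceName("worker-1")
            .WithLeaseContainer(leases)
            .Build();
    }

    private static Task ProcessAsync(ItemDoc doc) => Task.CompletedTask; // placeholder for real work
}
```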
When a question mentions a single unreadable or poison document think about the dead letter pattern which lets you isolate bad items and keep the processor moving while you monitor progress with estimators or checkpoints.
A fintech startup named NovaPayments wants to run nightly batch jobs on Azure Spot Virtual Machines in the Europe North region. What is the primary risk of relying on Spot VMs compared with regularly provisioned pay as you go virtual machines?
-
✓ C. Potential eviction when Azure reclaims the VM
Potential eviction when Azure reclaims the VM is the correct answer.
Azure Spot virtual machines are offered at a discount because the provider can reclaim capacity when it is needed for higher priority workloads. That means your nightly batch jobs can be interrupted when capacity is reclaimed and you must design for interruptions with retries, checkpoints, or fallback to regular VMs.
They may sometimes cost more than a standard VM is incorrect because Spot VMs are typically priced lower than standard pay as you go instances and the trade off is availability not higher cost.
There is no uptime SLA for Spot VMs is incorrect in the context of this question because the primary operational risk being tested is the possibility of eviction and interruption. SLA differences may exist but the immediate effect on batch jobs is that instances can be reclaimed.
Quota limits on how many spot VMs you can run in a region is incorrect because quota limits apply to resources in general and are not the defining risk of using Spot VMs. The main issue for Spot VMs is that available capacity can change and cause evictions.
When you see words like Spot or preemptible on the exam think about interruptible workloads and look for answers that mention eviction or reclamation as the key risk.
Review the Cedar Ridge Preserves case study at https://docs.example.com/document/d/2Kx9pQZ45/edit?usp=sharing in a separate browser tab and examine the security requirements. You need to harden the corporate web site to meet those security and traffic handling requirements. What action should you take?
-
✓ C. Azure Application Gateway with Web Application Firewall and end to end TLS encryption
Azure Application Gateway with Web Application Firewall and end to end TLS encryption is the correct choice.
The Application Gateway includes a Web Application Firewall that can protect the site from common web attacks and it supports end to end TLS so traffic remains encrypted between the client and the backend. Deploying the gateway in the virtual network places the WAF and Layer 7 routing close to the application and it also supports features like path based routing and autoscaling that meet traffic handling requirements.
Azure Cache for Redis is incorrect because it is a caching service to improve data access performance and it does not provide web traffic management or WAF capabilities for incoming client requests.
Azure Front Door is not the best fit for this case because Front Door is an edge optimized global load balancing and CDN service. It can include a WAF but the scenario requires regional application layer protection within the virtual network and native end to end TLS to backend servers which the Application Gateway implements more directly.
Azure App Service on a Standard plan with a custom domain and TLS certificate is incorrect because App Service can host the site and terminate TLS but a Standard plan does not include a built in Web Application Firewall and so it cannot provide the required application layer request inspection and protections.
Read the requirements for both security and traffic handling and look for keywords like WAF and end to end TLS. Choose the service that provides both the application layer firewall and in VNet protection near the backend.
Your team at CedarTech is implementing an Azure Function that must accept a WebHook call which will read an image from Azure Blob Storage and then insert a new document into Azure Cosmos DB. Which trigger type should you configure for the Function app?
-
✓ D. HTTP trigger
The correct option is HTTP trigger.
An HTTP trigger is appropriate because the Function must accept a webhook call which arrives as an HTTP request. The function can then read the image from Azure Blob Storage using a storage SDK or an input binding and it can insert a document into Azure Cosmos DB using an output binding or the Cosmos DB SDK.
Azure Event Grid is not the best choice because Event Grid is designed for delivering event notifications and for wiring events between services rather than acting as a general purpose HTTP endpoint for incoming webhooks.
Blob storage trigger is triggered by changes to blobs and cannot be invoked directly by an external webhook call. It is suitable when you want to react to blob create or update events but not when you must accept an on demand HTTP webhook.
Timer trigger runs on a schedule and is intended for periodic tasks. It does not provide an HTTP endpoint and so it cannot directly accept a webhook request.
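As a rough sketch, an in-process C# function along these lines fits the scenario. The blob path, database, container, and connection setting names are hypothetical, and the Cosmos DB attribute shown follows the 3.x extension style.

```csharp
using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ProcessImageWebhook
{
    // The webhook arrives as an HTTP request, a blob input binding reads the image,
    // and a Cosmos DB output binding inserts a new document.
    [FunctionName("ProcessImageWebhook")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [Blob("images/sample.jpg")] byte[] image,
        [CosmosDB("mediaDb", "imageDocs", ConnectionStringSetting = "CosmosConnection")] out dynamic document,
        ILogger log)
    {
        document = new { id = Guid.NewGuid().ToString(), sizeInBytes = image?.Length ?? 0 };
        log.LogInformation("Webhook handled, image size {Size} bytes", image?.Length ?? 0);
        return new OkResult();
    }
}
```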
When a question mentions an incoming webhook or an HTTP request look for the HTTP trigger option unless the scenario explicitly describes event notifications, blob change events, or scheduled runs.
You are building Azure solutions for a global ecommerce startup named Pioneer Retail and you must connect to a globally distributed NoSQL store by using the .NET SDK. You need to instantiate an object that will configure client options and send requests to the database. Which code snippet should you use?
-
✓ C. new CosmosClient(serviceEndpoint, authKey)
The correct option is new CosmosClient(serviceEndpoint, authKey).
new CosmosClient(serviceEndpoint, authKey) is the primary client class in the Azure Cosmos DB .NET SDK and it is designed to configure client options and send requests to the database. Instantiating CosmosClient gives you connection management, retry policies, serialization settings, and methods to obtain Database and Container objects for performing operations against the service.
The new DocumentClient(serviceEndpoint, authKey) option is incorrect because DocumentClient belonged to the legacy DocumentDB/.NET SDK. That older client has been superseded by CosmosClient in the modern .NET SDK and it should not be used for new development or on current exams.
The new Container(serviceEndpoint, authKey) option is incorrect because Container is a resource representation for a Cosmos DB container and you obtain it from a CosmosClient instance rather than instantiating it directly. It does not configure global client behavior or manage connections.
The new Database(serviceEndpoint, authKey) option is incorrect because Database is a resource representation for a Cosmos DB database and it is also obtained from CosmosClient. It does not act as the top level client that sends requests or manages SDK configuration.
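A minimal sketch of instantiating the client and configuring options follows. The endpoint, key, region, and database names are hypothetical.

```csharp
using Microsoft.Azure.Cosmos;

// CosmosClientOptions configures client wide behavior such as connection mode,
// preferred region, and retry settings before any request is sent.
var options = new CosmosClientOptions
{
    ConnectionMode = ConnectionMode.Direct,
    ApplicationRegion = Regions.WestEurope,
    MaxRetryAttemptsOnRateLimitedRequests = 5
};
var client = new CosmosClient("https://pioneer-retail.documents.azure.com:443/", "<auth-key>", options);
Container orders = client.GetContainer("storeDb", "orders");
```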
When a question asks which class to instantiate to configure the SDK and send requests choose the CosmosClient class. Remember that Database and Container are resource objects that you retrieve from the client rather than instantiate directly.
Your operations team must ensure that every storage account in the enterprise subscription requires a minimum of TLS version 1.2 without checking each account individually. What is the most effective method to enforce this across the subscription?
-
✓ C. Create and assign an Azure Policy that audits and enforces a minimum TLS 1.2 requirement on storage accounts
The correct option is Create and assign an Azure Policy that audits and enforces a minimum TLS 1.2 requirement on storage accounts.
Create and assign an Azure Policy that audits and enforces a minimum TLS 1.2 requirement on storage accounts is the most effective method because Azure Policy can be applied at the subscription or management group level and it continuously evaluates resources for compliance. A policy can both audit existing storage accounts and be configured with remediation to enforce the minimum TLS version on non compliant accounts.
Azure Policy scales across the entire subscription and provides compliance reporting so the operations team does not need to check each storage account individually. It is the native governance tool for enforcing configuration state and it integrates with remediation and alerting to maintain the required setting.
Deploy an Azure Automation runbook that updates all storage accounts to require TLS 1.2 is not ideal because a runbook is typically a one time or scheduled script and it does not provide continuous, policy based compliance reporting. A runbook also requires ongoing maintenance and does not give the same governance visibility as a policy.
Manually change the TLS minimum setting for each storage account using the Azure portal is incorrect because manual changes do not scale and they are error prone when you manage many accounts. The question asks for a method to ensure the requirement across the subscription without checking each account individually.
Apply a Network Security Group rule to allow only TLS 1.2 traffic is incorrect because network security groups filter by IP address and port and they do not inspect or enforce TLS protocol versions. An NSG cannot guarantee that only TLS 1.2 is used to access storage services.
Choose solutions that provide continuous enforcement and subscription wide governance such as Azure Policy rather than manual edits or one time scripts.
A developer at Meridian Apps integrates the Acme Identity service to authenticate users and secure resources for a web portal that calls several REST APIs, and the team must validate the claims inside authentication tokens to determine granted API permissions. Which token type should be used when the token is a JWT that contains claims?
-
✓ C. Access token
The correct answer is Access token.
Access token is the token type that APIs validate to determine granted permissions when the token is a JWT. Access tokens are issued to authorize calls to REST APIs and they commonly contain claims such as scopes, roles, audience, and expiry that the API checks to make authorization decisions. You must validate the token signature, the issuer, the audience, and the relevant permission claims before granting access.
Refresh token is not correct because refresh tokens are used by clients to obtain new access tokens and they are not presented to APIs for authorization. Refresh tokens are typically stored and used by the client against the authorization server rather than validated by the resource server.
SAML assertion is not correct because SAML assertions are XML based tokens used for single sign on and federated authentication. They are not the JWT access tokens that REST APIs normally expect to validate for permission checks.
ID token is not correct because ID tokens are issued by OpenID Connect to convey user identity to the client and they are not intended to be used as authorization tokens for APIs. Although an ID token can be a JWT, it carries identity claims rather than the permission or scope claims that APIs should use to grant access.
When you see a question about which token is used to determine API permissions focus on whether the token is used for authorization or authentication. Access tokens are for authorization and APIs should validate signature, audience, expiry, and permission claims.
A cloud team at BrightWave wants to get an email each time a new Azure Container Registry instance is created in their subscription. Which Azure approach will produce that email notification?
-
✓ B. Create an Activity Log alert in Azure Monitor for the Create or Update Container Registry operation and attach an action group that sends an email
The correct answer is Create an Activity Log alert in Azure Monitor for the Create or Update Container Registry operation and attach an action group that sends an email.
An Activity Log alert watches management plane events for the subscription and can be filtered to the specific Create or Update operation for Container Registry. You attach an action group to that alert to deliver an email to the team without needing custom polling or glue code.
Set up an Event Grid subscription on the subscription scope filtered to Microsoft.ContainerRegistry registries and invoke an Azure Function is not correct because that approach is focused on event routing and would still require custom code to convert events into an email. The built in Activity Log alert plus an action group is the straightforward supported path to notify on ARM create operations.
Deploy an Azure Automation Runbook that polls the subscription every 10 minutes for new resources and emails the team is not correct because polling is inefficient and introduces latency and operational overhead. Alerts and action groups provide near real time notifications without the need to manage a polling runbook.
Rely on Azure Service Health to watch the tenant and notify when resources are added is not correct because Service Health reports service outages and issues and it does not track or notify on the creation of individual resources in your subscription.
When a question asks about notifications for resource creation look for solutions that use the Activity Log for management plane events and an action group to send email because that combination is built in and does not require custom polling or extra functions.
A software engineer is creating a customer portal for a startup called Northfield and will host the site on Azure App Service Web Apps. The application must keep sensitive items like database connection strings and external API keys out of the code repository. Which approach should be used to securely store and manage those secrets?
-
✓ C. Azure App Service Application Settings
Azure App Service Application Settings is correct.
Application Settings keep sensitive values out of the code repository by storing them in the App Service configuration and exposing them to the application as environment variables at runtime. The values are managed in the platform and stored encrypted at rest so they do not need to live in source control or in local files.
App Service can also integrate with external secret stores if you need advanced lifecycle and rotation features. For example you can configure the app to reference Azure Key Vault via managed identities, but the built in Application Settings feature is the direct way to keep connection strings and API keys out of your repository for an App Service hosted app.
Azure Key Vault is a dedicated secret store and it is not the chosen answer in this question. It is a strong option for centralized secret management and rotation, but it requires additional setup such as managed identities and Key Vault references when used with App Service.
Azure Blob Storage is designed for storing unstructured data such as files and objects and it is not a purpose built secret management solution. Blob storage does not automatically inject secrets into the app runtime as environment variables and it is not appropriate for storing connection strings or API keys in place of a secret store.
Local configuration file in the application directory is insecure because files in the project can be accidentally committed to the repository and leaked. Storing secrets in local files defeats the requirement to keep sensitive items out of the code repository and should be avoided.
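At runtime App Service surfaces application settings as environment variables, so code reads them without any secret living in source control. A short sketch with hypothetical setting names:

```csharp
using System;

// "ApiKey" is a hypothetical app setting and "MainDb" a hypothetical custom connection
// string; custom connection strings are exposed with the CUSTOMCONNSTR_ prefix.
string apiKey = Environment.GetEnvironmentVariable("ApiKey");
string dbConnection = Environment.GetEnvironmentVariable("CUSTOMCONNSTR_MainDb");
```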
When a question asks how to keep secrets out of source control look for answers that store values in the platform configuration or environment variables. If you see Application Settings for App Service that is usually the correct exam choice for a simple web app.
A retail technology firm called Meridian Retail manages an Azure Blob Storage account named storageacct02 and they plan to grant blob access using a shared access signature together with a stored access policy. The administrators require the stored access policy to control the SAS token validity period and you must finish configuring the stored access policy and then generate the SAS token. Which type of SAS token should you create?
-
✓ C. Container level shared access signature
The correct option is Container level shared access signature.
A container level shared access signature is a service SAS that is scoped to a specific container and it can reference a stored access policy that you create on that container so administrators can centrally control the start time, expiry time, and permissions and then you generate SAS tokens that inherit those settings.
User delegation shared access signature is based on an Azure Active Directory user delegation key and is intended for AD authenticated scenarios rather than for using a container stored access policy to manage SAS validity, so it does not meet the requirement to finish configuring the stored access policy and then generate the SAS token.
Account level shared access signature grants access at the storage account scope and covers multiple services and resources, and it cannot be tied to a stored access policy on a single container, so it is too broad and does not provide the required stored access policy binding.
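The sketch below outlines the flow with the Azure.Storage.Blobs SDK, first defining the stored access policy on the container and then generating a container SAS that references it. The account, key, container, and policy names are hypothetical and property names can vary slightly between SDK versions.

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

var credential = new StorageSharedKeyCredential("storageacct02", "<account-key>");
var container = new BlobContainerClient(
    new Uri("https://storageacct02.blob.core.windows.net/reports"), credential);

// 1. Create the stored access policy that controls validity and permissions.
var policy = new BlobSignedIdentifier
{
    Id = "readPolicy",
    AccessPolicy = new BlobAccessPolicy
    {
        PolicyStartsOn = DateTimeOffset.UtcNow,
        PolicyExpiresOn = DateTimeOffset.UtcNow.AddDays(7),
        Permissions = "r"
    }
};
container.SetAccessPolicy(permissions: new[] { policy });

// 2. Generate a container scoped SAS that references the policy instead of carrying its own expiry.
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = container.Name,
    Identifier = "readPolicy"
};
string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
```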
When a question mentions a stored access policy and controlling SAS lifetime think about a service SAS scoped to the container rather than an account SAS or an AD user delegation SAS.
A national fashion chain operates an online storefront and multiple physical shops and it keeps real time inventory to rebalance stock between locations. You build an Azure Event Grid solution to consume events from the inventory microservice running in Azure. You need a subscription filter that can adjust automatically as seasonal customer demand fluctuates. Which event filter should you implement?
-
✓ B. An advanced filter that applies a Boolean expression across multiple event data fields including a season attribute
An advanced filter that applies a Boolean expression across multiple event data fields including a season attribute is the correct choice.
An advanced filter that applies a Boolean expression across multiple event data fields including a season attribute lets you match on multiple properties and use comparisons so you can react to the current season value inside each event. Advanced filters evaluate event data fields directly and support string, numeric, and boolean comparisons so you can express seasonal logic and combine conditions to keep routing accurate as demand shifts.
A prefix filter on the eventType field that matches the active season name is incorrect because prefix filters only inspect the start of a single string field and they cannot combine conditions across multiple data attributes. Relying on eventType for seasonal logic is brittle when you need to consider inventory levels or region and it will not adapt cleanly to changing rules.
A label based filter that routes events that are tagged with seasonal promotion labels is incorrect because Event Grid does not provide a built in label based subscription filter. That approach requires consistent tagging by the publisher and it lacks the expressive, data field level comparisons that advanced filters provide.
A static subject filter that selects events whose subject ends with “/promo/seasonal_stock” is incorrect because a static subject match is rigid and only looks at the subject text. It cannot express conditions across different data fields or adapt automatically to changing season criteria and it will break when subject patterns change.
When a question mentions filtering on multiple event fields or changing conditions prefer advanced filters. Simple prefix or subject matches are usually too static for dynamic business rules.
What benefit do Contoso Spot Virtual Machines offer compared with regularly provisioned virtual machines when cost and the risk of eviction are taken into account?
-
✓ C. They are available at a significantly reduced cost
They are available at a significantly reduced cost is the correct option.
Spot virtual machines are offered at steep discounts because they run on spare capacity that the provider can reclaim when needed. This lower price point is the primary benefit when you accept the trade off of potential eviction, so Spot VMs make sense for interruptible, fault tolerant, or batch workloads where cost savings outweigh the risk.
They are specialized virtual machines optimized for GPU workloads is incorrect because Spot VMs are a pricing and eviction model and are not a specific hardware specialization. GPU optimized machines are selected by choosing GPU VM sizes instead.
They are intended for long running production services that must never be interrupted is incorrect because Spot VMs can be evicted with little notice and are not suitable for services that require uninterrupted availability.
They deliver burstable CPU performance for workloads with sporadic high CPU demand is incorrect because burstable CPU behavior is provided by specific burstable VM series and is unrelated to the Spot pricing and eviction model.
Remember that Spot instances trade lower cost for higher eviction risk so you should select them for workloads that are fault tolerant, can checkpoint progress, or can be retried.
You are building a cloud service that authenticates users through Microsoft Entra ID and you need to restrict certain endpoints so that only users assigned specific Entra ID roles can call them. Which application component should you configure to enforce those role assignments?
-
✓ C. Define application level roles and enforce role based access control inside the app
The correct option is Define application level roles and enforce role based access control inside the app.
This approach is correct because enforcing role based access control happens inside the application that hosts the endpoints. You must define what roles mean for your service and then validate incoming user tokens or look up assignments so that only callers with the appropriate role can reach those endpoints. The enforcement logic lives in the app and it inspects the roles claim or queries directory information to make authorization decisions.
App roles declared in the application registration manifest is not the best answer by itself because declaring roles in the manifest only registers the role definitions with Entra ID. The manifest does not perform enforcement and you still need the application to check role claims and block or allow access to endpoints.
Application permissions configured in Entra ID is incorrect because those permissions represent app only access for daemon or service principals and not user role assignments for interactive callers. Application permissions do not map to user roles that are used to authorize individual users at runtime.
Entra ID groups for assigning users to roles is not the right choice for this question because groups are a membership construct and they do not by themselves implement per endpoint enforcement inside the app. Groups can be used to manage memberships but the application still needs to interpret group membership and enforce access, and group claims may not always be present in tokens without additional configuration.
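A minimal ASP.NET Core sketch of the enforcement step is shown below. It assumes the JWT bearer middleware is configured so that Entra ID app roles surface in the roles claim, and Policy.Admin is a hypothetical app role name.

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/admin")]
public class AdminController : ControllerBase
{
    // Only callers whose token carries the Policy.Admin role reach this endpoint.
    [HttpGet("report")]
    [Authorize(Roles = "Policy.Admin")]
    public IActionResult GetReport() => Ok("restricted data");
}
```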
When a question asks who should enforce role assignments look for the application level answer and think about checking the roles claim in the token or validating assignments at runtime inside the app.
You are deploying a static website for North Harbor Digital and you store the site files in Azure Blob Storage with static website hosting enabled. The site needs to present content on a branded domain using a privately provided SSL certificate. What component should you configure to enable HTTPS with your own certificate for the static website?
-
✓ C. Azure Content Delivery Network (CDN)
Azure Content Delivery Network (CDN) is the correct option.
Azure Content Delivery Network (CDN) can front an Azure Blob Storage static website and map your branded domain while enabling HTTPS with a privately provided SSL certificate. The CDN can integrate with Azure Key Vault or accept uploaded certificates depending on the profile and it performs TLS termination for the custom domain while serving the static files and providing caching and performance improvements.
Blob index tags are metadata labels used to organize and query blobs and they do not provide TLS termination or a way to present a static website on a custom HTTPS domain.
Azure Front Door does offer global routing and HTTPS termination and it can use custom certificates, but it is not the specific component named in this scenario and the expected solution is to use the CDN which directly integrates with static website hosting for custom HTTPS delivery.
Cross-Origin Resource Sharing (CORS) is a browser and server mechanism to control cross-origin requests and it does not provision or present SSL certificates or provide HTTPS termination for a branded domain.
When a question asks about serving a static site on a custom domain with your own certificate, look for services that perform TLS termination and support custom domain binding and then pick the option that integrates directly with static blob hosting.
A development team at Norwood Systems has an Azure Web App that stores data in Cosmos DB and they run a PowerShell script to create a container. The script sets $resourceGroupName = ‘devResourceGroup’ $accountName = ‘norwoodCosmosAcct’ $databaseName = ‘staffDatabase’ $containerName = ‘staffContainer’ $partitionKeyPath = ‘/EmpId’ $autoscaleMaxThroughput = 6000 and then calls New-AzCosmosDBSqlContainer with those values. They then run these queries against the container: SELECT * FROM c WHERE c.EmpId > ‘67890’ and SELECT * FROM c WHERE c.UserID = ‘67890’. Is the first query scoped to a single partition?
-
✓ B. No it does not target a single partition
No it does not target a single partition is correct.
The container was created with a partition key path of /EmpId and the first query uses a range condition on c.EmpId so it covers multiple partition key values and cannot be routed to a single logical partition. Cosmos DB will route a request to a single partition only when you use an equality filter on the partition key or when you provide an explicit partition key value with the request.
The second query in the scenario uses c.UserID which is not the configured partition key and it would be evaluated as a cross partition query unless additional routing information is supplied. If the query had been WHERE c.EmpId = ‘67890’ then it would have been scoped to one partition.
Yes it is an in partition query is incorrect because the greater than operator on the partition key produces a range query that spans multiple partition key values and therefore cannot be confined to a single partition. It is also wrong to assume a query on a different property such as UserID targets the partition key.
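For contrast, the sketch below shows a query that is scoped to one partition by combining an equality filter on EmpId with an explicit PartitionKey in the request options. The names follow the scenario but the wrapper method is hypothetical.

```csharp
using Microsoft.Azure.Cosmos;

public static class SinglePartitionQueryExample
{
    public static FeedIterator<dynamic> QueryOneEmployee(Container staffContainer, string empId)
    {
        QueryDefinition query = new QueryDefinition("SELECT * FROM c WHERE c.EmpId = @empId")
            .WithParameter("@empId", empId);

        // Supplying the partition key value routes the request to a single logical partition.
        return staffContainer.GetItemQueryIterator<dynamic>(
            query,
            requestOptions: new QueryRequestOptions { PartitionKey = new PartitionKey(empId) });
    }
}
```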
When you see partitioning questions check whether the query uses an equality on the configured partition key or a range or a different property. Only equality on the partition key or supplying the partition key value will scope a query to a single partition.
A development team at Bluewave Systems deploys a Java service to Azure. The application is instrumented with the Application Insights SDK. The telemetry must have extra properties added or an existing property replaced before it is sent to the Application Insights endpoint. Which Application Insights SDK feature should be used to add properties or to override an existing property?
-
✓ D. Telemetry initializer
The correct answer is Telemetry initializer. Telemetry initializers run for each telemetry item and are intended to add new properties or to override existing properties on telemetry before the item is sent to the Application Insights endpoint.
Telemetry initializers provide a simple and consistent extension point to enrich or normalize telemetry. You implement or register an initializer and it executes for every telemetry item so it is the right place to append context such as environment, role, or request identifiers or to replace a property value before transmission.
Telemetry processor is incorrect because processors are primarily used to filter, drop, or implement sampling logic and they are not the standard mechanism for consistently adding or overriding properties on every telemetry item.
Telemetry channel is incorrect because the channel handles batching, buffering, and transmission of telemetry to the endpoint and it does not provide a standard facility to enrich telemetry with additional properties before send.
Sampling is incorrect because sampling reduces the volume of telemetry by selecting a subset to send and it is not used to add or override properties on telemetry items.
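The question describes a Java service, but the extension point is the same idea across SDKs. Here is a minimal sketch using the .NET SDK's ITelemetryInitializer for illustration, with hypothetical property values.

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class EnrichingTelemetryInitializer : ITelemetryInitializer
{
    // Runs for every telemetry item before it is sent to the endpoint.
    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.GlobalProperties["Environment"] = "staging"; // add a new property
        telemetry.Context.Cloud.RoleName = "BluewaveService";          // override an existing one
    }
}
```

In ASP.NET Core the initializer is registered with dependency injection as an ITelemetryInitializer so the SDK applies it to every telemetry item.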
When a question mentions adding or overriding fields on telemetry items choose Telemetry initializers because they run on every telemetry item before it is sent.
A marketing technology firm needs to set up an Azure Function with a cron schedule so that it runs at five minutes after each hour on every day of the week. Which cron expression achieves this schedule?
-
✓ B. 5 * * * *
The correct option is 5 * * * *.
5 * * * * sets the minute field to five and leaves the hour day month and weekday fields as wildcards so the function runs at five minutes past every hour on every day of the week. In standard cron notation the first field is minutes and that is why this expression matches minute five of every hour.
0/5 * * * * is incorrect because it means every five minutes starting at minute zero so it fires at 0, 5, 10 and so on each hour rather than only at five minutes past the hour.
5 * * * 1-5 is incorrect because the trailing 1-5 restricts the schedule to weekdays only and the question requires every day of the week.
0 5 * * * is incorrect because it schedules minute zero of hour five each day so it runs once daily at 5 AM rather than at five minutes after every hour.
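For reference, an Azure Functions timer trigger expresses the same schedule with the six field NCRONTAB format that includes seconds, as in this hedged in-process C# sketch.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class HourlyJob
{
    // "0 5 * * * *" is the six field equivalent of "5 * * * *":
    // second zero, minute five, every hour, every day of every month.
    [FunctionName("HourlyJob")]
    public static void Run([TimerTrigger("0 5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation("Fired at five minutes past the hour");
    }
}
```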
Remember that standard cron uses five fields in the order minute hour day month weekday. Azure Functions timer triggers often use a six field format that includes seconds so confirm which format the question expects before answering.
A development team at Meridian Systems is building an API that is hosted in Azure. The API must authenticate when it calls other Azure services. Every incoming request must be authenticated and external clients must not supply credentials to the API. Which authentication mechanism should they implement?
-
✓ C. Managed identity
Managed identity is the correct choice for this scenario.
Managed identities allow an Azure hosted API to obtain Azure AD access tokens to call other Azure services without storing or exposing credentials. Azure manages the lifecycle of the identity and the API can request tokens from a local endpoint so there are no secrets that developers or external clients need to handle.
Because the identity belongs to the API itself and is provisioned by Azure, external clients do not need to supply credentials to the API in order for it to authenticate to downstream services. This meets the requirement that clients must not provide credentials while still ensuring the API can authenticate when calling other Azure services.
Service principal is an Azure AD application identity but it normally requires managing credentials such as client secrets or certificates. That would force the team to distribute or rotate secrets which the scenario prohibits.
Anonymous would allow unauthenticated access so it fails the requirement that every incoming request must be authenticated.
Client certificate would require external clients to present certificates for authentication which contradicts the requirement that external clients must not supply credentials and it increases certificate management overhead.
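A minimal sketch of how the API would use its managed identity with the Azure SDK follows. The storage account name is hypothetical and the identity must already hold an appropriate RBAC role on the target resource.

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

// DefaultAzureCredential picks up the managed identity at runtime, so no secret is
// stored in the API and callers never supply credentials for downstream access.
var blobService = new BlobServiceClient(
    new Uri("https://meridianstore.blob.core.windows.net"),
    new DefaultAzureCredential());
```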
When an Azure hosted service must call other Azure services without storing secrets prefer managed identities because Azure issues and manages the tokens for the service.
A fintech startup named Marlowe Systems publishes an API through Azure API Management and it validates requests using a JWT. The team must enable gateway side response caching that keys entries by the caller id and by the requested location so each user receives their own cached results. You will insert a cache-store-value policy into the policy file. Which policy section should contain the cache-store-value policy?
-
✓ C. Outbound
Outbound is correct because the cache-store-value policy needs to run as the response flows back through the gateway so the gateway can capture and store the response keyed by caller id and requested location.
The Outbound section runs after the backend returns a response and before the response is sent to the client, so placing cache-store-value there lets the gateway store the final response and use request or caller data to build the cache key. This ensures each caller receives their own cached results and prevents storing partial or upstream responses.
Inbound is incorrect. The Inbound section executes before the request is sent to the backend and it cannot capture the backend response to populate the gateway response cache. Inbound policies are for request validation and transformation rather than for storing responses.
On error is incorrect. The On error section only runs when a fault occurs, so it will not run for normal successful responses and is not suitable for populating the standard response cache.
If a policy needs the backend response then place it in the Outbound section because that is where response handling and caching should occur.
Your team manages an Azure Kubernetes Service cluster from an Entra ID joined workstation. The cluster is located in a resource group named apps-resource-group. Developers produced an application called InventoryService and packaged it as a container image. You must apply the application YAML manifest to the cluster. The proposed approach is to install Docker on the workstation and then run an interactive container from the mcr.microsoft.com/azure-cli:1.5.0 image. Will this approach accomplish the deployment?
-
✓ B. No this approach will not meet the requirement
No this approach will not meet the requirement.
The proposed method runs the Azure CLI inside an isolated Docker container and that container will not automatically have the cluster credentials or the host Entra ID single sign on session. You must have an authenticated kubectl context that targets the AKS cluster to apply the YAML manifest and an interactive azure-cli container does not provide that by default.
The azure-cli container image also does not necessarily include kubectl or a kubeconfig file inside the container. To make this work you would need to perform an explicit sign in inside the container or mount your host kubeconfig and install kubectl inside the container. Without those steps you cannot reach the cluster to run kubectl apply.
Yes this approach will accomplish the deployment is incorrect because it assumes the container will inherit the host credentials and tooling automatically. That inheritance does not happen by default and additional configuration is required to authenticate and provide kubectl access to the AKS cluster.
When a question mentions running CLI in a container check whether the container will have an authenticated kubectl context or access to the host credentials and whether you need to mount kubeconfig or install tooling inside the container.
A development team at BlueRiver Apps is building a .NET application that must read a secret stored in Azure Key Vault. The team uses the Azure SDK for .NET and needs to retrieve a secret called “AppToken”. Which code example correctly fetches the secret from the key vault?
-
✓ C. // csharp var client = new SecretClient(new Uri(“https://securevault-example.vault.azure.net/”), new DefaultAzureCredential()) KeyVaultSecret secret = client.GetSecret(“AppToken”)
The correct answer is // csharp var client = new SecretClient(new Uri(“https://securevault-example.vault.azure.net/”), new DefaultAzureCredential()) KeyVaultSecret secret = client.GetSecret(“AppToken”).
This is correct because the modern Azure SDK for .NET exposes the secrets API through the SecretClient in the Azure.Security.KeyVault.Secrets package and it expects the vault endpoint as a full URI and a TokenCredential such as DefaultAzureCredential. Calling GetSecret(“AppToken”) returns the secret in a KeyVaultSecret result which matches the example code.
// csharp var client = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(…)) var secret = client.GetSecretAsync(“https://securevault-example.vault.azure.net/secrets/AppToken”).Result is incorrect because it uses the older Microsoft.Azure.KeyVault client. That legacy client and authentication pattern have been superseded by the Azure SDK libraries and the example also blocks on an async call with .Result which can lead to deadlocks. The older client is less likely to be used on newer exams.
// csharp var secret = new SecretClient("securevault-example", new DefaultAzureCredential()).GetSecret("AppToken").Value is incorrect because the SecretClient constructor expects a full vault URI or a Uri instance and not just the vault name string. Passing the vault name string will not create a valid endpoint for the client.
// csharp var secretClient = new SecretClient(new Uri("securevault-example"), new DefaultAzureCredential()) var secret = secretClient.GetSecretAsync("AppToken").GetAwaiter().GetResult() is incorrect because the new Uri call uses an invalid URI string that lacks the scheme and host. The endpoint must be a full URI such as https://securevault-example.vault.azure.net/ for the client to work correctly.
When answering SDK questions look for the newer Azure SDK types such as SecretClient and check whether a full vault URI and a TokenCredential like DefaultAzureCredential are being used.
Review the Riverton Technologies case study at the link provided and then answer the question that follows. Open the document in a new browser tab and keep the exam tab open. https://example.com/documents/2kLmN8pQ4R7S Which App Service plan should be chosen to host the public corporate website that requires managed SSL, global content distribution, GitHub based CI CD, monitoring, and a robust uptime guarantee?
-
✓ C. Standard
Standard is the correct choice to host the public corporate website.
The Standard App Service plan provides the features required for a public corporate site while remaining cost effective. It supports managed SSL certificates and custom domains, it can be integrated with global content distribution solutions, it supports GitHub based continuous deployment through the Deployment Center, and it has monitoring options and the ability to scale to multiple instances to meet a robust uptime guarantee.
Premium is not the chosen answer because it is a higher tier that offers more compute and advanced scaling and isolation features. It would also satisfy the requirements but it is typically more costly and is not required when the Standard plan already meets the stated needs.
Free is incorrect because that tier lacks support for custom domains and managed SSL in a production ready way and it does not offer the scaling or SLA characteristics needed for a corporate public site.
Isolated is incorrect because it targets scenarios that require dedicated hosting in an App Service Environment for network isolation and compliance. That makes it more complex and expensive than needed for the described public website where the Standard plan suffices.
When selecting a tier look for the smallest plan that meets all functional requirements and SLA needs and verify CI CD and monitoring support before finalizing your choice.
LensWorks runs a hosted image service and users upload pictures through a web API that saves files to an Azure Storage blob container inside a Standard general purpose v2 account. Each uploaded picture must trigger creation of a mobile optimized image and the processing must start within 50 seconds of upload. You plan to achieve this by converting the storage account to a BlockBlobStorage account. Will that change ensure the processing begins within the required time frame?
-
✓ B. No converting the account to BlockBlobStorage will not ensure the processing starts within 50 seconds
The correct answer is No converting the account to BlockBlobStorage will not ensure the processing starts within 50 seconds.
Converting a storage account to the BlockBlobStorage kind only changes the account type and its performance profile and it does not change how blob-created events are emitted or delivered. The requirement to start processing within a strict time window depends on the eventing or trigger mechanism you choose and on its configuration and runtime behavior rather than on the storage account kind.
For near real time reaction you need to use a suitable event delivery approach such as subscribing to blob-created events with Azure Event Grid and invoking your processing immediately, and you must validate end to end latency and scale settings. Simply converting the account does not guarantee that processing will begin within 50 seconds.
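As a rough illustration of that approach, the sketch below shows a blob created event handled by an Azure Function with an Event Grid trigger. It assumes the in-process C# function model with the Microsoft.Azure.WebJobs.Extensions.EventGrid binding, and the function name ResizeOnUpload is hypothetical.

```csharp
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class ResizeOnUpload
{
    // Runs as soon as Event Grid delivers a Microsoft.Storage.BlobCreated event,
    // which is what keeps the reaction time inside the 50 second requirement.
    [FunctionName("ResizeOnUpload")]
    public static void Run([EventGridTrigger] EventGridEvent blobCreatedEvent, ILogger log)
    {
        log.LogInformation("Blob created: {subject}", blobCreatedEvent.Subject);
        // Generate the mobile optimized image here.
    }
}
```

An Event Grid subscription on the storage account, filtered to Microsoft.Storage.BlobCreated events, delivers each upload to this function typically within seconds, and that behavior does not depend on the storage account kind.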
Yes converting the storage account to BlockBlobStorage will meet the start time requirement is incorrect because it assumes that changing the account kind alone will reduce event delivery or trigger latency. That assumption is not valid since event delivery and trigger behavior are separate concerns from storage account type.
Focus on the trigger mechanism and measure end to end latency in tests. Choose Event Grid or a direct eventing integration for near real time workflows and validate under realistic load.
An engineering group at Northbridge Solutions is composing nested Azure Resource Manager templates to provision a suite of resources for a pilot initiative. The team needs to run validations that check the templates against best practice guidance before any deployment. Which tool should they use to validate the templates against recommended practices?
-
✓ C. ARM Template Test Toolkit
The correct answer is ARM Template Test Toolkit.
ARM Template Test Toolkit is a static testing toolkit that runs rule based checks against ARM templates and nested templates to surface authoring issues and deviations from recommended practices before you deploy. It can be executed locally or in continuous integration pipelines so the engineering group can validate templates as part of their pre deployment process.
What if operation provides a preview of the resource changes a deployment would make so you can see deltas before you deploy. It does not perform a rule driven best practice analysis of the template itself so it does not meet the requirement.
Parameter file only supplies runtime values to an ARM template at deployment time. It helps with deploying different environments but it does not analyze or validate template authoring guidelines.
Azure Policy is a governance and compliance service that evaluates resource properties and can audit or block non compliant deployments. It focuses on resource state and enforced rules and is not a static ARM template testing toolkit for authoring best practices.
Template functions are the built in expressions and functions used inside ARM templates to compute values and reference resources. They are part of template authoring and not a validation tool for checking recommended practices.
Azure Deployment Manager is intended to orchestrate and coordinate staged deployments across regions and resource groups and it helps manage deployment flow. It does not provide a rule based static analysis of ARM templates for best practices.
When a question asks about checking templates against recommended practices before deployment look for tools that explicitly mention testing or static analysis rather than governance or orchestration services. Integrate the ARM Template Test Toolkit into your CI pipeline to catch template issues early.
You are deploying a static website from Azure Blob Storage for a small retail catalog. You provisioned a storage account and enabled static website hosting. The website requires that every HTTP response include specific custom header values. What configuration should you apply?
-
✓ D. Azure Content Delivery Network CDN
The correct answer is Azure Content Delivery Network CDN.
Azure Content Delivery Network CDN can add or override HTTP response headers at the edge using its rules engine so you can ensure every response from your static website includes the required custom header values while continuing to serve content from Azure Blob Storage. The CDN also provides caching and global delivery which improves performance for a retail catalog.
Cross Origin Resource Sharing CORS is not correct because CORS controls which origins may access resources and manages a small set of CORS response headers rather than letting you add arbitrary custom headers to every response.
Blob index tags are not correct because they are metadata used for indexing and querying blobs inside Azure Storage and they do not affect HTTP response headers sent to clients.
Azure Front Door is not the correct choice for this question because it is primarily a global application delivery and routing service and it is generally more complex and costly for a small static site. Although some Front Door SKUs can modify headers, the Azure CDN is the more appropriate and common solution to inject custom response headers for static content.
When a question asks about adding or modifying HTTP response headers for static content think about using edge services like CDN because they can modify headers without changing the origin. Also weigh cost and complexity when choosing between CDN and Front Door.
You deployed a customer facing web application to Azure App Service on a Basic tier in a single region for Meridian Analytics. Users report the site is sluggish and you need to capture full call stacks that are correlated across instances to diagnose code performance while keeping cost and user impact to a minimum. What actions should you take to collect the needed telemetry? (Choose 3)
-
✓ B. Enable Profiler
-
✓ D. Enable Snapshot Debugger
-
✓ E. Enable Application Insights site extension
The correct answers are Enable Application Insights site extension, Enable Profiler and Enable Snapshot Debugger.
Enable Application Insights site extension installs the Application Insights agent into your App Service and creates the connection to an Application Insights resource. This extension is the low impact way to start collecting telemetry from an Azure Web App and it is required to use the Profiling and Snapshot features that surface call stacks across instances.
Enable Profiler turns on periodic, sampled profiling that collects call stacks and performance traces across all instances and surfaces aggregated hotspots in Application Insights. The profiler is designed to run in production with minimal overhead because it samples and uploads compact traces rather than continuously recording every thread.
Enable Snapshot Debugger captures lightweight snapshots of the process state when exceptions or breakpoints occur so you can see full call stacks and local variable values. It is intended to be used in production because it limits impact by taking targeted snapshots instead of attaching a full live debugger.
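For teams that prefer to wire these up in application code rather than the portal toggles, a rough ASP.NET Core sketch follows. It assumes the Microsoft.ApplicationInsights.AspNetCore, Microsoft.ApplicationInsights.Profiler.AspNetCore, and Microsoft.ApplicationInsights.SnapshotCollector packages are installed and that the Application Insights connection string is supplied through configuration.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Application Insights telemetry, using the connection string from configuration
// (for example the APPLICATIONINSIGHTS_CONNECTION_STRING app setting).
builder.Services.AddApplicationInsightsTelemetry();

// Code level equivalents of the Profiler and Snapshot Debugger toggles.
builder.Services.AddServiceProfiler();   // Microsoft.ApplicationInsights.Profiler.AspNetCore
builder.Services.AddSnapshotCollector(); // Microsoft.ApplicationInsights.SnapshotCollector

var app = builder.Build();
app.MapGet("/", () => "Meridian Analytics");
app.Run();
```

Whether enabled through the site extension in the portal or through the NuGet packages in code, the same correlated call stack data flows into Application Insights.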
Enable Always On is not a telemetry collector. It keeps an app loaded to avoid cold starts but it does not capture call stacks or profile code performance.
Restart all apps in the App Service plan will not collect diagnostic data and it causes downtime or user disruption. Restarting is an operational action and does not replace the diagnostic tooling needed for correlated call stacks.
Upgrade the App Service plan to Premium is unnecessary for obtaining call stacks when you can enable the Application Insights extension and the Profiling and Snapshot features. Upgrading would increase cost without being required for these diagnostics.
Enable remote debugging attaches an interactive debugger that can pause or slow requests and it is not suitable for lightweight production tracing. Remote debugging has higher impact on users and on performance than the profiler or snapshot debugger.
When troubleshooting production slowness prefer tools that are designed for low overhead. Enable the Application Insights extension and then use Profiler or the Snapshot Debugger rather than attaching a full remote debugger.
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
