Azure Developer Exam Dumps and AZ-204 Certification Braindumps
All Azure questions come from my AZ-204 Udemy course and certificationexams.pro
Microsoft AZ-204 Certification Exam Topics
Despite the title of this section, this is not an AZ-204 exam braindump.
Copying real exam questions provides no genuine knowledge and undermines your professional credibility.
All practice questions here are developed from study materials and the certificationexams.pro platform, which provides free AZ-204 practice questions.
Real AZ-204 Sample Questions
Each question aligns with Microsoft Azure Developer Associate exam objectives. They reflect real-world development challenges but are not copied from the exam.
AZ-204 Developer Associate Practice Questions
If you understand these questions and why incorrect options are incorrect, you will be well prepared for the real AZ-204 exam.
These materials help you learn, not cheat.
AZ-204 Certification Exam Questions & Answers
All Azure questions come from my AZ-204 Udemy course and certificationexams.pro
Review the Northfield Stores case brief hosted at example.com and keep this test tab open. You must provide the inventory application developers with access to the store location dataset stored in a storage account for a 90 day development window. What should you use?
-
❏ A. Azure RBAC role
-
❏ B. Microsoft Entra ID refresh token
-
❏ C. Shared access signature (SAS) token
-
❏ D. Storage account access key
-
❏ E. Microsoft Entra ID access token
An online billing service named HarborCloud runs a browser callable REST API and the engineers must prevent web pages served from other domains from invoking those endpoints in a browser. Which security feature controls whether web pages from a different origin can call the API?
-
❏ A. Azure Active Directory
-
❏ B. Cross Site Scripting (XSS)
-
❏ C. Cross Origin Resource Sharing (CORS)
-
❏ D. OAuth 2.0
A payments technology firm called ArborCloud is creating a serverless Azure Functions app. The function must run when a message is placed on an Azure Storage queue and it must use the queue name supplied by an app setting named queue_setting. The function must also create a blob whose name matches the message body. How should the blob name be referenced inside the function.json file?
-
❏ A. %queue_setting%/{fileName}
-
❏ B. {queueTrigger}
-
❏ C. {queue_setting}/{id}
Which container image formats are accepted by Contoso Container Registry?
-
❏ A. Docker images only
-
❏ B. ISO disk images and ZIP archives
-
❏ C. OCI images, Docker images, OCI artifacts and Helm charts
-
❏ D. Docker images and Google Container Registry artifacts
Which two methods will reduce read latency when performing store locations lookups in Azure Cosmos DB? (Choose 2)
-
❏ A. Use Azure Cache for Redis to cache store locations
-
❏ B. Provision an Azure Cosmos DB dedicated gateway
-
❏ C. Change the account to strong consistency and raise provisioned RUs
-
❏ D. Build a composite index and use parameterized queries
You are building a web portal that relies on the Contoso Identity Hub for user sign in and access control. The portal calls multiple REST APIs. One API call needs to read a user calendar. The portal also requires the ability to send email on behalf of the user. You must request authorization for the portal and the APIs. Which OAuth parameter should you include in the authorization request?
-
❏ A. tenant
-
❏ B. code_challenge
-
❏ C. client_id
-
❏ D. state
-
❏ E. scope
You are building a client portal for Aurora Systems that uses the Microsoft identity platform to authenticate users and request resources. The portal calls multiple REST endpoints that require an access token from the identity service. Which three properties must you provide when you request an access token? (Choose 3)
-
❏ A. Directory (tenant) ID
-
❏ B. Application secret
-
❏ C. Redirect URI
-
❏ D. Application (client) ID
A retail development team must run an application on an Azure virtual machine and the application requires access to encryption keys stored in an Azure Key Vault instance. Identities created in Microsoft Entra ID must be removed automatically when their parent Azure resources are deleted. How should the team configure identity and Key Vault access to avoid embedding secrets in the application code? (Choose 2)
-
❏ A. Create a user assigned managed identity for the virtual machine
-
❏ B. Enable a system assigned managed identity on the virtual machine
-
❏ C. Place the key value in application configuration and reference that secret from the app
-
❏ D. Grant Key Vault access by assigning Azure role based access control permissions
A team at Lakeside Systems runs several microservices on Azure Container Apps and needs to diagnose issues. Which capability should they use to view a container’s console output in near real time?
-
❏ A. Azure Monitor Log Analytics
-
❏ B. Azure Monitor metrics
-
❏ C. Attach to container shell
-
❏ D. Log streaming
Which Service Bus subscription rule action adds an annotation to the message by modifying the original message properties?
-
❏ A. Duplicate the message and change the payload
-
❏ B. Update the original message properties
-
❏ C. Clone the message and add a user property
A development team at HarborTech is building a web application that uses the Azure SDK to read and update blobs in a zone-redundant BlockBlobStorage account. The app needs to detect whether a blob has changed since it was last read and update operations must apply only when they are based on the current blob contents. Which conditional HTTP header should the application set when performing updates to ensure the operation uses the latest version of the data?
-
❏ A. If-Modified-Since
-
❏ B. If-None-Match
-
❏ C. If-Match
A fintech startup called LumaData has an Azure Function that reads files from a storage container named uploads. The function must run when a new file arrives and it must avoid reprocessing a file if the file is later overwritten. Which setting should you apply to the Blob Trigger binding to meet this requirement?
-
❏ A. Set dataType to binary
-
❏ B. Set the connection property to the storage account connection string
-
❏ C. Enable blob versioning for the container
-
❏ D. Set source to EventGrid
A development team at Summit Apps needs a managed Azure offering that supports the publish and subscribe messaging model for decoupling components and distributing messages to multiple subscribers. Which Azure service meets this requirement?
-
❏ A. Azure Event Grid
-
❏ B. Azure Storage Queues
-
❏ C. Azure Service Bus
-
❏ D. Azure Event Hubs
Refer to the Harbor Beans case study at the web link below and answer the following. Open the linked document in another browser tab and keep the exam tab open. https://example.com/doc/harbor-beans-case-study You must implement a function using Azure Functions to handle customized orders. Which implementation choice should you specify for “Azure Functions feature”?
-
❏ A. Output binding
-
❏ B. Trigger
-
❏ C. Input binding
How can you ensure that blobs copied into Azure Blob Storage are automatically moved to the Archive access tier?
-
❏ A. Include the x-ms-access-tier header set to Archive on the Put Blob request
-
❏ B. Create a lifecycle management rule with tierToArchive and daysAfterModificationGreaterThan 30
-
❏ C. Create a lifecycle management rule with tierToArchive and apply a blobIndexMatch filter
You are building a mobile client that connects to an Azure SQL Database named 'Prometheus' owned by NovaCorp. The database contains a table named 'Clients' that has a column called 'contact_email'. You want to apply dynamic data masking to hide values in the 'contact_email' column. The proposed solution runs the PowerShell command Set-AzSqlDatabaseDataMaskingRule -DatabaseName 'Prometheus' -SchemaName 'dbo' -TableName 'Clients' -ColumnName 'contact_email' -MaskingFunction 'email'. Does this approach meet the requirement?
-
❏ A. Yes
-
❏ B. No
A development team at Meridian Apps is updating serverless code with Azure Functions. How many triggers may a single function declare?
-
❏ A. No upper limit on triggers per function
-
❏ B. A fixed cap of 64 triggers per function
-
❏ C. Exactly one trigger
-
❏ D. Zero triggers or a single trigger
You operate an Azure App Service web API that is deployed across three regions and that uses Azure Front Door for global traffic routing while Application Insights collects telemetry. You must calculate the application availability for each month. Which two methods will give you that information? (Choose 2)
-
❏ A. Azure Monitor workbooks
-
❏ B. Application Insights availability tests
-
❏ C. Azure Monitor logs
-
❏ D. Azure Monitor metrics
A regional courier firm named Northpoint Delivery must map delivery driver records to a stable user identifier in Azure Active Directory. You are configuring the application setting labeled “Payload claim value” to extract an identifier from the ID token. Which claim value should you choose to consistently identify driver profiles?
-
❏ A. aud
-
❏ B. idp
-
❏ C. oid
Which scope should be configured to enable per operation consistency for write operations to Azure Cosmos DB?
-
❏ A. Container level
-
❏ B. The client application level for orderSvc
-
❏ C. The Cosmos DB account level
A development team at Alder Labs is building an ASP.NET Core web application that will be deployed to Azure App Service. The application requires a centralized store for session state and it also needs to cache complete HTTP responses that are frequently served. Which Azure service can meet both needs?
-
❏ A. Azure SQL Database
-
❏ B. Azure Cache for Redis
-
❏ C. Azure Content Delivery Network
-
❏ D. Azure Storage Account
Refer to the Meadow Roasters case study at the link below and answer the follow up question. Open the document in a new tab and do not close the test tab. https://example.com/docs/meadow-roasters-case You must implement an Azure Function to handle customized orders. Which value should you select for the Event source setting?
-
❏ A. HTTP trigger
-
❏ B. Blob Storage trigger
-
❏ C. Service Bus Queue
-
❏ D. Cosmos DB change feed
An online retailer named Solstice Retail operates four web apps on Azure App Service called PortalApp, InventoryApp, OrdersApp and APIBackend. Each app currently uses a system assigned managed identity to access resources, and all secrets are stored in an Azure Key Vault named VaultPrime. They want to simplify permission management by moving to a single shared managed identity for all apps. What action should they take?
-
❏ A. Create an Azure AD service principal and distribute its credentials to the apps
-
❏ B. Create a single Azure Active Directory user account for all applications to use
-
❏ C. Configure all App Service instances to use the same user assigned managed identity
-
❏ D. Configure all apps to share the same system assigned managed identity
A regional chain called Harbor Retail needs to collect point of sale device telemetry from 2,400 stores around the globe and persist the data in Azure Blob storage. Each device produces about 3 megabytes of data every 24 hours and each location has between one and six devices sending data. The device records must be correlatable by a device identifier and the chain expects to open more locations in the future. You propose provisioning Azure Event Grid and configuring event filtering to evaluate the device identifier to accept the device data. Will this approach satisfy the requirements?
-
❏ A. Yes
-
❏ B. No
Which area of the Azure portal displays the ARM template that was used to deploy a resource?
-
❏ A. Activity log
-
❏ B. Deployments blade
-
❏ C. Virtual Machine pane
You are creating a public gateway for a news API used by Cityline Media. The backend is a RESTful service and it publishes an OpenAPI specification. You must enable access to the news API through an Azure API Management instance. Which Azure PowerShell command should you run?
-
❏ A. New-AzureRmApiManagementBackend -Context $ApimContext -Url $BackendUrl -Protocol http
-
❏ B. New-AzureRmApiManagement -ResourceGroupName $ResourceGroup -Name $ApimName -Location $EastUS -Organization "Cityline Media" -AdminEmail [email protected]
-
❏ C. Import-AzureRmApiManagementApi -Context $ApimContext -SpecificationFormat 'Swagger' -SpecificationPath $OpenApiFile -Path $ApiPath
-
❏ D. New-AzureRmApiManagementBackendProxy -Url $ApiUrl
A payments startup named Northwind Labs uses Azure Functions and the engineers must add a library that allows one function to call and coordinate other functions while preserving state. Which Functions extension should they install to enable that capability?
-
❏ A. Extension Bundles
-
❏ B. Azure Functions Core Tools
-
❏ C. Durable Functions
-
❏ D. Azure Logic Apps
A small fintech called BlueRidge Financial is creating an Azure Key Vault with PowerShell and they must retain deleted vault objects for 75 days. Which two parameters must be specified together to meet this retention requirement? (Choose 2)
-
❏ A. EnabledForTemplateDeployment
-
❏ B. EnablePurgeProtection
-
❏ C. EnabledForDeployment
-
❏ D. EnableSoftDelete
A small SaaS firm called BrightApps operates an Azure Web App that persists data in Cosmos DB. They provision a Cosmos container by running this PowerShell snippet: $resourceGroupName = 'webAppRg'; $accountName = 'brtCosmosAcct'; $databaseName = 'hrDatabase'; $containerName = 'staffContainer'; $partitionKeyPath = '/StaffId'; $autoscaleMaxThroughput = 6000; New-AzCosmosDBSqlContainer -ResourceGroupName $resourceGroupName -AccountName $accountName -DatabaseName $databaseName -Name $containerName -PartitionKeyKind Hash -PartitionKeyPath $partitionKeyPath -AutoscaleMaxThroughput $autoscaleMaxThroughput. They then execute these queries against the container: SELECT * FROM c WHERE c.StaffId > '20000' and SELECT * FROM c WHERE c.UserKey = '20000'. Is the minimum provisioned throughput for the container 400 RUs?
-
❏ A. Yes
-
❏ B. No
Which Azure managed disk redundancy option minimizes downtime from a datacenter outage and enables rapid rollback to the prior day’s disk image?
-
❏ A. Read-access geo-redundant storage RA-GRS
-
❏ B. Zone-redundant storage ZRS
-
❏ C. Locally-redundant storage LRS
A regional fintech named Meridian Trust plans to use Azure Storage for documents and logs and they must issue a shared access signature that allows access to an item in a single storage service such as blob storage. Which SAS type should they select to delegate access to a resource within one storage service?
-
❏ A. User delegation SAS
-
❏ B. Account-level SAS
-
❏ C. Service-level SAS
A startup named Meridian Labs must deploy a collection of Azure virtual machines with an ARM template and place them into a single availability set. You must configure the template so that the maximum possible number of VMs remain reachable during a hardware outage or planned maintenance. Which value should you set for the platformFaultDomainCount property?
-
❏ A. 2
-
❏ B. Lowest supported value
-
❏ C. Highest supported value
-
❏ D. 1
A retail chain named Harbor Retail operates 2500 locations worldwide and needs to ingest point of sale terminal data into an Azure Blob storage account. Each terminal emits approximately 3 megabytes of data every 24 hours. Each site hosts between one and six terminals that send data. The data must be correlated by a terminal identifier and the solution must allow for future store expansion. Will provisioning an Azure Notification Hub and registering every terminal with it satisfy these requirements?
-
❏ A. Azure Event Hubs
-
❏ B. Azure Notification Hub
-
❏ C. Yes
-
❏ D. Azure IoT Hub
-
❏ E. No
A payments startup called BlueMarble Systems exposes its APIs through Azure API Management to control client access and it needs to enforce client certificate authentication so only authorized callers reach the services. In which policy section should the <authentication-certificate> policy be placed?
-
❏ A. Outbound processing stage
-
❏ B. Backend configuration section
-
❏ C. Error handling section
-
❏ D. Inbound processing stage
How should you deploy a web application so it stays responsive during periods of heavy traffic while minimizing costs?
-
❏ A. Azure Functions Consumption plan
-
❏ B. App Service Standard tier with autoscale
-
❏ C. Virtual Machine Scale Set with autoscale
BlueWave Solutions operates several web front ends on Azure and needs to collect events and telemetry using Application Insights. You must configure the web apps so they send telemetry to Application Insights. Which three of the following actions should you perform, and in what sequence?
1. Enable the App Service diagnostics extension
2. Add the Application Insights SDK to the application code
3. Retrieve the Application Insights connection string
4. Create an Azure Machine Learning workspace
5. Provision an Application Insights instance
-
❏ A. 3 then 5 then 1
-
❏ B. 2 then 4 then 5
-
❏ C. 5 then 3 then 2
-
❏ D. 1 then 4 then 3
-
❏ E. 4 then 1 then 2
A development team at HarborTech is creating a web portal that will interact with blob containers in an Azure Storage account. The team registered an application in Azure Active Directory and the portal will rely on that registration. End users will sign in with their Azure AD credentials and those credentials will be used to access the stored blobs. Which API permission type should be granted to the Azure AD application registration to permit the app to act on behalf of the signed in user?
-
❏ A. User.Read
-
❏ B. Storage Blob Data Contributor
-
❏ C. user_impersonation
A team at HarborSoft deployed an Azure Container Apps instance and they turned off ingress for the container app. End users report they cannot reach the service and you observe that the app has scaled down to zero replicas. You need to restore access to the app. The suggested fix is to enable ingress, add a TCP scaling rule and apply that rule to the container app. Will this change resolve the accessibility problem?
-
❏ A. Yes this change will resolve the accessibility problem
-
❏ B. No this change will not resolve the accessibility problem
You just provisioned a new Azure subscription for a small company and you are building an internal staff portal that will display confidential records. The portal uses Microsoft Entra ID for sign in. You need to require multi factor authentication for staff who access the portal. What two tasks should you perform? (Choose 2)
-
❏ A. Enable the legacy baseline policy in Microsoft Entra ID conditional access
-
❏ B. Upgrade the tenant to Microsoft Entra ID Premium P1
-
❏ C. Create a new conditional access policy in Microsoft Entra ID
-
❏ D. Enable per user multi factor authentication for the user accounts
-
❏ E. Configure the portal to use Microsoft Entra ID B2C
-
❏ F. Enable the application proxy to publish the internal site
How do you link reply messages to the original Azure Service Bus messages while preserving session context and enabling correlation for auditing? (Choose 2)
-
❏ A. Set ReplyTo to receiving entity name
-
❏ B. Place MessageId into CorrelationId
-
❏ C. Assign MessageId to SequenceNumber
-
❏ D. Copy SessionId to ReplyToSessionId
-
❏ E. Set SequenceNumber into DeliveryCount
At Cascade Analytics a team is building an Azure Function that acts as a webhook to fetch an image from Azure Blob Storage and then insert a metadata record into Azure Cosmos DB. Which output binding should be configured for the function?
-
❏ A. Azure Queue Storage
-
❏ B. HTTP
-
❏ C. Azure Cosmos DB
-
❏ D. Azure Blob Storage
A software operations team at Stratus Digital manages an Azure App Service web app named WebAppAlpha and an Azure Function called FuncProcessor. The web app reports telemetry to an Application Insights instance named appInsightsProd. The team set up a synthetic availability test and an alert rule in appInsightsProd that sends an email when the test fails. They want the alert to also cause FuncProcessor to run. The proposed solution is to create an Azure Monitor action group. Will this solution achieve the goal?
-
❏ A. No the action group alone will not cause the function to execute
-
❏ B. Yes an Azure Monitor action group can be used to invoke the Function app
You are building a .NET Core web service for a company called Meridian Retail that uses Azure App Configuration. You populate the App Configuration store with 120 entries. The application must keep all configuration values consistent when any single entry changes and it must apply changes at runtime without restarting the service. The solution should also limit the total number of calls to the App Configuration service. How can you implement dynamic configuration updates in the application? (Choose 2)
-
❏ A. Decrease the App Configuration client cache duration from the default value
-
❏ B. Register a sentinel key in the App Configuration store and enable a refresh that updates all settings
-
❏ C. Increase the App Configuration cache duration beyond the default value
-
❏ D. Add Azure Key Vault and configure the Key Vault configuration provider
-
❏ E. Register every key in the store and disable global refresh so each key is refreshed individually
-
❏ F. Replace the App Configuration entries with environment variables for each setting
You deploy an ASP.NET Core web application for Riverton Software to Azure App Service and you instrument it with “Application Insights” telemetry so you can monitor performance. You need to confirm that the application is reachable and responsive from multiple geographic probe locations at scheduled intervals and you must notify the support team if the site stops responding. Which test types can you configure for the web application? (Choose 2)
-
❏ A. Load test
-
❏ B. URL ping
-
❏ C. TrackAvailability API
-
❏ D. Multi-step web test
-
❏ E. Unit test
Which Application Insights sampling setting should be chosen for an Azure Function that forwards telemetry to a workspace so that ingestion remains within the workspace quota?
-
❏ A. Configure sampling overrides
-
❏ B. Enable fixed rate sampling
-
❏ C. Enable adaptive sampling
Refer to the Maple Roastery case study at the link below and answer the following question. Open the link in a new tab and keep the test tab open. https://docs.google.com/document/d/1co5iSrqjkbDIQT5rvGEk7FAlZQdRYPIesWy0ncNT06I/edit?usp=sharing You must store customized product records in Azure Cosmos DB. Which Cosmos DB API should you choose?
-
❏ A. Cassandra API
-
❏ B. MongoDB API
-
❏ C. Core SQL API
-
❏ D. Gremlin API
Contoso Cloud Streams uses throughput units to limit how quickly events can be ingested and you can assign between 1 and 25 throughput units. What ingress rate does a single throughput unit allow for incoming data?
-
❏ A. Cloud Pub/Sub
-
❏ B. 1 megabyte per second or 1000 events per second whichever comes first
-
❏ C. 1000 events per second
-
❏ D. 1 gigabyte per second
You manage an Azure SQL database for BlueTech Solutions that uses Microsoft Entra ID for sign in and you must allow database developers to connect from Microsoft SQL Server Management Studio with their corporate on premises Entra accounts while keeping interactive sign in prompts to a minimum. Which authentication method should you enable?
-
❏ A. Azure Multi Factor Authentication
-
❏ B. Microsoft Entra authentication token
-
❏ C. OATH software tokens
-
❏ D. Microsoft Entra ID integrated authentication
An engineering team at Contoso Cloud is revising an ARM template and they need each deployed resource to use the same region as the resource group that contains it. Which ARM template function returns the region of the resource group?
-
❏ A. [parameters('deployLocation')]
-
❏ B. [environment()]
-
❏ C. [resourceGroup().location]
-
❏ D. [subscription().location]
What configuration should be applied to an Azure Cache for Redis instance to minimize metadata loss if a region becomes unavailable? (Choose 2)
-
❏ A. Enable geo-replication to a secondary region
-
❏ B. Enable append only file persistence
-
❏ C. Configure frequent snapshot backups
Azure Developer Exam Questions Answered
All Azure questions come from my AZ-204 Udemy course and certificationexams.pro
Review the Northfield Stores case brief hosted at example.com and keep this test tab open. You must provide the inventory application developers with access to the store location dataset stored in a storage account for a 90 day development window. What should you use?
-
✓ C. Shared access signature (SAS) token
Shared access signature (SAS) token is the correct choice for granting the inventory application developers time limited access to the store location dataset for a 90 day development window.
A Shared access signature (SAS) token lets you grant scoped permissions to specific storage resources and set an explicit expiry time so you can limit access to exactly 90 days. You can restrict permissions to read or write and you can create the SAS at the service level or use a user delegation SAS tied to Azure AD for better control. Using a Shared access signature (SAS) token follows the principle of least privilege and avoids sharing long lived credentials.
Azure RBAC role is not the best answer because RBAC grants identity based permissions through Azure AD and usually requires creating or managing Azure AD identities and assignments. That approach is better for ongoing managed access rather than a simple, scoped temporary token for external or short term developer work.
Microsoft Entra ID refresh token is not appropriate because refresh tokens are used to obtain new access tokens for an authenticated user or service. They are not a direct mechanism for granting scoped, time limited access to storage resources that can be handed out to developers.
Storage account access key is incorrect because the account key grants full control over the storage account and it cannot be scoped to a single dataset or easily limited to a 90 day window. Sharing account keys greatly increases risk and does not follow least privilege practices.
Microsoft Entra ID access token is not the right choice because access tokens are short lived and tied to a specific authenticated principal and flow. They require Azure AD authentication and are not as convenient for issuing a simple scoped, time limited grant to a set of developers the way a Shared access signature (SAS) token is.
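As a concrete illustration, here is a minimal Azure PowerShell sketch that issues a read only service SAS on the container holding the store location data, valid for 90 days. The account, container, and key values are placeholders rather than details from the case brief.

```powershell
# Sketch only. Assumes the Az.Storage module and placeholder names.
$ctx = New-AzStorageContext -StorageAccountName 'storelocationsacct' -StorageAccountKey $storageKey

# Read and list permissions on one container, expiring 90 days from now
New-AzStorageContainerSASToken -Name 'store-locations' `
    -Permission rl `
    -ExpiryTime (Get-Date).AddDays(90) `
    -Context $ctx
```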
When the exam asks for temporary, scoped access to Storage think SAS token rather than long lived account keys or managing extra identities. Check the allowed expiry and permission scope in the question before answering.
An online billing service named HarborCloud runs a browser callable REST API and the engineers must prevent web pages served from other domains from invoking those endpoints in a browser. Which security feature controls whether web pages from a different origin can call the API?
-
✓ C. Cross Origin Resource Sharing (CORS)
The correct option is Cross Origin Resource Sharing (CORS).
Cross Origin Resource Sharing (CORS) is a browser enforced policy that controls whether web pages served from a different origin can make requests to your API. The server indicates allowed origins by returning response headers such as Access-Control-Allow-Origin and related CORS headers and the browser will block or allow the call accordingly. For non simple requests the browser issues a preflight OPTIONS request and the server must respond with the appropriate headers to permit the actual request. Properly configuring CORS on the API is the way to prevent web pages from other domains from invoking the endpoints in a browser while still allowing approved origins.
Azure Active Directory is an identity and access management service that handles authentication and authorization and it does not control browser cross origin request behavior. You can require tokens from Azure AD to protect an API but that does not replace the browser enforced CORS checks.
Cross Site Scripting (XSS) is a type of client side vulnerability where attackers inject malicious scripts into web pages and it is not a mechanism for controlling cross origin requests. XSS is something to prevent rather than the control that governs whether a page from another origin can call an API.
OAuth 2.0 is an authorization framework used to obtain and use access tokens and it addresses who can access resources rather than whether browsers are allowed to make cross origin requests. OAuth is often used alongside CORS but it does not implement the browser policy that blocks cross origin calls.
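If you want to see CORS in action from the command line, you can replay what the browser does during a preflight check. The endpoint and origin below are placeholders, and this snippet only inspects response headers rather than configuring anything.

```powershell
# Simulate a browser preflight: send an Origin header with an OPTIONS request
$headers = @{
    'Origin'                        = 'https://partner-site.example'
    'Access-Control-Request-Method' = 'GET'
}
$response = Invoke-WebRequest -Uri 'https://api.harborcloud.example/billing/invoices' `
    -Method Options -Headers $headers

# If this header is missing or does not include the origin, the browser blocks the call
$response.Headers['Access-Control-Allow-Origin']
```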
When a question refers to browser behavior or web pages on other domains think CORS first and look for server response headers like Access-Control-Allow-Origin that the server must set to allow the request.
A payments technology firm called ArborCloud is creating a serverless Azure Functions app. The function must run when a message is placed on an Azure Storage queue and it must use the queue name supplied by an app setting named queue_setting. The function must also create a blob whose name matches the message body. How should the blob name be referenced inside the function.json file?
-
✓ B. {queueTrigger}
{queueTrigger} is correct because the queue trigger exposes the message text as binding data named {queueTrigger} and that binding data can be used directly in the blob path to name the blob after the message body.
The storage queue trigger populates binding data that is available to other bindings in function.json. The binding data property {queueTrigger} contains the full queue message text so using {queueTrigger} in the output blob path results in a blob whose name matches the message body. App setting substitution is a separate mechanism and should be used for configuration values rather than for runtime message content.
%queue_setting%/{fileName} is incorrect because %queue_setting% uses app setting substitution and it will resolve configuration values. The token {fileName} is not a standard binding data property supplied by the queue trigger and will not receive the message body.
{queue_setting}/{id} is incorrect because {queue_setting} is not a binding data token and application settings are referenced with percent signs rather than braces. The token {id} is not the binding data name for the queue message body and will not provide the message text for the blob name.
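The following sketch shows what the function.json described in this scenario could look like, written out from PowerShell for convenience. The binding names, blob container, and output path are assumptions and not part of the original question.

```powershell
# Sketch of a function.json with a queue trigger and a blob output binding.
# %queue_setting% resolves an app setting and {queueTrigger} resolves the message body.
@'
{
  "bindings": [
    {
      "name": "orderMessage",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "%queue_setting%",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputBlob",
      "type": "blob",
      "direction": "out",
      "path": "orders/{queueTrigger}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
'@ | Set-Content -Path .\ProcessOrder\function.json
```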
Remember that runtime values from triggers are referenced with binding data placeholders like {queueTrigger}. Use %SETTING_NAME% only for app setting substitution and not for trigger message content.
Which container image formats are accepted by Contoso Container Registry?
-
✓ C. OCI images, Docker images, OCI artifacts and Helm charts
OCI images, Docker images, OCI artifacts and Helm charts is correct.
The Contoso Container Registry accepts modern container and artifact formats so it can store OCI images and Docker images which are compatible because Docker images implement the OCI image specification. It also supports OCI artifacts which cover a broader set of artifact types and it can host Helm charts for Kubernetes application packaging.
Docker images only is incorrect because the registry supports additional formats beyond just Docker images and the correct option includes OCI artifacts and Helm charts as well.
ISO disk images and ZIP archives is incorrect because those are general archive or disk image formats and they are not container image or artifact formats that a container registry is designed to store and serve.
Docker images and Google Container Registry artifacts is incorrect because Google Container Registry refers to a service rather than a container image format and the option does not mention OCI artifacts or Helm charts which the Contoso registry does support. Note that Google Cloud is moving users toward Artifact Registry which is why answers naming older services are less likely on newer exams.
Read each option carefully and watch for the difference between a format and a service. The exam will usually list supported artifact formats such as OCI and Helm rather than naming a specific registry service.
Which two methods will reduce read latency when performing store locations lookups in Azure Cosmos DB? (Choose 2)
-
✓ B. Provision an Azure Cosmos DB dedicated gateway
-
✓ D. Build a composite index and use parameterized queries
The correct answers are Provision an Azure Cosmos DB dedicated gateway and Build a composite index and use parameterized queries.
Provision an Azure Cosmos DB dedicated gateway reduces read latency because the dedicated gateway hosts the integrated cache, which serves repeated point reads and query results from the gateway's memory instead of going back to the backend partitions. Cache hits return faster and consume no request units, which gives more consistent read performance for lookup heavy workloads such as store location queries.
Build a composite index and use parameterized queries improves read latency by allowing queries that filter and sort on multiple properties to be served directly from the index without full scans. Parameterized queries promote plan reuse and avoid inefficient query text variations so the database can execute lookups faster and with lower request charge.
Use Azure Cache for Redis to cache store locations is not selected here because the question targets Cosmos DB techniques to reduce read latency. An external cache can lower end to end latency but it is not a Cosmos DB feature and it adds architectural complexity that the exam item did not ask for.
Change the account to strong consistency and raise provisioned RUs is incorrect because strong consistency typically increases cross region read latency and raising provisioned RUs improves throughput but does not directly address inefficient queries or lack of proper indexing which are more common causes of high read latency.
When a question asks about reducing read latency in Cosmos DB look for answers that mention native features such as indexing and gateway modes rather than external caches unless the question explicitly includes cross system caching.
You are building a web portal that relies on the Contoso Identity Hub for user sign in and access control. The portal calls multiple REST APIs. One API call needs to read a user calendar. The portal also requires the ability to send email on behalf of the user. You must request authorization for the portal and the APIs. Which OAuth parameter should you include in the authorization request?
-
✓ E. scope
The correct option is scope.
The scope parameter tells the authorization server which permissions or delegated access the client is requesting on behalf of the user. In this scenario the portal needs permission to read a user calendar and to send mail on the user’s behalf so those capabilities are expressed as scopes in the authorization request and the user or administrator grants consent for them.
APIs and identity providers define named scopes such as Calendars.Read and Mail.Send and the client includes multiple scopes in a single scope value separated by spaces so both the portal and the APIs can be authorized in one request.
tenant is not the OAuth parameter used to request permissions and it is typically part of the identity provider endpoint or tenant selection in the URL rather than the authorization request that asks for delegated access.
code_challenge is used for PKCE to mitigate authorization code interception for public clients and it does not convey which permissions or scopes the application is requesting.
client_id identifies the application to the authorization server so the server knows which app is requesting access but it does not specify which resources or actions the app needs on behalf of the user.
state is a value returned unchanged from the authorization response and it is used to correlate requests and protect against cross site request forgery attacks rather than to request permissions.
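As an illustration only, this snippet builds an authorization request URL with a space separated scope value, using the Microsoft identity platform endpoint as the concrete example. The tenant, client ID, redirect URI, and scope names are placeholders.

```powershell
# Build the authorize URL. Multiple delegated permissions go into one scope value.
$scope = [uri]::EscapeDataString('Calendars.Read Mail.Send openid profile')

$authorizeUrl = 'https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize' +
    '?client_id=00000000-0000-0000-0000-000000000000' +
    '&response_type=code' +
    '&redirect_uri=' + [uri]::EscapeDataString('https://portal.example/signin-oidc') +
    "&scope=$scope" +
    '&state=xyz123'

$authorizeUrl
```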
When a question asks which parameter requests permissions during OAuth authorization look for the scope parameter and remember that multiple scopes are space separated and defined by the API.
You are building a client portal for Aurora Systems that uses the Microsoft identity platform to authenticate users and request resources. The portal calls multiple REST endpoints that require an access token from the identity service. Which three properties must you provide when you request an access token? (Choose 3)
-
✓ B. Application secret
-
✓ C. Redirect URI
-
✓ D. Application (client) ID
The correct options are Application secret, Redirect URI, and Application (client) ID.
The Application (client) ID identifies which registered application is requesting the token. The identity service uses this value to look up the app configuration and to validate granted permissions for the token request.
The Application secret is the credential used by confidential applications to prove their identity when exchanging an authorization code or when performing a client credentials grant. Without the secret a confidential client cannot authenticate to the token endpoint.
The Redirect URI must match the value registered for the application because the authorization response and authorization code are returned to that address. The identity provider enforces an exact match to prevent tokens from being sent to unintended endpoints.
Directory (tenant) ID is not one of the three properties that must be included in the access token request itself. The tenant identifier is used when constructing the authorization or token endpoint URL to target a specific directory, but it is not a credential field that is sent as part of the token exchange.
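The sketch below shows where those three values appear in an authorization code redemption, using the Microsoft identity platform token endpoint as the concrete example. Every identifier and secret shown is a placeholder.

```powershell
# Redeem an authorization code for an access token. $authCode is assumed to be
# the code returned to the registered redirect URI.
$body = @{
    client_id     = '00000000-0000-0000-0000-000000000000'        # Application (client) ID
    client_secret = $env:PORTAL_CLIENT_SECRET                     # Application secret
    redirect_uri  = 'https://portal.aurora.example/signin-oidc'   # must match the app registration
    grant_type    = 'authorization_code'
    code          = $authCode
    scope         = 'https://graph.microsoft.com/.default'
}

$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri 'https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token' `
    -Body $body

$tokenResponse.access_token
```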
Remember that questions about token requests ask for the fields that are sent in the exchange. Focus on the app credentials and callback settings and note that the client ID and client secret authenticate the app while the redirect URI must match exactly.
A retail development team must run an application on an Azure virtual machine and the application requires access to encryption keys stored in an Azure Key Vault instance. Identities created in Microsoft Entra ID must be removed automatically when their parent Azure resources are deleted. How should the team configure identity and Key Vault access to avoid embedding secrets in the application code? (Choose 2)
-
✓ B. Enable a system assigned managed identity on the virtual machine
-
✓ D. Grant Key Vault access by assigning Azure role based access control permissions
Enable a system assigned managed identity on the virtual machine and Grant Key Vault access by assigning Azure role based access control permissions are correct.
Enabling a system assigned managed identity creates an identity that is tied to the VM lifecycle so the identity is created with the VM and removed automatically when the VM is deleted. The application running on the VM can request Azure AD tokens using that identity so no credentials or secrets need to be embedded in code.
Granting Key Vault access by assigning Azure role based access control permissions allows you to give the VM identity the specific permissions it needs to access keys or secrets in Key Vault. Using RBAC together with managed identity provides least privilege access and central logging without putting secrets into application configuration or source code.
Create a user assigned managed identity for the virtual machine is incorrect because a user assigned managed identity is independent of the VM lifecycle and does not get removed automatically when the VM is deleted. That independence means it does not meet the requirement for automatic deletion tied to the parent resource.
Place the key value in application configuration and reference that secret from the app is incorrect because this approach still embeds or references secrets in application configuration and requires secure storage and rotation outside of the VM identity model. It does not eliminate secrets from the application code or avoid manual secret management.
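A minimal Azure PowerShell sketch of the two steps, assuming an existing VM and a Key Vault that uses the RBAC permission model. The resource names and the chosen role are assumptions for illustration.

```powershell
# Turn on the system assigned managed identity for the VM
$vm = Get-AzVM -ResourceGroupName 'rg-retail' -Name 'vm-retail-app'
Update-AzVM -ResourceGroupName 'rg-retail' -VM $vm -IdentityType SystemAssigned

# Grant that identity a Key Vault data plane role scoped to the vault
$vm    = Get-AzVM -ResourceGroupName 'rg-retail' -Name 'vm-retail-app'   # refresh to pick up the identity
$vault = Get-AzKeyVault -VaultName 'kv-retail-keys'

New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId `
    -RoleDefinitionName 'Key Vault Crypto User' `
    -Scope $vault.ResourceId
```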
When a question mentions automatic cleanup or lifecycle coupling look for system assigned identities and prefer Azure RBAC for Key Vault access to avoid embedding secrets.
A team at Lakeside Systems runs several microservices on Azure Container Apps and needs to diagnose issues. Which capability should they use to view a container’s console output in near real time?
-
✓ D. Log streaming
The correct option is Log streaming.
Log streaming provides near real time access to a container’s standard output and standard error so you can observe console output as the service runs and troubleshoot issues quickly. You can view streamed logs in the Azure portal or use CLI tooling that tails logs to get live output while the container is running.
Azure Monitor Log Analytics collects and stores log data in a workspace for querying and historical analysis and it is focused on aggregation and search rather than immediate live console tailing. It is useful for investigations but it does not give the same near real time console stream.
Azure Monitor metrics provides numeric time series about resource performance and it does not display a container’s textual console output. Metrics are aggregated values and are not a substitute for log streaming when you need live logs.
Attach to container shell refers to the console feature that opens an interactive shell session inside a running container. That session is useful for running diagnostic commands in the container but it is a different workflow from streaming the container's stdout and stderr, so it does not give you a near real time view of console output.
When a question asks about seeing console output in near real time think streaming logs rather than metrics or log storage and query tools.
Which Service Bus subscription rule action adds an annotation to the message by modifying the original message properties?
-
✓ B. Update the original message properties
The correct answer is Update the original message properties.
Subscription rule actions in Azure Service Bus run when a message is evaluated for a subscription and they apply changes to the existing brokered message. Actions execute statements that set or update message properties so they add annotations by modifying the original message properties rather than creating a new message.
Duplicate the message and change the payload is incorrect because rule actions do not produce a duplicate message or alter the message body. They operate on properties and annotations of the same message.
Clone the message and add a user property is incorrect because actions do not clone messages. Although actions can add or update user properties they do so on the original message and they do not create a cloned instance.
When you see questions about Service Bus rule actions remember that they modify message properties and not the message body. Watch for choices that say duplicate or clone because those are usually incorrect.
A development team at HarborTech is building a web application that uses the Azure SDK to read and update blobs in a zone-redundant BlockBlobStorage account. The app needs to detect whether a blob has changed since it was last read and update operations must apply only when they are based on the current blob contents. Which conditional HTTP header should the application set when performing updates to ensure the operation uses the latest version of the data?
-
✓ C. If-Match
The correct option is If-Match.
If-Match is the header you set with the blob’s ETag from the last read so the service only performs the update when the resource’s current ETag matches the supplied value. This provides optimistic concurrency so updates apply only when they are based on the current blob contents and prevent lost updates when the blob has changed since it was read.
If-Modified-Since is incorrect because it compares modification timestamps and is mainly used for conditional GETs to decide whether to return the resource or a 304 Not Modified response. It is not designed to enforce that an update is based on the exact current content.
If-None-Match is incorrect because it makes the operation succeed only when the ETag does not match the supplied value and is commonly used for caching or create-if-not-exists semantics. That behavior is the opposite of the required check which must ensure the update only proceeds when the blob has not changed.
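The sketch below shows the header in use against the Blob REST API. The blob URI, SAS token, and content variables are placeholders, and error handling is omitted.

```powershell
# Read the blob and capture its ETag
$read = Invoke-WebRequest -Uri "$blobUri$sasToken" -Method Get
$etag = $read.Headers['ETag']

# Update only if the blob still has that ETag. A 412 Precondition Failed response
# means another writer changed the blob since it was read.
$headers = @{
    'If-Match'       = $etag
    'x-ms-blob-type' = 'BlockBlob'
}
Invoke-WebRequest -Uri "$blobUri$sasToken" -Method Put -Headers $headers -Body $updatedContent
```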
When you need optimistic concurrency use the blob’s ETag and include it with the If-Match header on your update requests so the service rejects the update if the ETag has changed.
A fintech startup called LumaData has an Azure Function that reads files from a storage container named uploads. The function must run when a new file arrives and it must avoid reprocessing a file if the file is later overwritten. Which setting should you apply to the Blob Trigger binding to meet this requirement?
-
✓ D. Set source to EventGrid
The correct answer is Set source to EventGrid.
Setting Set source to EventGrid configures the function to be driven by storage events rather than by the default polling mechanism. Using EventGrid lets the function subscribe to and filter for events such as BlobCreated so the code runs when a new file arrives and you can avoid reprocessing based on event types and metadata.
Set dataType to binary is incorrect because the dataType setting only controls how the blob payload is delivered to the function and does not affect when the trigger fires or whether overwrites cause reprocessing.
Set the connection property to the storage account connection string is incorrect because the connection property only tells the binding which storage account to use and it does not change the trigger source or prevent reprocessing on overwrites.
Enable blob versioning for the container is incorrect because blob versioning is a storage feature that retains prior versions but it does not by itself change the trigger behavior on overwrites. Versioning can help preserve data but it does not replace the event driven trigger configuration that prevents unwanted reprocessing.
When a question asks how to avoid repeated processing from overwrites think about using event driven triggers such as EventGrid and then filter for the specific event types you want to handle like BlobCreated.
A development team at Summit Apps needs a managed Azure offering that supports the publish and subscribe messaging model for decoupling components and distributing messages to multiple subscribers. Which Azure service meets this requirement?
-
✓ C. Azure Service Bus
Azure Service Bus is the correct choice.
Azure Service Bus provides a fully managed message broker with topics and subscriptions so publishers can send messages to a topic and multiple subscribers can receive them. It offers durable storage and delivery guarantees along with features such as sessions and transactions which make it well suited for decoupling components and distributing messages to multiple subscribers.
Azure Event Grid is designed for lightweight event routing and serverless integrations and it is not a traditional message broker with durable topics and enterprise messaging features. It focuses on delivering discrete event notifications rather than handling reliable, transactional pub sub scenarios.
Azure Storage Queues provide simple point to point queuing for basic decoupling and they do not include topics or subscriptions for distributing the same message to multiple independent subscribers. They lack the advanced messaging capabilities that Service Bus provides.
Azure Event Hubs is a high throughput event ingestion service for telemetry and streaming scenarios and it is optimized for big data pipelines and real time analytics. It uses consumer groups for parallel consumption but it is not intended as an enterprise pub sub broker with durable topics and subscriptions.
When a question asks for durable publish and subscribe with topics and subscriptions choose Azure Service Bus. Use Event Hubs for high throughput telemetry and use Event Grid for lightweight event routing.
Refer to the Harbor Beans case study at the web link below and answer the following. Open the linked document in another browser tab and keep the exam tab open. https://example.com/doc/harbor-beans-case-study You must implement a function using Azure Functions to handle customized orders. Which implementation choice should you specify for “Azure Functions feature”?
-
✓ B. Trigger
Trigger is the correct choice for the Azure Functions feature to implement a function that handles customized orders.
A Trigger causes a function to run in response to an external event such as an HTTP request, a queue message, or a timer. Handling customized orders requires the function to be invoked when a new order or a customization request arrives so using a Trigger ensures the function executes at the right time while bindings are used to move data in or out of the function.
Output binding is incorrect because output bindings are used to send or persist data after a function runs and they do not invoke the function when an event occurs.
Input binding is incorrect because input bindings provide data to a function that is already running and they are not the mechanism that starts the function when a new order arrives.
When a question asks which Azure Functions feature starts code execution on an event choose Trigger and remember that bindings handle data movement rather than invocation.
How can you ensure that blobs copied into Azure Blob Storage are automatically moved to the Archive access tier?
-
✓ B. Create a lifecycle management rule with tierToArchive and daysAfterModificationGreaterThan 30
The correct answer is Create a lifecycle management rule with tierToArchive and daysAfterModificationGreaterThan 30.
Create a lifecycle management rule with tierToArchive and daysAfterModificationGreaterThan 30 is correct because it defines a server side lifecycle policy that automatically transitions blobs to the Archive access tier after they have been unmodified for the configured period. This method works regardless of how blobs are copied or uploaded and does not require modifying client requests.
Include the x-ms-access-tier header set to Archive on the Put Blob request is incorrect because that approach requires setting the access tier at the time of the upload request and it does not provide an automatic, time based transition for blobs that are copied later. It is a per request action rather than a lifecycle policy.
Create a lifecycle management rule with tierToArchive and apply a blobIndexMatch filter is incorrect because the blobIndexMatch filter can select which blobs are targeted but it does not by itself define the time based condition required to move blobs automatically. A rule needs a setting such as daysAfterModificationGreaterThan to perform the transition to Archive.
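If you prefer to script it, the Az.Storage cmdlets below sketch the same rule. Treat the flow as a guide to verify against current documentation, and note that the account and resource group names are placeholders.

```powershell
# Build the action, wrap it in a rule, then apply the policy to the account
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToArchive `
    -DaysAfterModificationGreaterThan 30

$rule = New-AzStorageAccountManagementPolicyRule -Name 'archive-after-30-days' -Action $action

Set-AzStorageAccountManagementPolicy -ResourceGroupName 'rg-storage' `
    -StorageAccountName 'stdocarchive' -Rule $rule
```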
When the exam asks about automatic tiering look for lifecycle management and time based properties such as daysAfterModificationGreaterThan as clues that the solution is a server side policy.
You are building a mobile client that connects to an Azure SQL Database named 'Prometheus' owned by NovaCorp. The database contains a table named 'Clients' that has a column called 'contact_email'. You want to apply dynamic data masking to hide values in the 'contact_email' column. The proposed solution runs the PowerShell command Set-AzSqlDatabaseDataMaskingRule -DatabaseName 'Prometheus' -SchemaName 'dbo' -TableName 'Clients' -ColumnName 'contact_email' -MaskingFunction 'email'. Does this approach meet the requirement?
-
✓ B. No
No is correct because the PowerShell command as shown is incomplete and will not successfully apply a masking rule to the target database. The No option reflects that the command is missing required parameters and prerequisites even though the chosen masking function would otherwise be valid.
Dynamic data masking supports the email masking function and it is appropriate for hiding values in a contact_email column. To apply the rule with Az.Sql cmdlets you must include the resource group and SQL server identifiers and you must have the proper administrative permissions. You should also be aware that dynamic data masking does not encrypt data and privileged logins or database owners can still view the unmasked values.
Yes is incorrect because the example command omits required parameters such as ResourceGroupName and ServerName and it therefore will not run against the intended Azure SQL instance. The command needs those arguments and proper role permissions to successfully create or update the masking rule.
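For comparison, a complete command would look roughly like the sketch below. The resource group and server names are invented for illustration, and you may also need to enable the database's data masking policy first.

```powershell
# Create the masking rule with the resource group and server identified explicitly
New-AzSqlDatabaseDataMaskingRule -ResourceGroupName 'rg-novacorp' `
    -ServerName 'nova-sql-east' `
    -DatabaseName 'Prometheus' `
    -SchemaName 'dbo' `
    -TableName 'Clients' `
    -ColumnName 'contact_email' `
    -MaskingFunction 'Email'
```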
When a question shows a snippet double check for missing required parameters and required permissions. Look for the ResourceGroupName and ServerName arguments when you see Az.Sql PowerShell commands.
A development team at Meridian Apps is updating serverless code with Azure Functions. How many triggers may a single function declare?
-
✓ C. Exactly one trigger
The correct option is Exactly one trigger.
Azure Functions require that each function declares a single trigger which is the event source that starts the function. You can choose different trigger types such as HTTP, Timer, or Blob and you can add input and output bindings for additional data interaction while still having only one trigger per function.
No upper limit on triggers per function is incorrect because the function model does not support multiple triggers on the same function and it enforces a single trigger to start execution.
A fixed cap of 64 triggers per function is incorrect because there is no mechanism to attach multiple triggers up to a numeric cap since functions are defined with exactly one trigger.
Zero triggers or a single trigger is incorrect because a function must have a trigger to be invoked so declaring zero triggers is not a valid function definition in Azure Functions.
When answering questions remember that an Azure Function must declare exactly one trigger and you should distinguish triggers from input and output bindings.
You operate an Azure App Service web API that is deployed across three regions and that uses Azure Front Door for global traffic routing while Application Insights collects telemetry. You must calculate the application availability for each month. Which two methods will give you that information? (Choose 2)
-
✓ B. Application Insights availability tests
-
✓ C. Azure Monitor logs
The correct options are Application Insights availability tests and Azure Monitor logs.
Application Insights availability tests perform synthetic URL or multi step checks from multiple locations and record each success and failure with timestamps, and those results are specifically designed to produce availability percentages for a given period so they directly answer the monthly availability requirement.
Azure Monitor logs store the raw availability test results and related telemetry in Log Analytics or the Application Insights tables, and you can run Kusto queries to aggregate successes and failures across regions and compute accurate monthly availability percentages, and you can also correlate those results with Front Door logs if needed.
Azure Monitor workbooks are a visualization and reporting layer that present data from logs or metrics, and they do not themselves collect the availability data required to calculate monthly availability.
Azure Monitor metrics are time series for resource metrics and do not capture Application Insights synthetic availability test results in the form needed to compute endpoint availability across regions, so they are not the right source for calculating monthly application availability in this scenario.
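As a hedged example, if the telemetry lands in a Log Analytics workspace you can compute the monthly figure with a query like the one below. The table and column names assume a workspace based Application Insights resource, and the workspace ID is a placeholder.

```powershell
# Percentage of successful availability test runs for the current month, per test
$query = @'
AppAvailabilityResults
| where TimeGenerated >= startofmonth(now())
| summarize AvailabilityPercent = 100.0 * countif(Success == true) / count() by Name
'@

Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-guid>' -Query $query |
    Select-Object -ExpandProperty Results
```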
When a question asks for calculating availability over time prefer sources that record synthetic test results or that provide queryable logs because those let you compute precise monthly percentages rather than relying on visualization alone.
A regional courier firm named Northpoint Delivery must map delivery driver records to a stable user identifier in Azure Active Directory. You are configuring the application setting labeled “Payload claim value” to extract an identifier from the ID token. Which claim value should you choose to consistently identify driver profiles?
-
✓ C. oid
The correct option is oid.
The oid claim is the Azure AD object identifier and it is issued as an immutable GUID in the ID token. This claim uniquely and stably identifies a user across applications and tenant updates, so using the oid lets the courier map delivery driver records to a consistent Azure AD user identifier even if other profile attributes change.
aud is the audience claim and it indicates the intended recipient of the token rather than the user identity, so aud is not suitable for mapping driver profiles.
idp identifies the identity provider that authenticated the user and it does not provide a unique, stable user identifier, so idp is not appropriate for mapping driver records.
When you need a stable cross application user identifier look for the object identifier or oid in the ID token because it is immutable and reliable for mapping users.
Which scope should be configured to enable per operation consistency for write operations to Azure Cosmos DB?
-
✓ B. The client application level for orderSvc
The client application level for orderSvc is correct.
The client application level for orderSvc is where you configure per operation consistency because the Cosmos DB SDK lets the client set request or operation level consistency options. Configuring the client application or request options for orderSvc allows each write operation to specify the consistency needed for that operation without changing the account default.
Container level is incorrect because Azure Cosmos DB does not provide a per container consistency configuration. Consistency is defined at the account level by default and can be overridden by the client per request, but containers do not have independent consistency settings.
The Cosmos DB account level is incorrect as the sole choice because the account level only establishes the default consistency for the account. You use the client or request options to vary consistency on a per operation basis rather than relying on the account setting alone.
When you need per-operation consistency use the SDK request options in the client and remember the account level provides the default that can be overridden per request.
A development team at Alder Labs is building an ASP.NET Core web application that will be deployed to Azure App Service. The application requires a centralized store for session state and it also needs to cache complete HTTP responses that are frequently served. Which Azure service can meet both needs?
-
✓ B. Azure Cache for Redis
The correct answer is Azure Cache for Redis.
Azure Cache for Redis provides a distributed in memory key value store that is commonly used as a centralized session state provider for ASP.NET Core and it also supports caching of complete HTTP responses for low latency delivery.
Azure Cache for Redis is designed for high throughput and low latency and it offers eviction policies and clustering that let you scale and keep frequently requested responses in memory so you avoid costly database or blob retrieval on each request.
Azure SQL Database can persist session data but it is not an in memory distributed cache so it has higher latency and higher operational cost when used for frequent session reads or response caching.
Azure Content Delivery Network is built to cache static or cacheable responses at the edge but it is not a centralized store for per user session state and it does not act as an in memory Redis cache for dynamic session data.
Azure Storage Account can store blobs and table records and it is useful for durable storage but it is not optimized for low latency in memory session state or for quickly serving cached full HTTP responses.
When a question asks for both centralized session state and fast response caching choose an in memory distributed cache such as Azure Cache for Redis and watch for phrases about low latency or in memory storage to confirm your choice.
Refer to the Meadow Roasters case study at the link below and answer the follow up question. Open the document in a new tab and do not close the test tab. https://example.com/docs/meadow-roasters-case You must implement an Azure Function to handle customized orders. Which value should you select for the Event source setting?
-
✓ C. Service Bus Queue
The correct option is Service Bus Queue.
Service Bus Queue is appropriate because Azure Functions can be triggered directly by messages on a Service Bus queue and this pattern provides decoupled, asynchronous processing with reliable delivery. Using a queue lets other components enqueue customized orders and lets the function process them independently so the system can handle spikes and retry failed processing.
Azure Functions includes a native Service Bus trigger binding so selecting Service Bus Queue as the Event source maps to that trigger and enables features like built in retries and dead letter handling which are useful for order processing workflows.
HTTP trigger is incorrect because that trigger is designed for synchronous HTTP requests and it does not provide the same queuing or reliable message delivery characteristics that a queue offers. An HTTP trigger would require callers to wait for the function to complete.
Blob Storage trigger is incorrect because that trigger fires when blobs are created or changed and it is not meant for processing incoming order messages.
Cosmos DB change feed is incorrect because it triggers on database writes to a Cosmos DB container and it is tied to data changes rather than to message based order submission, so it does not fit the case for handling externally submitted customized orders.
When choosing an Azure Function event source ask whether the workload requires asynchronous message processing and reliability. If it does then prefer a queue based trigger such as a Service Bus queue.
An online retailer named Solstice Retail operates four web apps on Azure App Service called PortalApp, InventoryApp, OrdersApp and APIBackend. Each app currently uses a system assigned managed identity to access resources, and all secrets are stored in an Azure Key Vault named VaultPrime. They want to simplify permission management by moving to a single shared managed identity for all apps. What action should they take?
-
✓ C. Configure all App Service instances to use the same user assigned managed identity
The correct answer is Configure all App Service instances to use the same user assigned managed identity.
A user assigned managed identity is created independently of any single App Service instance and can be assigned to multiple apps so all four web apps can authenticate with a single identity when accessing the Key Vault. Granting that one identity the necessary Key Vault permissions simplifies permission management and avoids distributing secrets or credentials to each application.
Create an Azure AD service principal and distribute its credentials to the apps is incorrect because distributing credentials reintroduces secret management and increases security risk. One of the main benefits of managed identities is that you do not have to handle or rotate credentials manually.
Create a single Azure Active Directory user account for all applications to use is incorrect because user accounts are intended for human users and sharing a user account and its credentials is insecure and against best practices. Applications should use identities designed for services instead.
Configure all apps to share the same system assigned managed identity is incorrect because system assigned managed identities are tied to a single resource and cannot be shared across multiple App Service instances. Each app receives its own system assigned identity so this approach is not feasible.
When a question asks about a single identity usable by multiple resources look for user assigned managed identity because it is reusable across resources while system assigned identities are bound to a single resource.
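To make the shared identity approach concrete, the following Azure PowerShell sketch creates one user assigned identity and grants it secret access on VaultPrime. The resource group name, identity name, and region are hypothetical, and attaching the identity to each of the four web apps is left to the portal, an ARM template, or the CLI.

```powershell
# A minimal sketch, assuming the Az.ManagedServiceIdentity and Az.KeyVault modules
# and hypothetical resource names.
$rg = 'solstice-rg'

# Create a single identity that all four web apps can share
$identity = New-AzUserAssignedIdentity -ResourceGroupName $rg `
    -Name 'shared-webapp-identity' `
    -Location 'eastus2'

# Grant that one identity permission to read secrets from the vault
# (assuming the vault uses access policies rather than Azure RBAC)
Set-AzKeyVaultAccessPolicy -VaultName 'VaultPrime' `
    -ObjectId $identity.PrincipalId `
    -PermissionsToSecrets get,list
```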
A regional chain called Harbor Retail needs to collect point of sale device telemetry from 2,400 stores around the globe and persist the data in Azure Blob storage. Each device produces about 3 megabytes of data every 24 hours and each location has between one and six devices sending data. The device records must be correlatable by a device identifier and the chain expects to open more locations in the future. You propose provisioning Azure Event Grid and configuring event filtering to evaluate the device identifier to accept the device data. Will this approach satisfy the requirements?
-
✓ B. No
The correct answer is No.
Azure Event Grid is designed for lightweight event notification and routing and not for bulk telemetry ingestion. It supports filtering and routing of individual events but it does not provide the partitioning, consumer group model, throughput scaling, retention and device management features that telemetry scenarios need. For device telemetry at scale you would use a service such as Azure Event Hubs or Azure IoT Hub which are built for high throughput ingest and for correlating data by device identifier.
Event Grid also has limits on event size and is optimized for discrete events rather than continuous streams of multi-megabyte daily payloads from thousands of devices. Persisting the telemetry reliably to Blob storage at this scale would require an ingestion pipeline that supports ordering, partitioning and durable retention and Event Grid alone does not provide those guarantees in the same way that Event Hubs or IoT Hub do.
Yes is incorrect because simply provisioning Event Grid and using event filtering on the device identifier does not address the throughput limitations, the event size constraints and the need for robust ingestion and device management. Filtering can route events but it does not make Event Grid a suitable telemetry ingestion platform for this workload.
When questions describe continuous high volume device telemetry think about services that are built for streaming and device identity. Use Event Hubs or IoT Hub for ingestion and use Event Grid for lightweight notifications and routing.
Which area of the Azure portal displays the ARM template that was used to deploy a resource?
-
✓ B. Deployments blade
Deployments blade is the correct option.
The Deployments blade displays the Azure Resource Manager template and the parameter values that were used for a deployment and it shows the deployment operations so you can inspect the exact JSON that created the resources.
You open the resource group and select the Deployments blade to view the list of deployments and then select an individual deployment to view the Template and Parameters tabs where the ARM template and inputs are shown.
Activity log is incorrect because the activity log records subscription and resource events and operations and it does not present the ARM template JSON used for the deployment.
Virtual Machine pane is incorrect because the virtual machine pane shows that VM’s settings and properties and it does not contain the full ARM template for the resource group deployment.
For questions about where to find an ARM template remember to check the resource group’s Deployments area in the portal where templates and parameters are exposed rather than the Activity log or the individual resource panes.
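If you prefer the command line over the portal, a hedged Azure PowerShell equivalent of the Deployments blade looks like the sketch below; the resource group and deployment names are placeholders.

```powershell
# Assumes the Az.Resources module and hypothetical names.
# List the deployments recorded for the resource group
Get-AzResourceGroupDeployment -ResourceGroupName 'contoso-rg'

# Export the template that a specific deployment used to a local JSON file
Save-AzResourceGroupDeploymentTemplate -ResourceGroupName 'contoso-rg' `
    -DeploymentName 'portal-deploy-01' -Path '.\portal-deploy-01.json'
```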
You are creating a public gateway for a news API used by Cityline Media. The backend is a RESTful service and it publishes an OpenAPI specification. You must enable access to the news API through an Azure API Management instance. Which Azure PowerShell command should you run?
-
✓ C. Import-AzureRmApiManagementApi -Context $ApimContext -SpecificationFormat ‘Swagger’ -SpecificationPath $OpenApiFile -Path $ApiPath
Import-AzureRmApiManagementApi -Context $ApimContext -SpecificationFormat ‘Swagger’ -SpecificationPath $OpenApiFile -Path $ApiPath is correct because it imports the OpenAPI or Swagger specification into the target API Management instance and creates the API endpoints that expose the RESTful backend.
The Import-AzureRmApiManagementApi cmdlet uses the provided Context to target the APIM service, it accepts the specification format and the path to the OpenAPI file, and it lets you set the API Path under which the gateway will publish the API, so it directly satisfies the requirement to enable access to the news API through API Management.
Note that the Import-AzureRmApiManagementApi and other AzureRm cmdlets belong to the older AzureRm PowerShell module which is deprecated and replaced by the Az module, so newer exams and scripts may instead use the corresponding Az cmdlets.
New-AzureRmApiManagementBackend -Context $ApimContext -Url $BackendUrl -Protocol http is incorrect because that cmdlet creates or configures a backend entity in API Management to describe a backend service, but it does not import an OpenAPI or Swagger file to create the API operations that the gateway exposes.
New-AzureRmApiManagement -ResourceGroupName $ResourceGroup -Name $ApimName -Location $EastUS -Organization “Cityline Media” -AdminEmail [email protected] is incorrect because that command provisions a new API Management service instance and it does not import an API definition into an existing APIM instance.
New-AzureRmApiManagementBackendProxy -Url $ApiUrl is incorrect because there is no standard cmdlet by that name that performs an OpenAPI import, and it would not perform the necessary import that publishes the RESTful backend through the gateway.
Choose the command that explicitly imports an OpenAPI or Swagger file and that accepts a Context parameter to target the API Management instance. Also be aware that AzureRm cmdlets are being replaced by Az equivalents on newer exams.
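Since the tip mentions the Az replacements, here is a hedged sketch of the equivalent import using the Az.ApiManagement module; the resource group, service name, API path, and OpenAPI file path are assumptions.

```powershell
# Import an OpenAPI definition into an existing API Management instance
$apimContext = New-AzApiManagementContext -ResourceGroupName 'cityline-rg' `
    -ServiceName 'cityline-apim'

Import-AzApiManagementApi -Context $apimContext `
    -SpecificationFormat 'OpenApi' `
    -SpecificationPath '.\news-api-openapi.yaml' `
    -Path 'news'
```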
A payments startup named Northwind Labs uses Azure Functions and the engineers must add a library that allows one function to call and coordinate other functions while preserving state. Which Functions extension should they install to enable that capability?
-
✓ C. Durable Functions
Durable Functions is the correct option.
Durable Functions adds an orchestration programming model to Azure Functions so one function can call and coordinate other functions while preserving state across long running workflows and across restarts. It provides orchestrator functions and activity functions and it manages durable timers and reliable state so you can implement fan out and fan in patterns or human interaction workflows.
Extension Bundles is not correct because extension bundles only provide a mechanism to simplify installing function extensions and they do not themselves implement stateful orchestration.
Azure Functions Core Tools is not correct because that toolchain helps you run and deploy functions locally and it does not add an orchestration or state management extension.
Azure Logic Apps is not correct because Logic Apps is a separate managed workflow service and not an Azure Functions extension that you install to provide durable orchestration within functions.
When a scenario requires preserving state across function calls look for orchestration features and choose Durable Functions rather than tooling or bundle options.
A small fintech called BlueRidge Financial is creating an Azure Key Vault with PowerShell and they must retain deleted vault objects for 75 days. Which two parameters must be specified together to meet this retention requirement? (Choose 2)
-
✓ B. EnablePurgeProtection
-
✓ D. EnableSoftDelete
The correct options are EnableSoftDelete and EnablePurgeProtection.
EnableSoftDelete turns on soft delete so that deleted keys, secrets, and certificates are retained for a configurable retention period. To meet the 75 day requirement you must also set the retention value with the SoftDeleteRetentionInDays parameter when creating the vault. EnablePurgeProtection is required together with soft delete to prevent permanent purge of deleted objects so that the items cannot be permanently removed before the retention period ends, and this combination enforces the 75 day retention.
EnabledForTemplateDeployment is incorrect because it only controls whether Azure Resource Manager template deployments can access the vault during deployment and it does not affect deletion retention or purge behavior.
EnabledForDeployment is incorrect because it controls whether certain services can access the vault during resource deployment and it does not enable soft delete or block purging.
When a question asks about retaining deleted Key Vault objects think about enabling EnableSoftDelete and then enabling EnablePurgeProtection, and remember to set the retention days with SoftDeleteRetentionInDays.
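A minimal creation sketch is shown below. It assumes the current Az.KeyVault module, where soft delete is enabled by default and only the retention window and purge protection still need to be specified; the vault name, resource group, and region are placeholders.

```powershell
# Create a vault that retains deleted objects for 75 days and blocks purging.
# The explicit EnableSoftDelete switch only appears in older AzureRm/Az releases.
New-AzKeyVault -ResourceGroupName 'blueridge-rg' `
    -Name 'blueridge-vault' `
    -Location 'eastus2' `
    -SoftDeleteRetentionInDays 75 `
    -EnablePurgeProtection
```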
A small SaaS firm called BrightApps operates an Azure Web App that persists data in Cosmos DB. They provision a Cosmos container by running this PowerShell snippet: $resourceGroupName = ‘webAppRg’ $accountName = ‘brtCosmosAcct’ $databaseName = ‘hrDatabase’ $containerName = ‘staffContainer’ $partitionKeyPath = ‘/StaffId’ $autoscaleMaxThroughput = 6000 New-AzCosmosDBSqlContainer -ResourceGroupName $resourceGroupName -AccountName $accountName -DatabaseName $databaseName -Name $containerName -PartitionKeyKind Hash -PartitionKeyPath $partitionKeyPath -AutoscaleMaxThroughput $autoscaleMaxThroughput. They then execute these two queries against the container: SELECT * FROM c WHERE c.StaffId > ‘20000’ and SELECT * FROM c WHERE c.UserKey = ‘20000’. Is the minimum provisioned throughput for the container 400 RUs?
-
✓ B. No
The correct answer is No.
The container was created with autoscale by specifying the AutoscaleMaxThroughput parameter as 6000 so autoscale governs the throughput. For autoscale the minimum provisioned throughput is 10 percent of the maximum so the minimum for a 6000 RU max is 600 RU per second rather than 400 RU. The PowerShell snippet therefore results in a 600 RU minimum and not 400 RU.
If you had used manual or fixed provisioned throughput the minimum per-container throughput is 400 RU per second for a single partition. That 400 RU floor applies to manual provisioning and not to autoscale, so it does not make 400 RU the correct minimum in this autoscale case.
Yes is incorrect because it assumes the container minimum is 400 RU. The 400 RU minimum only applies to manually provisioned containers. With autoscale the minimum is always 10 percent of the AutoscaleMaxThroughput so a 6000 max yields a 600 RU minimum.
First check whether throughput is autoscale or manual. For autoscale compute the minimum as 10% of the AutoscaleMaxThroughput and remember that manual containers have a 400 RU baseline for single partition containers.
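The arithmetic can be shown with the same cmdlet the question uses. This sketch reuses the question's parameter values and simply highlights that the autoscale floor is one tenth of the configured maximum.

```powershell
$autoscaleMaxThroughput = 6000
$autoscaleMinThroughput = $autoscaleMaxThroughput / 10   # 600 RU/s, not 400 RU/s
Write-Output "Autoscale floor: $autoscaleMinThroughput RU/s"

New-AzCosmosDBSqlContainer -ResourceGroupName 'webAppRg' `
    -AccountName 'brtCosmosAcct' `
    -DatabaseName 'hrDatabase' `
    -Name 'staffContainer' `
    -PartitionKeyKind Hash `
    -PartitionKeyPath '/StaffId' `
    -AutoscaleMaxThroughput $autoscaleMaxThroughput
```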
Which Azure managed disk redundancy option minimizes downtime from a datacenter outage and enables rapid rollback to the prior day’s disk image?
-
✓ B. Zone-redundant storage ZRS
Zone-redundant storage ZRS is correct because it is designed to reduce downtime from a datacenter outage while enabling rapid rollback to a prior disk image.
Zone-redundant storage ZRS replicates data synchronously across multiple availability zones within a region so a single datacenter failure does not make the disk unavailable and snapshots or incremental snapshots can be used to roll back to a previous image quickly.
Read-access geo-redundant storage RA-GRS is incorrect because RA-GRS is a geo replication option that asynchronously replicates to a secondary region and it is intended for blob storage read access rather than fast, zone-level recovery of managed disks.
Locally-redundant storage LRS is incorrect because LRS keeps copies within a single datacenter and it cannot tolerate a datacenter outage, so it does not meet the requirement to reduce downtime from a datacenter failure.
Focus on whether the redundancy option spans multiple availability zones or stays in a single datacenter and whether the solution integrates with snapshots for fast rollback. Choose ZRS when the question asks about surviving a datacenter outage and enabling quick image rollback.
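As an illustration, the hedged Azure PowerShell sketch below creates a zone redundant managed disk and a snapshot that could serve as the prior day's rollback point; the resource group, disk name, and region are assumptions.

```powershell
# Assumes the Az.Compute module, a region that supports ZRS disks, and placeholder names.
$diskConfig = New-AzDiskConfig -Location 'westeurope' -SkuName 'Premium_ZRS' `
    -CreateOption Empty -DiskSizeGB 128
$disk = New-AzDisk -ResourceGroupName 'app-rg' -DiskName 'data-disk-zrs' -Disk $diskConfig

# Take a snapshot of the disk that can be restored later for a quick rollback
$snapConfig = New-AzSnapshotConfig -Location 'westeurope' -CreateOption Copy `
    -SourceUri $disk.Id
New-AzSnapshot -ResourceGroupName 'app-rg' -SnapshotName 'data-disk-daily' `
    -Snapshot $snapConfig
```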
A regional fintech named Meridian Trust plans to use Azure Storage for documents and logs and they must issue a shared access signature that allows access to an item in a single storage service such as blob storage. Which SAS type should they select to delegate access to a resource within one storage service?
-
✓ C. Service-level SAS
Service-level SAS is correct because it delegates access to a resource within a single Azure Storage service such as Blob storage.
A Service-level SAS is issued for a specific service and it grants scoped, time limited permissions to a specific resource or container in that service. This makes it the appropriate choice when Meridian Trust needs to allow access to an item in a single storage service without exposing account keys or granting broader account permissions.
User delegation SAS is incorrect because that form relies on Azure Active Directory credentials and a user delegation key and it is used specifically for delegating access to Blob resources under AD control rather than the general single-service delegation described in the question.
Account-level SAS is incorrect because it grants permissions at the storage account level and can span multiple services and operations and that is broader than the requirement to delegate access to an item in a single service.
When the exam asks for access to a specific blob or container pick a service SAS. If Azure AD delegation is mentioned think user delegation, and if access must span services think account-level SAS.
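A hedged example of issuing a service level SAS for a single blob with the Az.Storage module follows; the account, container, and blob names are placeholders.

```powershell
# Build a storage context from the account key, then issue a read-only SAS
# that expires in four hours for one blob.
$accountKey = (Get-AzStorageAccountKey -ResourceGroupName 'meridian-rg' `
    -Name 'meridiandocs')[0].Value
$ctx = New-AzStorageContext -StorageAccountName 'meridiandocs' `
    -StorageAccountKey $accountKey

New-AzStorageBlobSASToken -Container 'statements' `
    -Blob 'statement-2025-01.pdf' `
    -Permission r `
    -ExpiryTime (Get-Date).AddHours(4) `
    -Context $ctx
```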
A startup named Meridian Labs must deploy a collection of Azure virtual machines with an ARM template and place them into a single availability set. You must configure the template so that the maximum possible number of VMs remain reachable during a hardware outage or planned maintenance. Which value should you set for the platformFaultDomainCount property?
-
✓ C. Highest supported value
Highest supported value is correct because selecting the highest supported platformFaultDomainCount spreads the VMs across the maximum number of physical fault domains and that maximizes the number of machines that remain reachable during hardware outages or planned maintenance.
The platformFaultDomainCount property controls how many fault domains Azure will use inside the availability set so a larger value places VMs on more separate physical racks and reduces the blast radius of a single hardware failure. Choosing the highest supported value therefore gives the best resilience and keeps the largest possible subset of VMs online.
In most Azure regions the highest supported fault domain count for availability sets is three so requesting the highest supported value typically results in three fault domains. Platform capabilities can vary by region and can change over time so asking for the maximum supported value in your template is the safest way to maximize availability.
2 is not correct because two fault domains may improve distribution compared to a single domain but it may not be the maximum possible distribution and it does not satisfy the requirement to keep as many VMs reachable as possible.
Lowest supported value is wrong because selecting the lowest supported count concentrates VMs into fewer fault domains and increases the risk that a single outage or maintenance event will affect many or all VMs.
1 is incorrect because a single fault domain places all VMs in the same physical failure boundary so a hardware failure or maintenance event could render all VMs unreachable.
When the exam asks you to maximize availability choose the highest or maximum supported counts for distribution settings such as fault domains.
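For reference, the same intent expressed with Azure PowerShell instead of a template property might look like the sketch below, assuming a region that supports three fault domains for availability sets; the resource group and availability set names are placeholders.

```powershell
# Create an availability set spread across the maximum fault domains the region supports
New-AzAvailabilitySet -ResourceGroupName 'meridian-rg' `
    -Name 'web-avset' `
    -Location 'eastus2' `
    -PlatformFaultDomainCount 3 `
    -PlatformUpdateDomainCount 5 `
    -Sku Aligned   # Aligned is required when the VMs use managed disks
```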
A retail chain named Harbor Retail operates 2500 locations worldwide and needs to ingest point of sale terminal data into an Azure Blob storage account. Each terminal emits approximately 3 megabytes of data every 24 hours. Each site hosts between one and six terminals that send data. The data must be correlated by a terminal identifier and the solution must allow for future store expansion. Will provisioning an Azure Notification Hub and registering every terminal with it satisfy these requirements?
-
✓ E. No
No is correct because provisioning an Azure Notification Hub and registering every terminal with it would not meet the ingestion, per terminal correlation, and storage requirements for the point of sale terminals.
Azure Notification Hub is designed to deliver push notifications to mobile and native app clients and not to receive and persist telemetry from thousands of devices. It does not provide built in device identity and management or native routing to Azure Blob storage with the reliability and device level correlation that this scenario needs.
Azure Event Hubs is a high throughput event ingestion service that can ingest large volumes of events from many terminals, yet it is not the selected answer because it does not offer device identity and management features out of the box, so correlating and managing individual terminals would require additional infrastructure and work.
Azure IoT Hub is purpose built for per device telemetry and it provides device identity, reliable device to cloud messaging, and built in routing to storage and other sinks. It would be the natural fit for this scenario, but the question only asks whether Notification Hub satisfies the requirements, so the correct response remains the simple choice No.
Yes is incorrect because answering yes would assert that Notification Hub can serve as a scalable, identity aware telemetry ingestion endpoint with native routing to Blob storage. It cannot, and therefore the correct selection is No.
When a question describes per device telemetry and asks about ingesting and storing device data look for keywords like device identity, device management, and routing to storage. Those terms point toward IoT or event ingestion services rather than push notification services.
A payments startup called BlueMarble Systems exposes its APIs through Azure API Management to control client access and it needs to enforce client certificate authentication so only authorized callers reach the services. In which policy section should the <authentication-certificate> policy be placed?
-
✓ D. Inbound processing stage
The correct answer is Inbound processing stage.
The Inbound processing stage executes before Azure API Management forwards the request to the backend and so this is where the authentication-certificate policy must run to validate client TLS certificates and enforce that only authorized callers reach the services.
Outbound processing stage is incorrect because outbound policies run after the backend responds and they cannot prevent unauthorized requests from reaching the backend.
Backend configuration section is incorrect because that section is for configuring how API Management connects to the backend and it does not handle authentication of incoming client certificates presented to the gateway.
Error handling section is incorrect because error handling policies execute only when errors occur and they are not the place to enforce client authentication for normal incoming requests.
Remember that policies which validate incoming credentials belong in the inbound section so they run before requests are forwarded to the backend.
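To show where the policy lands, the hedged sketch below applies an inbound policy document containing authentication-certificate at the API scope with the Az.ApiManagement module; the names and the certificate thumbprint are placeholders.

```powershell
# Apply a policy whose inbound section contains authentication-certificate
$apimContext = New-AzApiManagementContext -ResourceGroupName 'bluemarble-rg' `
    -ServiceName 'bluemarble-apim'

$policy = @"
<policies>
  <inbound>
    <base />
    <authentication-certificate thumbprint="0123456789ABCDEF0123456789ABCDEF01234567" />
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
  <on-error><base /></on-error>
</policies>
"@

Set-AzApiManagementPolicy -Context $apimContext -ApiId 'payments-api' -Policy $policy
```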
How should you deploy a web application so it stays responsive during periods of heavy traffic while minimizing costs?
-
✓ B. App Service Standard tier with autoscale
The correct option is App Service Standard tier with autoscale.
The App Service Standard tier with autoscale is a managed platform that can automatically scale out under heavy traffic and scale in when demand falls. It gives built in autoscaling, integrated load balancing, and platform health management so you remain responsive without the operational overhead of managing virtual machines.
The App Service Standard tier with autoscale is also typically more cost efficient for web applications than running equivalent VM instances because you pay for managed instances and you can tune scaling rules to match traffic patterns and reduce wasted capacity.
The Azure Functions Consumption plan can be very cost effective for spiky, short lived workloads because it scales to zero and charges per execution. However the Azure Functions Consumption plan can introduce cold start latency and has execution time limits that may hurt responsiveness for a user facing web application under sustained heavy traffic.
The Virtual Machine Scale Set with autoscale can provide strong performance and control, but the Virtual Machine Scale Set with autoscale usually requires more operational work and results in higher baseline costs because each scaled unit is a full VM. That makes it less cost efficient than a managed App Service for typical web apps.
When choosing between options look for managed platform features like built in autoscaling and lower operational overhead as clues that a service balances responsiveness and cost effectively.
BlueWave Solutions operates several web front ends on Azure and needs to collect events and telemetry using Application Insights. You must configure the web apps so they send telemetry to Application Insights. Which three tasks should you perform in sequence? Actions: 1. Enable the App Service diagnostics extension. 2. Add the Application Insights SDK to the application code. 3. Retrieve the Application Insights connection string. 4. Create an Azure Machine Learning workspace. 5. Provision an Application Insights instance.
-
✓ C. 5 then 3 then 2
The correct option is 5 then 3 then 2.
First you must provision an Application Insights instance so that there is a resource to receive telemetry. After the resource exists you retrieve the Application Insights connection string so your apps know where to send data. Finally you add the Application Insights SDK to the application code so the app collects and sends events and telemetry to the provisioned resource.
3 then 5 then 1 is incorrect because you cannot retrieve a connection string before the Application Insights resource is created. The connection string is issued by the created resource and enabling the App Service diagnostics extension is not the required first step for SDK based instrumentation.
2 then 4 then 5 is incorrect because adding the SDK before creating the Application Insights resource leaves you without the connection string needed to send data. Creating an Azure Machine Learning workspace is unrelated to instrumenting web apps with Application Insights.
1 then 4 then 3 is incorrect because enabling the App Service diagnostics extension is not the primary route for setting up Application Insights and a Machine Learning workspace is not needed. Also retrieving the connection string must follow creating the Application Insights resource which this sequence does not do.
4 then 1 then 2 is incorrect because creating an Azure Machine Learning workspace is not required and enabling the App Service diagnostics extension is optional. You still need to create the Application Insights resource and obtain its connection string before adding the SDK to your code.
Provision the monitoring resource first and then get its connection string before you instrument code. Keep in mind that Machine Learning workspaces are not part of the Application Insights setup for web apps.
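The first two steps can be scripted. The hedged sketch below assumes the Az.ApplicationInsights module and placeholder names, and note that older module versions may expose only the InstrumentationKey rather than a ConnectionString property.

```powershell
# Step 5 from the list: provision the Application Insights resource
New-AzApplicationInsights -ResourceGroupName 'bluewave-rg' `
    -Name 'bluewave-appinsights' `
    -Location 'westeurope'

# Step 3 from the list: retrieve the connection string for the apps to use.
# Step 2, adding the SDK, then happens in the application code itself.
(Get-AzApplicationInsights -ResourceGroupName 'bluewave-rg' `
    -Name 'bluewave-appinsights').ConnectionString
```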
A development team at HarborTech is creating a web portal that will interact with blob containers in an Azure Storage account. The team registered an application in Azure Active Directory and the portal will rely on that registration. End users will sign in with their Azure AD credentials and those credentials will be used to access the stored blobs. Which API permission type should be granted to the Azure AD application registration to permit the app to act on behalf of the signed in user?
-
✓ C. user_impersonation
The correct option is user_impersonation.
user_impersonation is a delegated API permission scope that allows the application to call an API on behalf of the signed in user using standard OAuth2 flows. This is the permission type you grant in the app registration when you want the portal to act as the user when accessing protected resources such as blob data.
User.Read is a Microsoft Graph delegated permission that lets an app sign in and read a user profile and it does not provide access to Azure Storage blobs.
Storage Blob Data Contributor is an Azure role based access control role that grants data plane permissions when assigned to a principal at the storage account or container level. It is not the OAuth API permission scope name you add to an app registration to let the app act on behalf of a user.
When a question asks about acting on behalf of a signed in user look for delegated permission names such as user_impersonation rather than RBAC roles or application permissions.
A team at HarborSoft deployed an Azure Container Apps instance and they turned off ingress for the container app. End users report they cannot reach the service and you observe that the app has scaled down to zero replicas. You need to restore access to the app. The suggested fix is to enable ingress, add a TCP scaling rule and apply that rule to the container app. Will this change resolve the accessibility problem?
-
✓ B. No this change will not resolve the accessibility problem
The correct answer is No this change will not resolve the accessibility problem.
The suggested change is ineffective because Azure Container Apps only provides HTTP ingress and the platform triggers scale from zero for incoming HTTP requests. Adding a TCP scaling rule will not enable raw TCP listeners or cause the Container App to wake from zero in response to TCP connections.
Enabling ingress is the correct step to restore external access for HTTP traffic and the platform will scale the app up from zero when it receives HTTP requests. The addition of a TCP scaling rule is unnecessary and it will not make the app reachable over TCP because Container Apps do not support raw TCP ingress or TCP wakeup behavior.
Yes this change will resolve the accessibility problem is incorrect because the TCP scaling rule will not create a TCP listener or trigger scale from zero and so the proposed change will not achieve the intended external connectivity.
When you see scale to zero questions remember that Azure Container Apps scale from zero on HTTP requests only. If a design requires raw TCP connectivity consider AKS, virtual machines, or other services that support TCP listeners.
You just provisioned a new Azure subscription for a small company and you are building an internal staff portal that will display confidential records. The portal uses Microsoft Entra ID for sign in. You need to require multi factor authentication for staff who access the portal. What two tasks should you perform? (Choose 2)
-
✓ B. Upgrade the tenant to Microsoft Entra ID Premium P1
-
✓ C. Create a new conditional access policy in Microsoft Entra ID
The correct answers are Upgrade the tenant to Microsoft Entra ID Premium P1 and Create a new conditional access policy in Microsoft Entra ID.
You must Upgrade the tenant to Microsoft Entra ID Premium P1 because conditional access features that allow you to require multi factor authentication are part of the Premium P1 licensing level. The P1 license provides the policy and controls needed to enforce MFA for specific users and applications.
You also need to Create a new conditional access policy in Microsoft Entra ID so you can target the staff users and the portal application and then require multi factor authentication as a grant control. Conditional access is the recommended and flexible way to require MFA for internal staff access to confidential resources.
Enable the legacy baseline policy in Microsoft Entra ID conditional access is not correct because the legacy baseline or baseline policies have been deprecated and replaced by security defaults and modern conditional access capabilities. Relying on a legacy baseline policy is not the supported way to enforce MFA on new tenants.
Enable per user multi factor authentication for the user accounts is not correct because per user MFA is a coarse control that is harder to manage at scale and it does not give the same conditional access targeting and session controls as conditional access policies. The exam expects the use of conditional access with the appropriate license.
Configure the portal to use Microsoft Entra ID B2C is not correct because Entra ID B2C is designed for consumer and external customer identities and it is not needed for internal employee sign in with Microsoft Entra ID.
Enable the application proxy to publish the internal site is not correct because Application Proxy is for publishing internal applications to external users and it does not by itself enforce multi factor authentication. You still need conditional access and the proper license to require MFA.
When an answer involves conditional access think about licensing first. If a feature requires a plan like Premium P1 then creating a conditional access policy without that license will not meet the requirement.
How do you link reply messages to the original Azure Service Bus messages while preserving session context and enabling correlation for auditing? (Choose 2)
-
✓ B. Place MessageId into CorrelationId
-
✓ D. Copy SessionId to ReplyToSessionId
Place MessageId into CorrelationId and Copy SessionId to ReplyToSessionId are correct.
Place MessageId into CorrelationId links a reply back to the original message because the original message id is carried in the CorrelationId property and consumers or auditors can match replies to requests by that value.
Copy SessionId to ReplyToSessionId preserves the session context so that replies are routed into the same session as the original message and state and ordering are maintained for session aware consumers.
Set ReplyTo to receiving entity name is incorrect because the ReplyTo property indicates the address where replies should be sent but it does not preserve session context or provide a correlation identifier by itself.
Assign MessageId to SequenceNumber is incorrect because the SequenceNumber is assigned and managed by the broker and cannot be set by the client.
Set SequenceNumber into DeliveryCount is incorrect because the DeliveryCount is a broker maintained counter of delivery attempts and it is not a client settable field.
When you need request reply with sessions remember to set the original message id into CorrelationId and to set ReplyToSessionId to preserve the session. On the exam check whether a property is client settable or broker assigned.
At Cascade Analytics a team is building an Azure Function that acts as a webhook to fetch an image from Azure Blob Storage and then insert a metadata record into Azure Cosmos DB. Which output binding should be configured for the function?
-
✓ C. Azure Cosmos DB
The correct option is Azure Cosmos DB.
Azure Cosmos DB is the appropriate output binding because the function needs to insert a metadata record into a Cosmos container. The Cosmos DB output binding lets the function write items directly to a container without requiring manual SDK calls and it handles connection details and serialization for you.
The function can use an input binding such as Azure Blob Storage to fetch the image and then use the Azure Cosmos DB output binding to persist the metadata record.
Azure Queue Storage is designed for message queuing and asynchronous processing. It does not provide a direct mechanism to insert documents into Cosmos DB so it is not the correct output binding for saving metadata records.
HTTP can act as a trigger or handle web requests but it does not itself write documents to a Cosmos container. Using HTTP would require explicit code to call the Cosmos DB SDK or REST API, so it is not the correct output binding here.
Azure Blob Storage is used to store binary objects such as images and it can serve as an input binding to read the image. It is not suitable as the output binding for inserting a metadata document into Cosmos DB.
Remember to choose an output binding that matches the target resource type. Use a database output binding when you need to persist records and use storage or queue bindings for blobs or messages.
A software operations team at Stratus Digital manages an Azure App Service web app named WebAppAlpha and an Azure Function called FuncProcessor. The web app reports telemetry to an Application Insights instance named appInsightsProd. The team set up a synthetic availability test and an alert rule in appInsightsProd that sends an email when the test fails. They want the alert to also cause FuncProcessor to run. The proposed solution is to create an Azure Monitor action group. Will this solution achieve the goal?
-
✓ B. Yes an Azure Monitor action group can be used to invoke the Function app
Yes an Azure Monitor action group can be used to invoke the Function app is correct.
An Azure Monitor action group can be configured to call an Azure Function either by using the built in Azure Function action type or by invoking a webhook endpoint for the function. When the Application Insights availability test alert fires the alert rule can call the action group and that action group will trigger the function so FuncProcessor can run as part of the alert response.
No the action group alone will not cause the function to execute is incorrect because an action group is exactly the mechanism used to run downstream actions such as Functions when an alert fires. The caveat is that you must add the Function (or a webhook to the Function) as an action in the action group and associate that action group with the Application Insights alert rule so the function actually gets invoked.
When answering alerts and action questions remember to check two things. First confirm the alert rule is linked to an action group. Second confirm the action group contains the proper action type such as an Azure Function or a webhook that invokes the function.
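As a rough outline, the classic Az.Monitor cmdlets can wire the function into an action group as a webhook receiver. Parameter names have shifted across Az.Monitor versions and the function URL shown is a placeholder, so treat this as a sketch rather than an exact script.

```powershell
# Placeholder HTTP trigger URL for FuncProcessor, including its function key
$funcUrl = 'https://funcprocessor.azurewebsites.net/api/onavailabilityalert?code=<function-key>'

# Define a webhook receiver that calls the function when the alert fires
$receiver = New-AzActionGroupReceiver -Name 'run-funcprocessor' `
    -WebhookReceiver -ServiceUri $funcUrl

# Create or update the action group, then associate it with the alert rule
Set-AzActionGroup -ResourceGroupName 'stratus-rg' `
    -Name 'availability-actions' `
    -ShortName 'availact' `
    -Receiver $receiver
```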
You are building a .NET Core web service for a company called Meridian Retail that uses Azure App Configuration. You populate the App Configuration store with 120 entries. The application must keep all configuration values consistent when any single entry changes and it must apply changes at runtime without restarting the service. The solution should also limit the total number of calls to the App Configuration service. How can you implement dynamic configuration updates in the application? (Choose 2)
-
✓ A. Decrease the App Configuration client cache duration from the default value
-
✓ B. Register a sentinel key in the App Configuration store and enable a refresh that updates all settings
The correct answers are Decrease the App Configuration client cache duration from the default value and Register a sentinel key in the App Configuration store and enable a refresh that updates all settings.
Register a sentinel key in the App Configuration store and enable a refresh that updates all settings is correct because the sentinel pattern lets the application watch a single key for changes and then trigger a refresh that pulls all 120 settings at once. This keeps all values consistent when any single entry changes and avoids polling every key individually.
Decrease the App Configuration client cache duration from the default value is correct because a shorter cache expiration lets the client detect changes to the sentinel sooner and apply updates at runtime without restarting the service. When you combine a short cache for the sentinel with the global refresh that updates all settings you get timely updates while minimizing the total number of calls to the App Configuration service.
Increase the App Configuration cache duration beyond the default value is wrong because increasing the cache lifetime would delay detection of changes and would not meet the requirement to apply changes at runtime without restarting the service.
Add Azure Key Vault and configure the Key Vault configuration provider is wrong because Key Vault is for secret storage and rotation and it does not provide the App Configuration refresh mechanism needed to keep all configuration values consistent across the store.
Register every key in the store and disable global refresh so each key is refreshed individually is wrong because polling or registering all 120 keys individually greatly increases the number of calls and complexity, and it does not efficiently ensure that all values are updated atomically when one key changes.
Replace the App Configuration entries with environment variables for each setting is wrong because environment variables do not support dynamic runtime refresh in the application and usually require a restart or redeploy to change values.
Look for the sentinel key plus client cache trade off. Use a sentinel to minimize which key you poll and tune the cache duration so the sentinel changes are detected quickly without querying all keys frequently.
You deploy an ASP.NET Core web application for Riverton Software to Azure App Service and you instrument it with “Application Insights” telemetry so you can monitor performance. You need to confirm that the application is reachable and responsive from multiple geographic probe locations at scheduled intervals and you must notify the support team if the site stops responding. Which test types can you configure for the web application? (Choose 2)
-
✓ B. URL ping
-
✓ D. Multi-step web test
The correct options are URL ping and Multi-step web test.
A URL ping test is a simple availability probe that periodically requests a single URL from multiple geographic locations to verify reachability and HTTP response codes. This type of test is designed for scheduled, synthetic monitoring and it can trigger alerts when the site stops responding.
A Multi-step web test runs scripted sequences of requests to exercise user flows across multiple pages and it can validate content at each step. This makes it suitable when you need more than a single endpoint check and you still want scheduled checks from various probe locations with alerting.
Load test is not an Application Insights availability test type. Load testing addresses performance and scale rather than synthetic availability probes and legacy Visual Studio cloud load testing was retired, with Azure Load Testing available as the current service for load scenarios.
TrackAvailability API refers to an SDK method used to record custom availability telemetry from within application code. It is not an external probe that runs on a schedule from multiple geographic locations so it does not fulfill the requirement for synthetic, scheduled monitoring from probes.
Unit test is a development time test that runs inside your build or test environment and it does not provide scheduled external monitoring or geographic probe locations. Unit tests cannot replace Availability tests for monitoring a live application.
When the question asks about checking reachability from multiple locations and scheduled checks think of Application Insights Availability tests. Remember that URL ping is for single endpoint checks and Multi-step web test is for scripted flows and content validation.
Which Application Insights sampling setting should be chosen for an Azure Function that forwards telemetry to a workspace so that ingestion remains within the workspace quota?
-
✓ B. Enable fixed rate sampling
The correct option is Enable fixed rate sampling.
Enable fixed rate sampling applies a constant percentage to telemetry so the volume forwarded to the workspace is predictable and can be planned against a quota. Fixed rate sampling gives a stable and reproducible reduction before or during export which helps keep ingestion within workspace limits.
Enable adaptive sampling is not correct because adaptive sampling changes the sampling percentage dynamically to manage local throughput and it can vary during spikes. That variability makes ingestion less predictable and it is not ideal when you must guarantee workspace quota limits.
Configure sampling overrides is not correct because overrides are meant to tune sampling for specific telemetry types or conditions rather than to enforce a single predictable reduction across all telemetry. Overrides add complexity and do not by themselves provide the straightforward quota control that fixed rate sampling provides.
When a question asks for keeping workspace ingestion within a quota choose fixed rate sampling because it gives predictable reduction and simpler quota planning.
Refer to the Maple Roastery case study at the link below and answer the following question. Open the link in a new tab and keep the test tab open. https://docs.google.com/document/d/1co5iSrqjkbDIQT5rvGEk7FAlZQdRYPIesWy0ncNT06I/edit?usp=sharing You must store customized product records in Azure Cosmos DB. Which Cosmos DB API should you choose?
-
✓ C. Core SQL API
Core SQL API is the correct choice.
Core SQL API uses a JSON document model and provides rich SQL‑like queries and automatic indexing which makes it well suited for storing customized product records with varying attributes. The document model lets you store flexible product properties and the SQL query surface lets you filter and project across those properties without needing a fixed schema.
Cassandra API is not correct because it targets wide column workloads and uses the Cassandra query language which is not optimized for flexible JSON document queries and document indexing.
MongoDB API is not correct because it exposes the MongoDB wire protocol and is intended for workloads that specifically require MongoDB compatibility. The question does not indicate a need for MongoDB features or drivers and the Core SQL API is the native document API for Cosmos DB.
Gremlin API is not correct because it is designed for graph data and traversal queries and it is not appropriate when the primary need is storing and querying document records for products.
When a question describes flexible product records and rich queries think of storing JSON documents and choose the Core SQL API unless the case explicitly requires a graph or a specific wire protocol.
Contoso Cloud Streams uses throughput units to limit how quickly events can be ingested and you can assign between 1 and 25 throughput units. How fast does one throughput unit represent for incoming data?
-
✓ B. 1 megabyte per second or 1000 events per second whichever comes first
The correct answer is 1 megabyte per second or 1000 events per second whichever comes first.
A single throughput unit limits incoming data so that ingestion does not exceed one megabyte per second or one thousand events per second and the effective cap is whichever of those two limits is reached first. This matches the typical throughput unit definition used when a service exposes one to twenty five throughput units for scaling.
Cloud Pub/Sub is incorrect because it names a different messaging service and does not state the throughput unit rate described in the question.
1000 events per second is incomplete and therefore incorrect because a throughput unit also enforces a one megabyte per second cap and the lower of the two limits is applied.
1 gigabyte per second is incorrect because it is three orders of magnitude larger than the actual one megabyte per second ingress limit for a single throughput unit.
When a question mentions throughput units remember to check both the megabytes per second and the events per second limits and watch for the phrase whichever comes first.
You manage an Azure SQL database for BlueTech Solutions that uses Microsoft Entra ID for sign in and you must allow database developers to connect from Microsoft SQL Server Management Studio with their corporate on premises Entra accounts while keeping interactive sign in prompts to a minimum. Which authentication method should you enable?
-
✓ D. Microsoft Entra ID integrated authentication
The correct answer is Microsoft Entra ID integrated authentication.
Microsoft Entra ID integrated authentication lets SQL Server Management Studio use the developer’s corporate on premises Entra credentials for single sign on. This reduces interactive sign in prompts by using integrated Windows or Kerberos based tokens when the device and identity are joined and synchronized with Entra ID.
To use integrated authentication you must enable Azure Active Directory authentication for the database and ensure client machines are domain joined or hybrid joined and configured for Entra single sign on. When configured this option gives the most seamless experience for corporate on premises accounts.
Azure Multi Factor Authentication is an additional verification method and it increases interactive prompts rather than minimizing them. It does not provide the seamless integrated Windows sign in that the question requires.
Microsoft Entra authentication token refers to acquiring bearer tokens and may still require interactive steps to obtain consent or tokens. It is not the integrated Windows single sign on method that minimizes prompts from SSMS for on premises corporate accounts.
OATH software tokens are time based one time passwords used for multifactor verification. They are not an integrated authentication method and they would add interactive entry of codes instead of reducing prompts.
When a question asks to minimize interactive prompts look for answers that mention integrated or single sign on and verify whether device and identity join requirements are implied.
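One scripted prerequisite, shown as a hedged sketch below, is assigning a Microsoft Entra administrator to the logical server with the Az.Sql module; note the ResourceGroupName and ServerName parameters called out in the earlier tip. The names are placeholders.

```powershell
# Assign a Microsoft Entra administrator so Entra based sign in works from SSMS
Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName 'bluetech-rg' `
    -ServerName 'bluetech-sql' `
    -DisplayName 'BlueTech DBA Group'
```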
An engineering team at Contoso Cloud is revising an ARM template and they need each deployed resource to use the same region as the resource group that contains it. Which ARM template function returns the region of the resource group?
-
✓ C. [resourceGroup().location]
The correct option is [resourceGroup().location].
[resourceGroup().location] returns the location property of the resource group that contains the deployment so it ensures each deployed resource can use the same region as the resource group. Using this function reads the actual resource group metadata and so it is the reliable way to get the deployment region.
[parameters(‘deployLocation’)] is incorrect because that expression refers to a parameter value supplied to the template and it does not automatically reflect the resource group location unless you explicitly pass the group location into the parameter.
[environment()] is incorrect because the environment function provides information about the Azure environment and endpoints and it does not return the resource group region.
[subscription().location] is incorrect because the subscription object does not expose a location property for resource placement and subscription scope is not the right place to look for the resource group region.
When you need the resource group region use the resourceGroup() function and reference its location property rather than relying on a parameter or the subscription object.
What configuration should be applied to an Azure Cache for Redis instance to minimize metadata loss if a region becomes unavailable? (Choose 2)
-
✓ B. Enable append only file persistence
-
✓ C. Configure frequent snapshot backups
The correct options are Enable append only file persistence and Configure frequent snapshot backups.
Enabling Enable append only file persistence causes Redis to log every write operation to disk so that after a failure you can replay those operations and recover most recent metadata and data. This approach lowers the recovery point objective because writes are persisted incrementally rather than only at snapshot intervals.
Choosing to Configure frequent snapshot backups produces regular point in time RDB files that you can restore from stored backups in another region or after a failover. Making snapshots more frequent reduces the amount of state lost between backups and gives a simpler restore point for large data sets.
Enable geo-replication to a secondary region is not sufficient on its own to minimize metadata loss because replication is often asynchronous and recent writes may not have replicated before a region failure. Geo replication can help with availability but it does not guarantee the same low recovery point objective that disk persistence and frequent backups provide, and it may not be available in all tiers.
When the question is about minimizing metadata loss choose options that persist writes to disk and increase backup frequency rather than relying only on replication.
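A hedged provisioning sketch with the Az.RedisCache module follows. Persistence requires the Premium tier, the cache and resource group names are placeholders, and the storage connection string is assumed to be retrieved elsewhere; the aof-backup-enabled keys can be used instead of the rdb keys when append only persistence is preferred.

```powershell
# Placeholder connection string for the storage account that holds the backups
$storageConnectionString = '<premium-storage-connection-string>'

# Create a Premium cache with hourly RDB snapshot backups enabled
New-AzRedisCache -ResourceGroupName 'cache-rg' `
    -Name 'metadata-cache' `
    -Location 'northeurope' `
    -Sku Premium `
    -Size P1 `
    -RedisConfiguration @{
        'rdb-backup-enabled'            = 'true'
        'rdb-backup-frequency'          = '60'
        'rdb-storage-connection-string' = $storageConnectionString
    }
```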
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
