AZ-400 DevOps Engineer Expert Questions and Answers

AZ-400 DevOps Engineer Expert Certification Exam Topics

Want to pass the AZ-400 certification exam on your first try? You are in the right place. Here is a collection of sample AZ-400 exam questions that will help you learn key concepts and prepare for the real AZ-400 test.

These practice resources are drawn from my Udemy courses and the certificationexams.pro website.

AZ-400 DevOps Engineer Expert Practice Questions

These are not AZ-400 exam dumps or braindumps. They are written to closely resemble what you will encounter on the real AZ-400 certification exam so you can prepare honestly and gain real DevOps knowledge.

Good luck on these practice questions, and even better luck when you take the official AZ-400 exam.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

AZ-400 DevOps Exam Simulator Questions

Bellmont Systems uses an Azure DevOps project to track work items and the team wants Microsoft Teams to receive alerts whenever work items are updated. What action should you perform first to allow Teams to receive those Azure DevOps notifications?

  • ❏ A. Create an Azure DevOps service hook subscription

  • ❏ B. Add a new channel to the Microsoft Teams team

  • ❏ C. Configure an incoming webhook by adding a Teams connector

  • ❏ D. Install an Azure DevOps marketplace extension

  • ❏ E. Enable external access in the Teams admin center

Orion Systems has an Azure DevOps project named ProjectOmega and all engineers use Windows 11 workstations. You must create a Git repository that supports large binary assets and stores the binaries outside the repository while preserving pointer metadata in Git commits. Available actions are 1 Configure SSH key based authentication. 2 Configure personal access token based authentication. 3 Perform a custom installation of Git for Windows that includes Git Virtual File System GVFS. 4 Configure Git Large File Storage LFS file tracking. 5 Perform a custom installation of Git for Windows that includes Git Large File Storage LFS. Which three steps should you perform in sequence on each developer workstation?

  • ❏ A. 4, 2, 3

  • ❏ B. 2, 5, 4

  • ❏ C. 1, 4, 5

  • ❏ D. 5, 2, 4

Vertex Labs is building a new Java application and they already run a SonarQube server that analyzes their C# solutions. They want to add code quality checks for the Java project and include the appropriate tasks in their build pipeline to enable SonarQube scanning. Which task types should be added to the build pipeline?

  • ❏ A. Chef

  • ❏ B. Gradle

  • ❏ C. Maven

  • ❏ D. Octopus Deploy

You have an Azure subscription that contains a Traffic Manager profile named TMProfileZ and a web app named BlueApp. TMProfileZ directs traffic for BlueApp and it is currently set to route clients to the endpoint with the lowest response time. You must configure TMProfileZ so that all requests from Asian locations are sent to an endpoint located in Australia. Which three actions should you perform in order from the Azure portal? The available actions are 1 Add new endpoints and assign geographic regions to them, 2 Reduce the DNS record TTL for BlueApp, 3 Change the routing method to performance, 4 Remove all current endpoints, 5 Change the routing method to geographic?

  • ❏ A. Reduce the DNS record TTL then remove all current endpoints then change the routing method to geographic

  • ❏ B. Remove all current endpoints then change the routing method to geographic then add new endpoints and assign geographic regions to them

  • ❏ C. Change the routing method to performance then add new endpoints and assign geographic regions then remove all current endpoints

  • ❏ D. Add new endpoints with geographic region assignments then change the routing method to geographic then reduce the DNS record TTL

You maintain an Azure DevOps project named PipelineHub and it contains an Azure Artifacts feed called artifactstream. The artifactstream feed currently has no public upstream sources and you must add a public upstream to retrieve .NET packages. Which upstream source should you add to Azure Artifacts?

  • ❏ A. Chocolatey

  • ❏ B. Maven

  • ❏ C. NuGet

  • ❏ D. npm

A development team at ApexSoft is preparing an application to be deployed with ARM templates, and the templates will also provision virtual machines. They must store a VM credential in an Azure Key Vault, and they have this script: az SLOT_1 create --name "ApexVault3" --resource-group "ApexResourceGroup" --location westus2 followed by az SLOT_2 SLOT_3 set --vault-name "ApexVault3" --name "AdminPass" --value "xYz6789AbCd". Which of the following belongs in SLOT_1?

  • ❏ A. set

  • ❏ B. gcloud

  • ❏ C. secret

  • ❏ D. keyvault

You create a Git repository named RepoA in Azure Repos for a development team codebase. You need to ensure that all pull requests are associated with a work item while keeping ongoing administration to a minimum. Which policy type should you configure?

  • ❏ A. Build

  • ❏ B. Status

  • ❏ C. Branch

  • ❏ D. Check-in

A development team uses Azure DevOps and the release pipeline contains two stages named Testing and Production. The Testing stage deploys new builds to an Azure Web App called webapp-alpha and the Production stage deploys to an Azure Web App called webapp-beta. You need to stop deployments to webapp-beta when Application Insights generates Failed requests alerts after a new release has been deployed to webapp-alpha. What should you configure on the Testing stage?

  • ❏ A. Require a manual post-deployment approval in the post-deployment settings

  • ❏ B. Set a pre-deployment gate on the Testing stage

  • ❏ C. Add a pipeline task to create or configure Application Insights alert rules

  • ❏ D. Enable an automatic redeploy trigger in the post-deployment settings

A development team at Nimbus Innovations must configure Azure Pipelines for a new feature and the pipeline requires a Linux self-hosted agent. The pipeline will run two times per day and each execution should take about 20 minutes. You need to select a compute option to host the self-hosted agent while minimizing cost. Which compute option should you choose?

  • ❏ A. Azure Kubernetes Service (AKS)

  • ❏ B. Azure Virtual Machines

  • ❏ C. Azure Container Instances (ACI)

  • ❏ D. Azure Container Apps

A retail analytics startup named Meridian Labs is connecting a cloud hosted Jenkins server to a new Azure DevOps organization. Meridian Labs requires that Azure DevOps notify the Jenkins server whenever a developer pushes updates to a branch in Azure Repos. The proposed solution is to add a trigger to the build pipeline that starts a pipeline on commit. Will that meet the requirement, or is a different mechanism needed?

  • ❏ A. Add a pipeline trigger that starts the Azure Pipelines build on commit

  • ❏ B. Enable Azure DevOps notifications to send alerts to team members

  • ❏ C. Create a service hook subscription in Azure DevOps that calls the Jenkins endpoint

A development group at NovaSoft is building a continuous integration pipeline for a Java application and they require a utility that reports the percentage of the codebase exercised by automated tests. Which tool provides that capability?

  • ❏ A. JaCoCo

  • ❏ B. Maven

  • ❏ C. Cobertura

You manage an Azure DevOps project at Nimbus Systems that hosts artifact feeds and you must grant a group of engineers permission to save packages retrieved from upstream feeds while avoiding unnecessary extra privileges. The proposed solution is to assign the developers the Collaborator access level. Does this solution meet the requirement?

  • ❏ A. No

  • ❏ B. Yes

Your company maintains an Azure Active Directory tenant under Microsoft Entra and the directory holds three groups named TeamAlpha, TeamBeta, and TeamGamma. You create a new Azure DevOps project called ProjectZephyr. You must secure the project service connections while following the principle of least privilege. Members of TeamAlpha must be able to share and revoke sharing of a service connection across projects. Members of TeamBeta must be able to rename a service connection and update its description. Members of TeamGamma must be able to use the service connection in build or release pipelines. Which permission should you assign to TeamAlpha?

  • ❏ A. Service Connections Administrator

  • ❏ B. Project-level Administrator

  • ❏ C. Organization-level Administrator

  • ❏ D. Contributor

  • ❏ E. Reader

An engineering group at Meridian Systems will collect metrics and key performance indicators from their Stratus DevOps projects to confirm they are meeting targets and expectations. You must determine which KPI reflects the project’s quality and security posture. Which KPI should you choose?

  • ❏ A. Application Response Time

  • ❏ B. Mean Time to Recover (MTTR)

  • ❏ C. Change Failure Rate

  • ❏ D. Lead Time for Changes

A cloud engineer at Meridian Systems is converting an Azure Resource Manager template that uses the expression [if(parameters('isComplete'), '1a', '2a')] into Bicep. Which Bicep expression will produce the same result?

  • ❏ A. if(isComplete, '1a', '2a')

  • ❏ B. parameters('isComplete') ? '1a' : '2a'

  • ❏ C. iif(isComplete, '1a', '2a')

  • ❏ D. isComplete ? '1a' : '2a'

Your team uses Azure Boards at a small firm named Skyforge. A work item called taskX must wait for another work item called taskY to finish before it can proceed. You need to express that dependency using the Azure DevOps web portal. What should you do?

  • ❏ A. Add a Parent link from the user story that contains taskX

  • ❏ B. In the Backlog view open the item context menu add a link choose Successor and enter the ID of taskY

  • ❏ C. From the work item form open the Links tab add a link and use References with the ID of taskY

  • ❏ D. From Queries open the context menu add a link choose Existing Item set the link type to Affected By and enter the ID of taskY

Your team manages an Azure DevOps project that hosts package feeds for a mid size firm called MapleWave. You need to grant a group of engineers the minimal permissions required to save packages from upstream sources while avoiding excessive privileges. The proposed action is to give them the Reader access level. Does this plan satisfy the requirement?

  • ❏ A. Yes

  • ❏ B. No

A small company named Orion Software follows semantic versioning and the team is unsure when to bump the major, minor, or patch component. Which part should they increment to publish a bug fix?

  • ❏ A. Minor

  • ❏ B. Patch

  • ❏ C. Major

At Meridian Solutions your team has deployed several Azure App Service instances to run production web applications. You must configure an Azure Monitor alert to trigger when an application is unresponsive for more than four minutes. Which of the following could be used to meet this requirement?

  • ❏ A. Azure Monitor metric alert

  • ❏ B. Availability tests in Application Insights

  • ❏ C. Azure Service Health Alert

A cloud engineering group at AuroraSoft is adding security checkpoints to their CI CD workflow and they want to align security testing to each pipeline phase. Which activity should be performed during the planning stage?

  • ❏ A. Static code analysis

  • ❏ B. Threat modeling

  • ❏ C. Load testing

  • ❏ D. Penetration testing

A product team at Meridian Apps plans to use Azure DevOps for continuous integration and continuous delivery of a service. The service must be automatically released by Azure Pipelines onto a group of Azure virtual machines that host the workload. Which feature should be created in Azure DevOps to register and manage those target servers?

  • ❏ A. Environments

  • ❏ B. Deployment groups

  • ❏ C. Agent pools

You have a .NET application named MonolithApp2 that produced a NuGet package at publish/Release/MonolithApp2.1.2.0.nupkg and you must upload it to Acme Packages. Which command will correctly push the package to the remote feed?

  • ❏ A. gcloud artifacts packages upload "publish/Release/MonolithApp2.1.2.0.nupkg" --location us-central1 --repository acme-repo --format nuget

  • ❏ B. dotnet nuget add source "publish/Release/MonolithApp2.1.2.0.nupkg" --api-key PAT_TOKEN --source "acme"

  • ❏ C. git nuget push "publish/Release/MonolithApp2.1.2.0.nupkg" --api-key PAT_TOKEN --source "acme"

  • ❏ D. dotnet nuget push "publish/Release/MonolithApp2.1.2.0.nupkg" --api-key PAT_TOKEN --source "acme"

At NimbusApps you host your repositories on GitHub Enterprise and you need a way to have a PowerShell script run automatically when a rebase is initiated in a repository. Which mechanism should you use?

  • ❏ A. GitHub Copilot

  • ❏ B. Cloud Functions

  • ❏ C. a webhook

  • ❏ D. Gist

A small engineering group at Meridian Systems uses Azure Boards for tracking tasks and they want a dashboard widget to measure a specific metric. They need to know the duration from when a work item is first created until it is marked closed. Which widget will provide that measurement?

  • ❏ A. Velocity

  • ❏ B. Lead time widget

  • ❏ C. Burndown chart

  • ❏ D. Cycle time

A developer at SummitApps must record diagnostics that reveal how service hook conditions are evaluated for event matching when calling the CloudForge DevOps Services REST API version 6.2 Service Hooks diagnostics endpoint. Which EventSubscriptionDiagnostics setting should they enable?

  • ❏ A. deliveryTracing

  • ❏ B. deliveryResults

  • ❏ C. evaluationTracing

You manage an Azure subscription for Northside Systems that hosts several Azure services. You need the platform to send an SMS notification whenever scheduled maintenance is announced for those services. Which actions should you perform? (Choose 2)

  • ❏ A. Create a Resource Health alert

  • ❏ B. Create and configure an action group

  • ❏ C. Enable Azure Security Center

  • ❏ D. Create an Azure Service Health alert

A development group at NovaSoft maintains a pipeline called BuildFlow7 and a developer named DevA must remove a temporary cleanup stage called cleanupFinal once validation is complete. You must grant the least permissions necessary so DevA can delete that stage. At which permission scope should you grant access?

  • ❏ A. Project level

  • ❏ B. Organization level

  • ❏ C. Pipeline level

  • ❏ D. Stage level

A development team maintains a CodeWave repository that uses CI workflows and stores credentials as encrypted repository secrets. The team plans to update those secrets through the provider’s REST endpoint and they must encrypt the values with the repository public key before transmitting them. Which encryption library should they use?

  • ❏ A. BouncyCastle

  • ❏ B. OpenSSL

  • ❏ C. hashlib

  • ❏ D. libsodium

A cloud operations team at Northwind Systems needs to baseline metrics for Windows Server virtual machines running in Azure and they require detailed information about processes running inside the guest operating system. Which agents should be installed to satisfy this requirement? (Choose 2)

  • ❏ A. Dependency agent

  • ❏ B. Telegraf agent

  • ❏ C. Azure Network Watcher Agent for Windows

  • ❏ D. Azure Log Analytics agent

At SolaceWorks you are creating automated tests for the online storefront and you need a framework that can control browsers to run end to end user interface tests for the web application. Which framework should you choose?

  • ❏ A. JaCoCo

  • ❏ B. Xamarin.UITest

  • ❏ C. Selenium WebDriver

  • ❏ D. Playwright

Orion Technologies has started using Azure DevOps and created four projects for separate teams. Each team requires a different work tracking approach. Team Alpha needs to track product backlogs and defects on a Kanban board and decompose backlog items into tasks on a task board. Team Beta needs to manage user stories and defects on a Kanban board and to track defects and tasks on a task board. Team Gamma needs to record requirements, change requests, risks, and reviews. Which Azure Boards process should you select for Team Beta?

  • ❏ A. Scrum

  • ❏ B. CMMI

  • ❏ C. Agile

  • ❏ D. XP

A development team at NimbusSoft uses Azure DevOps to orchestrate builds and releases and they store their application code in a Git repository. They want a pull request workflow that keeps the number of commits in the main branch to a minimum. If they adopt a pull request approach that performs a three way merge does that achieve the objective?

  • ❏ A. Use squash merge for pull requests

  • ❏ B. Yes

  • ❏ C. No

A development team at Nimbus Labs uses Azure DevOps to automate the build pipeline for a Spring Boot service written in Java and they need to collect code coverage metrics and publish them to the pipeline. Which tools can be used to generate Java code coverage reports? (Choose 2)

  • ❏ A. Coverlet

  • ❏ B. Clover

  • ❏ C. JaCoCo

  • ❏ D. Cobertura

A development group maintains a GitHub repository named app-repo and an Azure Key Vault named secrets-vault. They plan to add a GitHub Actions workflow called DeployDBWorkflow that will provision a database instance using credentials stored in secrets-vault. You must allow DeployDBWorkflow to read the vault secrets. Which three steps should you perform in sequence?

  • ❏ A. Create a personal access token in GitHub then create a service principal in Azure AD then reference the credentials in DeployDBWorkflow

  • ❏ B. Create a service principal in Azure AD then reference the credentials in DeployDBWorkflow then grant secret permissions to secrets-vault

  • ❏ C. Create a service principal in Azure AD then grant secret permissions to secrets-vault then reference the credentials in DeployDBWorkflow

  • ❏ D. Create a service principal in Azure AD then grant key permissions to secrets-vault then reference the credentials in DeployDBWorkflow

Developer efficiency depends on tests detecting real defects in code updates quickly and consistently. Which capability of the Summit CI CD platform can produce different outcomes such as pass or fail even when the source code and execution environment have not changed?

  • ❏ A. Parallel test execution

  • ❏ B. Test impact analysis

  • ❏ C. Flaky or intermittent test failures

  • ❏ D. User interface automated tests

A technology consultancy named HarborSoft stores the source for a client web portal in an Azure Repos Git repository and team members push commits directly to the main branch. You need to enforce a change control workflow that keeps the main branch protected, requires feature branches to be built before integration, requires at least one release lead to approve changes, and mandates that all merges into main use pull requests. What should you configure in Azure Repos?

  • ❏ A. Branch security of the main branch

  • ❏ B. Agent pools in Project Settings

  • ❏ C. Branch policies on the main branch

  • ❏ D. Service connections in Project Settings

A platform engineering group at Meridian Labs must extract shared libraries and maintain them as a set of packages. The candidate actions are numbered as follows. 1 Group related modules into packages. 2 Build a dependency graph for the codebase. 3 Assign a code owner to each grouped module. 4 Rewrite modules in the most commonly used programming language. 5 Identify the predominant programming language used in the system. Which three steps should the team perform in sequence?

  • ❏ A. 5-1-3

  • ❏ B. 1-3-2

  • ❏ C. 3-1-4

  • ❏ D. 1-2-3

  • ❏ E. 2-1-3

A DevOps team at Aurora Systems plans to deploy a containerized application to an Azure Kubernetes Service cluster and the application must access other Azure services securely from the cluster. Which identity or credential should you create to grant the cluster that capability?

  • ❏ A. Kubernetes Secret

  • ❏ B. SSH key pair

  • ❏ C. Service principal

  • ❏ D. Managed identity

A development group at NovaSoft stores its code in Azure DevOps and must install a self-hosted agent through an unattended installation script. Which two parameters need to be provided within that setup script? (Choose 2)

  • ❏ A. The deployment group name

  • ❏ B. Authorization credentials such as a personal access token

  • ❏ C. A service connection name

  • ❏ D. The agent pool name

  • ❏ E. The organization URL

Your team at Fabrikam uses Azure DevOps to build and deploy a web application named WebApp1 into an Azure subscription. Azure Monitor currently sends email alerts when server side errors occur in WebApp1. You want to send those Azure Monitor alerts to a Microsoft Teams channel. What steps should you take? (Choose 2)

  • ❏ A. Create an Azure Logic App with an HTTP request trigger

  • ❏ B. Modify the action group in Azure Monitor

  • ❏ C. Create an Azure Monitor workbook

  • ❏ D. Create an Event Grid subscription to forward alerts to Microsoft Teams

  • ❏ E. Change the Diagnostics settings for the resource in Azure Monitor

You operate an Azure App Service site for an online retail platform named NimbusRetail and you must raise diagnostic logging when the site begins showing abnormal traffic patterns. The approach must minimize ongoing administrative effort. Which resources should you include in the solution? (Choose 2)

  • ❏ A. Azure Monitor autoscale settings

  • ❏ B. An Azure Monitor alert that uses an action group with an email notification

  • ❏ C. An Azure Automation runbook

  • ❏ D. An Azure Monitor alert with a static threshold

  • ❏ E. An Azure Monitor metric alert with dynamic thresholds

A development team at Meridian Tech intends to publish a web application to twelve virtual machines located in a corporate colocation facility and all machines run the Azure Pipelines agent. Which deployment mechanism should you propose to manage the release to these machines?

  • ❏ A. Management group

  • ❏ B. Resource group

  • ❏ C. Deployment group

  • ❏ D. Service connection

Orion Labs uses Azure DevOps to orchestrate builds and releases for its applications and it stores code in a Git repository. The team wants to branch from an open pull request and later merge that new branch into the pull request’s target branch. The new branch must include only part of the changes from the pull request. Which pull request action should the team use?

  • ❏ A. Approve with suggestions

  • ❏ B. Reopen pull request

  • ❏ C. Set as default branch

  • ❏ D. Cherry-pick

A development group at Nimbus Systems must create a build pipeline inside an Azure DevOps project and they want each build to produce a measurement of the codebase technical debt and provide code quality metrics. Which tool is appropriate to include in the build to obtain this analysis?

  • ❏ A. Jenkins

  • ❏ B. SonarQube

  • ❏ C. Azure Pipelines

  • ❏ D. Azure Boards

An engineering team at Meridian Apps publishes successive builds of their service into an Azure Artifacts feed. You must prevent packages that are still under active development from being visible to most users. Which feature will allow you to control which packages users can see?

  • ❏ A. Upstream sources

  • ❏ B. Feed views

  • ❏ C. Feed permissions

A development team at MarinerApps has deployed an Azure Web App and connected it to an Application Insights resource. You need to ensure that developers are alerted about unusual performance degradations or recurring failure patterns in the web service. Which Application Insights capability will provide these proactive notifications?

  • ❏ A. Application Insights Live Metrics Stream

  • ❏ B. Azure Monitor alert rules

  • ❏ C. Application Insights Application Map

  • ❏ D. Application Insights Smart Detection

Your team maintains a private RepoHub code repository and they want commit status to appear in Azure Boards. What should you do first?

  • ❏ A. Enable multi-factor authentication for your RepoHub account

  • ❏ B. Create a RepoHub workflow in the repository

  • ❏ C. Add the Azure Pipelines app to the repository

  • ❏ D. Install the Azure Boards app on the repository

Your team stores code and conversations in RepoHub and you receive an email for every team discussion that is posted. You only want to get email notifications for threads where you left a comment or where someone mentioned you. Which two notification preferences should you turn off? (Choose 2)

  • ❏ A. Participating

  • ❏ B. Automatically watch teams

  • ❏ C. Watching

  • ❏ D. Automatically watch repositories

A development group at Skylark Technologies uses an Azure DevOps CI CD pipeline called PipelineA and they want to integrate OWASP ZAP scans into that pipeline. Which four steps should be added to PipelineA in the correct order?

  • ❏ A. Crawl the application then start the ZAP container then run a passive baseline then execute an active scan

  • ❏ B. Start the ZAP container then run an active scan then crawl the application then generate a results report

  • ❏ C. Start the ZAP container then run a passive baseline scan then crawl the application then run an active scan

  • ❏ D. Use Google Cloud Web Security Scanner then run a baseline then crawl the site then run an active scan

BrightWave Solutions uses Azure Pipelines to build and deploy web portals. You need the pipeline to run only when files inside the /frontend directory change and only when a pull request is opened. The pipeline YAML was configured with a trigger whose paths include /frontend and whose branches include refs/head/pr. Does this configuration meet the stated requirements?

  • ❏ A. Yes

  • ❏ B. No

DevOps Expert Exam Simulator Answers

Bellmont Systems uses an Azure DevOps project to track work items and the team wants Microsoft Teams to receive alerts whenever work items are updated. What action should you perform first to allow Teams to receive those Azure DevOps notifications?

  • ✓ C. Configure an incoming webhook by adding a Teams connector

The correct answer is Configure an incoming webhook by adding a Teams connector.

Configure an incoming webhook by adding a Teams connector is correct because an incoming webhook registers a connector in a specific Teams channel and returns a URL that external systems can post to. Azure DevOps needs that webhook URL in order to send work item update notifications into Teams.

You typically add the Teams connector first to obtain the webhook URL and then create an Azure DevOps service hook subscription that posts work item update events to that URL. The service hook cannot deliver messages until the Teams incoming webhook exists.

Create an Azure DevOps service hook subscription is not the first action to perform because it requires an endpoint to post to. You must create the Teams incoming webhook before the service hook can send notifications.

Add a new channel to the Microsoft Teams team is not sufficient by itself because a channel does not automatically accept external posts. You still need to add an incoming webhook connector to the channel to receive Azure DevOps messages.

Install an Azure DevOps marketplace extension is unnecessary for basic notifications because Azure DevOps includes built in service hooks and Teams supports incoming webhooks without installing an extension. Some extensions exist for richer integration but they are not required for simple work item update alerts.

Enable external access in the Teams admin center is not required for this scenario because external access controls govern federation and intertenant communication and do not create the webhook endpoint that Azure DevOps needs to post notifications.

Set up the incoming webhook in the exact Teams channel where you want messages, copy the generated URL, and then create the Azure DevOps service hook. Test with a small work item change to confirm delivery.
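
As a quick sanity check you can post a test message to the webhook from any shell before wiring up the service hook. This is a minimal sketch and the webhook URL is a placeholder for the one Teams generates when you add the connector.

    # send a simple test message to the Teams channel (placeholder URL)
    curl -H "Content-Type: application/json" \
         -d '{"text": "Test notification from Azure DevOps"}' \
         "https://example.webhook.office.com/webhookb2/YOUR-WEBHOOK-ID"

If the message appears in the channel, the webhook is ready to receive the service hook payloads.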

Orion Systems has an Azure DevOps project named ProjectOmega and all engineers use Windows 11 workstations. You must create a Git repository that supports large binary assets and stores the binaries outside the repository while preserving pointer metadata in Git commits. Available actions are 1 Configure SSH key based authentication. 2 Configure personal access token based authentication. 3 Perform a custom installation of Git for Windows that includes Git Virtual File System GVFS. 4 Configure Git Large File Storage LFS file tracking. 5 Perform a custom installation of Git for Windows that includes Git Large File Storage LFS. Which three steps should you perform in sequence on each developer workstation?

  • ✓ B. 2, 5, 4

The correct sequence is 2, 5, 4. In other words you start by configuring personal access token based authentication (2), then perform the custom Git for Windows installation that includes Git LFS (5), and finally configure Git LFS file tracking (4).

Begin with 2 which is personal access token based authentication. A PAT is the recommended way to authenticate Git over HTTPS to Azure Repos from Windows workstations and it lets Git operations authenticate non interactively and integrate with the credential manager.

Next perform 5 which is a custom installation of Git for Windows that includes Git Large File Storage LFS. Installing Git with built in LFS support ensures the client can store large binaries outside the repository and exchange pointer files with the remote LFS store.

Finally run 4 which is configure Git Large File Storage LFS file tracking. You must register the file patterns to be tracked so Git commits contain pointer metadata while the actual binary content is kept in LFS storage.

4, 2, 3 is incorrect because it configures tracking before ensuring LFS is installed and it ends with installing Git Virtual File System GVFS. GVFS is not the supported path for Azure Repos large file handling and the GVFS project has been superseded and is not used for LFS workflows.

1, 4, 5 is incorrect because using SSH key based authentication is not the expected method in this scenario and the sequence attempts to configure LFS tracking before ensuring the client has LFS support installed.

5, 2, 4 is incorrect because the exam answer expects authentication to be configured first so Git can access the remote when you begin working. Installing before configuring authentication may work technically but it does not follow the recommended sequence for setting up Windows workstations for Azure Repos with LFS.

When a question combines Azure Repos and large files remember to set up authentication with a personal access token first then install Git with Git LFS and finally run git lfs track to declare the file patterns to store as LFS objects.
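
As an illustration, the per workstation setup might look like the following after the custom Git for Windows install. The file patterns are examples only and should match the team's actual binary assets.

    # enable LFS for the current user (one time per machine)
    git lfs install

    # declare which file patterns are stored as LFS objects (example patterns)
    git lfs track "*.psd"
    git lfs track "*.mp4"

    # commit the generated .gitattributes so tracking applies for the whole team
    git add .gitattributes
    git commit -m "Track large binaries with Git LFS"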

Vertex Labs is building a new Java application and they already run a SonarQube server that analyzes their C# solutions. They want to add code quality checks for the Java project and include the appropriate tasks in their build pipeline to enable SonarQube scanning. Which task types should be added to the build pipeline?

  • ✓ B. Gradle

The correct option is Gradle.

Gradle is a build tool commonly used for Java projects and it includes a SonarQube plugin with a built in sonarqube task, so you add Gradle tasks to the pipeline to compile the code, run tests, and invoke the SonarQube analysis. Using the Gradle task in the build pipeline lets you run the project build and then call the SonarQube scanner in one flow, which is the typical approach for Java code quality checks.

Chef is a configuration management and infrastructure automation tool and it is not used as a Java build task or as the mechanism to run SonarQube scans in a build pipeline. It does not provide the Gradle or Sonar integration needed to perform code analysis as part of the build.

Maven is also a Java build tool and it can integrate with SonarQube, but it is not the correct choice in this question. The exam answer specifies Gradle so you would add Gradle tasks rather than Maven goals unless the project specifically used Maven.

Octopus Deploy is a deployment and release management tool and it is not intended to run code compilation or SonarQube analysis as a build task. Octopus is used after the build and test stages to orchestrate deployments rather than to perform source code analysis.

When a question asks which tasks to add match the task type to the project’s build tool and look for explicit SonarQube support such as the sonarqube task in Gradle or the equivalent in Maven.
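
For reference, a Gradle based analysis is usually invoked from a pipeline script step like the one below. This is a sketch that assumes the project applies the org.sonarqube Gradle plugin, and the project key, server URL, and token variable are placeholders.

    # build and test the project, then push analysis results to the SonarQube server
    ./gradlew build sonar \
        -Dsonar.projectKey=vertex-java-app \
        -Dsonar.host.url=https://sonarqube.example.com \
        -Dsonar.token=$SONAR_TOKEN

In Azure Pipelines the same flow is normally expressed with the SonarQube prepare task followed by the Gradle task with its Run SonarQube analysis option enabled.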

You have an Azure subscription that contains a Traffic Manager profile named TMProfileZ and a web app named BlueApp. TMProfileZ directs traffic for BlueApp and it is currently set to route clients to the endpoint with the lowest response time. You must configure TMProfileZ so that all requests from Asian locations are sent to an endpoint located in Australia. Which three actions should you perform in order from the Azure portal? The available actions are 1 Add new endpoints and assign geographic regions to them, 2 Reduce the DNS record TTL for BlueApp, 3 Change the routing method to performance, 4 Remove all current endpoints, 5 Change the routing method to geographic?

  • ✓ B. Remove all current endpoints then change the routing method to geographic then add new endpoints and assign geographic regions to them

The correct answer is Remove all current endpoints then change the routing method to geographic then add new endpoints and assign geographic regions to them.

You remove the current endpoints first because the Traffic Manager profile is currently configured for performance based routing and the existing endpoint setup will not include the geographic region mappings you need. You then change the routing method to geographic so the profile accepts geographic assignments. After the profile is set to geographic you add the endpoints and assign geographic regions and you make sure Asian locations map to the Australia endpoint so those requests are routed there.

Reduce the DNS record TTL then remove all current endpoints then change the routing method to geographic is incorrect because changing the TTL is not part of the required configuration steps. Lowering the DNS TTL can help speed propagation but it is optional and it is not needed to implement geographic routing.

Change the routing method to performance then add new endpoints and assign geographic regions then remove all current endpoints is incorrect because performance routing will not send traffic by geographic region. You must use the geographic routing method to map client locations to specific endpoints so choosing performance first defeats the goal.

Add new endpoints with geographic region assignments then change the routing method to geographic then reduce the DNS record TTL is incorrect because you should change the profile to geographic before adding endpoints that rely on geographic mappings so the assignments are made under the correct routing mode. Also lowering the DNS TTL is optional and is not required for the configuration steps.

When a question asks for ordered portal actions remember to change the routing or policy at the profile level before recreating or reassigning endpoints. If you need fast cutover you can also reduce DNS TTL as an additional step but do that separately from the main configuration steps.
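
The same sequence can also be scripted with the Azure CLI, which helps cement the order of operations even though the question targets the portal. This is a sketch; the resource group, target resource ID, and geographic grouping code are placeholders that you should verify against the Traffic Manager geographic hierarchy.

    # switch the profile to geographic routing after the old endpoints are removed
    az network traffic-manager profile update \
        --name TMProfileZ --resource-group ExampleGroup \
        --routing-method Geographic

    # add the Australia endpoint and map the Asia grouping to it (placeholder code)
    az network traffic-manager endpoint create \
        --profile-name TMProfileZ --resource-group ExampleGroup \
        --name AustraliaEndpoint --type azureEndpoints \
        --target-resource-id $WEBAPP_RESOURCE_ID \
        --geo-mapping GEO-AS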

You maintain an Azure DevOps project named PipelineHub and it contains an Azure Artifacts feed called artifactstream. The artifactstream feed currently has no public upstream sources and you must add a public upstream to retrieve .NET packages. Which upstream source should you add to Azure Artifacts?

  • ✓ C. NuGet

The correct option is NuGet.

NuGet is the package manager and repository ecosystem for .NET libraries and Azure Artifacts can add a public NuGet upstream so your feed can retrieve .NET packages from nuget.org or other NuGet sources.

Chocolatey is not correct because Chocolatey targets Windows application installation and is not the standard upstream for .NET library packages.

Maven is not correct because Maven is the build and dependency management system for Java and it provides Java artifacts rather than .NET packages.

npm is not correct because npm is the package manager for JavaScript and Node.js and it will not supply .NET libraries.

Match the package manager to the language or runtime and remember that NuGet is the go to choice for .NET related package questions.

A development team at ApexSoft is preparing an application to be deployed with ARM templates, and the templates will also provision virtual machines. They must store a VM credential in an Azure Key Vault, and they have this script: az SLOT_1 create --name "ApexVault3" --resource-group "ApexResourceGroup" --location westus2 followed by az SLOT_2 SLOT_3 set --vault-name "ApexVault3" --name "AdminPass" --value "xYz6789AbCd". Which of the following belongs in SLOT_1?

  • ✓ D. keyvault

The correct option is keyvault.

The Azure CLI command structure is az followed by the service group and then the action. Placing keyvault in SLOT_1 yields az keyvault create which provisions an Azure Key Vault resource and fits the provided arguments for name and resource group.

set is incorrect because set is an action subcommand used to store or update a secret and not the service group that creates resources. You would see set used after the secret subcommand as in az keyvault secret set.

gcloud is incorrect because it is the Google Cloud CLI and not part of Azure CLI commands. The script begins with az so the CLI must be Azure.

secret is incorrect for SLOT_1 because secret is the subgroup used to manage secrets and it normally follows the service group as in az keyvault secret set. The subgroup does not create the vault itself so it cannot occupy SLOT_1.

When you see an Azure CLI command start with az look for the Azure service group name as the first token and then the action such as create or set.
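
Filling in the slots, the completed script from the question reads as follows, with SLOT_1 and SLOT_2 both being keyvault and SLOT_3 being secret.

    # create the vault (SLOT_1 = keyvault)
    az keyvault create --name "ApexVault3" \
        --resource-group "ApexResourceGroup" --location westus2

    # store the VM credential (SLOT_2 = keyvault, SLOT_3 = secret)
    az keyvault secret set --vault-name "ApexVault3" \
        --name "AdminPass" --value "xYz6789AbCd"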

You create a Git repository named RepoA in Azure Repos for a development team codebase. You need to ensure that all pull requests are associated with a work item while keeping ongoing administration to a minimum. Which policy type should you configure?

  • ✓ C. Branch

The correct option is Branch.

You should configure Branch policies because Azure Repos provides a built in rule to require a linked work item on pull requests and that rule is applied at the branch level so it enforces the association automatically for protected branches.

Using Branch policies minimizes ongoing administration because you configure the policy once for the target branches and all pull requests that target those branches must meet the linked work item requirement before they can be completed.

The Build policy is not correct because build policies focus on requiring successful CI builds for pull requests and they do not enforce that a work item is linked to the PR.

The Status policy is not correct because status checks allow external services to report pass or fail on a pull request and they do not provide a built in mechanism to require linked work items.

The Check-in policy is not correct and it is less relevant for Git repositories because check-in policies are associated with older TFVC workflows and they are not the standard mechanism for enforcing pull request rules in Azure Repos Git.

When a question asks to enforce that every pull request must be linked to a work item look for the Require linked work items rule under branch policy settings and apply it to your protected branches to avoid manual enforcement.
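
If you prefer to script the policy instead of using the portal, the azure-devops CLI extension exposes it as well. This is a sketch and the organization URL, project name, and repository ID are placeholders.

    # require a linked work item on pull requests that target main (blocking)
    az repos policy work-item-linking create \
        --blocking true --enabled true \
        --branch main --repository-id $REPO_ID \
        --org https://dev.azure.com/ExampleOrg --project ExampleProject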

A development team uses Azure DevOps and the release pipeline contains two stages named Testing and Production. The Testing stage deploys new builds to an Azure Web App called webapp-alpha and the Production stage deploys to an Azure Web App called webapp-beta. You need to stop deployments to webapp-beta when Application Insights generates Failed requests alerts after a new release has been deployed to webapp-alpha. What should you configure on the Testing stage?

  • ✓ C. Add a pipeline task to create or configure Application Insights alert rules

Add a pipeline task to create or configure Application Insights alert rules is correct. Add a pipeline task to create or configure Application Insights alert rules lets the Testing stage provision or update alert rules that detect Failed requests after the deployment to webapp-alpha so the pipeline can use those alerts to prevent or gate the deployment to webapp-beta.

Require a manual post-deployment approval in the post-deployment settings is incorrect because a manual approval requires human intervention and will not automatically respond to Application Insights alerts that fire after the Testing deployment.

Set a pre-deployment gate on the Testing stage is incorrect because a pre-deployment gate runs before the Testing stage deploys and cannot evaluate alerts that only appear after the deployment to webapp-alpha completes.

Enable an automatic redeploy trigger in the post-deployment settings is incorrect because an automatic redeploy trigger causes reruns when artifacts change and does not stop or gate downstream deployments based on monitoring alerts.

Automate the creation or configuration of monitoring rules in the pipeline when you need alerts to influence release flow and always check whether a gate or check must run before or after a deployment.
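
As a sketch of what such a pipeline task could execute, the Azure CLI can create a metric alert on the Application Insights failed requests metric. The resource IDs, action group, and threshold below are illustrative only.

    # alert when failed requests exceed 5 in a 5 minute window (illustrative values)
    az monitor metrics alert create \
        --name failed-requests-alert \
        --resource-group ExampleGroup \
        --scopes $APP_INSIGHTS_RESOURCE_ID \
        --condition "count requests/failed > 5" \
        --window-size 5m --evaluation-frequency 1m \
        --action $ACTION_GROUP_ID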

A development team at Nimbus Innovations must configure Azure Pipelines for a new feature and the pipeline requires a Linux self-hosted agent. The pipeline will run two times per day and each execution should take about 20 minutes. You need to select a compute option to host the self-hosted agent while minimizing cost. Which compute option should you choose?

  • ✓ C. Azure Container Instances (ACI)

The correct option is Azure Container Instances (ACI).

ACI is the best fit because it runs containers on demand without requiring you to provision or manage virtual machines. You pay only for the short runtime and the service starts quickly, so two daily executions of about twenty minutes each keep costs minimal when compared with always on compute.

Azure Kubernetes Service (AKS) is not ideal because it requires provisioning and managing a cluster and node resources remain allocated which increases cost and operational overhead for occasional short runs.

Azure Virtual Machines are not the optimal choice because they incur VM-level charges while running and you must manage the operating system and scaling which is more work and more expensive for brief intermittent jobs.

Azure Container Apps is designed for microservices and event driven workloads with built in scaling and extra platform features which add complexity and potential cost. For a simple, short lived self hosted pipeline agent, ACI is simpler and usually cheaper.

For intermittent, short lived CI agent runs prefer compute that offers on demand container execution and per second billing to minimize cost when the workload is not continuous.
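
A minimal sketch of hosting the agent on ACI follows. It assumes a container image that runs the self hosted agent and the standard AZP_URL, AZP_TOKEN, and AZP_POOL environment variables that the agent startup script expects; the resource names and image are placeholders.

    # create the agent container on demand and delete it when the runs finish
    az container create \
        --resource-group ExampleGroup --name azp-agent \
        --image exampleregistry.azurecr.io/azp-agent:linux \
        --cpu 2 --memory 4 \
        --environment-variables AZP_URL=https://dev.azure.com/ExampleOrg AZP_POOL=SelfHosted \
        --secure-environment-variables AZP_TOKEN=$AZP_TOKEN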

A retail analytics startup named Meridian Labs is connecting a cloud hosted Jenkins server to a new Azure DevOps organization. Meridian Labs requires that Azure DevOps notify the Jenkins server whenever a developer pushes updates to a branch in Azure Repos. The proposed solution is to add a trigger to the build pipeline that starts a pipeline on commit. Will that meet the requirement, or is a different mechanism needed?

  • ✓ C. Create a service hook subscription in Azure DevOps that calls the Jenkins endpoint

Create a service hook subscription in Azure DevOps that calls the Jenkins endpoint is correct.

Azure DevOps service hooks let you call an external endpoint when events occur in Azure Repos. A service hook subscription can be configured for push or commit events and then POST a payload to your Jenkins endpoint so Jenkins is notified immediately and can start a job. This approach directly connects Azure DevOps events to your external Jenkins server and meets the requirement to notify the Jenkins server on branch commits.

Add a pipeline trigger that starts the Azure Pipelines build on commit is incorrect because pipeline triggers only start builds inside Azure Pipelines. That option does not send a notification or HTTP request to an external Jenkins endpoint and so it will not cause the cloud hosted Jenkins server to run a job.

Enable Azure DevOps notifications to send alerts to team members is incorrect because Azure DevOps notifications are designed to inform people by email or in the UI. They do not act as a webhook to trigger an external CI server like Jenkins and so they do not meet the stated requirement.

When the exam asks about notifying an external system look for terms like service hooks or webhooks. Those are the mechanisms that deliver HTTP requests to external endpoints rather than starting internal pipelines or notifying users.
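
Under the hood a service hook subscription is a REST resource, so it can also be created programmatically. The sketch below posts a generic web hook subscription for push events; the organization, project GUID, and Jenkins URL are placeholders.

    # create a service hook that POSTs git.push events to the Jenkins endpoint
    curl -u :$PAT -H "Content-Type: application/json" \
      -d '{
            "publisherId": "tfs",
            "eventType": "git.push",
            "consumerId": "webHooks",
            "consumerActionId": "httpRequest",
            "publisherInputs": { "projectId": "PROJECT_GUID" },
            "consumerInputs": { "url": "https://jenkins.example.com/azure-devops-hook" }
          }' \
      "https://dev.azure.com/ExampleOrg/_apis/hooks/subscriptions?api-version=7.1"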

A development group at NovaSoft is building a continuous integration pipeline for a Java application and they require a utility that reports the percentage of the codebase exercised by automated tests. Which tool provides that capability?

  • ✓ C. Cobertura

The correct option is Cobertura.

Cobertura is a Java code coverage tool that instruments classes and measures which lines and branches are executed by automated tests. It produces coverage reports expressed as percentages and fits naturally into continuous integration pipelines where teams need a numeric measure of how much of the codebase is exercised by tests.

Cobertura can output reports in HTML and XML and it can be integrated into build workflows so a pipeline can publish coverage results or enforce thresholds when coverage falls below an expected percentage.

JaCoCo is a separate Java coverage tool and it can also report coverage percentages, but it is not the option marked correct in this question. It is an alternative rather than the expected answer here.

Maven is a build and project management tool and it does not itself provide test coverage metrics. Coverage reporting in a Maven-based build is provided by plugins or by external coverage tools rather than by Maven alone.

When a question asks about tools that report code coverage percentages focus on options that explicitly name a coverage reporter and not on general build systems. Look for keywords like coverage or coverage reports in the answer choices.
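
For example, in a Maven based build the cobertura-maven-plugin can generate the report with a single goal. Treat this as a sketch since the plugin is no longer actively maintained and newer projects tend to use JaCoCo instead.

    # instrument the classes, run the tests, and write the coverage report as XML
    mvn cobertura:cobertura -Dcobertura.report.format=xml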

You manage an Azure DevOps project at Nimbus Systems that hosts artifact feeds and you must grant a group of engineers permission to save packages retrieved from upstream feeds while avoiding unnecessary extra privileges. The proposed solution is to assign the developers the Collaborator access level. Does this solution meet the requirement?

  • ✓ B. Yes

Yes is correct.

Assigning the developers the Collaborator access level gives them the ability to retrieve packages from upstream sources and to save those packages into the feed without granting unnecessary administrative privileges. Azure Artifacts uses feed permissions to control who can read and who can contribute, and a user with contribute rights can cache or save upstream packages when they install them.

This approach satisfies the requirement for least privilege because you enable only the package contribute capability that is needed to persist upstream packages and you avoid giving full owner or project administration rights.

No is incorrect because rejecting the proposed solution would either prevent engineers from saving upstream packages or force you to grant broader rights than necessary to meet the requirement.

When answering, focus on the specific feed permissions required and choose the access level that grants the needed contribute capability without providing full administrative rights.

Your company maintains an Azure Active Directory tenant under Microsoft Entra and the directory holds three groups named TeamAlpha, TeamBeta, and TeamGamma. You create a new Azure DevOps project called ProjectZephyr. You must secure the project service connections while following the principle of least privilege. Members of TeamAlpha must be able to share and revoke sharing of a service connection across projects. Members of TeamBeta must be able to rename a service connection and update its description. Members of TeamGamma must be able to use the service connection in build or release pipelines. Which permission should you assign to TeamAlpha?

  • ✓ C. Organization-level Administrator

Organization-level Administrator is correct because the ability to share and revoke sharing of a service connection across projects requires organization level scope and permissions that go beyond a single project.

The organization level administrator role has the authority to change sharing settings and manage service connections at the organization scope so TeamAlpha can grant or remove access to service connections across multiple projects while following least privilege principles.

Service Connections Administrator is incorrect because that role name suggests focused management of service connections but it does not necessarily grant the organization wide sharing and revocation rights required to manage access across projects.

Project-level Administrator is incorrect because project level administrators can manage service connections only within their project and they cannot change sharing that affects other projects in the organization.

Contributor is incorrect because contributors can typically update and use resources inside a project but they do not have the organization wide authority needed to share or revoke service connections across projects.

Reader is incorrect because readers have view only access and cannot perform management actions such as sharing or revoking service connections.

Focus on the scope of the required action when you answer permissions questions. If an action must span multiple projects then look for an organization level role rather than a project level role.

An engineering group at Meridian Systems will collect metrics and key performance indicators from their Stratus DevOps projects to confirm they are meeting targets and expectations. You must determine which KPI reflects the project’s quality and security posture. Which KPI should you choose?

  • ✓ B. Mean Time to Recover (MTTR)

The correct answer is Mean Time to Recover (MTTR).

Mean Time to Recover (MTTR) measures the average time required to restore service after an incident and it therefore directly reflects the project’s ability to respond to quality failures and security incidents. Lower MTTR indicates faster detection and remediation and it captures operational resilience and incident handling which are central to quality and security posture.

Application Response Time measures how quickly the application responds to requests and it is primarily a performance metric rather than a measure of recovery or security posture. It does not indicate how quickly the team can recover from incidents or breaches.

Change Failure Rate measures the proportion of deployments that cause failures and it reflects release quality but it does not measure incident recovery speed. It is useful for stability insights but it is less direct for assessing recovery and security response.

Lead Time for Changes measures how long it takes to get code from commit into production and it indicates delivery speed and process efficiency. It does not assess how well the project detects and recovers from failures or security events.

On exam questions compare what the KPI actually measures and ask whether it captures response and recovery or just performance or delivery speed. MTTR is the metric that focuses on recovery capability.

A cloud engineer at Meridian Systems is converting an Azure Resource Manager template that uses the expression [if(parameters('isComplete'), '1a', '2a')] into Bicep. Which Bicep expression will produce the same result?

  • ✓ D. isComplete ? '1a' : '2a'

The correct option is isComplete ? '1a' : '2a'.

Bicep uses a ternary conditional operator so you write the condition followed by a question mark, then the true value, and then the false value. You reference parameter names directly in Bicep so using the parameter name isComplete as the condition produces the same result as the ARM template if expression.

if(isComplete, '1a', '2a') is incorrect because that is the ARM template function style and Bicep does not use an if() function for expressions. Bicep uses the ? operator instead.

parameters('isComplete') ? '1a' : '2a' is incorrect because Bicep does not call parameters('...') to read parameters. In Bicep you refer to the parameter by its name rather than using a parameters function.

iif(isComplete, '1a', '2a') is incorrect because there is no iif function in Bicep or ARM templates in that form and that syntax belongs to other languages rather than Bicep.

When converting ARM templates to Bicep look for the condition ? trueValue : falseValue pattern and remember that parameter names are referenced directly in Bicep.
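
A minimal Bicep sketch of the conversion, using the parameter from the question, might look like this. The variable and output names are illustrative.

    param isComplete bool

    // ARM template form: "[if(parameters('isComplete'), '1a', '2a')]"
    // Bicep uses the ternary operator and direct parameter references
    var label = isComplete ? '1a' : '2a'

    output result string = label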

Your team uses Azure Boards at a small firm named Skyforge. A work item called taskX must wait for another work item called taskY to finish before it can proceed. You need to express that dependency using the Azure DevOps web portal. What should you do?

  • ✓ B. In the Backlog view open the item context menu add a link choose Successor and enter the ID of taskY

The correct option is In the Backlog view open the item context menu add a link choose Successor and enter the ID of taskY.

Using the Backlog view and the item context menu lets you create a Successor link that explicitly models that taskX must wait for taskY to finish. The Successor link type is a first class relationship in Azure Boards for sequencing work and the backlog and dependency visualizations respect that relationship.

Add a Parent link from the user story that contains taskX is incorrect because a Parent link creates a hierarchy to show decomposition of work and it does not express sequencing or a finish-to-start dependency between two tasks.

From the work item form open the Links tab add a link and use References with the ID of taskY is incorrect because a References link is a generic connection for context and notes and it does not establish a predecessor or successor relationship that the backlog will treat as a dependency.

From Queries open the context menu add a link choose Existing Item set the link type to Affected By and enter the ID of taskY is incorrect because the Affected By link denotes impact rather than an ordered dependency and it is not the standard predecessor or successor link used to express sequencing in the backlog.

When you need to express sequencing in Azure Boards use the backlog context menu and select Successor or Predecessor rather than parent or reference links so the backlog and dependency views reflect the order correctly.

Your team manages an Azure DevOps project that hosts package feeds for a mid size firm called MapleWave. You need to grant a group of engineers the minimal permissions required to save packages from upstream sources while avoiding excessive privileges. The proposed action is to give them the Reader access level. Does this plan satisfy the requirement?

  • ✓ B. No

The correct option is No.

The Reader access level in Azure DevOps is a read only permission set and it only allows users to view and download packages. Reader does not allow publishing or saving packages into a feed from upstream sources. Because saving upstream packages requires more than read access you must grant a feed role such as Collaborator, the least privileged role that can save packages from upstream sources, for the engineers to meet the requirement.

Yes is incorrect because assigning Reader would not let the engineers save packages from upstream sources. Reader only permits read operations and therefore would not satisfy the minimal required ability to publish or save packages into the feed.

When a question asks about minimal permissions focus on the specific action required. If users must add or save artifacts then read is not enough and you should look for contribute or equivalent write permissions on the feed.

A small company named Orion Software follows semantic versioning and the team is unsure when to bump the major, minor, or patch component. Which part should they increment to publish a bug fix?

  • ✓ B. Patch

The correct option is Patch. The Patch component is incremented when you publish a bug fix that is backwards compatible and does not add new functionality or change the public API.

Semantic versioning follows the format MAJOR.MINOR.PATCH. The Major number is for incompatible, breaking changes and the Minor number is for added functionality that remains backwards compatible. For fixes only, you increase the Patch number so consumers can safely upgrade.

Minor is incorrect because it is intended for adding new, backwards compatible features rather than for fixing bugs. Use the Patch bump for bug fixes instead.

Major is incorrect because it signals breaking changes that are not backwards compatible. A simple bug fix should never trigger a Major version increment.

When deciding quickly choose the patch increment for bug fixes and reserve the minor increment for new features and the major increment for breaking changes.

At Meridian Solutions your team has deployed several Azure App Service instances to run production web applications. You must configure an Azure Monitor alert to trigger when an application is unresponsive for more than four minutes. Which of the following could be used to meet this requirement?

  • ✓ B. Availability tests in Application Insights

The correct option is Availability tests in Application Insights.

Availability tests in Application Insights perform synthetic HTTP requests from multiple locations and validate that your web application responds correctly and within expected time bounds. These tests can be tied to alert rules so that Azure Monitor generates an alert when the test failures indicate the application has been unresponsive for a configured period, which meets the requirement to detect an app that is unresponsive for more than four minutes.

Azure Monitor metric alert is not the best choice because metric alerts monitor numerical platform or custom metrics such as CPU or request count and they do not by themselves run synthetic HTTP checks to determine if an application is unresponsive at the HTTP level.

Azure Service Health Alert is not correct because Service Health reports on Azure platform outages and planned maintenance and it does not monitor the responsiveness of individual application endpoints.

When you need to detect an application that is unresponsive use synthetic availability tests and then connect those tests to alerts. Also verify the test frequency and alert criteria match the outage duration you must detect.

A cloud engineering group at AuroraSoft is adding security checkpoints to their CI CD workflow and they want to align security testing to each pipeline phase. Which activity should be performed during the planning stage?

  • ✓ B. Threat modeling

Threat modeling is the correct option for activities performed during the planning stage of a CI/CD pipeline.

Threat modeling is an early design activity where teams identify assets, enumerate potential threats, map attack surfaces, and define mitigations and security requirements before code is written. Doing this work during planning helps set security requirements and priorities and it guides which checks and tests to automate later in the pipeline.

Static code analysis is not a planning stage activity because it inspects source code for vulnerabilities and style issues and it runs during implementation or continuous integration when code is available to analyze.

Penetration testing is an active exercise that simulates attacks against a deployed system and it is normally performed during staging or production testing rather than during planning.

Load testing measures performance and scalability under high load and it belongs to performance and system testing phases after the architecture and implementation decisions have been made.

Map security activities to the software development phase. In the planning stage choose design activities like threat modeling and reserve static analysis, penetration testing, and load testing for implementation and testing stages.

A product team at Meridian Apps plans to use Azure DevOps for continuous integration and continuous delivery of a service. The service must be automatically released by Azure Pipelines onto a group of Azure virtual machines that host the workload. Which feature should be created in Azure DevOps to register and manage those target servers?

  • ✓ B. Deployment groups

The correct answer is Deployment groups.

Deployment groups are the Azure DevOps feature that lets you register a set of target virtual machines and install a small deployment agent on each machine so Azure Pipelines can push releases directly to those servers. They provide a way to organize machines into a named group, apply tags to select subsets, and run rolling or parallel deployments to the registered hosts.

Environments represent logical deployment targets and can contain resources for multi stage YAML pipelines, but they are not the feature you use to register and manage a group of target servers in the classic release sense. The question specifically asks about registering and managing target servers as a group, which points to deployment groups.

Agent pools are collections of agents that run pipeline jobs and provide compute for builds and deployments, but they do not serve to register or group target servers for release deployments. Agent pools manage the agents that execute tasks rather than the deployment targets themselves.

When a question mentions registering and managing the actual servers that receive releases think Deployment groups. Save Agent pools for questions about where pipeline jobs run and use Environments for resource views in YAML multi stage scenarios.

You have a .NET application named MonolithApp2 that produced a NuGet package at publish/Release/MonolithApp2.1.2.0.nupkg and you must upload it to Acme Packages which command will correctly push the package to the remote feed?

  • ✓ D. dotnet nuget push "publish/Release/MonolithApp2.1.2.0.nupkg" --api-key PAT_TOKEN --source "acme"

The correct option is dotnet nuget push "publish/Release/MonolithApp2.1.2.0.nupkg" --api-key PAT_TOKEN --source "acme".

The dotnet nuget push "publish/Release/MonolithApp2.1.2.0.nupkg" --api-key PAT_TOKEN --source "acme" command is the proper way to publish a built .nupkg to a NuGet feed. The command takes the package file path, an API key for authentication and a source which can be a feed name or feed URL, so replacing PAT_TOKEN with a valid key and pointing --source to the Acme feed will upload the package.
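
As a sketch, assuming the Acme feed is first registered under the name acme on the build machine, the full flow looks like this. The feed URL shown is a placeholder:

```bash
# Register the feed once per machine (placeholder URL), then push the package.
dotnet nuget add source "https://pkgs.example.com/acme/nuget/v3/index.json" --name acme
dotnet nuget push "publish/Release/MonolithApp2.1.2.0.nupkg" --api-key PAT_TOKEN --source acme
```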

gcloud artifacts packages upload "publish/Release/MonolithApp2.1.2.0.nupkg" --location us-central1 --repository acme-repo --format nuget is incorrect because the typical approach to publish NuGet packages to a NuGet feed is to use the NuGet client such as dotnet nuget push. The gcloud command is not the standard client operation for pushing a .nupkg to a NuGet feed.

dotnet nuget add source "publish/Release/MonolithApp2.1.2.0.nupkg" --api-key PAT_TOKEN --source "acme" is incorrect because add source registers a feed endpoint with your NuGet configuration and it expects a feed URL not a package file path. It does not upload a package.

git nuget push "publish/Release/MonolithApp2.1.2.0.nupkg" --api-key PAT_TOKEN --source "acme" is incorrect because there is no standard git nuget push command and mixing git with nuget in that form is not a valid way to publish a NuGet package. The supported client is the NuGet CLI or the dotnet tool.

When you need to publish a NuGet package use dotnet nuget push and ensure the --source value points to the feed URL or configured feed name and that you supply a valid API key.

At NimbusApps you host your repositories on GitHub Enterprise and you need a way to have a PowerShell script run automatically when a rebase is initiated in a repository. Which mechanism should you use?

  • ✓ C. a webhook

The correct option is a webhook.

A webhook is the mechanism GitHub Enterprise provides to send HTTP POST notifications to an external endpoint when repository events occur. You can configure a webhook to fire on the events that correspond to a rebase or to the push that follows a rebase and have your receiving service trigger the PowerShell script automatically based on the payload.
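
For illustration, a webhook can also be created through the GitHub REST API rather than the repository settings page. The host, repository, token, and receiver URL below are all placeholders:

```bash
# Create a webhook on a GitHub Enterprise Server repository that fires on
# push events, since a completed rebase surfaces as a push or force push.
curl -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://github.example.com/api/v3/repos/nimbusapps/portal/hooks" \
  -d '{"name":"web","active":true,"events":["push"],"config":{"url":"https://hooks.example.com/run-powershell","content_type":"json"}}'
```

The receiving service at the configured URL would then invoke the PowerShell script when a payload arrives.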

GitHub Copilot is a code suggestion and completion tool and it does not provide event notifications or a way to invoke scripts when repository operations happen.

Cloud Functions are a place to run code and they can host the receiver for events but they are not the GitHub-side mechanism that sends notifications. You would still rely on a webhook to notify a Cloud Functions endpoint.

Gist is a snippet and snippet sharing service and it does not offer an event notification mechanism to trigger scripts on repository actions.

When questions ask about responding to repository actions think about services that push event data such as webhooks and then decide where to host the receiver that will run your script.

A small engineering group at Meridian Systems uses Azure Boards for tracking tasks and they want a dashboard widget to measure a specific metric. They need to know the duration from when a work item is first created until it is marked closed. Which widget will provide that measurement?

  • ✓ B. Lead time widget

Lead time widget is the correct option.

The Lead time widget reports the elapsed time from when a work item is first created until it is marked closed, so it directly matches the engineering group’s requirement to measure duration from creation to closure.

Velocity tracks the amount of work a team completes per iteration and usually uses story points or completed backlog items. It does not measure the time span of individual work items from creation to closed and so it is not the right widget for this metric.

Burndown chart shows the remaining work in an iteration over time and helps teams monitor sprint progress. It does not provide per item elapsed time from creation to closure and therefore does not meet the stated need.

Cycle time measures the time an item spends in active work or in the workflow after work begins and before it is completed. That differs from lead time which starts at creation, so cycle time does not answer the specific question about time from creation to closed.

Look for wording that says from creation or first created to identify lead time, and contrast that with cycle time which begins when active work starts.

A developer at SummitApps must record diagnostics that reveal how service hook conditions are evaluated for event matching when calling the CloudForge DevOps Services REST API version 6.2 Service Hooks diagnostics endpoint, which EventSubscriptionDiagnostics setting should they enable?

  • ✓ C. evaluationTracing

The correct option is evaluationTracing.

evaluationTracing enables diagnostic traces that show how the service hook matching conditions are evaluated for incoming events. This setting records the condition evaluation logic and the matching decisions so you can see why an event did or did not trigger a subscription when you call the Service Hooks diagnostics endpoint in the 6.2 REST API.

deliveryTracing is focused on tracing the delivery process to external subscribers and webhooks and not on the condition evaluation itself. It helps diagnose network and delivery behavior rather than why an event matched a subscription.

deliveryResults records the outcomes of delivery attempts and responses from subscribers. It does not show the internal evaluation of subscription conditions that determines event matching.

When a question asks why an event matched a subscription look for diagnostic options that mention evaluation or condition tracing because those settings show the matching logic.

You manage an Azure subscription for Northside Systems that hosts several Azure services. You need the platform to send an SMS notification whenever scheduled maintenance is announced for those services. Which actions should you perform? (Choose 2)

  • ✓ B. Create and configure an action group

  • ✓ D. Create an Azure Service Health alert

The correct options are Create and configure an action group and Create an Azure Service Health alert.

Create an Azure Service Health alert is the right choice because Service Health reports platform events that affect your subscription such as planned maintenance and health advisories, and alerts can be created to notify you when Microsoft announces scheduled maintenance that could impact your services.

Create and configure an action group is required because action groups define who gets notified and how, and they support SMS as a notification channel so you can have the platform send an SMS when the Service Health alert fires.
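
A minimal Azure CLI sketch of the two steps, with placeholder names and phone number. Exact flags can vary by CLI version:

```bash
# Create an action group with an SMS receiver.
az monitor action-group create \
  --resource-group rg-ops --name ag-maintenance-sms \
  --action sms OnCall 1 5551234567

# Create an activity log alert scoped to Service Health events and wire it
# to the action group (by ID or name, depending on CLI version).
az monitor activity-log alert create \
  --resource-group rg-ops --name ServiceHealthAlert \
  --condition category=ServiceHealth \
  --action-group ag-maintenance-sms
```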

Create a Resource Health alert is incorrect because Resource Health informs you about the health status of individual resources and their availability, and it does not provide subscription level scheduled maintenance announcements from the platform.

Enable Azure Security Center is incorrect because Security Center focuses on security posture and threat protection, and it does not send platform maintenance or scheduled maintenance notifications.

Use Azure Service Health for platform and scheduled maintenance notifications and pair it with an Action Group to deliver SMS messages. Test the action group after creating it to confirm phone numbers and delivery work as expected.

A development group at NovaSoft maintains a pipeline called BuildFlow7 and a developer named DevA must remove a temporary cleanup stage called cleanupFinal once validation is complete. You must grant the least permissions necessary so DevA can delete that stage. At which permission scope should you grant access?

  • ✓ C. Pipeline level

Pipeline level is correct because DevA only needs permission to remove the temporary cleanup stage from the BuildFlow7 pipeline and that right should be granted on the pipeline itself.

Granting access at the Pipeline level limits the permission to that single pipeline so DevA can delete the cleanupFinal stage without gaining rights across other pipelines or projects. This approach follows the principle of least privilege by giving only the minimal scope required to perform the task.

Project level is incorrect because granting rights at the project scope would allow DevA to affect all pipelines in the project which is broader than necessary.

Organization level is incorrect because that scope would permit changes across the entire organization and is far too wide for a single stage deletion task.

Stage level is incorrect because most CI systems manage stage removal through pipeline level controls and do not offer isolated stage deletion permissions, so you cannot reliably grant only a stage scoped permission to allow deletion.

Grant access at the narrowest scope that allows the required action and verify the permission on a non production pipeline before applying it in production.

A development team maintains a CodeWave repository that uses CI workflows and stores credentials as encrypted repository secrets. The team plans to update those secrets through the provider’s REST endpoint and they must encrypt the values with the repository public key before transmitting them. Which encryption library should they use?

  • ✓ D. libsodium

The correct option is libsodium.

libsodium is the library the provider expects for encrypting repository secrets with the repository public key because it offers a high level sealed box API that takes the recipient public key and produces a ciphertext suitable for the REST endpoint. The typical workflow is to use the sealed box function to encrypt the secret with the repository public key and then base64 encode the result before sending it to the Create or Update repository secret endpoint.
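
A minimal Python sketch of that workflow using PyNaCl, which is a libsodium binding. The key and secret values are placeholders supplied by the caller:

```python
from base64 import b64encode

from nacl import encoding, public


def encrypt_secret(repo_public_key_b64: str, secret_value: str) -> str:
    # Seal the secret with the repository public key, then base64 encode
    # the ciphertext so it can be sent to the REST endpoint.
    public_key = public.PublicKey(
        repo_public_key_b64.encode("utf-8"), encoding.Base64Encoder()
    )
    sealed_box = public.SealedBox(public_key)
    encrypted = sealed_box.encrypt(secret_value.encode("utf-8"))
    return b64encode(encrypted).decode("utf-8")
```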

BouncyCastle is a Java cryptography provider that supports many primitives but it is not the library named in provider documentation and it will not necessarily produce the same sealed box format that the endpoint expects.

OpenSSL is a general purpose toolkit for TLS and cryptography but the provider examples and client libraries target the libsodium sealed box API for compatibility so using OpenSSL would require implementing a compatible sealed box scheme which is error prone.

hashlib is a Python module for hashing and does not provide public key encryption facilities so it cannot be used to perform the required public key encryption of secrets.

When a question mentions encrypting secrets with a repository public key check the provider documentation for the required library and algorithm and favor using libsodium or an implementation of its sealed box API for direct compatibility.

A cloud operations team at Northwind Systems needs to baseline metrics for Windows Server virtual machines running in Azure and they require detailed information about processes running inside the guest operating system. Which agents should be installed to satisfy this requirement? (Choose 2)

  • ✓ A. Dependency agent

  • ✓ D. Azure Log Analytics agent

The correct options are Dependency agent and Azure Log Analytics agent.

The Azure Log Analytics agent runs in the Windows guest and collects performance counters, event logs, and other telemetry from the operating system and installed applications. It provides the baseline metrics that the cloud operations team needs and forwards data to Log Analytics workspaces for analysis and alerting.

The Dependency agent complements the Log Analytics agent by collecting process level and dependency information inside the guest. It enables process mapping and shows which processes communicate with each other and with remote endpoints, which satisfies the requirement for detailed process information.
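
A hedged Azure CLI sketch that installs both agents on a Windows VM. The resource names, workspace ID, and workspace key are placeholders:

```bash
# Install the Log Analytics (Microsoft Monitoring) agent extension.
az vm extension set --resource-group rg-ops --vm-name vm-web-01 \
  --name MicrosoftMonitoringAgent \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId":"<WORKSPACE_ID>"}' \
  --protected-settings '{"workspaceKey":"<WORKSPACE_KEY>"}'

# Install the Dependency agent extension for process and dependency data.
az vm extension set --resource-group rg-ops --vm-name vm-web-01 \
  --name DependencyAgentWindows \
  --publisher Microsoft.Azure.Monitoring.DependencyAgent
```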

Telegraf agent is not the best choice for this scenario because it is a general metrics collector that is most commonly used to send metrics to InfluxDB or other time series databases and it does not provide the Service Map style process dependency mapping inside Azure.

Azure Network Watcher Agent for Windows focuses on network diagnostic capabilities such as packet capture, connection troubleshooting, and topology insights and it does not collect detailed guest process information for baseline metrics.

When a question asks for process level visibility inside the guest think of agents that run in the guest OS such as the Log Analytics and Dependency agents and remember that newer exams may reference the Azure Monitor Agent.

At SolaceWorks you are creating automated tests for the online storefront and you need a framework that can control browsers to run end to end user interface tests for the web application. Which framework should you choose?

  • ✓ D. Playwright

The correct option is Playwright.

Playwright is a modern end to end browser automation framework that can control Chromium, Firefox, and WebKit, so it covers the major browsers you would need for storefront testing. Playwright includes features that simplify UI testing such as automatic waiting for elements, the ability to intercept and modify network traffic, and a built in test runner, and it supports multiple languages, which makes it a strong fit for automated web application tests.

JaCoCo is a Java code coverage tool and it does not perform browser automation or run end to end user interface tests so it is not suitable for controlling browsers.

Xamarin.UITest is focused on testing mobile applications built with Xamarin and native platforms and it is not designed to automate web browsers so it does not match the web storefront testing requirement.

Selenium WebDriver is a well known browser automation tool that can run web UI tests, but Playwright is the better choice here because it provides built in multi browser support, automatic waiting, network interception, and modern APIs that make reliable cross browser testing easier. You could use Selenium WebDriver in many scenarios but it is not the best answer for this question.

When you see an exam question about controlling browsers for web UI tests look for tools that are explicitly for browser automation and match the keyword web or browser to the tool purpose.

Orion Technologies has started using Azure DevOps and created four projects for separate teams. Each team requires a different work tracking approach. Team Alpha needs to track product backlogs and defects on a Kanban board and decompose backlog items into tasks on a task board. Team Beta needs to manage user stories and defects on a Kanban board and to track defects and tasks on a task board. Team Gamma needs to record requirements, change requests, risks, and reviews. Which Azure Boards process should you select for Team Beta?

  • ✓ C. Agile

The correct option is Agile.

Agile uses the User Story work item type and provides Kanban boards for backlogs and task boards for breaking backlog items into tasks which aligns with Team Beta needing to manage user stories and defects on a Kanban board and to track defects and tasks on a task board.

Agile also supports configuring how bugs appear on backlogs and boards so it gives the flexibility to treat defects either as backlog items or as tasks depending on the project settings which matches Team Beta requirements.

Scrum is oriented around Product Backlog Items and sprint based planning and it does not use the User Story work item type that Team Beta requires.

CMMI focuses on formal change control and includes work item types for change requests, risks, and reviews, so it is more appropriate for the scenario described for Team Gamma and it is heavier than needed for Team Beta.

XP is not a built in Azure DevOps process template so it is not a valid choice when selecting the Azure Boards process for Team Beta.

Match the named work item types in the question to the process template. If you see User Story think Agile. If you see Product Backlog Item think Scrum. If the scenario mentions change requests or formal reviews think CMMI.

A development team at NimbusSoft uses Azure DevOps to orchestrate builds and releases and they store their application code in a Git repository. They want a pull request workflow that keeps the number of commits in the main branch to a minimum. If they adopt a pull request approach that performs a three way merge does that achieve the objective?

  • ✓ C. No

The correct option is No.

A three way merge preserves the individual commits from the source branch and it typically creates a merge commit when branches have diverged. That means the main branch will receive all of the original commits plus the merge commit and so the number of commits is not minimized. For the goal of keeping the main branch compact you would use a squash merge because it combines the branch changes into a single commit before merging, which is why No is the correct response to the question about three way merge.
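
The difference is easy to see at the command line, assuming a feature branch named feature/login with several commits:

```bash
# Three way merge preserves every branch commit and usually adds a merge commit.
git checkout main
git merge feature/login

# Squash merge collapses the branch into a single new commit on main.
git checkout main
git merge --squash feature/login
git commit -m "Add login feature"
```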

Use squash merge for pull requests is incorrect in the context of this question because it describes a strategy that would actually minimize commits. The question asked whether adopting a three way merge achieves the objective and the squash option does not answer that claim.

Yes is incorrect because a three way merge does not reduce the number of commits. It preserves the branch history and usually adds a merge commit, so it increases or preserves the commit count on the main branch rather than minimizing it.

When you see a question about minimizing commits think about whether the merge strategy preserves history or combines it. Squash merge combines commits into one and reduces commit noise while three way merge preserves all commits and often adds a merge commit.

A development team at Nimbus Labs uses Azure DevOps to automate the build pipeline for a Spring Boot service written in Java and they need to collect code coverage metrics and publish them to the pipeline. Which tools can be used to generate Java code coverage reports? (Choose 2)

  • ✓ C. JaCoCo

  • ✓ D. Cobertura

The correct answers are JaCoCo and Cobertura.

JaCoCo is a modern Java code coverage library that integrates easily with Maven and Gradle and it produces XML and HTML reports that CI systems such as Azure DevOps can consume to publish coverage metrics.
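
As a sketch, assuming a Gradle build with the jacoco plugin applied in build.gradle, a single pipeline step can produce the report that a later task publishes to the pipeline:

```bash
# Run the tests and generate the JaCoCo coverage report, which lands under
# build/reports/jacoco by default.
./gradlew test jacocoTestReport
```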

Cobertura is an older Java coverage tool that can generate reports usable by build pipelines and it has been supported by Maven plugins and other tooling. It remains usable in many environments but it has seen less active maintenance than JaCoCo which makes it less common on newer projects.

Coverlet is incorrect because it targets .NET assemblies and tooling such as dotnet test rather than the Java ecosystem. It will not produce Java bytecode coverage reports.

Clover is incorrect in this context and it is also important to note that the original Atlassian Clover was retired and later forks such as OpenClover exist. That history makes it less likely to be the recommended choice on newer exams compared with tools like JaCoCo.

When a question asks about code coverage for Java look for tools tied to the Java ecosystem such as JaCoCo and Cobertura and exclude tools that are clearly aimed at other platforms such as Coverlet.

A development group maintains a GitHub repository named app-repo and an Azure Key Vault named secrets-vault. They plan to add a GitHub Actions workflow called DeployDBWorkflow that will provision a database instance using credentials stored in secrets-vault. You must allow DeployDBWorkflow to read the vault secrets. Which three steps should you perform in sequence?

  • ✓ C. Create a service principal in Azure AD then grant secret permissions to secrets-vault then reference the credentials in DeployDBWorkflow

Create a service principal in Azure AD then grant secret permissions to secrets-vault then reference the credentials in DeployDBWorkflow is correct because you must create an identity that can authenticate to Azure, give that identity explicit permission to read secrets from the vault, and only then use those credentials from the workflow to access the secrets.

First you create a service principal in Azure AD so the workflow has an Azure identity to authenticate as. Next you grant that service principal the appropriate secret permissions on the Key Vault so it can read the stored credentials. Finally you reference the service principal credentials from the DeployDBWorkflow so the GitHub Actions runner can sign in and retrieve the secrets at runtime.
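
A hedged Azure CLI sketch of the first two steps. The names are placeholders and the appId comes from the output of the first command:

```bash
# Step 1: create the service principal and note the appId and password.
az ad sp create-for-rbac --name sp-deploydb

# Step 2: grant the service principal read access to secrets in the vault.
az keyvault set-policy --name secrets-vault \
  --spn <APP_ID> \
  --secret-permissions get list
```

The credentials from step 1 would then be stored as GitHub repository secrets so that DeployDBWorkflow can sign in and read the vault at runtime.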

Create a personal access token in GitHub then create a service principal in Azure AD then reference the credentials in DeployDBWorkflow is incorrect because a GitHub personal access token does not replace the need for an Azure service principal to authenticate to Azure Key Vault. The PAT is a GitHub credential and does not grant Key Vault access.

Create a service principal in Azure AD then reference the credentials in DeployDBWorkflow then grant secret permissions to secrets-vault is incorrect because referencing the credentials in the workflow before granting the service principal Key Vault permissions will cause access failures. The identity must have permissions granted before the workflow attempts to read secrets.

Create a service principal in Azure AD then grant key permissions to secrets-vault then reference the credentials in DeployDBWorkflow is incorrect because granting key permissions only authorizes operations on Key Vault keys and does not permit reading Key Vault secrets. You must grant secret permissions to allow the workflow to retrieve stored secret values.

When you set up GitHub Actions to access Azure Key Vault, create a minimal Azure service principal, grant it only the required secret permissions, and store the service principal credentials as GitHub repository secrets before running the workflow.

Developer efficiency depends on tests detecting real defects in code updates quickly and consistently. Which capability of the Summit CI CD platform can produce different outcomes such as pass or fail even when the source code and execution environment have not changed?

  • ✓ C. Flaky or intermittent test failures

Flaky or intermittent test failures is correct because it names the phenomenon where the same test may pass or fail even when the source code and execution environment have not changed.

This happens when tests depend on timing, external services, random values, shared state, or other non deterministic factors that make outcomes inconsistent across runs. Flaky failures are specifically the category of test behavior that can produce different results without changes to code or the environment.

Parallel test execution is not the best answer because it refers to running tests concurrently to save time, and it is an execution strategy rather than the underlying cause of inconsistent results. Parallel runs can expose flakiness but they are not the phenomenon described.

Test impact analysis is not correct because it is a technique to determine which tests need to run after a code change, and it does not itself cause tests to pass or fail intermittently.

User interface automated tests is not the correct choice because it names a test type rather than the intermittent behavior. UI tests can be flaky, but the question asks for the capability or phenomenon that explains inconsistent pass or fail outcomes.

When a question describes tests that sometimes pass and sometimes fail without code changes look for an answer mentioning flaky or intermittent tests rather than test types or execution strategies.

A technology consultancy named HarborSoft stores the source for a client web portal in an Azure Repos Git repository and team members push commits directly to the main branch. You need to enforce a change control workflow that keeps the main branch protected, requires feature branches to be built before integration, requires at least one release lead to approve changes, and mandates that all merges into main use pull requests. What should you configure in Azure Repos?

  • ✓ C. Branch policies on the main branch

Branch policies on the main branch is correct because it enforces the required protections by requiring pull requests and build validation and reviewer approvals before changes can be merged into main.

The Branch policies on the main branch settings let you require a successful build as part of pull request validation and they let you require a minimum number of reviewers so at least one release lead must approve changes. The policies can also require that merges into main use pull requests which prevents direct pushes when the policy is applied.

Branch security of the main branch is focused on permissions such as who can push or force push to the branch and it does not provide build validation or enforce reviewer approvals by itself.

Agent pools in Project Settings describe where pipeline jobs run and manage build agents. They do not control branch level protections or require pull requests and approvals.

Service connections in Project Settings store credentials and endpoints that pipelines use to access external services. They do not implement branch protection rules or require specific merge workflows.

When a question mentions require build validation, require reviewers, or force pull requests look for answers that mention branch policies because those settings live at the branch policy level and enforce the workflow.

A platform engineering group at Meridian Labs must extract shared libraries and maintain them as a set of packages. The candidate actions are numbered as follows. 1 Group related modules into packages. 2 Build a dependency graph for the codebase. 3 Assign a code owner to each grouped module. 4 Rewrite modules in the most commonly used programming language. 5 Identify the predominant programming language used in the system. Which three steps should the team perform in sequence?

  • ✓ E. 2-1-3

The correct option is 2-1-3. This sequence means build a dependency graph first then group related modules into packages and then assign a code owner to each grouped module.

Building a dependency graph first reveals how code is coupled and which modules change together. The graph gives objective data to define package boundaries and to avoid splitting tightly coupled code across packages.

With the dependency graph you can group related modules into cohesive packages that minimize cross package dependencies and expose clear interfaces. Packaging after analysis reduces rework and helps preserve internal invariants when extracting shared libraries.

Assigning a code owner after grouping ensures that each package has a responsible maintainer for versioning releases and reviewing changes. Owners are critical for long term maintenance and for enforcing compatibility guarantees.

5-1-3 is incorrect because identifying the predominant language first does not provide the dependency information needed to form correct packages and it can tempt teams to rewrite code before understanding coupling and cost.

1-3-2 is incorrect because grouping modules before building a dependency graph risks creating packages that break dependencies and that do not reflect actual runtime or compile time coupling.

3-1-4 is incorrect because assigning owners before grouping skips the analysis step and because rewriting modules into a single language is usually unnecessary and risky when extracting shared libraries.

1-2-3 is incorrect because grouping before analysis may force unnatural boundaries and lead to extra refactoring once the true dependency graph is known.

When sequencing extraction work perform analysis first by building a dependency graph then organize into cohesive packages and finally assign ownership for ongoing maintenance.

A DevOps team at Aurora Systems plans to deploy a containerized application to an Azure Kubernetes Service cluster and the application must access other Azure services securely from the cluster. Which identity or credential should you create to grant the cluster that capability?

  • ✓ C. Service principal

The correct option is Service principal.

A Service principal is an Azure Active Directory identity that you create and assign roles to so the AKS cluster can authenticate to Azure Resource Manager and call other Azure services. You register the principal in Azure AD and grant it the minimum roles the cluster needs so pods or cluster components can access storage, key vaults, or other resources securely.
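
A minimal sketch with placeholder names, where the appId and password printed by the first command feed into the cluster creation:

```bash
# Create the service principal for the cluster.
az ad sp create-for-rbac --name sp-aks-aurora

# Create the AKS cluster using that service principal as its identity.
az aks create --resource-group rg-aurora --name aks-aurora \
  --service-principal <APP_ID> --client-secret <PASSWORD>
```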

Kubernetes Secret is incorrect because it is a Kubernetes object for storing small pieces of sensitive data inside the cluster and not an Azure identity. Storing an Azure credential in a Kubernetes Secret would be less secure and it does not provide an Azure AD identity for role assignment.

SSH key pair is incorrect because SSH keys grant access to virtual machines for administrative purposes and they do not serve as an identity for accessing Azure services from the cluster.

Managed identity is marked incorrect for this question. Managed identities give Azure resources an identity without credential management and they can be used with AKS in some configurations, but many AKS deployments and exam scenarios expect a service principal to be created and granted the cluster permissions unless the question explicitly calls out managed identities or pod workload identities.

Read the question for identity clues and choose an Azure AD identity when AKS must call Azure APIs. Use service principal when the exam asks for the cluster identity unless it specifically mentions managed identity features.

A development group at NovaSoft stores its code in Azure DevOps and must install a self-hosted agent through an unattended installation script. Which two parameters need to be provided within that setup script? (Choose 2)

  • ✓ B. Authorization credentials such as a personal access token

  • ✓ E. The organization URL

The correct options are Authorization credentials such as a personal access token and The organization URL.

The unattended installation script must include Authorization credentials such as a personal access token so the agent can authenticate to Azure DevOps and register itself. The token provides the permission needed for the agent to join the organization and accept jobs.

The script also needs The organization URL so the agent knows which Azure DevOps account or organization to contact during registration. The URL directs the agent to the server endpoint that will manage and configure it.
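
A minimal unattended sketch, assuming the agent package has already been downloaded and extracted on the target machine. Windows machines would run config.cmd with the same flags:

```bash
# The organization URL and PAT are the two required inputs discussed above.
./config.sh --unattended \
  --url https://dev.azure.com/novasoft \
  --auth pat --token <PERSONAL_ACCESS_TOKEN>
```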

The deployment group name is not required for a basic unattended agent installation. Deployment groups are used for a different release pattern and are registered with their own workflow rather than being a mandatory parameter for the agent setup script.

A service connection name is not part of the agent registration parameters. Service connections are created to allow pipelines to access external services and they are configured at the project level after agents are available.

The agent pool name is not strictly required as a minimal parameter for unattended setup because the installer can register the agent to a default pool or you can specify a pool later. The essential items for unattended registration are the authentication token and the organization endpoint.

For unattended agent installation questions focus on what the agent needs to reach and how it will authenticate. Remember that the organization URL and a personal access token are commonly required parameters.

Your team at Fabrikam uses Azure DevOps to build and deploy a web application named WebApp1 into an Azure subscription. Azure Monitor currently sends email alerts when server side errors occur in WebApp1. You want to send those Azure Monitor alerts to a Microsoft Teams channel. What steps should you take? (Choose 2)

  • ✓ A. Create an Azure Logic App with an HTTP request trigger

  • ✓ B. Modify the action group in Azure Monitor

The correct answer is Create an Azure Logic App with an HTTP request trigger and Modify the action group in Azure Monitor.

Create an Azure Logic App with an HTTP request trigger lets you accept the alert payload as an incoming webhook and then use actions such as the Microsoft Teams connector to format and post a message into the channel. The Logic App performs any needed transformation and orchestration so the alert becomes a readable Teams message.

Modify the action group in Azure Monitor is necessary so Azure Monitor will invoke the Logic App when the alert fires. Action groups are the mechanism Azure Monitor uses to deliver notifications and you add the Logic App endpoint as a webhook or Logic App action inside the action group.

Create an Azure Monitor workbook is incorrect because workbooks are for visualization and investigation of telemetry and they do not send alert notifications to external services like Microsoft Teams.

Create an Event Grid subscription to forward alerts to Microsoft Teams is incorrect because Event Grid is for event routing and it would still require an intermediary to transform and post the alert into Teams. It is not the standard direct notification path for Azure Monitor alerts.

Change the Diagnostics settings for the resource in Azure Monitor is incorrect because diagnostics settings control where resource logs and metrics are sent and they do not change how Azure Monitor sends alert notifications to channels such as Microsoft Teams.

Remember that Azure Monitor delivers alert notifications through action groups so pick a notification target such as a webhook or Logic App when you need to post alerts into external systems like Teams.

You operate an Azure App Service site for an online retail platform named NimbusRetail and you must raise diagnostic logging when the site begins showing abnormal traffic patterns. The approach must minimize ongoing administrative effort. Which resources should you include in the solution? (Choose 2)

  • ✓ C. An Azure Automation runbook

  • ✓ E. An Azure Monitor metric alert with dynamic thresholds

The correct options are An Azure Automation runbook and An Azure Monitor metric alert with dynamic thresholds.

An Azure Monitor metric alert with dynamic thresholds is appropriate because dynamic thresholds learn the normal baseline for a metric and detect anomalies as traffic patterns change. This means the alert adapts automatically and requires much less manual tuning than fixed thresholds.

An Azure Automation runbook is correct because a runbook can be invoked automatically when the alert fires to enable diagnostic logging or to capture diagnostics artifacts. Automating the logging actions removes the need for a human to respond each time and it minimizes ongoing administrative effort.

Azure Monitor autoscale settings are not sufficient because autoscale changes instance count to meet load and it does not by itself enable or collect diagnostic logs when abnormal traffic is detected. Autoscale addresses capacity rather than diagnostic capture.

An Azure Monitor alert that uses an action group with an email notification is not ideal because email notification still relies on a person to investigate and enable diagnostics. The option as written does not provide the automation needed to minimize ongoing administrative work.

An Azure Monitor alert with a static threshold is not a good choice because static thresholds need frequent tuning for varying traffic and they will either miss anomalies or create noise. Static thresholds therefore increase administrative overhead rather than reduce it.

Choose alerts that adapt to baseline changes and pair them with automated responses. Use dynamic thresholds to detect anomalies and use runbooks to trigger diagnostic logging automatically.

A development team at Meridian Tech intends to publish a web application to twelve virtual machines located in a corporate colocation facility and all machines run the Azure Pipelines agent. Which deployment mechanism should you propose to manage the release to these machines?

  • ✓ C. Deployment group

Deployment group is the correct option because the team needs to publish to multiple virtual machines that already run the Azure Pipelines agent.

A Deployment group defines a set of target machines for a release pipeline and relies on the Azure Pipelines agent installed on each machine to execute deployment tasks. Deployment groups let you manage and orchestrate releases across many servers whether they are in a colocation facility or in a cloud environment and they are the intended mechanism when you have agent based targets.

Management group is incorrect because management groups are used to organize subscriptions for governance and policy purposes and they do not target or control deployments to individual machines.

Resource group is incorrect because resource groups are an Azure construct for grouping resources within a subscription and they do not provide the release orchestration for machines in an external colocation facility.

Service connection is incorrect because service connections grant pipelines access to external services or subscriptions and they do not group or manage a set of target machines for agent based deployments.

When the question mentions machines running the Azure Pipelines agent look for deployment group as the answer because it is specifically designed to target and manage agent based servers.

Orion Labs uses Azure DevOps to orchestrate builds and releases for its applications and it stores code in a Git repository. The team wants to branch from an open pull request and later merge that new branch into the pull request’s target branch. The new branch must include only part of the changes from the pull request. Which pull request action should the team use?

  • ✓ D. Cherry-pick

The correct option is Cherry-pick.

Cherry-pick lets you take specific commits or changes from an existing pull request and apply them to a new branch. This makes it possible to branch from an open pull request and include only part of the pull request changes in the new branch, and then merge that branch into the pull request’s target branch.

In Azure Repos you can use the cherry-pick action or the git cherry-pick command to create a commit that contains only the selected changes. You can then push that commit to a new branch and merge it into the target branch independently of the rest of the pull request.
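
At the command line the equivalent flow looks like this, with placeholder branch names and commit IDs:

```bash
# Branch from the pull request's target branch.
git checkout -b partial-change origin/main

# Apply only the selected commits from the pull request's source branch.
git cherry-pick 3f2a1c9 8d4e7b2

# Push the new branch so it can be merged into the target independently.
git push -u origin partial-change
```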

Approve with suggestions is a reviewer action that records approval or suggested edits and does not create a branch or extract specific commits for separate merging.

Reopen pull request only reopens a previously closed pull request and does not provide a mechanism to create a new branch that contains only some of the pull request changes.

Set as default branch changes which branch is the repository default and does not help select or move particular commits between branches.

When you need only specific commits from a pull request use cherry-pick or create a short lived topic branch and push the selected commits so they can be reviewed and merged independently.

A development group at Nimbus Systems must create a build pipeline inside an Azure DevOps project and they want each build to produce a measurement of the codebase technical debt and provide code quality metrics. Which tool is appropriate to include in the build to obtain this analysis?

  • ✓ B. SonarQube

The correct option is SonarQube.

SonarQube is a static code analysis platform that measures technical debt and produces code quality metrics. It analyzes code for issues such as bugs, vulnerabilities, code smells, and maintainability and reports a technical debt estimate that can be collected as part of a build pipeline.
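
As a sketch, a build step can invoke the scanner CLI directly. The project key, server URL, and token are placeholders, and Azure DevOps also offers dedicated SonarQube tasks through the marketplace extension:

```bash
# Run a SonarQube analysis and send the results to the server.
# Newer scanner versions accept sonar.token while older ones use sonar.login.
sonar-scanner \
  -Dsonar.projectKey=nimbus-service \
  -Dsonar.host.url=https://sonarqube.example.com \
  -Dsonar.token=<SONAR_TOKEN>
```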

Jenkins is a continuous integration server that orchestrates builds and can run analysis tools but it does not itself generate technical debt or code quality metrics unless you integrate an analysis tool such as SonarQube.

Azure Pipelines is the CI/CD service that runs your build and test tasks and it can host analysis steps but it is not the analysis engine that computes technical debt. You would add a task to Azure Pipelines to run SonarQube or a similar tool to get those metrics.

Azure Boards is for work tracking and planning and it does not perform static code analysis or produce code quality metrics.

When a question asks for tools that produce technical debt and code quality metrics think of static analysis platforms such as SonarQube rather than CI runners or work tracking services.

An engineering team at Meridian Apps publishes successive builds of their service into an Azure Artifacts feed. You must prevent packages that are still under active development from being visible to most users. Which feature will allow you to control which packages users can see?

  • ✓ B. Feed views

The correct option is Feed views.

Feed views let you create separate views that represent stages of a package lifecycle so you can publish early builds into a private or prerelease view while keeping a different view visible to most users. This approach lets the engineering team push successive builds into the feed and keep the in development versions out of the default view most users see.

The Feed permissions setting controls who can manage a feed and who can push or pull packages at the feed level, but it does not let you selectively hide specific package versions by lifecycle stage. That makes Feed permissions the wrong choice for this requirement.

The Upstream sources option is used to proxy or consume packages from external registries into a feed and to aggregate packages from multiple sources. It does not provide a way to hide in development builds from most users, so Upstream sources is not the correct feature for controlling visibility by lifecycle.

When a question asks about hiding in development packages think about lifecycle separation and look for features that create staged views. Use views to separate prerelease and release packages instead of relying only on permissions.

A development team at MarinerApps has deployed an Azure Web App and connected it to an Application Insights resource. You need to ensure that developers are alerted about unusual performance degradations or recurring failure patterns in the web service. Which Application Insights capability will provide these proactive notifications?

  • ✓ D. Application Insights Smart Detection

Application Insights Smart Detection is the correct option.

Application Insights Smart Detection uses telemetry driven anomaly detection to automatically identify unusual performance degradations and recurring failure patterns and it surfaces those issues so developers can be notified proactively.

Smart Detection analyzes exceptions, response times, and failure rates across your telemetry and it highlights sudden spikes or persistent patterns without requiring you to define explicit thresholds.

Application Insights Live Metrics Stream provides real time telemetry to inspect live requests and dependencies and it is intended for interactive debugging rather than automated, recurring anomaly detection and proactive notifications.

Azure Monitor alert rules let you create alerts by defining specific thresholds and conditions on metrics and logs and they require manual rule configuration rather than offering the automatic telemetry driven detection that Smart Detection provides.

Application Insights Application Map visualizes the topology and dependencies of your application and it helps identify hotspots and failures visually but it does not automatically detect anomalies or generate proactive notifications about recurring failure patterns.

When a question asks about proactive anomaly alerts look for features described as automatic or proactive detection rather than manual threshold based alerting.

Your team maintains a private RepoHub code repository and they want commit status to appear in Azure Boards. What should you do first?

  • ✓ D. Install the Azure Boards app on the repository

Install the Azure Boards app on the repository is correct.

Installing the Azure Boards app on the repository creates the integration that links commits and pull requests from the RepoHub repository to work items in Azure Boards and enables commit status and linking to appear inside Azure Boards.

Enable multi-factor authentication for your RepoHub account is not correct because multi factor authentication improves account security but does not create the integration needed to show commit status in Azure Boards.

Create a RepoHub workflow in the repository is not correct because a workflow can run checks or CI tasks but it does not establish the native Boards integration that links commits and work items unless you also install the Azure Boards app.

Add the Azure Pipelines app to the repository is not correct because Azure Pipelines provides CI and build status reporting and it does not itself configure the Azure Boards linking for commits and work items in the way that the Azure Boards app does.

When a question asks how to get commit and pull request data into Azure Boards think about installing the official integration app first because that is usually the supported and simplest approach.

Your team stores code and conversations in RepoHub and you receive an email for every team discussion that is posted. You only want to get email notifications for threads where you left a comment or where someone mentioned you. Which two notification preferences should you turn off? (Choose 2)

  • ✓ B. Automatically watch teams

  • ✓ D. Automatically watch repositories

The correct options are Automatically watch teams and Automatically watch repositories.

Turning off Automatically watch teams and Automatically watch repositories prevents the system from automatically subscribing you to every team discussion and every repository thread. This reduces noise so you will only receive emails for threads where you are Participating or where someone mentions you provided those notification types remain enabled.

Automatically watch teams subscribes you to posts from teams you belong to so it generates emails for team discussions when enabled. Disabling it stops those blanket team notifications.

Automatically watch repositories subscribes you to activity across repositories by default so it causes emails for many repository threads. Turning it off keeps your inbox focused on items you actively engage in or where you are mentioned.

Participating is not correct because it controls notifications for threads where you left a comment and you want to keep that so you still get emails for your own discussions.

Watching is not correct because it is a manual watch state for repositories and is not the automatic preference that subscribes you to all team and repository activity by default.

When a question asks about reducing notification noise identify and disable settings that automatically subscribe you to everything. Keep Participating and mention notifications enabled to receive only relevant emails.

A development group at Skylark Technologies uses an Azure DevOps CI CD pipeline called PipelineA and they want to integrate OWASP ZAP scans into that pipeline. Which four steps should be added to PipelineA in the correct order?

  • ✓ C. Start the ZAP container then run a passive baseline scan then crawl the application then run an active scan

The correct option is Start the ZAP container then run a passive baseline scan then crawl the application then run an active scan.

Start the ZAP container first so the proxy and scanner are available to intercept traffic and manage scans. Running a passive baseline scan next lets ZAP detect low risk and informational issues without sending intrusive requests. Crawling the application after that discovers URLs, parameters, and the site structure so the scanner knows the attack surface. Finally running an active scan performs targeted, intrusive tests against the discovered endpoints to identify exploitable vulnerabilities.
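
A hedged sketch of those stages using the ZAP container image, where the image tag and target URL are placeholders. The packaged scan scripts bundle the crawl with the scan that follows it:

```bash
# Start from the official ZAP image.
docker pull ghcr.io/zaproxy/zaproxy:stable

# Passive baseline, which spiders the target and reports passive findings only.
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t https://app.example.com

# Active scan, which crawls the target again and runs intrusive attack rules.
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-full-scan.py -t https://app.example.com
```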

Crawl the application then start the ZAP container then run a passive baseline then execute an active scan is incorrect because the crawler must be proxied through ZAP. Crawling before the container is running means ZAP cannot observe the traffic and the passive baseline will not collect expected data.

Start the ZAP container then run an active scan then crawl the application then generate a results report is incorrect because an active scan should be performed after discovery. Running an active scan before crawling will miss endpoints and increase false negatives and risk of unintended impact.

Use Google Cloud Web Security Scanner then run a baseline then crawl the site then run an active scan is incorrect because it substitutes a different tool and platform instead of integrating OWASP ZAP into the Azure DevOps pipeline. The question specifically requires ZAP workflow steps rather than a separate cloud scanner.

When answering pipeline ordering questions remember to start the scanner first then collect non intrusive findings with a passive baseline then crawl to discover the site and finally run an active scan.

BrightWave Solutions uses Azure Pipelines to build and deploy web portals. You need the pipeline to run only when files inside the /frontend directory change and only when a pull request is opened. The pipeline YAML was configured with trigger paths include /frontend branches include refs/head/pr. Does this configuration meet the stated requirements?

  • ✓ B. No

The correct answer is No.

The configuration does not meet the requirements because the YAML uses the trigger block which controls builds on pushes and not on pull request creation. For pull request events you must use the pr block with branch and path filters so that the pipeline runs only when a pull request is opened. The branch filter in the example is also wrong because branch filters use the refs/heads/ form and not refs/head/pr. The path filter is not correct either because Azure path filters typically do not start with a leading slash and you should use a glob such as frontend/** to match changes inside the folder.
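
A minimal sketch of YAML that would meet the stated requirements, assuming pull requests target the main branch:

```yaml
# Do not build on plain pushes.
trigger: none

# Build only pull requests into main that change files under the frontend folder.
pr:
  branches:
    include:
      - main
  paths:
    include:
      - frontend/**
```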

The option Yes is incorrect because the given configuration will not restrict runs to pull request openings and the branch and path specifications are malformed. As a result the pipeline will not behave as required.

When you see questions about triggers check whether the YAML uses the trigger block or the pr block and verify that branch names use refs/heads/ and path filters use globs like frontend/**.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.