AZ-400 Practice Tests on Azure DevOps Engineer Exam Topics

AZ-400 DevOps Engineer Expert Certification Exam Topics

Over the past few months, I have been helping software developers, solutions architects, DevOps engineers, and even Scrum Masters who want to learn Azure DevOps gain the skills and certifications needed to stay competitive.

One of the most respected Microsoft DevOps certifications today is the AZ-400 Microsoft Certified DevOps Engineer Expert.

To pass the AZ-400 certification exam, use AZ-400 exam simulators, review AZ-400 sample test questions, and complete online AZ-400 practice exams like this one.

AZ-400 DevOps Engineer Expert Practice Questions

This set of AZ-400 practice questions focuses on commonly misunderstood concepts that frequently appear on the exam. If you can answer these confidently, you are well positioned to pass.

These are not AZ-400 exam dumps. They represent the style and reasoning required for the real exam but are not copies of actual questions.

Here are the AZ-400 practice questions and answers. Good luck!


Certification Sample Questions

A team at Meridian Software is migrating its TFVC repository into Git by using the “Import repository” tool in Contoso DevOps. What is the exact number of days of history they can import from TFVC into Git?

  • ❏ A. 365 days

  • ❏ B. 180 days

  • ❏ C. 90 days

  • ❏ D. There is no limit on history

A regional SaaS provider named Bluehaven runs multiple Azure App Service instances for live workloads and they need production alerts for those apps to automatically create incidents in their external ServiceNow incident management platform. Which Azure capability can be used to accomplish this?

  • ❏ A. Application Insights

  • ❏ B. IT Service Management connector

  • ❏ C. Azure DevOps service connections

  • ❏ D. Azure Monitor action groups

A regional bank operates an on premises ForgeRepo Server that is protected by a firewall which blocks incoming Internet connections and it stores Git repositories for source control. The operations team wants to use Azure DevOps to run builds and releases and must integrate Azure DevOps with the on premises ForgeRepo Server. What approaches will enable this integration? (Choose 2)

  • ❏ A. Service hooks

  • ❏ B. External Git service connection

  • ❏ C. Microsoft hosted agent

  • ❏ D. Self hosted agent

Your engineering team builds and deploys a service called MicroServe using Azure Pipelines and you must ensure that a custom security scanner succeeds before any code is merged into the primary branch and deployed. What should you do?

  • ❏ A. Create a service hook to invoke the external scanner

  • ❏ B. Restrict job authorization scope to only the current project for all release pipelines

  • ❏ C. Add a status check to the primary branch policy

  • ❏ D. Require a build validation policy for the primary branch

A development team at HarborApps wants a centralized place to keep configuration values so several build jobs and release workflows can use the same settings. What should they create to store those values?

  • ❏ A. Service connection

  • ❏ B. Variable group

  • ❏ C. Task group

Your engineering group at Nimbus Labs is defining a Semantic Versioning based release approach for library artifacts and you must decide which version segment to bump when changes are made. If you mark an API feature as deprecated but leave it functioning for backward compatibility which version segment should be incremented?

  • ❏ A. Patch version

  • ❏ B. Minor version

  • ❏ C. Major version

A development team at Skylark Software uses a Dependency Visualizer add on inside their Skylark DevOps project and they generate a risk graph for the repository. Which visual feature in the risk graph should they examine to determine the relative number or weight of dependencies connecting components in the project?

  • ❏ A. Node color

  • ❏ B. Edge thickness

  • ❏ C. Edge length

  • ❏ D. Node degree

A DevOps team at Nimbus Software uses Contoso DevOps and they manage work items in Contoso Boards. They plan to add dashboard widgets to monitor project indicators and they need a widget that shows the summary of results from a shared query. Which widget matches the description “Display the summary of shared query results”?

  • ❏ A. Build history

  • ❏ B. Query tile

  • ❏ C. Sprint capacity

  • ❏ D. Release pipeline overview

Which package format cannot be hosted in Contoso DevOps artifact feeds?

  • ❏ A. NPM

  • ❏ B. PHP

  • ❏ C. Maven

  • ❏ D. Python

A development team at Meridian Software uses Azure DevOps for the build and release workflow of a Java microservice and they plan to integrate SonarQube to run static code analysis and report quality metrics. Which build pipeline task should they include so SonarQube can analyze the Java project?

  • ❏ A. MSBuild

  • ❏ B. Cloud Build

  • ❏ C. Gradle

  • ❏ D. Docker

A regional fintech firm named HarborTech uses an Azure Resource Manager template to provision a three tier application. You need to ensure that the person who executes the deployment cannot view the application service credentials and database connection strings used by the deployment. Which Azure capability should be used?

  • ❏ A. Azure Resource Manager parameter file

  • ❏ B. Azure Storage table

  • ❏ C. appsettings.prod.json

  • ❏ D. Azure Key Vault

  • ❏ E. web.config file

A development team at NimbusSoft must use an Azure Pipelines YAML pipeline to compile an application and run tests against the app and its database. The requirements state that the test stages must run concurrently, the Aggregate_Test_Results stage must always run, the test stages must execute only after a successful Compile_Service stage, and the Aggregate_Test_Results stage must run after all test stages complete. The proposed pipeline stages section is:

    stages:
    - stage: Compile_Service
      jobs:
    - stage: Service_Tests
      dependsOn: [Compile_Service]
      jobs:
    - stage: DB_Tests
      dependsOn: [Compile_Service]
      jobs:
    - stage: Aggregate_Test_Results
      jobs:

Does this pipeline meet the requirements?

  • ❏ A. Yes

  • ❏ B. No

Your organization Valence Labs has a cloud subscription that hosts 35 virtual machines and you intend to manage their configurations with Azure Automation State Configuration, how should you arrange the blocks inside a Desired State Configuration file?

  • ❏ A. Configuration > Resource > Node

  • ❏ B. Configuration > Node > Resource

  • ❏ C. Node > Configuration > Resource

  • ❏ D. Resource > Configuration > Node

Your company DevStudio partners with an external team at example.com to build a web application. DevStudio uses Azure Boards to track work while the partner uses Trello. You want a Trello card to be created automatically whenever a new work item is added in Azure Boards. What actions should you perform? (Choose 2)

  • ❏ A. Use Azure Logic Apps with the Trello connector to create cards automatically

  • ❏ B. Grant Trello access to your Azure DevOps organization

  • ❏ C. Export work items from Azure Boards as a CSV and then import them into Trello

  • ❏ D. Configure a service hook subscription in Azure DevOps

A development team at Horizon Labs is building a Java application. The group already operates a SonarQube server to evaluate the quality of their C Sharp projects and they want to add automated Java analysis and continuous quality monitoring into their build pipeline. Which build task types should they add to the pipeline?

  • ❏ A. Gulp

  • ❏ B. SonarScanner

  • ❏ C. Maven or Gradle

  • ❏ D. Grunt

A development group at Brightline Systems keeps their Git repository in Azure Repos and multiple engineers work on separate feature branches at the same time. The team needs the main branch to avoid accumulating many intermediate commits when pull requests are merged. Which merge strategy should they apply to maintain a compact main branch history?

  • ❏ A. Three-way merge

  • ❏ B. Fast forward merge

  • ❏ C. Squash merge

Your organization manages a CodeForge Enterprise account and you must activate push time protection to scan for secrets in the repositories. What must you enable first?

  • ❏ A. Enable mandatory multi factor authentication for all accounts

  • ❏ B. Purchase CodeForge Advanced Security license

  • ❏ C. Subscribe to a Priority Support plan

  • ❏ D. Create a repository secret access policy

A development team at Northbridge Systems builds Node Package Manager packages in an Azure DevOps project and several repositories consume those packages. They need to reduce the disk usage from older package versions stored in Azure Artifacts. Which setting should they change?

  • ❏ A. Azure Artifacts feed retention policy

  • ❏ B. Project pipeline retention settings

  • ❏ C. Project release retention settings

  • ❏ D. Project test result retention settings

A development group has deployed an Azure App Service and linked it to an Application Insights resource. Which query language is used to retrieve records from the Log Analytics workspace that contains the Application Insights telemetry?

  • ❏ A. Transact SQL

  • ❏ B. C# .NET

  • ❏ C. Kusto Query Language

  • ❏ D. JavaScript

At StellarApps your team uses a hosted Git service for version control and a file with private keys was accidentally committed into the repository history. You need to remove the file and erase every trace of it from the repository history. Which two tools can accomplish this? (Choose 2)

  • ❏ A. Google Cloud Source Repositories

  • ❏ B. git rebase

  • ❏ C. BFG Repo-Cleaner

  • ❏ D. git filter-branch

Your team at NovaSoft is releasing updates to a customer facing service and you want to limit the impact on users while you progressively deploy and validate changes in production. Which two approaches should you use? (Choose 2)

  • ❏ A. Blue green deployment

  • ❏ B. Deployment rings

  • ❏ C. Canary releases

  • ❏ D. Feature flags

Review the Riverton Credit Union case study at example.com/documents/riverton-case and then answer the following. You are configuring the Azure DevOps dashboard and you must meet the specified technical requirements. Which dashboard widget should be used to represent Metric 3?

  • ❏ A. Cumulative flow diagram

  • ❏ B. Query tile

  • ❏ C. Release pipeline overview

  • ❏ D. Query results

  • ❏ E. Sprint burndown

  • ❏ F. Velocity

In Azure DevOps pipelines which feature lets you reference secrets that are stored in an Azure Key Vault?

  • ❏ A. Task groups

  • ❏ B. Variable groups

  • ❏ C. Deployment groups

The development team at Northbridge Systems runs a web application on Azure App Service and they have an Application Insights instance connected to the app. Which Application Insights feature should they use to determine “how page load times and other performance variables affect conversion rates across different sections of the site”?

  • ❏ A. User Flows

  • ❏ B. Funnels

  • ❏ C. Metrics Explorer

  • ❏ D. Impact

Your team uses an Azure DevOps pipeline named BuildFlow that needs to pull a public container image during execution and you must add a service connection to allow the pipeline to access that image. Which type of service connection should you create?

  • ❏ A. Docker host

  • ❏ B. Azure Kubernetes Service (AKS)

  • ❏ C. Docker registry

  • ❏ D. Azure Service Fabric

You manage an Azure DevOps artifact feed for a small software company named Meridian Apps that hosts internal packages. You must allow a group of engineers to save packages that come from upstream feeds while keeping their privileges as limited as possible. The proposed plan is to give them the Owner permission on the feed. Does this plan meet the requirement?

  • ❏ A. Yes

  • ❏ B. No

A development team at Aurora Apps uses an Azure Pipelines job to build and deliver a service. The pipeline has a custom test task that is configured to search the default working directory, merge results, and look for files that match the pattern **/RESULT-*.trx. Which test result format must the testResultsFiles pattern correspond to?

  • ❏ A. JUnit

  • ❏ B. xUnit

  • ❏ C. VSTest

  • ❏ D. NUnit

Your team stores code in DevStore and a file named secrets.log that contains confidential information was pushed to a repository by mistake. You must remove the file from the repository history across all branches. Which commands can accomplish this task? (Choose 2)

  • ❏ A. git lfs migrate import --include=secrets.log
         git push --force

  • ❏ B. git filter-repo --invert-paths --path secrets.log
         git push origin --force --all

  • ❏ C. git rm secrets.log
         git commit -m "remove secrets"
         git push --force --all

  • ❏ D. bfg --delete-files secrets.log
         git push --force

  • ❏ E. git clean -fd secrets.log --force

  • ❏ F. git revert -n HEAD~1..HEAD
         git push --force

Your operations team manages a customer portal hosted on Azure App Service for a company called Meridian Solutions and they have connected an Application Insights instance to the App Service to collect telemetry and usage data. Which Application Insights feature lets you examine how customers move through the successive steps of your application?

  • ❏ A. Retention

  • ❏ B. Application Map

  • ❏ C. Funnels

  • ❏ D. User Flows

A development group at Meridian Software plans to adopt Azure Boards for work tracking. They need the ability to create and manage product backlog items and log bugs on a Kanban board. Which work item process should they select when creating a new project in their Azure DevOps organization?

  • ❏ A. CMMI process

  • ❏ B. Basic process

  • ❏ C. Scrum process

  • ❏ D. Agile process

A development team manages an Azure DevOps account and an Azure subscription. They provisioned a Windows Server 2022 virtual machine to act as a self hosted agent for Azure Pipelines. Which credential type must be used to register the virtual machine as a self hosted agent?

  • ❏ A. Service Principal

  • ❏ B. OAuth

  • ❏ C. Service Connection

  • ❏ D. Personal Access Token

At Meridian Software a repository contains several GitHub Actions workflows and a secret has been stored as an environment secret within that repository. You need to make the secret available to workflows across multiple repositories in the organization. What should you do first?

  • ❏ A. Use Google Secret Manager to centralize the secret

  • ❏ B. Recreate the secret at the repository level

  • ❏ C. Create the secret at the organization level

A software engineering team at DataRipple must use an Azure Pipelines YAML pipeline to build a service and then run both a unit test stage and a database test stage in parallel after the build succeeds, and the stage that publishes test results must always run after both test stages complete. The pipeline uses these stages in the YAML:

    stages:
    - stage: Build_Service
      jobs:
    - stage: Run_Unit_Tests
      dependsOn: []
      jobs:
    - stage: Run_DB_Tests
      dependsOn: []
      jobs:
    - stage: Publish_Test_Report
      dependsOn:
      - Build_Service
      - Run_Unit_Tests
      - Run_DB_Tests
      condition: succeededOrFailed()
      jobs:

Does this pipeline configuration satisfy the stated requirements?

  • ❏ A. Yes the configuration satisfies the requirements

  • ❏ B. No the pipeline configuration does not meet the requirements

An IT operations team at Northbridge Cloud manages an Azure subscription with a storage account and 24 virtual machines. The team plans to send the virtual machine logs to LogRhythm for centralized analysis and they must configure AzLog to export the logs into the storage account. Which log format should they export the logs in?

  • ❏ A. EVTX

  • ❏ B. Avro

  • ❏ C. EVT

  • ❏ D. JSON

Your team maintains a GitHub repository named CodeBaseX and a web application named StoreFrontX. CodeBaseX contains the source code for StoreFrontX. You need to run a ZAP spider against StoreFrontX and then execute an AJAX spider scan once the initial spider completes. Which GitHub Action should you select to perform those scans?

  • ❏ A. Google Cloud Web Security Scanner

  • ❏ B. ZAP Full Scan

  • ❏ C. ZAP API Scan

A team creates a Git repository named RepoA in Azure Repos for a fintech startup. They must ensure that each pull request is verified by an external code review tool which reports its result back to the pull request and the chosen solution must minimize administrative overhead. Which type of policy should be applied to enforce this requirement?

  • ❏ A. Service hook

  • ❏ B. Branch policy

  • ❏ C. Status check

  • ❏ D. Build validation

A continuous integration pipeline at NimbusSoft defines four jobs named Alpha, Beta, Gamma and Delta within Azure Pipelines. Alpha has its own steps. Beta and Gamma declare no dependencies so they can start independently. Delta specifies a dependency on Gamma using the field “dependsOn” so Delta will begin only after Gamma finishes. Can Gamma and Delta execute concurrently?

  • ❏ A. Yes

  • ❏ B. No

A small technology firm runs applications on a Kubernetes cluster and needs a solution to package configure and deploy releases across environments. Which tool below provides package management for Kubernetes?

  • ❏ A. Artifact Registry

  • ❏ B. Helm

  • ❏ C. Artifactory

A deployment pipeline runs an AzureMonitor@1 task with connectedServiceNameARM set to deployConn, ResourceGroupName set to rg-apps, filterType set to none, severity set to 'Sev0,Sev1,Sev2,Sev3', timeRange set to '5h', and alertState set to 'New'. Will this task notify only for alerts that are in the New state?

  • ❏ A. No

  • ❏ B. Yes

An engineering team at NovusSoft stores C# projects in GitHub Enterprise repositories and they want to enable CodeQL analysis across those repositories. What is the required action to activate CodeQL scanning for the C# code?

  • ❏ A. Enable Dependabot alerts

  • ❏ B. Add a GitHub Actions workflow that runs CodeQL in each repository

  • ❏ C. Enable Dependabot security updates

  • ❏ D. Enable GitHub Advanced Security

You use Azure Boards to track work items and Azure Repos to store your source code for a project. There is a bug work item whose ID is 210. You want a commit to automatically transition that work item to the Resolved state when the commit references it. What text should you put in the commit message?

  • ❏ A. #210 completes

  • ❏ B. Resolves #PROJ-210

  • ❏ C. Verifies #210

  • ❏ D. Fixes #210

Your company uses Azure DevOps to run build pipelines and release pipelines and the engineering team is large and expands regularly. You have been instructed to automate user and license administration whenever feasible. Which of the following tasks cannot be automated?

  • ❏ A. Managing user entitlements

  • ❏ B. Assigning access level licenses to accounts

  • ❏ C. Updating team and security group memberships

  • ❏ D. Acquiring additional paid licenses

A development team at MeridianApps maintains an Azure DevOps project with a release pipeline that has two stages named Staging and Production. Staging deploys code to an Azure web app called stagingapp and Production deploys to an Azure web app called productionapp. The team needs deployments to production to be prevented when Application Insights raises Failed requests alerts after completing a new deployment to staging. What should be configured for the Production stage?

  • ❏ A. Add a pipeline task to create or update Application Insights alert rules

  • ❏ B. Configure a pre deployment gate that checks Application Insights Failed requests alerts

  • ❏ C. Require a manual approval from a team member or stakeholder in pre deployment conditions

  • ❏ D. Configure a pre deployment trigger to invoke an external service before Production deployment

Your team at Meridian Software uses Azure Boards in a DevOps project and you must add a dashboard widget that displays the burndown of remaining work for a single sprint iteration. Which widget type should you choose?

  • ❏ A. Velocity

  • ❏ B. Cumulative flow diagram (CFD)

  • ❏ C. Sprint burndown

  • ❏ D. Burndown chart

  • ❏ E. Control chart

  • ❏ F. Lead time

Your team uses Git and you enabled GitHub code scanning for a repository. A developer opens a pull request from a feature branch into the primary branch and the scan output shows the message “Analysis not found.” You must ensure the code scanning finishes successfully for that pull request. Which two steps should you perform? (Choose 2)

  • ❏ A. Add the feature branch name to the workflow on push trigger

  • ❏ B. Update the code in the pull request to trigger a new scan

  • ❏ C. Close the pull request and open a new pull request from the primary branch

  • ❏ D. Add the repository’s primary branch name to the workflow on push trigger

  • ❏ E. Create a separate code scanning workflow file for the repository

A development team uses the BlueWave CI pipeline to compile and deploy a web application. They need a testing approach that confirms the application communicates properly with databases and external services and ensures dependent components function together. Which type of test should they implement?

  • ❏ A. Acceptance testing

  • ❏ B. Load testing

  • ❏ C. Integration testing

  • ❏ D. Unit testing

  • ❏ E. Smoke testing

A backend team at Nimbus Labs stores its application code in a Git repository and they will open several feature branches from the main branch. What is the recommended way to handle these feature branches?

  • ❏ A. Keep feature branches active for long periods

  • ❏ B. Create feature branches that are short lived and merge them back into main frequently

  • ❏ C. Treat feature branches as permanent protected primary branches

Your engineering team at Clearwater Apps keeps its source in a GitHub repository and the security team must approve every code change before it is merged into the primary branch, so what actions can you take to enforce this requirement? (Choose 2)

  • ❏ A. Google Cloud Build

  • ❏ B. Add a LICENSE file

  • ❏ C. Require signed commits

  • ❏ D. Add a CODEOWNERS file

  • ❏ E. Apply a branch protection rule to the primary branch

A development team at AzureNova is building an Azure Pipelines release flow and they need an automated check that queries Azure Boards to confirm there are no active work items before a build goes to production. Which type of check should they add?

  • ❏ A. Manual validations

  • ❏ B. Pre deployment gates

  • ❏ C. Post deployment approvals

  • ❏ D. Pre deployment approvals

The engineering team at Meridian Financial runs Team Foundation Server 2015 and plans to move its projects to Azure DevOps Services. You must provide a migration plan that maintains Team Foundation Version Control changeset timestamps and work item revision timestamps. The migration should carry all TFS artifacts and minimize manual effort. You have recommended upgrading TFS to the latest RTW release. What other step should you advise?

  • ❏ A. Use the TFS Integration Platform to migrate artifacts

  • ❏ B. Use the TFS Database Import Service for Azure DevOps Services

  • ❏ C. Install the TFS Java SDK for integration tasks

  • ❏ D. Upgrade PowerShell to the latest supported release

AZ-400 Sample Questions Answered

A team at Meridian Software is migrating its TFVC repository into Git by using the “Import repository” tool in Contoso DevOps. What is the exact number of days of history they can import from TFVC into Git?

  • ✓ B. 180 days

The correct answer is 180 days.

The Azure DevOps Import repository tool for TFVC limits the amount of history it converts to the most recent 180 days. This limit is enforced to keep imports manageable in time and size and to prevent extremely large conversions from impacting service performance.

365 days is incorrect because the import tool does not support that long of a history window and the documented limit is shorter than a year.

90 days is incorrect because the supported window is longer than 90 days and the actual limit is 180 days.

There is no limit on history is incorrect because the import process does impose a maximum history range and it will not import unlimited TFVC changesets into Git.

When studying migration limits remember to memorize the numeric limits and which tool they apply to. For TFVC to Git imports the key number to remember for the Import repository tool is 180 days.

A regional SaaS provider named Bluehaven runs multiple Azure App Service instances for live workloads and they need production alerts for those apps to automatically create incidents in their external ServiceNow incident management platform. Which Azure capability can be used to accomplish this?

  • ✓ B. IT Service Management connector

The correct answer is IT Service Management connector.

The IT Service Management connector is a built in integration that lets Azure Monitor alerts create and update incidents in external ITSM platforms such as ServiceNow. It maps alert fields to incident fields and provides a managed workflow so teams do not have to implement and maintain custom webhook handlers to create incidents.

Application Insights is focused on application performance monitoring and telemetry and it is not the mechanism used to push alerts directly into ServiceNow as incidents.

Azure DevOps service connections are for connecting Azure DevOps to external services for pipelines and deployments and they do not provide incident creation from Azure Monitor alerts.

Azure Monitor action groups are used to route notifications and can invoke webhooks or functions as part of a custom solution but they are not the dedicated ITSM connector that directly provisions incidents in ServiceNow.

When a question asks about sending alerts to an incident management platform look for the managed ITSM integration. The IT Service Management connector is the specific capability that creates incidents in systems like ServiceNow.

A regional bank operates an on premises ForgeRepo Server that is protected by a firewall which blocks incoming Internet connections and it stores Git repositories for source control. The operations team wants to use Azure DevOps to run builds and releases and must integrate Azure DevOps with the on premises ForgeRepo Server. What approaches will enable this integration? (Choose 2)

  • ✓ B. External Git service connection

  • ✓ D. Self hosted agent

The correct options are External Git service connection and Self hosted agent.

External Git service connection is the service connection type in Azure DevOps that lets you reference Git repositories hosted outside of Azure Repos. It provides the configuration and credentials Azure DevOps uses to point pipelines at an on premises Git endpoint so the repository can be used as the source for builds.

Self hosted agent runs on a machine inside the bank network or otherwise able to reach the ForgeRepo Server. Because the server blocks incoming Internet connections a self hosted agent is required so the pipeline agent can clone, build, and push code without requiring inbound access from Microsoft hosted infrastructure.

Service hooks are incorrect because they are used to notify or trigger external services on Azure DevOps events and they do not establish a Git source connection for running builds against an on premises repository.

Microsoft hosted agent is incorrect because those agents run in Azure and need network access to the repository over the public internet. They cannot reach a ForgeRepo Server that blocks incoming Internet connections unless the network is changed to allow that access.

When a repo is behind a firewall prefer a self hosted agent to run pipelines inside the network and use an External Git service connection to register the on premises repository in Azure DevOps.
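To make the self hosted agent part concrete, the sketch below shows where the agent choice surfaces in a pipeline definition. The pool name OnPremAgents is hypothetical, and the ForgeRepo repository itself would be registered separately through the External Git service connection, so treat this as an illustration of the pattern rather than a complete pipeline.

    # Jobs run on agents inside the bank network, so they can reach the ForgeRepo Server directly.
    pool:
      name: 'OnPremAgents'        # self hosted agent pool registered in Azure DevOps (hypothetical name)

    steps:
    - script: git --version       # any clone or build step executes on the on premises agent
      displayName: Run build work on the self hosted agent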

Your engineering team builds and deploys a service called MicroServe using Azure Pipelines and you must ensure that a custom security scanner succeeds before any code is merged into the primary branch and deployed. What should you do?

  • ✓ C. Add a status check to the primary branch policy

Add a status check to the primary branch policy is correct.

A status check branch policy requires an external service to post a pass or fail status on the commit or pull request and Azure Repos will block merges until the required status is present. This lets your custom security scanner run outside of the build pipeline and still prevent merges to the primary branch until it reports success.

Create a service hook to invoke the external scanner is incorrect because a service hook can trigger or notify external systems but it does not by itself enforce a merge gate based on the scanner result.

Restrict job authorization scope to only the current project for all release pipelines is incorrect because that setting pertains to pipeline job permissions and does not create a premerge policy that requires the scanner to succeed.

Require a build validation policy for the primary branch is incorrect because build validation runs Azure Pipelines builds and would require the scanner to be executed inside that build. A status check is the proper policy when the scanner runs externally and reports its status back to the repo.

When an external tool must block a merge prefer a status check branch policy because it enforces a required external status on the commit or pull request and prevents merging until the external service reports success.

A development team at HarborApps wants a centralized place to keep configuration values so several build jobs and release workflows can use the same settings. What should they create to store those values?

  • ✓ B. Variable group

The correct answer is Variable group.

Variable group provides a centralized library for storing configuration values and secrets so multiple build jobs and release pipelines can reuse the same settings. You can link Variable group to several pipelines and update values in one place so all linked jobs pick up the changes automatically.

Variable group supports secret variables and can be referenced from YAML or classic pipelines which makes it appropriate for cross pipeline configuration management and consistent releases.

Service connection represents credentials and endpoints for external services and is used to authorize pipelines to access resources such as cloud subscriptions or artifact feeds. It does not serve as a general store for arbitrary build configuration values.

Task group bundles multiple pipeline tasks into a reusable unit to simplify pipeline definitions. It helps reuse steps but it is not designed to hold centralized configuration values for multiple builds or releases.

When a question mentions sharing configuration across pipelines think Variable group and look for wording about a centralized or reusable store for settings rather than credentials or grouped tasks.
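As a small illustration of the idea, a variable group created in the Pipelines Library can be referenced from any pipeline that needs the shared settings. The group name shared-config and the variable apiBaseUrl below are hypothetical examples rather than values from the question.

    variables:
    - group: shared-config            # variable group defined once in the Library
    - name: buildConfiguration        # a pipeline local variable can sit alongside the group
      value: Release

    steps:
    - script: echo "Calling $(apiBaseUrl) with $(buildConfiguration)"   # apiBaseUrl comes from the group
      displayName: Use a value from the shared variable group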

Your engineering group at Nimbus Labs is defining a Semantic Versioning based release approach for library artifacts and you must decide which version segment to bump when changes are made. If you mark an API feature as deprecated but leave it functioning for backward compatibility which version segment should be incremented?

  • ✓ B. Minor version

The correct answer is Minor version.

Minor version is appropriate when you mark an API feature as deprecated but keep it functioning for backward compatibility because you are not removing or breaking the API. Deprecation is a change in the API lifecycle and it signals to users that the feature will be removed in the future while current clients continue to work.

Patch version is intended for backwards compatible bug fixes only and it does not communicate a change in the API lifecycle. Deprecating a feature is more significant than a bug fix so a patch bump would understate the impact.

Major version is reserved for incompatible changes that break existing clients. Since the deprecated API remains functional and does not introduce breaking changes there is no reason to increment the major version.

When a change keeps existing behavior but signals future removal choose a minor bump to communicate deprecation while preserving backward compatibility.

A development team at Skylark Software uses a Dependency Visualizer add on inside their Skylark DevOps project and they generate a risk graph for the repository. Which visual feature in the risk graph should they examine to determine the relative number or weight of dependencies connecting components in the project?

  • ✓ B. Edge thickness

The correct answer is Edge thickness.

Edge thickness is the visual feature that encodes the weight or number of dependencies between components in a graph. Visualizers scale edge thickness so that thicker lines represent stronger or more numerous dependencies and thinner lines represent weaker or fewer dependencies. This makes thickness the most direct way to compare the relative weight of connections between specific components.

The option Node color is incorrect because color is usually used to show categories or a scalar metric such as risk level or status rather than the numeric weight of a specific dependency.

The option Edge length is incorrect because length most often reflects the layout algorithm or spatial separation and it is not a reliable encoding for dependency weight in typical risk graphs.

The option Node degree is incorrect because degree reports how many connections a node has overall and it does not show the weight of individual edges between pairs of components. Degree helps identify highly connected nodes but not the strength of each dependency.

When a question asks about weight or number of dependencies look for visual encodings of quantity such as edge thickness or numeric labels. Remember that node degree shows how many connections a node has but it does not replace edge weight as a measure of pairwise dependency strength.

A DevOps team at Nimbus Software uses Contoso DevOps and they manage work items in Contoso Boards. They plan to add dashboard widgets to monitor project indicators and they need a widget that shows the summary of results from a shared query. Which widget will display “Display the summary of shared query results”?

  • ✓ B. Query tile

The correct option is Query tile.

The Query tile widget shows a summary of results from a shared query. It displays counts of work items that match the query and provides a link back to the full query so team members can drill into details and track changes over time.

The Build history widget shows recent build pipeline runs and their statuses and it does not summarize results from a shared work item query.

The Sprint capacity widget displays team capacity and remaining effort for a sprint and it is not designed to surface shared query results.

The Release pipeline overview widget provides status and summaries for release pipelines and it does not display counts from a work item query.

When a question asks for a dashboard widget that summarizes work item queries remember that the Query tile shows counts and links to shared queries and that is the key phrase to look for.

Which package format cannot be hosted in Contoso DevOps artifact feeds?

  • ✓ B. PHP

PHP is the correct option because Contoso DevOps artifact feeds do not natively host PHP Composer packages.

Azure DevOps Artifacts supports several package ecosystems such as npm, Maven, Python, NuGet, and Universal packages, and those formats can be published to and consumed from feeds directly. Because PHP relies on the PHP Composer ecosystem there is no built in Composer registry in Artifacts and so PHP packages cannot be hosted in the same native way.

NPM is incorrect because Azure DevOps Artifacts provides first class support for npm and you can publish and consume npm packages from feeds.

Maven is incorrect because Artifacts can act as a Maven repository and you can host Maven packages in feeds.

Python is incorrect because Python packages are supported and you can publish wheels and source distributions to Artifacts feeds.

When you are asked about supported package formats think of the major package managers and map them to the product features. Remember that Composer is the PHP package manager and it is not natively hosted by Azure DevOps Artifacts.

A development team at Meridian Software uses Azure DevOps for the build and release workflow of a Java microservice and they plan to integrate SonarQube to run static code analysis and report quality metrics. Which build pipeline task should they include so SonarQube can analyze the Java project?

  • ✓ C. Gradle

The correct option is Gradle.

Gradle is the build tool commonly used for Java projects and SonarQube integrates with it through the SonarScanner for Gradle or the Gradle SonarQube plugin. In an Azure DevOps pipeline you add a Gradle task to run the build and the SonarQube analysis goal so the scanner can collect sources, compiled classes and test coverage data for the Java microservice.

MSBuild is designed for .NET and Visual Studio projects and it will not perform a Java Gradle build or invoke the Java SonarScanner, so it is not appropriate for analyzing a Java microservice.

Cloud Build is a Google Cloud build service and not an Azure DevOps pipeline task, so it is not the correct choice within an Azure DevOps build pipeline for running SonarQube analysis on a Java project.

Docker is used to build and run container images and it does not replace the project build tool that produces the compiled classes and reports SonarQube needs. You can run analysis inside a container, but you still need a Gradle build step to produce the artifacts SonarQube analyzes.

Focus on the project build tool when deciding which pipeline task to include. For Java projects choose the Gradle task or the equivalent Java build tool so SonarQube can run during the build.
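As a hedged sketch of how this is commonly wired up with the SonarQube extension for Azure DevOps, the prepare task runs before the Gradle build and the Gradle task is asked to trigger the analysis. The service connection name SonarQubeConn is hypothetical and the task versions may differ in your organization, so check the extension documentation before copying this.

    steps:
    - task: SonarQubePrepare@5
      inputs:
        SonarQube: 'SonarQubeConn'        # service connection to the SonarQube server (hypothetical name)
        scannerMode: 'Other'              # let the Gradle plugin drive the analysis
    - task: Gradle@3
      inputs:
        tasks: 'build'                    # compile and test the Java microservice
        sonarQubeRunAnalysis: true        # hand the build output to SonarQube for analysis
    - task: SonarQubePublish@5
      inputs:
        pollingTimeoutSec: '300'          # wait for the quality gate result to come back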

A regional fintech firm named HarborTech uses an Azure Resource Manager template to provision a three tier application. You need to ensure that the person who executes the deployment cannot view the application service credentials and database connection strings used by the deployment. Which Azure capability should be used?

  • ✓ D. Azure Key Vault

The correct answer is Azure Key Vault.

Azure Key Vault is a purpose built secret management service that stores connection strings and credentials and enforces access control and auditing. You can configure your deployment to retrieve secrets from Azure Key Vault at runtime using a managed identity or access policies so the person who runs the deployment does not need direct permission to read the secrets.

Azure Resource Manager parameter file is incorrect because parameter files are used to pass values into templates and can expose secrets to anyone who can view the file or the deployment inputs. The parameter file is not a centralized secret store with access controls like Azure Key Vault.

Azure Storage table is incorrect because it is a general purpose NoSQL store and it is not designed for secure secret management. It does not provide the secret lifecycle, access policies, and key protection that Azure Key Vault provides.

appsettings.prod.json is incorrect because it is a configuration file and embedding credentials there can leak secrets through source control or deployments. It is not a managed secret store with enforced access controls.

web.config file is incorrect because it is also a configuration file for .NET applications and placing secrets there makes them visible to anyone with file access. It lacks centralized secret management features that Azure Key Vault offers.

When you must hide credentials from the deployer use Key Vault with a managed identity and reference secrets from your template instead of embedding them in files or parameter files.
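One way to see the idea in an Azure Pipelines deployment, as a sketch rather than the only approach, is to let the pipeline fetch the secrets with the AzureKeyVault task so they arrive as masked variables that the person triggering the run never sees in plain text. The service connection harbortech-deploy, the vault harbortech-kv, and the secret DbConnectionString are hypothetical names.

    steps:
    - task: AzureKeyVault@2
      inputs:
        azureSubscription: 'harbortech-deploy'   # ARM service connection that is allowed to read the vault
        KeyVaultName: 'harbortech-kv'            # vault that stores the credentials and connection strings
        SecretsFilter: 'DbConnectionString'      # pull only the secrets this deployment needs
        RunAsPreJob: false
    - script: echo "Secret fetched; its value is masked in pipeline logs"
      env:
        DB_CONNECTION: $(DbConnectionString)     # secrets must be mapped into steps explicitly
      displayName: Use the secret without exposing it to the deployer

ARM templates can also reference Key Vault secrets directly from a parameter file, which achieves the same goal without a pipeline task.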

A development team at NimbusSoft must use an Azure Pipelines YAML pipeline to compile an application and run tests against the app and its database. The requirements state that the test stages must run concurrently, the Aggregate_Test_Results stage must always run, the test stages must execute only after a successful Compile_Service stage, and the Aggregate_Test_Results stage must run after all test stages complete. The proposed pipeline stages section is stages: – stage: Compile_Service jobs: – stage: Service_Tests dependsOn: [Compile_Service] jobs: – stage: DB_Tests dependsOn: [Compile_Service] jobs: – stage: Aggregate_Test_Results jobs: Does this pipeline meet the requirements?

  • ✓ B. No

No is correct.

The two test stages, Service_Tests and DB_Tests, depend only on Compile_Service so they will run after a successful Compile_Service and they can run concurrently because there is no dependency between them.

The Aggregate_Test_Results stage has no dependsOn and no explicit condition. When dependsOn is omitted a stage defaults to depending on the immediately prior stage, which in this layout is DB_Tests. That means Aggregate_Test_Results will start after DB_Tests completes and it may start before Service_Tests finishes. It also will not automatically run if an upstream stage fails because there is no condition set to always run.

Yes is wrong because the pipeline does not ensure that Aggregate_Test_Results waits for all test stages to complete and it does not ensure that Aggregate_Test_Results always runs. To meet the requirements Aggregate_Test_Results must depend on both test stages and must include a condition such as always() so it runs regardless of prior success or failure.

Add a dependsOn list with all required stages when you must wait for multiple stages to finish and add the condition value of always() when a stage must run even if prior stages fail.
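A minimal corrected sketch of the stages section, assuming the real jobs replace the placeholder script steps, could look like the following. The dependsOn list and the always() condition on the last stage are the two changes the explanation calls for.

    stages:
    - stage: Compile_Service
      jobs:
      - job: Build
        steps:
        - script: echo "compile the service"              # placeholder build step
    - stage: Service_Tests
      dependsOn: [Compile_Service]
      jobs:
      - job: ServiceTests
        steps:
        - script: echo "run service tests"                # placeholder test step
    - stage: DB_Tests
      dependsOn: [Compile_Service]
      jobs:
      - job: DatabaseTests
        steps:
        - script: echo "run database tests"               # placeholder test step
    - stage: Aggregate_Test_Results
      dependsOn:
      - Service_Tests
      - DB_Tests
      condition: always()                                 # run even when a test stage fails
      jobs:
      - job: Aggregate
        steps:
        - script: echo "collect and publish test results" # placeholder aggregation step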

Your organization Valence Labs has a cloud subscription that hosts 35 virtual machines and you intend to manage their configurations with Azure Automation State Configuration, how should you arrange the blocks inside a Desired State Configuration file?

  • ✓ B. Configuration > Node > Resource

Configuration > Node > Resource is correct. This is the required hierarchy for a Desired State Configuration file when using Azure Automation State Configuration.

The Configuration > Node > Resource pattern means you declare a configuration block at the top level which defines the configuration name and parameters. Inside that configuration you create one or more node blocks that target specific machines or node groups. Within each node block you list the resource declarations that define the desired state for that node, for example file or package resources.

Configuration > Resource > Node is wrong because resources must be scoped inside node blocks and not the other way around. A configuration file first organizes targets as nodes and then applies resources to those nodes.

Node > Configuration > Resource is wrong because the configuration block must come first to define the configuration context and parameters. You cannot define nodes outside of a containing configuration block.

Resource > Configuration > Node is wrong because resources are the lowest level of the hierarchy and they must be placed inside node blocks which in turn are inside a configuration block. Starting with a resource would violate the DSC file structure.

Remember the top down order when you write or read DSC files. Think of a Configuration that contains Node entries and then the Resource declarations inside those nodes.

Your company DevStudio partners with an external team at example.com to build a web application. DevStudio uses Azure Boards to track work while the partner uses Trello. You want a Trello card to be created automatically whenever a new work item is added in Azure Boards. What actions should you perform? (Choose 2)

  • ✓ B. Grant Trello access to your Azure DevOps organization

  • ✓ D. Configure a service hook subscription in Azure DevOps

Grant Trello access to your Azure DevOps organization and Configure a service hook subscription in Azure DevOps are correct.

Configure a service hook subscription in Azure DevOps creates an automated subscription that triggers when a work item is added. Service hooks let Azure DevOps push events to an external service so Trello can receive the event and create a card automatically.

Grant Trello access to your Azure DevOps organization is required so that the service hook can authenticate to Trello and create cards on behalf of your organization. This establishes the permissions and trust needed for Azure DevOps to call Trello APIs to create cards in the partner board.

Use Azure Logic Apps with the Trello connector to create cards automatically is not the selected answer for this scenario. Azure Logic Apps can be used for custom integration flows but the built in service hook plus granting Trello access is the straightforward supported approach to automatically create Trello cards from Azure Boards.

Export work items from Azure Boards as a CSV and then import them into Trello is incorrect because that is a manual bulk transfer and it does not provide automatic card creation when new work items are added.

When the exam asks about automatic integration look for answers that mention event subscriptions and granting external access. Service hooks and granting access are common requirements to enable automatic cross tool workflows.

A development team at Horizon Labs is building a Java application. The group already operates a SonarQube server to evaluate the quality of their C Sharp projects and they want to add automated Java analysis and continuous quality monitoring into their build pipeline. Which build task types should they add to the pipeline?

  • ✓ C. Maven or Gradle

The correct answer is Maven or Gradle.

Java applications are built and tested with tools such as Maven or Gradle and those build tasks produce compiled classes test results and coverage data that SonarQube needs to perform full Java analysis and continuous quality monitoring.

Using Maven or Gradle allows you to run analysis as part of the build lifecycle through the build plugins so the scanner can access bytecode and test artifacts which yields more accurate results than running a generic scanner as a separate step.

Gulp is a JavaScript task runner and it is not the standard tool for compiling and testing Java projects so it will not integrate properly for Java analysis.

SonarScanner is a generic scanner implementation and there is a CLI available but for Java it is best to integrate analysis into the Maven or Gradle build so the scanner can use compiled classes and coverage reports rather than relying on a standalone scanner task.

Grunt is also a JavaScript task runner and it is aimed at front end workflows rather than building Java applications so it is not the correct choice here.

When a question asks which build task types to add for Java projects think of the native Java build tools and how analysis plugins run during the build lifecycle. Use Maven or Gradle where possible to get accurate Java reports.

A development group at Brightline Systems keeps their Git repository in Azure Repos and multiple engineers work on separate feature branches at the same time. The team needs the main branch to avoid accumulating many intermediate commits when pull requests are merged. Which merge strategy should they apply to maintain a compact main branch history?

  • ✓ C. Squash merge

Squash merge is correct because it collapses all commits from a feature branch into a single commit when the pull request is merged and that keeps the main branch history compact.

With a squash merge the intermediate commits created during development are combined into one commit on main. This reduces clutter from work in progress commits and makes the main branch easier to read and to revert if needed while still preserving the overall changes introduced by the feature.

Three-way merge is wrong because that strategy produces a merge commit and preserves each individual commit from the feature branch. That behavior results in many intermediate commits being added to main and it does not keep the history compact.

Fast forward merge is wrong because it advances the main branch pointer to include each commit from the feature branch instead of collapsing them. It only works when main has no new commits and it will not reduce the number of intermediate commits accumulated on main.

When a question focuses on keeping history compact look for the option that produces a single commit per merged branch rather than options that preserve all development commits.

Your organization manages a CodeForge Enterprise account and you must activate push time protection to scan for secrets in the repositories. What must you enable first?

  • ✓ B. Purchase CodeForge Advanced Security license

The correct answer is Purchase CodeForge Advanced Security license.

Push time protection for scanning secrets is provided as a feature of the CodeForge Advanced Security offering, and you must have the Purchase CodeForge Advanced Security license active for your organization or repositories before you can enable push time secret scanning or enforcement. The license enables the scanning engines and the administrative controls that let you block or warn on commits that contain secrets.

Enable mandatory multi factor authentication for all accounts is not the prerequisite for turning on push time protection. Multi factor authentication improves account security but it does not unlock the secret scanning or push protection features.

Subscribe to a Priority Support plan is unrelated to enabling push time secret scanning. A support plan may help you get help faster but it does not provide the product capabilities required to run push time scans.

Create a repository secret access policy is not the feature you must enable first. Repository policies may help govern access or handling of secrets but they do not replace the Advanced Security license that supplies the scanning and push protection functionality.

On questions about enabling a specific security feature look for whether the feature is part of an add on product or license. If the feature is described as a product capability then check whether that product or license must be purchased and enabled first. Important features are often gated by an Advanced Security or similar license.

A development team at Northbridge Systems builds Node Package Manager packages in an Azure DevOps project and several repositories consume those packages. They need to reduce the disk usage from older package versions stored in Azure Artifacts. Which setting should they change?

  • ✓ B. Project pipeline retention settings

The correct option is Project pipeline retention settings.

Pipeline retention controls how long build runs and their produced artifacts are kept. Reducing retention for pipelines will remove older builds and their artifacts which in turn reduces the disk space used by package versions that were produced or tied to those builds in Azure Artifacts.

Azure Artifacts feed retention policy is not the chosen answer here because feed retention focuses on rules inside the feed for unreferenced or old package cleanup. That setting can help in some scenarios but the question targets changing project level build behavior and the pipeline retention setting is the direct control for artifacts produced by builds.

Project release retention settings is incorrect because it applies to classic release pipeline runs and their artifacts and does not govern retention of build pipeline outputs where the NPM packages are produced.

Project test result retention settings is incorrect because it only controls how long test results and related attachments are kept and it does not affect packages or build artifacts stored in Azure Artifacts.

When a question links disk usage to build outputs think about pipeline retention first because it directly removes old run artifacts and associated package versions.

A development group has deployed an Azure App Service and linked it to an Application Insights resource. Which query language is used to retrieve records from the Log Analytics workspace that contains the Application Insights telemetry?

  • ✓ C. Kusto Query Language

The correct answer is Kusto Query Language.

Kusto Query Language is the query language used by Azure Monitor Logs and by Application Insights to retrieve telemetry that is stored in a Log Analytics workspace. It uses a readable, pipe based syntax and provides powerful operators for filtering, summarizing, and visualizing log and telemetry data.

Transact SQL is the SQL dialect used for querying SQL Server and Azure SQL databases and it is not the language used to query Log Analytics workspaces.

C# .NET is a programming language and framework for writing application code and it is not a query language for Application Insights or Log Analytics.

JavaScript is a scripting language commonly used for web and server development and it is not the language used to run queries against Azure Monitor Logs or Application Insights.

When a question asks about querying Application Insights or a Log Analytics workspace remember to look for Kusto Query Language or the abbreviation KQL since that is the language used by Azure Monitor Logs.

At StellarApps your team uses a hosted Git service for version control and a file with private keys was accidentally committed into the repository history. You need to remove the file and erase every trace of it from the repository history. Which two tools can accomplish this? (Choose 2)

  • ✓ C. BFG Repo-Cleaner

  • ✓ D. git filter-branch

The correct options are BFG Repo-Cleaner and git filter-branch.

The BFG Repo-Cleaner is a purpose built tool that rewrites Git history to remove files and sensitive data. It is designed to be fast and simple for whole repository cleanups and is often the easiest choice when you need to purge keys or large files from many commits.

The git filter-branch command is a built in Git mechanism that can apply filters to every commit so you can remove a file from the entire history. It is flexible and powerful but it is slower and more complex than modern alternatives and the Git project now recommends tools like git filter-repo for heavy or scripted rewrites.

Both tools rewrite commit history so you must force push the cleaned branches and coordinate with your team so everyone reclones or resets their local repositories to avoid divergent histories.

The Google Cloud Source Repositories option is incorrect because a hosted repository service does not by itself remove a file from past commits. You still need a history rewriting tool to purge the file and then push the cleaned history back to the host.

The git rebase option is incorrect because rebase is intended for editing or replaying commits in a linear history and it is not practical for purging a file across an entire repository or across many branches. It can handle small manual edits but it does not scale for complete history cleansing.

When a question asks about erasing secrets from Git history focus on tools that rewrite history and remember to coordinate with your team because you will need to force push and have collaborators reclone or reset. Prefer BFG for speed and git filter-repo or git filter-branch for more complex scripted filters.

Your team at NovaSoft is releasing updates to a customer facing service and you want to limit the impact on users while you progressively deploy and validate changes in production. Which two approaches should you use? (Choose 2)

  • ✓ B. Deployment rings

  • ✓ D. Feature flags

The correct options are Deployment rings and Feature flags.

Deployment rings let you stage releases by rolling changes out to progressively larger groups of users. This approach limits the blast radius and gives you time to validate behavior in production before exposing everyone to the update.

Feature flags let you toggle features at runtime so you can enable functionality for a small subset of users or turn it off instantly if a problem appears. Feature flags provide fine grained control for progressive exposure and fast rollback without a redeploy.

Blue green deployment uses two complete environments and switches traffic from the old to the new environment in a single cutover. That reduces downtime risk but does not provide the same staged, progressive validation as the selected answers.

Canary releases are another progressive rollout technique used in many pipelines, but this question identifies Deployment rings and Feature flags as the intended strategies, so the Canary releases option is not selected here.

When a question asks about minimizing user impact during production changes look for options that explicitly support gradual exposure and runtime control such as deployment rings and feature flags.

Review the Riverton Credit Union case study at example.com/documents/riverton-case and then answer the following. You are configuring the Azure DevOps dashboard and you must meet the specified technical requirements. Which dashboard widget should be used to represent Metric 3?

  • ✓ B. Query tile

The correct option is Query tile.

Query tile is designed to display a single numeric value or an aggregated result derived from a saved work item query and that makes it the right choice for Metric 3 when the requirement is to show a single KPI on the Azure DevOps dashboard.

Query tile can be configured to point to a saved query and to show the total count or a specific aggregation so the dashboard viewer sees the metric at a glance without needing to open a list or chart.

Cumulative flow diagram displays the amount of work in each state over time and it is used to visualize flow and bottlenecks rather than to present a single aggregated metric.

Release pipeline overview shows the status and health of release pipelines and deployments and it does not present a single work item metric like Metric 3.

Query results returns a list of work items that match a query and it is useful for inspecting items and details rather than for showing a single numeric KPI.

Sprint burndown charts remaining work across a sprint and it is focused on sprint progress rather than a standalone aggregated metric for Metric 3.

Velocity displays completed work across past sprints to show team throughput and it does not provide a single query based KPI display suitable for Metric 3.

When the requirement is to show a single KPI check whether the widget can display a numeric aggregation or only lists. Use Query tile for single numeric values and use list or chart widgets when you need item details or trend visualizations.

In Azure DevOps pipelines which feature lets you reference secrets that are stored in an Azure Key Vault?

  • ✓ B. Variable groups

The correct option is Variable groups.

Variable groups can be linked to an Azure Key Vault so pipelines can reference secrets as variables. You create a variable group in the Library and link it to a Key Vault by configuring a service connection and granting access so the pipeline can retrieve the secrets at runtime. Marking the secrets as secret in the variable group ensures they are masked in logs and handled securely.

Task groups let you bundle and reuse a sequence of pipeline tasks across builds and releases but they do not provide a way to reference or sync secrets from an Azure Key Vault.

Deployment groups define sets of target machines and agents for deployments and they are concerned with deployment topology rather than storing or exposing Key Vault secrets.

When you need Key Vault secrets in a pipeline link the vault to a variable group and make sure the pipeline has a service connection with the right access policies so secrets can be retrieved securely.
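As an illustration, a minimal YAML sketch that consumes a Key Vault backed variable group might look like the following, where the group name app-secrets and the secret name DbPassword are assumptions for the example:

    # The "app-secrets" group is assumed to be linked to a Key Vault in the Library
    variables:
    - group: app-secrets

    steps:
    - script: ./deploy.sh
      displayName: Deploy using a Key Vault secret
      env:
        DB_PASSWORD: $(DbPassword)   # secret variables must be mapped into env explicitly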

The development team at Northbridge Systems runs a web application on Azure App Service and they have an Application Insights instance connected to the app. They need to use Application Insights features to determine “how page load times and other performance variables affect conversion rates across different sections of the site”?

  • ✓ D. Impact

The correct answer is Impact.

The Impact feature in Application Insights is designed to correlate telemetry with business outcomes and it shows which performance problems and failures have the largest effect on user metrics and conversions. It helps you look at slow page loads and other performance variables and see how those issues change conversion rates across different segments or pages of your site.

The Impact view surfaces affected users and affected pages and it prioritizes issues by the degree of business impact so you can focus remediation where it will improve conversions the most.

User Flows visualizes the navigation paths and sequences that users take through the site but it does not link performance metrics to conversion impact.

Funnels measures conversion and drop off between defined steps and it is useful for tracking where users leave a process but it does not automatically show which performance issues are causing the drop offs.

Metrics Explorer is useful for exploring time series of performance and reliability metrics and for drilling into latency or error trends but it does not by itself correlate those metrics to conversion rates across site sections.

When an item asks how performance affects business outcomes look for the tool that explicitly correlates telemetry with user impact rather than the tools that only show user paths or raw metric charts.

Your team uses an Azure DevOps pipeline named BuildFlow that needs to pull a public container image during execution and you must add a service connection to allow the pipeline to access that image. Which type of service connection should you create?

  • ✓ C. Docker registry

The correct option is Docker registry.

You should create a Docker registry service connection because the pipeline needs to pull a container image from a registry and that connection type is designed to store the registry endpoint and any credentials required to authenticate to Docker Hub or other container registries. The service connection lets pipeline tasks authenticate and pull images during the build.

Docker host is incorrect because that refers to connecting to a Docker daemon on a specific machine rather than configuring access to a container registry for pulling images.

Azure Kubernetes Service (AKS) is incorrect because an AKS connection is used to manage and deploy to Kubernetes clusters and it is not the mechanism to provide registry credentials to a build pipeline.

Azure Service Fabric is incorrect because Service Fabric service connections are for deploying to Service Fabric clusters and they do not serve as container registry credentials for pulling images in a pipeline.

When a pipeline must pull an image first decide if the image is public or private. If credentials are needed choose a Docker registry service connection and link it to the Docker or container tasks in the pipeline.
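A hedged YAML sketch of how the connection is typically consumed, where the connection name dockerRegistryConn and the image are assumptions:

    steps:
    - task: Docker@2
      displayName: Authenticate to the registry via the service connection
      inputs:
        command: login
        containerRegistry: dockerRegistryConn
    - script: docker pull nginx:latest
      displayName: Pull the image during the build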

You manage an Azure DevOps artifact feed for a small software company named Meridian Apps that hosts internal packages. You must allow a group of engineers to save packages that come from upstream feeds while keeping their privileges as limited as possible. The proposed plan is to give them the Owner permission on the feed. Does this plan meet the requirement?

  • ✓ B. No

No is correct because giving engineers the Owner permission is more access than required to save packages from upstream feeds while keeping privileges limited.

Azure Artifacts supports role based permissions on feeds, and engineers who only need to save packages from upstream sources require no more than the Collaborator role, while publishing packages directly needs only the Contributor role. Granting Owner is unnecessary because it also allows changing feed permissions and deleting the feed, which violates the principle of least privilege.

You can meet the requirement by granting the team the minimum publish or contribute rights on the feed or by creating a scoped view or separate feed for upstream caching so their permissions are restricted to what they need.

Yes is wrong because assigning Owner gives full control over the feed and exceeds the stated goal of keeping privileges as limited as possible.

When a question mentions keeping privileges minimal prefer the role that provides Contribute or Contributor rights rather than Owner and check feed level permissions to validate the minimum required access.

A development team at Aurora Apps uses an Azure Pipelines job to build and deliver a service. The pipeline has a custom test task that is configured to search the default working directory and to merge results and it looks for files that match the pattern **/RESULT-*.trx. Which test result format must the testResultsFiles pattern correspond to?

  • ✓ C. VSTest

The correct answer is VSTest.

The file pattern **/RESULT-*.trx uses the .trx extension which is the Visual Studio test results format. Azure Pipelines expects .trx files to be VSTest results and it will merge and publish those files when the test results format is set to VSTest. TRX is the native output of the Visual Studio Test runner which is why VSTest is the correct choice.

JUnit is incorrect because JUnit produces a different XML schema and file naming convention, typically files named like TEST-*.xml, and those are only recognized when the JUnit format is selected.

xUnit is incorrect because xUnit test runners emit their own XML format and not TRX, so the publish task will not treat **/RESULT-*.trx as xUnit results.

NUnit is incorrect because NUnit also produces a distinct NUnit XML format rather than TRX, and you must choose NUnit when publishing NUnit output.

When a pattern ends with .trx remember that this is the Visual Studio Test result format and you should set the test result format to VSTest when configuring publish or merge settings.
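For reference, a publish step matching that pattern might look like this sketch:

    - task: PublishTestResults@2
      inputs:
        testResultsFormat: VSTest            # .trx files are Visual Studio test results
        testResultsFiles: '**/RESULT-*.trx'
        searchFolder: $(System.DefaultWorkingDirectory)
        mergeTestResults: true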

Your team stores code in DevStore and a file named secrets.log that contains confidential information was pushed to a repository by mistake. You must remove the file from the repository history across all branches. Which commands can accomplish this task? (Choose 2)

  • ✓ B. git filter-repo --invert-paths --path secrets.log git push origin --force --all

  • ✓ D. bfg --delete-files secrets.log git push --force

The correct options are git filter-repo --invert-paths --path secrets.log git push origin --force --all and bfg --delete-files secrets.log git push --force.

git filter-repo --invert-paths --path secrets.log git push origin --force --all is correct because git filter-repo rewrites history and the invert-paths option removes the specified file from every commit it appears in. The command shown includes a force push of all branches which is required to replace the remote history with the rewritten local history.

bfg --delete-files secrets.log git push --force is correct because the BFG Repo-Cleaner is a fast and user friendly tool for removing files from a repository history. It rewrites commits to purge the file and you must force push the cleaned repository to the remote to update the branches.

git lfs migrate import --include=secrets.log git push --force is wrong because git lfs migrate moves files into Git LFS pointers and does not remove the file from history in the way required to purge sensitive data across all commits and branches.

git rm secrets.log git commit -m "remove secrets" git push --force --all is wrong because that sequence only removes the file in the new commit and does not rewrite prior commits. The secret would still exist in the repository history.

git clean -fd secrets.log --force is wrong because git clean removes untracked working tree files and does not operate on commit history at all. It will not remove the file from past commits.

git revert -n HEAD~1..HEAD git push --force is wrong because git revert creates new commits that undo changes and does not expunge a file from the repository history. Reverting does not remove the sensitive data from earlier commits.

When you need to remove a file from history choose a history rewriting tool such as git filter-repo or the BFG and then remember to force push updates to all branches and to verify tags and reflogs before considering the secret fully removed.

Your operations team manages a customer portal hosted on Azure App Service for a company called Meridian Solutions and they have connected an Application Insights instance to the App Service to collect telemetry and usage data. Which Application Insights feature lets you examine how customers move through the successive steps of your application?

  • ✓ C. Funnels

The correct option is Funnels.

Funnels in Application Insights lets you define an ordered series of events or page views and then shows how users progress from step to step and where they drop off. This makes Funnels the appropriate feature when you need to examine customer movement through successive steps and to measure conversion between those steps.

You can build Funnels from custom events or page view telemetry and then segment results by user properties or dimensions to compare conversion rates across groups. That capability helps identify bottlenecks in a multi step user journey.

Retention measures how many users return to the app over time and it focuses on ongoing engagement rather than the ordered sequence of actions, so it does not answer the question about step by step movement.

Application Map visualizes the topology and dependencies of services in your application to help diagnose performance and failures. It does not provide a step sequence of user actions or drop off rates between successive steps.

User Flows can show common navigation paths, but the feature designed to define precise ordered steps and compute drop off and conversion in Application Insights is Funnels. The question asks specifically about examining movement through successive steps which points to Funnels.

When a question mentions successive steps or drop off look for features that define ordered events like Funnels rather than topology or aggregate retention metrics.

A development group at Meridian Software plans to adopt Azure Boards for work tracking. They need the ability to create and manage product backlog items and log bugs on a Kanban board. Which work item process should they select when creating a new project in their Azure DevOps organization?

  • ✓ C. Scrum process

The correct option is Scrum process.

The Scrum process provides a Product Backlog Item work item type and includes support for Bugs and board-based workflows, including Kanban boards. This makes it the appropriate choice when a team needs to create and manage product backlog items and log bugs on a Kanban board in Azure Boards.

The CMMI process is oriented toward formal project and process management and uses Requirement and Change Request work items rather than Product Backlog Item, so it does not match the Scrum artifact names the question specifies.

The Basic process is a simplified template with a minimal set of work item types and it does not provide the full Product Backlog Item structure or the richer backlog management features that Scrum offers.

The Agile process uses User Story as the primary backlog item rather than Product Backlog Item, and it follows Agile terminology instead of Scrum artifacts, so it does not directly match the requirement to use Product Backlog Items.

When a question mentions Product Backlog Items look for the Scrum process because it uses that work item type natively.

A development team manages an Azure DevOps account and an Azure subscription. They provisioned a Windows Server 2022 virtual machine to act as a self hosted agent for Azure Pipelines. Which credential type must be used to register the virtual machine as a self hosted agent?

  • ✓ D. Personal Access Token

The correct option is Personal Access Token.

A self hosted Azure Pipelines agent is registered by running the agent configuration tool and authenticating to Azure DevOps with a Personal Access Token. The PAT is created in the user account and it allows the agent to register and communicate with the Azure DevOps service without an interactive login.

The registration process explicitly requests the organization URL and a Personal Access Token and the PAT can be scoped and time limited to follow least privilege practices. This is the supported method documented by Azure DevOps for registering Windows Server virtual machines as self hosted agents.

Service Principal is used to authenticate applications and services to Azure resources through Azure AD. It is not the mechanism used to register an Azure Pipelines self hosted agent.

OAuth provides delegated user authorization for web and mobile applications and it is not the flow used to supply credentials when registering a noninteractive build agent. The agent setup requires a PAT instead.

Service Connection is an Azure DevOps construct that stores credentials for pipelines to access external services such as an Azure subscription. It is not the credential type you provide when registering a self hosted agent.

Create a PAT with the least privileges and a short expiration when registering agents and store it securely. Rotate tokens regularly and remove any tokens that are no longer needed.
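A sketch of the registration command on the Windows Server 2022 agent, where the organization URL, pool, agent name, and token are placeholders:

    # Run from the unpacked agent folder in an elevated PowerShell session
    .\config.cmd --unattended `
      --url https://dev.azure.com/contoso-org `
      --auth pat --token <personal-access-token> `
      --pool Default --agent BUILD-VM-01 --runAsService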

At Meridian Software a repository contains several GitHub Actions workflows and a secret has been stored as an environment secret within that repository. You need to make the secret available to workflows across multiple repositories in the organization. What should you do first?

  • ✓ C. Create the secret at the organization level

The correct option is Create the secret at the organization level.

Create the secret at the organization level lets you centrally manage a secret and make it available to workflows across many repositories. Organization-level secrets in GitHub Actions can be scoped to all repositories or to a selected set of repositories and they are the supported built in way to share secrets across an organization.

Use Google Secret Manager to centralize the secret is incorrect because GitHub Actions does not automatically consume secrets stored outside GitHub. You can build integrations that fetch secrets from external stores but that is not the straightforward first step to share a secret across multiple GitHub repositories and it adds complexity.

Recreate the secret at the repository level is incorrect because duplicating the secret in each repository is time consuming and error prone. The question asks to make the secret available across multiple repositories so managing the secret at the organization level is the appropriate first action.

When a question asks about sharing secrets across repositories think in terms of scope and choose the higher level scope. Use organization-level secrets in GitHub to avoid duplicating secrets per repository.
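Once the secret exists at the organization level and a repository is granted access, any workflow in that repository references it the same way as a repository secret, as in this sketch where SHARED_API_TOKEN is an assumed name:

    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - run: ./publish.sh
            env:
              API_TOKEN: ${{ secrets.SHARED_API_TOKEN }}   # resolved from the organization secret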

A software engineering team at DataRipple must use an Azure Pipelines YAML to build a service and then run both a unit test stage and a database test stage in parallel after the build succeeds, and the stage that publishes test results must always run after both test stages complete. The pipeline uses these stages in the YAML:

    stages:
    - stage: Build_Service
      jobs:
    - stage: Run_Unit_Tests
      dependsOn: []
      jobs:
    - stage: Run_DB_Tests
      dependsOn: []
      jobs:
    - stage: Publish_Test_Report
      dependsOn:
      - Build_Service
      - Run_Unit_Tests
      - Run_DB_Tests
      condition: succeededOrFailed()
      jobs:

Does this pipeline configuration satisfy the stated requirements?

  • ✓ B. No the pipeline configuration does not meet the requirements

The correct option is No the pipeline configuration does not meet the requirements.

Both Run_Unit_Tests and Run_DB_Tests declare dependsOn: [] which explicitly removes any stage dependencies. That means those test stages can start as soon as the pipeline begins and they do not wait for Build_Service to complete. Because the tests can run before the build finishes they do not satisfy the requirement that the unit test stage and the database test stage run after the build succeeds.

The Publish_Test_Report stage does list Build_Service, Run_Unit_Tests, and Run_DB_Tests in its dependsOn and it uses condition succeededOrFailed() so it will run after those stages complete regardless of their outcome. That means the publish stage will run after the tests finish, but it will not ensure the tests themselves ran after a successful build because the test stages were made independent by dependsOn: [].

Yes the configuration satisfies the requirements is incorrect because the key requirement was that the two test stages run after the build succeeds. The empty dependency arrays mean the tests do not depend on Build_Service and so the pipeline as shown does not enforce the required ordering.

When you see dependsOn: [] in a stage it means the stage has no dependencies and can start immediately. Look for explicit dependencies to confirm ordering when the question requires one stage to run after another.
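A hedged sketch of how the stages could be rewired so both test stages wait for the build, still run in parallel with each other, and still feed the publish stage:

    stages:
    - stage: Build_Service
      jobs:
      - job: Build
        steps:
        - script: echo Building the service
    - stage: Run_Unit_Tests
      dependsOn: Build_Service          # wait for the build instead of starting immediately
      jobs:
      - job: UnitTests
        steps:
        - script: echo Running unit tests
    - stage: Run_DB_Tests
      dependsOn: Build_Service          # runs in parallel with Run_Unit_Tests
      jobs:
      - job: DbTests
        steps:
        - script: echo Running database tests
    - stage: Publish_Test_Report
      dependsOn:
      - Build_Service
      - Run_Unit_Tests
      - Run_DB_Tests
      condition: succeededOrFailed()    # publish even if a test stage fails
      jobs:
      - job: Publish
        steps:
        - script: echo Publishing test results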

An IT operations team at Northbridge Cloud manages an Azure subscription with a storage account and 24 virtual machines. The team plans to send the virtual machine logs to LogRhythm for centralized analysis and they must configure AzLog to export the logs into the storage account. Which log format should they export the logs in?

  • ✓ D. JSON

The correct option is JSON.

Azure diagnostic and activity logs exported to a storage account are produced as structured text in JSON. That format preserves fields and nested structures so LogRhythm and other centralized analysis tools can parse and map log fields reliably.

EVTX is incorrect because it is the native Windows event log file format and it is a binary format used on hosts rather than the export format used by Azure diagnostic settings to write logs to a storage account.

Avro is incorrect because Avro is a compact binary serialization format and it is not the typical export output when configuring AzLog to export logs into an Azure storage account for SIEM ingestion.

EVT is incorrect because it refers to an older legacy Windows event log format and it is not used as the export format by Azure diagnostic export to storage.

When a question asks about export formats pick the widely supported, structured text option such as JSON because SIEMs and log collectors expect easily parsed fields and nested data.

Your team maintains a GitHub repository named CodeBaseX and a web application named StoreFrontX. CodeBaseX contains the source code for StoreFrontX. You need to run a ZAP spider against StoreFrontX and then execute an AJAX spider scan once the initial spider completes. Which GitHub Action should you select to perform those scans?

  • ✓ B. ZAP Full Scan

The correct option is ZAP Full Scan.

ZAP Full Scan is the GitHub Action built to run a complete OWASP ZAP workflow and it includes both a traditional spider and an AJAX spider when configured to do so. This action lets you start with an initial crawl and then run an AJAX spider scan to explore JavaScript rendered content once the initial spider completes which matches the required sequence in the question.

Google Cloud Web Security Scanner is incorrect because it is a managed scanner for applications hosted on Google Cloud and it is not the GitHub Action that runs OWASP ZAP spiders inside a repository workflow.

ZAP API Scan is incorrect because it focuses on API style scanning with ZAP and it does not perform the full spider followed by an AJAX spider crawl needed for scanning dynamic web pages.

When a question requires a specific sequence of scans look for an action that explicitly offers a Full Scan or full workflow and then check the action documentation to confirm the scan ordering.
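A workflow sketch using the action, where the target URL and the pinned version are assumptions for the example:

    name: zap-full-scan
    on:
      workflow_dispatch:
    jobs:
      zap:
        runs-on: ubuntu-latest
        steps:
          - uses: zaproxy/action-full-scan@v0.12.0
            with:
              target: https://storefrontx.example.com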

A team creates a Git repository named RepoA in Azure Repos for a fintech startup. They must ensure that each pull request is verified by an external code review tool which reports its result back to the pull request and the chosen solution must minimize administrative overhead. Which type of policy should be applied to enforce this requirement?

  • ✓ B. Branch policy

The correct answer is Branch policy.

A Branch policy is applied at the branch or repository level and it enforces rules for every pull request targeting that branch. You can configure a branch policy to require that external tools report results back to the pull request before the PR can be completed and this single configuration applies to all matching PRs so it minimizes administrative overhead.

Service hook is incorrect because service hooks are outbound notifications and triggers for external systems and they do not by themselves enforce or block a pull request. Service hooks are useful for integrations but they do not provide the gating enforcement required by the scenario.

Status check is incorrect in the context of this question because the enforceable mechanism is the branch policy itself. Status checks are signals that external systems post back to a pull request but you implement enforcement by configuring a branch policy to require those signals rather than by selecting a standalone status check option.

Build validation is incorrect because build validation is specifically about running pipeline builds as part of a branch policy. It can gate PRs but it is focused on running CI builds and it often requires more setup in the pipeline. It is not the minimal administrative solution when an external code review tool can directly report its verdict to the PR and be enforced by a branch policy.

When asked which setting enforces checks on all pull requests look for options that apply at the branch level and focus on policies that can require external signals rather than on notification or trigger mechanisms.

A continuous integration pipeline at NimbusSoft defines four jobs named Alpha, Beta, Gamma and Delta within Azure Pipelines. Alpha has its own steps. Beta and Gamma declare no dependencies so they can start independently. Delta specifies a dependency on Gamma using the field “dependsOn” so Delta will begin only after Gamma finishes. Can Gamma and Delta execute concurrently?

  • ✓ B. No

No is correct because Delta declares an explicit dependency on Gamma using dependsOn so Delta cannot start until Gamma has finished.

When a job lists another job in its dependsOn setting Azure Pipelines enforces that ordering and the dependent job waits for the specified job to complete. In this scenario Beta and Gamma can start independently but Delta will be scheduled only after Gamma completes due to the declared dependency.

Yes is incorrect because a declared dependency prevents concurrent execution between the dependent jobs. Since Delta depends on Gamma they cannot run at the same time.

When you see an explicit dependsOn relationship treat it as a hard ordering constraint and assume the dependent job will start only after the listed job completes.
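A minimal YAML sketch of the job graph described in the question:

    jobs:
    - job: Alpha
      steps:
      - script: echo Alpha steps
    - job: Beta
      dependsOn: []        # no dependencies, can start right away
      steps:
      - script: echo Beta
    - job: Gamma
      dependsOn: []        # no dependencies, can start right away
      steps:
      - script: echo Gamma
    - job: Delta
      dependsOn: Gamma     # Delta waits for Gamma, so the two never run concurrently
      steps:
      - script: echo Delta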

A small technology firm runs applications on a Kubernetes cluster and needs a solution to package configure and deploy releases across environments. Which tool below provides package management for Kubernetes?

  • ✓ B. Helm

Helm is the correct option.

Helm is the de facto package manager for Kubernetes and it uses charts to package an application and its Kubernetes resources. It provides commands to install, upgrade, and roll back releases, and it supports templating and values files so teams can deploy consistent releases across multiple environments.

Artifact Registry is a Google Cloud service for storing container images and other build artifacts and it is not a package manager for Kubernetes. It can host artifacts that Helm or other tools consume but it does not provide chart packaging or release management features by itself.

Artifactory is an artifact repository manager from JFrog and it can act as a chart repository for Helm charts. It is not the Kubernetes package manager itself and teams still use Helm or another client to manage installs, upgrades, and rollbacks even when charts are stored in Artifactory.

When a question mentions packaging Kubernetes applications or deploying releases across environments look for the term package manager or charts and associate those with Helm.
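As a quick illustration, a typical Helm flow for packaging and promoting a release across environments might look like this sketch, where the chart path, release names, and values files are assumptions:

    # Package the chart, then install or upgrade a release per environment
    helm package ./charts/storefront
    helm upgrade --install storefront-staging ./charts/storefront \
      --namespace staging --values values-staging.yaml
    helm upgrade --install storefront-prod ./charts/storefront \
      --namespace prod --values values-prod.yaml
    # Roll back production to the previous revision if a release misbehaves
    helm rollback storefront-prod 1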

A deployment pipeline runs an AzureMonitor@1 task with connectedServiceNameARM set to deployConn and ResourceGroupName set to rg-apps and filterType set to none and severity set to ‘Sev0,Sev1,Sev2,Sev3’ and timeRange set to ‘5h’ and alertState set to ‘New’. Will this task notify only for alerts that are in the New state?

  • ✓ B. Yes

The correct answer is Yes.

The Azure Monitor Alerts task uses the alertState parameter to filter which alerts are returned and when alertState is set to ‘New’ the task will retrieve only alerts that are currently in the New state. The other parameters such as timeRange and severity further limit which alerts are returned so the task will notify only those alerts that match all of the configured filters.

That means alerts that are Acknowledged or Closed or in other states will not be included when alertState is set to ‘New’. If you want alerts in other states you must change or remove that filter.

No is incorrect because the provided alertState setting does in fact restrict notifications to the New state and will exclude alerts in other states.

When you see questions about pipeline tasks look at the exact parameter name and value. Check alertState and use a short timeRange in a test run to confirm which alerts the task will return.
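The task as described would look roughly like this in YAML, with the parameter values taken directly from the scenario:

    steps:
    - task: AzureMonitor@1
      inputs:
        connectedServiceNameARM: deployConn
        ResourceGroupName: rg-apps
        filterType: none
        severity: 'Sev0,Sev1,Sev2,Sev3'
        timeRange: '5h'
        alertState: 'New'      # only alerts currently in the New state are returned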

An engineering team at NovusSoft stores C# projects in GitHub Enterprise repositories and they want to enable CodeQL analysis across those repositories. What is the required action to activate CodeQL scanning for the C# code?

  • ✓ B. Add a GitHub Actions workflow that runs CodeQL in each repository

Add a GitHub Actions workflow that runs CodeQL in each repository is correct. You must add a workflow to each repository so that CodeQL analysis runs during the repository build and the scan picks up C# projects and their build artifacts.

CodeQL analysis is executed by the CodeQL Action workflows and those workflows build the code and run the CodeQL queries. Adding the repository workflow is the required activation step because the workflow controls when and how the analysis runs and it specifies the target language and build steps for C#.

Enable Dependabot alerts is incorrect because Dependabot alerts identify vulnerable dependencies and they do not run CodeQL queries or perform source code analysis.

Enable Dependabot security updates is incorrect because enabling automatic dependency updates helps remediate vulnerabilities but it does not activate CodeQL scanning or perform code query analysis.

Enable GitHub Advanced Security is incorrect by itself because the product provides access to CodeQL features in licensed repositories but you still need to add or enable the actual CodeQL workflow or configuration to run scans in each repository.

When a question asks how to start CodeQL scanning think about the mechanism that runs the analysis. Add or configure the required GitHub Actions workflow in the repository because enabling a feature alone usually does not start scans.
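A hedged sketch of such a workflow for a C# repository, with the branch name and action versions as assumptions:

    name: codeql
    on:
      push:
        branches: [ main ]
      pull_request:
        branches: [ main ]
    jobs:
      analyze:
        runs-on: ubuntu-latest
        permissions:
          security-events: write
        steps:
          - uses: actions/checkout@v4
          - uses: github/codeql-action/init@v3
            with:
              languages: csharp
          - uses: github/codeql-action/autobuild@v3    # builds the C# projects for analysis
          - uses: github/codeql-action/analyze@v3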

You use Azure Boards to track work items and Azure Repos to store your source code for a project. There is a bug work item whose ID is 210. You want a commit to automatically transition that work item to the Resolved state when the commit references it. What text should you put in the commit message?

  • ✓ D. Fixes #210

The correct option is Fixes #210.

Azure Boards recognizes specific closing keywords followed by the numeric work item id with a leading hash. When you include Fixes #210 in the commit message the commit is linked to work item 210 and the bug is transitioned to the Resolved state automatically.

#210 completes is incorrect because “completes” is not a recognized closing keyword for Azure Boards and so it will not trigger the state transition.

Resolves #PROJ-210 is incorrect because Azure Boards expects the plain numeric id prefixed with a hash and does not use a project prefix like PROJ-210. Using Resolves with the correct format such as #210 would be recognized but the PROJ- prefix prevents matching.

Verifies #210 is incorrect because “Verifies” is not one of the recognized keywords that close or resolve work items in Azure Boards.

When writing commit messages use a recognized closing keyword such as Fixes and include the work item id with a leading # to ensure Azure Boards links and transitions the work item automatically.
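For example, a commit like the following sketch would link to and resolve the bug, where the message text is illustrative:

    git commit -m "Guard against null customer records. Fixes #210"
    git push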

Your company uses Azure DevOps to run build pipelines and release pipelines and the engineering team is large and expands regularly. You have been instructed to automate user and license administration whenever feasible. Which of the following tasks cannot be automated?

  • ✓ D. Acquiring additional paid licenses

The correct answer is Acquiring additional paid licenses.

Purchasing extra paid licenses is a billing and subscription activity and it normally cannot be performed through the standard Azure DevOps entitlement APIs or CLI. Buying additional paid seats requires access to the tenant billing account and is typically done through the Azure portal or the Visual Studio subscriptions and billing interfaces. Some enterprise or partner billing APIs exist but they are separate systems and are not the typical automation path inside Azure DevOps.

Managing user entitlements is automatable because Azure DevOps provides the Member Entitlement Management REST API and the Azure DevOps CLI to add or remove users and change their entitlements programmatically.

Assigning access level licenses to accounts can be done via the same entitlement APIs and the CLI since access levels are part of a user's entitlement and can be updated by API calls or scripts.

Updating team and security group memberships is also automatable because the Azure DevOps Graph APIs and CLI allow you to add and remove users from teams and security groups as part of scripted onboarding and role changes.

When deciding if a task can be automated check whether it affects billing or only entitlements. Billing and purchase actions are usually manual or handled by separate billing APIs while entitlements and group membership are exposed to automation through Azure DevOps APIs and the CLI.
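As an illustration of the automatable side, the entitlement work can be scripted with the Azure DevOps CLI, as in this hedged sketch where the email address, license type, and organization URL are placeholders:

    # Add a user and assign the Basic access level programmatically
    az devops user add \
      --email-id "new.engineer@example.com" \
      --license-type express \
      --org https://dev.azure.com/contoso-org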

A development team at MeridianApps maintains an Azure DevOps project with a release pipeline that has two stages named Staging and Production. Staging deploys code to an Azure web app called stagingapp and Production deploys to an Azure web app called productionapp. The team needs deployments to production to be prevented when Application Insights raises Failed requests alerts after completing a new deployment to staging. What should be configured for the Production stage?

  • ✓ B. Configure a pre deployment gate that checks Application Insights Failed requests alerts

Configure a pre deployment gate that checks Application Insights Failed requests alerts is correct because a pre deployment gate can evaluate Application Insights alerts that occur after the Staging deployment and prevent the Production stage from starting when failed request alerts are present.

A pre deployment gate runs automated checks before allowing a release to continue to the next stage. Azure DevOps gates can query Azure Monitor or Application Insights and can call APIs to evaluate telemetry. Gates will pause or fail the deployment when the configured alert or metric conditions are met which makes them the appropriate mechanism to block Production after a problematic Staging deployment.

Add a pipeline task to create or update Application Insights alert rules is incorrect because modifying alert rules does not itself evaluate the current alert state after a deployment and it does not automatically block the pipeline based on runtime telemetry.

Require a manual approval from a team member or stakeholder in pre deployment conditions is incorrect because manual approvals require a person to review and act. That approach will not automatically prevent deployments when Application Insights raises failed request alerts unless someone notices and stops the release.

Configure a pre deployment trigger to invoke an external service before Production deployment is incorrect because pre deployment triggers are used to start or coordinate deployments and they are not the same as gates that evaluate monitoring signals. Invoking an external service does not provide the built in alert evaluation that a gate does.

When a question asks to automatically block deployments based on monitoring alerts focus on configuring automated gates or checks rather than manual approvals or tasks that only change alert definitions.

Your team at Meridian Software uses Azure Boards in a DevOps project and you must add a dashboard widget that displays the burndown of remaining work for a single sprint iteration. Which widget type should you choose?

  • ✓ C. Sprint burndown

The correct option is Sprint burndown.

Sprint burndown is the dashboard widget designed to display the remaining work for a single sprint iteration. It plots remaining effort or remaining work items by day so the team can see whether they are on track to meet the sprint goal.

Velocity reports historical throughput across past sprints to help with planning future sprints and does not show the day by day remaining work for the current sprint.

Cumulative flow diagram (CFD) visualizes how work is distributed across workflow states over time to reveal bottlenecks and flow issues and it does not provide a sprint specific remaining work burndown.

Burndown chart often refers to the built in report available in the Sprints hub rather than the dashboard widget you add, so it is not the dashboard widget named Sprint burndown which is the correct choice.

Control chart shows cycle time or lead time distribution to help assess process stability and variation and it does not display remaining work for a sprint.

Lead time measures the elapsed time from work item creation to completion and does not present the sprint remaining work trend that the Sprint burndown widget provides.

When a question asks for remaining work within a single sprint look for widget names that include the word sprint or that explicitly mention remaining work per day during the iteration.

Your team uses Git and you enabled GitHub code scanning for a repository. A developer opens a pull request from a feature branch into the primary branch and the scan output shows the message “Analysis not found.” You must ensure the code scanning finishes successfully for that pull request. Which two steps should you perform? (Choose 2)

  • ✓ B. Update the code in the pull request to trigger a new scan

  • ✓ D. Add the repository’s primary branch name to the workflow on push trigger

The correct answers are Update the code in the pull request to trigger a new scan and Add the repository’s primary branch name to the workflow on push trigger.

The first action, Update the code in the pull request to trigger a new scan, forces GitHub Actions to re-evaluate the workflow and run the code scanning jobs when a new commit or change is pushed. Making a small change or pushing a new commit to the feature branch typically triggers the workflow and produces a fresh analysis instead of the “Analysis not found” message.

The second action, Add the repository’s primary branch name to the workflow on push trigger, ensures the workflow is configured to run for the repository base branch that GitHub uses to store analysis results. If the workflow only lists other branches then scans for the pull request base may not be generated. Including the primary branch in the workflow triggers causes scans to run for that branch and avoids missing analysis for pull requests targeting it.

Add the feature branch name to the workflow on push trigger is not the best fix because the missing analysis is usually related to the base branch configuration. Adding the feature branch to push triggers will not address the core issue when the base or primary branch is not included.

Close the pull request and open a new pull request from the primary branch is incorrect because pull requests are meant to be opened from feature branches into the primary branch. Recreating the pull request from the primary branch is unnecessary and does not solve the underlying workflow trigger configuration problem.

Create a separate code scanning workflow file for the repository is unnecessary because the existing workflow can be adjusted to include the correct triggers. Creating a new workflow file is not required to get a successful scan if the issue is just missing branch names or the need to re-run the scan.

When you see “Analysis not found” first check the workflow triggers and then make a small commit to the pull request to force a re-run. Also confirm that the workflow includes the repository’s primary branch in its push or pull_request settings.
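The trigger section of the workflow would need to include the primary branch, roughly like this sketch that assumes the primary branch is named main:

    on:
      push:
        branches: [ main ]        # baseline analyses are produced for the primary branch
      pull_request:
        branches: [ main ]        # scans run for pull requests that target main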

A development team uses the BlueWave CI pipeline to compile and deploy a web application. They need a testing approach that confirms the application communicates properly with databases and external services and ensures dependent components function together. Which type of test should they implement?

  • ✓ C. Integration testing

The correct option is Integration testing.

Integration testing verifies that separate modules, databases, and external services interact correctly. It focuses on interactions across component boundaries and it is the appropriate level to confirm that the application communicates properly with databases and external services and that dependent components function together.

Acceptance testing validates the system against business requirements and user expectations rather than probing internal component interactions. It may cover end to end behavior but it does not specifically target whether services and databases integrate correctly.

Load testing measures performance and behavior under high traffic and stress and it does not primarily check correctness of component interactions or external service communication. It is about capacity and scalability rather than functional integration.

Unit testing exercises individual functions or classes in isolation and commonly uses mocks for external dependencies. It will not confirm real communication with databases or external services so it does not satisfy the integration requirement.

Smoke testing is a basic health check to ensure key functionality is present and the system starts correctly. It is a shallow test and does not perform the deep interaction checks that integration testing provides.

Match the scope of the test to the goal. If the question asks about verifying interactions between components or real external systems choose integration tests rather than unit, load, smoke, or acceptance tests.

A backend team at Nimbus Labs stores its application code in a Git repository and they will open several feature branches from the main branch. What is the recommended way to handle these feature branches?

  • ✓ B. Create feature branches that are short lived and merge them back into main frequently

The correct option is Create feature branches that are short lived and merge them back into main frequently.

Choosing short lived feature branches encourages small, incremental changes that are easier to review and merge. Frequent merges keep the main branch current and reduce the chance of large, painful merge conflicts. This approach also enables continuous integration to run tests on changes early so problems are caught sooner and releases remain more reliable.

Keep feature branches active for long periods is not recommended because long lived branches diverge from main and typically cause complex merge conflicts and delayed feedback from testing and code reviews.

Treat feature branches as permanent protected primary branches is incorrect because feature branches are meant to be temporary. Protected status belongs to primary branches like main or release branches and making feature branches permanent undermines the goals of isolation and fast integration.

When answering branching questions pick the option that emphasizes short lived branches and frequent merges because that supports easier merges and faster continuous integration feedback.

Your engineering team at Clearwater Apps keeps its source in a GitHub repository and the security team must approve every code change before it is merged into the primary branch, so what actions can you take to enforce this requirement? (Choose 2)

  • ✓ D. Add a CODEOWNERS file

  • ✓ E. Apply a branch protection rule to the primary branch

The correct options are Add a CODEOWNERS file and Apply a branch protection rule to the primary branch.

Adding a CODEOWNERS file lets you designate teams or individuals as owners for paths in the repository and GitHub will automatically request reviews from those owners when matching files change. Configuring a CODEOWNERS file is how you assign the security team as required reviewers for the parts of the code they must approve.

Applying a branch protection rule to the primary branch enforces repository level policies such as requiring pull request reviews, blocking direct pushes, and requiring that status checks pass before a merge. A branch protection rule can require reviews from code owners so combining protection rules with a CODEOWNERS file ensures the security team must approve changes before the primary branch accepts them.

Google Cloud Build is a continuous integration and delivery service and it can run builds and tests, but it does not enforce GitHub repository merge policies or require specific team approvals in the repository settings.

Add a LICENSE file documents the legal terms for reuse of the code and it has no effect on review workflows or merge requirements.

Require signed commits enforces cryptographic signatures on commits to verify authorship, but it does not by itself force that a particular team approves pull requests before merging. Branch protection and code owner review settings are needed to require specific approvals.

When a question is about forcing pre merge approvals think about repository settings such as CODEOWNERS and branch protection because these are configured in the Git hosting service rather than in CI tools or license files.
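A minimal CODEOWNERS sketch that routes every change to the security team for review, where the team slug is an assumption:

    # .github/CODEOWNERS
    # Require a review from the security team for every file in the repository
    *   @clearwater-apps/security-team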

A development team at AzureNova is building an Azure Pipelines release flow and they need an automated check that queries Azure Boards to confirm there are no active work items before a build goes to production. Which type of check should they add?

  • ✓ B. Pre deployment gates

The correct option is Pre deployment gates.

Pre deployment gates run before a release moves to the next stage and they can perform automated checks against external systems such as Azure Boards. You can configure gates to call REST APIs or use built in validations to confirm there are no active work items and block the deployment until the condition is satisfied.

Manual validations are human driven steps that require a person to confirm readiness and they do not provide an automated query of Azure Boards so they do not meet the requirement.

Post deployment approvals occur after deployment to the target environment and they cannot prevent a build from going to production before the deployment happens.

Pre deployment approvals are manual approvals that require explicit consent from approvers and they do not perform automated checks or queries against work items.

When an exam question asks for an automated pre release check look for answers that mention gates or automated validations rather than manual approvals.
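The Query Work Items gate is typically configured with a shared work item query, and the logic behind such a query could be expressed in WIQL roughly like this sketch, where the gate passes only when the number of matching items is within the allowed threshold:

    SELECT [System.Id]
    FROM WorkItems
    WHERE [System.TeamProject] = @project
      AND [System.State] = 'Active'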

The engineering team at Meridian Financial runs Team Foundation Server 2015 and plans to move its projects to Azure DevOps Services. You must provide a migration plan that maintains Team Foundation Version Control changeset timestamps and work item revision timestamps. The migration should carry all TFS artifacts and minimize manual effort. You have recommended upgrading TFS to the latest RTW release. What other step should you advise?

  • ✓ B. Use the TFS Database Import Service for Azure DevOps Services

The correct answer is: Use the TFS Database Import Service for Azure DevOps Services.

The TFS Database Import Service imports a TFS collection database into Azure DevOps Services and preserves changeset history and work item revision history including the original timestamps. This approach transfers version control history, work items, and other TFS artifacts with minimal manual effort when compared to manual migration approaches. You should ensure TFS is upgraded to the supported RTW release and complete the import prerequisites and readiness checks before exporting the collection for import.

Use the TFS Integration Platform to migrate artifacts is not the best choice because the TFS Integration Platform has been archived and is effectively deprecated. It often requires manual mapping and replays history in ways that do not reliably preserve original timestamps and it is therefore not recommended for a full fidelity migration to Azure DevOps Services.

Install the TFS Java SDK for integration tasks is incorrect because a Java SDK is not required for migrating a TFS collection to Azure DevOps Services. The supported migration path uses the database export and import process rather than an SDK based integration.

Upgrade PowerShell to the latest supported release is not a correct standalone step because PowerShell version is not the key enabler for preserving changeset and work item timestamps. PowerShell updates may help with automation but they do not replace the database import service that preserves full history.

When the exam asks you to preserve full history and timestamps choose a database import or native migration path and always confirm the source server meets the service prerequisites and supported versions before starting.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel, which trains devs in Java, Spring, AI and ML, has well over 30,000 subscribers.