AZ-400 Microsoft DevOps Expert Practice Exams
All Azure questions come from my AZ-400 Udemy course and certificationexams.pro
AZ-400 DevOps Engineer Expert Certification Exam Topics
If you want to get certified in the AZ-400 Microsoft Certified DevOps Engineer Expert exam, you need to do more than just study. You need to practice by completing AZ-400 practice exams, reviewing DevOps sample questions, and spending time with a reliable AZ-400 certification exam simulator.
In this quick AZ-400 practice test tutorial, we will help you get started by providing a carefully written set of AZ-400 exam questions and answers. These questions mirror the tone and difficulty of the actual AZ-400 exam, giving you a clear sense of how prepared you are.
AZ-400 DevOps Engineer Expert Practice Questions
Study thoroughly, practice consistently, and gain hands-on familiarity with Azure DevOps, CI/CD pipelines, GitHub, IaC, and release workflows. With the right preparation, you will be ready to pass the AZ-400 certification exam with confidence.
Certification Practice Exam Questions
A continuous integration pipeline at Meridian Software intermittently fails because a single test that measures API endpoint latency sometimes times out. You need to keep that intermittent test from causing the build to fail while allowing real test failures to still fail the pipeline. What actions should you take? (Choose 2)
❏ A. Disable flaky test detection
❏ B. Exclude flaky tests from the test pass calculation
❏ C. Enable retry of failed tests for the job
❏ D. Manually mark the test as flaky in the build settings
❏ E. Enable Test Impact Analysis
Finity Systems is building several services that will run on a set of Azure virtual machines and you must pick appropriate deployment methods for updates. ServiceAlpha requires that the new release is rolled out to a small subset of users for validation and then propagated to the entire user base once testing succeeds. ServiceBeta will have instances of the old release replaced by instances of the new release on a fixed group of virtual machines. Which deployment strategy should you choose for ServiceAlpha?
❏ A. Blue green deployment
❏ B. Canary rollout
❏ C. Rolling update
Read the following statement and decide whether it is true. “To install an application across several Azure virtual machines you should create a universal group in Active Directory.” Is this statement true?
❏ A. Yes
❏ B. No
You are responsible for an Azure App Service application at Finity Labs and you need to create a release pipeline that follows a blue green deployment approach using deployment slots while keeping downtime low and administrative effort minimal. You created a new deployment slot named preprod in the App Service. You added a preproduction stage in the Release pipeline and configured a task to deploy the application to the preprod slot. You added a production stage and added an App Service Manage task to swap the preprod slot into production. Is this process correct?
❏ A. No this process is not sufficient
❏ B. Yes the steps implement blue green deployment using slots
Your engineering team wants to strengthen security in the development lifecycle. Which category of security tool should you apply at the pull request stage of the pipeline?
❏ A. Threat modeling
❏ B. Dynamic application security testing
❏ C. Static code analysis
A development group at Nimbus Apps needs its Git history to remain strictly linear and avoid merge commit nodes. Which merge approach should they choose?
❏ A. Merge without fast forward
❏ B. Squash merge
❏ C. Rebase then fast forward merge
❏ D. Rebase then create a merge commit
A company named TailwindTech has an Azure subscription that includes a virtual machine scale set called ScaleSetA which is configured for autoscaling. TailwindTech also has an Azure DevOps project named PipelineX that builds a web app named SiteAlpha and deploys SiteAlpha to ScaleSetA. You must ensure that an email notification is sent whenever ScaleSetA scales in or out. The proposed solution is to create an action group in Azure Monitor. Does this meet the requirement?
❏ A. No
❏ B. Yes
A small firm named CloudForge Labs is using a free Azure DevOps account and has created multiple private projects. Each project defines a build pipeline that uses a self hosted agent and builds are frequently queued which slows delivery. What can the team do to reduce the build queue delays?
❏ A. Add more self hosted build machines
❏ B. Introduce pipeline caching to speed builds
❏ C. Switch to Microsoft hosted agents
❏ D. Purchase extra parallel jobs from Azure DevOps
NordicApps is standing up a new Azure DevOps project with strict least privilege requirements. The chief engineer must be able to create repositories, manage access control, set policies, and commit code. Developers must be allowed to commit code and create branches, but they must not be permitted to bypass policies when submitting builds. Project stakeholders must have view only access to the repository. To assign the Project Manager role in this Azure DevOps project, which Azure DevOps group should the Project Manager be placed in?
❏ A. Contributors
❏ B. Project Collection Administrators
❏ C. Readers
❏ D. Build Administrators
❏ E. Project Administrators
❏ F. Project Collection Valid Users
An engineering group needs to run tests inside a container during an Azure DevOps pipeline that uses Docker. The test outputs must remain inside the build stage and must not be propagated to pipeline artifacts. What approach should they choose?
❏ A. Docker Compose file
❏ B. Single stage Dockerfile
❏ C. Multi stage Dockerfile
❏ D. Google Kubernetes Engine pod
A software group at NovaSoft keeps its code in a git repository. They need to label specific commits as milestone releases and you must tag the initial release for a particular commit. Which git command should you run?
❏ A. git push origin v2.0
❏ B. git commit -m "Release v2.0"
❏ C. git tag -a v2.0 -m "Release v2.0"
❏ D. git tag v2.0
A development group at NovaApps uses an Azure DevOps pipeline to build and deploy containerized services to an AKS cluster and the images are pushed to Azure Container Registry. The security mandate requires that application vulnerabilities are identified as early as possible in the delivery workflow. What action satisfies this requirement?
❏ A. Run a pipeline task that inspects running pods in the AKS cluster for security vulnerabilities
❏ B. Use Microsoft Defender for Cloud to continuously scan deployed workloads for threats
❏ C. Scan the container image stored in Azure Container Registry during the build pipeline
A mobile software firm called Solace Apps is creating Android and iOS clients and needs to manage work items and release pipelines in Azure DevOps while also capturing crash telemetry, distributing beta builds to testers, and gathering user feedback on new features. Which components should be included in the solution?
❏ A. Visual Studio App Center
❏ B. Azure DevOps Boards
❏ C. Microsoft Test & Feedback extension
❏ D. TFS Integration Platform
Review the Nexora Solutions case brief at https://example.com/doc/9aBcD. What modification is needed to the Register-AzureRmAutomationDscNode command to fix the current automation problem?
❏ A. Add the DefaultProfile parameter to the command
❏ B. Change the ConfigurationMode parameter to a mode such as ApplyAndMonitor or ApplyAndAutoCorrect
❏ C. Swap Register-AzureRmAutomationDscNode for Register-AzureRmAutomationScheduledRunbook
❏ D. Include the AllowModuleOverwrite parameter on the registration command
A regional software house operates Team Foundation Server 2015 and intends to move all assets to Azure DevOps while maintaining original TFVC changeset timestamps and the modified timestamps on work item revisions and while keeping migration work to a minimum. What action should be taken on the TFS application server?
❏ A. Perform a database attach migration without upgrading the server
❏ B. Install the TFS Java SDK
❏ C. Upgrade the TFS deployment to the latest RTW release
A development team packages NuGet libraries and stores them in Azure Artifacts for a company called SummitSoft. You need to make a single package accessible to anonymous external users while minimizing the number of publication endpoints. What action should you perform?
❏ A. Store the package files inside a Git repository
❏ B. Modify the feed URL for the existing package
❏ C. Provision a separate feed for the NuGet package
❏ D. Add the package feed URL into the Visual Studio environment settings
Your engineering group at Meridian Tech manages a project in Azure DevOps and needs a dashboard widget that reports the total time from when a work item is created until it is closed. Which widget provides that measurement?
❏ A. Cycle Time
❏ B. Velocity
❏ C. Lead Time
❏ D. Burndown chart
Riverton Software uses ServiceNow for incident tracking and you have deployed a new application to Azure. You must ensure that failed authentication events from the application automatically create incident tickets in ServiceNow. Which Azure Log Analytics solution should you deploy?
❏ A. Application Insights Connector
❏ B. Azure Integration Connector
❏ C. IT Service Management Connector (ITSM)
❏ D. SQL Server Connector
A cloud operations team at Cedar Software runs PowerShell in their CI pipeline on Azure and needs to list all resources that do not comply with current policies. Which PowerShell cmdlet should they execute?
❏ A. Get-AzPolicyAssignment
❏ B. Get-AzPolicyRemediation
❏ C. Get-AzPolicyState
❏ D. Get-AzPolicyDefinition
Riverton Labs is creating a new Java application and they already use a SonarQube server to evaluate their .NET work. They want SonarQube to analyze and monitor the Java code as part of automated builds. Which build task type should they add to their CI pipeline?
❏ A. Gulp
❏ B. CocoaPods
❏ C. Xcode
❏ D. Apache Maven
The engineering team at Solstice Labs maintains a production application named Beacon, and they are adding a new microservice that depends on another application called Atlas, which is still under development. You must be able to deploy the update to Beacon before Atlas is available and later enable the microservice once Atlas is ready. What should you do?
❏ A. Create a feature branch in the repository
❏ B. Deploy a lightweight stub that returns default responses
❏ C. Implement a feature flag
❏ D. Apply branch protection rules on the main branch
A team maintains an Azure DevOps project named DevWorkspace that runs two pipelines called PublishApp and PublishAux. You must allow only PublishApp to deploy to an Azure App Service named site-production and prevent PublishAux from gaining deployment rights. Available actions are 1 Create a system assigned managed identity in Microsoft Entra, 2 Create a service principal in Microsoft Entra, 3 Configure project level permissions in DevWorkspace, 4 Add a variable to the PublishApp pipeline, 5 Create a service connection in DevWorkspace, and 6 Approve the service connection inside PublishApp. Which three actions should you perform in order?
❏ A. 5, 2, 3
❏ B. 1, 5, 4
❏ C. 2, 3, 5
❏ D. 5, 6, 2
❏ E. 1, 4, 5
❏ F. 2, 5, 6
A development group is automating builds for a Java application with Nimbus Pipelines and they need to gather code coverage metrics and then publish the coverage reports into the pipeline. Which tool can produce Java code coverage results that are suitable for publishing to the pipeline?
❏ A. JaCoCo
❏ B. JUnit
❏ C. Cobertura
❏ D. Cloud Build
A regional retailer named Harbor Retail has an Azure subscription tied to an Azure Active Directory tenant that is licensed with Azure AD Premium Plan 1. A security audit found that too many employees have elevated permissions. You need to implement a privileged access solution that enforces time limited elevation, requires approvals to activate privileges, and keeps costs low. What should you do first?
❏ A. Require Multi-Factor Authentication for privileged role activation
❏ B. Create activity alerts for privileged role activations
❏ C. Upgrade the Azure Active Directory license to Azure AD Premium Plan 2
❏ D. Enable Azure Active Directory Privileged Identity Management
A development group at Meridian Analytics is starting a new initiative in Azure DevOps and they need a metric that indicates the percentage of defects discovered after a release to production. Which DevOps KPI should they inspect?
❏ A. Cycle time
❏ B. Error budget burn rate
❏ C. Defect escape rate
❏ D. Sprint burndown
❏ E. Mean time to recover
❏ F. Bug submission rate
❏ G. Deployment frequency
❏ H. Service failure rate
You manage a containerized application called ServiceX that is built from a Dockerfile. The Dockerfile for ServiceX is shown below for review.

FROM example.com/dotnet/sdk:9.0 AS base
WORKDIR /service
EXPOSE 8080
EXPOSE 8443

FROM example.com/dotnet/sdk:9.0 AS build
WORKDIR /src
COPY ["ServiceX.csproj", ""]
RUN dotnet restore "./ServiceX.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "ServiceX.csproj" -c Release -o /service/build

FROM build AS publish
RUN dotnet publish "ServiceX.csproj" -c Release -o /service/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /service/publish .
ENTRYPOINT ["dotnet", "ServiceX.dll"]

Given this Dockerfile, is it correct to say that the build is performed using a debug configuration?
❏ A. Yes
❏ B. No
A release engineering group at Nimbus Labs uses Calendar Versioning (CalVer) for their software and they want to append an optional label like “beta” to a version string. Which component of a CalVer version should contain that label?
❏ A. micro
❏ B. modifier
❏ C. major
❏ D. minor
A development team at Meridian Labs must configure a build for a Java application under these constraints. The build must reach an internal dependency repository that resides on site. The produced artifacts must be stored as server artifacts in Azure DevOps. The source must remain in an Azure Repos Git repository. The proposed configuration is to install a self hosted build agent on a local server, register it to the default agent pool, and include a Java Tool Installer task in the pipeline. Does this approach meet the requirements?
❏ A. No the proposed configuration does not meet the requirements
❏ B. Yes the proposed configuration meets the stated requirements
Your engineering group at Cloudlane uses Azure DevOps to manage builds and tests. The following actions are available. 1 Check the solution into Azure Repos. 2 Create a work item. 3 Configure the automated test to run in the build pipeline. 4 Debug the solution locally. 5 Create a test project. Which three actions should you perform in order to associate an automated test with a test case?
❏ A. 3-5-1
❏ B. 1-3-5
❏ C. 5-2-3
❏ D. 5-1-3
At Northbridge Systems the engineering group uses Azure Pipelines to build and deploy applications and they want to send a notification to the legal channel in Slack when a build is ready for release. You must enable a setting in the Azure DevOps organization configuration so Azure Pipelines can post messages to Slack. Which setting should you turn on?
❏ A. Azure Active Directory Conditional Access Policy Validation
❏ B. Third party application access via OAuth
❏ C. Personal access tokens
❏ D. Alternate authentication credentials
A development group at a fintech startup needs a Git branching model that allows multiple people to work on independent tasks at the same time and keeps the main branch deployable at all times. The chosen model must allow incomplete features to be abandoned without affecting production and it should foster experimentation. Which branching approach should you propose?
❏ A. Several long lived branches maintained concurrently
❏ B. Individual forks for each contributor
❏ C. One main branch with multiple short lived feature branches
❏ D. A single persistent branch used for all development
A development group at BrightArc uses Azure DevOps to compile and deliver a microservice that will run in an AKS cluster. The group needs to scan the container image for vulnerabilities within the CI pipeline before it is deployed to the cluster. Which Microsoft product should be added to the pipeline to perform the image scanning?
❏ A. Microsoft Defender for Storage
❏ B. Microsoft Defender for App Service
❏ C. Microsoft Defender for Containers
❏ D. Microsoft Defender for DevOps
Which Azure DevOps service should a development team use to manage product backlogs, plan sprints, and visualize the progress of work items?
❏ A. Artifact Registry
❏ B. Azure Repos
❏ C. Azure Boards
❏ D. Azure Pipelines
A development group at Meridian Systems uses a continuous integration pipeline to build and release a web application. They need a testing approach that checks a single code component without invoking that component’s external dependencies. Which testing approach should they adopt?
❏ A. Integration test
❏ B. Acceptance test
❏ C. Unit test
❏ D. Performance test
❏ E. Sanity test
NimbusSoft maintains a containerized application named ServiceA and the engineering team supplied the Dockerfile below for review.

FROM mcnz.com/dotnet/sdk:6.0 AS base
WORKDIR /serviceA
EXPOSE 8080
EXPOSE 8443

FROM mcnz.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["ServiceA.csproj", ""]
RUN dotnet restore "./ServiceA.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "ServiceA.csproj" -c Release -o /serviceA/build

FROM build AS publish
RUN dotnet publish "ServiceA.csproj" -c Release -o /serviceA/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /serviceA/publish .
ENTRYPOINT ["dotnet", "ServiceA.dll"]

Does this Dockerfile utilize Docker multi stage build functionality?
❏ A. No
❏ B. Yes
A development team at Aurora Software maintains Git hosted packages and follows Semantic Versioning. You add a new feature that keeps existing interfaces compatible. Which portion of the version string should you increment?
❏ A. Patch
❏ B. Major
❏ C. Minor
You manage an Azure subscription that contains an Azure Key Vault named VaultA, a pipeline named BuildPipelineX, and an Azure SQL database named SalesDB. BuildPipelineX deploys an application that will authenticate to SalesDB by using a password. You must save the password in VaultA and ensure that BuildPipelineX can retrieve it during deployment. How should you store the password?
❏ A. Key
❏ B. Certificate
❏ C. Managed Identity
❏ D. Secret
AtlasApps maintains an Azure DevOps project with a build pipeline that pulls in roughly 35 third party libraries. The engineering team wants automated detection of known security vulnerabilities in those libraries as part of the continuous integration workflow. Which object should be created to enable those scans?
❏ A. An Azure Artifacts feed
❏ B. A deployment job
❏ C. A build task
A site reliability team uses Azure DevOps and has set up a project that relies on Azure Boards to manage work items. They plan to add dashboard chart widgets to monitor several project metrics. Which widget can be used to present the metric “Show the current status of each release pipeline”?
❏ A. Query chart tile
❏ B. Sprint capacity widget
❏ C. Release pipelines overview
❏ D. Build run history
A payments startup collects telemetry from the Intelligent Insights capability of its Azure SQL Database and from Application Insights for its web front end. It must run interactive ad hoc queries against the gathered monitoring data. Which query language should it use?
❏ A. Transact-SQL
❏ B. PL/pgSQL
❏ C. BigQuery SQL
❏ D. Kusto Query Language (KQL)
How can a development team best prevent direct pushes from feature branches into the main branch?
❏ A. Lock the main branch
❏ B. Adopt a team guideline to use pull requests
❏ C. Branch policies
A development team at Orion Labs has an Azure Pipelines build that runs separate jobs to compile their application for eight different CPU targets and the build currently requires about eighteen hours to finish. The team wants to shorten the overall build time. What two changes should they make to accomplish this? (Choose 2)
❏ A. Enable pipeline caching for build outputs
❏ B. Set up an agent pool for the pipeline
❏ C. Adopt blue green deployment for releases
❏ D. Increase the pipeline parallel job quota
❏ E. Use deployment groups to target servers
Review this statement. “Orchid Systems uses Black Duck to confirm that every open source dependency complies with the company licensing policies.” If Black Duck is the right tool for this task select No change needed, otherwise choose the tool that would make the statement accurate.
❏ A. Bamboo
❏ B. No change needed
❏ C. Cloud Build
❏ D. Maven
Your team at Nimbus Systems manages several package feeds in an Azure DevOps project and you want to consolidate them into a single feed that contains packages you publish internally and packages pulled from external registries both anonymous and authenticated. Which Azure DevOps feature should you enable?
❏ A. Universal Packages
❏ B. Views in Azure Artifacts
❏ C. Upstream sources
❏ D. Symbol server
Which reporting widget shows the elapsed time to complete work items after they transition to the active state?
❏ A. Velocity
❏ B. Lead Time
❏ C. Cycle Time
❏ D. Burndown Chart
Which statement best explains a primary benefit of adopting a version control system for a development team?
❏ A. Version control systems automatically increment build or release version numbers
❏ B. Cloud Build
❏ C. Version control records who changed files when and what differences were made so teams can restore prior working states
❏ D. Version control only stores binary snapshots without preserving who made the changes
A software company is designing its deployment workflow for a customer portal and they want to expose new releases gradually to subsets of users to reduce risk. Which approaches support staged exposure of a new application version? (Choose 2)
❏ A. Blue green deployment
❏ B. Incremental rollout
❏ C. Deployment rings
❏ D. Feature flags
At NovaTech your team integrates a cloud hosted Jenkins server with a fresh Azure DevOps environment and you need Azure DevOps to notify Jenkins when a developer pushes commits to a branch in Azure Repos. You configure a service hook subscription that listens for the Code Pushed event. Does this meet the requirement?
❏ A. No
❏ B. Yes
You need to strengthen engineering practices at NovaApps. Which kind of security tool should you integrate into the continuous integration phase of your build pipeline?
❏ A. Threat modeling
❏ B. Penetration testing
❏ C. Static analysis tools
A regional fintech firm named NovaStream is updating its Azure DevOps workflows for a multi-site engineering group. The team must detect license noncompliance and banned third-party libraries during development. You plan to add automated security testing to the build pipeline. Will this change meet the stated requirement?
❏ A. Add a software composition analysis tool to the pipeline
❏ B. Implement automated security testing in the build pipeline
❏ C. Create a policy gate that rejects builds with disallowed licenses
❏ D. No, automated security testing alone is insufficient
Certification Practice Exam Questions Answered
A continuous integration pipeline at Meridian Software intermittently fails because a single test that measures API endpoint latency sometimes times out. You need to keep that intermittent test from causing the build to fail while allowing real test failures to still fail the pipeline. What actions should you take? (Choose 2)
✓ B. Exclude flaky tests from the test pass calculation
✓ D. Manually mark the test as flaky in the build settings
The correct options are Exclude flaky tests from the test pass calculation and Manually mark the test as flaky in the build settings.
Exclude flaky tests from the test pass calculation is correct because it prevents an intermittently failing test from affecting the overall pass rate that gates the build. Excluding the test lets the pipeline ignore occasional timeouts for that test when computing pass thresholds while still treating other tests as normal.
Manually mark the test as flaky in the build settings is correct because explicitly flagging the test as flaky gives you a targeted way to handle that specific test. Marking it lets the CI system treat its results differently and avoids broad changes that could hide real regressions.
Disable flaky test detection is incorrect because turning off detection removes the mechanism that identifies and handles flakiness. That approach would not keep intermittent failures from breaking the build and would make it harder to manage flaky tests over time.
Enable retry of failed tests for the job is incorrect because retries can mask intermittent issues and make it harder to notice real failures. Retries may reduce noise but they do not explicitly exclude the test from pass calculations or provide a clear signal that the test is flaky.
Enable Test Impact Analysis is incorrect because Test Impact Analysis selects which tests to run based on code changes and does not address intermittent test flakiness or pass rate calculations. It will not prevent a flaky test from affecting build pass criteria.
When you see questions about flaky tests, pick answers that let you flag or exclude the test from pass metrics so the pipeline stays sensitive to real failures while ignoring intermittent noise.
Finity Systems is building several services that will run on a set of Azure virtual machines and you must pick appropriate deployment methods for updates. ServiceAlpha requires that the new release is rolled out to a small subset of users for validation and then propagated to the entire user base once testing succeeds. ServiceBeta will have instances of the old release replaced by instances of the new release on a fixed group of virtual machines. Which deployment strategy should you choose for ServiceAlpha?
✓ B. Canary rollout
The correct option is Canary rollout.
A Canary rollout deploys the new release to a small subset of users or instances for validation and then gradually increases exposure if the changes are successful. This pattern reduces risk by limiting the blast radius and letting operators observe real user behavior and metrics before propagating the release to the entire user base, which matches the ServiceAlpha requirement.
Blue green deployment is not correct because it involves switching traffic between two complete environments in an all or nothing cutover and it does not target a small subset of users for validation first.
Rolling update is not correct because it focuses on replacing instances across a fixed group of machines sequentially and it does not emphasize validating the new release with a small subset of users before full rollout. The rolling approach is better suited to the ServiceBeta scenario where instances on a fixed set of VMs are replaced.
When a question mentions validating with a small group of users, choose a canary style strategy. When it mentions updating a fixed fleet of machines, choose a rolling deployment.
Read the following statement and decide whether it is true. “To install an application across several Azure virtual machines you should create a universal group in Active Directory.” Is this statement true?
✓ B. No
The correct answer is No.
No is correct because creating a universal group in Active Directory does not deploy software to machines. Universal groups are a group scope used for membership and permission assignment across domains and they replicate group membership to the global catalog.
To install an application across several Azure virtual machines you must use a deployment or management mechanism such as Group Policy for domain joined machines, Azure VM extensions, Desired State Configuration, Azure Automation, VM scale sets, or Microsoft Intune for Azure AD joined devices. Those tools execute installers or configuration on target machines and a group scope alone will not push an installer.
Yes is wrong because simply creating a universal group does not trigger installation actions and it is not the mechanism used to distribute and install applications across virtual machines.
When a question asks about deploying software to multiple machines, think about management and deployment tools such as Group Policy, VM extensions, or Intune rather than changing group scopes in Active Directory.
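For illustration, here is a minimal sketch of one such mechanism, the Custom Script Extension, which runs an installer script on a target VM. It assumes the Az.Compute PowerShell module, and the resource names and script URI are hypothetical.

# Run a hypothetical install script on one VM via the Custom Script Extension
Set-AzVMCustomScriptExtension `
    -ResourceGroupName "rg-apps" `
    -VMName "vm-web-01" `
    -Name "InstallApp" `
    -FileUri "https://stappscripts.blob.core.windows.net/scripts/install-app.ps1" `
    -Run "install-app.ps1" `
    -Location "westeurope"

Repeating this per VM, or baking the extension into a scale set model, is what actually distributes the installer; no directory group is involved.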
You are responsible for an Azure App Service application at Finity Labs and you need to create a release pipeline that follows a blue green deployment approach using deployment slots while keeping downtime low and administrative effort minimal. You created a new deployment slot named preprod in the App Service. You added a preproduction stage in the Release pipeline and configured a task to deploy the application to the preprod slot. You added a production stage and added an App Service Manage task to swap the preprod slot into production. Is this process correct?
✓ B. Yes the steps implement blue green deployment using slots
Yes the steps implement blue green deployment using slots is correct. You deployed the app to a preprod deployment slot and then swapped that slot into production with an App Service Manage task which implements a blue green style release while keeping downtime low and administrative effort minimal.
The App Service slot swap is a quick operation that routes traffic from the production slot to the preprod slot almost instantly which reduces downtime. The App Service Manage task in the Release pipeline performs the swap so you do not need complex manual steps. You should also take advantage of slot warm up and slot specific settings to ensure the new instance is ready to serve requests immediately after the swap.
You can also use traffic routing on slots to shift traffic gradually if you want staged verification before the full swap. That capability is optional and it complements the swap when you need more cautious rollouts.
No this process is not sufficient is incorrect because the described sequence of deploying to a preprod slot and then swapping into production is the standard blue green approach on Azure App Service. The only potential gaps are operational details such as marking configuration as slot settings or warming the app which are implementation details rather than a problem with the core approach.
Make sure to mark environment specific application settings and connection strings as slot settings and perform a warm up in the preprod slot before swapping to minimize the chance of downtime or configuration drift.
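As a minimal sketch, the swap performed by the App Service Manage task corresponds to the following Azure PowerShell command, assuming the Az.Websites module; the app and resource group names are hypothetical.

# Swap the preprod slot into production for the target web app
Switch-AzWebAppSlot `
    -ResourceGroupName "rg-finity" `
    -Name "finity-app" `
    -SourceSlotName "preprod" `
    -DestinationSlotName "production"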
Your engineering team wants to strengthen security in the development lifecycle. Which category of security tool should you apply at the pull request stage of the pipeline?
✓ C. Static code analysis
The correct option is Static code analysis.
Static code analysis examines source code and committed changes without running the application so it can provide fast, actionable feedback at the pull request stage. It is commonly integrated into pull request checks or CI pipelines so that issues such as insecure coding patterns, hardcoded secrets, and basic dataflow flaws can be detected before code is merged.
Threat modeling is a design and architecture activity that helps identify potential threats and mitigation strategies for a system. It is not an automated check that runs on each pull request and it focuses on system design rather than scanning the actual code changes.
Dynamic application security testing analyzes a running application and finds runtime and environment specific issues. It is typically performed later in the pipeline or against deployed test environments and it is not suitable for the quick, pre-merge feedback that is needed at the pull request stage.
When a question asks which tool fits the pull request stage, think of tools that run on source code without executing the app and look for static analysis solutions that integrate with pull request checks.
A development group at Nimbus Apps needs its Git history to remain strictly linear and avoid merge commit nodes. Which merge approach should they choose?
✓ C. Rebase then fast forward merge
The correct option is Rebase then fast forward merge.
Rebase then fast forward merge reapplies the feature branch commits on top of the target branch and then advances the branch pointer without creating a merge commit. This preserves each original commit and produces a strictly linear commit graph without merge nodes.
Merge without fast forward is incorrect because it forces a merge commit even when a fast forward is possible and therefore adds merge nodes to the history.
Squash merge is incorrect because it collapses all branch commits into a single commit which loses the separate commits. Even though it avoids merge commits it does not preserve the original, per commit history that a team might require.
Rebase then create a merge commit is incorrect because creating a merge commit reintroduces a merge node into the graph and so the history is not strictly linear.
Rebase rewrites commits and a fast forward merge moves the branch pointer without a merge commit. On the exam pick the option that both avoids merge commits and preserves the commits you need.
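A minimal sketch of the rebase then fast forward flow follows; the branch name is hypothetical.

# Replay the feature commits onto the tip of main
git checkout feature/login
git rebase main
# Fast forward main so no merge commit is created
git checkout main
git merge --ff-only feature/login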
A company named TailwindTech has an Azure subscription that includes a virtual machine scale set called ScaleSetA which is configured for autoscaling. TailwindTech also has an Azure DevOps project named PipelineX that builds a web app named SiteAlpha and deploys SiteAlpha to ScaleSetA. You must ensure that an email notification is sent whenever ScaleSetA scales in or out. The proposed solution is to create an action group in Azure Monitor. Does this meet the requirement?
✓ B. Yes
The correct answer is Yes.
An action group in Azure Monitor is the mechanism that defines notification receivers such as email, SMS, and webhook. When you need an email whenever ScaleSetA scales in or out you create an action group with an email receiver and then use that action group from the autoscale settings or from an alert rule that monitors scaling events. The action group is what actually sends the email when a linked autoscale rule or alert fires.
To implement this you create the action group and add the desired email address as a receiver. You then attach the action group to the virtual machine scale set autoscale configuration or to an activity log or metric alert that detects scale operations. Once associated the action group will receive the notification and send the email whenever ScaleSetA scales in or out.
No is incorrect because simply rejecting the action group ignores how Azure Monitor delivers notifications. Action groups are the correct notification target and must be used with autoscale settings or alerts to produce the email notifications.
When you see questions about notifications, look for an answer that involves creating an action group, and then remember to attach it to an autoscale setting or an alert so the notifications are actually sent.
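As a rough sketch, an action group with an email receiver can be created with Azure PowerShell as shown below. The names are hypothetical and the exact cmdlets vary between Az.Monitor versions, so treat this as an outline rather than a definitive script.

# Define an email receiver and create or update the action group (names hypothetical)
$email = New-AzActionGroupReceiver -Name "ops-email" -EmailReceiver -EmailAddress "ops@tailwindtech.example"
Set-AzActionGroup -ResourceGroupName "rg-scale" -Name "ag-scale-events" -ShortName "scale" -Receiver $email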
A small firm named CloudForge Labs is using a free Azure DevOps account and has created multiple private projects. Each project defines a build pipeline that uses a self hosted agent and builds are frequently queued which slows delivery. What can the team do to reduce the build queue delays?
✓ D. Purchase extra parallel jobs from Azure DevOps
The correct option is Purchase extra parallel jobs from Azure DevOps.
Purchase extra parallel jobs from Azure DevOps is correct because Azure DevOps enforces a limit on how many jobs can run at the same time for an organization. Buying additional parallel jobs increases the allowed concurrency so more builds can run simultaneously and the queue delays are reduced.
Add more self hosted build machines is not the best answer because simply adding more machines does not change the Azure DevOps parallel job quota. If the organization has reached its concurrency limit additional self hosted agents will remain idle while jobs queue.
Introduce pipeline caching to speed builds can make individual builds faster by reusing outputs and dependencies but it does not increase how many builds can run in parallel. Caching helps reduce build time but it will not remove queue delays when concurrency is the bottleneck.
Switch to Microsoft hosted agents is also not sufficient on its own because Microsoft hosted agents are still subject to the organization parallel job limits and free accounts have restricted parallel jobs and minutes. Changing the agent type will not increase the number of concurrent jobs without purchasing additional parallel capacity.
When a question describes many builds queuing, determine whether the bottleneck is concurrency or build duration. If concurrency is the issue, look for answers that increase parallel jobs or parallelism.
NordicApps is standing up a new Azure DevOps project with strict least privilege requirements. The chief engineer must be able to create repositories, manage access control, set policies, and commit code. Developers must be allowed to commit code and create branches, but they must not be permitted to bypass policies when submitting builds. Project stakeholders must have view only access to the repository. To assign the Project Manager role in this Azure DevOps project, which Azure DevOps group should the Project Manager be placed in?
✓ C. Readers
The correct option is Readers.
The Readers group grants read only access to repositories and related artifacts which aligns with the Project Manager requirement to have view only access without the ability to commit code or change branch policies. Placing the Project Manager in Readers enforces least privilege while still allowing them to review code and project state.
The Contributors group is incorrect because it allows users to commit code and create branches which is more permission than a view only Project Manager should have.
The Project Collection Administrators group is incorrect because it grants organization level administrative privileges and can change policies and security across projects which is far too broad for a Project Manager with view only needs.
The Build Administrators group is incorrect because it focuses on pipeline and build resource administration and does not provide the intended read only repository role for a Project Manager.
The Project Administrators group is incorrect because it can modify project settings and branch policies which would violate the least privilege requirement for a Project Manager.
The Project Collection Valid Users group is incorrect because it is a broad membership group that effectively represents all users and is not a specific role for granting controlled view only access.
When a role requires only view permissions, choose the default Readers group since it maps to read only access and supports strict least privilege on Azure DevOps projects.
An engineering group needs to run tests inside a container during an Azure DevOps pipeline that uses Docker. The test outputs must remain inside the build stage and must not be propagated to pipeline artifacts. What approach should they choose?
✓ C. Multi stage Dockerfile
The correct choice is Multi stage Dockerfile.
Multi stage Dockerfile lets you run compilation and tests in one or more intermediate stages and then produce a final image that does not include the test outputs. This makes it straightforward to keep test artifacts only inside the build stages and not copy them into the final image or expose them as pipeline artifacts.
Docker Compose file defines and orchestrates multiple containers and it is useful for local integration scenarios, but it does not by itself provide the build stage isolation needed to guarantee that test outputs remain inside the pipeline build and are not propagated as artifacts.
Single stage Dockerfile builds everything into one image so tests and their outputs are part of the same build unless you perform explicit cleanup, and that makes it harder to ensure outputs are not included in the final image or published by the pipeline.
Google Kubernetes Engine pod runs workloads on an external cluster and is not relevant to keeping test outputs strictly inside an Azure DevOps build stage. Running tests on GKE would move execution outside the build environment and could lead to persistence or collection of outputs that violates the requirement.
When a question requires keeping outputs confined to the build, think about stage isolation. Using a multi stage Docker build lets you run tests in an intermediate stage and discard outputs before producing the final artifact.
A software group at NovaSoft keeps its code in a git repository. They need to label specific commits as milestone releases and you must tag the initial release for a particular commit. Which git command should you run?
✓ C. git tag -a v2.0 -m "Release v2.0"
git tag -a v2.0 -m "Release v2.0" is correct.
The annotated tag creates a tag object that records the tagger name, email, date, and the message you supply. That makes it the proper choice for a milestone release because it preserves metadata and it can be signed to prove provenance.
You can also target a specific commit by appending the commit hash to the command and then push the tag to the remote so other collaborators can fetch it.
git push origin v2.0 is incorrect because pushing only sends a local ref to the remote and does not by itself create a tag locally. You must create the tag first before pushing it.
git commit -m "Release v2.0" is incorrect because that command creates a new commit rather than creating a tag label for an existing commit. Tags are separate refs from commits.
git tag v2.0 is incorrect for this scenario because that command makes a lightweight tag that does not include a message or the metadata you get with an annotated tag. Lightweight tags are fine for quick labels but they are not ideal for formal release milestones.
Annotated tags are preferred for release milestones because they include a message and metadata. Remember to create the tag locally and then run git push origin v2.0 or git push --tags to share it with others.
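A minimal sketch of tagging a specific commit and sharing the tag follows; the commit hash is hypothetical.

# Annotate a specific commit (hash hypothetical) and push the tag to the remote
git tag -a v2.0 -m "Release v2.0" 9fceb02
git push origin v2.0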
A development group at NovaApps uses an Azure DevOps pipeline to build and deploy containerized services to an AKS cluster and the images are pushed to Azure Container Registry. The security mandate requires that application vulnerabilities are identified as early as possible in the delivery workflow. What action satisfies this requirement?
✓ C. Scan the container image stored in Azure Container Registry during the build pipeline
Scan the container image stored in Azure Container Registry during the build pipeline is the correct action because it finds vulnerabilities before images are pushed to production and it allows the build pipeline to fail or block deployment when known issues are detected.
Scanning images as part of the build pipeline catches vulnerabilities at the earliest point in the delivery workflow. This approach lets developers remediate issues in source or image layers before the image reaches Azure Container Registry or the AKS cluster and it prevents vulnerable artifacts from being deployed.
Run a pipeline task that inspects running pods in the AKS cluster for security vulnerabilities is incorrect because inspecting running pods happens after deployment. That method is useful for runtime discovery but it cannot prevent a vulnerable image from being deployed in the first place.
Use Microsoft Defender for Cloud to continuously scan deployed workloads for threats is incorrect because Microsoft Defender for Cloud focuses on runtime threat detection and posture management. It provides valuable protection and alerts after deployment but it does not provide the same early, pre-deployment vulnerability blocking that image scanning in the build pipeline does.
When you see choices that occur before or after deployment, pick the one that is pre-deployment if the question asks to identify vulnerabilities as early as possible. Scanning the image in CI is typically the earliest gate.
A mobile software firm called Solace Apps is creating Android and iOS clients and needs to manage work items and release pipelines in Azure DevOps while also capturing crash telemetry, distributing beta builds to testers, and gathering user feedback on new features. Which components should be included in the solution?
✓ C. Microsoft Test & Feedback extension
Microsoft Test & Feedback extension is the correct option.
The Microsoft Test & Feedback extension supplies a lightweight tester client that captures screenshots reproduction steps system and console logs and file attachments and it lets testers file bugs and other work items directly into Azure DevOps so feedback is recorded and traceable to releases and fixes.
The extension fits scenarios where you need to gather rich user feedback and exploratory test artifacts and then link those artifacts to work items and release context in Azure DevOps so teams can triage and act on issues discovered by testers.
Visual Studio App Center is not the correct choice in this question because it is a separate service that focuses on crash reporting distribution and analytics rather than the Azure DevOps integrated feedback workflow that the Test & Feedback extension provides.
Azure DevOps Boards is not correct because Boards manages work items and backlogs but it does not itself capture screenshots session logs or in‑app feedback from testers.
TFS Integration Platform is not correct because it is a legacy migration tool rather than a way to capture telemetry or user feedback. It is deprecated and unlikely to be the focus of current exam scenarios.
Focus on matching the specific capability described in the question to the product that explicitly offers that capability, and watch for legacy tools, which may be deprecated.
Review the Nexora Solutions case brief at https://example.com/doc/9aBcD. What modification is needed to the Register-AzureRmAutomationDscNode command to fix the current automation problem?
✓ B. Change the ConfigurationMode parameter to a mode such as ApplyAndMonitor or ApplyAndAutoCorrect
Change the ConfigurationMode parameter to a mode such as ApplyAndMonitor or ApplyAndAutoCorrect is the correct option.
The ConfigurationMode parameter determines how the node’s Local Configuration Manager behaves after you register the node. Choosing ApplyAndMonitor enables reporting and monitoring of configuration drift, and choosing ApplyAndAutoCorrect enables automatic remediation when drift is detected. Changing the registration so the node is not in ApplyOnly addresses the automation problem where configurations are applied but not monitored or corrected. Note that the cmdlet shown in the brief comes from the older AzureRM module, which is deprecated; newer exams and guides use the Az equivalents.
Add the DefaultProfile parameter to the command is incorrect because DefaultProfile only sets which Azure subscription or context the cmdlet runs under and it does not change the Local Configuration Manager’s behavior that controls monitoring or auto correction.
Swap Register-AzureRmAutomationDscNode for Register-AzureRmAutomationScheduledRunbook is incorrect because a scheduled runbook registration is used to associate runbooks with schedules and it does not register or configure DSC nodes or their LCM modes.
Include the AllowModuleOverwrite parameter on the registration command is incorrect because AllowModuleOverwrite relates to module import and update behavior and it does not affect a node’s configuration mode or enable drift monitoring or automatic correction.
When troubleshooting DSC node behavior, check the Local Configuration Manager’s ConfigurationMode first to see if monitoring or auto correction is enabled.
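For reference, a registration that enables auto correction might look like the following sketch, using the current Az module equivalent of the deprecated AzureRM cmdlet; all names are hypothetical.

# Register the VM as a DSC node with drift auto correction enabled (names hypothetical)
Register-AzAutomationDscNode `
    -ResourceGroupName "rg-automation" `
    -AutomationAccountName "aa-nexora" `
    -AzureVMName "vm-app-01" `
    -NodeConfigurationName "WebConfig.localhost" `
    -ConfigurationMode "ApplyAndAutoCorrect"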
A regional software house operates Team Foundation Server 2015 and intends to move all assets to Azure DevOps while maintaining original TFVC changeset timestamps and the modified timestamps on work item revisions and while keeping migration work to a minimum. What action should be taken on the TFS application server?
✓ C. Upgrade the TFS deployment to the latest RTW release
Upgrade the TFS deployment to the latest RTW release is the correct action to take on the TFS application server.
Upgrading the TFS deployment to the latest RTW release brings the server and databases to a supported level for migration to Azure DevOps Services. That compatibility is required for the import and migration tooling to preserve original TFVC changeset timestamps and the modified timestamps on work item revisions while keeping the amount of manual migration work to a minimum.
The upgrade updates the TFS schema and server components so that the Azure DevOps import process can carry over history and metadata cleanly. Using the supported upgrade path avoids custom scripts or manual edits and reduces the risk of losing timestamps or revision data during the migration.
Perform a database attach migration without upgrading the server is incorrect because attaching databases from an older, unsupported TFS build does not meet the import requirements and can result in migration failures or loss of metadata needed to preserve original timestamps.
Install the TFS Java SDK is incorrect because there is no Java SDK step that affects server version compatibility or that will enable preservation of timestamps during a migration. Installing client libraries does not replace the need to upgrade the TFS deployment itself.
When answering migration questions, focus on compatibility and supported upgrade paths. Upgrading the source server to a supported release is often the key step to preserve history and timestamps.
A development team packages NuGet libraries and stores them in Azure Artifacts for a company called SummitSoft. You need to make a single package accessible to anonymous external users while minimizing the number of publication endpoints. What action should you perform?
✓ C. Provision a separate feed for the NuGet package
Provision a separate feed for the NuGet package is correct.
Provisioning a separate feed lets you publish exactly the single NuGet package that should be accessible to anonymous users while keeping other packages private. A distinct feed can be configured for public visibility so you do not have to expose your internal feed, and this minimizes the number of publication endpoints since you publish that package only to the public feed.
Store the package files inside a Git repository is incorrect because Git repositories are not package feeds and NuGet clients expect a package registry or feed to resolve and install packages.
Modify the feed URL for the existing package is incorrect because changing a URL does not change feed visibility and you cannot selectively make a single package public within a private feed by only altering the feed address.
Add the package feed URL into the Visual Studio environment settings is incorrect because adding a feed URL to a developer environment grants that client access but it does not enable anonymous external access or change the feed visibility for all external users.
When a question asks about exposing a single package publicly, prefer creating a separate feed and setting its visibility to public. This keeps other artifacts private and reduces the number of publication endpoints you must manage.
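Once the public feed exists, publishing the package to it is a single push, sketched below from PowerShell. The organization, project, feed, and package names are hypothetical, and the example assumes the Azure Artifacts credential provider handles sign in, in which case the api-key value is a placeholder.

# Push the package to the public Azure Artifacts feed (names hypothetical)
dotnet nuget push .\SummitSoft.Core.1.0.0.nupkg `
    --source "https://pkgs.dev.azure.com/summitsoft/PublicProject/_packaging/public-feed/nuget/v3/index.json" `
    --api-key az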
Your engineering group at Meridian Tech manages a project in Azure DevOps and needs a dashboard widget that reports the total time from when a work item is created until it is closed. Which widget provides that measurement?
✓ C. Lead Time
The correct option is Lead Time.
The Lead Time widget measures the elapsed time from when a work item is created until it is closed. It reports the total end to end time for an item so you can track delivery speed and trends across work.
Cycle Time is incorrect because it usually measures the time from when work actually starts on an item until it is finished rather than from creation to closure.
Velocity is incorrect because it reports how much work a team completes per iteration or sprint and it does not measure the time to close an individual work item.
Burndown chart is incorrect because it visualizes remaining work over the course of a sprint or iteration and it does not report the total elapsed time for a single work item.
Look for wording that refers to the time from when a work item is created to when it is closed to identify Lead Time on the exam.
Riverton Software uses ServiceNow for incident tracking and you have deployed a new application to Azure. You must ensure that failed authentication events from the application automatically create incident tickets in ServiceNow. Which Azure Log Analytics solution should you deploy?
✓ C. IT Service Management Connector (ITSM)
IT Service Management Connector (ITSM) is the correct option.
The IT Service Management Connector (ITSM) integrates Azure Monitor and Log Analytics with ITSM platforms such as ServiceNow so that alerts and log based conditions can automatically create incidents in the external ticketing system. This connector maps alert or log event fields to incident fields and supports automated ticket creation for events like failed authentication attempts.
Application Insights Connector is not correct because Application Insights is focused on collecting application telemetry and diagnostics and it does not itself provide the dedicated, supported integration to create ServiceNow incidents from Log Analytics alerts in the same way that the ITSM connector does.
Azure Integration Connector is not correct because there is no Log Analytics solution by that exact name that provides direct incident creation in ServiceNow. It is not the supported connector for forwarding alerts to an ITSM tool.
SQL Server Connector is not correct because that connector targets ingestion of SQL Server telemetry and performance data into Log Analytics and it does not handle creating incidents in ServiceNow for authentication failures from an application.
When a question asks which connector will create tickets in an ITSM tool, think about which integration is explicitly built for that purpose. In this case use the IT Service Management Connector and then configure alert rules to send events to it.
A cloud operations team at Cedar Software runs PowerShell in their CI pipeline on Azure and needs to list all resources that do not comply with current policies. Which PowerShell cmdlet should they execute?
✓ C. Get-AzPolicyState
The correct answer is Get-AzPolicyState.
Get-AzPolicyState retrieves policy evaluation results and the compliance state for individual resources. It returns entries that indicate which resources are compliant or noncompliant and includes details about the assignment and policy that caused the evaluation. Running Get-AzPolicyState in a CI pipeline lets you filter the output for noncompliant states to produce a list of resources that do not comply with current policies.
Get-AzPolicyAssignment returns metadata about policy assignments and their scopes. It does not provide per resource compliance evaluations so it cannot directly list noncompliant resources.
Get-AzPolicyRemediation shows remediation resources and remediation runs that attempt to bring resources into compliance. It focuses on remediation operations rather than producing a general report of current noncompliant resources.
Get-AzPolicyDefinition retrieves the policy definition documents themselves. This cmdlet helps you inspect or manage the policy rules but it does not report evaluation results or which resources are failing policy checks.
Run Get-AzPolicyState and then filter for the NonCompliant state when you need an automated list of resources that fail policy checks.
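A minimal sketch of that filter follows, assuming the Az.PolicyInsights module.

# List resources whose latest policy evaluation is noncompliant
Get-AzPolicyState -Filter "ComplianceState eq 'NonCompliant'" |
    Select-Object ResourceId, PolicyAssignmentName, ComplianceState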
Riverton Labs is creating a new Java application and they already use a SonarQube server to evaluate their .NET work. They want SonarQube to analyze and monitor the Java code as part of automated builds. Which build task type should they add to their CI pipeline?
-
✓ D. Apache Maven
The correct answer is Apache Maven.
Apache Maven is a Java build and dependency management tool. SonarQube integrates directly with Maven through the SonarScanner for Maven so the CI pipeline can run the scanner and analyze Java source code as part of automated builds.
Gulp is a JavaScript task runner that is used for front end and Node projects and it is not a standard Java build system so it is not the right choice to run SonarQube analysis for Java.
CocoaPods is an iOS and macOS dependency manager for Swift and Objective C and it does not apply to Java builds or SonarQube Java analysis.
Xcode is Apple’s IDE and build environment for macOS and iOS development and it is not used to build or manage Java projects so it is not appropriate for this CI task.
When a question asks about build tasks for Java choose a Java build tool such as Maven or Gradle and check whether SonarQube offers a dedicated integration for that tool.
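As an illustration, here is a hedged Azure Pipelines YAML sketch of Maven driven SonarQube analysis. The SonarQube tasks come from the SonarQube marketplace extension, which must be installed in the organization, and the service connection name SonarQubeConnection is a hypothetical placeholder:

```yaml
steps:
  - task: SonarQubePrepare@5
    inputs:
      SonarQube: 'SonarQubeConnection'   # hypothetical service connection name
      scannerMode: 'Other'               # delegate the analysis to the Maven build
  - task: Maven@3
    inputs:
      mavenPomFile: 'pom.xml'
      goals: 'clean verify'
      sonarQubeRunAnalysis: true         # run the SonarScanner for Maven
  - task: SonarQubePublish@5
    inputs:
      pollingTimeoutSec: '300'           # wait for the quality gate result
```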
The engineering team at Solstice Labs maintains a production application named Beacon and they are adding a new microservice that depends on another application called Atlas which is still under development and you must be able to deploy the update to Beacon before Atlas is available and later enable the microservice once Atlas is ready. What should you do?
-
✓ C. Implement a feature flag
The correct answer is Implement a feature flag.
A feature flag lets you deploy the new microservice code to production with the feature turned off while Atlas is still under development. This approach decouples deployment from release and lets you enable the microservice later when Atlas is ready without additional merges or rollbacks.
Feature flags also allow gradual rollout and testing in production environments so you can control exposure and monitor behavior before enabling the feature for all users. You can implement the flag as a simple configuration switch or use a feature flag service for targeting and analytics.
Create a feature branch in the repository is not ideal because working in a separate branch prevents deploying the change to the main production line until you merge. That approach does not let you deploy the new code safely to production while keeping it disabled.
Deploy a lightweight stub that returns default responses is more work to maintain and can lead to incorrect behavior or hidden bugs when the real service arrives. Stubs do not provide the same controlled rollout and observability that feature flags offer.
Apply branch protection rules on the main branch does not solve the requirement to deploy the update before the dependent service is available. Branch protection is about preventing unwanted changes and it does not provide a runtime mechanism to toggle a feature on or off.
When a question asks about releasing code before a dependency is ready look for answers that decouple deployment from release. Feature flags are a common and safe pattern to enable this behavior.
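The pattern can be as simple as a configuration backed switch. Below is a minimal C# sketch; the flag name AtlasIntegration and the in memory dictionary are hypothetical stand-ins for a real configuration source or a feature flag service such as Azure App Configuration:

```csharp
using System;
using System.Collections.Generic;

public static class FeatureFlags
{
    // In production this would come from app configuration or a flag service.
    private static readonly Dictionary<string, bool> Flags = new()
    {
        ["AtlasIntegration"] = false   // ship the code disabled until Atlas is ready
    };

    public static bool IsEnabled(string name) =>
        Flags.TryGetValue(name, out var on) && on;
}

public class Program
{
    public static void Main()
    {
        if (FeatureFlags.IsEnabled("AtlasIntegration"))
            Console.WriteLine("Calling the Atlas dependent microservice");
        else
            Console.WriteLine("Atlas integration is deployed but disabled");
    }
}
```

Flipping the flag to true later releases the feature without a new deployment.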
A team maintains an Azure DevOps project named DevWorkspace that runs two pipelines called PublishApp and PublishAux. You must allow only PublishApp to deploy to an Azure App Service named site-production and prevent PublishAux from gaining deployment rights. The available actions are 1 Create a system assigned managed identity in Microsoft Entra. 2 Create a service principal in Microsoft Entra. 3 Configure project level permissions in DevWorkspace. 4 Add a variable to the PublishApp pipeline. 5 Create a service connection in DevWorkspace. 6 Approve the service connection inside PublishApp. Which three actions should you perform in order?
-
✓ F. 2, 5, 6
The correct sequence is 2, 5, 6.
2 creates a service principal in Microsoft Entra which gives an identity that can be granted RBAC rights to the App Service. Creating the service principal is the authentication foundation that lets Azure DevOps act against Azure resources without using user credentials.
5 is to create a service connection in the DevWorkspace project that uses the service principal to connect to the target subscription and App Service. The service connection stores the service principal credentials and is the object pipelines use to deploy to Azure.
6 is to approve or authorize that service connection specifically inside the PublishApp pipeline so only that pipeline is allowed to use it. By not granting the connection to all pipelines and by authorizing it only for PublishApp you prevent PublishAux from gaining deployment rights to the site-production App Service.
1. Create a system assigned managed identity in Microsoft Entra is not appropriate because a system assigned managed identity is tied to an Azure resource and cannot be directly used as a pipeline identity in Azure DevOps. Managed identities are for resources running in Azure and do not replace a service principal for DevOps service connections.
3. Configure project level permissions in DevWorkspace is not the right control for this requirement because project level permissions affect users and groups broadly and do not directly restrict which pipeline can use a given service connection. The precise control is to create and authorize a service connection for a single pipeline.
4. Add a variable to the PublishApp pipeline does not provide secure authentication or enforce access control to Azure resources. A pipeline variable cannot prevent another pipeline from deploying unless it is paired with a properly scoped service connection and authorization.
When a question asks how to allow only one pipeline to deploy, think in terms of creating a service principal for Azure authentication then creating a service connection and authorizing it only for the target pipeline. Use service connections and the pipeline authorization settings rather than broad project permissions.
A development group is automating builds for a Java application with Nimbus Pipelines and they need to gather code coverage metrics and then publish the coverage reports into the pipeline. Which tool can produce Java code coverage results that are suitable for publishing to the pipeline?
-
✓ C. Cobertura
The correct answer is Cobertura.
Cobertura produces Java code coverage reports in the Cobertura XML format which many CI and pipeline systems can ingest and publish. It integrates with common Java build tools such as Maven and Ant and can generate both human readable HTML and machine readable XML that you can upload into a pipeline step.
JaCoCo is a popular Java coverage tool, but its native report format differs from Cobertura XML and some pipeline publishing steps expect the Cobertura schema unless you convert the output first.
JUnit is a unit testing framework that reports test results and assertions. It does not produce code coverage metrics by itself so it is not the correct tool for publishing coverage reports.
Cloud Build is a continuous integration and delivery service for running build and test steps. It can run coverage tools and upload their outputs but it is not itself a code coverage generator.
When a pipeline expects coverage input look for tools that export the expected report format. Prefer tools that produce Cobertura XML or convert other coverage outputs to that schema before publishing.
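A hedged Azure Pipelines YAML sketch of publishing Cobertura output follows. The report path is a hypothetical example and depends on how the Maven coverage plugin is configured:

```yaml
steps:
  - task: Maven@3
    inputs:
      mavenPomFile: 'pom.xml'
      goals: 'clean verify'   # assumes the build generates a Cobertura XML report
  - task: PublishCodeCoverageResults@1
    inputs:
      codeCoverageTool: 'Cobertura'
      summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
```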
A regional retailer named Harbor Retail has an Azure subscription tied to an Azure Active Directory tenant that is licensed with Azure AD Premium Plan 1. A security audit found that too many employees have elevated permissions. You need to implement a privileged access solution that enforces time limited elevation, requires approvals to activate privileges, and keeps costs low. What should you do first?
-
✓ C. Upgrade the Azure Active Directory license to Azure AD Premium Plan 2
Upgrade the Azure Active Directory license to Azure AD Premium Plan 2 is correct. Azure AD Premium Plan 2 is required to use Azure AD Privileged Identity Management which provides time limited elevation approval workflows and just in time access controls so the organization can remove standing elevated roles and reduce ongoing cost and risk.
Enable Azure Active Directory Privileged Identity Management is not the correct first step because the full PIM feature set that enforces time limited elevation and approval requires the Premium P2 license. You must upgrade the tenant to P2 before you can rely on PIM to implement the requested controls.
Require Multi-Factor Authentication for privileged role activation is not sufficient on its own because MFA only strengthens authentication and does not provide time limited role assignments, automatic activation approvals, or the other lifecycle controls that PIM provides.
Create activity alerts for privileged role activations is not the right initial action because alerts only detect or notify about activity after it occurs and they do not enforce time limited elevation or require activation approvals. Alerts are useful for monitoring but they do not implement the privileged access model described.
When a question asks for time limited elevation and approval workflows think of Privileged Identity Management and remember that PIM requires Azure AD Premium P2 before you can enable those capabilities.
A development group at Meridian Analytics is starting a new initiative in Azure DevOps and they need a metric that indicates the percentage of defects discovered after a release to production. Which DevOps KPI should they inspect?
-
✓ C. Defect escape rate
Defect escape rate is the correct KPI for indicating the percentage of defects discovered after a release to production.
Defect escape rate measures the proportion of bugs or defects that are found in production compared to those found during development and testing. This metric shows how many issues were not caught earlier and it helps teams understand testing effectiveness and the risk to customers after a release.
Cycle time measures how long it takes work to move from start to finish and it does not specifically track defects found after release.
Error budget burn rate tracks how quickly an error budget is being consumed relative to service level objectives and it focuses on reliability rather than the share of defects found in production.
Sprint burndown shows progress against planned work within a sprint and it does not indicate the percentage of defects that escape into production.
Mean time to recover measures how quickly a service is restored after an incident and it relates to recovery performance not to defect discovery rates after release.
Bug submission rate counts how many bugs are being reported over time but it does not by itself distinguish whether those bugs were discovered pre release or post release.
Deployment frequency tracks how often code is deployed to production and it is a delivery velocity measure rather than a quality metric that shows escaped defects.
Service failure rate measures how often a service fails or experiences errors and it is focused on operational failures rather than the percentage of defects identified after a release.
When you see questions about defects after production look for metrics that compare pre release and post release defects. Pay attention to the wording that distinguishes quality after release from delivery or reliability metrics.
You manage a containerized application called ServiceX that is built from a Dockerfile. The Dockerfile for ServiceX is shown below for review.

```dockerfile
FROM example.com/dotnet/sdk:9.0 AS base
WORKDIR /service
EXPOSE 8080
EXPOSE 8443

FROM example.com/dotnet/sdk:9.0 AS build
WORKDIR /src
COPY ["ServiceX.csproj", ""]
RUN dotnet restore "./ServiceX.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "ServiceX.csproj" -c Release -o /service/build

FROM build AS publish
RUN dotnet publish "ServiceX.csproj" -c Release -o /service/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /service/publish .
ENTRYPOINT ["dotnet", "ServiceX.dll"]
```

Given this Dockerfile, is it correct to say that the build is performed using a debug configuration?
-
✓ B. No
The correct answer is No.
The Dockerfile uses explicit Release configuration in the build and publish steps. The build stage runs dotnet build ServiceX.csproj -c Release and the publish stage runs dotnet publish ServiceX.csproj -c Release so the artifact is compiled in Release mode and not Debug.
The option Yes is incorrect because the commands explicitly pass -c Release to dotnet build and dotnet publish. If the -c flag were omitted dotnet would default to Debug but that default does not apply here because Release is specified.
When reading Dockerfiles check command flags carefully because an explicit -c Release means the build is not using Debug and explicit flags override defaults.
A release engineering group at Nimbus Labs uses Calendar Versioning CalVer for their software and they want to append an optional label like “beta” to a version string. Which component of a CalVer version should contain that label?
-
✓ B. modifier
modifier is the correct component to contain an optional label like beta.
The modifier is intended for non numeric qualifiers and pre release or metadata labels. In Calendar Versioning the numeric parts represent the date and release sequence and the modifier lets you append identifiers such as beta or rc without altering the chronological meaning of the numeric components. For example, in a CalVer string such as 2025.04.1-beta the 2025.04.1 portion carries the date based components and beta is the modifier.
micro is a numeric patch level and it is not used for textual labels. Placing a label in the patch position would confuse sorting and the intended meaning of micro as small revisions.
major represents the primary date or year component in CalVer and it must remain numeric. Labels do not belong there because that would break the date based semantics of the version string.
minor is the secondary numeric release counter and it is also not for optional textual labels. The modifier is the proper place for identifiers like beta so that numeric components stay machine sortable and meaningful.
When deciding where to place a textual label remember that numeric parts encode date or sequence and the optional label belongs in the separate label field so that sorting and chronology remain clear.
A development team at Meridian Labs must configure building a Java application under these constraints. The build must reach an internal dependency repository that resides on site. The produced artifacts must be stored as server artifacts in Azure DevOps. The source must remain in an Azure Repos Git repository. The proposed configuration is to install a self hosted build agent on a local server register it to the default agent pool and include a Java Tool Installer task in the pipeline. Does this approach meet the requirements?
-
✓ B. Yes the proposed configuration meets the stated requirements
Yes the proposed configuration meets the stated requirements.
A self hosted build agent installed on a local server can reach an internal dependency repository that resides on site because it runs inside the same network or can be given the required network access. Registering that agent to the default agent pool allows Azure Pipelines to schedule the build on the machine that has network access. Including a Java Tool Installer task in the pipeline provides or configures a JDK for the build if the agent does not already have Java installed. Publishing the produced artifacts as server artifacts is supported from pipelines running on self hosted agents, so the artifacts can be stored in Azure DevOps as required while the source remains in an Azure Repos Git repository.
No the proposed configuration does not meet the requirements is incorrect because the described approach does provide access to on premises dependencies and it supports publishing server artifacts to Azure DevOps. The main practical caveat is that the agent must have the necessary network connectivity to the internal repository and to Azure DevOps, and if the environment is air gapped you may need to preinstall the JDK rather than rely on the Java Tool Installer.
When a build must access on premises resources prefer a self hosted agent because it can run inside the network and reach internal services that hosted agents cannot.
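A minimal YAML sketch under these assumptions follows. The pool name Default matches the default agent pool from the question, while the Maven goals and artifact paths are hypothetical:

```yaml
pool:
  name: 'Default'   # the default pool that contains the self hosted agent

steps:
  - task: JavaToolInstaller@0
    inputs:
      versionSpec: '17'
      jdkArchitectureOption: 'x64'
      jdkSourceOption: 'PreInstalled'   # or download if the agent lacks a JDK
  - task: Maven@3
    inputs:
      mavenPomFile: 'pom.xml'
      goals: 'package'                  # resolves against the on site repository
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'              # stored as a server artifact in Azure DevOps
```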
Your engineering group at Cloudlane uses Azure DevOps to manage builds and tests. The following actions are available 1 Check the solution into Azure Repos. 2 Create a work item. 3 Configure the automated test to run in the build pipeline. 4 Debug the solution locally. 5 Create a test project. Which three actions should you perform in order to associate an automated test with a test case?
-
✓ D. 5-1-3
The correct option is 5-1-3.
You start by creating a test project and test cases so you have a place to define and store the test case that will be associated with an automated test. The test project holds the test plan and test case work items that you will map to automation.
Next you check the solution into Azure Repos so the build pipeline has access to the source code and the compiled test assemblies. The repository must contain the test code for the pipeline to run the automated tests.
Finally you configure the automated test to run in the build pipeline and map the automated test to the test case. Configuring the pipeline is the step that actually runs the automated test and links the test execution results back to the test case.
3-5-1 is incorrect because configuring the pipeline before creating the test project and before checking the code into the repository means there is no test project or code available for the pipeline to run.
1-3-5 is incorrect because checking in the code and then configuring the pipeline before creating the test project leaves you without a test case or test project to map the automated test to when the pipeline runs.
5-2-3 is incorrect because creating a work item is not the required step for associating an automated test to a test case. You still need to check the solution into the repository so the pipeline can access the test code.
Always create the test project and test cases before you wire up automation and make sure the test code is checked into the repository so the build pipeline can find and run the automated tests.
At Northbridge Systems the engineering group uses Azure Pipelines to build and deploy applications and they want to send a notification to the legal channel in Slack when a build is ready for release. You must enable a setting in the Azure DevOps organization configuration so Azure Pipelines can post messages to Slack. Which setting should you turn on?
-
✓ B. Third party application access via OAuth
Third party application access via OAuth is the correct setting to turn on.
This setting allows Azure DevOps to authorize third party applications using OAuth so that services such as the Azure Pipelines Slack app can post messages to a Slack channel on behalf of the organization. Enabling Third party application access via OAuth permits the OAuth consent flow and organization level approval that the Slack integration requires to send build notifications.
Azure Active Directory Conditional Access Policy Validation is not the correct choice because that setting deals with Azure AD sign in and access policies and it does not enable OAuth app access for Azure DevOps integrations.
Personal access tokens are individual credentials used for API or CI automation and they do not provide the organization level OAuth approval flow that third party apps like the Slack integration need.
Alternate authentication credentials is a legacy method for basic authentication and it is not used to enable OAuth integrations. This approach has been deprecated for Azure DevOps Services and is not the mechanism for allowing Slack to post notifications.
When a question is about allowing an external app to act on behalf of your organization look for an option that mentions third party application access or OAuth because those settings control the consent and approval flow for integrations.
A development group at a fintech startup needs a Git branching model that allows multiple people to work on independent tasks at the same time and keeps the main branch deployable at all times. The chosen model must allow incomplete features to be abandoned without affecting production and it should foster experimentation. Which branching approach should you propose?
-
✓ C. One main branch with multiple short lived feature branches
The correct option is One main branch with multiple short lived feature branches.
One main branch with multiple short lived feature branches keeps the main branch deployable while letting each developer work on independent tasks in isolation. Short lived feature branches encourage experimentation and make it simple to abandon incomplete work because those branches can be deleted without touching the main branch. Combining this model with continuous integration and code review ensures only tested and reviewed changes are merged into main.
Several long lived branches maintained concurrently is incorrect because long lived branches increase the risk of large merge conflicts and make it harder to guarantee the main branch stays deployable. They also complicate abandoning or isolating incomplete features.
Individual forks for each contributor is incorrect because forks add operational overhead for an internal team and slow down small iterative changes. Forks are better suited for external contributors rather than rapid collaborative development that must keep main deployable.
A single persistent branch used for all development is incorrect because a single development branch prevents safe isolation of work and causes incomplete or experimental code to affect the deployable main line. It makes rollbacks and controlled releases more difficult.
When the question emphasizes a constantly deployable main and the ability to abandon incomplete work look for answers that use short lived feature branches and mention automated testing or protected merges.
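A minimal command line sketch of the short lived feature branch flow, with a hypothetical branch name:

```bash
# Start from an up to date main branch
git checkout main
git pull

# Work in an isolated short lived branch
git checkout -b feature/quick-experiment
# ...commit work, then publish the branch for review...
git push -u origin feature/quick-experiment

# After the pull request merges, or if the experiment is abandoned,
# delete the branch without ever touching main
git branch -D feature/quick-experiment
git push origin --delete feature/quick-experiment
```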
A development group at BrightArc uses Azure DevOps to compile and deliver a microservice that will run in an AKS cluster. The group needs to scan the container image for vulnerabilities within the CI pipeline before it is deployed to the cluster. Which Microsoft product should be added to the pipeline to perform the image scanning?
-
✓ C. Microsoft Defender for Containers
The correct answer is Microsoft Defender for Containers.
Microsoft Defender for Containers is designed to discover and report vulnerabilities in container images and it integrates with Azure Container Registry and CI pipelines such as Azure DevOps so images can be scanned before they are deployed to an AKS cluster. This product performs image vulnerability assessment and produces actionable findings that can be used to block or remediate builds before deployment.
Microsoft Defender for Storage protects Azure Storage accounts and provides threat detection for storage operations but it does not perform container image vulnerability scanning and it is therefore not the right choice for CI pipeline image scans.
Microsoft Defender for App Service focuses on protecting App Service web apps and related platform threats and it does not provide container image scanning in the CI pipeline, so it is not the correct product for this scenario.
Microsoft Defender for DevOps is not the product that performs container image vulnerability scanning in Azure DevOps pipelines. If the term refers to broader DevSecOps guidance it still would not replace the dedicated image scanning capabilities provided by Microsoft Defender for Containers.
When a question asks about scanning container images in a CI pipeline look for products that mention Containers or integration with container registries and pipelines. That phrasing usually points to the correct service.
Which Azure DevOps service should a development team use to manage product backlogs plan sprints and visualize the progress of work items?
-
✓ C. Azure Boards
Azure Boards is the correct option.
Azure Boards provides work item tracking along with product backlog management and sprint planning tools. It includes Kanban boards, sprint backlogs, queries, and dashboards so teams can visualize progress and manage work through iterations.
Artifact Registry is incorrect because it is a package and container artifact storage service and not part of Azure DevOps for managing backlogs or sprints.
Azure Repos is incorrect because it offers source control hosting such as Git repositories and pull request workflows and it does not provide backlog or sprint planning features.
Azure Pipelines is incorrect because it is focused on continuous integration and continuous delivery for building testing and deploying code and it does not handle product backlog management or work item visualization.
When a question mentions backlog management or sprint planning look for services that track work items and boards and not for code or CI/CD services. Remember that Azure Boards is the service for planning and tracking work.
A development group at Meridian Systems uses a continuous integration pipeline to build and release a web application. They need a testing approach that checks a single code component without invoking that component’s external dependencies. Which testing approach should they adopt?
-
✓ C. Unit test
The correct answer is Unit test.
A Unit test focuses on a single code component and exercises its logic while isolating it from external dependencies. Teams typically use test doubles such as mocks or stubs so the component can be validated without invoking databases, network services, or other modules, and that isolation makes Unit test the right approach for the described continuous integration pipeline.
Integration test is incorrect because integration tests exercise interactions between multiple components or services and therefore do not isolate a single component from its external dependencies.
Acceptance test is incorrect because acceptance tests validate end to end business requirements and user workflows and they usually involve many integrated components rather than isolating one unit.
Performance test is incorrect because performance testing measures throughput and latency under load and it typically uses realistic environments and dependencies instead of isolating a single component.
Sanity test is incorrect because sanity tests are quick checks to see if a build is stable enough for further testing and they do not provide the focused, isolated verification of a single code component that unit tests do.
When a question asks about testing a single component in isolation look for the option that explicitly targets a unit of code and mentions using mocks or stubs. Unit tests are the fastest and most targeted choice for that scenario.
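A minimal C# sketch using xUnit and Moq follows. The interface and class names are hypothetical and simply illustrate replacing an external dependency with a mock:

```csharp
using Moq;
using Xunit;

public interface IRateProvider { decimal GetRate(string currency); }

public class PriceCalculator
{
    private readonly IRateProvider _rates;
    public PriceCalculator(IRateProvider rates) => _rates = rates;
    public decimal Convert(decimal amount, string currency) =>
        amount * _rates.GetRate(currency);
}

public class PriceCalculatorTests
{
    [Fact]
    public void Convert_UsesMockedRate_WithoutCallingExternalService()
    {
        // The mock stands in for the real rate service dependency
        var mock = new Mock<IRateProvider>();
        mock.Setup(r => r.GetRate("EUR")).Returns(1.1m);

        var result = new PriceCalculator(mock.Object).Convert(100m, "EUR");

        Assert.Equal(110m, result);
    }
}
```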
NimbusSoft maintains a containerized application named ServiceA and the engineering team supplied the Dockerfile below for review.

```dockerfile
FROM mcnz.com/dotnet/sdk:6.0 AS base
WORKDIR /serviceA
EXPOSE 8080
EXPOSE 8443

FROM mcnz.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["ServiceA.csproj", ""]
RUN dotnet restore "./ServiceA.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "ServiceA.csproj" -c Release -o /serviceA/build

FROM build AS publish
RUN dotnet publish "ServiceA.csproj" -c Release -o /serviceA/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /serviceA/publish .
ENTRYPOINT ["dotnet", "ServiceA.dll"]
```

Does this Dockerfile utilize Docker multi stage build functionality?
-
✓ B. Yes
The correct answer is Yes.
The Dockerfile shows a multi stage build because it uses multiple FROM instructions to create separate stages named base, build, publish and final. The project is restored, built and published inside intermediate stages and only the published output is copied into the final image, which is the hallmark of a multi stage build.
The use of stage names with AS and the fact that the final stage pulls artifacts from an earlier stage confirms the pattern. This approach allows build tools and intermediate files to be excluded from the final image, which reduces size and surface area.
No is wrong because the file does not perform all work in a single stage. It clearly defines distinct stages and copies the build output from a publish stage into the final runtime image.
When you see multiple FROM instructions or named stages with AS check whether the final image copies artifacts from an earlier stage. That pattern indicates a multi stage build.
A development team at Aurora Software maintains Git hosted packages and follows Semantic Versioning. You add a new feature that keeps existing interfaces compatible and you need to know which portion of the version string to increment?
-
✓ C. Minor
The correct option is Minor.
Semantic Versioning uses a three part format written as Major.Minor.Patch and when you add a new feature that preserves existing interfaces you increment the Minor component to indicate new backwards compatible functionality. For example, adding a backwards compatible feature to version 2.3.1 produces version 2.4.0.
Patch is reserved for backwards compatible bug fixes and not for new features which is why it does not apply here.
Major is incremented when you introduce incompatible API changes that break existing interfaces and so it is not the right choice when compatibility is preserved.
If the change adds functionality but does not break existing interfaces choose Minor. Remember that Patch is for bug fixes and Major is for breaking changes.
You manage an Azure subscription that contains an Azure Key Vault named VaultA a pipeline named BuildPipelineX and an Azure SQL database named SalesDB. BuildPipelineX deploys an application that will authenticate to SalesDB by using a password. You must save the password in VaultA and ensure that BuildPipelineX can retrieve it during deployment. How should you store the password?
-
✓ D. Secret
Secret is the correct option and you should store the database password in VaultA as a Secret.
Azure Key Vault Secret objects are designed to hold arbitrary string values such as passwords and connection strings. They support versioning and controlled access and can be retrieved by BuildPipelineX at deployment time when the pipeline has permission to read secrets.
To allow BuildPipelineX to retrieve the Secret you grant the pipeline a service principal or a Managed Identity read access to VaultA or configure an Azure DevOps service connection that reads Key Vault secrets during deployment.
Key is incorrect because Key Vault keys are cryptographic keys used for encryption signing and key management and they are not intended to store plaintext passwords.
Certificate is incorrect because certificates are used to store X509 certificate material and private keys for TLS and signing and they are not the right place for application passwords.
Managed Identity is incorrect as a storage option because managed identities are an authentication mechanism for Azure resources and they do not hold secret values. You can use a Managed Identity to access a Secret but not to store the password itself.
When you need to store a password use an Azure Key Vault Secret and remember to grant the pipeline explicit read access before running the deployment.
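A hedged YAML sketch follows. The service connection name and the secret name SalesDbPassword are hypothetical; the AzureKeyVault task maps each fetched secret to a pipeline variable with the same name:

```yaml
steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: 'my-azure-connection'   # hypothetical service connection
      KeyVaultName: 'VaultA'
      SecretsFilter: 'SalesDbPassword'           # hypothetical secret name
      RunAsPreJob: true                          # fetch before other steps run
  - script: ./deploy.sh
    displayName: 'Deploy application'
    env:
      DB_PASSWORD: $(SalesDbPassword)            # secret injected at runtime
```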
AtlasApps maintains an Azure DevOps project with a build pipeline that pulls in roughly 35 third party libraries. The engineering team wants automated detection of known security vulnerabilities in those libraries as part of the continuous integration workflow. Which object should be created to enable those scans?
-
✓ C. A build task
A build task is correct.
A build task is the unit you add to a pipeline to run tools and scripts during the build. You add a task that invokes a dependency or vulnerability scanner and it will analyze the roughly 35 third party libraries and report known issues as part of continuous integration.
An Azure Artifacts feed is used to host and manage packages and does not itself perform vulnerability scans during the build. You can store packages there but scanning still requires a task or external scanner to run in the pipeline.
A deployment job defines deployment steps that target environments and release stages and it is not the object you create to run CI vulnerability scans. Scans are typically executed as build tasks inside the build job so results appear during the continuous integration run.
When a question asks which pipeline object executes tools during CI remember that a task is the unit that runs commands or scanners inside a job.
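As an illustration, here is a hedged YAML sketch of a scanning build task. It assumes the OWASP Dependency-Check CLI is available on the agent; any dedicated scanner task from the marketplace would fill the same role:

```yaml
steps:
  - script: |
      # Scan the resolved third party libraries for known CVEs
      dependency-check.sh --project "AtlasApps" --scan . --format HTML --out dep-report
    displayName: 'Scan third party libraries for known vulnerabilities'
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: 'dep-report'
      ArtifactName: 'dependency-scan'   # attach the report to the CI run
```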
A site reliability team uses Azure DevOps and has set up a project that relies on Azure Boards to manage work items. They plan to add dashboard chart widgets to monitor several project metrics. Which widget can be used to present the metric “Show the current status of each release pipeline”?
-
✓ C. Release pipelines overview
The correct option is Release pipelines overview.
The Release pipelines overview widget is designed to show the current status of each release pipeline in a project. It surfaces the state of pipelines and deployments such as in progress succeeded and failed so teams can monitor releases directly from a dashboard.
The Release pipelines overview connects to the release pipelines feature and displays deployment stages and the latest run status which makes it the appropriate widget when the goal is to show current release pipeline status.
Query chart tile is incorrect because it visualizes work item query results and aggregated work item metrics rather than pipeline run status. It is tied to Boards and work items and not to release pipelines.
Sprint capacity widget is incorrect because it displays team capacity and allocation for an iteration and helps with sprint planning. It does not provide information about pipeline or release statuses.
Build run history is incorrect because it shows recent build pipeline runs and trends for CI builds and not the status of release pipelines. Build run history focuses on build pipelines while release overview focuses on release pipelines.
When a question asks for live status of deployments pick a widget that is tied to Pipelines or Releases. The Release pipelines overview widget shows deployment status while query charts and capacity widgets show work item or sprint data.
A payments startup collects telemetry from the Intelligent Insights capability of its Azure SQL Database and from Application Insights for its web front end, and it must run interactive ad hoc queries against the gathered monitoring data. Which query language should it use?
-
✓ D. Kusto Query Language (KQL)
The correct answer is Kusto Query Language (KQL).
The telemetry from Azure SQL Database Intelligent Insights and from Application Insights is collected into Azure Monitor Logs and Log Analytics where the native query language is Kusto Query Language (KQL). Kusto Query Language (KQL) is designed for interactive ad hoc queries over telemetry and log data and it provides rich operators for filtering, summarizing, joining, and time series analysis which make exploratory queries fast and expressive.
Transact-SQL is the SQL dialect used to query relational data in SQL Server and Azure SQL Database. It is not the native language for querying Azure Monitor or Application Insights logs so it is not appropriate for ad hoc telemetry queries in this scenario.
PL/pgSQL is a procedural extension for PostgreSQL and it applies to PostgreSQL databases. It is unrelated to Azure Monitor Logs and cannot be used to query Application Insights telemetry.
BigQuery SQL is the SQL dialect used by Google BigQuery. It is a Google Cloud product and is not used to query Azure Monitor or Application Insights data.
When a question mentions Application Insights or Azure Monitor Logs remember that the log analytics language is Kusto Query Language (KQL) and that database SQL dialects are not used for interactive log queries.
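A minimal KQL sketch follows, assuming the telemetry lands in a Log Analytics workspace and using the classic Application Insights requests schema:

```kusto
// Count failed web requests per hour over the last day
requests
| where timestamp > ago(24h)
| where success == false
| summarize failedCount = count() by bin(timestamp, 1h)
| order by timestamp desc
```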
How can a development team best prevent direct pushes from feature branches into the main branch?
-
✓ C. Branch policies
Branch policies is the correct option.
Branch policies enforce repository level rules so you can require pull requests, require a number of approvers, require successful CI checks, and explicitly block direct pushes to the protected branch. These protections are applied by the hosting system so the workflow is enforced automatically rather than relying on individual developers.
Lock the main branch is not the best choice because locking is a blunt action that can block needed automated operations and it does not provide the finer controls such as required reviews or status checks that prevent direct pushes while allowing approved merges.
Adopt a team guideline to use pull requests is not sufficient because guidelines depend on human compliance and cannot prevent someone from pushing directly. An enforced policy is required to reliably stop direct pushes.
On exam questions look for answers that describe enforcement by the system rather than actions that rely on people. Policies or protections are typically the correct choice when the goal is to prevent direct pushes.
A development team at Orion Labs has an Azure Pipelines build that runs separate jobs to compile their application for eight different CPU targets and the build currently requires about eighteen hours to finish. The team wants to shorten the overall build time. What two changes should they make to accomplish this? (Choose 2)
-
✓ B. Set up an agent pool for the pipeline
-
✓ D. Increase the pipeline parallel job quota
The correct options are Set up an agent pool for the pipeline and Increase the pipeline parallel job quota.
Set up an agent pool for the pipeline lets you provision multiple build agents that can run jobs at the same time. By adding agents to a pool you can run the eight compilation jobs concurrently instead of sequentially and that directly reduces the total wall clock time for the build.
Increase the pipeline parallel job quota raises how many jobs Azure Pipelines is allowed to execute in parallel. Even with many agents available you still need sufficient parallel job quota to run them simultaneously. Increasing the quota and providing more agents work together to shorten the overall build duration.
Enable pipeline caching for build outputs is incorrect because caching mainly speeds up incremental or repeated builds by reusing artifacts. It does not allow multiple independent compilation jobs to run in parallel and it will not substantially shorten a full build that can be parallelized.
Adopt blue green deployment for releases is incorrect because blue green is a deployment strategy for releases and does not affect how build jobs are scheduled or executed. It improves deployment availability and rollback safety rather than build performance.
Use deployment groups to target servers is incorrect because deployment groups are for organizing target machines for release deployments. They do not provide additional build agents or increase the number of parallel build jobs and so they do not reduce compilation time.
When asked how to shorten build time focus on increasing parallelism and adding more agent capacity. Look for answers that let multiple jobs run at once rather than strategies that only improve deployment or cached increments.
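A hedged YAML sketch follows. The matrix entries and pool name are hypothetical, and each matrix leg becomes a separate job that can run in parallel once enough agents and parallel job quota are available:

```yaml
jobs:
  - job: Build
    strategy:
      matrix:
        x64:
          target: 'x64'
        arm64:
          target: 'arm64'
        # ...one entry per CPU target, eight in total...
      maxParallel: 8          # limited by the parallel job quota
    pool: 'BuildPool'         # hypothetical agent pool with multiple agents
    steps:
      - script: make build TARGET=$(target)
        displayName: 'Compile for $(target)'
```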
Orchid Systems uses Black Duck to confirm that every open source dependency complies with the company licensing policies. Is that statement correct?
-
✓ B. No change needed
No change needed is correct.
Black Duck is a software composition analysis product from Synopsys that scans open source dependencies for known vulnerabilities and for license and policy issues, and teams commonly use it to confirm that dependencies comply with company licensing policies.
Bamboo is incorrect because it is an Atlassian continuous integration and delivery server and it is not itself a dedicated license compliance scanner.
Cloud Build is incorrect because it is Google Cloud’s CI/CD service and it does not by itself perform Black Duck style license scanning unless you integrate a separate SCA tool into the pipeline.
Maven is incorrect because it is a Java build and dependency management tool and not a software composition analysis product. While plugins can report some dependency metadata, they do not replace a dedicated license scanning solution.
When a question names a specific tool check whether the tool is a dedicated software composition analysis product or a general CI or build system. Look for wording like license scanning or SBOM to identify the correct choice.
Your team at Nimbus Systems manages several package feeds in an Azure DevOps project and you want to consolidate them into a single feed that contains packages you publish internally and packages pulled from external registries both anonymous and authenticated. Which Azure DevOps feature should you enable?
-
✓ C. Upstream sources
The correct option is Upstream sources.
Upstream sources let you consolidate packages in a single Azure Artifacts feed by pulling from external registries as well as by hosting your own internal packages. Upstream sources support anonymous public registries and authenticated upstreams so you can cache packages from npmjs, NuGet, Maven, PyPI and other registries while applying feed-level permissions and retention.
Universal Packages is incorrect because Universal Packages is a package type for storing large binary artifacts and build outputs in Azure Artifacts rather than a feature for aggregating external registries into a single feed.
Views in Azure Artifacts is incorrect because views are used to manage the lifecycle and visibility of package versions within a feed and they do not provide the ability to pull or proxy packages from external registries.
Symbol server is incorrect because the symbol server stores debug symbols for debugging scenarios and it does not aggregate or proxy general package registries into a feed.
When you need to merge internal and external package sources into one feed look for features that mention upstream or proxying because those are the capabilities that enable caching and authentication of external registries.
Which reporting widget shows the elapsed time to complete work items after they transition to the active state?
-
✓ C. Cycle Time
The correct option is Cycle Time.
Cycle Time measures the elapsed time to complete a work item after it transitions to the active or in progress state. It captures the duration from when work actually starts until it is finished which matches the question about the time to complete work items after they become active.
Velocity is a throughput metric that shows how much work a team completes in an iteration and it is usually expressed in story points or completed items so it does not represent elapsed time for individual work items.
Lead Time measures the total time from when a request or ticket is created until it is delivered and it therefore includes waiting time before work starts which makes it broader than the metric asked for in the question.
Burndown Chart visualizes remaining work over time for a sprint or release and it tracks progress rather than showing the elapsed time for individual items after they move to an active state.
When a question asks about the time after work moves to active think Cycle Time and when it asks about time from request to delivery think Lead Time.
Which statement best explains a primary benefit of adopting a version control system for a development team?
-
✓ C. Version control records who changed files when and what differences were made so teams can restore prior working states
The correct option is Version control records who changed files when and what differences were made so teams can restore prior working states.
Version control systems record a history of changes so teams can see who made each change and when the change occurred. They also store the differences between versions which makes it possible to compare revisions and restore a prior working state when needed.
Beyond restoring prior states this history supports collaboration through branching and merging and it enables code review and auditing because authorship and timestamps are preserved for each change.
Version control systems automatically increment build or release version numbers is incorrect because maintaining build or release numbers is normally handled by build or release tooling and by explicit tagging or scripting rather than by the version control system itself.
Cloud Build is incorrect because it names a specific CI/CD service and not a general benefit of version control. Cloud Build can integrate with version control systems but it is a build and delivery tool rather than an explanation of version control advantages.
Version control only stores binary snapshots without preserving who made the changes is incorrect because modern version control systems store metadata about each change including the author and timestamp and they typically record changes as diffs rather than simple opaque binary snapshots.
When reading options look for keywords about tracking history, authorship, and diffs because those are core features of version control and often appear in correct answers.
A software company is designing its deployment workflow for a customer portal and they want to expose new releases gradually to subsets of users to reduce risk. Which approaches support staged exposure of a new application version? (Choose 2)
-
✓ C. Deployment rings
-
✓ D. Feature flags
The correct options are Deployment rings and Feature flags.
Deployment rings are designed to release a new application version to progressively larger groups of users so you can monitor behavior and rollback if issues appear before affecting everyone.
Feature flags let you enable or disable specific functionality for defined cohorts or individual users so you can expose new features gradually without redeploying code.
Blue green deployment focuses on switching traffic between two full environments to achieve low downtime and quick rollback, and it is typically an all or nothing swap unless you add extra traffic splitting mechanisms.
Incremental rollout is a generic phrase that overlaps with the controlled exposure that deployment rings and feature flags provide, and it does not name a distinct staged exposure mechanism for the purposes of this question.
When a question asks about staged exposure look for answers that enable targeting specific user cohorts such as rings or feature flags.
At NovaTech your team integrates a cloud hosted Jenkins server with a fresh Azure DevOps environment and you need Azure DevOps to notify Jenkins when a developer pushes commits to a branch in Azure Repos. You configure a service hook subscription that listens for the Code Pushed event. Does this meet the requirement?
-
✓ B. Yes
The correct answer is Yes.
Configuring a service hook subscription that listens for the Code Pushed event in Azure DevOps will send an HTTP notification to the configured endpoint when commits are pushed to a branch. This meets the requirement to notify a cloud hosted Jenkins server because Azure DevOps service hooks can target Jenkins directly or use a generic webhook to call Jenkins.
You must still configure the Jenkins side to accept the incoming request and to trigger the correct job. That usually means supplying the Jenkins job URL and any required token or credentials in the service hook subscription or using a Jenkins plugin that handles the incoming webhook payload so the build is started on push.
No is incorrect because the Code Pushed service hook is specifically intended to notify external systems on pushes and can be used to call Jenkins as long as the endpoint and authentication are configured correctly.
When a question mentions Azure Repos notifications remember that Azure DevOps service hooks can call external endpoints such as Jenkins and verify the configured event and authentication before choosing your answer.
You need to strengthen engineering practices at NovaApps. Which kind of security tool should you integrate into the continuous integration phase of your build pipeline?
-
✓ C. Static analysis tools
The correct option is Static analysis tools.
Static analysis tools are designed to scan source code and catch security flaws, coding errors, and insecure patterns during the continuous integration build. They run automatically as part of the CI pipeline, provide quick feedback to developers, and can block merges or fail builds when high severity issues are found which helps shift security left and reduce fix cost.
Threat modeling is a design and planning activity that helps identify potential attackers, assets, and attack paths. It is not an automated CI check and it belongs earlier in the development lifecycle when architecture and design decisions are being made.
Penetration testing involves active testing of running applications or systems and is usually manual or semi automated which makes it unsuitable as a regular automated gate in the CI phase. Pen tests are valuable later in the pipeline or in production but they do not replace automated static analysis during continuous integration.
When a question asks which tool to add to the CI phase pick solutions that can scan source code automatically and fail the build. Save manual and architectural activities for earlier or later stages of the lifecycle.
A regional fintech firm named NovaStream is updating its Azure DevOps workflows for a multi-site engineering group. The team must detect license noncompliance and banned third-party libraries during development. You plan to add automated security testing to the build pipeline. Will this change meet the stated requirement?
-
✓ D. No, automated security testing alone is insufficient
No, automated security testing alone is insufficient. Automated tests in the build pipeline help find issues, but the requirement calls for reliable detection of license noncompliance and a way to stop banned third party libraries from entering the deliverable, and testing by itself does not provide policy enforcement across artifacts and releases.
Automated security testing improves visibility and can surface vulnerabilities and some risky dependencies. However you also need dedicated software composition analysis and a governance process to identify license types and to keep an approved or disallowed list. You also need enforcement mechanisms so that detection actually prevents noncompliant code from being merged or published.
Add a software composition analysis tool to the pipeline is useful for detecting licenses and third party components, but it is not the correct answer to whether automated security testing alone meets the requirement. Detection tools must be combined with enforcement and artifact controls to achieve compliance.
Implement automated security testing in the build pipeline restates the proposed change and is therefore not the correct choice. The change described in the question is exactly automated testing, and the point is that testing alone does not guarantee license compliance or blocking of banned libraries.
Create a policy gate that rejects builds with disallowed licenses provides enforcement, but it is not sufficient by itself either. A gate needs accurate detection data from an SCA or similar scanner and reliable artifact metadata. Without integrated detection and proper configuration the gate cannot reliably identify offending components.
When you answer CI pipeline security questions think about both detection and enforcement. The exam often expects a solution that includes a scanning tool plus a blocking or policy mechanism.
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.
You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
