Microsoft Azure AI Engineer Practice Exams

Free Microsoft Azure AI Engineer Exam Topics Tests

Over the past few months, I have been helping developers, data scientists, and cloud engineers prepare for roles that thrive in the Microsoft Azure AI ecosystem.

The goal is simple: to help you design, build, and operationalize AI-powered applications using the same Azure AI services trusted by top enterprises worldwide.

A key step in that journey is earning the Microsoft Certified Azure AI Engineer Associate credential.

This certification demonstrates your ability to integrate AI into modern applications using services such as Azure OpenAI, Azure Machine Learning, Cognitive Services, and Azure Bot Service. It validates that you can plan, develop, test, deploy, and monitor AI solutions responsibly and effectively.

Whether you are an application developer, data engineer, or solutions architect, the Azure AI Engineer Associate certification provides a strong foundation in responsible AI development and intelligent system design.

You will learn to integrate vision, language, and conversational AI into solutions, leverage Azure AI infrastructure, apply responsible AI principles, and optimize models for scalability and performance.

That is exactly what the Microsoft Azure AI Engineer Associate Certification Exam measures. It validates your ability to apply AI services, manage data pipelines, and deploy intelligent solutions on Azure, ensuring you can create responsible, secure, and high-performing AI systems that deliver measurable business value.

Microsoft Azure AI Engineer Exam Simulator

Through my Udemy courses on Microsoft AI certifications and through the free question banks at certificationexams.pro, I have identified the areas where most learners need extra practice. That experience helped shape a complete set of Azure AI Engineer Practice Questions that closely match the structure and challenge of the real Microsoft certification exam.

You will also find Azure AI Engineer Sample Questions and full-length Azure AI Engineer Practice Tests to measure your readiness. Each Azure AI Engineer Question and Answer set includes clear explanations that teach you how to think through AI design, model deployment, and solution integration scenarios.

These materials are not about memorizing content. They teach you to think like an Azure AI Engineer, someone who understands model lifecycle management, responsible AI frameworks, and intelligent system design using Azure’s integrated services.

Real Microsoft AI Engineer Exam Questions

If you are searching for Microsoft AI Engineer Real Exam Questions, this collection provides realistic examples of what to expect on test day.

Each question is crafted to capture the tone, logic, and difficulty of the official exam. These are not Microsoft AI Engineer Exam Dumps or copied content. They are authentic, instructor-created learning resources designed to help you build genuine expertise.

The Microsoft Azure AI Engineer Exam Simulator recreates the look, feel, and pacing of the real certification exam, helping you practice under authentic testing conditions.

You can also explore focused Azure AI Engineer Braindump style study sets that group questions by topic, such as cognitive services, natural language processing, or AI solution deployment. These are structured to reinforce your understanding through repetition and practical context.

Each Azure AI Engineer Practice Test is designed to challenge you slightly more than the real exam, ensuring you walk into test day with confidence.

Good luck, and remember that every successful AI engineering career begins with mastering the services that power intelligent, responsible solutions on Azure.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Azure AI-102 Certification Questions

A fintech company named Northbridge AI hosts a cognitive services instance on Azure and wants to know how often its subscription keys are retrieved. What should the operations team do?

  • ❏ A. Enable diagnostic settings to send logs to Log Analytics

  • ❏ B. Store the subscription keys in Azure Key Vault

  • ❏ C. Rotate the subscription keys on a monthly cadence

  • ❏ D. Create an Azure Monitor alert for the cognitive service

How many labeled images does each class need at minimum to begin training a successful image classification model on the vision service?

  • ❏ A. 15 labeled images

  • ❏ B. 1 labeled image

  • ❏ C. 30 labeled images

  • ❏ D. 5 labeled images

  • ❏ E. 3 labeled images

A support team at NovaAssist prepared sample utterances and intents for a conversational understanding system and then trained the model. After inspecting the monitoring panel, the team finds that the top scoring intent and the second place intent show almost identical confidence levels and might trade ranks after the next training. To reduce that possibility, the team deletes many example utterances across several intents, which substantially changes the distribution of sample counts. After retraining the model you look at the review dashboard again. What issue would you expect to see on the dashboard?

  • ❏ A. Ambiguous intent predictions

  • ❏ B. Class imbalance

  • ❏ C. Increased misclassification rate

  • ❏ D. No observable dashboard problems

A retail startup called Solara Labs needs to build a conversational assistant that can carry on natural language exchanges with customers and extract detailed intents and entities from the dialog. Which Azure offering best matches this requirement?

  • ❏ A. Azure AI Speech

  • ❏ B. Azure Cognitive Search

  • ❏ C. Azure AI Language

  • ❏ D. Azure Bot Service

You are advising Sentinel Systems and you are meeting with Robin, the head of the platform team, about Azure Speech. The group needs to know which object indicates that the audio to be transcribed is provided as a file. Which object should be used to indicate that the input is an audio file?

  • ❏ A. SpeechRecognizer

  • ❏ B. SpeechSynthesizer

  • ❏ C. SpeechConfig

  • ❏ D. AudioConfig

A logistics startup named ParcelFlow is building an application called ShipTrack that will use Azure AI Document Intelligence to extract these fields from scanned invoices: shipping address, billing address, customer ID, amount due, due date, total, tax, and subtotal. You must choose the model that reduces development work the most and is ready to extract those invoice fields. Which model should be used?

  • ❏ A. Receipt

  • ❏ B. Contract

  • ❏ C. Custom extraction model

  • ❏ D. Invoice

A development group at Meridian Analytics needs to create an agent that merges and analyzes multiple files uploaded by users and they are evaluating the Azure AI Agent Service for this task. What is the maximum size the service permits for a single uploaded file?

  • ❏ A. 300 MB

  • ❏ B. 2 GB

  • ❏ C. 512 MB

  • ❏ D. 128 MB

A regional online retailer is building a conversational language understanding model for its shopping assistant and customers may either speak or type their payment address when prompted by the assistant. You must design an entity that will capture full billing addresses. Which entity type should you use?

  • ❏ A. Regex

  • ❏ B. Pattern.any

  • ❏ C. List

  • ❏ D. Machine learned

A developer at NovaStream is creating an InsightFinder platform for a large film archive. The solution must extract searchable metadata from video files to improve the library search and to help users create short social clips based on those insights. Which Azure AI service should be used?

  • ❏ A. Face API

  • ❏ B. Computer Vision

  • ❏ C. Video Indexer

  • ❏ D. Bing Video Search

Your team at Meridian Insights must deploy the most recent published Language Understanding model inside an isolated environment to satisfy compliance rules and you plan to use Docker to handle prediction requests for user utterances. You exported the package from the Language Understanding portal and you picked the newest published prediction build. You then selected the export for containers GZIP option. After that you copied the archive to the host running the container and placed it in the output mount directory before starting the container to call the prediction endpoint. You are not receiving any responses to prediction queries. Which action in the sequence above was performed incorrectly?

  • ❏ A. Start the container without providing the required prediction subscription key as an environment variable

  • ❏ B. Select the most recent trained or published build

  • ❏ C. Place the exported container archive in the container host output mount directory

  • ❏ D. Use the export for containers GZIP option

A startup called VocaVista is building an online foreign language course and they need a service that can scan lesson text and offer illustrative pictures for commonly used words and brief phrases while keeping development work to a minimum. Which Azure service should you suggest?

  • ❏ A. Azure AI Custom Vision

  • ❏ B. Azure AI Document Intelligence

  • ❏ C. Azure AI Search

  • ❏ D. Immersive Reader

You manage an Azure subscription for a small startup named NovaApps and you are creating a conversational assistant that will use Azure OpenAI models. Choose the three actions to perform in the correct order. Actions are listed as follows. 1 Deploy the Vision model. 2 Deploy an embeddings model. 3 Provision an Azure OpenAI resource. 4 Deploy the ChatGPT model. 5 Apply for access to Azure OpenAI. 6 Configure Azure API Management. Which three steps should you execute in sequence?

  • ❏ A. 3-1-6

  • ❏ B. 5-3-4

  • ❏ C. 5-3-2

  • ❏ D. 6-3-4

A marketing firm called Nimbus Analytics is evaluating the built in entity categories in its Text Analysis toolkit and wants to know which labels are provided as prebuilt entity types for named entity recognition. Which of the following labels are valid prebuilt entity types? (Choose 3)

  • ❏ A. Gender

  • ❏ B. Organization

  • ❏ C. Identification

  • ❏ D. Location

  • ❏ E. Name

  • ❏ F. Person

  • ❏ G. Age

Your Azure account contains an App Service app named WebApp2 and you create a multi-service Azure Cognitive Services account named CogSvcProd. You need to allow WebApp2 to call CogSvcProd while keeping administrative overhead as low as possible. What should you configure WebApp2 to use?

  • ❏ A. The endpoint and an OAuth access token

  • ❏ B. The service endpoint and a subscription key

  • ❏ C. A user assigned managed identity

  • ❏ D. A system assigned managed identity with an X509 certificate

A remote assessment provider called SummitPrep streams examinees during proctored tests and needs to ensure that the people in the live feeds are actual humans and not static photos or prerecorded video. Which method should their application use with the face detection service to verify liveness?

  • ❏ A. Invoke the face detection endpoint and use FaceLandmarks to calculate the distance between the pupils

  • ❏ B. Call the face detection endpoint once and record the FaceRectangle values for the detected face

  • ❏ C. Call the face detection endpoint continuously and monitor FaceAttributes.HeadPose for natural orientation changes

  • ❏ D. Analyze the video stream with Cloud Video Intelligence API to detect anomalies in the footage

You are building a chat based form for an employee portal at Meridian Logistics to handle time off submissions. The bot must gather the leave start date, the leave end date, and the total number of paid days requested. The flow must keep the conversation as simple as possible. Which dialog pattern should you use?

  • ❏ A. Adaptive dialog

  • ❏ B. Component dialog

  • ❏ C. Skill dialog

  • ❏ D. Waterfall dialog

A developer is integrating with Aurora Search APIs to monitor user queries and traffic patterns and they enable the feature called “Search Metrics” to examine “Top Query Terms” and “Request Volume” so they can optimize system performance. Which of the following statements about Search Metrics is incorrect?

  • ❏ A. Sharing aggregated dashboard data from Search Metrics with partner organizations is prohibited

  • ❏ B. Enabling Search Metrics adds an extra charge and could slightly increase your subscription rate

  • ❏ C. You can activate Search Metrics using the cloud provider portal interface

  • ❏ D. The Search Metrics feature appears under the available service resources and is accessible

Maple Grove Motors is a regional used car dealer in Colorado and it collects customer comments on its website for service monitoring. Which Azure Language capability will allow the team to identify and flag comments that express negative sentiment?

  • ❏ A. Use the Language service to perform entity linking

  • ❏ B. Use the Language service to perform sentiment analysis

  • ❏ C. Use the Language service to extract key phrases

  • ❏ D. Use the Azure Translator service for language detection

  • ❏ E. Use the Language service to extract named entities

A regional streaming studio needs to scan archived videos to find spoken mentions of particular companies in their footage and they plan to use Azure tools to do this. What steps should they follow in order to configure the system to recognize those company names?

  • ❏ A. Sign in to the Azure Video Indexer web portal and under Content model customization choose Language then add the target company names to the include list

  • ❏ B. Sign in to the Azure Video Indexer web portal and under Content model customization choose Brands then add the target company names to the include list

  • ❏ C. Sign in to the Google Cloud Video Intelligence console and define a custom glossary then add the target company names to the include list

Scenario Northstar Media is a startup that built a public photo gallery and its lead engineer Rajani Patel reviewed Azure and chose to deploy it. The website lets visitors locate pictures by entering simple phrases. If a visitor signs in and searches for “images with a beach” what is the typed search phrase called?

  • ❏ A. Query

  • ❏ B. Entity

  • ❏ C. Intent

  • ❏ D. Utterance

A retail analytics team at Summit Analytics runs a Summit Cognitive Search deployment and query volume has climbed steadily over the past ten months. They have observed that some search queries are being throttled by the search service. Will adding replicas reduce the likelihood that query requests will be throttled?

  • ❏ A. Only if you also increase partitions

  • ❏ B. Upgrade to a higher service tier

  • ❏ C. Yes adding replicas will reduce query throttling

  • ❏ D. No adding replicas will not reduce query throttling

Vendor managed encryption keys protect your Orion AI Platform resources by default and your team has been instructed to switch to customer managed keys for greater control. Which of the statements below are true? (Choose 3)

  • ❏ A. No configuration updates are required on the AI resources when you rotate the encryption key

  • ❏ B. You must disable “soft delete” and “recovery protection” on the key to retrieve your data

  • ❏ C. Provide the key identifier URI from KeySafe Vault as the AI platform key URI

  • ❏ D. To use customer managed keys you must provision or import a key into the key vault

  • ❏ E. Using a dedicated key vault such as KeySafe Vault to store customer managed keys is the recommended approach

A consultant scanned a client note into a PDF document and needs to extract the contained text for automated processing. Which Azure capability should be used?

  • ❏ A. Azure AI Vision Image Analysis

  • ❏ B. Azure AI Document Intelligence Read model

  • ❏ C. Custom model in Azure AI Document Intelligence

Meridian Transport uses Azure AI Document Intelligence for standard receipt and invoice parsing but needs to build models for its custom shipment forms. In what order should the following tasks be executed to create, validate, and manage a custom Document Intelligence model? Task 1 Manage model versions and lifecycle. Task 2 Upload example documents to an Azure Blob storage container. Task 3 Define and prepare the training dataset. Task 4 Train the model using labeled or unlabeled examples. Task 5 Validate the model with a holdout dataset not used during training.

  • ❏ A. Upload example documents to the Blob storage then define the training dataset then train the model then validate with a holdout dataset then manage model versions

  • ❏ B. Define and prepare the training dataset then upload example documents to the Azure Blob container then train the model with labeled or unlabeled samples then validate using a withheld dataset then manage the model lifecycle

  • ❏ C. Manage model versions first then define the dataset then upload files then train the model then validate with a holdout set

  • ❏ D. Define the training dataset then manage models then upload documents to Blob then train with labeled or unlabeled data then validate with an independent dataset

Maya Brooks works for the Morning Star and she is building a GPT powered chat assistant to respond to reader inquiries about Morning Star articles on the publication web portal. She plans to validate the assistant using Microsoft recommended prompt engineering practices. Which prompt engineering techniques should Maya apply when she is testing the assistant? (Choose 4)

  • ❏ A. Vertex AI

  • ❏ B. Use descriptive analogies and richer context

  • ❏ C. Be minimalistic and avoid extra detail

  • ❏ D. Be specific and reduce ambiguity in prompts

  • ❏ E. Place instructions both before and after the main content

  • ❏ F. Order matters in how you present prompt elements

At Solstice AI Studio you are building a conversational agent in the Dialogue sandbox and you want to reduce repeated words in the bot replies while making each response less random. Which two parameters should you adjust? (Choose 2)

  • ❏ A. Top P sampling

  • ❏ B. Presence penalty

  • ❏ C. Temperature

  • ❏ D. Max tokens

  • ❏ E. Frequency penalty

  • ❏ F. Top K sampling

A regional merchant named NorthStar Retail keeps data across several systems. The accounting system runs on an on premises Microsoft SQL Server instance. The online store uses Azure Cosmos DB with the Core SQL API. Operational telemetry lands in Azure Table storage. People operations maintains an Azure SQL Database. You need to make all of these sources searchable through the Azure AI Search REST API. What should you do?

  • ❏ A. Replicate the accounting SQL Server to an Azure SQL Database instance

  • ❏ B. Send the telemetry from Azure Table storage into Microsoft Sentinel

  • ❏ C. Migrate the ecommerce Cosmos DB data to the MongoDB API

  • ❏ D. Ingest the telemetry into Azure Data Explorer for analysis

The Custom Vision platform in Nimbus Cloud allows engineers to train models for image classification and object detection using labeled images and then deploy those models so applications can request predictions. Training requires a training resource and deployed models require a prediction resource. Which type of Nimbus Cloud resource can host a published Custom Vision model?

  • ❏ A. Vision API

  • ❏ B. Custom Vision Trainer

  • ❏ C. Intelligent AI Services

  • ❏ D. Edge Predictions

You have a helper C# routine named provisionResource that creates Azure AI accounts programmatically. static void provisionResource(CognitiveServicesManagementClient client, string name, string serviceKind, string sku, string region) { CognitiveServicesAccount parameters = new CognitiveServicesAccount(null, null, serviceKind, region, name, new CognitiveServicesAccountProperties(), new Sku(sku)); var result = client.Accounts.Create(resourceGroupName, name, parameters); } You must invoke this routine to provision a free Azure AI account in the West US region that will be used to automatically generate captions for images. Which invocation should you use?

  • ❏ A. provisionResource(client, “imgsvc01”, “CustomVision.Prediction”, “S1”, “westus2”)

  • ❏ B. provisionResource(client, “imgsvc01”, “CustomVision.Prediction”, “F0”, “westus2”)

  • ❏ C. provisionResource(client, “imgsvc01”, “ComputerVision”, “S1”, “westus2”)

  • ❏ D. provisionResource(client, “imgsvc01”, “ComputerVision”, “F0”, “westus2”)

A regional marketing agency is building an Azure Cognitive Search index in the Azure portal that will index data from an Azure SQL database. The database contains a table called SocialPosts and each row has a field named ContentText that stores users’ social media messages. Users will perform full text searches against the ContentText field and they must see the message text in the search results. Which field attributes should be enabled to satisfy this requirement?

  • ❏ A. Retrievable and Sortable

  • ❏ B. Searchable and Facetable

  • ❏ C. Searchable and Retrievable

  • ❏ D. Filterable and Retrievable

A legal technology firm named Meridian Legal runs a service called ContractAnalyzerService that relies on a custom Azure Document Intelligence model to classify contract documents. The team needs the model to support one additional contract layout and they want to minimize development effort. What should they do?

  • ❏ A. Lower the application accuracy threshold

  • ❏ B. Add the new contract layout to the existing training set and retrain the custom model

  • ❏ C. Switch to the Azure prebuilt Document Intelligence contract model

  • ❏ D. Create a separate training set for the new layout and train a new custom model

A regional publishing startup called Harbor Books has an Azure subscription that includes an Azure AI Content Safety resource named SafeScan2. The team plans to build a service to scan user uploaded manuscripts for obscure offensive phrases and they need to maintain a custom list of prohibited terms while keeping development effort to a minimum. What should they use?

  • ❏ A. Language detection

  • ❏ B. Text moderation

  • ❏ C. Blocklist

  • ❏ D. Text classifier

A healthcare support team at Meridian Care is developing an AI solution that uses sentiment analysis of client surveys to set quarterly bonuses for support agents and they need to comply with Microsoft responsible AI guidelines. What practice should they adopt?

  • ❏ A. Publish raw customer feedback and processed sentiment scores in a central repository and give staff direct access

  • ❏ B. Apply sentiment analysis outcomes even when the model reports low confidence

  • ❏ C. Require a human review and formal approval before finalizing any bonus decisions that affect employee compensation

  • ❏ D. Retain and use feedback from users who have requested account deletion and data removal

Aerial analytics provider BlueHorizon uses spatial analysis on satellite and drone imagery. Which kinds of objects can these spatial analysis capabilities detect in imagery?

  • ❏ A. Motor vehicles only

  • ❏ B. Individuals vehicles and buildings

  • ❏ C. Pedestrians only

  • ❏ D. Individuals and vehicles

Certification Sample Questions Answered

A fintech company named Northbridge AI hosts a cognitive services instance on Azure and wants to know how often its subscription keys are retrieved. What should the operations team do?

  • ✓ B. Store the subscription keys in Azure Key Vault

The correct answer is Store the subscription keys in Azure Key Vault.

Storing keys in Key Vault centralizes secret management and gives the operations team a place to collect and audit access events. Key Vault emits diagnostic logs for secret reads and you can route those logs to Log Analytics or another sink to query how often a secret was retrieved and which identity performed the read.

Enable diagnostic settings to send logs to Log Analytics is incorrect because enabling diagnostics on the cognitive service itself will not record accesses to subscription keys unless the keys are stored and accessed from a resource that emits those secret access logs. You must store the secrets where access events are generated.

Rotate the subscription keys on a monthly cadence is incorrect because rotation improves security but it does not provide visibility into how often keys are retrieved. Rotation does not answer the monitoring question.

Create an Azure Monitor alert for the cognitive service is incorrect because alerts on the cognitive service will monitor service metrics and logs but they will not report secret retrievals unless those retrieval events are logged by the service that stores the keys and an appropriate diagnostic pipeline and alert rule are configured there.

When you need to track how often secrets are accessed think about where the secret is stored and enable logging on that store. Enabling diagnostics on Key Vault gives you the audit events you can query in Log Analytics.
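As an illustration of that pattern, here is a minimal C# sketch that reads a Cognitive Services key from Key Vault with the Azure.Security.KeyVault.Secrets client. The vault URI and secret name are placeholders, and the point is that each retrieval shows up as a SecretGet audit event in the vault's diagnostic logs once those logs are routed to Log Analytics.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class Program
{
    static async Task Main()
    {
        // Hypothetical vault URI and secret name, for illustration only.
        var client = new SecretClient(
            new Uri("https://northbridge-vault.vault.azure.net/"),
            new DefaultAzureCredential());

        // Every retrieval is recorded as a SecretGet audit event in the
        // Key Vault diagnostic logs, which can be queried in Log Analytics.
        KeyVaultSecret secret = await client.GetSecretAsync("cogsvc-subscription-key");

        Console.WriteLine($"Retrieved key version {secret.Properties.Version}");
    }
}
```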

How many labeled images does each class need at minimum to begin training a successful image classification model on the vision service?

  • ✓ D. 5 labeled images

The correct option is 5 labeled images.

Most managed AutoML vision services require a small minimum number of examples per class to begin training. Starting with the required number lets the model form initial patterns and you can improve accuracy as you add more diverse labeled images.

15 labeled images is incorrect because it specifies a larger minimum than the documented threshold and you do not need that many to start training.

1 labeled image is incorrect because a single example does not provide sufficient variation for the model to learn class features.

30 labeled images is incorrect because it overestimates the documented minimum and is not the smallest required to begin training.

3 labeled images is incorrect because three examples remain below the service threshold and will not meet the minimum needed to create a usable dataset.

When a question asks for a minimum number of examples remember that managed AutoML vision services typically require at least 5 labeled images per class and that model quality improves as you collect more diverse and correctly labeled samples.

A support team at NovaAssist prepared sample utterances and intents for a conversational understanding system and then trained the model. After inspecting the monitoring panel the top scoring intent and the second place intent show almost identical confidence levels and they might trade ranks after the next training. To reduce that possibility the team deletes many example utterances across several intents which substantially changes the distribution of sample counts. After retraining the model you look at the review dashboard again. What issue would you expect to see on the dashboard?

  • ✓ B. Class imbalance

The correct option is Class imbalance.

Removing many example utterances across several intents will change the count of training samples per intent and that creates a Class imbalance in the dataset. The review dashboard and monitoring panels will surface this imbalance because the model can become biased toward intents with more examples and the console often highlights uneven class distributions after retraining.

Ambiguous intent predictions is not the best choice because ambiguity refers to overlapping or very similar utterances across intents. The scenario describes deliberately changing sample counts which produces imbalance rather than creating new overlap between intent phrases.

Increased misclassification rate is a possible downstream effect but it is not the primary immediate dashboard signal you would expect. The dashboard will first call out the shifted data distribution and class support issues which may then lead to higher misclassification if not corrected.

No observable dashboard problems is incorrect because substantially changing the distribution of sample counts is exactly the kind of change that monitoring and review dashboards are designed to detect and flag.

When a question mentions changing the number of training examples look for issues with data distribution or class imbalance rather than assuming only accuracy or error rates will change.
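To make the imbalance concrete, here is a small plain C# check, with made-up training data, that groups example utterances by intent and flags intents whose sample counts fall below the average. This is roughly the signal a review dashboard surfaces after a retrain.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ImbalanceCheck
{
    static void Main()
    {
        // Hypothetical training data after the deletions: (intent, utterance) pairs.
        var examples = new List<(string Intent, string Utterance)>
        {
            ("BookFlight", "book me a flight to Denver"),
            ("BookFlight", "I need a plane ticket"),
            ("BookFlight", "reserve a flight for Friday"),
            ("CancelOrder", "cancel my order"),
            ("CheckStatus", "where is my package")
        };

        double average = examples.GroupBy(e => e.Intent).Average(g => g.Count());

        foreach (var group in examples.GroupBy(e => e.Intent))
        {
            // Flag intents with fewer samples than the average per intent.
            string flag = group.Count() < average ? "  <-- under-represented" : "";
            Console.WriteLine($"{group.Key}: {group.Count()} examples{flag}");
        }
    }
}
```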

A retail startup called Solara Labs needs to build a conversational assistant that can carry on natural language exchanges with customers and extract detailed intents and entities from the dialog. Which Azure offering best matches this requirement?

  • ✓ C. Azure AI Language

The correct option is Azure AI Language.

Azure AI Language is the service focused on natural language understanding and it provides conversational features such as intent classification and entity extraction from dialog which makes it the best match for building a conversational assistant that must pull detailed intents and entities from customer exchanges.

Azure AI Speech is centered on speech to text and text to speech capabilities and on audio features and it does not itself provide the deep conversational intent and entity extraction that a language understanding service provides.

Azure Cognitive Search is designed for indexing and retrieving information across documents and it is not intended to perform conversational intent classification and entity extraction from dialogs in the same way a dedicated language service does.

Azure Bot Service provides the framework and hosting to build and deploy bots and connectors to channels and it commonly integrates with a language understanding service like Azure AI Language for NLU tasks, so the Bot Service alone is not the primary choice for extracting intents and entities.

When a question asks about extracting intents and entities from dialog choose the managed language or NLU offering and remember that bot frameworks and speech services often rely on the language service for actual intent recognition.

You are advising Sentinel Systems and you are meeting with Robin the head of the platform team about Azure Speech. The group needs to know which object indicates that the audio to be transcribed is provided as a file. Which object should be used to indicate that the input is an audio file?

  • ✓ D. AudioConfig

The correct option is AudioConfig.

AudioConfig is the Speech SDK object that specifies the audio input source and it supports creation from a file path or a stream so you use it to indicate that the input is an audio file for transcription.

SpeechRecognizer is the component that performs recognition when given a speech configuration and an audio source and it is not the object used to point to a file.

SpeechSynthesizer is used for text to speech and it controls audio output rather than audio input so it is not relevant for specifying an input audio file.

SpeechConfig holds subscription credentials and general service settings and it does not describe the audio input source so it is not used to indicate a file.

When a question asks which object represents an audio file look for AudioConfig in the SDK documentation or examples and remember that recognizers accept an audio config rather than containing the file path themselves.
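A minimal sketch of that pattern with the Microsoft.CognitiveServices.Speech SDK follows. The key, region, and file name are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class Transcribe
{
    static async Task Main()
    {
        // SpeechConfig carries the credentials and region (placeholders here).
        var speechConfig = SpeechConfig.FromSubscription("<speech-key>", "<region>");

        // AudioConfig is the object that indicates the input is an audio file.
        using var audioConfig = AudioConfig.FromWavFileInput("meeting-recording.wav");

        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
        SpeechRecognitionResult result = await recognizer.RecognizeOnceAsync();

        Console.WriteLine(result.Text);
    }
}
```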

A logistics startup named ParcelFlow is building an application called ShipTrack that will use Azure AI Document Intelligence to extract these fields from scanned invoices shipping address billing address customer ID amount due due date total tax and subtotal. You must choose the model that reduces development work the most and is ready to extract those invoice fields. Which model should be used?

  • ✓ D. Invoice

The correct answer is Invoice.

The Invoice prebuilt model is purpose built to extract common invoice fields such as shipping address, billing address, customer ID, amount due, due date, total, tax, and subtotal. It is ready to use so it minimizes development work because you do not need to label documents and train a custom model first.

Receipt is optimized for retail receipts and focuses on merchant, transaction line items, totals, and dates rather than full invoice data like billing address or customer identifiers.

Contract is intended for agreements and contract extraction tasks and does not target invoice fields.

Custom extraction model can extract arbitrary fields but requires labeling training data and validating the model, which increases development effort compared with using the prebuilt invoice model.

When the question asks to minimize development work look for a prebuilt model that matches the document type and choose it instead of a custom trained option.
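To show how little code the prebuilt model needs, the sketch below uses the Azure.AI.FormRecognizer.DocumentAnalysis client with the prebuilt-invoice model ID. The endpoint, key, and invoice URL are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

class InvoiceExtract
{
    static async Task Main()
    {
        var client = new DocumentAnalysisClient(
            new Uri("https://<resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<key>"));

        // "prebuilt-invoice" is the ready-made model, so no training is required.
        AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(
            WaitUntil.Completed,
            "prebuilt-invoice",
            new Uri("https://example.com/sample-invoice.pdf"));

        AnalyzedDocument invoice = operation.Value.Documents[0];

        // Fields such as BillingAddress, CustomerId, AmountDue, DueDate,
        // TotalTax, and SubTotal come back as named fields on the document.
        if (invoice.Fields.TryGetValue("AmountDue", out DocumentField amountDue))
        {
            Console.WriteLine($"Amount due: {amountDue.Content}");
        }
    }
}
```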

A development group at Meridian Analytics needs to create an agent that merges and analyzes multiple files uploaded by users and they are evaluating the Azure AI Agent Service for this task. What is the maximum size the service permits for a single uploaded file?

  • ✓ C. 512 MB

512 MB is the correct option.

The Azure AI Agent Service permits a maximum single uploaded file size of 512 MB for agents that ingest and process user files. This limit is the documented per file cap that governs how large each uploaded file can be when an agent merges or analyzes user content.

300 MB is incorrect because it understates the allowed per file size. The service accepts files up to 512 MB, so a 300 MB cap would fall below the actual maximum.

2 GB is incorrect because it overstates the per file limit. The documented maximum is 512 MB so a 2 GB limit would exceed what the service permits for a single uploaded file.

128 MB is incorrect because it is well below the actual per file cap. The correct maximum is 512 MB so 128 MB would not represent the service limit.

When a question asks about service limits check the official documentation and pay close attention to the unit of measure. MB versus GB makes a big difference on these questions.
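A simple client-side guard for that limit might look like the plain C# sketch below. The file path is a placeholder and the 512 MB constant reflects the cap discussed above.

```csharp
using System;
using System.IO;

class UploadGuard
{
    // Per-file cap for Azure AI Agent Service uploads, as discussed above.
    const long MaxUploadBytes = 512L * 1024 * 1024;

    static void Main()
    {
        var file = new FileInfo("shipment-report.csv"); // placeholder path

        if (!file.Exists)
        {
            Console.WriteLine("File not found.");
            return;
        }

        Console.WriteLine(file.Length <= MaxUploadBytes
            ? $"OK to upload ({file.Length} bytes)."
            : $"Too large for a single upload ({file.Length} bytes > {MaxUploadBytes}).");
    }
}
```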

A regional online retailer is building a conversational language understanding model for its shopping assistant and customers may either speak or type their payment address when prompted by the assistant. You must design an entity that will capture full billing addresses. Which entity type should you use?

  • ✓ B. Pattern.any

Pattern.any is correct.

Pattern.any acts as a catch all entity that can capture free form, multi token input such as a full billing address whether the user speaks or types it. It accepts variable length text and does not require enumerating every possible address format, so it is well suited to the unpredictable and diverse ways users provide addresses.

Regex is incorrect because it relies on a fixed pattern. Real world addresses vary widely and creating a single regex that reliably matches all spoken and typed address formats would be brittle and error prone.

List is incorrect because it is intended for a predefined set of values and synonyms. You cannot realistically enumerate all possible billing addresses in a list entity.

Machine learned is incorrect in this scenario because those entities are trained from many labeled examples and work best for well defined and repeatable entity types. They are not the ideal choice when you need a general purpose, open ended capture of long free text like a complete address.

When you need to capture long or variable user input choose a catch all or open ended entity such as Pattern.any rather than a list or a strict regex.

A developer at NovaStream is creating an InsightFinder platform for a large film archive. The solution must extract searchable metadata from video files to improve the library search and to help users create short social clips based on those insights. Which Azure AI service should be used?

  • ✓ C. Video Indexer

The correct answer is Video Indexer.

Video Indexer is designed to ingest video files and extract rich, searchable metadata such as speech to text, speaker separation, scene and shot detection, face and celebrity recognition, OCR, keywords, and sentiment. It produces time coded insights and supports generating clips and thumbnails from those insights which makes it ideal for improving a film archive search and for creating short social clips.

Face API is specialized for detecting and recognizing faces in images and in individual frames. It does not provide the full set of timeline based transcription and multimodal video insights needed to index whole videos or to automate clip creation.

Computer Vision focuses on image analysis and can extract tags, descriptions, and text from images or frames. It does not offer end to end video ingestion with automatic timeline indexing and clip extraction across an entire video file.

Bing Video Search is a web search service for finding videos on the internet and it does not process or index your internal archive to produce time coded metadata or create clips.

When a question asks about extracting searchable insights and producing clips from video files look for a service that provides transcription, timeline based insights, and clip export like Video Indexer.

Your team at Meridian Insights must deploy the most recent published Language Understanding model inside an isolated environment to satisfy compliance rules and you plan to use Docker to handle prediction requests for user utterances. You exported the package from the Language Understanding portal and you picked the newest published prediction build. You then selected the export for containers GZIP option. After that you copied the archive to the host running the container and placed it in the output mount directory before starting the container to call the prediction endpoint. You are not receiving any responses to prediction queries. Which action in the sequence above was performed incorrectly?

  • ✓ C. Place the exported container archive in the container host output mount directory

Place the exported container archive in the container host output mount directory is the correct answer because that action was performed incorrectly in the sequence.

The exported container package must be placed in the container host input mount directory so the container can detect and extract the model at startup. If the archive is left in the output mount the container will not see the package and the prediction endpoint cannot load the model, which results in no prediction responses.

Start the container without providing the required prediction subscription key as an environment variable is not the correct choice because omitting the subscription key would cause authentication errors or explicit authorization failures rather than the silent lack of model loading described. The scenario points to the model not being available to the container.

Select the most recent trained or published build is not the correct choice because choosing the newest published prediction build is the recommended practice for exporting a container model. Using the latest published build does not cause the described failure.

Use the export for containers GZIP option is not the correct choice because exporting as a GZIP archive is the supported and expected format for container exports. The GZIP export itself will not prevent the container from loading the model when the archive is placed in the correct input mount.

When deploying exported models to containers always place the archive in the input mount before starting the container and then verify required environment variables such as subscription keys are set.

A startup called VocaVista is building an online foreign language course and they need a service that can scan lesson text and offer illustrative pictures for commonly used words and brief phrases while keeping development work to a minimum. Which Azure service should you suggest?

  • ✓ D. Immersive Reader

The correct option is Immersive Reader.

Immersive Reader includes a picture dictionary and other reading aids that can show illustrative images for individual words and short phrases while presenting lesson text. The feature is built for language learning and accessibility and it can be integrated into web and mobile apps with SDKs and simple API calls so it keeps development work to a minimum.

The service provides built in language tools such as read aloud translation text highlighting and the picture dictionary so you do not need to train custom machine learning models or assemble a separate image search pipeline to get illustrative pictures for lesson content.

Azure AI Custom Vision is focused on training and deploying image classification and object detection models from labeled images. It is not designed to scan lesson text and automatically offer illustrative pictures without significant model training and integration work.

Azure AI Document Intelligence is intended for extracting structured data and understanding documents and forms. It helps with parsing and analyzing document content rather than supplying a picture dictionary for language lessons.

Azure AI Search is a managed search and indexing service that can be extended with cognitive skills to enrich content. Implementing automatic image suggestions with it would require building and integrating additional components and it is not an out of the box picture dictionary solution like Immersive Reader.

When a question asks for minimal development and direct support for showing images with text look for services that offer built in reading and visual aids such as Immersive Reader.

You manage an Azure subscription for a small startup named NovaApps and you are creating a conversational assistant that will use Azure OpenAI models. Choose the three actions to perform in the correct order. Actions are listed as follows. 1 Deploy the Vision model. 2 Deploy an embeddings model. 3 Provision an Azure OpenAI resource. 4 Deploy the ChatGPT model. 5 Apply for access to Azure OpenAI. 6 Configure Azure API Management. Which three steps should you execute in sequence?

  • ✓ B. 5-3-4

The correct option is 5-3-4.

The sequence 5-3-4 is correct because Azure OpenAI access is gated and you need to request approval before you can create an Azure OpenAI resource. After approval you provision the Azure OpenAI resource so you have a place to deploy models. Finally you deploy the ChatGPT model to the provisioned resource so your conversational assistant can use the chat completion capability.

The initial step is the access request. Microsoft requires that you request access to Azure OpenAI and meet any account requirements before creating the service. This is why applying for access comes before provisioning the resource.

Provisioning the resource is the middle step. Once you have access you create the Azure OpenAI resource which holds model deployments and keys. You must have this resource before you can deploy models such as ChatGPT.

Deploying the ChatGPT model is the final step in this sequence. Deploying the chat model provides the conversational endpoints you will call from your assistant.

3-1-6 is wrong because it tries to provision a resource before requesting access and it includes deploying a Vision model and configuring API Management which are not required steps for launching a ChatGPT based assistant.

5-3-2 is wrong because although it starts with the required access request and provisioning the resource it ends with deploying an embeddings model instead of the ChatGPT chat model that you need for a conversational assistant.

6-3-4 is wrong because configuring API Management is not a prerequisite for provisioning the Azure OpenAI resource and you also still need to request access to Azure OpenAI before creating the resource.

On exam questions about Azure OpenAI remember to choose requesting access before provisioning resources and then deploying the appropriate model for the scenario.
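Once access is approved, the resource is provisioned, and a ChatGPT model deployment exists, calling it from C# looks roughly like the sketch below. It assumes a later 1.0.0-beta release of the Azure.AI.OpenAI client library, and the endpoint, key, and deployment name are placeholders since the exact types vary between SDK versions.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.OpenAI;

class ChatAssistant
{
    static async Task Main()
    {
        var client = new OpenAIClient(
            new Uri("https://<resource>.openai.azure.com/"),
            new AzureKeyCredential("<key>"));

        // The deployment name must match the ChatGPT model deployment created in the final step.
        var options = new ChatCompletionsOptions
        {
            DeploymentName = "gpt-35-turbo",
            Messages =
            {
                new ChatRequestSystemMessage("You are a helpful assistant for NovaApps."),
                new ChatRequestUserMessage("How do I reset my password?")
            }
        };

        Response<ChatCompletions> response = await client.GetChatCompletionsAsync(options);
        Console.WriteLine(response.Value.Choices[0].Message.Content);
    }
}
```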

A marketing firm called Nimbus Analytics is evaluating the built in entity categories in its Text Analysis toolkit and wants to know which labels are provided as prebuilt entity types for named entity recognition. Which of the following labels are valid prebuilt entity types? (Choose 3)

  • ✓ B. Organization

  • ✓ D. Location

  • ✓ F. Person

The correct options are Organization, Location, and Person.

These prebuilt entity types represent the standard named entity classes that the Text Analysis toolkit extracts. Organization covers companies and institutions, Location covers geographic and geopolitical places, and Person covers individual names. These types are commonly provided out of the box for entity recognition.

Gender is not a standard prebuilt entity type for named entity recognition. Gender is a demographic attribute and it is not returned as an entity label by most NER models unless you use a separate attribute extraction model.

Identification is not a prebuilt entity type. The service might detect specific identifier tokens in some contexts but there is no generic Identification label in the standard entity set.

Name is not listed separately because personal names are included under the Person entity type. The toolkit uses Person to represent names so there is no distinct Name category.

Age is not provided as a prebuilt entity type. Age is a numeric attribute and if it is extracted it is usually handled as a value or by a dedicated attribute extractor rather than as a named entity label.

When you answer NER questions check the provider documentation for standard entity classes and focus on common categories such as Person, Organization, and Location rather than demographic or attribute terms.
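For context, these prebuilt categories are what the Azure.AI.TextAnalytics client returns from entity recognition, as in this short sketch with a placeholder endpoint and key.

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

class NerSample
{
    static void Main()
    {
        var client = new TextAnalyticsClient(
            new Uri("https://<resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<key>"));

        string text = "Satya Nadella spoke at the Microsoft campus in Redmond.";
        CategorizedEntityCollection entities = client.RecognizeEntities(text);

        // Expect categories such as Person, Organization, and Location.
        foreach (CategorizedEntity entity in entities)
        {
            Console.WriteLine($"{entity.Text} -> {entity.Category} ({entity.ConfidenceScore:0.00})");
        }
    }
}
```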

Your Azure account contains an App Service app named WebApp2 and you create a multi-service Azure Cognitive Services account named CogSvcProd. You need to allow WebApp2 to call CogSvcProd while keeping administrative overhead as low as possible. What should you configure WebApp2 to use?

  • ✓ B. The service endpoint and a subscription key

The service endpoint and a subscription key is the correct choice for allowing WebApp2 to call CogSvcProd while keeping administrative overhead as low as possible.

Using a service endpoint and a subscription key is straightforward because Cognitive Services issues keys that you can place in your App Service configuration and the app can call the single multi-service endpoint without additional Azure AD configuration or role assignments.

The endpoint and an OAuth access token is incorrect because obtaining and refreshing OAuth tokens requires registering an application, configuring permissions, and implementing token exchange logic, which increases administrative work compared to using a subscription key.

A user assigned managed identity is incorrect in this scenario because it requires creating and assigning an identity and then ensuring the Cognitive Services resource supports and is configured for Azure AD authentication, so it adds more setup than a simple key when low overhead is the priority.

A system assigned managed identity with an X509 certificate is incorrect because managed identities do not rely on X509 certificate provisioning for standard App Service scenarios and using certificates would add unnecessary complexity and lifecycle management compared with using a subscription key.

When a question emphasizes low administrative overhead prefer solutions that need no extra Azure AD app registrations or role assignments and that work with simple keys or built in credentials when the service supports them.
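To show how low the overhead is, the sketch below reads the endpoint and subscription key from App Service application settings, which surface to the app as environment variables, and builds a client. The setting names are hypothetical and the Text Analytics client simply stands in for any call against a multi-service Cognitive Services account.

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

class CogSvcClientFactory
{
    static TextAnalyticsClient Create()
    {
        // App Service application settings surface as environment variables.
        // These setting names are hypothetical.
        string endpoint = Environment.GetEnvironmentVariable("COGSVC_ENDPOINT");
        string key = Environment.GetEnvironmentVariable("COGSVC_KEY");

        // No Azure AD app registration or role assignment is required. The
        // service endpoint and subscription key are enough to authenticate.
        return new TextAnalyticsClient(new Uri(endpoint), new AzureKeyCredential(key));
    }

    static void Main()
    {
        TextAnalyticsClient client = Create();
        Console.WriteLine(client.DetectLanguage("Hola mundo").Value.Name);
    }
}
```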

A remote assessment provider called SummitPrep streams examinees during proctored tests and needs to ensure that the people in the live feeds are actual humans and not static photos or prerecorded video. Which method should their application use with the face detection service to verify liveness?

  • ✓ C. Call the face detection endpoint continuously and monitor FaceAttributes.HeadPose for natural orientation changes

The correct answer is Call the face detection endpoint continuously and monitor FaceAttributes.HeadPose for natural orientation changes.

This option is correct because continuously calling the face detection endpoint lets the application observe changes in head orientation over time and HeadPose provides pitch, yaw, and roll values that vary naturally for a live person. Monitoring those values for ongoing, somewhat unpredictable motion makes it possible to distinguish a live subject from a static photo and it increases confidence in liveness when combined with challenge response or other checks.

Invoke the face detection endpoint and use FaceLandmarks to calculate the distance between the pupils is incorrect because landmark positions by themselves do not prove liveness. A static image will produce the same pupil distances as a live person and landmarks are subject to detection noise, so they are not reliable for spoof detection on their own.

Call the face detection endpoint once and record the FaceRectangle values for the detected face is incorrect because a single detection offers no temporal information. FaceRectangle coordinates can be identical for a photo or a recorded video frame, so one snapshot cannot verify that the subject is live.

Analyze the video stream with Cloud Video Intelligence API to detect anomalies in the footage is incorrect because the Video Intelligence API is designed for content analysis such as label detection and shot changes and it is not specialized for fine grained facial pose or liveness detection. Using it alone does not provide the head pose time series needed to robustly assert liveness.

When a question asks about proving liveness prefer solutions that use continuous monitoring or challenge response rather than a single snapshot or static landmarks.
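As a plain C# illustration of the monitoring idea, assume each value below is a yaw angle taken from FaceAttributes.HeadPose on successive detection calls. A near-constant series suggests a static image, while natural drift suggests a live person. The threshold is an arbitrary illustrative value, and a production liveness check would combine this with other signals such as challenge-response prompts.

```csharp
using System;
using System.Linq;

class LivenessCheck
{
    // Hypothetical yaw readings (degrees) collected from repeated face detection calls.
    static bool ShowsNaturalMovement(double[] yawReadings, double minimumRangeDegrees = 5.0)
    {
        if (yawReadings.Length < 2) return false;

        // A live person's head orientation drifts over time, so the spread of
        // yaw values should exceed a small threshold. A photo stays flat.
        double range = yawReadings.Max() - yawReadings.Min();
        return range >= minimumRangeDegrees;
    }

    static void Main()
    {
        double[] livePerson = { -2.1, 1.4, 6.8, 3.2, -0.5 };
        double[] staticPhoto = { 0.3, 0.3, 0.4, 0.3, 0.3 };

        Console.WriteLine($"Live feed looks live: {ShowsNaturalMovement(livePerson)}");
        Console.WriteLine($"Static photo looks live: {ShowsNaturalMovement(staticPhoto)}");
    }
}
```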

You are building a chat based form for an employee portal at Meridian Logistics to handle time off submissions. The bot must gather the leave start date the leave end date and the total number of paid days requested. The flow must keep the conversation as simple as possible. Which dialog pattern should you use?

  • ✓ D. Waterfall dialog

The correct option is Waterfall dialog.

Waterfall dialog fits this scenario because it implements a simple linear sequence of steps where the bot prompts the user for one piece of information at a time. It maps naturally to a form style interaction so the bot can ask for the leave start date then the leave end date then the total paid days and it can validate or confirm the collected values before completing the process.

Adaptive dialog is designed for more complex and dynamic conversational flows that require conditional branching and flexible state management. That power is unnecessary for a straightforward, step by step form and it would add avoidable complexity.

Component dialog is a mechanism for packaging and reusing dialogs as modular components. It helps organize dialog code but it is not itself the linear prompt pattern you would choose for a simple form unless you are specifically composing reusable pieces.

Skill dialog is used to expose or call dialogs as remote skills between bots. It is about cross bot composition and reuse and not about selecting the simplest in bot prompting pattern for gathering a few form fields.

When the question describes a fixed sequence of prompts for collecting form fields, think Waterfall dialog because it models step by step interactions in the most straightforward way.
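A condensed sketch of that pattern with the Bot Framework SDK (Microsoft.Bot.Builder.Dialogs) follows. The dialog class name, prompt IDs, and wording are illustrative, and real code would add date validation.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

public class TimeOffDialog : ComponentDialog
{
    public TimeOffDialog() : base(nameof(TimeOffDialog))
    {
        // Each waterfall step asks for one value, in a fixed order.
        var steps = new WaterfallStep[] { StartDateStepAsync, EndDateStepAsync, PaidDaysStepAsync, SummaryStepAsync };

        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), steps));
        AddDialog(new TextPrompt(nameof(TextPrompt)));
        AddDialog(new NumberPrompt<int>(nameof(NumberPrompt<int>)));
        InitialDialogId = nameof(WaterfallDialog);
    }

    private static Task<DialogTurnResult> StartDateStepAsync(WaterfallStepContext step, CancellationToken ct) =>
        step.PromptAsync(nameof(TextPrompt),
            new PromptOptions { Prompt = MessageFactory.Text("What is the leave start date?") }, ct);

    private static Task<DialogTurnResult> EndDateStepAsync(WaterfallStepContext step, CancellationToken ct)
    {
        step.Values["startDate"] = (string)step.Result;
        return step.PromptAsync(nameof(TextPrompt),
            new PromptOptions { Prompt = MessageFactory.Text("What is the leave end date?") }, ct);
    }

    private static Task<DialogTurnResult> PaidDaysStepAsync(WaterfallStepContext step, CancellationToken ct)
    {
        step.Values["endDate"] = (string)step.Result;
        return step.PromptAsync(nameof(NumberPrompt<int>),
            new PromptOptions { Prompt = MessageFactory.Text("How many paid days are requested?") }, ct);
    }

    private static async Task<DialogTurnResult> SummaryStepAsync(WaterfallStepContext step, CancellationToken ct)
    {
        await step.Context.SendActivityAsync(
            $"Requesting {(int)step.Result} paid day(s) from {step.Values["startDate"]} to {step.Values["endDate"]}.",
            cancellationToken: ct);
        return await step.EndDialogAsync(cancellationToken: ct);
    }
}
```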

A developer is integrating with Aurora Search APIs to monitor user queries and traffic patterns and they enable the feature called “Search Metrics” to examine “Top Query Terms” and “Request Volume” so they can optimize system performance. Which of the following statements about Search Metrics is incorrect?

  • ✓ B. Enabling Search Metrics adds an extra charge and could slightly increase your subscription rate

The correct answer is Enabling Search Metrics adds an extra charge and could slightly increase your subscription rate.

This statement is incorrect because Search Metrics is typically provided as an included analytics capability and does not by itself trigger a higher subscription tier. The feature collects aggregated query terms and request volume to help optimize performance and it is billed under the service’s normal usage and storage rules rather than as an automatic subscription increase.

Sharing aggregated dashboard data from Search Metrics with partner organizations is prohibited is not the correct answer because aggregated and non identifying metrics are commonly shareable. Most platforms let you export or share dashboards and summaries as long as you comply with privacy policies and contractual obligations.

You can activate Search Metrics using the cloud provider portal interface is not the correct answer because enabling metrics through the provider console is a supported and common method. Many services also allow activation via API or CLI for automation.

The Search Metrics feature appears under the available service resources and is accessible is not the correct answer because the metrics functionality normally shows up as a resource or dashboard entry once enabled and you can view it from the service UI or via API endpoints.

When questions contrast billing and feature availability, focus on whether the feature is described as an “included” capability or as an explicit paid add on. Check wording about usage versus subscription to decide if enabling something changes your tier.

Maple Grove Motors is a regional used car dealer in Colorado and it collects customer comments on its website for service monitoring. Which Azure Language capability will allow the team to identify and flag comments that express negative sentiment?

  • ✓ B. Use the Language service to perform sentiment analysis

Use the Language service to perform sentiment analysis is correct. This capability is built to detect positive negative or neutral tone in text and it can flag comments that express negative sentiment.

The Azure Language service returns sentiment scores at the document and sentence level and it also supports opinion mining to show which aspects of a service customers speak about and how they feel about each aspect. That makes sentiment analysis the right choice for monitoring customer comments and alerting on negative feedback.

Use the Language service to perform entity linking is incorrect because entity linking maps mentions in text to entries in a knowledge base and it does not provide sentiment labels or scores.

Use the Language service to extract key phrases is incorrect because key phrase extraction identifies important words and phrases but it does not determine whether the text expresses positive or negative feelings.

Use the Azure Translator service for language detection is incorrect because the Translator service focuses on translating text and detecting language and it does not perform sentiment analysis to classify emotional tone.

Use the Language service to extract named entities is incorrect because named entity recognition finds people places and organizations and it does not assess sentiment or emotional polarity in comments.

When a question asks about identifying positive or negative feelings look for services that explicitly mention sentiment analysis or opinion mining rather than extraction or translation features.
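The flagging logic the scenario calls for can be as short as the sketch below, again using the Azure.AI.TextAnalytics client with placeholder credentials and a made-up comment.

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

class CommentMonitor
{
    static void Main()
    {
        var client = new TextAnalyticsClient(
            new Uri("https://<resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<key>"));

        string comment = "The service department kept my car for a week and never called me back.";
        DocumentSentiment sentiment = client.AnalyzeSentiment(comment);

        // Flag the comment when the overall sentiment is negative.
        if (sentiment.Sentiment == TextSentiment.Negative)
        {
            Console.WriteLine($"Flagged (negative score {sentiment.ConfidenceScores.Negative:0.00}): {comment}");
        }
    }
}
```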

A regional streaming studio needs to scan archived videos to find spoken mentions of particular companies in their footage and they plan to use Azure tools to do this. What steps should they follow in order to configure the system to recognize those company names?

  • ✓ B. Sign in to the Azure Video Indexer web portal and under Content model customization choose Brands then add the target company names to the include list

The correct answer is to sign in to the Azure Video Indexer web portal and, under Content model customization, choose Brands and then add the target company names to the include list.

This option is correct because Azure Video Indexer provides a Content model customization area where you can add known brands and company names to the Brands include list so the indexer will try to detect and tag those mentions in audio and video. Using the Brands list trains the indexer to recognize spoken or visual references to specific companies and to surface those matches in search and metadata.

Sign in to the Azure Video Indexer web portal and under Content model customization choose Language then add the target company names to the include list is incorrect because the Language setting controls language and speech related configuration and not the Brands entity list. Company names for detection belong in the Brands include list rather than the Language configuration.

Sign in to the Google Cloud Video Intelligence console and define a custom glossary then add the target company names to the include list is incorrect because the question specifies using Azure tools and not Google Cloud. The Google Cloud approach is a different vendor workflow and does not apply to configuring Azure Video Indexer.

Read the question for the vendor named and then choose the feature within that product that matches the goal. In Video Indexer the Content model area is where you add custom Brands for recognition.

Scenario: Northstar Media is a startup that built a public photo gallery, and its lead engineer Rajani Patel evaluated Azure and chose to deploy the gallery there. The website lets visitors locate pictures by entering simple phrases. If a visitor signs in and searches for “images with a beach” what is the typed search phrase called?

  • ✓ D. Utterance

The correct answer is Utterance.

A user-typed search phrase like “images with a beach” is called an Utterance because it is the exact sentence or phrase the user speaks or types when interacting with a language system. Utterances are the surface inputs that the system receives and that language models use to learn how to map words to meaning.

Query is not the best choice because query is a broader term for a search request or database call and it does not name the specific NLP concept that labels the user’s literal input. The expected term for the typed or spoken input is utterance.

Entity is incorrect because an entity refers to a specific piece of information extracted from the utterance, such as the word “beach” being tagged as a location or object. The whole typed phrase is the utterance, while entities are components within it.

Intent is wrong because intent denotes the user’s goal or purpose like wanting to find images. The intent describes what the user wants to accomplish and not the exact words they typed, which are called an utterance.

When a question asks about the literal words a user types or speaks think utterance. Remember that intent is the user’s goal and entities are values extracted from the utterance.

A retail analytics team at Summit Analytics runs a Summit Cognitive Search deployment and query volume has climbed steadily over the past ten months. They have observed that some search queries are being throttled by the search service. Will adding replicas reduce the likelihood that query requests will be throttled?

  • ✓ C. Yes adding replicas will reduce query throttling

Yes adding replicas will reduce query throttling is correct.

Adding replicas increases the number of query serving copies of the index and spreads incoming requests across those copies. Each replica can handle queries so adding replicas increases concurrency and reduces the chance that individual requests are throttled by the search service.
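For example, the replica count can be raised programmatically. This is a minimal sketch assuming the azure-mgmt-search Python package (operation names vary slightly between SDK versions) and placeholder subscription, resource group, and service names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.search import SearchManagementClient

client = SearchManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the current service definition, then raise only the replica count.
service = client.services.get("<resource-group>", "<search-service>")
service.replica_count = 3

poller = client.services.begin_create_or_update(
    "<resource-group>", "<search-service>", service
)
print(poller.result().replica_count)
```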

Only if you also increase partitions is incorrect because partitions are meant to distribute index storage and to scale indexing throughput. Partitions do not by themselves increase the number of query handlers. You would only need more partitions if the index exceeds storage or memory capacity on the current configuration.

Upgrade to a higher service tier is not the best answer because changing tier may change overall resource limits but the direct action that reduces query throttling is adding replicas so that queries are served by more instances. In some platforms a tier change can be required to allow more replicas but the mechanism that reduces throttling remains the replicas.

No adding replicas will not reduce query throttling is incorrect because adding replicas does reduce throttling by increasing query capacity and concurrency.

When a question contrasts query and indexing performance focus on which component serves queries. Adding replicas scales query capacity while adding partitions scales storage and indexing throughput.

Vendor managed encryption keys protect your Orion AI Platform resources by default and your team has been instructed to switch to customer managed keys for greater control. Which of the statements below are true? (Choose 3)

  • ✓ C. Provide the key identifier URI from KeySafe Vault as the AI platform key URI

  • ✓ D. To use customer managed keys you must provision or import a key into the key vault

  • ✓ E. Using a dedicated key vault such as KeySafe Vault to store customer managed keys is the recommended approach

Provide the key identifier URI from KeySafe Vault as the AI platform key URI, To use customer managed keys you must provision or import a key into the key vault, and Using a dedicated key vault such as KeySafe Vault to store customer managed keys is the recommended approach are correct.

Provide the key identifier URI from KeySafe Vault as the AI platform key URI is required because the AI platform needs a full key resource reference so it can call the key management service to perform encryption and decryption on behalf of the service. Supplying the key URI allows the platform and the key vault to enforce access controls and to audit key usage.

To use customer managed keys you must provision or import a key into the key vault means an actual cryptographic key must exist in the vault before the AI service can use it. You can generate a key in the vault or import external key material and then grant the platform the required permissions to use that key.

Using a dedicated key vault such as KeySafe Vault to store customer managed keys is the recommended approach improves security by isolating keys for sensitive workloads and by enabling focused access policies and logging. A dedicated vault also makes key lifecycle management and rotation easier to control.

No configuration updates are required on the AI resources when you rotate the encryption key is incorrect because rotation behavior varies by service and may require updating a referenced key version or reconfiguring resources if the service does not automatically pick up the new version. Always verify how the platform handles rotation and plan any necessary updates.

You must disable “soft delete” and “recovery protection” on the key to retrieve your data is incorrect because those features protect keys from accidental or malicious deletion and help with recovery. Disabling protections is not required to access data and would reduce your security posture. Instead ensure the key remains available and that the correct permissions and retention policies are in place.

When switching to customer managed keys always provide the exact key URI to the AI service and verify the service account has the necessary encrypt and decrypt permissions before testing.

A consultant scanned a client note into a PDF document and needs to extract the contained text for automated processing. Which Azure capability should be used?

  • ✓ B. Azure AI Document Intelligence Read model

The correct answer is Azure AI Document Intelligence Read model.

The Azure AI Document Intelligence Read model is the prebuilt document OCR capability that extracts printed and handwritten text while preserving page and line layout. It supports multi page PDF inputs and returns text with layout information and confidence scores so automated processing and indexing are straightforward.
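As a minimal sketch, assuming the azure-ai-formrecognizer Python package (the newer azure-ai-documentintelligence package has a very similar shape) and placeholder endpoint and key values:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze the scanned PDF with the prebuilt Read (OCR) model.
with open("client-note.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-read", document=f)
result = poller.result()

# Text comes back organized by page and line, with layout and confidence data.
for page in result.pages:
    for line in page.lines:
        print(line.content)
```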

Azure AI Vision Image Analysis is focused on general image understanding and single image scenarios. It can perform OCR on images but it is not the prebuilt, document oriented OCR that is optimized for multi page PDFs and complex document layouts.

Custom model in Azure AI Document Intelligence requires collecting labeled examples and training to extract specific fields or structures. That approach is useful when you need repeated, structured field extraction from known forms, but it is not necessary for simple raw text extraction from a scanned PDF because the prebuilt Azure AI Document Intelligence Read model already provides OCR and layout output out of the box.

When the task is to extract raw text from scanned documents choose the prebuilt Read OCR capability first. Reserve a Custom model for cases where you need consistent extraction of specific labeled fields.

Meridian Transport uses Azure AI Document Intelligence for standard receipt and invoice parsing but needs to build models for its custom shipment forms. Sequence the following tasks in the proper execution order to create, validate, and manage a custom Document Intelligence model. Task 1: Manage model versions and lifecycle. Task 2: Upload example documents to an Azure Blob storage container. Task 3: Define and prepare the training dataset. Task 4: Train the model using labeled or unlabeled examples. Task 5: Validate the model with a holdout dataset not used during training.

  • ✓ B. Define and prepare the training dataset then upload example documents to the Azure Blob container then train the model with labeled or unlabeled samples then validate using a withheld dataset then manage the model lifecycle

The correct answer is Define and prepare the training dataset then upload example documents to the Azure Blob container then train the model with labeled or unlabeled samples then validate using a withheld dataset then manage the model lifecycle.

You start by defining and preparing the training dataset so you know which fields to extract and how examples must be labeled. After the dataset is defined you upload the example documents to a Blob container so the training service can access them. Training then runs on the labeled or unlabeled examples to produce a model and you validate that model using a withheld or holdout dataset that was not used during training to check generalization. Managing model versions and the lifecycle comes last so you can track the validated model, roll out updates, and maintain reproducibility.
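In code, the upload-then-train steps of that pipeline look roughly like the sketch below. It assumes the azure-ai-formrecognizer Python package, that the prepared and labeled shipment forms were already uploaded to a Blob container reachable via a SAS URL, and a hypothetical model id; method names differ slightly between SDK versions:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentModelAdministrationClient, ModelBuildMode

admin_client = DocumentModelAdministrationClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# The labeled shipment forms were uploaded to this Blob container beforehand.
poller = admin_client.begin_build_document_model(
    ModelBuildMode.TEMPLATE,
    blob_container_url="https://<storage-account>.blob.core.windows.net/<container>?<sas-token>",
    model_id="shipment-forms-v1",
)
model = poller.result()
print(model.model_id, model.created_on)
```

Validation against a withheld set and version management then happen against the model id returned by the build operation.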

The option Upload example documents to the Blob storage then define the training dataset then train the model then validate with a holdout dataset then manage model versions is incorrect because it reverses the logical order. You should define the dataset and labeling approach before or as you upload examples so the data is prepared for consistent training and labeling.

The option Manage model versions first then define the dataset then upload files then train the model then validate with a holdout set is incorrect because versioning cannot meaningfully occur before a model is trained. Model lifecycle tasks belong after training and validation when you have artifacts to manage.

The option Define the training dataset then manage models then upload documents to Blob then train with labeled or unlabeled data then validate with an independent dataset is incorrect because it places model management before uploading and training. Managing versions before creating any trained model is premature and breaks the expected flow of prepare then train then validate then manage.

For sequencing questions think about the natural pipeline. Emphasize dataset definition first, then data collection and upload, then training, then validation with a holdout, and finally versioning and lifecycle management.

Maya Brooks works for the Morning Star and she is building a GPT powered chat assistant to respond to reader inquiries about Morning Star articles on the publication web portal. She plans to validate the assistant using Microsoft recommended prompt engineering practices. Which prompt engineering techniques should Maya apply when she is testing the assistant? (Choose 4)

  • ✓ B. Use descriptive analogies and richer context

  • ✓ D. Be specific and reduce ambiguity in prompts

  • ✓ E. Place instructions both before and after the main content

  • ✓ F. Order matters in how you present prompt elements

The correct options are Use descriptive analogies and richer context, Be specific and reduce ambiguity in prompts, Place instructions both before and after the main content, and Order matters in how you present prompt elements.

The choice Use descriptive analogies and richer context is correct because giving the model vivid examples and article specific context helps it map reader questions to the right content and reduces the chance of vague or off topic answers. Rich context supplies relevant facts and boundaries so the assistant can produce accurate and relevant responses for Morning Star articles.

The choice Be specific and reduce ambiguity in prompts is correct because clear, unambiguous instructions narrow the model’s possible interpretations and improve answer quality. Specific prompts guide the assistant toward the expected style and factual constraints for responding to reader inquiries.

The choice Place instructions both before and after the main content is correct because priming instructions up front can set role and tone and adding reminders or output constraints afterward can shape the final response. Placing instructions in both positions lets you set context and then reinforce or refine the expected output.

The choice Order matters in how you present prompt elements is correct because models are sensitive to sequence and earlier content often carries more weight. Presenting context, examples, and constraints in an intentional order helps ensure the most important instructions influence the response.
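The sketch below shows how those placements might look in an Azure OpenAI chat call, assuming the openai Python package and a hypothetical deployment name; the article text and instructions are illustrative only:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

article = "<full text of the Morning Star article>"
question = "What did the city council decide about the harbour project?"

messages = [
    # Instructions before the content set the role, tone, and grounding rules.
    {"role": "system", "content": (
        "You are an assistant for Morning Star readers. Answer only from the "
        "article provided and say so if the answer is not in it.")},
    {"role": "user", "content": (
        f"Article:\n{article}\n\n"
        f"Question: {question}\n\n"
        # Instructions repeated after the content reinforce the output constraints.
        "Answer in no more than three sentences and quote the sentence you relied on.")},
]

response = client.chat.completions.create(model="<chat-deployment-name>", messages=messages)
print(response.choices[0].message.content)
```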

Vertex AI is incorrect because it is a Google Cloud platform product and not a Microsoft prompt engineering technique, so it does not answer which Microsoft recommended practices to apply when testing prompts.

Be minimalistic and avoid extra detail is incorrect because overly minimal prompts often leave the model guessing and increase ambiguity, which reduces consistency and factual accuracy when validating a chat assistant.

When testing prompts focus on clarity and context and try different orderings and placements of instructions to see how the model behavior changes.

At Solstice AI Studio you are building a conversational agent in the Dialogue sandbox and you want to reduce repeated words in the bot replies while making each response less random. Which two parameters should you adjust? (Choose 2)

  • ✓ C. Temperature

  • ✓ E. Frequency penalty

The correct options are Temperature and Frequency penalty.

Lowering Temperature makes the model more deterministic and reduces the chance of unusual or highly varied word choices, so responses become less random and more focused.

Raising the Frequency penalty applies a cost to tokens that appear repeatedly in the generated text, so it directly discourages repeated words and phrases within a reply.
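Both parameters are passed directly on the completion request. A minimal sketch, assuming the openai Python package against an Azure OpenAI deployment:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<chat-deployment-name>",
    messages=[{"role": "user", "content": "Summarise today's support conversations."}],
    temperature=0.2,        # lower temperature makes replies less random
    frequency_penalty=0.8,  # positive values penalize tokens that keep repeating
)
print(response.choices[0].message.content)
```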

Top P sampling is a probabilistic sampling method that limits the cumulative probability of candidate tokens and it can affect randomness, but it is not the direct control for reducing repeated words in a reply.

Presence penalty encourages the model to introduce new tokens that have not appeared yet and it helps topic diversity, but it does not target the frequency of repeated tokens as directly as the frequency penalty.

Max tokens only limits the output length and it does not control randomness or repetition behavior within the allowed output.

Top K sampling restricts the candidate pool to the highest probability tokens and it influences randomness, but it is not the primary parameter to reduce repeated words compared with adjusting temperature and the frequency penalty.

To make replies less random set a lower temperature and to reduce repeated words increase the frequency penalty. Tweak them gradually and test sample prompts to find the right balance.

A regional merchant named NorthStar Retail keeps data across several systems. The accounting system runs on an on premises Microsoft SQL Server instance. The online store uses Azure Cosmos DB with the Core SQL API. Operational telemetry lands in Azure Table storage. People operations maintains an Azure SQL Database. You need to make all of these sources searchable through the Azure AI Search REST API. What should you do?

  • ✓ A. Replicate the accounting SQL Server to an Azure SQL Database instance

Replicate the accounting SQL Server to an Azure SQL Database instance is correct.

Azure Cognitive Search indexers can natively connect to an Azure SQL Database data source and extract rows to build searchable indexes. Replicating the on premises SQL Server into Azure SQL Database makes the accounting tables directly accessible to the search service without special gateway workarounds and lets you use the built in SQL indexer for scheduled indexing and change detection.
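Once the data sits in Azure SQL Database, registering it with the search service is a small amount of code. A sketch assuming the azure-search-documents Python package and placeholder connection details and table name:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

indexer_client = SearchIndexerClient(
    endpoint="https://<search-service>.search.windows.net",
    credential=AzureKeyCredential("<admin-key>"),
)

# Point the built-in Azure SQL indexer at the replicated accounting database.
data_source = SearchIndexerDataSourceConnection(
    name="accounting-sql",
    type="azuresql",
    connection_string="Server=tcp:<server>.database.windows.net,1433;Database=<db>;...",
    container=SearchIndexerDataContainer(name="Invoices"),
)
indexer_client.create_data_source_connection(data_source)
```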

Send the telemetry from Azure Table storage into Microsoft Sentinel is incorrect because Microsoft Sentinel is a security information and event management service and not a required intermediary for making telemetry searchable with Azure Cognitive Search. Moving telemetry into Sentinel would add unnecessary complexity and would not by itself expose the data to the search indexer.

Migrate the ecommerce Cosmos DB data to the MongoDB API is incorrect because the online store already uses Azure Cosmos DB with the Core SQL API which is directly supported by Azure Cognitive Search indexers. Migrating to the MongoDB API is not needed and could complicate indexing because the Core SQL API is the supported path.

Ingest the telemetry into Azure Data Explorer for analysis is incorrect because Azure Data Explorer is optimized for fast analytical queries and is not a native data source for Azure Cognitive Search indexers. You could export from Data Explorer to a supported data source, but ingesting into ADX does not directly solve the requirement to make the sources searchable through the Azure AI Search REST API.

When you see questions about making data searchable with Azure Cognitive Search look for a supported data source such as Azure SQL Database or the Cosmos DB Core SQL API and prefer moving or replicating on premises data into those services so the built in indexers can do the work.

A team at Harbor Data is building a tool to extract named entities from client messages with Azure Text Analytics. Which endpoint performs Named Entity Recognition?

  • ✓ https://eastus2.cognitiveservices.example.com/text/analytics/v3.2/entities/recognition/general

The https://eastus2.cognitiveservices.example.com/text/analytics/v3.2/entities/recognition/general endpoint is correct. It is the Text Analytics named entity recognition API for the general domain and it extracts entities such as people, organizations, locations, dates, and other common entity types from unstructured text.
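Calling that endpoint is a plain REST request. A minimal sketch with the Python requests library, using the endpoint from the question and a placeholder key and sample sentence:

```python
import requests

endpoint = ("https://eastus2.cognitiveservices.example.com"
            "/text/analytics/v3.2/entities/recognition/general")
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/json",
}
body = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "Contoso Ltd met with Jane Smith in Denver on 4 March."}
    ]
}

response = requests.post(endpoint, headers=headers, json=body)
print(response.json())  # entities such as Organization, Person, Location, and DateTime
```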

https://eastus2.cognitiveservices.example.com/text/analytics/v3.2/entities/linking is incorrect because that endpoint performs entity linking to map recognized mentions to knowledge base entries and it is not the general named entity recognition endpoint.

https://eastus2.cognitiveservices.example.com/text/analytics/v3.2/entities/recognition/pii is incorrect because that endpoint is for detecting personally identifiable information and it focuses on sensitive data rather than general entity types.

https://eastus2.cognitiveservices.example.com/text/analytics/v3.2/entities/recognition/pii?domain=phi is incorrect because the domain query parameter specializes PII detection for protected health information and it still targets PHI rather than providing general named entity recognition.

Check the endpoint path for the keyword recognition when you need standard named entity recognition and look for linking or pii when the task is entity linking or PII detection.

The Custom Vision platform in Nimbus Cloud allows engineers to train models for image classification and object detection using labeled images and then deploy those models so applications can request predictions. Training requires a training resource and deployed models require a prediction resource. Which type of Nimbus Cloud resource can host a published Custom Vision model?

  • ✓ C. Intelligent AI Services

The correct option is Intelligent AI Services.

Custom Vision workflows require two distinct resources. One is a training resource that you use to upload labeled images and train a model. The other is a prediction or hosting resource that serves the published model so applications can request inferences. In Nimbus Cloud the prediction and hosting role is fulfilled by Intelligent AI Services, and that resource provides the endpoint and compute to host published Custom Vision models.

Vision API is incorrect because it refers to general prebuilt vision APIs rather than the prediction resource that hosts a published Custom Vision model. Prebuilt APIs do not host your custom trained models for online prediction.

Custom Vision Trainer is incorrect because it describes the training resource. The trainer holds your labeled images and runs the training jobs, and it does not host the published model for prediction.

Edge Predictions is incorrect because it describes deployment to edge devices or edge runtimes rather than the cloud prediction resource. Edge deployment is for offline or on-device inference and is not the cloud host used when a model is published for online requests.

When answering, watch for the words train and publish or host. Training resources are not the same as prediction resources, and the question will often ask specifically which resource provides the prediction endpoint.

You have a helper C# routine named provisionResource that creates Azure AI accounts programmatically:

static void provisionResource(CognitiveServicesManagementClient client, string name, string serviceKind, string sku, string region)
{
    CognitiveServicesAccount parameters = new CognitiveServicesAccount(null, null, serviceKind, region, name, new CognitiveServicesAccountProperties(), new Sku(sku));
    var result = client.Accounts.Create(resourceGroupName, name, parameters);
}

You must invoke this routine to provision a free Azure AI account in the West US 2 region that will be used to automatically generate captions for images. Which invocation should you use?

  • ✓ D. provisionResource(client, “imgsvc01”, “ComputerVision”, “F0”, “westus2”)

The correct answer is provisionResource(client, “imgsvc01”, “ComputerVision”, “F0”, “westus2”).

Computer Vision is the Azure service that provides image description and captioning capabilities and is the appropriate kind to generate captions automatically. The F0 SKU is the free tier that applies to Computer Vision and so it satisfies the requirement to provision a free account. The region westus2 is a valid Azure region and works for creating the resource.
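Once the Computer Vision resource exists, generating a caption is a single call. A sketch assuming the azure-cognitiveservices-vision-computervision Python package with a placeholder endpoint, key, and image URL:

```python
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.computervision import ComputerVisionClient

client = ComputerVisionClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credentials=CognitiveServicesCredentials("<your-key>"),
)

# Ask the service to describe (caption) an image by URL.
description = client.describe_image("https://example.com/sample-photo.jpg", max_candidates=1)
for caption in description.captions:
    print(caption.text, caption.confidence)
```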

provisionResource(client, “imgsvc01”, “CustomVision.Prediction”, “S1”, “westus2”) is incorrect because Custom Vision Prediction is focused on image classification and object detection rather than general image captioning. The S1 SKU is a paid tier and does not meet the requirement for a free account.

provisionResource(client, “imgsvc01”, “CustomVision.Prediction”, “F0”, “westus2”) is incorrect because using Custom Vision Prediction will not provide the built in captioning and description models that Computer Vision offers even though the F0 SKU would be free.

provisionResource(client, “imgsvc01”, “ComputerVision”, “S1”, “westus2”) is incorrect because although it uses the correct service for captioning the S1 SKU is not the free tier and therefore does not meet the requirement to provision a free account.

When a question asks for a free resource that performs a specific capability match the service kind to the capability first and then check the SKU for a free tier. Pay attention to the exact service name and the SKU string when the code must be invoked exactly as shown.

A regional marketing agency is building an Azure Cognitive Search index in the Azure portal that will index data from an Azure SQL database. The database contains a table called SocialPosts and each row has a field named ContentText that stores users’ social media messages. Users will perform full text searches against the ContentText field and they must see the message text in the search results. Which field attributes should be enabled to satisfy this requirement?

  • ✓ C. Searchable and Retrievable

The correct option is Searchable and Retrievable.

This combination is correct because making the ContentText field searchable enables full text indexing and tokenization so users can perform full text queries against the messages. Making the field retrievable ensures that the actual message text is returned in the search results so users can see the message content.
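In the azure-search-documents Python SDK the same intent is expressed on the field definition. The sketch below is illustrative and assumes a hypothetical index and key field; searchable maps to the Searchable attribute and hidden=False corresponds to Retrievable:

```python
from azure.search.documents.indexes.models import SearchField, SearchFieldDataType, SearchIndex

fields = [
    SearchField(name="Id", type=SearchFieldDataType.String, key=True),
    # searchable=True enables full text queries over the message text and
    # hidden=False keeps the field retrievable so the text appears in results.
    SearchField(
        name="ContentText",
        type=SearchFieldDataType.String,
        searchable=True,
        hidden=False,
    ),
]
index = SearchIndex(name="socialposts-index", fields=fields)
```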

Retrievable and Sortable is incorrect because being retrievable alone does not enable full text search and sortable applies to fields used for ordering results rather than for full text querying.

Searchable and Facetable is incorrect because facetable is used for aggregations and counts and it is not suitable for returning long free form message text in search results.

Filterable and Retrievable is incorrect because filterable supports exact matching and boolean filters and it does not provide full text search capabilities that are required for searching message content.

Read the requirement and map it to field attributes. If the question asks for full text search use searchable. If it asks that the text be shown in results use retrievable.

A legal technology firm named Meridian Legal runs a service called ContractAnalyzerService that relies on a custom Azure Document Intelligence model to classify contract documents. The team needs the model to support one additional contract layout and they want to minimize development effort. What should they do?

  • ✓ B. Add the new contract layout to the existing training set and retrain the custom model

The correct option is Add the new contract layout to the existing training set and retrain the custom model.

This approach is correct because adding labeled examples for the new contract layout and retraining the custom model lets the model learn the new structure while preserving the knowledge it already has. Retraining the existing model is usually the least disruptive choice when you need to support one more layout because the application can continue to call the same model endpoint and you avoid building extra routing logic or new integrations.

Lower the application accuracy threshold is incorrect because lowering the acceptance threshold does not add any support for the new layout and it reduces the reliability of classification rather than solving the underlying coverage gap.

Switch to the Azure prebuilt Document Intelligence contract model is incorrect because prebuilt models may not cover organization specific or uncommon contract layouts and they offer less flexibility for custom fields or labels. Using a prebuilt model could require additional validation and may not meet the firm specific extraction needs.

Create a separate training set for the new layout and train a new custom model is incorrect for this scenario because it adds more development and operational overhead. Running and maintaining multiple custom models requires extra routing logic and increases maintenance burden, so it does not meet the requirement to minimize development effort.

When a question asks to add support for a new document layout with minimal development effort choose the option that extends or retrains the existing custom model rather than replacing models or lowering quality expectations. Retraining the existing model usually preserves integrations and is the least disruptive path.

A regional publishing startup called Harbor Books has an Azure subscription that includes an Azure AI Content Safety resource named SafeScan2. The team plans to build a service to scan user uploaded manuscripts for obscure offensive phrases and they need to maintain a custom list of prohibited terms while keeping development effort to a minimum. What should they use?

  • ✓ C. Blocklist

The correct answer is Blocklist. A Blocklist lets the team maintain a custom list of prohibited terms so they can block or flag uploaded manuscripts with minimal development work.

Within Azure AI Content Safety a Blocklist can be managed and applied as part of the scanning pipeline so developers do not need to build and maintain their own matching or filtering logic. This makes it the simplest option for enforcing obscure or organization specific offensive phrases.
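At analysis time the blocklist is simply referenced by name. The REST sketch below uses the Python requests library; the route, api-version, field names, and blocklist name are assumptions based on the public Content Safety REST API and should be verified against the current documentation:

```python
import requests

endpoint = "https://<safescan2-endpoint>.cognitiveservices.azure.com"
url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"

body = {
    "text": "manuscript excerpt to screen",
    "blocklistNames": ["harbor-books-prohibited-terms"],  # custom term list managed separately
    "haltOnBlocklistHit": True,
}
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

result = requests.post(url, headers=headers, json=body).json()
# Matches against the custom term list are expected under "blocklistsMatch" in the response.
print(result.get("blocklistsMatch", result))
```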

Language detection only identifies the language of a piece of text and does not provide a way to enforce or manage a custom prohibited-terms list. It will not meet the requirement to block specific offensive phrases.

Text moderation generally refers to automated checks and categories for harmful content but it does not offer the same lightweight, user managed term list capability that a blocklist provides for exact phrase blocking.

Text classifier is designed to categorize or label text with model-driven classes and it is not intended for maintaining an explicit list of banned phrases for direct blocking or exact matching.

For questions about enforcing exact banned words with minimal coding look for features that let you upload or manage term lists such as a blocklist rather than relying on classifiers or language detection.

A healthcare support team at Meridian Care is developing an AI solution that uses sentiment analysis of client surveys to set quarterly bonuses for support agents and they need to comply with Microsoft responsible AI guidelines. What practice should they adopt?

  • ✓ C. Require a human review and formal approval before finalizing any bonus decisions that affect employee compensation

The correct answer is Require a human review and formal approval before finalizing any bonus decisions that affect employee compensation.

This option is correct because compensation decisions are high risk and require human judgment to check model outputs for bias and errors and to ensure fairness. Requiring a human review and formal approval provides accountability and aligns with Microsoft responsible AI principles that call for human oversight and the ability to contest automated outcomes.

Publish raw customer feedback and processed sentiment scores in a central repository and give staff direct access is wrong because publishing raw feedback without strict access controls can expose sensitive personal data and violate privacy rules. Centralizing processed scores may be useful for analytics but direct access by all staff is not appropriate for confidential personnel decisions.

Apply sentiment analysis outcomes even when the model reports low confidence is wrong because acting on low confidence predictions increases the risk of incorrect or biased decisions. Responsible AI practice requires handling uncertainty explicitly and ensuring human review for low confidence cases before making impactful decisions.

Retain and use feedback from users who have requested account deletion and data removal is wrong because honoring user deletion requests is a legal and ethical requirement. Keeping and using data after a deletion request would violate user rights and responsible data handling practices.

When the question involves automated decisions that affect people ask whether the task is high risk and whether the policy requires human oversight or explicit user consent.

Aerial analytics provider BlueHorizon uses spatial analysis on satellite and drone imagery. Which kinds of objects can these spatial analysis capabilities detect in imagery?

  • ✓ D. Individuals and vehicles

The correct answer is Individuals and vehicles.

Modern spatial analysis and object detection models applied to satellite and drone imagery can identify people and different types of vehicles in images. These models output labeled detections and bounding boxes which let analysts locate and count Individuals and vehicles over large areas and across time.

Motor vehicles only is incorrect because the capability is not limited to vehicles and it can detect people as well when the model is trained for that class.

Individuals vehicles and buildings is incorrect because this option adds buildings as a required detection target. Building extraction is often handled by different geospatial methods and the exam answer focuses on detecting people and vehicles.

Pedestrians only is incorrect because the detections are not restricted to pedestrians and include vehicles too when the models are configured for both classes.

Focus on the exact object classes named in the options and remember that vision models commonly detect both people and vehicles as distinct categories rather than only one or including unrelated targets like buildings.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified, and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.