AI-102 Azure AI Engineer Practice Tests on Exam Topics

Microsoft Certified Azure Exam Topics

Want to pass the Microsoft AI-102 Azure AI Engineer certification exam on your first try? You are in the right place, because we have put together a collection of sample AI-102 exam questions that will help you not only learn key concepts, but also give you the confidence to pass the AI-102 exam with a strong score.

All of these AI-102 Azure AI Engineer certification questions and answers come from my Udemy Practice Exams course and the certificationexams.pro practice test website, two resources that have helped thousands of students pass the AI-102 exam. If you are interested in even more AI-102 practice tests, Azure AI Engineer exam simulators with realistic test questions are highly recommended.

Get Azure Certified Fast

To be clear, these are not real exam questions, and they are not AI-102 exam dumps or Azure AI Engineer braindumps. They are expertly sourced questions that accurately represent the style, reasoning, and difficulty of what you will encounter on the real Azure AI Engineer certification exam. They will prepare you to earn your AI-102 certification honestly and with integrity.

So, are you ready to test your skills? Good luck with these practice exam questions, and even better luck when you take the actual Microsoft Azure exam.

Git, GitHub & GitHub Copilot Certification Made Easy

Want to get certified on the most popular AI, ML & DevOps technologies of the day? These five resources will help you get GitHub certified in a hurry.

Get certified in the latest AI, ML and DevOps technologies. Advance your career today.

AI-102 Azure AI Practice Exam Questions

Nimbus Speech Systems provides a batch transcription REST interface to process large audio collections and return transcripts, and you are evaluating statements about how its API behaves. Which of the following statements about the voicebatch/v4_0alpha/transcripts endpoint are correct? (Choose 4)

  • ❏ A. Use the voicebatch/v4_0alpha/transcripts endpoint with the PATCH method to modify editable details of an existing transcript

  • ❏ B. Use the voicebatch/v4_0alpha/transcripts endpoint with the GET method to retrieve the result files and status for a transcript identified by an ID

  • ❏ C. Call the voicebatch/v4_0alpha/transcripts endpoint with the POST method to submit a new bulk transcription job

  • ❏ D. Send a DELETE request to the voicebatch/v4_0alpha/transcripts endpoint to remove a specified transcript

Your team at AzureVista trained a custom image classification model with the vision service and you need to fetch the model accuracy and related metrics through the REST API. Which API method should you call?

  • ❏ A. ExportIteration

  • ❏ B. GetImagePerformances

  • ❏ C. GetIterationPerformance

  • ❏ D. GetProject

Your organization maintains an Azure subscription that hosts an Azure Cognitive Search service named SearchProd1, and you develop a custom skill in SearchProd1 to perform language detection and sentiment scoring for documents. The enrichment pipeline consists of these steps. 1 Merge the enriched output into the document. 2 Run the skillset with the custom skill. 3 Retrieve documents from the data source. 4 Parse and extract the content. 5 Write the document to the index. When documents move through the enrichment pipeline, which sequence do they follow to be indexed by SearchProd1?

  • ❏ A. 5 then 3 then 2 then 4 then 1

  • ❏ B. 3 then 4 then 2 then 1 then 5

  • ❏ C. 1 then 4 then 2 then 5 then 3

  • ❏ D. 4 then 2 then 3 then 1 then 5

Maya Chen recently joined Solstice Analytics and she is creating an application that uses Azure AI Vision to detect whether people appear in a live camera stream. Which Azure AI Vision feature should she use?

  • ❏ A. Optical Character Recognition

  • ❏ B. Face Detection

  • ❏ C. Spatial Analysis

  • ❏ D. Image Analysis

Your team at Nimbus Analytics is switching from Microsoft managed encryption keys to customer managed keys for an Azure Cognitive Services deployment to gain more control over key lifecycle and access. Which of the following statements about configuring customer managed keys for the service are correct? (Choose 3)

  • ❏ A. You must supply the Key Identifier URI from Key Vault as the key reference in the Azure Cognitive Services key URI

  • ❏ B. You need to disable both Key Vault soft delete and purge protection to recover encrypted data

  • ❏ C. To use customer managed keys you must create or import a key inside the Key Vault

  • ❏ D. Azure Managed HSM is required to store customer managed keys for Cognitive Services

  • ❏ E. Microsoft recommends storing customer managed keys in Azure Key Vault

What approach minimizes manual labeling when retraining an Azure Custom Vision classifier using 1,000 unlabeled photos?

  • ❏ A. Use Azure Machine Learning data labeling then import labeled data into Custom Vision

  • ❏ B. Upload all images to Custom Vision and request suggested tags then review and confirm those suggestions

  • ❏ C. Upload all images and tag each image manually before training

The Aegis Collective feared that escalating geopolitical tensions would cause a large scale disaster and they built an underground refuge for their affluent patrons under the leadership of Gideon Vale. Gideon is exploring text analytics for Aegis so the organization can extract insights from large volumes of written data. Which capabilities does text analytics provide? (Choose 4)

  • ❏ A. Translator text

  • ❏ B. Sentiment analysis

  • ❏ C. QnA Maker

  • ❏ D. Language detection

  • ❏ E. Authoring

  • ❏ F. Entity recognition

  • ❏ G. Key phrase extraction

Scenario: Priya Anand joined Aurora Security Agency and plans to build a tool that will transcribe a large backlog of audio files by using the Azure AI Speech batch transcription feature. Priya intends to save the transcription outputs to an Azure Blob container by passing a destinationContainerUrl in the batch transcription request and she supplies an ad hoc SAS URI for that destination. Why is Priya using an ad hoc SAS URI in this setup?

  • ❏ A. The storage account is configured to allow only trusted Azure services so SAS usage is blocked

  • ❏ B. The destination container must permit external network access so the Speech service can write the results

  • ❏ C. The Speech batch output requires an ad hoc SAS because access policy based SAS is not supported

  • ❏ D. The Speech service must use the storage account key to authenticate to the container

Your team is building a customer support chatbot that uses Azure Cognitive Service for Language question answering. You upload a PDF named Catalogue.pdf that contains a product catalogue and pricing and you train the model. During validation the chatbot correctly answers the user question “What is the price of Widget Z?” but it does not respond to the user question “How much does Widget Z cost?” You plan to use Language Studio to create an entity called price and then retrain and republish the model. Will that change ensure the chatbot responds correctly to both phrasings?

  • ❏ A. Yes

  • ❏ B. No

HarborTech Manufacturing was started by Maya and Jonas Lee, and the company makes lubricants, solvents, and farm supplies across North America. They elevated their son Evan to the IT director role, and his team will build a system to transcribe large batches of audio using the Azure AI Speech batch transcription capability. Evan wants a storage solution for the audio files that demands minimal development effort. Which storage option should Evan recommend?

  • ❏ A. Azure Cosmos DB

  • ❏ B. Azure Files

  • ❏ C. Azure Blob Storage

  • ❏ D. Azure SQL Database

A startup named Nova Imaging is developing an image classifier with Azure Custom Vision, and they are using the Custom Vision REST API to upload and label training images. Which client class should be used to call PublishIteration for the current training iteration?

  • ❏ A. ApiKeyServiceClientCredentials

  • ❏ B. VertexAIClient

  • ❏ C. CustomVisionTrainingClient

  • ❏ D. CustomVisionPredictionClient

An organization needs a simple deployment to analyze sales invoices and expects up to six simultaneous requests. Which resource should be created?

  • ❏ A. Azure Cognitive Services resource

  • ❏ B. Document Intelligence standard tier

  • ❏ C. Document Intelligence free tier

Your Azure subscription contains an Azure OpenAI instance where the GPT-4 model is deployed. You must allow teams to upload documents that will serve as grounding material for the model. Which two types of Azure resources should you provision? (Choose 2)

  • ❏ A. Azure SQL

  • ❏ B. Azure AI Search

  • ❏ C. Azure AI Bot Service

  • ❏ D. Azure Blob Storage

  • ❏ E. Azure AI Document Intelligence

A travel technology startup named HarborTravel is creating a conversational assistant that must consult an FAQ knowledge base to answer customer inquiries. Which dialog class should the developer use?

  • ❏ A. SkillDialog

  • ❏ B. Dialogflow CX

  • ❏ C. AdaptiveDialog

  • ❏ D. QnAMakerDialog

A startup named FrameLoop is building a photo sharing platform where users upload images. The product team needs to automatically block explicit or offensive photos and they want to minimize development work while relying on Azure managed services. Which services should they choose? (Choose 2)

  • ❏ A. Azure AI Document Intelligence

  • ❏ B. Azure AI Content Safety

  • ❏ C. Azure AI Vision

  • ❏ D. Azure AI Custom Vision

  • ❏ E. Azure Content Moderator

A regional learning portal wants to make its articles and textbooks easier to read for students with dyslexia and other processing differences. Which Azure AI service should the team add to enable an adjustable and reader friendly experience?

  • ❏ A. Text Analytics

  • ❏ B. Custom Vision

  • ❏ C. Immersive Reader

  • ❏ D. Speech service

Aurora Components Ltd is the firm that Dame Eleanor Voss inherited from her late partner. She is debugging the AI Search indexing workflow you configured and sees a single custom skill that should call Form Recognizer, but the skill never receives requests. Which stage of the indexing pipeline might be responsible?

  • ❏ A. Output field mapping stage

  • ❏ B. formSasToken authentication

  • ❏ C. File parsing and content extraction phase

  • ❏ D. Push to index final write stage

Which two translation modes are supported by the Speech Translation endpoint?

  • ❏ A. Speech to Text and Text to Speech

  • ❏ B. Speech to Speech and Speech to Text

  • ❏ C. Speech to Intent and Intent to Speech

Within a multi-turn dialog session, which methods should you invoke to start a waterfall style dialog by placing a new instance of that dialog on top of the dialog stack? (Choose 2)

  • ❏ A. CancelAllDialogs

  • ❏ B. PromptAsync

  • ❏ C. ReplaceDialog

  • ❏ D. BeginDialogAsync

A regional online retailer is evaluating Azure Search pricing tiers to choose capacity and security features for their product index. Which statements about the feature limits and capabilities of the different pricing tiers are correct? (Choose 3)

  • ❏ A. The storage optimized tier always experiences higher query latency than the standard tier

  • ❏ B. The free tier does not include AI enrichment capabilities

  • ❏ C. The selected pricing tier sets the ceiling for the number of provisioned query units

  • ❏ D. The free tier supports configuring an IP firewall for access control

  • ❏ E. Customer managed keys are unavailable on the free plan

A small retail analytics firm called Meridian Insights must accelerate image annotation for an Azure based object detection and image tagging project because of a compressed timeline. What approach will most quickly speed up the labeling while leveraging Azure services?

  • ❏ A. Use the Smart Labeler in Custom Vision on the latest trained iteration to auto-suggest labels for faster annotation

  • ❏ B. Manually tag every photo to guarantee label quality

  • ❏ C. Attempt to apply the Smart Labeler on a brand new untrained Custom Vision project to label images immediately

  • ❏ D. Use Azure Machine Learning data labeling workflows or the Visual Object Tagging Tool to speed up annotations

  • ❏ E. Try to train the detection model without any human labeled images by using unsupervised methods

A company named ClearWave is improving an automated transcription pipeline for engineering webinars and it often mis-transcribes domain specific jargon. You need to improve the transcription accuracy for specialized terminology. You can perform the following actions. 1 Create a Custom Speech workspace. 2 Instantiate a speech recognition model. 3 Upload annotated audio corpora. 4 Train the recognition model. 5 Provision the model endpoint. 6 Create a Speaker ID model. 7 Create a Conversation Understanding model. Which five steps should you perform in sequence to improve transcription accuracy for the technical vocabulary?

  • ❏ A. 6-1-2-4-5

  • ❏ B. 1-2-3-4-5

  • ❏ C. 1-3-2-4-5

  • ❏ D. 3-2-1-4-5

Your team at Northbridge Surveys is evaluating Azure Document Intelligence to automate extraction of answers from standardized questionnaires, and you want to train a model so the service returns consistent JSON output for your data store. Which programming languages are provided as native Form Recognizer SDKs?

  • ❏ A. Google Cloud Vision Vertex AI and Cloud Functions

  • ❏ B. Python and R only

  • ❏ C. JavaScript Java Python and C#/.NET

  • ❏ D. Go and Ruby

  • ❏ E. All of the above

Which Azure services can automatically detect and block sexually explicit or violent images while requiring minimal developer effort? (Choose 2)

  • ❏ A. Azure AI Custom Vision

  • ❏ B. Azure AI Content Safety

  • ❏ C. Azure AI Document Intelligence

  • ❏ D. Azure AI Vision

What extra capability do neural speech models offer compared with older synthetic voices?

  • ❏ A. Cloud Text-to-Speech

  • ❏ B. Broader selection of female voice tones

  • ❏ C. Ability to render natural prosody with nuanced intonation and stress

  • ❏ D. Voices that are entirely gender neutral

A development group at Fabrikam trained and published an Azure Custom Vision project. Is it true that altering the published model is usually difficult, and that the team should build a new model for configuration changes or switch to the Computer Vision service when changes are expected?

  • ❏ A. True

  • ❏ B. False

Within the context of Verdant Cloud Search, fill in the missing phrase in this sentence. After you create and populate an index you can query it to find information in the indexed documents. While you could fetch index entries by matching simple field values, most search systems use what to query an index?

  • ❏ A. Term synonym map

  • ❏ B. Cloud Natural Language API

  • ❏ C. Search suggester

  • ❏ D. Full text search semantics

  • ❏ E. Relevance scoring profile

A financial analytics firm must retain regulated records and securely archive machine learning experiment artifacts to meet audit requirements. Which Azure service should they use to manage compliance and documentation storage?

  • ❏ A. Microsoft Service Trust Portal

  • ❏ B. Compliance Manager

  • ❏ C. Microsoft Defender for Cloud

  • ❏ D. Azure Purview

A conversational agent being built by Aurora Systems needs a language understanding training template that maps intents to example utterances so the agent can be validated locally with the Bot Framework CLI. The CLI requires an input file in a supported format when invoking the bf luis test command. Which file extension should you provide to perform a local test of the language model?

  • ❏ A. .qna

  • ❏ B. .lg

  • ❏ C. .lu

  • ❏ D. .dialog

Which three file formats are the most suitable for importing textual content into an Azure OpenAI assistant for processing? (Choose 3)

  • ❏ A. CSV

  • ❏ B. JSON

  • ❏ C. DOCX

  • ❏ D. MD

  • ❏ E. TXT

A startup called EchoWorks is transcribing a large backlog of meeting recordings with Azure Speech and it will analyze the resulting text for insights and automation. The team uses batch transcription, which involves four operations. Operation Start (021) creates a transcription. Operation Fetch (022) retrieves a transcription. Operation Amend (023) updates transcription settings. Operation Remove (024) deletes a transcription. The available REST endpoints and methods are Call Alpha (B31) GET speech2text/v3.1/transcripts/{id}, Call Beta (B32) POST speech2text/v3.1/transcripts, Call Gamma (B33) DELETE speech2text/v3.1/transcripts/{id}, and Call Delta (B34) PATCH speech2text/v3.1/transcripts/{id}. Which API call corresponds to each operation?

  • ❏ A. 021 maps to B31, 022 maps to B32, 023 maps to B34, 024 maps to B33

  • ❏ B. 021 maps to B31, 022 maps to B32, 023 maps to B33, 024 maps to B34

  • ❏ C. 021 maps to B32, 022 maps to B31, 023 maps to B34, 024 maps to B33

  • ❏ D. 021 maps to B32, 022 maps to B31, 023 maps to B33, 024 maps to B34

A startup named Acme Vision Labs needs to identify faces from a crowd of people, and they have created PersonGroups that store PersistedFace records for each individual. The engineers want to compare the accuracy of different Azure Face recognition models by setting the recognitionModel parameter in their API calls. They will run one test using the service default recognition model and a second test using the newest available recognition model. Complete the sample call for the first run by choosing the correct recognitionModel value in the example code string imageUrl = "<image url>"; var faces = await faceClient.Face.DetectWithUrlAsync(imageUrl, true, true, recognitionModel: "…", returnRecognitionModel: true);

  • ❏ A. recognition_v3

  • ❏ B. recognition_v1

  • ❏ C. recognition_v4

  • ❏ D. recognition_v2

Your team at Crestwave Solutions is building an image classification workflow using the Custom Vision service, and you documented the sequence but four steps are missing. Step 1 ___. Step 2 ___. Step 3 Obtain the sample images. Step 4 Add the client code. Step 5 ___. Step 6 ___. Step 7 Upload and tag images. Step 8 Train and publish the project. Step 9 Use the prediction endpoint. Step 10 Run the application. What steps should fill the blank positions? (Choose 4)

  • ❏ A. Step 6 corresponds to creating tags in the project

  • ❏ B. Step 2 corresponds to installing the Custom Vision client library package

  • ❏ C. Step 5 corresponds to creating the Custom Vision project

  • ❏ D. Step 1 corresponds to installing the Custom Vision client library package

  • ❏ E. Step 6 corresponds to creating the Custom Vision project

  • ❏ F. Step 1 corresponds to retrieving the training and prediction keys

  • ❏ G. Step 2 corresponds to retrieving the training and prediction keys

  • ❏ H. Step 5 corresponds to creating tags in the project

A client uses Azure Cognitive Search to provide document retrieval for Northbridge Media, and the client intends to enable server side encryption with customer managed keys stored in Azure Key Vault. What three consequences will this configuration change cause? (Choose 3)

  • ❏ A. Index storage footprint will grow

  • ❏ B. A self signed X509 certificate must be provided

  • ❏ C. Index size will shrink

  • ❏ D. Search response latency will increase

  • ❏ E. Query throughput will improve

  • ❏ F. Azure Key Vault must be used

True or False: The Vision Face Detector returns details about faces found in a photo, but it is not designed to identify or match a specific person.

  • ❏ A. False

  • ❏ B. True

Azure AI-102 Practice Exam Answers

Nimbus Speech Systems provides a batch transcription REST interface to process large audio collections and return transcripts, and you are evaluating statements about how its API behaves. Which of the following statements about the voicebatch/v4_0alpha/transcripts endpoint are correct? (Choose 4)

  • ✓ A. Use the voicebatch/v4_0alpha/transcripts endpoint with the PATCH method to modify editable details of an existing transcript

  • ✓ B. Use the voicebatch/v4_0alpha/transcripts endpoint with the GET method to retrieve the result files and status for a transcript identified by an ID

  • ✓ C. Call the voicebatch/v4_0alpha/transcripts endpoint with the POST method to submit a new bulk transcription job

  • ✓ D. Send a DELETE request to the voicebatch/v4_0alpha/transcripts endpoint to remove a specified transcript

The correct answers are Use the voicebatch/v4_0alpha/transcripts endpoint with the PATCH method to modify editable details of an existing transcript, Use the voicebatch/v4_0alpha/transcripts endpoint with the GET method to retrieve the result files and status for a transcript identified by an ID, Call the voicebatch/v4_0alpha/transcripts endpoint with the POST method to submit a new bulk transcription job, and Send a DELETE request to the voicebatch/v4_0alpha/transcripts endpoint to remove a specified transcript.

Call the voicebatch/v4_0alpha/transcripts endpoint with the POST method to submit a new bulk transcription job is correct because POST is the standard REST operation to create a new resource and a batch transcription API accepts job submissions that reference audio files and processing options.

Use the voicebatch/v4_0alpha/transcripts endpoint with the GET method to retrieve the result files and status for a transcript identified by an ID is correct because GET returns the current representation of a resource and the transcripts endpoint provides job status and links to output when queried with the transcript identifier.

Use the voicebatch/v4_0alpha/transcripts endpoint with the PATCH method to modify editable details of an existing transcript is correct because PATCH is intended for partial updates so the service can accept changes to metadata or processing parameters without requiring a full resubmission.

Send a DELETE request to the voicebatch/v4_0alpha/transcripts endpoint to remove a specified transcript is correct because DELETE is the standard REST verb for removing a resource and the endpoint supports deleting transcripts by id.

Remember REST method semantics and map POST to create, GET to read, PATCH to partial update, and DELETE to remove when you evaluate API endpoint behaviors.
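To make the verb mapping concrete, here is a minimal C# sketch that issues all four calls with System.Net.Http. The base URL and the transcript ID abc123 are placeholders invented for this example, since the voicebatch endpoint in the question is fictional.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TranscriptsApiDemo
{
    static async Task Main()
    {
        // Placeholder base address for the fictional batch transcription service.
        using var http = new HttpClient { BaseAddress = new Uri("https://speech.example.com/") };

        // POST creates a new bulk transcription job.
        var created = await http.PostAsync("voicebatch/v4_0alpha/transcripts",
            new StringContent("{\"displayName\":\"nightly batch\"}", Encoding.UTF8, "application/json"));

        // GET retrieves status and result file links for a transcript by ID.
        var fetched = await http.GetAsync("voicebatch/v4_0alpha/transcripts/abc123");

        // PATCH sends a partial update to editable transcript details.
        var patch = new HttpRequestMessage(HttpMethod.Patch, "voicebatch/v4_0alpha/transcripts/abc123")
        {
            Content = new StringContent("{\"displayName\":\"renamed\"}", Encoding.UTF8, "application/json")
        };
        var updated = await http.SendAsync(patch);

        // DELETE removes the transcript.
        var deleted = await http.DeleteAsync("voicebatch/v4_0alpha/transcripts/abc123");

        Console.WriteLine($"{created.StatusCode} {fetched.StatusCode} {updated.StatusCode} {deleted.StatusCode}");
    }
}
```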

Your team at AzureVista trained a custom image classification model with the vision service and you need to fetch the model accuracy and related metrics through the REST API. Which API method should you call?

  • ✓ C. GetIterationPerformance

The correct answer is GetIterationPerformance.

GetIterationPerformance is the Training API method that returns evaluation metrics for a specific model iteration. It provides iteration accuracy and related statistics such as precision, recall and average precision so you can assess how well that iteration performs on your images.

ExportIteration is used to export a trained iteration for deployment and it does not return evaluation metrics.

GetImagePerformances is not the operation to retrieve overall iteration accuracy. That call, where available, would focus on per image assessment details and not the aggregate metrics requested here.

GetProject returns project metadata such as name and settings and it does not include iteration performance or accuracy metrics.

When a question asks for model accuracy or metrics look for training API methods that include the words iteration or performance in their name. Check the operation response for fields like precision and recall to confirm it returns the metrics you need.
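As a quick illustration, this is roughly what the call looks like with the .NET Custom Vision training SDK. The endpoint, key, and GUIDs are placeholders you would replace with your own values.

```csharp
using System;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;

class IterationMetrics
{
    static void Main()
    {
        // Placeholder endpoint and training key.
        var client = new CustomVisionTrainingClient(new ApiKeyServiceClientCredentials("<training-key>"))
        {
            Endpoint = "https://<resource-name>.cognitiveservices.azure.com/"
        };

        Guid projectId = Guid.Parse("<project-id>");     // placeholder
        Guid iterationId = Guid.Parse("<iteration-id>"); // placeholder

        // Returns evaluation metrics for one trained iteration.
        var performance = client.GetIterationPerformance(projectId, iterationId);
        Console.WriteLine($"Precision {performance.Precision} Recall {performance.Recall} AP {performance.AveragePrecision}");
    }
}
```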

Your organization maintains an Azure subscription that hosts an Azure Cognitive Search service named SearchProd1, and you develop a custom skill in SearchProd1 to perform language detection and sentiment scoring for documents. The enrichment pipeline consists of these steps. 1 Merge the enriched output into the document. 2 Run the skillset with the custom skill. 3 Retrieve documents from the data source. 4 Parse and extract the content. 5 Write the document to the index. When documents move through the enrichment pipeline, which sequence do they follow to be indexed by SearchProd1?

  • ✓ B. 3 then 4 then 2 then 1 then 5

The correct option is 3 then 4 then 2 then 1 then 5.

This sequence corresponds to the normal Azure Cognitive Search enrichment pipeline where the indexer retrieves documents from the data source first and then the content is extracted and parsed. After parsing the skillset runs and your custom skill performs language detection and sentiment scoring to produce enriched fields. The enrichment outputs are merged back into the document and then the enriched documents are written into the index for querying.

5 then 3 then 2 then 4 then 1 is incorrect because it places the final indexing step at the start. Indexing cannot occur before document extraction and enrichment have produced the fields that need to be indexed.

1 then 4 then 2 then 5 then 3 is incorrect because the steps are out of order and would represent running later pipeline actions before the necessary parsing and enrichment stages have completed.

4 then 2 then 3 then 1 then 5 is incorrect because it implies the skill execution or other intermediate steps happen before the document has been properly extracted and prepared for enrichment, so the enrichment would not have the correct inputs.

Read the question and map each numbered step to the conceptual pipeline stages. Focus on the logical flow of data from the data source to the index and remember that extraction and skillset enrichment must occur before the final indexing step. Use the process “retrieve, extract, enrich, merge, index” to guide your answer.

Maya Chen recently joined Solstice Analytics and she is creating an application that uses Azure AI Vision to detect whether people appear in a live camera stream. Which Azure AI Vision feature should she use?

  • ✓ C. Spatial Analysis

The correct option is Spatial Analysis.

Spatial Analysis in Azure AI Vision is built to analyze live camera feeds and to detect people while providing spatial metadata such as bounding boxes, counts, positions and movement. It delivers real time scene level insights like occupancy and direction which make it suitable for monitoring people in a live stream.

Spatial Analysis focuses on contextual and positional data across frames and cameras rather than on extracting text or generating general image labels, so it is the right feature when the requirement is to detect whether people appear in a live camera stream.

Optical Character Recognition is incorrect because it is used to extract text from images and scanned documents and it does not provide people detection or spatial information for live camera streams.

Face Detection is incorrect in this context because it targets faces and facial landmarks and is provided by a different capability that emphasizes identifying faces rather than delivering broad spatial analytics and occupancy metrics across a live scene.

Image Analysis is incorrect because it generates labels, descriptions and general attributes for static images and it is not tailored for continuous people tracking or real time spatial metrics in a live camera feed.

When a question mentions live camera feeds and people detection look for features that emphasize spatial or real time analytics and pay attention to words like occupancy and bounding box.

Your team at Nimbus Analytics is switching from Microsoft managed encryption keys to customer managed keys for an Azure Cognitive Services deployment to gain more control over key lifecycle and access. Which of the following statements about configuring customer managed keys for the service are correct? (Choose 3)

  • ✓ A. You must supply the Key Identifier URI from Key Vault as the key reference in the Azure Cognitive Services key URI

  • ✓ C. To use customer managed keys you must create or import a key inside the Key Vault

  • ✓ E. Microsoft recommends storing customer managed keys in Azure Key Vault

The correct options are You must supply the Key Identifier URI from Key Vault as the key reference in the Azure Cognitive Services key URI, To use customer managed keys you must create or import a key inside the Key Vault, and Microsoft recommends storing customer managed keys in Azure Key Vault.

You must supply the Key Identifier URI from Key Vault as the key reference in the Azure Cognitive Services key URI is correct because Azure Cognitive Services needs the Key Vault key identifier so it can reference the exact key to use for encryption and decryption. When you configure customer managed keys you provide the key URI so the service can locate the key and the service principal or managed identity must be granted permissions to use that key.

To use customer managed keys you must create or import a key inside the Key Vault is correct because customer managed keys are actual keys that live in Key Vault and you either generate a new key in Key Vault or import an existing key into it. The key must exist with the proper key operations and access permissions before Cognitive Services can use it.

Microsoft recommends storing customer managed keys in Azure Key Vault is correct because Key Vault is the managed service designed for key storage and lifecycle management and Microsoft documentation and guidance point to Key Vault as the supported and recommended location for customer managed keys. Using Key Vault also integrates with Azure identities and logging for operational control.

You need to disable both Key Vault soft delete and purge protection to recover encrypted data is incorrect because soft delete and purge protection are safeguards that protect keys from accidental or malicious deletion. Disabling those protections is not required to recover data and is generally not recommended when protecting keys used for encryption.

Azure Managed HSM is required to store customer managed keys for Cognitive Services is incorrect because Azure Key Vault supports storing keys for customer managed keys and Managed HSM is an optional offering. Managed HSM provides a hardware-isolated, single-tenant option for keys but it is not a prerequisite for Cognitive Services to use customer managed keys.

When answering questions about customer managed keys focus on the required configuration steps such as supplying the Key Identifier URI and creating or importing the key, and remember that Azure Key Vault is the recommended store and protections like soft delete are normally enabled rather than disabled.
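For reference, the Key Identifier URI that you supply follows the standard Key Vault format, where the vault name, key name, and key version are placeholders:

```csharp
// Key Identifier URI format assigned by Key Vault to each key version (all segments are placeholders).
string keyIdentifier = "https://<vault-name>.vault.azure.net/keys/<key-name>/<key-version>";
```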

What approach minimizes manual labeling when retraining an Azure Custom Vision classifier using 1,000 unlabeled photos?

  • ✓ B. Upload all images to Custom Vision and request suggested tags then review and confirm those suggestions

Upload all images to Custom Vision and request suggested tags then review and confirm those suggestions is correct.

Custom Vision can analyze uploaded images and produce suggested tags based on learned visual patterns, and confirming those suggestions is much faster than labeling each photo by hand. Using the suggested tag workflow lets you create a labeled dataset quickly and then retrain the classifier with minimal manual effort.

Use Azure Machine Learning data labeling then import labeled data into Custom Vision is not the best choice because it requires setting up a separate labeling pipeline and moving data between services, which adds complexity and extra manual steps. It can work for large coordinated labeling projects but it does not minimize manual work in this scenario.

Upload all images and tag each image manually before training is incorrect because tagging every one of the 1,000 images by hand defeats the purpose of minimizing manual labeling. That approach guarantees label accuracy but it is the most time consuming option.

When a question asks how to reduce manual labeling prefer answers that use built in suggestion or auto tagging features so a model proposes labels and a human only needs to confirm or correct them.

The Aegis Collective feared that escalating geopolitical tensions would cause a large scale disaster and they built an underground refuge for their affluent patrons under the leadership of Gideon Vale. Gideon is exploring text analytics for Aegis so the organization can extract insights from large volumes of written data. Which capabilities does text analytics provide? (Choose 4)

  • ✓ B. Sentiment analysis

  • ✓ D. Language detection

  • ✓ F. Entity recognition

  • ✓ G. Key phrase extraction

The correct options are Sentiment analysis, Language detection, Entity recognition, and Key phrase extraction.

These four features are core text analytics capabilities. Language detection automatically identifies the language of a text sample, which helps route data to language specific processing. Entity recognition locates and classifies named entities such as people, places, organizations, dates, and other domain specific items. Sentiment analysis evaluates text to determine positive, neutral, or negative sentiment at the document or sentence level. Key phrase extraction pulls out the most important words and phrases to summarize content and highlight main topics.

Translator text is not a text analytics output. It is a separate translation service that converts text from one language to another rather than extracting insights such as sentiment or entities.

QnA Maker is a question answering and knowledge base service and it is not a core text analytics capability. The QnA Maker offering has been replaced by newer question answering features in the Language service which is why it is less likely to appear as a text analytics answer on newer exams.

Authoring refers to tools for building or managing content and knowledge bases and it is not an analytics operation that extracts entities key phrases language or sentiment from text.

When deciding which options belong to text analytics, focus on features that analyze or extract information from text such as sentiment, entities, key phrases, and language detection. Use the wording of the option to spot analysis and extraction terms, which are strong clues.
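A minimal sketch of the four capabilities using the Azure.AI.TextAnalytics .NET client, with a placeholder endpoint and key for a Language resource:

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

class TextInsights
{
    static void Main()
    {
        // Placeholder endpoint and key.
        var client = new TextAnalyticsClient(
            new Uri("https://<resource-name>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<key>"));

        string text = "The refuge tour was excellent and the staff were helpful.";

        DetectedLanguage language = client.DetectLanguage(text);               // language detection
        DocumentSentiment sentiment = client.AnalyzeSentiment(text);           // sentiment analysis
        KeyPhraseCollection phrases = client.ExtractKeyPhrases(text);          // key phrase extraction
        CategorizedEntityCollection entities = client.RecognizeEntities(text); // entity recognition

        Console.WriteLine($"{language.Name} | {sentiment.Sentiment} | {phrases.Count} phrases | {entities.Count} entities");
    }
}
```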

Scenario: Priya Anand joined Aurora Security Agency and plans to build a tool that will transcribe a large backlog of audio files by using the Azure AI Speech batch transcription feature. Priya intends to save the transcription outputs to an Azure Blob container by passing a destinationContainerUrl in the batch transcription request and she supplies an ad hoc SAS URI for that destination. Why is Priya using an ad hoc SAS URI in this setup?

  • ✓ C. The Speech batch output requires an ad hoc SAS because access policy based SAS is not supported

The Speech batch output requires an ad hoc SAS because access policy based SAS is not supported is correct.

An ad hoc SAS is required because the Speech batch API expects a fully signed SAS URI that contains the signature and permissions directly in the URI. The service does not accept a SAS token that only references a stored access policy on the container, so you must provide an ad hoc SAS that grants the Speech service temporary write access for the job.

The storage account is configured to allow only trusted Azure services so SAS usage is blocked is incorrect because a trusted services setting affects network access rules and does not prevent the use of SAS tokens by services that are allowed to reach the storage account. This setting is not the reason Speech batch requires an ad hoc SAS.

The destination container must permit external network access so the Speech service can write the results is incorrect because the Speech service writes to the container using the provided SAS URI over HTTPS and the container does not need to be publicly accessible. The SAS grants the needed access without exposing the container to external anonymous access.

The Speech service must use the storage account key to authenticate to the container is incorrect because the Speech service does not require the storage account key for batch output. Instead you supply a SAS URI so the service can use temporary scoped permissions rather than the account key.

When a service asks for a destinationContainerUrl provide a full ad hoc SAS URI because many Azure services do not accept SAS tokens that reference a stored access policy.
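A sketch of generating an ad hoc container SAS with the Azure.Storage.Blobs client. The connection string and container name are placeholders, and the client must be built with shared key credentials for GenerateSasUri to sign the token:

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class DestinationSas
{
    static void Main()
    {
        // Placeholder connection string and container name.
        var container = new BlobContainerClient("<storage-connection-string>", "transcription-results");

        // Ad hoc SAS: the permissions and expiry are encoded in the URI itself
        // rather than referenced from a stored access policy.
        Uri destinationContainerUrl = container.GenerateSasUri(
            BlobContainerSasPermissions.Write | BlobContainerSasPermissions.List,
            DateTimeOffset.UtcNow.AddHours(12));

        Console.WriteLine(destinationContainerUrl); // pass this value as destinationContainerUrl
    }
}
```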

Your team is building a customer support chatbot that uses Azure Cognitive Service for Language question answering. You upload a PDF named Catalogue.pdf that contains a product catalogue and pricing and you train the model. During validation the chatbot correctly answers the user question “What is the price of Widget Z?” but it does not respond to the user question “How much does Widget Z cost?” You plan to use Language Studio to create an entity called price and then retrain and republish the model. Will that change ensure the chatbot responds correctly to both phrasings?

  • ✓ B. No

No is correct.

Creating an entity called price in Language Studio and then retraining and republishing will not by itself make the question answering system match different phrasings of the same question that are found in your PDF. The question answering capability uses the knowledge base content and matching or semantic ranking to map user queries to document text or QnA pairs. Defining an entity schema is for extracting structured values or for use in intent and entity driven dialogs and it does not add alternative question phrasings to the knowledge base.

To ensure the chatbot answers both “What is the price of Widget Z?” and “How much does Widget Z cost?” you should add alternative phrasings or explicit QnA pairs to the knowledge base or use semantic ranking and paraphrase handling features when indexing the document. Another option is to use an NLU layer that maps paraphrases to the same intent and then queries the knowledge base with a normalized question.

Yes is incorrect because simply creating a price entity will not teach the retrieval model new surface forms or synonyms and it will not modify how the document based question answering matches user text to the content of Catalogue.pdf.

When you see questions about whether a change will fix QA mismatches think about whether the change affects the knowledge base content or only the NLU layer. Adding entities helps extraction but adding alternate phrasings or QnA pairs changes how the QA system matches user questions.

HarborTech Manufacturing was started by Maya and Jonas Lee and the company makes lubricants solvents and farm supplies across North America. They elevated their son Evan to the IT director role and his team will build a system to transcribe large batches of audio using the Azure AI Speech batch transcription capability. Evan wants a storage solution for the audio files that demands minimal development effort. Which storage option should Evan recommend?

  • ✓ C. Azure Blob Storage

The correct option is Azure Blob Storage.

Azure Blob Storage is purpose built for large amounts of unstructured data such as audio files and it integrates directly with the Azure AI Speech batch transcription workflow so you can supply containers or SAS URLs for transcription jobs. It scales easily and offers hot and cool tiers to control cost while exposing simple REST APIs and SDKs which minimizes development effort.

Using Azure Blob Storage avoids running file servers or shoehorning binary data into a database and it keeps the architecture simple because the Speech batch service is designed to read audio from blob containers. That combination reduces custom coding and operational work for Evan and his team.

Azure Cosmos DB is a globally distributed NoSQL database and it is not intended for storing large binary audio files. Putting blobs into Cosmos DB would be expensive and would add unnecessary complexity compared with object storage.

Azure Files provides managed SMB or NFS file shares and it is useful when you need file system semantics. It often requires mounting or extra configuration and it does not offer the same native, minimal effort integration point for Azure Speech batch transcription as blob storage, so it is not the best fit for this scenario.

Azure SQL Database is a relational database designed for structured data and transactional workloads. Storing large audio files in a SQL database causes database bloat and poor performance and it is less cost effective and more work to develop against than using object storage.

When you need to store large media files for other Azure services prefer Blob Storage for its direct integration and simple SDK or REST access unless the scenario explicitly requires file share semantics or relational indexing.
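The minimal development effort claim is easy to see in code. A sketch of the entire upload path with Azure.Storage.Blobs, using placeholder names:

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

class AudioUpload
{
    static async Task Main()
    {
        // Placeholder connection string and container name.
        var container = new BlobContainerClient("<storage-connection-string>", "webinar-audio");
        await container.CreateIfNotExistsAsync();

        // One call per file is all the batch transcription input requires.
        await using FileStream audio = File.OpenRead("meeting-001.wav");
        await container.UploadBlobAsync("meeting-001.wav", audio);
    }
}
```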

A startup named Nova Imaging is developing an image classifier with Azure Custom Vision and they are using the Custom Vision REST API to upload and label training images. Please share the code fragment so the correct client class to call PublishIteration for the current training iteration can be determined?

  • ✓ C. CustomVisionTrainingClient

CustomVisionTrainingClient is correct.

The training client exposes the PublishIteration operation which registers a trained iteration under a published name and associates it with a prediction resource so the model can be called by the prediction endpoint.

ApiKeyServiceClientCredentials is incorrect because it is a credentials helper used to authenticate API calls and it does not implement training operations such as PublishIteration.

VertexAIClient is incorrect because it is part of Google Cloud Vertex AI and not related to Azure Custom Vision and it will not contain Azure Custom Vision operations.

CustomVisionPredictionClient is incorrect because that client is intended for running inference against published models and it does not provide PublishIteration which is a training management call.

When a question mentions lifecycle methods like PublishIteration choose the training client rather than the prediction client and confirm the client belongs to the correct cloud provider.
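A sketch of the publish call on the training client. The key, endpoint, GUIDs, and prediction resource ID are placeholders, and the final argument is the full ARM resource ID of the prediction resource:

```csharp
using System;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;

class PublishModel
{
    static void Main()
    {
        // Placeholder endpoint and training key.
        var trainingClient = new CustomVisionTrainingClient(new ApiKeyServiceClientCredentials("<training-key>"))
        {
            Endpoint = "https://<resource-name>.cognitiveservices.azure.com/"
        };

        // Publish the trained iteration under a name the prediction endpoint can reference.
        trainingClient.PublishIteration(
            Guid.Parse("<project-id>"),
            Guid.Parse("<iteration-id>"),
            "classifyModel",                 // publish name used at prediction time
            "<prediction-resource-arm-id>"); // placeholder ARM ID of the prediction resource
    }
}
```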

An organization needs a simple deployment to analyze sales invoices and expects up to six simultaneous requests. Which resource should be created?

  • ✓ B. Document Intelligence standard tier

The correct option is Document Intelligence standard tier.

The Document Intelligence standard tier is designed for production workloads and offers higher throughput and concurrency which makes it appropriate for handling up to six simultaneous invoice analysis requests. It provides paid capacity and service limits that support multiple parallel calls and predictable performance for real world usage.

The Document Intelligence free tier is intended for evaluation and low volume testing and it imposes strict transaction and concurrency limits that make it unsuitable for six concurrent requests.

The Azure Cognitive Services resource option is too general for this scenario because you specifically need the Document Intelligence service at the appropriate tier to guarantee the required throughput and quotas rather than a generic cognitive services resource.

For questions about handling multiple simultaneous requests choose the standard or paid tier which is aimed at production workloads. The free tier is primarily for testing and has tight limits.

Your Azure subscription contains an Azure OpenAI instance where the GPT four model is deployed. You must allow teams to upload documents that will serve as grounding material for the model. Which two types of Azure resources should you provision? (Choose 2)

  • ✓ B. Azure AI Search

  • ✓ D. Azure Blob Storage

The correct options are Azure AI Search and Azure Blob Storage.

Teams upload documents to Azure Blob Storage where files are stored durably and accessed by processing pipelines. Blob Storage is designed for large amounts of unstructured data and it integrates with indexing and enrichment services.

Azure AI Search provides indexing, enrichment, and semantic or vector retrieval capabilities so the GPT-4 model can be grounded with relevant passages. It can ingest content from blob storage and expose search and vector queries that are useful for retrieval augmented generation scenarios.

Together Azure Blob Storage and Azure AI Search provide the storage and the searchable vector index needed to supply grounding material to an Azure OpenAI GPT-4 deployment during inference.

Azure SQL is a relational database service and it is not intended for storing large collections of unstructured documents or for providing semantic or vector search capabilities.

Azure AI Bot Service is focused on building and hosting conversational bots and channels and it does not provide the storage or indexing features needed to ground a model with document content.

Azure AI Document Intelligence is useful for extracting structured data and text from documents and it can be part of a preprocessing step. It is not a replacement for a storage system and a search/index service that supplies passages for model grounding.

When the question asks about grounding a model think separately about where files are stored and which service provides search or retrieval for the model to reference.

A travel technology startup named HarborTravel is creating a conversational assistant that must consult an FAQ knowledge base to answer customer inquiries. Which dialog class should the developer use?

  • ✓ D. QnAMakerDialog

The correct option is QnAMakerDialog.

The QnAMakerDialog is the Bot Framework dialog class that is specifically built to query a question and answer knowledge base and return FAQ style answers to user queries. It wraps the integration with a QnA knowledge base so the conversational assistant can match user utterances to the best KB answer and handle follow up prompts and metadata when needed.

Be aware that the QnA Maker capability has been succeeded by Azure Cognitive Services for Language question answering in newer stacks. Newer SDKs and exams may reference the Language service Question Answering features or updated dialog wrappers rather than the legacy QnAMakerDialog.

SkillDialog is intended for invoking external skill bots and delegating parts of a conversation and not for looking up answers from an FAQ knowledge base.

Dialogflow CX is a Google Cloud conversational platform and not a Bot Framework dialog class, so it is not the correct choice when the question asks which dialog class to use in this context.

AdaptiveDialog is a general purpose Bot Framework dialog for building complex, state driven conversational flows and it does not provide the built in knowledge base lookup and answer matching that QnA Maker provides.

When a question mentions an FAQ or knowledge base think of the service that maps user questions to KB answers. Watch for deprecated names and prefer the latest question answering service if the exam wording hints at newer SDKs.
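A hedged sketch of wiring the dialog into a Bot Framework SDK v4 component, assuming the commonly documented constructor order of knowledge base ID, endpoint key, and host name. All three values are placeholders:

```csharp
using Microsoft.Bot.Builder.AI.QnA.Dialogs;
using Microsoft.Bot.Builder.Dialogs;

public class FaqDialog : ComponentDialog
{
    public FaqDialog() : base(nameof(FaqDialog))
    {
        // Placeholder knowledge base settings.
        AddDialog(new QnAMakerDialog(
            "<knowledge-base-id>",
            "<endpoint-key>",
            "https://<resource-name>.azurewebsites.net/qnamaker"));

        // Route incoming turns straight to the knowledge base lookup.
        InitialDialogId = nameof(QnAMakerDialog);
    }
}
```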

A startup named FrameLoop is building a photo sharing platform where users upload images. The product team needs to automatically block explicit or offensive photos and they want to minimize development work while relying on Azure managed services. Which services should they choose? (Choose 2)

  • ✓ B. Azure AI Content Safety

  • ✓ C. Azure AI Vision

The correct answer is Azure AI Content Safety and Azure AI Vision.

Azure AI Vision provides built in image analysis and explicit content detection so it can automatically flag adult or offensive photos with minimal custom development. It returns moderation labels and confidence scores that let the application block or route images without having to build and maintain your own models.

Azure AI Content Safety supplies managed moderation policies and safety scoring that work across images and text and it complements Vision by centralizing moderation rules and reducing the amount of custom code and ongoing model maintenance required.

Azure AI Document Intelligence is focused on extracting structured data from documents and scanned pages and it is not designed for image moderation, so it is not appropriate for blocking explicit photos.

Azure AI Custom Vision can be used to build custom image classifiers but it requires training data and ongoing model management, so it does not minimize development work as effectively as the managed moderation services.

Azure Content Moderator is a legacy service and Microsoft is steering customers toward the newer Content Safety and Vision offerings. It is therefore deprecated for new deployments and is less likely to be the correct choice on current exams.

When a question emphasizes minimal development and managed services look for built in moderation tools and safety products. Pay attention to the words managed and moderation in the prompt.
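A hedged sketch of the image check with the Azure.AI.ContentSafety .NET client, assuming the 1.0 client shapes. The endpoint, key, and file name are placeholders:

```csharp
using System;
using System.IO;
using Azure;
using Azure.AI.ContentSafety;

class ImageModeration
{
    static void Main()
    {
        // Placeholder endpoint and key for a Content Safety resource.
        var client = new ContentSafetyClient(
            new Uri("https://<resource-name>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<key>"));

        // Placeholder uploaded photo.
        var image = new ContentSafetyImageData(BinaryData.FromBytes(File.ReadAllBytes("upload.jpg")));

        AnalyzeImageResult result = client.AnalyzeImage(new AnalyzeImageOptions(image));

        // Each harm category carries a severity score the app can threshold on to block a photo.
        foreach (var category in result.CategoriesAnalysis)
        {
            Console.WriteLine($"{category.Category}: severity {category.Severity}");
        }
    }
}
```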

A regional learning portal wants to make its articles and textbooks easier to read for students with dyslexia and other processing differences. Which Azure AI service should the team add to enable an adjustable and reader friendly experience?

  • ✓ C. Immersive Reader

The correct answer is Immersive Reader.

Immersive Reader offers adjustable reading tools such as text spacing, line focus, syllable highlighting, and read aloud, which help students with dyslexia and other processing differences read more comfortably. These tools are designed to change the visual presentation and provide audio support so learners can tailor the experience to their needs.

Immersive Reader is available as part of Azure Cognitive Services and can be embedded in web and mobile applications with simple SDKs and REST APIs so the learning portal can add the reader friendly features without rebuilding its content pipeline.

Text Analytics is focused on extracting sentiment, key phrases, entities, and language detection, and it does not provide interactive reading or formatting tools for learners.

Custom Vision is used for image classification and object detection and it does not offer features to improve text readability or provide reading aids for dyslexia.

Speech service provides speech to text and text to speech capabilities and it can read content aloud, but it does not provide the on page adjustable reading interface features like text spacing, line focus, and syllable highlighting that Immersive Reader provides.

Focus on whether the service explicitly provides adjustable reading tools such as line focus, text spacing, and built in read aloud when the question mentions accessibility for dyslexia. Immersive Reader is the Azure feature built for that purpose.

Aurora Components Ltd was the firm inherited by Dame Eleanor Voss from her late partner and she is debugging the AI Search indexing workflow you configured and she sees a single custom skill that should call Form Recognizer but the skill never receives requests so which stage of the indexing pipeline might be responsible?

  • ✓ C. File parsing and content extraction phase

The correct option is File parsing and content extraction phase.

The File parsing and content extraction phase is where the indexer extracts text and metadata and where AI enrichment runs, so custom skills that call Form Recognizer are invoked during or immediately after parsing. If the parser fails, the file type is unsupported, or extraction yields no content there will be nothing to send to the custom skill, and the skill will not receive any requests.

Output field mapping stage is incorrect because field mapping occurs after enrichment and after custom skills have run. If a skill never received requests the cause is upstream of mapping rather than in the mapping stage.

formSasToken authentication is incorrect because this describes an authentication mechanism for storage access rather than a pipeline stage. A bad SAS token would prevent the indexer from reading documents at all and would produce access errors rather than a silent absence of calls to the skill.

Push to index final write stage is incorrect because the final write to the index happens after enrichment and after custom skills execute. If the skill never receives requests the failure occurred before the final write stage.

When a custom skill is not invoked start by checking the AI enrichment and parsing steps and verify the skillset mappings and supported file types before inspecting authentication or final write behavior.

Which two translation modes are supported by the Speech Translation endpoint?

  • ✓ B. Speech to Speech and Speech to Text

Speech to Speech and Speech to Text is correct.

The Speech Translation endpoint supports Speech to Speech which accepts spoken input and returns translated spoken output, and it also supports Speech to Text which accepts spoken input and returns translated text transcripts.

The service is designed to translate spoken language and to produce either translated audio or translated text depending on the chosen mode, and both Speech to Speech and Speech to Text reflect those two output options.

Speech to Text and Text to Speech is incorrect because that pairing mixes a translation output with a synthesis capability and it does not describe the two translation modes offered by the Speech Translation endpoint.

Speech to Intent and Intent to Speech is incorrect because intent detection is a separate conversational or natural language feature and it is not one of the translation modes supported by the Speech Translation endpoint.

When a question asks about endpoint modes focus on the input and the output types and match modes that describe translating spoken input into either spoken output or text output. Remember that translation modes refer to what you get back from the service.
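A sketch of both modes with the Speech SDK (Microsoft.CognitiveServices.Speech). The key and region are placeholders, and setting VoiceName on the config requests synthesized audio, which is what enables the speech to speech mode:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Translation;

class TranslateDemo
{
    static async Task Main()
    {
        // Placeholder key and region.
        var config = SpeechTranslationConfig.FromSubscription("<key>", "<region>");
        config.SpeechRecognitionLanguage = "en-US";
        config.AddTargetLanguage("de");
        config.VoiceName = "de-DE-KatjaNeural"; // requesting a voice enables speech to speech

        using var recognizer = new TranslationRecognizer(config);

        // Speech to speech: translated audio arrives through this event.
        recognizer.Synthesizing += (s, e) =>
            Console.WriteLine($"Received {e.Result.GetAudio().Length} bytes of translated audio");

        // Speech to text: spoken English in, German text out.
        TranslationRecognitionResult result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Translations["de"]);
    }
}
```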

Within a multi-turn dialog session, which methods should you invoke to start a waterfall style dialog by placing a new instance of that dialog on top of the dialog stack? (Choose 2)

  • ✓ B. PromptAsync

  • ✓ D. BeginDialogAsync

The correct answers are BeginDialogAsync and PromptAsync.

BeginDialogAsync pushes a new dialog instance onto the dialog stack and begins that dialog immediately. You use it to start a waterfall or any dialog so that the new dialog runs on top of the current conversation flow and receives the next turn.

PromptAsync starts a prompt dialog by placing the prompt on top of the stack and awaiting the user response. In waterfall steps you commonly call a prompt so the prompt becomes the active dialog and then returns the collected result to the calling step.

CancelAllDialogs is incorrect because it clears or cancels dialogs on the stack instead of starting a new dialog on top. It is used to end or reset the dialog stack.

ReplaceDialog is incorrect because it replaces the current dialog with a new one rather than pushing a new instance onto the stack. Replace is used when you want to swap or restart the active dialog instead of stacking a new dialog above it.

When a question asks which method starts a dialog on top of the stack ask whether the method pushes a new dialog or whether it modifies or clears the stack. BeginDialogAsync and PromptAsync push and start, while ReplaceDialog and CancelAllDialogs change or remove existing stack entries.
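A sketch of two waterfall steps showing each call. The prompt ID and child dialog ID are placeholders that would be registered elsewhere in the component with AddDialog:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

public static class BookingSteps
{
    // Pushes a prompt dialog onto the stack and waits for the user's reply.
    public static async Task<DialogTurnResult> AskNameStepAsync(
        WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        return await stepContext.PromptAsync(
            "namePrompt", // placeholder ID of a TextPrompt registered with AddDialog
            new PromptOptions { Prompt = MessageFactory.Text("What is your name?") },
            cancellationToken);
    }

    // Pushes a new child dialog instance onto the stack and starts it.
    public static async Task<DialogTurnResult> StartDetailsStepAsync(
        WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        return await stepContext.BeginDialogAsync(
            "detailsDialog", // placeholder ID of a dialog registered with AddDialog
            null,
            cancellationToken);
    }
}
```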

A regional online retailer is evaluating Azure Search pricing tiers to choose capacity and security features for their product index. Which statements about the feature limits and capabilities of the different pricing tiers are correct? (Choose 3)

  • ✓ B. The free tier does not include AI enrichment capabilities

  • ✓ C. The selected pricing tier sets the ceiling for the number of provisioned query units

  • ✓ E. Customer managed keys are unavailable on the free plan

The correct options are The free tier does not include AI enrichment capabilities, The selected pricing tier sets the ceiling for the number of provisioned query units and Customer managed keys are unavailable on the free plan.

The free tier does not include AI enrichment capabilities is correct because the free plan is intended for development and basic testing and it does not provide advanced AI enrichment features such as cognitive skillsets or built in cognitive services integration that are available in paid tiers.

The selected pricing tier sets the ceiling for the number of provisioned query units is correct because the tier determines the maximum capacity in terms of replicas, partitions, and related query capacity and you cannot provision more query capacity than the tier allows.

Customer managed keys are unavailable on the free plan is correct because customer managed encryption keys require higher service tiers and are not offered on the free tier which uses platform managed encryption only.

The storage optimized tier always experiences higher query latency than the standard tier is wrong because storage optimized tiers trade off storage density and cost against some performance characteristics and they do not universally produce higher latency. Actual query latency depends on the specific configuration of replicas, partitions, and workload rather than the tier name alone.

The free tier supports configuring an IP firewall for access control is wrong because network level controls such as IP firewall rules are not supported on the free plan and are features of paid tiers that include enhanced security and networking options.

When answering tier and capability questions read the wording carefully and compare the features matrix in the official docs. Pay attention to limits on replicas and partitions and check which security features require paid tiers.

A small retail analytics firm called Meridian Insights must accelerate image annotation for an Azure based object detection and image tagging project because of a compressed timeline. What approach will most quickly speed up the labeling while leveraging Azure services?

  • ✓ A. Use the Smart Labeler in Custom Vision on the latest trained iteration to auto-suggest labels for faster annotation

Use the Smart Labeler in Custom Vision on the latest trained iteration to auto-suggest labels for faster annotation is correct.

The Smart Labeler workflow leverages predictions from a recently trained iteration to propose tags and bounding boxes so human reviewers can accept or refine suggestions rather than label every image from scratch. This integration inside Custom Vision directly uses the model you have already trained and so it provides the fastest, lowest friction way to accelerate annotation while staying within Azure services.

Manually tag every photo to guarantee label quality is incorrect because manually labeling each image will meet quality goals but it does not speed up the process. Manual tagging is reliable but it is the slowest option and it does not leverage automated suggestions that reduce human effort.

Attempt to apply the Smart Labeler on a brand new untrained Custom Vision project to label images immediately is incorrect because the Smart Labeler depends on predictions from a trained iteration. Without a trained model the suggestions will be poor or unavailable and you will not get the quick, accurate auto-labeling that the Smart Labeler provides.

Use Azure Machine Learning data labeling workflows or the Visual Object Tagging Tool to speed up annotations is incorrect in this context because although Azure Machine Learning labeling workflows can help organize and manage labeling tasks, they do not provide the same out of the box, iteration‑based auto-suggestion inside Custom Vision. The Visual Object Tagging Tool is an older open source tool that is not actively maintained and so it is less likely to be the fastest supported choice on newer Azure workflows or exams.

Try to train the detection model without any human labeled images by using unsupervised methods is incorrect because unsupervised methods do not produce reliable, production ready object detection labels quickly. Object detection usually requires curated labeled data and attempting to avoid human labeling entirely will not meet the project timeline or accuracy needs.

When a question asks how to speed labeling inside an Azure computer vision workflow look for built in automation that leverages a trained iteration such as Smart Labeler rather than options that require labeling everything by hand or unproven unsupervised shortcuts.

A company named ClearWave is improving an automated transcription pipeline for engineering webinars and it often mis-transcribes domain specific jargon. You need to improve the transcription accuracy for specialized terminology. You can perform the following actions. 1 Create a Custom Speech workspace. 2 Instantiate a speech recognition model. 3 Upload annotated audio corpora. 4 Train the recognition model. 5 Provision the model endpoint. 6 Create a Speaker ID model. 7 Create a Conversation Understanding model. Which five steps should you perform in sequence to improve transcription accuracy for the technical vocabulary?

  • ✓ B. 1-2-3-4-5

The correct option is 1-2-3-4-5.

You should create a Custom Speech workspace first to organize projects and data and to manage permissions. After that you instantiate a speech recognition model so you have a model resource to receive training data and configuration. Next you upload annotated audio corpora that include the domain specific jargon so the model can learn the correct pronunciations and transcriptions. Then you train the recognition model to adapt its acoustic and language components to the specialized vocabulary. Finally you provision the model endpoint so the improved model can be called by the transcription pipeline in production.

6-1-2-4-5 is incorrect because creating a Speaker ID model is not part of improving vocabulary accuracy. Speaker ID is used to recognize who is speaking and it does not teach the recognition model new terminology.

1-3-2-4-5 is incorrect because uploading corpora before instantiating the model is not the recommended sequence. The training data is typically associated with a specific model or project so you should create the model resource before submitting annotated data for that model.

3-2-1-4-5 is incorrect because the workspace should exist before you instantiate models or upload data. The workspace provides the container and permissions for models and datasets and it is the logical first step.

On the exam remember to set up the workspace first then create the model then upload annotated training data before training and finally provision the endpoint.

Your team at Northbridge Surveys is evaluating Azure Document Intelligence to automate extraction of answers from standardized questionnaires and you want to train a model so the service returns consistent JSON output for your data store. Which programming languages are provided as native Forms Recognizer SDKs?

  • ✓ C. JavaScript, Java, Python, and C#/.NET

The correct answer is JavaScript, Java, Python, and C#/.NET.

Azure Document Intelligence provides official client libraries for JavaScript, Java, Python, and C#/.NET and those SDKs include tools to train models and return structured JSON that you can store in your data store.

You can also call the service by using the REST API from other languages if needed but the native, first party SDK support is for JavaScript, Java, Python, and C#/.NET.
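
As an illustration, here is a minimal C# sketch using the Azure.AI.FormRecognizer client library, assuming placeholder endpoint, key, and document URL values. Newer releases ship the same capability under the Azure.AI.DocumentIntelligence package.

    using System;
    using Azure;
    using Azure.AI.FormRecognizer.DocumentAnalysis;

    var client = new DocumentAnalysisClient(
        new Uri("https://<your-resource>.cognitiveservices.azure.com/"),  // placeholder endpoint
        new AzureKeyCredential("<your-key>"));                            // placeholder key

    // Analyze a questionnaire with the prebuilt layout model and read the structured result.
    AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(
        WaitUntil.Completed, "prebuilt-layout", new Uri("https://example.com/questionnaire.pdf"));
    AnalyzeResult result = operation.Value;
    Console.WriteLine($"Pages analyzed: {result.Pages.Count}");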

Google Cloud Vision Vertex AI and Cloud Functions is incorrect because those are Google Cloud services and not the native SDKs for Azure Document Intelligence or Forms Recognizer.

Python and R only is incorrect because Python is supported but R is not provided as a native Forms Recognizer SDK and the service also supports other languages.

Go and Ruby is incorrect because Go and Ruby are not offered as official first party SDKs for Forms Recognizer, although you can integrate via the REST API or community libraries.

All of the above is incorrect because several of the listed options are wrong and the statement that all of them are correct is therefore false.

When you see a question about SDK support look for the official client libraries page and remember that Azure usually publishes first party SDKs for JavaScript, Java, Python, and .NET. Use the REST API for other languages.

Which Azure services can automatically detect and block sexually explicit or violent images while requiring minimal developer effort? (Choose 2)

  • ✓ B. Azure AI Content Safety

  • ✓ D. Azure AI Vision

The correct answers are Azure AI Content Safety and Azure AI Vision.

The Azure AI Content Safety service is built to detect and block sexually explicit and violent content and it provides ready made moderation models that return labels and confidence scores so developers can call the API and take action without building or training custom models.

The Azure AI Vision service includes image analysis and content moderation features that can identify explicit or violent imagery and it exposes simple REST and SDK endpoints so teams can integrate detection and blocking into applications with minimal development effort.
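
As a sketch of how little code is involved, the following C# call uses the Azure.AI.ContentSafety library with placeholder endpoint and key values and a local photo.jpg, and it prints a severity score for each harm category:

    using System;
    using System.IO;
    using Azure;
    using Azure.AI.ContentSafety;

    var client = new ContentSafetyClient(
        new Uri("https://<your-resource>.cognitiveservices.azure.com/"),  // placeholder endpoint
        new AzureKeyCredential("<your-key>"));                            // placeholder key

    // Submit the image and read back a severity score per category such as violence.
    var image = new ContentSafetyImageData(BinaryData.FromBytes(File.ReadAllBytes("photo.jpg")));
    AnalyzeImageResult result = await client.AnalyzeImageAsync(new AnalyzeImageOptions(image));
    foreach (var category in result.CategoriesAnalysis)
    {
        Console.WriteLine($"{category.Category}: severity {category.Severity}");
    }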

Azure AI Custom Vision is aimed at creating custom image classifiers where you label images and train models, so it requires more developer work and is not an out of the box moderation service.

Azure AI Document Intelligence focuses on extracting structured data and text from documents and forms and it is not designed to moderate images for sexual or violent content.

When the question emphasizes minimal developer effort pick services that provide built in moderation models and simple APIs rather than solutions that require collecting labels and training custom models.

What extra capability do neural speech models offer compared with older synthetic voices?

  • ✓ C. Ability to render natural prosody with nuanced intonation and stress

Ability to render natural prosody with nuanced intonation and stress is correct.

Neural speech models are trained end to end on large amounts of natural speech and they model timing, intonation, and stress patterns so output sounds more human and expressive. This allows them to render natural prosody and nuanced intonation in ways older synthesis techniques could not reliably reproduce.

Older synthesis approaches tended to be concatenative or parametric and they produced flatter, more robotic speech with limited control over subtle intonation and stress. Neural models overcome those limitations by learning prosodic patterns directly from data.
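
For example, switching to a neural voice with the Speech SDK in C# is a single configuration line. The key and region below are placeholders and en-US-JennyNeural is one of the generally available neural voices:

    using Microsoft.CognitiveServices.Speech;

    var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");  // placeholders
    config.SpeechSynthesisVoiceName = "en-US-JennyNeural";  // a neural voice

    // The neural model supplies intonation and stress without extra markup.
    using var synthesizer = new SpeechSynthesizer(config);
    await synthesizer.SpeakTextAsync("Neural voices learn prosody directly from recorded speech.");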

Cloud Text-to-Speech is the name of a service that can deliver neural voices but the service name is not an extra capability. The question asks about what neural speech models can do so a product name is not the correct choice.

Broader selection of female voice tones is incorrect because the improvement from neural models is about expressiveness and natural prosody across voices and styles rather than simply offering more female tone options.

Voices that are entirely gender neutral is incorrect because neural models can produce gender ambiguous voices but their defining extra capability is natural prosody and nuanced intonation and not a guarantee of complete gender neutrality.

When choosing between capabilities and product names look for phrases about speech behavior such as natural prosody or nuanced intonation and prefer those over options that name a service or state a demographic claim.

A development group at Fabrikam trained and published an Azure Custom Vision project and they want to know whether altering the published model is usually difficult and if they should build a new model for configuration changes or switch to the Computer Vision service when changes are expected?

  • ✓ B. False

The correct answer is False.

This is false because Azure Custom Vision is designed for iterative development and published models are not usually difficult to alter. You can add or remove images, refine tags, continue training, and publish a new iteration from the same project. The training API and portal support incremental updates and model export which makes configuration changes manageable without rebuilding from scratch.

If you expect frequent changes it is generally better to continue using Custom Vision rather than switch to the prebuilt Computer Vision service. Computer Vision provides general purpose, pre trained capabilities and it is not intended for frequent custom retraining or specialized labels. Custom Vision gives you the control to update labels and retrain as requirements evolve.

True is incorrect because it assumes published models are hard to change and that you must build a new model for configuration changes. Custom Vision supports iterative updates and republishing so creating an entirely new project is often unnecessary, and moving to Computer Vision would lose the benefit of a custom, trainable model.

When a question contrasts custom and prebuilt services remember that Custom Vision is for iterative, domain specific model updates while Computer Vision provides prebuilt general capabilities.

Within the context of Verdant Cloud Search, fill in the missing phrase in this sentence. After you create and populate an index you can query it to find information in the indexed documents. While you could fetch index entries by matching simple field values, most search systems use what to query an index?

  • ✓ D. Full text search semantics

Full text search semantics is the correct option.

Full text search semantics describes the way search systems analyze and match text by using tokenization, stemming, stop word handling, phrase matching, and relevance ranking. This approach lets queries match documents based on the linguistic content and relevance rather than only on exact field value equality, and that is why most search engines and index query systems use full text processing.
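
A short C# sketch with the Azure.Search.Documents library illustrates the difference, assuming a hypothetical products index and placeholder service and key values. The search text is analyzed and ranked for relevance rather than compared for exact equality:

    using System;
    using Azure;
    using Azure.Search.Documents;
    using Azure.Search.Documents.Models;

    var client = new SearchClient(
        new Uri("https://<your-service>.search.windows.net"),  // placeholder service
        "products",                                            // hypothetical index name
        new AzureKeyCredential("<query-key>"));                // placeholder key

    // The query is tokenized and stemmed and documents are scored for relevance.
    SearchResults<SearchDocument> results =
        await client.SearchAsync<SearchDocument>("waterproof hiking boots");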

Term synonym map is incorrect because a synonym map is a supporting feature that expands or rewrites queries to include equivalent terms and it does not itself define the primary matching semantics for searching indexed content.

Cloud Natural Language API is incorrect because that API analyzes the meaning of text and extracts entities and sentiment and it is not the core mechanism used by search systems to perform full text queries against an index.

Search suggester is incorrect because a suggester provides autocomplete and suggestion functionality for user input and it is not the main query semantics used to retrieve relevant documents from an index.

Relevance scoring profile is incorrect because scoring profiles tune how results are ranked and weighted but they do not replace the underlying full text matching and analysis that finds candidate documents to be scored.

When a question asks how queries find text look for terms like full text, tokenization, or stemming as these refer to the core search semantics rather than peripheral features.

A financial analytics firm must retain regulated records and securely archive machine learning experiment artifacts to meet audit requirements. Which Azure service should they use to manage compliance and documentation storage?

  • ✓ B. Compliance Manager

The correct option is Compliance Manager.

Compliance Manager is designed to help organizations assess and manage regulatory requirements and it centralizes assessment templates, control tracking, and evidence management. It lets teams record assessment results and upload customer evidence and supporting documentation which auditors can review, so it fits a workflow that needs retained regulated records and archived experiment artifacts for audit purposes.

Microsoft Service Trust Portal provides Microsoft produced compliance reports and certifications for customers to download but it is mainly a repository of Microsoft audit artifacts and does not provide an operational compliance management platform for tracking controls or customer evidence.

Microsoft Defender for Cloud focuses on cloud security posture management and threat protection for workloads and it helps detect and remediate security issues but it is not intended to be used as a compliance documentation and evidence management solution.

Azure Purview is primarily a data governance and data catalog service that offers discovery, classification, and lineage and it helps with data governance and retention policies. It is not the central tool for managing compliance assessments and storing audit evidence. Note that Azure Purview has been rebranded to Microsoft Purview which is the current product name you may see on newer materials.

When choosing between compliance related services look for the one that explicitly manages controls and evidence rather than just providing reports or security alerts.

A conversational agent being built by Aurora Systems needs a language understanding training template that maps intents to example utterances so the agent can be validated locally with the Bot Framework CLI. The CLI requires an input file in a supported format when invoking the bf luis test command. Which file extension should you provide to perform a local test of the language model?

  • ✓ C. .lu

The correct answer is .lu.

.lu files use the LU format which maps intents to example utterances so the language model can be validated locally with the Bot Framework CLI. The bf luis test command expects an LU formatted file when running a local test and the .lu file contains the intent definitions and utterances that the test harness needs.
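
For reference, an .lu file is plain text that pairs intents with example utterances. The intent names and utterances below are hypothetical:

    # BookFlight
    - book a flight to Seattle
    - I need to fly out on Friday morning

    # CancelBooking
    - cancel my reservation
    - I want to call off my trip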

.qna files are used to define QnA Maker knowledge bases or QnA content in Composer and they do not provide the intent to utterance mappings required by the LUIS test tool. QnA Maker has been migrated to the Azure Cognitive Service for Language and the older QnA Maker service is deprecated so .qna is not the correct choice for a bf luis test run.

.lg files are Language Generation templates used to author bot responses and they do not contain intent definitions or example utterances. Because they focus on response templates the CLI will not accept an .lg file for local LUIS validation.

.dialog files store dialog flow or Composer dialog schemas and they capture conversation logic rather than the intent utterance pairs that LUIS needs. The LUIS testing command requires the LU format so .dialog is not appropriate for bf luis test.

When a question mentions the bf luis test command look for the LU file format. The .lu file contains the intent to utterance mappings that the local test harness requires.

Which three file formats are the most suitable for importing textual content into an Azure OpenAI assistant for processing? (Choose 3)

  • ✓ B. JSON

  • ✓ D. MD

  • ✓ E. TXT

The correct options are JSON, MD, and TXT.

The JSON format is structured and can include fields and metadata which makes it straightforward for an assistant to parse and map context to text during ingestion.

The MD format is Markdown and it is plain text with lightweight formatting so it preserves headings lists and code blocks while remaining easy to tokenize and process.

The TXT format is raw plain text and it contains no markup overhead so the content is ingested exactly as written which is useful for general textual data.

CSV is a tabular format that is best for rows and columns and it can break freeform passages when fields contain commas or line breaks so it is not ideal for general textual content.

DOCX is a complex document format that may require conversion and can include styling and embedded objects that do not map cleanly to plain text so it is less reliable for direct ingestion.

When choosing file types for import prefer plain text or lightweight markup so the data is easy to parse and tokenization behaves predictably.

A startup called EchoWorks is transcribing a large backlog of meeting recordings with Azure Speech and it will analyze the resulting text for insights and automation. The team uses batch transcription which involves four operations. Operation Start (021) creates a transcription. Operation Fetch (022) retrieves a transcription. Operation Amend (023) updates transcription settings. Operation Remove (024) deletes a transcription. The available REST endpoints and methods are Call Alpha (B31), GET speech2text/v3.1/transcripts/{id}; Call Beta (B32), POST speech2text/v3.1/transcripts; Call Gamma (B33), DELETE speech2text/v3.1/transcripts/{id}; and Call Delta (B34), PATCH speech2text/v3.1/transcripts/{id}. Which API call corresponds to each operation?

  • ✓ C. 021 maps to B32, 022 maps to B31, 023 maps to B34, 024 maps to B33

The correct option is 021 maps to B32, 022 maps to B31, 023 maps to B34, 024 maps to B33.

Start is the operation that creates a transcription and creation is done with a POST to the transcripts endpoint so the Start operation maps to the POST call. Fetch is the operation that retrieves an existing transcription and retrieval is done with a GET to transcripts/{id} so Fetch maps to the GET call. Amend updates transcription settings and updates are performed with a PATCH to transcripts/{id} so Amend maps to the PATCH call. Remove deletes a transcription and deletion is done with a DELETE to transcripts/{id} so Remove maps to the DELETE call.
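
Here is a minimal C# sketch of the Start call. It keeps the endpoint naming used in this question, while the production Speech service exposes the equivalent resource at speechtotext/v3.1/transcriptions, and the key, region, and audio URL are placeholders:

    using System;
    using System.Net.Http;
    using System.Text;

    var http = new HttpClient();
    http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");  // placeholder key

    // Start (021): POST creates a new transcription job (Call Beta, B32).
    var body = new StringContent(
        "{\"displayName\":\"meetings\",\"locale\":\"en-US\",\"contentUrls\":[\"https://example.com/audio.wav\"]}",
        Encoding.UTF8, "application/json");
    var response = await http.PostAsync(
        "https://<your-region>.api.cognitive.microsoft.com/speech2text/v3.1/transcripts", body);

    // Fetch (022) uses GET, Amend (023) uses PATCH, and Remove (024) uses DELETE,
    // all against .../speech2text/v3.1/transcripts/{id} (Calls B31, B34, and B33).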

021 maps to B31, 022 maps to B32, 023 maps to B34, 024 maps to B33 is wrong because it assigns the Start operation to a GET call. A GET retrieves an existing resource and it cannot create a new transcription which requires POST.

021 maps to B31, 022 maps to B32, 023 maps to B33, 024 maps to B34 is wrong for the same reason because Start is again mapped to GET instead of POST. In addition this option swaps the Amend and Remove mappings so the update and delete verbs are reversed.

021 maps to B32, 022 maps to B31, 023 maps to B33, 024 maps to B34 is wrong because it swaps Amend and Remove. The Amend operation should use PATCH and the Remove operation should use DELETE so the PATCH and DELETE assignments are reversed in that choice.

When you see operation to API mapping questions remember that POST creates resources, GET reads them, PATCH updates them, and DELETE removes them. Apply that pattern to match operations to REST endpoints.

A startup named Acme Vision Labs needs to identify faces from a crowd of people and they have created PersonGroups that store PersistedFace records for each individual. The engineers want to compare the accuracy of different Azure Face recognition models by setting the recognitionModel parameter in their API calls. They will run one test using the service default recognition model and a second test using the newest available recognition model. Complete the sample call for the first run by choosing the correct recognitionModel value in the example code: string imageUrl = "<image url>"; var faces = await faceClient.Face.DetectWithUrlAsync(imageUrl, true, true, recognitionModel: "…", returnRecognitionModel: true);

  • ✓ B. recognition_v1

The correct option is recognition_v1.

You should use recognition_v1 in the sample call when you want to run the test with the service default recognition model. Passing that string in the recognitionModel parameter tells the Face service to use the default model for that deployment and using returnRecognitionModel true will confirm which model was actually applied.
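
Completed for the first run, the call from the question looks like the sketch below, which keeps this question's model naming convention. The production Face SDK documents the model identifiers as recognition_01 through recognition_04, with recognition_01 as the default:

    // faceClient is the authenticated Face client from the question's sample code.
    string imageUrl = "<image url>";
    var faces = await faceClient.Face.DetectWithUrlAsync(
        imageUrl,
        true,   // returnFaceId
        true,   // returnFaceLandmarks
        recognitionModel: "recognition_v1",   // the service default in this scenario
        returnRecognitionModel: true);        // confirms which model was applied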

recognition_v2 is not correct because it specifies a newer recognition model rather than the service default. The engineers would use a newer model like this for the second test when they compare against the newest available model.

recognition_v3 is not correct because it also refers to a newer model and not the service default. Specifying that value would run the evaluation against a later model instead of the service default.

recognition_v4 is not correct because it names an even later model and not the default model used in the first run. Newer exams may refer to updated model naming so always check the current documentation.

When a question asks for the service default model pick the model explicitly documented as the default and use returnRecognitionModel in your call to verify which model the service actually used.

Your team at Crestwave Solutions is building an image classification workflow using the Custom Vision service and you documented the sequence but four steps are missing. Step 1: (missing). Step 2: (missing). Step 3: Obtain the sample images. Step 4: Add the client code. Step 5: (missing). Step 6: (missing). Step 7: Upload and tag images. Step 8: Train and publish the project. Step 9: Use the prediction endpoint. Step 10: Run the application. What steps should fill the blank positions? (Choose 4)

  • ✓ A. Step 6 corresponds to creating tags in the project

  • ✓ C. Step 5 corresponds to creating the Custom Vision project

  • ✓ D. Step 1 corresponds to installing the Custom Vision client library package

  • ✓ G. Step 2 corresponds to retrieving the training and prediction keys

The correct options are Step 1 corresponds to installing the Custom Vision client library package, Step 2 corresponds to retrieving the training and prediction keys, Step 5 corresponds to creating the Custom Vision project, and Step 6 corresponds to creating tags in the project.

Step 1 corresponds to installing the Custom Vision client library package is correct because installing the SDK or client library is a prerequisite on the developer machine before any code can call the service or perform automated uploads and training.

Step 2 corresponds to retrieving the training and prediction keys is correct because those keys are required to authenticate API calls and to configure the client once the library is installed and before you create or train a project.

Step 5 corresponds to creating the Custom Vision project is correct because you must create the project resource in the Custom Vision service before you can add tags and upload images into that project.

Step 6 corresponds to creating tags in the project is correct because tags define the labels you will apply to images and they need to exist in the project prior to image tagging and training.
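
Steps 5 and 6 correspond to two training client calls. Here is a minimal C# sketch with the Custom Vision client library, assuming a placeholder training key and endpoint and a hypothetical project name:

    using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
    using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.Models;

    // Step 1 installed this client library and Step 2 retrieved the keys used below.
    var trainingClient = new CustomVisionTrainingClient(
        new ApiKeyServiceClientCredentials("<training-key>"))   // placeholder key
    {
        Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  // placeholder endpoint
    };

    // Step 5: create the project. Step 6: create tags inside it.
    Project project = trainingClient.CreateProject("product-classifier");  // hypothetical name
    Tag shoes = trainingClient.CreateTag(project.Id, "shoes");
    Tag boots = trainingClient.CreateTag(project.Id, "boots");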

Step 2 corresponds to installing the Custom Vision client library package is wrong because installation belongs in Step 1 and Step 2 should be about obtaining the keys needed to use the service.

Step 6 corresponds to creating the Custom Vision project is wrong because the project must be created earlier in the sequence at Step 5 and tags are created after the project exists.

Step 1 corresponds to retrieving the training and prediction keys is wrong because keys are retrieved after the client tools are in place and so the retrieval fits Step 2 rather than Step 1.

Step 5 corresponds to creating tags in the project is wrong because tagging is done after the project is created and so tag creation is correctly placed at Step 6.

When you see sequence questions identify setup tasks that must happen before calling the service and project tasks that occur after resource creation. Installing SDKs and obtaining keys are usually early steps so check for those before creation and labeling.

A client uses Azure Cognitive Search to provide document retrieval for Northbridge Media, and the client intends to enable server side encryption with customer managed keys stored in Azure Key Vault. What three consequences will this configuration change cause? (Choose 3)

  • ✓ A. Index storage footprint will grow

  • ✓ D. Search response latency will increase

  • ✓ F. Azure Key Vault must be used

The correct answers are Index storage footprint will grow, Search response latency will increase, and Azure Key Vault must be used.

Enabling server side encryption with customer managed keys causes additional encryption metadata and wrapped key information to be stored alongside index data. This extra metadata increases the amount of stored data which is why Index storage footprint will grow.

Using customer managed keys requires fetching or unwrapping keys from the key store during cryptographic operations which adds processing and network hops. Those extra operations tend to increase response times which is why Search response latency will increase.

Azure Cognitive Search uses Azure Key Vault as the supported place to store and manage customer managed keys so you must use that service when enabling CMK. That is the reason Azure Key Vault must be used.
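
The Key Vault dependency is explicit when you define the index in code. Here is a minimal C# sketch with the Azure.Search.Documents library, assuming placeholder service, vault, key name, and key version values:

    using System;
    using Azure;
    using Azure.Search.Documents.Indexes;
    using Azure.Search.Documents.Indexes.Models;

    var indexClient = new SearchIndexClient(
        new Uri("https://<your-service>.search.windows.net"),  // placeholder service
        new AzureKeyCredential("<admin-key>"));                // placeholder key

    var index = new SearchIndex("documents")
    {
        Fields =
        {
            new SimpleField("id", SearchFieldDataType.String) { IsKey = true },
            new SearchableField("content")
        },
        // Customer managed keys must reference a key stored in Azure Key Vault.
        EncryptionKey = new SearchResourceEncryptionKey(
            new Uri("https://<your-vault>.vault.azure.net"),  // placeholder vault
            "search-cmk",                                     // placeholder key name
            "<key-version>")
    };
    await indexClient.CreateIndexAsync(index);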

A self signed X509 certificate must be provided is incorrect because Key Vault holds the key material and access is handled by Key Vault permissions and managed identities rather than by uploading a self signed certificate for the search service.

Index size will shrink is incorrect because encryption adds overhead and metadata so it will not reduce the stored index size.

Query throughput will improve is incorrect because the added cryptographic work and key access usually add overhead and they will not increase throughput.

When you see questions about customer managed keys think about extra storage and extra latency and remember that keys are stored in Azure Key Vault rather than supplied as a certificate.

True or false: The Vision Face Detector returns details about faces found in a photo but it is not designed to identify or match a specific person?

  • ✓ B. True

True is correct. The Vision Face Detector returns details about faces that it finds in an image but it does not identify or match a specific person.

The face detection feature provides bounding boxes, landmark and contour information, detection confidence, and likelihoods for facial expressions and attributes. These outputs help locate faces and describe facial features, and they do not include identity resolution or a person matching capability.

False is incorrect because it states the opposite. Saying the detector performs identification or matching would be wrong since the Vision Face Detector focuses on detection and attribute analysis rather than recognizing or linking faces to known identities.

When you see questions that contrast detection and recognition remember that detection finds faces and reports attributes while recognition attempts to identify people and is a different capability.

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Copilot Expert, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.