Azure AI Engineer Exam Dumps and Braindumps (AI-102)

Azure AI Engineer Practice Test

Despite the title of this article, this is not a Microsoft Azure AI Engineer Braindump in the traditional sense.

I don’t believe in cheating.

Traditionally, the term “braindump” referred to someone taking an exam, memorizing the questions, and sharing them online for others to use. That approach is unethical and violates the Microsoft certification agreement. It offers no integrity, no real learning, and no professional growth.

Microsoft AI Exam Simulator

This is not an Azure AI Engineer Braindump.

All of these questions come from my Microsoft Azure AI Engineer Udemy course and from the certificationexams.pro Azure website, which offers hundreds of free Azure AI Engineer Practice Questions.

Each question has been carefully written to align with the official Microsoft Certified Azure AI Engineer Associate exam topics.

They mirror the tone, logic, and technical depth of real Microsoft exam scenarios, but none are copied from the actual test. Every question is designed to help you learn, reason, and master Azure AI concepts such as Cognitive Services, Azure Machine Learning, AI solution deployment, and responsible AI practices the right way.

If you can answer these questions and understand why the incorrect options are wrong, you will not only pass the real Microsoft Azure AI Engineer Associate exam, you will gain a deep understanding of how to design, build, and optimize intelligent AI solutions using Azure’s cloud platform.

So if you want to call this your Azure AI Engineer exam dump, that’s fine, but remember that every question here is built to teach, not to cheat.

Each item includes detailed explanations, realistic examples, and insights that help you think like an Azure AI Engineer during the exam.

Study with purpose, practice consistently, and approach your certification with integrity. Success as a Microsoft AI Engineer comes not from memorizing answers but from understanding how Azure AI services, automation, and responsible AI design come together to create intelligent and trustworthy cloud solutions.

AI-102 Azure AI Engineer Questions and Answers

The Continental Combat League was launched by promoter Simon Lowell and he is seeking help with the organization’s Microsoft Azure setup. Simon is developing a video analysis application that uses Azure AI Video Indexer to extract insights from recordings that contain multiple spoken languages and he needs to set the sourceLanguage parameter in the API to enable multilingual recognition. What value should he assign to the sourceLanguage parameter?

  • ❏ A. Multiple language recognition

  • ❏ B. Multi language detection

  • ❏ C. Auto language identification

  • ❏ D. Language detection

A data engineering team at Arcadia Analytics must add a table projection to a skillset JSON so that enriched documents are written into the knowledge store. Which three properties should be defined on the table node in the skillset JSON configuration? (Choose 3)

  • ❏ A. indexName

  • ❏ B. generatedKeyName

  • ❏ C. source

  • ❏ D. tableName

  • ❏ E. dataSource

Is it true that selecting a synthetic voice that is not native to the language of your content can lead to unintended pronunciation or speech patterns?

  • ❏ A. False

  • ❏ B. True

Your team at Aerotech Systems is building a multivariate anomaly detection pipeline for a telemetry platform that ingests streams such as temperature humidity and pressure readings. The solution must identify unusual multivariate patterns in real time and it must let operators tune detection sensitivity. What implementation considerations should you evaluate? (Choose 2)

  • ❏ A. Train a single multivariate model that captures relationships across temperature humidity and pressure channels

  • ❏ B. Use Google Cloud Monitoring anomaly detection

  • ❏ C. Implement a threshold adaptation system that uses historical baselines to adjust sensitivity in real time

  • ❏ D. Set a single fixed sensitivity level across all sensor dimensions

  • ❏ E. Apply simple univariate detectors to each metric separately for reduced complexity

SnackWave Ltd is a regional quick service chain led by Maria Chen and its corporate offices and central kitchen are based in Boulder Colorado. The company operates locations across North America and plans to expand into Europe. The operations team is learning conversational language features and needs to return only the single most likely intent detected in a piece of text. Which response field or parameter should they check to retrieve that most likely intent?

  • ❏ A. kind

  • ❏ B. utterance

  • ❏ C. query

  • ❏ D. topIntent

  • ❏ E. sentiment

A small imaging company named Lumina Labs is comparing two Custom Vision capabilities called Image Classification and Object Detection. Match Image Classification and Object Detection with the correct descriptions. Definition one applies one or more labels to an image and returns image coordinates that indicate where those labels occur. Definition two applies one or more labels to an entire image. Which mapping is correct? (Choose 2)

  • ❏ A. Object Detection corresponds with Definition one

  • ❏ B. Image Classification corresponds with Definition one

  • ❏ C. Object Detection corresponds with Definition two

  • ❏ D. Image Classification corresponds with Definition two

CedarSoft maintains a library of product manuals stored as many PDF files, and the support team needs to deploy a chatbot that answers user questions using the manual content while keeping development time and expenses as low as possible. What Azure capability should be used to build this solution?

  • ❏ A. Azure OpenAI

  • ❏ B. Azure Cognitive Search

  • ❏ C. Azure AI Language Conversational Language Understanding

  • ❏ D. Azure AI Language Custom Question Answering

Your organization Northwind Labs has an Azure subscription that contains a multi service Cognitive Services Translator resource named TranslateSvcA. You are building an application that will send text and documents to TranslateSvcA by using the REST API. Which request headers must you include to authenticate the call and to declare the request payload format?

  • ❏ A. The access control request header the content type and the content length

  • ❏ B. The subscription key header and a client trace identifier

  • ❏ C. The subscription key header the subscription region header and the content type header

  • ❏ D. An authorization bearer token and the content type

Aurora Loft is a fashionable venue in Eastport and it doubles as Hollis Drake’s base for side ventures. The Loft hired you to advise on information technology and the team has started building conversational bots with the Bot Framework. The developers are unsure which activity sends a welcome when a user joins the bot conversation. Which activity should they use?

  • ❏ A. UserGreetingAdded() or userGreetingAdded() activity

  • ❏ B. OnMessage() or onMessage() activity

  • ❏ C. OnMembersAdded() or onMembersAdded() activity

  • ❏ D. UserGreeting() or userGreeting() activity

Aurora Renewable Research located in Boulder Colorado employs scientists Maya and Erik who used a recovered manuscript to invent Temporal Cells and Maya has asked you to deploy an AI Services container image. Which parameter must you provide when performing this deployment?

  • ❏ A. Azure Resource Group

  • ❏ B. None of the listed options

  • ❏ C. End User License Agreement parameter with value “yes”

  • ❏ D. Azure Failover Group

  • ❏ E. Azure subscription name

A company called BreezeChat runs a live messaging platform and must detect the language of incoming text messages instantly so they can route content to appropriate moderation and localization flows. Which Azure service should the developers integrate to perform instantaneous language detection?

  • ❏ A. Azure Speech Services

  • ❏ B. Google Cloud Translation API

  • ❏ C. Azure Language Service

  • ❏ D. Azure Translator

How can the community team at BrightBridge monitor user comments on their online store and detect posts that convey negative sentiment?

  • ❏ A. Azure AI Language key phrase extraction

  • ❏ B. Azure Content Safety

  • ❏ C. Azure AI Language sentiment analysis

  • ❏ D. Azure AI Language named entity recognition

This item belongs to a group of items that share the same scenario. A company called NovaAssist operates a virtual assistant that uses the question answering capability in Azure Cognitive Services for Language. End users have complained that the assistant replies informally to irrelevant or off topic prompts. You must ensure the assistant returns more formal replies for those off topic prompts. From the Language Portal you switch the chitchat dataset to qna_chitchat_formal.tsv and then you retrain and republish the model. Will this change meet the requirement?

  • ❏ A. No

  • ❏ B. Yes

Northbridge Academy is creating a browser based training platform for remote pupils, and they notice some participants step away or become distracted during sessions. The system must use each pupil’s webcam and microphone streams to confirm presence and attention while keeping development effort low, and it also needs to identify each pupil. From the facial expressions captured in the webcam stream, how can the system verify whether the learner is paying attention?

  • ❏ A. Text Analytics

  • ❏ B. Speech

  • ❏ C. Face

A team at BrightLex is building a key phrase extractor with the Azure Text Analytics client. They implement a function def extract_phrases(text_analytics_client, text) that sets response = text_analytics_client.extract_key_phrases(text, language="en"), prints the literal string "keyphrases", and then iterates with for phrase in response.key_phrases, printing each phrase. They call the function with extract_phrases(text_analytics_client, "the cat sat on the mat"). The claim is that the printed output will include the individual words "the", "cat", "sat", "on", and "mat" as separate entries. Is that claim true?

  • ❏ A. No

  • ❏ B. Yes
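
The code described in the question can be rendered as a runnable sketch. The mock client and its canned placeholder result below are hypothetical stand-ins for the real Azure Text Analytics service, included only so the control flow executes:

```python
# A runnable sketch of the code described in the question. The mock client and
# its canned "example phrase" result are hypothetical stand-ins for the real
# Azure Text Analytics service.
class MockKeyPhraseResponse:
    def __init__(self, key_phrases):
        self.key_phrases = key_phrases

class MockTextAnalyticsClient:
    def extract_key_phrases(self, text, language="en"):
        # Placeholder result. The real service decides what phrases come back.
        return MockKeyPhraseResponse(["example phrase"])

def extract_phrases(text_analytics_client, text):
    response = text_analytics_client.extract_key_phrases(text, language="en")
    print("keyphrases")
    for phrase in response.key_phrases:
        print(phrase)

extract_phrases(MockTextAnalyticsClient(), "the cat sat on the mat")
```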

Which Azure service is most commonly used to implement conversational AI “agents” that manage dialogs with users?

  • ❏ A. Azure Cognitive Search

  • ❏ B. Azure Bot Service

  • ❏ C. Azure Blob Storage

  • ❏ D. Azure Virtual Network

  • ❏ E. Azure Cognitive Services

Prairie Lending Group is a regional mortgage firm with branches across Nebraska and it is owned by Maria and Daniel Shaw. Their child Evan Shaw is leading a new effort to design an orchestration pipeline that will route user queries to multiple Conversational Language Understanding models and question answering projects for a virtual assistant. Which tasks can Evan carry out inside an orchestration workflow built on the Azure AI Language service? (Choose 2)

  • ❏ A. Add entities directly into the orchestration workflow

  • ❏ B. Connect to question answering projects hosted in the same Azure AI Language resource as the orchestration workflow

  • ❏ C. Mark the same token as two distinct entity labels within an orchestration workflow

  • ❏ D. Link to Conversational Language Understanding applications that are owned by the same Azure AI Language resource as the orchestration workflow

A small image sharing startup named CloudPalette is adding a feature that lets users upload photos. The feature must automatically suggest descriptive alt text for each image and it should require minimal engineering effort. Which Azure AI Vision endpoint should you recommend?

Riverton Analytics is building an internal search application that uses Azure AI Search to index confidential knowledge articles and they need to enforce document level authorization for users. Which three tasks should the engineers implement to ensure each user only receives search results they are permitted to view? (Choose 3)

  • ❏ A. Create a separate search index for every security group

  • ❏ B. Store permitted group identifiers in a document field within the search index

  • ❏ C. Send identity access tokens with the search queries so the search service validates user permissions

  • ❏ D. Apply the user’s group memberships as a filter when running the search

  • ❏ E. Use Azure Role Based Access Control to assign per document visibility inside the index

  • ❏ F. Fetch the authenticated user’s group memberships from the identity provider at query time

A client named Velvet Room runs an upscale club in New Harbor, and the proprietor Victor Crowe uses it as a front for illicit activity. You are consulting on their IT systems, and the team wants to launch a conversational customer support system that serves both email and a website chat widget. What approach should they adopt?

  • ❏ A. Build separate conversational agents for email and for the website and connect them via private endpoints

  • ❏ B. Develop a single conversational agent and connect it to both the web chat widget and the email channel

  • ❏ C. Use Dialogflow CX for the website chat and implement a separate email pipeline with Cloud Pub/Sub and Cloud Functions

  • ❏ D. Deploy a web chat bot and send an automated email autoreply that directs users to use the web chat

Aurora Telecom is creating an automated voice response solution that must interact with callers in either French or English. The system will receive spoken input and it must identify which language the caller is using. Which Azure Cognitive Services offering should be used to detect the incoming spoken language?

  • ❏ A. Translator

  • ❏ B. Speaker Recognition

  • ❏ C. Speech to Text

  • ❏ D. Text to Speech

  • ❏ E. Speech Translation

A media startup called Aurora Labs is configuring its ClipIndexer API from the developer console to process uploaded footage. Which values must be specified to authenticate requests and route them to the correct location? (Choose 3)

  • ❏ A. Project ID

  • ❏ B. API Key

  • ❏ C. Endpoint URL

  • ❏ D. Account ID

  • ❏ E. Region

  • ❏ F. Subscription Key

  • ❏ G. OAuth Client Secret

Atlas Publishing Collective was started by Marcus Hale and grew into a large independent publisher. Marcus wants to train an Azure AI model to assign multiple genre labels to book summaries that range from personal safety to political commentary. The engineering group set up a multi label classification project and trained the model using documents from a single content feed even though production extraction will pull from several different sources. Which data quality metric should they increase?

  • ❏ A. Distribution

  • ❏ B. Coverage

  • ❏ C. Accuracy

  • ❏ D. Diversity

  • ❏ E. Relevance

  • ❏ F. Authority

Which prompt engineering technique best matches this description where you specify what the model should and should not respond to and you also define the exact structure of its replies?

  • ❏ A. Grounding context

  • ❏ B. System message

  • ❏ C. Few shot learning

A retailer has deployed Azure AI and can provision two kinds of resources named multi service resources and single service resources. Assign each trait to the proper resource where Resource A denotes the multi service resource and Resource B denotes the single service resource. Trait one is access to an individual Azure AI capability with a service specific key and endpoint. Trait two is consolidated billing across the used services. Trait three is a single key and endpoint that can call several Azure AI services. Trait four is eligibility for a free tier. Which traits belong to each resource type? (Choose 2)

  • ❏ A. Resource A has Trait one and Trait two

  • ❏ B. Resource B has Trait one and Trait four

  • ❏ C. Resource A has Trait two and Trait four

  • ❏ D. Resource B has Trait two and Trait three

  • ❏ E. Resource A has Trait two and Trait three

  • ❏ F. Resource B has Trait one and Trait two

  • ❏ G. Resource A has Trait one and Trait three

  • ❏ H. Resource B has Trait three and Trait four

Nordic Archives is a historical publisher that digitizes handwritten municipal record books into searchable text, and you have been tasked with converting old registry pages using the Read API from the Azure AI Vision services, which returns results in JSON format. Which nested JSON key expresses how confident the service is that an extracted handwritten token was recognized correctly?

  • ❏ A. score

  • ❏ B. accuracy

  • ❏ C. certainty

  • ❏ D. confidence

A cloud IoT hub ingests telemetry from industrial equipment at a manufacturing plant. You must develop an application that detects anomalies across multiple interrelated sensors, determines the root cause of production stoppages, and sends incident alerts, while minimizing development time. Which Azure service should you choose?

  • ❏ A. Anomaly Detector

  • ❏ B. Azure Digital Twins

  • ❏ C. Azure Machine Learning

  • ❏ D. Azure Metrics Advisor

A company named Veridian Labs is building a message analysis tool that uses Azure AI Language to extract meaning from text messages and they want to include links to relevant Wikipedia articles for the entities found in those messages. Which Azure Language feature should they use?

  • ❏ A. Key phrase extraction

  • ❏ B. Azure AI Content Safety

  • ❏ C. Entity linking

  • ❏ D. Custom entity recognition

A regional property portal managed by BlueStone Realty wants to add multilingual support to its property search index and it can either store every translated field manually or run an enrichment pipeline to produce translated fields. Which Azure product should the team use to generate translated text for enriching the index?

  • ❏ A. Azure Cognitive Search

  • ❏ B. Azure Translator

  • ❏ C. Azure Speech Service

  • ❏ D. Azure AI Services

A communications startup called Skyline Chat needs live text translation in its messaging platform and it has integrated with the Azure Translator service. They require that profane words are removed from translated messages without being replaced by symbols or placeholders. What configuration should they apply?

  • ❏ A. Set the profanityAction setting to Masked

  • ❏ B. Surround text with the notranslate tag

  • ❏ C. Configure the profanityAction setting to Deleted

Lena Hart works at Meridian Risk Analysis Bureau and she must build an application that uses Azure AI Search to index a set of documents. Which step represents the final phase of the indexing workflow?

  • ❏ A. Running enrichment skillsets

  • ❏ B. Applying output field mappings

  • ❏ C. Ingesting the processed documents into the search index

  • ❏ D. Parsing and extracting file content

  • ❏ E. Defining the index schema and field types

Aegis Security began as a group of contractors formed by Avery Cross who led the team and now manages a mid sized professional services firm and they want to build a facial recognition system to identify named staff members in photos. Which service should they use?

  • ❏ A. Custom Vision

  • ❏ B. Form Recognizer

  • ❏ C. Face

  • ❏ D. Computer Vision

Assess the following statements and indicate which ones are accurate. Statement 1: Language Studio or the REST endpoint can be used to perform PII identification. Statement 2: Unstructured content is acceptable when submitting data for PII scanning. Statement 3: Data may be held for a brief period when using the real time mode for PII scanning.

  • ❏ A. Statement 2 and Statement 3

  • ❏ B. Statement 1 and Statement 2

  • ❏ C. Statement 1, Statement 2 and Statement 3

  • ❏ D. Statement 1 and Statement 3

A product unit at NimbusWorks is implementing search features and intends to use the Astra Search platform. The team lists the setup steps and assigns numbers where 1 means dispatch queries and process responses, 2 means define the index schema, 3 means create the search instance and 4 means ingest the documents. What is the correct sequence for preparing the Astra Search deployment?

  • ❏ A. 2 > 3 > 4 > 1

  • ❏ B. 3 > 2 > 4 > 1

  • ❏ C. 4 > 2 > 3 > 1

  • ❏ D. 1 > 2 > 3 > 4

You create a conversational understanding model with Language Studio for a startup named NovaChat. During validation the assistant returns incorrect answers for user inputs that are not related to the model’s capabilities. You must enable the model to recognize irrelevant or spurious requests. What should you do?

  • ❏ A. Raise the inference confidence cutoff

  • ❏ B. Add examples to the None intent

  • ❏ C. Enable active learning

  • ❏ D. Add entity types to the model

Certification Braindump Questions Answered

The Continental Combat League was launched by promoter Simon Lowell and he is seeking help with the organization’s Microsoft Azure setup. Simon is developing a video analysis application that uses Azure AI Video Indexer to extract insights from recordings that contain multiple spoken languages and he needs to set the sourceLanguage parameter in the API to enable multilingual recognition. What value should he assign to the sourceLanguage parameter?

  • ✓ B. Multi language detection

The correct option is Multi language detection.

Multi language detection is the value to assign to the sourceLanguage parameter when you want Azure AI Video Indexer to detect and transcribe multiple spoken languages within the same recording. Setting this value enables the service to run its multilingual recognition models so it can identify different languages and produce language specific transcriptions and insights.

Multiple language recognition is not correct because that phrase is not the exact parameter value expected by the Video Indexer API even though it describes the capability in plain language.

Auto language identification is not correct because it is not the documented value for the sourceLanguage parameter and may imply a different internal behavior than the API expects.

Language detection is not correct because it is a generic term and not the precise parameter value required by the Azure AI Video Indexer API.

Read the API parameter names carefully and match the value exactly as shown in the documentation. Pay attention to spacing and wording when the exam asks about parameter values.

A data engineering team at Arcadia Analytics must add a table projection to a skillset JSON so that enriched documents are written into the knowledge store. Which three properties should be defined on the table node in the skillset JSON configuration? (Choose 3)

  • ✓ B. generatedKeyName

  • ✓ C. source

  • ✓ D. tableName

The correct options are generatedKeyName, source, and tableName.

The tableName property specifies the name of the knowledge store table that will receive the enriched documents. This tells the skillset where to project the output rows so that the knowledge store creates or populates the correct table.

The source property identifies which skill output or path from the enrichment pipeline provides the content for the table rows. This links the table projection to the skill output so that the correct data is written into the knowledge store table.

The generatedKeyName property defines the column that will hold a generated row key for each projected record. This is used to create unique identifiers for table rows when the knowledge store writes the enriched data.

indexName is incorrect because that property relates to a search index configuration and not to a knowledge store table projection in the skillset JSON.

dataSource is incorrect because the table node in a skillset uses source to reference the skill output. The term dataSource typically refers to an indexer data source or other pipeline configuration rather than the table projection properties.

When you see a question about knowledge store table projections focus on properties that control the target table and the mapping from skill outputs. Remember that tableName, source, and generatedKeyName are table projection properties and do not confuse them with index or data source settings.
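
For reference, the explanation above corresponds to a table node inside the skillset's knowledgeStore projections. The storage connection placeholder, table name, key column name, and source path shown below are illustrative values only:

```json
{
  "knowledgeStore": {
    "storageConnectionString": "<storage-connection-string>",
    "projections": [
      {
        "tables": [
          {
            "tableName": "docEnrichments",
            "generatedKeyName": "docId",
            "source": "/document/enrichedShape"
          }
        ],
        "objects": [],
        "files": []
      }
    ]
  }
}
```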

Is it true that selecting a synthetic voice that is not native to the language of your content can lead to unintended pronunciation or speech patterns?

  • ✓ B. True

True is correct because selecting a synthetic voice that is not native to the language of your content can lead to unintended pronunciation or speech patterns.

Speech synthesis models are trained on language specific phonetics and prosody so a voice optimized for one language may render phonemes, stress, and intonation differently when used with another language. That mismatch often causes words and names to be pronounced incorrectly or to sound unnatural.

To reduce these problems choose a voice that matches the content language and locale and use available controls such as language tags and SSML phoneme or pronunciation features to correct troublesome words. It is also important to test short audio samples before deploying.

False is incorrect because it claims that choosing a non native voice does not affect pronunciation. In practice a mismatched voice will commonly alter phoneme interpretation and prosody and produce the unintended speech patterns that the question describes.

When answering remember that voice selection should match the content language and locale and that using SSML or phoneme controls can fix many pronunciation issues.
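
As a concrete illustration, SSML can mark a span of text with its own language and supply an explicit pronunciation. The voice name below is a placeholder and the lang element assumes a voice that supports language switching:

```xml
<!-- A minimal SSML sketch. The voice name is a placeholder. -->
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-ExampleMultilingualNeural">
    The German word for squirrel is
    <lang xml:lang="de-DE">Eichhörnchen</lang>.
    Say <phoneme alphabet="ipa" ph="təˈmeɪtoʊ">tomato</phoneme>.
  </voice>
</speak>
```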

Your team at Aerotech Systems is building a multivariate anomaly detection pipeline for a telemetry platform that ingests streams such as temperature humidity and pressure readings. The solution must identify unusual multivariate patterns in real time and it must let operators tune detection sensitivity. What implementation considerations should you evaluate? (Choose 2)

  • ✓ A. Train a single multivariate model that captures relationships across temperature humidity and pressure channels

  • ✓ C. Implement a threshold adaptation system that uses historical baselines to adjust sensitivity in real time

Train a single multivariate model that captures relationships across temperature humidity and pressure channels and Implement a threshold adaptation system that uses historical baselines to adjust sensitivity in real time are correct.

A multivariate model is appropriate because it learns joint relationships across temperature, humidity, and pressure, and it can detect complex anomalies that only appear when multiple channels interact. This approach reduces false positives that arise from examining each metric alone, and it supports techniques such as multivariate Gaussian models, reconstruction based on principal component analysis, and autoencoder or sequence models for streaming inference.

A threshold adaptation system that uses historical baselines lets the pipeline adjust sensitivity for seasonality, sensor drift, and changing operating conditions, and it gives operators a simple control to tune detection thresholds in real time without retraining the core model. Adaptive thresholds help limit alert fatigue and keep detection performance stable as the environment evolves.

Use Google Cloud Monitoring anomaly detection is not the best fit because the built in Monitoring anomaly features are primarily aimed at single metric baselines and alerting and they provide limited support for custom multivariate modeling and fine grained operator tuning. For complex telemetry correlations a custom multivariate solution is more flexible.

Set a single fixed sensitivity level across all sensor dimensions is incorrect because sensors have different noise characteristics and dynamics and a single fixed threshold will either miss real anomalies on some channels or generate excessive false positives on others. Sensitivity should be adjustable and often adaptive.

Apply simple univariate detectors to each metric separately for reduced complexity is wrong because univariate detectors cannot detect anomalies that only manifest in cross metric relationships and they typically produce more false alarms when metrics are treated independently. Multivariate detection captures the correlated behavior that matters for telemetry like temperature humidity and pressure.

Focus on whether the scenario requires multivariate analysis and real time handling. Prefer answers that support joint modeling and adaptive thresholds rather than fixed or per metric approaches.
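
The adaptive threshold idea can be sketched with a rolling baseline. This is a minimal illustration, not an Azure API; the window size and the k sensitivity multiplier are the kind of values an operator would tune:

```python
# A minimal sketch of baseline-driven threshold adaptation: the alert
# threshold tracks a rolling mean and standard deviation, and operators
# tune sensitivity through the k multiplier.
from collections import deque
import statistics

class AdaptiveThreshold:
    def __init__(self, window=50, k=3.0):
        self.history = deque(maxlen=window)  # rolling historical baseline
        self.k = k                           # operator-tunable sensitivity

    def is_anomalous(self, value):
        if len(self.history) >= 2:
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history) or 1e-9  # guard zero spread
            anomalous = abs(value - mean) > self.k * stdev
        else:
            anomalous = False  # not enough baseline yet
        self.history.append(value)
        return anomalous

detector = AdaptiveThreshold(window=20, k=3.0)
readings = [20.0, 20.1, 19.9, 20.2, 20.0, 20.1, 19.8, 20.0, 45.0]
flags = [detector.is_anomalous(r) for r in readings]
print(flags)  # only the final spike is flagged
```

A production system would keep one such baseline per sensor channel alongside the multivariate model, since each channel has its own noise profile.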

SnackWave Ltd is a regional quick service chain led by Maria Chen and its corporate offices and central kitchen are based in Boulder Colorado. The company operates locations across North America and plans to expand into Europe. The operations team is learning conversational language features and needs to return only the single most likely intent detected in a piece of text. Which response field or parameter should they check to retrieve that most likely intent?

  • ✓ D. topIntent

The correct option is topIntent.

The topIntent field represents the single most likely intent detected by a conversational intent recognizer or language understanding component. When you only need the highest probability intent to route or handle a user message the top intent field is the appropriate value to check.

kind is incorrect because that name usually denotes the type of resource or message and it does not carry the detected intent information.

utterance is incorrect because that term normally refers to the user text or the phrasing of an input and not to the computed intent result.

query is incorrect because that label typically contains the input text or search query used for intent detection rather than the resulting intent.

sentiment is incorrect because it provides sentiment analysis such as positive, negative, or neutral and it does not identify the user’s intent.

When a question asks for the single most likely intent look for fields named topIntent or topScoringIntent in the response and avoid fields that describe the input text or sentiment.
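
Reading the top intent out of a response can be sketched as follows. The JSON payload below is a hypothetical example; only the topIntent field name follows the pattern discussed above:

```python
# A sketch of extracting the single most likely intent from a conversational
# language understanding style JSON response. The payload is hypothetical.
import json

raw = """
{
  "result": {
    "query": "order two pizzas",
    "prediction": {
      "topIntent": "PlaceOrder",
      "intents": [
        {"category": "PlaceOrder", "confidenceScore": 0.97},
        {"category": "CancelOrder", "confidenceScore": 0.02}
      ]
    }
  }
}
"""

response = json.loads(raw)
top_intent = response["result"]["prediction"]["topIntent"]
print(top_intent)  # PlaceOrder
```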

A small imaging company named Lumina Labs is comparing two Custom Vision capabilities called Image Classification and Object Detection. Match Image Classification and Object Detection with the correct descriptions. Definition one applies one or more labels to an image and returns image coordinates that indicate where those labels occur. Definition two applies one or more labels to an entire image. Which mapping is correct? (Choose 2)

  • ✓ A. Object Detection corresponds with Definition one

  • ✓ D. Image Classification corresponds with Definition two

The correct options are Object Detection corresponds with Definition one and Image Classification corresponds with Definition two.

Object Detection corresponds with Definition one is correct because object detection models identify one or more objects in an image and return localization information such as bounding box coordinates for each detected object. This capability labels individual instances and gives the positions in the image where those labels occur which matches Definition one.

Image Classification corresponds with Definition two is correct because image classification assigns one or more labels that describe the entire image and it does not provide coordinates or bounding boxes. This makes it appropriate when you only need to know what is present in the image as a whole which matches Definition two.

Image Classification corresponds with Definition one is incorrect because classification does not return location coordinates and it does not indicate where labels occur within the image.

Object Detection corresponds with Definition two is incorrect because object detection does provide coordinates and localization and it does not only apply labels to the entire image without location information.

When reading these questions look for words like coordinates or where to indicate object detection. If the labels describe the whole image then the correct choice is image classification.

CedarSoft maintains a library of product manuals stored as many PDF files, and the support team needs to deploy a chatbot that answers user questions using the manual content while keeping development time and expenses as low as possible. Which Azure capability should be used to build this solution?

  • ✓ D. Azure AI Language Custom Question Answering

Azure AI Language Custom Question Answering is the correct option.

Azure AI Language Custom Question Answering is designed to ingest manuals and other documents and to build a knowledge base that answers user questions directly from those files. The service provides document ingestion, retrieval, and answer generation with source attribution so you can deploy a document Q and A chatbot with less custom development work and lower engineering cost.

Azure AI Language Custom Question Answering includes features for handling PDFs and other common file formats, automatic chunking and indexing, and a managed retrieval workflow that reduces the need to build and maintain your own vector store and retrieval pipeline. That combination makes it the fastest path to a production chatbot over product manuals.

Azure OpenAI is not the best choice because it provides general purpose models but it does not include a built in document ingestion and knowledge base pipeline. Using it would require you to build and host the retrieval, indexing, and prompt integration yourself which increases development effort and cost.

Azure Cognitive Search can index documents and power retrieval but it is primarily a search and indexing service. It does not natively provide the managed question answering and generative answer features that simplify building a chatbot that uses manuals unless you combine it with additional services and custom code.

Azure AI Language Conversational Language Understanding is focused on understanding intents and managing conversational flows and it is not intended to ingest large document collections and produce document based answers. That makes it the wrong fit for a manuals driven Q and A bot.

When a question mentions answering from manuals or documents look for a service that offers built in document ingestion and retrieval plus Q and A features so you can minimize custom engineering and deployment time.
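As a sketch of how an application might consume such a service, the payload below mimics the general shape of a question answering response, with answers carrying confidence scores and source attribution. The manual names and values are invented.

```python
# Pick the highest confidence answer from a question-answering style
# response. Field names follow the REST response shape; values are invented.
def best_answer(response: dict) -> str:
    answers = response["answers"]
    top = max(answers, key=lambda a: a["confidenceScore"])
    return top["answer"]

sample = {
    "answers": [
        {"answer": "Hold the reset button for 5 seconds.",
         "confidenceScore": 0.88, "source": "router-manual.pdf"},
        {"answer": "Unplug the device and wait.",
         "confidenceScore": 0.41, "source": "faq.pdf"},
    ]
}

print(best_answer(sample))
```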

Your organization Northwind Labs has an Azure subscription that contains a multi service Cognitive Services Translator resource named TranslateSvcA. You are building an application that will send text and documents to TranslateSvcA by using the REST API. Which request headers must you include to authenticate the call and to declare the request payload format?

  • ✓ C. The subscription key header the subscription region header and the content type header

The subscription key header the subscription region header and the content type header is correct. This combination matches the Translator REST API expectations for a multi service Cognitive Services resource and covers authentication and payload format.

The subscription key header is required to authenticate the request and is typically sent as Ocp-Apim-Subscription-Key. The subscription region header is required when you use a multi service Cognitive Services resource or when the resource requires a region value and it is sent as Ocp-Apim-Subscription-Region. The content type header declares the payload format so the service can parse text or document uploads and is usually set to application/json for text requests or to the appropriate multipart type for file uploads.

The access control request header the content type and the content length is incorrect because Access Control Request headers relate to browser CORS preflight requests and they do not provide authentication. The Content Length header is not used for authenticating the call.

The subscription key header and a client trace identifier is incorrect because the client trace identifier is optional and is used only for debugging and telemetry. The trace id does not replace the need for the region header for multi service resources or for the explicit content type header.

An authorization bearer token and the content type is incorrect for the typical subscription key based setup described in this question. Azure AD bearer tokens can be used in some scenarios but the exam and the multi service Translator setup expect the subscription key plus region header for authentication.

When the question mentions a multi service Cognitive Services resource remember to include both the subscription key and the subscription region headers and always set the Content-Type for the request payload.
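A minimal sketch of assembling the three headers for a text request; the key and region values are placeholders you would replace with your own.

```python
# Build the headers the Translator REST API expects for a multi service
# Cognitive Services resource. Key and region are placeholder values.
def build_translator_headers(key: str, region: str) -> dict:
    return {
        "Ocp-Apim-Subscription-Key": key,        # authenticates the call
        "Ocp-Apim-Subscription-Region": region,  # required for multi service resources
        "Content-Type": "application/json",      # declares the payload format
    }

headers = build_translator_headers("<your-key>", "eastus")
print(headers["Content-Type"])  # application/json
```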

Aurora Loft is a fashionable venue in Eastport and it doubles as Hollis Drake’s base for side ventures. The Loft hired you to advise on information technology and the team has started building conversational bots with the Bot Framework. The developers are unsure which activity sends a welcome when a user joins the bot conversation. Which activity should they use?

  • ✓ C. OnMembersAdded() or onMembersAdded() activity

The correct option is OnMembersAdded() or onMembersAdded() activity.

The OnMembersAdded() or onMembersAdded() activity corresponds to the ConversationUpdate activity that fires when new members are added to a conversation. The bot can inspect the list of added members and send a welcome message to the user while avoiding sending the welcome to the bot itself.

In typical Bot Framework code you implement an OnMembersAdded handler such as OnMembersAddedAsync in C# or onMembersAdded in JavaScript and place your greeting logic there. This makes that handler the proper place for welcome messages when a user joins.

UserGreetingAdded() or userGreetingAdded() activity is not a standard Bot Framework activity and so it will not be fired by the framework as a system event when users join.

OnMessage() or onMessage() activity is the handler for incoming user messages and it runs when the user sends text or other message types. It does not automatically run when a user is added to the conversation so it is not the correct trigger for a join welcome.

UserGreeting() or userGreeting() activity is not part of the Bot Framework activity types and it is not a valid built in trigger for new members joining a conversation.

When a question asks about sending a welcome on join look for wording that matches members added or conversation update because those indicate the system event that triggers welcome handlers.
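In the Python Bot Framework SDK the handler is on_members_added_activity, and its core logic is to greet every added member except the bot itself. That logic can be sketched without the SDK, with member dicts standing in for ChannelAccount objects:

```python
# Sketch of the greeting logic inside an on_members_added handler.
# Member dicts are simplified stand-ins for ChannelAccount objects.
def welcome_messages(members_added: list, bot_id: str) -> list:
    """Greet each new member, skipping the bot's own join event."""
    return [
        f"Welcome, {m['name']}!"
        for m in members_added
        if m["id"] != bot_id
    ]

joined = [{"id": "bot-1", "name": "SupportBot"}, {"id": "user-7", "name": "Ava"}]
print(welcome_messages(joined, bot_id="bot-1"))  # ['Welcome, Ava!']
```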

Aurora Renewable Research, located in Boulder, Colorado, employs scientists Maya and Erik, who used a recovered manuscript to invent Temporal Cells, and Maya has asked you to deploy an AI Services container image. Which parameter must you provide when performing this deployment?

  • ✓ C. End User License Agreement parameter with value “yes”

End User License Agreement parameter with value “yes” is the correct option.

When deploying an AI Services container you must explicitly accept the software license so the deployment command or template requires an End User License Agreement parameter set to the value “yes”. This parameter acknowledges the terms and allows the container image to run, and deployments will fail or be blocked if the EULA is not accepted.

Azure Resource Group is incorrect because the question asks which parameter must be provided during the container deployment and accepting the EULA is the mandatory deployment parameter. A resource group is a management construct in Azure but it is not the specific required parameter referenced by this deployment requirement.

None of the listed options is incorrect because there is a listed option that is required, namely the End User License Agreement parameter with value “yes”.

Azure Failover Group is incorrect because failover groups relate to database high availability and are not a parameter required for deploying an AI Services container image.

Azure subscription name is incorrect because the subscription context is part of your Azure account configuration and is not the specific deployment parameter you must supply to accept the container license.

Look for answers that relate to legal or runtime acknowledgements when a question mentions deploying vendor provided container images. The EULA acceptance is often mandatory and is a common parameter to check for.

A company called BreezeChat runs a live messaging platform and must detect the language of incoming text messages instantly so they can route content to appropriate moderation and localization flows. Which Azure service should the developers integrate to perform instantaneous language detection?

  • ✓ D. Azure Translator

The correct option is Azure Translator.

Azure Translator provides a dedicated language detection endpoint and real time translation capabilities that are designed for low latency text processing. The service can detect the language of short messages instantly with a single API call and it scales to handle high throughput so developers can route messages to moderation and localization flows without added delay.

Azure Speech Services is focused on audio and speech processing rather than direct text language detection for live messaging. It is the right choice when you need speech to text or speech recognition features but it is not the dedicated text detection service for this scenario.

Google Cloud Translation API can perform language detection but it is a Google Cloud product and not the Azure service requested by this question. The exam expects the Azure offering that best fits the instantaneous text routing requirement.

Azure Language Service offers broader natural language understanding and text analytics capabilities and it may include language identification features. For a live messaging pipeline that needs the fastest, dedicated detection and translation endpoints the exam answer favors Azure Translator as the appropriate Azure service.

When a question asks for instant language detection for live text streams look for a service that explicitly offers a dedicated detect endpoint and real time or low latency text processing. Also check that the service belongs to the same cloud vendor named in the question and choose the vendor aligned offering.
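The Translator detect endpoint (POST /detect?api-version=3.0) returns a language code and a confidence score per input text, so the routing step is a one-line lookup. The response payload below is illustrative:

```python
# Route a message based on a Translator /detect style response.
# The response is a list with one entry per input text; values are invented.
def detected_language(detect_response: list) -> str:
    return detect_response[0]["language"]

sample = [{"language": "fr", "score": 0.98, "isTranslationSupported": True}]
print(detected_language(sample))  # fr
```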

How can the community team at BrightBridge monitor user comments on their online store and detect posts that convey negative sentiment?

  • ✓ C. Azure AI Language sentiment analysis

The correct answer is Azure AI Language sentiment analysis.

Azure AI Language sentiment analysis is designed to determine the emotional tone of text and it can classify comments as positive, negative, or neutral while providing confidence scores. This makes it the appropriate choice for monitoring user comments and detecting posts that convey negative sentiment because it directly measures tone and can be integrated into alerting or moderation workflows.

Azure AI Language key phrase extraction extracts important words and phrases from text and it does not provide sentiment classification so it cannot tell you whether comments are negative.

Azure Content Safety focuses on detecting harmful or policy violating content and it is useful for moderation but it is not primarily designed to measure emotional tone or sentiment.

Azure AI Language named entity recognition identifies people organizations locations and other entities in text and it does not indicate the emotional tone of a message.

When a question asks about tone or emotion look for sentiment or sentiment analysis in the options and do not confuse those with tools for extracting entities or key phrases.
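With the azure-ai-textanalytics SDK the relevant call is analyze_sentiment, and each result carries a sentiment label plus per-class confidence scores. The filtering step a moderation pipeline would add can be sketched over that shape with plain dicts; the comments and threshold are invented:

```python
# Flag comments whose overall sentiment is negative. Dicts mirror the
# sentiment analysis result shape (sentiment label + confidence scores).
# The 0.6 threshold is an arbitrary example value.
def negative_comments(results: list, min_confidence: float = 0.6) -> list:
    return [
        r["text"] for r in results
        if r["sentiment"] == "negative"
        and r["confidence_scores"]["negative"] >= min_confidence
    ]

results = [
    {"text": "Great store!", "sentiment": "positive",
     "confidence_scores": {"positive": 0.95, "neutral": 0.03, "negative": 0.02}},
    {"text": "My order never arrived.", "sentiment": "negative",
     "confidence_scores": {"positive": 0.01, "neutral": 0.09, "negative": 0.90}},
]
print(negative_comments(results))  # ['My order never arrived.']
```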

This item belongs to a group of items that share the same scenario. A company called NovaAssist operates a virtual assistant that uses the question answering capability in Azure Cognitive Services for Language. End users have complained that the assistant replies informally to irrelevant or off topic prompts. You must ensure the assistant returns more formal replies for those off topic prompts. From the Language Portal you switch the chitchat dataset to qna_chitchat_formal.tsv and then you retrain and republish the model. Will this change meet the requirement?

  • ✓ B. Yes

The correct answer is: Yes.

Switching the chitchat dataset in the Language Portal to qna_chitchat_formal.tsv and then retraining and republishing the model will change the trained chitchat responses so the assistant uses a more formal tone for off topic or conversational prompts. The chitchat dataset controls the style and canned replies for out of scope interactions and retraining plus republishing applies those changes to the deployed Question Answering model.

No is incorrect because the described action does meet the requirement. Since you replaced the chitchat file with a formal variant and completed the retrain and republish steps the assistant will return more formal replies for those off topic prompts.

When a question mentions changing a dataset and then retraining and republishing, verify whether that dataset controls the target behavior and whether the retrain and republish steps were performed. If both are true the change usually meets the requirement. Use retrain and republish as key words to spot effective configuration updates.

Northbridge Academy is creating a browser based training platform for remote pupils and they notice some participants step away or become distracted during sessions. The system must use each pupil’s webcam and microphone streams to confirm presence and attention while keeping development effort low and it also needs to identify each pupil. From the facial expressions captured in the webcam stream, how can the system verify whether the learner is paying attention?

  • ✓ C. Face

The correct option is Face.

The Face service analyzes facial landmarks and expressions from webcam frames so it can infer gaze direction, eye openness, head pose and microexpressions that indicate whether a learner is paying attention. It also supports face identification which lets the system confirm each pupil’s identity while keeping development effort low because the API provides prebuilt models for detection and recognition.

Text Analytics is focused on processing text for sentiment, key phrases and language detection so it cannot interpret visual facial expressions from a webcam stream and it cannot identify or verify presence from face data.

Speech can transcribe audio and detect voice activity which may help confirm presence through the microphone but it cannot read facial expressions or gaze and it does not perform face identification, so it cannot by itself verify attention from the webcam stream.

When a question names a specific modality like facial expressions or webcam stream choose the service that directly analyzes that modality, since that usually yields the most accurate and lowest effort solution.
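One concrete signal is head pose. A Face detect response can include a headPose attribute with pitch, roll, and yaw in degrees, and a naive attention check over that shape might look like the sketch below. The 20 degree threshold is an arbitrary illustration, not a recommended value:

```python
# Naive attention heuristic over a Face detect style headPose attribute.
# Angles are in degrees; the threshold is an arbitrary example value.
def looking_at_screen(head_pose: dict, max_angle: float = 20.0) -> bool:
    return abs(head_pose["yaw"]) <= max_angle and abs(head_pose["pitch"]) <= max_angle

print(looking_at_screen({"pitch": 3.1, "roll": 0.4, "yaw": -8.2}))  # True
print(looking_at_screen({"pitch": 2.0, "roll": 1.0, "yaw": 55.0}))  # False
```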

A team at BrightLex is building a key phrase extractor with the Azure Text Analytics client. They implement a function def extract_phrases(text_analytics_client, text) that sets response = text_analytics_client.extract_key_phrases(text, language=“en”), prints the literal string “keyphrases”, and then iterates with for phrase in response.key_phrases, printing each phrase. They call the function with extract_phrases(text_analytics_client, “the cat sat on the mat”). The claim is that the printed output will include the individual words the, cat, sat, on, and mat as separate entries. Is that claim true?

  • ✓ A. No

The correct option is No. The Azure Text Analytics key phrase extractor will not output every word from the input sentence as separate entries.

The extract_key_phrases operation identifies important phrases and returns them in response.key_phrases rather than acting as a tokenizer. The service typically removes common stop words like “the” and “on” and returns the meaningful words or multiword phrases, so you will not get each token such as every instance of “the” printed as a separate entry.

Printing the literal string “keyphrases” before iterating will only produce that label and does not change the contents of response.key_phrases. The loop will print whatever phrases the service returned and those are chosen by the key phrase algorithm rather than by simple splitting on spaces.

The option Yes is incorrect because the API is not a simple splitter that returns every word. It performs higher level extraction and filters out stop words so the claimed output of every individual word is not what extract_key_phrases guarantees.

When you see questions about API outputs focus on whether the operation is tokenization or semantic extraction and remember to check the documentation for how stop words and phrases are handled.
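The contrast can be made concrete in a short sketch. Splitting on spaces returns every token, while key phrase extraction returns meaningful phrases with stop words removed; the illustrative_phrases value below shows the kind of result the service returns, not a guaranteed response. Note also that the current azure-ai-textanalytics SDK method takes a list of documents rather than a bare string.

```python
# Contrast: naive tokenization returns every word, while key phrase
# extraction returns meaningful phrases with stop words filtered out.
# illustrative_phrases is an example result, not a guaranteed response.
text = "the cat sat on the mat"

tokens = text.split()                  # every word, including stop words
illustrative_phrases = ["cat", "mat"]  # shape of a key_phrases result

print(tokens)
print(illustrative_phrases)
```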

Which Azure service is most commonly used to implement conversational AI “agents” that manage dialogs with users?

  • ✓ B. Azure Bot Service

The correct option is Azure Bot Service.

Azure Bot Service is the platform that provides hosting and orchestration for conversational applications and it integrates with the Bot Framework SDK and Composer to implement dialog management, state handling, and channel connectors. It also connects to Cognitive Services for language and speech features, but the Bot Service is the primary runtime and management layer for building, deploying, and scaling conversational agents.

Azure Cognitive Search is focused on indexing and retrieving content and it is useful for search scenarios but it does not provide dialog orchestration or bot hosting. You could use it to surface search results inside a bot, but it is not the service that implements the conversational agent.

Azure Blob Storage is an object storage service for storing files and binary data and it does not include dialog logic, connectors, or a runtime for bots. It can store assets or logs for a bot but it is not the bot platform.

Azure Virtual Network provides networking and isolation for resources in Azure and it is not a service for building or managing conversational agents. It can secure deployments but it does not handle dialogs or user interactions.

Azure Cognitive Services offers AI building blocks for vision, speech, and language and it can supply natural language understanding and speech capabilities to a bot. It is not itself the dialog orchestration and hosting platform that Azure Bot Service provides, so it is not the correct answer.

When a question asks which service runs and manages dialogs and connects to channels think of Azure Bot Service. Remember that Cognitive Services provide language and speech capabilities but they are building blocks rather than the bot runtime.

Prairie Lending Group is a regional mortgage firm with branches across Nebraska and it is owned by Maria and Daniel Shaw. Their child Evan Shaw is leading a new effort to design an orchestration pipeline that will route user queries to multiple Conversational Language Understanding models and question answering projects for a virtual assistant. Which tasks can Evan carry out inside an orchestration workflow built on the Azure AI Language service? (Choose 2)

  • ✓ B. Connect to question answering projects hosted in the same Azure AI Language resource as the orchestration workflow

  • ✓ D. Link to Conversational Language Understanding applications that are owned by the same Azure AI Language resource as the orchestration workflow

The correct options are Connect to question answering projects hosted in the same Azure AI Language resource as the orchestration workflow and Link to Conversational Language Understanding applications that are owned by the same Azure AI Language resource as the orchestration workflow.

The orchestration workflow can route queries to question answering projects that live in the same Azure AI Language resource. Question answering projects are separate components that host knowledge bases and the orchestrator can call them so the virtual assistant can return precise answers from those projects.

The orchestration workflow can also reference Conversational Language Understanding applications that are owned by the same resource. The orchestrator delegates intent and entity resolution to those CLU apps so you can combine multiple language models and routing logic within one workflow.

Add entities directly into the orchestration workflow is incorrect because entity definitions and recognition belong to the underlying language projects and models. The orchestrator routes requests to those models and does not itself host or author entity schemas.

Mark the same token as two distinct entity labels within an orchestration workflow is incorrect because overlapping or multi-label entity assignments are handled at the model training and labeling level and not by the orchestration layer. The orchestrator invokes models that apply their own entity rules and it does not perform conflicting token labeling itself.

When you see questions about orchestration remember to focus on what the orchestrator routes to and what must be defined in the underlying language projects. Pay attention to whether a capability is part of the workflow routing or part of the individual model configuration.

A small image sharing startup named CloudPalette is adding a feature that lets users upload photos. The feature must automatically suggest descriptive alt text for each image and it should require minimal engineering effort. Which Azure AI Vision endpoint should you recommend?

The correct choice is the https://eastus.api.cognitive.example.com/vision/v4.0/analyze?visualFeatures=Description endpoint. It is part of Azure Computer Vision and it returns descriptive captions and tags for images, which makes it suitable for generating suggested alt text automatically. This approach requires minimal engineering effort because the service provides built in models that produce human readable descriptions and confidence scores without training a custom model.

The Analyze endpoint with the Description visual feature returns one or more caption candidates and associated confidence values so the startup can surface a suggested alt text string or choose the highest confidence caption. It also returns tags that can help give context for accessibility or for additional UI suggestions.

The https://eastus.api.cognitive.example.com/customvision/v4.1/prediction/projects/abcd1234/classify/iterations/publishedA/image endpoint points to Custom Vision prediction and it is intended for running a custom trained classifier. It is not ideal when the requirement is minimal engineering because you would need to label training data and maintain the custom model.

The https://eastus.api.cognitive.example.com/contentmoderator/moderate/v2.1/ProcessImage/Evaluate endpoint is for content moderation and evaluating images for unsafe or inappropriate content. It does not provide descriptive captions for use as alt text and so it does not meet the requirement.

When a question asks for automatic descriptive alt text with minimal engineering look for an endpoint that returns image captions or a Description visual feature. Remember that Custom Vision implies training a model and Content Moderator focuses on safety checks rather than generating descriptions.
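The caption selection step can be sketched over the Analyze response shape, where description.captions holds text and confidence pairs. The payload values below are invented:

```python
# Choose the highest confidence caption from an Analyze (Description)
# style response as the suggested alt text. Payload values are invented.
def suggest_alt_text(analyze_response: dict) -> str:
    captions = analyze_response["description"]["captions"]
    best = max(captions, key=lambda c: c["confidence"])
    return best["text"]

sample = {
    "description": {
        "captions": [
            {"text": "a dog running on a beach", "confidence": 0.87},
            {"text": "an animal outdoors", "confidence": 0.41},
        ],
        "tags": ["dog", "beach", "outdoor"],
    }
}
print(suggest_alt_text(sample))  # a dog running on a beach
```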

Riverton Analytics is building an internal search application that uses Azure AI Search to index confidential knowledge articles and they need to enforce document level authorization for users. Which three tasks should the engineers implement to ensure each user only receives search results they are permitted to view? (Choose 3)

  • ✓ B. Store permitted group identifiers in a document field within the search index

  • ✓ D. Apply the user’s group memberships as a filter when running the search

  • ✓ F. Fetch the authenticated user’s group memberships from the identity provider at query time

Store permitted group identifiers in a document field within the search index, Fetch the authenticated user’s group memberships from the identity provider at query time, and Apply the user’s group memberships as a filter when running the search are correct.

Store permitted group identifiers in a document field within the search index lets you attach the allowed principals to each document so the index contains the metadata needed for security trimming. You typically model this as a collection field that is filterable so queries can match on group identifiers.

Fetch the authenticated user’s group memberships from the identity provider at query time ensures your application has the user’s current groups or claims before building the query. That step keeps permissions up to date and avoids embedding long lived access rules in the index.

Apply the user’s group memberships as a filter when running the search is the actual enforcement mechanism. You construct an OData style filter that checks for overlap between the document group field and the user’s groups so only authorized documents are returned.

Create a separate search index for every security group is wrong because it does not scale and it makes indexing and query logic extremely complex. Filters on a single index are the intended pattern for document level security.

Send identity access tokens with the search queries so the search service validates user permissions is wrong because tokens authenticate the request against the search service but Azure Cognitive Search does not perform per document authorization based on a user token. You must implement document level filtering in your application or query layer.

Use Azure Role Based Access Control to assign per document visibility inside the index is wrong because RBAC controls management and administrative access to the service and indexes. RBAC does not provide per document visibility for search results returned to end users.

Remember that Azure Cognitive Search enforces document level security by storing allowed principals in fields and by filtering at query time. Azure RBAC controls management access not per document visibility.
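The query time filter typically uses the OData search.in function against the group field. A sketch of building that filter string, where the field name group_ids is an assumption about your index schema and the group values are placeholders:

```python
# Build an OData filter that matches documents whose group_ids field
# overlaps the caller's groups. The field name group_ids is an assumed
# index schema choice; group values are placeholders.
def build_group_filter(user_groups: list) -> str:
    joined = ",".join(user_groups)
    return f"group_ids/any(g: search.in(g, '{joined}'))"

print(build_group_filter(["finance", "managers"]))
# group_ids/any(g: search.in(g, 'finance,managers'))
```

You would pass the resulting string as the filter parameter of the search call so only documents the user is permitted to view are returned.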

A client named Velvet Room runs an upscale club in New Harbor, and the proprietor Victor Crowe uses it as a front for illicit activity. You are consulting on their IT systems, and the team wants to launch a conversational customer support system that serves both email and a website chat widget. What approach should they adopt?

  • ✓ B. Develop a single conversational agent and connect it to both the web chat widget and the email channel

The correct answer is Develop a single conversational agent and connect it to both the web chat widget and the email channel.

A single agent provides a unified natural language model and conversational logic so the same intents and flows power both the website chat and email. This approach reduces duplication and keeps training data, dialog state design, and business rules consistent across channels. It also makes it easier to maintain a single source of truth for responses and to apply updates and improvements in one place.

From an implementation perspective you can connect the web chat widget to the agent using the platform integration or client libraries and you can route email through an ingest pipeline that forwards parsed messages to the same agent for intent detection and fulfillment. Using a single agent also simplifies analytics and monitoring so you can measure user satisfaction and handoffs across channels in a consistent way.

Build separate conversational agents for email and for the website and connect them via private endpoints is incorrect because splitting agents creates duplicate training and inconsistent behavior. Private endpoints add complexity and do not solve the core problem of maintaining a single conversational model.

Use Dialogflow CX for the website chat and implement a separate email pipeline with Cloud Pub/Sub and Cloud Functions is incorrect because it describes keeping conversation logic separate across channels. Dialogflow CX and pipeline tools are useful but they should be used to connect a single agent to multiple channels rather than to create separate conversational systems.

Deploy a web chat bot and send an automated email autoreply that directs users to use the web chat is incorrect because that approach abandons email as a first class conversational channel. An autoreply degrades the email experience and fails to handle asynchronous, multi turn email conversations that users expect.

When a question contrasts separate agents with a single agent ask whether the requirement needs different conversational logic or just different delivery channels. Prefer a single agent for consistent NLU and easier maintenance.

Aurora Telecom is creating an automated voice response solution that must interact with callers in either French or English. The system will receive spoken input and it must identify which language the caller is using. Which Azure Cognitive Services offering should be used to detect the incoming spoken language?

  • ✓ C. Speech to Text

Speech to Text is the correct option for detecting which language an incoming caller is using.

Speech to Text transcribes spoken audio into text and supports automatic language detection and many spoken languages so the system can identify whether a caller is speaking French or English before further processing.

Translator is oriented to translating text between languages and does not perform primary speech recognition or spoken language detection on live caller audio.

Speaker Recognition is designed to identify or verify a speaker by voice characteristics and it does not determine the language being spoken.

Text to Speech generates synthetic audio from text and it does not analyze or detect the language of incoming speech.

Speech Translation focuses on translating spoken audio into another language, and while it can handle speech in multiple languages, its primary purpose is translation rather than simply detecting which language the caller is using, so Speech to Text is the more appropriate choice for language identification in this scenario.

When the question is about identifying the language of spoken input look for services that transcribe or recognize speech. That usually points to Speech to Text.
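With the Speech SDK you would pass an AutoDetectSourceLanguageConfig listing the candidate locales (here en-US and fr-FR) to the recognizer and then branch on the detected locale. The branching step can be sketched on its own; the queue names are invented for illustration:

```python
# Route a caller based on the locale reported by automatic language
# detection. Queue names are invented for illustration.
def route_caller(detected_locale: str) -> str:
    routes = {"fr-FR": "french-support-queue", "en-US": "english-support-queue"}
    return routes.get(detected_locale, "default-queue")

print(route_caller("fr-FR"))  # french-support-queue
```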

A media startup called Aurora Labs is configuring its ClipIndexer API from the developer console to process uploaded footage. Which values must be specified to authenticate requests and route them to the correct location? (Choose 3)

  • ✓ B. API Key

  • ✓ D. Account ID

  • ✓ E. Region

The correct answer is API Key, Account ID, and Region.

The API Key is provided with each request so the service can authenticate the caller and apply any usage limits or permissions tied to that key.

The Account ID identifies the customer account or tenant so the request is routed to the correct owner and billing context within the service.

The Region selects the geographic endpoint and runtime location so requests are sent to the appropriate regional backend and any region specific resources are used.

Project ID is typically a separate cloud project identifier and it is not the value required here when the service expects an account level identifier.

Endpoint URL is not something you must manually specify in this console flow because the chosen Region is used to determine the endpoint automatically.

Subscription Key is a term used by other platforms and it is not the authentication mechanism required here when an API Key is used.

OAuth Client Secret is only used when performing OAuth flows to obtain bearer tokens and it is not required for simple key based access.

When deciding between options focus on which fields control authentication and which control routing and remember that API Key, Account ID, and Region often appear together in API setup screens.
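ClipIndexer is a fictional service, but the three values map naturally onto a request: the region picks the host, the account ID appears in the path, and the API key authenticates the caller. A sketch with an entirely hypothetical URL layout:

```python
# Hypothetical request builder for the fictional ClipIndexer API. The URL
# layout is invented to show how region, account ID, and API key are used.
def build_index_request(region: str, account_id: str, api_key: str, video_id: str):
    url = (f"https://{region}.api.clipindexer.example.com"
           f"/accounts/{account_id}/videos/{video_id}/index")
    headers = {"x-api-key": api_key}  # authenticates the caller
    return url, headers

url, headers = build_index_request("westus2", "acct-42", "<key>", "vid-7")
print(url)
```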

Atlas Publishing Collective was started by Marcus Hale and grew into a large independent publisher. Marcus wants to train an Azure AI model to assign multiple genre labels to book summaries that range from personal safety to political commentary. The engineering group set up a multi label classification project and trained the model using documents from a single content feed even though production extraction will pull from several different sources. Which data quality metric should they increase?

  • ✓ D. Diversity

The correct option is Diversity.

Diversity should be increased because the model was trained on documents from a single content feed while production will pull from several different sources. Increasing Diversity means adding examples that reflect the variety of writing styles, tones, topical focus, and source characteristics the model will encounter in production and that improves generalization for multi label genre assignment across topics from personal safety to political commentary.

Distribution is not the best choice because the scenario describes a lack of varied source representation rather than a formal change in probability estimates. Addressing the variety of inputs by increasing Diversity is the practical way to reduce mismatch between training and production.

Coverage usually refers to whether all relevant labels or categories are present in the dataset. The problem described is not missing genre labels but rather that the training examples all come from one feed, so increasing Coverage would not directly fix the lack of source variety.

Accuracy measures model performance and is an outcome to improve. It is not the specific data quality property to increase in this context. Improving Diversity of the training data is a way to raise Accuracy in production but accuracy itself is not the data attribute you should change.

Relevance refers to how pertinent the training examples are to the task. The summaries used for genre labeling are relevant to the task but they lack representation across sources. Therefore increasing Relevance will not address the cross source generalization gap as directly as increasing Diversity.

Authority concerns source trustworthiness and provenance. That can matter for some downstream policies but it does not address the core issue of representing multiple content sources and styles. Increasing Authority would not solve the representational gap the way increasing Diversity would.

When a question contrasts single source training with multi source production look for wording about variety or representation and choose Diversity rather than outcome metrics like Accuracy.

Which prompt engineering technique best matches this description where you specify what the model should and should not respond to and you also define the exact structure of its replies?

  • ✓ B. System message

The correct option is System message.

A System message is the chat role that you use to provide high level instructions that govern what the model should and should not do and it is also where you define precise reply structure by giving templates examples or a required schema. Putting output format rules and prohibitions in a System message signals that these are authoritative instructions that apply throughout the conversation.

Grounding context provides external facts or documents to make responses more accurate but it does not by itself enforce behavioral rules or a strict response format so it does not match this description.

Few shot learning uses example input and output pairs to teach a pattern through examples but examples rarely serve as explicit prohibitions or guaranteed structural templates the way a System message does.

When the question asks about enforcing global instructions or exact output format think of the system message role as the primary place to set those constraints for chat models.
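A minimal sketch of that idea follows. The message list shows where behavioral rules and an exact reply schema live; the product name and wording of the instructions are invented for illustration, and only the role structure matters here.

```python
# The system message carries the global rules (what to refuse) and the
# exact output structure (a JSON schema) that apply to every turn.
system_message = (
    "You answer questions about Contoso products only. "
    "Refuse any unrelated request. "
    'Reply strictly as JSON: {"answer": string, "confidence": number}.'
)

messages = [
    {"role": "system", "content": system_message},  # authoritative instructions
    {"role": "user", "content": "How do I reset my Contoso router?"},
]
```

In a real chat completion call this list is passed as the conversation payload, and the system entry governs every subsequent user turn.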

A retailer has deployed Azure AI and can provision two kinds of resources named multi service resources and single service resources. Assign each trait to the proper resource where Resource A denotes the multi service resource and Resource B denotes the single service resource. Trait one is access to an individual Azure AI capability with a service specific key and endpoint. Trait two is consolidated billing across the used services. Trait three is a single key and endpoint that can call several Azure AI services. Trait four is eligibility for a free tier. Which traits belong to each resource type? (Choose 2)

  • ✓ B. Resource B has Trait one and Trait four

  • ✓ E. Resource A has Trait two and Trait three

The correct answers are Resource B has Trait one and Trait four and Resource A has Trait two and Trait three.

Resource A has Trait two and Trait three denotes the multi service resource. This resource provides consolidated billing across the used services which matches Trait two and it exposes a single key and endpoint that can call several Azure AI services which matches Trait three. That design lets you manage authentication and billing centrally when you use multiple Azure AI capabilities.

Resource B has Trait one and Trait four denotes the single service resource. This resource gives access to an individual Azure AI capability with a service specific key and endpoint which is Trait one and it is eligible for a free tier which is Trait four. Single service resources are chosen when you need per service keys or want to use a service specific free tier.

Resource A has Trait one and Trait two is incorrect because the multi service resource does not provide a service specific key and endpoint which is Trait one. The multi service approach uses a single key and endpoint instead.

Resource A has Trait two and Trait four is incorrect because the multi service resource supports consolidated billing which is Trait two but it is not the resource type that is eligible for the free tier which is Trait four. Free tier eligibility is tied to single service resources.

Resource B has Trait two and Trait three is incorrect because a single service resource does not offer consolidated billing across services which is Trait two and it does not provide a single key and endpoint that calls several services which is Trait three. Single service resources are scoped to one capability.

Resource B has Trait one and Trait two is incorrect because although the single service resource does provide a service specific key and endpoint which is Trait one it does not provide consolidated billing across multiple services which is Trait two.

Resource A has Trait one and Trait three is incorrect because the multi service resource does provide a single key and endpoint which is Trait three but it does not provide service specific keys and endpoints which is Trait one. Those belong to single service resources.

Resource B has Trait three and Trait four is incorrect because the single service resource does not provide a single key and endpoint that can call several services which is Trait three. The single service resource can be eligible for a free tier which is Trait four but it cannot act as a unified multi service endpoint.

Remember that multi service resources use a single key and consolidated billing while single service resources use service specific keys and are the ones that commonly offer a free tier.

Nordic Archives is a historical publisher that digitizes handwritten municipal record books into searchable text and you have been tasked with converting old registry pages using the Azure Read API from the Azure AI vision services which returned results in JSON format. Which nested JSON key is used to indicate how closely an extracted handwritten token matches the recognized text?

  • ✓ D. confidence

The correct answer is confidence.

The Azure Read API returns recognized text in nested structures such as pages, lines, and words and each word or token typically includes a numeric confidence field that indicates how closely the extracted handwritten token matches the recognized text. The confidence value is usually expressed as a number between 0 and 1 and represents the model’s estimated probability for that specific token.

score is incorrect because the Read JSON does not use a per-token key named score to convey match strength.

accuracy is incorrect because the service does not provide a per-token accuracy field and it instead reports per-token confidence values.

certainty is incorrect because that key is not used in the Read API responses and the standard key for token-level match information is confidence.

When you study API output look at a sample JSON and find the nested words or tokens arrays to see the confidence values. This quickly confirms which key the service uses for match scores.
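Walking a sample result makes the nesting concrete. The payload below is a hand-written abbreviation shaped like a Read result; real responses carry more fields, but the per-word confidence key is what we inspect.

```python
import json

# Abbreviated sample shaped like a Read API result.
sample = json.loads("""
{
  "readResults": [
    {"lines": [
      {"text": "Registry 1884",
       "words": [
         {"text": "Registry", "confidence": 0.98},
         {"text": "1884", "confidence": 0.62}
       ]}
    ]}
  ]
}
""")

# Flag handwritten tokens the model was unsure about.
low_confidence = [
    w["text"]
    for page in sample["readResults"]
    for line in page["lines"]
    for w in line["words"]
    if w["confidence"] < 0.8
]
print(low_confidence)  # -> ['1884']
```

Filtering on low confidence values like this is a common way to route uncertain tokens from digitized registry pages to a human reviewer.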

A cloud IoT hub ingests telemetry from industrial equipment at a manufacturing plant. You must develop an application that detects anomalies across multiple interrelated sensors, determines the root cause of production stoppages, and sends incident alerts, while minimizing development time. Which Azure service should you choose?

  • ✓ D. Azure Metrics Advisor

The correct answer is Azure Metrics Advisor.

Azure Metrics Advisor is a managed service built for multivariate time series monitoring. It can detect anomalies across many interrelated sensors, perform causal analysis to help surface likely root causes of incidents, and create incidents and alerts through built in connectors and workflows. This managed, low code approach minimizes development time compared with building and wiring together separate detection and orchestration components.

Anomaly Detector focuses on detecting anomalies in individual or simple time series through an API and SDKs. It requires additional integration and custom logic to correlate multiple sensors and to produce root cause analysis and incident workflows, so it is less suitable when you need a quick end to end solution.

Azure Digital Twins models device relationships and spatial or semantic representations of environments. It is useful to represent how equipment relates to one another but it does not by itself provide built in multivariate anomaly detection, causal analysis, and alerting, so you would need extra services to meet the full requirement.

Azure Machine Learning is a general purpose ML platform that lets you build custom models. It offers maximum flexibility but also requires substantially more development and operational work to design, train, deploy, and orchestrate a complete anomaly detection and incident management pipeline.

When the question emphasizes multivariate sensor monitoring and minimal development time look for managed services that explicitly mention built in root cause or incident features.

A company named Veridian Labs is building a message analysis tool that uses Azure AI Language to extract meaning from text messages and they want to include links to relevant Wikipedia articles for the entities found in those messages. Which Azure Language feature should they use?

  • ✓ C. Entity linking

The correct option is Entity linking.

Entity linking is the feature that maps detected entities in text to entries in a knowledge base such as Wikipedia and it returns canonical identifiers and links that you can use to point users to the relevant articles.

Key phrase extraction finds important words and phrases that summarize text but it does not map those phrases to external knowledge base entries and so it cannot provide Wikipedia links.

Azure AI Content Safety is focused on detecting harmful or policy violating content and it does not perform entity linking or return knowledge base URLs.

Custom entity recognition lets you train models to identify domain specific entities but it labels text spans and does not automatically resolve those entities to Wikipedia or other knowledge base links without an additional linking step.

When a question asks for linking detected entities to external knowledge bases look for the phrase entity linking in the service description because that feature provides canonical IDs and direct links to resources like Wikipedia.
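A sketch of consuming an entity linking result follows. The payload is hand-written for illustration and only approximates the shape of an Azure AI Language entity linking response; the key point is that each linked entity carries a knowledge base URL.

```python
import json

# Abbreviated sample shaped like an entity linking response.
response = json.loads("""
{
  "entities": [
    {"name": "Seattle",
     "url": "https://en.wikipedia.org/wiki/Seattle",
     "dataSource": "Wikipedia"}
  ]
}
""")

# Map each recognized entity to its Wikipedia article link.
links = {e["name"]: e["url"] for e in response["entities"]}
print(links["Seattle"])
```

This URL field is what distinguishes entity linking from key phrase extraction, which returns only raw phrases with no external reference.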

A regional property portal managed by BlueStone Realty wants to add multilingual support to its property search index and it can either store every translated field manually or run an enrichment pipeline to produce translated fields. Which Azure product should the team use to generate translated text for enriching the index?

  • ✓ D. Azure AI Services

The correct option is Azure AI Services.

Azure AI Services is the right choice because it offers managed language and generative models that can produce translated text on demand and it is designed to be called from enrichment pipelines or custom skills to populate translated fields in a search index.

Azure AI Services integrates with indexing workflows so teams can generate multilingual variants programmatically and control model selection, quality, and cost when enriching documents for search.

Azure Cognitive Search is a search and indexing platform that provides an enrichment pipeline, but it does not itself generate translations unless you invoke an external AI service from a skill. It is the system that stores and queries the index rather than the generator of translated text.

Azure Translator is a dedicated translation API and it can produce translations, but the question expects the broader unified AI offering. In modern Azure architectures translation capabilities are provided or surfaced through the Azure AI Services family for enrichment scenarios.

Azure Speech Service focuses on speech to text, text to speech, and streaming speech translation and it is not the appropriate service for producing translated text fields as part of a text enrichment pipeline for a search index.

When a question asks which product should generate translated text for an enrichment pipeline choose the service that explicitly provides text generation and translation APIs and that is intended to be called from enrichment skills or pipelines.

A communications startup called Skyline Chat needs live text translation in its messaging platform and it has integrated with the Azure Translator service. They require that profane words are removed from translated messages without being replaced by symbols or placeholders. What configuration should they apply?

  • ✓ C. Configure the profanityAction setting to Deleted

The correct option is Configure the profanityAction setting to Deleted.

Setting Deleted tells the Azure Translator service to remove profane words from the translated output entirely so they do not appear as symbols or placeholders and the surrounding text remains readable.

Set the profanityAction setting to Masked is incorrect because masking replaces profane words with characters such as asterisks or symbols rather than removing them.

Surround text with the notranslate tag is incorrect because the notranslate tag prevents text from being translated and does not perform profanity removal.

When a question asks to remove profanities without leaving placeholders remember that Deleted removes words entirely and Masked replaces them with symbols.
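The setting is applied as a query parameter on the translate call. The sketch below shows the parameter shapes with placeholder credentials; the profanityAction name and Deleted value match the Translator API, while the key and region are stand-ins you would replace.

```python
# Query parameters for a Translator text translation request that
# removes profane words entirely rather than masking them.
params = {
    "api-version": "3.0",
    "to": "fr",
    "profanityAction": "Deleted",  # "Masked" would substitute symbols instead
}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-translator-key>",   # placeholder
    "Ocp-Apim-Subscription-Region": "<your-region>",        # placeholder
}
body = [{"text": "Message text to translate"}]
# A real call would POST body with these params and headers to
# https://api.cognitive.microsofttranslator.com/translate
```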

Lena Hart works at Meridian Risk Analysis Bureau and she must build an application that uses Azure AI Search to index a set of documents. Which step represents the final phase of the indexing workflow?

  • ✓ C. Ingesting the processed documents into the search index

Ingesting the processed documents into the search index is the correct final phase of the Azure AI Search indexing workflow.

Typical indexing flows start with Defining the index schema and field types so the system knows what to store. Next you perform Parsing and extracting file content to pull raw text and metadata. You then run Running enrichment skillsets to add AI-derived enrichments and you apply Applying output field mappings to map enriched values into index fields. The last step is to write the processed documents into the index and that is why Ingesting the processed documents into the search index is the final operation.

Running enrichment skillsets is incorrect because enrichment applies transformations and adds metadata only. Those enriched results still require mapping and a final ingest to become searchable.

Applying output field mappings is incorrect because mappings route enriched values into index fields but they do not persist documents into the index. Mapping precedes ingestion.

Parsing and extracting file content is incorrect because content extraction is an early step that prepares raw text and metadata for enrichment and indexing. It is not the concluding action.

Defining the index schema and field types is incorrect because schema definition is a design time or preindexing task that sets up how data will be stored. It is completed before documents are ingested.

Focus on the end to end pipeline and remember that the final step is the actual write into the index. Visualize the sequence to eliminate options that are clearly preparatory or transformational.
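The sequence above can be written down as an ordered list, which is a handy memory aid for spotting that the write into the index is always last.

```python
# The indexing workflow phases in order, with ingestion as the final write.
indexing_pipeline = [
    "define index schema and field types",
    "parse and extract file content",
    "run enrichment skillsets",
    "apply output field mappings",
    "ingest documents into the index",  # final phase
]
print(indexing_pipeline[-1])
```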

Aegis Security began as a group of contractors formed by Avery Cross, who now leads a mid sized professional services firm, and the team wants to build a facial recognition system to identify named staff members in photos. Which service should they use?

  • ✓ C. Face

Face is the correct option to build a facial recognition system that identifies named staff members in photos.

The Face service is specifically designed for face detection and identification and it supports creating person groups and training them so the service can recognize specific people in images. It provides APIs for face detection, face verification, and identifying faces against a known set of people which is what you need to identify named staff in photos.

Custom Vision is for custom image classification and object detection and it helps you train models to classify images or detect objects but it is not specialized for managing labeled people and performing face identification of named individuals.

Form Recognizer extracts text and data from documents and scanned forms and it does not provide face detection or person identification capabilities so it is not suitable for identifying staff in photos.

Computer Vision provides general image analysis such as tagging, description, and basic object and face detection in some contexts but it does not offer the dedicated face identification and person group management features that the Face service provides. For identifying named people you should use Face instead.

When a question asks for identifying named people in images look for services that mention person groups or face identification. In Azure that points to the Face service rather than general image analysis services.

Assess the following statements and indicate which ones are accurate. Statement 1 Text Studio or the REST endpoint can be used to perform PII identification. Statement 2 Unstructured content is acceptable for submitting data for PII scanning. Statement 3 Data may be held for a brief period when using the real time mode for PII scanning?

  • ✓ B. Statement 1 and Statement 2

The correct option is Statement 1 and Statement 2.

Statement 1 is correct because Text Studio and the REST endpoint are both supported ways to submit content for PII identification with the Azure AI Language service. These interfaces let you submit text and receive the detected PII entities back from the API.

Statement 2 is correct because unstructured content such as plain text and document text is acceptable for PII scanning. The PII detection capability is designed to handle raw and unstructured text inputs and to detect patterns and entity types that indicate PII.

Statement 3 is not correct because the real time mode is intended to process data without persisting it long term. Real time requests typically operate on data transiently for the duration of the call and do not store content persistently unless you explicitly configure a storage sink or run an asynchronous batch workflow.

Statement 2 and Statement 3 is incorrect because it omits Statement 1 which is true and it includes Statement 3 which is false.

Statement 1, Statement 2 and Statement 3 is incorrect because Statement 3 is false even though the other two are true.

Statement 1 and Statement 3 is incorrect because it wrongly pairs a true Statement 1 with a false Statement 3 while leaving out the true Statement 2.

When deciding between real time and batch modes focus on whether the question refers to transient processing or persistent storage. Real time inspection usually handles data transiently while batch workflows and configured sinks are where data becomes persisted.
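A request body for PII detection over unstructured text can be sketched as follows. The kind value and document shape follow the analyze-text style of the Azure AI Language REST endpoint, but treat the exact payload as illustrative rather than authoritative.

```python
import json

# Illustrative request body submitting unstructured text for PII scanning.
request_body = {
    "kind": "PiiEntityRecognition",
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en",
             "text": "Call Dana Smith at 555-0100 about invoice 8831."}
        ]
    },
}
payload = json.dumps(request_body)
print("PiiEntityRecognition" in payload)  # -> True
```

Notice that the text is plain unstructured prose, which is exactly what Statement 2 asserts the service accepts.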

A product unit at NimbusWorks is implementing search features and intends to use the Astra Search platform. The team lists the setup steps and assigns numbers where 1 means dispatch queries and process responses, 2 means define the index schema, 3 means create the search instance and 4 means ingest the documents. What is the correct sequence for preparing the Astra Search deployment?

  • ✓ B. 3 > 2 > 4 > 1

The correct option is 3 > 2 > 4 > 1. This sequence means you create the search instance first and then define the index schema. After the schema is defined you ingest the documents and finally you dispatch queries and process responses.

Creating the search instance first ensures that the search service and its resources exist so you can create indexes and configure settings. Defining the index schema next lets you specify which fields are searchable and how they are analyzed so that incoming documents are indexed correctly. Ingesting documents comes after the schema so data is stored in the correct structure and is discoverable. Dispatching queries and processing responses is the last step because you need an indexed dataset and a running service to return meaningful results.

2 > 3 > 4 > 1 is incorrect because it tries to define the index schema before the search instance exists. You generally need a provisioned instance or cluster to host indexes and to apply schema settings.

4 > 2 > 3 > 1 is incorrect because it lists ingesting documents first. You cannot reliably ingest and index data before the search instance and the index schema are in place.

1 > 2 > 3 > 4 is incorrect because dispatching queries is the final operational step. Queries come after you have an instance, a schema, and ingested data to search against.

Remember the logical order for search deployment. Provision the instance, define the schema, ingest your data, and then run queries to verify results.

You create a conversational understanding model with Language Studio for a startup named NovaChat. During validation the assistant returns incorrect answers for user inputs that are not related to the model’s capabilities. You must enable the model to recognize irrelevant or spurious requests. What should you do?

  • ✓ B. Add examples to the None intent

The correct answer is Add examples to the None intent.

Adding examples to the None intent trains the conversational model to recognize out of scope and irrelevant user inputs so it can classify those utterances instead of producing incorrect answers. When you provide diverse negative examples labeled as the None intent the assistant learns the pattern of spurious requests and will return a safe fallback or refusal rather than a wrong response.

Raise the inference confidence cutoff is not the best choice because changing the threshold only alters when the model abstains and does not teach it what kinds of inputs are out of scope. Adjusting confidence can cause more false negatives and reduce usability without improving classification of irrelevant queries.

Enable active learning can help improve the model over time by surfacing uncertain examples for review and labeling but it is not the immediate solution to instruct the model about irrelevant requests. Active learning is complementary and you would still need to add None intent examples to correct the behavior.

Add entity types to the model is unrelated to recognizing spurious or out of scope inputs because entities are used to extract structured information from relevant utterances and they do not teach the model to classify an entire query as irrelevant.

When a question mentions handling out of scope or irrelevant user inputs think about the model’s fallback or None intent and whether you need to add labeled examples to train that intent.
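What adding None intent examples looks like in data can be sketched as below. The structure loosely resembles a conversational language understanding project export; the exact schema may differ, and the intents and utterances are invented, but the point is labeling clearly out-of-scope text with the None intent.

```python
import json

# Illustrative labeled data where off-topic utterances map to None.
assets = json.loads("""
{
  "intents": [{"category": "GetOrderStatus"}, {"category": "None"}],
  "utterances": [
    {"text": "Where is my order", "intent": "GetOrderStatus"},
    {"text": "Tell me a joke", "intent": "None"},
    {"text": "What's the weather on Mars", "intent": "None"}
  ]
}
""")

none_examples = [u["text"] for u in assets["utterances"] if u["intent"] == "None"]
print(len(none_examples))  # -> 2
```

Training with a diverse set of such negatives teaches the model to classify spurious requests as None instead of forcing them into a real intent.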

Jira, Scrum & AI Certification

Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out.

You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today.

Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.