Microsoft AI Engineer Sample Questions and Answers | AI-102
All exam questions are from certificationexams.pro and my Azure AI Udemy course.
Microsoft Certified Azure Exam Topics
The Microsoft Certified Azure AI Engineer Associate exam validates your ability to build, deploy, and manage AI solutions using Microsoft Azure services.
It focuses on integrating cognitive services, designing AI applications, managing model lifecycles, implementing responsible AI practices, and optimizing solutions for scalability, performance, and reliability.
To prepare effectively, explore these Azure AI Engineer Practice Questions that reflect the format, logic, and depth of the real certification exam.
You will find Microsoft AI Engineer Real Exam Questions that simulate practical AI development challenges, along with Azure AI Engineer Sample Questions covering Azure Machine Learning, Cognitive Services, Azure OpenAI, and Azure Bot Service integrations.
Microsoft Azure AI-102 Exam Simulator
Each section includes Azure AI Engineer Questions and Answers created to teach as well as test.
These scenario-based exercises strengthen your understanding of how to use Azure AI services to build intelligent, ethical, and efficient applications. Explanations show not only which answer is correct, but why, helping you reason through real-world AI design decisions and trade-offs.
For further preparation, use the Microsoft Azure AI Engineer Exam Simulator and take full-length Azure AI Engineer Practice Tests that measure your progress. These tests reproduce the pacing and difficulty level of the actual exam, helping you gain confidence with time management and question complexity.
If you prefer focused study sessions, the Azure AI Engineer Exam Dump, Azure AI Engineer Braindump, and Azure AI Engineer Sample Questions and Answers collections group authentic practice items by topic, such as computer vision, natural language processing, and responsible AI governance. These help you review specific areas where deeper understanding is needed.
Mastering these Microsoft AI Engineer Exam Questions gives you the skills and confidence to pass the certification and apply your knowledge in real Azure environments. You will be ready to design, deploy, and maintain intelligent solutions that align with Microsoft’s AI principles and business goals.
Start your journey today with the Azure AI Engineer Practice Questions, train using the Microsoft Azure AI Engineer Exam Simulator, and measure your readiness with comprehensive Azure AI Engineer Practice Tests. Prepare to earn your certification and advance your career in AI engineering on Microsoft Azure.
Azure AI Engineer Certification Sample Questions
VectorAI runs an Azure Cognitive Search instance called BeaconSearch that multiple client apps call. You need to ensure each app can only perform a specific subset of search operations. What should you implement?
- ❏ A. Create a private endpoint for the search service
- ❏ B. Enable Azure Active Directory authentication and use managed identities
- ❏ C. Assign Azure role based access control permissions to each application for the search resource
You are advising a consultant team at SentinelGuard Technologies and you are working with Maria who leads the infrastructure group on building an index that contains a field named last_updated. The engineers must make sure that the last_updated field can be returned with search results. Which attribute should the engineers set on the last_updated field in the index definition?
- ❏ A. searchable
- ❏ B. retrievable
- ❏ C. sortable
- ❏ D. filterable
HelioChem is a multinational materials firm headquartered in Boston that produces polymers and agricultural chemicals for markets such as food processing, transportation, healthcare, and personal care. Its CEO is Maya Ross. A developer named Ethan Cole is implementing a conversational application that must interpret commands like “Book a flight to Prague.” How should Ethan model the city element of that command?
- ❏ A. Treat the city as an utterance
- ❏ B. Dialogflow
- ❏ C. Model the city as an entity
- ❏ D. Mark the city as an intent
- ❏ E. Classify the city as an execution
- ❏ F. None of the above options
Nimbus Solutions has a technical product support guide for a networking appliance. The engineering team needs to build a conversational support assistant from the guide while minimizing development time and cost. Which Azure service is the best choice for this scenario?
- ❏ A. Azure OpenAI GPT-4 with retrieval from Azure Cognitive Search
- ❏ B. Azure Cognitive Search with semantic ranking
- ❏ C. Azure AI Language custom question answering
- ❏ D. Azure AI Phi-3-medium with fine-tuning
A common pattern for intelligent applications lets users ask natural language questions and receive helpful answers, which converts a static FAQ into a conversational interface. How can you let people query your published knowledge base by email?
- ❏ A. Enable Active Learning for the knowledge base and enable auto-respond so messages are sent to the account email
- ❏ B. Add Small Talk to the knowledge base
- ❏ C. Deploy a bot from the knowledge base and configure an email channel for the bot
- ❏ D. Use Power Automate or Logic Apps to query the knowledge base endpoint and forward replies by email
BrightPath Learning adopted Azure AI Immersive Reader to raise comprehension for entry level students and to modernize its instructional workflows. Which two capabilities of Azure AI Immersive Reader are used to make reading more accessible for learners?
- ❏ A. Speech to Text and Translation
- ❏ B. Translation and Text to Speech
- ❏ C. Text to Speech and Custom Vocabulary
- ❏ D. Speech to Text and Line Focus
Asteria Robotics is a manufacturing firm located in Red Hook, Brooklyn, New York, and it is led by Maya Jensen. The business is growing quickly and the IT staff has asked you to advise them on building a knowledge store. The project requires the knowledge store to contain a normalized relational schema for the enriched content. Which kind of projection should the team define?
- ❏ A. BigQuery
- ❏ B. Object projection
- ❏ C. Table projection
- ❏ D. File projection
- ❏ E. Shaper skill
- ❏ F. Split projection
Fill in the blank for this sentence in the context of Summit Cloud services. The response from the [ ? ] provides a list of potentially unwanted words found in the content and it indicates which categories of unwanted words were detected and any possible personally identifiable information located in the text. What term completes the sentence?
- ❏ A. Cloud Data Loss Prevention API
- ❏ B. Content Filter API
- ❏ C. Text Filter API
- ❏ D. Text Moderation API
The Hearthwell Foundation feared a Cold War era catastrophe and under director Marina Cole they built an extensive shelter to host wealthy patrons. On your current assignment you must create an alert that notifies the team when a subscription key is regenerated and that event appears in the activity log for a particular Azure AI Services resource. Which action should you take?
- ❏ A. Use an Azure Logic App action to read the activity log
- ❏ B. Create a condition that is based on resource metrics
- ❏ C. Define a condition that uses an Activity Log signal type
- ❏ D. Set the alert scope to the Activity Log
You are building a conversational assistant for NovaAnalytics using the Conversational Language Understanding service. Below is the labeled utterance JSON that will be uploaded. { "text": "engage the lamp", "intent": "switch_on", "language": "en-us", "entities": [ { "category": "device", "offset": 11, "length": 4 } ] } What does the offset value represent?
- ❏ A. Byte offset of the entity in the UTF-8 encoded payload
- ❏ B. Character index where the entity begins in the utterance
- ❏ C. Total number of characters in the entire utterance
True or false: when you provision an Azure Cognitive Search resource, you must select the correct pricing tier because the tier cannot be changed later, and if the tier no longer fits you must deploy a new search resource and recreate all indexes and related objects.
- ❏ A. False
- ❏ B. True
You manage an Azure subscription for Meridian Analytics and the subscription includes an Azure OpenAI resource that runs a GPT-3.5 Small model named SolverV2. You set SolverV2 to use the system message “You are an AI tutor that helps users solve number puzzles. Explain your solutions as if the question comes from a three year old.” Which prompt engineering technique does this represent?
- ❏ A. Chain of thought prompting
- ❏ B. Few shot learning
- ❏ C. System message priming
A team at NovaVoice is building an intent recognition system and they want to collect user interactions to support active learning. What change should they make to the prediction request to enable logging of interactions?
- ❏ A. Use show_all_intents=true in the prediction request
- ❏ B. Enable sentiment analysis
- ❏ C. Add log=true to the prediction request
- ❏ D. Turn on Cloud Audit Logs for the agent
Background: Meridian Precision Works is the engineering firm that Lady Isabella Ferrer inherited from her late partner, and she has asked her team to begin experiments with Azure OpenAI. The provider supplies base models along with the option to create custom deployments, and the project lead, Alex Mercer, plans to pick a base model and deploy it. Azure OpenAI Studio provides several playgrounds with adjustable parameters, and Alex needs to confirm which resource values are required to call the Azure OpenAI resource. Which resource values are required to make requests to the Azure OpenAI resource?
- ❏ A. Completion, ChatCompletion, and Embeddings
- ❏ B. Deployment name, Endpoint, and Azure AD token
- ❏ C. Key, Chat, Embedding, and Endpoint
- ❏ D. Key, Endpoint, and Deployment name
A startup named HarborCloud is training an intent classifier for its support chatbot and the initial intent called RetrieveSupportContact contains 250 example utterances. I want to reduce the chance that unrelated user messages are incorrectly classified as that intent. What action should I take?
- ❏ A. Increase the intent classification confidence threshold
- ❏ B. Create a machine learned entity
- ❏ C. Add negative examples to the None intent training set
- ❏ D. Enable active learning
You manage an Azure account for a mid sized staffing company named AgileHire. You need to implement a system to process scanned job application forms and persist the extracted fields to a relational database. The forms use a consistent layout across submissions and you must minimize developer effort and operating costs. Which type of Azure Document Intelligence model would you recommend for these documents?
- ❏ A. Prebuilt layout model
- ❏ B. Prebuilt invoice model
- ❏ C. Custom template model
- ❏ D. Custom neural model
- ❏ E. Prebuilt contract model
A regional podcast network needs an automated transcription system that can label each participant and include timestamps for every spoken segment. Which service is designed to provide speaker separation and time aligned transcripts?
- ❏ A. Real time streaming transcription
- ❏ B. Conversation Transcription
- ❏ C. Speech SDK
- ❏ D. Batch transcription
A regional transport startup called MetroVista has gathered 120,000 street photos and each photo already has a short label from an image classifier such as “Stop sign” and “Limit 45”. You need to create an application that will analyze those labels to produce a report of traffic sign categories and how often each category appears while keeping costs to a minimum. Which Azure service should you choose?
- ❏ A. Azure OpenAI GPT-4-Turbo
- ❏ B. Azure AI Phi-3-mini
- ❏ C. Azure AI Document Intelligence
- ❏ D. Azure AI Language
Summit Analytics offers a Key Phrase Extraction API that has implementations in both C# and Python. An engineer asks which format the API returns responses in when invoked from the Python client.
- ❏ A. HTML
- ❏ B. Both C# and Python
- ❏ C. Python
- ❏ D. JSON
After uploading photos to a custom image dataset for a retail analytics pilot what is the reason for assigning labels to each photo?
- ❏ A. To group files for Cloud Storage lifecycle and access management
- ❏ B. To make it easier to find specific images by adding searchable keywords to filenames or metadata
- ❏ C. To annotate each image so the classifier can learn which category the example belongs to
- ❏ D. To generate a label frequency visualization after training
Northwave Books is planning to add personalized product suggestions for visitors on its ecommerce site. Which Azure service should the engineering team choose to build and deploy the recommendation models?
- ❏ A. Azure Databricks
- ❏ B. Azure Personalizer
- ❏ C. Azure AI Content Safety
- ❏ D. Azure Machine Learning
A scenario for Meridian Pet Supplies has a team building a containerized OCR application that uses Azure AI Services containers. The lead developer Mira receives a status of “Mismatch” when the container attempts to connect to the AI Services resource. She must fix the issue and ensure the container can authenticate and periodically send usage metrics to the Azure AI Services resource. What should Mira do?
- ❏ A. Verify that the container image implements the intended Azure AI Services API and that the container has network access to send telemetry to Azure
- ❏ B. Verify that the API key is issued for the correct type of Azure AI Services resource and that the key corresponds to the resource region
- ❏ C. Ensure that the Azure AI Services resource is in a running state in the Azure portal
- ❏ D. Review the billing subscription and usage quota and then increase the quota or change the pricing tier if necessary
An AI team at NovaSys is building an intent model with a language comprehension service. The model contains an intent called FindPerson, and the team provided training utterances such as “Locate people in Chicago” and “Who are my contacts in Boston”. You are instructed to implement these example phrases in the language comprehension tool, and the suggested plan is to create a new custom entity for the contacts domain. Does this approach satisfy the requirement?
- ❏ A. No, creating a custom entity will not satisfy the requirement
- ❏ B. Yes, creating a custom entity will satisfy the requirement
You manage an Azure Cognitive Search instance named SearchPro for Northridge Analytics and several applications connect to it. The security team requires that the service be inaccessible from the public internet and that each application can only perform the types of search operations that it needs. How can you satisfy these requirements? (Choose 2)
- ❏ A. Apply Azure role based access control to the search resource
- ❏ B. Configure IP firewall rules for the search service
- ❏ C. Set up a private endpoint for the SearchPro service
- ❏ D. Use key based authentication for search requests
BrightPath is developing an online training platform for remote students and they have noticed that some students leave their desk or become distracted for long stretches. The team needs to analyze each student webcam and microphone stream to confirm whether the student is physically present while minimizing development effort and supporting learner identification. Which Azure Cognitive Services service should be used to satisfy the requirement that from a student webcam stream the system can verify whether the student is present?
- ❏ A. Speech
- ❏ B. Text Analytics
- ❏ C. Face
After adding new question and answer entries to a customer support knowledge base at AtlasTech what step ensures the new items become available to end users?
- ❏ A. Retrain the model powering the knowledge base
- ❏ B. Publish the knowledge base
- ❏ C. Translate the question and answer pairs into other languages
- ❏ D. Run end to end tests against the Q and A endpoint
A digital services firm stores roughly 6,500 scanned invoice images in a network file repository and needs to process them to capture invoice line items, total sales figures and customer information. Which Azure service should be used to perform this analysis?
- ❏ A. Azure Computer Vision
- ❏ B. Azure Cognitive Search
- ❏ C. Azure Document Intelligence
- ❏ D. Custom Vision
A research team at Meridian Insights needs to process unstructured reports and wants to know which two kinds of recognition the entity extraction tool supports when analyzing text? (Choose 2)
- ❏ A. Entity normalization
- ❏ B. Named entity recognition
- ❏ C. Entity linking
- ❏ D. Entity resolution
- ❏ E. Entity relationships
A private group called Nova Trust built a large reinforced shelter for affluent clients because its leaders feared global catastrophe and wanted to offer comfortable long term refuge. In a current task you need to use the Translator service to change the Russian word “спасибо” written in Cyrillic characters into the Latin alphabet representation “spasibo”. Which Translator function should you use?
- ❏ A. Detect
- ❏ B. Cloud Translation API
- ❏ C. Translate
- ❏ D. Transliterate
- ❏ E. Convert
- ❏ F. None of the available options are correct
Which of the following capabilities is not provided by the Cognitive Services Face API?
- ❏ A. Face detection
- ❏ B. Face verification
- ❏ C. Face grouping
- ❏ D. Face morphing
- ❏ E. Face identification
A developer at Nimbus Analytics created a phrase extraction helper. The helper is defined as:

    def fetch_key_terms(text_client, text):
        response = text_client.extract_key_phrases(text, language="en")
        print("Key phrases")
        for phrase in response.key_phrases:
            print(f"\t{phrase}")

The developer calls fetch_key_terms(text_client, "the cat sat on the mat"). Will the invocation print the extracted key phrases to the console?
- ❏ A. No
- ❏ B. Yes
An Orion Cloud storage account contains a 30 GB video file named clip1.mp4 in a private folder and you must have Azure Video Indexer analyze clip1.mp4 through its web portal. What should you do?
- ❏ A. Create a preview sharing link in Orion Cloud and paste that link into Azure Video Indexer
- ❏ B. Upload clip1.mp4 to vimeo.com and then use the video page URL in Azure Video Indexer
- ❏ C. Download clip1.mp4 to a local workstation and then upload it to the Azure Video Indexer site
- ❏ D. Generate a direct download URL from Orion Cloud and enter that URL in Azure Video Indexer
We operate a conversational language solution for the online help desk at NorthBridge Apparel, and users report the assistant often replies with “Sorry, I don’t understand that.” You must improve the assistant so it better handles those unrecognized inputs. Which sequence of actions should you select from the list provided?
- ❏ A. Enable Log Analytics telemetry then review logged utterances and update the model then retrain and republish the language model
- ❏ B. Add vendor supplied domain models then review logged utterances and update the model then retrain and republish the language model
- ❏ C. Migrate authoring to a new authoring key then review logged utterances and update the model then retrain and republish the language model
- ❏ D. Enable active learning then review logged utterances and update the model then retrain and republish the language model
A manufacturing startup called NexaWorks ingests IoT readings from 120 production units over 18 months. Each unit has 42 different sensors that record values every 90 seconds, producing about 5,040 separate time series. You must identify abnormal readings in each time series to enable predictive maintenance. Which Azure service should you use?
- ❏ A. Azure Cognitive Search
- ❏ B. Azure Computer Vision
- ❏ C. Azure Time Series Insights
- ❏ D. Azure AI Anomaly Detector
Your team uses an Azure subscription that contains an Azure OpenAI resource for a customer assistant project. You will build an agent with the Azure AI Agent Service that must interpret typed and spoken questions create answers and present the answers as spoken audio. Which platform should you use to set up the agent project?
- ❏ A. Language Studio
- ❏ B. Speech Studio
- ❏ C. Azure AI Foundry
- ❏ D. Azure portal
Certification Practice Exam Questions Answered
VectorAI runs an Azure Cognitive Search instance called BeaconSearch that multiple client apps call. You need to ensure each app can only perform a specific subset of search operations. What should you implement?
- ✓ C. Assign Azure role based access control permissions to each application for the search resource
The correct option is Assign Azure role based access control permissions to each application for the search resource.
Assigning Azure role based access control permissions lets you grant each client application only the specific actions it needs on the search service. You can use built in roles or create custom roles to limit operations such as querying, indexing, or managing indexes and then assign those roles to the applications. When applications authenticate and present an identity the role assignments on the search resource determine which operations are allowed.
Create a private endpoint for the search service is incorrect because a private endpoint only restricts network access to the service. It does not provide fine grained authorization to control which search operations each application can perform.
Enable Azure Active Directory authentication and use managed identities is incorrect on its own because authentication proves identity but does not by itself limit which operations are permitted. You still need to assign RBAC roles to the identities to enforce per application permissions on the search resource.
Keep in mind that authentication and network controls are not the same as authorization, and when a question asks about restricting operations you should look for answers that mention role based access control.
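The per-application authorization idea can be sketched in a few lines. The role names below are real Azure built-in roles for Azure AI Search, but the permission mapping and the application names are simplified illustrations, not the actual Azure authorization engine.

```python
# Illustrative sketch only: models how RBAC role assignments limit which
# search operations each client application of BeaconSearch may perform.
# The operation sets here are a simplified approximation of the built-in roles.

ROLE_OPERATIONS = {
    "Search Index Data Reader": {"query"},
    "Search Index Data Contributor": {"query", "upload", "merge", "delete"},
    "Search Service Contributor": {"create_index", "delete_index", "manage_service"},
}

# Hypothetical role assignments for two client applications
ASSIGNMENTS = {
    "reporting-app": ["Search Index Data Reader"],
    "ingestion-app": ["Search Index Data Contributor"],
}

def is_authorized(app: str, operation: str) -> bool:
    """Return True if any role assigned to the app permits the operation."""
    return any(
        operation in ROLE_OPERATIONS.get(role, set())
        for role in ASSIGNMENTS.get(app, [])
    )

print(is_authorized("reporting-app", "query"))   # readers may query
print(is_authorized("reporting-app", "upload"))  # readers may not write documents
print(is_authorized("ingestion-app", "upload"))  # contributors may write
```

The point the sketch makes is that authentication only identifies the caller; it is the role assignment on the search resource that decides which operations succeed.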
You are advising a consultant team at SentinelGuard Technologies and you are working with Maria who leads the infrastructure group on building an index that contains a field named last_updated. The engineers must make sure that the last_updated field can be returned with search results. Which attribute should the engineers set on the last_updated field in the index definition?
- ✓ B. retrievable
The correct option is retrievable.
Setting the field as retrievable tells the indexing system to include the last_updated value in the documents returned with search results so that clients can see that field when they receive a result.
The retrievable attribute controls whether the field value is returned in query responses, and it is independent of how the field is searched, filtered, or ordered.
searchable marks a field for full text indexing so it can be matched by text queries and relevance scoring. It does not ensure the field is returned in search results unless it is also marked retrievable.
sortable enables ordering of results by that field. It is about result ordering and not about including the field value in the returned document, so it does not satisfy the requirement on its own.
filterable allows the field to be used in filter expressions to narrow query results. That capability does not cause the field to be included in the response, so it is not the correct attribute for returning the field with results.
When you need a field to be returned with search results remember that retrievable controls inclusion in the response while searchable, filterable and sortable control how you can query the field.
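The effect of the retrievable attribute can be simulated without calling Azure at all. This is a minimal sketch, not the Azure SDK: the field names and attribute flags mirror the scenario but are illustrative.

```python
# Sketch: models how the retrievable attribute decides which fields an
# index includes in query responses. Field definitions loosely follow the
# Azure Cognitive Search index schema shape; the names are made up.

index_fields = [
    {"name": "id", "type": "Edm.String", "retrievable": True},
    {"name": "content", "type": "Edm.String", "searchable": True, "retrievable": True},
    {"name": "last_updated", "type": "Edm.DateTimeOffset",
     "sortable": True, "filterable": True, "retrievable": True},
    {"name": "internal_score", "type": "Edm.Double", "retrievable": False},
]

def project_result(document: dict) -> dict:
    """Keep only the fields marked retrievable, as a search response would."""
    allowed = {f["name"] for f in index_fields if f.get("retrievable")}
    return {k: v for k, v in document.items() if k in allowed}

doc = {
    "id": "42",
    "content": "firmware release notes",
    "last_updated": "2024-05-01T00:00:00Z",
    "internal_score": 0.87,
}
result = project_result(doc)
print(result)  # last_updated is present, internal_score is not
```

Because last_updated is marked retrievable it survives the projection, while the non-retrievable internal_score never reaches the caller even though it can still be used for sorting or filtering internally.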
HelioChem is a multinational materials firm headquartered in Boston that produces polymers and agricultural chemicals for markets such as food processing, transportation, healthcare, and personal care. Its CEO is Maya Ross. A developer named Ethan Cole is implementing a conversational application that must interpret commands like “Book a flight to Prague.” How should Ethan model the city element of that command?
- ✓ C. Model the city as an entity
The correct option is Model the city as an entity.
You should model the city as an entity because entities represent the variable data that an intent needs to fulfill a request. In the example “Book a flight to Prague” the booking intent captures the user goal and the Model the city as an entity value provides the specific location parameter that is extracted from the utterance.
Treat the city as an utterance is incorrect because an utterance is the whole user input and not the structured parameter you extract. The system extracts entities from the utterance rather than treating the parameter itself as the utterance.
Dialogflow is incorrect as an answer because it names a platform and not a modeling choice for the city element. Choosing a platform does not specify whether the city is captured as an intent or an entity.
Mark the city as an intent is incorrect because intents represent user goals such as booking a flight and they are not used to store variable values like city names. The city should be a parameter that an intent uses rather than being an intent itself.
Classify the city as an execution is incorrect because execution is not the standard term used for representing parameters in conversational models and it does not describe how to extract or validate a location value.
None of the above options is incorrect because one of the provided options is the correct modeling approach and that approach is to treat the city as an entity.
When you see a choice between intent and entity remember that intents capture the user goal and entities capture the specific values such as locations, dates, and names.
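The intent-versus-entity split can be made concrete with the labeled-utterance format used by Conversational Language Understanding. The intent name BookFlight is an assumed label for illustration; the offsets are computed for this exact utterance.

```python
# Sketch: a labeled utterance ties the user goal (intent) to the variable
# value (entity) via a character offset and length into the text.
# "BookFlight" is a hypothetical intent name chosen for this example.

labeled = {
    "text": "Book a flight to Prague",
    "intent": "BookFlight",           # the user goal
    "language": "en-us",
    "entities": [
        {"category": "city", "offset": 17, "length": 6},  # the variable value
    ],
}

def entity_text(utterance: dict, entity: dict) -> str:
    """Slice the entity's surface text out of the utterance by offset and length."""
    start = entity["offset"]
    return utterance["text"][start:start + entity["length"]]

city = entity_text(labeled, labeled["entities"][0])
print(city)  # Prague
```

Swapping Prague for any other city changes only the entity span, not the intent, which is exactly why the city belongs in an entity rather than an intent of its own.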
Nimbus Solutions has a technical product support guide for a networking appliance. The engineering team needs to build a conversational support assistant from the guide while minimizing development time and cost. Which Azure service is the best choice for this scenario?
- ✓ C. Azure AI Language custom question answering
Azure AI Language custom question answering is the best choice for this scenario.
This service is designed to ingest manuals and knowledge bases and to provide a managed conversational question answering experience with built in retrieval and response synthesis. It minimizes development work because it handles document ingestion, retrieval, and answer generation so the engineering team does not need to build and tune a retrieval plus prompt orchestration stack themselves. It is also cost efficient for support workloads because you can rely on the managed features rather than running a large general purpose model continuously.
Azure OpenAI GPT-4 with retrieval from Azure Cognitive Search is not the best fit because it requires more engineering effort to orchestrate retrieval, design prompts, and manage context windows. It also tends to have higher per request costs when using GPT-4 compared with a purpose built custom question answering solution.
Azure Cognitive Search with semantic ranking by itself is not sufficient because it is a search service and does not provide the conversational answer synthesis and follow up handling that a custom question answering service provides. It can be a useful component but it would require additional development to turn search results into conversational answers.
Azure AI Phi-3-medium with fine-tuning is not ideal because fine tuning a base model adds development time and ongoing maintenance and it usually costs more effort and money than using a managed question answering product that is optimized for document grounded support scenarios.
When an exam scenario asks for a quick low cost conversational assistant built from product docs choose managed question answering or RAG services rather than raw LLMs or standalone search.
A common pattern for intelligent applications lets users ask natural language questions and receive helpful answers, which converts a static FAQ into a conversational interface. How can you let people query your published knowledge base by email?
- ✓ C. Deploy a bot from the knowledge base and configure an email channel for the bot
The correct answer is Deploy a bot from the knowledge base and configure an email channel for the bot.
Deploying a bot from your knowledge base publishes the Q and A content through the Bot Framework so it can be connected to supported channels. The Email channel for a bot lets the bot receive messages sent to a mailbox and send replies back by email so users can query the knowledge base using regular email clients.
Enable Active Learning for the knowledge base and enable auto-respond so messages are sent to the account email is incorrect because Active Learning helps improve answer ranking and suggestions and it does not provide an automatic email delivery mechanism. Also features tied to older QnA Maker workflows such as Active Learning have been superseded by the newer Language service question answering approaches on Azure.
Add Small Talk to the knowledge base is incorrect because Small Talk provides conversational niceties and canned responses and it does not create an email interface for users to query the knowledge base.
Use Power Automate or Logic Apps to query the knowledge base endpoint and forward replies by email is incorrect for this question because it describes a custom integration approach rather than the supported, built in pattern. While you could build a flow to call the endpoint and email the result, the standard and simpler solution is to publish the knowledge base as a bot and enable the Email channel.
When a question asks how to expose a knowledge base over email look for an option that uses bot channels because the email channel is the built in way to send and receive messages by email.
BrightPath Learning adopted Azure AI Immersive Reader to raise comprehension for entry level students and to modernize its instructional workflows. Which two capabilities of Azure AI Immersive Reader are used to make reading more accessible for learners?
- ✓ B. Translation and Text to Speech
Translation and Text to Speech is correct because Immersive Reader combines language support and read aloud functionality to make text more accessible for learners.
Text to Speech provides read aloud with synchronized highlighting which helps students follow along, improves decoding and fluency, and supports learners with reading difficulties or visual challenges.
Translation offers inline translation or language switching so multilingual students can access content in their preferred language and build comprehension while they learn new vocabulary.
Speech to Text and Translation is incorrect because Immersive Reader does not perform speech to text transcription. Translation is an Immersive Reader feature but speech to text is part of the Speech Services transcription product and not a capability of Immersive Reader.
Text to Speech and Custom Vocabulary is incorrect because although text to speech is provided by Immersive Reader, custom vocabulary is not. Custom vocabulary is a feature of speech recognition and transcription services rather than of Immersive Reader.
Speech to Text and Line Focus is incorrect because line focus is indeed an Immersive Reader feature but speech to text is not. The pair is therefore not the correct combination of two Immersive Reader capabilities that increase reading accessibility.
Focus on whether both items in an option are features of Immersive Reader. If one item belongs to Speech Services or another product then the pair is likely incorrect. Remember that translation and text to speech are core Immersive Reader accessibility features.
Asteria Robotics is a manufacturing firm located in Red Hook Brooklyn New York and it is led by Maya Jensen. The business is growing quickly and the IT staff has asked you to advise them on building a knowledge store. The project requires the knowledge store to contain a normalized relational schema for the enriched content. Which kind of projection should the team define?
-
✓ C. Table projection
The correct option is Table projection.
Table projection is used when you need the knowledge store to contain a normalized relational schema for enriched content. It maps extracted structured data into tables with rows and columns so the information can be represented as relational entities and queried or joined like a traditional database.
Table projection is ideal for analytics and downstream relational processing because it enforces a tabular schema and supports normalized relations rather than storing nested or blob data.
BigQuery is a separate data warehouse service and not a projection type inside the knowledge store. You may export to BigQuery but it does not define the projection used to create a normalized schema within the store.
Object projection creates object or entity documents with nested attributes and so it does not produce a normalized set of relational tables.
File projection retains files or blobs and is intended for storing documents rather than converting content into a normalized relational schema.
Shaper skill is a processing or transformation component used to reshape or enrich content and it does not itself define a storage projection for normalized relational tables.
Split projection divides documents into multiple parts for indexing or processing and it does not create a normalized relational table schema for enriched content.
When a question asks about storing enriched content in a normalized relational schema look for projection names that mention tables or relational structure and watch for product names like BigQuery which are export destinations rather than projection types.
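The table projection described above is declared in the knowledge store section of a skillset definition. A minimal sketch of that shape follows; the table names, source field paths, and storage connection are hypothetical placeholders, not values from a real service.

```python
# Illustrative knowledge store definition with table projections.
# Names, source paths, and the connection string are placeholders.
knowledge_store = {
    "storageConnectionString": "<storage-connection-string>",
    "projections": [
        {
            "tables": [
                {"tableName": "Documents", "generatedKeyName": "DocumentId",
                 "source": "/document/tableprojection"},
                {"tableName": "KeyPhrases", "generatedKeyName": "KeyPhraseId",
                 "source": "/document/tableprojection/keyPhrases/*"},
            ],
            # Sibling projection kinds stay empty when only a
            # normalized relational shape is required.
            "objects": [],
            "files": [],
        }
    ],
}

table_names = [t["tableName"] for t in knowledge_store["projections"][0]["tables"]]
print(table_names)
```

Splitting key phrases into their own table with a generated key is what produces the normalized parent and child relation the question asks about.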
Fill in the blank for this sentence in the context of Summit Cloud services. The response from the [ ? ] provides a list of potentially unwanted words found in the content and it indicates which categories of unwanted words were detected and any possible personally identifiable information located in the text. What term completes the sentence?
-
✓ D. Text Moderation API
Text Moderation API is correct because the response from that service returns a list of potentially unwanted words found in the content and it indicates which categories of unwanted words were detected as well as any possible personally identifiable information located in the text.
The Text Moderation API is intended for content moderation workflows and it provides matched terms and category labels that help callers decide whether to block, flag, or redact content. The API is therefore the best fit for the description in the question.
Cloud Data Loss Prevention API is incorrect because that service focuses on detecting, classifying, and transforming sensitive data such as PII for data protection and redaction rather than providing a moderation-style list of unwanted words and category labels for policy enforcement.
Content Filter API is incorrect because that name is a generic term and does not match the documented Summit Cloud service that returns categorized unwanted-word matches and PII hints. The question expects the specific product name which is the Text Moderation API.
Text Filter API is incorrect because it is not the documented name of the service described. The described behavior maps to the Text Moderation API rather than a differently named filter API.
When a question describes specific outputs such as a list of matched words, content categories, or PII hints look for the option that exactly matches that functionality and focus on the keywords unwanted words and personally identifiable information.
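The response shape described above can be sketched as follows. The field names here are illustrative of a text moderation payload with matched terms, category scores, and PII hints, and are not guaranteed to match any specific service version verbatim.

```python
import json

# Hypothetical (not verbatim) moderation response with matched unwanted
# terms, category classification, and detected PII.
raw = json.dumps({
    "Terms": [{"Term": "badword", "OriginalIndex": 12}],
    "Classification": {"Category1": {"Score": 0.92}},
    "PII": {"Email": [{"Text": "user@example.com", "Index": 30}]},
})

response = json.loads(raw)
matched_terms = [t["Term"] for t in response.get("Terms", [])]
pii_found = bool(response.get("PII"))
print(matched_terms, pii_found)
```

A caller would use the matched terms and category scores to decide whether to block, flag, or redact, and the PII section to trigger redaction.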
The Hearthwell Foundation feared a Cold War era catastrophe and under director Marina Cole they built an extensive shelter to host wealthy patrons. On your current assignment you must create an alert that notifies the team when a subscription key is regenerated and that event appears in the activity log for a particular Azure AI Services resource. Which action should you take?
-
✓ C. Define a condition that uses an Activity Log signal type
The correct option is Define a condition that uses an Activity Log signal type.
This option is correct because subscription key regeneration is a control plane operation and it is recorded in the Azure Activity Log. Using an Activity Log signal type lets you detect administrative and management events by matching the operation name or other event properties so the alert fires when the key regeneration event appears.
Use an Azure Logic App action to read the activity log is incorrect because a Logic App can respond to events or query logs but it is not the way to create a native alert rule. You should create an Activity Log alert in Azure Monitor rather than relying on a separate workflow to poll or read the log for notifications.
Create a condition that is based on resource metrics is incorrect because metrics are time series numeric telemetry for resource performance and health. Resource metrics do not record management operations like subscription key regeneration so a metrics based condition will not catch that event.
Set the alert scope to the Activity Log is incorrect because the Activity Log is a signal type rather than an alert scope. Alert scope is defined by the subscription or resources you monitor and you must choose the Activity Log signal when defining the condition to target control plane events.
When you need to alert on control plane actions use the Activity Log signal type and filter by the operation name or event properties so the rule triggers on exact management events like key regeneration.
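The condition described above can be sketched as an alert rule payload. The operationName value shown is an assumption about the key regeneration control plane action; verify the exact operation name against an actual event in your activity log before relying on it.

```python
# Sketch of an Activity Log alert rule condition for Azure Monitor.
# Scopes, action group id, and the operationName are assumptions.
alert_rule = {
    "location": "global",
    "properties": {
        "scopes": ["/subscriptions/<subscription-id>"],
        "condition": {
            "allOf": [
                # Activity Log signal: match administrative control plane events.
                {"field": "category", "equals": "Administrative"},
                {"field": "operationName",
                 "equals": "Microsoft.CognitiveServices/accounts/regenerateKey/action"},
            ]
        },
        "actions": {"actionGroups": [{"actionGroupId": "<action-group-id>"}]},
    },
}

fields = [c["field"] for c in alert_rule["properties"]["condition"]["allOf"]]
print(fields)
```

The important part for the exam is that the condition filters on the Activity Log category and operation name rather than on any metric.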
You are building a conversational assistant for NovaAnalytics using the Conversational Language Understanding service. Below is the labeled utterance JSON that will be uploaded. { "text": "engage the lamp", "intent": "switch_on", "language": "en-us", "entities": [ { "category": "device", "offset": 11, "length": 4 } ] } What does the offset value represent?
-
✓ B. Character index where the entity begins in the utterance
The correct answer is Character index where the entity begins in the utterance.
In the example the utterance “engage the lamp” uses zero based character indexing so the offset value 11 points to the letter “l” where the entity “lamp” begins. The entities entry uses the offset to mark the starting character position and the length field to indicate how many characters the entity covers.
Byte offset of the entity in the UTF-8 encoded payload is incorrect because the service records character positions and not raw byte counts and UTF-8 uses variable length encoding so bytes and characters can differ for non English text.
Total number of characters in the entire utterance is incorrect because offset identifies the start position of the entity and not the overall utterance length and the length field gives the entity size.
When checking labeled utterances confirm whether offsets are character indices and whether indexing is zero based by counting characters including spaces in the example utterance.
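The offset arithmetic above can be checked directly: with zero based character indexing, slicing the utterance at the offset for the given length recovers the entity text.

```python
# Verify the offset and length from the labeled utterance.
text = "engage the lamp"
offset, length = 11, 4

# Zero based character slice: positions 11 through 14 cover "lamp".
entity = text[offset:offset + length]
print(entity)  # → lamp
```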
True or False: When you provision an Azure Cognitive Search resource you must select the correct pricing tier because the tier cannot be changed later, and if the tier no longer fits you must deploy a new search resource and recreate all indexes and related objects.
-
✓ B. True
True is correct because Azure Cognitive Search does not let you change the pricing tier of an existing search service and you must create a new service and migrate or recreate indexes and related objects if you need a different tier.
You can scale capacity within the same tier by changing the number of replicas and partitions to handle more queries or storage. Those scaling operations do not change the pricing tier. To move to a different tier you must provision a new Azure Cognitive Search service in the desired tier and then copy or recreate index definitions and reindex your data, which you can automate with the REST API or SDKs and by using indexers where appropriate.
False is incorrect because it suggests you can change the tier in place without deploying a new service and migrating indexes, which is not supported. Remember that scaling replicas and partitions is possible but that does not equate to changing the pricing tier.
Check the pricing tier and expected scale requirements before creating an Azure Cognitive Search service because changing the tier later requires provisioning a new service and migrating indexes.
You manage an Azure subscription for Meridian Analytics and the subscription includes an Azure OpenAI resource that runs a GPT-3.5 Small model named SolverV2. You set SolverV2 to use the system message “You are an AI tutor that helps users solve number puzzles. Explain your solutions as if the question comes from a three year old.” Which prompt engineering technique does this represent?
-
✓ C. System message priming
The correct answer is System message priming.
This is an example of System message priming because you supply a system message that defines the assistant role and the style of its responses. A system message sets high level behavior and constraints so the model adopts the requested persona and tone throughout the conversation. Telling the model to act as a tutor and to explain things as if the question comes from a three year old is a direct use of a system message to prime response style.
Chain of thought prompting is incorrect because that technique asks the model to produce explicit step by step reasoning or to reveal intermediate thoughts. The scenario does not ask the model to show its internal reasoning steps so it is not chain of thought prompting.
Few shot learning is incorrect because that approach provides example input and output pairs to teach the model by demonstration. The scenario does not include any example demonstrations and only sets a role and tone so it is not few shot learning.
When a prompt explicitly sets the assistant role or tone look for system message priming. If the prompt contains example input output pairs it is usually few shot learning. If the prompt asks the model to show step by step reasoning it is likely chain of thought prompting.
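The priming described above is just the first message in a chat style request body. A minimal sketch, with the user question and request parameters made up for illustration:

```python
# System message priming: the system role sets persona and tone for the
# whole conversation before any user turn.
messages = [
    {"role": "system",
     "content": ("You are an AI tutor that helps users solve number puzzles. "
                 "Explain your solutions as if the question comes from a "
                 "three year old.")},
    {"role": "user", "content": "What is 7 plus 5?"},
]

request_body = {"messages": messages, "max_tokens": 200}
print(request_body["messages"][0]["role"])  # → system
```

Contrast this with few shot learning, which would instead add example user and assistant message pairs after the system message.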
A team at NovaVoice is building an intent recognition system and they want to collect user interactions to support active learning. What change should they make to the prediction request to enable logging of interactions?
-
✓ C. Add log=true to the prediction request
Add log=true to the prediction request is correct because it explicitly enables the prediction API to record the user interaction so the team can collect examples for active learning.
When you set Add log=true to the prediction request the service will persist the query and the prediction outcome in its conversation logging store so those records can be reviewed, annotated, and used to improve the intent model.
Use show_all_intents=true in the prediction request is incorrect because that parameter only changes the response to include multiple intent candidates and their scores and it does not cause the interaction to be saved for active learning.
Enable sentiment analysis is incorrect because adding sentiment analysis provides sentiment metadata about a request but it does not instruct the API to log the full interaction for later review and training.
Turn on Cloud Audit Logs for the agent is incorrect because audit logs capture administrative and API access events and they are not intended to collect conversational content in the way that the prediction request logging flag does.
When a question asks about capturing interactions look for an API parameter that explicitly enables logging and avoid confusing that with audit logs or unrelated analysis features.
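The logging flag above rides along as a query parameter on the prediction call. The sketch below follows the general shape of a LUIS style prediction URL; the host, app id, and slot are placeholders and should be checked against the service you actually call.

```python
from urllib.parse import urlencode

# Build a prediction request URL with interaction logging enabled.
# Host, app id, and key are placeholders for illustration.
base = ("https://<resource>.cognitiveservices.azure.com"
        "/luis/prediction/v3.0/apps/<app-id>/slots/production/predict")
params = {
    "query": "turn on the lamp",
    "log": "true",              # persist the interaction for active learning
    "subscription-key": "<key>",
}
url = f"{base}?{urlencode(params)}"
print("log=true" in url)  # → True
```

Without the log parameter the service still answers the query but the interaction is not retained for later review and labeling.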
Background The Meridian Precision Works was the engineering firm that Lady Isabella Ferrer inherited from her late partner and she has asked her team to begin experiments with Azure OpenAI. The provider supplies base models and the option to create custom deployments and the project lead Alex Mercer plans to pick a base model and deploy it. The Azure OpenAI Studio provides several playgrounds with adjustable parameters and Alex needs to confirm which resource values are required to call the Azure OpenAI resource. Which resource values are required to make requests to the Azure OpenAI resource?
-
✓ D. Key Endpoint and Deployment name
The correct option is Key Endpoint and Deployment name.
To call the Azure OpenAI resource you must provide a way to authenticate the call, the resource endpoint to send the request to, and the deployment identifier that selects which model to run. The chosen option supplies those required values by including the API key for authentication, the endpoint URL for routing, and the deployment name that identifies the deployed model.
Completion ChatCompletion and Embeddings is incorrect because it lists model capabilities or response types rather than the authentication and routing values that a request must include.
Deployment name Endpoint and Azure AD token is incorrect because it replaces the API key with an Azure AD token. Azure AD authentication can be supported in some scenarios but the typical required resource values examined in this question include the resource key, and the option omits that key requirement.
Key Chat Embedding and Endpoint is incorrect because it mixes an authentication key with feature names instead of providing the deployment name. You still need the deployment identifier to select the specific model deployment when making a request.
When a question asks which parameters are required for an API call focus on the minimum pieces needed to authenticate and route the request such as the key, the endpoint, and the deployment or model id.
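The three required values map directly onto a REST call. In this sketch the endpoint host, api-version, and key are placeholders; the deployment name reuses one from earlier in this document for illustration.

```python
# Assemble the three required values: key (authentication), endpoint
# (routing), and deployment name (model selection). Placeholders only.
endpoint = "https://<resource-name>.openai.azure.com"
deployment = "SolverV2"
api_version = "2024-02-01"

url = (f"{endpoint}/openai/deployments/{deployment}"
       f"/chat/completions?api-version={api_version}")
headers = {"api-key": "<your-resource-key>",
           "Content-Type": "application/json"}
print(deployment in url, "api-key" in headers)  # → True True
```

Notice that capability names like Completion or Embeddings never appear as credentials; they are implied by the path and the deployed model.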
A startup named HarborCloud is training an intent classifier for its support chatbot and the initial intent called RetrieveSupportContact contains 250 example utterances. I want to reduce the chance that unrelated user messages are incorrectly classified as that intent. What action should I take?
-
✓ C. Add negative examples to the None intent training set
Add negative examples to the None intent training set is the correct action to reduce the chance that unrelated user messages are incorrectly classified as that intent.
Adding negative examples to the None intent training set teaches the classifier which utterances should not map to the RetrieveSupportContact intent. This improves the model’s precision because the classifier learns boundaries between the target intent and unrelated queries when you include representative negative examples and then retrain the agent.
Increase the intent classification confidence threshold is not the best choice because changing a runtime threshold can reduce false positives but it also increases the chance of false negatives and it does not improve the underlying model. It is a blunt instrument rather than a training based fix.
Create a machine learned entity is incorrect because entities are used to extract parameters from a matched intent and they do not influence whether an utterance is classified into a particular intent. Building an ML entity does not address misclassification of whole intents.
Enable active learning is not the immediate solution because active learning helps you collect and label ambiguous or high value examples over time. It does not by itself stop current unrelated messages from being classified as the intent until you review and add those examples to the training set.
When you need to reduce false positives add representative negative examples to your catchall or None intent and retrain the agent so the classifier learns what does not belong to the target intent.
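The fix above amounts to widening the None intent's training set. A minimal sketch, with an illustrative payload shape and made up utterances:

```python
# Illustrative training data: the target intent keeps its utterances and
# the None intent gains representative negative examples before retraining.
training_data = {
    "RetrieveSupportContact": [
        "how do I reach support",
        "give me the help desk number",
    ],
    "None": [
        "what is the weather tomorrow",
        "tell me a joke",
        "order a pizza",
    ],
}
print(len(training_data["None"]))  # → 3
```

The negative examples should resemble real off topic traffic so the classifier learns a boundary rather than memorizing arbitrary noise.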
You manage an Azure account for a mid sized staffing company named AgileHire. You need to implement a system to process scanned job application forms and persist the extracted fields to a relational database. The forms use a consistent layout across submissions and you must minimize developer effort and operating costs. Which type of Azure Document Intelligence model would you recommend for these documents?
-
✓ C. Custom template model
The correct option is Custom template model.
Custom template model is built for forms that share a consistent layout and it extracts named fields with minimal labeled examples. This approach reduces developer effort and operating cost because you can train a template with only a few samples and the model directly returns structured fields that are ready to persist to a relational database.
Prebuilt layout model extracts general text blocks, tables, and layout coordinates but it does not provide reliable named field mapping for a fixed job application form. Using it would require additional postprocessing to map extracted text to specific fields.
Prebuilt invoice model is specialized for invoice fields and formats so it will not match the fields found on job application forms.
Custom neural model is intended for highly variable document layouts and it typically needs more training data and compute to achieve good results. That makes it unnecessary and more costly for consistently laid out application forms.
Prebuilt contract model is focused on contracts and clause extraction and it does not target the field extraction needed for job application forms.
When documents use the same layout prefer template models to minimize labeled samples and development work. Choose a neural model only when layouts vary widely and templates cannot cover the variation.
A regional podcast network needs an automated transcription system that can label each participant and include timestamps for every spoken segment. Which service is designed to provide speaker separation and time aligned transcripts?
-
✓ B. Conversation Transcription
The correct answer is Conversation Transcription.
Conversation Transcription is specifically designed to perform multi speaker diarization and to produce time aligned transcripts that include speaker labels for each spoken segment. It is intended for conversation and meeting scenarios and returns transcripts with timestamps and clear speaker separation which is what the podcast network requires.
Conversation Transcription can be used in real time for live conversations and it integrates with client libraries and APIs to deliver labeled, time stamped output suitable for multi participant audio like podcasts.
Real time streaming transcription focuses on low latency conversion of live audio to text and it does not inherently provide the same level of speaker separation and per segment speaker labeling that Conversation Transcription provides. It is mainly about speed rather than in depth conversation analysis.
Speech SDK is a client library that developers use to call speech services and it is not itself a transcription mode that automatically returns speaker separated, time aligned transcripts. The SDK is the tool used to access services like Conversation Transcription rather than being the feature that performs diarization.
Batch transcription is intended for asynchronous processing of large or long audio files and it emphasizes throughput and offline processing. It is not the best choice when the requirement is built in speaker separation with precise, time aligned speaker labels for conversational audio.
When a question mentions speaker separation or time aligned transcripts look for services that explicitly advertise conversation or diarization features as the most likely correct answer.
A regional transport startup called MetroVista has gathered 120000 street photos and each photo already has a short label from an image classifier such as “Stop sign” and “Limit 45”. You need to create an application that will analyze those labels to produce a report of traffic sign categories and how often each category appears while keeping costs to a minimum. Which Azure service should you choose?
-
✓ D. Azure AI Language
The correct answer is Azure AI Language.
Azure AI Language is designed for scalable text analysis and it can ingest your existing short labels to classify and normalize them, extract key phrases, and produce frequency counts across categories while keeping operational costs low. Its prebuilt text analytics and custom classification features make it a cost effective choice for batch aggregation and reporting of simple labels.
Azure OpenAI GPT-4-Turbo is a powerful generative model but it would be overkill for simple label counting and categorization and it is generally more expensive to run than a dedicated text analytics service.
Azure AI Phi-3-mini is also a general purpose large language model aimed at conversational and generative tasks and it is not the most cost effective tool for bulk analysis of short classifier labels.
Azure AI Document Intelligence focuses on extracting structured data from scanned documents, forms, and images and it is targeted at OCR and layout extraction rather than analyzing short textual labels you already have.
When a question mentions existing short text labels focus on the underlying data type and choose a specialized text analytics service rather than a general purpose LLM to keep cost and complexity down.
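The core of the report described above is a frequency count over the existing labels, which is cheap regardless of which service performs it. A minimal sketch with made up sample labels:

```python
from collections import Counter

# Frequency count over existing classifier labels. Sample labels are
# made up; in practice they would come from the 120000 photo dataset.
labels = ["Stop sign", "Limit 45", "Stop sign", "Yield", "Stop sign"]

category_counts = Counter(labels)
print(category_counts.most_common(1))  # → [('Stop sign', 3)]
```

At scale the same counting happens server side, but the point stands: the task is text aggregation, not generation, so a text analytics service keeps costs down.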
Summit Analytics offers a Key Phrase Extraction API with implementations in both C# and Python. An engineer asks which format the API returns responses in when invoked from the Python client.
-
✓ D. JSON
The correct answer is JSON.
The Key Phrase Extraction API returns structured data in JSON format when called from the Python client. The client library parses the JSON payload into native Python types so developers work with dictionaries and lists rather than raw markup or free text.
The HTML option is incorrect because the service returns machine readable structured data rather than presentation markup. An extraction API provides key phrases in JSON so they are easy to consume programmatically.
The Both C# and Python option is incorrect because it names client languages rather than a response format. The actual response format is JSON regardless of whether a C# SDK or a Python SDK is used.
The Python option is incorrect because the question asks for the response format and not the client language. JSON is the data format that the API returns.
Focus on whether the question asks for a data format or a programming language and remember that web APIs commonly return JSON which client libraries parse into native types.
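The parsing behavior described above is easy to see with a hypothetical response payload; the field names here are illustrative, not a documented schema.

```python
import json

# Hypothetical key phrase response showing why JSON is convenient: the
# client parses it straight into native Python dicts and lists.
raw = ('{"documents": [{"id": "1", '
       '"keyPhrases": ["quarterly revenue", "cloud migration"]}]}')

parsed = json.loads(raw)
phrases = parsed["documents"][0]["keyPhrases"]
print(phrases)  # → ['quarterly revenue', 'cloud migration']
```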
After uploading photos to a custom image dataset for a retail analytics pilot what is the reason for assigning labels to each photo?
-
✓ C. To annotate each image so the classifier can learn which category the example belongs to
To annotate each image so the classifier can learn which category the example belongs to is correct.
Labeling images provides the ground truth that a supervised classifier needs so it can learn the relationship between image features and categories. Without those annotations the model cannot be trained to predict the correct class for new images.
Labels are also used for evaluation and quality checks. They let you compute loss and accuracy during training and help you identify class imbalance or mislabeled examples before you deploy a model.
To group files for Cloud Storage lifecycle and access management is incorrect because storage lifecycle rules and access controls are handled by Cloud Storage settings and IAM, and not by dataset labels.
To make it easier to find specific images by adding searchable keywords to filenames or metadata is incorrect because searchable metadata can help with organization, but the question is about assigning labels for model training and not about file discovery.
To generate a label frequency visualization after training is incorrect because visualizations can be created as a byproduct, but that is not the reason labels are assigned. The primary purpose is to provide training targets for the classifier.
When a question mentions a dataset and a classifier think about supervised learning and whether labels serve as ground truth for training and evaluation.
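The evaluation role of labels mentioned above can be shown with a tiny accuracy check; the label and prediction values are made up for illustration.

```python
# Labels are the ground truth for both training and evaluation.
# Minimal accuracy computation against illustrative predictions.
labels      = ["shelf", "aisle", "shelf", "checkout"]
predictions = ["shelf", "shelf", "shelf", "checkout"]

accuracy = sum(l == p for l, p in zip(labels, predictions)) / len(labels)
print(accuracy)  # → 0.75
```

Without the labeled column there would be nothing to compare predictions against, which is why the annotations come first.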
Northwave Books is planning to add personalized product suggestions for visitors on its ecommerce site. Which Azure service should the engineering team choose to build and deploy the recommendation models?
-
✓ D. Azure Machine Learning
The correct answer is Azure Machine Learning.
Azure Machine Learning provides an end to end platform for building, training, and deploying machine learning models at scale. It includes managed compute, experiment tracking, pipelines for MLOps, and scalable deployment endpoints, which make it well suited for developing personalized recommendation models for an ecommerce site.
Azure Databricks is primarily a data engineering and analytics platform that is excellent for feature engineering and interactive model development but it is not the managed service that handles the full ML lifecycle and production deployment in the same integrated way as Azure Machine Learning.
Azure Personalizer is a managed cognitive service that offers a real time ranking API for personalization and it is useful for online decisioning. It is not intended as a full platform for building, training, and deploying custom recommendation models end to end.
Azure AI Content Safety focuses on detecting and moderating harmful or unsafe content and it is unrelated to creating recommendation systems.
Read the question carefully to see if it asks for end to end model development and deployment or for a single runtime capability. Pick a managed ML platform when the requirement covers building, training, versioning and production deployment of models.
A scenario for Meridian Pet Supplies has a team building a containerized OCR application that uses Azure AI Services containers. The lead developer Mira receives a status of “Mismatch” when the container attempts to connect to the AI Services resource. She must fix the issue and ensure the container can authenticate and periodically send usage metrics to the Azure AI Services resource. What should Mira do?
-
✓ B. Verify that the API key is issued for the correct type of Azure AI Services resource and that the key corresponds to the resource region
Verify that the API key is issued for the correct type of Azure AI Services resource and that the key corresponds to the resource region is correct.
This issue typically means the container can reach the service but the credentials or region do not match the resource. The container must present a key that was issued for the same kind of Azure AI Services resource and the same region endpoint. If the key does not match the resource type or region the service reports a mismatch and authentication and telemetry reporting will fail.
Verify that the container image implements the intended Azure AI Services API and that the container has network access to send telemetry to Azure is not the best answer. Implementing the correct API and having network access are important for functionality and telemetry, but they do not explain a mismatch authentication status which points to a key or region problem.
Ensure that the Azure AI Services resource is in a running state in the Azure portal is incorrect because Azure AI Services resources are managed platform services and do not require a separate running state like a virtual machine. A running state would not produce a mismatch error that is tied to credential or region pairing.
Review the billing subscription and usage quota and then increase the quota or change the pricing tier if necessary is not correct because billing or quota problems produce quota or billing errors rather than an authentication mismatch. Adjusting billing will not resolve a key or region mismatch that blocks authentication and telemetry.
When you see a Mismatch status check the resource key and the resource region first and confirm the container is using the exact key and endpoint for that resource.
An AI team at NovaSys is building an intent model with a language comprehension service. The model contains an intent called FindPerson and the team provided training utterances such as “Locate people in Chicago” and “Who are my contacts in Boston”. You are instructed to implement these example phrases in the language comprehension tool and the suggested plan is to create a new custom entity for the contacts domain. Does this approach satisfy the requirement?
-
✓ A. No creating a custom entity will not satisfy the requirement
No creating a custom entity will not satisfy the requirement is the correct choice.
Creating only a custom entity will not meet the needs because the example phrases require both intent recognition and entity extraction. The FindPerson intent must be trained with annotated utterances and the model must extract the location and the contact reference separately so that the system can perform a lookup or a filter operation against the contacts store.
A single custom entity is useful for labeling tokens but it does not provide the intent classification, the use of system location entities, or the backend integration needed to resolve actual contacts. You should annotate training phrases, use appropriate system entities for cities when available, and implement fulfillment or a lookup table to return the correct people from the contact database.
Yes creating a custom entity will satisfy the requirement is incorrect because it assumes that adding an entity alone teaches the model the intent and resolution behavior. Entities only mark spans in text and do not replace training the intent or implementing the logic to query the contacts data and handle locations.
Focus on whether a proposed change addresses both intent training and entity extraction and whether it includes integration with backend data. If a solution only adds an entity it often will be insufficient.
You manage an Azure Cognitive Search instance named SearchPro for Northridge Analytics and several applications connect to it. The security team requires that the service be inaccessible from the public internet and that each application can only perform the types of search operations that it needs. How can you satisfy these requirements? (Choose 2)
-
✓ C. Set up a private endpoint for the SearchPro service
-
✓ D. Use key based authentication for search requests
The correct answers are Set up a private endpoint for the SearchPro service and Use key based authentication for search requests.
The Set up a private endpoint for the SearchPro service option ensures the search service is placed on a private IP within your virtual network by using Azure Private Link. This removes public network exposure so clients must be on the permitted virtual network or connected networks to reach the service and it meets the requirement to make the service inaccessible from the public internet.
The Use key based authentication for search requests option provides data plane credentials that are scoped by type, for example query keys for read and search operations and admin keys for full control. Issuing only the appropriate key to each application enforces least privilege so each app can perform only the types of search operations it needs.
Apply Azure role based access control to the search resource is incorrect because Azure RBAC controls management plane permissions for the resource and does not by itself restrict which data plane search operations a client can perform in production scenarios. RBAC does not replace query keys for limiting search request capabilities.
Configure IP firewall rules for the search service is incorrect because while IP rules can restrict access to specific IP ranges they do not remove the public endpoint and they require maintaining allowed IP lists. This approach is more brittle and does not block public internet access in the way a private endpoint does.
When a question separates network isolation and request permissions pick the option that gives private network access for isolation and the option that gives least privilege keys for data plane access.
BrightPath is developing an online training platform for remote students and they have noticed that some students leave their desk or become distracted for long stretches. The team needs to analyze each student webcam and microphone stream to confirm whether the student is physically present while minimizing development effort and supporting learner identification. Which Azure Cognitive Services service should be used to satisfy the requirement that from a student webcam stream the system can verify whether the student is present?
-
✓ C. Face
The correct option is Face.
The Face service analyzes images and video streams to detect and locate faces and to verify or identify people against an enrolled group. It can be used to confirm that the same student appears in successive webcam frames and to verify a live face against a stored learner identity, and that capability makes it the right choice to confirm physical presence while minimizing development effort.
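As a concept sketch of the verification flow, the snippet below builds the body for a face-to-face verify call and interprets its result. The `/face/v1.0/verify` route and the `isIdentical`/`confidence` response fields follow the Face REST API, but treat exact versions as assumptions, and the 0.6 threshold is an illustrative choice rather than a service default.

```python
import json

# Documented verify route (version is an assumption to check against current docs).
FACE_VERIFY_PATH = "/face/v1.0/verify"

def build_verify_body(enrolled_face_id, frame_face_id):
    """Body for a face-to-face verification call. Both IDs come from prior
    detect calls: one on the stored learner photo, one on the current
    webcam frame."""
    return json.dumps({"faceId1": enrolled_face_id, "faceId2": frame_face_id})

def student_is_present(verify_response, threshold=0.6):
    """Interpret a verify response such as
    {"isIdentical": true, "confidence": 0.87}.
    The threshold here is illustrative, not a service default."""
    data = json.loads(verify_response)
    return bool(data.get("isIdentical")) and data.get("confidence", 0.0) >= threshold

print(student_is_present('{"isIdentical": true, "confidence": 0.87}'))
```

Running this check on periodic webcam frames is one way to confirm continued presence without rebuilding recognition logic from scratch.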
Speech focuses on converting audio to text and on speech transcription and speaker recognition, and it does not provide visual face detection from a webcam stream. Speech can help with microphone analysis but it cannot verify that a specific person is present at their desk by analyzing webcam video.
Text Analytics is for extracting insights from textual content such as sentiment, key phrases, and named entities, and it is not used to analyze webcam video or identify faces.
Match the data modality in the question to the Cognitive Service. If the task involves webcam video choose a vision service such as Face and if the task involves audio or text choose the speech or text services respectively.
After adding new question and answer entries to a customer support knowledge base at AtlasTech what step ensures the new items become available to end users?
-
✓ B. Publish the knowledge base
The correct option is Publish the knowledge base.
After you add new question and answer entries you must Publish the knowledge base to make those entries part of the live, end user facing content. Publishing typically updates the searchable index or the served version so the support endpoint or help center returns the new pairs to users.
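In the classic QnA Maker service this publish step is a single REST call. The sketch below builds that request; the v4.0 path and header name follow the QnA Maker REST API but should be treated as assumptions, and newer custom question answering projects expose an equivalent deploy operation instead.

```python
import urllib.request

def build_publish_request(endpoint, kb_id, subscription_key):
    """Sketch of the classic QnA Maker publish call, which moves the tested
    knowledge base to the production index so end users see the new pairs.
    The endpoint, key, and v4.0 path are assumptions based on the QnA Maker
    REST API."""
    url = f"{endpoint}/qnamaker/v4.0/knowledgebases/{kb_id}"
    return urllib.request.Request(
        url,
        data=b"",  # publish takes no body
        headers={"Ocp-Apim-Subscription-Key": subscription_key},
        method="POST",
    )

req = build_publish_request(
    "https://atlastech-kb.cognitiveservices.azure.com", "<kb-id>", "<key>"
)
print(req.full_url)
```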
Retrain the model powering the knowledge base is incorrect because adding Q and A pairs usually does not require training a machine learning model. Most knowledge base systems store and serve explicit Q and A entries and only require a publish or deploy step to expose changes.
Translate the question and answer pairs into other languages is incorrect because translation is an optional localization step and it does not make the original entries available to end users. Translation only matters if you need content in additional languages.
Run end to end tests against the Q and A endpoint is incorrect because testing is a useful validation activity but it does not itself publish or deploy content. Tests help verify behavior but you still need to perform the publish action to make changes live.
When a question asks how to make content available to users look for actions like publish or deploy rather than operational tasks such as testing or translating.
A digital services firm stores roughly 6,500 scanned invoice images in a network file repository and needs to process them to capture invoice line items, total sales figures and customer information. Which Azure service should be used to perform this analysis?
-
✓ C. Azure Document Intelligence
Azure Document Intelligence is the correct answer.
Azure Document Intelligence is built to extract structured data from scanned documents and forms. It provides pretrained invoice capabilities that capture line items, totals, dates, and customer fields, and it also allows custom training when invoices follow specific vendor formats. The service is optimized for document OCR, table extraction, and key value pair detection, which makes it suitable for processing thousands of invoice images in a repository.
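A minimal sketch of the analyze call shape, assuming the v3.x Document Intelligence REST API: documents are submitted to the prebuilt invoice model and results are then polled from the Operation-Location URL the service returns. The resource endpoint and api-version below are assumptions to verify against the current docs.

```python
import urllib.parse

def build_invoice_analyze_url(endpoint, api_version="2023-07-31"):
    """URL for submitting a document to the prebuilt invoice model.

    The path and api-version follow the v3.x Document Intelligence REST API
    (assumed). Extracted line items, totals, and customer fields are later
    fetched from the Operation-Location URL in the service's response."""
    query = urllib.parse.urlencode({"api-version": api_version})
    return f"{endpoint}/formrecognizer/documentModels/prebuilt-invoice:analyze?{query}"

print(build_invoice_analyze_url("https://myresource.cognitiveservices.azure.com"))
```

For 6,500 images you would loop over the repository and submit each file (or its URL) to this endpoint, then collect results asynchronously.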
Azure Computer Vision focuses on general image analysis and scene understanding and while it can perform basic OCR it does not provide the specialized invoice line item and table parsing that a document extraction service offers.
Azure Cognitive Search is an indexing and search platform that can enrich content for search scenarios but it is not the native service to parse and extract structured invoice data. It can be used after extraction to index results but it does not replace a document parsing model.
Custom Vision is intended for image classification and object detection and it is not designed to parse document text, tables, or key value pairs. It is unsuitable for extracting invoice line items and totals.
Look for keywords such as invoices, line items, and tables to choose a document extraction service. If the task requires structured data extraction from scanned documents then favor a document intelligence or form extraction service.
A research team at Meridian Insights needs to process unstructured reports and wants to know which two kinds of recognition the entity extraction tool supports when analyzing text? (Choose 2)
-
✓ B. Named entity recognition
-
✓ C. Entity linking
The correct options are Named entity recognition and Entity linking.
Named entity recognition extracts and classifies entities found in free text such as people, organizations, locations and dates and it returns the entity mentions and their types which is why it is one of the supported recognition kinds.
Entity linking goes further by connecting those extracted entities to entries in a knowledge base or external identifiers so that mentions can be disambiguated and mapped to canonical records and that is why it is the other supported recognition kind.
Entity normalization is not correct because normalization usually means converting values to a standard format and that is not the same as recognizing or linking entities in the extracted text.
Entity resolution is not correct because resolution refers to reconciling and merging records that refer to the same real world entity across datasets and that is a separate data integration process rather than a text extraction recognition type.
Entity relationships is not correct because extracting relationships between entities is a form of relation extraction and that is different from the two recognition outputs the question asks about.
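The shape of the two supported outputs can be sketched with mock responses modeled on the Language service: named entity recognition returns mentions with categories, while entity linking adds a canonical name and knowledge base URL per entity. The field names follow the REST response shape but should be treated as assumptions.

```python
def summarize_entities(ner_result, linking_result):
    """Pull the distinguishing fields out of mock NER and entity-linking
    responses: NER classifies each mention, linking maps it to a canonical
    record. Field names mirror the Language service response (assumed)."""
    recognized = [(e["text"], e["category"]) for e in ner_result["entities"]]
    linked = [(e["name"], e["url"]) for e in linking_result["entities"]]
    return recognized, linked

ner = {"entities": [{"text": "Meridian Insights", "category": "Organization"}]}
link = {"entities": [{"name": "Microsoft",
                      "url": "https://en.wikipedia.org/wiki/Microsoft"}]}
print(summarize_entities(ner, link))
```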
When you see choices about what a text analysis tool returns look for one option that names the extraction of entities and another that mentions mapping to a knowledge base. Those two together usually indicate recognition and linking.
A private group called Nova Trust built a large reinforced shelter for affluent clients because its leaders feared global catastrophe and wanted to offer comfortable long term refuge. In a current task you need to use the Translator service to change the Russian word “спасибо” written in Cyrillic characters into the Latin alphabet representation “spasibo”. Which Translator function should you use?
-
✓ D. Transliterate
The correct option is Transliterate.
Transliterate performs script conversion so it converts characters from Cyrillic into the Latin alphabet without changing the word meaning. This is why it maps the Russian word “спасибо” into the Latin representation “spasibo” rather than translating the word into another language.
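The call shape makes the distinction concrete. This sketch builds the Translator v3.0 transliterate URL; `language`, `fromScript`, and `toScript` are documented parameters, while the request body would be JSON like `[{"Text": "спасибо"}]` with the resource key sent in headers (both omitted here as assumptions about deployment).

```python
import urllib.parse

def build_transliterate_url(base="https://api.cognitive.microsofttranslator.com"):
    """Transliterate endpoint for converting Russian Cyrillic text to Latin
    script. Note the parameters name scripts (Cyrl -> Latn), not a target
    language, which is exactly what separates transliteration from
    translation."""
    params = urllib.parse.urlencode({
        "api-version": "3.0",
        "language": "ru",
        "fromScript": "Cyrl",
        "toScript": "Latn",
    })
    return f"{base}/transliterate?{params}"

print(build_transliterate_url())
```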
Detect is incorrect because it only identifies the language of the input text and does not change the script or produce a Latin alphabet representation.
Cloud Translation API is incorrect as presented because it names the overall service and not the specific transliteration function that performs script conversion.
Translate is incorrect because that function translates meaning between languages and will not necessarily convert the original script into a phonetic Latin form.
Convert is incorrect because it is not a recognized Translator function for script transliteration.
None of the available options are correct is incorrect because the transliteration function is available and is the right choice for converting Cyrillic “спасибо” to Latin “spasibo”.
When a question asks to change script or alphabet rather than the meaning look for the term transliterate and not for language detection or translation functions.
Which of the following capabilities is not provided by the Cognitive Services Face API?
-
✓ D. Face morphing
The correct answer is Face morphing.
Face morphing is not provided by the Cognitive Services Face API because the service is designed for face analysis and recognition rather than for creating or altering facial images. Morphing requires image generation or extensive pixel level editing which is outside the scope of the Face API and is usually handled by image editing tools or generative models.
Face detection is an available capability because the Face API can locate faces in images and return bounding boxes, landmarks, and basic attributes for detected faces.
Face verification is supported because the API can compare two detected faces and return a confidence score that indicates whether the faces belong to the same person.
Face grouping is offered because the service can cluster unknown faces into groups based on visual similarity without needing an existing person database.
Face identification is available because the API can match detected faces against a labeled person group or large person group to identify who the person is.
Read each option carefully and decide whether it describes analysis of faces or generation of images. The Face API focuses on analysis and recognition and not on image generation or morphing.
A developer at Nimbus Analytics created a phrase extraction helper. The helper is defined as:

```python
def fetch_key_terms(text_client, text):
    response = text_client.extract_key_phrases(text, language="en")
    print("Key phrases")
    for phrase in response.key_phrases:
        print(f"\t{phrase}")
```

The developer calls fetch_key_terms(text_client, "the cat sat on the mat"). Will the invocation print the extracted key phrases to the console?
-
✓ B. Yes
Yes is correct because the helper explicitly prints output and iterates the response to emit each extracted phrase.
The function calls text_client.extract_key_phrases with the provided text and language and assigns the result to response. It then prints the literal “Key phrases” and runs a for loop over response.key_phrases. Inside that loop each phrase is printed with an f string so the extracted phrases are sent to standard output one per line.
This behavior depends on text_client being a valid client and on extract_key_phrases returning a response object that exposes a key_phrases attribute. If the API call fails or returns no phrases you will see output that reflects that situation but the code itself is written to print the phrases when they are present.
No is incorrect because the helper contains explicit print statements and a loop that prints each phrase so the invocation will produce console output when the API returns phrases.
When you read code on the exam trace the execution step by step and look for explicit output calls. Follow the API call and then the loop to determine what will be printed.
An Orion Cloud storage account contains a 30 GB video file named clip1.mp4 in a private folder and you must have Azure Video Indexer analyze clip1.mp4 through its web portal. What should you do?
-
✓ D. Generate a direct download URL from Orion Cloud and enter that URL in Azure Video Indexer
Generate a direct download URL from Orion Cloud and enter that URL in Azure Video Indexer is correct.
Azure Video Indexer can ingest media by fetching a file from a publicly reachable direct URL so providing a direct download link lets the service retrieve the 30 GB file without requiring you to reupload it through a browser.
If the file is private you should generate a time limited signed or direct download URL so the Video Indexer service can access the object. This approach avoids browser upload limits and network interruptions that can make uploading large files from a local workstation impractical.
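The time-limited signed URL idea can be illustrated generically. This is a concept sketch only: real services such as Azure Blob Storage SAS tokens define their own canonical string-to-sign and parameter names, so do not treat this as any vendor's actual format.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def sign_download_url(base_url, secret, ttl_seconds=3600, now=None):
    """Generic time-limited signed URL: an expiry timestamp plus an HMAC
    over it are appended as query parameters the storage side can verify.
    Illustrative only; Azure SAS and similar schemes use their own
    string-to-sign formats."""
    now = int(time.time()) if now is None else now
    expires = now + ttl_seconds
    to_sign = f"{base_url}?expires={expires}".encode("utf-8")
    sig = base64.urlsafe_b64encode(
        hmac.new(secret.encode("utf-8"), to_sign, hashlib.sha256).digest()
    ).decode("ascii")
    return f"{base_url}?{urllib.parse.urlencode({'expires': expires, 'sig': sig})}"

url = sign_download_url("https://files.example.com/clip1.mp4", "<secret>", now=0)
print(url)
```

Pasting a URL like this into Video Indexer lets the service fetch the 30 GB file server-side, and the expiry limits how long the private object stays reachable.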
Create a preview sharing link in Orion Cloud and paste that link into Azure Video Indexer is incorrect because preview pages often point to an HTML player or require session based access and they do not provide a direct file URL that the Video Indexer service can reliably download.
Upload clip1.mp4 to vimeo.com and then use the video page URL in Azure Video Indexer is incorrect because a Vimeo video page is an HTML page and not a direct media file. Video Indexer needs a downloadable file URL and third party platform pages often include protections or embedding that prevent direct server side fetches.
Download clip1.mp4 to a local workstation and then upload it to the Azure Video Indexer site is incorrect in this scenario because a 30 GB file is large for browser uploads and the portal can be unreliable or restricted for very large uploads. Using a server accessible direct URL is the more reliable ingestion method.
When a question asks about sending large files to a cloud analysis service think about whether the service can fetch a URL directly and whether the link must be a direct downloadable URL. Use a time limited signed URL for private objects when possible.
We operate a conversational language solution for the online help desk at NorthBridge Apparel, and users report the assistant often replies with “Sorry, I don’t understand that.” You must improve the assistant so it better handles those unrecognized inputs. Which sequence of actions from the list should you select?
-
✓ D. Enable active learning then review logged utterances and update the model then retrain and republish the language model
Enable active learning then review logged utterances and update the model then retrain and republish the language model is correct.
Active learning is designed to surface unlabeled or low confidence user utterances so that you can review them and assign the correct intents. Reviewing logged utterances lets you capture actual user phrasing and add or correct training examples. After you update the language model you retrain and republish so the assistant learns from the new labels and reduces the frequency of “I do not understand” replies.
Enable Log Analytics telemetry then review logged utterances and update the model then retrain and republish the language model is not the best choice because telemetry can help with diagnostics and logging but it does not provide the targeted candidate utterance suggestions that active learning offers. Telemetry is useful for monitoring but it does not automate the labeling workflow.
Add vendor supplied domain models then review logged utterances and update the model then retrain and republish the language model is incorrect because vendor domain models may not reflect your users language and they will not surface the specific unrecognized utterances your assistant sees. You still need active learning to capture and label real user inputs for your own model.
Migrate authoring to a new authoring key then review logged utterances and update the model then retrain and republish the language model is wrong because changing an authoring key is an administrative task that does not improve intent recognition. Migrating keys does not help identify or label unknown utterances and it will not reduce the number of unrecognized responses.
When a question mentions unrecognized user inputs prefer workflows that provide candidate utterances for human review. Active learning combined with reviewing real utterances yields the fastest improvement to recognition.
A manufacturing startup called NexaWorks ingests IoT readings from 120 production units over 18 months. Each unit has 42 different sensors that record values every 90 seconds, producing about 5,040 separate time series. You must identify abnormal readings in each time series to enable predictive maintenance. Which Azure service should you use?
-
✓ D. Azure AI Anomaly Detector
The correct answer is Azure AI Anomaly Detector.
Azure AI Anomaly Detector is built to find anomalies in large numbers of time series and it provides APIs and SDKs for both real time and batch detection. It automatically models seasonality and trends and can scale to analyze thousands of independent series, which fits a scenario with 5,040 sensor streams reporting every 90 seconds for predictive maintenance.
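A minimal sketch of the batch detection payload, assuming the univariate Anomaly Detector REST API (POST {endpoint}/anomalydetector/v1.1/timeseries/entire/detect): one request is sent per sensor stream, so each of the 5,040 series gets its own call. Field names follow the documented API but the version and granularity handling for 90-second intervals should be verified against current docs.

```python
import json

def build_detect_payload(timestamps, values, granularity="minutely"):
    """JSON payload for 'entire series' anomaly detection on one sensor
    stream. The series/granularity field names follow the documented API
    (assumed); the response flags which points are anomalous, feeding
    predictive maintenance alerts."""
    series = [{"timestamp": t, "value": v} for t, v in zip(timestamps, values)]
    return json.dumps({"series": series, "granularity": granularity})

payload = build_detect_payload(
    ["2024-01-01T00:00:00Z", "2024-01-01T00:01:30Z"], [21.4, 98.7]
)
print(payload)
```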
Azure Cognitive Search is focused on indexing and full text search over documents and is not designed to perform time series anomaly detection, so it is not appropriate for this task.
Azure Computer Vision analyzes images and extracts visual information and it does not provide time series anomaly detection capabilities, so it is not a fit for sensor reading analysis.
Azure Time Series Insights is optimized for storing, exploring, querying, and visualizing time series data and it is great for investigation and root cause analysis. It is not primarily an automated anomaly detection service, although it can be used alongside an anomaly detection service to visualize or investigate flagged events.
When a question asks about detecting unusual patterns across many sensor streams look for services that advertise time series anomaly detection or automated modeling rather than services aimed at search or image analysis.
Your team uses an Azure subscription that contains an Azure OpenAI resource for a customer assistant project. You will build an agent with the Azure AI Agent Service that must interpret typed and spoken questions create answers and present the answers as spoken audio. Which platform should you use to set up the agent project?
-
✓ C. Azure AI Foundry
The correct option is Azure AI Foundry.
Azure AI Foundry provides the project surface and orchestration tooling to build and manage agents that combine language models and speech capabilities. It is intended to connect to Azure OpenAI and to the Azure AI Agent Service while integrating speech services so the agent can accept typed or spoken questions and return spoken audio answers.
Language Studio is focused on working with language models for tasks such as evaluation and prompt tuning and it does not provide the integrated agent project management and speech orchestration that Foundry provides.
Speech Studio provides tools for speech to text and text to speech and for tuning speech models but it is not the end to end agent project platform that orchestrates language models together with agent behaviors.
Azure portal is the general management console for Azure resources and it is used for resource setup and configuration rather than for building and managing a dedicated agent project that integrates language and speech features.
When a question asks for a platform to build a managed agent that handles both language and speech look for a product that explicitly ties together agent orchestration, Azure OpenAI, and speech services. Azure AI Foundry is designed for that integrated scenario.
| Jira, Scrum & AI Certification |
|---|
| Want to get certified on the most popular software development technologies of the day? These resources will help you get Jira certified, Scrum certified and even AI Practitioner certified so your resume really stands out. You can even get certified in the latest AI, ML and DevOps technologies. Advance your career today. |
Cameron McKenzie is an AWS Certified AI Practitioner, Machine Learning Engineer, Solutions Architect and author of many popular books in the software development and Cloud Computing space. His growing YouTube channel training devs in Java, Spring, AI and ML has well over 30,000 subscribers.
