AB-731 — AI Transformation Leader (Microsoft Certification)

AI Transformation Leader Study Guide

Exam format: ~40–50 questions · 45 minutes · Passing score: 700/1000 · Focus: WHY and business VALUE of AI — not how to technically build it. If you passed AB-730, you're ~80% ready. Key mindset: always choose the most correct answer from a business transformation leader perspective.
How AI actually works — the core concept
At its core, generative AI works with word relationships. Large Language Models (LLMs) tie every word in a language to every other word — the word "dog" is close to "bark", which is close to "tree", which is close to "leaf". When you send a prompt, the AI looks at all those word relationships and predicts the next most likely word, then the next, then the next.
Critical exam point: AI does not fact-check. It is not finding the most correct answer — it is predicting the next token. This is why human oversight is essential and why "human in the loop" is a recurring theme throughout the entire exam.
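To make "predicting the next word" concrete, here is a toy sketch in Python. The relationship scores are invented for illustration; a real LLM learns probabilities over tens of thousands of tokens and conditions on the whole prompt, not just the previous word.

```python
# Toy next-word prediction. Scores are hand-made for this example;
# a real LLM learns them from vast amounts of text.
relations = {
    "the": {"dog": 0.6, "tree": 0.4},
    "dog": {"barks": 0.7, "sleeps": 0.3},
}

def next_word(word: str) -> str:
    candidates = relations.get(word, {})
    return max(candidates, key=candidates.get) if candidates else "<end>"

sentence = ["the"]
while sentence[-1] != "<end>":
    sentence.append(next_word(sentence[-1]))

print(" ".join(sentence))  # "the dog barks <end>": prediction, not fact-checking
```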

Traditional AI

Makes predictions based on patterns in previous data. Output is deterministic — ask the same question, get the same answer. Used for trend analysis, classification, pattern recognition.

Example: Analysing emails to determine whether a review is positive or negative — always produces the same result for the same input.

Generative AI

Uses LLMs and word relationships to create something new — content that didn't exist before. Output is non-deterministic — ask the same question three times, get three different answers.

Example: Writing an email, creating a slide deck, summarising a meeting — generates new content each time.

Exam trap: If the task is creating something new (email, document, response) → Generative AI. If the task is finding a pattern in existing data → Traditional AI. The key differentiator: are we creating something that didn't exist before?
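A quick sketch of why the outputs differ: traditional AI always takes the single most likely answer, while generative AI samples from a probability distribution. The probabilities below are made up for illustration.

```python
import random

openers = {"Hi team,": 0.5, "Hello everyone,": 0.3, "Good morning,": 0.2}

def traditional() -> str:
    # Deterministic: always the single most likely option
    return max(openers, key=openers.get)

def generative() -> str:
    # Non-deterministic: a weighted random draw, so repeats can differ
    return random.choices(list(openers), weights=list(openers.values()))[0]

print([traditional() for _ in range(3)])  # same answer three times
print([generative() for _ in range(3)])   # can be three different answers
```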
Machine learning & deep learning
Machine learning
The computer's ability to get better over time by picking up on patterns. For the exam you just need the definition and the lifecycle: Define the task → Collect and prepare data → Train and validate the model → Deploy → Monitor and manage. You don't need technical depth.
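Purely for intuition (the exam does not require code), the train-and-validate steps of that lifecycle look roughly like this minimal scikit-learn sketch; deploy and monitor would follow in production.

```python
# Minimal sketch of the middle of the ML lifecycle using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)        # collect/prepare data
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # train
print("validation accuracy:", model.score(X_val, y_val))         # validate
# Deploy, then monitor and manage, complete the lifecycle.
```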
Deep learning
Machine learning that uses many layers of connected neural networks to discover complex patterns in unstructured data automatically. Exam tip: tie the words "deep learning" to "neural network" — that's all you need to know.
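If you want a picture of what "many layers" means, here is a toy forward pass; the weights are invented, where a real network would learn them from data.

```python
# Each layer transforms the previous layer's output; stacking many
# layers is what makes the learning "deep".
def layer(inputs, weights):
    # Weighted sum per neuron, followed by a simple nonlinearity (ReLU)
    return [max(0.0, sum(i * w for i, w in zip(inputs, row))) for row in weights]

x = [1.0, 2.0]                                # input, already numericised
hidden = layer(x, [[0.5, -0.2], [0.1, 0.3]])  # layer 1
output = layer(hidden, [[0.7, 0.6]])          # layer 2
print(output)
```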
AI challenges and weaknesses
Fabrication (hallucination)
Microsoft is moving from the term "hallucination" to "fabrication". AI doesn't lie — it just predicts the next token. If that prediction is wrong, it confidently states something incorrect. Expect questions on this.
Reliability of output
Ask Copilot to write the same email three times — you'll get three different results. The output is non-deterministic. This is different from fabrication — it's about consistency, not correctness.
Lack of explainability
Understanding exactly HOW the AI produced its output is difficult. Users generally don't understand what data it referenced or how it reasoned. This is a real challenge for building trust.
Data quality
All AI does is predict the next token based on the data it has access to. Flawed data → flawed responses. If an agent is grounded in bad data, its answers will be bad. Data must be accurate, up-to-date, deduplicated, and representative.
Bias and representativeness
If your training or grounding data is biased, your AI output will be biased. Ensure data is representative of ALL the people your AI serves — not just the majority. This is how you address bias in agent output.
Privacy concerns
Be careful about what data the AI has access to, especially PII. Have we cited sources? Have we considered who can see the output? These are questions the exam will ask about.
Sustainability
Running powerful AI models uses significant compute, power, and water. As a transformation leader, are we running unnecessarily powerful models for simple tasks? Cost and environmental impact are real considerations.
Grounding, RAG, and prompting
Grounding
What data is the AI using to generate its responses? By default, most agents are grounded in the general web. The power of Microsoft Copilot is grounding it in your business data — SharePoint, Teams chats, emails — so responses are specific and relevant to your organisation.
RAG (Retrieval Augmented Generation)
Before sending your prompt to the LLM, the system searches available data sources, finds relevant information, and adds it to your prompt automatically behind the scenes. You don't need to memorise the acronym — understand the concept: it's how agents pull in relevant business data to improve responses and reduce fabrication.
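Conceptually the flow looks like this sketch. The search_index() helper is hypothetical and stands in for the retrieval Copilot performs behind the scenes.

```python
def search_index(query: str) -> list[str]:
    # Hypothetical placeholder: a real system queries SharePoint, Dataverse,
    # email, etc., and returns the most relevant passages.
    return ["Refund policy: customers may return items within 30 days."]

def build_rag_prompt(user_prompt: str) -> str:
    context = "\n".join(search_index(user_prompt))     # 1. retrieve
    return (                                           # 2. augment the prompt
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_prompt}"
    )

print(build_rag_prompt("What is our refund window?"))  # 3. send to the LLM
```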
Semantic indexing
Ensuring that the word "customer" means the same thing across Sales, Marketing, and Shipping in your organisation. Consistent data definitions across your business are essential for AI to give coherent answers.
Writing a good prompt — four parts: (1) Context — who/what is this for? (2) Goal — what do you want? (3) Source — what data should it use? (4) Expectation — how should it respond (length, tone, format)?

Few-shot prompting: Including examples of the desired output inside your prompt. Gives the AI more context about what "good" looks like, enabling more consistent results. This is a Microsoft exam term — be aware of it.
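Assembled as plain text, the four parts plus a few-shot example might look like the sketch below; the content is illustrative, not exam material.

```python
parts = {
    "Context": "You are replying on behalf of a customer-service manager.",
    "Goal": "Draft an apology for a delayed shipment.",
    "Source": "Use the order details from the attached email thread.",
    "Expectation": "Under 100 words, warm tone, plain English.",
}
# Few-shot: include an example of the desired output inside the prompt
few_shot = 'Example reply: "Hi Sam, sorry your order ran late. It ships today."'

prompt = "\n".join(f"{k}: {v}" for k, v in parts.items()) + "\n" + few_shot
print(prompt)
```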
AI model types — know when to use which
The exam will give you a scenario and ask which model type fits. The names are largely intuitive — learn the distinctions and you'll be fine.

Large Language Model (LLM)

Trained on vast amounts of text. Understands and generates language. Use for: reviewing cases to generate knowledge articles, drafting emails, summarising documents, answering questions in natural language.

Code model

Trained specifically on programming languages. Use for: generating Python to analyse an Excel spreadsheet, writing SQL queries, creating scripts, debugging code. NOT a general LLM task.

Diffusion model

Image generation model. Creates or manipulates images. Think: DALL-E, Midjourney-style outputs. Use when the task involves generating visual content.

Multimodal model

Handles multiple input/output types simultaneously — text, images, audio, code. An all-in-one model. Use when the task spans multiple media types.

Domain-specific model

Custom or fine-tuned for a specific industry or task — medical, legal, financial. Use when general LLMs lack the specialised knowledge the business process requires.

Where models live

Copilot Studio: generally LLM-based, and you can select the model. M365 Copilot: includes a code interpreter (the Analyst agent uses it heavily). Azure AI Foundry: full model selection and customisation for pro-code scenarios.

Exam approach: Read the scenario carefully. Language/text task → LLM. Code/data analysis → Code model. Images → Diffusion. Multiple types → Multimodal. Regulated industry needing specialised knowledge → Domain-specific.
Copilot vs Agents — a critical distinction

Microsoft Copilot (M365)

Productivity-focused. Prompt-driven interactions embedded in workflow apps (Word, Outlook, Teams, Excel). Users chat with it to get things done faster. Summarise meetings, draft emails, create decks, prep for calls.

Think: Personal productivity assistant built into the apps you already use.

Agents

Task execution focused. Autonomous, event-driven, multi-step processes with minimal human intervention. Connect to APIs and data sources. Designed to do ONE specific thing across one or more systems in a workflow.

Think: A specialised employee assigned one job — not a general-purpose assistant.

Exam tip: Copilot = productivity (faster emails, better meeting prep). Agents = task automation (independently executing a defined business process). If the question is about doing something faster for a person → Copilot. If it's about automating a repeatable process → Agent.
The three elements of an agent
Model (the brain)
The LLM or AI model the agent uses to think and reason. Choice of model affects performance, reliability, and cost. Bigger/more powerful model = more expensive. Right-size the model to the task.
Instructions (the behaviour)
The text that tells the agent who it is, who it serves, what its purpose is, how it should behave, and what tone to use. Think of it as the agent's job description and personality definition.
Tools (the hands)
How the agent connects to external systems and actually executes tasks. Like connectors in Power Apps/Power Automate — they allow the agent to read data, write records, send messages, call APIs. Without tools, the agent can only talk; it can't act.
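As a mental model, the three elements can be pictured as plain data plus functions. The field names and agent below are hypothetical, not a real Copilot Studio or Foundry schema.

```python
from dataclasses import dataclass, field
from typing import Callable

def lookup_order(order_id: str) -> str:
    # A "tool": lets the agent act on an external system, not just talk
    return f"Order {order_id}: shipped yesterday"

@dataclass
class Agent:
    model: str                                    # the brain
    instructions: str                             # the behaviour
    tools: dict[str, Callable] = field(default_factory=dict)  # the hands

support_agent = Agent(
    model="small-general-llm",  # right-sized to the task, not the biggest
    instructions="You help customers track orders. Be brief and polite.",
    tools={"lookup_order": lookup_order},
)
```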
Agent orchestration
Instead of one agent that does six different tasks, build six specialised agents that each do one task — and connect them. This distribution of responsibilities across multiple specialised agents is called agent orchestration.
Power Platform analogy: Same concept as parent/child flows in Power Automate — a main flow hands off to child flows and they return results. Just with AI agents instead of flow steps.
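A minimal sketch of the hand-off pattern, with made-up agents (each function stands in for one specialised agent):

```python
def summarise_agent(text: str) -> str:          # specialist 1: one job only
    return f"Summary: {text[:30]}..."

def translate_agent(text: str) -> str:          # specialist 2: one job only
    return f"Translation: {text[:30]}..."

SPECIALISTS = {"summarise": summarise_agent, "translate": translate_agent}

def orchestrator(task: str, payload: str) -> str:
    # The parent hands the work to the right child and returns its result,
    # like a parent flow calling a child flow in Power Automate.
    return SPECIALISTS[task](payload)

print(orchestrator("summarise", "Quarterly sales rose in every region."))
```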
Which platform to use — the three-tier decision
Platform · Use when · Licensing approach
M365 Copilot agents · Productivity within M365 apps only — Teams, Outlook, Word. Personal or team productivity. No custom data sources needed. · Subscription (per user/month)
Copilot Studio · Need to go beyond M365 — interact with business data (Dataverse, etc.), publish to custom channels (website, Teams), low-code agent building. Organisation-level deployment. · Subscription or pay-as-you-go
Azure AI Foundry · Need to customise or fine-tune the actual model. Highly regulated industries. Maximum control. Pro-code agent building. Custom generative AI development. · Pay-as-you-go or prepaid/reserved capacity
Simple rule: Buy (M365 Copilot) for productivity. Extend (Copilot Studio) for business processes. Build (Foundry) for custom AI models. The further you go toward Build, the more time, cost, and control is involved.
M365 Copilot — Researcher vs Analyst
Researcher agent
Finds and synthesises information from research sources — documents, PDFs, websites, email threads. Best for: drawing conclusions from reading lots of content, responding to an RFP, researching a topic across multiple documents. Think: web research + document analysis → synthesis.
Analyst agent
Derives insights from structured data — Excel spreadsheets, databases, graphs. Uses the code interpreter (a code model) to generate Python code and charts. Best for: looking at a spreadsheet and surfacing trends, analysing database outputs, generating charts. Think: data → insights.
Pages
Quick collaboration output — like a SharePoint page. For sharing and light collaboration on Copilot-generated content.
Notebooks
Deep research and complex projects. Organise multiple prompts, documents, and responses in one place over time. Best for long-running projects like RFP responses where context builds up across many sessions.
Azure AI services — what each one does
You need to know which Azure service fits a given business scenario. You don't need to know how to configure them — just match the service to the use case.

Azure Vision

Enables AI to see. Detects objects in images, generates captions from video, reads text in images (OCR).

Example: Security camera in a warehouse detecting when something falls on the floor and alerting a manager. Monitoring a retail store for hazards.

Azure Language

Enables AI to understand text at scale. Key information extraction, sentiment analysis — is this review positive or negative? Would this be a 1-star or 5-star rating?

Example: Analysing customer feedback to gauge sentiment. Extracting key topics from support tickets.

Azure Document Intelligence

Extracts structured information from documents. Pre-built models for invoices, receipts, shipping labels. Uses OCR but is document-focused, not camera/scene-focused.

Example: Uploading an invoice and automatically extracting vendor name, amount, date, and line items.

Azure AI Search

Turns content into a searchable index. Uses AI to understand and organise data so users can find information quickly across large data estates.

Example: Making thousands of internal documents searchable with AI-powered relevance ranking.

Azure Speech

Deals with audio. Contact centre as a service, voice-activated agents, transcribing calls, speech-to-text.

Example: A Copilot agent that listens to inbound customer calls and routes them appropriately.

Natural Language Processing

Analyses documents and websites to determine intent and sentiment. Different from Speech (audio) — NLP is text-based understanding of meaning, not spoken word.

Example: Determining customer intent from a support chat message.

Azure AI Foundry — the pro-code platform
What it is
The orchestration layer and governance platform for building custom generative AI and AI agents in Azure. Provides tools for model selection, safety controls, evaluation, observability, and lifecycle management.
When to use it
When you need to customise or fine-tune the actual model. Highly regulated industries. Maximum control and differentiation. When Copilot Studio doesn't provide enough customisation. When you need to build a domain-specific model.
Key capabilities
Model selection across all available models. Safety controls and content filtering. Evaluation of model performance. Information extraction. Decision support. Vision, speech, and NLP integration. Full lifecycle management.
Vision vs Document Intelligence
Classic exam trap: Vision = security camera watching a scene (detecting a hazard on a warehouse floor). Document Intelligence = extracting fields from an uploaded invoice or shipping label. Both use OCR but serve very different purposes.
Security in the Microsoft AI ecosystem
Encryption
Copilot encrypts data at rest (while stored) and in transit (while moving between systems). Data cannot be intercepted by third parties.
Privacy by design
Microsoft's term for building privacy into the agent from the beginning — not as an afterthought. Agents in the Microsoft ecosystem inherit your organisation's existing security infrastructure.
Authentication
Ensuring only trusted identities (people and systems) can interact with your agents and access your data. Just like requiring sign-in to a CRM, agents should require appropriate authentication.
Governance
A centralised authority (admins + executive leadership) that defines who can build agents, what policies apply, controls agent lifecycle, and can instantly restrict or disable agents when risk is detected.
Permissions & sensitivity
Copilot respects the existing permissions in your M365 environment. Users only see information they already have access to — Copilot doesn't elevate privileges.
Microsoft's six Responsible AI principles
Commit these to memory. They are thematically woven into every question on the exam. Microsoft's view: AI must be governed responsibly. We are not moving fast and breaking things — the potential for harm is too significant.

1. Fairness

AI must not discriminate against any group of people. Every user interacting with the AI has a fair chance of getting a helpful response. Example of a fairness violation: a loan approval model that automatically declines anyone who wore a red hoodie in their photo — the model is discriminating based on an irrelevant attribute.

2. Reliability & Safety

Have we tested the AI rigorously to ensure we are not getting harmful, fabricated, or inconsistent outputs? Have we protected against exploitation and jailbreaking (getting the AI to say things it shouldn't)? Is the AI grounded in the right data and regularly tested?

3. Privacy & Security

Ensure the AI is not connected to systems containing PII unnecessarily. Apply the principle of least privilege — the AI should only access the data it needs for its specific task. Data handling must comply with privacy regulations.

4. Inclusiveness

All users must have an equal chance of interacting with the AI regardless of their abilities. A chatbot that works perfectly for sighted users but is unusable for visually impaired users fails inclusiveness — not fairness. Think: accessibility for people with disabilities, hearing impairments, neurodivergence.

5. Transparency

Users should understand what the AI is doing and where its answers come from. Copilot cites its sources — this is transparency in action. Users should know they're interacting with AI and understand (at a reasonable level) how it generates responses. It's not magic — it's math.

6. Accountability

The human is responsible for AI output. If Copilot drafts a document and you send it to a client without reviewing it and it contains errors, you are accountable — not Copilot. This is why humans must be in the loop, reviewing AI output before it reaches consequential decisions or external parties.

Fairness vs Inclusiveness — the exam trap: Fairness = the AI discriminates against a group (red hoodie example). Inclusiveness = some users can't effectively use the AI due to their abilities (visually impaired user can't interact with the chatbot). These feel similar but are distinctly different principles.
Sensitive use cases to avoid
Microsoft outlines three categories where AI agents should generally be avoided or handled with extreme caution and human oversight:
Denial of consequential services
An agent that could automatically deny someone a mortgage (the ability to buy a home), access to healthcare, or continued employment. The consequences of a wrong answer are too significant to leave to AI alone.
Risk of harm
Using AI to diagnose illness or prescribe medication without human medical oversight. Any scenario where an incorrect AI response could physically harm a person.
Infringement on human rights
If the output of the agent could, in any way, restrict a person's fundamental human rights, the agent should not be built or must have robust human oversight at every decision point.
Note: These are not absolute prohibitions — organisations do use AI in sensitive scenarios. But they require humans in the loop for final decisions, governance systems, and clear accountability frameworks.
Governance systems & Microsoft's governing bodies
Chief AI Ethics Officer
A designated individual who owns AI governance decisions — understands both the technology and the business process. Makes decisions about what is acceptable in sensitive AI scenarios.
Cross-functional committee
AI governance must NOT sit with IT alone. It requires executive leadership + technical staff + business process owners + third-party experts. Diverse, cross-department, cross-level representation from the very beginning of AI strategy development.
Microsoft Senior Leadership Team
Fully integrated into how Microsoft manages AI and develops responsible AI practices and tools.
Office of Responsible AI
Microsoft's cross-organisational responsible AI governance function. Shapes norms and standards for AI inside and outside Microsoft.
AETHER Committee
AI and Ethics in Engineering and Research — a committee that develops tools, best practices, and standards for responsible AI at Microsoft.
Identifying business value — the three key questions
Memorise these. The exam will ask what questions a business leader should ask to evaluate AI use cases. The most correct answer will reference these — not just "build it in Copilot Studio."

1. What problem does it solve?

Identify the specific pain point. Don't use AI when an IF statement will do. AI should solve real, meaningful problems — not be used for the sake of saying you've done an AI project.

2. What measurable outcomes?

How will you know if it worked? Define KPIs upfront — hours saved, error rate reduction, customer satisfaction improvement, escalation decrease. If you can't measure it, you can't manage it.

3. Does it align with strategy?

Does this AI initiative support the organisation's actual strategic goals? Growing revenue, improving customer lifetime value, reducing costs, increasing satisfaction? AI for its own sake wastes resources.

How to pick good AI use cases
Look for tasks and processes with these four characteristics — they are the best indicators of where Copilot or an agent will deliver real value:

⏱ Time-consuming

What takes your people a lot of time? Manual summarisation, report generation, email drafting — tasks where volume of effort is high but the underlying work is repetitive.

📄 Document-heavy

Lots of text for Copilot to analyse and reason over — contracts, case notes, RFPs, policy documents. The more language involved, the more value Copilot can add.

📊 Data-heavy

Large volumes of structured data where humans struggle to identify patterns or trends manually. AI can surface insights much faster than manual analysis.

🎤 Meeting-heavy

Teams, calls, workshops, stand-ups. Copilot can summarise, extract actions, prepare briefings, and eliminate the need for a dedicated note-taker.

The AI readiness framework — five pillars
1. Business strategy
Before AI — what are the organisation's goals? Revenue growth, customer acquisition, satisfaction improvement, cost reduction? Understand the KPIs your organisation cares about. AI must serve these goals, not exist in isolation.
2. Technology & data
Prepare your data estate. Break down data silos. Improve data quality (clean, deduplicated, representative). Create data dictionaries. Set access controls. Key point: data readiness is continuous — not a one-time task. Trustworthy AI depends on ongoing data hygiene.
3. AI strategy & experience
Start small, learn fast. Tightly scoped pilots with clear objectives, data inputs, and success criteria. Microsoft recommends 6–12 week timelines. Define a hypothesis. Instrument it. Run post-deployment reviews. Iterate. Don't plan 18-month projects.
4. Organisational & culture
Build diverse cross-functional teams (technical + business + risk + executive). Lead from the top — communicate the AI vision from leadership. Embed AI goals into performance reviews. Build sponsorship and momentum. Include AI strategy in organisational goal-setting.
5. AI governance
Three components: Data governance (who can access what data), AI governance (managing model risks and AI challenges), Regulatory governance (what laws apply to your organisation and how to comply). Build guardrails. Create transparency. Anticipate and mitigate common risks.
Tokens — the currency of AI
What a token is
The unit of currency in the AI world. At its core, generative AI is just predicting the next token in a sequence. Roughly, one token ≈ one word (or part of a word). Everything you send and receive is broken down into tokens.
Why tokens matter for cost
The more tokens you use, the more it costs. Bad prompts waste tokens — if you say "write an email" and then follow up with five corrections, you've used far more tokens than if you gave a clear, complete prompt the first time. Good prompting = cost optimisation.
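Some back-of-envelope arithmetic makes the point; the prices below are invented placeholders, not real Microsoft rates.

```python
PRICE_PER_1K_INPUT = 0.01    # $/1,000 input tokens (hypothetical)
PRICE_PER_1K_OUTPUT = 0.03   # $/1,000 output tokens (hypothetical)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

one_good_prompt = request_cost(400, 300)
six_attempts = 6 * request_cost(400, 300)   # vague prompt plus five corrections
print(f"clear prompt:        ${one_good_prompt:.4f}")
print(f"iterating six times: ${six_attempts:.4f}")
```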
Subscription vs pay-as-you-go
M365 Copilot: subscription-based (per user/month). Tokens are still being consumed behind the scenes, but your bill doesn't directly reflect token count. Foundry/Azure: pay-as-you-go — the more tokens consumed, the higher your bill. Copilot Studio: both options available.
Buy vs Extend vs Build — the core cost decision
Approach · Platform · Business fit · Time to value · Cost · Control
Buy · M365 Copilot · Standard productivity — meetings, email, documents. Most businesses have the same meeting summarisation needs. · Fastest · Subscription · Lowest
Extend · Copilot Studio · Need to interact with line-of-business systems, custom data, or publish to custom channels. Low-code. · Medium · Subscription or PAYG · Medium
Build · Azure AI Foundry · Custom AI models, highly regulated industries, unique business processes that off-the-shelf tools can't serve. Pro-code. · Slowest · Prepaid or PAYG · Maximum
Key exam insight: Most businesses think their processes are more unique than they are. Don't default to Build when Buy will do. The goal is NOT to use the most powerful option — it's to use the most appropriate one. The most advanced model is not always the right answer.
Foundry licensing — pay-as-you-go vs prepaid
Pay-as-you-go
Pay only for what you consume, billed monthly. No upfront commitment. Use for: pilots, testing, getting started. Low barrier to entry — turn it on, see how much you use, pay accordingly. If the pilot fails, you haven't committed to a contract.
Prepaid / reserved capacity
Commit to a certain amount of capacity upfront — get better pricing because you're entering a contract with Microsoft. Use for: production workloads with predictable consumption. Once you know how many tokens three scaled agents consume, lock in a prepaid plan for better value.
Exam question pattern: "Your organisation wants to start a pilot of a custom AI agent. Which licensing?" → Pay-as-you-go. "Your organisation has three widely adopted agents in production. Which licensing?" → Prepaid (better cost control, supports long-term planning).
Cost optimisation strategies
Start small, scale gradually
Tightly scope the pilot. Define a specific task. Measure ROI before expanding. Resist pressure from business stakeholders to "make it do everything at once."
Right-size the model
Don't use a sledgehammer to swat a mosquito. If you just need to summarise documents, don't build a custom Foundry model — M365 Copilot does it. Using an unnecessarily powerful model wastes tokens and money.
Target repetitive tasks
Focus AI on work that's high volume and low variation — meeting summaries, email drafting, data extraction. This is where ROI is fastest and most measurable.
Write good prompts
Clear, complete prompts with context, goal, source, and expectation get the right answer first time — using far fewer tokens than iterating through bad prompts.
Monitor consumption
Continuously track what's being consumed and whether it's justified. Monitoring is not optional — it's an ongoing responsibility of AI governance.
Reserved/excess capacity
Using reserved capacity or excess capacity when available reduces costs — particularly relevant for Foundry workloads with predictable usage patterns.
Key terms — flash reference
Fabrication
Microsoft's preferred term for hallucination. AI producing incorrect output because it predicts the next token — not because it intends to deceive.
Non-deterministic
Generative AI produces different output each time for the same input. Ask for the same email three times → three different emails.
Deterministic
Traditional AI produces the same output for the same input every time — because it's pattern-based.
Grounding
What data the AI uses to generate responses. Copilot's power = grounded in your business data (SharePoint, Teams, emails), not just the web.
RAG
Retrieval Augmented Generation. The system pulls relevant business data and adds it to your prompt before sending to the LLM. Reduces fabrication, improves relevance.
Few-shot prompting
Including examples of the desired output inside your prompt to guide the AI toward more consistent results.
Agent orchestration
Multiple specialised agents each doing one task, connected together — rather than one agent doing everything.
Human in the loop
A human reviews and is accountable for AI output before it reaches consequential decisions or external parties. Fundamental to responsible AI.
Tokens
The unit of AI consumption and cost. More tokens used = more money spent. Good prompting = fewer tokens = lower cost.
Buy / Extend / Build
Buy = M365 Copilot (productivity). Extend = Copilot Studio (business processes). Build = Foundry (custom models). Match the approach to what you actually need.
Fairness vs Inclusiveness
Fairness = AI doesn't discriminate against groups. Inclusiveness = all users can interact with the AI regardless of their abilities.
Semantic indexing
Ensuring consistent data definitions across your organisation ("customer" means the same thing in every department).
Data readiness
Continuous — not a one-time exercise. Ongoing data hygiene is required for trustworthy AI.

Scenario Q&A

Your organisation wants to deploy an AI agent that automatically approves or denies mortgage applications. From a Responsible AI perspective, what should you advise?
Advise against full automation — require human review of every decision. This is a denial of consequential services scenario. An agent that automatically denies a mortgage denies someone the ability to buy a home — a consequential outcome too significant to leave to AI alone. A human must be in the loop for final decisions. Additionally, the agent must be checked for fairness (not discriminating on irrelevant factors) and the data used to train/ground it must be representative.
A company has three Copilot Studio agents running in production, widely used across the organisation. They want to optimise their Azure costs. Which Foundry licensing model should they use?
Prepaid (reserved capacity). With three production agents running at scale, consumption is predictable. Prepaid gives better pricing by committing capacity upfront, supports long-term planning, and provides more cost control than pay-as-you-go. Pay-as-you-go is for pilots and testing when consumption is unknown.
A sales team wants to use AI to help prepare for customer meetings. They need it to search through recent emails and Teams conversations and generate a briefing. Which platform and which M365 Copilot feature is most appropriate?
M365 Copilot — Researcher agent. This is a productivity task (meeting prep) fully within the M365 ecosystem using M365 data (emails, Teams). No custom line-of-business systems needed → M365 Copilot, not Copilot Studio. Researcher is most appropriate because the task involves finding and synthesising information from documents and communications (not a structured data/spreadsheet task, which would be Analyst).
A manufacturing company wants to use AI to detect defects on a production line by analysing live camera feeds. Which Azure AI service should they use?
Azure Vision. This is a scene-detection / object-detection use case using live camera feeds — not document processing. Azure Vision enables AI to see and identify objects in images and video streams. Azure Document Intelligence would be wrong — it extracts structured fields from uploaded documents like invoices, not live camera scenes.
Your organisation is considering an AI initiative. Leadership asks: "What questions should we be asking to evaluate the business value?" What are the three questions?
1. What problem does it solve? 2. What measurable outcomes will it deliver? 3. Does it align with our organisation's strategic goals? These are the Microsoft-defined questions for evaluating AI business value. On the exam, if one answer references these questions and another just says "build an agent in Copilot Studio," the business-value answer is the most correct choice.
What is the difference between Traditional AI and Generative AI? Give the key distinguishing features.
Traditional AI identifies patterns in previous data to make predictions about what will happen next. It is deterministic — the same input always produces the same output. Use cases: fraud detection, demand forecasting, sentiment classification. Generative AI uses LLMs and word/semantic relationships to create entirely new content that didn't exist before. It is non-deterministic — the same prompt produces different outputs each time. Use cases: drafting emails, generating documents, answering questions in natural language. The key differentiator: are we creating something new (generative) or finding a pattern in existing data (traditional)?
What are the six Microsoft Responsible AI principles and what makes Fairness different from Inclusiveness?
The six principles are: Fairness, Reliability & Safety, Privacy & Security, Inclusiveness, Transparency, and Accountability. Fairness means the AI does not discriminate against groups of people — it gives everyone an equally fair chance of a helpful response. Example violation: a loan model that automatically declines people based on an irrelevant attribute (red hoodie). Inclusiveness means all users can effectively interact with the AI regardless of their abilities — sighted vs visually impaired, hearing vs hearing impaired. Example violation: a chatbot that works perfectly for sighted users but is unusable for visually impaired users. The distinction: Fairness is about whether the AI treats different groups equitably in its decisions. Inclusiveness is about whether all users can access and use the AI effectively regardless of their physical or cognitive abilities.
What are the three elements of an agent, and what is agent orchestration?
Every agent has three elements: (1) Model — the LLM or AI model that powers the agent's reasoning (the brain). Choice affects performance, cost, and reliability. (2) Instructions — the text defining who the agent is, who it serves, what it does, and how it behaves (the job description). (3) Tools — the connections to external systems and APIs that allow the agent to actually execute tasks — not just talk, but act (the hands). Agent orchestration is the practice of building multiple specialised agents that each perform one specific task, then connecting them so they work together as a system. Instead of one agent doing six tasks, build six agents doing one task each. This mirrors the parent/child flow pattern in Power Automate.
What are the five pillars of the AI readiness framework, and which one is explicitly described as continuous?
The five pillars are: (1) Business Strategy — identifying the outcomes and KPIs your organisation cares about, before any AI discussion. (2) Technology & Data — preparing your data estate: breaking down silos, improving quality, ensuring clean/deduplicated/representative data, creating data dictionaries and access controls. This is the pillar explicitly described as continuous — data readiness is not a one-time exercise but an ongoing cycle of hygiene. (3) AI Strategy & Experience — starting small, tightly scoped pilots with 6–12 week timelines, clear success criteria, and fast iteration loops. (4) Organisational & Culture — leading from the top, building diverse cross-functional teams, embedding AI goals in performance reviews, building executive sponsorship. (5) AI Governance — covering data governance, AI governance (managing model risks), and regulatory governance (legal compliance and ethics).