How To Select And Prioritize Agentic AI Use Cases
Use this guide for scoring and decision mechanics. Use the agent lifecycle roadmap as the broader "when to use what" reference across conception, data readiness, governance, build, validation, rollout, operation, and scale.
1. Agent Fit Filter
Use this filter before platform selection. It is not a stop-at-first-no checklist. It is a routing sequence that decides whether the idea should become a custom agent, a SaaS/prebuilt capability, automation, search/RAG, analytics, or no build.
Step 1: Confirm the outcome
Ask whether the business outcome, workflow, owner, and KPI are clear.
- If yes, continue.
- If no, do not select a platform yet. Clarify outcome, owner, workflow, and measurement.
Step 2: Check for an existing capability
Ask whether an existing Microsoft SaaS capability or prebuilt agent can satisfy the requirement with acceptable governance.
- If yes, prefer the SaaS or prebuilt pattern. Validate fit, licensing, data access, and admin controls.
- If no, continue classification.
Step 3: Look for deterministic workflow
Ask whether the workflow is mostly deterministic, rules-based, or a known integration sequence.
- If yes, prefer standard app configuration, Power Automate, Logic Apps, a workflow or rules engine, or API integration.
- If no, continue classification.
Step 4: Look for grounded knowledge work
Ask whether the workflow is primarily static Q&A, summarization, or content generation over approved knowledge sources.
- If yes, prefer Microsoft 365 Copilot, Copilot connectors, Copilot Studio knowledge, Azure AI Search, or another RAG/search pattern.
- If no, continue classification.
Step 5: Look for analytics or model work
Ask whether the need is prediction, classification, extraction, optimization, reporting, or analytics rather than autonomous task execution.
- If yes, prefer Microsoft Fabric, Power BI, Azure Machine Learning, Foundry Models, or embedded AI in the business app.
- If no, continue agent assessment.
Step 6: Confirm agentic behavior
Ask whether the workflow requires adaptive reasoning, planning, context-dependent tool use, or autonomous task execution.
- If yes, continue to agent platform selection.
- If no, route to the best non-agent pattern identified above, or stop if there is no measurable value.
Step 7: Add controls for high-risk action
Ask whether the action is high risk, regulated, financially material, externally visible, or hard to reverse.
- If yes, add human approval, deterministic workflow boundaries, least privilege, audit, rollback, and stronger validation.
- If no, standard controls may be sufficient.
Recommended output: every idea should receive a routing decision, not just an agent/no-agent decision.
| Routing Decision | Use When |
|---|---|
| SaaS/prebuilt agent | Existing Microsoft product capability meets the requirement with acceptable controls. |
| Microsoft 365 extension | Work happens primarily in Microsoft 365 and needs grounded productivity support or limited actions. |
| Copilot Studio agent | Low-code agent, knowledge, connectors, actions, channels, and Power Platform governance are sufficient. |
| Foundry/custom agent | Pro-code orchestration, model/tool control, advanced evaluation, custom runtime, or complex integration is required. |
| Automation/workflow | The process is deterministic or can be reliably represented as rules, workflow, or API integration. |
| Search/RAG | The need is grounded answers or content generation without adaptive tool use. |
| Analytics/model | The need is prediction, classification, extraction, optimization, or reporting. |
| Stop/defer | The outcome is unclear, value is weak, data cannot be used, or residual risk is unacceptable. |
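The seven-step filter can be sketched as a simple decision function. This is a minimal illustration, not an official tool: the flag names paraphrase each step's question, and step 7 (high-risk controls) is treated as additive controls rather than a routing outcome, matching the text above.

```python
from enum import Enum

class Route(Enum):
    CLARIFY_OUTCOME = "clarify outcome, owner, workflow, and measurement first"
    SAAS_PREBUILT = "SaaS/prebuilt agent"
    AUTOMATION = "automation/workflow"
    SEARCH_RAG = "search/RAG"
    ANALYTICS = "analytics/model"
    AGENT = "continue to agent platform selection"
    STOP = "stop/defer"

def route_idea(
    outcome_clear: bool,          # Step 1: outcome, workflow, owner, KPI are clear
    saas_capability_fits: bool,   # Step 2: existing SaaS/prebuilt capability fits
    mostly_deterministic: bool,   # Step 3: rules-based or known integration sequence
    grounded_knowledge_work: bool,  # Step 4: static Q&A/summarization over approved sources
    analytics_need: bool,         # Step 5: prediction/classification/reporting, no agent loop
    needs_agentic_behavior: bool, # Step 6: adaptive reasoning, planning, tool use
) -> Route:
    """Apply the steps in order; the first matching step decides the route."""
    if not outcome_clear:
        return Route.CLARIFY_OUTCOME
    if saas_capability_fits:
        return Route.SAAS_PREBUILT
    if mostly_deterministic:
        return Route.AUTOMATION
    if grounded_knowledge_work:
        return Route.SEARCH_RAG
    if analytics_need:
        return Route.ANALYTICS
    if needs_agentic_behavior:
        return Route.AGENT
    # No agentic behavior and no better non-agent fit: stop or defer.
    return Route.STOP
```

Note that step 7 would then layer human approval, least privilege, audit, and rollback on top of whichever route is chosen, rather than changing the route itself.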
2. Use-Case Classification
| Classification | Description | Typical Microsoft Pattern |
|---|---|---|
| Productivity | Helps users search, summarize, draft, analyze, or prepare work products. | Microsoft 365 Copilot, Copilot extensions, Microsoft 365 agents, Copilot Studio |
| Action | Takes bounded actions in business systems with approvals, tools, or connectors. | Copilot Studio, Power Platform, Dynamics 365, Microsoft Graph connectors, MCP where appropriate |
| Automation | Runs repeatable process steps with deterministic workflow and occasional AI decision support. | Power Automate, Copilot Studio agent flows, Foundry workflows, Microsoft Agent Framework |
| Knowledge/RAG | Answers questions or summarizes grounded content without adaptive tool use. | Microsoft 365 Copilot, Copilot Studio knowledge, Azure AI Search, Foundry |
| Model/analytics | Predicts, classifies, extracts, optimizes, or analyzes data without an agent loop. | Microsoft Fabric, Azure Machine Learning, Foundry Models, Power BI |
| Custom agent | Requires code-first orchestration, complex integrations, custom runtime, or advanced model/tool control. | Microsoft Foundry, Azure, Microsoft Agent Framework, custom app architecture |
3. Prioritization Scoring
Score each dimension from 1 to 5. Use weighted scoring only after decision owners agree on strategic priorities.
| Dimension | Weight | 1 | 3 | 5 |
|---|---|---|---|---|
| Business impact | 30% | Nice-to-have improvement | Meaningful local improvement | Directly tied to funded strategic outcome |
| Technical feasibility | 20% | Complex integration or unclear build path | Feasible with known gaps | Clear build path using existing Microsoft capabilities |
| Data readiness | 20% | Data inaccessible, stale, low quality, or unpermissioned | Key data exists with remediation needed | Authoritative, clean, current, permissioned, compliant data |
| User desirability | 15% | Low adoption pull or unclear workflow fit | Some user demand and change effort | Strong user pain, clear workflow placement, willing pilot group |
| Risk/control feasibility | 15% | Controls unclear or residual risk likely unacceptable | Controls possible with effort | Risks understood and controls available |
Recommended interpretation:
| Score | Decision |
|---|---|
| 4.0-5.0 | Strong pilot candidate |
| 3.0-3.9 | Candidate if gaps can be resolved before build |
| 2.0-2.9 | Redesign, simplify, or defer |
| Below 2.0 | Stop or route to non-agent backlog |
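The scoring mechanics above reduce to a weighted average on the 1-5 scale, mapped to the interpretation bands. A minimal sketch, using the weights from the table (dimension keys are illustrative names, not a prescribed schema):

```python
# Weights from the prioritization table; they sum to 1.0,
# so the weighted score stays on the same 1-5 scale.
WEIGHTS = {
    "business_impact": 0.30,
    "technical_feasibility": 0.20,
    "data_readiness": 0.20,
    "user_desirability": 0.15,
    "risk_control_feasibility": 0.15,
}

def weighted_score(scores: dict) -> float:
    """Weighted average of per-dimension scores, each rated 1 to 5."""
    assert set(scores) == set(WEIGHTS), "score every dimension exactly once"
    assert all(1 <= s <= 5 for s in scores.values()), "scores must be 1-5"
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def decision(score: float) -> str:
    """Map a weighted score to the recommended interpretation band."""
    if score >= 4.0:
        return "Strong pilot candidate"
    if score >= 3.0:
        return "Candidate if gaps can be resolved before build"
    if score >= 2.0:
        return "Redesign, simplify, or defer"
    return "Stop or route to non-agent backlog"
```

For example, an idea scoring 5 on business impact, 4 on feasibility and desirability, and 3 on data readiness and risk lands at roughly 3.95: a candidate only if the data and control gaps close before build.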
4. Platform Selection
Choose the simplest Microsoft-aligned option that meets functional, governance, and lifecycle requirements.
| Decision Point | Recommended Pattern |
|---|---|
| Need is standard productivity, research, drafting, analysis, or Microsoft 365-grounded work | Use Microsoft 365 Copilot or Microsoft 365 Copilot agents/extensions |
| Need is a business-user configurable agent with knowledge, topics, connectors, actions, channels, and governance | Use Copilot Studio |
| Need is a pro-code agent, custom model selection, advanced evaluation, model routing, custom tools, or deeper AI engineering | Use Microsoft Foundry and Azure services |
| Need is deterministic workflow with light AI support | Use Power Automate, Copilot Studio agent flows, or Foundry workflows |
| Need is Dynamics 365 process augmentation | Extend Dynamics 365 Copilot and related business app capabilities before custom build |
| Need is predictive analytics or model training rather than agent behavior | Use Microsoft Fabric, Azure Machine Learning, Foundry Models, or Power BI |
| Need requires custom runtime, cross-platform orchestration, specialized security boundary, or advanced engineering control | Use code-first architecture on Azure with Microsoft Agent Framework or appropriate frameworks |
5. Single-Agent Versus Multi-Agent Decision
Default to a single-agent pilot unless there is a clear reason to split responsibilities.
| Use Single Agent When | Use Multi-Agent When |
|---|---|
| Scope is narrow and one owner can govern the agent. | The workflow crosses materially different domains, teams, or policy boundaries. |
| The agent uses a small set of tools and knowledge sources. | Distinct agents need separate identities, skills, data access, or approval paths. |
| Latency and observability are easier to manage with one orchestration path. | Parallel work, handoffs, or specialist roles create measurable value. |
| The pilot is testing whether agentic behavior is useful at all. | Scale, reuse, or future growth requires modular agent responsibilities. |
6. Grounding And Tool-Use Decision
| Need | Pattern | Control Focus |
|---|---|---|
| Answer from enterprise content | Search/RAG | Source authority, freshness, permissions trimming, citation, content lifecycle |
| Read or update business records | API/tool action | Least privilege, approval, validation, audit, rollback |
| Connect to standardized external or internal tools | MCP or connector | Server trust, tool schema, authorization, data exposure, logging |
| Execute high-risk business decision | Deterministic workflow with human approval | Policy enforcement, explainability, segregation of duties, evidence |
| Combine knowledge, action, and adaptive reasoning | Mixed grounding and tools | Orchestration, prompt injection defense, tool constraints, telemetry |
7. Go/No-Go Gates
| Decision | Go | Redesign | Stop |
|---|---|---|---|
| Business value | KPI target is material and measurable. | Value exists but KPI or owner is unclear. | No meaningful business outcome or sponsor. |
| Agent fit | Reasoning/tool/adaptive behavior is required. | Need can be simplified. | Deterministic automation or static Q&A is enough. |
| Data readiness | Authoritative data is available and permissioned. | Data remediation is achievable in pilot timeframe. | Data cannot be accessed, trusted, or used compliantly. |
| Security/compliance | Controls and evidence are acceptable. | Controls need architecture changes. | Residual risk is unacceptable. |
| User adoption | Pilot users agree on workflow placement and the feedback loop. | Change plan needs revision. | Users do not want the solution or cannot use it. |
| Operations | Owner, telemetry, support, cost, and lifecycle are assigned. | Operating model needs more work. | No accountable owner or support path. |
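The gates combine naturally into one overall decision. The combination rule below is an assumed reading of the table, not stated in it: any Stop outcome stops the initiative, any Redesign sends it back, and only all-Go proceeds.

```python
def overall_gate(gate_results: dict) -> str:
    """Combine per-gate outcomes ("go" | "redesign" | "stop") into one decision.

    Assumed precedence: stop dominates redesign, redesign dominates go.
    """
    outcomes = set(gate_results.values())
    assert outcomes <= {"go", "redesign", "stop"}, "unknown gate outcome"
    if "stop" in outcomes:
        return "stop"       # any unacceptable gate halts the initiative
    if "redesign" in outcomes:
        return "redesign"   # fixable gaps send the idea back, not forward
    return "go"             # proceed only when every gate passes
```

A use case that passes business value and agent fit but has unacceptable residual security risk still stops, which keeps a strong sponsor from overriding a failed compliance gate.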