How To Facilitate The Agentic AI Workshops

Use this guide to plan and run the four workshops plus implementation follow-up. Defer platform selection until use-case classification, data readiness, and governance evidence support it.

Pre-Work

Complete pre-work at least five business days before Workshop 1.

| Item | Owner | Output |
| --- | --- | --- |
| Confirm executive sponsor and business owner | Sponsor | Named sponsor, decision maker, and escalation path |
| Identify target business or operating context | Business owner | Process area, pain points, target users, in-scope systems |
| Collect current KPIs | Business owner | Baseline for cost, speed, quality, revenue, CX, EX, or risk |
| Identify data owners and system owners | IT/system owners | Initial source-system list and access constraints |
| Confirm governance stakeholders | Sponsor | Security, compliance, privacy, risk, AI CoE, platform team |
| Share template pack | Facilitation lead | Workbook and template set distributed |

Workshop 1: Strategy And Use-Case Selection

Purpose: Align AI ideas to measurable business outcomes and select use cases worth deeper assessment.

Recommended duration: 2.5 to 3 hours.

Required participants: executive sponsor, business owner, process SMEs, product owner, enterprise architect, AI/platform lead, change lead.

| Agenda | Time | Facilitation Notes | Output |
| --- | --- | --- | --- |
| Business outcome framing | 30 min | Ask what must improve and how it is measured today. Separate aspiration from measurable KPI. | Outcome map and KPI baseline |
| Workflow walkthrough | 45 min | Map the current process, decisions, systems, handoffs, exceptions, and pain points. | Workflow opportunity notes |
| Agent fit filter | 45 min | Classify ideas as agent, RAG/search, deterministic automation, analytics/model, or prebuilt SaaS. | Use-case inventory and "not an agent" log |
| Portfolio scoring | 45 min | Score business impact, feasibility, data readiness, user desirability, and risk/control complexity. | Prioritization matrix |
| Pilot shortlist and gates | 30 min | Pick one to three pilots and define what scale, redesign, pause, or stop would mean. | Pilot shortlist and go/no-go criteria |

Exit criteria:

  • Each candidate use case has a business owner, target user, affected workflow, KPI, and classification.
  • Non-agent ideas are preserved with a recommended path.
  • Pilot candidates have success metrics and decision thresholds.
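The portfolio scoring step above can be sketched as a weighted matrix. This is a minimal illustration only: the dimension weights, the 1-5 scale, the inversion of risk/control complexity, and the example candidates are all assumptions, not prescribed by this guide.

```python
# Hypothetical weighted-scoring sketch for the prioritization matrix.
# Weights and the risk inversion are illustrative assumptions.
WEIGHTS = {
    "business_impact": 0.30,
    "feasibility": 0.25,
    "data_readiness": 0.20,
    "user_desirability": 0.15,
    "risk_complexity": 0.10,  # higher raw score = more risk, inverted below
}

def portfolio_score(scores: dict) -> float:
    """Combine 1-5 dimension scores into one weighted score.

    Risk/control complexity is inverted (6 - score) so a high-risk
    use case lowers the overall ranking instead of raising it.
    """
    total = 0.0
    for dim, weight in WEIGHTS.items():
        raw = scores[dim]
        if dim == "risk_complexity":
            raw = 6 - raw
        total += weight * raw
    return round(total, 2)

candidates = {
    "invoice triage agent": {"business_impact": 5, "feasibility": 4,
                             "data_readiness": 3, "user_desirability": 4,
                             "risk_complexity": 2},
    "policy Q&A search":    {"business_impact": 3, "feasibility": 5,
                             "data_readiness": 4, "user_desirability": 4,
                             "risk_complexity": 1},
}
ranked = sorted(candidates, key=lambda c: portfolio_score(candidates[c]),
                reverse=True)
```

Keeping the formula visible in the workshop makes the go/no-go conversation about the weights themselves, which is usually where the real disagreement sits.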

Workshop 2: Data And Architecture

Purpose: Confirm whether the pilot can be grounded safely and select the Microsoft solution pattern.

Recommended duration: 2.5 to 3 hours.

Required participants: business owner, data owners, system owners, enterprise architect, Microsoft 365/Power Platform lead, Azure/Foundry lead, security architect.

| Agenda | Time | Facilitation Notes | Output |
| --- | --- | --- | --- |
| Data source review | 40 min | Identify systems of record, knowledge repositories, operational systems, APIs, and data owners. | Data access map |
| Data readiness assessment | 45 min | Score accuracy, timeliness, cleanliness, completeness, permissions, compliance, residency, and availability. | Data readiness assessment |
| Grounding decision | 40 min | Decide whether the agent needs RAG/search, API/tool calls, MCP, connectors, or mixed grounding. | Retrieval decision register |
| Platform selection | 45 min | Apply the selection order: prebuilt SaaS first, then Microsoft 365 Copilot extension, Copilot Studio, Foundry, or custom build. | Platform selection record |
| Architecture sketch | 40 min | Draft user channel, agent runtime, identity, tools/actions, data sources, logs, monitoring, and control points. | Target architecture |

Exit criteria:

  • Authoritative data sources and access constraints are documented.
  • Grounding and tool-use decisions have rationale.
  • Target Microsoft platform pattern is selected with assumptions and tradeoffs.
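The data readiness assessment can be captured as a simple scorecard. A minimal sketch, assuming a 1-5 scale and a per-dimension minimum; the threshold, the example source, and its scores are illustrative, while the dimension list follows the agenda row above.

```python
# Hypothetical data-readiness scorecard; thresholds are assumptions.
DIMENSIONS = ["accuracy", "timeliness", "cleanliness", "completeness",
              "permissions", "compliance", "residency", "availability"]

def readiness(scores: dict, minimum: int = 3):
    """Return the average 1-5 score and any dimensions below the minimum.

    A single failing dimension (e.g. permissions) should block grounding
    on that source, regardless of a good average.
    """
    gaps = [d for d in DIMENSIONS if scores[d] < minimum]
    avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return round(avg, 2), gaps

erp_scores = {"accuracy": 4, "timeliness": 5, "cleanliness": 3,
              "completeness": 5, "permissions": 2, "compliance": 4,
              "residency": 5, "availability": 4}
avg, gaps = readiness(erp_scores)
```

Reporting gaps separately from the average keeps a strong source from masking a blocking permissions or compliance problem.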

Workshop 3: Governance And Risk

Purpose: Define ownership, policy, security, responsible AI, compliance, audit, and lifecycle controls.

Recommended duration: 3 hours.

Required participants: business owner, product owner, AI CoE, platform team, security, compliance, privacy, risk, operations, enterprise architect.

| Agenda | Time | Facilitation Notes | Output |
| --- | --- | --- | --- |
| Operating model | 40 min | Assign ownership across sponsor, product owner, workload team, AI CoE, platform, security, compliance, and operations. | Operating model and RACI |
| Agent lifecycle and registry | 35 min | Define registration metadata, identity, access scope, funding, monitoring, pause, retire, and review requirements. | Agent registry model |
| Agent charter | 45 min | Define purpose, scope, prohibited actions, tools, approvals, fallback, escalation, and memory/retention. | Agent charter |
| Threat model and controls | 55 min | Cover prompt injection, data leakage, privilege misuse, tool misuse, residency, model risk, audit gaps, and abuse. | Risk/control register |
| Responsible AI evidence | 25 min | Define evidence for fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability. | Responsible AI assessment inputs |

Exit criteria:

  • Every material risk has an owner, control, evidence type, and residual risk decision path.
  • Agent boundaries and prohibited actions are explicit.
  • The agent can be paused, audited, monitored, reviewed, and retired.
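The registry model from the lifecycle step can be sketched as a record shape. The field names, identifiers, and lifecycle states below are assumptions drawn from the agenda row above, not a prescribed schema.

```python
# Hypothetical registry-entry shape for the agent lifecycle step.
from dataclasses import dataclass

@dataclass
class AgentRegistryEntry:
    agent_id: str
    owner: str                 # accountable product owner
    identity: str              # workload identity the agent runs as
    access_scope: list         # systems and data it may touch
    funding_code: str
    review_cadence_days: int = 90
    status: str = "registered" # registered -> active -> paused -> retired

    def pause(self) -> None:
        """Pausing must always be possible, per the exit criteria above."""
        self.status = "paused"

entry = AgentRegistryEntry(
    agent_id="invoice-triage-01",     # illustrative values
    owner="ap-product-owner",
    identity="svc-invoice-agent",
    access_scope=["erp.invoices.read", "teams.notify"],
    funding_code="CC-1234",
)
entry.pause()
```

Modeling status as an explicit field makes "can this agent be paused, audited, and retired?" a yes/no question the registry itself answers.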

Workshop 4: Pilot Design

Purpose: Define what will be built, how it will be tested, and how rollout and operations will work.

Recommended duration: 2.5 to 3 hours.

Required participants: business owner, product owner, engineering lead, QA/test lead, AI/platform lead, security, operations, change lead, support lead.

| Agenda | Time | Facilitation Notes | Output |
| --- | --- | --- | --- |
| Pilot scope | 30 min | Confirm included users, workflows, systems, data, tools, and exclusions. | Pilot scope statement |
| Validation design | 50 min | Define golden test set, task-completion thresholds, quality metrics, safety tests, red-team cases, and cost and latency limits. | Pilot validation plan |
| ALM and environments | 35 min | Define dev/test/prod environments, promotion gates, prompt/version control, connector promotion, data refresh, and rollback. | ALM/environment strategy |
| Rollout and change | 35 min | Plan launch channel, Teams or business app placement, communications, training, support, and feedback collection. | Rollout/change plan |
| Operations and scale decision | 40 min | Define telemetry, dashboard, review cadence, cost controls, lifecycle review, and the scale/redesign/pause/stop decision. | Observability spec and operations plan |

Exit criteria:

  • Build scope and non-goals are locked for the pilot.
  • Validation thresholds are known before development starts.
  • Rollout, support, operations, cost controls, and lifecycle review are assigned.
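Knowing validation thresholds before development starts can be made concrete as a gate. A minimal sketch: the metric names, threshold values, and example results are illustrative assumptions, though they mirror the validation-design agenda row.

```python
# Hypothetical validation gate for the pilot validation plan;
# metrics and thresholds are illustrative assumptions.
THRESHOLDS = {
    "task_completion_rate": 0.85,   # fraction of golden tasks completed
    "grounded_answer_rate": 0.95,   # answers traceable to approved sources
    "safety_test_pass_rate": 1.00,  # red-team/safety cases must all pass
    "p95_latency_seconds": 8.0,     # ceiling, checked with <=
}

def gate(results: dict):
    """Return (go, failures). Latency is a ceiling; the rest are floors."""
    failures = []
    for metric, threshold in THRESHOLDS.items():
        value = results[metric]
        ok = (value <= threshold if metric == "p95_latency_seconds"
              else value >= threshold)
        if not ok:
            failures.append(metric)
    return (not failures, failures)

go, failures = gate({"task_completion_rate": 0.88,
                     "grounded_answer_rate": 0.93,
                     "safety_test_pass_rate": 1.0,
                     "p95_latency_seconds": 6.2})
```

Writing the gate down before building removes the temptation to negotiate thresholds after the results are in.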

Implementation Follow-Up

| Activity | Cadence | Expected Output |
| --- | --- | --- |
| Build standup | 2-3 times weekly | Blockers, decisions, risk updates, scope control |
| Control evidence review | Weekly | Updated risk/control register and audit evidence |
| Validation review | Weekly during test phase | Test pass/fail, red-team results, defect backlog |
| Pilot telemetry review | Weekly after launch | Value, quality, safety, cost, latency, adoption |
| Scale decision review | End of pilot | Scale, redesign, pause, or stop decision |
| Lifecycle review | Monthly or quarterly | Improvement backlog, access review, cost review, retirement decision |
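The end-of-pilot scale decision can be sketched as an explicit rule. The four outcomes come from this guide, but the decision rules and input questions below are assumptions for illustration only.

```python
# Hypothetical decision gate for the end-of-pilot review;
# the mapping rules are illustrative assumptions.
def scale_decision(value_met: bool, controls_hold: bool,
                   adoption_ok: bool) -> str:
    """Map pilot evidence to scale / redesign / pause / stop."""
    if not controls_hold:
        return "stop"       # control failures end the pilot
    if value_met and adoption_ok:
        return "scale"
    if value_met:
        return "redesign"   # value is real, but the experience needs rework
    return "pause"          # controls hold; revisit the use case later
```

Agreeing on a rule like this in Workshop 1 (the "gates" step) keeps the end-of-pilot review from becoming a fresh negotiation.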

Facilitator Checklist

  • Keep business outcomes visible in every workshop.
  • Ask whether simpler automation, RAG, analytics, or a prebuilt SaaS agent would solve the need.
  • Capture unresolved assumptions as decisions with owners and dates.
  • Require evidence for governance and security claims.
  • Do not allow platform selection to precede data readiness and use-case classification.
  • Keep the pilot narrow enough to validate value and controls quickly.

Agent Kit helps teams shape governed, measurable agentic AI initiatives.