Who this questionnaire is for
Executives, senior leaders, and decision-makers responsible for approving AI initiatives, budgets, and organisational direction.
What it assesses
Whether AI adoption is strategically grounded — including clarity of business objectives, prioritised use cases, ownership models, rollout discipline, and decision gates.
How it helps
This questionnaire turns abstract “AI strategy” into explicit decisions. It surfaces whether leadership has defined what AI is for, what it should not be used for, who owns outcomes, and how risk is managed before scale. Outputs are designed to support board-level and senior leadership discussions.
Best used when
- Deciding whether to scale AI beyond pilots
- Aligning leadership on priorities and risk tolerance
- Preparing for enterprise-wide adoption
AI Strategy & Adoption Readiness (Executive)
This assessment helps leaders identify whether their organisation is ready to adopt AI without chaos. It focuses on strategy, ownership, governance, and rollout discipline.
Section A — Strategy & Value
1) Do you have a clear business objective for AI that is measurable (time saved, cost reduced, revenue increased, risk reduced)?
2) Are your AI use cases prioritised (top 3–5) with an explicit “do not do” list?
3) Do you know what data the system will rely on (sources, freshness, ownership, access rights)?
4) Do you have a “safe to ship” definition for AI outputs (quality threshold, refusal behaviour, escalation for high-stakes cases)? (See the sketch below.)
5) Do you have a defined adoption roadmap (pilot → gated production → scale) with decision gates?
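Questions 4 and 5 are easier to answer once the thresholds and gates exist as a written artefact. The Python sketch below shows one hypothetical way to capture them; every field name, threshold, and stage label is an illustrative assumption, not a prescribed standard.

```python
# A minimal sketch, assuming hypothetical names and thresholds throughout:
# the "safe to ship" definition and rollout gates written down as data,
# so they can be reviewed and versioned like any other release artefact.

SAFE_TO_SHIP = {
    "min_eval_pass_rate": 0.95,          # fraction of evaluation cases that must pass
    "must_refuse": ["legal_advice", "medical_advice"],        # topics the system declines
    "escalate_to_human": ["low_confidence", "high_stakes"],   # routed for a human decision
}

ROLLOUT_STAGES = ["pilot", "gated_production", "scale"]  # pilot -> gated production -> scale

def gate_decision(eval_pass_rate: float, open_sev1_incidents: int) -> str:
    """Decide whether a change may advance to the next rollout stage."""
    if eval_pass_rate < SAFE_TO_SHIP["min_eval_pass_rate"]:
        return "hold: evaluation quality below the agreed threshold"
    if open_sev1_incidents > 0:
        return "hold: unresolved severity-1 incidents"
    return "advance: criteria met; record the approval in the change log"

print(gate_decision(eval_pass_rate=0.97, open_sev1_incidents=0))
# -> advance: criteria met; record the approval in the change log
```

The point of writing this down is that “safe to ship” becomes a reviewable, versioned definition rather than a judgement re-made from scratch at each release.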
Section B — Ownership & Operating Model
6) Are owners assigned for business value, product delivery, engineering reliability, and governance?
7) Do you have an intake process for new AI use cases (risk tiering, approval, prioritisation)? (See the sketch below.)
8) Do you have a policy defining which tasks require a human decision (human-in-the-loop) and which can be fully automated?
9) Do you have training and enablement (what to trust, when to escalate, how to report failures)?
10) Are vendors/tools evaluated with evidence requirements (logs, exportability, failure cases) rather than demos only?
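As a concrete reading of question 7, the sketch below shows what a minimal intake record with risk tiering might look like. The tier names, approvers, and triage rules are all hypothetical assumptions for illustration, not a recommended taxonomy.

```python
# A minimal sketch of an intake record with risk tiering (question 7).
# Tier names, approvers, and triage rules are illustrative assumptions.

RISK_TIERS = {
    "low":    {"approver": "team lead",        "human_in_loop": False},
    "medium": {"approver": "product owner",    "human_in_loop": True},
    "high":   {"approver": "governance board", "human_in_loop": True},
}

def triage(use_case: str, customer_facing: bool, high_stakes: bool) -> dict:
    """Assign a proposed AI use case to a risk tier at intake."""
    if high_stakes:
        tier = "high"
    elif customer_facing:
        tier = "medium"
    else:
        tier = "low"
    return {"use_case": use_case, "tier": tier, **RISK_TIERS[tier]}

print(triage("contract clause summarisation", customer_facing=True, high_stakes=False))
# -> {'use_case': 'contract clause summarisation', 'tier': 'medium',
#     'approver': 'product owner', 'human_in_loop': True}
```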
Section C — Governance, Risk & Continuity
11) Do you have incident response for AI failures (triage, rollback, comms, learning loop)?
12) Do you track drift over time (quality shifts, refusal changes, source shifts, new failure patterns)?
13) Do you maintain change logs and approvals for prompts, models, tools, and data access?
14) Can you produce an evidence trail for outputs (sources used, checks applied, why the system answered the way it did)? (See the sketch below.)
15) Do you run structured evaluations (not demos) before shipping changes and at regular intervals?
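Question 14's evidence trail is easiest to picture as a record attached to every output. The sketch below is one assumed shape (all field names are illustrative); note how the model version ties each answer back to the change log that question 13 asks about.

```python
# A minimal sketch of an evidence record (question 14); field names are
# assumed for illustration. The model version links each output back to
# the change log that question 13 asks about.

import json
from datetime import datetime, timezone

def evidence_record(output: str, sources: list[str], checks: dict, model_version: str) -> str:
    """Serialise one output together with its sources and the checks applied."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "sources": sources,      # what the answer relied on
        "checks": checks,        # which checks ran, and their results
        "output": output,
    })

print(evidence_record(
    output="Q3 revenue grew 4% quarter on quarter.",
    sources=["finance_report_2024_q3.pdf#page=2"],
    checks={"grounded_in_sources": True, "refusal_triggered": False},
    model_version="assistant-v12",
))
```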
Tip: If multiple leaders complete this assessment, compare answers across roles. Misalignment is often the biggest hidden risk in AI programmes.
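One hypothetical way to operationalise this tip: collect yes/no answers per leader and flag the questions where roles diverge. The roles and answers below are made up purely for illustration.

```python
# A purely illustrative sketch: flag questions where leaders' yes/no
# answers diverge. Roles and answers below are made up.

answers = {
    "CEO":  {1: True,  2: True,  11: False},
    "CTO":  {1: True,  2: False, 11: False},
    "CISO": {1: False, 2: False, 11: False},
}

def misaligned(answers: dict) -> list[int]:
    """Return question numbers where leaders did not all answer the same way."""
    questions = next(iter(answers.values()))
    return [q for q in questions if len({a[q] for a in answers.values()}) > 1]

print(misaligned(answers))  # -> [1, 2]: disagreement on the objective and on use-case priorities
```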
