Who this questionnaire is for
Teams and organizations at an early or mixed stage of AI adoption, including business leaders, product teams, and governance stakeholders.
What it assesses
Whether foundational controls are in place to safely adopt AI — including evidence handling, scope definition, ownership, and basic operational safeguards.
How it helps
This questionnaire provides a baseline snapshot of readiness. It highlights whether AI use is still exploratory or whether minimum controls exist to prevent silent errors, misuse, or audit exposure. Results help teams understand whether they can proceed responsibly or need to pause and formalise fundamentals first.
Best used when
- Starting AI pilots
- Scaling from experimentation to broader use
- Aligning business, technical, and governance expectations
AI Adoption Readiness Questionnaire
Answer each question; your score updates instantly. The questionnaire is designed to be practical, covering evidence, governance, and operations. Optional fields below help teams correlate results internally.
Section A — Evidence & Provenance
1) Do outputs include explicit citations or source references?
2) Can you trace a major claim to a specific document, version, or timestamp?
3) Are permitted sources defined (approved corpora, domains, or internal documents)?
4) Is retrieval filtered by metadata (date, jurisdiction, document type)?
5) Is there a verification step that can refuse to answer or return a supported partial answer?
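Question 4's metadata filtering can be illustrated with a minimal sketch. The document fields, jurisdictions, and filter values here are hypothetical, not drawn from any specific system:

```python
from datetime import date

# Hypothetical document records with provenance metadata (question 4).
documents = [
    {"id": "doc-1", "jurisdiction": "EU", "doc_type": "policy", "published": date(2023, 5, 1)},
    {"id": "doc-2", "jurisdiction": "US", "doc_type": "memo", "published": date(2021, 1, 15)},
    {"id": "doc-3", "jurisdiction": "EU", "doc_type": "policy", "published": date(2020, 3, 9)},
]

def filter_by_metadata(docs, jurisdiction, doc_type, not_before):
    """Keep only documents matching the permitted-source criteria."""
    return [
        d for d in docs
        if d["jurisdiction"] == jurisdiction
        and d["doc_type"] == doc_type
        and d["published"] >= not_before
    ]

allowed = filter_by_metadata(documents, "EU", "policy", date(2022, 1, 1))
print([d["id"] for d in allowed])  # ['doc-1']
```

A "yes" to question 4 means filters like these run at retrieval time, so out-of-scope material never reaches the model.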
Section B — Governance & Policy
6) Are use cases explicitly scoped (what the system can and cannot do)?
7) Are owners defined (business, product, engineering, risk, model)?
8) Do you maintain change logs for prompts, models, or data?
9) Do you have incident response for AI failures?
10) Are policy constraints enforceable in the system (not just documented)?
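Question 10 distinguishes documented policy from enforced policy. A minimal sketch of an in-system enforcement gate follows; the rule names and output structure are illustrative assumptions only:

```python
# Hypothetical enforcement gate (question 10): policy checks run in code
# before an output is released, not just in a written document.
POLICY_RULES = [
    ("requires_citation", lambda out: bool(out.get("citations"))),
    ("no_trading_advice", lambda out: "buy" not in out.get("text", "").lower()),
]

def enforce(output):
    """Release the output only if every rule passes; otherwise refuse."""
    for name, rule in POLICY_RULES:
        if not rule(output):
            return {"status": "refused", "failed_rule": name}
    return {"status": "released", **output}

print(enforce({"text": "See section 2.", "citations": ["doc-1"]})["status"])  # released
print(enforce({"text": "No sources here."})["failed_rule"])  # requires_citation
```

The point of the sketch is that a violation produces a refusal the system can log, rather than relying on reviewers to remember a policy page.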
Section C — Operations & Drift
11) Do you monitor refusal rates and verification pass rates over time?
12) Do you test failure paths (missing evidence, contradictions, policy conflicts)?
13) Is evaluation separated by stage (retrieval vs reasoning vs compliance)?
14) Are logs exportable for audit or review?
15) Do humans have defined review workflows for high-stakes outputs?
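The monitoring in question 11 can be as simple as rate calculations over an outcome log. The labels and sample data below are hypothetical:

```python
from collections import Counter

# Hypothetical outcome log (question 11): one label per answered query.
outcomes = ["pass", "pass", "refused", "pass", "failed", "refused", "pass"]

counts = Counter(outcomes)
total = len(outcomes)
refusal_rate = counts["refused"] / total
verification_pass_rate = counts["pass"] / total

print(f"refusal rate: {refusal_rate:.2f}")               # 0.29
print(f"verification pass rate: {verification_pass_rate:.2f}")  # 0.57
```

Tracking these two numbers per week (or per release) is what makes drift visible: a falling pass rate or a spike in refusals is a signal to investigate before users notice.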
Tip: Use the email or copy function to share results internally and compare perspectives across roles.
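The score-and-tier logic behind the questionnaire can be sketched as a simple count of "yes" answers. The tier thresholds and labels below are illustrative assumptions, not the questionnaire's actual cut-offs:

```python
def readiness_tier(answers):
    """Map 15 yes/no answers to an illustrative tier.

    Thresholds are assumptions for the sketch, not official cut-offs.
    """
    score = sum(answers)  # count of "yes" (True) answers
    if score >= 12:
        return score, "Ready: minimum controls are in place"
    if score >= 7:
        return score, "Partial: formalise gaps before scaling"
    return score, "Exploratory: pause and establish fundamentals"

score, tier = readiness_tier([True] * 9 + [False] * 6)
print(score, "-", tier)  # 9 - Partial: formalise gaps before scaling
```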
