Who this questionnaire is for
Risk, legal, compliance, audit, policy, and governance teams — as well as regulators and internal oversight functions.
What it assesses
Whether governance controls are enforceable in practice, including ownership definitions, evidence trails, logging, incident response, policy enforcement, and auditability.
How it helps
This questionnaire distinguishes between documented governance and operational governance. It shows whether controls exist only on paper or are embedded into systems and workflows. Outputs support internal audits, regulatory conversations, and remediation planning.
Best used when
- Preparing for audits or regulatory review
- Formalising AI governance programs
- Evaluating whether policies are actually enforced
AI Governance & Compliance Readiness
Scores automatically as you click. The top fields are optional — only the questions affect scoring.
Optional fields: these improve your governance record but are not required for scoring.
Section A — Governance Model & Accountability
1) Are AI systems covered by an explicit governance model (owners, approvals, risk tiers, and escalation paths)?
Accountability

2) Do you maintain an inventory of AI use cases (purpose, owners, users, data sources, and risk classification)?
Inventory

3) Are policies translated into enforceable controls (not only documents) for high-stakes or sensitive contexts?
Enforcement

4) Is human accountability defined for decisions influenced by AI (who signs off, who reviews, who can override)?
Human-in-loop

5) Are third-party vendors evaluated using evidence requirements (logs, exportability, failure-path demonstrations) and contractual expectations?
Vendors

Section B — Evidence, Transparency & Audit Trail
6) Can you produce an evidence trail for outputs (sources used, checks run, and why the system answered/refused)?
Evidence

7) Are outputs labelled appropriately (uncertainty, limitations, scope, and “not advice” language where needed)?
Disclosure

8) Are logs retained with a defined retention policy and access controls (privacy/security aligned)?
Retention

9) Can you reproduce an output later (same versioned prompt/model/tools, with recorded context)?
Reproducibility

10) Are high-stakes use cases subject to stronger requirements (human review, refusal thresholds, stricter evidence, additional tests)?
High-stakes

Section C — Change Management, Incidents & Continuous Assurance
11) Do you have change controls for prompts, models, tools, and data access (approvals + rollback)?
Change control

12) Are structured evaluations run before release and at regular intervals (not only demos)?
Evaluation

13) Do you maintain an incident response process for AI failures (triage, comms, remediation, learning loop)?
IR

14) Do you monitor drift and control effectiveness (verification pass rates, refusal shifts, new failure classes)?
Monitoring

15) Are periodic audits performed (controls reviewed, gaps tracked, remediation verified)?
Audit

Tip: The most common governance failure is policy without enforcement. If your score is high on documentation but low on audit trails, focus on making controls measurable and exportable.
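To make the "measurable and exportable" idea concrete: the evidence-trail and reproducibility questions (6 and 9) come down to recording, per output, the exact versions and checks involved, in a format an auditor can take away. Below is a minimal Python sketch of such a record; all field names and values are illustrative assumptions, not a schema this questionnaire prescribes.

```python
import json
import hashlib
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class EvidenceRecord:
    """One exportable record per AI output (field names are illustrative)."""
    model_version: str     # exact model version, so the output can be reproduced later
    prompt_version: str    # versioned prompt, per question 9
    tool_versions: dict    # versions of any tools/retrievers involved
    sources: list          # identifiers of the sources the output relied on
    checks: list           # verification checks that ran, with outcomes
    decision: str          # "answered" or "refused"
    decision_reason: str   # why the system answered or refused, per question 6
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def export(self) -> str:
        """Serialise to JSON for audit export; a content hash makes tampering detectable."""
        payload = asdict(self)
        payload["record_hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        return json.dumps(payload, indent=2, sort_keys=True)

# Hypothetical usage: record a refusal with its reasoning and sources.
record = EvidenceRecord(
    model_version="model-2025-01-15",
    prompt_version="triage-prompt-v7",
    tool_versions={"retriever": "1.4.2"},
    sources=["policy-doc-112", "kb-article-88"],
    checks=[{"name": "citation_check", "passed": True}],
    decision="refused",
    decision_reason="request fell outside approved scope",
)
print(record.export())
```

A record like this turns a policy statement ("we log our AI decisions") into something measurable: you can count records, sample them in an audit, and replay an output from the recorded versions.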
