AI Governance & Compliance Readiness

Who this questionnaire is for

Risk, legal, compliance, audit, policy, and governance teams — as well as regulators and internal oversight functions.

What it assesses

Whether governance controls are enforceable in practice, including ownership definitions, evidence trails, logging, incident response, policy enforcement, and auditability.

How it helps

This questionnaire distinguishes between documented governance and operational governance. It shows whether controls exist only on paper or are embedded into systems and workflows. Outputs support internal audits, regulatory conversations, and remediation planning.

Best used when

  • Preparing for audits or regulatory review
  • Formalising AI governance programs
  • Evaluating whether policies are actually enforced

The questionnaire

The 15 questions below are scored out of a maximum of 45. Optional header fields improve your governance record but do not affect the score.

Section A — Governance Model & Accountability

1) Are AI systems covered by an explicit governance model (owners, approvals, risk tiers, and escalation paths)?

Accountability
A real governance model names owners, defines approval gates, and makes escalation paths usable in practice.

2) Do you maintain an inventory of AI use cases (purpose, owners, users, data sources, and risk classification)?

Inventory
Inventory is the foundation for evidence, controls, and audit scope.
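As a concrete sketch of what an inventory entry might capture, here is a minimal, hypothetical schema (the field and tier names are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in an AI use-case inventory (illustrative schema)."""
    name: str
    purpose: str
    owner: str
    users: list
    data_sources: list
    risk_classification: str  # e.g. "low", "medium", "high"

def audit_scope(inventory, risk="high"):
    """Return the subset of the inventory in scope for an audit at a given risk tier."""
    return [uc for uc in inventory if uc.risk_classification == risk]
```

Because the inventory is structured, audit scoping becomes a query rather than a manual trawl through documents.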

3) Are policies translated into enforceable controls (not only documents) for high-stakes or sensitive contexts?

Enforcement
Controls should be measurable and testable — not just statements of principle.

4) Is human accountability defined for decisions influenced by AI (who signs off, who reviews, who can override)?

Human-in-loop
Define sign-off, review responsibilities, and override authority.

5) Are third-party vendors evaluated using evidence requirements (logs, exportability, failure-path demonstrations) and contractual expectations?

Vendors
Vendor due diligence should demand evidence, not just marketing claims.

Section B — Evidence, Transparency & Audit Trail

6) Can you produce an evidence trail for outputs (sources used, checks run, and why the system answered/refused)?

Evidence
Evidence is your defence in audits and incidents.
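One way to make this concrete is an append-only evidence record per output. The sketch below assumes a simple JSON record with a digest for tamper detection; the field names are illustrative:

```python
import json
import hashlib
from datetime import datetime, timezone

def evidence_record(query, sources, checks, decision, reason):
    """Build one append-only evidence record for an AI output (illustrative schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "query": query,
        "sources_used": sources,   # documents/retrievals the answer relied on
        "checks_run": checks,      # e.g. ["pii_scan", "citation_check"]
        "decision": decision,      # "answered" or "refused"
        "reason": reason,          # why the system answered or refused
    }
    # Digest over the canonical JSON makes later tampering detectable.
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record
```

Records like this can be exported verbatim during an audit or incident review.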

7) Are outputs labelled appropriately (uncertainty, limitations, scope, and “not advice” language where needed)?

Disclosure
Disclosure reduces harm and improves defensibility.

8) Are logs retained with a defined retention policy and access controls (privacy/security aligned)?

Retention
Retention and access governance are as important as having logs at all.
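A retention policy only governs if something enforces it. A minimal sketch, assuming two invented retention tiers (real windows come from your policy and regulators):

```python
from datetime import datetime, timedelta, timezone

# Assumed tiers -- replace with the windows your policy actually mandates.
RETENTION = {
    "audit": timedelta(days=365 * 7),  # long-lived evidence
    "debug": timedelta(days=30),       # short-lived operational logs
}

def is_expired(log_class, created_at, now=None):
    """True when a record has outlived its retention window and is due for purge."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > RETENTION[log_class]
```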

9) Can you reproduce an output later (same versioned prompt/model/tools, with recorded context)?

Reproducibility
Versioning + recorded context allow incident reconstruction and audit defence.
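The idea can be sketched as a run manifest that pins every component, plus a check that a replay matches it. Field names here are hypothetical:

```python
def run_manifest(prompt_version, model_id, tool_versions, context):
    """Capture everything needed to replay a run later (hypothetical field names)."""
    return {
        "prompt_version": prompt_version,      # e.g. git SHA of the prompt template
        "model_id": model_id,                  # pinned identifier, never "latest"
        "tool_versions": dict(tool_versions),  # every tool the run could call
        "context": context,                    # retrieved docs, parameters, seeds
    }

def can_replay(original, replay):
    """A replay counts only if every pinned component matches the original run."""
    pinned = ("prompt_version", "model_id", "tool_versions")
    return all(original[k] == replay[k] for k in pinned)
```

Note that pinning a floating alias such as "latest" defeats the purpose: the manifest must name an immutable version.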

10) Are high-stakes use cases subject to stronger requirements (human review, refusal thresholds, stricter evidence, additional tests)?

High-stakes
High-stakes use cases should not rely on generic “best effort” behaviour.

Section C — Change Management, Incidents & Continuous Assurance

11) Do you have change controls for prompts, models, tools, and data access (approvals + rollback)?

Change control
Governance is weakest at change boundaries — enforce approvals and rollback paths.
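A minimal sketch of that gate, assuming a simple two-approver rule (the threshold and state shape are illustrative):

```python
def apply_change(state, change, approvals, required=2):
    """Gate a prompt/model/tool change on approvals and keep a rollback snapshot."""
    if len(set(approvals)) < required:
        raise PermissionError("insufficient approvals; change rejected")
    rollback = dict(state)  # snapshot to restore if the change misbehaves
    return {**state, **change}, rollback
```

The point is that both halves matter: changes that cannot be approved are blocked, and approved changes always leave a path back.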

12) Are structured evaluations run before release and at regular intervals (not only demos)?

Evaluation
You can’t govern what you don’t measure — require structured tests beyond demos.

13) Do you maintain an incident response process for AI failures (triage, comms, remediation, learning loop)?

Incident response
Incident response should be usable, practiced, and tied to measurable remediation.

14) Do you monitor drift and control effectiveness (verification pass rates, refusal shifts, new failure classes)?

Monitoring
If drift is not measured, governance silently decays.
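Drift detection can start very simply: track a verification pass rate over time and alert when it falls below a tolerance band. A sketch, with an assumed 5-point tolerance:

```python
def pass_rate(results):
    """Fraction of verification checks that passed in a window (list of booleans)."""
    return sum(results) / len(results)

def drift_alert(baseline_rate, current_rate, tolerance=0.05):
    """Flag when the current pass rate drops more than `tolerance` below baseline."""
    return (baseline_rate - current_rate) > tolerance
```

The same pattern applies to refusal rates or any other control metric: pick a baseline, measure continuously, and alert on the delta.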

15) Are periodic audits performed (controls reviewed, gaps tracked, remediation verified)?

Audit
Audits keep governance honest and prevent “paper compliance”.

Tip: The most common governance failure is policy without enforcement. If your score is high on documentation but low on audit trails, focus on making controls measurable and exportable.