
AI Governance Policy  

Accountability‑First Model for Safe AI Deployment  

**By Andrew Kingdom · Version 1.2 · 2026‑03‑23 · Public Draft**

Description

A plain‑language governance model for AI systems that enforces real accountability rather than merely stated responsibility. Designed for leaders, policymakers, and organizations deploying AI at scale, with a focus on limiting harm, exposing risk, and ensuring decisions remain traceable to individuals.

Policy Definition:
A policy is a clearly stated guide for decision-making and behavior; it is not a technical procedure or detailed instruction, but a signpost showing how we are expected to act responsibly and accountably.


Executive Summary

Core Rule:   No AI system is released unless it has survived independent challenge. Builders cannot approve their own systems.

Structure (who holds power)  

Mandatory Gates (no bypass)  

  1. Capability  
  2. Failure  
  3. Harm  
  4. Independent Review  
  5. Monitoring  
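The five gates above form a strict, ordered checklist with no bypass path. A minimal sketch of that rule (the gate names come from this policy; the function name, data shape, and everything else are illustrative assumptions, not part of the policy):

```python
# Sketch of the five mandatory release gates as an ordered, no-bypass
# checklist. Gate names mirror the policy; the rest is illustrative.

GATES = ["capability", "failure", "harm", "independent_review", "monitoring"]

def may_release(evidence: dict) -> bool:
    """Release is allowed only if every gate has passed.

    `evidence` maps a gate name to True (passed) or False/absent.
    There is deliberately no 'urgency' or 'override' parameter here:
    the core rule admits no bypass at this stage.
    """
    for gate in GATES:
        if not evidence.get(gate, False):
            return False  # the first unmet gate blocks release
    return True
```

The point of the sketch is the absence of escape hatches: urgency and commercial pressure simply do not appear as inputs.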

Accountability (named, not shared)  

Final Principle:   If we cannot clearly explain a system’s limits, we do not release it.


Full AI Governance Policy (Plain Language Version)

Purpose: prevent scaled harm from systems we do not fully understand


1. Scope

This policy applies to any AI system that:

No system is exempt based on size, novelty, or commercial pressure.


2. Core Requirement

No AI system may be deployed unless it has been independently tested and its limits are clearly understood and documented.   This process may not be bypassed for reasons of urgency, competition, or commercial pressure.

The team that builds a system cannot be the final authority on releasing it.


3. Roles and Authority

3.1 Builders (Product & Engineering)

Builders must:

They must not:

Accountability:   If known material risks are hidden or misrepresented, the approving executives are accountable for releasing a system without adequate disclosure. Builders who proactively escalate risks in good faith are protected from retaliation.


3.2 Risk & Integrity Unit (Independent Internal Team)

This team must:

They have authority to:

Independence & Resources:   The unit reports directly to the board (or equivalent governing body), maintains a budget separate from product lines, and has authority to hire its own technical and legal expertise.   The unit’s performance reviews and compensation must not be tied to the release schedule or commercial success of the products they audit.   Members may not report into product, engineering, or revenue‑generating functions.

Accountability:   If they approve a system without adequate testing or ignore clear risks, they share responsibility for resulting harm.


3.3 External Oversight (Independent Review)

For high‑impact systems—those affecting physical safety, fundamental rights (housing, employment, criminal justice, finance), or more than one million users—external review is required.
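The high-impact test is a simple disjunction: any one trigger is enough. A sketch under stated assumptions (the three triggers and the one-million-user threshold come from this section; the function and field names are hypothetical):

```python
# Sketch of the "high-impact" test in Section 3.3. The triggers
# (physical safety, fundamental-rights domains, more than 1,000,000
# users) come from the policy; the names here are illustrative.

FUNDAMENTAL_RIGHTS_DOMAINS = {"housing", "employment", "criminal_justice", "finance"}
USER_THRESHOLD = 1_000_000

def is_high_impact(affects_physical_safety: bool,
                   domains: set,
                   user_count: int) -> bool:
    """External review is required when any single trigger applies."""
    return (affects_physical_safety
            or bool(domains & FUNDAMENTAL_RIGHTS_DOMAINS)
            or user_count > USER_THRESHOLD)
```

Note that the user-count trigger is "more than one million", so a system at exactly the threshold is not caught by that trigger alone.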

External review must occur prior to deployment, not after release.   Findings, including disagreements, must be published.

Accountability:   If oversight is compromised or withheld, the issue must be disclosed and the body replaced.


3.4 Executive Decision‑Makers

Executives must:

They must not:

Overriding a Risk & Integrity Block:   If the unit blocks release, an override requires unanimous approval from a cross‑functional committee (e.g., CEO, Chief Risk Officer, board member) and triggers an automatic external audit.   The override and its justification must be disclosed publicly before deployment.
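The override rule combines three hard conditions with one automatic consequence. A minimal sketch, assuming a three-role committee as in the policy's example; the dataclass, field names, and function are illustrative, not prescribed:

```python
# Sketch of the override rule for a Risk & Integrity block. The policy
# specifies unanimity, a written justification, public disclosure before
# deployment, and an automatic external audit; names are illustrative.

from dataclasses import dataclass, field

REQUIRED_APPROVERS = {"ceo", "chief_risk_officer", "board_member"}

@dataclass
class OverrideRequest:
    approvals: set                      # roles that approved the override
    justification: str                  # must be non-empty and disclosed
    disclosed_publicly: bool = False
    external_audit_triggered: bool = field(default=False)

def validate_override(req: OverrideRequest) -> bool:
    """An override stands only with unanimous approval, a justification,
    and public disclosure; a valid override always triggers an audit."""
    unanimous = REQUIRED_APPROVERS <= req.approvals
    if unanimous and req.justification and req.disclosed_publicly:
        req.external_audit_triggered = True  # automatic, not optional
        return True
    return False
```

The audit flag is set inside the validation path on purpose: under this policy there is no valid override without an external audit.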

Accountability:   If harm occurs from known risks, executives are personally accountable and may be subject to internal and external consequences.


4. Required Release Process

4.1 Capability Statement

Clear description of:


4.2 Failure and Misuse Report

Document:


4.3 Harm Testing

Must include:


4.4 Independent Review

Risk & Integrity Unit must:

High‑impact systems must also undergo external review.


4.5 Deployment Controls

Systems must include:


4.6 Controlled Deployment (Optional Path)

For systems where real‑world learning is necessary before full release, a Controlled Deployment may be authorized. This requires:

Controlled deployments are not a bypass; they are a strictly supervised way to gather live data under controlled conditions.


5. Post‑Deployment Requirements

After release:


6. Records and Transparency

Record all major decisions, including:

Records must be:


7. Independent Appeals Mechanism

Any Builder or external party may appeal a decision made under this policy—such as a block, an override, or a finding of non‑compliance—to the External Oversight body or a designated board committee.   Appeals must be acknowledged within 15 days and resolved within a reasonable timeframe.   Retaliation against an appellant is prohibited.
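The only hard number in this section is the 15-day acknowledgment window, which can be sketched as a deadline check (the function and field names are hypothetical; only the 15 days comes from the policy):

```python
# Sketch of the appeal-acknowledgment deadline from Section 7.

from datetime import date, timedelta

ACK_WINDOW_DAYS = 15

def acknowledgment_due(filed_on: date) -> date:
    """Latest date by which an appeal must be acknowledged."""
    return filed_on + timedelta(days=ACK_WINDOW_DAYS)

def is_overdue(filed_on: date, today: date) -> bool:
    """True once the acknowledgment window has elapsed without action."""
    return today > acknowledgment_due(filed_on)
```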


8. Reporting and Protection

Provide clear reporting channels:

Retaliation against reporting is prohibited and penalized.


9. Enforcement

Violations include:

Consequences may include:

Disciplinary measures are applied progressively to encourage learning and improvement.   Progressive discipline does not apply in cases of intentional concealment or fraud.


10. Guiding Principle

If we do not understand a system well enough to explain its limits clearly, we do not release it.   If harm appears, we act immediately—not after justification or delay.


Version History


Intended Use

This document may be:


Authorship & Accountability

Authored and published by:

Andrew Kingdom

The author accepts responsibility for the ideas and structure presented herein.   Organizations adopting this policy retain full responsibility for its implementation and outcomes.


License

MIT License

Copyright (c) 2026 Andrew Kingdom

Permission is hereby granted, free of charge, to any person obtaining a copy of this document to use, copy, modify, and distribute it, subject to the following condition:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the document.

THE DOCUMENT IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DOCUMENT OR THE USE OR OTHER DEALINGS IN THE DOCUMENT.


Attribution

If you use or adapt this policy, attribution is appreciated:

“Based on AI Governance Policy by Andrew Kingdom”


Notes


Feedback

Issues and discussion can be raised via GitHub Issues in this repository.