Introduction
Last reviewed: 2026-05-11

Artificial intelligence is no longer an emerging technology. By May 2026, generative AI has been mainstream for more than three years; frontier models from Anthropic, Google, OpenAI, and xAI are routinely deployed inside regulated workflows; and the question facing every organisation has shifted from “should we govern AI?” to “how do we govern AI without slowing down what already works?”
AI governance is the set of frameworks, controls, and processes an organisation uses to ensure that the AI systems it builds, buys, or deploys are lawful, safe, fair, and aligned with its values. This handbook is a practical primer covering five neighbouring disciplines — governance, safety, trustworthiness, responsible AI, and risk management — tailored for practitioners, compliance officers, executives, and policymakers.
How this edition is different
The previous edition (March 2025) was written before the second major wave of AI regulation. Since then:
- The European Union has begun enforcing the AI Act and, on 7 May 2026, the Council and Parliament reached a provisional agreement on a Digital Omnibus that rebases the high-risk obligation deadline from August 2026 to December 2027.[1]
- The United States has replaced Executive Order 14110 with Executive Order 14179, published the America’s AI Action Plan, and issued a December 2025 executive order targeting state-law obstruction of federal AI policy.[2]
- Colorado (SB 24-205, effective 30 June 2026), Texas (TRAIGA, effective 1 January 2026), and California (SB 53, SB 942, AB 2013) have enacted binding state laws.[3]
- South Korea and Japan have national AI statutes; Canada’s Bill C-27/AIDA died on the order paper; China’s synthetic-content labelling rules took effect September 2025.[4]
- ISO/IEC 42005:2025 (AI impact assessment) and ISO/IEC 42006:2025 (audit/certification body requirements) joined ISO/IEC 42001:2023.[5]
- The NIST AI Safety Institute was renamed the Center for AI Standards and Innovation (CAISI) in June 2025.[6]
- The largest US AI copyright case to date — Bartz v. Anthropic — settled in September 2025 for $1.5 billion.[7]
Two entirely new chapters cover US state laws and copyright & IP. A third new chapter, Frontier Models, pulls together GPAI obligations, the EU Code of Practice, frontier-safety frameworks, and the international AI Safety Institute network.
How to read this handbook
This handbook is meant to be useful at three levels:
- As a reference. Each chapter is self-contained. Compliance officers can jump to EU AI Act or US Federal; engineers can jump to Technical Safety; executives can jump to Audience Guidance.
- As a maturity assessment. The six Maturity Models chapters provide seven-stage progressions for governance, safety, trust, responsible AI, risk, and compliance — useful for benchmarking and roadmap planning.
- As a primer. Read it cover-to-cover for a comprehensive view of the field as of mid-2026.
A note on stability. Some material in this edition is settled fact (signed, in force, or published) and some is pending. We flag pending items inline. The most volatile area at the time of writing is the EU AI Act Digital Omnibus, which has been agreed in principle but is awaiting formal adoption. Where the text says “PENDING,” verify against the source linked in References.
Looking forward
Three trends will dominate the next eighteen months:
- Compliance as a product feature. ISO/IEC 42006 unlocked credible third-party certification of AI management systems in 2025. Vendors that can produce a 42001 certificate and a GPAI Code of Practice signature are now winning enterprise procurement on those grounds alone.
- State-law fragmentation in the US. With the federal preemption executive order signed in December 2025 but no statute attached to the NDAA, state laws will continue to multiply through 2026-2027 until either a federal AI statute passes or the courts resolve preemption challenges.
- Frontier-model governance maturing. The CAISI testing agreements, the EU GPAI Code, the UK Blueprint, and Korea’s frontier-safety track are converging on a shared template: pre-deployment evaluations, post-deployment incident reporting, and structured disclosure of capabilities, limitations, and known failure modes.
What this means for organisations: AI governance is no longer an optional uplift — it is the cost of doing business with AI in any meaningful market. The good news is that the building blocks are now well understood. The remainder of this handbook is a practical guide to assembling them.
References

Council of the EU. (2026, May 7). Artificial intelligence: Council and Parliament agree to simplify and streamline rules.

The White House. (2025, January 23). Removing Barriers to American Leadership in Artificial Intelligence (EO 14179).

See chapter on US State Laws for full text and citations.

See chapter on International Regulation for full text and citations.

ISO/IEC. 42005:2025 — AI System Impact Assessment; 42006:2025 — Requirements for Bodies Providing Audit and Certification of AI Management Systems.

Authors Guild. What Authors Need to Know About the Anthropic Settlement.