AI Governance Handbook
A practical guide to AI regulation, governance, safety, and compliance — updated for the 2026 regulatory landscape.
What this handbook covers
The AI governance landscape changed substantially between 2024 and 2026: the EU AI Act began enforcement and was amended by the May 2026 Digital Omnibus; the United States rescinded Executive Order 14110 and replaced it with EO 14179 and America’s AI Action Plan; Colorado, Texas, and California enacted binding state AI laws; Korea and Japan adopted national AI statutes; and ISO/IEC 42005 and 42006 were published, unlocking third-party certification of AI management systems. This edition catches up on all of those changes and adds two new chapters: US state laws and copyright & IP.
Introduction
Who this handbook is for, how to read it, and the shape of the field today.
Legal & Regulatory Frameworks
EU AI Act, US federal & state law, international regulation, ISO standards, sectoral rules, copyright cases.
Privacy, Data & Security
Data governance, PETs, model security, third-party risk, incident response.
Technical Safety
Robustness, alignment, interpretability, bias mitigation, monitoring.
Frontier Models
GPAI Code of Practice, frontier-model safety frameworks, voluntary commitments, the AI Safety Institute network.
Audience Guidance
Practical focus for practitioners, compliance officers, executives, and policymakers.
AI Maturity Stages
Six maturity models, each with seven stages, covering governance, safety, trust, responsible AI, risk, and compliance.
Glossary
Core terminology aligned with ISO/IEC 22989:2022.
Changelog
What changed in v2.0.0 (the May 2026 rewrite) and earlier releases.
What changed in this edition? See the Changelog for a complete release log, or jump straight to the rewritten EU AI Act, US federal, US state laws, and international chapters, or the new Copyright & IP chapter.