International Standards (ISO/IEC)
Last reviewed: 2026-05-11
Two new ISO/IEC standards joined the AI governance toolkit during 2025: ISO/IEC 42005 (AI System Impact Assessment) and ISO/IEC 42006 (Requirements for Bodies Providing Audit and Certification of AI Management Systems). With 42006 in place, third-party certification of AI management systems under 42001 became credible during the second half of 2025. Anthropic received the first such certification in January 2025; IBM Granite, UiPath, and Changi Airport followed.[1]
ISO/IEC 42001:2023 — AI Management Systems
Published in December 2023, ISO/IEC 42001 is the first global standard for AI management systems.[2] It is the AI-specific analogue of ISO 9001 (quality) and ISO 27001 (information security): a certifiable management-system standard that requires organisations to establish an AI governance policy, senior-leadership commitment, risk management processes, resource allocation, and operational controls covering the AI lifecycle.
42001 is sector-agnostic and proportionate to organisation size. Achieving certification demonstrates to regulators and customers that AI projects are managed against recognised baselines for ethics, transparency, accountability, bias mitigation, safety, and privacy.
What changed in 2025-2026. With ISO/IEC 42006 now defining requirements for the bodies that audit and certify AIMS, the certification pathway is real rather than theoretical. Certification is increasingly relevant for procurement — enterprise buyers and regulators are starting to ask for it as a baseline.
ISO/IEC 23894:2023 — AI Risk Management
ISO/IEC 23894 is the AI-specific companion to ISO 31000 (generic risk management).[2:1] It guides organisations through identifying risks across the AI lifecycle — from data collection and model training to deployment and monitoring — assessing severity, and treating risks with appropriate controls.
Together, 23894 (risk management) and 42001 (management system) form a coherent toolkit: 42001 establishes the governance structure, 23894 provides the risk-specific procedures.
ISO/IEC 22989:2022 — AI Concepts and Terminology
ISO/IEC 22989 is the foundational terminology standard for AI. Where ambiguity matters — in contracts, in compliance documentation, in incident reporting — aligning on 22989 definitions of AI system, AI agent, model, training, and similar terms reduces downstream disputes. Definitions used in this handbook’s Glossary are aligned with 22989 wherever the standard provides a definition.
ISO/IEC 42005:2025 — AI System Impact Assessment
Published in May 2025, ISO/IEC 42005:2025 is the first international standard dedicated to AI impact assessment.[3] It provides lifecycle guidance for assessing how an AI system affects individuals, groups, and society — covering risk identification, stakeholder analysis, evaluation of impacts on fundamental rights, and documentation.
42005 is not certifiable on its own but feeds into 42001 compliance: an AI management system that conforms to 42001 will use 42005 as the operational guide for impact assessments. It also aligns with the Fundamental Rights Impact Assessment required under Article 27 of the EU AI Act for certain high-risk systems, making 42005 a natural choice for organisations needing a single methodology that satisfies both standards-based and regulatory regimes.
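The kind of documentation 42005 calls for can be sketched as a simple record structure. This is an illustration only: the field names and severity scale below are assumptions for the sketch, not terminology taken from the standard's text.

```python
from dataclasses import dataclass, field


@dataclass
class ImpactAssessment:
    """Minimal record for one AI system impact assessment (illustrative)."""
    system_name: str
    lifecycle_stage: str                   # e.g. "design", "deployment"
    stakeholders: list = field(default_factory=list)
    rights_impacts: dict = field(default_factory=dict)   # right -> severity
    mitigations: dict = field(default_factory=dict)      # right -> mitigation text

    def unmitigated_high_impacts(self):
        # High-severity impacts on rights with no recorded mitigation yet.
        return [r for r, sev in self.rights_impacts.items()
                if sev == "high" and r not in self.mitigations]
```

A record like this gives the impact assessment a reviewable artefact: the same structure can back both a 42005-style assessment and an Article 27 FRIA, which is the dual use the text above describes.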
ISO/IEC 42006:2025 — Audit and Certification
Published in 2025, ISO/IEC 42006 defines the requirements for bodies that audit and certify AI management systems against 42001.[4] In practical terms, 42006 is what makes 42001 certification credible: it ensures that certification bodies have appropriate competence, independence, and process, so that a 42001 certificate from one accredited body means the same thing as a 42001 certificate from another.
The combined effect of 42001 + 42005 + 42006 in 2025 is that AI management-system certification now has the same architectural completeness as quality management (ISO 9001 + 9000 family + 17021) or information security (ISO 27001 + 27000 family + 17021). Expect certification to become a procurement default in regulated sectors during 2026-2027.
Other relevant ISO/IEC standards
- ISO/IEC 38507:2022 — Governance implications of the use of AI by organisations. A governance-board-level companion to 42001.
- ISO/IEC 5338:2023 — AI system life cycle processes. Practical lifecycle reference, particularly useful for engineering teams.
- ISO/IEC TR 24028:2020 — Trustworthiness in AI. Technical report on trustworthy AI characteristics.
- ISO/IEC TR 24368:2022 — Overview of ethical and societal concerns. Useful framing for ethics teams.
How to use these standards
For organisations starting from scratch, a typical path is:
- Adopt 22989 terminology in internal documentation.
- Build the management system to 42001, using 38507 for board-level governance hooks.
- Use 23894 to design the risk management process inside 42001.
- Use 42005 as the operational guide for impact assessments.
- Pursue 42006-accredited certification when ready for external assurance.
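The path above is sequential, so it can be tracked as an ordered checklist. A minimal sketch, assuming a simple "next incomplete step" model of progress (the step descriptions paraphrase the list; nothing here comes from the standards themselves):

```python
# Ordered adoption path, mirroring the list above.
ADOPTION_PATH = [
    ("ISO/IEC 22989", "Adopt shared AI terminology"),
    ("ISO/IEC 42001", "Build the AI management system"),
    ("ISO/IEC 23894", "Design the risk management process"),
    ("ISO/IEC 42005", "Run impact assessments"),
    ("ISO/IEC 42006", "Pursue accredited certification"),
]


def next_step(completed):
    """Return the first standard in the path not yet completed, or None."""
    for standard, _activity in ADOPTION_PATH:
        if standard not in completed:
            return standard
    return None
```

In practice the steps overlap (42005 assessments feed the 42001 system as it is built), so treat the ordering as a planning aid rather than a strict gate.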
For organisations already certified to ISO 27001 or ISO 9001, 42001 is intentionally compatible: shared management-system clauses mean an integrated management system covering quality, security, and AI is feasible and often the most efficient implementation.
Anthropic. (2025, January). ISO/IEC 42001 certification. See also reporting on IBM Granite, UiPath, and Changi Airport 2025 certifications. ↩︎
Osler, Hoskin & Harcourt LLP. The role of ISO/IEC 42001 in AI governance. ↩︎ ↩︎
ISO/IEC. 42005:2025 — Information technology — Artificial intelligence — AI System Impact Assessment. ↩︎
ISO/IEC. 42006:2025 — Information technology — Artificial intelligence — Requirements for bodies providing audit and certification of AI management systems. ↩︎