Glossary of AI Governance Terms

Last reviewed: 2026-05-11

Definitions are aligned with ISO/IEC 22989:2022 where the standard provides one; ISO-sourced definitions are explicitly cited.[1] Definitions for newer concepts (agentic AI, GPAI, frontier model) are drawn from current regulatory instruments where available.

Agentic AI

AI system designed to perform multi-step actions in pursuit of a goal, typically via tool use, code execution, or interaction with external systems. Distinguished from non-agentic AI by the ability to take autonomous actions beyond producing output for direct human use.

AI agent

"Entity that senses and responds to its environment and takes actions to achieve goals."

ISO/IEC 22989:2022(E), 3.1.3

AI Act (EU)

Regulation (EU) 2024/1689 establishing harmonised rules on artificial intelligence. The world's first horizontal AI statute; entered into force on 1 August 2024, with obligations applying in phases. See EU AI Act chapter for current obligations and Omnibus amendments.

AI component

"Functional element that constructs an AI system."

ISO/IEC 22989:2022(E), 3.1.4

AI Office (EU)

The body within the European Commission (DG CONNECT) responsible for enforcing the EU AI Act with respect to general-purpose AI models, cross-border cases, and the GPAI Code of Practice. Established in early 2024; its GPAI enforcement powers apply from 2 August 2025.

AI Safety Institute / Center for AI Standards and Innovation (CAISI)

National bodies focused on frontier-model evaluation and AI safety standards. The US AISI was renamed CAISI in June 2025; UK AISI and others continue under the AISI brand. Members of the International Network of AI Safety Institutes coordinate on frontier-model evaluation. See Frontier Models.

Artificial intelligence (AI)

"Capability of an engineered system to acquire, process, and apply knowledge and skills." (ISO/IEC 22989). In regulatory practice, definitions vary; the EU AI Act Article 3(1) definition is widely cited.

ISO/IEC 22989:2022(E), 3.1.2

Artificial intelligence system (AI system)

"Engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives." (ISO/IEC 22989). The EU AI Act and OECD definitions are functionally aligned.

ISO/IEC 22989:2022(E), 3.1.5

Automated decision-making (ADM)

Decision-making by algorithmic or AI systems without human intervention. GDPR Article 22 restricts solely automated decisions that produce legal or similarly significant effects, subject to exceptions with safeguards including the right to obtain human intervention; Articles 13–15 require meaningful information about the logic involved.

Bias audit

Structured evaluation of an AI system for differential performance or disparate impact across protected groups. Required by NYC Local Law 144 for automated employment decision tools and emerging more broadly as best practice.
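The core metric NYC Local Law 144 audits report is the impact ratio: each category's selection rate divided by the selection rate of the most-selected category. A minimal sketch, with hypothetical group names and counts:

```python
# Impact-ratio calculation in the style of an NYC LL144 bias audit.
# Counts below are illustrative, not real audit data.
selected = {"group_a": 50, "group_b": 30}   # candidates advanced by the tool
assessed = {"group_a": 100, "group_b": 100}  # candidates assessed

rates = {g: selected[g] / assessed[g] for g in selected}
top_rate = max(rates.values())
impact_ratios = {g: r / top_rate for g, r in rates.items()}
# group_a: 0.50 / 0.50 = 1.0; group_b: 0.30 / 0.50 ≈ 0.6
```

A ratio well below 1.0 for a group (0.8 is a common rule-of-thumb threshold from the EEOC four-fifths rule, though LL144 itself sets no pass/fail line) flags potential disparate impact for further review.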

Conformity assessment

Process required by the EU AI Act for high-risk AI systems to demonstrate compliance before they are placed on the market or put into service. May involve a notified body for third-party assessment or self-assessment depending on the system category (Articles 43, 44).

Consequential decision

Decision affecting a consumer's access to or pricing of employment, education, financial services, essential government services, healthcare, housing, insurance, or legal services. Used in Colorado SB 24-205 to define the scope of high-risk AI systems.

Constitutional AI / RLAIF

Alignment technique using AI-generated feedback against an explicit set of principles to train models toward desired behaviour. Reduces reliance on human raters at scale.

Datasheets for datasets

Standardised documentation describing a dataset's motivation, composition, collection process, preprocessing, recommended uses, distribution, and maintenance. Proposed by Gebru et al. (2018); widely adopted as a transparency tool.

Differential privacy

Mathematical framework providing formal guarantees about the influence of any individual record on a model's output. Implemented through noise addition during training (DP-SGD) or output aggregation.
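The formal guarantee can be stated compactly. A randomised mechanism $\mathcal{M}$ is $\varepsilon$-differentially private if, for all datasets $D, D'$ differing in a single record and all sets of outputs $S$:

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S]
```

The relaxed $(\varepsilon, \delta)$ variant adds $+\,\delta$ to the right-hand side. Smaller $\varepsilon$ means any individual record has less influence on what the mechanism outputs.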

Explainability

"Property of an AI system to express important factors influencing the AI system results in a way that humans can understand." Distinguished from interpretability (which refers to the model's intrinsic understandability).

ISO/IEC 22989:2022(E), 3.5.4

Federated learning

Training paradigm in which a model is trained across distributed datasets without centralising the data. Widely deployed in healthcare, mobile keyboards, and cross-institutional research where data cannot or should not leave its source.
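The canonical aggregation step is federated averaging (FedAvg): clients train locally and the server averages their parameters weighted by local dataset size, so raw data never leaves its source. A minimal sketch with illustrative values:

```python
# Federated averaging (FedAvg) server-side aggregation step.
# client_params: per-client parameter vectors after local training;
# client_sizes: number of local training examples per client.
def fedavg(client_params, client_sizes):
    """Return the size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with unequal data volumes; only parameters are shared.
global_params = fedavg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
# → [2.5, 3.5]
```

Real deployments layer secure aggregation and often differential privacy on top, since shared parameters can still leak information about local data.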

Foundation model

Large model trained on broad data, intended to be adapted to many downstream tasks. In US discourse, often used interchangeably with "general-purpose AI" or "frontier model" though the terms are technically distinct.

Frontier model

Most capable AI models, typically those exceeding compute thresholds (10²⁵ FLOPs in the EU AI Act) or designated by regulators for posing systemic risk. See Frontier Models.

Fundamental Rights Impact Assessment (FRIA)

Assessment required by EU AI Act Article 27 of certain deployers of high-risk AI systems, evaluating impacts on fundamental rights protected by the Charter. ISO/IEC 42005 provides operational methodology.

General-purpose AI (GPAI) model

AI model "trained with a large amount of data using self-supervision at scale, [that] displays significant generality and is capable of competently performing a wide range of distinct tasks." (EU AI Act Article 3(63)). Subject to Articles 53-55 obligations.

GPAI Code of Practice

Voluntary instrument published 10 July 2025 that operationalises GPAI obligations under the EU AI Act. Three chapters: Transparency, Copyright, Safety & Security. Signatories include Google, Microsoft, OpenAI, Anthropic. See EU AI Act.

Hallucination

Generation of plausible but factually incorrect or fabricated output by a generative AI model. Mitigated through retrieval grounding, output verification, and clearer uncertainty signalling.

High-risk AI system (EU AI Act)

AI system classified as high-risk under Article 6: a safety component of a product covered by Annex I Union harmonisation legislation, or a system used in an Annex III use case. Subject to risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and conformity assessment obligations. Deadlines rebased by the May 2026 Omnibus.

Impact assessment (AI)

Structured evaluation of an AI system's effects on individuals, groups, and society, covering risks, stakeholder analysis, fundamental-rights impacts, and mitigations. Standardised in ISO/IEC 42005:2025.

Interpretability

Property of a model being inherently understandable to humans. Distinguished from explainability (which refers to producing human-understandable explanations for opaque models).

ISO/IEC 42001

International standard for AI management systems, published December 2023. Certifiable. Anthropic was the first AI developer to achieve certification (January 2025). See ISO Standards.

Machine learning (ML)

"Process using computational techniques to enable systems to learn from data or experience." Subfield of AI.

ISO/IEC 22989:2022(E), 3.2.10

Model card

Documentation describing an AI model's intended use, performance, limitations, training data, evaluation, and ethical considerations. Elements of model-card documentation are increasingly mandated by regulation (EU AI Act Article 11 technical documentation; California AB 2013 training-data disclosure).

NIST AI Risk Management Framework (RMF)

Voluntary US framework published January 2023, organising AI risk management into four functions: Govern, Map, Measure, Manage. Companion profiles include the Generative AI Profile (NIST AI 600-1) and Cybersecurity Framework Profile for AI (NIST IR 8596). See US Federal.

Reinforcement Learning from Human Feedback (RLHF)

Alignment technique training models using human preference signals. Widely deployed for instruction-following and value alignment in large language models.

Responsible Scaling Policy (RSP)

Frontier developer's published commitment to specific capability evaluations and risk thresholds, with mitigation actions triggered at defined thresholds. Originated with Anthropic; adopted in different forms by other frontier developers.

Risk-based regulation

Regulatory approach that calibrates obligations to the risk posed by a system or activity. The EU AI Act, Colorado AI Act, and Korea AI Basic Act are leading examples.

Safety case

Structured argument and evidence demonstrating that a system is acceptably safe for a defined use. Required for frontier models by California SB 53 and emerging as a standard unit of assurance for high-impact AI.

Systemic risk (EU AI Act)

Risk specific to high-impact capabilities of GPAI models. Models exceeding 10²⁵ FLOPs of training compute are presumed to pose systemic risk; the AI Office may also designate models as posing systemic risk based on capability assessment.

Trustworthiness (in AI)

"Ability to meet stakeholders' expectations in a verifiable way." Cross-cutting property addressed in NIST AI RMF; characteristics include validity, reliability, safety, security, accountability, transparency, explainability, privacy, and fairness.

ISO/IEC 22989:2022(E), 3.5.16

  1. ISO/IEC 22989:2022, Information technology — Artificial intelligence — Artificial intelligence concepts and terminology. ↩︎