Privacy, Data Governance, and Security

Last reviewed: 2026-05-11

Privacy and data governance are critical pillars of AI governance because AI systems consume and generate large volumes of data, often including personal and sensitive information. Compliance with privacy laws — GDPR in the EU, CCPA/CPRA and state-level laws in the US, PIPEDA in Canada, PIPA in Korea — is the starting point, but a mature AI governance programme also addresses data quality, model security, third-party risk, and incident response specifically calibrated to AI.

Foundational privacy law

The EU General Data Protection Regulation (GDPR) mandates principles — lawfulness, fairness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, accountability — that apply directly to AI training and inference. Article 22 GDPR restricts solely-automated decisions producing legal or similarly significant effects, requiring human review and meaningful information about the logic involved.[1] Fines reach the higher of €20 million or 4% of global annual turnover.
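
One way to operationalise the Article 22 constraint is a release gate that diverts solely-automated, legally significant decisions to a human reviewer. A minimal sketch, assuming a hypothetical `Decision` record and `release` gate; what counts as "legally significant" must come from legal review, not code:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str                 # e.g. "deny_credit"
    legally_significant: bool    # flagged upstream per legal review
    human_reviewed: bool = False

def release(decision: Decision, review_queue: list) -> bool:
    """Block solely-automated decisions with legal or similarly
    significant effects; divert them to a human review queue."""
    if decision.legally_significant and not decision.human_reviewed:
        review_queue.append(decision)   # route to a human reviewer
        return False
    return True

queue: list = []
d = Decision("subj-42", "deny_credit", legally_significant=True)
assert release(d, queue) is False and queue == [d]
```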

In the United States, CCPA/CPRA gives California residents rights to access, correct, delete, opt out of sale or sharing, and limit use of sensitive personal information.[2] Other states (Virginia VCDPA, Connecticut CTDPA, Colorado CPA, Utah UCPA, Texas TDPSA, Oregon OCPA, Delaware DPDPA, Iowa ICDPA, Tennessee TIPA, Montana MCDPA, Florida FDBR, New Jersey NJDPA, New Hampshire NHPA) have comparable statutes with varying coverage and enforcement; multi-state operations should follow a maintained resource such as the IAPP Westin Research Center's US state privacy legislation tracker to keep current.

Sector-specific privacy regimes (HIPAA for healthcare, GLBA for financial services, FERPA for education, COPPA for children) overlay these horizontal laws and apply directly to AI used in those sectors.

International privacy laws relevant to AI include Brazil’s LGPD, Japan’s APPI, Singapore’s PDPA, India’s DPDPA (effective 2024-2025 in phases), and Korea’s PIPA. Many include provisions for automated decision-making analogous to GDPR Article 22.

Data quality and lineage

AI outcomes are only as good as the data they are trained on. Mature data governance for AI requires documented provenance and lineage for each training dataset, dataset documentation along the lines of datasheets,[3] and quality checks on completeness, representativeness, and label accuracy before data enters a pipeline.

For models subject to California AB 2013 or EU AI Act Article 53, a published training-data summary is now mandatory — see Copyright & IP.
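
To make lineage concrete, one record per dataset version can capture source, licence, and the ordered transformations applied. A minimal sketch, loosely modelled on the datasheet idea; all field names here are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Minimal lineage record for one dataset version."""
    name: str
    version: str
    source: str                   # where the data came from
    collected: date               # supports staleness checks
    licence: str
    contains_personal_data: bool
    transformations: list[str] = field(default_factory=list)  # ordered lineage

record = DatasetRecord(
    name="support-tickets",
    version="2026.04",
    source="internal CRM export",
    collected=date(2026, 4, 1),
    licence="internal-use-only",
    contains_personal_data=True,
    transformations=["pii-scrub", "dedupe", "train/val split 90/10"],
)
```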

Data minimisation and access control

AI systems should use the minimum data necessary for their purpose. Personal data that is not needed should not be collected; data that is needed should be pseudonymised, encrypted, and access-controlled. Common patterns include role-based access to training corpora, pseudonymisation of direct identifiers at ingestion, and encryption at rest and in transit.
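
As one sketch of pseudonymisation at ingestion, a keyed hash (HMAC) replaces direct identifiers with stable tokens, so joins across tables still work while the raw identifier is withheld. This is illustrative, not a complete scheme; the key must live in a secrets manager under its own access controls:

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash. Unlike a plain
    hash, an HMAC cannot be reversed by brute-forcing common
    identifiers without the key."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

key = b"load-from-a-secrets-manager-not-source-code"
token = pseudonymise("jane.doe@example.com", key)
# Deterministic: the same input always maps to the same token
assert token == pseudonymise("jane.doe@example.com", key)
```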

Privacy-enhancing technologies (PETs)

PETs are no longer experimental. By 2026 several, including differential privacy, federated learning, secure multi-party computation, and synthetic data generation, are production-grade.

PETs increasingly appear in regulatory expectations — the EU AI Act Article 10 references them implicitly, and US sector regulators cite them as appropriate safeguards.
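
As a concrete example of one PET, the sketch below releases a count under epsilon-differential privacy by adding Laplace noise. It assumes a single counting query (sensitivity 1); a production system would use an audited DP library and a cryptographically secure noise source rather than `random`:

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise scaled to sensitivity 1.

    Adding or removing one person changes a count by at most 1, so
    Laplace noise with scale 1/epsilon gives epsilon-differential
    privacy for this single release."""
    scale = 1.0 / epsilon
    u = random.random() - 0.5            # uniform on [-0.5, 0.5)
    # Inverse-transform sample from Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

print(dp_count(1234, epsilon=0.5))       # noisy but close, e.g. 1236.7
```

Smaller epsilon means more noise and stronger privacy; budgets across repeated queries must be tracked and composed.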

Retention and purpose limitation

Data governance policies should define retention schedules and purpose-limitation controls for training and inference data. GDPR requires that data not be kept longer than necessary; CCPA permits consumer-initiated deletion. Practically, this means tagging records with a purpose and a collection date, enforcing per-purpose retention periods automatically, and extending deletion workflows to derived artefacts such as logs and embeddings.
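
A minimal sketch of purpose-tagged retention, assuming a hypothetical `RETENTION` schedule keyed by purpose; real periods come from legal review:

```python
from datetime import date, timedelta

# Hypothetical schedule: purpose -> maximum retention period
RETENTION = {
    "model-training": timedelta(days=730),
    "inference-logs": timedelta(days=90),
}

def expired(purpose: str, collected: date, today: date | None = None) -> bool:
    """True when a record has outlived the period for its purpose,
    signalling the deletion job to pick it up."""
    today = today or date.today()
    return today - collected > RETENTION[purpose]

assert expired("inference-logs", date(2026, 1, 1), today=date(2026, 5, 11))
```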

Model security

AI models themselves are attack targets. Threat categories include training-data poisoning, evasion attacks at inference time, model extraction through repeated querying, and direct model manipulation.

The NIST AI 600-1 GenAI Profile (updated March 2025) added explicit threat categories for poisoning, evasion, extraction, and model manipulation — this update is the most current US reference for GenAI threat modelling.[4]

Defensive measures include adversarial robustness testing before release, input validation and rate limiting on inference endpoints, strict access control over model artefacts, and monitoring for anomalous query patterns.
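
As a sketch of one such measure, a per-client sliding-window rate limiter throttles the high-volume querying that extraction attacks depend on; the class and thresholds are illustrative:

```python
import time
from collections import defaultdict, deque

class QueryRateLimiter:
    """Per-client sliding-window cap on inference calls."""

    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls: dict[str, deque] = defaultdict(deque)

    def allow(self, client_id: str, now: float | None = None) -> bool:
        now = now if now is not None else time.monotonic()
        q = self.calls[client_id]
        while q and now - q[0] > self.window_s:
            q.popleft()              # drop calls outside the window
        if len(q) >= self.max_calls:
            return False             # throttle; optionally raise an alert
        q.append(now)
        return True

rl = QueryRateLimiter(max_calls=2, window_s=60)
assert rl.allow("c1", now=0) and rl.allow("c1", now=1) and not rl.allow("c1", now=2)
```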

Data security

AI data pipelines must be secured with general infosec hygiene plus AI-specific considerations: encrypting training data at rest and in transit, treating model weights as sensitive artefacts, securing vector databases and feature stores, and verifying the integrity of datasets as they move between pipeline stages.
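
One AI-specific hygiene step is verifying dataset integrity between pipeline stages. A minimal sketch, assuming digests are recorded at intake:

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, expected_digest: str) -> None:
    """Fail the pipeline run if a dataset artefact was modified."""
    actual = sha256_file(path)
    if actual != expected_digest:
        raise RuntimeError(f"integrity check failed for {path}: {actual}")
```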

Third-party and supply chain risks

Most AI systems incorporate third-party components: pre-trained foundation models, open-source libraries, cloud AI services, vector databases, datasets. Governance must extend to these through vendor due diligence, contractual allocation of AI-specific risk, and an inventory that records each component's supplier, version, licence, and, where possible, a pinned artefact digest.
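
A sketch of such an inventory, with hypothetical component names; the point is that model weights are pinned to a known digest at intake so substitutions are detectable:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIComponent:
    """One entry in an AI bill-of-materials style inventory."""
    name: str                     # e.g. a foundation model or library
    kind: str                     # "model" | "library" | "service" | "dataset"
    supplier: str
    version: str
    licence: str
    artefact_sha256: str | None   # pinned digest for weights/files

inventory = [
    AIComponent("example-7b", "model", "ExampleLab", "1.2", "apache-2.0",
                artefact_sha256="<digest recorded at intake>"),
    AIComponent("vector-db", "service", "VendorCo", "2026.1", "commercial", None),
]

unpinned = [c.name for c in inventory if c.kind == "model" and not c.artefact_sha256]
assert not unpinned, f"models without pinned digests: {unpinned}"
```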

EU AI Act Article 25 explicitly addresses provider obligations through the value chain for high-risk systems.

Incident response

AI incident response should be a dedicated discipline within general incident response, with playbooks for model failures that cause harm, breaches involving training or inference data, prompt-injection or jailbreak exploitation, and undetected model drift.

Mandatory incident reporting now applies in multiple regimes — EU AI Act Article 55 (serious incidents for systemic-risk GPAI), California SB 53 (critical incidents), Korea AI Basic Act, sector regulators (FDA for medical devices, OCC for banks). Map your reporting obligations early; many regimes have short windows (e.g., 15 days for serious incidents under the EU AI Act).
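
Because the windows are short, deadline computation belongs in the playbook itself. A minimal sketch, hard-coding only the 15-day EU AI Act window mentioned above; add other regimes only after confirming their windows with counsel:

```python
from datetime import date, timedelta

# Windows keyed by regime; extend after legal confirmation
REPORTING_WINDOWS = {
    "eu-ai-act-serious-incident": timedelta(days=15),
}

def reporting_deadline(regime: str, detected: date) -> date:
    """Compute the filing deadline once an incident is classified."""
    return detected + REPORTING_WINDOWS[regime]

assert reporting_deadline("eu-ai-act-serious-incident",
                          date(2026, 5, 1)) == date(2026, 5, 16)
```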

Audits and red-teaming

Independent audits are increasingly expected: conformity assessments for high-risk systems under the EU AI Act, adversarial testing for systemic-risk GPAI models, and bias audits under laws such as New York City Local Law 144.
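
One lightweight supporting practice is keeping each red-team finding as a reproducible regression test and re-running the suite on every model build. The sketch below is illustrative; `model` is assumed to be a callable from prompt to output, and the pass/fail check shown is deliberately crude:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RedTeamFinding:
    """A reproducible red-team case kept as a regression test."""
    case_id: str
    prompt: str
    failure_mode: str            # e.g. "pii-leak", "policy-bypass"

def rerun(findings: list[RedTeamFinding],
          model: Callable[[str], str]) -> list[str]:
    """Re-run previously found failures against the current build."""
    regressions = []
    for f in findings:
        output = model(f.prompt)
        # Illustrative check only; real checks are per failure mode
        if f.failure_mode == "pii-leak" and "@" in output:
            regressions.append(f.case_id)
    return regressions
```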

Coordination across functions

A robust AI governance programme coordinates privacy, security, data governance, AI/ML engineering, legal, compliance, and product functions. Many organisations establish an AI Governance Council with representation from each function, supported by an AI Governance Office that owns documentation, audits, and regulatory engagement. ISO/IEC 42001 specifies this coordination at the management-system level.


  1. GDPR Info. Art. 22 GDPR — Automated individual decision-making. ↩︎

  2. Cloudflare. What is the CCPA? ↩︎

  3. Gebru, T., et al. (2021). Datasheets for Datasets. Communications of the ACM, 64(12). ↩︎

  4. NIST. AI 600-1: Generative AI Profile. ↩︎