Audience-Specific Guidance

Last reviewed: 2026-05-11

Different roles have different parts to play in AI governance. This chapter offers tailored guidance for four primary audiences: AI Practitioners, Compliance Officers, Executives & Board Members, and Policymakers & Regulators.

AI Practitioners

Data scientists, ML engineers, AI developers

Focus

Practitioners are at the front lines of building and deploying AI. The job is to operationalise governance and safety inside the development process: build models that not only perform well on accuracy metrics but also meet criteria for fairness, explainability, robustness, and compliance.

Specific practices

  • Translate principles into code. Ethics guidelines mean nothing if they don’t show up as concrete model-validation steps, bias checks, or model-card sections.
  • Handle data lawfully. Anonymise where required, obtain proper consent, respect opt-outs, work with privacy reviewers early.
  • Test exhaustively. Train/test splits are not enough — add stress tests, adversarial tests, fairness checks, and (for GenAI) red-teaming.
  • Maintain a model inventory. Document each model’s purpose, training data, version, evaluations, deployment context. This is now table stakes for EU AI Act and ISO/IEC 42001 compliance.
  • Monitor in production. Set up performance dashboards and drift alerts. Plan retraining cadence.
  • Partner with compliance early. When an AI tool is going to fall under EU AI Act high-risk, NYC LL 144, or Colorado SB 24-205, finding out before deployment is much cheaper than after.
  • Cultivate a safety culture. Ethical AI is everyone’s job, like security. Speak up when something looks wrong; build feedback into development culture, not just review gates.
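The inventory and monitoring practices above can be sketched as a minimal record with an automated review flag. This is an illustrative sketch, not a standard schema: the field names, the `demographic_parity_ratio` metric key, and the 0.8 fairness floor are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    # Minimal model-inventory record mirroring the practices above:
    # purpose, training data, version, evaluations, deployment context.
    name: str
    purpose: str
    version: str
    training_data: str                # provenance summary, not the data itself
    evaluations: dict = field(default_factory=dict)   # metric name -> score
    deployment_context: str = ""
    risk_tier: str = "unclassified"   # e.g. an EU AI Act tier, once assessed

def needs_review(entry: InventoryEntry, fairness_floor: float = 0.8) -> bool:
    """Flag entries with no evaluations, or fairness below the floor."""
    if not entry.evaluations:
        return True
    dpr = entry.evaluations.get("demographic_parity_ratio")
    return dpr is not None and dpr < fairness_floor

entry = InventoryEntry(
    name="credit-scorer",
    purpose="consumer credit pre-screening",
    version="2.3.1",
    training_data="2019-2024 bureau data, anonymised",
    evaluations={"auc": 0.81, "demographic_parity_ratio": 0.72},
    deployment_context="EU, likely high-risk under the AI Act",
    risk_tier="high",
)
print(needs_review(entry))  # True: the 0.72 parity ratio falls below the 0.8 floor
```

Even a sketch this small makes the compliance hand-off concrete: a review queue can be driven directly from the inventory rather than from ad-hoc spreadsheets.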

What changed for practitioners in 2025-2026

  • Frontier development frameworks (RSPs, EU GPAI Code, SB 53) now require pre-deployment evaluations and safety cases for the largest models — see Frontier Models.

  • Training-data transparency (California AB 2013, EU AI Act Article 53) requires published training-data summaries.

  • Agentic systems have a much heavier governance burden than traditional ML — permission scoping, audit trails, reversibility analysis.

Compliance Officers

Legal, regulatory, ethics, and risk personnel

Focus

Compliance officers ensure AI systems and processes adhere to law, regulation, and policy. They translate regulatory requirements into controls, guide AI projects, and verify controls are working.

Specific practices

  • Track the regulatory landscape. EU AI Act (and Omnibus rebase), US federal EOs, state laws (CO, TX, CA, UT, IL, NYC), Korea AI Basic Act, Japan AI Promotion Act, China labelling rules, sector regulators. See Legal & Regulatory.
  • Develop internal policies. AI governance policy, model-development standards, third-party AI usage policy, AI procurement standards.
  • Run training and awareness. AI ethics training for developers, product managers, executives. Workshops on EU AI Act high-risk classification and Colorado AI Act consequential-decision tests.
  • Review and audit. DPIA / FRIA (fundamental rights impact assessment), bias audits, model risk assessments, contract review for third-party AI.
  • Incident handling. Coordinate response to compliance issues; regulator notifications (EU AI Office, Cal OES, state AGs); cross-functional incident triage.
  • Manage the standards stack. ISO/IEC 42001 (management system), 23894 (risk), 42005 (impact assessment) are no longer optional for large organisations.

Key concerns

Liability, regulatory sanctions, reputational risk. The 2025-2026 enforcement landscape is harsher than 2024's: EU AI Act penalties (up to EUR 35M / 7% turnover), Texas TRAIGA penalties ($10K-$200K), TAKE IT DOWN Act FTC enforcement, FTC Section 5, state AG actions.

What changed in 2025-2026

  • Federal preemption uncertainty in the US complicates multi-state compliance — track the December 2025 EO and litigation outcomes.

  • Audit-and-certification market for ISO/IEC 42001 is now real with 42006 published; certification is increasingly procurement-relevant.

  • Frontier-model rules add a layer for large developers — see Frontier Models.

Executives & Board Members

C-suite, board directors, AI sponsors

Focus

Executives are responsible for strategic oversight and organisational commitment to AI governance. The job is to balance innovation with risk and to maintain stakeholder trust.

Specific practices

  • Set strategy. Decide which AI use cases the organisation will pursue, which are out of bounds, and the corresponding risk appetite.
  • Establish governance structures. AI Governance Council, model risk management function, AI ethics committee with real authority and budget.
  • Set the tone. Communicate that responsible AI is a core value; reward responsible behaviour; back compliance teams when they say “not yet.”
  • Own accountability. Boards now routinely ask for AI risk reports. Be ready with metrics: number of AI systems in production, classification by risk tier, recent incidents, audit findings, compliance posture by jurisdiction.
  • Prepare for regulation. Where regulation is in force, fund compliance programmes proportionate to risk. Where regulation is pending (UK AI Bill, Brazil PL 2338), monitor and plan.
  • Pursue strategic certifications. ISO/IEC 42001 certification, GPAI Code of Practice signature (for frontier developers), CAISI agreements where applicable.

Considerations for 2025-2026

  • ESG and AI — trustworthy AI is increasingly part of ESG reporting and investor expectations.
  • AI workforce — reskilling, AI literacy obligations under EU AI Act Article 4, internal AI usage policies.
  • Geopolitical risk — export controls, data-localisation rules, jurisdictional fragmentation. The US December 2025 preemption EO and the ongoing state-federal tension affect operational planning.

What changed in 2025-2026

  • AI-related liability has moved from theoretical to concrete — Bartz v. Anthropic settlement, TAKE IT DOWN Act platform liability, EU AI Act penalties.

  • Frontier developers face additional executive-level expectations: published frontier framework, safety cases, whistleblower protections under California SB 53.

Policymakers & Regulators

Government officials, regulators, standards body participants

Focus

Policymakers create and enforce the rules. The job is to address public risks while preserving innovation and accommodating sectoral diversity.

Specific focus areas

  • Develop and refine AI regulations. EU AI Act implementation and Omnibus refinement, state legislative work, sector-specific rules.
  • Harmonise where possible. OECD, G7 Hiroshima Process, ISO/IEC, IEEE; bilateral cooperation agreements (e.g., the International Network of AI Safety Institutes).
  • Build enforcement capacity. Stand up AI offices and inspectorates; train staff; develop technical evaluation capability.
  • Address societal impacts. Workforce displacement, AI literacy, public-sector AI use, election integrity, deepfake harms.

Concerns

Prevent harm; preserve fundamental rights; ensure national security; maintain transparency and accountability; balance with innovation; avoid over-regulation.

What changed in 2025-2026

  • Multilateral fragmentation. The Paris AI Action Summit produced a 58-nation statement that the US and UK declined to sign — coordination is harder than it was in 2023-2024.

  • Capacity building. The International Network of AI Safety Institutes is the most concrete operational coordination layer; bilateral CAISI agreements show what national capacity can deliver.

  • Preemption tensions. The US December 2025 preemption EO has created uncertainty for state policymakers; the EU Omnibus shows how complex even single-jurisdiction harmonisation is in practice.

Cross-audience: the shared language

These four audiences must work together, and the common language between them is increasingly documentary. Investing in shared vocabulary and shared documentation formats (model cards, datasheets, impact assessments, audit reports) materially reduces the friction between these roles.