Sectoral Regulation

Last reviewed: 2026-05-11

Beyond horizontal AI law, sector regulators have acted on AI throughout 2025-2026, particularly in healthcare, financial services, employment, and consumer finance. This chapter surveys the most consequential US sectoral developments. See International for non-US sector regulation and EU AI Act for the European high-risk system regime (which itself functions as a sectoral overlay in healthcare, employment, justice, and other domains).

Healthcare — FDA AI/ML medical devices

The Food and Drug Administration’s principal AI/ML regulatory instrument is the Predetermined Change Control Plan (PCCP) Final Guidance, issued in December 2024.[1] The Final Guidance operationalises the FDA’s “AI/ML Software as a Medical Device Action Plan” by allowing sponsors to plan, in advance, the modifications an AI/ML-enabled device may undergo without triggering a new premarket submission. A PCCP must include:

  1. a Description of Modifications: the specific, verifiable changes the sponsor intends to make to the device software;
  2. a Modification Protocol: the methods that will be used to develop, validate, and implement each planned modification; and
  3. an Impact Assessment: the benefits and risks of the planned modifications and how residual risks will be mitigated.

The Final Guidance is significant because it gives AI/ML medical-device sponsors a structured pathway for post-market model updates, addressing one of the longest-standing tensions in AI medical device regulation. Good Machine Learning Practice principles, jointly published by FDA, Health Canada, and the UK MHRA, continue to apply.

For non-device clinical AI (e.g., clinical decision-support tools that fall outside FDA’s device definitions), HHS Office for Civil Rights guidance and the Section 1557 nondiscrimination rules at 45 CFR Part 92 apply.

Financial services

Three federal regulators — the OCC, Federal Reserve, and FDIC — jointly oversee bank AI use. Foundational guidance:

  1. The interagency Supervisory Guidance on Model Risk Management (Federal Reserve SR 11-7 / OCC Bulletin 2011-12), which treats AI/ML models as models subject to validation, governance, and ongoing monitoring.
  2. The 2021 interagency Request for Information on financial institutions’ use of AI, which signalled supervisory expectations around explainability, data quality, and fair lending.

For consumer-facing AI in financial services, the CFPB published an AI Compliance Plan on 26 September 2025 detailing its implementation of OMB M-25-21 and its supervisory approach to bank and non-bank use of AI in lending, servicing, and collections.[2]

Fair lending law — ECOA and its implementing Regulation B, together with the Fair Housing Act — continues to apply to AI used in credit decisions. The Fair Credit Reporting Act (FCRA) applies to AI used in consumer-report-based decisioning.

Employment

Federal employment law (Title VII, ADA, ADEA, GINA) applies to AI used in hiring, promotion, and termination. Although the EEOC and OFCCP withdrew their AI-specific Technical Assistance documents on 27 January 2025, the underlying statutes are unchanged.

State and local laws fill the federal guidance gap — see US State Laws for NYC Local Law 144, Illinois HB 3773, and Colorado SB 24-205’s employment coverage.

Practical compliance for AI in employment:

  1. Conduct and document adverse-impact analyses (e.g., selection-rate comparisons across protected groups) before deployment and periodically thereafter.
  2. Validate that AI-assisted selection criteria are job-related and consistent with business necessity.
  3. Provide reasonable-accommodation alternatives under the ADA for candidates whom an automated tool may disadvantage.
  4. Retain vendor documentation and audit rights; using a vendor’s tool does not shift Title VII or ADA liability away from the employer.

Consumer protection — FTC

The Federal Trade Commission continues to enforce Section 5 of the FTC Act against unfair or deceptive AI-related practices. Notable FTC themes during 2024-2026:

  1. Deceptive claims about AI capabilities (“AI washing”), targeted in the September 2024 Operation AI Comply enforcement sweep.
  2. Unsubstantiated performance claims for AI products and services.
  3. Unfair deployment of biased or inadequately tested automated decision tools that affect consumers.

The FTC’s authority is broad and exercised post hoc; structured compliance with the NIST AI RMF and with AI-specific Section 5 expectations (truthful claims, an evidence base for performance claims, fairness review where decisions affect consumers) is the most effective preventive posture.

Telecommunications

The December 2025 preemption executive order directs the FCC to develop a federal AI disclosure standard (see US Federal). The FCC has also enforced existing rules against AI-generated robocalls, including the February 2024 declaratory ruling that AI-generated voices in calls constitute “artificial or prerecorded voices” under the Telephone Consumer Protection Act.

Critical infrastructure

NIST’s AI RMF Profile for Trustworthy AI in Critical Infrastructure (concept note 7 April 2026) is the developing reference for AI used in critical infrastructure sectors covered by Presidential Policy Directive 21. Sector-specific cybersecurity rules (NERC CIP for electric grid, TSA security directives for pipelines) increasingly include AI-relevant provisions.

Other sector overlays

Additional overlays include state insurance regulation (the NAIC Model Bulletin on insurers’ use of AI systems, adopted by a growing number of states), NHTSA oversight of automated driving systems, and FERPA constraints on AI in education.

How to navigate sectoral overlap

Most organisations deploying AI face multiple overlapping regimes: a horizontal regime (EU AI Act or US state law), a sector overlay (FDA, OCC, EEOC), and broad consumer protection (FTC, AG). Best practice is to:

  1. Map each AI system to all applicable regimes during design.
  2. Document compliance evidence in a single technical file usable across regimes (an ISO/IEC 42001-aligned AI management system enables this).
  3. Track regulatory developments per sector via a designated owner.
  4. Engage early with sector regulators when an AI system substantively changes a regulated process — particularly in healthcare and financial services where conformity assessments are slow.
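Step 1 of this checklist, mapping each AI system to all applicable regimes at design time, can be operationalised as a simple trigger table. The sketch below is illustrative only: the regime names, trigger conditions, and system attributes are placeholder assumptions, not legal determinations, and real mappings require counsel review per system and jurisdiction.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Minimal illustrative record of an AI system's regulatory attributes."""
    name: str
    domain: str               # e.g. "healthcare", "employment", "credit"
    jurisdiction: str         # e.g. "US", "EU"
    is_medical_device: bool = False
    consumer_facing: bool = False

# Hypothetical regime catalogue: each entry pairs a regime label with a
# predicate over system attributes. Real triggers are far more nuanced.
REGIME_TRIGGERS = {
    "EU AI Act (high-risk)": lambda s: s.jurisdiction == "EU"
        and s.domain in {"healthcare", "employment", "credit"},
    "FDA PCCP pathway": lambda s: s.domain == "healthcare" and s.is_medical_device,
    "ECOA / Regulation B": lambda s: s.domain == "credit",
    "Title VII / ADA": lambda s: s.domain == "employment",
    "FTC Act Section 5": lambda s: s.consumer_facing,
}

def applicable_regimes(system: AISystem) -> list[str]:
    """Return every regime whose trigger fires for this system."""
    return [name for name, trigger in REGIME_TRIGGERS.items() if trigger(system)]

triage = AISystem("triage-scorer", domain="healthcare", jurisdiction="US",
                  is_medical_device=True, consumer_facing=True)
print(applicable_regimes(triage))
# → ['FDA PCCP pathway', 'FTC Act Section 5']
```

Keeping the mapping in one declarative table makes step 3 of the checklist easier as well: when a regulator issues new guidance, the designated owner updates a single trigger rather than auditing every system record by hand.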

  1. FDA. Predetermined Change Control Plan for AI-Enabled Device Software Functions (Final Guidance, December 2024). ↩︎

  2. Consumer Financial Protection Bureau. (2025, September 26). AI Compliance Plan for OMB M-25-21. ↩︎