Sectoral Regulation
Last reviewed: 2026-05-11

Beyond horizontal AI law, sector regulators have moved on AI throughout 2025-2026 — particularly in healthcare, financial services, employment, and consumer finance. This chapter surveys the most consequential US sectoral developments. See International for non-US sector regulation and EU AI Act for the European high-risk system regime (which itself functions as a sectoral overlay in healthcare, employment, justice, and other domains).
Healthcare — FDA AI/ML medical devices
The Food and Drug Administration’s principal AI/ML regulatory instrument is the Predetermined Change Control Plan (PCCP) Final Guidance, issued in December 2024.[1] The Final Guidance operationalises the FDA’s “AI/ML Software as a Medical Device Action Plan” by allowing sponsors to plan, in advance, the modifications an AI/ML-enabled device may undergo without triggering a new premarket submission. A PCCP must include:
- Description of Modifications — what changes the sponsor plans to make.
- Modification Protocol — how those changes will be validated.
- Impact Assessment — expected impact on safety, effectiveness, and overall device performance.
The Final Guidance is significant because it gives AI/ML medical-device sponsors a structured pathway for post-market model updates, addressing one of the longest-standing tensions in AI medical device regulation. Good Machine Learning Practice principles, jointly published by FDA, Health Canada, and the UK MHRA, continue to apply.
For non-device clinical AI (e.g., clinical decision-support tools that fall outside FDA’s device definitions), HHS Office for Civil Rights guidance and the 45 CFR Part 92 nondiscrimination rules (implementing Section 1557 of the Affordable Care Act) apply.
Financial services
Three federal regulators — the OCC, Federal Reserve, and FDIC — jointly oversee bank AI use. Foundational guidance:
- SR 11-7 / OCC Bulletin 2011-12 — the Supervisory Guidance on Model Risk Management (MRM), issued in parallel by the Federal Reserve (SR 11-7) and the OCC (Bulletin 2011-12) in 2011, remains operative and is the foundational supervisory expectation for AI/ML models used in regulated financial decisions.
- OCC Bulletin 2025-26 — issued during 2025, provides clarifications for community banks on MRM expectations as AI tools become more accessible to smaller institutions.
- OCC Bulletin 2026-13 — revised MRM guidance reflecting evolving practice and AI-specific considerations.
- A joint OCC / Federal Reserve / FDIC RFI on AI/MRM is in the supervisory pipeline as of mid-2026.
For consumer-facing AI in financial services, the CFPB published an AI Compliance Plan on 26 September 2025 detailing its implementation of OMB M-25-21 and its supervisory approach to bank and non-bank use of AI in lending, servicing, and collections.[2]
Fair lending law — ECOA (implemented by Regulation B) and the Fair Housing Act — continues to apply to AI used in credit decisions. The Fair Credit Reporting Act (FCRA) applies to AI used in consumer-report-based decisioning.
Employment
Federal employment law (Title VII, ADA, ADEA, GINA) applies to AI used in hiring, promotion, and termination. Although the EEOC and OFCCP withdrew their AI-specific Technical Assistance documents on 27 January 2025, the underlying statutes are unchanged.
State and local laws fill the federal guidance gap — see US State Laws for NYC Local Law 144, Illinois HB 3773, and Colorado SB 24-205’s employment coverage.
Practical compliance for AI in employment:
- Conduct bias audits under NYC LL 144 if hiring in New York City; consider expanding to all hiring jurisdictions for defensive purposes.
- Document validation studies for selection procedures under the Uniform Guidelines on Employee Selection Procedures (UGESP).
- Provide accommodation pathways under the ADA for applicants who cannot use AI-mediated screening tools.
- Disclose AI use to candidates where required by state law.
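Both the NYC LL 144 bias audit and UGESP validation turn on comparing selection rates across groups; the UGESP four-fifths (80%) rule is the classic screening metric. A minimal sketch of that arithmetic, using hypothetical selection counts (the rule is a screen for further review, not a legal determination):

```python
# Sketch: UGESP four-fifths (80%) rule screen for an AI screening tool.
# The group names and selection counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants the tool advanced."""
    return selected / applicants

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 flag potential adverse impact for further review."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(48, 120),  # 0.40
    "group_b": selection_rate(27, 90),   # 0.30
}
ratios = impact_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]  # ["group_b"]: 0.30/0.40 = 0.75
```

NYC LL 144's published rules use essentially this impact-ratio construction (each category's rate relative to the most-selected category), which is why one audit dataset can often serve both purposes.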
Consumer protection — FTC
The Federal Trade Commission continues to enforce Section 5 of the FTC Act against unfair or deceptive AI-related practices. Notable FTC themes during 2024-2026:
- AI claims that are false, misleading, or unsubstantiated.
- Algorithmic discrimination in violation of the FTC Act or other consumer protection statutes.
- Privacy violations via AI-mediated data collection or model training.
- TAKE IT DOWN Act enforcement against platforms failing to remove non-consensual intimate imagery within 48 hours (see US Federal).
The FTC’s authority is broad and post-hoc; structured compliance with NIST AI RMF and AI-specific Section 5 expectations (truthful claims, evidence base for performance, fairness review where decisions affect consumers) is the most effective preventive posture.
Telecommunications
The December 2025 preemption executive order directs the FCC to develop a federal AI disclosure standard (see US Federal). The FCC has also enforced existing rules against AI-generated robocalls, including the February 2024 declaratory ruling that AI-generated voices in calls constitute “artificial or prerecorded voices” under the Telephone Consumer Protection Act.
Critical infrastructure
NIST’s AI RMF Profile for Trustworthy AI in Critical Infrastructure (concept note 7 April 2026) is the developing reference for AI used in critical infrastructure sectors covered by Presidential Policy Directive 21. Sector-specific cybersecurity rules (NERC CIP for electric grid, TSA security directives for pipelines) increasingly include AI-relevant provisions.
Other sector overlays
- Education — Department of Education guidance (2023, updated 2024) on AI in K-12; FERPA continues to apply.
- Transportation — NHTSA Standing General Order on AI-equipped vehicles; FMCSA guidance on AI in commercial trucking.
- Defence — DoD Directive 3000.09 on Autonomy in Weapon Systems; CDAO governance for DoD AI use.
- Insurance — NAIC Model Bulletin on the Use of AI by Insurers; state-by-state adoption.
How to navigate sectoral overlap
Most organisations deploying AI face multiple overlapping regimes: a horizontal regime (EU AI Act or US state law), a sector overlay (FDA, OCC, EEOC), and broad consumer protection (FTC, state attorneys general). Best practice is to:
- Map each AI system to all applicable regimes during design.
- Document compliance evidence in a single technical file usable across regimes (an ISO/IEC 42001-aligned management system enables this).
- Track regulatory developments per sector via a designated owner.
- Engage early with sector regulators when an AI system substantively changes a regulated process — particularly in healthcare and financial services where conformity assessments are slow.
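The first two steps above amount to maintaining a per-system registry of applicable regimes backed by shared evidence. A minimal sketch of such a registry (the regime names, sector triggers, and field layout are illustrative assumptions, not a compliance taxonomy):

```python
# Sketch: mapping each AI system to every applicable regime at design time,
# so one technical file can serve multiple regulators. All rules illustrative.
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    sector: str
    regimes: list[str] = field(default_factory=list)
    evidence: dict[str, str] = field(default_factory=dict)  # requirement -> document path

    def map_regimes(self, rules: dict[str, list[str]]) -> None:
        """Attach every regime whose trigger list names this system's sector."""
        self.regimes = [r for r, sectors in rules.items() if self.sector in sectors]

# Hypothetical trigger table: regime -> sectors that bring it into scope.
RULES = {
    "EU AI Act (high-risk)": ["employment", "credit", "medical"],
    "FDA PCCP": ["medical"],
    "SR 11-7 / OCC 2011-12": ["credit"],
    "FTC Act s.5": ["employment", "credit", "medical", "consumer"],
}

scorer = AISystem(name="credit-scoring-v2", sector="credit")
scorer.map_regimes(RULES)
# scorer.regimes now lists the EU AI Act, SR 11-7 / OCC 2011-12, and FTC Act s.5
```

Keeping the evidence field keyed by requirement rather than by regulator is what lets a single validation study or audit report be cited across several regimes at once.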
FDA. (2024, December). Predetermined Change Control Plan for AI-Enabled Device Software Functions (Final Guidance). ↩︎
Consumer Financial Protection Bureau. (2025, September 26). AI Compliance Plan for OMB M-25-21. ↩︎