US Federal
Last reviewed: 2026-05-11

US federal AI policy underwent a complete reorientation in 2025. Executive Order 14110 (Biden, October 2023) was rescinded on 20 January 2025; Executive Order 14179 (“Removing Barriers to American Leadership in Artificial Intelligence”) replaced it three days later. In July 2025, the White House published America’s AI Action Plan, accompanied by three further executive orders covering exports, datacenter permitting, and federal procurement. In December 2025, the administration issued an executive order targeting state-law obstruction of federal AI policy. NIST’s AI Safety Institute was renamed the Center for AI Standards and Innovation (CAISI) in June 2025, with a mission shift toward standards, national security, and competitiveness.
What did not change: the NIST AI Risk Management Framework remains the foundational US voluntary framework, and underlying anti-discrimination law (Title VII, ADA, ADEA, FCRA, ECOA) continues to apply to AI systems — even after the EEOC and OFCCP withdrew their AI-specific guidance in January 2025.
Executive Order 14179 — Removing Barriers to American Leadership in AI (January 2025)
EO 14179, signed 23 January 2025, is the centrepiece of the current administration’s AI policy.[1] Its core elements:
- Rescission of EO 14110 and related agency guidance issued under it.
- A directive that “AI policy promotes the United States’ ability to lead in AI” and removes regulatory barriers to AI development.
- Direction to OMB to revise federal AI-use and procurement guidance (delivered as M-25-21 and M-25-22 in April 2025).
- Direction to develop an AI Action Plan (delivered in July 2025).
EO 14179 is silent on many topics that EO 14110 had addressed in detail (e.g., the Defense Production Act reporting obligations for frontier models). Where prior agency action was taken under EO 14110 authority, it has generally been wound down or repurposed.
OMB M-25-21 and M-25-22 (April 2025)
On 3 April 2025, OMB issued two memoranda replacing the Biden-era M-24-10 (federal AI use) and M-24-18 (federal AI procurement):
- M-25-21 — Federal Use of AI. Sets governance, risk management, and transparency requirements for federal agencies using AI, with particular focus on “high-impact” AI (a recast of “rights-impacting” and “safety-impacting” categories under M-24-10).
- M-25-22 — Federal AI Procurement. Sets procurement standards and supplier requirements for federal AI purchases.
Covered agencies were required to publish an AI Compliance Plan within 180 days; the CFPB published one of the most detailed plans on 26 September 2025.[2]
America’s AI Action Plan (July 2025)
Published 23 July 2025, America’s AI Action Plan organises 90+ federal actions into three pillars: Innovation, Infrastructure, and Diplomacy.[3] Three accompanying executive orders implement specific Action Plan items:
- AI Technology Stack export EO — rationalises export controls for AI hardware and software.
- Datacenter permitting EO — accelerates federal permitting for AI infrastructure.
- “Unbiased AI Principles” procurement EO — defines federal procurement criteria around model “bias,” “ideology,” and disclosure.
The Action Plan also formalised the CAISI mission (see below) and committed the United States to specific milestones in the International Network of AI Safety Institutes.
December 2025 preemption executive order
On 11 December 2025, the President signed an executive order titled “Eliminating State Law Obstruction of National Artificial Intelligence Policy.”[4] Its main elements:
- Creation of an AI Litigation Task Force at the Department of Justice to challenge state laws considered to obstruct national AI policy.
- Direction to the Department of Commerce to evaluate state laws for consistency with federal AI policy.
- Authorisation to condition certain federal funding on state cooperation with national AI policy.
- Direction to the FCC to develop a federal AI disclosure standard.
- Carve-outs preserving state authority over (i) child safety, (ii) datacenter infrastructure decisions, and (iii) state government procurement.
Congress declined to include AI preemption in the FY2026 National Defense Authorization Act. The preemption EO is therefore the operative federal posture; legal challenges to specific state laws under preemption theories are anticipated through 2026.
Center for AI Standards and Innovation (CAISI)
In June 2025, Secretary of Commerce Lutnick renamed the NIST AI Safety Institute the Center for AI Standards and Innovation (CAISI).[5] The mission was reframed toward standards, national security, and competitiveness. CAISI has since signed frontier-model testing agreements with Google DeepMind, Microsoft, and xAI; Anthropic and OpenAI agreements pre-existed under the prior AISI brand.
CAISI continues to participate in the International Network of AI Safety Institutes alongside the UK AI Security Institute, Singapore, Japan, Korea, and others.
NIST AI Risk Management Framework
NIST AI RMF 1.0 (January 2023) remains the foundational US voluntary framework. It organises AI risk management into four functions:
- Govern. Establish organisational governance for AI risk — culture, accountability, policies.
- Map. Contextualise the AI system; identify what could go wrong and who is affected.
- Measure. Analyse, assess, and monitor risks (bias, robustness, drift, security).
- Manage. Mitigate and respond — controls, incident response, change management.
Figure: NIST AI RMF core functions — Govern (overarching, cross-cutting culture and policy), with Map, Measure, and Manage operating as a continuous iterative cycle.
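The four functions above can be treated as a simple evidence checklist in a governance workflow. The sketch below is illustrative only: the function names and one-line summaries come from the RMF, but the class, method names, and example artefacts are hypothetical, not an official NIST mapping.

```python
from dataclasses import dataclass, field

# The four NIST AI RMF 1.0 core functions, with one-line summaries
# paraphrased from the framework. The rest of this sketch is hypothetical.
RMF_FUNCTIONS = {
    "Govern": "Establish organisational governance for AI risk",
    "Map": "Contextualise the system; identify harms and affected parties",
    "Measure": "Analyse, assess, and monitor risks",
    "Manage": "Mitigate and respond to identified risks",
}

@dataclass
class RmfAssessment:
    """Tracks which RMF functions have documented evidence for one AI system."""
    system: str
    evidence: dict = field(default_factory=dict)  # function -> list of artefacts

    def record(self, function: str, artefact: str) -> None:
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown RMF function: {function}")
        self.evidence.setdefault(function, []).append(artefact)

    def gaps(self) -> list:
        """Functions with no documented evidence yet."""
        return [f for f in RMF_FUNCTIONS if f not in self.evidence]

assessment = RmfAssessment("resume-screening-model")
assessment.record("Govern", "AI use policy v2")
assessment.record("Map", "stakeholder impact analysis")
print(assessment.gaps())  # → ['Measure', 'Manage']
```

The point of the structure is the framework’s own: Govern is cross-cutting, while Map, Measure, and Manage iterate — a real assessment would be re-run as the system changes, not completed once.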
NIST profiles and updates
NIST has not published an AI RMF 2.0. Evolution is via profiles (use-case overlays) and crosswalks:
- NIST AI 600-1 — Generative AI Profile, originally issued July 2024, updated March 2025 to add threat categories for poisoning, evasion, extraction, and model manipulation.[6]
- NIST IR 8596 — Cybersecurity Framework Profile for AI — preliminary draft December 2025.
- AI RMF Profile for Trustworthy AI in Critical Infrastructure — concept note released 7 April 2026.
- SP 800-53 AI overlay — in development for federal cybersecurity controls applied to AI systems.
For organisations subject to federal contracts or working with critical infrastructure, the relevant profile is increasingly important; the GenAI Profile in particular is widely adopted as the de facto reference for generative-AI threat modelling.
TAKE IT DOWN Act (May 2025)
Signed 19 May 2025, the TAKE IT DOWN Act is the first federal statute focused specifically on AI-adjacent harms.[7] It criminalises the knowing publication of non-consensual intimate imagery, including deepfakes, and requires online platforms to remove such content within 48 hours of receiving a valid notice. Penalties include criminal liability for publication and FTC enforcement against non-compliant platforms.
The Act intersects with AI governance in two ways: it directly addresses one of the most prominent generative-AI harms, and it establishes the FTC as a federal enforcer of an AI-related obligation, complementing the FTC’s pre-existing Section 5 unfair-or-deceptive-practices authority.
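The 48-hour removal window is a hard deadline measured from receipt of a valid notice, which makes it a natural candidate for automated tracking in a platform’s trust-and-safety tooling. The sketch below is a minimal illustration: the 48-hour figure is from the Act, but the function names and notice handling are hypothetical, not a compliance implementation.

```python
from datetime import datetime, timedelta, timezone

# Statutory removal window under the TAKE IT DOWN Act; everything else
# in this sketch (names, structure) is a hypothetical illustration.
REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(notice_received: datetime) -> datetime:
    """Latest time by which the platform must remove noticed content."""
    return notice_received + REMOVAL_WINDOW

def is_overdue(notice_received: datetime, now: datetime) -> bool:
    """True once the statutory window has elapsed without removal."""
    return now > removal_deadline(notice_received)

received = datetime(2025, 6, 1, 9, 0, tzinfo=timezone.utc)
print(removal_deadline(received))  # → 2025-06-03 09:00:00+00:00
```

Using timezone-aware timestamps matters here: a deadline computed in naive local time can silently shift across DST changes, while UTC arithmetic does not.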
AI Diffusion Rule rescission (May 2025)
On 13 May 2025, the Bureau of Industry and Security (BIS) rescinded the AI Diffusion Rule (issued at the end of the Biden administration) and replaced it with narrower advanced-IC guidance and a public warning regarding Huawei Ascend chips.[8] The practical effect was to remove the broad export-licensing requirements the Diffusion Rule had introduced for AI hardware destined for many countries, leaving in place targeted controls focused on China and a handful of other jurisdictions.
Anti-discrimination law continues to apply
On 27 January 2025, the EEOC removed its 2023 AI hiring Technical Assistance documents, and the OFCCP removed its AI/EEO guidance. The underlying statutes — Title VII, the ADA, the ADEA — remain fully binding.[9] The removal of AI-specific guidance does not change the legal obligations on employers using AI in hiring, promotion, or termination decisions; it removes only the agencies’ stated interpretation of how those statutes apply to AI. Litigation under the underlying statutes continues, and several state laws (notably New York City Local Law 144 and Illinois HB 3773) impose specific AI-hiring obligations independent of federal guidance.
See US State Laws for state-level AI employment rules and Sectoral for FCRA, ECOA, and other federal regimes that continue to apply to AI used in regulated decisions.
The White House. (2025, January 23). Removing Barriers to American Leadership in Artificial Intelligence (EO 14179). ↩︎
Consumer Financial Protection Bureau. (2025, September 26). AI Compliance Plan for OMB M-25-21. ↩︎
The White House. (2025, July 23). America’s AI Action Plan. ↩︎
The White House. (2025, December 11). Eliminating State Law Obstruction of National Artificial Intelligence Policy. ↩︎
NIST. Center for AI Standards and Innovation (CAISI). See also FedScoop. (2025, June). Trump administration rebrands AI Safety Institute as CAISI. ↩︎
TAKE IT DOWN Act — see overview at Wikipedia. ↩︎
Wiley. BIS Rescinds AI Diffusion Rule. ↩︎
EEOC and OFCCP removed AI-specific employment guidance on 27 January 2025; underlying Title VII / ADA / ADEA obligations remain binding. ↩︎