The rule will presumably override any state laws regulating AI
- President plans One Rulebook for artificial intelligence, aiming to override state regulations
- Big Tech backs a unified federal standard, while state leaders in both parties warn of lost consumer protections
- Order expected to challenge state authority through preemption, lawsuits and potential funding restrictions
President Trump says he'll issue an executive order that will block state attempts to protect consumers from abuses by artificial intelligence (AI), responding to the pleas of the Big Tech companies that are in a race to dominate the fast-evolving technology.
The action would mark a major victory for tech giants that have urged the administration to preempt state laws, including those intended to protect children, that they view as fragmented and burdensome. It is also likely to spark sharp backlash from governors, attorneys general and lawmakers who say states must retain the ability to protect consumers.
"There must be only One Rulebook if we are going to continue to lead in AI," Trump wrote on Truth Social, adding that companies cannot be expected to secure 50 approvals every time they want to do something.
Details unclear, but preemption strategy expected
Though Trump did not provide specifics, Reuters reported last month that the White House is considering an order that would challenge state AI laws through federal preemption, court action and restrictions on federal funding.
The proposal represents an escalation of Trump's earlier push for Congress to insert language blocking state AI regulations into a major defense bill. Lawmakers from both parties rejected that idea, and the Senate voted 99-1 to preserve state authority over AI legislation.
Companies including OpenAI, Google, Meta and venture firm Andreessen Horowitz have lobbied heavily for federal rules that override state statutes. Industry leaders argue that complying with disparate regulations would slow innovation, burden developers and allow China to outpace the U.S. in AI leadership.
They say a unified national framework would provide consistent expectations and reduce legal uncertainty across jurisdictions.
States insist on guardrails to protect residents
State leaders, Republican and Democrat alike, say local governments must retain the ability to respond to AI risks affecting their citizens.
Florida Gov. Ron DeSantis last week proposed an "AI bill of rights" that would include privacy protections, parental controls and consumer safeguards. Other states have enacted laws banning nonconsensual sexual imagery, prohibiting unauthorized political deepfakes, restricting discriminatory AI practices, and regulating high-risk AI systems. California will soon require major developers to document how they plan to address catastrophic-risk scenarios.
North Carolina Attorney General Jeff Jackson, a Democrat, rebuked Trump's earlier attempt to block state oversight. "Congress can't fail to create real safeguards and then block the states from stepping up," he said.
A new federal-state clash over tech regulation
The anticipated order sets the stage for a sweeping legal and political battle over AI governance, with implications for privacy, innovation and consumer protection.
If the White House proceeds with the "One Rulebook" directive, courts will likely be asked to decide whether the federal government can sharply limit state authority in an area where Congress has yet to enact comprehensive legislation.
State officials warn that, absent robust federal standards, a preemption effort would leave millions of residents exposed to risks ranging from fraud to civil-rights violations. Tech companies counter that only a uniform national rule will allow the U.S. to maintain global AI competitiveness.
The executive order is expected later this week.
Examples of state AI/deepfake and AI-systems laws
- Colorado AI Act: Colorado in 2024 passed legislation regulating high-risk AI systems that affect areas such as employment, housing, insurance and government services.
- ELVIS Act (Tennessee): This 2024 law prohibits unauthorized AI-generated voice or likeness impersonations. It was touted as a protection against AI-enabled voice cloning and deepfakes.
- State laws on deepfake sexual imagery and nonconsensual intimate content: Forty-six states have enacted laws prohibiting the creation or distribution of explicit deepfakes, including revenge porn, to protect individuals' privacy and prevent abuse.
- State laws on deepfakes in political or election-related communications: As of 2025, roughly 28 states have passed laws restricting AI-generated media used in political campaigns or elections, aiming to curb misinformation and deception.
What these laws seek to do and what preemption could erase
These state-level laws typically aim to:
- Ban nonconsensual or exploitative sexual content created by AI.
- Prohibit impersonation or unauthorized use of a person's likeness or voice via AI (e.g., the ELVIS Act).
- Regulate AI-generated content used in elections or political messaging.
- Impose transparency or safety requirements on high-risk AI systems, for example to prevent discriminatory outcomes in housing, employment or insurance (as with the Colorado law).
If a federal executive order imposed a single national rule that preempts state laws, many or all of those protections could be voided, along with any state-level enforcement mechanisms.
Why state-level action has surged
- Rapid expansion of generative-AI and deepfake tools has made misuse easier and cheaper; lawmakers responded with targeted bans on nonconsensual deepfakes and AI-enabled impersonation. (multistate.us)
- Growing awareness that discrimination, bias or safety harms could arise from AI systems used in sensitive areas (hiring, housing, public services), prompting laws like Colorado's targeting high-risk systems. (Wikipedia)
Posted: 2025-12-09 16:34:27