DMR News

Advancing Digital Conversations

Federal–State Conflict Intensifies as Lawmakers Clash Over Who Should Regulate AI

By Jolyen

Nov 29, 2025

Growing Divide Over Regulatory Authority
For the first time, Washington is nearing decisions on how to regulate artificial intelligence, and the central dispute now hinges on who should have the authority to create and enforce the rules. In the absence of federal standards focused on consumer safety, states have introduced dozens of bills aimed at addressing AI-related risks, including California’s SB-53 and Texas’s Responsible AI Governance Act, which prohibits intentional misuse of AI systems. Tech companies and AI-focused startups argue these laws create an unworkable patchwork that harms innovation. Pro-AI PAC co-founder Josh Vlasto told TechCrunch such regulations will slow the United States “in the race against China.”

Push for Federal Preemption and Legislative Maneuvers
Industry leaders and several White House officials are pushing for a national standard that would preempt state authority, and new efforts have emerged to prohibit states from passing their own AI laws. House lawmakers have considered adding preemption language to the National Defense Authorization Act. A leaked draft of a White House executive order shows similar intent. The draft outlines the creation of an “AI Litigation Task Force” to challenge state AI laws, directs agencies to review state regulations deemed burdensome, and encourages the Federal Communications Commission and Federal Trade Commission to issue national rules that would override state measures.

The draft also gives Trump’s AI and Crypto Czar, David Sacks, co-lead authority to shape a unified legal framework, placing him alongside officials who traditionally manage technology policy. Sacks has advocated for limiting state involvement and keeping federal oversight minimal, supporting industry self-regulation.

Resistance in Congress and Federal Legislative Plans
A sweeping preemption that removes states’ ability to regulate AI is unpopular in Congress. Lawmakers voted overwhelmingly against a similar moratorium earlier this year and argued that without a federal standard, blocking state involvement would leave consumers unprotected. Rep. Ted Lieu and the bipartisan House AI Task Force are preparing a package of federal AI bills covering fraud, healthcare, transparency, child safety, and catastrophic risk. The scale of this package means it could take months or years to become law, underscoring why the accelerated push to limit state authority has become one of the most contentious issues in AI policy.

Escalating Preemption Efforts in the NDAA and White House Draft
Recent weeks have seen intensified efforts to restrict state AI regulation. Majority Leader Steve Scalise told Punchbowl News that the House has considered inserting language into the NDAA that would stop states from regulating AI. Politico reported that Congress aimed to finalize a deal before Thanksgiving. A source familiar with the negotiations told TechCrunch discussions have focused on narrowing the restrictions to potentially preserve state authority over areas such as children’s safety and transparency.

The leaked White House draft EO, which has reportedly been paused, outlines a parallel preemption strategy by preparing legal challenges to state laws, instructing agencies to evaluate regulations deemed restrictive, and encouraging federal rulemaking that supersedes state authority.

Industry Funding and Political Influence
Sacks’s position aligns with much of the AI industry. Several pro-AI super PACs have formed in recent months, spending hundreds of millions of dollars on local and state elections to oppose candidates who support AI regulation. Leading the Future—backed by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale—has raised over $100 million and recently launched a $10 million campaign urging Congress to establish federal AI rules that override state laws.

Vlasto told TechCrunch that inconsistent state regulations would hinder innovation. Nathan Leamer, executive director of Build American AI, the PAC’s advocacy arm, confirmed support for preemption even without new federal consumer protections. Leamer said existing laws covering fraud and product liability are sufficient, favoring a reactive posture that lets companies address harms in court. He contrasted this with state laws that often seek to prevent issues before they arise.

Political Pushback and State Policy Momentum
One of the PAC’s first targets is New York Assembly member Alex Bores, who sponsored the RAISE Act requiring AI labs to create safety plans for preventing critical harms. Bores told TechCrunch he supports a national AI policy but argues states can move faster to address emerging risks. As of November 2025, 38 states have adopted more than 100 AI-related laws, mostly addressing deepfakes, transparency, disclosure, and government use of AI. A recent study found that 69% of those laws impose no requirements on AI developers.

Congress has moved at a slower pace. Hundreds of AI bills have been introduced, but few have passed. Since 2015, Lieu has introduced 67 bills to the House Science Committee, and only one became law. More than 200 lawmakers signed an open letter opposing preemption in the NDAA, arguing that states serve as “laboratories of democracy” and must retain flexibility to address new digital challenges. Nearly 40 state attorneys general also sent a letter opposing a federal block on state AI laws.

Expert Perspectives and Drafting of a Comprehensive Federal Package
Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders argue that concerns about regulatory patchwork are overstated. They note AI companies already comply with stricter EU rules, and other industries manage varying state laws. They describe the industry’s opposition as an attempt to avoid accountability.

Lieu is drafting a federal megabill exceeding 200 pages that covers fraud penalties, deepfake protections, whistleblower protections, expanded compute resources for academia, and mandatory testing and disclosure requirements for companies building large language models. The bill does not direct federal agencies to evaluate AI models directly, differing from a proposal by Sens. Josh Hawley and Richard Blumenthal that calls for a government-run evaluation program for advanced AI systems before deployment. Lieu acknowledged his approach would not be as strict but said it has a better chance of becoming law. He noted that his goal is to pass legislation under Republican control of the House, Senate, and White House.


Featured image credits: Pexels

Jolyen

As a news editor, I bring stories to life through clear, impactful, and authentic writing. I believe every brand has something worth sharing. My job is to make sure it’s heard. With an eye for detail and a heart for storytelling, I shape messages that truly connect.
