In a powerful bipartisan move, the U.S. Senate voted 99–1 on July 1 to remove a controversial part of a major spending bill that would have banned U.S. states from passing their own AI laws for the next 10 years. The ban was originally included in H.R. 1, a wide-ranging tax-and-spending bill supported by President Donald Trump. But after pushback from state leaders, civil society groups, and privacy advocates, Senator Marsha Blackburn (R–Tennessee) introduced an amendment to strike it out. The Senate agreed, with only Senator Thom Tillis (R–North Carolina) voting against the change.
Later that day, the Senate passed the full bill by a 51–50 vote, with Vice President JD Vance casting the deciding vote. The rest of the bill remains intact, but the AI regulation ban is gone.
WHAT JUST HAPPENED?
The controversial AI provision in question sought to ban states and local governments from making or enforcing their own AI-related laws for 10 years. It was quietly included in the original version of H.R. 1, colloquially called the “One Big Beautiful Bill,” but drew criticism from across the political spectrum. The provision also tied the ban to federal funding: states that chose to regulate AI on their own would have forfeited money for broadband and tech infrastructure.
On July 1, Senator Marsha Blackburn (R–TN) formally introduced an amendment to strike the moratorium from the bill. The amendment passed overwhelmingly, 99–1, with Senator Thom Tillis (R–NC) casting the lone dissenting vote.
Shortly thereafter, the broader bill passed the Senate 51–50, with Vice President JD Vance casting the tie-breaking vote. The bill retains its key provisions on federal AI investment, broadband access, and tech competitiveness, now without the state regulation ban.
WHY DID THE AI MORATORIUM FAIL?
The proposed moratorium had been heavily backed by major tech firms, including Meta, Alphabet (Google), Microsoft, Amazon, and OpenAI. Industry leaders argued that a patchwork of inconsistent state regulations would create operational chaos and legal uncertainty, and would hinder the U.S.’s ability to compete with AI developments in China and the EU.
The proposal received strong opposition from many sides:
- State governors, including several Republicans, said the ban would take away their right to protect residents from AI harms like bias in hiring, deepfakes, or facial recognition abuse.
- Civil rights groups warned it would weaken efforts to fight AI-driven discrimination or privacy violations.
- Lawmakers from both parties said it would give too much power to the federal government and Big Tech while ignoring real risks that communities face.
Senator Blackburn, who once supported the idea of a shorter five-year ban, ultimately said she could not support the moratorium without stronger protections for children and consumers. Her amendment to remove the ban passed almost unanimously.
WHAT ARE STATES DOING WITH THAT POWER?
Several states had been gearing up to introduce or expand their AI regulations prior to the federal vote. For example:
- California proposed legislation requiring AI systems to disclose training data and undergo bias audits.
- Illinois, with its long-standing Biometric Information Privacy Act (BIPA), was looking to expand rules to include generative AI surveillance.
- New York, Connecticut, and Washington were drafting bills aimed at regulating the use of AI in employment decisions and political communications.
This means the U.S. could soon see a variety of AI laws across different states, each responding to local concerns.
LEGAL AND POLICY IMPLICATIONS
This development reinforces the Tenth Amendment-based principle that unless expressly preempted, states retain the authority to regulate technologies in areas like consumer protection, education, and civil rights.
However, it also creates regulatory fragmentation. AI companies must now prepare to comply with varying laws across multiple states, some of which may impose stricter requirements than others. Legal scholars have pointed out that future lawsuits could arise over whether eventual federal AI laws will preempt these state-level efforts.
The Senate vote also raises big questions for federal AI strategy:
- Will Congress move toward a comprehensive federal AI regulation to unify standards?
- Can the government balance innovation and rights-based protections without stifling technological growth?
- Will other federal efforts, such as the proposed Kids Online Safety Act, provide the missing guardrails?
WHAT’S NEXT?
The House of Representatives now needs to agree on the final version of the bill. It’s possible that some lawmakers may try to bring the AI moratorium back—but with such strong opposition in the Senate, that seems unlikely.
Going forward, there’s still a big question:
Will the federal government create a nationwide AI law that sets clear standards across all states?
For now, the answer is no. Each state will continue to act on its own—an outcome some see as a win for democracy, and others worry will create a compliance headache for companies trying to follow the rules.
FINAL THOUGHTS
In the escalating contest over AI governance, the Senate’s vote represents a meaningful pivot toward decentralized, democratic oversight. It reaffirms the idea that states have a right to lead where Congress delays, and that the public interest must not be overridden by the interests of a few large firms.
Whether this moment marks the beginning of a more balanced AI regulatory future, or just a brief pause in Washington’s deregulatory momentum, remains to be seen.
References:
- The Hindu: U.S. Senate strikes AI regulation ban from Trump megabill
- Reuters: U.S. Senate strikes AI regulation ban from Trump megabill
- PBS NewsHour: Senate pulls AI regulatory ban from GOP bill after complaints from states
- Ogletree Deakins: U.S. Senate strikes proposed 10-year ban on state and local AI regulation from spending bill