[This essay is coauthored with many representatives from States across the United States, as listed below.]

Artificial intelligence holds immense promise—from accelerating disease detection to streamlining services—but it also presents serious risks, including deepfake deception, misinformation, job displacement, exploitation of vulnerable workers and consumers, and threats to critical infrastructure. As AI rapidly transforms our economy, workplaces, and civic life, the American public is calling for meaningful oversight. According to the Artificial Intelligence Policy Institute, 82% of voters support the creation of a federal agency to regulate AI. A Pew Research Center survey found that 52% of Americans are more concerned than excited about AI’s potential, and 67% doubt that government oversight will be sufficient or timely.
Public skepticism crosses party lines and reflects real anxiety: voters worry about data misuse, algorithmic bias, surveillance, and impersonation, and even catastrophic risks. Pope Leo XIV has named AI as one of the defining challenges of our time, warning of its ethical consequences and impacts on ordinary people and calling for urgent action.
Yet instead of answering this call with guardrails and public protections, Congress, which has done almost nothing to address these concerns, is considering a major step backwards: a sweeping, last-minute preemption provision, tucked into a federal budget bill, that would ban all state regulation of AI for the next decade, preventing States from taking matters into their own hands.
The provision, which is likely at odds with the 10th Amendment, demands that “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.” The measure would prohibit any state from regulating AI for the next ten years in any way—even in the absence of any federal standards.
This would be deeply problematic under any circumstance, but it’s especially dangerous in the context of a rapidly evolving technology already reshaping healthcare, education, civil rights, and employment. If enacted, the statute would preempt states from acting, even if AI systems cause measurable harm, such as through discriminatory lending, unsafe autonomous vehicles, or invasive workplace surveillance. For example, twenty states have passed laws regulating the use of deepfakes in election campaigns, and Colorado passed a law to ensure transparency and accountability when AI is used in crucial decisions affecting consumers and employees. The proposed federal law would automatically block the application of those state laws, without offering any alternative. It would also preempt laws holding AI companies liable for catastrophic damages to which they contributed, as the California Assembly tried to do.
The federal government should not get to control literally every aspect of how states regulate AI, particularly when Washington itself has fallen down on the job, and the Constitution makes clear that the bill as written is far, far too broad. The 10th Amendment states, quite directly, that “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” In stepping so thoroughly on states’ rights, it is difficult to see how the proposed bill would not clash with this 234-year-old bedrock principle of the United States. (Defenders of this overbroad bill will claim that AI is part of interstate commerce; years of lawsuits will ensue.)
Of course there are always arguments on the other side. The Big Tech position was laid out well in a long piece published Friday in Lawfare by Kevin Frazier and Adam Thierer that has elements of truth but misses the larger picture. Part of it emphasizes the race with China and the need for speed. Their claim, which exaggerates the costs of regulation and minimizes the costs of having none (not to mention states’ rights), is that AI regulation “could undermine the nation’s efforts to stay at the cutting edge of AI innovation at a critical moment when competition with China for global AI supremacy is intensifying,” and that “If this growing patchwork of parochial regulatory policies takes root, it could undermine U.S. AI innovation”; on that basis, they call on Congress “to get serious about preemption.”
What they miss is threefold. First, if current trends continue, the “race” with China will not end in victory for either side. Because both countries are building essentially the same kind of models with the same kinds of techniques using the same kinds of data, the two nations are converging on essentially the same outcomes. So-called leaderboards are no longer dominated by any one country. Any advantage in Generative AI (which still hasn’t remotely made a net profit, and is all still speculative) will be minimal and short-lived. Our big tech giants will match theirs, and vice versa, and the only real question is about the size of the profits. Any regulation that is proposed will be absorbed as a cost of business (trivial for trillion-dollar companies), and there is no serious argument that the relatively modest costs of regulation (which they don’t even bother to estimate) will have any real-world impact whatsoever on those likely tied outcomes. Silicon Valley loves to invoke China to get better terms, but it probably won’t make any difference. (China actually has far more national regulation around AI than the US does, and that has in no way stopped it from catching up.)
Second, Frazier and Thierer are presenting a false choice. The comparison here is not between a coherent federal law and a patchwork of state laws, but between essentially zero enduring federal AI law (only executive orders that seem to come and go with the tides) and the well-intentioned efforts of many state legislators to make up for the fact that Washington has failed to act. If Washington wants to pass a comprehensive privacy or AI law with teeth, more power to it, but we all know this is unlikely; Frazier and Thierer would hang citizens out to dry, much as low-touch advocates have hung us all out to dry when it comes to social media.
Third, Frazier and Thierer skirt the issue of States’ rights altogether, never considering how AI fits relative to other sensitive issues such as abortion or gun control. In insisting that “might makes right” here for AI, they risk setting a dangerous precedent in which whichever party holds Federal power makes all the rules, all the time, overriding the power of the States that the 10th Amendment exists to protect, and eroding one of our last remaining checks and balances.
And as Senator Markey put it, “[a] 10-year moratorium on state AI regulation won’t lead to an AI Golden Age. It will lead to a Dark Age for the environment, our children, and marginalized communities.”
Consumer Reports’ Policy Analyst for AI Issues Grace Gedye also weighed in: “Congress has long abdicated its responsibility to pass laws to address emerging consumer protection harms; under this bill, it would also prohibit the states from taking actions to protect their residents.”
Well aware of the challenges AI poses, state leaders have already been acting. An open letter from the International Association of Privacy Professionals, signed by 62 legislators from 32 states, underscores the importance of state-level AI legislation—especially in the absence of comprehensive federal rules. Since 2022, dozens of states have introduced or passed AI laws. In 2024 alone, 31 states, Puerto Rico, and the Virgin Islands enacted AI-related legislation or resolutions, and at least 27 states passed deepfake laws. These include advisory councils, impact assessments, grant programs, and comprehensive legislation like Colorado’s, which mandates transparency and anti-discrimination protections in high-risk AI systems. The proposed moratorium would also undo literally every bit of State privacy legislation, despite the fact that no Federal privacy bill has passed after many years of discussion.
It’s specifically because of state momentum that Big Tech is trying to shut the states down. According to a recent report in Politico, “As California and other states move to regulate AI, companies like OpenAI, Meta, Google and IBM are all urging Washington to pass national AI rules that would rein in state laws they don’t like. So is Andreessen Horowitz, a Silicon Valley-based venture capitalist firm closely tied to President Donald Trump.” All largely behind closed doors. Why? With no regulatory pressure, tech companies would have little incentive to prioritize safety, transparency, or ethical design; any costs to society would be borne by society.
But the reality is that self-regulation has repeatedly failed the public, and the absence of oversight would only invite more industry lobbying to maintain weak accountability.
At a time when voters are demanding protection—and global leaders are sounding the alarm—Congress should not tie the hands of the only actors currently positioned to lead. A decade of deregulation isn’t a path forward. It’s an abdication of responsibility.
If you are among the 82% of Americans who think AI needs oversight, call or write your Congress members now. Otherwise, the door on AI regulation will slam shut for at least the next decade, if not forever, and we will be entirely at Silicon Valley’s mercy.
Senator Katie Fry Hester, Maryland
Gary Marcus, Professor Emeritus, NYU
Delegate Michelle Maldonado, Virginia
Senator James Maroney, Connecticut
Senator Robert Rodriguez, Colorado
Representative Kristin Bahner, Minnesota
Representative Steve Elkins, Minnesota
Senator Kristen Gonzalez, New York
Representative Monique Priestley, Vermont