Gary Marcus

When it Comes to AI Policy, Congress Shouldn’t Cut States off at the Knees

By Advanced AI Bot | May 14, 2025


[This essay is coauthored with many state legislators from across the United States, as listed below.]

Maryland State House, workplace of first author Senator Katie Fry Hester, one of many state legislators around the nation who would lose their voice in all matters of AI policy if a last-minute Congressional moratorium on state AI legislation becomes law.

Artificial intelligence holds immense promise—from accelerating disease detection to streamlining services—but it also presents serious risks, including deepfake deception, misinformation, job displacement, exploitation of vulnerable workers and consumers, and threats to critical infrastructure. As AI rapidly transforms our economy, workplaces, and civic life, the American public is calling for meaningful oversight. According to the Artificial Intelligence Policy Institute, 82% of voters support the creation of a federal agency to regulate AI. A Pew Research Center survey found that 52% of Americans are more concerned than excited about AI’s potential, and 67% doubt that government oversight will be sufficient or timely.

Public skepticism crosses party lines and reflects real anxiety: voters worry about data misuse, algorithmic bias, surveillance, impersonation, and even catastrophic risks. Pope Leo XIV has named AI as one of the defining challenges of our time, warning of its ethical consequences and impacts on ordinary people and calling for urgent action.

Yet instead of answering this call with guardrails and public protections, Congress, which has done almost nothing to address these concerns, is considering a major step backwards: a sweeping last-minute preemption provision, tucked into a federal budget bill, designed to prevent states from taking matters into their own hands by banning all state regulation of AI for the next decade.

The provision, which is likely at odds with the 10th Amendment, demands that “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.” The measure would prohibit any state from regulating AI for the next ten years in any way—even in the absence of any federal standards.

This would be deeply problematic under any circumstance, but it’s especially dangerous in the context of a rapidly evolving technology already reshaping healthcare, education, civil rights, and employment. If enacted, the statute would preempt states from acting—even if AI systems cause measurable harm, such as through discriminatory lending, unsafe autonomous vehicles, or invasive workplace surveillance. For example, twenty states have passed laws regulating the use of deepfakes in election campaigns, and Colorado passed a law to ensure transparency and accountability when AI is used in crucial decisions affecting consumers and employees. The proposed federal law would automatically block the application of those state laws, without offering any alternative. The proposed provision would also preempt laws holding AI companies liable for any catastrophic damages that they contributed to, as the California Assembly tried to do.

The federal government should not get to control literally every aspect of how states regulate AI — particularly when Washington itself has fallen down on the job — and the Constitution makes pretty clear that the bill as written is far, far too broad. The 10th Amendment states, quite directly, that “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” In stepping so thoroughly on states’ rights, it is difficult to see how the proposed bill would not clash with this 234-year-old bedrock principle of the United States. (Defenders of this overbroad bill will claim that AI is part of interstate commerce; years of lawsuits will ensue.)

Of course there are always arguments on the other side. The Big Tech position was laid out well in a long piece published Friday in Lawfare by Kevin Frazier and Adam Thierer that has elements of truth but misses the larger picture. Part of it emphasizes the race with China and the need for speed. Their claim, which exaggerates the costs of regulation and minimizes the costs of having none (not to mention states’ rights), is that AI regulation “could undermine the nation’s efforts to stay at the cutting edge of AI innovation at a critical moment when competition with China for global AI supremacy is intensifying” and that “If this growing patchwork of parochial regulatory policies takes root, it could undermine U.S. AI innovation”; they call on Congress “to get serious about preemption.”

What they miss is threefold. First, if current trends continue, the “race” with China will not end in victory for either side. Because both countries are building essentially the same kinds of models with the same kinds of techniques on the same kinds of data, the results from the two nations are converging on the same outcomes. So-called leaderboards are no longer dominated by any one country. Any advantage in Generative AI (which still hasn’t remotely made a net profit, and is all still speculative) will be minimal and short-lived. Our big tech giants will match theirs, and vice versa, and the only real question is about the size of the profits. Any regulation that is proposed will be absorbed as a cost of business (trivial for trillion-dollar companies), and there is no serious argument that the relatively modest costs of regulation (which they don’t even bother to estimate) will have any real-world impact whatsoever on those likely tied outcomes. Silicon Valley loves to invoke China to get better terms, but it probably won’t make any difference. (China actually has far more national regulation around AI than the US does, and that has in no way stopped it from catching up.)

Second, Frazier and Thierer are presenting a false choice. The comparison here is not between coherent federal law and a patchwork of state laws, but between essentially zero enduring federal AI law (only executive orders that seem to come and go with the tides) and the well-intentioned efforts of many state legislators to make up for the fact that Washington has failed. If Washington wants to pass a comprehensive privacy or AI law with teeth, more power to them, but we all know this is unlikely; Frazier and Thierer would leave citizens out to dry, much as low-touch advocates have left us all out to dry when it comes to social media.

Third, Frazier and Thierer skirted the issue of states’ rights altogether, not even considering how AI fits relative to other sensitive issues such as abortion or gun control. In insisting that “might makes right” here for AI, they risk setting a dangerous precedent in which whichever party holds federal power makes all the rules, all the time, overriding the powers that the 10th Amendment reserves to the states and eroding one of our last remaining checks and balances.

And as Senator Markey put it, “[a] 10-year moratorium on state AI regulation won’t lead to an AI Golden Age. It will lead to a Dark Age for the environment, our children, and marginalized communities.”

Consumer Reports’ Policy Analyst for AI Issues Grace Gedye also weighed in: “Congress has long abdicated its responsibility to pass laws to address emerging consumer protection harms; under this bill, it would also prohibit the states from taking actions to protect their residents.”

Well aware of the challenges AI poses, state leaders have already been acting. An open letter from the International Association of Privacy Professionals, signed by 62 legislators from 32 states, underscores the importance of state-level AI legislation—especially in the absence of comprehensive federal rules. Since 2022, dozens of states have introduced or passed AI laws. In 2024 alone, 31 states, Puerto Rico, and the Virgin Islands enacted AI-related legislation or resolutions, and at least 27 states passed deepfake laws. These include advisory councils, impact assessments, grant programs, and comprehensive legislation like Colorado’s, which would have mandated transparency and anti-discrimination protections in high-risk AI systems. The proposed moratorium would also undo literally every bit of state privacy legislation, despite the fact that no federal privacy bill has passed after many years of discussion.

It’s specifically because of state momentum that Big Tech is trying to shut the states down. According to a recent report in Politico, “As California and other states move to regulate AI, companies like OpenAI, Meta, Google and IBM are all urging Washington to pass national AI rules that would rein in state laws they don’t like. So is Andreessen Horowitz, a Silicon Valley-based venture capitalist firm closely tied to President Donald Trump.” All largely behind closed doors. Why? With no regulatory pressure, tech companies would have little incentive to prioritize safety, transparency, or ethical design; any costs to society would be borne by society.

But the reality is that self-regulation has repeatedly failed the public, and the absence of oversight would only invite more industry lobbying to maintain weak accountability.

At a time when voters are demanding protection—and global leaders are sounding the alarm—Congress should not tie the hands of the only actors currently positioned to lead. A decade of deregulation isn’t a path forward. It’s an abdication of responsibility.

If you are among the 82% of Americans who think AI needs oversight, you need to call or write your Congress members now, or the door on AI regulation will slam shut at least for the next decade, if not forever, and we will be entirely at Silicon Valley’s mercy.

Senator Katie Fry Hester, Maryland

Gary Marcus, Professor Emeritus, NYU

Delegate Michelle Maldonado, Virginia

Senator James Maroney, Connecticut

Senator Robert Rodriguez, Colorado

Representative Kristin Bahner, Minnesota

Representative Steve Elkins, Minnesota

Senator Kristen Gonzalez, New York

Representative Monique Priestley, Vermont


