
House Republicans want to stop states from regulating AI. More than 100 organizations are pushing back

By Advanced AI Bot | May 20, 2025 | 5 Mins Read


More than 100 organizations are raising alarms about a provision in the House’s sweeping tax and spending cuts package that would hamstring the regulation of artificial intelligence systems.

Tucked into President Donald Trump’s “one big, beautiful” agenda bill is a rule that, if passed, would prohibit states from enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for 10 years.

With AI rapidly advancing and extending into more areas of life — such as personal communications, health care, hiring and policing — blocking states from enforcing even their own laws related to the technology could harm users and society, the organizations said. They laid out their concerns in a letter sent Monday to members of Congress, including House Speaker Mike Johnson and House Democratic Leader Hakeem Jeffries.

“This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm — regardless of how intentional or egregious the misconduct or how devastating the consequences — the company making or using that bad tech would be unaccountable to lawmakers and the public,” the letter, provided exclusively to CNN ahead of its release, states.

The bill cleared a key hurdle when the House Budget Committee voted to advance it on Sunday night, but it still must undergo a series of votes in the House before it can move to the Senate for consideration.

The 141 signatories on the letter include academic institutions such as the University of Essex and Georgetown Law’s Center on Privacy and Technology, and advocacy groups such as the Southern Poverty Law Center and the Economic Policy Institute. Employee coalitions such as Amazon Employees for Climate Justice and the Alphabet Workers Union, the labor group representing workers at Google’s parent company, also signed the letter, underscoring how widely held concerns about the future of AI development are.

“The AI preemption provision is a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives,” said Emily Peterson-Cassin, corporate power director at non-profit Demand Progress, which drafted the letter.

“Speaker Johnson and Leader Jeffries must listen to the American people and not just Big Tech campaign donations,” Peterson-Cassin said in a statement.

The letter comes as Trump has rolled back some of the limited federal rules for AI that existed prior to his second term.

Shortly after taking office this year, Trump revoked a sweeping Biden-era executive order designed to provide at least some safeguards around artificial intelligence. Earlier this month, he also said he would rescind Biden-era restrictions on the export of critical US AI chips.

Ensuring that the United States remains the global leader in AI, especially in the face of heightened competition from China, has been one of the president’s key priorities.

“We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off,” Vice President JD Vance told heads of state and CEOs at the Artificial Intelligence Action Summit in February.

US states, however, have increasingly moved to regulate some of the highest-risk applications of AI in the absence of significant federal guidelines.

Colorado, for example, passed a comprehensive AI law last year requiring tech companies to protect consumers from the risk of algorithmic discrimination in employment and other crucial decisions, and inform users when they’re interacting with an AI system. New Jersey Gov. Phil Murphy, a Democrat, signed a law earlier this year that creates civil and criminal penalties for people who distribute misleading AI-generated deepfake content. And Ohio lawmakers are considering a bill that would require watermarks on AI-generated content and prohibit identity fraud using deepfakes.

Multiple state legislatures have also passed laws regulating the use of AI-generated deepfakes in elections.

That some applications of AI should be regulated has been a rare point of bipartisan agreement on Capitol Hill. On Monday, President Donald Trump is set to sign into law the Take It Down Act, which will make it illegal to share non-consensual, AI-generated explicit images and which passed both the House and Senate with support from both sides of the aisle.

The budget bill provision would run counter to the calls from some tech leaders for more regulation of AI.

OpenAI CEO Sam Altman testified to a Senate subcommittee in 2023 that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” More recently on Capitol Hill, Altman said he agreed that a risk-based approach to regulating AI “makes a lot of sense,” although he urged federal lawmakers to create clear guidelines for tech companies navigating a patchwork of state regulations.

“We need to make sure that companies like OpenAI and others have legal clarity on how we’re going to operate. Of course, there will be rules. Of course, there need to be some guardrails,” he said. But, he added, “we need to be able to understand how we’re going to offer services, and where the rules of the road are going to be.”

–Correction: A previous version of this story incorrectly stated that Cornell University was a signatory on the letter.






