House Republicans want to stop states from regulating AI. More than 100 organizations are pushing back

By Advanced AI Editor | May 20, 2025


More than 100 organizations are raising alarms about a provision in the House’s sweeping tax and spending cuts package that would hamstring the regulation of artificial intelligence systems.

Tucked into President Donald Trump’s “one big, beautiful” agenda bill is a rule that, if passed, would prohibit states from enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for 10 years.

With AI rapidly advancing and extending into more areas of life — such as personal communications, health care, hiring and policing — blocking states from enforcing even their own laws related to the technology could harm users and society, the organizations said. They laid out their concerns in a letter sent Monday to members of Congress, including House Speaker Mike Johnson and House Democratic Leader Hakeem Jeffries.

“This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm — regardless of how intentional or egregious the misconduct or how devastating the consequences — the company making or using that bad tech would be unaccountable to lawmakers and the public,” the letter, provided exclusively to CNN ahead of its release, states.

The bill cleared a key hurdle when the House Budget Committee voted to advance it on Sunday night, but it still must undergo a series of votes in the House before it can move to the Senate for consideration.

The 141 signatories on the letter include academic institutions such as the University of Essex and Georgetown Law’s Center on Privacy and Technology, and advocacy groups such as the Southern Poverty Law Center and the Economic Policy Institute. Employee coalitions such as Amazon Employees for Climate Justice and the Alphabet Workers Union, the labor group representing workers at Google’s parent company, also signed the letter, underscoring how widespread concerns about the future of AI development have become.

“The AI preemption provision is a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives,” said Emily Peterson-Cassin, corporate power director at non-profit Demand Progress, which drafted the letter.

“Speaker Johnson and Leader Jeffries must listen to the American people and not just Big Tech campaign donations,” Peterson-Cassin said in a statement.

The letter comes as Trump has rolled back some of the limited federal rules for AI that existed prior to his second term.

Shortly after taking office this year, Trump revoked a sweeping Biden-era executive order designed to provide at least some safeguards around artificial intelligence. Earlier this month, he also said he would rescind Biden-era restrictions on the export of critical US AI chips.

Ensuring that the United States remains the global leader in AI, especially in the face of heightened competition from China, has been one of the president’s key priorities.

“We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off,” Vice President JD Vance told heads of state and CEOs at the Artificial Intelligence Action Summit in February.

US states, however, have increasingly moved to regulate some of the highest risk applications of AI in the absence of significant federal guidelines.

Colorado, for example, passed a comprehensive AI law last year requiring tech companies to protect consumers from the risk of algorithmic discrimination in employment and other crucial decisions, and inform users when they’re interacting with an AI system. New Jersey Gov. Phil Murphy, a Democrat, signed a law earlier this year that creates civil and criminal penalties for people who distribute misleading AI-generated deepfake content. And Ohio lawmakers are considering a bill that would require watermarks on AI-generated content and prohibit identity fraud using deepfakes.

Multiple state legislatures have also passed laws regulating the use of AI-generated deepfakes in elections.

That some applications of AI should be regulated has been a rare point of bipartisan agreement on Capitol Hill. On Monday, President Donald Trump is set to sign into law the Take It Down Act, which will make it illegal to share non-consensual, AI-generated explicit images; the bill passed both the House and Senate with support from both sides of the aisle.

The budget bill provision would run counter to the calls from some tech leaders for more regulation of AI.

OpenAI CEO Sam Altman testified to a Senate subcommittee in 2023 that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” More recently on Capitol Hill, Altman said he agreed that a risk-based approach to regulating AI “makes a lot of sense,” although he urged federal lawmakers to create clear guidelines to help tech companies navigate a patchwork of state regulations.

“We need to make sure that companies like OpenAI and others have legal clarity on how we’re going to operate. Of course, there will be rules. Of course, there need to be some guardrails,” he said. But, he added, “we need to be able to understand how we’re going to offer services, and where the rules of the road are going to be.”

–Correction: A previous version of this story incorrectly stated that Cornell University was a signatory on the letter.
