Advanced AI News
Gary Marcus

The Tale of An About-face in AI Regulation

By Advanced AI Editor | May 16, 2025 | 8 Mins Read


The US Capitol, May 15, 2023, photo by the author, the night before the Senate’s first hearing on AI oversight

We need to maximize the good over the bad. Congress has a choice. Now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content exploiting children, creating dangers for them.

– Senator Richard Blumenthal (D-CT), May 16, 2023

I think my question is, what kind of an innovation is it going to be? Is it gonna be like the printing press that diffused knowledge, power, and learning widely across the landscape that empowered, ordinary, everyday individuals that led to greater flourishing, that led above all to greater liberty? Or is it gonna be more like the atom bomb, huge technological breakthrough, but the consequences severe, terrible, continue to haunt us to this day? I don’t know the answer to that question. I don’t think any of us in the room know the answer to that question. Cause I think the answer has not yet been written. And to a certain extent, it’s up to us here and to us as the American people to write the answer.

– Senator Josh Hawley (R-MO), May 16, 2023

Thank you, Mr. Chairman [Sen. Blumenthal] and Senator Hawley for having this. I’m trying to find out how [AI] is different than social media and learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish. Because if I slander you you can sue me. If you’re a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, section 230 is being used by social media companies to hide, to avoid liability for activity that other people generate. When they refuse to comply with their terms of use, a mother calls up the company and says, this app is being used to bully my child to death. You promise, in the terms of use, she would prevent bullying. And she calls three times, she gets no response, the child kills herself and they can’t sue. Do you all agree we don’t wanna do that again?

– Senator Lindsey Graham (R-SC), May 16, 2023

We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.

– OpenAI CEO Sam Altman, May 16, 2023

Two years ago today, Sam Altman, Christina Montgomery, and I testified before the US Senate Judiciary Oversight Committee, at the behest of Senators Blumenthal and Hawley.

At the time, it felt like the highlight of my life. I had a palpable sense of history – this was the Senate’s first hearing on AI. I nearly wept the evening before when I walked by the Capitol at twilight, taking the photo above and reflecting on the history of the United States and the importance of AI to our future. And then, to my great and pleasant surprise, at the hearing itself the next day, nearly everybody gathered in the room seemed to get it, to be on the same page about the importance of AI regulation and of getting it right without delay. As the quotes above illustrate (and I could have chosen many others), Senators, both Democrats and Republicans, recognized the gravity of the moment, and expressed guilt at not having acted faster or more effectively in regulating social media. All seemed highly motivated to do better this time.

And it wasn’t just the bipartisan enthusiasm of the Senators that buoyed me, but also the remarks of Sam Altman, perhaps the most visible representative of the AI industry. Throughout the meeting he spoke out in favor of genuine AI regulation, at one point even endorsing my own ideas around international AI governance.

Tragically, almost none of what was discussed that day has come to fruition. We have no concretely implemented international AI governance, no national AI agency; we are no longer even positioned well to detect and address AI-escalated cybercrime. AI-fueled discrimination in job decisions is likely far more rampant than before. Absolutely nothing is being done about AI-generated misinformation, political or medical. By many accounts, AI-fueled scams have exploded, too, and again there is no coherent federal response.

Two years later, Washington seems entirely different. Government officials aren’t worrying out loud about the risks of AI anymore; they are downplaying them. Congress has failed to pass any meaningful AI regulation, and even worse, it is now actively aiming to prevent States — probably our last hope — from passing anything meaningful. Republicans as a whole are far more resistant to AI regulation now than they were in 2023, and voices like Josh Hawley, who seemed sincerely interested in how to regulate AI, are now drowned out by the administration’s across-the-board anti-regulatory turn.

And when Altman returned to the Senate last week, he sang an entirely different tune, effectively trying to block AI regulation at every turn. Altman is no longer talking about AI regulation; he is actively resisting it.

Which raises a question: Did Altman actually mean any of what he said two years ago? I believed him at the time, but I probably shouldn’t have.

In hindsight, Altman is phenomenal at reading the room and telling people what they want to hear, even if he doesn’t really mean it. For example, he pretended to be doing the job purely out of love, working for health insurance and no equity, but didn’t disclose that he held indirect equity in OpenAI’s for-profit subsidiary via his holdings in Y Combinator; he also conveniently forgot to mention his ownership of the OpenAI Startup Fund (which he subsequently got out of, under pressure).

And even as Altman was telling Congress that he supported AI regulation, his company was lobbying the EU to water down its AI Act. (At one point he even threatened to have OpenAI walk away from the EU altogether.) Now he is doing everything he can to stop AI regulation of any meaningful sort. (He also said at the time that “we think that content creators, content owners, need to benefit from this technology,” but ever since, his company has been pushing for free training materials and exemptions from copyright law.)

You can see Sam’s about-face for yourself in this brief clip below, from a forthcoming film called Making God, whose makers interviewed me recently.

It’s worth two minutes of your time:

I think the question of whether Sam can be trusted now has a clear answer. Two new books, by the journalists Keach Hagey (of The Wall Street Journal) and Karen Hao (of The Atlantic), further bear that out, with detailed reporting on why he was briefly fired from OpenAI in November 2023.

The real question is why the US government continues to place so much faith in Altman, given (a) his own track record, and (b) his own 2023 testimony to the Senate that AI could “cause significant harm to the world.”

The cost to humanity of being beguiled by this man may turn out to be enormous.

§

In an excerpt from her new book, Empire of AI, that appeared yesterday in The Atlantic, Karen Hao writes eloquently about how much has changed at Altman’s company, OpenAI, since he was fired and rehired in 2023:

The events of November 2023 [when Altman was fired] illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be. It has turned into a nonprofit in name only, aggressively commercializing products such as ChatGPT and seeking historic valuations. It has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry….

The shift in norms has extended to our government, too; gone are the days when Washington feared the risks of AI enough to seriously consider doing anything about them. All talk of regulation has been replaced by talk of innovation, which is really shorthand for “help the companies as much as possible, no matter what it costs the citizenry.” Gone too is the chance to avoid what Senators from Blumenthal to Graham warned about: a repeat of the mess of social media, in which big tech got its way and society was left paying the consequences.

Gary Marcus, Professor Emeritus at NYU, is author of six books, including Taming Silicon Valley, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber.


p.s. The folks from the documentary excerpted above, Making God, are raising money to support the completion of their film.


