Advanced AI News
The hunger strike to end AI

By Advanced AI Editor | September 17, 2025 | 8 min read
On Guido Reichstadter’s 17th day without eating, he said he was feeling alright — moving a little slower, but alright.

Each day since September 2nd, Reichstadter has appeared outside the San Francisco headquarters of AI startup Anthropic, standing from around 11AM to 5PM. His chalkboard sign states “Hunger Strike: Day 15,” though he actually stopped eating on August 31st. The sign calls for Anthropic to “stop the race to artificial general intelligence” or AGI: the concept of an AI system that equals or surpasses human cognitive abilities.

AGI is a favorite rallying cry of tech CEOs, with leaders at big companies and startups alike racing to achieve the subjective milestone first. To Reichstadter, it’s an existential risk these companies aren’t taking seriously. “Trying to build AGI — human-level, or beyond, systems, superintelligence — this is the goal of all these frontier companies,” he told The Verge. “And I think it’s insane. It’s risky. Incredibly risky. And I think it should stop now.” A hunger strike is the clearest way he sees to get AI leaders’ attention — and right now, he’s not the only one.

Reichstadter referenced a 2023 interview with Anthropic CEO Dario Amodei that he says exemplifies the AI industry’s recklessness. “My chance that something goes quite catastrophically wrong on the scale of human civilization might be somewhere between 10 and 25 percent,” Amodei said. Amodei and others have concluded AGI’s development is inevitable and say their goal is simply to be the most responsible custodians possible — something Reichstadter calls “a myth” and “self-serving.”

In Reichstadter’s view, companies have a responsibility not to develop technology that will harm people on a large scale, and anyone who understands the risk bears some responsibility, too.

“That’s kind of what I’m trying to do, is fulfill my responsibility as just an ordinary citizen who has some respect for the lives and the wellbeing of my fellow citizens, my fellow countrymen,” he said. “I’ve got two kids, too.”

Anthropic did not immediately respond to a request for comment.

Every day, Reichstadter said, he waves to the security guards at Anthropic’s office as he sets up, and he watches Anthropic employees avert their eyes as they walk past him. He said at least one employee has shared similar fears of catastrophe, and he hopes to inspire AI company staffers to “have the courage to act as human beings and not as tools” of their companies; in his view, they bear a deeper responsibility because “they’re developing the most dangerous technology on Earth.”

His fears are shared by countless others in the AI safety world. It’s a splintered community, with myriad disagreements on the specific dangers AI poses over the long term and how best to stop them — even the term “AI safety” is fraught. One thing most of them can agree on, though, is that AI’s current path bodes ill for humanity.

Reichstadter said he first became aware of the potential for “human-level” AI during his college years about 25 years ago, and that back then it seemed far off — but with the release of ChatGPT in 2022, he sat up and took notice. He says he’s especially concerned by the role he believes AI is playing in increasing authoritarianism in the U.S.

“I’m concerned about my society,” he said. “I’m concerned about my family, their future. I’m concerned about what’s happening with AI to affect them. I’m concerned that it is not being used ethically. And I’m also concerned that it poses realistic grounds to believe that there’s catastrophic risks and even existential risks associated with it.”

In recent months, Reichstadter has tried increasingly public methods of getting tech leaders’ attention to an issue he believes is vital. He’s worked in the past with a group called “Stop AI,” which seeks to permanently ban superintelligent AI systems “to prevent human extinction, mass job loss, and many other problems.” In February, he and other members helped chain shut the doors to OpenAI’s offices in San Francisco; a few of them, including Reichstadter, were arrested for the obstruction.

Reichstadter delivered a handwritten letter to Amodei via the Anthropic security desk on September 2nd, and a few days later, he posted it online. The letter requests that Amodei stop trying to develop a technology he can’t control — and do everything in his power to stop the AI race globally — and that if he isn’t willing to do so, to tell him why not. In the letter, Reichstadter wrote, “For the sake of my children and with the urgency and gravity of our situation in my heart I have begun a hunger strike outside the Anthropic offices … while I await your response.”

“I hope that he has the basic decency to answer that request,” Reichstadter said. “I don’t think any of them have been really challenged personally. It’s one thing to anonymously, abstractly, consider that the work you’re doing might end up killing a lot of people. It’s another to have one of your potential future victims face-to-face and explain [why] to them as a human being.”

Soon after Reichstadter started his peaceful protest, two others inspired by him began a similar protest in London, maintaining a presence outside Google DeepMind’s office. And one joined him in India, fasting on livestream.

Michael Trazzi participated in the London hunger strike for seven days before choosing to stop due to two near-fainting episodes and a doctor consultation, but he is still supporting the other participant, Denys Sheremet, who is on day 10. Trazzi and Reichstadter share similar fears about the future of humanity under AI’s continued advancement, though they’re reluctant to define themselves as part of a specific community or group.

Trazzi said he’s been thinking about the risks of AI since 2017. He wrote a letter to DeepMind CEO Demis Hassabis, posted it publicly, and passed it along through an intermediary.

In the letter, Trazzi asked that Hassabis “take a first step today towards coordinating a future halt on the development of superintelligence, by publicly stating that DeepMind would agree to halt the development of frontier AI models if all the other major AI companies in the West and China were to do the same. Once all major companies have agreed to a pause, governments could organise an international agreement to enforce it.”

Trazzi told The Verge, “If it was not for AI being very dangerous, I don’t think I would be … super pro-regulation, but I guess … there are some things in the world that, by default, the incentives are going [in] the wrong direction. I think for AI, we do need regulation.”

Amanda Carl Pratt, Google DeepMind’s director of communications, said in a statement, “AI is a rapidly evolving space and there will be different views on this technology. We believe in the potential of AI to advance science and improve billions of people’s lives. Safety, security and responsible governance are and have always been top priorities as we build a future where people benefit from our technology while being protected from risk.”

In a post on X, Trazzi wrote that the hunger strike has sparked a lot of discussion with tech workers, claiming that one Meta employee asked him, “Why only Google guys? We do cool work too. We’re also in the race.”

He also wrote in the post that one DeepMind employee said AI companies likely wouldn’t release models that could cause catastrophic harms because of the opportunity cost, while another, he said, “admitted he believed extinction from AI was more likely than not, but chose to work for DeepMind because it was still one of the most safety-conscious companies.”

Neither Reichstadter nor Trazzi has yet received a response to his letter to Hassabis or Amodei. (Google also declined to answer a question from The Verge about why Hassabis has not responded to the letter.) They have faith, though, that their actions will result in an acknowledgement, a meeting, or, ideally, a commitment from the CEOs to change their trajectories.

“We are in an uncontrolled, global race to disaster,” Reichstadter said. “If there is a way out, it’s going to rely on people being willing to tell the truth and say, ‘We’re not in control.’ Ask for help.”
