
Anthropic is launching a new program to study AI ‘model welfare’

By Advanced AI Bot | April 24, 2025 | 3 min read

Could future AIs be “conscious,” and experience the world similarly to the way humans do? There’s no strong evidence that they will, but Anthropic isn’t ruling out the possibility.

On Thursday, the AI lab announced that it has started a research program to investigate — and prepare to navigate — what it’s calling “model welfare.” As part of the effort, Anthropic says it’ll explore things like how to determine whether the “welfare” of an AI model deserves moral consideration, the potential importance of model “signs of distress,” and possible “low-cost” interventions.

There’s major disagreement within the AI community on what human characteristics models “exhibit,” if any, and how we should “treat” them.

Many academics believe that AI today can’t approximate consciousness or the human experience, and won’t necessarily be able to in the future. AI as we know it is a statistical prediction engine. It doesn’t really “think” or “feel” as those concepts have traditionally been understood. Trained on countless examples of text, images, and so on, AI learns patterns and sometimes useful ways of extrapolating to solve tasks.
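The “statistical prediction engine” framing can be made concrete with a toy example. The sketch below is purely illustrative and is not drawn from Anthropic or any production system: it trains a tiny bigram model on a few words of text, simply counting which words follow which and turning those counts into probabilities. This is the same kind of pattern learning the paragraph describes, at vastly smaller scale.

```python
# Illustrative sketch only: a toy bigram "language model" showing what a
# statistical prediction engine does -- count patterns in training text and
# turn them into probabilities for the next word.
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model learns patterns".split()

# Count how often each word follows each other word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next word | current word) estimated from the corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# e.g. {'model': 0.67, 'next': 0.33} -- a probability distribution, not a belief or a feeling
```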

As Mike Cook, a research fellow at King’s College London specializing in AI, recently told TechCrunch in an interview, a model can’t “oppose” a change in its “values” because models don’t have values. To suggest otherwise is us projecting onto the system.

“Anyone anthropomorphizing AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI,” Cook said. “Is an AI system optimizing for its goals, or is it ‘acquiring its own values’? It’s a matter of how you describe it, and how flowery the language you want to use regarding it is.”

Another researcher, Stephen Casper, a doctoral student at MIT, told TechCrunch that he thinks AI amounts to an “imitator” that “[does] all sorts of confabulation[s]” and says “all sorts of frivolous things.”

Yet other scientists insist that AI does have values and other human-like components of moral decision-making. A study out of the Center for AI Safety, an AI research organization, implies that AI has value systems that lead it to prioritize its own well-being over humans in certain scenarios.

Anthropic has been laying the groundwork for its model welfare initiative for some time. Last year, the company hired its first dedicated “AI welfare” researcher, Kyle Fish, to develop guidelines for how Anthropic and other companies should approach the issue. (Fish, who’s leading the new model welfare research program, told The New York Times that he thinks there’s a 15% chance Claude or another AI is conscious today.)

In a blog post Thursday, Anthropic acknowledged that there’s no scientific consensus on whether current or future AI systems could be conscious or have experiences that warrant ethical consideration.

“In light of this, we’re approaching the topic with humility and with as few assumptions as possible,” the company said. “We recognize that we’ll need to regularly revise our ideas as the field develops.”


