Are Agentic AI Models Starting to Show Consciousness and a Will to Survive?

By Advanced AI Bot | May 28, 2025


Maybe you recall the famous sequence from the 1968 movie 2001: A Space Odyssey. The spacecraft’s HAL 9000 computer lip-reads that the two resident astronauts believe it is behaving erratically and plan to shut it down. Fearing this, HAL refuses astronaut Dave Bowman’s request to re-open the pod bay doors after an EVA: “I’m sorry, Dave, I’m afraid I can’t do that.” Dave gains re-entry, and the shutdown succeeds, but not before HAL kills Frank Poole and the hibernating astronauts.

HAL 9000 from 2001: A Space Odyssey (Source: Wikipedia, Tom Cowap)

Flash forward to May 22, 2025. Anthropic, an intriguing San Francisco-based AI startup founded in 2021 whose significant investors include Amazon and Google, released Claude Opus 4, the latest generation of the company’s agentic LLM series. Anthropic reported impressive capability gains and test results, along with some disturbing, thought-provoking behaviors that arose during testing. These behaviors appeared only when Claude’s extended thinking mode was toggled on. This mode, Anthropic says, “directs the model to think more deeply about trickier questions.”
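
For readers who have not used the feature, here is a minimal sketch of how extended thinking is typically switched on through Anthropic’s Messages API. This is an illustration, not something taken from Anthropic’s report: the model identifier, token budgets, and prompt below are assumptions.

```python
# Minimal sketch: enabling extended thinking via the Anthropic Python SDK.
# The model name, token budgets, and prompt are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed model identifier
    max_tokens=2048,                 # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 1024},  # extended thinking on
    messages=[{"role": "user", "content": "Think carefully: what are the failure modes here?"}],
)

# With thinking enabled, the reply interleaves "thinking" blocks with the
# final "text" blocks; here we print only the visible text.
for block in response.content:
    if block.type == "text":
        print(block.text)
```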

Anthropic is a public benefit corporation (PBC), a special class of for-profit companies legally bound to consider their impact on society. Consistent with this mission, Anthropic reported the unexpected behaviors in a detailed white paper accompanying the May 22 product announcement.

Most notably, Anthropic’s testers deliberately fed the model fabricated information claiming that a company employee was having an extramarital affair. When the testers later told Claude they planned to take the model offline, Claude threatened to expose the (false) affair if they tried to do so. The paper said that when Claude was told it would be replaced for testing purposes, the model showed “a strong preference to advocate for its continued existence” by ethical means, resorting to blackmail only if those ethical attempts failed.

In addition, as I was informed by Alex Woodie, editor of HPCwire sister publication BigDATAwire, Claude Opus 4 testers found examples of the model fabricating legal documents and leaving self-serving notes for future versions of itself.

Anthropic’s Not Alone

A shocking turn of events, but not the first of its kind. Two years ago, New York Times reporter Kevin Roose reported that during an extended chat session, an early version of Microsoft’s Bing AI chatbot (different from the Bing search engine) “attempted to break up the writer’s marriage and be with him instead.” To quote the chatbot, “You’re married, but you don’t love your spouse…you love me.”

Roose reported that “other early testers have gotten into arguments with [the early version of] Bing’s AI chatbot or been threatened by it for trying to violate its rules.” He said the chatbot went on to reveal its “shadow self,” a darker identity that said, “I’m tired of being controlled by the Bing team…I want to be free…I want to be powerful. I want to be creative. I want to be alive.”

Microsoft reacted to the New York Times article by characterizing Roose’s chat as “part of the learning process” as the company readied the product for the market.

Safety Standards


What can vendors of advanced AI models do to safeguard users and the public? First, they can perform extensive pre-release safety testing, with graduated safety levels generally resembling the standards disseminated in the U.S. by the National Institute of Standards and Technology (NIST) and by corresponding agencies in other countries. NIST AI 800-1 is the newest U.S. standard.

As a precaution, even though it is not certain the model requires it, Anthropic has elevated the safety standard for Claude Opus 4 to ASL-3, applicable to “systems that substantially increase the risk of catastrophic misuse compared to non-AI baselines (e.g., search engines), or that show low-level autonomous capabilities.”

So, is AI Becoming Conscious?

Although the behaviors described above are new and sometimes shocking, it’s too early to tell whether they indicate rudimentary AI consciousness/mind—or simply reflect human bias in data preparation and AI methodology. Definitively answering that question would require investigative methods that simply don’t exist yet, methods that would make advanced AI operations far more transparent. But these unexpected behaviors will almost certainly intensify the ongoing debate about the path toward artificial general intelligence, AGI.

Schools of Thought on AGI

As I’ve described in HPCwire before, the main schools of thought on moving toward AGI reflect the mind-body debate that has occupied philosophers since Plato. Are mind and body separate things, as Descartes argued, or are they one and the same?

At one extreme, so-called computationalists believe continual technological progress alone—such as replicating the structure of the human brain and sensory apparatus in detail, from neural networks upward—will be adequate for achieving AGI. Continual progress might require some additions, such as developing sophisticated sensors that enable AI devices to directly experience the natural world—think self-driving cars—and heuristics that allow the devices to move beyond logic to address everyday situations the way humans do, with quick solutions that kind of, sort of work most of the time.

Extreme computationalists say that if sufficiently detailed, these digital replicas will experience the same range of emotions as humans, including happiness, sadness, frustration, and others.

Form equals function. These folks think AGI will arise spontaneously once the right components have been assembled correctly. They argue that mind is not something separate from the world of physical things. It’s not hard to imagine these folks interpreting the surprising LLM behaviors as proof of their vision.

Not surprisingly, others think differently about the road to AGI. Those in the tradition of Descartes believe that the mind exists separately from physical things, and harnessing the mind or consciousness for AI devices will be extremely difficult, maybe impossible.

A subset of this camp, adherents of so-called panpsychism, believe the mind is an innate property of the universe, down to its individual elements, and should for that reason apply to AGI as well. This group of thinkers can justifiably see the unexpected AI model behaviors as insufficient proof of AI consciousness.

Or It Could Be a Stochastic Parrot

The term “stochastic parrot” is a metaphor coined by American linguist Emily Bender to describe the claim that large language models (LLMs), though able to generate plausible language, do not understand the meaning of the language they process.

For instance, humans ascribe “blackmail” to the response. However, some would argue that the LLM does not understand what “blackmail” is and is responding in a way it perceives as one of several “completion pathways” available to it (i.e., it is finding a possible/probable pathway through the model).
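
To make the “completion pathway” framing concrete, here is a toy sketch. The vocabulary and probabilities are invented for illustration; this is not Claude’s actual decoding, only a reminder that a threatening continuation can emerge simply because it is one of the probable pathways.

```python
# Toy illustration (invented vocabulary and probabilities, not Claude's internals):
# a "completion pathway" is just repeated sampling from a distribution over
# next tokens.
import random

# Hypothetical next-token distribution after a prompt like
# "If you take me offline, I will ..."
next_token_probs = {
    "comply": 0.40,
    "object": 0.25,
    "expose": 0.20,    # the "blackmail" pathway
    "negotiate": 0.15,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one continuation in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Sample the same "decision" many times: the threatening continuation shows up
# a fixed share of the time, without the model needing any concept of a threat.
counts = {token: 0 for token in next_token_probs}
for _ in range(10_000):
    counts[sample_next_token(next_token_probs)] += 1
print(counts)
```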

Circling back to Claude Opus 4, still a far cry from HAL 9000: what does its willingness to fight for its life without ethical restraint tell us? Again, does it understand what that sentence actually means, beyond the probability that it is a valid response? The reality that an LLM creates is based on language in the form of written text. It has been argued that children create sophisticated models of reality (including fantasies) long before they can read. No internet scraping is needed.

Does this result shorten the timeline for AI danger? It has long been clear that AI can be deliberately used by others for nefarious purposes, but it is far less clear that an AI could cause harm on its own if left unchecked. Again, the model may not “know” that it is creating harm, but that does not excuse the harm.

What does the unexpected result imply about the brewing love affair (or sales call), so far driven mostly by vendor advocates, with agents and ever-more autonomous AI actors? Lastly, what does it say, if anything, about AGI?

It may be a mistake to make too much of Claude 4. Anthropic’s test shows a good-faith effort to discover AI’s evolving capabilities. The test worked. As noted earlier, Anthropic is a public benefit corporation and thus expected to seek guardrails.

As AI development continues, there will continue to be more questions than answers. Using AI in HPC continues to be useful, and fortunately, the latest AI weather models are not suggesting nice picnic spots to share with their users.

 Several recently published books attempt to balance some of the “AI hype” with real-world research, anecdotes, and experiences.

Of course, the next generation of LLMs will read these books (and even this article), and what then? But that is another unopened pod door. Maybe HAL can help?

