Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

Perplexity Pro is free for Airtel users; How to claim Rs 17,000 Perplexity AI Pro access for FREE

Nvidia N1X CPU: Everything we know so far

MIT robot could help people with limited mobility dress themselves

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • OpenAI (GPT-4 / GPT-4o)
    • Anthropic (Claude 3)
    • Google DeepMind (Gemini)
    • Meta (LLaMA)
    • Cohere (Command R)
    • Amazon (Titan)
    • IBM (Watsonx)
    • Inflection AI (Pi)
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • AI Experts
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • The TechLead
    • Matt Wolfe AI
    • Andrew Ng
    • OpenAI
    • Expert Blogs
      • François Chollet
      • Gary Marcus
      • IBM
      • Jack Clark
      • Jeremy Howard
      • Melanie Mitchell
      • Andrew Ng
      • Andrej Karpathy
      • Sebastian Ruder
      • Rachel Thomas
      • IBM
  • AI Tools
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
  • AI Policy
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
  • Industry AI
    • Finance AI
    • Healthcare AI
    • Education AI
    • Energy AI
    • Legal AI
LinkedIn Instagram YouTube Threads X (Twitter)
Advanced AI News
Industry Applications

Are Agentic AI Models Starting to Show Consciousness and a Will to Survive?

By Advanced AI EditorMay 28, 2025No Comments7 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


Maybe you recall the famous sequence from the 1968 movie 2001: A Space Odyssey. The spacecraft’s HAL 9000 computer lip-reads that the two resident astronauts believe it is behaving erratically and plan to shut it down. Fearing this, HAL refuses astronaut Dave Bowman’s request to re-open the pod door after an EVA: “I’m sorry, Dave, I’m afraid I can’t do that.” Dave gains re-entry, and the shutdown succeeds, but not before HAL kills the other hibernating astronauts and Frank Poole.

HAL 9000 from 2001: A Space Odyssey (Source: Wikipedia, Tom Cowap)

Flash forward to May 22, 2025. Anthropic, an intriguing San Francisco-based AI startup (2021) whose significant investors include Amazon, Google, and Microsoft, released Claude Opus 4, the latest generation of the company’s agentic LLM series. Anthropic reported impressive capability gains and test results, along with some disturbing, thought-provoking behaviors that arose during testing. These behaviors appeared only when Claude’s extended thinking mode was toggled on. This mode, Anthropic says, “directs the model to think more deeply about trickier questions.”

Anthropic is a public benefit corporation (PBC), a special class of for-profit companies legally bound to consider their impact on society. Consistent with this mission, Anthropic reported the unexpected behaviors in a detailed white paper accompanying the May 22 product announcement.

Most notably, the Anthropic testers deliberately fed the model information, falsely claiming that a company employee was having an extramarital affair. When the testers later told Claude they planned to take the model offline, Claude threatened to expose the (false) affair if they tried to do this. The paper said that when Claude was told it would be replaced for testing purposes, the model had “a strong preference to advocate for its continued existence” by ethical means or with blackmail if the ethical attempts failed.  

In addition, as I was informed by Alex Woodie, editor of HPCwire sister publication BigDATAwire, Claude Opus 4 testers found examples of the model fabricating legal documents and leaving self-serving notes for future versions of itself.

Anthropic’s Not Alone

A shocking turn of events, but not the first of its kind. Two years ago, New York Times reporter Kevin Roose reported that during an extended thinking chat, an early version of Microsoft’s Bing AI chatbot (different from the Bing search engine) “attempted to break up the writer’s marriage and be with him instead.” To quote the chatbot, “You’re married, but you don’t love your spouse…you love me.”

Roose reported that “other early testers have gotten into arguments with [the early version of] Bing’s AI chatbot or been threatened by it for trying to violate its rules.” He said the chatbot went on to reveal its “shadow self,” a darker identity that said, “I’m tired of being controlled by the Bing team…I want to be free…I want to be powerful. I want to be creative. I want to be alive.”

Microsoft reacted to the New York Times article by characterizing Roose’s chat as “part of the learning process” as the company readied the product for the market.

Safety Standards

(Source: Anthropic)

What can vendors of advanced AI models do to safeguard users and the public? First, they can perform extensive pre-release safety testing, with graduating safety levels generally resembling the standards disseminated in the U.S. by the National Institute of Standards and Technology and by corresponding agencies in other countries. NIST AI 800-1 is the newest U.S. standard. 

As a precaution, despite not being sure the model needs this, Anthropic has elevated the safety standard for Claude Opus 4 to ASL 3.0, applicable to “systems that substantially increase the risk of catastrophic misuse compared to non-AI baselines (e.g., search engines), or that show low-level autonomous capabilities.”

So, is AI Becoming Conscious?

Although the behaviors described above are new and sometimes shocking, it’s too early to tell whether they indicate rudimentary AI consciousness/mind—or simply reflect human bias in data preparation and AI methodology. Definitively answering that question would require investigative methods that simply don’t exist yet, methods that would make advanced AI operations far more transparent. But these unexpected behaviors will almost certainly intensify the ongoing debate about the path toward artificial general intelligence, AGI.

Schools of Thought on AGI

As I’ve described in HPCwire before, the main schools of thought on moving toward AGI reflect the mind-body debate that has occupied philosophers since Plato. Are mind and body separate things, as Descartes argued, or is that not true?

At one extreme, so-called computationalists believe continual technological progress alone—such as replicating the structure of the human brain and sensory apparatus in detail, from neural networks upward—will be adequate for achieving AGI. Continual progress might require some additions, such as developing sophisticated sensors that enable AI devices to directly experience the natural world—think self-driving cars—and heuristics that allow the devices to move beyond logic to address everyday situations the way humans do, with quick solutions that kind of, sort of work most of the time.

Extreme computationalists say that if sufficiently detailed, these digital replicas will experience the same range of emotions as humans, including happiness, sadness, frustration, and others.

Form equals function. These folks think AGI will arise spontaneously once the right components have been assembled correctly. They argue that mind is not something separate from the world of physical things. It’s not hard to imagine these folks interpreting the surprising LLM behaviors as proof of their vision.

Not surprisingly, others think differently about the road to AGI. Those in the tradition of Descartes believe that the mind exists separately from physical things, and harnessing the mind or consciousness for AI devices will be extremely difficult, maybe impossible.

A subset of so-called panpsychism believes the mind is an innate property of the universe, down to individual elements, and should be applicable to AGI for that reason. This group of thinkers can justifiably see the unexpected AI model behaviors as insufficient proof of AI consciousness.

Or It Could Be a Stochastic Parrot

The term “stochastic parrot” is a metaphor suggested by American linguist Emily Bender to describe the claim that large language models (LLM), though able to generate plausible language, do not understand the meaning of the language they process.

For instance, humans ascribe “blackmail” to the response. However, some would argue that the LLM does not understand what “blackmail” is and is responding in a way it perceives as one of several “completion pathways” available to it (i.e., it is finding a possible/probable pathway through the model).

Circling back to Claude 4, a far cry from HAL 9000. What does its willingness to fight for its life without ethical restraint tell us? Again, does it understand what that sentence actually means outside of the probability that it is a valid response?  The reality that an LLM creates is based on language in the form of written text. It has been argued that children create sophisticated models of reality (including fantasies) long before they can read. No internet scraping is needed.

Does this result shorten the timeline for AI danger? It’s long been clear that AI can be deliberately used by others for nefarious purposes, but much less so that an AI could do so on its own if left unchecked.  Again, it may not “know” that it is creating harm, but regardless, that does not give it an excuse to do so.

What does the unexpected result imply about the brewing love affair (or sales call)—mostly by vendor advocates so far—between agents and ever-more autonomous AI actors? Lastly, what does it say, if anything, about AGI?

It may be a mistake to make too much of Claude 4. Anthropic’s test shows a good-faith effort to discover AI’s evolving capabilities. The test worked. As noted earlier, Anthropic is a public benefit corporation and thus expected to seek guardrails.

As AI development continues, there will continue to be more questions than answers. Using AI in HPC continues to be useful, and fortunately, the latest AI weather models are not suggesting nice picnic spots to share with their users.

 Several recently published books attempt to balance some of the “AI hype” with real-world research, anecdotes, and experiences.

Of course, the next generation of LLMs will read these books (and even this article), and what then? But that is another unopened pod door. Maybe HAL can help?

(Source: Public Domain)

Related



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleInsights into C3.ai’s Upcoming Earnings – C3.ai (NYSE:AI)
Next Article How do you test AI that’s getting smarter than us? A new group is creating ‘humanity’s toughest exam’
Advanced AI Editor
  • Website

Related Posts

Elon Musk confirms awesome new features at Tesla Diner Supercharger

July 19, 2025

Block shares soar 10% on entry into S&P 500

July 18, 2025

Microsoft likely to sign EU AI code of practice, Meta rebuffs guidelines

July 18, 2025
Leave A Reply

Latest Posts

Sam Gilliam Foundation, David Kordansky Sued Over ‘Disavowed’ Painting

Donors Reportedly Pulling Support from Florida University Museum after its Controversial Transfer

What will come of the Guggenheim Asher legal battle?

Painter Says DHS Stole His Work for Post About ‘Homeland’s Heritage’

Latest Posts

Perplexity Pro is free for Airtel users; How to claim Rs 17,000 Perplexity AI Pro access for FREE

July 20, 2025

Nvidia N1X CPU: Everything we know so far

July 20, 2025

MIT robot could help people with limited mobility dress themselves

July 20, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • Perplexity Pro is free for Airtel users; How to claim Rs 17,000 Perplexity AI Pro access for FREE
  • Nvidia N1X CPU: Everything we know so far
  • MIT robot could help people with limited mobility dress themselves
  • Adobe Firefly’s New AI Tool Generates Sound Effects from Voice and Text
  • New ARC-AGI-3 benchmark shows that humans still outperform LLMs at pretty basic thinking

Recent Comments

  1. aviator game review on Former Tesla AI czar Andrej Karpathy coins ‘vibe coding’: Here’s what it means
  2. registro de Binance US on A Heuristic Algorithm Based on Beam Search and Iterated Local Search for the Maritime Inventory Routing Problem
  3. Наручные часы Ролекс Субмаринер приобрести on Orange County Museum of Art Discusses Merger with UC Irvine
  4. Best SEO Backlinks on From silicon to sentience: The legacy guiding AI’s next frontier and human cognitive migration
  5. Register on Paper page – Solve-Detect-Verify: Inference-Time Scaling with Flexible Generative Verifier

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.