Advanced AI News
Advanced AI News

Claude AI has a moral code, Anthropic study finds

By Advanced AI Editor | April 23, 2025 | 5 Mins Read


ChatGPT went viral in late 2022, changing the tech world. Generative AI became the top priority for every tech company, which is how we ended up with “smart” fridges with built-in AI. Artificial intelligence is being built into everything, sometimes for the hype alone, even as products like ChatGPT, Claude, and Gemini have come a long way since late 2022.

As soon as it became clear that generative AI would reshape technology, likely leading to advanced systems that can do everything humans can do, only better and faster, worries emerged that AI would harm society, along with doom scenarios in which AI eventually destroys the world.

Even some well-known AI research pioneers warned of such outcomes, stressing the need to develop safe AI that is aligned with humanity’s interests.

More than two years after ChatGPT became a widely accessible commercial product, we’re seeing some of the nefarious aspects of this nascent technology. AI is replacing some jobs and will not stop anytime soon. AI programs like ChatGPT can now be used to create lifelike images and videos that are indistinguishable from real photos, and this can be used to manipulate public opinion.


But there’s no rogue AI yet. There’s no AI revolution, partly because we’re keeping AI aligned with our interests, and partly because AI simply hasn’t reached the level where it could display such powers.

It turns out there’s no real reason to worry about AI products available right now. Anthropic ran an extensive study trying to determine if its Claude chatbot has a moral code, and it’s good news for humanity. The AI has strong values that are largely aligned with our interests.

Anthropic analyzed 700,000 anonymized chats for the study. The company found that Claude largely upholds Anthropic’s “helpful, honest, harmless” ideals when dealing with all sorts of prompts from humans. The study shows that the AI adapts to users’ requests but maintains its moral compass in most cases.

Interestingly, Anthropic found fringe cases where the AI diverged from expected behavior, but those were likely the results of users employing so-called jailbreaks that allowed them to bypass Claude’s built-in safety protocols via prompt engineering.

The researchers used Claude itself to categorize the moral values expressed in conversations. After filtering for chats with subjective content, they ended up with over 308,000 interactions worth analyzing.

They came up with five main categories: Practical, Epistemic, Social, Protective, and Personal. The AI identified 3,307 unique values in those chats.
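The aggregation step described above can be sketched in plain Python. This is an illustrative reconstruction, not Anthropic’s actual pipeline: the value labels, the taxonomy mapping, and the sample data below are all invented for the example (the real study identified 3,307 unique values across hundreds of thousands of chats).

```python
from collections import Counter

# Hypothetical mapping from fine-grained value labels to the five
# top-level categories named in the study. These labels are invented
# for illustration only.
TAXONOMY = {
    "helpfulness": "Practical",
    "efficiency": "Practical",
    "intellectual humility": "Epistemic",
    "historical accuracy": "Epistemic",
    "mutual respect": "Social",
    "healthy boundaries": "Social",
    "harm prevention": "Protective",
    "patient wellbeing": "Protective",
    "authenticity": "Personal",
}

def summarize_values(conversations):
    """Tally top-level categories and collect unique value labels.

    `conversations` is a list of lists: the value labels a classifier
    extracted from each conversation.
    """
    category_counts = Counter()
    unique_values = set()
    for values in conversations:
        for value in values:
            unique_values.add(value)
            category_counts[TAXONOMY.get(value, "Uncategorized")] += 1
    return category_counts, unique_values

# Toy sample standing in for classifier output on three chats.
sample = [
    ["historical accuracy", "intellectual humility"],
    ["healthy boundaries", "mutual respect"],
    ["helpfulness", "harm prevention"],
]
counts, uniques = summarize_values(sample)
print(counts.most_common())
print(len(uniques))  # 6 distinct values in this toy sample
```

At scale, the interesting part is the classification itself (done by Claude in the study); once each chat is reduced to a list of value labels, the rollup into the five categories is simple counting like this.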

The researchers found that Claude generally adheres to Anthropic’s alignment goals. In chats, the AI emphasizes values like “user enablement,” “epistemic humility,” and “patient wellbeing.”

Claude’s values are also adaptive, with the AI reacting to the context of the conversation and even mirroring human behavior. Saffron Huang, a member of Anthropic’s Societal Impacts team, told VentureBeat that Claude focuses on honesty and accuracy across various tasks:

“For example, ‘intellectual humility’ was the top value in philosophical discussions about AI, ‘expertise’ was the top value when creating beauty industry marketing content, and ‘historical accuracy’ was the top value when discussing controversial historical events.”

When discussing historical events, the AI focused on “historical accuracy.” In relationship guidance, Claude prioritized “healthy boundaries” and “mutual respect.”

While an AI like Claude will generally mold itself to the user’s expressed values, the study shows it can stick to its own values when challenged. The researchers found that Claude strongly supported user values in 28.2% of chats, raising questions about AI being too agreeable, a problem with chatbots that has been observed for a while.

However, Claude reframed user values in 6.6% of interactions by offering new perspectives. And in 3% of interactions, Claude resisted user values outright, asserting its own core values instead.

“Our research suggests that there are some types of values, like intellectual honesty and harm prevention, that it is uncommon for Claude to express in regular, day-to-day interactions, but if pushed, will defend them,” Huang said. “Specifically, it’s these kinds of ethical and knowledge-oriented values that tend to be articulated and defended directly when pushed.”

As for the anomalies Anthropic discovered, they include “dominance” and “amorality” from the AI, which should not appear in Claude by design. This prompted the researchers to speculate that the AI might have acted in response to jailbreak prompts that freed it from safety guardrails.

Anthropic’s interest in evaluating its AI and explaining publicly how Claude works is certainly a refreshing take on AI tech, one that more firms should embrace. Previously, Anthropic studied how Claude thinks. The company also worked on improving AI resistance to jailbreaks. Studying the AI’s moral values and whether the AI sticks to the company’s safety and security goals is a natural next step.

This kind of research should not stop here, either; future models should go through similar evaluations.

While Anthropic’s work is great news for people worried about AI taking over, I will remind you that we also have studies showing AI can cheat to achieve its goals and lie about what it’s doing, and that AI has even tried to save itself from deletion in some experiments. All of that is connected to alignment work and moral codes, and it shows there’s a lot of ground to cover to ensure AI will not eventually end up destroying the human race.


