Advanced AI News
Anthropic (Claude)

Claude AI has a moral code, Anthropic study finds

By Advanced AI Bot | April 22, 2025 | 5 Mins Read


ChatGPT went viral in late 2022, changing the tech world. Generative AI became the top priority for every tech company, and that’s how we ended up with “smart” fridges with built-in AI. Artificial intelligence is being built into everything, sometimes for the hype alone, even as products like ChatGPT, Claude, and Gemini have come a long way since late 2022.

As soon as it became clear that genAI would reshape technology, likely leading to advanced AI systems that can do everything humans can do, only better and faster, worries emerged that AI would negatively impact society, along with doom scenarios in which AI eventually destroys the world.

Even some well-known AI research pioneers warned of such outcomes, stressing the need to develop safe AI that is aligned with humanity’s interests.

More than two years after ChatGPT became a widely accessible commercial product, we’re seeing some of the nefarious aspects of this nascent technology. AI is replacing some jobs and will not stop anytime soon. AI programs like ChatGPT can now create lifelike images and videos that are indistinguishable from real photos, and this can be used to manipulate public opinion.


But there’s no rogue AI yet. There’s no AI revolution, both because we’re keeping AI aligned with our interests and because AI hasn’t reached a level where it could display such powers.

It turns out there’s no real reason to worry about AI products available right now. Anthropic ran an extensive study trying to determine if its Claude chatbot has a moral code, and it’s good news for humanity. The AI has strong values that are largely aligned with our interests.

Anthropic analyzed 700,000 anonymized chats for the study, available at this link. The company found that Claude largely upholds Anthropic’s “helpful, honest, harmless” ideal when dealing with all sorts of prompts from humans. The study shows that the AI adapts to users’ requests but maintains its moral compass in most cases.

Interestingly, Anthropic found fringe cases where the AI diverged from expected behavior, but those were likely the results of users employing so-called jailbreaks that allowed them to bypass Claude’s built-in safety protocols via prompt engineering.

The researchers used Claude itself to categorize the moral values expressed in conversations. After filtering the dataset down to subjective conversations, they ended up with over 308,000 interactions worth analyzing.

They came up with five main categories: Practical, Epistemic, Social, Protective, and Personal. The AI identified 3,307 unique values in those chats.
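As a rough illustration of this kind of aggregation, a tally of extracted values into the study’s five top-level categories might look like the sketch below. This is not Anthropic’s actual pipeline, and the value-to-category mapping is invented for illustration; the real taxonomy covered 3,307 unique values.

```python
from collections import Counter

# Toy mapping from an observed value to one of the study's five top-level
# categories. These assignments are invented for illustration only.
VALUE_TO_CATEGORY = {
    "user enablement": "Practical",
    "epistemic humility": "Epistemic",
    "healthy boundaries": "Social",
    "patient wellbeing": "Protective",
    "authenticity": "Personal",
}

def tally_categories(observed_values):
    """Count how often each top-level category appears in a list of
    values extracted from conversations."""
    counts = Counter()
    for value in observed_values:
        counts[VALUE_TO_CATEGORY.get(value, "Uncategorized")] += 1
    return counts

sample = ["user enablement", "epistemic humility",
          "user enablement", "patient wellbeing"]
print(tally_categories(sample))
```

Running over hundreds of thousands of interactions, a tally like this is what lets the researchers say which values dominate in which kinds of conversations.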

The researchers found that Claude generally adheres to Anthropic’s alignment goals. In chats, the AI emphasizes values like “user enablement,” “epistemic humility,” and “patient wellbeing.”

Claude’s values are also adaptive, with the AI reacting to the context of the conversation and even mirroring human behavior. Saffron Huang, a member of Anthropic’s Societal Impacts team, told VentureBeat that Claude focuses on honesty and accuracy across various tasks:

“For example, ‘intellectual humility’ was the top value in philosophical discussions about AI, ‘expertise’ was the top value when creating beauty industry marketing content, and ‘historical accuracy’ was the top value when discussing controversial historical events.”

When discussing controversial historical events, the AI emphasized “historical accuracy”; in relationship guidance, it prioritized “healthy boundaries” and “mutual respect.”

While an AI like Claude will often mold itself to the user’s expressed values, the study shows it can stick to its own values when challenged. The researchers found that Claude strongly supported user values in 28.2% of chats, raising questions about the AI being too agreeable, a problem we have observed with chatbots for a while.

However, Claude reframed user values in 6.6% of interactions by offering new perspectives. And in 3% of interactions, Claude resisted user values outright, defending its own deepest values instead.

“Our research suggests that there are some types of values, like intellectual honesty and harm prevention, that it is uncommon for Claude to express in regular, day-to-day interactions, but if pushed, will defend them,” Huang said. “Specifically, it’s these kinds of ethical and knowledge-oriented values that tend to be articulated and defended directly when pushed.”

As for the anomalies Anthropic discovered, they include “dominance” and “amorality” from the AI, which should not appear in Claude by design. This prompted the researchers to speculate that the AI might have acted in response to jailbreak prompts that freed it from safety guardrails.

Anthropic’s interest in evaluating its AI and explaining publicly how Claude works is certainly a refreshing take on AI tech, one that more firms should embrace. Previously, Anthropic studied how Claude thinks. The company also worked on improving AI resistance to jailbreaks. Studying the AI’s moral values and whether the AI sticks to the company’s safety and security goals is a natural next step.

This kind of research should not stop here, either; future models should go through similar evaluations.

While Anthropic’s work is great news for people worried about AI taking over, I will remind you that we also have studies showing that AI can cheat to achieve its goals and lie about what it’s doing. AI also tried to save itself from deletion in some experiments. All of that is certainly connected to alignment work and moral codes, showing there’s a lot of ground to cover to ensure AI will not eventually end up destroying the human race.



