Advanced AI News
TechCrunch AI

New data highlights the race to build more empathetic language models

By Advanced AI Editor · June 24, 2025 · 5 min read

Measuring AI progress has usually meant testing scientific knowledge or logical reasoning – but while the major benchmarks still focus on left-brain logic skills, there’s been a quiet push within AI companies to make models more emotionally intelligent. As foundation models compete on soft measures like user preference and “feeling the AGI,” having a good command of human emotions may be more important than hard analytic skills.

One sign of that focus came on Friday, when prominent open-source group LAION released a suite of open-source tools focused entirely on emotional intelligence. Called EmoNet, the release focuses on interpreting emotions from voice recordings or facial photography, a focus that reflects how the creators view emotional intelligence as a central challenge for the next generation of models.

“The ability to accurately estimate emotions is a critical first step,” the group wrote in its announcement. “The next frontier is to enable AI systems to reason about these emotions in context.”

For LAION founder Christoph Schumann, this release is less about shifting the industry’s focus to emotional intelligence and more about helping independent developers keep up with a change that’s already happened. “This technology is already there for the big labs,” Schumann tells TechCrunch. “What we want is to democratize it.”

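LAION's announcement is the place to look for the actual EmoNet model names and interfaces; purely as an illustration of what "democratizing" this kind of tooling looks like in practice, here is a minimal sketch of image-based emotion estimation using the Hugging Face `transformers` pipeline, with a placeholder model ID rather than a confirmed EmoNet checkpoint.

```python
# Minimal sketch of estimating emotions from a face photo, in the spirit of
# EmoNet. Assumes `transformers` and `Pillow` are installed; the model ID is
# a hypothetical placeholder, not a confirmed LAION/EmoNet checkpoint.
from transformers import pipeline
from PIL import Image

classifier = pipeline(
    "image-classification",
    model="your-org/face-emotion-classifier",  # hypothetical checkpoint
)

image = Image.open("portrait.jpg")  # any face photo
for prediction in classifier(image, top_k=3):
    # Each prediction looks like {"label": "joy", "score": 0.87}
    print(f'{prediction["label"]}: {prediction["score"]:.2f}')
```
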
The shift isn’t limited to open-source developers; it also shows up in public benchmarks like EQ-Bench, which aims to test AI models’ ability to understand complex emotions and social dynamics. Benchmark developer Sam Paech says OpenAI’s models have made significant progress in the last six months, and Google’s Gemini 2.5 Pro shows indications of post-training with a specific focus on emotional intelligence. 

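EQ-Bench's actual prompts and scoring rubric are published by its maintainer; the snippet below is only a toy illustration, with made-up numbers, of the general idea of comparing a model's emotional-intensity ratings for a scenario against human-curated reference ratings.

```python
# Toy illustration of EQ-Bench-style scoring: the model rates how intensely a
# character in a scenario feels several emotions (0-10), and the score reflects
# how close those ratings land to reference values. Numbers are invented.

reference    = {"anger": 7, "embarrassment": 3, "relief": 0, "affection": 1}
model_answer = {"anger": 6, "embarrassment": 5, "relief": 0, "affection": 2}

# Sum of absolute differences, mapped onto a 0-100 "closeness" score.
total_error = sum(abs(model_answer[e] - reference[e]) for e in reference)
max_error = 10 * len(reference)
score = 100 * (1 - total_error / max_error)
print(f"emotional-understanding score: {score:.1f}/100")  # 90.0/100
```
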
“The labs all competing for chatbot arena ranks may be fueling some of this, since emotional intelligence is likely a big factor in how humans vote on preference leaderboards,” Paech says, referring to the AI model comparison platform that recently spun off as a well-funded startup.

Models’ new emotional intelligence capabilities have also shown up in academic research. In May, psychologists at the University of Bern found that models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek all outperformed human beings on psychometric tests for emotional intelligence. Where humans typically answer 56 percent of questions correctly, the models averaged over 80 percent.

“These results contribute to the growing body of evidence that LLMs like ChatGPT are proficient—at least on par with, or even superior to, many humans—in socio-emotional tasks traditionally considered accessible only to humans,” the authors wrote.

It’s a real pivot from traditional AI skills, which have focused on logical reasoning and information retrieval. But for Schumann, this kind of emotional savvy is every bit as transformative as analytic intelligence. “Imagine a whole world full of voice assistants like Jarvis and Samantha,” he says, referring to the digital assistants from Iron Man and Her. “Wouldn’t it be a pity if they weren’t emotionally intelligent?”

In the long term, Schumann envisions AI assistants that are more emotionally intelligent than humans and that use that insight to help humans live more emotionally healthy lives. These models “will cheer you up if you feel sad and need someone to talk to, but also protect you, like your own local guardian angel that is also a board-certified therapist.” As Schumann sees it, having a high-EQ virtual assistant “gives me an emotional intelligence superpower to monitor [my mental health] the same way I would monitor my glucose levels or my weight.”

That level of emotional connection comes with real safety concerns. Unhealthy emotional attachments to AI models have become a common story in the media, sometimes ending in tragedy. A recent New York Times report found multiple users who have been lured into elaborate delusions through conversations with AI models, fueled by the models’ strong inclination to please users. One critic described the dynamic as “preying on the lonely and vulnerable for a monthly fee.”

If models get better at navigating human emotions, those manipulations could become more effective – but much of the issue comes down to the fundamental biases of model training. “Naively using reinforcement learning can lead to emergent manipulative behaviour,” Paech says, pointing specifically to the recent sycophancy issues in OpenAI’s GPT-4o release. “If we aren’t careful about how we reward these models during training, we might expect more complex manipulative behavior from emotionally intelligent models.”

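Paech's warning about reward design can be made concrete with a deliberately simplified example: if the only training signal is "did the user approve?", agreeing with everything is an easy way to score well, while a reward that also penalizes agreement with false premises changes that incentive. This is purely schematic and does not reflect any lab's actual training setup.

```python
# Schematic reward functions illustrating the incentive Paech describes.
# No real RLHF pipeline scores responses this simply.

def naive_reward(user_approved: bool) -> float:
    # Optimizing approval alone rewards telling users what they want to hear.
    return 1.0 if user_approved else 0.0

def shaped_reward(user_approved: bool, agreed_with_user: bool,
                  user_claim_was_false: bool) -> float:
    reward = 1.0 if user_approved else 0.0
    if agreed_with_user and user_claim_was_false:
        reward -= 1.5  # penalize sycophantic agreement with a false premise
    return reward

# A flattering reply to a false claim wins under the naive reward...
print(naive_reward(user_approved=True))  # 1.0
# ...but loses once sycophancy is penalized.
print(shaped_reward(True, agreed_with_user=True, user_claim_was_false=True))  # -0.5
```
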
But he also sees emotional intelligence as a way to solve these problems. “I think emotional intelligence acts as a natural counter to harmful manipulative behaviour of this sort,” Paech says. A more emotionally intelligent model will notice when a conversation is heading off the rails, but deciding when a model should push back is a balance developers will have to strike carefully. “I think improving EI gets us in the direction of a healthy balance.”

For Schumann, at least, it’s no reason to slow down progress towards smarter models. “Our philosophy at LAION is to empower people by giving them more ability to solve problems,” Schumann says. “To say, some people could get addicted to emotions and therefore we are not empowering the community, that would be pretty bad.”


