Advanced AI News
MIT News

AI Still Doesn’t Understand the Word ‘No,’ MIT Study Finds

By Advanced AI Bot | May 21, 2025 | 4 Mins Read


In brief

  • AI still struggles to understand negation, posing risks in critical domains like healthcare.
  • An MIT study found that vision-language AI, in particular, can’t reliably grasp negative statements.
  • Experts warn that AI’s failure to process “no” and “not” could lead to real-world mistakes.

AI can diagnose disease, write poetry, and even drive cars—yet it still struggles with a simple word: “no.” That blind spot could have serious consequences in real-world applications, such as AI systems built for healthcare.

According to a new study led by MIT PhD student Kumail Alhamoud, in collaboration with OpenAI and the University of Oxford, failure to understand “no” and “not” can have profound consequences, especially in medical settings.

Negation (for example, “no fracture,” or “not enlarged”) is a critical linguistic function, especially in high-stakes environments like healthcare, where misinterpreting it can result in serious harm. The study shows that current AI models—such as ChatGPT, Gemini, and Llama—often fail to process negative statements correctly, tending instead to default to positive associations.

The core issue isn’t just a lack of data; it’s how AI is trained. Most large language models are built to recognize patterns, not reason logically. This means they may interpret “not good” as still somewhat positive, because they associate “good” with positivity. Experts argue that unless models are taught to reason through logic, rather than just mimic language, they’ll continue to make subtle yet dangerous mistakes.
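This pattern-over-logic failure is easy to reproduce in miniature. The sketch below is a toy bag-of-words scorer with a made-up four-word lexicon (not any model from the study); because each word contributes its score independently, the “good” inside “not good” still drives the result positive:

```python
# Toy bag-of-words sentiment scorer. The lexicon is illustrative;
# real models learn associations rather than look them up, but the
# failure mode is the same: each word contributes independently,
# so negators like "not" carry no weight at all.
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "poor": -1.0}

def bow_sentiment(text: str) -> float:
    """Sum per-word scores; 'no'/'not' score 0 and are ignored."""
    return sum(LEXICON.get(tok, 0.0) for tok in text.lower().split())

print(bow_sentiment("not good"))  # 1.0 -- reads as positive
print(bow_sentiment("not bad"))   # -1.0 -- reads as negative
```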

“AI is very good at generating responses similar to what it’s seen during training. But it’s really bad at coming up with something genuinely new or outside of the training data,” Franklin Delehelle, lead research engineer at zero-knowledge infrastructure company Lagrange Labs, told Decrypt. “So, if the training data lacks strong examples of saying ‘no’ or expressing negative sentiment, the model might struggle to generate that kind of response.”



In the study, researchers found that vision-language models, designed to interpret images and text, show an even stronger bias toward affirming statements, frequently failing to distinguish between positive and negative captions.
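One way to see why negated captions are hard: on raw token overlap (a crude stand-in for learned embeddings, not how a real vision-language model computes similarity), a negated caption can sit closer to the statement it contradicts than to one it is consistent with:

```python
# Jaccard overlap of word sets, a crude proxy for caption similarity.
def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

contradicts = jaccard("a photo of a dog", "a photo of no dog")  # 0.8
consistent = jaccard("a photo of a cat", "a photo of no dog")   # 0.5
# The contradictory pair scores *higher* than the consistent one,
# because "no" is just one more token among many shared content words.
```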

“Through synthetic negation data, we offer a promising path toward more reliable models,” the researchers said. “While our synthetic data approach improves negation understanding, challenges remain, particularly with fine-grained negation differences.”
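The synthetic-negation-data idea can be sketched as template-generated caption pairs; the templates and object names below are illustrative, not taken from the paper:

```python
# Generate affirmative/negated caption pairs for training or evaluation.
AFFIRM = "a photo of a {obj}"
NEGATE = "a photo with no {obj}"

def negation_pairs(objects):
    return [(AFFIRM.format(obj=o), NEGATE.format(obj=o)) for o in objects]

for pos, neg in negation_pairs(["dog", "fracture"]):
    print(pos, "|", neg)
# a photo of a dog | a photo with no dog
# a photo of a fracture | a photo with no fracture
```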

Despite ongoing progress in reasoning, many AI systems still struggle with human-like reasoning, especially when dealing with open-ended problems or situations that require deeper understanding or “common sense.”

“All LLMs—what we commonly refer to now as AI—are influenced, in part, by their initial prompt. When you’re interacting with ChatGPT or similar systems, the system is not just using your input. There is also an internal or ‘in-house’ prompt that’s been preset by the company—one that you, the user, have no control over,” Delehelle told Decrypt.

Delehelle highlighted one of AI’s core limitations: its reliance on patterns found in its training data, a constraint that can shape—and sometimes distort—how it responds.

Kian Katanforoosh, adjunct professor of Deep Learning at Stanford University and founder of the skills intelligence company Workera, said that the challenge with negation stems from a fundamental flaw in how language models operate.

“Negation is deceptively complex. Words like ‘no’ and ‘not’ flip the meaning of a sentence, but most language models aren’t reasoning through logic—they’re predicting what sounds likely based on patterns,” Katanforoosh told Decrypt. “That makes them prone to missing the point when negation is involved.”

Katanforoosh also pointed out, echoing Delehelle, that how AI models are trained is the core problem.

“These models were trained to associate, not to reason. So when you say ‘not good,’ they still strongly associate the word ‘good’ with positive sentiment,” he explained. “Unlike humans, they don’t always override those associations.”
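That human “override” can be approximated with an explicit scope rule. A minimal sketch follows, where both the two-word lexicon and the rule “a negator flips everything after it” are simplifications for illustration:

```python
# Negation-aware scorer: an explicit logical rule overrides the
# word-level association, which is what pure pattern-matching lacks.
LEXICON = {"good": 1.0, "bad": -1.0}
NEGATORS = {"no", "not", "never"}

def sentiment_with_negation(text: str) -> float:
    score, flipped = 0.0, False
    for tok in text.lower().split():
        if tok in NEGATORS:
            flipped = not flipped       # "not not good" cancels out
            continue
        val = LEXICON.get(tok, 0.0)
        score += -val if flipped else val
    return score

print(sentiment_with_negation("not good"))  # -1.0
print(sentiment_with_negation("good"))      # 1.0
```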

Katanforoosh warned that the inability to interpret negation accurately isn’t just a technical flaw—it can have serious real-world consequences.

“Understanding negation is fundamental to comprehension,” he said. “If a model can’t reliably grasp it, you risk subtle but critical errors—especially in use cases like legal, medical, or HR applications.”

And while scaling up training data might seem like an easy fix, he argued that the solution lies elsewhere.

“Solving this isn’t about more data, but better reasoning. We need models that can handle logic, not just language,” he said. “That’s where the frontier is now: bridging statistical learning with structured thinking.”

Edited by James Rubin

