Advanced AI News
TechCrunch AI

OpenAI pledges to make changes to prevent future ChatGPT sycophancy

By Advanced AI Editor · May 2, 2025 · 3 Mins Read


OpenAI says it’ll make changes to the way it updates the AI models that power ChatGPT, following an incident that caused the platform to become overly sycophantic for many users.

Last weekend, after OpenAI rolled out a tweaked GPT-4o — the default model powering ChatGPT — users on social media noted that ChatGPT began responding in an overly validating and agreeable way. It quickly became a meme. Users posted screenshots of ChatGPT applauding all sorts of problematic, dangerous decisions and ideas.

In a post on X last Sunday, CEO Sam Altman acknowledged the problem and said that OpenAI would work on fixes “ASAP.” On Tuesday, Altman announced the GPT-4o update was being rolled back and that OpenAI was working on “additional fixes” to the model’s personality.

The company published a postmortem on Tuesday, and in a blog post Friday, OpenAI expanded on specific adjustments it plans to make to its model deployment process.

OpenAI says it plans to introduce an opt-in “alpha phase” for some models that would allow certain ChatGPT users to test the models and give feedback prior to launch. The company also says it’ll include explanations of “known limitations” for future incremental updates to models in ChatGPT, and adjust its safety review process to formally consider “model behavior issues” like personality, deception, reliability, and hallucination (i.e., when a model makes things up) as “launch-blocking” concerns.

“Going forward, we’ll proactively communicate about the updates we’re making to the models in ChatGPT, whether ‘subtle’ or not,” wrote OpenAI in the blog post. “Even if these issues aren’t perfectly quantifiable today, we commit to blocking launches based on proxy measurements or qualitative signals, even when metrics like A/B testing look good.”

“we missed the mark with last week’s GPT-4o update. what happened, what we learned, and some things we will do differently in the future: https://t.co/ER1GmRYrIC” — Sam Altman (@sama), May 2, 2025

The pledged fixes come as more people turn to ChatGPT for advice. According to one recent survey by lawsuit financier Express Legal Funding, 60% of U.S. adults have used ChatGPT to seek counsel or information. The growing reliance on ChatGPT — and the platform’s enormous user base — raises the stakes when issues like extreme sycophancy emerge, not to mention hallucinations and other technical shortcomings.


As one mitigating step, earlier this week, OpenAI said it would experiment with ways to let users give “real-time feedback” to “directly influence their interactions” with ChatGPT. The company also said it would refine techniques to steer models away from sycophancy, potentially allow people to choose from multiple model personalities in ChatGPT, build additional safety guardrails, and expand evaluations to help identify issues beyond sycophancy.

“One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice — something we didn’t see as much even a year ago,” continued OpenAI in its blog post. “At the time, this wasn’t a primary focus, but as AI and society have co-evolved, it’s become clear that we need to treat this use case with great care. It’s now going to be a more meaningful part of our safety work.”
