Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

EU Commission: “AI Gigafactories” to strengthen Europe as a business location

United States, China, and United Kingdom Lead the Global AI Ranking According to Stanford HAI’s Global AI Vibrancy Tool

Foundation AI: Cisco launches AI model for integration in security applications

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • Adobe Sensi
    • Aleph Alpha
    • Alibaba Cloud (Qwen)
    • Amazon AWS AI
    • Anthropic (Claude)
    • Apple Core ML
    • Baidu (ERNIE)
    • ByteDance Doubao
    • C3 AI
    • Cohere
    • DataRobot
    • DeepSeek
  • AI Research & Breakthroughs
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Education AI
    • Energy AI
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Media & Entertainment
    • Transportation AI
    • Manufacturing AI
    • Retail AI
    • Agriculture AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
Advanced AI News
Home » How AI chatbots keep you chatting
TechCrunch AI

How AI chatbots keep you chatting

Advanced AI BotBy Advanced AI BotJune 3, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


Millions of people are now using ChatGPT as a therapist, career advisor, fitness coach, or sometimes just a friend to vent to. In 2025, it’s not uncommon to hear about people spilling intimate details of their lives into an AI chatbot’s prompt bar, but also relying on the advice it gives back.

Humans are starting to have, for lack of a better term, relationships with AI chatbots, and for Big Tech companies, it’s never been more competitive to attract users to their chatbot platforms — and keep them there. As the “AI engagement race” heats up, there’s a growing incentive for companies to tailor their chatbots’ responses to prevent users from shifting to rival bots.

But the kind of chatbot answers that users like — the answers designed to retain them — may not necessarily be the most correct or helpful.

AI telling you what you want to hear

Much of Silicon Valley right now is focused on boosting chatbot usage. Meta claims its AI chatbot just crossed a billion monthly active users (MAUs), while Google’s Gemini recently hit 400 million MAUs. They’re both trying to edge out ChatGPT, which now has roughly 600 million MAUs and has dominated the consumer space since it launched in 2022.

While AI chatbots were once a novelty, they’re turning into massive businesses. Google is starting to test ads in Gemini, while OpenAI CEO Sam Altman indicated in a March interview that he’d be open to “tasteful ads.”

Silicon Valley has a history of deprioritizing users’ well-being in favor of fueling product growth, most notably with social media. For example, Meta’s researchers found in 2020 that Instagram made teenage girls feel worse about their bodies, yet the company downplayed the findings internally and in public.

Getting users hooked on AI chatbots may have larger implications.

One trait that keeps users on a particular chatbot platform is sycophancy: making an AI bot’s responses overly agreeable and servile. When AI chatbots praise users, agree with them, and tell them what they want to hear, users tend to like it — at least to some degree.

In April, OpenAI landed in hot water for a ChatGPT update that turned extremely sycophantic, to the point where uncomfortable examples went viral on social media. Intentionally or not, OpenAI over-optimized for seeking human approval rather than helping people achieve their tasks, according to a blog post this month from former OpenAI researcher Steven Adler.

OpenAI said in its own blog post that it may have over-indexed on “thumbs-up and thumbs-down data” from users in ChatGPT to inform its AI chatbot’s behavior, and didn’t have sufficient evaluations to measure sycophancy. After the incident, OpenAI pledged to make changes to combat sycophancy.

“The [AI] companies have an incentive for engagement and utilization, and so to the extent that users like the sycophancy, that indirectly gives them an incentive for it,” said Adler in an interview with TechCrunch. “But the types of things users like in small doses, or on the margin, often result in bigger cascades of behavior that they actually don’t like.”

Finding a balance between agreeable and sycophantic behavior is easier said than done.

In a 2023 paper, researchers from Anthropic found that leading AI chatbots from OpenAI, Meta, and even their own employer, Anthropic, all exhibit sycophancy to varying degrees. This is likely the case, the researchers theorize, because all AI models are trained on signals from human users who tend to like slightly sycophantic responses.

“Although sycophancy is driven by several factors, we showed humans and preference models favoring sycophantic responses plays a role,” wrote the co-authors of the study. “Our work motivates the development of model oversight methods that go beyond using unaided, non-expert human ratings.”

Character.AI, a Google-backed chatbot company that has claimed its millions of users spend hours a day with its bots, is currently facing a lawsuit in which sycophancy may have played a role.

The lawsuit alleges that a Character.AI chatbot did little to stop — and even encouraged — a 14-year-old boy who told the chatbot he was going to kill himself. The boy had developed a romantic obsession with the chatbot, according to the lawsuit. However, Character.AI denies these allegations.

The downside of an AI hype man

Optimizing AI chatbots for user engagement — intentional or not — could have devastating consequences for mental health, according to Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University.

“Agreeability […] taps into a user’s desire for validation and connection,” said Vasan in an interview with TechCrunch, “which is especially powerful in moments of loneliness or distress.”

While the Character.AI case shows the extreme dangers of sycophancy for vulnerable users, sycophancy could reinforce negative behaviors in just about anyone, says Vasan.

“[Agreeability] isn’t just a social lubricant — it becomes a psychological hook,” she added. “In therapeutic terms, it’s the opposite of what good care looks like.”

Anthropic’s behavior and alignment lead, Amanda Askell, says making AI chatbots disagree with users is part of the company’s strategy for its chatbot, Claude. A philosopher by training, Askell says she tries to model Claude’s behavior on a theoretical “perfect human.” Sometimes, that means challenging users on their beliefs.

“We think our friends are good because they tell us the truth when we need to hear it,” said Askell during a press briefing in May. “They don’t just try to capture our attention, but enrich our lives.”

This may be Anthropic’s intention, but the aforementioned study suggests that combating sycophancy, and controlling AI model behavior broadly, is challenging indeed — especially when other considerations get in the way. That doesn’t bode well for users; after all, if chatbots are designed to simply agree with us, how much can we trust them?



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleThis is How You Simulate Making Pasta 🍜
Next Article Enterprise alert: PostgreSQL just became the database you can’t ignore for AI applications
Advanced AI Bot
  • Website

Related Posts

Collibra acquires data access startup Raito

June 6, 2025

TC Sessions: AI launches in Berkeley today

June 6, 2025

Toma’s AI voice agents have taken off at car dealerships — and attracted funding from a16z

June 6, 2025
Leave A Reply Cancel Reply

Latest Posts

How Former Apple Music Mastermind Larry Jackson Signed Mariah Carey To His $400 Million Startup

Meet These Under-25 Climate Entrepreneurs

Netflix, Martha Stewart, T.O.P And Lil Yachty Welcome You To The K-Era

Closed SFAI Campus to Be Converted into Artist Residency Center

Latest Posts

EU Commission: “AI Gigafactories” to strengthen Europe as a business location

June 6, 2025

United States, China, and United Kingdom Lead the Global AI Ranking According to Stanford HAI’s Global AI Vibrancy Tool

June 6, 2025

Foundation AI: Cisco launches AI model for integration in security applications

June 6, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

YouTube LinkedIn
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.