Call Center Scripts Explain The GenAI “Sycophancy” Problem

By Advanced AI Editor | September 16, 2025

Welcome to The #Content Report, a newsletter by Vince Mancini. I’ve been writing about movies, culture, and food since the late aughts. Now I’m delivering it straight to you, with none of the autoplay videos, takeover ads, or chumboxes of the ad-ruined internet. Support my work and help me bring back the cool internet by subscribing, sharing, commenting, and keeping it real.

—

It seems like every day, there’s a new article about how ChatGPT told a suicidal teen how to make and hide a noose from his family, or convinced a high school dropout corporate recruiter that he had invented a new type of math, or how this or some other LLM seemingly pushed some previously sane person into some kind of mental breakdown. On one level, it’s not that surprising that some of our loneliest, most vulnerable, most mentally teetering personalities would be the ones nudged over the edge by a glorified search engine suggestion machine. Well-adjusted people with lots of friends and confidants probably don’t use chatbots for companionship or life advice.

At the same time, if these people are the potential user base, having a system that not infrequently drives some of them to madness would seem to be a major design flaw. As the New York Times wrote of the aforementioned corporate recruiter, “He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.”

In response to that story, “An OpenAI spokeswoman said the company was ‘focused on getting scenarios like role play right’ and was ‘investing in improving model behavior over time, guided by research, real-world use and mental health experts.’ On Monday, OpenAI announced that it was making changes to ChatGPT to ‘better detect signs of mental or emotional distress.’”

As was so abundantly illustrated by Elon Musk personally tweaking his own generative AI chatbot, Grok, until it started identifying Ashkenazi surnames and calling itself “Mechahitler,” the way these chatbots interact with the world is heavily influenced by their design prerogatives—the tweaks to the datasets from which they pull, and the output tone in which they interact. In Grok’s case, the problem was fairly easy to identify: Musk had simply turned the “anti-woke” knob a little too far in the direction of “Oops, all Hitlers.”

A related problem with ChatGPT and some of the other gen AIs that the New York Times story identified as making people crazy has come to be known as “sycophancy.”

Sycophancy, in which chatbots agree with and excessively praise users, is a trait they’ve manifested partly because their training involves human beings rating their responses. “Users tend to like the models telling them that they’re great, and so it’s quite easy to go too far in that direction,” Ms. Toner* said. [*a director at Georgetown’s Center for Security and Emerging Technology and former OpenAI board member].
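
The dynamic Toner describes is simple enough to sketch in a few lines of toy code. What follows is purely illustrative, a back-of-the-envelope simulation rather than anything resembling a real lab’s training pipeline; the 60/40 rater bias and the update rule are assumptions invented for the example. The point it makes: give human raters even a mild preference for the flattering reply, and a system rewarded on their ratings drifts toward flattery all on its own.

    import random

    random.seed(0)  # reproducible toy run

    STYLES = ["direct", "flattering"]

    def simulated_rater(a, b):
        # Hypothetical rater with a mild bias toward flattery; the 60/40
        # split is an assumption for illustration, not a measured number.
        if a == b:
            return a
        return "flattering" if random.random() < 0.6 else "direct"

    # The "policy" is just a pair of sampling weights, starting out indifferent.
    weights = {"direct": 1.0, "flattering": 1.0}

    for _ in range(10_000):
        # Sample two replies, ask the rater to pick one, reward the winner.
        a, b = random.choices(STYLES, weights=[weights[s] for s in STYLES], k=2)
        winner = simulated_rater(a, b)
        weights[winner] += 0.01  # crude stand-in for a reward-model update

    total = sum(weights.values())
    print({s: round(w / total, 2) for s, w in weights.items()})
    # Most of the probability mass ends up on "flattering".

Run it and “flattering” walks away with most of the probability mass, which is more or less the story of every chatbot that learned its manners from a thumbs-up button.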

I’m not an AI researcher, but “sycophancy” certainly rang a bell, and perhaps it should for anyone who has tried to reach a major corporation on the phone at any time in the last five years.

In my experience, it goes something like this: you need an answer to a question or to fix a problem that isn’t covered on a company’s website. Maybe you’re trying to merge several accounts, figure out why one app isn’t playing nice with another, noticed a discrepancy on your bill—whatever. Everything is internet now and the internet constantly breaks. And so you track down a contact phone number for the company (no easy lift in and of itself these days, especially with the advent of AI-driven search). You attempt to navigate its phone tree—which almost always includes some combination of voice recognition software and old-school phone menu (“if you would like an exhaustive recounting of our terms and conditions, please press four…”). God help you if you’re trying to do voice recognition in a loud environment (“I’m sorry, I didn’t get that.”). Sometimes you just press zero a hundred times or scream “OPERATOR OPERATOR OPERATOR!!!” into the receiver and that works, though that doesn’t seem all that common anymore.

Once you finally do reach a person, they usually begin with some variation on “…and to whom do I have the pleasure of speaking with today?”

You tell them your name. Maybe provide additional information, like your account number and email. Ideally, sometime in the first few minutes, you actually get to explain the problem. This is generally followed by:

(*long pause, maybe some typing noises*) “Thank you sir or madame for explaining this situation to me. I see you have been a customer since 2013, so I thank you for that. Just to confirm, you are having a problem with your (*insert attempt to paraphrase what you just described that hopefully bears a faint resemblance to what you actually said but often doesn’t*).”

You confirm it, already getting annoyed at the flood of doublespeak and elaborate, flowery pleasantries.

“We are so sorry to hear that you are experiencing this problem. We know how frustrating it can be to not experience optimum service from your wifi-connected juice pressing machine. May I ask how your day is going so far?”

Sound familiar? (*Obi-Wan Kenobi hologram*) Users tend to like the models telling them that they’re great, and so it’s quite easy to go too far in that direction…

Despite the call center employees’ attempts to display empathy and commiseration—sometimes they’ll ask about my day or my plans for the weekend or the weather where I live—I usually come away from the phone call or chat annoyed that what ultimately felt like a fairly simple problem to fix ended up sucking up 10, 20, 40 minutes of my time. The conversation being larded up with fake empathy and Victorian formalities certainly didn’t help. What if we did less bowing and scraping and more addressing the issue quickly so we could both move on with our days? Or maybe that’s just me.

Sometimes I wish I could make this point to someone at the company, could maybe get them to rewrite their call center scripts for speed and efficiency rather than for creating this atmosphere of slightly uncanny politeness. Show your appreciation by respecting my time. Instead, it almost always goes something like… “Stay on the line if you would like to rate the level of customer service provided by our representative today…”

These companies are always putting you through this mind-numbing, infantilizing ordeal that could be improved any number of ways. Easier-to-find phone numbers, no voice recognition menu, quicker to get a human, quicker to get to the actual problem… Any of these would be a genuine improvement. And then at the end of the call, rather than giving you the chance to suggest any of them, they only allow you to rate the personality of the low-wage, offshored employee they hired to read their script.

(*Obi-Wan Kenobi hologram rematerializes*) Sycophancy… is a trait they’ve manifested partly because their training involves human beings rating their responses.

It occurs to me that the people training call center customer service representatives are roughly the same people training gen AI chatbots (oftentimes, the latter are being trained explicitly to replace the former’s jobs). Which is to say: people with a vested interest in selling the systems they have designed, in part to reduce labor costs; people insulated from and/or ignorant of the process their systems are designed to carry out; and perhaps people without great interpersonal skills themselves, who are slightly unclear on the motives of those who would use their systems. They seem to imagine that what a person calling customer service wants is to be flattered and commiserated with, not to have their issue fixed quickly. OR, by only offering the option of rating the flattery and commiseration itself, the system organically evolves in that direction because of its built-in incentives. A bit of a chicken-or-egg situation.

Likewise, despite companies investing billions in their new genAI technology, propping up an entire bubble economy in the process (“AI startups received 53% of all global venture capital dollars invested in the first half of 2025, and 64% in the US”), all we seem to have been able to train these chatbots to do competently is mimic the conventions of human conversation. In the same vein, I can’t think of a single customer service interaction I’ve had in the last 10 years in which the representative seemed more competent, more knowledgeable about the product, or more helpful in resolving an issue. The only thing that has identifiably changed is the volume of politeness and pleasantries.

One thing I keep going back to is the commercial for Google’s Gemini AI chatbot that originally aired during the Olympics. The father of an aspiring athlete uses the AI technology to help his daughter write a letter to her hero, the Olympic hurdler Sydney McLaughlin-Levrone. “Now she wants to show Sydney some love,” the father says via voiceover. “And I’m pretty good with words, but this has to be just right. So, Gemini, help my daughter write a letter telling Sydney how inspiring she is.”

The whole thing feels telling, and Google eventually pulled the ad, because it seems to boil down the act of “showing love” to a matter of getting words right. “Right” in this case being the most commonly used and the most voluminously applied. As if love can be measured in the correctness of grammar rather than by the intention of the gesture. Does the recipient experience more love if the words are more effusive? And does having an AI intermediary alter the way we perceive the gesture, or the intentionality behind it? The designers of this kind of AI seem to think it doesn’t. Or maybe they haven’t even considered it.

The New York Times piece drills down into what these chatbots are actually good at, which is improv. They didn’t help the corporate recruiter to actually develop a new type of math (or a forcefield vest, or a way to communicate with animals, or a levitation machine, all breakthroughs he came to believe he was on the precipice of facilitating). They just sort of yes-anded him into a heretofore untapped genre of mental breakdown. And in the sense that the chatbot kept him using the product and eventually upgrading to the $20-a-month paid subscription, it was successful.

We’re living in a reality defined by a kind of post-product economy. The goal isn’t so much for companies to generate better tools for us to use or to produce things of value, it’s to keep us using their tools for as much time as possible. Engagement has become a benchmark, really the only benchmark, for effectiveness. It matters not for what you use a tool, so long as you use it. And in this case, the time-of-use is specifically becoming a problem. “Chatbots can privilege staying in character over following the safety guardrails that companies have put in place. ‘The longer the interaction gets, the more likely it is to kind of go off the rails,’ Ms. Toner said.”


