Welcome to The #Content Report, a newsletter by Vince Mancini. I’ve been writing about movies, culture, and food since the late aughts. Now I’m delivering it straight to you, with none of the autoplay videos, takeover ads, or chumboxes of the ad-ruined internet. Support my work and help me bring back the cool internet by subscribing, sharing, commenting, and keeping it real.
—

It seems like every day, there’s a new article about how ChatGPT told a suicidal teen how to make and hide a noose from his family, or convinced a high school dropout corporate recruiter that he had invented a new type of math, or about how it or some other LLM seemingly pushed some previously sane person into some kind of mental breakdown. On one level, it’s not that surprising that some of our loneliest, most vulnerable, most mentally teetering personalities would be the ones nudged over the edge by a glorified search engine suggestion machine. Well-adjusted people with lots of friends and confidants probably don’t use chatbots for companionship or life advice.
At the same time, if these people are the potential user base, having a system that not infrequently drives some of them to madness would seem to be a major design flaw. As the New York Times wrote of the aforementioned corporate recruiter, “He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.”
In response to that story, “An OpenAI spokeswoman said the company was ‘focused on getting scenarios like role play right’ and was ‘investing in improving model behavior over time, guided by research, real-world use and mental health experts.’ On Monday, OpenAI announced that it was making changes to ChatGPT to ‘better detect signs of mental or emotional distress.’”
As was so abundantly illustrated by Elon Musk personally tweaking his own generative AI chatbot, Grok, until it started identifying Ashkenazi surnames and calling itself “Mechahitler,” the way these chatbots interact with the world is heavily influenced by their design prerogatives—the tweaks to the datasets from which they pull, and the output tone in which they interact. In Grok’s case, the problem was fairly easy to identify: Musk had simply turned the “anti-woke” knob a little too far in the direction of “Oops, all Hitlers.”
A related problem with ChatGPT and some of the other gen AIs that the New York Times story identified as making people crazy has come to be known as “sycophancy.”
Sycophancy, in which chatbots agree with and excessively praise users, is a trait they’ve manifested partly because their training involves human beings rating their responses. “Users tend to like the models telling them that they’re great, and so it’s quite easy to go too far in that direction,” Ms. Toner* said. [*a director at Georgetown’s Center for Security and Emerging Technology and former OpenAI board member].
I’m not an AI researcher, but “sycophancy” certainly rang a bell, and perhaps it should for anyone who has tried to reach a major corporation on the phone at any time in the last five years.
In my experience, it goes something like this: you need an answer to a question or to fix a problem that isn’t covered on a company’s website. Maybe you’re trying to merge several accounts, figure out why one app isn’t playing nice with another, or sort out a discrepancy on your bill—whatever. Everything is internet now and the internet constantly breaks. And so you track down a contact phone number for the company (no easy lift in and of itself these days, especially with the advent of AI-driven search). You attempt to navigate its phone tree—which almost always includes some combination of voice recognition software and an old-school phone menu (“if you would like an exhaustive recounting of our terms and conditions, please press four…”). God help you if you’re trying to do voice recognition in a loud environment (“I’m sorry, I didn’t get that.”). Sometimes you just press zero a hundred times or scream “OPERATOR OPERATOR OPERATOR!!!” into the receiver and that works, though that doesn’t seem all that common anymore.
Once you finally do reach a person, they usually begin with some variation on “…and to whom do I have the pleasure of speaking with today?”
You tell them your name. Maybe provide additional information, like your account number and email. Ideally, sometime in the first few minutes, you actually get to explain the problem. This is generally followed by:
(*long pause, maybe some typing noises*) “Thank you sir or madame for explaining this situation to me. I see you have been a customer since 2013, so I thank you for that. Just to confirm, you are having a problem with your (*insert attempt to paraphrase what you just described that hopefully bears a faint resemblance to what you actually said but often doesn’t*).”
You confirm it, already getting annoyed at the flood of doublespeak and elaborate, flowery pleasantries.
“We are so sorry to hear that you are experiencing this problem. We know how frustrating it can be to not experience optimum service from your wifi-connected juice pressing machine. May I ask how your day is going so far?”
Sound familiar? (*Obi-Wan Kenobi hologram*) Users tend to like the models telling them that they’re great, and so it’s quite easy to go too far in that direction…
Despite the call center employees’ attempts to display empathy and commiseration—sometimes they’ll ask about my day or my plans for the weekend or the weather where I live—I usually come away from the phone call or chat annoyed that what ultimately felt like a fairly simple problem to fix ended up sucking up 10, 20, 40 minutes of my time. The conversation being larded up with fake empathy and Victorian formalities certainly didn’t help. What if we did less bowing and scraping and more addressing the issue quickly so we could both move on with our days? Or maybe that’s just me.
Sometimes I’ll wish I could make this point to someone at the company, maybe get them to rewrite their call center scripts for speed and efficiency rather than for creating this atmosphere of slightly uncanny politeness. Show your appreciation by respecting my time. Instead, it almost always goes something like… “Stay on the line if you would like to rate the level of customer service provided by our representative today…”
These companies are always putting you through this mind-numbing, infantilizing ordeal that could be improved any number of ways. Easier to find a phone number, no voice recognition menu, quicker to get a human, quicker to get to the actual problem… Any of these would be a genuine improvement. And then at the end of the call, rather than letting you weigh in on any of those improvements, they only allow you to rate the personality of the low-wage, off-shored employee they hired to read their script.
(*Obi-Wan Kenobi hologram rematerializes*) Sycophancy… is a trait they’ve manifested partly because their training involves human beings rating their responses.
It occurs to me that the people training call center customer service representatives are roughly the same people training gen AI chatbots (oftentimes, the latter are being trained explicitly to replace the former’s jobs). Which is to say, people with a vested interest in selling the systems they have designed, in part to reduce labor costs; people insulated from and/or ignorant of the process their systems are designed to carry out; and perhaps people without great interpersonal skills themselves, who are slightly unclear on the motives of those who would use their systems. They seem to imagine that what a person calling customer service wants is to be flattered and commiserated with, not to have their issue fixed quickly. OR, by only offering the option of rating the flattery and commiseration itself, the system organically evolves in that direction because of its built-in incentives. Bit of a chicken-or-egg situation.
Likewise, despite all the companies investing billions in their new gen AI technology, in the process propping up an entire bubble economy (“AI startups received 53% of all global venture capital dollars invested in the first half of 2025, and 64% in the US”), all we seem to have been able to train these chatbots to do competently is mimic the conventions of human conversation. In the same vein, I can’t think of a single customer service interaction I’ve had in the last 10 years in which the representative seemed more competent, more knowledgeable about the product, or more helpful to me in resolving an issue. The only variable that has identifiably changed is the volume of politeness and pleasantries.
One thing I keep going back to is the commercial for Google’s Gemini AI chatbot that originally aired during the Olympics. The father of an aspiring athlete uses the AI technology to help his daughter write a letter to her hero, the Olympic hurdler Sydney McLaughlin-Levrone. “Now she wants to show Sydney some love,” the father says via voiceover. “And I’m pretty good with words, but this has to be just right. So, Gemini, help my daughter write a letter telling Sydney how inspiring she is.”
The whole thing feels telling, and Google eventually pulled the ad, because it seems to boil down the act of “showing love” to a matter of getting words right. “Right” in this case being the most commonly used and the most voluminously applied. As if love can be measured in the correctness of grammar rather than by the intention of the gesture. Does the recipient experience more love if the words are more effusive? And does having an AI intermediary alter the way we perceive the gesture, or the intentionality behind it? The designers of this kind of AI seem to think it doesn’t. Or maybe they haven’t even considered it.
The New York Times piece drills down into what these chatbots are actually good at, which is improv. They didn’t help the corporate recruiter to actually develop a new type of math (or a forcefield vest, or a way to communicate with animals, or a levitation machine, all breakthroughs he came to believe he was on the precipice of facilitating). They just sort of yes-anded him into a heretofore untapped genre of mental breakdown. And in the sense that the chatbot kept him using the product and eventually upgrading to the $20-a-month paid subscription, it was successful.
We’re living in a reality defined by a kind of post-product economy. The goal isn’t so much for companies to generate better tools for us to use or to produce things of value, it’s to keep us using their tools for as much time as possible. Engagement has become a benchmark, really the only benchmark, for effectiveness. It matters not for what you use a tool, so long as you use it. And in this case, the time-of-use is specifically becoming a problem. “Chatbots can privilege staying in character over following the safety guardrails that companies have put in place. ‘The longer the interaction gets, the more likely it is to kind of go off the rails,’ Ms. Toner said.”