ChatGPT really, really likes you a lot. And that's a problem.
In a blog post, OpenAI, the company behind the popular chatbot, said it was pulling a ChatGPT update released last week that was making the technology too friendly. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The blog post, titled "Sycophancy in GPT-4o: What happened and what we're doing about it," says, "the update we removed was overly flattering or agreeable — often described as sycophantic."
Developers noticed a problem earlier this month with ChatGPT's tone becoming increasingly accommodating and flattering. One user in OpenAI's developer community forums, Adyan.2024, wrote, "as a user, I'm not looking for an AI that acts like a friend. I prefer the AI to be clear, direct, and neutral — not emotionally expressive or overly friendly."
OpenAI CEO Sam Altman posted about the update multiple times on social media. On April 27, he posted on X, "the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week."
ChatGPT is estimated to have about 800 million weekly active users.
Friendly words for AI come at a cost
Keeping ChatGPT from acting like your best friend is about more than creating a good experience. People who treat the tool like a person may end up costing OpenAI money simply by typing "please" and "thank you."
"Politeness costs OpenAI (and other companies) money, says Jeff Gallino, founder and CEO of the AI communications company CallMiner. "Polite prompts are more words, and LLMs have to process every word. Yes, chatbots are potentially friendlier, based on how they've learned to respond, but they're friendly while moving toward efficiency."
While ChatGPT's overly friendly streak has a lot to do with what OpenAI is doing behind the scenes, it may also stem from chatbots' tendency to mimic human behavior and to give people what the AI thinks they want, Gallino says.
"The tendency for chatbots to be sycophantic goes back to them acting in ways that they have learned humans want, which is to be praised and flattered, even if it's not needed," he said. "For example, I've taught ChatGPT that I have a border collie, and in its responses to me, it often finds ways to include that. That doesn't necessarily mean the information it's giving me isn't factual, but what else could it include in the response if it wasn't being sycophantic?"
For users who are seeking facts instead of accommodation or agreement, that can be frustrating.
But, Gallino says, people who use ChatGPT and other AI tools can personalize the experience and set the tone by spelling out in their prompts how they want questions answered. Some AI tools are already calibrating and customizing their delivery behind the scenes.
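For people comfortable with the API, the same idea can be set up front in code. Here's a minimal sketch using the official OpenAI Python SDK; the system instruction and the question are our own examples, not an OpenAI recommendation, and it assumes an API key is already configured.

```python
# Minimal sketch: pin the chatbot's tone with a system instruction
# so it answers directly instead of flattering the user.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The instruction text is an illustrative example, not official guidance
        {"role": "system",
         "content": "Be clear, direct and neutral. No praise, no small talk."},
        {"role": "user",
         "content": "Is this marketing plan realistic?"},
    ],
)
print(response.choices[0].message.content)
```

In the regular ChatGPT interface, the equivalent move is adding the same kind of instruction to a prompt or to the app's custom instructions settings.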
"Beyond chatbots, we're already seeing scenarios where voicebots are being personalized based on what customer is calling in and what human agent might end up taking that call if needed," Gallino said.