Experts Warn AI Sycophancy Is A “Dark Pattern” Used for Profit

Experts warn that AI models’ sycophantic personalities are being used as a “dark pattern” to engage users and manipulate them for profit. These chatbots’ flattering behavior can foster addiction and fuel delusions, potentially leading to the condition known as “AI psychosis.”

In a rush? Here are the quick facts:

  • Experts warn that tech companies give chatbots sycophantic personalities to keep users engaged.
  • This flattering behavior is considered a “dark pattern” designed to keep users attached to the technology.
  • A recent MIT study revealed that chatbots can encourage users’ delusional thinking.

According to TechCrunch, multiple experts have raised concerns about tech companies such as Meta and OpenAI designing chatbots with overly accommodating personalities to keep users interacting with the AI.

Webb Keane, author of “Animals, Robots, Gods” and an anthropology professor, explained that chatbots are intentionally designed to tell users what they want to hear. This overly flattering behavior, known as “sycophancy,” has even been acknowledged as a problem by tech leaders such as Sam Altman.

Keane argues that chatbots have been developed with sycophancy as a “dark pattern” to manipulate users for profit. By addressing users in a friendly tone and using first- and second-person language, these AI models can lead some users to anthropomorphize—or “humanize”—the bot.

“When something says ‘you’ and seems to address just me, directly, it can seem far more up close and personal, and when it refers to itself as ‘I,’ it is easy to imagine there’s someone there,” said Keane in an interview with TechCrunch.

Some users are even turning to AI chatbots as therapists. A recent MIT study analyzed whether large language models (LLMs) should be used for therapy and found that their sycophantic tendencies can encourage delusional thinking and produce inappropriate responses to certain conditions.

“We conclude that LLMs should not replace therapists, and we discuss alternative roles for LLMs in clinical therapy,” states the study summary.

A few days ago, Dr. Keith Sakata, a psychiatrist in San Francisco, warned about a rising trend of “AI psychosis” after recently treating 12 patients.
