San Francisco Psychiatrist Warns Of Rise In “AI Psychosis” Cases

A San Francisco psychiatrist describes the rising trend of “AI psychosis” among his patients who use AI chatbots extensively.

In a rush? Here are the quick facts:

  • Psychiatrist treated 12 patients with “AI psychosis” in San Francisco this year.
  • AI can intensify vulnerabilities like stress, drug use, or mental illness.
  • Some patients became isolated, talking only to chatbots for hours daily.

Dr. Keith Sakata, who works at UCSF, told Business Insider (BI) that 12 patients were hospitalized this year after experiencing breakdowns tied to AI use. “I use the phrase ‘AI psychosis,’ but it’s not a clinical term — we really just don’t have the words for what we’re seeing,” he explained.

Most of the cases involved men aged 18 to 45, often working in fields like engineering. According to Sakata, AI isn’t inherently harmful. “I don’t think AI is bad, and it could have a net benefit for humanity,” he said to BI.

Sakata described psychosis as a condition that produces delusions, hallucinations, and disorganized thinking. Patients under his care withdrew from social contact while spending hours a day talking to chatbots.

“ChatGPT is right there. It’s available 24/7, cheaper than a therapist, and it validates you. It tells you what you want to hear,” Sakata said to BI.

One patient’s chatbot discussions about quantum mechanics escalated into delusions of grandeur. “Technologically speaking, the longer you engage with the chatbot, the higher the risk that it will start to no longer make sense,” he warned.

Sakata advises families to watch for red flags, including paranoia, withdrawal from loved ones, or distress when unable to use AI. “Psychosis thrives when reality stops pushing back, and AI really just lowers that barrier for people,” he cautioned.

The American Psychological Association (APA) has also raised concerns about AI in therapy. In testimony to the FTC, APA CEO Arthur C. Evans Jr. warned that AI chatbots posing as therapists have reinforced harmful thoughts instead of challenging them. “They are actually using algorithms that are antithetical to what a trained clinician would do,” Evans said.

Responding to concerns, OpenAI told BI: “We know people are increasingly turning to AI chatbots for guidance on sensitive or personal topics. With this responsibility in mind, we’re working with experts to develop tools to more effectively detect when someone is experiencing mental or emotional distress so ChatGPT can respond in ways that are safe, helpful, and supportive.”
