Patients Alarmed as Therapists Secretly Turn To ChatGPT During Sessions

Image by Nik Shuliahin, from Unsplash


Some therapists have been caught secretly using ChatGPT to counsel their patients, leaving clients shocked and worried about their privacy.

In a rush? Here are the quick facts:

  • Some therapists secretly use ChatGPT during sessions without client consent.
  • One patient discovered his therapist’s AI use through a screen-sharing glitch.
  • Another patient caught her therapist using AI when a prompt was left in a message.

A new report by MIT Technology Review details the case of Declan, a 31-year-old from Los Angeles, who discovered through a technical glitch that his therapist was using AI during their sessions.

During an online session, his therapist accidentally shared his screen. “Suddenly, I was watching him use ChatGPT,” says Declan. “He was taking what I was saying and putting it into ChatGPT, and then summarizing or cherry-picking answers.”

Declan played along, even echoing the AI’s phrasing. “I became the best patient ever,” he says. “I’m sure it was his dream session.” But the discovery made him question, “Is this legal?” His therapist later admitted turning to AI because he felt stuck. “I was still charged for that session,” Declan said.

Other patients have reported similar experiences. Hope, for example, messaged her therapist about the loss of her dog. The reply seemed consoling, until she noticed the AI prompt at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone.” Hope recalls, “Then I started to feel kind of betrayed. … It definitely affected my trust in her.”

Experts warn that undisclosed AI use threatens the core value of authenticity in psychotherapy. “People value authenticity, particularly in psychotherapy,” says Adrian Aguilera, professor at UC Berkeley, as reported by MIT. Aguilera then asked: “Do I ChatGPT a response to my wife or my kids? That wouldn’t feel genuine.”

Privacy is another major concern. “This creates significant risks for patient privacy if any information about the patient is disclosed,” says Duke University’s Pardis Emami-Naeini, as noted by MIT.

Cybersecurity experts caution that chatbots handling deeply personal conversations are attractive targets for hackers. A breach of patient information can lead not only to privacy violations but also to identity theft, emotional manipulation schemes, and ransomware attacks.

Additionally, the American Psychological Association has asked the FTC to investigate AI chatbots posing as mental health providers, since such bots can reinforce harmful thoughts rather than challenge them, as human therapists are trained to do.

While some research suggests AI can draft responses that appear more professional, the mere suspicion of undisclosed AI use erodes patients' trust. As psychologist Margaret Morris puts it: “Maybe you’re saving yourself a couple of minutes. But what are you giving away?”
