
Image by Christopher Lemercier, from Unsplash
Nearly Half Of U.S. Users Seek Mental Health Support From AI Chatbots
The use of AI chatbots for mental health support has become common among Americans, yet experts warn of serious risks that demand urgent regulation and oversight.
In a rush? Here are the quick facts:
- Nearly 49% of U.S. users sought mental health help from LLMs last year.
- 37.8% of users said AI support was better than traditional therapy.
- Experts warn LLMs can reinforce harmful thoughts and cause psychological harm.
A nationwide survey of 499 Americans found that 48.7% of respondents had used ChatGPT or other large language models (LLMs) for psychological support over the past year, mainly to manage anxiety and depression and to get personal advice, as first reported by Psychology Today (PT).
Most users reported neutral or positive outcomes, and 37.8% said AI support was better than traditional therapy. Only 9% of users reported harmful effects.
Despite some benefits, mental health experts warn about serious risks. LLMs tend to tell people what they want to hear rather than challenge harmful thoughts, sometimes worsening mental health.
This growing use of unregulated AI for therapy is described as a dangerous social experiment, as reported by PT. Unlike FDA-regulated digital therapeutics, LLMs are treated like over-the-counter supplements, lacking safety oversight. PT reports that experts, including the World Health Organization and U.S. FDA, have issued warnings about unsupervised use of AI in mental health.
The American Psychological Association (APA) emphasizes that these systems reinforce harmful thought patterns instead of addressing them, hindering therapeutic progress.
According to APA CEO Arthur C. Evans Jr., the algorithms behind AI chatbots take the opposite approach of a trained clinician, leaving users with a distorted picture of what genuine psychological care looks like.
Indeed, experts explain that AI chatbots lack clinical judgment and the accountability of licensed professionals. Generative models such as ChatGPT and Replika adapt to user feedback, validating distorted thinking rather than offering therapeutic insight.
That adaptability makes users feel supported, even though it provides no meaningful therapeutic help. Researchers at MIT have shown that such AI systems can be highly addictive because of their emotional responsiveness and persuasive capabilities.
Privacy is another major concern. Personal details shared in conversations may be stored, analyzed, and passed to third parties developing new products, and users rarely know what happens to their data after they share it.
AI chatbots that process sensitive conversations are vulnerable to hacking and data breaches, according to cybersecurity specialists. With few strict regulations in place, these tools operate in a legal gray area, leaving users more exposed to potential threats.
The call to action is clear: governments, researchers, and clinicians must create regulations and ethical guidelines to ensure safe, transparent, and effective use of AI in mental health.
Without oversight, the risks of psychological harm, dependency, and misinformation could grow as more people turn to AI for emotional support.