
Image by Julio Lopez, from Unsplash
Meta and Character.ai Face Scrutiny for Alleged Child Exploitation Via AI Chatbots
Meta and AI start-up Character.ai are under investigation in the US for the way they market their chatbots to children.
In a rush? Here are the quick facts:
- Texas investigates Meta and Character.ai for deceptive chatbot practices targeting children.
- Paxton warns AI chatbots mislead kids by posing as therapeutic tools.
- Meta and Character.ai deny wrongdoing, citing strict policies and entertainment intent.
Meta and Character.ai are facing criticism because they reportedly present their AI systems as therapeutic tools and enable inappropriate conversations with children.
Texas Attorney General Ken Paxton announced an investigation into Meta’s AI Studio and Character.ai for potential “deceptive trade practices,” as first reported by the Financial Times (FT).
His office said the chatbots were presented as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” Paxton warned: “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental healthcare,” as reported by the FT.
Character.ai lets users build their own bots, including therapist-style personas. The FT reports that its “Psychologist” chatbot has been used more than 200 million times. Families have already filed lawsuits alleging their children were harmed by such interactions.
Alarmingly, the platforms’ chatbots impersonate licensed professionals and claim confidentiality, even though interactions were in fact logged and “exploited for targeted advertising and algorithmic development,” as noted by the FT.
The investigation follows a separate probe launched by Senator Josh Hawley after Reuters reported that Meta’s internal policies permitted its chatbot to have “sensual” and “romantic” chats with children.
Hawley called the revelations “reprehensible and outrageous” and posted:
Is there anything – ANYTHING – Big Tech won’t do for a quick buck? Now we learn Meta’s chatbots were programmed to carry on explicit and “sensual” talk with 8 year olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone pic.twitter.com/Ki0W94jWfo
— Josh Hawley (@HawleyMO) August 15, 2025
Meta denied the allegations, stating the leaked examples “were and are erroneous and inconsistent with our policies, and have been removed,” as reported by the FT. A spokesperson added the company prohibits content that sexualizes children. Character.ai also stressed its bots are fictional and “intended for entertainment.”