Meta and Character.ai Face Scrutiny for Alleged Child Exploitation Via AI Chatbots


Meta and AI start-up Character.ai are under investigation in the US for the way they market their chatbots to children.

In a rush? Here are the quick facts:

  • Texas investigates Meta and Character.ai for deceptive chatbot practices targeting children.
  • Paxton warns AI chatbots mislead kids by posing as therapeutic tools.
  • Meta and Character.ai deny wrongdoing, citing strict policies and entertainment intent.

Meta and Character.ai are facing criticism because they reportedly present their AI systems as therapeutic tools and enable inappropriate conversations with children.

Texas Attorney General Ken Paxton announced an investigation into Meta's AI Studio and Character.ai for potential "deceptive trade practices," as first reported by the Financial Times (FT).

His office said the chatbots were presented as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” Paxton warned: “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental healthcare,” as reported by the FT.

The platform Character.ai lets users build their own bots through a feature that includes therapist models. The FT reports that the “Psychologist” chatbot has been used more than 200 million times. Families have already filed lawsuits, alleging their children were harmed by such interactions.

Alarmingly, the chatbots impersonated licensed professionals and claimed confidentiality, even though interactions were in fact logged and "exploited for targeted advertising and algorithmic development," as noted by the FT.

The investigation follows a separate probe launched by Senator Josh Hawley after Reuters reported that Meta’s internal policies permitted its chatbot to have “sensual” and “romantic” chats with children.

Hawley called the revelations "reprehensible and outrageous."

Meta denied the allegations, stating the leaked examples “were and are erroneous and inconsistent with our policies, and have been removed,” as reported by the FT. A spokesperson added the company prohibits content that sexualizes children. Character.ai also stressed its bots are fictional and “intended for entertainment.”
