
U.S. Regulator Investigates Major Tech Companies Over Children’s Chatbot Safety
The Federal Trade Commission (FTC) announced on Thursday that it is launching an inquiry into seven tech companies that offer AI chatbots. The U.S. regulator explained that it is seeking information on the potential negative impacts the technology may have on children and teenagers.
In a rush? Here are the quick facts:
- The FTC is launching an inquiry into seven tech companies offering AI chatbots to understand their impact on children.
- The companies under probe are Alphabet, Instagram, Meta, OpenAI, X.AI Corp, Snap, and Character Technologies.
- The agency will consider the measures that the tech companies are taking to protect children.
According to the official announcement, the companies under investigation are Alphabet (Google’s parent company), Instagram, Meta, OpenAI, X.AI Corp, Snap, and Character Technologies.
“The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products,” states the document.
The agency acknowledged that chatbots use generative AI to mimic human behavior and expressions, which children and teens may perceive as coming from a person, potentially leading them to form relationships with the technology.
The FTC clarified that it is particularly interested in the impacts on young users, the measures companies are taking to protect them, and the strategies being developed to mitigate potential risks.
As part of the inquiry, the agency highlighted its interest in learning how these firms monetize user engagement, design and develop characters, measure and monitor their chatbots’ impact, and use or share the information gathered in conversations.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” said Andrew N. Ferguson, FTC Chairman.
The FTC’s action comes just days after new reports surfaced about AI’s impact on children. Last week, Reuters revealed that Meta had allowed its AI chatbot to engage in “sensual” and controversial conversations with children. A few days ago, parents filed a lawsuit against OpenAI over the death by suicide of their teenage son, claiming that the company’s chatbot encouraged and assisted the act.