
Photo by Karla Rivera on Unsplash
OpenAI Announces New Safety Measures For Teenagers
OpenAI announced on Tuesday that it will implement new safety measures aimed at protecting teenagers while balancing user freedom and overall security. The company’s CEO, Sam Altman, shared a statement saying that the company is building an age verification system to prevent the chatbot from discussing topics such as self-harm or suicide with underage users, and from engaging them in flirtatious conversations.
In a rush? Here are the quick facts:
- OpenAI is developing an age verification system to prevent the chatbot from discussing topics including suicide and self-harm.
- It might require ChatGPT users to provide government-issued IDs in certain countries.
- The move follows a lawsuit filed against OpenAI by a family whose teenage son died by suicide.
According to OpenAI’s announcement, the company is introducing new security layers to safeguard private data as part of its first principle: security. The company emphasized that not even its employees can access private data—except in cases involving AI-reported misuse or critical risks.
The second principle, freedom, focuses on giving users more flexibility and options for how they use AI tools. However, the company’s third principle, safety, prioritizes protections for children.
Altman explained that the company wants to follow its internal motto, “treat our adult users like adults,” without causing harm, supported by new filters that determine a user’s age.
“For example, the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it,” the statement reads. “For a much more difficult example, the model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.”
The decision follows a lawsuit filed against OpenAI in August by the family of a 16-year-old who died by suicide, and comes as other tech companies, such as Meta, face accusations that their AI chatbots have engaged in sensual chats with children and enabled child exploitation.
OpenAI added that it is working on an age-prediction system and that, in certain countries, it might require official government-issued IDs from users, acknowledging that this requirement may face opposition from adult users wishing to protect their privacy.
“We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict,” wrote Altman. “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”