Amid ongoing debate over the influence of AI chatbots on the mental health of young people, OpenAI has said it is building a “different ChatGPT experience” specifically for teenagers, to be rolled out later this year.
These changes include age-prediction technology to keep kids under 18 out of the standard version of ChatGPT. Acknowledging that even the most advanced systems sometimes struggle to predict age, the Microsoft-backed AI startup said that ChatGPT will automatically switch to the under-18 experience if the system is unable to reliably determine a user’s age.
“It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. We prioritise safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” OpenAI CEO Sam Altman wrote in a blog post published on September 16. As part of its teen safety efforts, OpenAI said it is working with experts, advocacy groups, and policymakers.
The new teen mode in ChatGPT comes after a lawsuit filed by a family in the US alleged that the popular AI chatbot’s lack of safeguards had contributed to their teenage son’s death by suicide.
OpenAI’s version of ChatGPT for teenagers will feature a range of new safety measures. The platform will include stricter content filters that block flirtatious conversations and discussions of self-harm, regardless of whether they are in a fictional or creative-writing context.
In situations where a teen expresses suicidal thoughts, OpenAI’s crisis response system may alert parents and, in emergencies, even contact authorities.
Additionally, parents will have access to a suite of parental controls, allowing them to link their account with their teen’s account (minimum age of 13), set “blackout hours” when the app cannot be used, manage features like memory and chat history, and help guide rules for how ChatGPT responds. “These controls will add to features available for all users, including in-app reminders during long sessions to encourage breaks,” OpenAI said.
Altman’s blog post underscores the company’s attempt to strike a balance between freedom, privacy, and safety. He stated that adults should be treated “like adults” with fewer restrictions, while kids need more protection, even if that means requiring identification and giving up some privacy. He also said the company is developing advanced security features to keep all users’ data private, even from OpenAI employees, with certain exceptions for monitoring potential serious misuse and critical risks.
“If you talk to a doctor about your medical history or a lawyer about a legal situation, we have decided that it’s in society’s best interest for that information to be privileged and provided higher levels of protection,” Altman said.