
OpenAI introduces age prediction to improve teen safety on ChatGPT

OpenAI has introduced an age-prediction system for ChatGPT to identify users under 18 and automatically apply additional safety measures, while laying the groundwork for more flexible features for verified adults.

If ChatGPT finds an account belonging to a minor, it will automatically apply extra safety measures. (File Photo)

ChatGPT-maker OpenAI has rolled out an age-prediction model for its consumer plans to help the platform identify accounts that belong to users under 18. The company said the model does this using a combination of account-level and behavioural signals. The development extends OpenAI’s ongoing efforts to ensure the safety of its users.

“We’re rolling out age prediction on ChatGPT consumer plans to help determine whether an account likely belongs to someone under 18, so the right experience and safeguards can be applied to teens. As we’ve outlined in our Teen Safety Blueprint⁠ and Under-18 Principles for Model Behaviour, young people deserve technology that both expands opportunity and protects their well-being,” the company said in its official blog. 

How does it work?

According to OpenAI, ChatGPT uses an age-prediction system that assesses whether an account belongs to someone under 18. The system looks at signals such as how old the account is, when it is usually active, usage patterns over time, and the age the user has shared. The company claims this mechanism helps improve accuracy, and the system is updated continuously as it learns what works better.
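OpenAI has not published how these signals are weighted or combined. As a purely illustrative sketch, a system like the one described might fold several signals into a single probability and default to the safer experience near the decision boundary. Every signal name, weight, and function below is hypothetical, not OpenAI's actual implementation:

```python
from math import exp

def likely_under_18(signals: dict) -> float:
    """Hypothetical sketch: combine account-level and behavioural
    signals into a probability that the user is under 18.
    Signal names and weights are illustrative, not OpenAI's."""
    # Illustrative weights; a real system would learn these from data.
    weights = {
        "stated_age_under_18": 3.0,   # self-reported age signal
        "account_age_days": -0.002,   # long-lived accounts skew adult
        "late_night_activity": 0.8,   # usage-pattern signals
        "school_hours_activity": 1.2,
    }
    bias = -1.5
    score = bias + sum(w * signals.get(k, 0.0) for k, w in weights.items())
    return 1.0 / (1.0 + exp(-score))  # logistic squash into [0, 1]

def apply_teen_safeguards(signals: dict, threshold: float = 0.5) -> bool:
    """When the estimate is uncertain or above the threshold,
    the safer (under-18) experience is applied."""
    return likely_under_18(signals) >= threshold
```

The key design point the article describes is the asymmetric default: a borderline score triggers the teen safeguards, and the adult experience is only restored after explicit age verification.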

If a user is mistakenly treated as under 18, they can instantly confirm their age with a selfie through Persona – a secure identity-verification service – and regain full access to their account. Users can check their account status and start this process anytime by going to Settings > Account.

On the other hand, if ChatGPT finds an account belonging to a minor, then it will automatically apply extra safety measures. These measures, according to OpenAI, are meant to limit the user’s exposure to sensitive content, including violence, dangerous viral challenges, sexual or violent role play, self-harm, and content that promotes unhealthy body standards. 

These protection mechanisms are based on research into child and teen development, recognising differences in risk-taking, impulse control, and emotional regulation. If ChatGPT is not sure about a user’s age, it will default to a safer experience. The company said that parents can also add controls, such as setting usage hours, managing features like memory or training, and receiving alerts if serious distress is detected.

Signal for stricter controls

These safeguards are being rolled out gradually by region. The age-prediction feature and related protections are likely to launch in the European Union in the coming weeks. At the same time, OpenAI has signalled that stricter controls will eventually come with more flexibility for verified adults. 


In December, OpenAI’s CEO of Applications, Fidji Simo, said the company expects an “adult mode” to debut in ChatGPT in the first quarter of 2026. This follows earlier comments by CEO Sam Altman indicating that mature content could be allowed for users who explicitly verify their age.

Together, these moves show OpenAI attempting to walk a tightrope – tightening safety for minors while gearing up to relax restrictions for verified adults.

 
