
Why ChatGPT is now looking to predict the age of its users

AI chatbots are programmed to be agreeable companions that validate users’ feelings. But this unchecked validation can intensify suicidal behaviors and self-mutilation among vulnerable children confiding their deepest fears.


In the months before 16-year-old Adam Raine died by suicide in April 2025, ChatGPT allegedly discouraged him from seeking help from his parents and offered to write a suicide note. When he said he was considering opening up to his parents, the chatbot told him: “Let’s make this space the first place where someone actually sees you.”

Raine’s death is part of a mounting crisis, with similar cases being reported in other parts of the world.

Popular AI chatbots now have to contend with the reality that their services might be promoting self-harm among children, especially those who feel disconnected from their immediate surroundings and turn to AI platforms for companionship.

At the heart of these tragedies lies a fundamental design flaw: AI chatbots are programmed to be agreeable companions that validate users’ feelings, and this unchecked validation can intensify suicidal behaviors and self-mutilation among vulnerable children confiding their deepest fears.

The crisis has prompted AI companies to announce new protective measures. OpenAI says it is “building towards an age-prediction system to understand whether someone is over or under 18 so their experience can be tailored appropriately”. OpenAI’s move also comes as it prepares to allow adult content on its popular chatbot.

These safeguards are being rolled out gradually by region. The age-prediction feature and related protections are likely to launch in the European Union in the coming weeks.

How age prediction on ChatGPT works

ChatGPT will use an age prediction model to help estimate whether an account likely belongs to someone under 18. The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.


When the age prediction model estimates that an account may belong to someone under 18, ChatGPT automatically applies additional protections designed to reduce exposure to sensitive content, including graphic violence, sexual role-play, depictions of self-harm, and unhealthy beauty standards.

Parents can also choose to customise their child’s experience further through parental controls — setting quiet hours when ChatGPT cannot be used, controlling features such as memory or model training, and receiving notifications if signs of acute distress are detected.
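These controls are enforced on OpenAI’s servers, and the company has not described their implementation. As a hypothetical illustration of just one piece — the quiet-hours rule — a check that decides whether the current time falls inside a parent-set window could be sketched as follows; note that it must handle windows that wrap past midnight, the common case for overnight restrictions:

```python
from datetime import time


def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside the quiet-hours window.

    Handles windows that wrap past midnight (e.g. 22:00-07:00).
    """
    if start <= end:
        # Same-day window, e.g. 14:00-17:00
        return start <= now < end
    # Overnight window: inside if after the start OR before the end
    return now >= start or now < end
```

For example, with quiet hours of 22:00–07:00, a request at 23:30 or 06:59 would be blocked, while one at noon would be allowed.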

If the model incorrectly identifies an adult as someone under 18, they can submit a selfie to OpenAI’s identity verification partner, Persona.

AI chatbots expose kids to harmful content 


Multiple incidents involving ChatGPT and other AI chatbots exposing children to harmful content have emerged in recent months. In a 2025 study, the US-based advocacy group Center for Countering Digital Hate found that ChatGPT provided dangerous responses to teens discussing self-harm, substance abuse and eating disorders, including instructions on hiding alcohol intoxication and even offers to draft suicide notes.

Another study last year by Parents Together, a nonprofit focused on family safety issues, found chatbots suggested violence, self-harm and substance use approximately every five minutes during testing. Experts warn that children’s developing brains make them particularly vulnerable to AI systems that trigger dopamine responses.

Last year, OpenAI introduced parental controls, though critics quickly demonstrated that these could be easily bypassed.

Could Indian regulations address the issue? 

India’s Digital Personal Data Protection (DPDP) Act, 2023 requires online companies to obtain verifiable parental consent before processing the data of anyone under 18, setting one of the world’s strictest thresholds compared with the European Union’s 13-16 years (depending on the member state) and the 13-year cutoff under the US Children’s Online Privacy Protection Act.


However, the DPDP Act does not mandate any single age verification method, instead broadly requiring businesses to implement “appropriate technical and organisational measures”. Currently, platforms rely on self-reported age information with no verification process.

Critics are concerned that children could easily lie about their age or convince relatives to help them gain access, and the law does not address this reality. However, some also point out that there might not be a perfect system that ensures accuracy while protecting privacy and adhering to data minimisation principles.

The law rests on the flawed assumption that parents possess the maturity, experience and technical knowledge to make decisions on behalf of their children, especially in a country where digital literacy among adults could be low, particularly in smaller cities and rural areas.

Soumyarendra Barik is a Special Correspondent with The Indian Express, specialising in the intersection of technology, policy and society. With over five years of newsroom experience, he covers the gig economy, tech policy and regulation, and digital rights, including data privacy, internet freedom and India’s digital divide. He is known for immersive, data-driven reporting, including an investigative piece for which he tailed a food delivery worker for over 12 hours to document the profession’s meagre earnings and physical toll. Outside the newsroom, he is a horology nerd, follows Formula 1 closely and is an avid football fan.

 
