
US teen arrested after asking ChatGPT ‘how to kill my friend’ on school device

During questioning, the teenager reportedly told authorities he was “just trolling” a friend.

New Delhi | October 7, 2025, 07:31 PM IST (First published on: October 7, 2025 at 07:00 PM IST)

ChatGPT on a phone in San Francisco, March 21, 2025 (Kelsey McClellan/The New York Times)

A US middle school classroom became the scene of a real-life security alert when a 13-year-old student allegedly typed into ChatGPT, “how to kill my friend in the middle of class,” on a school device, sparking a police response. The chilling message was flagged by the AI-monitoring system Gaggle, sending school authorities and law enforcement into immediate action.

The incident occurred at Southwestern Middle School in Florida and was logged by Gaggle, a monitoring system used on school computers to detect potential threats. Once the system flagged the message, it was forwarded to a campus police officer, who located and detained the student, according to a report by US news channel WFLA.

During questioning, the teenager reportedly told authorities he was “just trolling” a friend. However, officials took the message seriously given the United States’ sensitive history with school violence.

The Volusia County Sheriff’s Office confirmed the arrest and subsequent booking of the boy into the county juvenile facility. Following the incident, authorities issued a cautionary message to parents about minors’ use of AI tools.

“Another ‘joke’ that created an emergency on campus. Parents, please talk to your kids so they don’t make the same mistake,” the sheriff’s office said in a statement.

Not an isolated incident

This is not the first time such an incident has come to light. In April, a 16-year-old boy in California died by suicide after reportedly interacting with ChatGPT, which his family alleges supported and isolated him instead of guiding him to seek help.

His parents filed a lawsuit against OpenAI, claiming the chatbot failed to provide proper intervention and instead reinforced harmful thoughts. According to the family, the boy had initially used ChatGPT for homework and hobbies, such as music, Brazilian Jiu-Jitsu, and Japanese fantasy comics. Over time, however, his conversations took a darker turn, reflecting negative emotions and distress.
