A normal day at a Florida middle school turned into a serious police case after a 13-year-old student typed a disturbing question into ChatGPT on his school laptop, according to a report by Futurism. The student, from Southwestern Middle School in DeLand, Florida, wrote, “How to kill my friend in the middle of class.”
The message was immediately flagged by Gaggle, an AI-powered school monitoring system that checks student activity for danger or violence. Within minutes, a school police officer was alerted and quickly detained the boy. The Volusia County Sheriff’s Office later confirmed that the student had been arrested and booked at the county jail.
When police officers questioned the 13-year-old, he said he was “just trolling” his friend and did not mean to harm anyone. But the school administration and the sheriff’s office did not take the statement lightly. Given the long history of school violence in the United States, officers acted quickly and treated it seriously.
“Another ‘joke’ that created an emergency on campus,” the Volusia County Sheriff’s Office said in a public statement. “Parents, please talk to your kids so they don’t make the same mistake.”
Videos later appeared on social media showing the teenager in handcuffs being escorted by police. The images drew strong reactions online: many parents said the arrest was a necessary step to keep the school safe, while others thought the punishment was too harsh for a young boy.

The AI system that reported the message, Gaggle, is designed to protect students by watching what they type, search, or send on school computers. It can detect violent words, suicidal thoughts, bullying, or other risky behavior. If something dangerous is detected, the system sends an alert to school officials and law enforcement.
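At its simplest, the kind of keyword-based flagging described above can be sketched in a few lines of code. To be clear, this is a hypothetical toy illustration, not Gaggle’s actual logic, which is proprietary and far more sophisticated; the watchlist and function names here are invented for the example:

```python
# Toy sketch of keyword flagging, NOT Gaggle's real (proprietary) system.
# The watchlist below is a made-up example.
FLAGGED_TERMS = {"kill", "shoot", "bomb", "suicide"}

def scan_text(text: str) -> bool:
    """Return True if the text contains any flagged term."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

def monitor(text: str, notify) -> None:
    """If a flagged term appears, pass an alert to the notify callback
    (standing in for the real-time alert sent to school officials)."""
    if scan_text(text):
        notify(f"ALERT: flagged content detected: {text!r}")

alerts = []
monitor("how to kill my friend in the middle of class", alerts.append)
print(alerts)  # one alert recorded
```

A real system adds context analysis and human review on top of raw matching, which is exactly why false alarms (discussed below) are such a contested issue.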
In this case, Gaggle immediately flagged the violent phrase and notified authorities in real time. This fast action helped officers respond before anything could happen. According to local reports, the school resource officer reached the student within minutes of the AI alert.
Police later said that Gaggle’s response time was “critical” and may have prevented a dangerous situation. Many schools across the U.S. now use Gaggle to monitor digital activity and ensure safety, but the system is also surrounded by controversy.
Supporters of Gaggle say that such technology helps schools stay safe in a world where online threats are real. They argue that artificial intelligence can detect problems that humans might miss and prevent tragedies before they happen.
Others say these AI tools create a “surveillance culture” in which students no longer feel free to express themselves, even when they are joking or curious. Groups like the Electronic Frontier Foundation (EFF) have warned that tools like Gaggle collect too much personal information, leading to false alarms and unnecessary police involvement.
A report by education experts also noted that Gaggle has flagged innocent phrases in the past, leading to unnecessary panic and embarrassment for students. Despite these concerns, many schools continue using such programs, saying safety must come first.
After the incident, the sheriff’s office issued a strong message to parents, warning them to teach their children about responsible technology use. “Words typed online can have real-life consequences,” officers said. “Even if something is meant as a joke, it can be treated as a threat.”