Toxic online behavior has been a problem in schools across the country for years. It’s a broad term covering a range of issues in online interactions and communications. It is most often associated with cyberbullying, but it can also include behavior indicating risks of self-harm, suicide, sexual coercion, and more.
One thing that makes toxic online behavior hard to spot is that it comes in many forms: text, images, audio, and video. K-12 IT teams can struggle to monitor shared drives, email, and files for each of these content types in order to identify issues brewing among students.
Toxic online behavior has increased as many districts have moved to remote or hybrid learning due to the COVID-19 pandemic. A report by software vendor L1ght recorded a 40% increase in online toxicity among teens and children since the early days of the COVID-19 shutdowns, compared to 2019 activity. In that report, L1ght noted, “Our research reveals a worrying rise in online toxicity and cyberbullying among children, precisely when they are most reliant on digital platforms.”
Today’s toxic school behavior is almost unrecognizable compared to what it was even a year ago, not to mention the days when students didn’t have laptops, tablets, and smartphones. Further, far more bullying now happens online than face to face, since many students are learning remotely and are confined entirely to online social communities.
Sadly, as social distancing efforts and social upheaval drag on, school administrators are seeing a significant increase in K-12 cyber safety issues, including cyberbullying, toxic language, self-harm content, suicidal signals, and more.
Most researchers are seeing a significant increase in toxic behavior on social media and gaming platforms; it seems that any platform offering community interaction is rife with toxicity these days. Even school newspaper reporters are writing about the phenomenon. But the problem doesn’t exist only on social media sites. Students are also turning school apps into venues for toxic behavior.
Unfortunately, many school administrators have a blind spot when it comes to toxic behavior within their education platforms. In some cases, the ability to monitor language and behavior in these school apps simply isn’t in place because administrators are focused on monitoring students’ social media interactions and web search activity. They’ve overlooked the possibility of incidents occurring within their own domain.
That is changing, however, as districts become aware of how students abuse school apps meant for collaboration and productivity. Many districts now know, for example, that students use apps like Google Docs, Slides, Chat, and Gmail as makeshift chat rooms.
Diana Gill, Director of Technology at the East Porter County School Corporation in Indiana, was surprised to find toxic behavior in her school’s apps after implementing ManagedMethods.
“My ‘Aha!’ moment happened just a few days into becoming a ManagedMethods customer. The platform’s inappropriate content monitor alerted me to a Google Doc that a dozen students were using to chat in. They were being quite clever by typing in white text and constantly changing the file name. Needless to say, the language was very inappropriate. I never would have been able to hunt this down without ManagedMethods,” Diana reports.
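ManagedMethods doesn’t publish the internals of its monitoring, but the white-text trick Diana describes is detectable with Google’s own APIs. Below is a minimal sketch, in Python, of how an IT team might flag white-on-white text in a single document using the Google Docs API; the service account file and document ID are placeholders, and a real deployment would need domain-wide delegation to read student files.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder credentials file; a real deployment would use a service account
# with domain-wide delegation so it can read students' documents.
SCOPES = ["https://www.googleapis.com/auth/documents.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
docs = build("docs", "v1", credentials=creds)


def is_white(text_style):
    """True if a text run's foreground color is pure white (RGB 1, 1, 1)."""
    rgb = (
        text_style.get("foregroundColor", {})
        .get("color", {})
        .get("rgbColor", {})
    )
    return all(rgb.get(channel) == 1 for channel in ("red", "green", "blue"))


def find_hidden_text(document_id):
    """Collect the text of any white-colored runs in a Google Doc.

    Only top-level paragraphs are checked; a production tool would also
    walk tables and other nested structural elements.
    """
    doc = docs.documents().get(documentId=document_id).execute()
    hidden = []
    for element in doc.get("body", {}).get("content", []):
        for item in element.get("paragraph", {}).get("elements", []):
            run = item.get("textRun")
            if run and is_white(run.get("textStyle", {})):
                hidden.append(run["content"])
    return hidden


flagged = find_hidden_text("YOUR_DOCUMENT_ID")  # placeholder document ID
if flagged:
    print("Possible hidden white text:", "".join(flagged))
```

The constant file renaming Diana mentions wouldn’t show up in the document body at all; catching that would take a different mechanism, such as rename events from the Google Drive Activity API.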
Unfortunately, incidents similar to this one, and some even more harmful, are occurring with increasing frequency. As IT administrators, it is our duty to maintain cyber safety in schools by keeping students safe both online and offline. That means doing what we can to detect potential cyber safety signals, such as signs of student self-harm.