How Cyber Safety Artificial Intelligence Helps Protect Students in K-12 Schools

Artificial Intelligence is taking K-12 cyber safety monitoring to the next level

Issues with cyber safety in schools are on the rise. Gaggle recently reported a 66% jump in cyber safety incidents during the first three months of the 2020-21 school year compared with the same period in 2019-20.

The reason? More students are spending more time online. And, as the pandemic drags on, students’ mental and emotional health is suffering. They are letting their guard down as they become more comfortable in online spaces, and they currently have fewer outlets for expressing themselves. The problem isn’t just cyber safety; it’s student safety and overall well-being.

Being online exposes students to specific cyber safety risks. Students use online platforms for risky and toxic behavior such as cyberbullying, sexting, and spreading discriminatory content. They also send signals online that may indicate they’re in crisis, such as self-harming behavior or thoughts of suicide.

Spotting these problems is difficult for schools, which have traditionally relied on teachers and counselors to notice warning signs in classrooms and hallways. Districts are increasingly using cyber safety artificial intelligence (AI) to improve online school safety programs, such as self-harm and cyberbullying monitoring. AI helps these districts identify subtle signals they might otherwise miss.


What is Cyber Safety Artificial Intelligence?

AI can be built into a program, allowing it to simulate aspects of human intelligence. For example, AI allows a program to learn and solve problems based on the input it receives. AI is used in many industries, including finance and healthcare.

In student cyber safety solutions, AI is applied to monitoring so that the system can detect toxic online behavior and indicators that a child is in crisis. A monitoring program that includes AI will learn that “To Kill a Mockingbird” is the title of a book, while “I just have no reason to live” points to a child who may be in crisis (the sketch after the list below illustrates this distinction). AI is helping districts identify student cyber safety issues that include:

  • Cyberbullying
  • Inappropriate/Explicit Content
  • Sexting, Sextortion, and Online Predation
  • Discrimination and Hate Speech
  • Threats of Violence
  • Self-Harm and Suicide
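
To make that distinction concrete, here is a minimal sketch in Python contrasting a context-free keyword filter with a (greatly simplified) context-aware check. The keyword and phrase lists are hypothetical illustrations; a real monitoring product learns these distinctions from trained models rather than hand-written rules.

```python
# A toy contrast between keyword flagging and context-aware scoring.
# The word and phrase lists below are hypothetical illustrations;
# a real system learns these distinctions from training data.

KEYWORDS = {"kill", "die", "hurt"}

def naive_flag(text: str) -> bool:
    """Flags any message containing a risk keyword, ignoring context."""
    words = {w.strip('.,!?"').lower() for w in text.split()}
    return bool(words & KEYWORDS)

BENIGN_PHRASES = {"to kill a mockingbird"}             # known-safe contexts
CRISIS_PHRASES = {"no reason to live", "want to die"}  # risk signals

def context_aware_flag(text: str) -> bool:
    """Stands in for a trained classifier: it scores whole phrases,
    so a book title is cleared while a crisis statement is caught."""
    lowered = text.lower()
    if any(p in lowered for p in BENIGN_PHRASES):
        return False
    return any(p in lowered for p in CRISIS_PHRASES)

for msg in ("We read To Kill a Mockingbird in class",
            "I just have no reason to live"):
    print(f"{msg!r}: keyword={naive_flag(msg)}, "
          f"context-aware={context_aware_flag(msg)}")
# The keyword filter flags the book title (a false positive) and misses
# the crisis message (a false negative); the phrase-level check gets both right.
```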

AI in Self-Harm Detection and Suicide Prevention

Self-harm and suicide aren’t the same thing. But they are often lumped together because self-harm is often, though not always, an early indication of future suicidal tendencies.

Student self-harm refers to situations where students hurt themselves without the intent to end their lives. They may harm themselves to try to cope with issues like depression, anxiety, low self-esteem, or abuse. Students who are harming themselves need professional help to address the underlying issues before the behavior becomes so ingrained that it’s difficult to stop.

Self-harm monitoring must look for indicators in both images and text. Students use school apps to write down their thoughts, and they also upload and share images through those apps. Each district must find the right balance between student data privacy and the monitoring needed to protect students’ well-being.

IT teams are in a unique position to assist in student suicide prevention. AI-enabled monitoring technology allows the IT team to identify indicators of suicide risk. Once those indicators are identified, however, the IT team’s role ends: the information should be turned over to professionals in the district, such as principals and counselors, who follow up with the students.
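
As a rough sketch of that workflow, the Python below scores both text and images and routes high-risk items to a counselor queue rather than having IT act on them. The classify_text and classify_image helpers, and the threshold, are hypothetical stand-ins for trained models, not any vendor’s actual implementation.

```python
# A sketch of a monitoring hook that scores both text and images and
# escalates to counselors rather than acting on its own. classify_text,
# classify_image, and the threshold are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SharedItem:
    author: str
    text: Optional[str] = None
    image_bytes: Optional[bytes] = None

def classify_text(text: str) -> float:
    """Placeholder risk score in [0, 1]; a real system would call
    a trained language model."""
    return 0.9 if "no reason to live" in text.lower() else 0.0

def classify_image(image: bytes) -> float:
    """Placeholder risk score in [0, 1]; a real system would call
    a trained image classifier."""
    return 0.0

ESCALATION_THRESHOLD = 0.8  # hypothetical cutoff

def escalate_if_needed(item: SharedItem, counselor_queue: list) -> None:
    """IT's role ends at identification: high-risk items are routed
    to a counselor queue, not handled by the IT team."""
    scores = []
    if item.text is not None:
        scores.append(classify_text(item.text))
    if item.image_bytes is not None:
        scores.append(classify_image(item.image_bytes))
    if any(s >= ESCALATION_THRESHOLD for s in scores):
        counselor_queue.append(item)

queue: list = []
escalate_if_needed(SharedItem("student42", text="I just have no reason to live"), queue)
print(len(queue))  # 1: the item now waits for a counselor, not the IT team
```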


Cyberbullying, Toxicity, and Artificial Intelligence

Sameer Hinduja, of the Cyberbullying Research Center, provides an in-depth look at how AI helps detect toxic language online. This kind of toxicity detection can help school monitors identify likely incidents of bullying, threats of violence, discrimination, and harassment.

Hinduja is glad that self-harm monitoring technology has moved beyond simple keyword searches. AI provides much better analysis than scanning for keywords alone, and it saves staff from spending time reviewing false positives like “To Kill a Mockingbird.” Using AI, a monitoring system can apply multiple layers of analysis to focus on truly harmful content.
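
As one illustration of what “multiple layers” can mean in practice, the sketch below pairs a cheap keyword prefilter with an off-the-shelf contextual classifier from the Hugging Face transformers library. The model choice (unitary/toxic-bert, a publicly available toxicity model), the watch list, and the threshold are assumptions for illustration, not Hinduja’s or any vendor’s pipeline.

```python
# A layered-analysis sketch: a fast keyword prefilter narrows the
# stream, then a contextual model scores what remains. The watch list,
# model, and threshold are illustrative assumptions.
from transformers import pipeline

WATCH_WORDS = {"kill", "hate", "hurt", "die"}  # hypothetical layer-1 list

def prefilter(text: str) -> bool:
    """Layer 1: cheap, context-free screen."""
    lowered = text.lower()
    return any(w in lowered for w in WATCH_WORDS)

# Layer 2: a contextual classifier; unitary/toxic-bert is one publicly
# available toxicity model, used here purely as an example.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def flag(text: str, threshold: float = 0.8) -> bool:
    """Escalate only when both layers consider the message risky."""
    if not prefilter(text):
        return False
    result = classifier(text)[0]  # e.g. {"label": "toxic", "score": 0.97}
    return result["score"] >= threshold

# A keyword scan alone would flag both lines below; the contextual
# layer should clear the book title and keep the threat.
print(flag("We are reading To Kill a Mockingbird in English class"))
print(flag("I will hurt you after school tomorrow"))
```

The prefilter keeps compute costs down by narrowing the stream, while the contextual layer does the work keywords can’t: clearing benign matches and keeping truly harmful content.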

Cyberbullying detection is critical because of the devastating effects cyberbullying can have on students who are targeted. Research indicates that students who are cyberbullied can develop severe mental health issues, including:

  • Developing social anxiety (41%)
  • Developing depression (37%)
  • Having suicidal thoughts (26%)
  • Engaging in self-harm (25%)
  • Skipping classes (20%)
  • Developing an eating disorder (14%)
  • Abusing drugs and alcohol (9%)

Along with suicide prevention, cyberbullying is another area where a district’s IT team can be the first line of defense. With the right AI-powered monitoring, IT can identify problems that school administrators and counselors need to address. This is even more critical now that pandemic precautions have put students behind a monitor rather than at a desk, limiting the time they spend face-to-face with teachers and administrators.

ManagedMethods is working hard to offer cyber safety artificial intelligence products that make Google Workspace and Microsoft 365 safety, security, and compliance easy for K-12 IT teams. Signals is an AI-powered feature that takes K-12 student cyber safety monitoring to the next level.

