
Hackernoon: How AI Phishing Is Putting Schools at Risk

This article was originally published in Hackernoon on 10/29/25 by Charlie Sander.

AI is supercharging social engineering, and K-12 remains a prized target

With an average of 2,739 edtech tools per district, staff and students rely heavily on laptops and classroom tech that must be protected from the latest threats. Those threats now range from convincing “superintendent” emails and deepfake voice notes to student account takeovers.

PromptLock is an early example of a new kind of malware that uses a generative AI model to write its own malicious code each time it runs. Because the code changes slightly with every execution, signature-based security systems struggle to catch it.

Once it’s on a computer, the malware scans the files it finds. It can then exfiltrate them and encrypt them so schools can no longer open them.

Schools can’t afford shortcuts in cyber defense. The path forward combines smarter technology, disciplined verification, and a community that understands its role in security.

Charlie Sander, CEO, ManagedMethods

As ransomware becomes more sophisticated, attacks could target not just large schools but also individual students and staff members, leaving them open to higher risks of data theft, financial loss, and service disruptions. Schools must know where their blind spots are and how to protect themselves against these types of cyber attacks.

Find and fix blind spots in built-in filters

Built-in tools often miss AI-powered lures, because the latest generative AI tools can write polished messages that sound human. In a recent survey of 18,000 employed adults, only 46% correctly identified that a phishing email was written by AI. For traditional security systems, it’s equally difficult. When there are no spelling errors or awkward phrases, filters that look for “typical scam language” struggle to flag them.

Part of the problem is that AI can pull details, such as upcoming school events and staff names, from public websites or social media, making a message sound authentic. Even when an email doesn’t contain malware, it can trick someone into sharing passwords or sensitive data. That means IT administrators must introduce filters that understand context.

Once security teams realize an account has been compromised, they can flag the content and account as a warning to the rest of the school and update their security systems. But since AI can generate a slightly different version of the same phishing message for each target, it’s tricky to tell traditional security systems what patterns or “signatures” to look for. Tools that rely on rules and known threat lists, not real-time reasoning, no longer suffice.
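To see why exact signatures break down, consider a minimal sketch (the messages and hashing scheme below are illustrative, not from any real filter): a blocklist built from the hash of one phishing message misses a copy that differs by a single word.

```python
import hashlib

# Two hypothetical AI-generated variants of the same lure,
# differing by one word per target.
variant_a = "Hi, this is the superintendent. Please buy gift cards today."
variant_b = "Hi, this is the superintendent. Please buy gift cards now."

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# A blocklist built from the first variant's signature...
blocklist = {sig_a}

# ...fails to catch the reworded copy, even though the scam is identical.
print(sig_a == sig_b)      # False: one changed word yields a new hash
print(sig_b in blocklist)  # False: the variant sails past the blocklist
```

Real filters match on more than whole-message hashes, but the underlying problem is the same: any defense keyed to a fixed pattern can be sidestepped by a generator that never repeats itself exactly.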

To tighten defenses, districts should audit their native filters quarterly. They must test defenses with realistic phishing simulations that represent today’s standard of attack, and adjust rules to flag messages containing urgency, payment requests, or login prompts. Advanced phishing detection tools and add-ons can help security teams flag messages that “feel off,” even if they look clean…
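The rule categories above (urgency, payment requests, login prompts) can be sketched as a simple keyword filter. This is a toy illustration under assumed patterns, not a production detector; real tools weigh many more signals, including sender reputation and context.

```python
import re

# Hypothetical rule set mirroring the guidance above: flag urgency,
# payment requests, and login prompts.
RULES = {
    "urgency": re.compile(r"\b(urgent|immediately|within 24 hours|asap)\b", re.I),
    "payment": re.compile(r"\b(gift cards?|wire transfer|invoice|payment)\b", re.I),
    "login":   re.compile(r"\b(verify your (account|password)|sign in|reset your password)\b", re.I),
}

def flag_message(body: str) -> list[str]:
    """Return the names of every rule the message trips."""
    return [name for name, pattern in RULES.items() if pattern.search(body)]

msg = ("This is urgent: please buy gift cards for the staff party "
       "and sign in here to confirm.")
print(flag_message(msg))  # ['urgency', 'payment', 'login']
```

A message that trips several rules at once is a strong candidate for quarantine or manual review, even when it contains no attachment or known-bad link.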

Read More >>

FREE! Google & Microsoft Security Audit for K-12 Schools >

Category
In The News