Big Brother Watching Kids Nonstop Won’t Make Schools Safer


As America reels from yet another horrific school massacre, temptation is rising to unleash new surveillance technologies upon schools and students—for “safety.” But by and large, these technologies not only don’t work as intended—they’re actually harmful.

If you are a student in America, there’s already a good chance that while you’re at school, every website you visit, every assignment you write, each email you send, all attachments you include on emails, any comments you post in a Google document, and much more, are scanned by automated monitoring tools.

The algorithms used in this software are designed to catch anyone looking at, writing, or researching "dangerous" subjects—which can encompass everything from often-benign words and phrases like "I'm taking a mental break" or "I had a fight with my boyfriend," to more politically complicated but not dangerous topics like reproductive rights, race relations, or LGBTQ+ issues.

When using a school-issued device—often the only device to which marginalized, low-income students have access—scanning is even more likely and even more pervasive. According to the Center for Democracy and Technology, 71 percent of teachers report automated monitoring software is in place on school-issued devices, while three in four teachers report that such monitoring is not limited to school hours.

Software companies claim to use artificial intelligence and algorithmic systems to monitor nearly all student activity under the guise of safety, but often students, parents, and teachers aren’t aware just how much is monitored, how ineffective these tools often are, and how this growing form of ubiquitous, pervasive surveillance harms students by blocking educational material, invading their privacy, and inviting unjust scrutiny.

Some companies, like Securly, tout their ability to monitor student activity at home as a valued feature. In a case study of a Virginia school district that offered Chromebooks to every student, they quote an administrator who was happy to find Securly’s Filter product—advertised as “a safer, more educational web for all your students, on any device, anywhere they go”—while “looking for a new solution that would work seamlessly when students took their devices home.”

Yet there is no real evidence that this ubiquitous surveillance makes students safer.

Software companies tout their products' effectiveness with ambiguous statistics that are rarely backed up by detailed data. The parental control app Bark claims that it detected "2.6 million severe bullying situations," with no indication of whether these reports were accurate or how they were handled in the students' best interests. Securly claims it has saved 1,652 lives, but case studies on the company's site note only one purported instance of this—while mostly focusing instead on the ease of setup and simplicity for school administrators.

The few studies published about these tools indicate that they likely do more harm than good. The filters frequently stop students from accessing content they need for their education. One Securly case study, presented as a success story, describes a teacher asking the "instructional technology coach" in charge of filtering to unblock a website that a student needed. The site was Amnesty International. Why it was blocked goes unexplained.

Sadly, kids know they’re being monitored, making the tools less effective to prevent violence, while also potentially discouraging adolescents from reaching out for help—particularly those in minority and LGBTQ+ communities, who are far more likely to seek help online.

These tools are ripe for misuse and “false positives” abound. They tend to take a “block and track everything” approach, using artificial intelligence for detection—which means, more often than not, kids interested in gun control are going to be tracked just as often as kids interested in guns.

In March, Democratic Sens. Elizabeth Warren and Ed Markey took Gaggle.net, Bark Technologies, GoGuardian, and Securly to task for surveilling students inappropriately. The senators produced a report which found that student monitoring software has been misused for disciplinary purposes and increases law enforcement interactions with students, which increases the risk of sanctions on students. It also noted the companies “have not taken any steps to determine whether student activity monitoring software disproportionately targets students from marginalized groups.”

And with new laws wreaking havoc upon LGBTQ+ rights and abortion rights, these surveillance tools potentially become arbiters of students’ private personal freedoms.

Our hearts are raw with grief. Nobody wants another Sandy Hook, another Parkland, another Uvalde. But all-encompassing surveillance of students will hurt more than help.

Jason Kelley is Associate Director of Digital Strategy at the Electronic Frontier Foundation, an international digital civil liberties organization based in San Francisco and online at eff.org.
