It may sound like a science fiction plot straight out of Minority Report, the futuristic film about a “pre-crime” police force that uses technology to arrest criminals before they commit a crime, but some schools are using artificial intelligence (AI) today to prevent future violent attacks on campus and in classrooms.
Some schools have hired companies that use artificial intelligence programs to scan student emails, texts, social media postings, and private documents for signs of potentially dangerous threats or disturbing imagery or language. With the rise of school shootings, parents and educators are eager to make schools safer, but how accurate are these types of programs at identifying potential threats? And could these types of programs infringe on student privacy and freedom of speech? Read on to learn about the current batch of AI products geared toward reviewing students’ personal data, as well as their effectiveness in preventing serious issues.
How Does Artificial Intelligence Detect Threats?
Schools have long employed filters that sort through and block indecent language, images, or materials from reaching students’ email accounts or search engine results. While schools could limit what students viewed during the school day or on school-issued devices, many educators and parents worried about what students were posting on private accounts. Enter artificial intelligence that uses algorithms to detect potentially troublesome phrases or topics. The algorithms can search for everything from drug and alcohol use, sexual activity, and bullying to violent threats toward peers or teachers. Once the AI finds questionable activity, school officials and parents receive a notification, and in serious cases, the authorities become involved.
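To make the process above concrete, here is a minimal, purely hypothetical sketch of how a phrase-based scanner might flag messages for human review. The category names and phrase lists are illustrative assumptions, not any vendor’s actual product or word list:

```python
import re

# Illustrative categories and phrases (assumptions for this sketch only).
FLAG_CATEGORIES = {
    "violence": ["bring a gun", "hurt someone", "shoot"],
    "self_harm": ["kill myself", "want to die"],
    "bullying": ["everyone hates you"],
}

def scan_message(text: str) -> list[str]:
    """Return the categories whose phrases appear in the message text."""
    lowered = text.lower()
    hits = []
    for category, phrases in FLAG_CATEGORIES.items():
        if any(re.search(r"\b" + re.escape(p) + r"\b", lowered)
               for p in phrases):
            hits.append(category)
    return hits

def review_queue(messages: list[str]) -> list[tuple[str, list[str]]]:
    """Pair each flagged message with its categories so a counselor or
    administrator can judge tone and context -- the step software cannot do."""
    return [(m, cats) for m in messages if (cats := scan_message(m))]
```

Note that a sentence like “Let’s shoot hoops after class” would trip the “violence” category here, which is exactly why these tools notify a human rather than act on their own: tone, context, and slang still require adult judgment.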
How Effective Is AI at Preventing School Violence?
Bark, one of the leading AI parental control apps, reports that it has helped prevent 16 potential school shootings and more than 10,000 cases of self-harm. While these numbers are promising, there are limitations to using artificial intelligence as the only method of preventing school violence. First, students who want to keep certain information private are quite skilled at doing so—possibly even more skilled than the adults who are receiving their account information. A student can create a separate, secret social media account and simply not share it with parents or administrators, which renders AI virtually useless. Understanding tone, context, and slang is also critical for administrators as they interpret AI reports about student language. However, as a general sweep of student online activity and documents, artificial intelligence can provide useful data that helps parents and administrators open conversations with students about difficult and sensitive topics.
What About a Student’s Right to Privacy?
Understandably, students may be reluctant to have anyone go through their personal thoughts, writings, or posts. For teens, self-expression and the growing need for autonomy and individuality can make this feel like an unacceptable invasion of privacy. To create student “buy-in,” parents and administrators would need to build a school culture and climate centered on students’ overall mental and physical well-being, and avoid the perception that they are looking for ways to “get” students for foul language or poor choices.
Supplementing School Safety with Artificial Intelligence
Artificial intelligence can be an excellent tool for combing through student accounts and looking for troubling signs. In combination with a strong administrative and counseling staff, parent and community connections, and open communication with students, AI can surface potential warning signs of depression, suicidal thoughts, bullying, or violent actions. Schools that know all of their students on a personal level, and that encourage student feedback on the use of AI in their buildings, combine two powerful safeguards for student safety.
What do you think? Should schools use artificial intelligence to prevent school violence or does that infringe on student privacy? Share your views in the comments section.