School surveillance will never protect children from shootings

If we are to believe the school surveillance system providers, K-12 schools will soon operate in a manner resembling a mashup of Minority Report, Suspect, and Robocop. “Military-grade” systems would guzzle student data, pick up the mere hint of harmful ideas, and dispatch agents before would-be culprits could carry out their despicable acts. In the unlikely event that anyone managed to evade the predictive systems, they would inevitably be stopped by next-generation weapons-detection systems and biometric sensors that interpret a person’s gait or tone of voice and alert authorities to impending danger. The last tier is arguably the most technologically advanced – some form of drone, or perhaps even a robotic dog, that could disarm, distract, or incapacitate the dangerous individual before they could do any real harm. If we invest in these systems, the thinking goes, our children will finally be safe.

Not only is this not our present, it will never be our future – no matter how elaborate and complex surveillance systems become.

Numerous companies have sprung up in recent years, all promising a variety of technological interventions that will reduce or even eliminate the risk of school shootings. The proposed “solutions” range from tools that use machine learning and human monitoring to predict violent behavior, to artificial intelligence paired with cameras that determine individuals’ intent from their body language, to microphones that identify the potential for violence based on a tone of voice. Many of them invoke the memory of dead children to market their technology. The security company AnyVision, for example, uses images from the Parkland and Sandy Hook shootings in presentations showcasing its facial- and firearm-recognition technology. Immediately after the shooting in Uvalde last month, the company Axon announced plans for a taser-equipped drone to target school shooters. (The company later put the plan on hold after members of its ethics board resigned.) The list continues, and every company would have us believe that it alone provides the solution to this problem.

The failure here lies not only in the systems themselves (Uvalde, for example, appeared to have deployed at least one of these “security measures”), but in the way people perceive them. As with policing itself, any failure of a surveillance or security system usually leads people to demand more extensive surveillance. When a threat is not predicted and prevented, companies often cite the need for more data to close the gaps in their systems – and governments and schools frequently oblige. In New York, despite the many failures of surveillance mechanisms to prevent (or even capture) the recent subway shooter, the city’s mayor has decided to expand surveillance technology. Meanwhile, the city’s schools have reportedly flouted a moratorium on facial recognition technology. The New York Times reports that U.S. schools spent $3.1 billion on security products and services in 2021 alone. And Congress’s recent gun legislation includes another $300 million to improve school safety.

But at their core, many of these predictive systems promise a degree of certainty in situations where there can be none. Tech companies consistently frame the idea of complete data, and thus perfect systems, as something just over the next ridge – an environment in which we are so completely monitored that any and all antisocial behavior can be predicted and all violence prevented. But a comprehensive dataset of ongoing human behavior is like the horizon: it can be conceptualized, but never actually reached.

Currently, companies are using several bizarre techniques to train these systems: some stage mock attacks; others use action movies like John Wick – hardly good indicators of real life. At some point, macabre as it may sound, it’s conceivable that these companies would train their systems on data from real-life shootings. But even if footage of real incidents became available (and in the large quantities these systems require), the models would still not be able to accurately predict the next tragedy based on previous ones. Uvalde was different from Parkland, which was different from Sandy Hook, which was different from Columbine.

Technologies that make predictions about intent or motivation are making a statistical bet on the probability of a given future based on data that will always be incomplete and contextless, regardless of its source. The baseline assumption of a machine learning model is that a pattern can be recognized; in this case, that there is some “normal” behavior that shooters exhibit in the lead-up to their crimes. But finding such a pattern is unlikely. This is especially true given the near-continuous shifts in the lexicon and practices of teenagers. Arguably more than any other segment of the population, young people change the way they speak, dress, write, and present themselves – often explicitly to avoid and evade the watchful eye of adults. Developing a consistently accurate model of that behavior is nearly impossible.
