Can AI really predict crime a week in advance? That’s the claim.

The University of Chicago recently announced, to great fanfare, that:

Data and social scientists at the University of Chicago have developed a new algorithm that predicts crime by learning patterns in time and geographic locations from public data on violent and property crime. The model can predict future crimes a week in advance with an accuracy of about 90%.

University of Chicago Medical Center, “Algorithm predicts crime a week in advance, but reveals bias in police response,” news release (June 28, 2022)

Many immediately thought of the 2002 film Minority Report, in which three psychics (“precogs”) visualize murders before they happen, allowing a special PreCrime police unit to arrest would-be attackers before they can commit their crimes. Have these researchers at the University of Chicago made this fiction a reality?

No. Their model is much more prosaic. What the model predicts, using historical records of where and when crimes occurred, is where and when crimes are likely to occur. The model does not predict that Jessie will attack Jodie at 123 Waverly Place on April 1 at 10 pm. Instead, it “predicts” hotspots: relatively large geographic areas where a relatively large number of, say, street crimes are likely to occur.
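The basic idea behind this kind of hotspot forecasting can be sketched in a few lines: bin past incident locations into grid cells and flag the cells with the most historical incidents. This is a minimal illustration with made-up coordinates, not the Chicago team’s actual (far more elaborate) model.

```python
from collections import Counter

def to_cell(x, y, size=1.0):
    """Bin a location into a grid cell of the given size (arbitrary units)."""
    return (int(x // size), int(y // size))

# Hypothetical historical incident locations -- invented for illustration only.
historical = [(0.2, 0.3), (0.4, 0.1), (1.2, 0.5),
              (0.3, 0.9), (2.7, 2.2), (0.1, 0.2)]

counts = Counter(to_cell(x, y) for x, y in historical)

def predict_hotspots(counts, k=2):
    """Flag the k cells with the most past incidents as next week's hotspots."""
    return [cell for cell, _ in counts.most_common(k)]

# The densest cell, (0, 0), comes first in the forecast.
print(predict_hotspots(counts))
```

The “prediction” is simply an extrapolation: places with many past crimes are flagged as places likely to have future crimes, which is why backtests of such models can look impressive without being real forecasts.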

That doesn’t seem particularly difficult, but the authors report that their model performed very well in the National Institute of Justice’s Real-Time Crime Forecasting Challenge, in which participants were tasked with predicting crime hotspots in Portland.

The challenge was realistic because the participating teams were provided with historical data for the period from March 1, 2012, to July 31, 2016, which they could use to develop and calibrate their models. Additional data were released for model testing over the next six months. During the last week of this six-month testing period, between February 22, 2017, and February 28, 2017, the teams submitted their official hotspot forecasts for the following week, two weeks, month, two months, or three months, beginning March 1, 2017. This was, as advertised, a real-time forecasting challenge: the 62 entrants competing for $1.2 million in prize money had to make predictions about things that hadn’t happened yet.

Too often people “predict” things that happened in the past – which is often easy and mostly useless because, as the Danish proverb warns,

It is difficult to make predictions, especially about the future.

The University of Chicago team reported that it did well in the Portland challenge, but it made its predictions five years after the real-time contest ended! We don’t know how many times their model has been tweaked to better predict the past, and it’s certainly not fair to compare their backtests to real-time predictions.

On the other hand, we should be thankful that the Chicago model does not claim to predict specific individual crimes, such as Jessie attacking Jodie. Too many people may believe the algorithm and want Jessie arrested. We are terrifyingly close to that nightmare scenario.

Algorithmic criminology is now widely used to determine bail for people who have been arrested, to determine prison terms for people who have been convicted, and to decide on parole for people who are in prison. Richard Berk is a professor of criminology and statistics at the University of Pennsylvania. One of his specialties is algorithmic criminology: “predicting criminal behavior and/or victimization using statistical/machine learning procedures.” He wrote, “The approach is a ‘black box,’ for which no apologies are made,” giving an alarming example: “If I could use sunspots or shoe size or the size of the wristband on their wrist, I would. If I give the algorithm enough predictors to get it going, it finds things you wouldn’t expect.” Things that we don’t foresee are usually things that don’t make sense but happen to be correlated.
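Berk’s point about unexpected predictors is easy to demonstrate: give an algorithm enough irrelevant variables and some of them will correlate with any outcome purely by chance. Here is a quick simulation in that spirit; every number is random noise with no connection to any real crime data.

```python
import random

random.seed(1)

n = 30              # number of "people"
n_predictors = 200  # irrelevant, sunspot/shoe-size-style predictors

# A purely random outcome variable.
outcome = [random.random() for _ in range(n)]

def corr(a, b):
    """Pearson correlation coefficient between two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# The strongest (absolute) correlation found among 200 noise predictors.
best = max(
    abs(corr([random.random() for _ in range(n)], outcome))
    for _ in range(n_predictors)
)
print(f"strongest chance correlation among {n_predictors} noise predictors: {best:.2f}")
```

With 200 predictors and only 30 observations, the best chance correlation is typically sizable – exactly the kind of “things you wouldn’t expect” a black-box algorithm will happily latch onto.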

Disturbingly, Berk and other intelligent, well-meaning people think that bail, sentencing, and parole decisions should be based on what could well be statistical coincidences. In addition, some predictors may very well be proxies for gender, race, sexual orientation, and other factors that should not be taken into account. People should not be given high bail or unreasonable sentences, or be denied parole, because of their gender, race, or sexual orientation – because they belong to certain groups. What matters are the specific facts of a particular case.

If decisions about releasing people from prison are based on AI algorithms, it is only a short step to putting people in prison based on statistical algorithms. In 2016, two Chinese researchers reported that they could apply their computer algorithm to scanned facial photos and predict with 89.5 percent accuracy whether a person is a criminal. They reported that their algorithm “identified some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle.” Such algorithms are not only easily misled by statistical coincidences; they are inherently discriminatory. Indeed, it’s hard to imagine anything more racially discriminatory than facial recognition software.

Yet one blogger wrote:

What if they just put the people who look like criminals in an internment camp? What harm would that do? They should just stay there until they go through an extensive rehabilitation program. Even if some who went were innocent, how could this adversely affect them in the long run?

As I’ve written elsewhere, the real danger today is not that computers are smarter than us, but that we think computers are smarter than us, and therefore rely on them to make decisions they shouldn’t.

You may also want to read: The AI illusion – state-of-the-art chatbots are not what they seem. GPT-3 is a lot like a good magician’s performance. You can thank human labelers, not some intelligence on the part of GPT-3, for improvements in the answers. (Gary Smith)
