Crime prediction tool also exposes police bias

(TNS) — For once, crime-predicting algorithms can be used to uncover, rather than amplify, bias in the police force.

A group of social and data scientists developed a machine learning tool they hoped would better predict crime. The scientists say they succeeded, but their work also revealed inferior police protection in poorer neighborhoods in eight major US cities, including Los Angeles.

Rather than justify more aggressive policing in those areas, however, the hope is that the technology will lead to “policy changes that result in a more equitable, needs-based allocation of resources,” including dispatching officials other than law enforcement officers to certain kinds of calls, according to a report published Thursday in the journal Nature Human Behaviour.

Developed by a team led by University of Chicago professor Ishanu Chattopadhyay, the tool predicts crime by learning patterns from massive amounts of public data on property crime and violent crime.
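
The report describes learning predictive patterns from logs of recorded crime events. Purely as an illustration, the Python sketch below builds a toy version of that kind of pipeline: synthetic daily crime counts per spatial tile of a city, a sliding window of recent history as features, and an off-the-shelf classifier predicting whether a tile records any crime in the following week. The synthetic data, the tiling, and the logistic regression model are all assumptions made for this sketch, not the authors’ actual algorithm.

```python
# Toy sketch of event-based crime prediction (NOT the study's method):
# divide a city into spatial tiles, build a daily event-count series per
# tile, and predict whether each tile records a crime in the next week.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_tiles, n_days, window = 100, 730, 28  # ~2 years of synthetic daily counts

# Synthetic stand-in for public crime-report data: Poisson counts per tile/day.
rates = rng.uniform(0.02, 0.5, size=n_tiles)
counts = rng.poisson(rates[:, None], size=(n_tiles, n_days))

X, y = [], []
for t in range(window, n_days - 7):
    for tile in range(n_tiles):
        X.append(counts[tile, t - window:t])            # past 4 weeks of counts
        y.append(int(counts[tile, t:t + 7].sum() > 0))  # any crime next week?
X, y = np.array(X), np.array(y)

# Train on the earlier 80% of examples, evaluate on the rest.
split = int(0.8 * len(X))
model = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
print("held-out accuracy:", model.score(X[split:], y[split:]))
```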

Chattopadhyay and his colleagues said they wanted to prevent the system from being abused.

“Rather than simply increasing the power of states by predicting the when and where of anticipated crime, our tools allow us to audit them for enforcement biases, and garner deep insight into the nature of the (intertwined) processes through which policing and crime co-evolve in urban spaces,” according to their report.

For decades, law enforcement agencies across the country have used digital technology for surveillance and forecasting in the belief that it would make policing more efficient and effective. But in practice, civil liberties advocates and others have argued, such programs rely on biased data that lead to heavier patrols in Black and Latino neighborhoods or false accusations against people of color.

Chattopadhyay said previous attempts to predict crime have not always accounted for systemic bias in law enforcement and have often rested on flawed assumptions about crime and its causes. Such algorithms gave too much weight to variables such as the presence of graffiti, he said. They focused on specific “hot spots” while failing to account for cities’ complex social systems or the effects of police enforcement on crime, he said. The predictions sometimes led police to flood certain neighborhoods with extra patrols.

His team’s efforts have shown promising results in some places. According to the report, the tool predicted future crimes up to a week in advance with about 90% accuracy.

Running a separate model led to an equally important discovery, Chattopadhyay said. By comparing arrest data across neighborhoods of different socioeconomic levels, the researchers found that crime in wealthier areas of the city resulted in more arrests in those areas, while arrests in poorer neighborhoods dropped at the same time.

But the opposite was not true. Crime in poor neighborhoods didn’t always lead to more arrests there, suggesting “enforcement bias,” the researchers concluded. The model drew on several years of data from Chicago, but the researchers found similar results in seven other large cities: Los Angeles; Atlanta; Austin, Texas; Detroit; Philadelphia; Portland, Ore.; and San Francisco.
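
To make that comparison concrete, the hypothetical Python sketch below simulates crime and arrest counts for two neighborhood strata and measures how closely arrests track crime in each; a markedly weaker crime-to-arrest relationship in one stratum is the kind of asymmetry the researchers describe as enforcement bias. The data, column names, and correlation-based measure here are invented for illustration and are not the researchers’ model.

```python
# Hypothetical illustration of the bias check described above: compare how
# arrest counts respond to crime in wealthier vs. poorer neighborhoods.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = 365
df = pd.DataFrame({
    "day": np.tile(np.arange(days), 2),
    "stratum": ["wealthy"] * days + ["poor"] * days,
})

crime = rng.poisson(5, size=2 * days)
# Simulate an enforcement gap: arrests track crime closely in the wealthy
# stratum but only weakly in the poor stratum.
response = np.where(df["stratum"] == "wealthy", 0.8, 0.2)
df["crimes"] = crime
df["arrests"] = rng.poisson(response * crime + 0.5)

# A much stronger crime-to-arrest correlation in one stratum than the other
# is the kind of asymmetry the article calls "enforcement bias."
for name, group in df.groupby("stratum"):
    corr = group["crimes"].corr(group["arrests"])
    print(name, "crime-arrest correlation:", round(corr, 2))
```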

The danger of any form of artificial intelligence used by law enforcement, the researchers said, lies in misinterpreting the results and “creating a harmful feedback loop of sending more police into areas that may already be over-policed but under-protected.”

To avoid such pitfalls, the researchers decided to make their algorithm available for public scrutiny, so anyone can verify that it’s being used properly, Chattopadhyay said.

“Often the systems deployed aren’t very transparent, and so there’s the fear that there’s bias built in and there’s some real risk — because the algorithms themselves or the machines might not be biased, but the input could be,” said Chattopadhyay in a telephone interview.

The model his team developed can be used to monitor police action. “You can turn it around and check for bias,” he said, “and check whether policing is fair as well.”

Most machine learning models used by law enforcement today are built on proprietary systems that make it difficult for the public to know how they work or how accurate they are, said Sean Young, executive director of the University of California Institute for Prediction Technology.

Given some of the criticism of the technology, some data scientists have become more aware of potential bias.

“This is one of a growing number of research papers or models that are now trying to find some of that nuance and better understand the complexities of crime prediction and try to both make it more accurate and address the controversy,” Young, a professor of emergency medicine and computer science at UC Irvine, said of the just-published report.

Predictive policing can also be more effective, he said, if it’s used to work with community members to solve problems.

Despite the study’s promising findings, it is likely to raise eyebrows in Los Angeles, where police critics and privacy advocates have long opposed the use of predictive algorithms.

In 2020, the Los Angeles Police Department stopped using a predictive policing program called PredPol that critics said led to heavier policing in minority neighborhoods.

At the time, Police Chief Michel Moore insisted he ended the program because of budget constraints brought on by the COVID-19 pandemic. He had previously said he disagreed with the view that PredPol unfairly targeted Latino and Black neighborhoods. Santa Cruz later became the first city in the country to ban predictive policing outright.

Chattopadhyay said he understands how machine learning evokes “Minority Report,” the science fiction story set in a dystopian future where people are swept up by police for crimes they have yet to commit.

But the effect of the technology is only beginning to be felt, he said.

“There’s no way to put the cat back in the bag,” he said.

©2022 Los Angeles Times. Distributed by Tribune Content Agency, LLC.
