A Literal Minority Report: Predictive Policing’s Algorithmic Biases

Genevieve Poblano / Digital Trends

Predictive policing was supposed to change the way policing works, ushering in a world of intelligent law enforcement in which bias was removed and police responded to data, not hunches. But a decade after most of us first heard the term "predictive policing," it seems clear that it hasn't worked. Following a public backlash, the technology's use has declined significantly compared to just a few years ago.

In April of this year, Los Angeles, which according to the LA Times "pioneered crime prediction with data," cut funding for its predictive policing program, citing its cost. "It's a hard decision," police chief Michel Moore told the LA Times. "It's a strategy we used, but the cost projections of hundreds of thousands of dollars to spend on that, versus finding that money and directing it toward other, more central activities, is what I have to do."

What went wrong? How could something billed as "smart" technology lead to further prejudice and discrimination? And is the dream of predictive policing one that could be fixed with the right algorithm, or a dead end for a fairer society that is currently reckoning with how policing works?

The promise of predictive policing

Predictive policing in its current form traces back to a 2009 paper by psychologist Colleen McCue and Los Angeles police chief Charlie Beck, titled "Predictive Policing: What Can We Learn from Wal-Mart and Amazon about Fighting Crime in a Recession?" The paper described how big data was being used by large retailers to reveal patterns in past customer behavior that could be used to predict future behavior. Thanks to advances in both computing and data collection, McCue and Beck suggested that it was possible to gather and analyze crime data in real time. That data could then be used to anticipate, and more effectively prevent, crimes that had not yet occurred.

In recent years, predictive policing has evolved from a throwaway idea into a reality in many parts of the United States and the rest of the world. Its goal is to transform policing from reactive to proactive, leveraging breakthroughs in data-driven technology that make it possible to spot patterns in real time and act on them.

Predictive policing map. The Washington Post / Getty

"There are two main forms of predictive policing," Andrew Ferguson, a professor of law at the University of the District of Columbia's David A. Clarke School of Law and author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, told Digital Trends. "[These are] location-based policing and person-based policing."

In both cases, predictive policing systems assign a risk score to the person or location in question, prompting police to follow up at set intervals. The first approach, location-based predictive policing, focuses primarily on police patrols. It involves crime mapping: analyzing where future crimes are most likely to occur based on past statistics.
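As a rough illustration of the location-based idea, here is a minimal sketch in Python. The incident log, grid cells, and scoring rule are all invented for illustration; real systems use far more elaborate models.

```python
from collections import Counter

# Hypothetical log of past incidents, recorded by (x, y) map grid cell.
incidents = [(2, 3), (2, 3), (2, 3), (2, 3), (5, 1), (5, 1), (0, 0)]

# Naive hotspot score: each cell's share of all recorded incidents.
counts = Counter(incidents)
total = sum(counts.values())
scores = {cell: n / total for cell, n in counts.items()}

# Flag the top-scoring cells for extra patrols.
hotspots = sorted(scores, key=scores.get, reverse=True)[:2]
print(hotspots)
```

Note that the score is driven entirely by past records: a cell looks "risky" only because crimes were recorded there before, which is exactly the property critics of these systems seize on.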

Rather than helping to eliminate problems like racism and other systemic biases, predictive policing can actually help entrench them.

The second approach focuses on predicting the likelihood that a particular person poses a future risk. In 2013, for example, a Chicago police commander was sent to the home of 22-year-old Robert McDaniel, who had been flagged by an algorithm as a potential perpetrator or victim of gun violence in Chicago. The algorithm used to compile the resulting "heat list" looked for patterns that could predict future offenders or victims, even if they had done nothing to warrant that scrutiny beyond fitting a profile.

As the Chicago Tribune put it: "The strategy calls for warning those on the heat list individually that further criminal activity, even for the most petty offenses, will result in the full force of the law being brought down on them."

The dream of predictive policing was that, by acting on quantifiable data, police would become not only more efficient but also less prone to guesswork, and therefore to bias. Proponents said it would change policing for the better and usher in an era of smart policing. Almost from the beginning, however, predictive policing has had strong critics. They argue that, rather than eliminating problems like racism and other systemic biases, predictive policing can actually help entrench them. And it's hard to argue that they don't have a point.

Discriminatory algorithms

The idea that machine-based predictive policing systems can learn to discriminate based on factors such as race is nothing new. Machine learning tools are trained on massive quantities of data. And so long as that data is collected by a system in which race remains an overwhelming factor, it can lead to discrimination.

Police officer on patrol. The Washington Post / Getty

As Renata M. O'Donnell writes in a 2019 article titled "Challenging Racist Predictive Policing Algorithms Under the Equal Protection Clause," machine learning algorithms learn from data drawn from a justice system in which "black Americans are incarcerated in state prisons at a rate that is 5.1 times the imprisonment of whites, and one in three black men born today can expect to go to prison in his lifetime if current trends continue."

"Data is not objective," Ferguson told Digital Trends. "Data is just us reduced to binary code. Data-driven systems that operate in the real world are no more objective, fair, or unbiased than the real world. If your real world is structurally unequal or racially discriminatory, a data-driven system will mirror those societal inequities. The inputs going in are infected with bias, the analysis will be biased, and police mechanisms don't change just because there's technology guiding the systems."

Ferguson cites arrests as an example of a seemingly objective factor for predicting risk. Arrest numbers, however, are skewed by the allocation of police resources (e.g., where officers patrol) and by the types of crimes that typically warrant arrests. This is just one example of potentially problematic data.
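To make that concrete, here is a toy calculation (all numbers invented) showing how arrest counts can inherit patrol bias even when the underlying behavior is identical:

```python
# Two neighborhoods with identical true offense rates but unequal patrols.
true_offense_rate = {"A": 0.10, "B": 0.10}
patrol_intensity = {"A": 0.8, "B": 0.2}  # A is patrolled 4x as heavily
population = {"A": 1000, "B": 1000}

# An offense only becomes an arrest if a patrol observes it, so the
# "objective" arrest count inherits the patrol allocation.
arrests = {
    hood: round(population[hood] * true_offense_rate[hood] * patrol_intensity[hood])
    for hood in population
}

# A risk model that treats arrests as ground truth now rates A "riskier."
risk_score = {hood: arrests[hood] / population[hood] for hood in arrests}
print(arrests)     # {'A': 80, 'B': 20}
print(risk_score)  # {'A': 0.08, 'B': 0.02}
```

Despite identical behavior in both neighborhoods, A ends up with four times as many arrests, and any model fed this data will reproduce that gap as "risk."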

The dangers of dirty data

In data mining, missing and incorrect data is sometimes referred to as "dirty data." A paper by researchers at New York University's AI Now Institute expands that term to cover data shaped by corrupt, biased, and unlawful practices, whether deliberately manipulated or distorted by individual and societal prejudices. It might, for instance, include data generated from the arrest of an innocent person who had evidence planted on them or was otherwise falsely accused.

There is a certain irony in the fact that the data society's demands for quantification and cast-iron numerical targets have, over recent decades, produced a whole lot of ... well, really bad data. The HBO series The Wire depicted the real-world phenomenon of "juking the stats," and the years since the show aired have furnished plenty of examples of actual systemic data manipulation, falsified police reports, and unconstitutional practices that have harmed innocent people.

Christian Science Monitor / Getty

Bad data that allows people in power to artificially hit targets is one thing. Combine it with algorithms and predictive models that use it as the basis for modeling the world, however, and you potentially get something far worse.

Researchers have shown how dubious crime data plugged into predictive policing algorithms can create "runaway feedback loops," in which police are repeatedly sent to the same neighborhoods regardless of the true crime rate. One of that paper's co-authors, computer scientist Suresh Venkatasubramanian, says that machine learning models can bake incorrect assumptions into their modeling. As the old saying goes, to a person with a hammer every problem looks like a nail: these systems model only certain elements of a problem and imagine only one kind of outcome.
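The mechanism can be sketched in a few lines of Python. This is a deliberately crude toy, not the model from the paper: two districts have identical true crime rates, crime is only recorded where patrols go, and patrols are dispatched greedily to wherever the data says crime is highest.

```python
import random

random.seed(0)

TRUE_RATE = [0.3, 0.3]  # both districts have the SAME true crime rate
recorded = [51, 49]     # a tiny historical imbalance in the records

for day in range(365):
    # Greedy dispatch: send today's patrol to the district the data
    # says is "hotter" (i.e., the one with more recorded crime).
    target = 0 if recorded[0] >= recorded[1] else 1
    # Crime is only *recorded* where police are present.
    if random.random() < TRUE_RATE[target]:
        recorded[target] += 1

# District 0 keeps "winning," so all patrols (and all new records)
# go there; district 1's count never changes.
print(recorded)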

"[Something that isn't addressed in these models] is: to what extent are you modeling the fact that putting more police into an area can actually lower the quality of life for the people who live there?" Venkatasubramanian, a professor at the University of Utah's School of Computing, told Digital Trends. "We assume that more police is a better thing. But as we're now seeing, having more police is not necessarily a good thing. It can actually make things worse. In no model that I've ever seen has anyone asked what it costs to put more police into an area."

The uncertain future of predictive policing

Those who work in predictive policing sometimes unironically use the term "minority report" to describe the kind of prediction they do. The phrase comes from the 2002 film of the same name, itself loosely based on a 1956 short story by Philip K. Dick. In Minority Report, a special PreCrime police department arrests criminals based on foreknowledge of crimes that will be committed in the future. Those predictions are supplied by three psychics known as "precogs."

The twist in Minority Report, however, is that the predictions are not always accurate. Dissenting visions from one of the precogs offer an alternate view of the future, one that is suppressed for fear of making the system appear untrustworthy.

Internal audits showed that the tactic didn't work. The prediction lists were not just inaccurate; they were ineffective.

Right now, predictive policing faces an uncertain future. Alongside new tools such as facial recognition, the technology available to law enforcement has never been more powerful. At the same time, growing awareness of predictive policing's use has triggered a public backlash that may actually have helped quash it. Ferguson told Digital Trends that the use of predictive policing tools has seen a "downturn" over the past few years.

"At its peak, [location-based predictive policing] was in use in over 60 major cities and growing, but as a result of successful community organizing it has been largely scaled back and/or replaced with other forms of data-driven analytics," he said. "In short, the term predictive policing became toxic, and police departments learned to rename what they were doing with data. Person-based predictive policing has fallen off more steeply. The two cities most invested in its creation, Chicago and Los Angeles, have both pulled back from their person-based strategies after biting community criticism and devastating internal audits showing that the tactic didn't work. The prediction lists were not just inaccurate; they were ineffective."

The wrong tools for the job?

Rashida Richardson, director of policy research at the AI Now Institute, said that the use of this technology remains too opaque. "We still don't know, because there's a lack of transparency around government acquisition of technology, and many loopholes in existing procurement practices can shield certain technology purchases from public scrutiny," she said. She gives the example of technology that might be given to a police department for free, or purchased through a third party. "We know from research like mine, and from media reporting, that many of the biggest police departments in the U.S. have used the technology at some point, but there are also many small police departments that are using it, or have used it for limited periods of time."

Given the current reckoning over the role of police, will there be a temptation to re-embrace predictive policing as a tool for data-driven decision-making, perhaps under less dystopian sci-fi branding? Such a resurgence is possible. But Venkatasubramanian is highly skeptical that machine learning, as currently practiced, is the right tool for the job.

"The whole of machine learning, and its success in modern society, is based on the premise that, whatever the actual problem, it ultimately comes down to collecting data, building a model, predicting outcomes, and not worrying about the domain," he said. "You can write the same code and apply it in 100 different places. That's the promise of abstraction and portability. The problem is that when we use what are called sociotechnical systems, where humans and technology are intertwined in complicated ways, you can't do that. You can't just plug a piece in and expect it to work. Because [there are] ripple effects from putting that piece in, and in such a system there are different players with different agendas who adapt the system to their own needs in different ways. All of these things have to be taken into account when you talk about effectiveness. Yes, you can say in the abstract that everything will work fine, but there is no abstract. There is only the context you're working in."
