
This AI algorithm supposedly predicts big-city crime before it happens. Is that a good idea?

Crime prediction has long been controversial, but University of Chicago researchers argue their new model could be used to monitor the police themselves.


A group of social scientists at the University of Chicago claim to have invented a computer algorithm that can predict future crimes up to one week in advance with 90% accuracy.


They tested the model’s accuracy in eight major U.S. cities, each of which the team divided into tiles roughly two blocks on a side. Starting with Chicago, their home turf, they studied historical data on violent crime and property crime inside each tile, recording how patterns shifted over time to devise predictions for future events. They say the model worked equally well in seven more cities (Atlanta; Austin; Detroit; Los Angeles; Philadelphia; Portland, Oregon; and San Francisco), giving them an algorithm that boasts “predictive accuracy far greater than has been achieved in past,” they wrote.

Their tool departs from past models, which tended to pin crime to geographic “hotspots,” relying instead on what their paper in Nature Human Behaviour calls “spatiotemporal point processes unfolding in social context.” By analyzing hundreds of thousands of different patterns, the researchers argue they can determine the risk of crime at a specific time and place. This, they say, lets them see not just how crime changes over time, but also how policing evolves alongside it.

This style of “crime prevention” has never found a big following among criminal-justice reformers—and may even raise red flags for people put off by the pre-crime scenario famously depicted in the 2002 movie Minority Report, based on a story by Philip K. Dick.


In fact, models seeking to predict crime before it occurs have a history of being inaccurate, exacerbating racial disparities, and justifying the concentration of police resources in better-off parts of town. Among the most infamous was the Chicago Police Department’s surveillance system, used circa 2012 to 2019 to monitor people it claimed had a “high propensity toward violent, gang-related crime.” A City of Chicago Inspector General’s report later found that of the 398,684 individuals on this “Strategic Suspect List” (which included every person arrested and fingerprinted since 1993), just 16% were ever confirmed to be gang members, and a Chicago Tribune investigation found 13% had never been charged with any violent crime.

The University of Chicago researchers wrote that they’re well aware of these abuses. They argue their algorithm could even be used to monitor the police themselves, almost karmically: it unearthed evidence that these cities’ police forces are more responsive to crime in predominantly white, higher-income areas than in less-affluent neighborhoods.

Nature adds its own quasi-disclaimer to the research through a side commentary titled “The Promises and Perils of Crime Prediction,” penned by Andrew Papachristos, a Northwestern University sociologist known for his research on gun violence. Papachristos has critiqued police for “commandeer[ing]” his research in the past and using it to “identify ‘strategic subjects’ and guide enforcement operations.” He applauds his colleagues’ new model but leaves this ominous note: “The question of what others will do with these powerful new statistical research tools, however, is perhaps a more fraught task.”
