The New York Police Department (NYPD) recently unveiled a new digital tool that it says can sift through police reports to help officers spot patterns of crimes potentially committed by the same criminals.
The NYPD says the program, dubbed Patternizr, can save time and help its investigators detect patterns in burglaries, robberies, and grand larcenies they might not have noticed by manually combing through the tens of thousands of reports the department receives every year. The NYPD employs civilian crime analysts who focus on such work, but it is still laborious, and it can be hard for analysts to familiarize themselves with reports produced in precincts other than their own.
“Traditionally this is something that’s been somewhat difficult for us at the department, because crime analysts are concentrated on the precinct to which they’re assigned,” says Evan Levine, the NYPD’s assistant commissioner for data analytics and one of the authors of a paper on Patternizr recently published in the INFORMS Journal on Applied Analytics. “Obviously criminals don’t pay attention to precinct boundaries.”
The NYPD emphasizes that the tool doesn’t look at suspects’ race or gender in matching crimes, and that its tests have found the program no more likely to suggest links to crimes committed by people of certain races than random sampling from police reports. But civil liberties advocates still cautioned that Patternizr and other tools like it could exacerbate existing biases depending on how they’re used.
“The institution of policing in America is systemically biased against communities of color,” said New York Civil Liberties Union legal director Christopher Dunn in a statement shared with Fast Company. “Any predictive policing platform runs the risks of perpetuating disparities because of the over-policing of communities of color that will inform their inputs. To ensure fairness, the NYPD should be transparent about the technologies it deploys and allow independent researchers to audit these systems before they are tested on New Yorkers.”
Cracking the case of the needle-wielding shoplifter
Patternizr is currently being used by analysts on about 600 crimes per week, Levine says, looking for similarities between particular incidents and previously reported crimes based on attributes like the distance between them, the times of day they took place, suspects’ height and weight, and unstructured text descriptions of what happened. It was inspired by earlier research at the Massachusetts Institute of Technology, where scientists tested a version of such a tool with the Cambridge Police Department.
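The general approach of scoring pairs of incidents on attribute-by-attribute similarity can be sketched roughly as follows. This is a minimal illustration, not the NYPD’s actual model: the feature set matches the attributes named above, but the decay constants, the fixed weights, and the simple word-overlap text measure are all hypothetical stand-ins for whatever the real system learns from labeled crime patterns.

```python
from dataclasses import dataclass
from math import hypot, exp

@dataclass
class Incident:
    x_km: float        # location, km east of an arbitrary origin
    y_km: float        # location, km north of an arbitrary origin
    hour: float        # time of day, 0-24
    height_cm: float   # reported suspect height
    weight_kg: float   # reported suspect weight
    narrative: str     # unstructured text description

def similarity(a: Incident, b: Incident) -> float:
    """Score two incidents in [0, 1]; higher means more likely the same pattern."""
    # Distance between incidents: closer scores higher (decay over ~5 km).
    dist = hypot(a.x_km - b.x_km, a.y_km - b.y_km)
    s_dist = exp(-dist / 5.0)

    # Time of day: circular difference (11 p.m. is near midnight).
    dt = abs(a.hour - b.hour)
    dt = min(dt, 24 - dt)
    s_time = exp(-dt / 4.0)

    # Suspect description: penalize height and weight gaps.
    s_body = exp(-abs(a.height_cm - b.height_cm) / 10.0) * \
             exp(-abs(a.weight_kg - b.weight_kg) / 10.0)

    # Narrative text: Jaccard overlap of word tokens.
    ta = set(a.narrative.lower().split())
    tb = set(b.narrative.lower().split())
    s_text = len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    # Hypothetical fixed weights; a production system would learn
    # how to combine per-attribute scores from analyst-confirmed patterns.
    return 0.3 * s_dist + 0.2 * s_time + 0.2 * s_body + 0.3 * s_text
```

In practice a tool like this would rank every past report against a new incident and surface the highest-scoring candidates for a human analyst to review, rather than declaring matches on its own.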
“What they did in New York is what we dreamed of in Cambridge,” says Cynthia Rudin, who worked on the MIT project and is now an associate professor of computer science, electrical and computer engineering at Duke University, where she runs the Prediction Analysis Lab. “NYPD had a whole data science team, whereas we just had a professor and a graduate student, so we couldn’t launch it at the scale that they launched it.”
The tool has already aided in investigations, the NYPD says. In one case highlighted by the department, an analyst looking into a robbery that involved a suspect shoplifting power drills and attacking a store employee with a hypodermic needle was able to find a similar robbery in a “distant precinct,” according to the paper. The attacker was arrested and pleaded guilty to larceny and felony assault. In another case, an analyst was able to spot an apparent series of thefts from gym lockers. The department doesn’t track how many arrests can be attributed to the use of the tool.
Patternizr doesn’t include traditional neighborhood names, instead dividing the city into large squares more than nine miles on each side in an effort to avoid geography serving as a substitute for race or ethnicity, says Alex Chohlas-Wood, one of the paper’s authors who worked on the project while director of analytics at the NYPD.
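The coarse-bucketing idea described above is simple to sketch: snap each location to a large grid cell so the model sees only a rough region, not a neighborhood that could proxy for race or ethnicity. The cell size below is illustrative, chosen to roughly match the “more than nine miles” (about 14.5 km) figure; the actual grid is the NYPD’s.

```python
def grid_cell(x_km: float, y_km: float, cell_km: float = 15.0) -> tuple:
    """Map a location to a coarse grid cell, discarding neighborhood identity.

    cell_km is an illustrative assumption approximating the paper's
    description of squares more than nine miles on a side.
    """
    return (int(x_km // cell_km), int(y_km // cell_km))
```

Two incidents in different neighborhoods but the same 15 km square get identical location features, so the model cannot distinguish them by neighborhood alone.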
The tool makes no attempt to predict where future patterns will occur, and potentially related crimes it suggests still need to be reviewed by human analysts and officers. But Darius Charney, a senior staff attorney at the Center for Constitutional Rights who successfully represented plaintiffs in a suit challenging the department’s controversial stop-and-frisk practices, cautioned that patterns generated with the tool could still reflect bias in department data. They could also feed biased policing if, say, people of color end up disproportionately stopped or searched in areas with reported crime patterns.
“I’ve always been uncomfortable with the notion of the crime pattern as a basis for routine law enforcement action,” he says.
Accountability of the algorithm and the effects
The department deliberately chose to publish a paper on Patternizr as part of an effort to be transparent with the public about the tool, says Chohlas-Wood, who is now deputy director of the Stanford Computational Policy Lab. Still, given the NYPD’s spotty history on racial issues, civil liberties advocates argued the algorithms and code should be made as public as possible and subject to outside auditing to test for bias.
“The fundamental thing that we always say in this context is that when you’re using new tools, it’s really important to be as transparent as possible about those tools and how you’re using those tools,” says Faiza Patel, co-director of the Liberty and National Security Program at the Brennan Center for Justice at New York University School of Law. “This whole project was started in 2016, and we’re now finding out pretty late in the process that this is happening, and that they’ve been testing and using this system for a while.”
Phillip Atiba Goff, cofounder of the Center for Policing Equity at John Jay College of Criminal Justice, says the NYPD should also work with community groups to address concerns about potential bias or negative effects arising from the tool’s use and the arrest patterns it may lead to.
“There also needs to be not just accountability in terms of the algorithm is fair but also accountability of the effects,” he says.
Patternizr is likely to continue to evolve. So far, internal suggestions have included comparing grand larcenies with petit larcenies, which are essentially similar crimes with smaller dollar values, and more broadly comparing types of crimes that differ in only some elements, such as the use of force that distinguishes robberies from larcenies. And while civil libertarians say the program may be used for good in making policing more efficient, it’s likely to face continued scrutiny over potential harms.
“Speeding up human cognition may in fact be an accelerant to human bias,” says Goff.