About a year before Jeff Bezos flew to space, he got called on the carpet. Amazon had just responded to George Floyd’s murder at the hands of police with a $10 million donation, when observers began pointing out Amazon’s flagrant hypocrisy. After all, the company had spent years selling its inherently biased facial recognition software directly to police, who could be using that software to track activists in a burgeoning protest movement Amazon ostensibly supported. Pretty soon, the company issued a stunning secondary announcement: that Amazon had put a one-year pause on its facial recognition tech, in hopes that Congress would by then have created legislation to properly regulate it.
Amazon’s cover-your-ass gesture, while certainly laudable, has had very little impact thus far on the bigger picture. Along with smaller companies like Rank One, Cognitec, and NEC, which continue to sell facial recognition software to law enforcement agencies, other strains of similar technology are further expanding the surveillance state and perpetuating a culture of fear and racism in America. A year after the George Floyd protests, we remain at an inflection point for law enforcement, but instead of pausing more potentially dangerous surveillance systems until the government sorts out regulation, the vast majority of tech companies and lawmakers have chosen to continue full speed ahead. Together, they’re applying a move-fast-and-break-things mentality to things that, if broken, cannot be fixed—namely privacy, freedom, and human lives.
If demand for security solutions is running high lately, it’s because violent crime has risen as well. Just last week, between July 17 and July 23, at least 430 people were killed in 915 shootings across the country, according to Gun Violence Archive’s collaboration with ABC’s This Week. Only halfway through the year, 2021 is already on pace to top 2020 in gun-related deaths—while 2020 was already the deadliest year for shootings in two decades.
Prominent Republicans and the National Fraternal Order of Police have tied the crime spike to the Defund the Police rallying cry that emerged from last year’s protests, despite the fact that little defunding has actually taken place, and that violent crime is also up in cities that maintained or increased funding for police. The panic around the surge also disregards the anomaly factor of a once-in-a-century pandemic, prior to which violent crime had plunged precipitously since the early 1990s. But panic has never been known for producing rational, analytical thinking; so rather than looking into America’s surplus of guns and lack of a social safety net, the powers that be are beefing up surveillance systems.
A lot of people have an innate fear of being monitored and tracked. They put pieces of tape over their laptop camera lens, like a Band-Aid that might heal any vulnerability to being seen involuntarily. They avoid smart home gadgets, such as Amazon Echo, for fear that everything they say will be recorded and archived. Eventually, though, everyone must go outside and enter the public sphere, at which point they find themselves at constant risk of being surveilled in one way or another.
Smile—you’re on candid panopticon.
What lawmakers and law enforcers both seem to want, in theory, is the digital equivalent of magic: the omniscient ability to catch criminals in the act, find them anywhere they may attempt to flee, and even predict their crimes before they happen. In practice, however, the equipment that tech companies have produced to meet these ends ranges from spotty to catastrophic.
Let’s take a look at some of what’s taking a look at us.
ShotSpotter

ShotSpotter is, for lack of a better way to put it, Shazam for crime noise. It’s a tool that uses hidden microphone sensors to detect the sound and location of gunshots, then sends an alert to participating police officers. Currently in use in more than 100 cities, the technology generates an average of 21,000 alerts each year in Chicago alone. The company claims it is 97% accurate.
Not only does the device inform officers about potential active crime scenes, but its website also promises to help “build courtroom-ready cases.” Unfortunately, as Motherboard reported earlier this week, those cases are often built with altered evidence.
According to the report, ShotSpotter’s analysts can manually override its algorithms and “reclassify” a sound as a gunshot, or change other details, such as the location of the sound or the time it occurred, to suit the needs of a case. These analysts have frequently modified alerts at the request of police departments.
The danger doesn’t end there, either. The law enforcement tendency to use ShotSpotter exclusively in predominantly Black and Latino communities is poised to further the already disproportionate rate of police brutality in those communities. It was a ShotSpotter alert this past March, for instance, that sent police to a street in Chicago where they ended up shooting and killing 13-year-old Adam Toledo.
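ShotSpotter’s exact methods are proprietary, but acoustic gunshot location in general relies on multilateration: several microphones timestamp the same bang, and the differences in arrival times narrow down where it originated. Here is a minimal, hypothetical sketch of that idea; the sensor layout, coordinates, and brute-force search are invented for illustration and are not ShotSpotter’s implementation:

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # meters per second in air at roughly 20 degrees C

def arrival_time(source, sensor):
    """Seconds for sound to travel from source to sensor."""
    return math.dist(source, sensor) / SPEED_OF_SOUND

def locate(sensors, times, grid_step=5.0, extent=500.0):
    """Brute-force grid search for the point whose predicted pairwise
    arrival-time differences best match the observed ones."""
    best, best_err = None, float("inf")
    steps = int(extent / grid_step) + 1
    pairs = list(itertools.combinations(range(len(sensors)), 2))
    for i in range(steps):
        for j in range(steps):
            point = (i * grid_step, j * grid_step)
            pred = [arrival_time(point, s) for s in sensors]
            # Compare time *differences* so the unknown moment the
            # shot was fired cancels out of the comparison.
            err = sum(((times[a] - times[b]) - (pred[a] - pred[b])) ** 2
                      for a, b in pairs)
            if err < best_err:
                best, best_err = point, err
    return best

# Simulate a gunshot at (120, 340) heard by four sensors on a 500 m square.
sensors = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]
true_source = (120.0, 340.0)
times = [arrival_time(true_source, s) for s in sensors]
print(locate(sensors, times))  # → (120.0, 340.0)
```

Real systems must also contend with echoes, fireworks, and backfiring cars, which is where misclassification, and the human “reclassification” described above, enters the picture.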
License Plate Readers
Atlanta-based Flock Safety made headlines recently for its $150 million Series D round of funding, and pledge to reduce crime by 25% in the next three years. Operating on motion sensors, the device pairs solar-powered license-plate readers with cloud-based software. Police have used license-plate readers for at least a decade, but the ones made by Flock Safety are said to be more powerful than their predecessors. They automatically take down the make, model, color, and distinguishing marks of any vehicle that passes by, and record the date and time as well. Flock Safety also issues an alert whenever it spots a known stolen vehicle, or one that’s fled a crime scene. The product is already set up in 1,200 communities in 40 states, and used by over 700 law enforcement agencies.
Just like gunshot detection systems, however, license plate readers can be used for unsavory ends.
Beyond merely solving crimes like car theft, some of Flock Safety’s competitors have been adopted by ICE agents to track down undocumented immigrants. Nor are these devices immune to error. In 2018, for instance, a license-plate reader in the Bay Area led police to pull over a vehicle and point guns at the driver and his passenger, all over a rental car incorrectly identified as stolen.
Amazon’s Ring and Neighbors
Acquired by Amazon in 2018, Ring is a doorbell-security camera hybrid that, triggered by motion sensors, records video and sends it to users’ phones and to Amazon’s cloud. It has famously captured some funny neighborhood moments, cementing it in some people’s imaginations as a quirky facet of modern living, but it also carries much more sinister connotations. Mainly, it turns the prospect of anyone ever coming near one’s door into an alarming event, providing law enforcement with a flood of false alerts—and an abundance of questionable opportunities.
Ring’s promise to consumers is “Protection at every corner,” and it fulfills that promise by deputizing Ring owners in the war on crime. Citizens report suspicious people, who may only be suspicious in their own minds, and either sic the cops on them—the so-called “Karen” problem—or possibly take matters into their own hands, like George Zimmerman. Police, meanwhile, know that many a door with a Ring on it holds footage that might help them either solve a case or indulge a wild hunch.
Up until recently, Ring let police privately ask users to share video footage their cameras have captured. Thanks to vocal criticism from civil liberties groups and privacy advocates, though, police now have to publicly make requests through Ring’s Neighbors app, a sort of digital bulletin board where people can post alerts for their community. Amazon also recently set limits on what footage police can ask for, and how much of it, after the Electronic Frontier Foundation found police officers attempting to use Ring footage to spy on Black Lives Matter protesters last summer. (Exactly the kind of hypocrisy Amazon got called out for with its gesture supporting Black Lives Matter last June.)
Ring’s collaboration with law enforcement runs deep, with the company even drafting press statements and social media posts for police to promote its cameras with, and officers seeming to relish the technology in turn. As Gizmodo reported in 2019, police in Fort Lauderdale, Florida, apparently raffled off Rings to members of certain communities, and were “specifically instructed by superiors” to verify that the users knew how to receive police requests for Ring footage.
Citizen

Similar to Ring’s Neighbors, Citizen is another highly localized crime-notification app. It launched in 2016 under the name “Vigilante,” which says pretty much everything about the company’s intentions, even before the part where it encouraged users to approach the crime problem “as a group” and see what they could “do about it.” (“Vigilante” here is not to be confused with Vigilant Solutions, a facial recognition software company employed by many police departments.) Vigilante was swiftly banned, and it later rebranded as Citizen, with a reduced emphasis on personal intervention. The app now has more than 7 million users across 30 cities.
Even with the new name, the app still indulges fantasies of vigilantism, and helps mold more Kyle Rittenhouses—with unreliable information, to boot. The app’s alerts are based on uncorroborated 911 calls, which sometimes get details wrong. Back in May, for instance, Motherboard reported that Citizen CEO Andrew Frame put out a $30,000 bounty for info leading to the arrest of a suspected arsonist, imploring his staff to “FIND THIS FUCK,” only to later discover that the man whose head he’d put a price on was innocent.
Even more recently, the app has gone beyond deputizing civilians in the war on crime to quietly hiring teams of amateur field reporters to scour cities like New York and Los Angeles for crime scenes to livestream. Anyone interested in making $200 for an 8-hour shift (in New York) or $250 for a 10-hour shift (in Los Angeles) can become, essentially, Jake Gyllenhaal’s creepy character from the film Nightcrawler. If simple phone notifications aren’t enough to get people looking over both shoulders all the time, perhaps a series of snuff films will do the trick.
Palantir and “predictive policing”
Finally, there’s Palantir: the supposed “ultimate tool for surveillance.”
Named after the Seeing Stones in Lord of the Rings, Palantir is designed to take in reams of data collected by any number of organizations, everything from license plates and fingerprints to the identities of confidential informants and email records, and to let users spot hidden connections among them. It was forged with the help of Peter Thiel, and fueled by the same omniscient ambition as the Pentagon’s former data-mining program, Total Information Awareness. Although it has worked with just about every alphabet-soup agency in government, BuzzFeed News last year described it as “the most secretive company in law enforcement.”
Almost 5,000 police officers in Los Angeles have access to the all-seeing eye of Palantir. They can use one of many available non-Amazon facial recognition tools to take a photo of anyone they deem suspicious, instantly uncover their identity, and then plug it into Palantir to find out untold gobs of info about them, warrant-free. Like magic. In fact, Palantir also helps with what’s known as “predictive policing,” current technology’s answer to the “precogs” from Minority Report. It’s an idea premised on the belief that algorithmic data can determine where future crimes may take place and when.
According to research from the Brennan Center, the Los Angeles Police Department first began to explore predictive policing back in 2008. Since then, the LAPD has implemented a variety of predictive policing programs, including LASER, which identifies areas where gun violence is thought likely to occur, and PredPol, which calculates so-called “hot spots” with a high likelihood of crimes. But there is a massive difference between deploying speed traps on highways where speeding has historically been prevalent, and sending police to neighborhoods—or near specific people—based solely on previous patterns, to stop crime before it happens. Especially when, as The Next Web points out, the data that predictive policing draws on may reflect “various forms of unlawful and biased police practices.”
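PredPol’s actual model is proprietary, but the basic hot-spot idea, and the feedback loop critics warn about, can be shown with a deliberately crude sketch. The grid-binning approach and all coordinates below are invented for illustration:

```python
from collections import Counter

def top_hot_spots(incidents, cell_size=100, k=3):
    """Bin incident coordinates into grid cells and return the k cells
    with the most recorded incidents (the "hot spots")."""
    cells = Counter((x // cell_size, y // cell_size) for x, y in incidents)
    return [cell for cell, _ in cells.most_common(k)]

# Hypothetical history of incident reports, clustered in two neighborhoods.
history = [(10, 20), (30, 50), (80, 90),   # three reports in cell (0, 0)
           (150, 140), (160, 170),         # two reports in cell (1, 1)
           (520, 480)]                     # one report in cell (5, 4)
print(top_hot_spots(history, k=2))  # → [(0, 0), (1, 1)]
```

The circularity is the problem: the cells with the most recorded incidents attract the most patrols, patrols generate more recorded incidents, and the same neighborhoods stay “hot” regardless of where unrecorded crime occurs.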
Ultimately, any algorithm used to predict or prevent crime is only as reliable as the fallible humans who feed and operate it.
What happens next
Why are today’s police increasingly equipped with tools one might use to track down terrorists? Especially when they can’t even seem to keep body cameras working properly.
At the beginning of last summer, during the peak of the George Floyd protests, when Amazon put a pause on its facial recognition software (it has since extended the pause indefinitely), it seemed as though America might have a serious moment of introspection over which communities were bearing the brunt of over-policing, and why. But that moment has evaporated. Just as the country started to collectively question the police’s power in shaping narratives, the narrative that crime is rising in cities where police were defunded quickly took over. At this point, the idea of actually defunding the police and beefing up social services in any meaningful way has curdled into a cynical talking point that Democrats may use in the 2022 election.
If all this enhanced surveillance tech was already ramping up while crime was falling over the past few decades, I shudder to think how much more of it we’ll get now that crime is rising. The more panicked average citizens feel, the more Silicon Valley will crank out new and inevitably flaw-prone systems to exploit the situation. Police departments will continue working with those companies, which not only give them the opportunity to cut corners or act on biases in some cases, but ironically also make them look “progressive” and future-forward while doing it.
At a certain point, this continued dependence on expensive, unreliable equipment begins to look like a feature and not a bug. The use of machines to reduce the possibility of human error in police work as much as possible might just be a way to avoid ever truly dealing with the kinds of deeper issues about the role of police in society that reached a boiling point last summer.
It’s as futile as putting a Band-Aid over a bullet wound—or putting a piece of tape over your laptop camera and thinking that means no one can find out what you’re up to.
[Correction: a previous version of this article incorrectly linked Flock Safety specifically to ICE.]