Cop cameras can track you in real-time and there’s no stopping them

At the body camera and stun-gun giant Axon, a face-off is brewing over the ethics–and potentially fatal errors–of automated face recognition.

[Photo: courtesy of Axon]

A police officer is patrolling the neighborhood when he gets an urgent notification on his phone: the face of a passerby matches that of a suspect in a police database. He touches a button to request backup and glides his hand to his holster.

For Rick Smith, the energized chief executive of police supplier Axon, formerly known as Taser, this “Robocop”-like vision is a tantalizing possibility in a larger vision of AI-assisted policing. In the case of a cop killer on the loose, Smith told me in 2016, “we can’t expect an officer to not get that alert if there’s official information on this guy.”

A police officer uses smart glasses to recognize the face of a suspect, as seen in a 2017 simulation by the U.S. Dept. of Homeland Security.

Since then, that idea has zoomed from science fiction to fact. Casinos, merchants, and police can now identify individuals captured on high-quality surveillance footage in mere seconds, thanks in part to advances in AI and cameras. Also new to the face recognition challenge: cloud-connected databases swelling with faces. Face databases used by local law enforcement now contain over half of the U.S.’s adults, many of them people who have never committed a crime, according to a 2016 study by researchers at Georgetown University Law Center. As internet companies gather their own collections of faces–encouraging users to upload photos of themselves–researchers are developing ways to scan facial features for possible signs of fear, sexuality, and other traits.

The cameras are inching closer to faces too. Motorola Solutions and an AI startup, Neurala, plan to release a police body camera this year “to more efficiently search for objects or persons of interest, such as missing children and suspects.” The companies later said the cameras would only look for objects like shirts or hats, but not faces.

In May, Axon received a patent for software that can find faces and other objects in footage from body cameras or drones in near real-time. The system could help police automatically redact the faces of people to protect their privacy before releasing videos to the public–something Axon is already developing–or it could help police identify those same individuals instantly as they pass a body camera. Once a face is captured by a user’s body-worn camera, say patent filings, a hand-held device “provides the name of the person to the user of the capture system.”
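To get a feel for the redaction half of that equation, here is a minimal sketch using the open-source OpenCV library. It illustrates the general technique (detect faces in each frame, then blur them), not Axon’s system; the video file name is hypothetical.

```python
# A minimal sketch of automatic face redaction, assuming the open-source
# OpenCV library. Illustrative only; this is not Axon's implementation.
import cv2

# Haar cascade face detector that ships with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def redact_faces(frame):
    """Detect faces in a frame and blur each detected region."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

cap = cv2.VideoCapture("bodycam.mp4")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = redact_faces(frame)
    # ... write the redacted frame out with cv2.VideoWriter ...
cap.release()
```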

Amid a lack of any comprehensive laws that govern the technology’s use–and the growing Orwellian and Kafkaesque alarms of privacy advocates–Smith, who sells the bulk of the country’s police body cameras and electroshock weapons, says the company isn’t implementing face recognition yet. Instead, in April the police giant–no stranger to decades of controversies surrounding its Taser weapons–sought to get ahead of the AI controversies by convening an ethics board of advisers to weigh in on the company’s policing tech.

“Because if you get that wrong, you give up all the work that you’ve done to strengthen public trust, all the work that police departments are doing to work with communities post-Ferguson,” says Mike Wagers, vice president of Axon Ecosystem, who is leading the ethics effort.

In the months since Axon’s announcement, face and object recognition software has become the spark that has set off PR wildfires and internal discord at companies like Google, Amazon, and Microsoft. Google issued a new set of ethical standards for AI and said it would not sell software to be used in weapons or surveillance “violating internationally accepted norms.” Microsoft touted the work of an internal ethics committee and said it had turned down some customers “where we’ve concluded that there are greater human rights risks.” Some startups building face recognition say they won’t sell their products to police at all.

“Whether or not you believe government surveillance is okay, using commercial facial recognition in law enforcement is irresponsible and dangerous,” Brian Brackeen, the CEO of Kairos, wrote in an op-ed. Among the partnerships Brackeen said Kairos turned down was a request from Axon.

So far, the company has largely avoided the recent controversy around face technology, but dissent is brewing. One of its new ethics advisers–the board’s only attorney–and the company’s former head of AI believe face recognition has no place on police body cameras.

“My instinct is that real-time body camera facial recognition should be banned,” says Barry Friedman, a constitutional law professor and director of NYU’s Policing Project. “I could make arguments for its use–but that’s the wrong question. The question is a cost-benefit kind of question: How likely is that scenario to happen and how often”–referring to Smith’s “cop killer” scenario–“and what are all the things that can go wrong?”

Axon’s former head of AI also sought a ban on the technology. “The ethics side of our work is incredibly important,” David Luan wrote on his website. “I am responsible for the efforts at Axon to establish an AI Safety/Ethics Board and a blanket ban on facial recognition on body-worn cameras.” Luan, a founder of an AI startup named Dextro that Axon acquired last year, quietly left the company in December. He did not respond to a request for comment.

Face recognition–even real-time automated face recognition–may sound acceptable in certain cases: searching a crowd for active terror suspects, a missing child, or a dangerous fugitive, for instance. Scanning the faces of unknown persons, either with their consent during a police stop or in surveillance footage, is an increasingly common police practice. Last month, in one of the highest-profile cases yet, police used a state database to identify a suspect arrested for a shooting at a newsroom in Annapolis, Maryland, after he refused to cooperate with police.

But scanning the faces of people–either in real time or after the fact–without their consent raises other issues. Combining face databases with roving, HD police cameras “may pose serious risks to public privacy,” the U.S. Justice Dept. warned in 2014, and urged agencies to “proceed very cautiously.”

Connected to a high-speed network, such a system could be used to track individuals across a city, much as police already use license plate readers to track vehicle movements. Simply logging faces at a protest or mosque or in immigrant-heavy neighborhoods could pose graver risks to civil rights. Simply walking past a cop could become a type of police “encounter,” according to Shahid Buttar, a constitutional lawyer at the Electronic Frontier Foundation, “without the individual basis for suspicion constitutionally required to justify a police search.”
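The plumbing for that kind of tracking is trivial once face matches are being logged: each sighting becomes a database row, and a person’s movements become a query. A minimal sketch (the schema and names are hypothetical, not drawn from any real system):

```python
# Illustrative sketch of how logged face matches become a movement history.
# The schema is hypothetical and not drawn from any real system.
import sqlite3

db = sqlite3.connect("sightings.db")
db.execute("""CREATE TABLE IF NOT EXISTS sightings
              (person_id TEXT, camera_id TEXT, lat REAL, lon REAL, ts TEXT)""")

def track(person_id):
    """Return every logged sighting of one person, in time order."""
    return db.execute(
        "SELECT camera_id, lat, lon, ts FROM sightings"
        " WHERE person_id = ? ORDER BY ts", (person_id,)).fetchall()

for camera_id, lat, lon, ts in track("subject-42"):
    print(f"{ts}: seen by camera {camera_id} at ({lat}, {lon})")
```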

Facial recognition technology isn’t like a general-purpose computer, philosopher Evan Selinger argued in a recent op-ed, but rather “a specific tool that enables tracking based on our most public-facing and innate biological feature. It’s an ideal tool for oppressive surveillance.”

[Photo: courtesy of Axon]

How humans use the technology isn’t the only concern: the technology itself is still rife with errors. Research has shown that leading commercial face recognition algorithms have lower rates of accuracy for dark-skinned faces, echoing a trend in which algorithms end up amplifying human bias. The problem is the data that the AI is trained on: often, these databases are heavily skewed toward white faces. Garbage in, garbage out, goes the old computer science principle, but in this case, a group of civil rights organizations wrote in an open letter in April, errors “could have fatal consequences–consequences that fall disproportionately on certain populations.”
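The disparities researchers describe are measurable. A bare-bones audit, of the kind accuracy studies run, simply compares false-positive rates across demographic groups; the sketch below uses invented data purely for illustration.

```python
# A bare-bones bias audit: false-positive rate per demographic group.
# The records here are invented for illustration; real audits use large
# labeled benchmark datasets.
from collections import defaultdict

# Each record: (group, algorithm_said_match, truly_same_person)
results = [
    ("group_a", True, False),   # a false positive
    ("group_a", False, False),
    ("group_b", True, True),
    ("group_b", False, False),
]

false_pos = defaultdict(int)    # wrong matches per group
non_matches = defaultdict(int)  # true non-matches per group

for group, predicted, actual in results:
    if not actual:
        non_matches[group] += 1
        if predicted:
            false_pos[group] += 1

for group, total in non_matches.items():
    print(f"{group}: false-positive rate {false_pos[group] / total:.0%}")
```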

Wagers conjures up an alternate version of his boss’s scenario: What if an officer’s body camera IDs the wrong guy?

“Those body cameras are giving false positives as officers are walking down the street and alerting an officer to stop an African-American person, for example,” imagines Wagers, whose previous stints include the Seattle Police Department and Amazon Web Services. “You’re creating an interaction that shouldn’t happen. And we all know what’s always the potential when you create an interaction like that–especially if you’ve got a body camera going, ‘This person is wanted for a crime.’ God forbid something goes wrong with that interaction.”

A table from Axon’s May 3, 2018, patent application. [Image: USPTO]
Still, aside from legislation in Illinois and Texas requiring that companies get consent before using face recognition software, no U.S. laws govern the technology. While government researchers have designed benchmarks for testing face recognition software, there are no industry-wide standards for bias or accuracy. Meanwhile, few government agencies or police departments have outlined rules about biometric technologies like face recognition. Out of 50 major police departments surveyed by the public advocacy research group Upturn last year, just six had policies related to biometrics.

“The language that is now used about Facebook being weaponized–it really started with law enforcement,” says Tracy Ann Kosa, a member of Axon’s ethics board who has advised Google and the city of Seattle on privacy concerns.

“That said, the fact that we have an entire industry and domain like law enforcement starting to think about the ethical implications of their technology and ethical data usage really actually puts them a little further ahead than other domains and other practices. Tech companies didn’t really start having those conversations until quite recently.”

Kosa, for one, isn’t certain that a ban on the technology–be it by a company or a country–is ideal, given the challenge of enforceability.

“If the United States bans, for example, the development of facial recognition on body cameras it simply means that another country will do it,” she wrote in an email. “So the question is: do we want a seat at the table or not? Today, I’d argue we do. It’s more important right now to see where we might be going and help influence that direction.”

Friedman, an ardent advocate for a more democratic approach to policing who has previously questioned Axon’s cozy relationships with police departments, applauds its recent efforts. Still, he thinks the company would do well to inject more community members into its discussions.

“You’re thinking about selling these things to your customers in law enforcement agencies,” Friedman told a company executive at the board’s first meeting, “but my view is your customers are the communities in which those technologies operate.”

In any case, Friedman’s views and those of the company’s other advisers will ultimately have limited impact on the company’s decisions about, for instance, face recognition.

“While the Board does not technically have ‘veto’ power, if we choose to ignore their advice and guidance, each Board member absolutely has the right to go public with that,” said a company spokesperson. “That is their ‘veto’ power.”

“We do see a day when facial recognition, with the right controls and sufficient accuracy, could reduce bias and increase fairness in policing,” the spokesperson added.

Axon’s widespread reach in police tech means that if the company decides to implement face recognition, it could have sweeping, rapid effects. “If and when they want to do it, that could happen quite fast,” cautions Harlan Yu, executive director of Upturn. “It would really be a software update away to add that capability.”

Face / Off

Electroshock weapons are still the company’s biggest sellers, but after a police officer fatally shot Michael Brown in Ferguson, Missouri, and sparked new calls for police oversight, Axon has become the dominant force in body cameras. Having racked up contracts with most of the U.S.’s major police departments–which often pay a monthly software-and-storage subscription fee alongside the cameras–Axon has also sought to turn itself into a tech company, rebranding from Taser International last year and embarking on an engineering hiring spree.

Axon CEO Rick Smith [Photo: courtesy of Axon]
So far it has spent over $100 million developing a software platform that helps police officers organize video files or redact sensitive personal data–including faces and license plates–so that departments can more easily share videos with prosecutors and the public. A new app helps officers crowd-source criminal data and evidence. Eventually, Smith envisions a system that manages all police data, from dispatch information to police reports, and automatically analyzes the growing mountain of video, tagging objects, people, and behaviors: officer, weapon, expletive, bystander, use-of-force, and so on.

In the future, such an AI-backed system, paired with sensors like body cameras, could help police not only write reports after the fact but “anticipate criminal activity” beforehand, the company said in a 2017 report.

To civil rights advocates like Friedman, that’s precisely the kind of AI software that risks turning tools for transparency into something else. “The idea that we’re just going to be identified, threat scored, geo-located everywhere we go is terrifying to me,” he says.

“Part of what astonishes me,” he adds, “and why it’s so critical to have these [AI ethics] groups, is that a lot of these things are already going on. The general public just doesn’t know.”

As ethicists debate the technology, it’s advancing quickly.

Creating a working facial recognition system “requires not much effort,” says Mojtaba Solgi, Axon’s Director of AI and Machine Learning. “If you go to any computer vision machine learning conference there are papers about it out there, and you could assign a few researchers and engineers to come up with a working system in a relatively short time,” he says.
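He isn’t exaggerating. With open-source tools like the face_recognition library (built on dlib), a bare-bones matcher takes about a dozen lines; the image file names below are hypothetical.

```python
# A bare-bones face matcher using the open-source face_recognition
# library (dlib under the hood). File names are hypothetical.
import face_recognition

# A one-entry "database" of known faces
known_image = face_recognition.load_image_file("suspect.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Compare every face found in an unknown photo against it
unknown_image = face_recognition.load_image_file("passerby.jpg")
for encoding in face_recognition.face_encodings(unknown_image):
    is_match, = face_recognition.compare_faces([known_encoding], encoding)
    print("Match" if is_match else "No match")
```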

Still, Solgi acknowledges, conversations about privacy and other ethical questions are rare among the people who build these technologies. “A lot of these new areas are areas that startups go after, and they don’t have the resources or the patience to really do that kind of thing.”

Amazon, Microsoft, and an assortment of smaller companies already sell cheap plug-and-play face recognition software to police and government agencies. Using Amazon’s service, the sheriff’s office in Washington County, Oregon, built a web app that allows patrol officers to identify faces they capture on their smartphones or in surveillance footage. The system, which relies on an internal database, cost only around $400 for the initial setup and a $6 monthly fee.
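The plug-and-play quality comes through in the API itself. Rekognition’s search_faces_by_image call takes a photo and returns ranked matches from a face collection; the sketch below uses Amazon’s real boto3 client, though the collection name and photo are hypothetical.

```python
# Sketch of the kind of query such an app makes. search_faces_by_image
# is a real Amazon Rekognition API; the collection name and photo here
# are hypothetical.
import boto3

rekognition = boto3.client("rekognition")

with open("field_photo.jpg", "rb") as f:
    response = rekognition.search_faces_by_image(
        CollectionId="county-booking-photos",  # hypothetical collection
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,
        MaxFaces=5,
    )

for match in response["FaceMatches"]:
    print(f"{match['Face']['FaceId']}: similarity {match['Similarity']:.1f}%")
```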

“It’s about helping us fight crime, solve crime, and find missing people,” deputy Jeff Talbot, a department spokesperson, told Fast Company. Last year, the department said that the system had helped generate leads on more than 20 suspects, but wouldn’t provide more recent statistics.

Elsewhere, police are turning to more automated, near-real-time face recognition. Public security officials in China say software by Yitu Technologies has helped nab wanted suspects during pop concerts, by scanning footage collected at stadium entrances. NEC Corporation sells real-time facial recognition to large American companies and has been fielding inquiries about the technology from unnamed police departments in large U.S. cities, a company official told NPR.

IBM, which sells its Intelligent Video Analytics software to law enforcement, notes that in addition to people counting and “lying body detection,” its algorithms can find known faces in archived body camera video and real-time surveillance footage, or simply gather “metadata” about the faces they encounter.

“As each person’s face is captured, characteristics are measured by the application of multiple attribute detectors around the face region,” a company document says. “IBM® Intelligent Video Analytics assigns a confidence score to each attribute and associates it with the metadata index for each detected person. You can then run a rank order search in a specified time period on any indexed attribute or combination of attributes.”
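Conceptually, that workflow amounts to storing per-face attribute scores and then ranking detections by any combination of them. The sketch below illustrates the idea IBM’s document describes; it is not IBM’s actual API, and all names and scores are hypothetical.

```python
# Conceptual sketch of attribute indexing and rank-order search, as
# described in IBM's document. Not IBM's API; names are hypothetical.

# Each detected face carries per-attribute confidence scores
detections = [
    {"id": 1, "ts": "14:02:11", "beard": 0.91, "glasses": 0.10, "red_shirt": 0.85},
    {"id": 2, "ts": "14:05:42", "beard": 0.20, "glasses": 0.88, "red_shirt": 0.05},
    {"id": 3, "ts": "14:07:03", "beard": 0.75, "glasses": 0.05, "red_shirt": 0.90},
]

def rank_order_search(detections, attributes):
    """Rank detections by combined confidence in the given attributes."""
    return sorted(detections,
                  key=lambda d: sum(d.get(attr, 0.0) for attr in attributes),
                  reverse=True)

# "Bearded person in a red shirt," best matches first
for det in rank_order_search(detections, ["beard", "red_shirt"]):
    print(det["id"], det["ts"])
```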

Top: an email from an AWS representative to an Oregon police official. Bottom: Amazon Web Services described “Law enforcement bodycams” as one use case for its facial recognition software but has since removed that language from the webpage.

Deputy Talbot said that the Amazon software was not being used in real-time or with body cameras: scanning faces on body camera footage is currently against Oregon state law. But in a series of emails from last year, two Amazon representatives can be seen arranging a meeting between a Washington County police official and a body camera maker to discuss how to sway public opinion on the issue.

“My colleague has a customer that manufactures police body cameras”–an executive from camera maker Federal Signal–“he was a bit skeptical that recognition of individuals in video feeds would be adopted at the moment because of all the issues surrounding it,” one of the Amazon reps writes in one of the emails, which was obtained by the ACLU. “That being said, he also believes that this technology will eventually be used broadly. He would love to understand how you overcame stakeholder resistance in order to get this cutting-edge technology implemented.”

In May, after civil rights groups called on Amazon to stop selling its face recognition technology to law enforcement, the company removed text on its website referring to its use with police body cameras. It also defended its sales to law enforcement. In a blog post, Matt Wood, AWS’s general manager of AI, said the company was unaware of any reports of abuse, and that its policies prohibit “activities that are illegal, that violate the rights of others, or that may be harmful to others.” Still, he wrote, Amazon believes that “a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future” is “the wrong approach.”

The Rooms That Really Matter

Axon’s board, which includes two police officials, will meet quarterly, publish one or more reports a year, and consult on new products in the pipeline. Members will be compensated, but the police officials’ fees will be donated to charity, Axon says.

In the best case, says Friedman, the ethics board could contribute to an industrywide set of standards for the technology and its use. The board could also provide the template for a way to regulate the police industry. “Either vendors design products that meet acceptable uses and avoid misuse, or you need some sort of set of rules outside of the sale itself, some kind of sanctioning mechanism that we don’t really have.”

A handful of cities have passed ordinances that require police departments to seek public approval for new surveillance gear, but ultimately, Friedman says, more stringent laws are needed. Even Microsoft’s chief legal officer seems to agree. In a July blog post, Brad Smith outlined steps the company was taking to improve accuracy and transparency around face recognition and called for a “bipartisan and expert commission” to help the government steer the use of the technology. Weeks later, amid growing questions from lawmakers on Capitol Hill, Amazon’s Wood echoed the idea, updating a blog post to say that it is “a very reasonable idea” for government to weigh in.

Cultural shifts are important too, notes Friedman: any rules around the technology will ultimately depend upon the people using it.

“Ultimately, the rooms that matter are the rooms where the police departments themselves figure out how they’re going to use technologies in their own communities,” says Friedman. Axon conducts training sessions and offers departments guidelines for its cameras and stun guns, but it does not participate in writing local policy.

At the Axon board’s first meeting, the hours of conversation tended to focus on “easy AI stuff that isn’t going to be controversial remotely in any way,” Friedman says. “I suggested that before we get together as a group again we ought to do some more briefing or organizing things for the harder questions, and there are certainly hard ones.”


This story has been updated.

About the author

Alex is a contributing editor at Fast Company, the founding editor and editor at large of Motherboard at Vice, and a freelance writer and producer with a focus on the intersections of science, technology, media, politics, and culture.
