If you’ve been to the DMV to have your driver’s license photo taken, there’s a good chance your face is in a little-known group of databases that functions like a digital police lineup. Except, in this case, you needn’t even be suspected of a crime, and soon the searching could take place virtually anywhere, anytime.
It’s just one example of the exploding use of facial recognition technology in law enforcement, and it’s raising major concerns among U.S. lawmakers and privacy advocates.
“I’m frankly appalled,” Representative Paul Mitchell, a Republican from Michigan, told Kimberly Del Greco, the FBI’s deputy assistant director of criminal justice, during a House oversight committee hearing last week. “I wasn’t informed when my driver’s license was renewed my photograph was going to be in a repository that could be searched by law enforcement across the country.”
Watch the exchange between Rep. Mitchell, the FBI’s Kimberly Del Greco, and Alvaro Bedoya of Georgetown’s Law Center:
I was alarmed this morning to learn that the FBI has not subjected itself to the same provisions of the Privacy Act that the Justice Department imposes on private businesses. Approximately half of all adult Americans’ photos are in a facial recognition database, which the FBI used for years without first publishing a privacy assessment, as they are required to do by law. I am committed to protecting the privacy of all Americans and will stay engaged on this important issue.
Posted by Rep. Paul Mitchell on Wednesday, March 22, 2017
Mitchell’s face and those of more than 125 million Americans—more than half of the country’s adult population—are thought to be stored in a vast network of databases used by local and federal law enforcement to scan photos and videos of individuals. Many of these faces, which can be searched without reasonable suspicion, belong to people who have never been charged with a crime and have no idea they are being searched.
Yet there are few guidelines or legal rulings that govern exactly how face recognition, like a number of other new technologies, should be used by police. In a report last May on the FBI’s system, the Government Accountability Office (GAO) found the FBI had “not fully adhered to privacy laws and policies and had not taken sufficient action to help ensure accuracy of its face recognition technology.” In response to the report, which urged the FBI to conduct regular audits of the system, “the Department of Justice and the FBI disagreed with three recommendations and had taken some actions to address the remainder, but had not fully implemented them,” the GAO said.
“No federal law controls this technology, no court decision limits it,” said Alvaro Bedoya, executive director of Georgetown Law’s Center on Privacy and Technology, and the coauthor of “The Perpetual Lineup,” a report on the FBI and state face recognition databases. “This technology is not under control.”
While a few attempts to set limits are inching slowly through state legislatures, the technology is racing ahead. Advancements in machine vision and artificial intelligence are widening the scope of the lineup too: Via body-worn police cameras, which are rapidly proliferating, face searches could happen up-close, at street level and in real-time—anticipating a future in which anonymity in certain public places could disappear.
It’s this pairing of technologies in particular—the ability to scan and identify faces on the street—that is the “most concerning” from a privacy and First Amendment perspective, said Jason Chaffetz, Republican representative from Utah and chairman of the House Oversight committee. A 2014 Justice Dept. report also highlighted the combination, warning that using body cameras with “facial recognition systems and other new technologies like live feed and auto recording . . . may pose serious risks to public privacy.” In the legal vacuum surrounding their use, agencies exploring these technologies should “proceed very cautiously,” the report said.
“Imagine the world where the cops are going down the street and they’ve got Google Glass on and their body cameras are recognizing people,” says Barry Friedman, the director of the Policing Project at New York University School of Law. “And it’s not just recognizing them, but they’re getting their security scores at the same time, and people are getting colored based on how dangerous the algorithms think they are. That’s one scary world.”
Already law enforcement is pairing real-time face recognition software with footage from surveillance cameras, and police officers around the country are using face recognition apps on mobile phones to more quickly identify suspects they stop on the street. In New York, it emerged this week that police are beginning to acquire face recognition technology to scan the faces of all drivers commuting between the five boroughs.
Of immediate concern to Congress was the legality of the FBI’s system. With the FBI’s Del Greco in the hot seat, a number of committee members asked why the agency hadn’t published a privacy report of its face recognition system until 2015, years after it first deployed the technology in public, in 2010.
Had anyone at the FBI been reprimanded for the delay? Rep. Mitchell asked.
“I have no knowledge,” Del Greco said. “There are days ignorance is bliss,” Mitchell fired back.
Del Greco said the FBI had consulted its privacy attorney internally throughout the roll-out of the system.
“We don’t believe you,” Chaffetz said, “and you’re supposed to make it public.” He also alleged that the FBI “went out of its way” to exempt its facial recognition database from the Privacy Act.
“So here’s the problem,” said Chaffetz. “You’re required by law to put out a privacy statement and you didn’t and now we’re supposed to trust you with hundreds of millions of people’s faces.”
Del Greco defended the agency’s use of what she referred to as “face services,” saying it had “enhanced the ability to solve crime,” emphasized that privacy was of utmost importance at the FBI, and said that the system was not used to positively identify suspects, but to generate “investigative leads.” In one recent positive outcome for the technology, Charles Hollin, an alleged child molester, was caught after spending 18 years as a fugitive, thanks to a database that contained his passport photo.
Currently, 18 U.S. states let the FBI use face-recognition technology to compare suspected criminals to their driver’s license or other ID photos. These, in addition to criminal and civil mug shots, and photos from the U.S. State Department’s passport and visa records, are part of a set of databases used by the FBI and police across the country. Over the past decade and a half, 29 states have allowed police agencies and the FBI to search their repositories of drivers’ faces during investigations.
“I have zero confidence in the FBI and the [Justice Department], frankly, to keep this in check,” Rep. Stephen Lynch, a Democrat from Massachusetts, said.
After the hearing, Bedoya, a former chief counsel for Sen. Al Franken of Minnesota, noted that the discussion had inspired a rare moment of bipartisan agreement. “The opposition to the use of driver’s licenses was remarkably strong and remarkably uniform across party lines,” he said. “In my five years as a Senate staffer, I never saw anything like it on a similar privacy issue.”
In some ways, the FBI’s approach to face recognition resembles how police traditionally try to match fingerprints at a crime scene to those of criminals. In this case, however, the “prints” include the faces of millions of innocent people, often collected and scanned without their knowledge, according to the analysis by Bedoya and his colleagues. It estimated that around 80% of the faces searchable by the FBI belong to individuals who have never been charged with a crime.
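The matching step the report describes can be pictured, in rough outline, as a similarity search: a probe photo is reduced to a numeric "embedding" and ranked against every face in the database to produce candidate leads. The sketch below is purely illustrative, not the FBI's actual software; the embeddings, names, and dimensions are invented.

```python
# Illustrative sketch of embedding-based face matching (not the FBI's system).
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, 1.0 meaning identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_candidates(probe, gallery, k=3):
    """Rank gallery embeddings by similarity to the probe; return (id, score) pairs."""
    scores = [(face_id, cosine_similarity(probe, vec)) for face_id, vec in gallery.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

rng = np.random.default_rng(0)
# Toy 64-dimensional "embeddings" standing in for real face templates.
gallery = {f"person_{i}": rng.normal(size=64) for i in range(100)}
# A noisy re-capture of person_42's face, as from a new photo of the same person.
probe = gallery["person_42"] + rng.normal(scale=0.05, size=64)

for face_id, score in top_candidates(probe, gallery):
    print(face_id, round(score, 3))
```

The point of the toy: the system does not declare a match, it returns a ranked list of the most similar faces — which is why the FBI insists the output is only an "investigative lead," and why everyone enrolled in the gallery is effectively in the lineup.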
In under half a decade, they found, the FBI searched drivers’ faces more than 36,000 times, without warrants, audits, or regular accuracy tests. In Florida, police are encouraged to use face recognition “whenever practical.” (Individually, dozens of states also use face recognition to crack down on fraudsters applying for duplicate driver’s licenses, for instance.)
To access the FBI’s networked databases of faces, an authorized, participating police agency need only show that its search is for “law enforcement purposes,” said Del Greco of the FBI. Those criteria are determined by individual states, she noted.
Imagine going to the DMV to renew your license, Bedoya wrote in a recent op-ed in the Washington Post. “What if you—and most other teens in the United States—were then asked to submit your fingerprints for criminal investigations by the FBI or state police?”
In scope, he argues, the face-matching system resembles the National Security Agency’s call-records program, which logged the metadata of all Americans’ phone calls. “This has never happened before—not with DNA or fingerprints, which are kept in smaller national networks made up mostly of known or suspected criminals. Yet law-enforcement face-recognition systems have received a fraction of the NSA’s oversight.”
The possibility of misidentification and false positives is also worrisome, especially because the FBI has not been keeping track of such failures. “It doesn’t know how often the system incorrectly identifies the wrong subject,” explained the GAO’s Diana Maurer. “Innocent people could bear the burden of being falsely accused, including the implication of having federal investigators turn up at their home or business.”
More worrisome, research shows that facial recognition appears to disproportionately affect minority communities. A report co-written by the FBI in 2012 found that the technology exhibited higher failure rates on darker faces, a function of the data humans feed the algorithms during training.
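The kind of audit the GAO says the FBI has not been doing is, mechanically, simple bookkeeping: log each claimed match, record whether it was correct, and break the error rate out by group. A minimal sketch follows; every number in it is invented for illustration and is not an FBI statistic.

```python
# Hypothetical audit sketch: per-group false match rates from logged outcomes.
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: iterable of (group, match_claimed, claim_was_correct) tuples."""
    claimed = defaultdict(int)
    wrong = defaultdict(int)
    for group, match_claimed, correct in records:
        if match_claimed:
            claimed[group] += 1
            if not correct:
                wrong[group] += 1
    return {g: wrong[g] / claimed[g] for g in claimed}

# Toy log of search outcomes for two demographic groups (invented data).
log = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", True, False), ("B", True, True),
]
print(false_match_rate_by_group(log))  # group B errs twice as often here
```

Without this kind of tally, as Maurer told the committee, the agency simply does not know how often it points investigators at the wrong person, or at whom.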
“If you are black, you are more likely to be subjected to this technology, and the technology is more likely to be wrong,” said Representative Elijah Cummings, a Democrat from Maryland. “That’s one hell of a combination. Just let that sink in.”
The FBI, like other law enforcement agencies, has argued that the algorithms are race-blind and has reiterated that face searches are used only as “investigatory” leads. “This response is very troubling,” Cummings noted. “Rather than conducting testing that would show whether or not these concerns have merit, the FBI chooses to ignore growing evidence that the technology has a disproportionate impact on African Americans.”
Friedman notes that racial concerns aren’t exclusive to face recognition. “A lot of these technologies, just because of how they’re deployed, come with racialized aspects. If you’re using license plate readers in more heavily policed neighborhoods, you’re picking up more data in those neighborhoods. That’s something we need to be very aware of and thoughtful about.”
A plethora of private companies already have your face in all of its biometric glory, provided it’s ever been uploaded to the servers of firms like Apple, Google, and Facebook. As these companies pile millions into AI research—in part, to better find you and objects in photos—a number of startups are also racing to automatically analyze the world’s video. One startup, Kairos, aims to let Hollywood and ad agencies study audiences’ emotional responses and help theme park visitors more easily find and purchase photos of themselves. Another, Matroid, launched this week by researchers at Stanford, focuses on analyzing television appearances and scanning surveillance video.
“Google can give you pictures of cats, but not cat with grandpa or cat with grandpa and Christmas tree or with your son,” Pete Sonsini, a general partner at New Enterprise Associates, which is funding Matroid to an unspecified tune, told Bloomberg. “It’s really powerful for any human to be able to create a detector that can identify any image or set of images or face from their dataset.”
Banks are using biometric technology to provide better personal verification, eventually allowing people to pay with their faces. Airlines are imagining using biometrics to let passengers board a plane without a paper ticket. At a conference this week in Orlando, NEC, the Japanese company whose facial recognition algorithms are considered the most accurate by the Dept. of Homeland Security, released new features for its software suite, including a “virtual receptionist” and a system by which “age/gender recognition can trigger tailored advertisements/greetings and can trigger notifications to sales personnel for immediate follow-up and interaction,” according to a press release.
Another app, installed on a hotel’s cameras, would notify the hotel’s reception and concierge services when a VIP or high-value customer arrives at any of the building’s entrances. “This will allow them to receive them in person, greet them by their name, and provide them a better service. Also this can be used to identify any staff that were terminated when they enter the property and take appropriate action.”
At Beijing’s Temple of Heaven Park, meanwhile, biometrics are already being used to ration toilet paper, CNN reported last week: “The facial recognition program keeps the dispenser from offering another round to the same person; if you need more, you reportedly need to sit tight—literally—for nine minutes.”
Still, the most profitable applications for biometrics lie in the fast-growing law enforcement and public safety sector. Industry executives and police experts say that the next stage of the technology—automatically scanning public places for faces and objects in real time—could help find armed and dangerous suspects or missing persons more quickly, flag critical moments amid torrents of body and dash cam footage, or perhaps even identify biased policing.
In many communities, automatic license plate readers—brick-sized cameras mounted on the back of patrol cars—already do something similar with cars, archiving and cross-referencing every license plate they pass in real time to check for outstanding warrants or traffic violations.
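At its core, the plate reader’s loop is little more than a logged watch-list lookup: record every plate seen, check each against a hot list, alert on hits. A minimal sketch, with made-up plate numbers and a hypothetical watch list:

```python
# Toy sketch of an automatic license plate reader's hot-list check.
# Plate numbers and the watch list are invented for illustration.
import time

HOTLIST = {"ABC1234", "XYZ9876"}  # hypothetical plates flagged for warrants
sightings = []  # archive of (plate, timestamp), kept whether or not it hits

def check_plate(plate):
    """Log the sighting, then report whether the plate is on the watch list."""
    sightings.append((plate, time.time()))
    return plate in HOTLIST

for plate in ["DEF5555", "XYZ9876", "GHI2222"]:
    if check_plate(plate):
        print("alert:", plate)
```

Note that the archive grows regardless of whether a plate matches — which is exactly the property privacy advocates flag: the system retains location data on everyone it passes, not just the people on the list.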
These are also the most worrisome applications, privacy advocates say, given the general secrecy that surrounds them, the few restrictions on their use, and the ability to track individuals. Pairing face-matching algorithms with body cameras, said Bedoya, “will redefine the nature of public spaces.”
Even the mere prospect of the technology could have a chilling effect on people’s First Amendment rights, privacy advocates warn. In some cities, police are restricted from filming at protests and demonstrations—unless a crime is thought to be in progress—for this reason.
Other police departments, however, have routinely filmed protesters: In New York, for instance, the NYPD sent video teams to record Occupy and Black Lives Matter protests hundreds of times, and apparently without proper authorization, agency documents released this week show.
“Will you attend a protest if you know the government can secretly scan your face and identify you—as police in Baltimore did during the Freddie Gray protests?” Bedoya writes. “Do you have the right to walk down your street without having your face scanned? If you don’t, will you lead your life in the same way? Will you go to a psychiatrist? A marriage counselor? An Alcoholics Anonymous meeting?”
Rep. Chaffetz appeared to support one controversial application of face recognition: using it to find undocumented immigrants. The Dept. of Homeland Security stores the face of every visitor in its own database and is determined to better track those who overstay their visas; in 2015, the agency estimated there were 500,000 such overstays.
“I think it is absolutely a concern that face recognition would be used to facilitate deportations,” Rachel Levinson-Waldman, senior counsel to the Brennan Center’s National Security Program at New York University School of Law, told The Intercept’s Ava Kofman. “We’re seeing how this administration is ramping up these deportation efforts. They’re looking much more widely.”
Generally, however, Chaffetz urged firmer limits. The technology “can also be used by bad actors to harass or stalk individuals,” he said. “It can be used in a way that chills free speech and free association by targeting people attending certain political meetings, protests, churches, or other types of places in the public.”
“And then having a system, with a network of cameras, where you go out in public, that too can be collected. And then used in the wrong hands, nefarious hands… it does scare me. Are you aware of any other country that does this? Anybody on this panel? Anybody else doing this?”
Neither Del Greco nor other members of the panel responded.
Jennifer Lynch, a staff attorney at the Electronic Frontier Foundation, noted that “we don’t yet appear to be at a point where face recognition is being used broadly to monitor the public.” But, she said, “it is important to place meaningful checks on government use of face recognition now before we reach a point of no return.”
Real-time face recognition is coming to law enforcement; the question is how the technology itself will be policed, and what role the public will have in dictating its use. “If we’re going there, then we better be going there together,” said Friedman, of the Policing Project. “Did we all discuss this and agree to this?”
To improve the use of face recognition, Bedoya and other privacy advocates urge well-defined limits. The FBI should access mug shot databases only when there is reasonable suspicion of a crime, and for forensic uses—a face recognition scan on surveillance footage, for instance, as opposed to a police officer photographing a person he has stopped—the technology should be reserved for felony cases.
For databases containing driver’s license photos, Bedoya says law enforcement should have express consent from state legislatures, and should search those databases only with probable cause implicating the subject in a serious offense, like the kind required for a wiretap. The FBI should also regularly scrub its databases to remove innocent people, be more transparent about how it uses the technology, and, as the GAO recommended, conduct audits to ensure the software meets privacy and accuracy requirements.
Policing veterans have also expressed discomfort with the use of the technology. In December, a guide to body cameras published by the Constitution Project, a nonpartisan think tank, and co-authored by a handful of retired police officials, warned that the privacy risks of video “tagging” technologies were “immense” because they “have the potential to end anonymity, catalog every person at a sensitive location or event, and even facilitate pervasive location tracking of an individual over a prolonged period of time.”
In an interview about the future of body cameras, Bill Schrier, the former chief information officer at the Seattle Police Department and a 30-year veteran of government technology, told me that “most reasonable people don’t want potentially dangerous felons or sex offenders walking around in public and would, I think, support such use” of real-time face recognizing body cameras in those cases.
But used to catch people wanted for “misdemeanors such as unpaid traffic tickets or pot smoking,” real-time face recognition could be dangerous. That, he said, could “seriously undermine faith in government, and start us down the road to a police state.”