‘Face search’ creeps people out. But it still has a future—in AR

Clearview AI’s facial recognition database opened a Pandora’s box of privacy concerns. But you might want something similar someday—if you can opt in.


When The New York Times’ Kashmir Hill exposed a startup called Clearview AI in an article titled “The Secretive Company That Might End Privacy as We Know It,” the main reaction was one of future shock. Clearview had built an enormous database of photos of people, hooked it up to a facial recognition algorithm, and then sold the results as a service designed to identify almost anybody, instantly. People called the technology “terrifying” and “dangerous,” and tech giants moved to end Clearview’s scraping of their photos for its collection.


But whatever Clearview AI’s fate, some form of the idea it implemented is probably inevitable, especially with the approach of augmented reality eyewear. It’s all about timing, and Clearview AI’s service is too much too soon.

The negative public reaction proves we still have an expectation of privacy on the web, and that it’s still possible for tech companies to cross “the creepy line,” as former Google CEO Eric Schmidt famously termed it.

The two component parts of Clearview AI's tech—facial recognition and image search—have been around for years. Today, any of the big cloud companies could likely bundle existing services into a product similar to Clearview's. But they haven't.


Google, for example, offers both image search and facial recognition features, but it keeps them separate. In the Google Photos app, it uses facial recognition to identify and tag your photos. And it provides a search service especially for images. What it does not do is use "faceprints" (facial recognition scans) as the search term for combing through databases full of faces. Even in Google Photos, it performs facial recognition only on each individual user's photos rather than identifying anyone across all the photos stored on the service.

“There does seem to be a fundamental difference between Google indexing image files and Clearview AI’s building of facial recognition profiles of individuals,” Electronic Privacy Information Center senior counsel Jeramie Scott wrote in an email. “If I take an image of someone and search it via Google Image Search, the search will return various results that are ‘visually similar’ to the image query but it is not able to identify all pictures of that person. Clearview AI has built a database of biometric data (i.e., faceprints) that aims to identify every image of a specific person.”

Schmidt put the brakes on a Clearview-like product in development at Google way back in 2011, a fact mentioned in the NYT story. At a Google conference that year, he said he was surprised at how fast facial recognition technology had developed. He was alarmed at the privacy implications raised by its increasing accuracy, which he called “very concerning.”


Schmidt said a database utilizing facial recognition was not a place Google would likely go, and added that “some company, by the way, is going to cross that line.” Now we know that he was right.

Today, big tech companies are objecting to Clearview scraping user-uploaded facial images from their sites. After Hill's NYT story appeared, Facebook, Google's YouTube, Twitter, and LinkedIn sent Clearview (which is partially funded by Facebook board member Peter Thiel) letters barring it from scraping content from their sites. "The scraping of member information is not allowed under our terms of service and we take action to protect our members," a LinkedIn spokesperson told BuzzFeed News.

This could prove a legally shaky position. Tech companies have relied on the Computer Fraud and Abuse Act (CFAA) for the legal underpinning of their terms of service. But the Ninth Circuit Court of Appeals decided in hiQ Labs v. LinkedIn Corp. that hiQ's scraping of public-facing LinkedIn data didn't run afoul of the CFAA. The court worried that making scraping unlawful might do more harm than good.

"This is an important clarification of the CFAA's scope, which should provide some relief to the wide variety of researchers, journalists, and companies who have had reason to fear cease and desist letters threatening liability simply for accessing publicly available information in a way that publishers object to," wrote Electronic Frontier Foundation attorneys Andrew Crocker and Camille Fischer. Bottom line: The tech giants may not be able to stop Clearview from repurposing their images for its own purposes.

Face search AR style

Ultimately, what makes Clearview AI alarming isn't so much its technology—which appears to be pretty straightforward—but the way the company has created a centralized face-search clearinghouse for use by law-enforcement agencies and security firms, without seeking the permission of anyone in the photos. That's what has led to visions of a world in which there's no such thing as going out in public and preserving any semblance of privacy. But face search based on similar technological underpinnings could resurface in a less threatening form. That's because it complements another technology that could be a very big deal in the future: AR.

Ask Tim Cook. “My view is [that AR is] the next big thing, and it will pervade our entire lives,” the Apple CEO said when asked recently about the biggest tech developments in the next decade. AR eyewear, which superimposes digital imagery over the real world, might end up replacing the smartphone screen as our primary computing interface.


Face search creates a data representation of a physical thing (a face), then connects it to related digital content. By placing digital content within the real world as seen through the lenses, AR bridges the physical and digital, too.

In fact, the use case you hear most often in AR circles sounds a lot like Clearview. A person wearing AR glasses or contact lenses sees an acquaintance or a customer approaching, and, for social or business reasons, wants to greet the person by name and perhaps base some small talk on some real personal or business information. Facial recognition technology scans the face, the scan is matched with a profile in a database, and relevant profile information shows up around the approaching person within the eyewear.
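At its core, that pipeline is just embedding and lookup. Here is a minimal sketch in Python of how such a match might work, assuming a face-recognition model has already converted each face into an embedding vector (a "faceprint"); the profiles, names, and vectors below are toy stand-ins invented for illustration, not real data or any vendor's actual API.

```python
import math

# Hypothetical pre-enrolled profiles: each pairs identifying details with
# an embedding vector produced by some face-recognition model.
# (These short vectors are toy stand-ins; real faceprints have hundreds
# of dimensions.)
PROFILE_DB = [
    {"name": "Guest A", "note": "Loyalty member since 2018", "faceprint": [0.9, 0.1, 0.3]},
    {"name": "Guest B", "note": "Prefers a high floor", "faceprint": [0.2, 0.8, 0.5]},
]

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def match_face(faceprint, db, threshold=0.9):
    """Return the closest profile, or None if nothing clears the threshold."""
    best = max(db, key=lambda p: cosine_similarity(faceprint, p["faceprint"]))
    if cosine_similarity(faceprint, best["faceprint"]) >= threshold:
        return best
    return None

# A scan from the eyewear's camera would be embedded by the same model;
# here we fake a vector close to Guest A's enrolled faceprint.
scan = [0.88, 0.12, 0.29]
profile = match_face(scan, PROFILE_DB)
```

The threshold is the key design choice: set too low, the system misidentifies strangers; set high, it simply stays silent, which is what a permission-based consumer product would want.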

Like many technologies, AR facial recognition-based search may be used first in the workplace. A hotel concierge might use it to recognize guests, greet them by name, and check them in without staring at a computer screen. Facebook, which has used facial recognition for tagging photos for years, made an app for its employees that uses a smartphone camera to recognize fellow employees and then display social information about them. It was meant as a way to help employees get to know each other.

Only later will the tech find its way into consumer products. Facebook is developing AR glasses, and its employees are not secretive about the idea that matching social profiles with faces seen through the lenses is a key feature.

As AR eyewear slowly reaches the enterprise market and then consumers, what crosses the creepy line today may only toe it tomorrow. We’ve gotten used to giving away a certain amount of privacy in exchange for useful things like Gmail and Facebook. And we may find ourselves willing to trade away a little more in exchange for truly useful AR. If the face search of the future is permission-based—say, because you’ve opted in through your favorite hotel chain’s loyalty program—you might even like it.

About the author

Fast Company Senior Writer Mark Sullivan covers emerging technology, politics, artificial intelligence, large tech companies, and misinformation. An award-winning San Francisco-based journalist, Sullivan has written for Wired, Al Jazeera, CNN, ABC News, CNET, and many others.