
How to spot the realistic fake people creeping into your timelines

A remarkable advance in artificial portrait generation gives online fraudsters, astroturfers, and propagandists a new potential layer of deception.

[Photo: Amr Elmasry/Unsplash]

Alarm bells went off for Twitter user @ravenvanderrave, a nom de guerre for a consistent naysayer about the electric-car company Tesla, after a purported journalist with no apparent public history began sending messages pressing for personal info.


“Guys – this @msmaisymade account tries to get personal information out of you. This is a fucking RED ALERT BLOCK.”

“Maisy Kinsley,” whose Twitter bio said she was a senior journalist at Bloomberg, had a LinkedIn page and a professional-enough website—with a domain registered March 19, 2019—but no stories filed in her name anywhere.

“Maisy Kinsley”

Ravenvanderrave apparently wasn’t the only Tesla critic that @msmaisymade asked for personal details. But the Twitter avatar photo certainly put ravenvanderrave’s followers on high alert. They couldn’t find a different picture of the purported reporter (nor could I), the avatar image had odd artifacts, and the photo didn’t appear anywhere else online. The account was reported to Twitter for impersonation and quickly taken down, although an archived version can be viewed via the Wayback Machine. (Tesla didn’t respond to a request for comment about the Twitter account.)

Maisy left ripples, however. “This looks suspiciously like the AI people generation to me too,” Brodie Ferguson replied. Another person in the thread responded, “What a freaking cyberpunk world we live in.”

Welcome to the other side of the uncanny valley of profile photographs. Attention to generated imagery has so far focused on “deepfake videos,” in which a real person’s face is grafted semi-realistically (for now) onto someone else’s body. Deep-learning AI-generated fake still-photo faces have a different kind of impact: they look realistic but aren’t attempting to match the appearance of any actual person. And they’re so new that these images have yet to get a name; perhaps deepfaces will win out.


Deepfaces have a greater potential to add to the noise of troll farms, social-media griefers, and outright scammers and fraudsters because they look legitimate and fail reverse-image searches. As Craig Silverman, a long-time exposer of online frauds and BuzzFeed media editor, says, “I think it presents a big challenge for some of the existing approaches used by investigators, journalists, and police and others to follow a breadcrumb trail.”

Even more disturbing? It’s possible we’ve been seeing them for years without realizing it. “It wouldn’t surprise me if this technology was already there—even maybe a year or two ago—to replicate humans and have fake images,” says Jevin West, an assistant professor at the University of Washington’s Information School, who co-teaches a course on “Calling Bullshit.” “I can’t wait for the person to find that they actually existed before the public release of this.”

But the sword has two edges: the particulars of these fake mugshots can be used for unmasking, too. We can train ourselves to look at images more critically, and we can use existing tools and approaches to help pinpoint whether a person exists beyond a face and a c.v. And software designed to make fake faces could, in theory, be steered toward catching them.

[Screenshot: Tero Karras, Samuli Laine, Timo Aila]

An infinite number of fake people’s faces

In December 2018, an academic pre-print paper with the dry title “A Style-Based Generator Architecture for Generative Adversarial Networks,” or GANs, swept across the internet. The paper, by three Nvidia researchers (Tero Karras, Samuli Laine, and Timo Aila), described generating an effectively infinite variety of realistic-looking bio photos that could be controlled via sliders for hair color, race, age, ethnicity, gender, and other fine and coarse details. The processing could happen on commodity GPUs (from Nvidia, naturally); no massive clusters of computation are required to train these GANs, which are simply two neural networks designed to compete with each other to reach a certain goal. That puts the tools within the reach of average researchers.
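The “adversarial” part of a GAN is easier to see in code than in prose: one network invents images from random noise while a second tries to tell them apart from real photographs, and each improves by trying to beat the other. Below is a minimal, purely illustrative sketch in PyTorch; it is nothing like StyleGAN’s actual architecture, and the network sizes, image dimensions, and training settings are invented for illustration.

```python
# Illustrative only: a tiny GAN training loop in PyTorch, not NVIDIA's StyleGAN.
# The generator invents images from random noise; the discriminator tries to
# tell those fakes apart from real photographs.
import torch
import torch.nn as nn

latent_dim = 128          # size of the random "seed" vector (assumption)
image_dim = 64 * 64 * 3   # a flattened 64x64 color image (assumption)

generator = nn.Sequential(
    nn.Linear(latent_dim, 1024), nn.ReLU(),
    nn.Linear(1024, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):
    """One round of the competition; real_images is a batch of flattened photos."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Teach the discriminator to separate real photos from generated ones.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Teach the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```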

While the paper delves into the technical details with a depth that only someone in the field can fully grok, it was the graphic components that delivered the impact. That came partly from illustrations in the paper, but especially from an accompanying trippy six-minute video that combines the dryness of computer science with an unintentionally psychedelic visualization of something torn from Carl Jung’s theory of the collective unconscious.

The generated faces aren’t perfect. But at the typical sizes used for social media avatars, many are shockingly indistinguishable from photographs of real faces. We tend to only glance at avatars, and we assume any artifacts come from bad image compression or resizing.


Along with the paper and video, the researchers posted their code to a GitHub repository under the name StyleGAN. In February, an Uber software engineer, Phillip Wang, created thispersondoesnotexist.com, which generates a completely new image every time you reload the page.

Around the same time, Jevin West and his colleague Carl Bergstrom, a professor in theoretical and evolutionary biology who also teaches at the UW’s I-School, posted something even more unsettling: a side-by-side test of a photographically captured face and a generated one at whichfaceisreal.com. They began co-teaching their course in spotting B.S. before the latest presidential election cycle, and deepfaces are the latest addition to the array of analytical approaches they’re presenting to students and via the project’s website.

Only one of these faces is real. (Hint: you’re right.) [Screenshot: whichfaceisreal.com]
These two sites seemed to produce another wave of fear, disbelief, and mild nausea across social media, as people found themselves compelled by the instant and endless generation of plausible faces.

The current era is rife with propaganda and fraud that makes ample use of invented identities. The ease of adding a fresh face seems to make misleading people that much easier, because performing a reverse-image search via Google Images or TinEye doesn’t match the avatar to a stock photograph, a Flickr image of someone else, or a headshot taken from a corporate web site. (Nvidia declined to make the researchers available for an interview, as they will present more details at the Conference on Computer Vision and Pattern Recognition in June 2019 and can’t disclose more until then under conference rules.)
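A reverse-image search fails against a freshly generated face for a simple reason: there is nothing on file to match. The sketch below uses the Python Pillow and imagehash libraries (with made-up file names) to show the fingerprint-matching idea in its crudest form; Google Images and TinEye use far more sophisticated systems, so treat this only as an illustration of why a never-before-published portrait comes back with zero hits.

```python
# Compare a suspect avatar against a small collection of known photos using
# perceptual hashes. File names are placeholders; the cutoff is arbitrary.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_avatar.jpg"))

known_photos = ["stock_photo.jpg", "flickr_portrait.jpg", "corporate_headshot.jpg"]
for path in known_photos:
    distance = suspect - imagehash.phash(Image.open(path))  # Hamming distance
    if distance <= 8:  # a small distance means the images are near-duplicates
        print(f"{path} appears to be the same image (distance {distance})")

# A GAN-generated face typically matches none of them, so the trail goes cold.
```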

Bergstrom also argues that deepfaces could contribute to something worse. The use of faked personas “is fundamentally undermining democracy,” he says. When elected representatives and government agencies receive feedback from good-enough fakes, whether they impersonate real people with false information, like photos, or can’t be validated as belonging to people who exist at all, he calls it a “man-in-the-middle attack on democracy.” The comment stuffing that occurred around the network neutrality rule changes at the FCC didn’t rely on faces, but it did rely on faked and stolen identities, the New York State Attorney General has alleged. Imagine millions of comments accompanied by millions of unique faces.

Luckily, all is not lost. Both the specific technology and the context in which these generated images of people are used allow them to be unmasked, too.


Invasion of the [GANs]

What the algorithm creates, the algorithm can take away, too. Tesla’s controversial Autopilot driving-assistance software is remarkable in its sophistication, but researchers recently documented that it can also be fooled by a few stickers placed strategically on pavement.

So, too, is StyleGAN: it is exceptionally clever in advancing the state of the art, but it has noticeable “tells” that can’t easily be erased. The eyes and irises of a generated person often don’t look identical; one may be larger or shaped oddly compared to the other. Backgrounds also often don’t look exactly right, especially at the border between the person’s head and the rest of the image.
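For the technically inclined, those tells can even be checked programmatically. The sketch below uses the open-source face_recognition library to locate eye landmarks and compare the widths of the two eyes; the ratio and cutoff are arbitrary assumptions, a crude illustration rather than a proven detector.

```python
# Crude check of one StyleGAN "tell": do the two eyes roughly match in size?
# Uses the face_recognition library; the 0.25 cutoff is an arbitrary assumption.
import face_recognition

def eye_width(points):
    xs = [x for x, _ in points]
    return max(xs) - min(xs)

def eye_asymmetry(image_path):
    image = face_recognition.load_image_file(image_path)
    faces = face_recognition.face_landmarks(image)
    if not faces:
        return None  # no face found
    left = eye_width(faces[0]["left_eye"])
    right = eye_width(faces[0]["right_eye"])
    # 0.0 means identical widths; larger values mean a bigger mismatch.
    return abs(left - right) / max(left, right)

score = eye_asymmetry("avatar.jpg")
if score is not None and score > 0.25:
    print(f"Eyes differ in width by {score:.0%}; worth a closer look.")
```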

We can’t rely on those tells indefinitely. BuzzFeed’s Silverman says, “Six months from now, the AI is probably going to sort all those things out.” The faces will only become more and more convincing. The StyleGAN code has already been adapted (to anime faces, for instance) and extended to other domains. The researchers also showed how the technique could be used for cats and home interiors; imagine an infinite array of faked Airbnb listing pictures.

But even as the quality of the images improves, each generated person will almost certainly remain unique: they’ll appear in one photo and from one point of view. The deep-learning algorithm doesn’t know anything about any face it outputs. Those faces are purely the result of details extracted from training sets of photos of real people and recombined into new forms.

Bergstrom notes, “Right now, to the best of my knowledge, there is no technology that can generate multiple images of the same person.” He says, “It has taken one layer of large-scale features and is putting on additional layers of fine-scale features. There’s no underlying model of what that person looks like.”

Deepfake videos allow changes in perspective by relying on many pictures of the same real person to produce sometimes-convincing generated video of that person’s face. When it comes to generating faces, that’s just not possible given the current approach.


Which face is the real one? (Right again!) [Screenshot: whichfaceisreal.com]
A simple web search is one useful countermeasure. It is exceedingly rare to find a person whose online existence consists of just a single discoverable portrait. Silverman notes that “sometimes the total absence of data is really useful evidence.” If 10 Twitter accounts suddenly appear and begin engaging in suspicious or coordinated behavior, he says, having 10 unique facial photos that exist nowhere else could be a significant signal of fakery.

Silverman says that because you’ll never be able to get someone using a generated portrait to join a video call or to provide images of themselves to prove who they are, they may be more easily dismissed, too. “At the end of the day, if they can’t send a photo from five minutes ago to establish that they’re a person, all that comes crashing down,” he says.

But fraudsters and propagandists using deepfaces may be able to succeed with sheer volume. “The imminent threat of it seems somewhat less than when it gets really, really cheap and really easy and the quality gets really good and we can be flooded with it,” says Silverman. That could overwhelm examination and research.

Who’s the real one? (To the left, to the left) [Screenshot: whichfaceisreal.com]
Bergstrom, at the University of Washington, says that the risk is most acute in this current transition period, in which these images start to appear while people, organizations, and government agencies aren’t immunized against them.

Immunization is part of Bergstrom and West’s goal with their course and online efforts. West notes that people playing whichfaceisreal.com quickly became fairly good at it, learning to look for artifacts, aberrant lighting, and other telltale signs of fakery.




“They’re doing exactly what we wanted,” he says. “They’re getting a little bit better at identifying at least what the fakes are doing to distinguish from the real.” Next, they plan to step up the game and remove backgrounds, as well as show people two fakes and two real images at times.

Of course, even simply asking humans to choose “which face is real” could help advance the technology, too, providing data that, in the wrong hands, could be used to build even more convincing fake faces.




But as people train themselves to recognize faked faces, they can also help train future anti-StyleGAN algorithms, teaching computers how to spot false images automatically. The circle will be complete when you can run any image you see not just through a reverse-image search but through a kind of reality search.
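What might one building block of that reality search look like? One plausible approach, sketched below with PyTorch and torchvision, is to fine-tune an off-the-shelf image classifier on labeled folders of real and generated portraits. The folder layout, model choice, and training settings are assumptions for illustration, not a description of any deployed system.

```python
# Fine-tune a pretrained ResNet to label portraits as "real" or "fake."
# Assumes a directory layout of faces/real/*.jpg and faces/fake/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

data = datasets.ImageFolder("faces", transform=prep)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: fake, real

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a short run, purely for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```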


About the author

Glenn Fleishman is a veteran technology reporter based in Seattle who covers security, privacy, and the intersection of technology with culture. Since the mid-1990s, Glenn has written for a host of publications, including the Economist, Macworld, the New York Times, and Wired.
