
A Twitch streamer got caught viewing deepfake porn. His quest to make amends shows how hard removing it actually is

It’s easier than ever to create nonconsensual explicit images and videos using deepfakes. It’s almost impossible to get them taken down.

[Photo: Wei Ding/Unsplash]

BY Charlie Metcalfe | 7 minute read

On January 30, Brandon Ewing made a mistake he’ll never forget. While broadcasting himself playing Hitman 3 to some of his 316,000 Twitch followers, Ewing, who streams as Atrioc, briefly switched windows to check the time, revealing a browser tab open to a pornographic website. Not only did the OnlyFans-esque website host porn, but the page shown belonged to a user who exclusively made AI-generated deepfake porn, including videos that featured other Twitch streamers—many of whom Ewing knows personally.

Several of Ewing’s viewers captured screenshots and shared them online, revealing the website and the women involved. They went viral. One post on Reddit received 8,500 upvotes before moderators eventually removed it. Some of the women from the videos remained silent. Others, including Twitch streamer QTCinderella, expressed their disgust online. “I want to scream,” she wrote on Twitter. “Being seen ‘naked’ against your will should NOT BE A PART OF THIS JOB.” She demanded that people stop sharing screenshots of, and links to, the website.

While deepfakes—which use machine learning and AI to de-age an actor or even realistically swap their face with someone else’s—have attracted attention over the past few years for the political disinformation risks they pose, most are pornographic. Software once required large volumes of images to generate videos of victims. Now, with only a few photos scraped from a social media feed, almost anybody with access to the internet can generate their own deepfakes.

According to a 2019 analysis by the company Deeptrace, porn makes up about 96% of all deepfake content online. Since that report, progress in AI has increased accessibility, accelerated production, and improved results. The app Reface alone has generated more than a billion face swaps.

In a follow-up stream a few hours after the incident, Ewing (with his wife crying behind him) claimed his curiosity had gotten the better of him in subscribing to the website. Two days later, he published a written apology on Twitter to announce that he would do everything he could to repair the damage. “[QTCinderella] described it to me as a ‘wildfire’, and I believe that is correct,” he wrote. “My goal now is concrete action to fight that wildfire and do everything I can do to combat the damage.”

Even the user whose work Ewing spotlighted removed the videos in the aftermath and issued an apology—pledging to “help decrease the number of future videos of those involved.”

But as Ewing has found, coaxing the deepfake genie back into the bottle is no easy task. With the new tools for making such content becoming more accessible every day, the traditional methods of combating it—copyright claims—are proving insufficient. Even the companies stepping in to help remove the illicit works from search results are only hiding something that may never disappear. 

In the aftermath of the Atrioc incident, the offending videos proliferated across the internet despite being deleted. At QTCinderella’s recommendation, Ewing contacted Los Angeles law firm Morrison Rothman, which serves people in the digital entertainment sector. Ewing paid the firm a $60,000 one-off fee, and announced that any women affected by his mistake could contact it for help.

Ryan Morrison, a founding partner at Morrison Rothman, tells Fast Company that a number of women then contacted his firm—some through Ewing, others independently. Morrison’s employees got to work searching for websites hosting the content. Once they found these websites, they submitted takedown notices—requests to remove the offending content, in line with the relevant copyright or trademark laws in the owner’s country. In the U.S., this is principally the Digital Millennium Copyright Act (DMCA).

Morrison explains that a website owner must, by law, remove the content on receipt of a DMCA takedown request. That said, if the website owner responds with a counter-notice within two weeks, the owner can then republish the content without any review process. If victims wish to pursue the case further, they need to take formal legal action. “Since most people don’t have the money or desire to sue, many trolls get away with getting their content back up,” Morrison says.

Sometimes, the website owners do not respond, or live in countries without effective copyright laws. In these cases, Morrison’s team will pursue the website’s hosting company. Morrison says that these companies will almost always say “Go to hell. It’s not our problem.” At that point, his team can approach the registrar, the company through which the website’s domain name is registered. The registrar can take the offending website offline, although this sometimes requires court action, according to Morrison.
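Taken together, Morrison’s account describes an escalation ladder with a built-in clock: notify the site owner, wait out the roughly two-week counter-notice window, then move up to the host and finally the registrar, with a lawsuit as the last resort. As a rough illustration only, here is a toy model of that ladder in Python; the two-week window and the escalation order come from Morrison’s description, while everything else is invented for the sketch.

from enum import Enum, auto

class Step(Enum):
    NOTICE_OWNER = auto()      # DMCA notice to the website owner
    NOTICE_HOST = auto()       # escalate to the hosting company
    NOTICE_REGISTRAR = auto()  # escalate to the domain registrar
    LAWSUIT = auto()           # formal legal action / court order

def next_step(step: Step, complied: bool, counter_noticed: bool) -> Step | None:
    """Toy escalation logic only; returns the next step to take,
    or None if the content came down and stayed down."""
    if step is Step.NOTICE_OWNER:
        if counter_noticed:
            # A counter-notice within ~2 weeks lets the owner republish;
            # only formal legal action can keep the content down.
            return Step.LAWSUIT
        return None if complied else Step.NOTICE_HOST
    if step is Step.NOTICE_HOST:
        # Hosts "almost always" refuse, per Morrison.
        return None if complied else Step.NOTICE_REGISTRAR
    if step is Step.NOTICE_REGISTRAR:
        # Registrars can pull a site, but sometimes need a court order.
        return None if complied else Step.LAWSUIT
    return None  # LAWSUIT is the end of the ladder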

For Ewing, this process seemed too slow. Although the original website had removed the offending content, the porn had proliferated across the internet. Thousands of web pages now displayed it. Ewing believed that Morrison Rothman’s team was manually searching for and typing up each DMCA request individually. (Morrison confirmed this, although he said that his company also uses software to find offending content.) By the end of February, Ewing says that the law firm had taken down only 51 web pages.


That’s when he heard about Ceartas. The Dublin-based anti-piracy and privacy protection agency typically helps people protect their copyrighted material online, and it has a strong track record: it has represented rapper Iggy Azalea, model Lucy Pinder, and adult content creator Sophie Mudd.

Like Morrison Rothman, Ceartas helps to find creators’ content that has been copied online, and then gets it taken down for breaching copyright or trademark laws. But its founder, Dan Purcell, claims to have found a quicker and more effective means of doing so—using innovative software assisted by AI to find offending content and submit DMCA takedown requests.

Purcell hesitates to talk in depth about how his software works for fear of giving away too much, but he says it creates automatic lists of search terms that potential viewers might use when looking for the relevant content. Its “crawlers” use those search terms to scour the internet for images and store them in a file when they find them. Using machine learning, each image found enhances the bots’ subsequent searches. Ceartas’s software then automatically serves takedown notices to all the offending websites (often running into the thousands). It also sends automatic delisting requests to Google, which prevent the pages from appearing in its search results.
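Ceartas’s exact pipeline is proprietary, but the workflow Purcell describes maps onto a familiar pattern: expand a seed list of search terms, crawl the results, record matches, and batch the notices by host. The sketch below is a minimal, hypothetical illustration of that loop in Python; search_web, looks_like_target, and file_dmca_notice are invented stand-ins, not Ceartas’s actual code or any real API.

from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Finding:
    url: str    # page hosting the suspected copy
    query: str  # search term that surfaced it

def expand_queries(seed_terms: list[str]) -> list[str]:
    """Build the queries a potential viewer might type. A production
    system would learn these; here we just combine seeds with
    common qualifiers."""
    qualifiers = ["deepfake", "video", "download"]
    return [f"{term} {q}" for term in seed_terms for q in qualifiers]

def search_web(query: str) -> list[str]:
    """Hypothetical search call; a real crawler would hit a search API
    or walk known hosts. Stubbed to return nothing."""
    return []

def looks_like_target(url: str) -> bool:
    """Hypothetical match check. Ceartas reportedly uses machine
    learning on the images; this stub accepts everything so the
    control flow stays visible."""
    return True

def file_dmca_notice(domain: str, urls: list[str]) -> None:
    """Hypothetical notice submission, one consolidated batch per host."""
    print(f"DMCA notice to {domain}: {len(urls)} URL(s)")

def run_sweep(seed_terms: list[str]) -> None:
    findings = [
        Finding(url=url, query=query)
        for query in expand_queries(seed_terms)
        for url in search_web(query)
        if looks_like_target(url)
    ]
    # Group hits by domain so each host receives a single notice.
    by_domain: dict[str, list[str]] = {}
    for f in findings:
        by_domain.setdefault(urlparse(f.url).netloc, []).append(f.url)
    for domain, urls in by_domain.items():
        file_dmca_notice(domain, urls)

The grouping step mirrors how takedown notices are typically served in practice: one notice per site operator listing every offending URL, rather than one notice per page.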

Using this method, Purcell says he delisted 400 web pages in the 12 hours after Ewing contacted him in March. When Fast Company last spoke to him on May 26, he said that Ceartas had delisted 269,414 web pages from Google, with about 50% of those being full takedowns in response to DMCA notices.

Ceartas’s method succeeds in removing and delisting large volumes of content from the internet. Its success rate in Google delisting requests is 90%, which Purcell claims is the highest in the industry. In the world of deepfake porn, that at least means that family members and friends—or the subjects themselves—are less likely to stumble across the content while innocently browsing the web.

Nevertheless, in many cases the content continues to exist. Google delistings only prevent the content from showing up on Google searches, not on the search functions of specific porn websites. And, unlike Morrison Rothman, Ceartas does not employ lawyers. Purcell’s team is unable to pursue legal cases against stubborn website owners, hosting companies, or registrars.

The challenge of removing pornographic deepfakes is something Noelle Martin, a legal reform activist in Australia, understands from personal experience. As a repeat target of deepfake porn, she now helps women seek justice. (Women directly affected by the deepfakes Ewing showed did not respond to requests for comment.) She says laws worldwide are still insufficient to address the pervasiveness of harmful AI-generated content. 

The global picture suggests these tools will remain limited until governments take action. Both Morrison Rothman and Ceartas rely on civil legislation, including copyright and trademark laws, to request the removal of content. And while the U.K. moved to criminalize deepfake pornography in November 2022, most U.S. states and the federal government have yet to develop specific legislation against it. The EU also lacks specific laws to protect the victims of deepfake pornography. Even the civil law is hazy, according to Morrison.

Social media platforms might be able to evolve faster. Following the incident in January, Twitch updated its community guidelines, which now state that anyone caught promoting, creating, or sharing pornographic deepfakes will receive an indefinite suspension upon the first offense. But Martin says governments need to enact laws that force all media platforms to do more. She believes lawmakers should penalize platforms that host the content, and that creating nonconsensual deepfake porn should be criminalized. 

Because ultimately, in many cases, deepfake porn involving unknowing victims will continue to exist in the deepest, darkest corners of the internet—out of reach of even companies like Ceartas. “Technology’s great, but there’s a very dark side of this that people don’t really see,” Purcell says. “There’s an underbelly that’s quite disturbing.”

Martin believes that attempts to remove and delist deepfake porn from the internet are largely superficial. From her perspective, the damage has been done. Although her initial response as a victim was also to try to remove the content, she acknowledges that removal fails to address the source of the problem: the people creating the content. But a combination of tools and approaches could help, she says, as would education. “It isn’t just going to be one solution.”

