
Today In Tabs: Copyright Law Was Not Created To Protect People From Fatwas

An exclusive excerpt from Sarah Jeong’s new book, The Internet of Garbage.

[Photos: Flickr users tanakawho, Gavin Schaefer]

Sarah Jeong is one of the smartest writers on the weird intersection of the law and the internet, and today she has an ebook out called “The Internet of Garbage,” about the state of internet garbage in 2015, how we got here, and what we can do about it. The book covers spam, gendered harassment, and free speech, and arrives at solid technical and policy ideas for making the internet better and safer for everyone. But I think my favorite chapter was the story of a strange copyright case called Garcia v. Google and why the copyright-takedown model is a dead end for fighting online harassment. Fortunately, Sarah has a book to sell, so she gave me permission to send you that chapter as Tabs today. Here’s ~3,500 words on copyright law and internet harassment. Enjoy!


Copyright Law Was Not Created to Protect People From Fatwas

1. Cindy Lee Garcia

On December 15, 2014, an en banc panel of 11 judges of the Ninth Circuit Court of Appeals sat for oral arguments in Garcia v. Google. Cris Armenta, the attorney for the plaintiff, began her argument:

Cindy Lee Garcia is an ordinary woman, surviving under extraordinary circumstances. After YouTube hosted a film trailer that contained her performance, she received the following threats in writing:

Record at 218: “Are you mad, you dirty bitch? I kill you. Stop the film. Otherwise, I kill you.”

Record at 212: “Hey you bitch, why you make the movie Innocence of Muslim? Delete this movie otherwise I am the mafia don.”

Record at 220: “I kill whoever have hand in insulting my prophet.”

Last one, Record at 217. Not the last threat, just the last one I’ll read. “O enemy of Allah, if you are insulting Mohammed prophet’s life, suffer forever, never let you live it freely, sore and painful. Wait for my reply.”

At this point, Armenta was interrupted by Judge Johnnie Rawlinson. “Counsel, how do those threats go to the preliminary injunction standard?”

Indeed, it was an odd way to begin, and the observers—mostly lawyers deeply familiar with copyright who had followed the case with great interest—were confused by it. Wasn’t Garcia a case about copyright law and preliminary injunctions?

For Cindy Lee Garcia, of course it wasn’t. It was a case about her right to control her exposure on the Internet. But in her quest to end the barrage of hate aimed at her, she ended up in a messy collision with copyright doctrine, the Digital Millennium Copyright Act (DMCA), the Communications Decency Act (CDA), and the First Amendment.

The Ninth Circuit had released an opinion earlier that year, written by then Chief Judge Kozinski. Garcia may have made few headlines, but it caused a wild frenzy in the world of copyright academia. In short, Kozinski’s opinion appeared to break copyright law as had been understood for decades, if not a century.

The case was a hard one—the plaintiff was sympathetic, the facts were bad, and the law was—Kozinski aside—straightforward. Cindy Garcia had been tricked into acting in the film The Innocence of Muslims; her dialogue was later dubbed over to be insulting to the prophet Mohammed. The film’s controversial nature would go on to play an odd role in geopolitics—at one point, the State Department blamed the film for inciting the attack on the U.S. diplomatic compound in Benghazi.


Meanwhile, Garcia was receiving a barrage of threats due to her role in the film. She feared for her safety. The film’s producers, who had tricked her, had vanished into thin air. She couldn’t get justice from them, so she had to settle for something different. Garcia wanted the film offline—and she wanted the courts to force YouTube to do it.

Garcia had first tried to use the DMCA. YouTube wouldn’t honor her request. Their reasoning was simple: the DMCA is a process for removing copyrighted content, not offensive or threatening material. While Garcia’s motivations were eminently understandable, her legal claim was hollow. The copyright owner of the trailer for The Innocence of Muslims was Nakoula Basseley Nakoula, not Garcia. Garcia pressed the theory that her “performance” within the video clip (which amounted to five seconds of screen time) was independently copyrightable, and that she had a right to issue a DMCA takedown. YouTube disagreed, and its position was far from unfounded—numerous copyright scholars agreed with it. (In the December 2014 en banc hearing, Judge M. Margaret McKeown would comment, “Could any person who appeared in the battle scenes of The Lord of the Rings claim rights in the work?”)

Garcia went to court. She lost in the district court, and she appealed to the Ninth Circuit. To nearly everyone’s surprise, then Chief Judge Kozinski agreed with her that her five-second performance had an independent copyright, a move that went against traditional doctrinal understandings of authorship and fixation.

A strange thing then unfolded. It wasn’t merely a decision that Garcia had a copyright inside a work someone else had made. If it had been, Garcia could have gone home and reissued the DMCA request. Instead, the court ordered YouTube to take down the video—creating an end-run around the DMCA, even though the notice-and-takedown procedure had been specifically designed to grant services like YouTube “safe harbor” from lawsuits so long as they complied with it. (Cathy Gellis, in an amicus brief written for Floor64, argued that an end-run around CDA 230 had been created as well.) Kozinski had broken both copyright law and the DMCA.

Google/YouTube immediately appealed the decision, requesting an en banc hearing—essentially, asking the court of appeals to rehear the case, with 11 judges sitting instead of only three. Their petition was accompanied by 10 amicus briefs by newspapers, documentarians, advocacy groups, industry groups for technology companies and broadcasters, corporations like Netflix and Adobe, and law professors by the dozen.

Nobody liked the Garcia ruling. What did it mean for news reports that cast interview subjects in an unflattering light? And what did it mean for reality television shows? For documentaries? What did it mean for services like Netflix that hosted those shows and documentaries? The first Ninth Circuit opinion had torn a gaping hole in copyright and pierced the well-settled rules governing how copyright liability worked on the Internet.


In May 2015, the first ruling was reversed by the en banc panel. “We are sympathetic to her plight,” the court wrote. “Nonetheless, the claim against Google is grounded in copyright law, not privacy, emotional distress, or tort law.”

Garcia is a case that may yet go up to the Supreme Court, though until then, interest in it will likely be confined to copyright academics and industry lawyers. Yet lurking beneath the thorny legal and doctrinal issues is the great paradigm shift of the present digital age: the rise of the conscious and affirmative belief that women should have, must have, some kind of legal recourse against threats online. It’s how Cris Armenta wanted to frame her argument, and it was no doubt an important motivating factor behind the 2014 Kozinski decision. Cindy Lee Garcia is a woman stuck between a rock and a hard place. Nonetheless, the 2014 Garcia decision was wrongly decided. Garcia is not just a weird copyright case; it’s a case that speaks volumes about popular attitudes toward online harassment, and about the dead end that comes from a focus on content removal.

2. How The DMCA Taught Us All The Wrong Lessons

Cindy Garcia went straight to the DMCA because it was the “only” option she had. But it was also the “only” option in her mind because 16 years of the DMCA had trained her to think in terms of ownership, control, and deletion.

When you assume that your only recourse for safety is deletion, you don’t have very many options. It’s often very difficult to target the poster directly. They might be anonymous. They might have disappeared. They might live in a different country. So usually, when seeking to delete something off the Web, wronged individuals go after the platform that hosts the content. The problem is that those platforms are mostly immunized through Section 230 of the Communications Decency Act (described in detail below). The biggest gaping hole in CDA 230, however, is copyright. That’s where most of the action regarding legally-required deletion on the Internet happens, and all of that is regulated by the DMCA.

The Digital Millennium Copyright Act

The Digital Millennium Copyright Act, among other things, provides “safe harbor” to third-party intermediaries so long as they comply with notice-and-takedown procedures. So if a user uploads a Metallica music video without permission, Warner Bros. cannot proceed directly to suing YouTube. Instead, Warner Bros. would send a DMCA notice. If the notice is proper, YouTube must take down the video or forfeit its “safe harbor.”


The safe harbor provision of the DMCA is widely credited with encouraging the rise of services like YouTube, Reddit, WordPress, and Tumblr—services that are now considered pillars of the current Internet. These sites host user-generated content. While there are certainly rules on these sites, the mass of user-generated content can’t be totally controlled. Without DMCA safe harbor, these sites couldn’t cope with copyright liability for material that slipped through the cracks. Although today YouTube uses a sophisticated ContentID system that does manage to automatically identify copyrighted content with surprising accuracy, ContentID was developed later in YouTube’s history. This extraordinary R&D project couldn’t have existed without the early umbrella of protection provided by DMCA safe harbor. Theoretically, DMCA safe harbor protects the little guys, ensuring that the Internet will continue to evolve, flourish, and provide ever-new options for consumers.

The DMCA is also one of the handful of ways you can force an online intermediary to remove content.

The Communications Decency Act, Section 230

Under present law, DMCA works in lockstep with Section 230 of the Communications Decency Act, which generally immunizes services from legal liability for the posts of their users. Thanks to CDA 230, if someone tweets something defamatory about the Church of Scientology, Twitter can’t be sued for defamation.

There are very few exceptions to CDA 230. One notable exception is federal criminal law, such as the ban on child pornography. But the big one is copyright: copyright infringement is not shielded by CDA 230; instead, it is governed by the provisions of the DMCA.

CDA 230 was created in response to Stratton Oakmont v. Prodigy, a case in which the Web service Prodigy was sued over bulletin board posts that “defamed” the Wall Street firm Stratton Oakmont. (Today, Stratton Oakmont is best known as the subject of the Martin Scorsese film The Wolf of Wall Street, adapted from a memoir.)


At the time, Prodigy received 60,000 postings a day on its bulletin boards. The key was that Prodigy did enforce rules, even if it couldn’t control every single posting. By taking any sort of action to curate its boards, it had opened itself up to liability. Strangely, the Stratton Oakmont decision discouraged moderation and encouraged services to leave their boards open as a free-for-all. Legislators sought to reverse Stratton Oakmont by creating CDA 230.

Changing CDA 230?

CDA 230 was meant as a shield to encourage site moderation and voluntary processes for removing offensive material. Ironically, it is now also the greatest stumbling block for many of the anti-harassment proposals floating around today. CDA 230 can seemingly shield revenge porn sites—sites that purportedly post user-submitted nude pictures of women without their consent. Danielle Citron, in Hate Crimes in Cyberspace, proposes creating a new exception to CDA 230 that would allow liability for sites dedicated to revenge porn, a smaller subset of a category of sites for which Citron adopts Brian Leiter’s label: “cyber-cesspool.”

CDA 230 has no doubt been essential in creating the Internet of 2015. Any change to the status quo must be carefully considered: how much of the Internet would a new exception take down, and which parts would they be? What kind of carve-out would there be for news sites and newsworthy material? Crafting the perfect exception to CDA 230 is not theoretically impossible, but a practical consideration further muddies the waters.

Any legislation laying out a new exception, no matter how carefully crafted from the start, will likely suffer from mission creep, making the exception bigger and bigger. See, for example, efforts to add provisions to outlaw “stealing cable” in a 2013 Canadian cyberbullying bill. Anti-harassment initiatives become Trojan Horses of unrelated regulation. It is rhetorically difficult to oppose those who claim to represent exploited women and children, so various interest groups will tack on their agendas in hopes of flying under the cover of a good cause.

At the time of writing, CDA 230 remains unaltered. But new considerations are in play. Many of the major revenge porn sites have been successfully targeted either by state attorneys general or by agencies like the Federal Trade Commission. One operator, at least, was not blindly receiving submissions as a CDA 230-protected intermediary, but was actually hacking into women’s email accounts to procure the photos. Other operators were engaging in extortion, charging people to “take down” the photos for a fee. Revenge porn websites have demonstrated a long and consistent pattern of unlawful conduct adjacent to hosting the revenge porn itself. These sites, which Danielle Citron calls the “worst actors,” never quite evade the law even with CDA 230 standing as-is. It turns out that these worst actors are, well, the worst.


A new exception to CDA 230 aimed at protecting the targets of harassing behavior stands at an uncanny intersection. A narrow exception does not newly criminalize bad behavior; it targets people who have consistently demonstrated themselves to be engaged in a host of other, already prosecutable crimes. But a broad exception, aimed just a step above the “worst actors,” could be disastrous for the Internet.

Turning Hate Crimes Into Copyright Crimes

When Citron’s Hate Crimes in Cyberspace went to print, she outlined a proposal for a limited and narrow exception to CDA 230, meant to target these “worst actors.” But she also took great pains to explain how it was not targeted at other, more mainstream sites, with Reddit cited as an example of a site that would not be affected.

Shortly after Hate Crimes in Cyberspace was published in September 2014, Reddit became ground zero for the distribution of nude photos of celebrities that had been hacked from their Apple iCloud accounts. “Leaked” nudes or sex tapes are nothing new in Hollywood, but in an era of increasing awareness of misogyny on the Web, this mass nonconsensual distribution of photos struck a new chord. Jennifer Lawrence called what happened to her a “sex crime,” and many pundits agreed.

Reddit was slow to remove the subreddit that was the gathering place for the photos. But eventually it did, on the grounds that the images being shared there were copyrighted. A tone-deaf blog post by then-CEO Yishan Wong announced that Reddit was “unlikely to make changes to our existing site content policies in response to this specific event,” explaining,

The reason is because we consider ourselves not just a company running a website where one can post links and discuss them, but the government of a new type of community. The role and responsibility of a government differs from that of a private corporation, in that it exercises restraint in the usage of its powers.

The title of the post was, incredibly, “Every Man is Responsible for His Own Soul.” Yishan Wong resigned in November 2014 (supposedly over an unrelated conflict). In February 2015, under new CEO Ellen Pao, Reddit implemented new policies on nonconsensually distributed nude photos. By May 2015, Reddit implemented site-wide anti-harassment policies.


As of writing, Reddit is now in a very different place than it was in 2014—but its actions in September of that year are a fascinating case study in the worst way for a platform to handle harassment. Reddit is not a “worst actor” in the hierarchy of platforms, and its relative prominence on the Internet likely did end up influencing its eventual policy changes, despite initial resistance. What’s striking about the September 2014 incident is that in removing the offending subreddit, Reddit did not appeal to morals, the invasion of privacy, Reddit’s pre-existing rule against doxing (the nonconsensual publication of personal information), or the likely crime that had occurred in acquiring the photos in the first place. Instead, Reddit cited DMCA notices, effectively placing copyright as a priority over any of those other rationales.

The affair doesn’t cast Reddit in a particularly good light, but the bizarre entanglement between the DMCA and gendered harassment on the Internet isn’t new. Regardless of their motivations, both Reddit and Cindy Lee Garcia fell into the same trap: They turned a hate crime into a copyright crime.

When people are harassed on the Internet, the instinctive feeling for those targeted is that the Internet is out of control and must be reined in. The most prominent and broad regulation of the Internet is through copyright, as publicized in the thousands of lawsuits the RIAA launched against individual downloaders, the subpoenas the RIAA issued to ISPs to unmask downloaders, and the RIAA and MPAA’s massive lawsuits against the Napsters, Groksters, and even YouTubes of the world. In our mass cultural consciousness, we have absorbed the overall success of the RIAA and the MPAA in these suits, and have come to believe that this is how one successfully manages to reach through a computer screen and punch someone else in the face.

Online harassment, amplified on axes of gender identity, race, and sexual orientation, is an issue of social oppression that is being sucked into a policy arena that was prepped and primed by the RIAA in the early 2000s. The censorship of the early Internet has revolved around copyright enforcement, rather than the safety of vulnerable Internet users. And so we now tackle the issue of gendered harassment in a time where people understand policing the Internet chiefly as a matter of content identification and removal—and most dramatically, by unmasking users and hounding them through the courts.

Yet an anti-harassment strategy that models itself after Internet copyright enforcement is bound to fail. Although the penalties for copyright infringement are massive (for example, statutory damages for downloading a single song can be up to $150,000), and although the music and movie industries are well-moneyed and well-lawyered, downloading and file-sharing continues.

Content removal is a game of whack-a-mole, as Cindy Lee Garcia learned. Shortly after the first Ninth Circuit decision in her favor, she filed an emergency contempt motion claiming that copies of The Innocence of Muslims were still available on the platform, demanding that Google/YouTube not only take down specific URLs but also take proactive steps to block anything that came up in a search for “innocence of muslims.”


From Garcia’s point of view, if her safety was at stake, then only a total black-out could protect her. But copyright law was not created to protect people from fatwas. Her case, already a strange contortion of copyright law, became even messier at this point, as her lawyer asked for $127.8 million in contempt penalties—the copyright statutory-damages maximum of $150,000 multiplied by the 852 channels that were allegedly “still up.” At that moment, Cindy Garcia, who had so far been a sympathetic plaintiff laboring under extraordinarily difficult circumstances, became indistinguishable from a copyright troll—a plaintiff who abuses copyright law to turn a substantial financial profit.

Google’s reply brief clapped back: “Garcia’s fundamental complaint appears to be that Innocence of Muslims is still on the Internet. But Google and YouTube do not operate the Internet.”

3. The Illusive Goal of Total Control

Garcia may have been right that removing or disabling most or even some instances of the video could have mitigated her circumstances. But it’s hard to say, especially once the cat was out of the bag. Indeed, during the December 2014 oral arguments, Judge Richard Clifton chimed in with, “Is there anyone in the world who doesn’t know your client is associated with this video?” Garcia’s attorney stumbled for a bit, and Judge Clifton interrupted again, musing, “Maybe in a cave someplace, and those are the people we worry about, but…”

In many circumstances, online content that continues to draw attention to a target amplifies the harassment, and once the content falls out of sight, the interest disappears as well. But Garcia wasn’t seeking merely to mitigate the harassment; she wanted to wipe the film off the Internet simply because she had appeared in it.

Garcia was chasing a dream of being able to completely control her image on the Internet. It’s an echo of the same dream that the record industry has been chasing since the 1990s. It’s not that you can’t impact or influence or dampen content in the digital realm. But there’s no way to control every single instance, forever.

Any anti-harassment strategy that focuses on deletion and removal is doomed to spin in circles, damned to the Sisyphean task of stamping out infinitely replicable information. And here, of course, is the crux of the issue: Harassing content overlaps with harassing behavior, but the content itself is only bits and bytes. It’s the consequences that echo around the content that are truly damaging—threats, stalking, assault, impact on someone’s employment, and the unasked-for emotional cost of using the Internet. The bits and bytes can be rearranged to minimize these consequences. And that’s a matter of architectural reconfiguration, filtering, community management, norm-enforcement, and yes, some deletion. But deletion should be thought of as one tool in the toolbox, not the end goal. Because deletion isn’t victory, liberation or freedom from fear. It’s just deletion.


If you read that whole thing, or I guess if you just skimmed down here to the end, please buy the rest of the book, it’s very good. Sarah also adapted some more of it today in a post about Reddit for Forbes, and Today in Tabs will be back tomorrow with the day’s regular internet garbage. Thanks to Fast Company for helping me do any ridiculous thing I want here, and TinyLetter for getting it to your inbox.

Don’t be an Inbox Zero, be an Inbox Hero. Subscribe to Today in Tabs.

  
