Exclusive: The Harvard professor behind Facebook’s oversight board defends its role

The idea seems to have its merits, but how Facebook implements the plan will speak volumes about the company’s real motivations.

[Illustration: Flickr user Simon Summerfield Jolly; Enoc vt/Wikimedia Commons]

With Facebook, Twitter, and YouTube facing widespread criticism for the way they manage hateful, abusive, threatening, or fake content posted by users or content partners on their sites, many question whether private tech companies deserve to wield so much power to control what people can and can’t see on social networks.

Some believe that cultural or political biases shared by the companies’ employees (or baked into their algorithms) make it impossible for them to police user content fairly. Lawmakers and regulators are now exploring the idea of breaking up these tech platforms into smaller pieces to limit their reach. Facebook is an especially alluring target after being weaponized by Russia to influence the 2016 presidential election and after allowing the personal data of 87 million of its users to flow to the Trump-connected political data firm Cambridge Analytica.

Facebook first established a massive force of content moderators—mostly employed by third-party companies—to help find and remove toxic content. The company said it had 15,000 content reviewers around the world by the end of 2018. Last November, Facebook CEO Mark Zuckerberg announced that Facebook would also, by the end of 2019, form an external, independent oversight board to review its content moderation decisions. The board would offer “a new way for people to appeal content decisions to an independent body, whose decisions would be transparent and binding,” based on the idea that “Facebook should not make so many important decisions about free expression and safety on its own.”

Harvard professor Noah Feldman. (Image: Twitter)

This year Facebook released a draft charter for the oversight board, and it has been holding roundtables and focus groups around the world to get input on how the board should operate. But it’s still unclear who will sit on the board, how those people will be chosen, and exactly what types of problem content will fall within its purview.

The whole idea for an oversight board came from outside Facebook: Harvard Law professor Noah Feldman is the main architect of the plan. I spoke to him by phone about the details of his plan, and about some of the problems it’s likely to face.

Fast Company: What has your role been in the board’s creation? How did you engage with Facebook to help set up the guidelines for the oversight board?

Noah Feldman: I dreamt up the idea of a “Facebook Supreme Court” at the end of January 2018, and I sent a one-pager [describing it] to Sheryl Sandberg, who said, “Let me send it to Mark.” Mark was intrigued. I talked to him, and after our conversation he said, “Listen, why don’t you write a white paper for us laying this out at greater length?” So then I produced this 20-page paper [and] sent that to them in March of 2018. So that was the initial vision for the thing. And then subsequently, as they decided they wanted to actually go forward with it, I was hired on as an adviser, and I’ve been advising them all the way through.

FC: One thing I find troubling about this board, and maybe you can help me understand this, is about the nature of the medium, and how quickly harmful content goes up and begins to spread and influence and potentially wreak havoc. How can this board act quickly enough to respond to these things and really make a difference?

NF: So the first part is they’re going to often, ideally, hear cases where they actually have some time to think it over and write a thoughtful opinion. But when they do that, they’re supposed to write opinions that will set precedents on hard questions going forward. And so a lot of the time they will be able to resolve an issue that will then guide Facebook’s action in an immediate virality situation. So take as an example the Nancy Pelosi video. That’s a case about deepfakes—well, it wasn’t that deep a fake, but it’s the same principle. [In the Pelosi video, the creators merely slowed down the audio to make Pelosi’s own words sound slurred; in a real deepfake, she would have appeared to speak different words.] In a perfect world where this thing works, what we’re picturing is the oversight board would already have issued an opinion on what Facebook’s policy should be according to its own values for deepfakes, and they would have given a full explanation or rationale for it. When the Nancy Pelosi thing came up, then Facebook would have been able to say to the world “pursuant to the oversight board’s ruling on what we’re supposed to do with deepfakes, we took this down or we left this up.”

The second situation is where something brand-new comes up that no one’s ever heard of—something that’s potentially capable of achieving virality and has to be addressed right away. Under those circumstances—what I envisioned and what I think Facebook also envisions (you know, I’m advising them, I’m not them)—is that you could convene an emergency panel that would consider whether the initial decision that Facebook made at the content policy level is the right one or the wrong one. So imagine another Nancy Pelosi–like scenario and that Facebook has decided to leave it up. The board would then review that and, in a matter of a few hours, they would either say Facebook got it right, and here’s at least a preliminary statement of reasons, and then more to follow once we’ve had time to wipe our brow and write a good opinion. Or we look at this and say Facebook got it wrong, they did not follow their own values, and we’re directing them to reverse themselves. I think, without it being perfect—because the medium obviously is such that sometimes things happen even faster than a couple of hours—I think you still have the capacity to respond to emergent situations like that.

Someone could say, “Well, what about Christchurch, and it’s up on Facebook Live?” How will the board fix that? The total number of people initially watching it on Facebook Live was surprisingly small relative to the huge number of people who subsequently tried to see it. But on that point, the minute a human sees this, it can go to the board. The fact that it still takes a little while for humans to see things is a feature of the technology that the board is not going to fix. The board can direct Facebook going forward to work harder on finding a technological solution when one is needed.

FC: Are the board’s decisions binding in some way? Is there anything written into Facebook’s community standards or the bylaws that says that it has to honor the board’s opinions?

NF: Facebook is now working on completing what will be called the charter of the board. And the charter will be like the constitution of the board. Front and center in that document will be a guarantee from Facebook that they will obey the board’s directives. Mark said that openly and clearly in our conversation, and that’s point number one of the charter.

FC: When we’re talking about harmful content, this comes in a lot of different types. There’s hate speech and bullying and child pornography. But there are also things like fake news—is that something that the board would get into?

NF: In its first iteration, the board is going to focus on those decisions where Facebook decides to take some content down or leave it up, like binary decisions. And that’s a huge range of content. When something is misinformation, Facebook tries, roughly speaking, to label it as misinformation. It’s referred to independent fact-checkers. They have a whole bunch of stuff that they do with respect to labeling. In version 1 the board is not going to consider those labeling issues. It’s not going to review whether this piece of information is labeled correctly. If this works, it makes perfect sense that it could be extended by Facebook over time not only to address the up-or-down decision, but the decisions about misinformation and decisions about whether something is promoted or is down-ranked.

I think, as you can hear from what Mark has said, he gets it. That’s important and logical. But he also gets that you have to walk before you can run. And if we launched this board right now and asked it to solve every up-ranking and down-ranking question, that would make it a lot harder for the board to succeed in its first iteration. So my own advice to [Mark] has always been—and I think he gets this—let’s launch this one very innovative thing, let’s make sure it actually works, and then if it actually works and it’s truly independent and truly legitimate, and it’s seen that way, then we can add more content to its scope.

FC: Can you give me some ideas on the first or second or third type of content the board might best focus on in its first iteration?

NF: It’ll focus on content where Facebook, under its community standards, typically makes an up or down judgment. So if Facebook believes, for example, that something is hate speech, they don’t just say we’re going to down-rank this as hate speech, they say we take this down. It shouldn’t be on our site. And so those are the kinds of things we’re going to focus on. And obviously you have a lot of close cases where there’s a hard question of whether it violates the community standards. Is it hate speech or is it not hate speech?

For example, think of the public discussion about whether Mark was correct when he hinted that he thought Holocaust denial shouldn’t be taken down as hate speech. And then a lot of people were angry and said, “How dare you say that?” The whole point, the point that Mark gets, is that Mark shouldn’t decide that! It shouldn’t be up to Mark. That is a genuinely hard balancing decision, and in the future it will be made by this board. That’s a good example of the kind of hard content decision about where the borders of hate speech lie. That’s sort of the architectural situation.

Then there’ll also be some situations where Facebook may have set a community standard that isn’t really consistent with its own values. And in those cases, I would envision the board actually saying to Facebook: “Listen, your community standard is wrong; it’s not consistent with the values that you articulated, and so you have to change it.”

FC: OK, but they’re the ones who wrote their standards. Why wouldn’t their values be reflected in their standards?

NF: It’s like what happens in Congress: There’s a Constitution of the United States and every single member of Congress plus the president has sworn an oath to uphold the Constitution. And yet it sometimes happens that Congress passes laws where the Supreme Court takes a good hard look at those laws and decides they are unconstitutional. Congress passed a law prohibiting the burning of the flag because we all love the flag and people fought and died for the flag. But the Supreme Court came in and said, no, that’s unconstitutional. The First Amendment covers symbolic expression and that includes expression we hate, like burning the flag. According to the Supreme Court, that law was not ultimately consistent with the values of the United States.

Similarly, the well-intentioned people at Facebook Content Policy are doing their best to decide every case and to inform those decisions with the company’s values, but they’re human, and sometimes they’ll be swept away by the sentiment of the moment. Or sometimes they’ll just make a judgment call and the oversight board will make a different one. Mark believes that that judgment call is better made by an independent decision-maker outside of Facebook.

FC: Can the language that’s used in a decision made by the board be added to Facebook’s community standards?

NF: It could be. Facebook has its own internal process for how they change their community standards, and it’s my hope and expectation that over time the community standards will come to incorporate the policy guidance that comes from the board. Sometimes that’ll be because the board says this is what you have to do. And sometimes it might just be that the board says, this is what we think you should do. I can imagine a scenario where the people on the content policy team who make the decisions about changes to the community standards come to the board voluntarily and say, hey look, we’re really struggling over this.

FC: That leads into the conversation about all of the countries that Facebook is available in, and how many different cultures there are, how many different legal systems, different mores and values, different nuances. Why shouldn’t Facebook just default to what’s already on the books in the legal system with regard to decisions over free speech? Why does it need its own special set of rules?

NF: Facebook bears a responsibility for what happens on its platform. They shouldn’t—and to their credit I don’t think they do—just say, “This is not up to us. Let’s just let the governments decide.” I don’t believe, as an ethical matter, that it would be right for the creators of this global platform to just pass off responsibility for decision-making to local governments on something as central as expression. So Facebook has to have, at least for purposes of the U.S., a self-regulatory body. With respect to the rest of the world, where, as you correctly said, there’s so much variation, Facebook can actually do its bit to stand up for freedom of expression, articulating standards that are genuinely protective of expression while also looking out for harm.

I have to say I’ve been very impressed by how much Facebook leadership and Mark in particular genuinely cares about freedom of expression. Now, to be sure, it has a good business reason to care about freedom of expression because everything that Facebook does is some form of expression. And I’m not troubled by the fact that it’s also in Facebook’s business interest—it is—but free speech is both convenient and useful and also the right thing. You know, it’s like one of those things that actually is both.

FC: I don’t want to sound too cynical here, but it sounds like it’s going to take a lot of people and a lot of money over time to create a review body for speech in all these different countries. And it comes at a time when more lawmakers want to do away with Section 230 of the Communications Decency Act (which gives tech companies legal immunity from lawsuits over harmful content posted on their platforms by users). I wonder if this is all an effort to demonstrate care about content moderation—to create the impression that Facebook is worthy of 230’s legal shield?

NF: This is just me talking, but I think that this is completely separate from 230. I’ll tell you exactly why. If 230 were repealed tomorrow, it would not affect the overwhelming majority of material that’s posted on Facebook that Facebook currently considers to violate its community standards. It would mean that if someone libeled someone on Facebook, Facebook could be sued, that’s true. And I’m sure that happens sometimes. But [libel] is, you know, I think one sentence out of 46 pages or something of the community guidelines. There are business reasons that obviously push both Facebook and all the other platforms to like Section 230. But the aim here [for the board] is just to try to give some logic and some principle and some expression of honesty and values of transparency to the content decisions that happen every day that are not things for which Facebook would ever be legally liable.

If people really want to pull down 230 protections, they’re going to do it anyway. And if they did do it, we would need [the board] every bit as much. It wouldn’t be more or less.

FC: Much of the discussion around 230, at least on the Republican side, has been that tech companies are really publishers, and that they are biased against conservative voices, and so they don’t deserve legal immunity from libel suits. Tech companies like Facebook deny this and say they’re just applying their community guidelines. Do you foresee the board having to make decisions on political bias?

NF: In the long run, absolutely it will. It hasn’t been determined yet whether deplatforming will be in scope in v.1 or only in a subsequent version. I think there are very strong reasons to include it. But they haven’t issued the charter yet, so it’s not settled. Those kinds of content-based decisions, decisions that can be interpreted as political, are absolutely going to be in front of the board. And it’s going to be up to the board to explain to the world—including critics—why it’s made its decisions and how it’s balancing freedom of expression against values like safety and equality.


Within the scope of what an outside deliberative body can do, Feldman convinces me that the oversight board is a good idea and a feasible one. Such a body may be a necessity for a social network that reaches more than 2 billion people.

Still, I’m not convinced that the board will help eliminate the kinds of content that do the most harm on Facebook. With some content, like the Facebook Live video of the Christchurch shooter, the problem isn’t deciding if the content violates Facebook’s community standards or reflects the company’s values. The problem is a technical one—how to quickly detect all instances of the stream, and the re-uploads of the footage after the fact, and delete them. Twenty-four hours after the Christchurch live stream happened, 300,000 re-uploads of the video could reportedly still be found on Facebook.

As the 2020 elections approach, the oversight board won’t initially be asked to make judgments on how to label specific pieces of content as fake news, or partially fake news, or political parody. It won’t be asked how much to “de-rank” or suppress fake news in users’ news feeds. It won’t be asked to deal with the half-truths peddled in the Trump campaign’s political ads on Facebook.

The oversight board may be asked if a new legal classification needs to be developed for deepfakes, but deepfakes, too, present a detection and reaction problem, not so much a legal one. “It took a while for our systems to flag that and for fact-checkers to rate it as false,” Zuckerberg said in the wake of the Nancy Pelosi “cheapfake” uproar. “During that time, it got more distribution than our policies should have allowed.” In fact, within 48 hours of its upload, the video had been viewed more than 2 million times and shared 45,000 times on Facebook. When Facebook’s systems and people identified it as fake, the company took the reasonable action of labeling it as such rather than deleting it. Facebook decided that it was better to let people see the content, along with the “fake” label, than to give itself the power to remove it altogether.

Finally, Facebook has said that it’s turning itself into a “privacy-focused” platform where most social interactions will happen among small groups in private spaces. Communication within these spaces will be end-to-end encrypted. Yet Facebook has said little about how it intends to deal with harmful content within these spaces, and it has not said what role it hopes the oversight board will play. The board seems geared toward weighing free speech rights within Facebook’s public social network. There’s a big difference between yelling “fire” in a public space like a crowded theater—or on Facebook—and yelling “fire” in one’s own bedroom, where there is an expectation of privacy.

It’s way too early to say how meaningful a role Facebook’s oversight board will play in helping control nastiness and incivility on the social network. If Facebook allows the formation and operation of the board to be completely independent, it might be seen as legitimate and helpful. But it’s no panacea.
