Since the 2016 presidential election, it’s become clear that fake news could have impacted the result. And social media was its perfect enabler–built for the rapid sharing of articles that aren’t necessarily even read, and designed to validate quack sites with the same design treatment as the New York Times.
But everyone makes mistakes. The question now isn’t whether fake news swayed the election. It’s how social media platforms can effectively curb viral misinformation, now that we recognize it exists. After all, one recent study found that 75% of people believe the fake news they see. If it’s a problem we don’t solve now, it might be a problem we’re too stupid ever to solve.
With this in mind, Co.Design spoke to the biggest players in the Valley–including Facebook, Apple, and Reddit–to see what initiatives they’ve launched to counter the spread of misinformation.
When I asked Google what it had been doing to curb fake news, the company wrote back with an 800-word list of initiatives. Needless to say, Google has acknowledged the problem of viral misinformation. It funds initiatives in fact-checking and trains journalists to use digital tools. And given that Google just does so much–name another company with an equally expansive set of digital services–fake news does impact the company’s various products in vastly different ways.
Notably, Google News launched a tab in February that lets you filter for fact-checked stories. However, Google’s two most important steps against fake news come from its two most important pillars: AdSense and Search.
With AdSense, Google has kicked hundreds of sites off the platform within the last year–and while exact offenses vary, posing as a news organization was one of the criteria. Without the revenue pipeline of ads, fake news has a tougher time being viable.
Google has long hired people to review the results of Google searches and rate them for “relevance”–whether or not the results make sense for a given query. However, the company is now changing how those results are prioritized. Within 200 pages of newly published search quality rating guidelines, released this March, Google instructs these raters to mark fake news stories with an “upsetting-offensive” tag. You can read a lot more on the new Search initiative here, but raters are instructed to flag posts that deny the Holocaust or promote pseudoscience that equates black people with monkeys (these aren’t hypotheticals; they’re both actual examples that Google provided in its documentation).
Now, just because a post is flagged doesn’t mean it will stop showing up near the top of search results. In fact, this work serves only as data to inform future versions of the search algorithms, written both by Google’s engineers and by AI. However, Google has already improved search results for queries like “Did the Holocaust happen?”
Unfortunately, while Google is making headway now, the company is behind. Seriously, how did Google map the Earth and build a driverless car before preventing all this prejudiced nonsense from topping its Search results? If Google is anything, it’s a product that tells us what we want, when we want to know it, with incredible authority. Social media sites are relatively new to this problem. But Google had a six-year head start on Facebook.
Few in the United States might associate WhatsApp with fake news. It’s a secure messaging app, after all. It’s meant to allow us to have private, person-to-person conversations. How could WhatsApp possibly have a fake news problem?
Well, it does. In India, it could be the worst of all the social platforms included here. Some 160 million people use the service across the country, and viral hoaxes spread fast, in part because the messenger isn’t just person-to-person; thousands of people can exist within the same group.
Buzzfeed published an expansive roundup of WhatsApp’s worst hoaxes in India. In one case, a rumor spread across the app that new Indian currency would be fitted with GPS sensors, allowing anyone carrying it to be tracked. Within a day, a major Hindi news channel had picked up the story. But the piece argues that such pranks are rapidly giving way to more dangerous rightist propaganda–much as they have in the U.S. Indeed, one expert on the matter told us that it’s largely up to India’s journalists, embedded in mass chat groups, to air hoaxes early and debunk them.
When I contacted WhatsApp’s U.S. HQ about the matter, a spokesperson responded, “WhatsApp is a personal messaging product that’s end-to-end encrypted so we don’t have access to the content of messages. And, we don’t have initiatives related to fake news.” Later, they added, “Right now, we’re prioritizing parts of the platform that we can control (like spam detection innovation) because we know it makes the user experience better.” Indeed, fake news seems to be a more India-centric problem, but spam is infiltrating the entire world of WhatsApp.
Fake news is an incredible challenge to solve within such a private platform, but WhatsApp won’t even take the first step–acknowledging the problem at all.
Facebook quickly became the poster child for fake news this fall. The company supposedly had an earnest gut check behind closed doors after the election, but Mark Zuckerberg has also made public statements like, “Identifying the ‘truth’ is complicated.” (To be fair, when you fire your human news team three months earlier, it probably is.)
Facebook should get some credit, as the company has taken steps to curb fake news. For one, it has pulled its own embeddable ad service from fake news apps and sites, in an effort to reduce the profit stream of bogus viral sites.
Facebook also announced a button in January that would allow users to flag posts that looked fake. The post would be sent to a third-party fact-checker. And if found untrue, it would be labeled “disputed,” not “false.” Furthermore, the story would still be shareable.
Facebook appears to have only just launched the service in March, though it’s unclear how widely. (Anecdotally, no one at Co.Design has spotted a disputed post to date.)
The problem is that Facebook appears too afraid of the partisan backlash it would provoke by limiting the spread of what is often right-wing propaganda. (More on the sorts of sites conservative Facebook users share here–you’ll see a dearth of legitimate publications.) And it won’t take the real step that matters: to stop making fake news so darn shareable. As Mark Zuckerberg puts it, “…I think it’s important to try to understand the perspective of people on the other side.” Even, apparently, when that perspective is grounded in fiction.
Instead of simply ending fake news, Facebook is funding initiatives around fake news. With its Facebook Journalism Project, for instance, the company speaks of increasing news literacy for all readers. A nice idea. Yet frankly, it’s a page straight out of Big Tobacco, which for years funded ineffective anti-smoking campaigns–and increased warning labels–all while continuing to sell a damaging, addictive substance.
While Facebook has taken most of the blame for the virality of fake news, perhaps it’s a little unfair that the company carry that burden alone. According to one analysis, the sharing of links works almost identically on Facebook and Twitter. So even though you have more racist uncles on Facebook than on Twitter, and even though Facebook’s timeline curates content far more algorithmically, the patterns of fake news sharing on the two sites are more or less the same.
Twitter has historically been agnostic, not just about misinformation, but about curbing any kind of speech–even when it’s filled with hate, or involves coordinated pile-ons against users. That’s slowly changing. In the last year, Twitter has become more confident with banning abusive users and deleting the accounts of right-wing extremists (some of whom, incidentally, set up shop on a Twitter clone of their own).
The company admits that it hasn’t taken any specific actions to curb fake news since the election. A Twitter spokesperson argues that because Twitter doesn’t algorithmically serve content from anyone you don’t already follow (as Facebook does), you’re less likely to get fictitious content in your feed. Furthermore, Twitter’s view is that verified accounts (which Facebook has, too) help distinguish fact from fiction. Finally, while some studies show that millions of bot accounts tweeted during the election–likely in an attempt to sway the conversation–Twitter likens these unfollowed bots to trees falling in empty forests, and its algorithms, we’re told, have already been coded to prevent bots from manipulating the platform’s trending hashtags.
It’s all very tough to prove or disprove. Meanwhile, the largest fake news problem at Twitter may simply be that it is the single best amplifier for President Trump’s own tweeted fictions, which The Washington Post addresses weekly. It’s a tricky situation. Should Twitter police the speech of the leader of the free world? Or should it let users hear the mistruths? Twitter has no policy mandating that people be honest on the service, but it did reiterate that all users–even our president–are subject to banning if they break Twitter’s social policies, which include protections against harassment and violent threats.
Facebook, on the other hand, has more or less clarified that it would never ban the president.
Reddit has long billed itself as the “front page of the internet.” But unlike your typical newspaper, Reddit readily admits that errors–even intentional misinformation–are a natural consequence of the platform, which is just a gigantic, sprawling message board filled with hyper-specific subreddits that moderate themselves.
“There’s definitely some–let’s call them the conspiratorially minded subreddits–which have the same relative amount of journalism as the Weekly World News did when I was growing up,” says Chris Slowe, Reddit founding engineer and head of the Anti-Evil Team, which is attempting to rid Reddit of its most notorious bad behaviors. “Those people need an outlet for those sorts of things. I’d be surprised if that kind of community didn’t exist on Reddit. The fact that there are some people with powerful community reality distortion fields on the platform…is a testament to the platform. But the fact that they keep to themselves is also a testament to the platform.”
Reddit’s greatest defense against fake news is the community itself, Slowe contends, arguing that the site is designed around discussions that counterbalance bad facts. Meanwhile, Reddit doesn’t want to dictate truths from an ivory tower. With many mistruths on the platform, “who knows if it’s satire or not?” Slowe says. “I don’t want to be the one to say, ‘You’re not talking like Colbert, you’re talking like crazies.’”
The Donald is an ultra-conservative subreddit that constantly walks the line between satire, misinformation, and bigotry. A recent headline read, “Hey Starbucks, you didn’t have to fire your CEO. We’re still not buying your shitty coffee.” As any fact checker would note, Starbucks didn’t fire Howard Schultz. The founder and CEO, responsible for one of the biggest comebacks in corporate history, reportedly stepped down from his role of his own accord. But is this Reddit headline malicious fake news, or is it poetic license? This is the sort of gray area that prevents Reddit from policing factuality altogether.
In theory, Reddit’s policy isn’t much better than Facebook’s or Twitter’s. Reddit allows most groups to share whatever fake information they want. However, by design, the worst fake news is contained within subreddit silos rather than spilling into other feeds. And the company says it has countermeasures in place to prevent special interest groups from gaming the system by coordinating to boost a story that lacks organic reader upvotes.
Yet even if mistruths live mostly in reality-distorted subreddits, those communities can still fester and grow. Redditors have organized at places like KotakuInAction to pile on and dox feminists, for instance.
“What I’d say is, places that tend to peddle in fake news also tend to peddle in things that are equally antisocial,” says Slowe. “These also tend to be echo chamber-y places that crush dissent and try to out their opponents for the betterment of humanity for some reason. That’s where our enforcement comes into play.”
It’s the type of policy that led Reddit to ban two prominent alt-right subreddits–not for misinformation, but for taking specific banned actions outlined in Reddit’s Content Policy, like outing personal information and harassing ideological opponents. In this regard, Reddit has made it clear: The service has plenty of tolerance for fake news, but it has started to crack down on communities when distortion leads to action.
The problem with this approach is that it’s perpetually reactive rather than proactive. And it still raises the question: Is it Reddit’s responsibility to chaperone the very gatherings it enables, to ensure they don’t boil over as they stew in their own mistruths?
Remember how Mark Zuckerberg said, “Identifying the ‘truth’ is complicated”? Tim Cook doesn’t pretend the truth is so vague. He said in February that fake news is “killing people’s minds” and called for an industry-wide initiative to stop it, adding that “it can be done quickly if there is a will.” Apple’s heart is in the right place, but what is it actually doing?
Big as Apple may be, its purview leaves it largely removed from the problem of fake news. But though Apple is mostly a hardware company, it is also a media distribution company. Apple told Co.Design that it sees the way it distributes news as similar to the way it distributes music and apps in iTunes and the App Store, and it has leveraged its experience across those marketplaces as a model.
Without a social network or search engine to its name, Apple’s only direct foray into news is Apple News–which isn’t a social media platform, but a curated news service. Still, it’s a big news service, as it’s pre-installed on every iPhone, with the power to drive tens of millions of readers and hundreds of millions of page views to, say, CNN content each month.
Apple only runs specific media partners on Apple News (in full disclosure, Fast Company is among them). And in this sense, the company has basically offloaded fact checking to proven news publications. Meanwhile, the actual stories in your Apple News feed are informed by what you subscribe to when you set it up, along with your actual usage patterns over time. If, for instance, you subscribe to several sports publications but never read about college basketball, your feed will gradually tune out NCAA content. Apple News promotes four stories a day to all its users. Two are chosen by Apple editors. Two are chosen by popularity algorithms.
The Trump-friendly news site Breitbart–whose app actually disappeared from the App Store in a momentary blip that was never explained–is a staple on Facebook, but it isn’t included in Apple News’s exclusive partner list known as its “format publishers.” However, you can still read Breitbart content via RSS subscriptions through the app, and through Channels, which can aggregate many publications under a certain topic.
In the Apple News setup, Channels are the likeliest avenue for fake news to sneak its way in. But much as it handles App Store approvals, Apple must approve submitted Channels before they’re listed on the service. (And it has rejected some–largely due to spam attempts.) As for fake news itself, Apple’s Content Guidelines ban “false information and features” in the App Store, but there is no equivalent written policy for Apple News. Still, while fake news isn’t technically banned in Apple News documentation, it seems to be in practice.
Finally, Apple has one last line of defense: community feedback. Users can tap the “report a concern” link in Apple News, as in other Apple apps. According to Apple, most concerns reported in News involve the partisan slant of stories on the platform, not their veracity.
Who would have believed that the service popularized for tweenage selfies would take the crown for sidestepping the proliferation of fake news early and often?
What most people don’t realize about Snapchat is that it is actually fact checked. Snap employs a team of editors and journalists, broken down into verticals like Sports and Entertainment. And any time you see one of its curated stories, full of crowd-shot clips assembled into one big video, know that each of those clips is scanned for factual statements first. Whether that’s a journalist posting on the ground at the Ft. Lauderdale airport shooting or a student shouting “the Tar Heels are undefeated!” in head-to-toe body paint, Snap checks that any claims they make are verifiable before including them.
Like Apple, Snap doesn’t fact check its trusted editorial partners, like CNN or Buzzfeed, which post stories and videos to the service. However, Snap admits that its platform is a walled garden, purposefully filled with publications it deems credible–and which have agreed, since the launch of the Discover platform, to publish only truthful information. This January, however, Snapchat released an updated set of media guidelines with more specific rules, articulating its stance on violent and sexual content–aimed more at keeping content clean than keeping it true.
Finally, Snapchat limits the potential of fake news by design. Even if I’m an influencer with thousands of followers–a situation that might breed problems similar to WhatsApp–and I decide to make a fictitious story that looks like it was published by Mashable, I can’t. That’s because users don’t have access to the same graphics and text overlays that major media players do on Snapchat. Furthermore, if someone were to share fake news en masse via personal updates–similar to what’s happening with personal sharing within WhatsApp–Snap confirmed to Co.Design that it would take appropriate action, as part of its responsibility to users.
Snapchat is a social media network that borrows the best part of Apple’s walled-garden approach–including only premium publishers–while still managing to screen and promote community content. Now, if only our grandma were on Snapchat, we’d really have something.