
This Is Not a Joke

Facebook tested flagging fake articles as “satire”? You’ve got to be kidding me. This is why algorithms can’t solve everything.

[Illustration: Stanley Chow]

While you were busy liking family photos and taking BuzzFeed quizzes, you may not have realized that Facebook, the social network designed by antisocial people, faced one of the most significant threats to a just and verdant world we’ve ever seen: people who think satirical news stories are real.


Thankfully, the disruptive geniuses behind the platform that redefined the word friend decided to test a solution to this grave problem. Could Facebook’s infinite algorithmic wisdom simply hide satirical items from people with proven deficiencies of critical thinking? Nah. It simply added the term “[Satire]” in front of satirical headlines.

That way people would know it’s satire!

I worked at The Onion for nearly five years, so I became familiar with the phenomenon of confused readers firsthand, over and over again. A favorite case involved a member of the U.S. House of Representatives who thought the article “Planned Parenthood Opens $8 Billion Abortionplex” was real information worth getting riled up about. He was so incensed that he posted the article to his Facebook wall, complaining of “abortion by the wholesale.”

Was it Facebook’s fault that he did not know the story was satirical? Was it the fault of a polarized political environment in which we seek only the information that validates our preexisting ideological commitments? Or was it the fault of a human being who shared before reading and acted without thinking? In other words, was he just stupid? Heaven forbid we expect people to notice the source of the content they’re publishing to their networks. Shouldn’t theonion.com be a better idiot filter than #satire?

The satire flag is a small example of a larger, more dangerous trend: reducing every perceived problem to an engineering challenge. There’s seemingly no societal ill we can’t ameliorate by throwing a high-powered algorithm at it. Poor delivery of public services? There’s a “deep learning” startup on the case. A lack of perfect information in the marketplace of ideas? Sprinkle some big data on there. Soviet-style lines for taxis, groceries, iPhones? There’s an app for that.


Let’s amble down this slippery slope together, shall we? How should we deal with public figures who utter inconvenient truths on TV while not realizing their microphones are live? Let’s just add a small electrical current to their mic packs so they are constantly reminded, through mild electrification, that the mic is “hot.” What should we do about R. Kelly songs that are ostensibly about vehicles but whose subtext refers to sexual acts? Let’s insert a voice-over that whispers “sex song” in the first 10 seconds of the track. One thing that’s always bothered me is the inability to know with 100% certainty what another person is thinking. I mean, I could ask the person, but what if he or she is lying? The solution is obviously to employ a team of neuroscientists and have them build “True Thought” interfaces that allow us to see what’s really going on inside someone else’s head.

Algorithms improve my life every day, but it’s distressing that we’re now willing to abdicate our common sense to them. The interpretation of art doesn’t need to be optimized or made more efficient. Jokes don’t need to be flagged. People and relationships don’t need to be so discretely categorized all the time. For the sake of our collective imaginations, we need to think for ourselves.

Some things in life exist in a gray area. And that’s a feature, not a bug.