
Facebook Turns to the Crowd to Monitor the Crowdies


Facebook has begun testing a system that's in vogue at the moment: using its own users as a data-crunching resource. Nothing terribly new there, except that Facebook is using its crowd to moderate the rest of the crowd and stamp out the nasty bits, which takes the whole business to a new, ethically intriguing level.

It's called the "Facebook Community Council," and according to the group's motto it exists to "harness the power and intelligence of Facebook users to support us in keeping Facebook a trusted and vibrant community." This all sounds very lofty, very un-dictatorial, and much more hippyish and power-to-the-people than Facebook sometimes seems, given moves like its blanket decisions on user privacy.

The whole point of the FCC (hah! nice acronym) is to check items published on Facebook for offensiveness along the lines of personal attacks, violence, drug abuse, and so on. To that end, FCC members who find something objectionable are only allowed to click one of the following alert flags inside a special members-only app: Spam, Acceptable, Not English, Nudity, Drugs, Attacking, Violence. The "Attacking" flag apparently applies only to public figures, which is where we begin to see the complexity and inherent risks of the FCC itself.
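To make the reported setup concrete, here's a minimal sketch of how a single FCC review might be modeled. The class and field names are hypothetical; only the seven flag labels come from the report itself.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical model of the FCC app's flag options as reported above;
# Facebook has not published the app's actual internals.
class Flag(Enum):
    SPAM = "Spam"
    ACCEPTABLE = "Acceptable"
    NOT_ENGLISH = "Not English"
    NUDITY = "Nudity"
    DRUGS = "Drugs"
    ATTACKING = "Attacking"  # reportedly limited to public figures
    VIOLENCE = "Violence"

@dataclass
class Review:
    item_id: str    # the post or photo under review
    member_id: str  # the FCC member submitting the flag
    flag: Flag      # exactly one flag per review
```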

That's because Facebook would seem to be deeming "attacks" on non-public figures as something the FCC can't moderate, for blindingly obvious and sad reasons: Facebook is covering its legal arse, and doesn't want to get embroiled in user-to-user abuse cases for fear of the legal and financial ramifications. And when you think about it, that caution is going to apply to pretty much everything the FCC recommends: Facebook will have to tread very carefully when it deals with items FCC members have deemed offensive. You only have to look at the stupid, inane fiasco stirred up when Facebook's puritanical execs decided that pictures of breastfeeding babies were offensive and began to take them down. And what if U.S.-based FCC members take offense at something that European Facebookers would deem wholly acceptable?

Now, of course, the opinions of FCC members are going to be smoothed and averaged out before any action is taken, but it's still going to be tricky moral ground for Facebook. How will Council members deal with Holocaust denial on Facebook pages, for example? One could frame the act of denial itself as a "violent" act, but Facebook has so far taken an extremely hands-off approach to this issue, and has stirred up a hornet's nest of controversy on the matter. Will the FCC be empowered to challenge this sort of content, or will it merely be a paper tiger?
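Facebook hasn't said how that smoothing works, but the simplest plausible version is a threshold vote: act on an item only when enough independent reviewers agree on the same flag. A rough sketch, reusing the types above, with the tallying logic and the 60% threshold entirely assumed:

```python
from collections import Counter

# Hypothetical aggregation: escalate an item only when a clear majority
# of reviewers picked the same non-"Acceptable" flag. The 60% threshold
# is an assumption, not anything Facebook has disclosed.
def aggregate(reviews: list[Review], threshold: float = 0.6) -> Flag | None:
    if not reviews:
        return None
    tally = Counter(review.flag for review in reviews)
    flag, count = tally.most_common(1)[0]
    if flag is Flag.ACCEPTABLE:
        return None  # consensus says the item is fine
    if count / len(reviews) >= threshold:
        return flag  # enough agreement to pass the item to Facebook staff
    return None      # opinions too split; take no action
```

Note that a scheme like this only averages opinions; it doesn't resolve whose standard of "offensive" wins, which is exactly the cross-cultural problem above.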

That's the obvious worry here. While the concept of the FCC sounds admirable, and would seem to be a sign that Facebook is addressing user concerns over inappropriate content in a very democratic way, it may not end up being as effective as it sounds. In fact, Facebook may have set up the FCC for a totally different reason: to data-mine the opinions of the moderators themselves, as a way of tapping the moral sensitivities of its user base. Is that good news or bad? Given Facebook's recent draconian moves to redefine what user data it can share with the Internet, I'd tend to think it's more of a bad thing.

[Facebook Community Council via InsideFacebook]