The folks at ProPublica unveiled a huge journalistic get today: internal Facebook documents that offer a peek into the secret guidelines it uses to moderate hate speech and violent content. Reviewing posts and deciding what to remove is a Sisyphean task for a social network with two billion users, so it’s not all that surprising that the company tries to rely heavily on mathematical formulas. But the execution is jarring to say the least.
Included in the story is a slideshow quiz (recreated by ProPublica from rules that may have since “changed slightly”) showing the guidelines Facebook has used to distinguish protected categories from non-protected ones. Protected categories include things like race and gender identity, while non-protected categories include things like age and social class. Then there are a lot of category subsets—like “children” or “drivers”—and combining a protected category with a non-protected one produces a group that isn’t protected. So “Irish teens” is not protected but “Irish women” is, according to the slideshow. Lost yet? Check out the article and click through the slideshow.
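If the logic described above sounds mechanical, that's because it essentially is. Here's a rough sketch of how the rule seems to work, based purely on the examples in the slideshow: a group stays protected only if every attribute describing it comes from a protected category. The category lists and function names here are illustrative assumptions, not Facebook's actual code or its full category lists.

```python
# Hypothetical sketch of the combination rule ProPublica describes.
# Category lists below are partial and assumed from the article's
# examples; this is NOT Facebook's actual implementation.

PROTECTED = {"race", "gender identity", "sex", "national origin"}
NON_PROTECTED = {"age", "social class", "occupation"}

def is_protected_group(attribute_categories):
    """A combined group is protected only if ALL of its attributes
    fall under protected categories; one non-protected attribute
    (like age) strips protection from the whole group."""
    return all(cat in PROTECTED for cat in attribute_categories)

# "Irish women" = national origin + sex -> both protected -> protected
print(is_protected_group({"national origin", "sex"}))  # True
# "Irish teens" = national origin + age -> age not protected -> not protected
print(is_protected_group({"national origin", "age"}))  # False
```

Under this reading, the counterintuitive outcomes in the quiz fall out of a single boolean rule: any non-protected modifier demotes the whole group.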