The above allegation comes from Sandy Parakilas, a former operations manager on the platform team at Facebook, who worked for the company in 2011 and 2012. In an op-ed for The New York Times, Parakilas argues that Facebook can’t be trusted to regulate itself in an age of fake news and foreign governments using its platform to influence elections. He comes to that conclusion because his experience at Facebook showed the company wasn’t interested in any kind of self-regulation.
Parakilas relates how he discovered a social games developer using Facebook data to automatically generate profiles of children without their consent, and another developer asking permission to gain access to a user’s Facebook messages and posted photos. He says that when he reported these incidents to his superiors at Facebook, they didn’t care at all:
At a company that was deeply concerned about protecting its users, this situation would have been met with a robust effort to cut off developers who were making questionable use of data. But when I was at Facebook, the typical reaction I recall looked like this: try to put any negative press coverage to bed as quickly as possible, with no sincere efforts to put safeguards in place or to identify and stop abusive developers. When I proposed a deeper audit of developers’ use of Facebook’s data, one executive asked me, “Do you really want to see what you’ll find?”
In the end, Parakilas says, the message from his time at Facebook was clear: “The company just wanted negative stories to stop. It didn’t really care how the data was used.”