
Meet 4 design heroes who are defending democracy online

They’re fighting for fairness and accuracy in algorithms and on digital platforms.

Design is a political act–and it can be patriotic, too. In recent years, designers and researchers have played a leading role in defending the integrity of digital platforms so they’re not driven by racist, biased algorithms or overwhelmed by misinformation. A great way to show love of country this Fourth of July is to get to know the people who are fighting to uphold the United States’ core values. Here are four defenders of digital democracy–and the crucial work they’re doing in the name of equality, fairness, citizenship, and facts.

[Source Image: courtesy Joy Buolamwini]

Joy Buolamwini: Exposing facial recognition systems for their hidden bias

Algorithms are biased when their underlying data is biased. Joy Buolamwini, the founder of the Algorithmic Justice League and a researcher at the MIT Media Lab, is showing just how racially biased facial recognition algorithms are, and how dire the consequences could be for men and women with dark skin when it comes to policing and surveillance. As police pair security cameras with flawed facial recognition software that has trouble distinguishing between people of color, there's a greater chance that innocent people will be caught up in a broken criminal justice system.

Joy Buolamwini [Photo: TJ Rak]

Her project, called Gender Shades, establishes a new way of benchmarking existing facial recognition systems from companies like IBM and Microsoft by testing for accuracy across four groups: light-skinned men, light-skinned women, dark-skinned men, and dark-skinned women. Buolamwini found that in the case of IBM's algorithm, the accuracy gap between light-skinned men and dark-skinned women is 34.4%. (IBM did its own analysis and found a far smaller gap in the newest version of its software. The company says it's also working to address problems of bias more generally.) By bringing such problems to companies' attention, Buolamwini forces them to develop better, more equitable technology while exposing the threats biased algorithms pose to people's civil liberties.
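To make the benchmark concrete, here is a minimal Python sketch of the kind of subgroup comparison Gender Shades performs. It assumes you already have a system's gender predictions alongside ground-truth labels and skin-type annotations; the field names and structure are illustrative, not Buolamwini's actual code or dataset.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Accuracy of a gender classifier, broken out by (skin type, gender).

    Each record is a dict with illustrative keys:
      'skin'      -- 'light' or 'dark' (e.g., a binned Fitzpatrick skin type)
      'gender'    -- ground-truth label
      'predicted' -- the face-analysis system's output
    """
    totals, correct = defaultdict(int), defaultdict(int)
    for r in records:
        group = (r["skin"], r["gender"])
        totals[group] += 1
        correct[group] += int(r["predicted"] == r["gender"])
    return {group: correct[group] / totals[group] for group in totals}

def accuracy_gap(per_group):
    """Spread between the best- and worst-served subgroups --
    the headline number in a finding like the 34.4% IBM gap."""
    return max(per_group.values()) - min(per_group.values())
```

Run over a balanced test set, a breakdown like this makes the disparity impossible to miss: a system with strong overall accuracy can still misclassify one subgroup a third of the time.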

[Image: Katie Falkenberg/courtesy ORCAA]

Cathy O’Neil: Testing algorithms for their truth

Like Buolamwini, the writer, mathematician, and activist Cathy O'Neil is working to root out the data-driven biases that can make an algorithm unfair and inaccurate for certain populations. O'Neil is best known for her book Weapons of Math Destruction, which documents the real harm that poorly designed algorithms can cause. She recently launched a consultancy, ORCAA, that certifies companies' algorithms as accurate, unbiased, and fair; so far it has certified several companies, including the startup Rentlogic. But O'Neil is just getting started, and she hopes her services will inspire both companies and the public to take a more active interest in understanding how algorithms impact people's lives.

 

Cathy O’Neil

However, O'Neil maintains that her consultancy is no replacement for regulation, and she plans to work on convincing lawmakers that algorithmic bias is a real problem that requires legislation. (After all, no company that's using algorithms for ill would ever come to her voluntarily.) At a time when more companies are integrating AI into products we use every day, O'Neil's work is a much-needed approach to tackling a real threat to people's lives and livelihoods.
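As an illustration of what an algorithmic audit can involve, here is a minimal sketch of one widely used check, the disparate impact ratio (the "four-fifths rule" borrowed from U.S. employment law). It is a generic example of a fairness test, not ORCAA's actual certification methodology.

```python
def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates between two groups.

    outcomes: 1 for a favorable decision (e.g., loan approved), else 0.
    groups:   the demographic group of each decision subject.
    A ratio below 0.8 is a common red flag for adverse impact.
    """
    def favorable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return favorable_rate(protected) / favorable_rate(reference)

# Hypothetical decisions from a screening algorithm:
outcomes = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(f"{disparate_impact(outcomes, groups, 'b', 'a'):.2f}")  # 0.50 -> flag
```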


J. Nathan Matias: Helping online communities “nudge” algorithms in the right direction

Fake news threatens democracy (see: the 2016 election). Millions of voters are swayed by online misinformation campaigns, some to the point of dismissing real headlines about the president as a fake news witch hunt. At the same time, the digital platforms that have become the vehicles for political discourse struggle to label content "fake" without prompting outcry from conservatives about free speech. Researcher J. Nathan Matias at MIT Media Lab's Center for Civic Media created a piece of software that gives online communities the power to nudge algorithms in the right direction.

His software, called CivilServant, attaches a message to posts on Reddit asking community members to add links to other content that either verifies or disputes the claims of the original post. By asking the community to weigh in, Matias found that CivilServant encouraged Reddit's algorithms to push disputed posts down in its rankings by a factor of two. He calls this an "AI nudge." The experiment showed that prompting people to think a little more critically about what they were reading reduced the prevalence of fake news.
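For a sense of the mechanics, here is a hypothetical Python sketch, using the PRAW Reddit library, of a bot in the CivilServant mold: it replies to each new post with a pinned request for evidence links. The subreddit, credentials, and wording are placeholders, and this is not the actual CivilServant code, which among other things randomizes which posts get the message so the effect can be measured.

```python
import praw  # community-maintained Reddit API wrapper

# All credentials and names below are placeholders.
reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    username="fact-check-bot",
    password="PASSWORD",
    user_agent="ai-nudge-sketch/0.1",
)

PROMPT = (
    "Community fact-check: if you can verify or dispute claims in this "
    "post, please reply with links to your sources."
)

# Reply to each new submission and pin the request to the top of the thread.
for submission in reddit.subreddit("SUBREDDIT").stream.submissions(skip_existing=True):
    comment = submission.reply(PROMPT)
    comment.mod.distinguish(sticky=True)  # requires moderator permissions
```

Note that the nudge here is social rather than editorial: the bot never labels anything false. It changes what the community posts, and the platform's ranking algorithm responds to that.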

[Image: courtesy Bad News]

Sander van der Linden: Teaching people about misinformation campaigns by asking them to recreate their tactics

Sander van der Linden

If CivilServant is a practical approach to nudging algorithms to deprioritize fake news, something that Google and Facebook still struggle to do, the game Bad News takes a more educational tack. Created by the researcher Sander van der Linden, who heads Cambridge University's Social Decision-Making Lab, and several collaborators, Bad News helps people understand how fake news works. But this is no traditional teaching tool: To play the game, people have to adopt the tactics of misinformation campaigns themselves, using them to trick algorithms (and people) into believing bogus stories. It's part of van der Linden's research on what he calls a "truth vaccine": the idea that when people understand the tricks that fuel the spread of misinformation online, they're better able to identify it and less likely to be swayed by it. Bad News packages that idea as a game, but the underlying goal is the same: The more people understand how fake news works, the less likely they are to fall for it.


About the author

Katharine Schwab is an associate editor based in New York who covers technology, design, and culture. Email her at kschwab@fastcompany.com and sign up for her newsletter here: https://tinyletter.com/schwabability
