02.21.17

Algorithms Control Our Lives: Are They Benevolent Rulers Or Evil Dictators?

As these collections of code take greater and greater precedence in our decision-making, we need greater transparency and some sort of Hippocratic oath for the people who build them.

From phone apps and GPS maps to music recommendations and artificial intelligence, our lives are increasingly molded by algorithms. Sets of instructions for completing tasks or solving problems, algorithms are the governing principles of our age–the underlying equations that help us make decisions, and, in some cases, make decisions for us.


Are these life recipes a force for good or ill? According to a new Pew Research Center report, they’re a lot of both. Pew surveyed 1,302 experts of various stripes–futurists, academics, coders, IT guys and girls–and opinions were split. Asked if the “net overall effect” for individuals and society would be positive or negative in the next decade, 38% said the positives would outpace the negatives, while 37% saw it the other way. Another 25% sat on the fence, claiming a 50-50 positive-negative balance (which seems somewhat implausible, but never mind).

The experts universally expect algorithms to proliferate and become more important, and it’s not hard to see how our lives might be improved as they do. Banks will make better decisions about who to lend money to, approving people who today are refused or discriminated against because of incomplete data, experts say. Governments will become more efficient and more even-handed because decisions will be dictated by objective data and sensors, not bureaucrats. Algorithms will ease traffic congestion by telling people the best routes and modes of travel. They’ll lead to “improved and more proactive police work, targeting areas where crime can be prevented,” one respondent says.

[Photo: Flickr user Sparkfun]

But the survey highlights all kinds of deleterious consequences as well. AI agents will enable “hyper-stalking,” highly personalized marketing, and “new ways to misrepresent reality and perpetuate falsehoods” (great for election campaigning). Algorithms will be written to maximize efficiency and profitability, with humans increasingly treated as “inputs” and algorithms trained to create their own algorithms, perpetuating cycles of autonomous decision-making. And there will be a growing divide between people with algorithmic knowledge and those who fail to understand how the omelettes are made–and which eggs were broken to do it.

“Algorithms will capitalize on convenience and profit, thereby discriminating [against] certain populations, but also eroding the experience of everyone else,” says Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, in one typical response. “The goal of algorithms is to fit some of our preferences, but not necessarily all of them: They essentially present a caricature of our tastes and preferences.”

Algorithms promise dispassionate data crunching and more objective decision-making. But an over-reliance on them–for instance, in policing or hiring decisions–could entrench biases and reduce accountability. If the data that goes into the algorithm is biased, the results will be biased as well.

“All of the training data contains biases,” says Dudley Irish, a software engineer. “Much of it [is] either racial- or class-related, with a fair sprinkling of simply punishing people for not using a standard dialect of English. To paraphrase Immanuel Kant, out of the crooked timber of these data sets no straight thing was ever made.”
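The point Irish is making can be shown with a toy sketch (not from the report; the data, names, and decision rule here are all hypothetical): a “model” that simply learns approval rates from historical loan decisions. Because the history is biased–group B was denied regardless of income–the learned rule faithfully reproduces that bias.

```python
# Hypothetical historical loan data: (group, income, approved).
# Group B was denied across the board in the past.
history = [
    ("A", 40, True), ("A", 55, True), ("A", 30, False),
    ("B", 40, False), ("B", 55, False), ("B", 30, False),
]

def train(records):
    """Learn per-group approval rates from past decisions."""
    counts = {}
    for group, _, approved in records:
        yes, total = counts.get(group, (0, 0))
        counts[group] = (yes + int(approved), total + 1)
    return {g: yes / total for g, (yes, total) in counts.items()}

def predict(rates, group):
    """Approve only if the group's historical approval rate exceeds 50%."""
    return rates.get(group, 0) > 0.5

rates = train(history)
print(predict(rates, "A"))  # True  -- bias in...
print(predict(rates, "B"))  # False -- ...bias out
```

No one wrote “deny group B” into the code; the rule inherited it from the crooked timber of the data set, which is exactly why dispassionate-looking data crunching can still entrench old biases.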

[Photo: Thom via Unsplash]

Given the determinative importance of algorithms, there are calls for greater transparency. But, given the commercial importance, their owners are unlikely to lift the lid too much. “One of the greatest challenges of the next era will be balancing protection of intellectual property in algorithms with protecting the subjects of those algorithms from unfair discrimination and social engineering,” an anonymous respondent says.

The survey, co-conducted by Elon University’s Imagining the Internet Center, does offer some tentative ideas for what society can do to maximize the good and minimize the bad.

Susan Etlinger, industry analyst at the Altimeter Group, a research firm, says there’s a need for a clear audit trail in making algorithms, much like there’s a provenance trail these days for many foods. Other ideas include “an FDA for algorithms,” a Hippocratic oath for coders, and “industry reform” (tech giants recently conceived some vague collective principles for AI).

Above all, we’ll all need greater literacy about the place of algorithms in the modern world, and more ability to affect their ultimate outcomes. “The solution is design. The process should not be a black box into which we feed data and out comes an answer, but a transparent process designed not just to produce a result, but to explain how it came up with that result,” says Judith Donath, a researcher at the Harvard Berkman Klein Center for Internet & Society. “The systems should be interactive, so that people can examine how changing data, assumptions, rules would change outcomes. The algorithm should not be the new authority; the goal should be to help people question authority.”
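A minimal sketch of Donath’s idea (all factor names and weights here are invented for illustration): instead of returning a bare answer, the scorer also returns a breakdown of how each input contributed, so a person can vary the inputs and see why the outcome changes.

```python
# Hypothetical scoring weights -- in a real system these would be learned.
WEIGHTS = {"income": 0.5, "credit_history": 0.3, "debt": -0.4}

def score(applicant):
    """Return (approved, breakdown): the decision plus each factor's contribution."""
    breakdown = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    return sum(breakdown.values()) > 0, breakdown

# The answer comes with its reasons, largest influence first.
approved, why = score({"income": 2.0, "credit_history": 1.0, "debt": 1.5})
print(approved)  # True
for factor, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{factor}: {contribution:+.2f}")
```

The design choice is the point: because the breakdown is exposed, an applicant can ask “what if my debt were lower?” and check the answer themselves–the algorithm explains itself rather than acting as a black-box authority.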


About the author

Ben Schiller is a New York staff writer for Fast Company. Previously, he edited a European management magazine and was a reporter in San Francisco, Prague, and Brussels.