As artificial intelligence becomes more prevalent in the digital age, so, too, do the ethical quandaries that come with it. After all, algorithms are nearly inescapable in daily life. They’re the architects behind what you see when you check your Facebook feed, what you hear when you plug in to Spotify, and what you pay when you place an order from Amazon.
Then there are applications whose decisions carry heavier consequences, such as which news stories propagate furthest or which resumes rise to the top of the pile for a job opportunity. As algorithms shape the trajectory of our lives in increasingly profound ways, some researchers think companies have a new moral duty to illuminate how, exactly, they work.
That’s what a pair of scholars at Carnegie Mellon are saying. “In most cases, companies do not offer any explanation about how they gain access to users’ profiles, from where they collect the data, and with whom they trade their data,” says Tae Wan Kim, an ethics professor at the Tepper School of Business and coauthor of an analysis published in Business Ethics Quarterly. “It’s not just fairness that’s at stake; it’s also trust.”
According to the analysis, the heart of the issue is a shifting definition of what customers sign up for when they agree to a company’s terms and conditions. In the digital age, information flows continuously and is perpetually being used to effect changes, so it’s impossible to treat checking “yes” once as a complete transaction of rights, especially since the future uses of a customer’s data can diverge wildly in today’s fast-transforming world of automation.
“Data subjects allow (at the decision point) the use of their information for countless purposes,” the authors write. “Before the decision point, a company cannot fully predict how the algorithm will work with the newly incoming data—largely because complicated algorithms are adaptable . . . but the algorithm directly affects and influences the subjects’ behavior. So, we claim that data subjects are entitled to an update about how the company has used their information.”
They’re not the only ones probing algorithmic accountability: The question of how to police artificial intelligence has been spotlighted in recent months. Last year, several New York politicians considered outlawing AI software used in hiring, and this year, the European Commission moved to ban mass surveillance software used to track social behavior.
“Will requiring an algorithm to be interpretable or explainable hinder businesses’ performance or lead to better results?” asks Bryan R. Routledge, a Tepper finance professor who cowrote the analysis. “That is something we’ll see play out in the near future, much like the transparency conflict of Apple and Facebook. But more importantly, the right to explanation is an ethical obligation apart from bottom-line impact.”