
We need an algorithmic bill of rights before algorithms do us wrong

As corporations rapidly roll out advanced artificial intelligence, we need to develop strong safeguards capable of meeting the risks.


By Kartik Hosanagar

Michael Kearns, a leading machine learning researcher, recently addressed a group of 60 esteemed scientists at the prestigious Santa Fe Institute. His subject was the insidious bias built into many of the algorithms used for socially sensitive activities such as criminal sentencing and loan approval.

It was a version of a talk that Kearns had given before. But he couldn’t ignore the irony of discussing the dangers inherent in new technologies in this particular place. The Santa Fe Institute is just 40 miles from the town of Los Alamos, site of the Manhattan Project, where more than 6,000 scientists and support staff worked together from 1942 to 1945 to produce the world’s first atomic bomb. The ultimate impact of the project was enormous: some 200,000 lives lost at Hiroshima and Nagasaki, and the unleashing of a new technological threat that has loomed over humankind for more than seven decades since.

The response of the physicists involved in the Manhattan Project to the social and ethical challenges their work presented offers a valuable precedent. In the years that followed the Hiroshima and Nagasaki bombings, many of them publicly assumed responsibility for their work by taking part in efforts to restrict the use of atomic weapons.

Today, in an era when corporations are rapidly rolling out advanced artificial intelligence and machine learning, it makes sense for us to be thinking about an algorithmic bill of rights that will protect society.

The question is whether there are meaningful earlier efforts on which we can build. Fortunately, many organizations and individuals, from government to industry to think tanks like AI Now at New York University, are debating the nature of the challenges we face in the age of powerful algorithms, along with potential solutions.

Additionally, governments from the European Union to New York City are putting laws in place that can enforce some of these solutions. We’re in a position to use some of their most useful ideas as we consider how to develop and deploy new algorithmic tools.

Based on what we know about AI and its potential impacts on society, I believe there should be four main pillars of an algorithmic bill of rights, including a set of responsibilities for users of decision-making algorithms.

  • First, those who use algorithms or who are impacted by decisions made by algorithms should have a right to a description of the data used to train them and details as to how that data was collected.
  • Second, those who use algorithms or who are impacted by decisions made by algorithms should have a right to an explanation of the procedures the algorithms follow, expressed in terms simple enough for the average person to understand. These first two pillars both relate to the general principle of transparency.
  • Third, those who use algorithms or who are impacted by decisions made by algorithms should have some level of control over the way those algorithms work; that is, there should always be a feedback loop between the user and the algorithm.
  • Fourth, those who use algorithms or who are impacted by decisions made by algorithms should have the responsibility to be aware of the unanticipated consequences of automated decision making.

Data transparency and a right to explanation

Let’s take a closer look at each of these four pillars. We’ll start by examining the rights of users and the responsibilities of companies with regard to the first two pillars: transparency of data and of algorithmic procedures. To better understand these two pillars, consider the four distinct phases of modern algorithms, as outlined by researchers Nicholas Diakopoulos and Michael Koliska: data, model, inference, and interface.

The first phase, data, comprises the inputs to the algorithm, and those inputs may themselves be problematic. So one important requirement built into the bill of rights should be for companies to release details regarding the data used in training the algorithm, including its source, how it was sampled, its prior use, known issues about its accuracy, and the definitions of all the variables in the dataset. Additionally, companies should be transparent about how data is modified or “cleaned” prior to analysis. (This is the domain of what is known as data provenance in the computer science literature.)
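To make this concrete, here is a minimal sketch in Python of what such a data disclosure might look like if captured in a structured, machine-readable form. The record type, its field names, and the example values are illustrative assumptions, not an existing standard:

```python
from dataclasses import dataclass, field

# A hypothetical record of the disclosures the first pillar calls for.
# The class and field names are illustrative, not an existing standard.
@dataclass
class DatasetDisclosure:
    source: str                 # where the data came from
    sampling_method: str        # how records were selected
    prior_uses: list = field(default_factory=list)
    known_accuracy_issues: list = field(default_factory=list)
    variable_definitions: dict = field(default_factory=dict)
    cleaning_steps: list = field(default_factory=list)  # modifications made before analysis

# Example: a disclosure a lender might publish alongside a loan-approval model.
loan_data_card = DatasetDisclosure(
    source="Internal loan applications, 2015-2018 (hypothetical)",
    sampling_method="All approved and rejected applications; no subsampling",
    known_accuracy_issues=["Self-reported income is unverified"],
    variable_definitions={"credit_score": "Credit bureau score at time of application"},
    cleaning_steps=["Dropped applications with missing income"],
)
print(loan_data_card.known_accuracy_issues)
```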

The second phase, the model, refers to the sequence of steps that enables the algorithm to make a decision given one or more inputs. For example, the model for a recommendation algorithm specifies how it generates a recommendation based on a user’s past purchases. A loan approval model might specify the weights assigned to different variables such as the applicant’s income, education level, credit score, etc. As we have seen, an algorithm’s sequence of steps can be completely programmed by a human being, completely self-learned by a machine learning algorithm, or some combination of the two.
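As an illustration, here is a toy version of such a loan approval model in Python. The variables, weights, and approval threshold are invented for the example; a real lender’s model would be far richer, and its weights might be learned from data rather than set by hand:

```python
# A toy loan-approval model: a weighted sum of applicant variables
# compared to a threshold. Weights and threshold are invented for
# illustration; in practice they might be hand-set, learned, or both.
WEIGHTS = {"income": 0.4, "education": 0.1, "credit_score": 0.5}
THRESHOLD = 0.6

def loan_score(applicant: dict) -> float:
    # Assumes each variable has been pre-scaled to the range [0, 1].
    return sum(WEIGHTS[var] * applicant[var] for var in WEIGHTS)

def approve(applicant: dict) -> bool:
    return loan_score(applicant) >= THRESHOLD

# 0.4*0.7 + 0.1*0.5 + 0.5*0.8 = 0.73, so this applicant is approved.
print(approve({"income": 0.7, "education": 0.5, "credit_score": 0.8}))  # True
```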

The bill of rights should require companies to release specific details of the model they’ve developed. Reasonable safeguards designed to protect their intellectual property will have to be worked out. Any solution, however, should clarify the portions of the logic that are programmed by humans versus self-learned and the relevant variables used by the model. Importantly, it should be possible to explain the rationale for a decision even when the underlying model is opaque, such as in a deep learning model. Emerging research on interpretable machine learning will be particularly important to achieve this.
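To sketch one way this can work, the example below uses permutation importance from scikit-learn, which probes an otherwise opaque model by shuffling each input variable and measuring how much predictive accuracy drops. The synthetic data and variable names are assumptions made for the example; this is one technique among many, not a complete interpretability solution:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: approvals driven mostly by income and credit score.
rng = np.random.default_rng(0)
X = rng.random((500, 3))  # columns: income, education, credit_score
y = (0.4 * X[:, 0] + 0.5 * X[:, 2] > 0.5).astype(int)

# Train an opaque model, then ask which inputs its predictions depend on
# by shuffling each column and measuring the resulting drop in accuracy.
model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["income", "education", "credit_score"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # income and credit_score should dominate
```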

Finally, the bill should allow for audits of an algorithm’s source code when things go wrong in “high-stakes” settings such as healthcare and transportation. To ensure that audits don’t become burdensome, the criteria for them should be set such that they are the exception rather than the norm.


The third phase, inference, consists of understanding how well an algorithm works in both typical and outlier cases. The bill of rights should require companies to release details on the types of assumptions and inferences the algorithm is making, and the situations in which those assumptions might fail.
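As a minimal illustration of this idea, the Python sketch below flags inputs that fall outside the range of the training data, where a model’s assumptions were never tested. The simple min/max check is an assumption made for brevity; production systems typically use more sophisticated out-of-distribution detection:

```python
import numpy as np

def fit_bounds(X_train: np.ndarray):
    # Record the range of each variable seen during training.
    return X_train.min(axis=0), X_train.max(axis=0)

def in_distribution(x: np.ndarray, bounds) -> bool:
    # Flag inputs outside the training range, where the model's
    # assumptions may no longer hold.
    lo, hi = bounds
    return bool(np.all(x >= lo) and np.all(x <= hi))

X_train = np.random.default_rng(1).random((1000, 3))
bounds = fit_bounds(X_train)
print(in_distribution(np.array([0.5, 0.5, 0.5]), bounds))  # True: a typical case
print(in_distribution(np.array([0.5, 0.5, 5.0]), bounds))  # False: an outlier
```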

The final phase, interface, is the part of the algorithm’s output that users interact with most directly. The bill of rights should require companies to integrate information about an algorithm directly into the user interface. At its simplest, this means informing users that an algorithm is, in fact, being used. Beyond that, the interface should make it easy for users to interact with the system to access information about the data, model, and inferences. Transparency with regard to these four phases constitutes the first two pillars of an algorithmic bill of rights.

There should always be a feedback loop with the user—and they must be aware of the risks

The third pillar is the concept of a feedback loop, which grants users a means of communication so that they have some degree of control over how an algorithm makes decisions for them. The nature of the loop will inevitably vary, depending on the kind of algorithm being developed and the types of real-world interactions it manages. It can be as limited and straightforward as giving a Facebook user the power to flag a news post as potentially false, or as dramatic and significant as letting a passenger intervene when they are not satisfied with the choices a driverless car appears to be making.
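Here is a deliberately simple Python sketch of such a loop, in which user flags feed back into the score an algorithm assigns to a piece of content. The function names, the flag store, and the down-weighting rule are all hypothetical:

```python
# Hypothetical flag store: post ID -> number of user flags received.
flag_counts: dict = {}

def flag_post(post_id: str) -> None:
    # A user reports this post as potentially false.
    flag_counts[post_id] = flag_counts.get(post_id, 0) + 1

def ranked_score(post_id: str, base_score: float) -> float:
    # User feedback loops back into ranking: each flag demotes the post,
    # so what users report directly shapes what the algorithm surfaces.
    return base_score / (1 + flag_counts.get(post_id, 0))

flag_post("post-42")
print(ranked_score("post-42", base_score=1.0))  # 0.5: flagged content is demoted
```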

The fourth and final pillar is perhaps the most complicated, yet also the most important. It concerns users’ responsibility to be aware of the risk of unanticipated consequences, and therefore to be more informed and engaged consumers of automated decision-making systems. Only by assuming this responsibility can users make full use of the rights outlined in the first three pillars.

Among those scientists who chose to take responsibility for the risks wrought by their invention, the most famous example may be that of Albert Einstein. His 1939 letter to Franklin D. Roosevelt about the potential of atomic weapons helped trigger the launch of the Manhattan Project. Einstein had been motivated by the fear that Hitler and the Nazis might develop atom bombs first, but after seeing the results of the effort that he helped to spark, he was filled with regret. “Had I known that the Germans would not succeed in producing an atomic bomb,” he said, “I would have never lifted a finger.”

Einstein later dedicated time and energy to supporting efforts to control the weapons he had helped to create. In fact, the final public document that he signed, just days before his death in 1955, was the Russell-Einstein Manifesto, an eloquent call to scientists to act for the good of humanity. Supported by such other notable scientists and intellectuals as Max Born, Frédéric Joliot-Curie, Linus Pauling, and Bertrand Russell, the manifesto states:

There lies before us, if we choose, continual progress in happiness, knowledge, and wisdom. Shall we, instead, choose death, because we cannot forget our quarrels? We appeal as human beings to human beings: Remember your humanity, and forget the rest. If you can do so, the way lies open to a new Paradise; if you cannot, there lies before you the risk of universal death.

The challenge posed today by modern algorithms may not be as stark as the one presented by the power of atomic bombs. But it’s hard not to see the parallels in the opportunities and challenges the two technologies present.

As Kearns reflected on this, his message was a call to action for the members of his audience: “The scientists who designed these systems have to take on the mantle to fix them.” Kearns was right. But his call should be extended beyond scientists and technologists to also include business leaders, regulators, and end users. Together, we have to decide how to design, manage, use, and govern algorithms so we control the narrative of how algorithms impact our lives.


Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business and a professor of marketing at the Wharton School of the University of Pennsylvania.

This essay was adapted from A Human’s Guide to Machine Intelligence, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by Kartik Hosanagar.
