
The rush toward ethical AI is leaving many of us behind.

[Photos: www_slon_pics/Pixabay; Blake Barlow/Unsplash; Ashwin Vaswani/Unsplash]

BY S.A. Applin | 9 minute read

The systems we require to sustain our lives increasingly rely upon algorithms to function. Governance, energy grids, food distribution, supply chains, healthcare, fuel, global banking, and much else are becoming automated in ways that impact all of us. Yet the people developing the automation, machine learning, and data collection and analysis that drive these systems do not represent all of us, and are not considering all of our needs equally. We are in deep.

Most of us do not have an equal voice or representation in this new world order. Leading the way instead are scientists and engineers who don’t seem to understand how to represent how we live as individuals or in groups—the main ways we live, work, cooperate, and exist together—nor how to incorporate into their models our ethnic, cultural, gender, age, geographic, or economic diversity. The result is that AI will benefit some of us far more than others, depending upon who we are, our gender and ethnic identities, how much income or power we have, where we are in the world, and what we want to do.

This isn’t new. The power structures that developed the world’s complex civic and corporate systems were not initially concerned with diversity or equality, and as these systems migrate to becoming automated, untangling and teasing out the meaning for the rest of us becomes much more complicated. In the process, there is a risk that we will become further dependent on systems that don’t represent us. Furthermore, there is an increasing likelihood that we must forfeit our agency in order for these complex automated systems to function. This could leave most of us serving the needs of these algorithms, rather than the other way around.

The computer science and artificial intelligence communities are starting to awaken to the profound ways that their algorithms will impact society, and are now attempting to develop guidelines on ethics for our increasingly automated world. The EU has developed principles for ethical AI, as have the IEEE, Google, Microsoft, and other countries and corporations. Among the more recent and prominent efforts is a set of principles crafted by the Organization for Economic Co-operation and Development (OECD), an intergovernmental organization of 37 member countries focused on economic policy and world trade.

In various ways, these standards attempt to address the inequality that results from AI and automated, data-driven systems. As OECD Secretary-General Angel Gurría put it in a recent speech announcing the guidelines, the anxieties around AI place “the onus on governments to ensure that AI systems are designed in a way that respects our values and laws, so people can trust that their safety and privacy will be paramount. These Principles will be a global reference point for trustworthy AI so that we can harness its opportunities in a way that delivers the best outcomes for all.”

However, not all ethics guidelines are developed equally—or ethically. Often, these efforts fail to recognize the cultural and social differences that underlie our everyday decision making, and they make general assumptions about what a “human” is and what constitutes “ethical human behavior.” This is insufficient. “Whose ethical behavior?” is the question that must drive the guidelines and policies for AI, and for all the other technologies that impact our decision making.

Indeed, when the companies themselves are quietly funding the research on AI ethics, this question becomes even more critical. An investigation this month by Spotlight and the New Statesman found that large tech companies may be stacking the ethics deck in their favor by funding research labs, reporting that “Google has spent millions of pounds funding research at British universities,” including support of the Oxford Internet Institute (OII), where a number of professors are “prolific public commentators on ethical AI and ADM” (automated decision making). Even as one of the professors, Luciano Floridi, serves on U.K. and EU governance ethics boards, Google funds OII and others to research outcomes from those groups. It is normal practice for corporations to fund research, and these sources of funding are expected to be disclosed, but the reporters found that the funding was not always detailed in the groups’ research publications.

While their funding suggests that Google and other large tech companies are “offshoring” ethics to research groups, the companies seem to have struggled to incorporate ethics—and a deep understanding of the human outcomes of their technologies—into their development cycles at home. Two phenomena in particular may be contributing to this problem. The first is that computer science and engineering, in industry and in education, have developed their processes around the concept of what is often referred to as “first principles,” building blocks that can be accepted as true, basic, and foundational in classical Western philosophy. In cultivating “first principles” around AI ethics, however, we end up with a fairly limited version of the “human.” The resultant ethics, derived from these centuries-old contexts in early Western human history, lack the diversity in education, culture, ethnicity and gender found in today’s complex world.

Because we are different, AI algorithms and automated processes won’t work equally well for all of us worldwide. Different regions have different cultural models of what constitutes sociability, and thus ethics. In the case of autonomous vehicles, for example, the AI will need more than just pragmatic “first principle” knowledge of “how to drive” based on understanding the other machines on the road and the local laws (which will sometimes differ by municipality). These vehicles will also need to take into account the social actions of driving, and the ethical choices that each driver makes on a daily basis based on their cultural framework and sense of sociability.

Reducing the rules around automation to regulations or laws cannot account for unexpected scenarios, or for situations when things go wrong. In the era of autonomous vehicles, the entire road space cannot be controlled, and the actions within it cannot be fully predicted. Thus the decision-making capabilities of any such algorithm would need to contain the multitudes of who we all are, collectively. In addition to accounting for random animals and on-the-road debris, AI will need frameworks for understanding each person (bicyclist, pedestrian, scooter rider, etc.), as well as our cultural and ethical positions, in order to cultivate the judgment required to make ethical decisions.

Improving the engineering approach to AI by adding the social sciences

A second crucial factor limiting the development of robust AI ethics is captured by Conway’s Law, named for computer scientist Melvin Conway, PhD, which states that “organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.” That is, if a team developing a particular AI system is made up of similar types of people who rely on similar first principles, the resulting output is likely to reflect that.

Conway’s Law exists within educational institutions as well. In training technology students on ethics, institutions are mostly taking a Silicon Valley approach to AI ethics, employing a singular cultural frame that reinforces older, white, male, Western perspectives, deployed to influence younger, male minds. This approach to ethics could be described as DIY, NIH, and NIMBY—as in “do it yourself, not invented here, not in my backyard.” It pushes for teaching selected humanities, ethics, and social sciences to engineers within their own companies or institutions, rather than sending them to learn outside their academic disciplines or workplaces.

All of this means that the “ethics” that are informing digital technology are essentially biased, and that many of the proposals for ethics in AI—developed as they are by existing computer scientists, engineers, politicians, and other powerful entities—are flawed, and neglect much of the world’s many cultures and ways of being. For instance, a search of the OECD AI ethics guidelines document reveals no mention of the word “culture,” but many references to “human.” Therein lies one of the problems with standards, and with the bias of the committees who are creating them: an assumption of what being “human” means, and the assumption that the meaning is the same for every human.


This is where anthropology, the study of human social systems, history, and relations, can contribute most strongly to the conversation. Unfortunately, social science has largely been dismissed by technologists as an afterthought—if it’s considered at all. In my own research, which included a summer in the Silicon Valley AI lab of a multinational technology company, I have found that rather than hiring those with knowledge of complex social systems, technologists are attempting to be “self-taught,” taking adjunct courses or reducing non-engineering hires to cognitive scientists who specialize in individual brain function.

These hires are often exclusively male, and often do not represent the range of ethnicities and backgrounds of the broader population, nor do they deal with how humans live and work: in groups. When asked about ethics, the vice president of the AI lab where I worked told me, “If we had any ethics, they would be my ethics.”

An additional approach from technology companies has been to hire designers to “handle” the “complex messy human,” but designers are not trained to deeply understand and address the complexity of human social systems. They may appropriate social science methods without knowing the corresponding theories needed to make sense of the data they collect, or how to apply them. This can be dangerous because it is incomplete and lacks context and cultural awareness. Designers might be able to design more choices for agency, but without knowing what they are truly doing with regard to sociability, culture, and diversity, their solutions risk being biased as well.

This is why tech companies’ AI labs need social science and cross-cultural research: It takes time and training to understand the social and cultural complexities that are arising in tandem with the technological problems they seek to solve. Meanwhile, expertise in one field and “some knowledge” about another is not enough for the engineers, computer scientists, and designers creating these systems when the stakes are so high for humanity.

Artificial intelligence must be developed with an understanding of who humans are collectively and in groups (anthropology and sociology), as well as who we are individually (psychology), and how our individual brains work (cognitive science), in tandem with current thinking on global cultural ethics and corresponding philosophies and laws. What it means to be human can vary depending upon not just who we are and where we are, but also when we are, and how we see ourselves at any given time. When crafting ethical guidelines for AI, we must consider “ethics” in all forms, particularly accounting for the cultural constructs that differ between regions and groups of people—as well as time and space.


Related: What we sacrifice for automation


That means that truly ethical AI systems will also need to dynamically adapt to how we change. Consider how the ethics of how women are treated in certain geographical regions has changed as cultural mores have changed. It was only within the last century that women in the United States were given the right to vote, and even more recently for women of certain ethnicities. Additionally, it has taken roughly that long for women and people of other ethnicities to be accepted pervasively in the workplace—and many still are not. As culture changes, ethics can change, and how we come to accept these changes, and adjust our behaviors to embrace them over time, also matters. As we grow, so must AI. Otherwise, any “ethical” AI will be rigid, inflexible, and unable to adjust to the wide span of human behavior and culture that is our lived experience.

If we want ethics in AI, let’s start with this “first principle”: Humans are diverse and complex and live within groups and cultures. The groups and cultures creating ethical AI ought to reflect that.


S. A. Applin, PhD, is an anthropologist whose research explores the domains of human agency, algorithms, AI, and automation in the context of social systems and sociability. You can find more at @anthropunk and PoSR.org.




