Machine Ethics

The conclusion of the Battlestar Galactica television series a couple of weeks ago left viewers with a decidedly mixed message: a superficial gloss of "ooh, the scary robots are coming!", coupled with a more subtle—and, for me, more important—story about the implications of how we treat that which we create.

You don't have to be a science fiction aficionado to appreciate the importance of the latter narrative. All you need to do is look at this past week's headlines: "ADAM," a robot scientist, making discoveries about genetics; "CB2" ("Child robot with Biomimetic Body") learning to recognize facial expressions and developing social skills; and battlefield robots taking on an increasingly critical role in American military operations. Autonomous and semi-autonomous systems are becoming extraordinarily complex, and our relationship with them differs significantly from how we use other technologies. How we think about them needs to catch up with that.

We've all heard of Isaac Asimov's "Three Laws of Robotics," a fictional set of ethical guidelines for intelligent machines; what I want to see is a set of guidelines aimed at the people who design those machines. I spoke recently to a group of technologists in the San Francisco Bay Area and proposed my own "Five Laws of Robotics." These should be considered a draft, not a final statement, but they provoked useful debate at that gathering.

Creation 2.0

    Law #1: Creation Has Consequences
    This is the overarching rule, a requirement that the people who design robots (whether scientific, household, or military) bear a measure of responsibility for how they work—not just in the legal sense, but in the ethical sense, too. These aren't just dumb tools to be used or abused; they're systems with an increasing level of autonomy that have to choose appropriate actions based on how they've been programmed (or how they've learned, based on that programming). But they aren't self-aware individuals, so they can't be blamed for mistakes; it all comes down to their creators.

    Law #2: Politics Matters
    The First Law has a couple of different manifestations. At a broad, social level, the question of consequences comes down to politics—not in the partisan sense, but in the sense of power and norms. The rules embedded into an autonomous or semi-autonomous system come from individual and institutional biases and norms, and while that can't really be avoided, it needs to be acknowledged. We can't pretend that technologies—particularly technologies with a level of individual agency—are completely neutral.

    Law #3: It's Your Fault
    At a more practical level, the First Law illuminates issues of liability. Complex systems will have unexpected and unintended behaviors. These can be simple, akin to a software bug, but they can be profoundly complicated, the emergent result of combinations of programmed rules and new environments. As we come to depend upon robotic systems for everything from defense to health care to transportation, complex results will become increasingly common—and the designers will be blamed.

    Law #4: No Such Thing as a Happy Slave
    Would autonomous systems have rights? As long as we think of rights as being something available only to humans, probably not. But as our concept of rights expands—as shown, in particular, by the Great Ape Project's effort to grant a subset of human rights to our closest relatives—that may change. If a system is complex and autonomous enough that we start to blame it, rather than its creators, for mistakes, we'll have to take seriously the question of whether it deserves rights, too.

    Law #5: Don't Kick the Robot
    Finally, we have the issue of empathy. We've known for a while that people who abuse animals as kids often grow up to abuse other people as adults. As our understanding of how animals feel and think develops, we have an increasingly compelling case for avoiding any kind of animal cruelty. But robots can be built to have reactions to harm and danger that mimic animal behavior; a Pleo dinosaur robot detects that it's being held aloft by its tail, and kicks and screams accordingly. This triggers an empathy response—and is likely to become a standard way for a robot to communicate damage or risk to its human owner.

We may not fully realize just how profound the ongoing introduction of autonomous systems into our day-to-day lives will prove to be. These aren't just more gadgets, or dumb tools, or background technologies. These are, increasingly, systems that—despite being mechanical, created objects—operate in the same emotional and social-intelligence space as animals and even people.

At the moment, the question of how to treat robots appropriately, and the issue of ethical guidelines for roboticists, may seem relatively minor. That's okay—it's going to take us a while to work through the right ethical and social models. But we really should have a handle on this before the systems we make decide that they've been kicked around for long enough, and start to kick back.

Jamais Cascio covers the intersection of emerging technologies and cultural transformation, focusing on the importance of long-term, systemic thinking. Cascio is an affiliate at the Institute for the Future and senior fellow at the Institute for Ethics and Emerging Technologies. He co-founded, and also blogs at