
Who Will Protect Artificial Intelligence From Humanity?

The common science fiction trope is of AI that becomes sentient and turns against us. But if we can create thinking robots, what rights will they have, and who will enforce them?

Westworld. [Photo: John P. Johnson/HBO]

Recent progress in robotics and artificial intelligence has blurred the line between man and machine. After more than 50 years of research, artificial intelligence (AI) systems haven’t quite matched their biological analogues, but they’re catching up: Uber’s self-driving truck hauled 2,000 cases of beer 120 miles; Google knows almost exactly what we’re searching for before we’ve typed it out in full; and many of us speak naturally to assistants waiting in our pockets or our kitchens. This is only the beginning.


As technology improves, our worldview must evolve to keep pace. It’s time to start talking seriously about ethics in AI.

Artificial intelligence has always been alluring and terrifying all at once. To some it’s a panacea that will “make the world a better place”; to others it portends “our final invention,” the modern Pandora’s box that will start by putting us all out of work, and finish by wiping out our species. It could go either way.

Westworld. [Photo: John P. Johnson/HBO]

So it’s important that industry leaders like Google have convened ethics boards (secretive though they may be) to guide the coming revolution/apocalypse. We ought to protect our privacy and ourselves. Yet popular science fiction like Westworld and Ex Machina raises a challenging question that’s often neglected in conversations about AI ethics: Who will protect artificial intelligence from us?

Sci-fi provides stark examples of humans behaving badly towards AI. In Ex Machina, Ava is a feminine android created by a Zuckerberg-inspired recluse. She possesses remarkable qualities of “humanity,” including awareness, memory, and a desire for freedom—so much so that a human falls in love with her. But she is a prisoner; her creator holds her captive, testing her like a lab rat, and if she doesn’t measure up she’ll be scrapped.

HBO’s Westworld takes place in a futuristic theme park where wealthy human guests live out their darkest fantasies with the help of artificially intelligent androids called “hosts.” The hosts are used as props and playthings to support guests’ experiences, while the audience watches them develop something like consciousness. As the show and this budding consciousness unfold, the hosts begin to question the nature of their existence and their purpose in the park.

Although we’re far from existing alongside this level of artificial intelligence, sci-fi can invite deep thought into the ethical conundrums that will arise as AI advances. If we create artificial life, what rights should it have? What are our responsibilities to that creation?

Ex Machina. [Photo: © Universal Pictures]

As an AI researcher, these are questions I find myself pondering now. There are no easy answers.


An approach I’ve found helpful (though it’s certainly not the only one) is to work backwards from people: I think about why we have rights, what inherent qualities make us “deserving” of those rights, and whether an AI could ever possess those qualities. In other words, what makes humans so special?

In René Descartes’ view, a human is a machine of a different color. We derive our autonomy from muscle actuators on a calcium-based linkage, two oxygen-combusting power plants, vision sensors, gyroscopes embedded in our sound sensors, and a massive neural network for control. Of course, our particular machinery far outstrips anything we can engineer today: it’s had a major head start, having been optimized over millions of years of evolution.

Some would argue that the engineer’s sketch above leaves something to be desired, that humans are distinct from machines in at least one fundamental way: we’re alive. Unfortunately, there is no concrete definition of “being alive.” “Life” is characteristic of systems that exhibit several attributes, including organization, growth, adaptation, response to stimuli, and reproduction. Terminological ambiguity aside, “life” cannot be a sufficient condition for possessing rights, because there are living things that have none: plants, insects, and most animals short of mammals (though this is changing).

Westworld. [Photo: John P. Johnson/HBO]

Others argue that the fundamental element of human experience, what endows us with our unalienable rights, is consciousness. The most famous phrase in philosophy suggests as much: Cogito ergo sum, “I think, therefore I am.” It is conscious creatures whose well-being we seek to protect, whose suffering we feel in ourselves. The root of “animal” is “anima,” the Latin counterpart of the Greek “psyche.” The conscious mind seems to me to be the essence of what we are and what we protect with universal rights.

But consciousness is even slipperier than life. It’s variously defined as subjective awareness, the ability to experience “feeling,” or the understanding of the concept “self.” All of these statements feel vague, inadequate, and more than a little circular. It’s for this reason that consciousness has been called “the most familiar and most mysterious aspect of our lives.” Yet if we subscribe to the materialist worldview, we must concede that consciousness arises from purely physical processes. There is no magic.

Take a moment to think about the feelings we are conscious of, like pain. There’s a scene in Terminator 2 where John Connor asks his robotic protector, “Does it hurt when you get shot?” The Terminator replies: “I sense injuries. The data could be called pain.” Materialists like Thomas Hobbes would probably agree. Perhaps pain is simply the confluence of two signals: one indicating a negative external stimulus and the other the goal of self-preservation. In the moment, we just know it hurts. Physiologically, feelings, thoughts, and desires all emerge from the interactions of neurons that build up the mind. Just as simple creatures like ants work together to create colonies of great complexity without any high-level instructions, so individual neurons in the brain might work together to give rise to the phenomenon of consciousness.
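To make that two-signal picture concrete, here is a deliberately crude Python sketch. Every name in it is invented for illustration; it is a thought experiment in code, not a claim about how real minds (or real robots) work.

```python
# A toy model of the materialist picture of pain: "pain" as the confluence of
# a damage report and a self-preservation drive. Purely illustrative.

def pain_signal(damage_level: float, self_preservation_drive: float) -> float:
    """Damage registers as 'pain' only insofar as the agent is driven
    to preserve itself."""
    return damage_level * self_preservation_drive

# An agent with no drive to preserve itself records the injury but feels nothing:
print(pain_signal(damage_level=0.9, self_preservation_drive=0.0))  # 0.0

# The Terminator's "data that could be called pain":
print(pain_signal(damage_level=0.9, self_preservation_drive=1.0))  # 0.9
```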

In deep learning, we simulate (to varying degrees of biological fidelity) the interaction of neurons all the time. I know of no reason why artificial neurons firing in silicon (or whatever our future chips are made of) couldn’t interact in the same way as biological neurons firing in the flesh. Ergo, I believe that machine consciousness is entirely plausible.
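For readers curious what “simulating the interaction of neurons” means in practice, here is a minimal sketch of the standard artificial neuron used throughout deep learning: a weighted sum of inputs passed through a nonlinearity. It is a drastic simplification even of artificial networks, let alone of biology.

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One artificial neuron: integrate weighted input signals, then
    squash through a sigmoid to get a 'firing rate' between 0 and 1."""
    activation = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-activation))

x = np.array([0.5, -1.2, 3.0])   # signals arriving from upstream neurons
w = np.array([0.4, 0.1, -0.6])   # synaptic strengths, adjusted during learning
print(neuron(x, w, bias=0.1))    # this neuron's outgoing signal
```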


I work specifically on deep learning for language. My goal is to build a literate machine. At Maluuba, the company where I work, we train artificial neural networks to converse with people to accomplish various goals, and to read articles and answer questions about them. Language excites me because I’m convinced that it’s one of the most important instruments of thought.

Alan Turing, the father of modern computing, thought similarly. The crux of his “imitation game,” now called the Turing Test, was that a machine could be said to possess intelligence if it fooled a person into believing it was human. It would achieve this through conversation in natural language. The thinking goes: if it can talk like us, it can think like us. Taking the next step: if it can think like us, it should share our rights. The problem is determining when an AI has achieved the capacity to think. At what point does an AI deserve rights? Even if we decide on criteria like consciousness, how do we measure them? It has been shown, for example, that the standard Turing Test is insufficient, since it has been “passed” by simple systems like the Eugene Goostman chatbot, which purported to be a 13-year-old Ukrainian boy. Moving forward, then, it is crucially important from an ethical standpoint to conceive the next generation of Turing tests. The antagonist of Ex Machina worked obsessively toward this goal, but for him the ends justified terrible means.

Efforts to measure machine consciousness will be complicated by the possibility that machines may one day think, but not at all like us. Our minds have emerged through evolutionary processes that geared us for survival and social interaction. It’s almost impossible to imagine how we would think if we weren’t shaped by these motives, but we currently train AI systems through quite different mechanisms. Ludwig Wittgenstein said, “If a lion could speak, we could not understand him.” So it may be with AI. Wittgenstein drives at the notion that communication through language depends significantly on shared points of reference. It’s unclear what these will be for humans and machines. On the other hand, we can be hopeful that, without the deep-seated desire to survive at all costs, intelligent machines might not be as intent on destroying us as most dystopian sci-fi assumes.

Closely related to consciousness is the concept of identity. It seems to me that much of the value of a human life arises from its uniqueness. Could an AI ever be unique, particularly if it were mass produced? Again: what makes humans unique? Beyond the superficial like our faces and fingerprints, the main source of our individuality is memory.

As he is shutting down (or dying), the replicant antagonist of Blade Runner says, “I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain.” The androids of Westworld take a major step toward human-level consciousness when they gain the ability to remember. These sci-fi examples remind us that as individuals, we are collectors of memories. We consume information, arrange it, and add it to ourselves, just as we do metabolically with matter and energy. Memories have meaning because they change us; they make an impression, with all the mechanical connotations of that word. Memories alter how we behave and perceive events, and since each of us starts in a different place at a different time, each person’s memory is unique.

Black Mirror.

As the internal record of human experience, memory also presents an intriguing bridge between man and machine. Consider the themes at play in “Be Right Back,” an episode of the series Black Mirror. A grieving lover uses a service that recreates her dead partner as a walking, talking android built from his memories and audio clips. This fictional story was mirrored in real life when an entrepreneur created a chatbot of her deceased friend, using a trove of the friend’s text messages for training. In these cases an artificial system actively simulates (or hosts?) a human mind through the medium of memory. It’s a middle ground that makes clear the possibility that “humanity” exists on a spectrum. Ought there to be a similar spectrum of rights, with coverage ranging from people to digital clones to AI? Again we come up against the issue of measurement.
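How might a message history stand in for a person? As a purely hypothetical sketch, a crude version of the idea could simply retrieve the most relevant past reply. The corpus and code below are invented for illustration and are far simpler than the neural systems such projects actually use.

```python
from collections import Counter
import math

# A toy retrieval "chatbot": answer a new message with the stored reply whose
# original message is most similar. The (message, reply) corpus is invented.
corpus = [
    ("how was your day", "long, but good. yours?"),
    ("want to grab coffee", "always. usual place?"),
]

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def respond(message: str) -> str:
    """Echo the reply attached to the most similar remembered message."""
    return max(corpus, key=lambda pair: similarity(message, pair[0]))[1]

print(respond("how was your day today"))  # "long, but good. yours?"
```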

Given its role in intelligence, it should be no surprise that memory is one of the next frontiers for deep learning. At Maluuba and other labs, we’ve found that a readable and writeable set of “memory” neurons can significantly increase the power of our models to understand and to reason. As these models learn and record their experiences in memory, do they become individuals in some sense? Do they gain an identity? How should we feel about wiping an artificial memory that’s been built up over thousands of hours of training? Right now we feel nothing; when I imagine the same memory wipe applied to Dolores, one of Westworld’s oldest hosts, I feel horror and revulsion. That is the power of science fiction: to help us grapple with the future before it gets here.
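As a rough sketch of what a readable and writeable set of memory neurons can look like, here is a content-based read over a matrix of memory slots, loosely in the spirit of memory networks and neural Turing machines. It is an illustrative toy, not Maluuba’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
memory = rng.standard_normal((8, 16))  # 8 memory slots, each a 16-dim vector

def read(query: np.ndarray) -> np.ndarray:
    """Content-based read: softly attend to slots similar to the query."""
    scores = memory @ query                   # similarity of query to each slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax attention over slots
    return weights @ memory                   # a blended "recollection"

def write(content: np.ndarray, slot: int) -> None:
    """Overwrite one slot; the 'memory wipe' above erases every slot at once."""
    memory[slot] = content

write(np.ones(16), slot=0)                    # record a new experience
print(read(np.ones(16)))                      # recall now favors that slot
```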


These arguments may smack too much of sentimentality or anthropomorphizing, but I think it safer to err on that side of the issue. History shows us that it’s dangerously easy to deny the identity and rights of those we don’t fully understand. Westworld and Ex Machina evince our tendency to treat our inventions as our property: Dolores is used as a shell for high-paying thrill-seekers; Ava exists mainly to prove her creator’s genius. For better or worse, we take ownership of the things we create. The key will be to recognize when AI evolves beyond the tool it is now into something that cannot be owned. Thankfully, we have time to think: the androids of sci-fi aren’t yet learning in a lab somewhere. But I suspect that someday soon AIs will be our pets. Eventually they will be our partners. Like good parents, we must be vigilant, nurture them as they grow, and ultimately let them go.

This piece was originally published by Graphite Publications.

Adam Trischler is a Senior Research Scientist at Maluuba, a Waterloo- and Montreal-based company conducting research in natural language processing and artificial intelligence. He has published novel methods for allowing machines to read and understand blocks of text, and recently released a new dataset to push the capabilities of these models. You can find out more about Maluuba’s research here.
