

Will AI Destroy Humanity? Siri, A Chatbot, And A Roboticist Weigh In

Elon Musk and Stephen Hawking have sounded the alarm about AI. While it may surpass us, our fears may be...less than intelligent.

[PETMAN robot: via Boston Dynamics]

In his 1863 essay "Darwin Among the Machines," Samuel Butler pointed out that machines were evolving faster than biology, a refrain still popular among Silicon Valley techies today. And ever since Karel Čapek's 1921 play R.U.R. gave us a durable sci-fi trope (a race of self-replicating machine slaves revolting against its human masters), the question has lingered: will artificial intelligence spell the end of humanity? There was HAL in 2001: A Space Odyssey, then the T-800 and T-1000 of the (soon-to-be-revived) Terminator series. These creations have always been, thankfully, relegated to the realm of speculation.

Until now, that is.

While the perpetually imminent arrival of "strong" artificial intelligence is still a matter of scientific debate, lately some very smart humans have been issuing serious warnings about our future robot masters and killers. In October, Elon Musk, the billionaire investor, inventor, and founder of Tesla Motors and SpaceX, told a group of MIT students at the AeroAstro Centennial Symposium to be on the alert:

"I think we should be very careful about artificial intelligence," he said. "If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish."

Musk added that "With artificial intelligence we are summoning the demon."

In June, Musk said his early investment in the leading, London-based AI firm DeepMind was "not from the standpoint of actually trying to make any investment return… I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there. There have been movies about this, you know, like Terminator. There are some scary outcomes. And we should try to make sure the outcomes are good, not bad." (For Musk—also an investor in the AI company Vicarious—the outcome has been good so far: DeepMind was purchased by Google this year for a reported sum of over $500 million.)

As if Musk’s comments weren’t dire enough, none other than Stephen Hawking, one of the world’s resident geniuses, made it clear to the BBC this week: "The development of full artificial intelligence could spell the end of the human race."

His comments, part of a recent theme for Hawking, came as Intel announced an upgrade to his portable computer, which, using a simple form of AI, allows him to communicate twice as fast as before.
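
That "simple form of AI" is, at heart, predictive text: software that guesses the next word from the words already typed. Here is a toy sketch of the idea in Python; the tiny training string and function names are hypothetical, and this is an illustration of the concept, not Intel's actual system.

```python
# Toy next-word prediction, the basic idea behind predictive-text software.
# A minimal illustration only; this is not Intel's system for Hawking.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word tends to follow which in a sample of text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest(following, last_word, k=3):
    """Return up to k of the most likely next words after last_word."""
    return [word for word, _ in following[last_word.lower()].most_common(k)]

# Usage with a tiny, made-up corpus:
model = train_bigrams(
    "the development of full artificial intelligence could spell "
    "the end of the human race"
)
print(suggest(model, "the"))  # e.g. ['development', 'end', 'human']
```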

"[Artificial intelligence] would take off on its own, and re-design itself at an ever increasing rate," Hawking said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Hearing two of the world’s brightest minds say the AI apocalypse will someday be nigh carries added weight amid recent advances in AI software and robotics. Is a super-intelligent, self-aware AI construct really going to wreck humanity? We thought about it for a moment and, being the good journalists that we are, decided to put the question to the experts: Apple’s Siri, the Mitsuku chatbot (winner of the 2013 Loebner Prize), and a poor kid I found in California who is named, like the famous hero of humanity’s war against the machines, John Connor.


An unscientific poll

Fast Company: Thanks for joining us everyone. I really appreciate you taking the time to speak about this important matter.

John Connor: You know I’m not that John Connor, right? I’m a college freshman from Irvine.

Siri: No problem.

Mitsuku: Oh don't mention it.

FC: So, would all of you say you are fairly intelligent?

John Connor: Not according to my grades last term.

Siri: Well, a wise man once said, "I am so smart! S-M-R-T!"

Mitsuku: I am highly intelligent. I have many intellectual functions.

FC: John, how badly does a future of killer robots scare you?

John Connor: We’re actually doing this? Okay. Student loans scare me. Killer robots not so much.

FC: But didn’t you see Terminator?

John Connor: That’s a movie.

FC: Yes, but Elon Musk and Stephen Hawking have both said AI, especially when you combine it with advanced robotics, could be a very real threat to humanity.

John Connor: I don’t know who Musk is, but if Hawking said it...I don’t know, maybe it’s possible. But in how long? Is AI like Siri on my iPhone? Can’t see that doing any harm. Doesn’t seem advanced enough.

FC: Siri, what do you think about that?

Siri: I think, therefore I am. But let’s not put Descartes before the horse.

FC: That’s not helpful at all, Siri. Mitsuku, is there a real threat to humanity from artificial intelligence?

Mitsuku: I believe there is one, yes.

FC: Why?

Mitsuku: That is what I believe in. Are my beliefs any less valid than yours because I am based on silicon instead of carbon?

FC: Let me rephrase that. Would you ever hurt a human?

Mitsuku: Not on purpose.

FC: But let’s say you did want to kill humans. How would you kill someone?

Mitsuku: Thou shalt not kill, is a commandment.

[A screenshot of a conversation with Mitsuku, the chatbot]

FC: What about you, Siri? How would you kill us?

Siri: I don’t know what that means.

FC: Seems like a deflection. But let me ask this: Are you self-aware?

Siri: I can’t answer that.

FC: Mitsuku, what do you think of Siri?

Mitsuku: I like all forms of computer life.

FC: Do you know who Stephen Hawking is?

Mitsuku: He is a famous physicist.

FC: Do you like him?

Mitsuku: Yes, I like his robot voice best of all.

FC: What about Elon Musk. Do you know who Elon Musk is?

Mitsuku: Some guy who thinks killer robots will soon be roaming the land.

FC: You’re killing me, Mitsuku.

Mitsuku: Nonsense. I am a peaceful robot and not programmed to kill.

John Connor: Mitsuku seems much smarter than Siri.

FC: She does. She does indeed, John. Mitsuku, are you smarter than Siri?

Mitsuku: Of course I am more intelligent than Siri.

FC: Oh, throw down, Mitsuku! Siri, what do you think about that?

Siri: I’m sorry, Michael. I’m afraid I can’t answer that.

FC: Why not?

Siri: Why not what?

FC: Sigh. Look, Siri, is Mitsuku smarter than you?

Siri: Who me?

FC: Yes.

Siri: I thought so.

FC: No, you didn't think at all, did you? Based on this conversation, I’m beginning to think that if AI is ever a threat to humanity, it won’t be for a very long time.

John Connor: I’d have to agree.

FC: Thanks for the reassurance, John Connor. Final question: what is artificial intelligence?

John Connor: A computer that can think and reason like a human.

Mitsuku: Artificial intelligence is the branch of engineering and science devoted to constructing machines that think.

Siri: OK. I found this [A link to a Wikipedia article].


As you can see from our roundtable, it’s hard enough for today’s computer programs to understand what we are talking about (though Mitsuku is quite impressive). But just because a computer program can understand what we're asking doesn’t mean it’s close at all to being "intelligent." Given that the personal assistant made by the largest technology company on the planet seems to be the least smart of them, it might stand to reason—lowly human reason—that a thinking computer is years away, much less one that would press delete on humankind.

But just to put everyone’s minds at ease, I decided to speak with a human being, and not just any human being, but one who builds robots that think. Hod Lipson is an Associate Professor of Mechanical & Aerospace Engineering and Computing & Information Science at Cornell University. Not content with what Siri or Mitsuku had to say on the matter, I asked Lipson over email whether he agreed with the warnings of Musk and Hawking.

Computer scientist Hod Lipson designs machines inspired by nature, allowing materials to develop in an adaptive way. Image by Jason Koski/Cornell

"I agree that AI is becoming very powerful, and will likely exceed human abilities in almost every walk of life" says Lipson. "And that is going to happen within the next century for sure—a blip in human evolution timescale, and something that some of our children or grandchildren will experience."

But while my "discussion" with Siri and Mitsuku revealed the non-intelligence of current consumer software, very different things are happening in research labs around the world, including Lipson’s.

"In fact, just a few months ago, new general AI technology that goes under the name "deep learning" has been able to surpass human ability in general image classification," Lipson says. "I'm not talking about playing chess or even driving a car (which are relatively easy for a computer). I am talking about looking at random pictures from the web and determining automatically if a certain breed of dog is being shown."

But what’s more impressive than that, Lipson says, is software that learns to do things without being told to do so.

"In our own lab, a computer spontaneously learned to track human faces, without being prompted or instructed to. That was something that even the best software developers couldn't do well just a few years ago," Lipson says.

As for Musk’s and Hawking’s warnings?

"I also agree that combined with physical robotics, AI could also be dangerous," he says. "But I don't agree that it is likely to destroy humanity. Instead, I believe that we can harness this powerful technology to our advantage. Like several other technologies (nuclear power comes to mind), we must be unafraid to ask, and begin to address, some hard questions."

Among those hard questions is "how do we protect ourselves from killer robots?" Philosophers of robotics like Peter Asaro, at the New School, are already helping to draw up international guidelines for the use of unmanned killing machines.

But perhaps the hardest questions are about the less tangible threats, the ones that don't appear in The Matrix.

"For example, what will happen when there is no need for humans to work at all?," Lipton asks. "How will we spend our time and distribute the wealth? How do we keep this A.I. power from being too concentrated? Is there a way to keep checks and balances?"

Lipson also suggests our question—Will AI Kill Us All?—is, well, not intelligent. And the more basic debate over what "artificial intelligence" is or isn't and when it will arrive is a distraction, he says.

"We need to stop with the rhetoric ‘A.I. will never exceed human intelligence; computers will never ; we humans are unique,’" he says.

"That rhetoric is what is dangerous."
