Stephen Hawking, Elon Musk Sign Open Letter On The Future Of Artificial Intelligence

The letter recommends refocusing AI research to maximize social good, while "avoiding potential pitfalls."

[Photo: Flickr user Jeff Keyzer]

Hundreds of prominent scientists and intellectuals, including Stephen Hawking and Elon Musk, have signed an open letter urging safeguards for artificial intelligence research and an expansion of efforts to ensure that developments in the field benefit humanity.

"The potential benefits [of AI research] are huge, since everything that civilization has to offer is a product of human intelligence," says the letter. "[T]he eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

The letter is accompanied by a research document suggesting both short-term and long-term priorities for the field. In addition to outlining the possible economic and other advantages of advanced artificial intelligence, the document describes its dangers, including the loss of control over military equipment and vulnerability to cyberattacks.

"As AI systems are used in an increasing number of critical roles, they will take up an increasing proportion of cyber-attack surface area," the research document says. "It is also probable that AI and machine learning techniques will themselves be used in cyber-attacks."

Both the letter and the document avoid explicitly mentioning the darker scenarios involving artificial intelligence familiar to us from the movie screen—yet the notion that our creations could one day rise up against us has been around since long before The Matrix.

Some prominent scientists, moreover, are still very much concerned about this possibility. "One can hope that we will always be masters of the technology," Paul Benioff, the man credited with inventing the concept of quantum computers, told me a year ago in an interview. "But there is no guarantee that a future, in which robots and computers will become so smart and clever that they will be able to manipulate us to their own ends, will never occur."
