A group of top companies and researchers in the artificial intelligence field, including Alphabet’s DeepMind, Clearpath Robotics/Otto Motors, Tesla CEO Elon Musk, and University of California, Berkeley professor Stuart Russell, has signed a pledge not to participate in or support the development of lethal autonomous weapons, colloquially known as killer robots.
“There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others–or nobody–will be culpable,” they write in the pledge. “There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”
More than 170 organizations and 2,464 individuals have signed the pledge, according to the Future of Life Institute, which organized the campaign.
“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” said Future of Life Institute President Max Tegmark. “AI has huge potential to help the world–if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”
This isn’t the first effort by many of the signatories to take a stand against lethal AI: Musk previously cofounded OpenAI, a nonprofit researching safe artificial general intelligence, and Google recently announced a set of AI principles in which it pledged not to develop AI weapons. That announcement came after Google was revealed to be participating in the Defense Department’s controversial and secretive Project Maven, which involves the automated processing of drone footage with machine learning algorithms.
The United Nations has also held talks on banning or restricting lethal autonomous weapons, which could select targets and fire on them with minimal human intervention.