Cars are already close to driving themselves down the highway. How far are we then from military weapons systems that decide on their own when and where to fire?
“You want the scary answer?” asks Clearpath Robotics co-founder and CTO Ryan Gariepy. “We can do it right now, with parts I bought off the Internet.”
Clearpath, a 70-person Canadian firm that develops unmanned ground vehicles and autonomous control software for both military and commercial clients, recently became the first company to join a growing movement calling for an international treaty that bans so-called “lethal autonomous weapons.”
Any ban would be preemptive right now. No nation is known to be deploying or developing weapons that kill on their own, without a human calling the shots (even today's drones don't do that). But advocates from the Campaign to Stop Killer Robots point to military systems in development, some of which push this boundary by using sensors to "finalize" targets, and say the possibility of self-directed weapons is not far off at all. Assigning software to make deadly decisions would be inherently unethical and dangerous, they say, regardless of how advanced artificial intelligence systems become.
“If these weapons move forward, it will transform the face of war forever,” Nobel Peace Prize winner Jody Williams, a founding member of the campaign and the former leader of the successful movement to ban anti-personnel landmines, told Co.Exist last year.
Clearpath, which supplies systems to U.S. military R&D labs and Canada's version of DARPA, began discussing the issue internally two years ago, around the time the UN special rapporteur on extrajudicial, summary or arbitrary executions issued a report raising the issue. The company decided it was worth the business risk to speak out, both to validate the activists' concerns and to urge governments to take action.
“We don’t have control of how our products are used in the end,” says Gariepy. “And we don’t believe that the responsibility for when lethal force is deployed should lie with people who program some sort of abstract combat directives. … The concept of an ethical computer program remains firmly in the realm of the theoretical.” (The company will continue to work with military clients in developing fully autonomous robots for non-lethal uses, he says.)
Gariepy hopes other companies follow his lead, though he expects that would be unlikely for large military technology contractors that are public companies, such as Lockheed Martin and iRobot. Clearpath, however, doesn’t make the majority of its revenue from military contracts, and Gariepy believes other mid-sized firms like his may be in a similar position to speak out.
Peter Asaro, vice-chair of the International Committee for Robot Arms Control, notes that companies like Google are thinking about the issue of ethics for AI more generally (Google agreed to form an internal ethics board when it acquired the AI company DeepMind this year). And last October, 270 roboticists, AI experts, and other technologists signed a letter calling for an autonomous weapons ban.
Members of the Campaign to Stop Killer Robots are in New York City to talk to government representatives who are meeting at the UN this week. They hope that, long before any international treaty is signed, individual governments will put their own bans in place. That, however, could be an uphill battle: the technology for autonomous lethal weapons is not well-defined, and it also goes to the heart of the military-industrial complex, says Williams.
“The amount of money out there already in the system makes this harder, for example, than banning landmines. Landmines, by comparison, were chump change,” she says.