
Can The Campaign To Stop Killer Robots Save Us From Weapons That Kill On Their Own?

A motley crew of human rights activists, ethicists, and technologists is calling on the U.N. to ban autonomous weapon technology before it’s too late.

BY Jessica Leber

Today’s military drones are controversial enough. But at the very least a human is still at the helm, even if in an office far away, deciding when and whom to target in attacks. In the future, as autonomous weapons technology advances, it may be compassion-free “killer robots” rather than people making these life-or-death decisions.

A growing number of human rights activists, ethicists, and technologists who fear this emerging threat are visiting the United Nations this week to call for an international agreement banning the development and use of fully autonomous weapons technology, in the same way that other tools and tactics of war, such as chemical weapons or anti-personnel landmines, have been banned.

“If these weapons move forward, it will transform the face of war forever,” says Nobel Peace Prize winner Jody Williams, a founding member of the “Campaign To Stop Killer Robots,” an international coalition formed this April. “At some point in time, today’s drones may be like the ‘Model T’ of autonomous weaponry,” she says. (Williams won the 1997 Nobel for her activism to push for the landmine ban.)

Weapons that “select and engage targets” on their own do not exist yet (well, except maybe to kill jellyfish), and they could be decades from being realized or reliable.


The list of challenges in developing them is enormous, says Noel Sharkey, an Irish computer scientist who chairs the International Committee for Robot Arms Control. They range from the purely technological (computer vision today is barely good enough to distinguish between a lion and a car, he notes) to fundamental ethical, legal, and humanitarian questions. A teenage insurgent might not look very different from a child playing with a toy, and it’s hard to imagine a machine substituting for human judgment in conducting the “proportionality test” demanded by the rules of war: whether the civilian risks outweigh the military advantage of an attack.

“What would surrender look like?” Sharkey asks.

Despite these obstacles, militaries around the world, including those of China, Israel, Russia, and especially the U.S., are enthusiastic about developing and adopting technologies that help take humans “out of the loop” of combat scenarios, often citing the potential to save soldiers’ lives. To Williams, without preventative action, the writing is on the wall.

Sharkey cites the U.S. military’s X-47B aircraft, which can take off, land, and refuel on its own and has weapons bays, as evidence of the trend toward greater levels of autonomy in weapons systems. Similarly, the U.K. military is developing a drone with BAE Systems called Taranis, or “God of Thunder,” which can fly faster than the speed of sound and select its own targets, though it will require a human to authorize an attack.

The Campaign to Stop Killer Robots, a coalition of international and national NGOs, only launched recently, but individual groups have been working to raise awareness for the last few years. Three days after Human Rights Watch put out a report last year, the U.S. Department of Defense released a policy that put a five- to 10-year moratorium on the use of lethal autonomous weapons; however, the policy doesn’t preclude research and development, can be easily changed by senior officials, and is only in place for a relatively short time. “I really see it as a fig leaf,” says Williams.

While the U.S. is the only nation with any written “killer robot” policy at all, the issue is starting to gain wider notice in international forums. Earlier this month, 272 engineers, computer scientists, and roboticists signed on to the coalition’s letter calling for a ban, and the U.N. special rapporteur on extrajudicial killings issued a worrisome call to action in April. The coalition is hoping that other nations will join Egypt, France, Pakistan, and Austria in seeking to start early talks on the issue at the U.N. General Assembly First Committee on Disarmament and International Security meeting in New York later this month.


Williams first heard about the concept of fully autonomous weapons while working on a paper three years ago. She was deeply terrified, recalling her days as a child hiding under a desk during nuclear drills.

In some respects, the campaign may be an easier reach than her work on landmines, since robotic weaponry doesn’t yet exist and people who hear about the issue tend to share her instinctive reaction. And there is precedent for a “preventative ban”: blinding lasers were never used in war because they were preemptively banned by treaty.

However, unlike the landmines of her earlier campaign, autonomous weapon technology is not an easily defined system. It’s a trend that could change the terms of war and combat, lead to proliferation to rogue states and human rights abusers, and would likely involve billions of dollars in defense contracts. There are also reasonable options besides a ban, such as regulation.

The campaign emphasizes that it is not seeking to stop autonomous robots built for non-lethal uses. Neither is it fighting semi-autonomous weaponry, where a human stays involved. But the distinction is challenging to define. In talking about having a “human in the loop,” asks Williams, “does that just mean the programmer?”


ABOUT THE AUTHOR

Jessica Leber is a staff editor and writer for Fast Company's Co.Exist. Previously, she was a business reporter for MIT’s Technology Review and an environmental reporter at ClimateWire.

