The U.S. military wants civilians to know that it has no current plans to unleash autonomous-robot killing machines anytime soon. No, no, no: The robotic killing machines it plans to let loose on the battlefield will have humans driving them with ethical standards!
According to Defense One, the U.S. military is clarifying its robo-tank program after a February article in Quartz warned that it was turning tanks into “AI-powered killing machines.” That bone-chilling assessment was based on the military’s newfangled Advanced Targeting and Lethality Automated System, or ATLAS, which could help tanks “acquire, identify, and engage targets at least 3X faster than the current manual process” through the use of weapons-grade AI.
After the media noticed this potentially alarming use of artificial intelligence, according to Defense One, the Army decided it was best to change its request for information to calm the concerns of lily-livered civilians by emphasizing that ATLAS will “follow Defense Department policy on human control of lethal robots.” I mean, phew, right?! Don’t you feel better?
Here’s the language added to the request:
All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.
If you’re wondering what the ominous-sounding Directive 3000.09 is, it’s a requirement that humans remain in control of weapons, autonomous or not, so they can “exercise appropriate levels of human judgment over the use of force.” That means a human, not a machine acting alone, will have to decide whether to kill someone.
See? No reason to worry at all.