U.S. Army promises its super smart robot tanks won’t murder you

[Photo: Cpl. Levi Schultz/Marine Corps]

By Melissa Locker

The U.S. military wants civilians to know that it has no plans to unleash autonomous robot killing machines anytime soon. No, no, no: The robotic killing machines it plans to let loose on the battlefield will have humans with ethical standards driving them!

According to Defense One, the U.S. military is clarifying its robo-tank program after a February article in Quartz warned that it was turning tanks into “AI-powered killing machines.” That bone-chilling assessment was based on the military’s newfangled Advanced Targeting and Lethality Automated System, or ATLAS, which could help tanks “acquire, identify, and engage targets at least 3X faster than the current manual process” through the use of weapon-grade AI.

After the media noticed this potentially alarming use of artificial intelligence, according to Defense One, the Army decided it was best to change its request for information to calm the concerns of lily-livered civilians by emphasizing that ATLAS will “follow Defense Department policy on human control of lethal robots.” I mean, phew, right?! Don’t you feel better?

Here’s the language added to the request:


All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.

If you’re wondering what the ominous-sounding Directive 3000.09 is, it’s a requirement that humans remain in control of weapons, autonomous or not, so that operators can “exercise appropriate levels of human judgment over the use of force.” In other words, a human, not just a machine, will have to decide whether to kill someone.

See? No reason to worry at all.

[h/t Gizmodo]


ABOUT THE AUTHOR

Melissa Locker is a writer and world-renowned fish telepathist.

