Cybersecurity experts say the “NotPetya” cyberattack that disrupted computer systems around the world last week was most likely the work of a government intent on attacking Ukraine, which suffered the worst damage. If so, the episode raises disturbing questions about the shape of conflict in the information age–and about whether governments are adequately prepared.
Countries around the world are wielding cyber weaponry these days like never before. Russia is widely credited with previous cyberattacks on Ukraine. The ransomware used last week was based on software tools developed by the U.S. National Security Agency, and leaked by a hacking group in April. Governments like Russia and North Korea are almost certainly behind other recent cyberattacks. And cybersecurity experts generally credit the U.S. and Israel with developing one of the first cyberweapons, a virus called Stuxnet that targeted Iran’s nuclear program.
Should such attacks be considered acts of war? Will future conflicts play out on cyber battlefields as much as they do on physical ones? As Army chief of staff Mark Milley put it, “The first shots of the next actual war will likely be fired in cyberspace, and likely with devastating effect.”
While war is still conducted with fighter jets, assault rifles, and roadside bombs, the world’s governments and armed forces are increasingly bringing new kinds of weapons and information systems to bear. And these software-based systems may soon eclipse most others in the effect they have on the battlefield. At the very least, a shift is under way that will see software come to have a deeper and deeper impact on almost every aspect of conflict.
In short, software is eating the military, and it may just determine whether we win the next war.
Coding For War
I’m standing in an aircraft hangar in Palmdale, California, that’s longer than a football field, looking at the future of military surveillance and, in a very real way, of war.
Apart from the low murmur of engineers discussing flight tests and maintenance, it’s quiet around here–when a fighter jet isn’t roaring down one of the runways nearby. This is the high desert, test pilot country, not far from Edwards Air Force Base, where the most famous test pilot of all, Chuck Yeager, became the first person to travel faster than the speed of sound, in 1947. Yeager made that flight with two broken ribs, and was in such pain that he was unable to seal the cockpit without help.
But the aircraft I’ve come here to see has neither a cockpit nor a pilot. It navigates almost entirely without human assistance, and on glider-like wings 131 feet wide can stay in the air for 30 hours at a stretch. The hangar I’m in is owned by the Northrop Grumman Corporation, one of the biggest defense contractors in the country, but the plane–a Northrop design known as an RQ-4 Global Hawk–is owned by the United States Air Force. It’s got its guts hanging out at the moment, quite literally: Its dolphin-like forehead has been removed to expose a weather radar unit and satellite dish, and its belly is flat and bare, as if it’s sucked in its breath to show you its washboard abs.
This is what I’m interested in. The plane’s underside is studded with a handful of data bus ports and a dozen small metal fittings called Universal Payload Adapters, all of which allow technicians to swap in a variety of surveillance modules on short notice. The Global Hawk can fly an optical bar camera that carries several miles of unalterable high-resolution wet film, a SYERS-2C multispectral sensor (like that used on the U-2 spy plane), or a next-gen MS-177 sensor, meant to outdo the U-2.
It’s cutting-edge surveillance stuff for sure, but what’s interesting about this plane is how quickly its spycams can be switched out. “Imagine downloading a new OS every time you get a new app on your phone,” says Scott Winship, VP for Advanced Programs at Northrop Grumman Aerospace. “You don’t want to have to go back and rebuild all the software in the airplane every time we change a payload.” But until recently, that’s exactly what you had to do. Now, thanks to a set of software standards known as the Open Mission Systems (OMS) architecture, what used to take two to three months of retooling and retesting now takes 12 hours or less. Instead of spending more than $200 million on an aircraft that can do one thing, Global Hawk customers like the Air Force, the Navy, NATO, and others get a multipurpose “smartplane” that can be quickly repurposed to fly a variety of missions, that can integrate new technologies with a minimum of effort, and that can provide data a variety of military systems can consume.
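The payoff of an open-architecture standard like OMS is easiest to see in miniature. The Python sketch below is a loose analogy, not Northrop’s actual software, and every name in it is invented: payload modules conform to a small shared interface, so the host “aircraft” code never changes when a sensor is swapped.

```python
# Loose analogy for an open-mission-systems plugin architecture.
# All class and method names here are illustrative, not from OMS.
from abc import ABC, abstractmethod


class Payload(ABC):
    """Contract every payload module must satisfy."""

    @abstractmethod
    def capture(self) -> dict:
        ...


class OpticalCamera(Payload):
    def capture(self) -> dict:
        return {"sensor": "optical", "frames": 24}


class MultispectralSensor(Payload):
    def capture(self) -> dict:
        return {"sensor": "multispectral", "bands": 7}


class Aircraft:
    """Host platform: it knows the interface, not the module."""

    def __init__(self, payload: Payload):
        self.payload = payload

    def swap_payload(self, payload: Payload) -> None:
        # No host-software rebuild: any conforming module slots in.
        self.payload = payload

    def run_mission(self) -> dict:
        return self.payload.capture()


plane = Aircraft(OpticalCamera())
print(plane.run_mission())            # optical imagery metadata
plane.swap_payload(MultispectralSensor())
print(plane.run_mission())            # multispectral metadata, same host code
```

The design choice is the whole point: because the host depends only on the `Payload` contract, a new sensor is a 12-hour integration rather than a months-long rebuild.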
While that may sound like common sense, building such robustness into the $600 billion enterprise that is the U.S. defense establishment is an almost unthinkably complex task. The Pentagon is making progress, though, in everything from unmanned surveillance aircraft to missile defense, cyberwarfare, AI fighter jets, intelligence analysis, robot sidekicks, and much more. The Global Hawk may have begun life as a conventional if powerful surveillance drone, but its modular nature and the technology that underpins it are harbingers of a subtle but important shift in the way we approach national security.
Bringing A New Weapon To Bear
New technologies have always shaped the ways we go to war. From gunpowder in the 16th century to nuclear weapons in the 1940s to drones in the modern era, everyone from generals to grunts has sought the latest war-fighting advances as important tools in their strategic and tactical portfolios. Today’s conflicts are no exception.
But current developments look far different from the crossbow or the Gatling gun. It’s true that everything from autonomous aircraft to tricorder-like battlefield apps, self-aiming rifles, augmented reality visors, intelligence-mining algorithms, and much more is currently in development or deployed “in theater.” But there’s a bigger change afoot than the evolution in ways to monitor, safeguard, or kill people that each of these represents. Underlying all of them is a single enabling technology that is now being leveraged in the military more extensively than ever before.
That technology is, of course, software, which now touches more of the military complex than ever, in deeper ways: More and more weapons and surveillance systems rely on it; more and more tools are being created to take advantage of the possibilities it affords; more and more decisions are being made based on software algorithms that range from the relatively simple to the intractably complex; more and more developers are able to contribute to the vast libraries of code that the military runs; and more and more questions are being raised around issues like what constitutes a weapon and what constitutes an attack.
On the surface, this may seem like nothing more than the military entering the modern age. But look closer and it’s possible to perceive a dramatic shift underway in both the technology and the doctrine of war, in which software is becoming the pivotal element behind weapons and information systems, and is increasingly the thing that will determine who has the upper hand. Yes, there will always be one plane that flies faster than the rest, one tank that can take more punishment, one satellite that can see farther, one missile that’s more devious. But the real differentiator will be the capacity to bring information and computing capacity to bear, and to understand how tactics will need to shift to best take advantage of the new tools and techniques that commanders have at their disposal.
“More and more of what [the military] is doing is going to be software, and software-enabled,” says Pat Antkowiak, Northrop Grumman’s chief technology officer. “Throughout the [defense] community, there seems to be an awakening that this is all becoming much more fundamental. The potential for rapid integration and introduction of new capabilities built into a software framework, this is clearly part of the promise. This notion of being able to have rapid, highly automated prosecution of really complex tasks against an adversary who’s moving rapidly against you, that is certainly part of the benefit on the operational side.”
That benefit needs to be leveraged quickly. America is training up a new crop of cyber soldiers (see sidebar), but cyber warfare is already an active part of the “global threat environment” today. Software exploits can have much more alarming effects than encrypting data, exposing private information, taking down Pentagon email systems (as Russia was credited with in 2015), or even hacking elections. Last year, the Justice Department charged a hacker affiliated with the Iranian government with hacking into the controls of a small dam in upstate New York in 2013. The dam was in “maintenance mode” at the time, and so could not be operated, but the episode illustrates how potentially devastating such a cyberattack on the country’s infrastructure could be.
“Sometimes people tend to think this is just some form of mildly aggressive hacking, as opposed to very serious activities done by very serious people who want to do our country harm,” says Brigadier General Joseph McGee, the Army Cyber Command’s deputy commander for Operations. The armed forces are responding, as fast as they can. U.S. Cyber Command is still young–it was first established in 2009–and its 133 new Cyber Mission Force teams reached “initial operating capacity” only late last year. Many aren’t due to be fully operational until the end of 2018. But given the growing threat and reality of cyber attacks today, USCYBERCOM has had to put its teams into action as soon as possible. They are now deployed in around 50 operations around the world, in a variety of offensive, defensive, and support capacities.
“We’re conducting operations right now against ISIS in northern Iraq and Syria, and in support of ground operations going on there,” McGee says. While laptop-toting soldiers aren’t the first thing that comes to most people’s minds when they think of war in the Middle East, that image will become more common over time. Being effective in current cyber operations is critical to our ability to wage the wars of the future, McGee says. “There is an absolute seriousness to the conflict that is occurring now in cyberspace,” he says. “It is apparent to me that how we conduct cyber operations now will help determine how we operate in case [cyber] crises ever get to the level of true conflict.”
The Autonomous Algorithms Of War
As software-based weaponry begins to loom larger in conflicts around the globe, military leaders and policymakers will need to think hard not just about how to defend against such attacks, but also about the right way to carry them out.
One of the Pentagon’s latest attempts to put machine learning to work is dubbed the Algorithmic Warfare Cross-Functional Team (aka Project Maven). Established in April of this year, the team’s first task is to combat ISIS by helping process the vast stores of video footage captured by U.S. surveillance drones and other aircraft. As an initial step, Project Maven will “develop, acquire, and/or modify algorithms” to detect and classify various objects, and generate alerts based on the results.
Project Maven’s intent is to “reduce the human factors burden of FMV analysis, increase actionable intelligence, and enhance military decision-making,” according to Deputy Secretary of Defense Robert Work. Given that machine learning algorithms have already been used to help identify couriers working for militant organizations in Pakistan and elsewhere, with some success, it’s not hard to imagine Project Maven’s algorithms being used to help identify terrorists, and then “enhancing” a decision as to whether or not to attack them.
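The “detect, classify, and alert” task Project Maven describes can be sketched in a few lines. The sketch below is purely hypothetical–Maven’s actual algorithms are not public–and the class labels, confidence scores, and thresholds are invented for illustration: a trained detector emits labeled detections per video frame, and a filter turns the confident ones into alerts for analysts.

```python
# Hypothetical sketch of a detect-classify-alert loop over video frames.
# The labels, scores, and threshold are invented; Project Maven's actual
# models and pipeline are not publicly documented.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # classifier output, e.g. "vehicle", "person"
    confidence: float  # model score in [0, 1]
    frame: int         # which video frame it came from


WATCHLIST = {"vehicle", "person"}  # object classes analysts care about
THRESHOLD = 0.8                    # minimum confidence to raise an alert


def generate_alerts(detections):
    """Keep only confident detections of watched classes."""
    return [d for d in detections
            if d.label in WATCHLIST and d.confidence >= THRESHOLD]


feed = [
    Detection("vehicle", 0.93, frame=10),
    Detection("building", 0.99, frame=10),  # not on the watchlist
    Detection("person", 0.55, frame=11),    # below the threshold
]
alerts = generate_alerts(feed)
print([(a.label, a.frame) for a in alerts])  # [('vehicle', 10)]
```

The filtering step is trivial; the hard part, and the reason Maven exists, is the upstream detector that turns millions of hours of raw footage into labeled detections in the first place.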
The vast amounts of data collected by military and government “ISR” (intelligence, surveillance, and reconnaissance) are certainly used to help commanders formulate tactics and plans. But so far, artificial intelligence systems haven’t been used to autonomously choose the targets of deadly force–at least, not in a context that’s been publicly disclosed.
Already, though, autonomous weapons systems can determine for themselves how to carry out their orders, even if they’re not yet determining what those orders should be. The Air Force’s “Loyal Wingman” concept envisions the conventionally piloted fighter jets of the future getting one or more autonomous wingmen that would accompany the pilot into combat. Another big defense contractor, Lockheed Martin, demonstrated this capability earlier this year with its Have Raider program, in which a fully autonomous F-16 flew simulated air-to-ground strike missions in support of a manned aircraft. While the Have Raider F-16 was “told” what its targets should be, it was also given the ability to autonomously update its plans based on new threats it encountered along the way, making its own decisions about how to reach and destroy its target without itself being destroyed in the process.
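That distinction–a human picks the target, the machine picks the route–can be illustrated with a toy replanner. The sketch below has no relation to the actual Have Raider software; it simply shows the principle: the “orders” (start and goal) stay fixed, while the path is recomputed whenever a new threat zone appears.

```python
# Toy illustration of in-flight replanning on a grid. The mission (start
# and goal) is fixed by a human; the route is the machine's decision and
# is recomputed when new threats appear. Not related to Have Raider.
from collections import deque


def plan(start, goal, threats, size=5):
    """Breadth-first search for a shortest threat-avoiding path."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = cell
            if (0 <= nx < size and 0 <= ny < size
                    and cell not in threats and cell not in seen):
                seen.add(cell)
                frontier.append((cell, path + [cell]))
    return None  # no safe route exists


route = plan((0, 0), (4, 0), threats=set())
# A new threat zone is detected mid-mission: same orders, new route.
reroute = plan((0, 0), (4, 0), threats={(2, 0), (2, 1)})
print(len(route), len(reroute))  # detour is longer but avoids the threat
```

A real system replaces the grid with terrain, radar coverage, and fuel models, but the division of labor is the same one the Have Raider demonstration tested.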
Most visions of autonomous weapons systems contemplate people being involved in some way. But the amount of information being collected and processed by military systems is already far too much for even a substantial team of analysts and decision-makers to synthesize and consume. And the reasoning used by sophisticated machine-learning algorithms to arrive at their conclusions is often notoriously obscure. It’s entirely possible that useful target-selection algorithms will be of a complexity that’s well beyond our capacity to understand and second-guess. After all, there’s not much value in software that does something a person could do just as well.
This is the kind of thing that gives rise to doomsday scenarios in which Terminator robots walk the earth, mopping up the remains of the human civilization they’ve just destroyed. And while that’s unlikely, it’s no stretch to wonder how much power we’re ceding to the algorithms we create–whether they’re used by weapons systems to target enemy combatants or by social networks to target ads.
“More and more in war, as in other parts of life, power and authority are expressed algorithmically,” says Dustin Lewis, senior researcher at the Harvard Law School Program on International Law and Armed Conflict. “This trend will almost certainly continue to increase in pace and in reach in terms of the number of countries and armed forces that are incorporating algorithmic systems into their military functions.”
As that trend continues, Lewis says, the designers of military and other algorithms need to keep in mind at least the minimum legal norms established by international law: “The stakes are extremely high.” Soldiers are trained in rules of engagement that let those laws shape the way they fight. As algorithms make more decisions not just about how to kill, but also whom and when, will the engineers who create them be similarly trained?
Compounding the issue, most of us are already culturally habituated to defer to the algorithms we encounter daily, often without even being aware of them. “Where I find this concerning from an accountability perspective is the possibility that [autonomous warfare] systems be used and adopted whole cloth, without thinking through sufficiently the basic legal and other accountability concepts,” Lewis says. Google tells us which search results are most important; Facebook tells us what news to pay attention to; Amazon tells us what we might like to read, Spotify what we might like to listen to. How often do most of us question these pronouncements? How ready are we to make our own decisions once a computer is telling us who we might like to kill?
One Information Network To Rule Them All
Whatever the combination of human and machine at the controls, making effective decisions in conflict–or anywhere else–depends in large part on having timely access to critical levels of information and analysis. This too is a problem the Pentagon hopes to use software to solve, but its latest big steps in this direction have been ponderously slow and, according to some, less than brilliantly executed.
Considering that the DoD has nearly 3 million civilian, uniformed, and reserve personnel on its rosters, getting them all the right information in the right way at the right time is an almost unimaginably complex task. Individual organizations within the various commands have in the past dealt with this problem mostly by ignoring it: Many computer systems in use by today’s armed forces are unable to communicate with other systems even within the same branch or on the same military base.
In some cases, they’re nearly unable to communicate at all: Much of the nation’s ballistic missile defenses and nuclear bombers, for instance, still use 8-inch floppy disks to coordinate many operational functions, according to a 2016 report from the U.S. Government Accountability Office. Individual weapons systems often represent their own information silos (or “stovepipes,” as the military often terms them). Data from a Patriot missile radar, for instance, can be used to aim and launch Patriot missiles, but is generally unavailable to other weapons systems that might be able to use it to enhance the nation’s defense.
The Army is currently working hard to change this last case, bringing its air and missile defense assets into an “any sensor, best shooter” package known as the Integrated Air and Missile Defense Battle Command System, or IBCS. That system would collect and integrate data from a variety of sensors on the network, and use that information to choose from among a variety of weapons systems, prioritize and set targets for them, and launch attacks. But as of early last year, the Northrop-developed system was crashing several times a day, sometimes for more than 10 minutes at a time–making it tough to feel confident in its vigilance. Northrop and the Army have been ironing out the kinks since then, though various estimates put the system’s launch as late as 2018 or 2019.
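The “any sensor, best shooter” idea reduces to a matching problem: once tracks from every radar on the network are fused into one picture, each track is assigned to whichever weapon is best placed to engage it. The sketch below is invented for illustration–the names, ranges, and probabilities have nothing to do with IBCS’s actual logic.

```python
# "Any sensor, best shooter" in miniature: fuse tracks from networked
# sensors, then assign each track to the best-placed interceptor.
# All names and numbers are invented; this is not IBCS's real logic.

tracks = [  # the fused picture built from every sensor on the network
    {"id": "T1", "range_km": 40},
    {"id": "T2", "range_km": 110},
]

shooters = [  # candidate interceptor batteries
    {"name": "battery-A", "max_range_km": 70,  "p_kill": 0.9},
    {"name": "battery-B", "max_range_km": 160, "p_kill": 0.7},
]


def best_shooter(track, shooters):
    """Pick the in-range shooter with the highest kill probability."""
    capable = [s for s in shooters
               if s["max_range_km"] >= track["range_km"]]
    if not capable:
        return None  # no weapon can reach this track
    return max(capable, key=lambda s: s["p_kill"])


assignments = {t["id"]: best_shooter(t, shooters)["name"] for t in tracks}
print(assignments)  # {'T1': 'battery-A', 'T2': 'battery-B'}
```

The value of the network shows in the second assignment: a sensor that can see the distant track hands it to a battery whose own radar may never have detected it.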
Another piece of software that has seen its share of tribulations is known as the Distributed Common Ground System, or DCGS. In theory, this is a piece of “military-grade” enterprise software that should tie together information analysis and distribution across all the armed forces, integrating battlefield reports with ISR data from a variety of sources to give planners and commanders a comprehensive and actionable picture of the “battlespace” they’re concerned with.
The DCGS has been operating in one form or another since at least 1996, though the earliest versions of it took the form of Deployable Ground Stations that weighed 200 tons, were staffed by 200 people, and took half a dozen giant C-5 Galaxy military transport aircraft to move into theaters of operation around the world. These days, the system is considerably more mobile, but is implemented differently by different branches of the armed forces, and is badly in need of upgrade and integration across services and organizations–as well as a lot of UX love. Current versions of the software are so difficult to set up, use, and maintain, according to some, that commanders in the field often don’t bother with many of its functions, and sometimes don’t bother to set it up at all. Some branches have already begun moving to a more modern system–including instituting open standards architectures similar to OMS–but the road is long.
That’s not to say it isn’t important. Various branches of the armed forces are exploring handheld or visor-mounted tactical computers for use on the battlefield that would both help warfighters maintain closer communications with commanders and bring additional information directly to the front lines. But those devices will only be as good as the information and insights that flow through them, and right now that information is fragmented at best.
No less a light than Alphabet executive chairman Eric Schmidt, who is also chair of the Pentagon’s Defense Innovation Advisory Board, has called for the military to develop its own centralized, Google-like data storage and retrieval system. And Peter Thiel’s secretive big-data company, Palantir Technologies, wants to build DCGS software so badly that it took the Army to court, where it won the right to compete for the contract. Many of Palantir’s contracts remain shrouded in secrecy, but its executives clearly see enormous value in such deals–so much so that they’ve taken a notoriously combative approach to winning DoD business. The relationship highlights some of the deep issues with the defense acquisitions process. Could great software that contributes to our national defense possibly result from such an adversarial relationship? We may yet have the chance to find out.
New Challenges, Uncertain Future
As software eats the military, it’s becoming clear that the defense establishment is only just starting to take advantage of some of the capabilities that software-based systems afford. It’s also just starting to come to grips with some of the complexity that such systems may present: None of it plays well together; all of it can be hacked; achieving a robust and intuitive design is just as hard as it is for any commercial application (and in many cases harder); and pitfalls and vulnerabilities can arise where you least expect them.
Just being able to update a software system without having to rethink things from the ground up is something of a new trick for the Department of Defense, strange as it may seem. Reaping the benefit of up-to-date software development practices isn’t something that’s limited to experimental autonomous fighter jets, of course. “Being able to upgrade, that’s really something that’s applicable to any platform that has any amount of software capability–which is really all of them [emphasis added] at this point in time,” says Renee Pasman, Lockheed’s director of Mission Systems Roadmaps for Advanced Development Programs.
Making all these systems work well together is no easy task. In fact, just making information from one system or branch of service easily available to another can be a challenge within the vast enterprise that is the Department of Defense–which employs more than 2 million uniformed and civilian personnel, and another 826,000 National Guard and reservists. And keeping the systems and software in use by that enterprise secure is a herculean undertaking in itself. Just maintaining the systems themselves is a task the military struggles with, and at a surprisingly basic level: Many military web pages are unavailable to the public not out of national security concerns but because they have bad credentials or lack current website security certificates.
In an age when we have the destructive potential to end life on Earth as we know it, the difference is not in weapons that can see farther, fly faster, shoot sharper. Those define the lock but they’re not the key. To a great extent, it will be the software that supports and underpins those systems that makes the difference to the fortunes of war. How that software is wielded will become at least as important as where and when the guns are fired and the bombs are dropped. While human ingenuity will always provide an edge (probably), maintaining a competitive advantage will be a task that’s accomplished as much in the realm of software as it is in hardware, and that will fall to different people than it has in the past. Let’s hope Pentagon leaders can keep up.
Mark Wallace‘s work has appeared in The New York Times magazine, the weekend Financial Times, and the Philadelphia Independent, among many others. He lives in San Francisco.