
How Big Tech is helping build the Pentagon’s all-seeing eye-in-the-sky

After years of more clandestine contracting, the Pentagon has turned to Silicon Valley’s algorithms to build “Enemy of the State”-style surveillance systems.

[Photo: Tech. Sgt. Robert Cloys]

One day in the spring of 2008, Colonel John Montgomery walked into a ground-control station at Creech Air Force Base in Nevada for his regular shift flying a Predator drone over Iraq. The mission that day was an open patrol over Sadr City, a densely populated neighborhood in northeastern Baghdad. Montgomery’s squadron had been watching the area for weeks.


As Montgomery settled into his seat, his sensor operator turned to him. “There’s something wrong in this city,” he said. “I don’t know what it is, but things just don’t feel right to me.” Montgomery sensed it too. “It was a vibe. It just wasn’t right,” he later told me of the mission. Montgomery’s crew was so familiar with the streets of Sadr City that they understood the rhythm of the neighborhood; they even committed to memory the exact spots where local women hung laundry from their balconies. When something was off, it was obvious.

Fifteen minutes into the shift, the sensor operator pointed to a man on the screen. “This guy does not make sense to me,” he told Montgomery. The man was wearing a suit, and he was speaking on a cell phone. From 15,000 feet, he wouldn’t have appeared particularly unusual or suspicious to the untrained eye. But the sensor operator was an experienced airman and Montgomery trusted his instincts. He agreed to drop the planned patrol. On the basis of a hunch, the man in the suit became a target.

For more than three hours as the Predator orbited overhead, the man didn’t once set foot inside a building. He seemed to be walking aimlessly, at times strolling down the middle of busy roadways. He kept his cell phone to his ear the entire time.

Eventually, the man made his way into a quiet side street and a Toyota pickup pulled into the frame. Three men emerged and, together with the man in the suit, the group took a mortar tube out of the truck’s flatbed tray and fired two shots toward a nearby US base. After dumping the barrel in an abandoned lot, the three men got back into the Toyota and drove away, and the man in the suit went on walking as though nothing had happened.

An intelligence team was dispatched to follow the Toyota, and a second team crossed the neighborhood to retrieve the mortar. The Predator crew continued following the man in the suit, who disappeared into a house a few blocks away. The man met his maker shortly thereafter, Montgomery said.

It has long been known that the Pentagon’s drones generate far more data than its personnel could ever possibly watch. For every enemy like the Man in the Suit that the Pentagon finds, many more go unseen, if not unwatched, by the government’s vast arsenal of spy tools. Automating the task of imagery analysis is widely seen as the solution. And to be sure, even a simple computer vision system could have followed the Man in the Suit around the city, sparing the operators precious time and resources. But an automated tracker would not have been able to tell that he was, in fact, a member of the insurgency. That’s a life-or-death call—based on subtle cues, lots of experience, and a heavy dose of intuition. Surely a computer wouldn’t be capable of that. Right?

[Photo: Air Force]

When the computer says “there’s something wrong in this city”

In the eleven years since that mission, the Pentagon has engaged in a wild and largely secret effort to automate these tasks, and what was once seen as pure fantasy is now much closer to reality. In early 2017, after millions of dollars in government investment in laboratory experiments, a Pentagon task force concluded that advanced surveillance-analysis algorithms “can perform at near-human levels.” In response, the DOD launched Project Maven, a much-publicized and yet somewhat shrouded effort to take algorithmic spycraft to war.

Formally known as the Algorithmic Warfare Cross-Functional Team, Project Maven delivered its first experiment, a system capable of recognizing targets and discovering suspicious activities in drone video footage, to 10 intelligence units working on missions over Syria, Iraq, and a number of African countries in late 2017. It has since expanded to cover other “geographic locations,” according to a recent speech by one senior official.

The official seal of the Algorithmic Warfare Cross-Functional Team, also known as Project Maven.

Among the software’s many features, analysts can select a target of interest and the software will assemble every existing clip of drone footage showing that same vehicle or individual spotted in previous missions. Other features are classified, though they can be fairly easily guessed: one of the contractors on the project, a computer-vision startup called Clarifai, sells software capable of analyzing a person’s “age, gender and cultural appearance” in videos and photographs.
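To make the clip-retrieval feature concrete, here is a minimal sketch of how such a search might work, assuming (hypothetically) a pretrained appearance-embedding model that turns an image chip of a vehicle or person into a fixed-length vector, so that archived footage can be searched by similarity. None of this reflects Project Maven’s actual code.

```python
# A minimal sketch of cross-mission target re-identification. The embed()
# function is a stand-in for a learned appearance encoder (an assumption,
# not a real model): it maps an image chip to a unit-length feature vector.
import numpy as np

EMBED_DIM = 128

def embed(chip: np.ndarray) -> np.ndarray:
    """Stand-in for a learned appearance encoder; returns a unit vector."""
    rng = np.random.default_rng(abs(hash(chip.tobytes())) % (2**32))
    v = rng.standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)

def find_matching_clips(query_chip, archive, threshold=0.85):
    """Return archived clips whose stored embedding is close to the query.

    `archive` maps clip IDs to precomputed unit-length embeddings.
    """
    q = embed(query_chip)
    hits = [(clip_id, float(q @ emb))  # cosine similarity of unit vectors
            for clip_id, emb in archive.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])
```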

In its second “sprint,” Project Maven turned its attention to wide-area motion imagery, or WAMI, which uses high-resolution aerial cameras to watch whole city-sized areas. By the end of 2018, the program aimed to deploy an “AI-based” analysis algorithm for Gorgon Stare, the most powerful known version of WAMI, used aboard a fleet of Air Force drones. (I call it “the all-seeing eye,” and I’ve just written a whole book on the subject.)

The thought of an artificially intelligent Gorgon Stare is startling, to put it mildly. The camera system can record thousands of people and vehicles simultaneously; once automated, every single one of those targets can be watched, unblinkingly and relentlessly. The Air Force declined to tell me exactly what this all-seeing AI would be capable of, but in a presentation to members of the Royal Australian Air Force in October 2017, the director of the program showed that an early prototype of the software was capable of instantaneously recognizing cars, trucks, people, and boats. He also suggested that it would eventually be capable of more sophisticated tasks—perhaps like being able to tell when something about a man onscreen is “not right.”

Partners in the initiative include the US national laboratories and all 17 member organizations of the US intelligence community. As the Pentagon would have it, Project Maven will open the door to a new era of artificially intelligent spycraft. Nothing will go unseen.

A camera system mounted on the MQ-9 Reaper [Photo: USAF / Airman 1st Class Aaron Montoya]

Learning on the fly

Earlier in its history, the main barrier to the widespread adoption of this technology was that even the most advanced of these systems were not entirely glitch-free. In one 20-minute test, I noticed that a behavior-detection algorithm built by Kitware, a go-to defense and intelligence contractor that specializes in machine vision, flagged as suspicious an intersection where a car pulled away from a stop sign and a second car replaced it from behind seconds later. (“Replacement,” when one car takes another’s place, was one of the suspicious behaviors that the software was designed to search for.) No soldier would be able to trust a system that makes such glaring errors.
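For a sense of how brittle such hand-built rules can be, here is a toy version of a “replacement” detector, written by me for illustration and bearing no relation to Kitware’s actual software. It pairs any vehicle departure with a later arrival by a different vehicle at nearly the same spot, and a rule this naive fires on exactly the innocent stop-sign scenario described above.

```python
# A toy "replacement" rule: flag a location when one tracked vehicle departs
# and a different one arrives at nearly the same spot soon afterward.
# The event format and thresholds are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class TrackEvent:
    track_id: str
    kind: str    # "depart" or "arrive"
    x: float     # position in scene coordinates (meters)
    y: float
    t: float     # timestamp (seconds)

def detect_replacement(events, max_gap_s=10.0, max_dist_m=5.0):
    """Pair each departure with any later nearby arrival by a different track."""
    departures = [e for e in events if e.kind == "depart"]
    arrivals = [e for e in events if e.kind == "arrive"]
    alerts = []
    for d in departures:
        for a in arrivals:
            if a.track_id == d.track_id:
                continue
            close_in_time = 0.0 <= a.t - d.t <= max_gap_s
            close_in_space = ((a.x - d.x) ** 2 + (a.y - d.y) ** 2) ** 0.5 <= max_dist_m
            if close_in_time and close_in_space:
                alerts.append((d.track_id, a.track_id, a.t))
    return alerts

# A car leaving a stop sign and the car behind rolling up triggers an alert:
events = [TrackEvent("car_1", "depart", 0.0, 0.0, 100.0),
          TrackEvent("car_2", "arrive", 1.0, 0.5, 103.0)]
print(detect_replacement(events))  # [('car_1', 'car_2', 103.0)]
```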

But such mistakes are becoming rarer, thanks in large part to recent advances in machine learning. In the field of automated spycraft, even comparatively little learning, it turns out, goes a long way. After being trained on ImageNet—a training database containing 14 million annotated images—one WAMI-analysis system, developed by the MIT Lincoln Laboratory, saw its false-alarm rates drop to almost zero. Others trained on similar datasets saw their performance skyrocket. Project Maven’s early software is trained on over 1 million images, and it shows.
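The recipe behind numbers like these is standard transfer learning. As a hedged sketch, assuming PyTorch and placeholder class names (this is not the Lincoln Laboratory’s actual pipeline), it amounts to freezing an ImageNet-pretrained backbone and retraining only a small head on overhead imagery:

```python
# Transfer learning sketch: reuse ImageNet features, retrain the classifier
# head for overhead-imagery classes. Class list and data are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g., car, truck, person, boat

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                      # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new, trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of labeled image chips."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```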

Some would say that even a system with such extensive “training” could never match a human. Whereas even a relatively inexperienced analyst would only mistake a fire truck for an armored combat vehicle once, as long as the error was pointed out to him or her, a computer will make the same mistake over and over again.

This, too, is being addressed, using what’s known as “active learning.” Under Project Maven, when the system misidentifies an object or activity on the ground, analysts can click a “train AI” button and the algorithm will remember not to make the same mistake again. Likewise, when the computer is right, the analysts affirm it. (If you’ve ever had to identify road signs or street numbers in a CAPTCHA test while filling out an online form, you’ve participated in a similar kind of human-supervised learning program for Google Maps.) This way, the computer builds an understanding over time of what works and what doesn’t. The longer the system is in operation, the better it will get at its job.
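Reduced to its bookkeeping, that feedback loop is easy to picture. The sketch below is a deliberately trivial stand-in (the names and thresholds are mine, not Project Maven’s interface): it records each analyst correction or confirmation and overrides a prediction only once analysts have consistently overruled it.

```python
# A toy analyst-in-the-loop correction store. A real system would fold the
# corrections back into model training; here they simply adjust the output.
from collections import Counter, defaultdict

corrections = defaultdict(Counter)  # predicted label -> Counter of true labels

def on_train_ai_click(predicted: str, correct: str) -> None:
    """Record an analyst's correction (or confirmation) of a detection."""
    corrections[predicted][correct] += 1

def adjusted_label(predicted: str) -> str:
    """Defer to analysts once they have consistently overruled a prediction."""
    history = corrections[predicted]
    if not history:
        return predicted
    best, count = history.most_common(1)[0]
    # Override only when the evidence is lopsided (threshold is arbitrary).
    if best != predicted and count >= 5 and count / sum(history.values()) > 0.8:
        return best
    return predicted

for _ in range(6):  # analysts keep flagging the same fire-truck mistake
    on_train_ai_click("armored combat vehicle", "fire truck")
print(adjusted_label("armored combat vehicle"))  # "fire truck"
```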

Eventually, this kind of software will even be capable of learning new tricks on the fly. Analysts using Project Maven software, for example, can teach their systems to recognize entirely new forms of intelligence that they were never trained for. In one example provided by the program’s director, an analyst could teach the software to recognize “an emergency” by the presence of fire trucks and ambulances.
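In software terms, a composite concept like that can be nothing more than a rule over labels the detector already emits, requiring no retraining at all. A sketch, with class names assumed for illustration:

```python
# Composing a new concept ("emergency") from existing detector outputs.
def looks_like_emergency(detected_classes: list[str]) -> bool:
    """Flag a frame whose detections include both fire trucks and ambulances."""
    classes = set(detected_classes)
    return "fire_truck" in classes and "ambulance" in classes

frame = ["car", "fire_truck", "ambulance", "person", "person"]
print(looks_like_emergency(frame))  # True
```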

Even ardent skeptics who’ve had the jobs these algorithms are designed to replace find it hard to deny that these advances are bringing us uncomfortably close to a future when a computer, rather than a human, can stare down on us and declare, “There’s something wrong in this city.”


Colonel John Montgomery, the former Air Force drone pilot who ran the mission against the Man in the Suit, still wants to believe that the human touch will always be necessary—but he’s no longer so sure. One day, while discussing the possibility of automating the Pentagon’s aerial spycraft, Montgomery told me about a YouTube video he had just watched of an active-learning robot playing ball-in-a-cup. At first, the robot is as hopeless as a toddler. But then, on the 70th attempt, the ball hits the cup’s rim. Like a bright analyst in training, the robot takes note. With each attempt, it gets better. On the 100th try, it nails it.

“Well, after that,” Montgomery said, “it never missed.”

Airmen assigned to the 11th Intelligence Squadron review data prior to a full-motion video exploitation mission on Hurlburt Field, Fla., June 11, 2015. [Photo: USAF / Airman Kai White. Portions of this image were blurred for security or privacy concerns]

Silicon Valley enters the fray

The undisputed leaders in building the technology that could replace the human touch are, of course, the firms based in Silicon Valley. This is in no small part thanks to the tech industry’s ability to poach talent from the defense and intelligence world. Persistics, a widely touted automated-analysis program developed by the Lawrence Livermore National Laboratory, was shuttered in 2014 because too many of its team members left to take jobs at Google, YouTube, and Facebook, among other firms.

Unsurprisingly, then, much of what Silicon Valley has accomplished in the commercial sphere has direct applications in the shadowy world of intelligence. Take YouTube’s exquisitely effective video-recommendation sidebar, which can predict that a viewer who searches for a video of a Huey helicopter will probably also be interested in a video about a nuclear submarine or a clip from Apocalypse Now. The sidebar relies on a technique known as cluster analysis, which can similarly be used to predict that a man walking around aimlessly may be preparing an attack.
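To see why the same machinery transfers so readily, consider a rough sketch of cluster analysis applied to movement tracks. The features and data below are invented for illustration (a real system would normalize them and use far richer track descriptors): ordinary trips form clusters, and the hours-long walk that goes nowhere sticks out.

```python
# Cluster ordinary movement tracks, then flag the track that sits farthest
# from its cluster center as anomalous. Features and values are invented.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [duration (min), fraction of time moving, net displacement (m)]
tracks = np.array([
    [12, 0.90, 800], [15, 0.85, 950], [10, 0.95, 700],  # short errands
    [45, 0.70, 2000], [50, 0.75, 2200],                 # commutes
    [200, 0.98, 150],                    # hours of walking, going nowhere
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tracks)
dists = np.linalg.norm(tracks - km.cluster_centers_[km.labels_], axis=1)
print(int(np.argmax(dists)))  # 5: the aimless walker is the outlier
```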

The Pentagon is eager to draw those minds and technologies back to the fight, and Silicon Valley’s leadership has signaled that it could be a willing partner, though the industry’s rank-and-file sees the matter differently. When it was revealed, in March 2018, that the Pentagon had hired Google to participate in Project Maven, a media storm began to brew. In response, the company said the technology would be used only for “nonoffensive” activities. Even so, more than 3,000 Alphabet employees signed a petition urging the company to call off the partnership, which it did shortly thereafter.

Many had been shocked by the news of the collaboration. But this was not a first for the firm. As far back as 2013, Google signed an undisclosed cooperative research and development agreement with the Air Force Research Laboratory focused on data-processing technologies for, among other applications, aerial surveillance. As a result of the collaboration, Air Force engineers developed what one webpage posted by the Office of the Secretary of Defense describes as a “revolutionary” prototype for automated “pattern-of-life” analysis in wide-area-surveillance footage.


This was not a “nonoffensive” technology by any stretch of the imagination. Pattern-of-life analysis is the process of studying an individual’s daily activities in detail from above. It is an integral step leading up to any airstrike. When I brought the collaboration to the attention of an Air Force spokesperson, he declined to provide further details.

There are other signs that Google’s involvement in the defense world was more extensive than previously thought. In its 2019 budget request, the Special Operations Command noted that it needed $4.5 million to purchase a number of cloud computing services, including, it notes, TensorFlow, for a “big data analytics” program.

The Air Force spokesperson would not confirm or deny whether the AFRL-Google CRADA was the only case of Google participating in an Air Force project and noted that the service “will continue to partner with industry and academia pursuing new and emergent technologies to enhance our decision-making.”

Following the Project Maven controversy, Google’s leadership did appear to temper, at least temporarily, its earlier zeal for DOD dollars, withdrawing from the competition for a major Pentagon cloud computing program known as JEDI (for Joint Enterprise Defense Infrastructure) because it “couldn’t be assured that it would align with our AI Principles.”

But there are still plenty of others in Silicon Valley with fewer compunctions about doing business with the military. “If big tech companies are going to turn their back on the US Department of Defense, this country is going to be in trouble,” Amazon’s CEO Jeff Bezos said at an event the week after Google’s announcement. Bezos’s decision to stay in the running for JEDI likely pleased many at the Pentagon: the “everything store” company already provides all 17 intelligence agencies with a cloud computing system optimized for automated analytics.




A week and a half after Bezos’s speech, Brad Smith, Microsoft’s president, announced in a company blog post that Microsoft, too, would stay in the running for JEDI and other DOD technology ventures, including some, he acknowledged, that raised troubling questions: “We are not going to withdraw from the future.”

Looking around at the future that Silicon Valley has made for us—a future where you can order a book on a smartphone and then, with a flick of the finger, access thousands of hours of videos of helicopters and submarines that an artificially intelligent computer, somewhere, has decreed you’d want to watch—the tech world’s growing closeness with the defense and intelligence communities is a bit of a heart-stopping prospect. Headlines that would otherwise seem benign, even welcome—for instance, “Finally: An App That Can Identify the Animal You Saw on Your Hike,” or “Google Uses AI to Find Your Fine-Art Doppelganger”—take on a sinister new meaning. All of these marvels could be co-opted to make surveillance more automated and, as a result, more penetrating, inscrutable, and all-knowing. In one way or another, most of them will.


Arthur Holland Michel is the co-director of the Center for the Study of the Drone at Bard College and author of Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All, from which this story is adapted.
