
How DARPA Deals With Its Overwhelming Stockpile Of Photos

It may seem chaotic, but this interface is designed to make life easier for the user.

You might be surprised to learn that the pain of managing a deluge of pet, baby, and food photos is shared by our most advanced arms of military intelligence. With access to satellite images, spy photos, and even the smartphone videos that ISIS shares on Twitter, analysts are working to cut through the visual noise to find real intel. Vehicles. Faces. Weaponry. Locations. So how can they help an agent sneaking into enemy territory neutralize the right target?


DARPA, in conjunction with the U.S. Army Research Laboratory, has developed an interface to solve the problem. It's part of what they call Visual Media Reasoning (VMR), software used by analysts to sort through tens of thousands of images at once. And while we don't have many details beyond this murky screenshot and a somewhat vague press release, what is interesting is that the system is structured around a single-layer hierarchy.

Visual Media Reasoning interface [Image: courtesy of U.S. Army Research Laboratory]

In other words, it simply tosses all of the media onto the screen at once, and users can pinch-to-zoom their way through it, in a method that lead researcher Dr. Jeff Hansberger compares to Google Maps, to get a better view. Images appear to be pre-sorted by attributes like location, but for the most part, the interface is set up for human analysts to do the heavy lifting and spot the most important shots themselves. When they spot something important, tapping or clicking on an image reveals all of the available information about it.
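To make the idea concrete, here's a minimal sketch of how a single-layer, zoomable collection like this might be modeled. This is not DARPA's actual code; every type and name below is hypothetical. The point is that grouping, zooming, and tapping are the only navigation primitives, with nothing to drill into.

```typescript
// Hypothetical sketch of a single-layer, zoomable image collection.
// All names are illustrative; this is not the VMR codebase.

interface ImageItem {
  id: string;
  thumbnailUrl: string;
  attributes: Record<string, string>; // e.g. { location: "unknown", source: "satellite" }
}

class FlatImageGrid {
  constructor(private items: ImageItem[]) {}

  // No folders or nested menus: every item sits in one plane,
  // loosely clustered by a chosen attribute (location, source, etc.).
  layout(groupBy: string): Map<string, ImageItem[]> {
    const clusters = new Map<string, ImageItem[]>();
    for (const item of this.items) {
      const key = item.attributes[groupBy] ?? "unknown";
      if (!clusters.has(key)) clusters.set(key, []);
      clusters.get(key)!.push(item);
    }
    return clusters;
  }

  // Zooming only changes how many thumbnails fit on screen,
  // not which "level" of a hierarchy the user is in.
  visibleCount(zoomLevel: number, screenCapacity = 10_000): number {
    return Math.max(1, Math.floor(screenCapacity / Math.pow(2, zoomLevel)));
  }

  // Tapping an image reveals all of its available metadata at once.
  inspect(id: string): ImageItem | undefined {
    return this.items.find((item) => item.id === id);
  }
}
```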

As Dr. Hansberger explains in the press release, he designed the interface to play to a human strength: visual processing. Rather than burying image categories under several layers of menus, forcing someone to read a list, weigh what's most relevant, and dig from directory to directory, VMR is built on the understanding that humans can process images without even firing up the higher-reasoning parts of the brain. So it feeds people a lot of images and lets their temporal lobe have at it.

[Image: Apple Photos]

Interestingly, we've seen the same design approach in consumer applications, too. Apple's Photos app feeds us images in year-long timelines that we can tap into to home in closer and closer on our targets, while the NYT Cooking app displays recipes in a zoomable, single-layer view.

Ultimately, it makes sense that both consumer and military apps would reach the same conclusion, as the human brain isn’t wired to parse cute photos of cats any differently than eerie photos of explosives. But it also implies that we may be reaching the upper limits of what’s actually possible with 2-D UIs when it comes to parsing massive amounts of media. Because when DARPA reaches the same conclusion as Apple, I’m not sure that there’s another watershed concept around the bend.

We have reached out to the U.S. Army Research Laboratory and will update this story with more media if we get it.


[via Engadget]


About the author

Mark Wilson is a senior writer at Fast Company. He started Philanthroper.com, a simple way to give back every day.
