In 2001, John Hanke had an idea. What if you could create a searchable, digital version of the entire Earth–and it’d all be easily accessible through a web browser?
Fast-forward three years. His little mapping visualization startup Keyhole was acquired by a newly public Google for $35 million in stock options. “Other search engines thought of mapping as something you print out and take with you in your car–like Mapquest,” says Hanke at the American Geographical Society’s Geography 2050 conference in New York. “[Google] bought into this vision that we had of assembling all the world’s geographical information.”
Hanke went on to scale the idea into Google Earth and Maps, services that live inside millions of our phones today and have become a deeply integrated part of our modern understanding of the places we live and move through.
Now, Hanke is the CEO of Niantic Labs, which released the hugely popular augmented reality game Pokémon Go. Hanke spoke at Geography 2050 with Brian McClendon, who served as the VP of engineering at Keyhole when it was acquired by Google. McClendon led Google’s mapping department with Hanke before leaving Google to work on mapping at Uber.
According to them–two people who have redefined what it means to make a map–the next generation of maps is semantic. That’s a fancy way of saying that maps will have a complex, dynamic understanding of the world around them. Why is this important? Self-driving cars and augmented reality.
The Challenge With Autonomous Vehicles No One Talks About
Self-driving cars are going to need really, really good maps. But the kinds of maps we have today, like Google Maps, aren’t accurate enough. While autonomous vehicles will rely on their cameras and sensors to create a picture of the world, they also need to know how to make sense of that information. That’s where a semantic map comes in: a map that continues to learn about the physical world and, through huge amounts of data, refines its predictions of what objects are and how they will act. Without semantic maps, self-driving cars won’t ever be able to move intelligently through the world, at least not without crashing into something. Which means these maps will have to arrive whenever self-driving cars do (between 5 and 30 years from now, depending on whom you ask).
Hanke says that the problems they faced while building a digital version of the Earth are similar to the problems engineers face today in trying to build these kinds of semantic maps. Back in the early 2000s, the worry was having enough storage, and whether imagery of high enough quality could be rendered over the era’s internet bandwidth. “We used more than half the bandwidth of all of Google in the first six days and scared the company quite a bit,” McClendon recalled. At the time, the technology to do that was expensive. It’s a large part of the reason Hanke decided to sell Keyhole to Google: to use its storage and computing power.
Today, to build a map that dynamically reflects and understands the world, you need countless sensors recording it so you can constantly update the digital cartography–and so machine learning algorithms can look for patterns in all the data that’s generated. That requires capturing and storing lots of data–an estimated gigabyte per second for a self-driving car. Then, you need to have ubiquitous and powerful enough mobile computing to capture that data, make sense of it on the spot, and render it in a way that’s intelligible and useful. “You need that data to create the maps that are needed for AVs–that will make all the work we’ve done in mapping to date look small in comparison,” Hanke says.
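The storage arithmetic alone makes Hanke’s point. Here’s a rough sketch of what a gigabyte per second adds up to; the per-car rate is the article’s estimate, while the fleet size and daily drive time are purely illustrative assumptions:

```python
# Back-of-the-envelope sketch of self-driving-car data volumes.
# The ~1 GB/s figure is the article's estimate; fleet size and
# daily drive time are illustrative assumptions, not real numbers.

GB_PER_SECOND_PER_CAR = 1      # article's estimate
HOURS_DRIVEN_PER_DAY = 2       # assumption: average daily drive time
FLEET_SIZE = 1_000             # assumption: a modest test fleet

seconds_per_day = HOURS_DRIVEN_PER_DAY * 3600
gb_per_car_per_day = GB_PER_SECOND_PER_CAR * seconds_per_day
fleet_tb_per_day = gb_per_car_per_day * FLEET_SIZE / 1_000

print(f"Per car, per day: {gb_per_car_per_day:,} GB")
print(f"Fleet of {FLEET_SIZE:,} cars: {fleet_tb_per_day:,.0f} TB per day")
```

Even with those conservative assumptions, a small fleet generates thousands of terabytes a day, which is why “capturing and storing lots of data” is the bottleneck rather than an afterthought.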
These are the maps that self-driving cars will require. A car’s cameras and sensors can instantly pick up on objects around it, but it also needs to know where it’s going–and that system isn’t going to work unless there’s significantly higher fidelity in the maps these cars rely on. For instance, Google StreetView cars take an image every three feet or so, but three feet can mean being in the wrong lane or missing a stop sign entirely. A new GPS that can map down to the foot is coming to smartphones in 2018–but autonomous vehicles may need even higher resolution.
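To get a feel for why three feet matters, a quick sketch helps; the three-foot spacing comes from the article, while the twelve-foot lane width is an assumed typical US figure:

```python
# Rough sketch of why map sampling resolution matters for lane-keeping.
# The three-foot spacing is from the article; the 12-ft lane width is
# an assumption (a common US highway figure), used only for illustration.

SAMPLE_SPACING_FT = 3    # article: StreetView image every ~3 feet
LANE_WIDTH_FT = 12       # assumption: typical US highway lane

# A three-foot positional uncertainty is a quarter of a lane width --
# enough to blur which lane a car is actually in near a boundary.
error_fraction_of_lane = SAMPLE_SPACING_FT / LANE_WIDTH_FT

# Foot-level GPS (the kind coming to phones) shrinks that error 3x.
FOOT_LEVEL_ERROR_FT = 1
improvement = SAMPLE_SPACING_FT / FOOT_LEVEL_ERROR_FT

print(f"3-ft error = {error_fraction_of_lane:.0%} of a lane width")
print(f"Foot-level GPS is a {improvement:.0f}x improvement")
```

A quarter of a lane width is a big deal for a machine that has to pick a lane, which is why even foot-level GPS may not be resolution enough.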
Companies like TomTom, Here, and Carmera are already working on this problem. It will require machine learning algorithms, processing and storing vast amounts of data, and a network of sensors that can frequently update the map and power the AI models–a network that self-driving cars will readily supply. In essence, it’s the problem of creating a three-dimensional, live, super-accurate map of our constantly changing messy world. No biggie.
From Pokémon Go To The Perfect Map
Mapping developments are also intertwined with another futuristic technology: augmented reality. And after his time at Google, Hanke started Niantic, an augmented reality gaming company that brought us all the viral AR sensation Pokémon Go.
Hanke is interested in what he calls “human-scale mapping of the pedestrian world”–all the indoor, private spaces you can’t see on Google Earth right now. These are the maps that are going to be necessary to support a future of augmented reality, one that Hanke believes will be dominated by AR glasses.
“The vision many of us have about glasses that allow you to interact with the world seamlessly–for that to exist, you have to have that mapping data everywhere, in places that cars and vehicles don’t go,” Hanke says.
There’s a host of challenges standing in the way of AR glasses. The batteries need to be small enough and last long enough. The display technology needs to convince your eyes that a projection is there, even in sunlit areas. The hardware has to be light enough that it doesn’t hurt your nose. The glasses have to understand what they see and process it in real time so they can offer you relevant information–in essence, a semantic map. And on top of all that, they need an accurate enough map of the world to correctly draw objects so they line up with your vision and the world around you.
A high-fidelity map that knows what you’re looking at doesn’t solve all of those problems, but it certainly helps, and Hanke thinks it’s a necessary part of the future of augmented reality. This is where he and McClendon see high-fidelity semantic mapping impacting our lives in places that aren’t traditionally mapped: the great indoors. That includes private, intimate spaces like your house or office, as well as places like your local coffee shop or grocery store. If they aren’t mapped, AR glasses won’t work.
But the idea of mapping our indoor spaces is rife with problems. First up: Who owns the data? Perhaps you own the mapping data of your home, but what about in commercial or institutional spaces? As McClendon noted, that’s dangerously close to a surveillance state. He proposed that as a rule, only certain data is uploaded to the cloud, and most of the mapping data stays on your device. “Then it’s not personal pictures, it’s the geometry of the world around you,” McClendon says. “That reduces the privacy risk, certainly for glasses.”
It’s a realm of mapping that has never been tackled in a successful way. Perhaps people’s desire for privacy will override their desire for the convenience of digital devices–but we all know how that’s turned out so far.
The vision Hanke and McClendon paint might fill you with inspiration or dread. After all, the closer we get to the high-fidelity semantic maps they’re talking about, the closer we get to total surveillance. But ultimately, the push for self-driving cars and augmented reality relies on making that real-time, high-resolution, all-knowing, perfect map.