Soon, You Can Build Apps On The Billion-Dollar AR Platform Behind Pokemon Go

Pokemon Go creator Niantic wants to follow in the footsteps of Amazon and Google to build a lucrative business on top of the platform that powered its own product.

Developers who watched with envy as Pokemon Go became the fastest app in history to earn $1 billion in revenue could soon build their own applications on top of the augmented reality platform behind the mega-hit game.

Today, Pokemon Go developer Niantic unveiled what it calls its Real World Platform, a system it plans to open to outside developers, offering many of the technologies and underlying systems that underpinned the mobile app that got millions of people outside trying to catch Pokemon.

At a press event in San Francisco yesterday, Niantic CEO John Hanke and a series of other executives and tech leads explained the thinking and the technology behind the new platform, and why they believe making it available to outsiders could permanently change the way the world at large experiences mobile and augmented reality apps.

There are widely varying estimates for the total future value of the AR market, but one research firm, BIS Research, believes it will be worth $198 billion by 2025.

Niantic has ample reason to swagger. According to Hanke, Pokemon Go users have collectively walked 20 billion kilometers–“the Earth to Pluto and back”–and tens or even hundreds of thousands of people are expected to attend organized gatherings for the game in places like Chicago, Germany, and Japan in the coming weeks and months.

Now, says Niantic co-founder Phil Keslin, the company wants to follow in the footsteps of companies like Amazon and Google, which built giant platform businesses on top of systems they initially used to power their own products.

At its core, the platform Niantic plans to make available to third-party developers features a hosted engine that can support a million simultaneous operations and 7 million simultaneous users, and that handles security, mapping, trading, and more; client-side support; a social layer for friends, giving, and chat; a data set of 7 million interesting places around the world created and curated by Niantic users; a live events system; analytics and CRM tools; geospatial systems; and AR data.

The platform will launch initially to a “handful” of developers later this year. It’s not clear if third parties would get access to the database of location information that Niantic’s users have generated over the last few years.

[Animation: courtesy of Niantic]

The three pillars

There are three philosophical elements, or pillars, to Niantic’s platform, says senior executive Michael Jones.

The first is what the company calls “modeling reality,” a way of mapping not just locations, but “meaning,” says Jones, things like “where kids can play…. You can’t just put [a digital] team on a highway. You have to know where it’s safe to play.”

The second is what Niantic calls “understanding reality,” which means thinking about whether there’s actually space in a room for an AR object or character, or a way for such an item to fit in naturally. As an example, he says, if you had an AR integration at a San Jose Sharks hockey game and wanted an AR shark swimming around, you’d want to do it in a realistic way, such as having it swim with its fin poking out of the carpet rather than floating in mid-air.

The third is “shared reality,” a way of making sure that AR objects and characters actually fit into a physical environment, so that you can move behind or in front of them the way you would with a physical person or object.

At the same time, the company says it has developed a way to make AR work on mobile devices with sub-10 millisecond latency–versus more than 100 milliseconds–which it says makes the experience seem truly real-time.
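
The difference is easy to picture in frame terms: on a typical 60Hz phone display each frame lasts about 16.7 milliseconds, so 100 milliseconds of lag leaves a virtual object trailing the camera by several frames, while sub-10-millisecond tracking lands within a single frame. A quick back-of-the-envelope check (these are general display figures, not numbers Niantic provided):

```python
# Back-of-the-envelope frame math (general display figures, not Niantic's):
# at 60 Hz, each frame lasts about 16.7 ms, so 100 ms of latency trails the
# camera by roughly six frames, while sub-10 ms stays within a single frame.
frame_ms = 1000 / 60
print(f"frame budget at 60 Hz: {frame_ms:.1f} ms")
print(f"100 ms latency ~ {100 / frame_ms:.1f} frames behind")
print(f"10 ms latency  ~ {10 / frame_ms:.1f} frames behind")
```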

Niantic’s platform was also designed with technology that allows it to understand the differences between all kinds of objects–moving or stationary–in a scene. According to chief scientist Hanson Zhang, the technology powering the platform can automatically differentiate between the various elements in a scene–people, cars, trees, and so on–and create boundaries around them in real time. That means those different objects can be acted on individually, making for much richer AR experiences.

As an example, he said it would be possible to release AR bees into a scene with lots of real-world flowers, or mix real and rubber ducks in a pond scene.
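
To make that concrete, here is a minimal sketch in Python of how a per-pixel label map like the one Zhang describes could drive such an effect: once every pixel is tagged as flower, tree, ground, and so on, an AR layer can restrict where virtual bees are allowed to land. The toy label map, class names, and spawn_points helper are illustrative assumptions, not Niantic’s actual API.

```python
# Illustrative sketch only: a hand-made semantic label map standing in for the
# output of a real segmentation model, used to place AR objects on one class.
import numpy as np

# Assumed class IDs for this toy example (not Niantic's taxonomy).
PERSON, CAR, TREE, FLOWER, GROUND = range(5)

# Fake label map for a 120x160 camera frame: mostly ground, with a flower bed
# in the lower middle and a tree on the right.
labels = np.full((120, 160), GROUND, dtype=np.uint8)
labels[70:110, 40:120] = FLOWER
labels[20:60, 130:155] = TREE

def spawn_points(label_map, target_class, count, seed=0):
    """Pick pixel coordinates inside one class region, e.g. so AR bees land
    only on real flowers rather than anywhere in the frame."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(label_map == target_class)
    if len(ys) == 0:
        return []  # nothing of that class is visible in this frame
    idx = rng.choice(len(ys), size=min(count, len(ys)), replace=False)
    return list(zip(ys[idx].tolist(), xs[idx].tolist()))

bee_positions = spawn_points(labels, FLOWER, count=5)
print(bee_positions)  # pixel coordinates where virtual bees could hover
```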

One of the limitations of current-generation AR, Niantic believes, is that the technology has a tough time understanding where digital objects are in relation to real-world objects. Thanks to technology from Matrix Mill, a UK startup Niantic has just acquired, its platform can now do a much better job of understanding depth and adjusting how AR objects fit among physical objects. That means if a Pikachu goes behind something, it should be occluded and disappear from view–something that’s very hard to achieve in real time today, but which Matrix Mill’s technology enables.
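
Conceptually, that kind of occlusion comes down to a per-pixel depth test: draw the virtual character only where it is closer to the camera than the estimated depth of the real scene. The sketch below illustrates the idea with hand-made NumPy depth maps; it is a simplified illustration of depth-tested compositing, not Matrix Mill’s actual method.

```python
# Simplified depth-tested compositing: the virtual character is kept only
# where its depth is smaller than the estimated depth of the real scene.
import numpy as np

H, W = 120, 160
scene_depth = np.full((H, W), 5.0)   # estimated real-world depth in meters
scene_depth[:, 80:] = 1.5            # a wall 1.5 m away covers the right half

# A virtual character occupying a square region, 2.0 m from the camera.
char_rgb = np.zeros((H, W, 3))
char_rgb[40:80, 60:100] = [1.0, 0.9, 0.0]
char_depth = np.full((H, W), np.inf)
char_depth[40:80, 60:100] = 2.0

camera_rgb = np.zeros((H, W, 3))     # stand-in for the live camera frame

# Depth test: where the wall (1.5 m) is closer than the character (2.0 m),
# the character fails the test and is hidden, i.e. occluded.
visible = char_depth < scene_depth
composite = np.where(visible[..., None], char_rgb, camera_rgb)
print("visible character pixels:", int(visible.sum()))
```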

Ultimately, says Niantic AR research lead Ross Finman, “You need to understand reality in order to augment it.”

About the author

Daniel Terdiman is a San Francisco-based technology journalist with nearly 20 years of experience. A veteran of CNET and VentureBeat, Daniel has also written for Wired, The New York Times, Time, and many other publications.
