After Pokémon Go became an international sensation, all the promise around augmented reality–that a digital layer atop the real world would suck us in even more than our touchscreen smartphones did years before–seemed to be validated. Even to this day, 65 million people play Pokémon Go each month, making it one of the most popular games on the planet.
But there’s a twist. A good friend and fanatical Pokémon Go player pointed out to me recently that he turned off the AR features of Pokémon Go a long time ago. So have all of the random players he meets during the game’s large-scale raids.
The reason is simple. Pokémon Go in AR relies on you making a difficult finger flick to hit a monster with a ball in ever-changing, tough-to-discern parts of your environment. Pokémon Go without AR gets rid of the environment and just sets up the monster at the same angle every time. That means a trick shot becomes a free throw, and that’s essential when virtual pokéballs cost so much, and rare pokémon can flee from your grasp.
In other words, the best novelty of Pokémon Go was just a novelty. When players get serious, they appreciate what we crave in any app: speed, predictability, and never wasting two steps on something that should take just one.
Enter Apple’s ARKit, a new framework inside iOS 11 that promises to bring AR to the masses as never before. Already, we’ve seen some spectacular sights. Draw in midair with your phone! Play with holographic Thomas trains in your living room! It feels like the future! But of course, so did Minority Report. And most of us still find our keyboards and trackpads more useful for writing emails than gesture-controlled interfaces.
Which is why it’s so encouraging to see some early bright spots in AR–apps that leverage the camera to remove the friction of traditional interfaces, to make things easier rather than simply more sparkly.
— Brad Dwyer (@braddwyer) September 19, 2017
Take Magic Sudoku, which is $1 on the App Store now. Aim your camera at any sudoku puzzle and it’s instantly solved, right on your screen, for those times you’re really stuck. You don’t need to take a photo and upload it to some server for the magic to happen. It just does.
Yes, this sudoku trick still has all the gee-whiz appeal of dazzling AR apps. But its instantaneousness makes it practical to use, too.
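Setting the computer-vision half aside, the solving half of a trick like Magic Sudoku is a textbook backtracking search. Here is a minimal Python sketch of that idea (the function names are mine, not anything from the actual app):

```python
def valid(board, r, c, d):
    """Check whether digit d can go at row r, column c:
    it must not repeat in the row, column, or 3x3 box."""
    if any(board[r][j] == d for j in range(9)):
        return False
    if any(board[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(board):
    """Solve a 9x9 sudoku in place; 0 marks an empty cell.
    Returns True once a valid filling is found."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for d in range(1, 10):
                    if valid(board, r, c, d):
                        board[r][c] = d
                        if solve(board):
                            return True
                        board[r][c] = 0  # backtrack and try the next digit
                return False  # no digit fits here: unwind
    return True  # no empty cells left
```

The hard part the app actually ships is everything before this function gets called: recognizing the grid and its digits from a live camera frame, fast enough to feel instant.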
Big companies are considering quick and practical AR as well. Google recently demoed all sorts of camera search functions that tease the promise of AR when it’s about more than just holograms laid over the real world. Instead of typing in your search query, like the name of a restaurant to check its reviews, Google Lens lets you aim your camera at that restaurant to learn everything about it, with the information displayed as a typical Google card. It’s a philosophy introduced by a surprisingly forward-looking app that Google acquired years ago called Word Lens, which translated foreign-language signs in your camera frame into your native tongue.
Got iOS 11 already? The camera automagically reads QR codes. Try it out with this one pic.twitter.com/uSq0yUnk8M
— Matt Webb (@genmon) September 21, 2017
Likewise, interface guru Matt Webb (aka @genmon) recently pointed out one of iOS 11’s least discussed tricks. If you aim your camera at a QR code, a notification appears on your screen, and if you tap the notification, you activate the contents lurking inside the image. This means, functionally, any QR-coded website or app link is now just a single tap away for the user. (Webb demonstrated the potential–and potential security risk–by creating a QR code that automatically loads Twitter and prewrites the tweet “i feel strangely compelled to announce that @genmon is awesome . . .”)
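The “prewritten tweet” trick almost certainly rides on Twitter’s standard web-intent URLs: the QR code just encodes a link that opens the compose box with text already filled in. A minimal Python sketch of building such a payload (the helper name is mine, and whether Webb’s code used exactly this form is an assumption):

```python
from urllib.parse import urlencode

def tweet_intent_url(text):
    """Build a Twitter web-intent URL; opening it pre-fills the
    compose box with `text` (the user still has to hit Tweet)."""
    return "https://twitter.com/intent/tweet?" + urlencode({"text": text})

# Encode this URL into any QR code generator, and iOS 11's camera
# will surface it as a one-tap notification.
url = tweet_intent_url("i feel strangely compelled to announce that @genmon is awesome")
```

Note that nothing posts without the user confirming, which is why Webb’s demo is a prank rather than an exploit; the security concern is how little the notification tells you about where the tap will land.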
With all these examples, AR isn’t just neat, it’s fast. Thanks to the fact that our machines recognize objects and faces better than ever–and even better than people–we can aim our camera at the world, and rely upon the magic of algorithms to figure out just what we want to know about it at any moment.
There’s still one very big catch to even the best of these AR examples taking off: they all require you to pull your phone out of your pocket, aim it at something, and deduce what you want to learn. In the case of a sudoku solver, you still need to load an app, which in turn loads your camera. The fact of the matter is, the phone is just not an ideal interface for AR that faces out into the world rather than back at your own face. The ergonomics of squinting through a four-inch screen make more sense for Snapchat’s selfies than anything else. No wonder the new iPhone X will have full 3D-mapping cameras on the front to scan your face.
Furthermore, the app ecosystem as it’s built today simply doesn’t scale well to AR. We won’t want to load a single app to solve sudoku puzzles and another app to pull up restaurant reviews, if the real user payoff is an instantaneous aim-to-know. We want to do all of these things simultaneously, and only when we’re interested in doing them. We don’t need to solve every commuter’s sudoku puzzle in our peripheral vision on the subway.
So in the short term, don’t be surprised if you find yourself growing weary of most AR apps. We just don’t have the patience to throw an endless wave of pokéballs at our problems all day. But the AR experiences that do stick around will be successful for the same reason we bought into PCs, and then apps, in the first place: They’re just so much faster than the old alternatives.