
This is the computer you’ll wear on your face in 10 years

Snap’s new Spectacles 3 camera glasses are an important step toward the augmented reality glasses that could one day replace the smartphone as our go-to computing device.


Snap’s new Spectacles 3 don’t look that different from their predecessors. They consist of a metal designer frame with a pair of HD cameras. In exchange for the embarrassment of wearing them, the Spectacles 3 offer the chance to shoot 3D video hands-free and then upload it to the Snapchat app, where it can be embellished with 3D effects. And that’s pretty much it. You can’t view the video, or anything else, in the lenses. There are no embedded displays.

Still, the new Spectacles foreshadow a device that many of us may wear as our primary personal computing device in about 10 years. Based on what I’ve learned by talking AR with technologists in companies big and small, here is what such a device might look like and do.


Unlike Snap’s new goggles, future glasses will overlay digital content on the real-world imagery we see through the lenses. We might even wear mixed reality (MR) glasses that can realistically intersperse digital content among the layers of the real world in front of us. The addition of a second camera on the front of the new Spectacles matters because locating digital imagery within reality requires a 3D view of the world: a depth map.

The Spectacles derive depth by combining the input of the two HD cameras on the front, much as human binocular vision does. The Spectacles use that depth mapping to shoot 3D video to be watched later, but the second camera is also a step toward supporting mixed reality experiences in real time.
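To make the idea concrete, here’s a minimal sketch of stereo depth estimation using OpenCV’s block matcher on a simulated two-camera view. The focal length, camera spacing, and image sizes are illustrative assumptions, not Snap’s actual specs.

```python
import numpy as np
import cv2

# Simulate a stereo pair: a textured scene, plus the same scene shifted
# horizontally, as a second camera a few centimeters away would see it.
left = cv2.blur((np.random.rand(240, 320) * 255).astype(np.uint8), (3, 3))
shift = 8  # pixels of horizontal parallax (closer objects shift more)
right = np.roll(left, -shift, axis=1)

# Block matching finds, for each pixel, how far it moved between the two views.
matcher = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Depth falls out of triangulation: depth = focal_length * baseline / disparity.
focal_px = 400.0    # illustrative focal length, in pixels
baseline_m = 0.10   # illustrative spacing between the two cameras, in meters
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]

print(f"median estimated depth: {np.median(depth_m[valid]):.2f} m")
```

The same math runs in reverse for mixed reality: once the glasses know how far away each surface is, they can draw digital objects in front of some surfaces and behind others.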

Future AR/MR glasses will look a little less conspicuous than the Spectacles. They’ll be lightweight and comfortable; the companies that make them will want users to wear them all day. They may look like regular plastic frames. Since they are a fashion accessory, they’ll come in many styles and color combinations.

The glasses will have at least two cameras on the front—perhaps not quite so obvious as the ones on the Spectacles. They may also have an additional, dedicated depth camera, something like the TrueDepth camera on newer iPhones. This camera will provide more accurate depth mapping throughout more layers of the real world.

Some AR glasses will allow for prescription lenses. Others might correct the wearer’s vision through image processing in the lenses, rather than by using physical materials to redirect light rays into the eyes.

The lenses will contain two small displays, one projecting imagery into each of the wearer’s eyes. The arms of the glasses will contain the processors, battery, and antennas for the wireless connection.

From tapping to talking—and beyond

We will control and navigate this kind of computer very differently from the way we use smartphones (mainly swiping, gesturing, typing, and tapping on a screen). The user might control the interface they see in front of them by speaking in natural language to the microphone array built into the glasses. The glasses may offer a virtual agent along the lines of Alexa or Siri. The user may also be able to navigate content by making hand gestures in front of the device’s front cameras. Cameras aimed at the user’s eyes will be able to track what content the user is viewing and selecting. For instance, text might auto-scroll as the user’s eyes reach the bottom of the page, and a deliberate blink might constitute a “click” on a button or link.
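As a rough illustration of the eye-tracking piece, here’s a hypothetical sketch of gaze-driven scrolling and blink-to-click. The sample format, thresholds, and callbacks are all invented for the example; no shipping eye tracker exposes exactly this API.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    y_fraction: float      # where on the page the user is looking: 0.0 (top) to 1.0 (bottom)
    eyes_closed_ms: float  # how long the eyes have currently been closed, in milliseconds

SCROLL_ZONE = 0.85      # start scrolling when the gaze nears the bottom of the view
BLINK_CLICK_MS = 300.0  # a deliberate blink, longer than an involuntary one (~100 ms)

def handle_gaze(sample, scroll, click):
    """Translate one eye-tracker sample into a UI action."""
    if sample.eyes_closed_ms >= BLINK_CLICK_MS:
        click()            # a long, deliberate blink acts as a click
    elif sample.y_fraction >= SCROLL_ZONE:
        scroll(1)          # gaze at the bottom advances the text by one line

# Exercise the handler with stub callbacks:
handle_gaze(GazeSample(y_fraction=0.9, eyes_closed_ms=0.0),
            scroll=lambda lines: print(f"scroll {lines} line(s)"),
            click=lambda: print("click"))
```

The hard design problem in practice is distinguishing intent from reflex, which is why the blink threshold above sits well beyond the duration of an involuntary blink.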


It may get weirder. Facebook is working with UCSF to develop brain-computer interface technology that could let users control the glasses with their minds.

If apps as we know them survive in an AR-first world, developers will strive to create new app experiences that exploit the unique aspects of the glasses: their emphasis on cameras and visual imagery, their mixture of real-world and digital content, their hands-free nature, and their use of computer vision AI to recognize and respond to objects or people seen by the cameras. A few examples (a rough sketch of the first follows the list):

  • Imagine seeing an acquaintance approaching you, then seeing her name and some of your contact history suddenly appear around her head.
  • When driving your car, you might see place labels and turn arrows appearing around your route.
  • A tour through a museum might be augmented with an audio narration and graphics about the works of art.
  • We might play games similar to Pokémon Go where we interact with gaming characters and objects placed or hidden within real-world landscapes or interior spaces.
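Here’s the promised rough sketch of that first example. Everything in it, from the Face type to the contact lookup, is hypothetical; a real system would pair a face recognizer with approximate nearest-neighbor search over the wearer’s contacts.

```python
from dataclasses import dataclass

@dataclass
class Face:
    x: int            # bounding box position of the detected face, in camera pixels
    y: int
    embedding: tuple  # feature vector a face recognizer would emit for this face

# A tiny stand-in for the wearer's contacts, keyed by face embedding.
CONTACTS = {
    (0.12, 0.73): ("Dana", "Last met: product launch, May"),
}

def identify(face):
    """Match a face against known contacts. A real recognizer would use
    approximate nearest-neighbor search, not an exact dictionary lookup."""
    return CONTACTS.get(face.embedding)

def annotate(faces):
    """For each recognized face in the frame, render a label anchored above it."""
    for face in faces:
        match = identify(face)
        if match:
            name, history = match
            print(f"label at ({face.x}, {face.y - 20}): {name}, {history}")

# One detected face in the current camera frame:
annotate([Face(x=120, y=80, embedding=(0.12, 0.73))])
```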

New device, new experience

The experience of viewing and managing content in a head-worn display will likely be so different from doing so on a phone that it will require a radically new user interface and operating system. The UX and OS will likely abandon the familiar “desktop” motif in favor of totally new ones that borrow from the real world in front of the wearer.

Right now, companies like Apple and Facebook are holding off on releasing AR glasses because of hardware limitations. Snap chose to start building head-mounted computers early, albeit with a very limited set of features. So far, the main thing it has learned is that people don’t want to wear cameras on their faces. But sales numbers aren’t everything.

“Snap is learning by shipping, and that is a key strategy for them as they build out their platform, and build it out specifically around AR,” wrote Creative Strategies analyst Ben Bajarin in a (paywalled) blog post yesterday.

“While Snap may ultimately not be in the hardware business long term, it is important they continue to build the third-party developer part of the Snap platform and prepare those developers and Snap’s developer tools for the future of head-mounted computers,” Bajarin added.

And the consumer tech industry may still need to walk a few more steps before it starts producing the mature AR glasses product described above. One of these interim steps may be head-worn displays (or “smart glasses”) that plug into smartphones and simply display some version of the smartphone’s UX in front of the user’s eyes. But the presentation of the content may remain smartphone-like. People might use such a product for text messaging, reading news clips, gaming, or watching video—all things that might be more enjoyable without the need to crane one’s neck downward at a smartphone screen.

After that, things will get more serious. As the components needed for true AR glasses—the processors, displays, and batteries—mature and get smaller and less expensive, you’ll see the Apples, Facebooks, and Samsungs begin putting AR or mixed-reality glasses out into the market.

In the long view, Google Glass and Snap’s Spectacles might end up being seen as early, not-so-well-received entrants into a nascent head-worn computing category. But those early products may help set the stage for the tech company that eventually brings all the pieces together, including a well-designed and easy-to-use product, a strong developer ecosystem, and a marketing push that clearly spells out the benefits to consumers. At that point, I’m guessing AR glasses will begin heading for the mainstream. Many of us will depend on the glasses every bit as much as we rely on our smartphones today.

About the author

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
