
CONNECTED WORLD

Inside Facebook’s race to build super-smart AR glasses you’ll actually want to wear

The project’s chief scientist speaks about the challenges of creating a computing system that integrates with your world—in the hope that you might eventually trust Facebook enough to wear it on your face.


[Screenshot: Facebook]

By Mark Sullivan | Long read

Imagine your smartphone morphing to wrap around your eyes. But instead of a dark black screen, now it’s see-through. Your apps can suddenly interact with the things you see in the real world in front of you.

Like many tech companies, Facebook believes that augmented reality (AR) glasses will eventually replace the smartphone as our primary personal computing device. It’s a tantalizing possibility for the social networking giant, which hopes to use AR to integrate its social apps and content with the world around us. With Facebook AR, the company hopes you’ll one day chat with a faraway friend’s avatar sitting across from you at your kitchen table, or see a stranger’s most recent Instagram post as they approach you on the street. 

Beyond providing a whole new mode of interaction between you and its social networking services, Facebook sees AR as a chance to control an entire hardware-software experience, something it hasn’t achieved before. In the hardware sense, it missed the mobile wave. When iPhones and Android phones came into being, Facebook was in no position to produce a smartphone of its own. Today, its apps are wildly popular but it must rely on hardware made by Apple, Google, and others to deliver them to users.

As with its Oculus VR line, Facebook sees AR glasses providing an opportunity for the company to own the whole system and access the financial and performance benefits that come with that. The company, now with more than 2 billion users worldwide and loads of advertising money, is throwing lots of that cash, and a growing number of people, at creating its own AR glasses and the virtual experiences they’ll eventually deliver.

But while Apple has been very tight-lipped about the development of its augmented reality glasses, Facebook has decided to be open about how it is building its version of this futuristic device. Company representatives spoke plainly about the development of Facebook’s AR glasses at its Connect conference last week. I talked even more candidly on the subject with Michael Abrash, the chief scientist behind the project and a veteran of the personal computing era. Abrash spent much of his career developing games—including the first-person shooter game Quake for id Software in the 1990s. Getting Facebook’s AR glasses off the ground, along with an ecosystem of social AR experiences for Facebook users, may be the biggest challenge of his career.

Abrash is quick to say that Facebook’s AR glasses are still years off. He talks about the breakthroughs that will be needed to build the displays for the glasses, the mapping system needed to create a common augmented world that people can share, novel new ways of controlling the device, deep artificial intelligence (AI) models that make sense of things the glasses see and hear, and tiny processors powerful enough to run it all. And all that must somehow fit into a pair of glasses svelte enough that people will wear them all day.

There’s also the matter of getting people to trust a company like Facebook to store and protect the privacy of the extremely personal data that AR glasses will be capable of collecting. Though Facebook has all the money, resources, and big-name talent it needs to overcome the technical hurdles and eventually bring a pair of AR glasses to market, it will have to beat Apple. If its AR glasses do compete head-to-head with Apple’s, Facebook may find itself at a disadvantage. Once people understand the data-capturing capabilities of AR glasses, they may decide that Apple’s privacy record makes it an easier company to trust.

Abrash spoke frankly about these problems, the long-term vision for the product, and how Facebook is going about realizing it. “I am very keenly aware of how far we’re going to go from where we are,” Abrash says.

[Screenshot: Facebook]

A new UX  

When full-fledged AR glasses do arrive, they’ll bring with them a very different graphical user interface than what we’re used to. The 3D user interface of AR glasses will seem larger and more immersive than the 2D screen and manual control paradigm used in everything from the first personal computer to the latest smartphone. With the displays of AR glasses so close to your eyes, it may seem like the user interface has been built around and within your entire visible world.

“All of a sudden, rather than being in a controlled environment with a controlled input, you’re in every environment . . . in your entire life,” Abrash says.

Because of that immersion, and the idea that the user interface of your personal technology will move around with you in the world, Facebook believes wearers of AR glasses will be communicating with their personal tech in very different ways than they do now. Abrash says there will be times when you use your hands for gesturing (detected by hand-tracking cameras on the glasses), and other times when you can make commands by voice (picked up by a microphone array in the device). 

“All of a sudden, rather than being in a controlled environment with a controlled input, you’re in every environment . . . in your entire life.”

Michael Abrash

But, Abrash points out, those ways of controlling the technology may be awkward in some social situations. For example, if you’re talking to someone on the street, you probably won’t want to make hand gestures or voice commands. The other person might think you’re crazy or, if they’re aware of the glasses, grow nervous that you’re pulling up information about them. You’ll need some more discreet way of controlling your glasses.

The eye-tracking cameras in the glasses might be more effective in such situations. You may be able to choose an item from a menu projected inside the glasses by resting your eyes on the thing you want. But even that might be noticed by a person standing in front of you.

Making small movements with your fingers might be even more discreet, Abrash says. He’s referring to a new input method called electromyography, or EMG, which uses the electrical signals the brain sends through the motor nerves to control the functions of a device.

That technology is being developed by CTRL-labs, a company Facebook acquired in 2019. CTRL-labs researchers have been testing the possibility of using a bracelet device to intercept the signals the brain sends down through the motor nerves at the wrist to control the movements of the fingers. They’ve had some success, as demonstrated in this video.

In theory, a person wearing such a bracelet could be taught to control aspects of the AR glasses’ user interface with certain finger movements. But the actual movements of the muscles in the fingers would be secondary: The bracelet would capture the electrical signals being sent from the brain before they even reach the fingers, and then translate those signals into inputs that the software could understand. 
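
To make the idea concrete, here is a minimal sketch, in Python, of how a wrist sensor’s signal might be turned into the one-bit “press and release” input described above. Everything in it—the class name, the thresholds, the normalized signal—is an illustrative assumption, not Facebook’s or CTRL-labs’ implementation.

```python
# Hypothetical sketch: turning a smoothed wrist-EMG reading into a
# press/release "neural click." Thresholds and names are assumptions.
from collections import deque
from typing import Optional


class NeuralClickDetector:
    def __init__(self, press_level: float = 0.6, release_level: float = 0.3, window: int = 20):
        self.press_level = press_level        # activation level that counts as an intended "press"
        self.release_level = release_level    # lower level that ends the press (hysteresis)
        self.samples = deque(maxlen=window)   # recent signal samples, averaged below
        self.pressed = False

    def update(self, emg_sample: float) -> Optional[str]:
        """Feed one normalized EMG sample (0.0-1.0); return 'press', 'release', or None."""
        self.samples.append(emg_sample)
        level = sum(self.samples) / len(self.samples)  # simple moving average
        if not self.pressed and level > self.press_level:
            self.pressed = True
            return "press"      # the equivalent of tapping a button
        if self.pressed and level < self.release_level:
            self.pressed = False
            return "release"    # the equivalent of letting the button go
        return None
```

The two-threshold (hysteresis) design is a standard way to keep a noisy signal from registering as a flurry of accidental clicks, which is the kind of reliability Abrash is pointing to.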

“EMG can be made highly reliable, like a mouse click or key press,” Abrash said during his speech at the Connect conference, which was held virtually this year. “EMG will provide just one or two bits of what I’ll call neural-click, the equivalent of tapping on a button or pressing and then releasing it, but it will quickly progress to richer controls.”

Abrash says the brain’s signals are stronger and easier to read at the wrist, and far less ambiguous than when read using sensors on the head. As the technology advances, the bracelet may be able to capture just the intent of the user to move their finger, and actual physical movement would be unnecessary. 

The technology isn’t exactly reading the user’s thoughts; it’s analyzing the electrical signals from the brain that are generated by their thoughts. But some may not make that distinction. When Abrash talked about EMG during his Connect keynote, several people watching the livestream noted in the comments section: “Facebook is reading my mind!”

A map of the world

One of the toughest problems of creating AR experiences—especially social ones—is mapping a common 3D graphical world to the real world so that everybody can see the same AR content. Niantic used such a map in its 2016 Pokémon Go AR game to let all the players see the same Pokémon in the same physical places during game play. 

Facebook’s map is called LiveMaps, and it will serve as a sort of superstructure on which all of its AR experiences will be built.

On a more technical level, you need a map so that the glasses don’t have to work so hard to orient themselves. “You really want to reconstruct the space around you and retain it in what we’re calling LiveMaps, because then your glasses don’t constantly have to reconstruct it, which is very power intensive,” Abrash says. He says the glasses can use the map as cached location data, then all the device has to do is look for changes to the map and update them with new data.
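
The caching idea can be illustrated with a short, purely hypothetical sketch: the glasses compare what they currently observe against a stored map and report only what changed, rather than rebuilding the scene from scratch. None of these names come from Facebook.

```python
# Illustrative sketch of map caching: compare current observations against a
# cached map and return only the differences. All names and structures here
# are hypothetical, not Facebook's LiveMaps API.
from typing import Dict, Tuple

Position = Tuple[float, float, float]  # x, y, z in the map's coordinate frame


class CachedMap:
    def __init__(self, objects: Dict[str, Position]):
        self.objects = dict(objects)  # object_id -> last known position

    def diff(self, observed: Dict[str, Position], tolerance: float = 0.05) -> Dict[str, str]:
        """Return per-object changes instead of re-uploading the whole scene."""
        changes = {}
        for obj_id, pos in observed.items():
            cached = self.objects.get(obj_id)
            if cached is None:
                changes[obj_id] = "added"
            elif max(abs(a - b) for a, b in zip(cached, pos)) > tolerance:
                changes[obj_id] = "moved"
        for obj_id in self.objects:
            if obj_id not in observed:
                changes[obj_id] = "missing"
        return changes

    def apply(self, observed: Dict[str, Position]) -> None:
        """Update the cache with the latest observations."""
        self.objects.update(observed)
```

Only the entries returned by `diff` would need to be processed or synced, which is the power saving Abrash is describing.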

Abrash says LiveMaps will organize its data in three main layers: location, index, and content. 

The location layer is a shared coordinate system of all the locations in the world where AR objects might be placed, or where virtual meetings with avatars might occur. This data allows for the placement of “persistent” 3D graphical objects, meaning objects that stay anchored to specific places in the physical world. For example, Google Maps is putting persistent digital direction pointers near streets and landmarks (viewed through a phone camera) to help people find their way.

But LiveMaps must go much further than public spaces. It must also map any private places where you might wear your AR glasses, including the rooms of your home—anywhere you might place virtual objects or hold virtual hangouts with the avatars of your friends.

[Screenshot: Facebook]
The index layer in LiveMaps captures the properties of the physical objects in a space, along with a lot of other metadata, including what an object is used for, how it interacts with other objects, what it’s made of, and how it moves. Knowing all this is critical for placing AR objects in ways that look natural, and obey the laws of physics. The same data allows you to put your friend’s avatar across the table from you in your apartment without having their (virtual) body cut in half by the (real) tabletop, Abrash says. The index layer is being updated constantly. For instance, if you were wearing your AR glasses when you came home, the index layer might capture where the cameras saw you put down your keys. 

The third layer, content, contains all the locations of the digital AR objects placed anywhere—public or private—in a user’s world. But, Abrash says, it’s really a lot more than that. This layer, he explains, stores the “relationships, histories, and predictions for the entities and events that matter personally to each of us, whether they’re anchored in the real world or not.” That means that this layer could capture anything from a virtual painting on your wall to a list of favorite restaurants to the details of an upcoming business trip. This layer also links to knowledge graphs that define the concepts of “painting,” “restaurant,” or “business trip,” Abrash says. “In short, it’s the set of concepts and categories, and their properties and the relations between them, that model your life to whatever extent you desire, and it can at any time surface the information that’s personally and contextually relevant to you.”
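
As a rough mental model, the three layers might be represented something like the following toy schema. It is my own assumption for illustration, not Facebook’s data model, and every field name is hypothetical.

```python
# A toy data model for the three LiveMaps layers Abrash describes:
# location, index, and content. Names and types are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class LocationAnchor:
    """Location layer: a shared coordinate where persistent AR content can be pinned."""
    anchor_id: str
    latitude: float
    longitude: float
    altitude_m: float


@dataclass
class IndexedObject:
    """Index layer: a physical object, its properties, and how it behaves."""
    anchor: LocationAnchor
    category: str        # e.g. "table" or "keys"
    material: str
    movable: bool
    last_seen: str       # timestamp of the most recent observation


@dataclass
class ContentEntry:
    """Content layer: virtual objects plus personal context, anchored to a place or not."""
    kind: str                                # e.g. "virtual_painting", "restaurant_list"
    anchor: Optional[LocationAnchor] = None  # None for things not tied to a location
    payload: Dict[str, str] = field(default_factory=dict)
    visibility: str = "private"              # "private", "friends", or "public"
```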

As you can see, LiveMaps is really more like a map of your life. And it’s being updated constantly based on where you go and what you do.

A well-informed assistant

The content layer in LiveMaps is built using all the information the cameras, sensors, and microphones on the AR glasses collect about you, your habits, and your relationships. That’s a mountain of data to give a company like Facebook, which doesn’t have a track record of protecting people’s privacy. But with the AR glasses, Facebook could offer a new and different service that’s built off all this data. It can all be fed into powerful AI models that can then make deep inferences about what information you might need or things you might want to do in various contexts. This makes possible a personal digital assistant that knows you, and what you might want, far better than any assistant you have now.

“It’s like . . . a friend sitting on your shoulder that can see your life from your own egocentric view and help you,” Abrash says. “[It] sees your life from your perspective, so all of a sudden it can know the things that you would know if you were trying to help yourself.”

For AR glasses, such an assistant may be vital. AR glasses are hands-free. You’ll be able to wear them while you’re doing other things, like working with your hands or talking to other people. You won’t have the time or attention needed to navigate through a lot of menus, or to type in or speak explicit instructions to the device.

That means AR glasses software will have to use what it knows about you, what you’re doing in the present, and the context in which you’re doing it to proactively display information you might need. Abrash gives the example of eye-tracking cameras detecting that your eyes have been resting on a certain kind of car for longer than a glance. Intuiting interest and intent, the glasses software might overlay information or graphics, such as data about its price or fuel economy. 

Or it might offer a short list of options representing educated guesses of information you might want, or actions you might want to take, at that moment. You could then make a quick selection with a glance of an eye through an eye tracker or, better yet, with a twitch of the finger picked up by an EMG bracelet. 
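
A minimal sketch of that dwell-time logic might look like this. The threshold value and object labels are assumptions for illustration, not anything Facebook has specified.

```python
# Hypothetical dwell detector: if gaze rests on the same recognized object for
# longer than a glance, treat it as a cue to surface contextual suggestions.
import time
from typing import Optional

DWELL_THRESHOLD_S = 1.5  # "longer than a glance" -- an assumed value


class DwellDetector:
    def __init__(self) -> None:
        self.current_object: Optional[str] = None
        self.started_at = 0.0

    def update(self, gazed_object: Optional[str]) -> Optional[str]:
        """Feed the label the gaze is resting on (or None); return a label when dwell is detected."""
        now = time.monotonic()
        if gazed_object != self.current_object:
            self.current_object, self.started_at = gazed_object, now
            return None
        if gazed_object is not None and now - self.started_at >= DWELL_THRESHOLD_S:
            return gazed_object  # caller could fetch price, fuel economy, or other options
        return None
```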


“EMG is really the ideal AR core input because it can be completely low friction,” Abrash says. “You have this thing on and all you have to do is wiggle a finger a millimeter.”

LiveMaps is really more like a map of your life.

Such an assistant would also know a lot about your habits, tastes, and choices, or even how you interact with people in certain social situations, which would further inform the details it proactively displays or the options it suggests. The assistant, Abrash says, might even ask you what you think of the questions it asks you or the choices it’s proposed. These questions, too, could be answered with a very quick input. 

“So you’re now in the loop where you can be training it with very low friction,” Abrash says. “And because the interaction is much more frequent, you and the assistant can both get better at working together, which really can happen.”

Abrash says that because the glasses are seeing, recording, and understanding what you see and do all day long, they can provide far more, and far better, data to an AI assistant than a smartphone that spends most of its time in your pocket. He says this is one aspect of AR glasses that puts them in a whole different paradigm from the smartphone.

That may offer powerful dividends in productivity and convenience, but the wealth of data about you and your life collected in LiveMaps is stored on a Facebook server somewhere. And Facebook is in the business of monetizing personal data. 

Facebook says the personal data it captures today helps it personalize your experience on the social network. But the real reason it collects that data is so that it knows enough about you to fit you into narrowly defined audiences that advertisers can target. The sensors, microphones, and cameras on AR glasses might give Facebook a far more refined look at who you are and what you might buy. It might know you want some product or service even before you know it.

That’s why I asked Andrew Bosworth, vice president and head of Facebook Reality Labs, whether Facebook intends to use the highly detailed and personal data collected by AR glasses to target ads, either on Facebook.com or within the glasses. He said the company has not started thinking about that issue yet.

Learning what it takes

Facebook still has a lot to learn about building the LiveMaps structure that will store this wealth of data. Abrash says it’s all about learning how to do the mapping and indexing of the world, and to “build semantics on top of that, to start to understand the objects around you.

“To be able to build the capabilities for LiveMaps,” he adds, “[we need] to understand exactly what it is that we need to retain, how hard it is, what kinds of changes there are in the world . . . and how we can do the synchronization and keep things up to date.”

That’s part of the reason why 100 or so Facebook employees will soon be wearing augmented reality research glasses at work, at home, and in public in the San Francisco Bay Area and in Seattle. The glasses have no displays and, Facebook says, are not prototypes of a future product. The employees participating in “Project Aria” will use the test glasses to capture video and audio from their own point of view, while sensors in the glasses collect data about where their eyes are looking.

[Photo: courtesy of Facebook]
“We’ve just got to get it out of the lab and get it into real-world conditions, in terms of [learning about] light, in terms of weather, and start seeing what that data looks like with the long-term goal of helping us inform [our product],” Bosworth says.

The data is also meant to help the engineers figure out how LiveMaps can enable the kind of AR experiences they want without requiring a ton of computing power in the AR device itself.

Facebook is creating LiveMaps from scratch. There is currently no data to inform or personalize any AR experiences. Eventually the data that populates the three layers of LiveMaps will be crowdsourced from users, and there will be tight controls on which parts of that data are public, which are private to the user, and which parts the user can choose to share with others (like friends and family). But for now, it’s up to Facebook to start generating a base set of data to work with.

“Initially there has to be [professional] data capture to bootstrap something like this,” Abrash says. “Because there’s a chicken-and-egg problem.” In other words, Facebook must start the process by collecting enough mapping and indexing data to create the initial experiences for users. Without those, there will be no users to begin contributing to the data that makes up LiveMaps.

“Once people are wearing these glasses, crowdsourcing has to be the primary way that this works,” Abrash says. “There is no other way to scale.”

Hard problems remain

When it comes to hardware, some fundamental problems still need to be worked out before a fashionable pair of AR glasses is even possible.

Abrash says the biggest challenge in the hardware might be building a display system that enables killer 3D graphics while weighing very little and requiring only a small amount of power. It has to have enough brightness and contrast to display graphics that can compete with the natural light coming in from the outside world, he notes. It also must have a wide enough field of view to cover most or all of the user’s field of vision, something that’s not yet been seen in existing AR headsets or glasses.

“You’re looking through this little square hole,” Abrash says of the AR headsets now on the market, most of which are used in enterprise applications. “It doesn’t give you any sense of being in a world that has virtual objects in it.”

The sheer number of components needed to make AR glasses function will be hard to squeeze into a design that you wouldn’t mind wearing around all day. This includes cameras to pinpoint your physical location, cameras to track the movement of your eyes to see what you’re looking at, displays large enough to overlay the full breadth of your field of view, processors to power the displays and the computer vision AI that identifies objects, and a small and efficient power supply. The processors involved can generate a lot of heat on the head, too, and right now there’s no cooling mechanism light and efficient enough to cool everything down.  

Everyone who is working on a real pair of AR glasses is trying to find ways of overcoming these challenges. One way may be to offload some of the processing and the power supply to an external device, like a smartphone or some other small, wearable, dedicated device (like Magic Leap’s “puck”). Facebook’s eventual AR glasses, which could be five years away, may ultimately consist of three pieces: the glasses themselves, an external device, and a bracelet for EMG.

Talking openly

There’s a good reason why Abrash and Bosworth are out speaking frankly about Facebook’s development of AR glasses. The company knows it must address data privacy questions early in the development process, and make sure the world knows it, if it’s to ever have a chance of selling tens of millions of the devices.

“It is critical that we set up this system so that there is very strong confidence that from the time photons hit sensors, [from] the time that data goes encrypted into the cloud, that privacy and security have been guaranteed,” Abrash says.

These concerns are warranted, because AR glasses have the potential to get into people’s personal space perhaps more than any consumer tech device we’ve seen (Google learned that the hard way with its Glass experiment in the 2010s). AR glasses can capture lots of data about the person wearing them, but also about the innocent bystanders observed and recorded by the device’s cameras and microphones. 

“You know, most products historically have primarily been concerned with the person using it, but that’s not a luxury that augmented reality is going to have,” Bosworth tells me. AR glasses require a new level of conscientiousness about security and privacy from the company producing them.

Bosworth describes a set of “product innovation principles” that Facebook is releasing alongside its Project Aria research project and data collection. One of those principles is giving the people who encounter a Facebook employee wearing the research glasses the chance to complain about being captured by the device’s cameras. Another says Facebook shouldn’t build anything that will “surprise” people, and that it will always default to serving the good of the community when deciding whether to include this feature or that.

That’s just what I’d want to hear from any tech company developing AR glasses, but Facebook isn’t just any tech company. For the better part of its history Facebook has failed to understand and address the harmful side effects of its technology and business model. It’s famous for launching new features, talking a lot about their wonderful benefits to users, and directing attention away from the real cost of those features to users’ personal data privacy.

More recently, Facebook has been slow to manage toxic content like hate and disinformation that’s run rampant on its platform. As its platform has become increasingly weaponized, the company has backed away from taking responsibility for any social consequences. Historically, Facebook has been largely unwilling to understand potential harms from its products before exposing the public to them. As Abrash and Bosworth have hinted, that “move fast and break things” approach may continue with its AR glasses.

“People will sometimes say, ‘Well, shouldn’t you figure out what’s good and bad here and then make sure the bad stuff never gets out?'” Abrash tells me. “I don’t feel like I should be making that decision for society. I feel like actually in a democratic society, you want it done through open, transparent discussion, legislation, regulation, and so on.

“I don’t feel like I should be making that decision for society.”

Michael Abrash

“As you can see, we’re being pretty open about what we’re doing,” he adds. “The whole point is I want us as a society to think about what we want here.” 

However, the question may be whether members of the public, let alone lawmakers, have a real chance to contribute to the privacy and acceptable use standards of Facebook’s AR glasses before the product is released. Even if Abrash has every intention of making sure that happens, there’s no guarantee that Facebook’s senior management and legions of ad-tech executives will play along. 

As Abrash says, no matter which AR glasses emerge in the next 10 years, it’s a paradigm-shifting technology that’s likely to follow a long evolutionary arc far into the future. Once we start seeing realistic and useful AR graphics and avatars convincingly placed within our visible world, it’ll be easier to see where that path could lead. One day, our everyday reality may be half real and half digital, as long as companies like Facebook build technology in a thoughtful, inclusive, privacy-friendly way that everyone feels comfortable embracing.

ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

