Facebook’s vision of glasses that read your thoughts isn’t just a dream

Facebook and UCSF are making progress on technology that turns brainwaves into speech—which could be a major part of the company’s AR aspirations.


Facebook is funding and assisting in a University of California San Francisco study aimed at giving people with serious brain injuries the chance to type words with only their minds. The research is showing real results and promise. Facebook hopes to use the underlying technology as a basis for a new brain-computer input mode for augmented-reality glasses that you wear all day.


The team at UCSF has been working to map the electrical pulses from specific neurons that form specific parts of words. It’s doing this by working with epilepsy patients who had already had electrodes temporarily implanted in their brains as part of their treatment. The UCSF team published a research paper today saying that it was able to decode a small set of full, spoken words and phrases from brain activity in real time. As the work continues, the UCSF team expects to be able to decode a wider set of words with lower error rates.

Meanwhile, back in Menlo Park, Facebook’s Reality Labs group is working on its own head-worn BCI device that detects the wearer’s linguistic thoughts and converts them into machine-readable text, and doesn’t require any brain implants.

While the Facebook researchers are exploring a number of different approaches, their main approach now is a system that detects small changes in oxygen levels in the brain using near-infrared light. It turns out that when specific neurons fire, they consume a bit of oxygen, leading to a pattern of oxygen-level shifts. In the future, Facebook may use other technologies, such as LiDAR, to detect the firing of the neurons.

If Facebook’s researchers can get their headset to accurately detect which neurons are firing, they may be able to use algorithms based on the UCSF research to map neurons to specific letters, and then to the words the user is thinking of.

The Facebook Reality Labs group is currently testing the device’s ability to detect simple words, and says it will have its first set of results before the end of this year.


The UCSF and Facebook research is astounding because it fills in some of the interim steps between what sounds like a sci-fi concept and something that really works. A working BCI device is still years off, Facebook points out, but the foundational research happening at UCSF is making meaningful progress.

A piece of the AR puzzle

Facebook’s BCI tech could be applied in many different forms and in many different settings. But the company’s blog post about the research focuses mainly on how the BCI technology could be used as the primary input mode for tomorrow’s AR glasses.

AR glasses superimpose digital imagery and text over the real world as seen through the lenses. Facebook is deep into virtual reality gear and content with its Oculus division, but the company sees lightweight AR glasses as the future.

From Facebook’s blog post:

The question of input for all-day wearable AR glasses remains to be solved, and BCI represents a compelling path towards a solution. A decade from now, the ability to type directly from our brains may be accepted as a given. Not long ago, it sounded like science fiction. Now, it feels within plausible reach.

Interacting with the head-worn computer will be difficult with a keyboard, the thinking goes, and clunky with swipes and taps on the glasses themselves. So the ability to just think text and commands may be ideal.


Facebook AR glasses make sense because they marry the company’s current R&D investment in computer vision AI with Mark Zuckerberg’s stated desire to control a major hardware platform. Zuckerberg once told our Harry McCracken that one of his biggest disappointments was missing out on owning a smartphone operating system. He may be determined to fight hard for control of AR glasses.

In this scenario, Facebook may see us wearing glasses in which Facebook could be kept open all day long, with elements of the company’s apps appearing over or at the edges of our view of the real world through the glasses. People we see approaching us would be quickly recognized by the software and immediately associated with their social information.

This is Facebook mediating our social intercourse, and our movement between and within the physical and digital worlds. We’d be living in Facebook all day long, the perfect social beings.

Obviously I’m painting a picture with some serious privacy implications. And Facebook seems very aware of the skepticism people may have, given its own poor privacy record:

While BCI as a compelling input mechanism for AR is clearly still a long way off, it’s never too early to start thinking through the important questions that will need to be answered before such a potentially powerful technology should make its way into commercial products. For example, how can we ensure the devices are safe and secure? . . . [H]ow do we help people manage their privacy and data in the way they want?

Facebook’s vision for the technology is cool, but the company may still have a lot of work to do to convince people that it’s the big tech company to bring it to them.

About the author

Fast Company Senior Writer Mark Sullivan covers emerging technology, politics, artificial intelligence, large tech companies, and misinformation. An award-winning San Francisco-based journalist, Sullivan's work has appeared in Wired, Al Jazeera, CNN, ABC News, CNET, and many others.