This Gadget Reads Sign Language And Then Translates It Into Speech To Give The Deaf A Voice

With the Uni, hearing people and deaf people can talk to each other like it’s no thing.

In the 1960s, when phone companies launched an early version of texting called TTY, deaf customers could finally communicate on landlines. As digital gadgets evolved, things got easier. But talking face-to-face with the outside world is still a challenge: If a hearing person doesn’t know American Sign Language (and most of us don’t), a deaf person might be forced to hire an expensive interpreter or resort to writing or typing notes.


A gadget called Uni is designed to stand in as an automatic translator. Built on Leap Motion technology, it attaches to a tablet and uses two cameras to watch someone sign. Software then translates the gestures into English, speaking them aloud in a Siri-like voice. When the hearing person responds, the system uses voice recognition to type their words on the tablet. The result is close to real-time communication.
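The two-way loop described above can be sketched in a few lines of code. This is a simplified illustration, not MotionSavvy's actual software: the tiny sign vocabulary, the function names, and the pass-through speech recognizer are all hypothetical stand-ins for the real classifier, dictionary, and voice-recognition components.

```python
# Toy sign vocabulary: gesture-classifier label -> English word.
# (Hypothetical; Uni's real dictionary covers thousands of signs.)
SIGN_VOCAB = {
    "HELLO": "hello",
    "WHAT": "what",
    "NAME": "name",
}

def signs_to_english(labels):
    """Translate a sequence of recognized sign labels into English words."""
    words = [SIGN_VOCAB[label] for label in labels if label in SIGN_VOCAB]
    return " ".join(words)

def speech_to_text(transcript):
    """Stand-in for voice recognition: the real device transcribes
    microphone audio; this sketch just passes text through."""
    return transcript

# Deaf user signs; the system would speak the translation aloud.
spoken = signs_to_english(["HELLO", "WHAT", "NAME"])
# Hearing user replies; the system types the words on the tablet.
typed = speech_to_text("Nice to meet you")
```

In the real device, the first function would sit behind the Leap Motion cameras and a trained gesture model, and the second behind a speech-recognition engine.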


“To many individuals who do not know what we are doing, it seems like magic,” says Alexandr Opalka, co-founder and CTO of MotionSavvy, the startup developing Uni. “After the initial exposure of trying out the Leap Motion technology, they express how amazed they are that something so small can be used to communicate.”

The founders, who are deaf themselves, think the device can help open up new careers for deaf people. Right now, only about 60% of working-age deaf Americans are employed, and many are relegated to jobs that don’t require much communication, like dishwashing. Few companies hire full-time interpreters for more complex roles.

The device may also eventually be more accurate than a human interpreter’s subjective translation, and it offers more privacy. “What is missing is the privacy factor that Uni provides, and the inability for deaf individuals to know that the interpreter is not misrepresenting what the deaf individual is saying in the right tone and words,” says Opalka.

It’s a big technical challenge, involving gesture recognition, machine learning, computer vision, language processing, and machine translation. “We are developing a truly challenging problem of connecting all these components together to run essentially what is minimal hardware–a tablet–without an Internet connection,” he says.
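One way to picture the challenge Opalka describes is as a chain of on-device stages, with no network calls anywhere. The sketch below is purely illustrative: the stage names and stubbed outputs are assumptions, but the structure reflects a real difficulty it mentions, namely that ASL gloss order (e.g., topic-first) differs from English word order, so translation needs more than word-for-word substitution.

```python
def recognize_gestures(frames):
    """Computer vision + gesture recognition: camera frames -> sign labels.
    Stubbed here with a fixed ASL-style gloss sequence."""
    return ["STORE", "ME", "GO"]

def parse_asl(labels):
    """Language processing: ASL gloss order -> structured meaning.
    (Toy parser: assumes the topic-first pattern TOPIC + SUBJECT + VERB.)"""
    topic, subject, verb = labels
    return {"subject": "I", "verb": "go", "object": "the store"}

def translate_to_english(meaning):
    """Machine translation: structured meaning -> English sentence."""
    return f"{meaning['subject']} {meaning['verb']} to {meaning['object']}."

def pipeline(frames):
    """All stages run locally, as on the tablet: no internet required."""
    return translate_to_english(parse_asl(recognize_gestures(frames)))
```

Each stage here is trivial, but connecting real versions of them, fast enough for conversation, on a tablet's hardware, is the hard engineering problem the team describes.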

Other startups are working on similar technology, like a wearable that translates wrist gestures as someone signs. But the tablet app has an advantage: wearables miss the facial expressions and body language that are a key part of ASL. “You can equate these facial expressions as adding context to the words being signed,” Opalka says. With its cameras and software, Uni can read expressions and catch movements beyond just the hands.
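Facial expressions in ASL carry grammar, not just emotion: raised eyebrows, for example, mark a yes/no question. A minimal sketch of how a recognizer might fold that context into its English output (the function and labels are hypothetical, not Uni's API):

```python
def apply_expression(sentence, expression):
    """Adjust an English translation using a detected facial expression.
    Raised eyebrows in ASL signal a yes/no question, so the statement
    becomes a question; other expressions leave the sentence unchanged."""
    if expression == "raised_eyebrows":
        return sentence.rstrip(".") + "?"
    return sentence
```

So the same hand signs, glossed as "You go store," would come out as "You go to the store." with a neutral face but "You go to the store?" with raised eyebrows, which is exactly the context a wrist-only wearable would miss.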


When it comes out in the summer of 2016, the device will recognize only about 2,000 words (a typical college graduate has a vocabulary of around 17,000). But the company is quickly adding more, and it has also begun building specialized vocabularies to solve specific problems in everyday life, like going to the bank. As part of the new class at Wells Fargo’s Startup Accelerator, the team is focusing on financial vocabulary.

“Financial services has a very specific vocabulary, quite different from everyday, and we are working with them through our accelerator program to build a vocabulary that would assist our internal team members and potentially customers and improve their interactions,” says Bipin Sahni, senior vice president and head of the innovation, research, and development team at the bank.

“It’s a great solution that creates a true two-way communications … free of waiting and typing.”


About the author

Adele Peters is a staff writer at Fast Company who focuses on solutions to some of the world's largest problems, from climate change to homelessness. Previously, she worked with GOOD, BioLite, and the Sustainable Products and Solutions program at UC Berkeley.