Print is everywhere. But rarely does a menu, magazine, or business card get translated into Braille. For situations in which Braille text isn’t available, MIT researchers are developing a hyper-intelligent reading ring for the visually impaired.
Reading pens, scanners, and various apps already serve the blind by scanning chunks of text at a time. But the FingerReader lets a visually impaired or blind person use a fingertip to guide progress along the page as it reads the words aloud in a computerized voice. The key for visually impaired readers is that the ring not only recognizes and reads the text, but also indicates the layout of the page with audio tones.
The idea to package the technology in a ring came about three years ago, when MIT Media Lab engineers in the Fluid Interfaces research group became interested in how fingers point.
“People use that very natural gesture, that very basic thing that we do, from a young age,” PhD student and FingerReader developer Roy Shilkrot says. “We thought, maybe we can put something on the finger that can augment that. Maybe it can help our other devices reference things around us.”
To extend that basic pointing motion into something more advanced, the Media Lab team first built the EyeRing, a device worn on the finger that could stream and interpret visual data by camera for the wearer. The researchers then created a version that could read sheet music. Over the last six months, they have focused on visually impaired users with the help of the Massachusetts-based Visually Impaired and Blind User Group, or VIBUG, a network dedicated to expanding computer use in the community.
Some apps and scanners already serve visually impaired users in a similar way to the FingerReader, but they have limitations. Apps like Text Detective or SayText will scan a whole section of text and read it out loud, but they don't allow visually impaired users to stop, start, or skip at will. Another advantage of the FingerReader is that the scanning doesn't get tripped up by page curl at the edges: wherever your finger goes, the camera goes, too. Eventually, the FingerReader might even be able to point at other types of written material from farther away, such as road signs, if the camera's resolution allows it.
Shilkrot and his colleagues are still working out some kinks. Finding the top of the page can be difficult, and the constant tonal cues at line breaks can get tiring. Touch-screen mobile devices also need their touch input disabled for the FingerReader to work. But the researchers plan to pursue more user studies while they're in academia, and once they're out, perhaps they'll commercialize the product.
“Making it something [users] can trust when using it will be the biggest challenge,” Shilkrot says. “Using it on a daily basis takes some training. We need to get that learning curve to be shorter.”