Can artificial intelligence help visually impaired people recognize objects around them and improve their quality of life? That’s the promise of two new apps.
EyeSense, an iPad app developed in Egypt, can “learn” the objects in its environment through training by its users. A visually impaired person can point the device toward where they think something might be, say, a coffee cup, and a voice will announce the object if the app recognizes it.
“The key strength of the app is that it also recognizes basic facial expressions, like winks or smiles. This enhances human interaction,” says Joanna Marczak, a spokesperson for its developer, ID Labs.
To train the app, you place objects in front of the device’s camera at several angles and tell it what they are. Then you repeat the process with the objects removed, so the app learns the difference. Afterward, it can distinguish, say, your set of keys from another set of keys. “It’s like a newborn baby–it’s learning all the time as you show it objects,” Marczak says. The training would most likely be done by a family member or friend.
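EyeSense’s actual recognition model isn’t public, but the teach-by-example flow the article describes — show an object from several angles with a label, then match new views against what was taught — can be sketched as a toy nearest-neighbor classifier. Everything here (the `ObjectTeacher` class, the three-number “features” standing in for real image features) is hypothetical illustration, not the app’s implementation:

```python
# Toy sketch of "teach by showing": store labeled feature vectors for an
# object seen from several angles, then recognize new views by finding
# the closest stored example. Real apps would use learned image features;
# the 3-number vectors below are placeholders.
import math
from collections import defaultdict


class ObjectTeacher:
    def __init__(self):
        # label -> list of feature vectors (one per angle shown)
        self.examples = defaultdict(list)

    def teach(self, label, feature_vectors):
        """Store several views of one labeled object."""
        self.examples[label].extend(feature_vectors)

    def recognize(self, features, threshold=1.0):
        """Return the nearest label, or None if nothing is close enough
        (mimicking the second training pass, where the app learns what
        the scene looks like *without* the object)."""
        best_label, best_dist = None, float("inf")
        for label, vectors in self.examples.items():
            for v in vectors:
                d = math.dist(features, v)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label if best_dist <= threshold else None


teacher = ObjectTeacher()
# "My keys" shown from three angles, then a coffee cup from two:
teacher.teach("my keys", [[0.9, 0.1, 0.2], [0.8, 0.2, 0.2], [0.9, 0.2, 0.1]])
teacher.teach("coffee cup", [[0.1, 0.8, 0.7], [0.2, 0.9, 0.6]])

print(teacher.recognize([0.85, 0.15, 0.2]))  # a new view of the keys
print(teacher.recognize([5.0, 5.0, 5.0]))    # unfamiliar scene -> None
```

The distance threshold is what lets the sketch say “I don’t know” for scenes it was never taught, which is why the article’s training procedure includes showing the camera the scene with the objects taken away.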
Marczak says ID Labs is working with visually impaired support groups to improve the EyeSense app, which is free to download (versions for Android and other phones are due soon). It also works offline if necessary.