Technology has solved some problems for disabled people, but it’s also created plenty of problems that remain to be solved. At one of the year’s biggest events on human-computer interaction, we got a glimpse of how some researchers are putting wild new technology to work to increase access for all at the cutting edge of inclusive design.
At the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems last week, a host of researchers showed off experimental technology designed to make the digital world more accessible to people with disabilities, from blindness to deafness. It’s not a new phenomenon; CHI has featured papers on inclusive emerging technology for years. But this year’s work reinforces an emphasis on inclusive design that is becoming increasingly widespread.
Here’s the most exciting new research that reimagines inclusive ways for people to interact with computers.
Making Group Conversations Easier For Deaf People, Using HoloLens
Group conversations are an integral part of professional work and of just having fun with friends. But their many-speaker nature makes them difficult for people who have a hard time hearing.
A group of researchers from National Taiwan University, Texas A&M University, and the National Taiwan University of Science and Technology worked with eight individuals who are deaf or hard of hearing to create a new AR-based speech recognition system that places speech bubble animations over speakers’ heads. The user wears a Microsoft HoloLens AR headset, which superimposes the bubbles over a real-time conversation. A study with 12 people who are deaf or hard of hearing revealed that they preferred these kinds of speech bubble visualizations to more traditional captions.
Making Images, Interactives, And Maps Accessible To Vision-Impaired People
Blind internet users often rely on screen readers, which use captions embedded in websites to describe the images on a page. But often website authors won’t include captions at all, leaving blind users in the dark.
Researchers from Microsoft Research created a browser plug-in that connects to a typical screen reader. The plug-in uses reverse image search to crawl the internet for a particular image and find existing captions from other websites. Instead of relying on computer vision to describe images, Caption Crawler, as it’s named, takes advantage of what’s already on the internet.
The modern web isn’t just made up of images or text, though. Another prototype from Carnegie Mellon and the University of Washington helps readers navigate a slew of other common website features, like tables, maps, and interactive lists, using just a screen reader and a keyboard.
Helping Blind Developers Program
Most programming is done through visual user interfaces where coders can see their work as they go. But this can be a challenge for visually impaired coders and designers, who, according to a 2017 Stack Overflow survey, constitute about 1% of the field.
A group of researchers at Microsoft Research India, two of whom are visually impaired, tackled major issues, like sight-based navigation and debugging, that make it difficult for blind programmers to work quickly. They built a plug-in called CodeTalk for Microsoft’s integrated development environment, Visual Studio, to remedy these problems. It includes a code summary that makes it easier for developers using screen readers to get a quick sense of what the code on their screen is doing, and it replaces the typical red squiggles that indicate a bug with audio cues, including non-speech sounds.
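One way to picture the audio-first debugging idea, as a rough sketch rather than CodeTalk's actual design (the earcon file names and the `audio_cue` function below are invented for illustration): each compiler diagnostic maps to a short non-speech sound plus a message a screen reader can announce, instead of a purely visual red squiggle.

```python
# Invented sound files standing in for non-speech "earcons" that
# signal a problem's severity faster than spoken words can.
EARCONS = {
    "error": "low_buzz.wav",
    "warning": "soft_chime.wav",
}

def audio_cue(diagnostic: dict) -> tuple[str, str]:
    """Map one compiler diagnostic to (earcon file, spoken text).

    The earcon plays immediately; the spoken text is handed to the
    screen reader so the developer can hear the details on demand.
    """
    severity = diagnostic.get("severity", "error")
    earcon = EARCONS.get(severity, EARCONS["error"])
    spoken = f"{severity} on line {diagnostic['line']}: {diagnostic['message']}"
    return earcon, spoken
```

The design choice here mirrors the visual convention it replaces: the earcon is the glanceable signal (like a squiggle's color), while the spoken sentence carries the detail a sighted user would get by hovering.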
Designing A Haptic Cane For VR
Virtual reality is an exciting (though much hyped) technology, but right now it’s designed almost entirely for sighted users. For blind people to navigate a virtual world, they would need something like a cane. That’s the idea behind a Microsoft Research project called Canetroller. The researchers built a haptic device that people can use in the same way they’d use a cane while walking down the street, except that it primarily uses a braking mechanism to stop the cane when it “hits” a virtual object. The researchers found that Canetroller allowed the visually impaired people in their study to identify all the objects in a virtual room and successfully navigate through it. Why might blind people want to experience VR, which is a heavily visual medium? It could be used to help train people to use a real-life cane in a controlled environment before heading out into the world.
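The braking behavior can be sketched as a simple per-frame collision check. This is a toy model, not the researchers' implementation; the `Box` obstacle type and `brake_command` function are hypothetical. When the virtual cane tip sweeps inside a virtual obstacle, the brake engages so the physical cane stops where the object "is".

```python
from dataclasses import dataclass

@dataclass
class Box:
    """An axis-aligned virtual obstacle on the floor plane (meters)."""
    x_min: float
    x_max: float
    z_min: float
    z_max: float

    def contains(self, x: float, z: float) -> bool:
        return self.x_min <= x <= self.x_max and self.z_min <= z <= self.z_max

def brake_command(tip_x: float, tip_z: float, obstacles: list[Box]) -> bool:
    """Called every frame with the tracked cane-tip position.

    Returns True when the brake should engage: the tip has swept
    into some virtual obstacle, so the cane must stop as if it had
    struck a real object.
    """
    return any(box.contains(tip_x, tip_z) for box in obstacles)
```

A real device would also need to release the brake as the user pulls the cane back out of the obstacle, and pair the stop with vibration and sound to mimic the feel of a real cane strike.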