Both Android and iOS have accessibility modes to help people with varying degrees of visual and auditory disabilities. These include interface tweaks like higher contrast, larger type, audio cues, and text-to-speech, and such features have enabled millions of people to use these devices. But some disabilities still get overlooked when it comes to accessibility.
Now, artificial intelligence is helping designers close that gap by automatically creating and testing new interfaces based on the unique cognitive models of people with sensorimotor and cognitive impairments. For instance, the software might optimize a smartphone keyboard for people with essential tremor in their hands by adapting the size of each button or how often the spellchecker intervenes.
The machine then went on to optimize the typical smartphone keyboard based on those profiles, generating countless variations and measuring each against the simulated behavior of the three cognitive models. Because the variations can number in the millions, it’s impossible to test them all with real users. Instead, the software simulated how a person matching each profile would use every candidate interface. Essentially, the researchers created a virtual user with a specific disability and ran millions of tests on it to determine the best UI.
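To make the idea concrete, here is a minimal sketch of simulation-based interface optimization in Python. It is not the researchers’ model or code: the tremor magnitude, screen width, scoring weights, and the simple Gaussian tap-jitter model are all illustrative assumptions. The point is only to show the loop the article describes, namely generating interface variants, simulating a virtual user on each one, and keeping the variant that user handles best.

```python
import random

def simulate_tremor_typing(key_size_mm, tremor_sd_mm, n_taps=1000, seed=0):
    """Estimate the tap error rate for a virtual user whose finger position
    jitters around the intended key centre (hypothetical tremor model)."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(n_taps):
        # Offset of the actual touch point from the intended key centre.
        dx = rng.gauss(0, tremor_sd_mm)
        dy = rng.gauss(0, tremor_sd_mm)
        # A tap "misses" if it lands outside the key's bounding box.
        if abs(dx) > key_size_mm / 2 or abs(dy) > key_size_mm / 2:
            misses += 1
    return misses / n_taps

def score(key_size_mm, tremor_sd_mm, screen_width_mm=65.0):
    """Trade off accuracy against layout density: bigger keys mean fewer
    errors but fewer keys per row (weights are arbitrary assumptions)."""
    error_rate = simulate_tremor_typing(key_size_mm, tremor_sd_mm)
    keys_per_row = screen_width_mm / key_size_mm
    # Lower is better: errors are weighted heavily, cramped layouts less so.
    return error_rate * 10.0 + (10.0 - keys_per_row) ** 2 * 0.05

# Sweep candidate key sizes and keep the one the virtual user handles best.
tremor_sd = 2.5  # mm; an assumed tremor magnitude for this simulated user
candidates = [4.0 + 0.5 * i for i in range(13)]  # 4.0 mm to 10.0 mm
best = min(candidates, key=lambda s: score(s, tremor_sd))
print(f"best key size for this simulated user: {best:.1f} mm")
```

A real optimizer would explore a far richer design space (key placement, autocorrect behavior, timing) with a validated cognitive model rather than a single noise parameter, but the structure, which is simulate, score, and select, is the same.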
The researchers believe they could do the same thing for people with many other disabilities and for many other tasks, not just text entry on a keyboard. Does that mean UI designers are out of a job? Of course not, says coauthor and Aalto University postdoctoral researcher Jussi Jokinen: “This is of course just a prototype interface, and not intended for consumer market . . . . Designers pick up from here and with the help of our model and optimizer create individually targeted, polished interfaces.”