Although there have been incremental advances in cochlear implants since they were invented by Dr. William F. House in the ’60s, it is only within the last few years that users have been able to hear voices as well as sounds, such as alarms. And the devices--which are surgically implanted and bypass the damaged part of the ear to directly stimulate the auditory nerve--have never been able to deliver music. But that's about to change.
Les Atlas, Ph.D., Jay Rubinstein, MD, Ph.D., and Kaibao Nie, Ph.D., at the University of Washington, Seattle, have created a new algorithm that is poised to change the way that more than 200,000 cochlear implant users around the world perceive music.
In a standard implant, an algorithm filters sounds into fixed low-, middle-, and high-frequency bands, and then information from each band is routed to electrodes that are connected directly to nerves in the portion of the inner ear that takes in sound--the cochlea. In the new, more dynamic algorithm, the filters are not fixed. Instead, the sound is analyzed in detail, with the filters adapting on the fly to even small changes in the signal and altering how that information is relayed to the cochlear nerves. The result is a much more robust perception of music.
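To make the fixed-band approach concrete, here is a minimal sketch of the standard strategy described above: split a snippet of audio into a few fixed frequency bands and compute the energy in each, which would drive one electrode apiece. This is an illustration of the general fixed-filterbank idea only, not the researchers' new adaptive algorithm; the band edges, sample rate, and function names are invented for the example, and a real processor would use efficient filter banks or FFTs rather than a naive DFT.

```python
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (illustrative only; real
    speech processors use efficient FFT-based filter banks)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def band_energies(samples, sample_rate, band_edges):
    """Sum spectral energy inside fixed frequency bands.

    band_edges is a list of (low_hz, high_hz) pairs. In a fixed-filterbank
    strategy, each band's energy would set the stimulation level of one
    electrode along the cochlea.
    """
    mags = dft_magnitudes(samples)
    bin_hz = sample_rate / len(samples)  # frequency width of one DFT bin
    energies = []
    for lo, hi in band_edges:
        e = sum(m * m for k, m in enumerate(mags) if lo <= k * bin_hz < hi)
        energies.append(e)
    return energies

# Example: a pure 1 kHz tone should land in the middle band.
rate = 8000
tone = [math.sin(2 * math.pi * 1000 * i / rate) for i in range(256)]
bands = [(0, 500), (500, 2000), (2000, 4000)]  # hypothetical band edges in Hz
levels = band_energies(tone, rate, bands)
```

The limitation the article describes falls out of this picture: because the band edges never move, fine pitch changes inside a band (the difference between two melody notes, say) produce nearly identical electrode levels. An adaptive scheme like the one developed at the University of Washington would instead track such changes in the signal and adjust how they are delivered.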
Typical cochlear implants allow users to hear lyrics and rhythm very well but not melody or timbre. “If you sing ‘Happy Birthday to You’ to someone who has a cochlear implant, they’ll have no difficulty understanding what you’re saying, but if you play a version that is devoid of lyrics or rhythm, they can’t tell the difference between that and ‘Twinkle, Twinkle Little Star,’” explains Rubinstein, a professor of otolaryngology and bioengineering at the Virginia Merrill Bloedel Hearing Research Center. “The other thing they can’t hear is timbre. So we have several instruments play the same five-note sequence and ask them to say what’s the guitar or what’s the piano. Someone who has normal hearing will do this test virtually perfectly, but someone who has a cochlear implant will score very poorly.”
But the new algorithm has changed that for the subjects in the study. For the first time, the researchers were able to greatly improve patients’ ability to distinguish musical instruments. The average implant user scores 45% on the timbre test, but the best-performing test subject in the experiment reached nearly 90% with the new algorithm.
Rubinstein and Atlas both separately became interested in the area of cochlear implants back in the '80s, and they started working on this system in 2007. "I started in the area of cochlear implant coding back when the technology did not work so well. Totally deaf people could hear sounds, but could not understand speech or hear music," explains Atlas, who is an electrical engineering professor. It’s a topic that’s especially dear to him--he is progressively losing his own hearing, as his father and grandfather did before him.
Together, he and Rubinstein ran through the scientific method at breakneck speed: determining if the new algorithm could actually carry more information to cochlear nerves; creating simulations of what music could possibly sound like; crafting software to run the algorithm; designing and then running tests on eight volunteers.
While the test subjects still didn’t hear music quite the same way a person with normal hearing does, they experienced a remarkable improvement in perception. (“Several of them during the testing said, ‘Gee, can I go home with this?’” Rubinstein says.) And that has implications that reach far beyond being able to hear an acoustic version of “Happy Birthday To You”--researchers widely contend that, due to its complexity, if you can hear music, you can hear anything.
That means the new algorithm should eventually help users overcome another major shortcoming of the current crop of cochlear implants: hearing in noisy situations. Today, an implant is nearly useless when there is background noise or multiple voices speaking. But the hope is that this system will solve that problem, and be a boon to patients who speak tonal languages, such as Mandarin Chinese, where a slight change in pitch--typically undetectable by implants--can alter a word’s meaning.
Next up is finding a way to replicate the complex algorithm on a processor small enough to fit into existing implants, which is probably a few years away. “In the same way you can download apps for your iPhone, one could imagine being able to download different software to your speech processor,” Rubinstein says.
Hopefully it will become as simple as downloading a song.
[Image: Flickr user Robert Couse-Baker]