Sleep apnea and smartphone interfaces don’t immediately sound related, but as Shyam Gollakota, a computer science professor at the University of Washington, and his fellow researchers discovered, the technology used to track the former can mitigate some problems with the latter.
Last year, Gollakota, fellow professor Nathaniel Watson, and student Rajalakshmi Nandakumar developed ApneaApp, a program that uses smartphones to detect sleep apnea, a disorder that causes breathing interruptions. An estimated 18 million American adults have sleep apnea, but in-clinic tests are often labor-intensive, time-consuming, and expensive, requiring overnight sleep studies. The current crop of at-home tests isn’t terribly accurate, since the instruments and sensors they rely on can become detached during sleep.
By using old-school sonar—technology developed in the early 20th century that uses sound waves to detect an object’s location—Gollakota and his colleagues figured they could detect changes in breathing patterns without cumbersome diagnostic devices. Because smartphones are readily accessible and typically have both microphones and speakers, they’re well equipped for the job. In essence, the phone emits sound waves undetectable by the human ear and records their reflections off the subject’s body. An algorithm then translates the data into information about how the subject is breathing and about bodily motions that could signal apnea.
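The core idea can be sketched in a few lines: play a known, near-ultrasonic pulse, record the room, and count how many audio samples pass before the pulse’s reflection shows up. This is a minimal illustration of the sonar principle, not the researchers’ actual algorithm; the sample rate, pulse shape, and simulated echo below are all illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (illustrative)
SAMPLE_RATE = 48_000    # Hz, a common smartphone audio rate (illustrative)

def echo_distance(delay_samples: int) -> float:
    """Convert a round-trip echo delay, measured in audio samples,
    to a one-way distance in meters (halve the round trip)."""
    return SPEED_OF_SOUND * (delay_samples / SAMPLE_RATE) / 2

def find_echo_delay(emitted: np.ndarray, recorded: np.ndarray) -> int:
    """Find where the emitted pulse best matches the recording
    via a sliding cross-correlation."""
    return int(np.argmax(np.correlate(recorded, emitted, mode="valid")))

# Simulate a 5 ms chirp sweeping 18-20 kHz (inaudible to most adults)
# reflecting off a surface 0.5 m away.
t = np.arange(0, 0.005, 1 / SAMPLE_RATE)
pulse = np.sin(2 * np.pi * (18_000 + 200_000 * t) * t)  # linear chirp
true_delay = int(round(2 * 0.5 / SPEED_OF_SOUND * SAMPLE_RATE))
recording = np.zeros(true_delay + len(pulse))
recording[true_delay:] += 0.2 * pulse  # attenuated echo

print(echo_distance(find_echo_delay(pulse, recording)))  # one-way estimate, meters
```

A real implementation must contend with multiple overlapping echoes, speaker and microphone distortion, and background noise, but the delay-to-distance conversion is the heart of it.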
Meanwhile, the rise of smartwatches and the challenge of interacting with their ever-shrinking screens—an obvious problem that many engineers and developers have grappled with—sparked an aha moment. “The ApneaApp got us thinking: If we can track minute motions with the app, can we reuse the microphone and speakers that already exist within phones to enable interactions?” Gollakota says. “Just try to use your smartwatch with a limited screen area. Your finger gets in the way of seeing what’s displayed. Reading an email on your watch is a huge disaster. Devices are getting smaller, but let’s not just make the surface bigger and bigger [as a solution].”
The result is FingerIO, which is designed to work on Android phones with at least two microphones and Android smartwatches with at least one. It uses orthogonal frequency-division multiplexing (OFDM), a common wireless modulation technique, to produce sound waves. With an accuracy of 7 to 8 millimeters, it can track the position of a user’s finger within an area of about 1.3 square feet around the device. Right now, it tracks only two-dimensional movement. But in effect, the app turns the space around smartphones and smartwatches into interactive surfaces, allowing users to control the devices without physically touching the screens. For example, instead of scrolling through a tiny smartwatch face, users could gesture next to it so that their fingers don’t block the screen and limit what they see. This could be especially beneficial for users who lack fine motor skills and find small screens challenging to use.
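The published FingerIO design is more involved, but the gist of an OFDM sound pulse can be sketched as follows: place equal energy on a block of evenly spaced, mutually orthogonal subcarriers confined to a near-inaudible band, then take an inverse FFT to get the waveform the speaker plays. The sample rate, FFT size, and band edges here are illustrative assumptions, not the paper’s actual parameters.

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz (illustrative)
N_FFT = 512           # samples per OFDM symbol (illustrative)

def ofdm_pulse(low_hz: float = 18_000, high_hz: float = 20_000) -> np.ndarray:
    """Build one OFDM sound pulse: energize only the FFT bins that fall
    inside a near-inaudible band, then inverse-FFT to the time domain."""
    bin_hz = SAMPLE_RATE / N_FFT              # 93.75 Hz per subcarrier
    spectrum = np.zeros(N_FFT, dtype=complex)
    lo, hi = int(low_hz / bin_hz), int(high_hz / bin_hz)
    spectrum[lo:hi] = 1.0                     # equal power per subcarrier
    pulse = np.fft.ifft(spectrum).real        # real waveform for the speaker
    return pulse / np.max(np.abs(pulse))      # normalize to full scale

pulse = ofdm_pulse()
```

Because such a pulse spans a 2 kHz band rather than a single tone, its autocorrelation peak is narrow, so the receiver can pin down an echo’s arrival to within a sample or two; at 48 kHz, one sample of delay corresponds to roughly 7 millimeters of sound travel, the same ballpark as the accuracy quoted above.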
Gollakota—who worked with University of Washington professor Desney Tan and students Rajalakshmi Nandakumar and Vikram Iyer—presented a research paper detailing the application at the Association for Computing Machinery’s CHI 2016 conference last week, where it received an honorable mention award.
Other developers have experimented with using echoes to enable gestural interfaces, among them Google’s Advanced Technology and Projects group, whose Soli project uses radar to detect hand motion. Unlike sonar, which uses sound, radar performs echolocation with radio waves and requires an additional chip to produce them. Moreover, radar’s electromagnetic waves travel at the speed of light, roughly 900,000 times faster than sound waves, and they demand more power to emit and more processing bandwidth to read. FingerIO’s biggest benefit is that it works within the existing power and processing infrastructure of a mobile device.
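The “900,000 times” figure is a round number, and it checks out: taking the speed of sound in air at about 343 meters per second (it varies with temperature, so the comparison is inherently approximate), the ratio works out to roughly 874,000.

```python
SPEED_OF_LIGHT = 299_792_458  # m/s, exact by definition
SPEED_OF_SOUND = 343          # m/s, in dry air at about 20 degrees C

ratio = SPEED_OF_LIGHT / SPEED_OF_SOUND
print(f"Light is {ratio:,.0f} times faster than sound")
```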
Gollakota and his fellow researchers are now exploring how the technology could be adapted to track multiple fingers, with potential applications in augmented and virtual reality. The challenge with VR and AR is enabling natural motion without a controller. Companies like Leap Motion and Microsoft (via the Kinect) have explored ways to accomplish this with cameras that detect gestures, but the additional gear is pricey. By using what’s already inside smartphones, Gollakota and his colleagues sidestep the cost issue: users wouldn’t have to buy more hardware to interact more naturally with virtual environments. Gollakota cites Google’s inexpensive Cardboard VR headset as an example: “With Google Cardboard, you could track hand motion without the need for a camera.”