SideSwipe Lets You Control Your Phone With Gestures Instead Of Touch

SideSwipe is a 3-D gestural user interface that uses your phone’s cell signal to read your hand motions.

[Photo: courtesy of U of Washington]

Despite recent advances in gestural interfaces, we’re nowhere near the Minority Report-style future we were promised. Sure, technologies like Kinect and Leap Motion make for some impressive, sci-fi-seeming projects, but when was the last time you saw somebody waving their hands in front of a computer or smartphone in the wild?


That day may come sooner than you think.

SideSwipe is a clever new approach to 3-D gesture control from researchers at the University of Washington. It uses the device’s own wireless signal transmissions to detect nearby hand gestures, effectively turning the 3-D space around your phone into an interface. It even works when your phone is in your pocket.

“Today’s smartphones already include multiple antennas for spatial diversity and to support multiple wireless standards,” says Matt Reynolds, a UW computer science and engineering professor who helped lead the research. “We expect that the simple broadband receivers that we have developed could be integrated with existing antennas, and the detection of reflected power could be built-in to the phone’s chipset by the chipset manufacturer.”

Since SideSwipe doesn’t rely on processor-hogging resources like the phone’s camera or internal sensors, it lets the device effectively “listen” for its owner’s gestural commands at all times without sapping the battery. In doing so, SideSwipe removes the biggest obstacle phone manufacturers face when it comes to including persistent gestural control in mobile devices: preserving battery life. For most people, gee-whiz functionality like this just isn’t important enough to justify the power it would normally consume.

Admittedly, the use cases will be limited. Multitouch and voice will endure as the primary input methods for handheld devices, perhaps coupled with biometric sensors and, who knows, maybe even brainwaves. But in certain scenarios, the ability to activate or control a device without touching it could come in… handy.

Forgot to silence your phone at the movie theater? You can train it to understand that a double swipe above your pants pocket means to stop ringing. Need to skip a track on Spotify from a few feet away? No problem. Want to surreptitiously record a conversation? That’s pretty weird. But with this technology, it will be easier than ever.


In a blog post, the UW team behind the research describes how it works:

When a person makes a call or an app exchanges data with the Internet, a phone transmits radio signals on a 2G, 3G or 4G cellular network to communicate with a cellular base station. When a user’s hand moves through space near the phone, the user’s body reflects some of the transmitted signal back toward the phone.

The new system uses multiple small antennas to capture the changes in the reflected signal and classify the changes to detect the type of gesture performed.
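In spirit, that classification step maps per-antenna signal changes to a known gesture. The sketch below is purely illustrative and not the UW team's actual pipeline: it assumes we already have amplitude readings per antenna, reduces each recording to simple features (average change and peak-to-peak swing), and matches new recordings to the nearest trained gesture.

```python
# Hypothetical sketch of gesture classification from reflected-signal
# amplitude changes. The feature choices and the nearest-centroid matcher
# are illustrative assumptions, not the researchers' published method.
import math

def features(samples):
    """Reduce per-antenna amplitude time series to a flat feature vector:
    mean sample-to-sample change and peak-to-peak swing per antenna."""
    feats = []
    for series in samples:  # one list of amplitude readings per antenna
        deltas = [b - a for a, b in zip(series, series[1:])]
        feats.append(sum(deltas) / len(deltas))   # mean change
        feats.append(max(series) - min(series))   # peak-to-peak swing
    return feats

def train(labeled_examples):
    """Average the feature vectors for each gesture label (nearest-centroid)."""
    sums, counts = {}, {}
    for label, samples in labeled_examples:
        f = features(samples)
        if label not in sums:
            sums[label], counts[label] = [0.0] * len(f), 0
        sums[label] = [s + x for s, x in zip(sums[label], f)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(model, samples):
    """Return the trained gesture whose centroid is closest to this sample."""
    f = features(samples)
    def dist(centroid):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, centroid)))
    return min(model, key=lambda label: dist(model[label]))

# Toy usage: two antennas, two trained gestures.
model = train([
    ("swipe", [[1, 2, 3], [3, 2, 1]]),  # rising on one antenna, falling on the other
    ("hover", [[2, 2, 2], [2, 2, 2]]),  # steady reflections
])
print(classify(model, [[1, 2, 4], [4, 2, 1]]))  # → swipe
```

A real system would of course work on noisy broadband receiver output and many more gestures, but the core idea is the same: changes in the reflected signal become features, and features are matched to the closest known gesture.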

In the initial research, SideSwipe was successfully trained to respond to 14 different swiping, tapping, and hovering gestures, recognizing them correctly 87% of the time.

“The fact that SideSwipe does not produce an image is an important distinction from a privacy perspective,” says Reynolds. “SideSwipe is inherently privacy preserving, compared to an imaging sensor. Recent news describes plenty of examples of what can go wrong with smartphone photographs and videos!”

About the author

John Paul Titlow is a writer at Fast Company focused on music and technology, among other things.