Today, there’s a massive hole in the way we issue commands to our machines. While the traditional mouse and keyboard, and even newer technologies like voice recognition and multitouch screens, have served us admirably, these controls simply don’t map to the potential of wearable computing.
Imagine Oculus Rift controlled by a D-pad, or Google Glass driving us all to mutter to ourselves all day. Some technology will fill the need for a go-anywhere, 3-D interface, and Thalmic Labs hopes its Myo can be that solution.
The Myo is a band that you slide onto your forearm, where it can read the muscle-based electrical impulses driving the fine motor controls of your hand. The best analogy might be a Kinect that works from within. But rather than attempting to discern your gestures from their appearance, the Myo can actually feel your intent. And rather than feeling the intent of your legs or neck, Myo is anatomically positioned to home in on the tools we use most–our hands.
“Hands are the ultimate tool for input,” explains Thalmic’s Stephen Lake. “We could be measuring you typing on your keyboard, or even passive actions you may be doing in everyday life, like recognizing that you’re holding a coffee cup or opening a door.”
Setting aside the fact that Thalmic hasn’t yet shipped its 25,000 pre-ordered units–meaning the masses have yet to put the tech to the test–the Myo could be wearables’ missing-link technology, recognizing highly specific activities by combining core sensor data we already have, like GPS location (your home), with action recognition (waxing a car). Suddenly, cloud services gain contextual insight into your analog life, then send you the appropriate data to support it. Maybe Google Now pulls waxing tips to ensure you don’t damage the paint, or better still, maybe the Myo plus Now spot that you’re waxing your car incorrectly, then beam you a warning before you ruin the finish.
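The fusion idea above can be sketched in a few lines: pair a location reading with a recognized action and look up contextual help. This is purely illustrative–the function and labels below are hypothetical, not part of any actual Thalmic or Google API.

```python
def suggest_tip(location, action):
    """Map (where you are, what your hands are doing) to a contextual tip.

    Both arguments are assumed to be labels produced upstream: `location`
    from a phone's GPS geofencing, `action` from a hypothetical gesture
    classifier running on a Myo-style armband.
    """
    tips = {
        ("home", "waxing_car"): "Use light, circular strokes to avoid swirl marks.",
        ("home", "opening_door"): None,  # routine action: nothing to suggest
    }
    return tips.get((location, action))


# The phone reports you're at home; the armband recognizes a waxing motion.
print(suggest_tip("home", "waxing_car"))
```

The interesting part isn’t the lookup table, of course–it’s that the armband can supply the `action` label at all, turning previously invisible analog activity into a queryable signal.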
But for now, the Myo team is focused on redefining active inputs rather than interpreting passive activities. That’s a challenge akin to becoming the mouse for the next era–which, even assuming the tech works perfectly, is still a major gamble. The mouse became ubiquitous because it works, but it only works because it became ubiquitous.
Even in the best of circumstances, new control paradigms are a chicken-and-egg scenario. Lake admits that his technology only sings in use cases beyond the desktop–the things we haven’t seen or even imagined yet. But how do we create the new wearable hardware and UIs to support those gestures before people are using Myos?
That sounds really tough, right? Now let’s complicate it with one more step: How many dominant forms of interaction will we have in the future? Inherently few. Apps, OSes, and hardware all require a level of ubiquity to appease the mass market. Put differently, do you know what controller came in second place against the mouse? Me neither.
“Is it risky? Hell yeah, but we’re definitely shooting for the stars,” Lake admits. It also doesn’t hurt that the team has built APIs for PC and Mac and is finishing more for iOS and Android. So with any luck, their first 25,000 customers will also become their first 25,000 rabid developers.
[Hat tip: Wired]