Maybe you would like to turn on the radio with a shimmy. Or shut off your desk lamp when you close your book at night. Neither of these is a particularly complicated interaction, yet each would involve developing all sorts of special hardware and software to make happen. So, for most of us, they're effectively impossible.
Unless you have an Objectifier. Developed by Bjørn Karmann as a graduate project at the Copenhagen Institute of Interaction Design, the Objectifier is a camera that plugs into any electronic device. Using a smartphone app, a user can perform a gesture to train a lamp, turntable, or tea kettle to turn on and off at will, all without understanding the complex machine learning algorithm behind the scenes.
“It gives an experience of training an artificial intelligence; a shift from a passive consumer to an active, playful director of domestic technology,” Karmann writes. “People will be able to experience new interactive ways to control objects, building a creative relationship with technology without any programming knowledge.”
The project, featured on Creative Applications, is less crazy than it sounds. In fact, there's quite a bit of precedent for this sort of domestic gesture UI across the industry. Before more or less giving up on the Kinect motion controller, Microsoft disclosed to me a project informally dubbed Home 2.0, which used the Xbox as the hub for a smart home of the future. In this system, gestures, not merely voice, could complete actions like turning on your lights. Similarly, Mark Rolston and Jared Ficklin's Room-E and Interactive Light concepts played with ideas like rotating your salt and pepper shakers to change the volume on a stereo. What did they find? It sounds weird but feels perfectly normal. Finally, the Nintendo Wii development kit, if you ever saw it in action, really didn't work so differently from the Objectifier. A programmer would create a command, then swing the Wiimote in the shape of a desired gesture, again and again, training the machine to recognize the specific movement.
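Neither Nintendo's nor Karmann's actual model is described here, but the train-by-repetition loop above can be sketched with a toy nearest-neighbor classifier: demonstrate a gesture a few times, store each example under a label, then match new input to the closest stored example. The class, labels, and feature representation below are all illustrative assumptions, not the Objectifier's real implementation:

```python
import math

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class GestureTrainer:
    """Toy train-by-example gesture recognizer (nearest neighbor)."""

    def __init__(self, reject_threshold=0.5):
        self.examples = []  # list of (feature_vector, label) pairs
        self.reject_threshold = reject_threshold

    def train(self, features, label):
        """Record one demonstration of a gesture, e.g. a 'shimmy'."""
        self.examples.append((list(features), label))

    def classify(self, features):
        """Return the label of the closest stored example,
        or None if nothing is close enough (unrecognized gesture)."""
        if not self.examples:
            return None
        best_label, best_dist = min(
            ((label, distance(features, ex)) for ex, label in self.examples),
            key=lambda pair: pair[1],
        )
        return best_label if best_dist <= self.reject_threshold else None

# Train with a few noisy repetitions, just as the Wii kit required:
trainer = GestureTrainer()
trainer.train([1.0, 0.9, 1.1, 1.0], "radio_on")
trainer.train([0.9, 1.0, 1.0, 1.1], "radio_on")
trainer.train([0.0, 0.1, 0.0, 0.1], "lamp_off")

print(trainer.classify([1.0, 1.0, 1.0, 1.0]))  # close to the radio examples
print(trainer.classify([5.0, 5.0, 5.0, 5.0]))  # nothing similar: rejected
```

The rejection threshold is the interesting design choice: too strict, and the device ignores your fourth shimmy; too loose, and closing your book turns on the turntable.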
What the Objectifier does differently, and well, is propose a feasible system of gesture personalization, and a user-friendly way of programming any dumb object around you. Granted, the Objectifier simply turns things on and off, which makes it, more or less, a very smart Clapper (though few smart home devices are capable of much more than that anyway).
So could you commercialize it? Potentially, but much like the Clapper, you would need dozens of units to scale to every plugged-in object in your home. And, as the incredibly earnest test video demonstrates, there's a learning curve before the computer gets things right, as with any machine learning platform. Put differently, I have about a three-shimmy limit before I'd start cursing at the Objectifier and just turn on my radio the old way.
[Images: via Bjørn Karmann]