How Do You Create Hybrid Interfaces You Touch, Talk To, and Poke?


It's challenging enough to design a single effective interface for a product or application. Now, driven by technological advances and rising consumer expectations, a growing number of products offer multiple methods of interaction, at least in certain contexts.
For example, Microsoft's Xbox Kinect allows gestural and voice interaction with the gaming system in addition to the standard physical controller. Similarly, with Ford's awkwardly named SYNC with MyFord Touch, a driver and passenger can control media, navigation, and related functions with steering-wheel buttons, a dashboard touch screen, or voice commands (you can see a full online demo here).

Providing a choice in how to interact with a product is not entirely new -- for years, computer users have relied on keyboard shortcuts rather than just the mouse for issuing commands -- but these alternatives tend to be exceptional or specialized. You can perform some commands another way, but you still need a primary input to access all of the functions. Soon, though, we should expect to see complete control through a choice of several user interfaces.

In fact, people naturally interact with the world via multiple modalities -- consider a rider "interacting" with a horse through a mixture of vocalizations and physical gestures. Now that our technologies are catching up to human (or at least animal) capabilities, product designers need to figure out the best way to choose and combine these interaction modalities.

I recently facilitated a DesignSlam* -- think Iron Chef for interface and product designers -- to explore the design challenges inherent in creating multiple interfaces for a product. The design brief was deceptively simple: create a next-generation interface for a remote-controlled toy -- in this case, a flyable helicopter with a built-in video camera, similar to the Air Hogs Hawk Eye. The current product relies on a handheld control with buttons and levers.

In the DesignSlam, teams of industrial and interface designers had to come up not only with a new concept for the physical control design but also with vocal, gestural, and touch-screen versions of the interface -- all within an hour (you can read more about the event here and here). The presented concepts were a mashup of low-tech/high-tech solutions that included a bicycle handlebar-inspired physical controller, a humming vocabulary for vocal control, and, of course, a variety of touch-screen apps.

But it was the process of getting to these ideas that was most interesting, as it revealed some of the limitations of our current, single-interface mindset. For example, the interface concepts tended to be relatively complex, reflecting a bias toward the relative ease of interacting with on-screen displays versus the physical exertion of gestural commands. Conversely, vocal interfaces often received less attention because they didn't require visualization to document. In other words, our strengths in designing one type of interface can be a weakness when designing another.

Ideally, we will design interfaces that can receive a combination of inputs based on the user's behavior and preferences, just as we do in our person-to-person interactions. Eventually, it might even be as natural as riding a horse.
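To make that idea concrete, here is a minimal sketch, in TypeScript, of how a hybrid controller for the helicopter might funnel touch, voice, gesture, and physical-controller input into a single command stream. Everything in it (the HybridController class, the 250 ms arbitration window, the action vocabulary) is an assumption for illustration, not part of any product described above.

```typescript
// A minimal sketch (not from the article) of a multimodal command
// dispatcher. All names here -- Modality, Command, HybridController --
// are hypothetical, chosen only to illustrate the idea of one command
// stream fed by several input methods.

type Modality = "touch" | "voice" | "gesture" | "physical";

interface Command {
  action: "ascend" | "descend" | "turnLeft" | "turnRight" | "hover";
  modality: Modality;  // which input method produced the command
  timestamp: number;   // used to arbitrate near-simultaneous inputs
}

type CommandHandler = (cmd: Command) => void;

class HybridController {
  private handlers: CommandHandler[] = [];
  private lastCommand?: Command;

  // Each modality-specific recognizer (touch screen, speech, gesture
  // camera, physical levers) normalizes its raw input into a Command
  // and calls dispatch() -- the product logic never sees raw events.
  dispatch(cmd: Command): void {
    // Naive arbitration rule: ignore a command from a different
    // modality that arrives within 250 ms of the previous command,
    // so two inputs can't fight over the helicopter.
    if (
      this.lastCommand &&
      this.lastCommand.modality !== cmd.modality &&
      cmd.timestamp - this.lastCommand.timestamp < 250
    ) {
      return; // a real design might weight modalities by user preference
    }
    this.lastCommand = cmd;
    this.handlers.forEach((h) => h(cmd));
  }

  onCommand(handler: CommandHandler): void {
    this.handlers.push(handler);
  }
}

// Usage: the flight logic subscribes once; every input device shares
// the same small vocabulary of actions.
const controller = new HybridController();
controller.onCommand((cmd) => console.log(`${cmd.modality} -> ${cmd.action}`));

controller.dispatch({ action: "ascend", modality: "voice", timestamp: Date.now() });
controller.dispatch({ action: "hover", modality: "touch", timestamp: Date.now() + 500 });
```

The design point is the shared command vocabulary: each modality translates its raw input into the same small set of actions, so adding or removing an input method never changes the product's core behavior.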
*Thanks to the Philadelphia chapter (PhillyCHI) of ACM SIGCHI and the Industrial Designers Society of America (IDSA) for making the DesignSlam happen.

DesignSlam images by Ilyssa Shapiro