Google doesn’t know if Glass is a well-designed product. Sure, they put some of their best and brightest onto the task of creating it. But as Paul Dourish argues in his oldie but goodie treatise on interaction design “Where The Action Is,” meaningful interaction is created by users, not designers. And after a months-long beta test of Glass with a self-selecting group of geekorati consumers, Google is rumored to be doubting Glass’s near-term viability as a consumer product at all. Instead, Glass could find success as a niche “enterprise solution” for unsexy but meaningful problems in inventory management, financial monitoring, and medical practice.
It’s a smart move, given how much Google has invested already in Glass’s impressive engineering and industrial design. As a consumer product, Glass’s design seems easily mockable at best and downright sinister at worst. But change the context into which that system is deployed–from “anywhere”/“who knows what” to “a surgeon at work (and not at home)”–and suddenly Glass’s tone-deaf design details start to look a lot more harmonious.
Take Glass’s head-movement control shortcuts, for instance. Snapping your head back and forth in casual conversation looks ridiculous and (more importantly) is illegible as phatic communication. In other words, other people around you have no idea what your jerky head movements might or might not signify: Is he turning something on, or off? Is he going to record this conversation, take a picture, or is he just silencing an incoming notification? Or did a bug just land on his ear?
But confine Glass to a specific context as a specialized tool–like recording the details of a tricky surgical procedure to review later with colleagues–and that inscrutable, awkward interaction flips into meaningful practice. Brain surgery takes two hands, after all, so a quick head snap or two (as long as it actually works reliably) is an efficient way to activate the recording. More important, here the interaction acquires a situated social meaning, embodied in that specific practice and in the people participating in it. The nurse next to the surgeon doesn’t have to wonder what the double head snap means or doesn’t mean. In this operating room, with this team, doing this job, the interaction makes sense.
This context-sensitive sense-making (or “coupling,” in Dourish’s terms) is improvised and created by the user, not the designer–the designer’s job is to create opportunities for users to create this meaning for themselves. Google knows this–that’s what the Explorer program was all about, after all: putting Glass into the hands of potential users to see what they would or wouldn’t do with it. But Glass is just too strange, too far ahead of its time, to afford opportunities for much besides alienation, irritation, or confusion–at least when it’s shotgunned indiscriminately into mainstream culture as a consumer product. By marketing Glass as an enterprise solution, Google can narrow the scope of these deployments to contexts that are more understandable and standardizable, which will in turn make its design more meaningful to its potential users. After all, if you met a doctor for dinner and she were still wearing a surgical loupe on her eyeglasses, you’d think something weird was going on. But in the operating room, you wouldn’t bat an eye at it. So it can be with Google Glass.
Does that mean that Glass-style wearables will never migrate into the mainstream the way touch-screen phones have? Who knows. But if Google wants Glass’s product design to succeed right now, they might not have to change it much at all. Instead, they can just start by changing our expectations.
[Image: Flickr user Tedeyetan]