The strange thing about Google Glass isn’t its lame design. Google has produced something that, however clumsily, genuinely attempts to alter the body’s sensory perception. But the product doesn’t fully realize that potential. Google Glass augments reality in much the same way your phone or tablet already does; that is, it does little to actually amplify your senses. Doing so would require moving beyond just another wearable technology, the latest in a long lineage of them, and pursuing a more “extreme” approach.
Eidos, a different kind of augmented reality (AR) device, claims to do just that. Developed by a team of students at the Royal College of Art in London, the product attempts to rethink what it means to fundamentally heighten human perception. According to co-creator Tim Bouckley, Eidos is a “challenge to the kind of AR epitomised by Google Glass.” That device and others like it simply layer screens over a person’s normal field of vision. Eidos, Bouckley says, “is a much more physical and visceral form of AR.”
Eidos differs from Google Glass in one fundamental way: The device lets users tune into specific perceptions, be they sounds or images, and scale their magnitude to the exclusion of rival stimuli. In visual terms, Bouckley and his co-designers, Millie Clive-Smith, Mi Eun Kim, and Yuta Sugawara, compare the Eidos effect to “long exposure photography [mapped onto] live experience.” Per the video demo, the result could resemble anything from Matrix bullet-time to the diving “strobe vision” from the 2012 Olympics. The audio equivalent would be a soundboard, where individual sounds, or channels, could be dialed up or down and possibly even muted to focus on the desired track, or, in Eidos’s case, speech.
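The “long exposure mapped onto live experience” idea can be sketched as a running blend of video frames, where recent motion leaves visible trails. This is only an illustrative stand-in, not Eidos’s actual pipeline; the `decay` parameter and function name are invented for the example.

```python
import numpy as np

def long_exposure(frames, decay=0.8):
    """Blend a stream of frames into a running 'long exposure'.

    Each new frame is mixed into an exponentially decaying
    accumulator, so moving objects smear into trails, roughly the
    effect described in the Eidos demo. `decay` (how much of the
    past is kept) is a made-up knob, not a parameter of the device.
    """
    acc = None
    for frame in frames:
        frame = frame.astype(np.float64)
        acc = frame if acc is None else decay * acc + (1 - decay) * frame
        yield acc.astype(np.uint8)
```

A higher `decay` keeps older frames around longer, stretching the trails, which is loosely the “magnitude” dial the designers describe.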
The two prototypes that the team presented in February at the Royal College of Art’s Work in Progress show were the result of months of intense work. The band of designers had 12 weeks to see their abstract and admittedly ambitious concept (“How do you add value to the human body?”) through, including development and engineering phases, rounds of user testing, numerous design iterations, and manufacturing. The pair of 3-D-printed prostheses that the students displayed at the exhibition “demonstrated real benefit and application potential,” Bouckley said.
The visor, which wraps around the eyes and temples, contains a camera that feeds live footage to an external processor, which applies special effects to the visuals in real time, teasing out patterns or movements from, say, live sports and entertainment events.
The second mask fastens over the user’s ears and mouth and is embedded with a multidirectional microphone to record sounds. The audio passes through a processor, which purges it of background chatter before relaying it to three audio channels fitted into the mask: two earpieces and a central mouthpiece for speech. The earpiece speakers target the outer ear, while two small transducers in the mouthpiece target the inner ear directly via bone conduction. According to Bouckley, the effect is akin to hearing someone talk inside your head, an unnerving yet effective way to filter out distractions in social environments, one the designers think could create better learning conditions for, say, children with ADHD.
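The “purging background chatter” step can be imagined as something like a crude noise gate: audio above an amplitude threshold passes through, and everything quieter is heavily attenuated. Eidos’s actual processing is unspecified, so this is purely a toy illustration, and `threshold` and `gain` are invented parameters.

```python
import numpy as np

def noise_gate(samples, threshold=0.1, gain=0.05):
    """Crude noise gate over normalized audio samples.

    Samples whose amplitude falls below `threshold` (assumed
    low-level background chatter) are scaled down by `gain`;
    louder samples, such as nearby speech, pass unchanged.
    """
    samples = np.asarray(samples, dtype=np.float64)
    quiet = np.abs(samples) < threshold
    out = samples.copy()
    out[quiet] *= gain
    return out
```

A real speech-isolation system would work in the frequency domain rather than on raw amplitudes, but the soundboard analogy, dialing rival channels down toward mute, is the same.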
At the moment, the students are focused on finishing up their terms, but they have plans to develop the project further. Bouckley says they will work toward making Eidos wireless, in addition to exploring the prototype’s bone-conduction technology and its possible applications in healthcare. As for their showdown with Google Glass, we’ll have to wait and see.