Everyone Wants To Be The First To Ban Google Glass

Even though these early attempts to ban Glass are mostly knee-jerk reactions, they speak to a key flaw that spells trouble for the wearable computer if it’s ever going to be an integral part of our daily lives.

Organizations are warning Glass users to “keep out” before the device even hits the street. Fast Company is tracking a number of places Glass is likely to be banned, including movie theaters, public schools, and dressing rooms.


The New York Times also wrote about a number of different organizations considering action against the device, including most Las Vegas casinos. A Caesars Entertainment spokesman noted: “We will not allow people to wear Glass while gambling or attending our shows” in the same way the company’s casinos bar computers or other recording devices. The move is inevitable–if preemptive–because of the need to avoid tech-based cheating in gambling parlors.

(If Rain Man can get thrown out of a casino for having an extraordinary brain, then surely tech like augmented-reality card-counting apps for Glass would be barred too.)

Outside of casinos, most groups seem to be concerned with the privacy and safety of using Glass. West Virginia legislators are looking at making it illegal to use Glass while driving to prevent accidents, and public venues like bars are moving to disallow the use of Glass to prevent wearers from capturing embarrassing videos of drunk patrons.


There are arguments against banning Glass in cars and bars, of course, but the early trouble may be why Google has rolled out the product so cautiously thus far. If nothing else, it speaks to a point we’ve made often in this tracker: Wearable tech is only useful if it truly integrates itself into our lives and bodies rather than standing out like a sore thumb. So far, Glass isn’t passing that test.

This is a big story, so we’re writing it as news rolls in. Continue reading to learn the context around this story, or skip down the page to read previous updates.

Why We’re Tracking This Story

Wearable computing is finally edging into the mainstream, but we don’t necessarily like what we see. Sure, there’s nothing wrong with “quantified self” devices like FitBit, Nike FuelBand, and the Jawbone UP, and we’re just as curious as anyone about the reported smartwatch war brewing between Apple, Google, Microsoft, and Samsung.

But many of these gadgets look and feel predictable and kludgy. In the age of the “Internet of things” they’re still held captive by the confines of the rectangular screen. When it comes to wearables, it’s not enough to slap a wristband on iOS and call it a smartwatch. When the (hotly debated) Google Glass launches later this year, it will be a milestone product, separating people with a cyborg vision of the future from people who prefer their smart devices to have no UI at all.


Designing objects meant to be worn by actual humans will need a new approach: a re-imagining of conventional clothing-design principles, and the creation of products that are less intrusive and more responsive than what we’re used to today. Most of all, these principles need to yield devices that are elegant enough to appeal to more than just the geek squad.

What This Story Covers

With computing devices rapidly invading our closet, we’ll be following new developments in smart materials and wearable UI design. Mainstream adoption of wearable tech will require an unprecedented marriage of form and function–we’ll need products we can’t live without, wrapped in a package we can get comfortable living in. Simultaneously, we’ll examine how these wired devices introduce new behavioral norms, as well as pose new privacy concerns for both the wearer and those who come into contact with him or her.

Previous Updates

Google dreams that future Glass hardware won’t look so weird–in fact, it may look like a pair of normal sunglasses. Google’s thinking about next-gen Glass has gone so far, in fact, that it has filed for a U.S. patent: number 0130100362. Whereas the existing implementation is eccentric and striking, possibly even reminding you of the strange psychic headsets featured in the TV show Caprica, Google imagines that advances in screen-projection tech will reduce Glass to looking just like “normal” glasses.


An Engadget writer, for example, managed to get hold of an early developer model and recently picked it up. He notes that a choice of color was available, and that he’d hoped for something pastel…but the only colors on offer were grey, dark grey, and white:

I went with the latter, a particularly conspicuous hue that I may learn to regret. Indeed, I wasn’t more than a few steps out the door before the curious looks started and, on my first subway ride, I noticed a total stranger smiling at me. This is not a typical thing.

Hence Google’s patent, which describes a different image-projection system. It’s based on a see-through panel on the lenses of some traditionally shaped eyeglasses, the panel being an

“optically transmissive substrate; and a repeating pattern of diffraction elements disposed across a viewing region of the optically transmissive substrate and organized into a reflective diffraction grating that both bends and focuses the CGI light”


…with the computer-generated imagery light coming from a small projection unit mounted to the side frame.

Google’s plan is essentially about diffracting the light from the projector into the user’s eyes instead of refracting it through a prism.

The change is minimal, though it’s interesting to see Google imagining Glass elements on both eyes (which hints at 3-D AR implementations?). But it may turn Glass from a sideshow into something the average Joe would be comfortable wearing. Of course, this innovation may be years away from arriving in real tech, by which time we may have got used to Glass’s futuristic look–just the same way we’ve got used to people gabbling into their Bluetooth headsets today.


Wearable devices are likely to be always on, running in the background to wirelessly send and receive updates, as well as perform more power-sucking tasks like running apps or streaming video. To keep the wearer from having to juice up at an outlet every few hours, manufacturers are going to need to find some smart solutions for powering these devices.

A burgeoning industry in energy-harvesting fabrics and sensors seeks to capture the ambient energy all around us to make wearables work smarter, not harder. One expert is Professor Joanna Berzowska, chair of the Department of Design and Computation Arts at Concordia University in Canada, who has been developing interactive electronic fabrics that do exactly that by capturing and storing energy from the human body.

“Our goal is to create garments that can transform in complex and surprising ways–far beyond reversible jackets, or shirts that change color in response to heat. That’s why the project is called Karma Chameleon,” says Berzowska. The major innovation of this research project is the ability to embed these electronic or computer functions within the fiber itself: rather than being attached to the textile, the electronic components are woven into these new composite fibers. The fibers consist of multiple layers of polymers, which, when stretched and drawn out to a small diameter, begin to interact with each other. The fabric, produced in collaboration with the École Polytechnique’s Maksim Skorobogatiy, represents a significant advance in the development of “smart textiles.”

Unfortunately, it’ll be another 20 to 30 years before we’ll actually be able to manufacture clothing with these composite fibers. However, Berzowska’s prototypes allow designers to start envisioning how such clothing might look and behave, paving the way for T-shirts that double as mobile phone chargers or shape-shifting garments that react to a particular environmental setting or situation.


Several other research-stage energy-harvesting devices were recently displayed at the Printed Electronics Europe 2013 conference in Berlin. The trade show portion of the conference featured everything from vibration-powered sensors to photovoltaic fabrics.

Perpetuum’s Vibration Energy Harvester (VEH) is a wireless sensor that gets attached to rotating components, such as wheel bearings, on trains. Cleverly, the device both measures and is powered by mechanical vibration. It also measures temperature, and it wirelessly transmits the results to the train’s operator so they can immediately spot a failure in its early stages.

Another EU-funded project called Powerweave aims to create two kinds of fiber–one for harvesting solar energy and the other for storing it–that can be woven together into one self-contained system. This could theoretically be used to power soft sensors in clothing, but there are far more large-scale applications in store.

According to Christian Dalsgaard, founder of consortium member Ohmatex, the goal is to create a fabric that can generate 10W per square meter. Once that is achieved, he noted, there are “no limits to how big such a fabric can be made,” and a 100-square-meter piece of fabric would in theory be able to generate a kilowatt of power.
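Dalsgaard’s back-of-the-envelope claim is easy to verify; a quick sketch of the arithmetic (the function name is ours, the figures are from the article):

```python
def fabric_power_watts(watts_per_sq_meter: float, area_sq_meters: float) -> float:
    """Total power output of an energy-harvesting fabric panel."""
    return watts_per_sq_meter * area_sq_meters

# Ohmatex's target density: 10 W per square meter.
# A 100-square-meter sheet would then yield 1,000 W, i.e. one kilowatt.
total_watts = fabric_power_watts(10, 100)
print(total_watts)  # 1000
```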

Sounds promising. But once again, the research has a long way to go before this tech will be ready to hit the market in the form of a pair of jeans. Sure, watch manufacturers have been making use of kinetic and photovoltaic energy harvesting for years to power self-winding wrist watches, but the process gets considerably more complicated when you’ve got a Mac Mini in there. Looks like we won’t be seeing these same techniques put to use in our mobile phones or Nike FuelBands just yet, but this tech will be part of what makes or breaks the wearable computing trend in years to come.

The last thing you need is another screen in your life. One of the great promises of wearables is their potential to help us transcend the screen, which dominates the majority of our waking hours. Between TVs, computers, cell phones, and GPS devices, the average adult spends roughly 8.5 hours a day staring at a screen (and that’s according to 2009 data, so chances are that number’s even higher now).


With computing power embedded in everything from the fabric of your shirt to the shoes on your feet, responsive design and gestural commands can help us spend less time interfacing with our technology and more time with the people around us. When your clothes are wirelessly connected to the web, you can check the weather, tweet, and check in on Foursquare without ever having to whip out your phone. That’s ostensibly great for eliminating the dreaded phone check over dinner, but it also contributes to our overshared, “always on” culture by making that state even more passively ingrained than it already is.

It’s not enough to cobble together a video camera, a phone, and a push-to-Facebook feature and call it innovation. That Frankenstein approach to design is what Frog Design’s Jan Chipchase calls “a lazy futurist’s vision of what might be,” citing Google’s Glass as an example. It’s as if technological innovation is following the blueprints laid out in science fiction and videogames. We’re creating the gizmos we fetishized as children.

Take the trajectory of one product (displays becoming smaller/cheaper/more efficient over time) and integrate it with another (eyeglasses), sprinkle in connectivity and real-time access to content and big-data-analytics. Our expectations of what it could be are raised in part because this join-the-dots vision of the future fits neatly into Western un/popular young-male culture, from “The Terminator” through to Halo. Glass has a certain inevitability about it, like the weight of expectation on a child born to a great composer or, if you will, to a middle-aged suicide.

He goes on to point out that, while the hardware in Glass and products like it is actually not incredibly revolutionary–at least, not when compared with some of the crazy shit they’ve got going on in Japan–Glass’s real opportunity is in creating a meaningful user experience through software design.


As any visitor to Yodobashi camera over the past decade will tell you, the hardware technologies that make Glass hardly feel novel (and for recent competitors, see Sony, Golden-i, or this Telepathy device prototype) but neither do they need to be, because this is all about how they are brought together into a holistic experience.

It may very well be that Glass is a necessary step for us to take before we can even begin to introduce devices like Telepathy into the fray. Anything too radically innovative is likely to shock the system. There’s a reason most products are only incremental variations on versions that came before. If you give people something they’ve truly never seen or experienced, you then face the uphill battle of training them to use the damn thing and getting them comfortable doing so.

Is it wise to take pages from Apple’s book? Companies like Misfit Wearables are taking inspiration from Cupertino by defying every current convention in the wearable space. Their Shine, which is a circular personal activity tracker about the size of a quarter, is the first wearable that’s all metal, doesn’t need to be charged, and can be worn any number of ways to accommodate a wide variety of users, including women, who were largely overlooked by the more male-centric designs of UP, FuelBand, and FitBit. Simplicity is the priority–and an admirable one.

The team also developed what they’re calling a “glanceable UI” that’s designed for the more casual interactions users have with wearable devices, as opposed to a laptop or cell phone screen. Here’s what they say about creating a wearable UI:


A glanceable UI is about creating a second’s worth of meaning out of important and impactful data. Whether that’s a moment to convey how well you’re doing toward your daily fitness goal or a single blinking light to encourage more movement. As Om wrote recently, as data becomes the world’s currency, data without emotion, empathy or narrative is meaningless. Wearable gadgets can track as much data as they want, but if the user isn’t exposed to the data in a way that impacts their lives, and in a time frame that they can work with, then the device has failed.

While their instincts are correct–materials do make a difference, and designing UIs around the unique social context of wearables, which we interact with differently than we do with our phones, is equally important–it’s also easy to be overly reductionist and simplify things to the point of trivialization. Exactly how much “emotion, empathy, or narrative” can one glean from a blinking light?

Software and interaction design for these devices calls for a new paradigm of interaction that is more gestural, organic, and less intrusive than what we’ve become used to on screens. Fjord’s Andy Goodman and Marco Righetto imagine a new language of interaction altogether, composed of micro gestures and expressions.

The new language will be ultra subtle and totally intuitive, building not on crude body movements but on subtle expressions and micro-gestures. This is akin to the computer mouse and the screen. The Mac interface would never have worked if you needed to move the mouse the same distance as it moved on the screen. It would have been annoying and deeply unergonomic. This is the same for the gestural interface.

Why swipe your arm when you can just rub your fingers together? What could be more natural than staring at something to select it, nodding to approve something? This is the world that will be possible when we have hundreds of tiny sensors mapping every movement, outside and within our bodies. For privacy, you’ll be able to use imperceptible movements, or even hidden ones such as flicking your tongue across your teeth.

The switch to a gesture-based language will involve both appropriating existing gestural behavior and introducing new micro-gestures that will become indoctrinated into the cultural language of wearables, just like the swipe, the double tap, and the pinch to zoom were for smartphones.


Wearables present all kinds of new challenges for privacy–both the privacy of the wearer, whose every movement, breath, and heartbeat is being tracked, as well as those in the wearer’s vicinity. The majority of the discussion is presently centered on Google’s Glass, since the embedded 5MP camera poses questions of unwanted surveillance.

As a product that is both on-your-face and in-your-face, Glass is set to become a lightning rod for a wider discussion around what constitutes acceptable behavior in public and private spaces. The Glass debate has already started, but these are early days; each new iteration of hardware and functionality will trigger fresh convulsions. In the short term, Glass will trigger anger, name-calling, ridicule and the occasional bucket of thrown water (whether it’s ice water, I don’t know). In the medium term, as societal interaction with the product broadens, signs will appear in public spaces guiding mis/use and lawsuits will fly, while over the longer term, legislation will create boundaries that reflect some form of im/balance between individual, corporate and societal wants, needs and concerns.

But equally important as what the Google Glass wearer sitting across the table from you is doing with his or her camera right now is the question of what Google is doing with that information–or any other corporate entity. For as much as we have to fear from our fellow peers violating our privacy, we have even more cause for concern when it comes to the companies who make these products and own our user data misusing that information for financial gain. Just look at Facebook.

Glass app developers are limited by the device and by Google, which is being perhaps uncharacteristically wary in giving developers free rein. Though Glass runs a full Android OS, the first Glass developers in the “explorer” program have now found out that Google’s API only allows interaction with the wearable goggles through the cloud, and the only apps that can be built for Glass are web apps.


What Google seems to be doing here is strictly controlling how much processing happens aboard Glass’ own electronics in order to deliver a day’s useful battery life. The company says that Glass can work for a whole day as long as you don’t record too much video. This seems to be the key: if video eats up so much power, Glass’ battery must be relatively small (as befits the need for it to be comfortably portable). Google is sacrificing utility for a convincing user experience: Glass is clearly meant to be donned and used for an extended period of time, and users would quickly lose interest if they had to recharge it halfway through a typical day.

The limitations of the HTML and CSS services Google wants Glass developers to use also limit the kind of apps that seem possible: TechCrunch notes that “real” augmented reality apps probably aren’t possible, nor is it easy to “stream audio or video from the device to your own services (though you can obviously use Hangouts on Glass).”
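To make the cloud-only model concrete: a Glass web app never runs code on the device itself. Instead, a developer’s server pushes content by POSTing “timeline items” to Google’s Mirror API, which Glass then syncs. A minimal sketch of assembling such a request (the endpoint and the `text`/`speakableText` fields follow the Mirror API as documented at the time; the token and message are placeholders, and the request is built but not sent):

```python
import json

# REST endpoint for inserting timeline items via the Mirror API (server-side only).
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_card(text, speakable_text=None):
    """Build the JSON body for a simple text card on the wearer's timeline."""
    item = {"text": text}
    if speakable_text:
        # Read aloud when the wearer picks the "read aloud" menu action.
        item["speakableText"] = speakable_text
    return item

def build_insert_request(card, access_token):
    """Assemble (headers, body) for the POST; actually sending requires OAuth."""
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    return headers, json.dumps(card)

headers, body = build_insert_request(
    build_timeline_card("Hello from the cloud"), "PLACEHOLDER_TOKEN"
)
```

Note the asymmetry this enforces: all app logic lives server-side, and the headset only renders the cards it syncs down, which is exactly the constraint TechCrunch is describing.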

Part of the limitation of the web apps that the first Glass developers create will be a failure of imagination. When the iPad first arrived, many of the first apps weren’t exactly demonstrative of the revolution in mobile computing the iPad represented. First-gen Glass apps will probably suffer the same fate, and users will be even more at sea because Glass is a wholly new type of device:

Will Glass be judged as good based on its ability to entertain? Its power to keep our smartphones in our pockets? Its ability to deliver real-time information when we need it most? It could easily be all of the above. One thing’s for sure: trying to evaluate what is and isn’t a “good” Glass experience will be one of the more exciting undertakings the tech world has seen in a long while.

Google’s Eric Schmidt certainly isn’t hyping expectations as to how revolutionary this wearable tech will be: At AllThingsD he mentioned things like checking your messages on the go. That’s an act that Glass, being wearable, makes much more accessible and swift…but it’s hardly a paradigm shift.

The feeling you may get from this is that though Google’s really bringing the first wearable computer to the masses, it’s being really cautious about it. It’s not leaping boldly, experimentally into the fray.

Augmented reality constantly in your vision may be a distraction rather than a boon, according to a surgeon who has actually experimented with the tech. This may be bad news–or at least represent a serious design and development challenge–for wearable AR tech like Google Glass.

AR would seem to be one of the most promising aspects of wearable computing, thanks to its ability to display “augmented” information on your view of the world. This, of course, is likely to ultimately include things like coupons and real-time location-based ads (even if Google’s forbidding it in the first-gen Glass apps).

Head and neck surgeon Ben Dixon, who has used all sorts of augmented-data systems, such as on-screen prompts during endoscope-based procedures, assumed that:

…head-up displays would be a valuable resource. In theory they would provide anatomical guidance (“cut here”) while alerting me to potential problems (“avoid that”). This would lead to safer, more efficient and less demanding surgery.


…it was not long before I hit a major problem: distraction. Trained surgeons were unable to efficiently complete tasks while being presented with additional stimuli.

In one study, published in Surgical Endoscopy, surgeons completed tasks on a realistic model with some salient, but unexpected, findings placed in their field of vision.

Just 41 per cent of surgeons recognised additional information using a standard display, such as a computer monitor. In a group using an augmented reality display the rate was even worse. Almost every member of the group completely missed the unexpected finding.

The issue is “inattentional blindness,” which means that when you’re really concentrating on a task you completely fail to notice an unexpected stimulus. This has all sorts of important implications for how developers should write apps: Will cyclists using Glass miss important navigation instructions as they weave through traffic? Or, on the other hand, if Glass’ nav alerts are too bold or demanding, will they divert attention to their AR display and make an injury-threatening mistake?

Glass gets a big “keep out” warning. The New York Times has written that before Glass even hits the high street–gaining far greater availability than it has now via its developer community–the tech is getting a big thumbs down from several different sorts of organizations concerned about privacy.

According to the Times, “large parts of Las Vegas will not welcome wearers,” and a Caesars Entertainment spokesman noted that “We will not allow people to wear Glass while gambling or attending our shows,” in the same way the company’s casinos bar computers or other recording devices. The move is inevitable–if rather preemptive–because the secrecy and security of casinos is well known. If Rain Man can get thrown out of a casino for having an extraordinary brain capable of out-maneuvering the odds, then surely tech like card-counting apps for Glass would be barred too.

More seriously, West Virginia legislators are looking at making it illegal to use Glass while driving, and public venues like bars are moving to disallow the use of Glass too. These moves are about safety and privacy, the two worries that seem to be framing the “con” arguments against Glass.

Hardly anyone in America would wear Glass right now, thanks to geekiness. A new survey from BiTE interactive, which questioned 1,000 U.S. adults, has revealed a rather startling statistic: only 10% of people would consider wearing Google Glass. It’s largely down to the social awkwardness of sporting the eyewear, apparently, with 45% of respondents citing this factor.

Or at least that’s a big part of it, because the figure rose to about 38% of people if the price of Glass dropped much lower than its introductory $1,500 fee…which suggests that the steep cost of Glass is also influencing people’s opinions.

Mashable quotes BiTE’s EVP of operations, Joseph Farrell, explaining the figures:

The average American perceives Google Glass as a toy made solely for the tech-savvy elites. This aura of exclusivity further limits the user base for Google Glass, as smartphone users are likely to remain satisfied using their existing iPhone or Android apps to accomplish daily activities like taking pictures or searching for directions.

But one thing to remember is that it was once socially awkward to talk in public on a cellphone, and later to speak into phone headsets. So we may eventually get over the strangeness of head-worn computers. Bluetooth headsets, on the other hand, have never quite shaken off their geeky image, so it’s not necessarily plain sailing for Google.

APX Labs imagines the user interface for Google Glass’ grandchildren. APX Labs, which specializes in writing software to power what it calls “smart glasses,” has been experimenting with Epson’s Moverio device. The Moverio is an Android-powered interactive display headset that’s slightly more of a true augmented-reality device than Glass is. The company’s Northstar technology–essentially a virtual tag in the real world that activates additional information displays in the Moverio when the user selects it–has been showcased in a new video (via the Adafruit blog):

Northstar, as you can see, is an example of the kind of tech that could replace awkward QR codes or NFC tags on advertising posters, or even the tiny explanatory text panels that museums and art galleries use to inform visitors about exhibits.

But the question you have to ask, based on this video, is how are developers like APX going to make this kind of interface much more natural to interact with? As it stands it seems slightly awkward to use, although this probably improves with familiarity, and it would seem prohibitive to add too many facial or fingertip gestures to the way users will interact with their future headsets.

Mickey Getting Into Wearable Tech In A Small Way. This week at the D11 event, the chairman of Walt Disney Theme Parks and Resorts said that Disney was going to be launching an experiment with wearable tech. The MagicBand bracelets will let visitors buy things in the resorts, reserve some experiences, and work as hotel room keys. Equipped with Bluetooth and RFID, the bracelets are more sophisticated than a simple NFC credit card.

The idea is that the MagicBand bracelets will deliver value to both the wearers and the resorts, but in two different ways. The wearers get a degree of convenience in paying for things, thanks to the speedier process of waving your bracelet at a cash register. Because the devices are smart you can disable the payment feature or set a spending limit, which means you can give them to kids and allow them a degree of freedom. Because the resort system can also assign identities to each band, that means characters in the park may be able to greet kids by name.

For its part Disney can use the security and encryption in the bracelets to cut down on ticket fraud and also prevent partial ticket sales. Because the Bluetooth tech means bands can be identified over a reasonable distance, it may allow resorts to track foot traffic around the park–though this tracking is an opt-in feature.

Glass apps get controversial, just as expected. Google Glass isn’t even available yet, and already developers have pushed to create exactly the kind of controversial apps for the device that you might have predicted.

San Francisco company Lambda Labs has crafted a face recognition app for Glass that may automatically identify people the camera sees in your field of view and flag them for you.

The company has adapted its existing face-recognition tech for Glass. In its first incarnation, it works after the wearer snaps a photo with Glass, comparing the photo to sample images that the user has already identified. This is perhaps a less unsettling system because of the human interaction involved, but the company is said to be planning automated real-time recognition in the future.
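Mechanically, that snap-then-match flow boils down to a nearest-neighbor search over face “signatures.” A toy sketch of the matching step, assuming some upstream library has already reduced each face to a feature vector (the vectors, names, and threshold here are all invented for illustration):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(snapshot_vec, labeled_samples, threshold=0.9):
    """Return the best-matching labeled face, or None if nothing is close enough."""
    best_name, best_score = None, threshold
    for name, sample_vec in labeled_samples.items():
        score = cosine_similarity(snapshot_vec, sample_vec)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical pre-computed feature vectors for faces the user has labeled.
samples = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.21], samples))  # -> alice
```

The threshold is what separates the human-in-the-loop version from the automated one Lambda is said to be planning: in real-time mode the system would run this loop continuously against every face the camera sees.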

Google itself has shied away from automated face recognition because of the risks of privacy invasion. But Lambda’s experiment suggests that in the future, third-party developers could easily implement systems that are even more complex, possibly even data-mining Facebook for face and name data.

Separately, adult app developer Mikandi has developed what it says is the first adult-entertainment app for Glass. Speaking to the U.K.’s The Register, the company said the app will allow users to view porn content on Glass’ screen and also–here’s the big catch–allow people to create pornographic content for other users to watch.

Mikandi is said to be excited about the prospect of point-of-view-style content being shared as a very personal new form of amateur porn. CEO Peter Acworth, speaking recently on exactly this matter, said the tech could revolutionize the porn business:

You could film picking up someone at a bar and taking them home, for example. It takes the whole genre of [point of view filming] and reality productions one stage further. You’ll hopefully get something very authentic.

While the idea of creating content from a wearer’s point of view is certainly innovative, it is essentially a private choice between consenting individuals–not least because it would be impossible for the wearer to conceal that they were wearing Glass. But Mikandi’s public porn viewing raises different questions. While passers-by won’t be able to hear or see the content, they’ll be able to see the wearer’s reactions. Public viewing of porn on smartphones and tablets is already proving an issue for some, although recent rulings in Canada have deemed it legal.

We’re continuing to track this story as it develops. Check back for more updates.


About the author

Julia Kaganskiy is an editor and curator exploring technology's creative potential and creativity's potential to disrupt technology. She is Editor-at-Large at The Creators Project and founder of the ArtsTech meetup.