As our various electronic devices gain more and more sensory awareness, we open up the potential for entirely new forms of interaction. Not just new interfaces–tapping and shaking and whatnot–but a shift in presence. With few exceptions, we use these new technologies in rather familiar ways. We might speak instead of type, or tap instead of click, or wave a control wand instead of mash a control pad, but these are essentially the same kinds of direct input processes we’ve done for years, just dressed up in a new look.
The real shift comes when we move away from direct interaction and input, towards a world of ambient interaction and awareness.
Our laptops, mobile phones, and sometimes desktop computers increasingly come with built-in microphones, cameras, accelerometers, and even GPS. For the most part, these sensory technologies only come into play when we call upon them directly by launching a related application (to take a picture, or find something on a map, etc.). The rest of the time, these senses are turned off. Battery life probably plays a role in keeping the senses off, but I suspect a bigger reason is that we’re simply not accustomed to thinking about our tools as always “paying attention.”
There’s one notable exception already in wide use: many laptops carry an accelerometer that detects a sudden fall and parks the hard drive heads before impact. Apple calls its version the “Sudden Motion Sensor,” but Lenovo, Acer, and HP all have similar systems. But think about this in the abstract–in the laptop I’m typing on right now, there’s an environmental sensor paying constant attention, ready to act if certain conditions are met. Now, imagine that same concept holding true for other kinds of sensors.
Imagine a desktop with a camera that knows to shut down the screen and eventually go to sleep when you walk away (but stays awake when you’re sitting there reading something or thinking), and will wake up when you sit down in front of it (no mouse-jiggling required).
Or a system with a microphone that listens for the combination of a phone ringing (sudden loud noise) followed by a nearby voice saying “hello” (or similar greeting), and will mute the system automatically.
Perhaps a “sudden motion sensor” for phones, not to detect when the phone is dropped, but to detect when the phone has too-quickly gone from freeway speed to zero (perhaps with the microphone picking up collision noises, or sounds of distress), and auto-dialing a 911-like service.
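Each of these scenarios is just a condition matched against a sensor reading, triggering an action. A minimal sketch of that pattern, in Python–with the caveat that every sensor name, event kind, and threshold here is invented for illustration; no real device API is being used:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SensorEvent:
    """A reading from one ambient sensor (names are hypothetical)."""
    source: str         # e.g. "camera", "microphone", "accelerometer"
    kind: str           # e.g. "face_absent", "loud_noise", "rapid_decel"
    value: float = 0.0  # magnitude of the reading, sensor-specific units

@dataclass
class Rule:
    """Fire `action` when `condition` matches an incoming event."""
    condition: Callable[[SensorEvent], bool]
    action: Callable[[SensorEvent], str]

def dispatch(event: SensorEvent, rules: list[Rule]) -> list[str]:
    """Run every matching rule and collect the actions taken."""
    return [rule.action(event) for rule in rules if rule.condition(event)]

# Two of the rules from the text, in miniature (thresholds made up):
rules = [
    Rule(  # crash detection: deceleration above an arbitrary cutoff
        condition=lambda e: e.kind == "rapid_decel" and e.value > 25.0,
        action=lambda e: "dial_emergency",
    ),
    Rule(  # walk-away detection: camera no longer sees a face
        condition=lambda e: e.source == "camera" and e.kind == "face_absent",
        action=lambda e: "sleep_display",
    ),
]

print(dispatch(SensorEvent("accelerometer", "rapid_decel", 30.0), rules))
# ['dial_emergency']
```

The point of the sketch is how little logic is needed once the senses are always on: the hard part is the sensing, not the rules.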
These are just a few simple examples, relying on some fairly basic rules. But imagine if you combine the sensory awareness with a more complex Bayesian-style learning system. What if your digital device could learn your habits, and adjust accordingly?
Imagine a phone that pays attention to what kinds of lighting and noise conditions typically cause the user to turn off the ringer (or perhaps turn it up), in order to eventually do so automatically.
Or a mobile device that could keep track of the user’s location, changing settings (network, mail servers, desktop image, even available applications) automatically.
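The learning version needs only slightly more machinery than the fixed rules: count what the user does under each condition, and act once the odds tip past a threshold. A toy sketch of the ringer example–the condition buckets like ("dark", "quiet") are invented stand-ins for real light and noise readings, and the add-one smoothing is just one simple way to stay cautious on sparse data:

```python
from collections import Counter

class RingerLearner:
    """Learn when to silence the ringer from past user behavior.

    Conditions are coarse, hypothetical buckets like ("dark", "quiet");
    a real device would derive them from its light and noise sensors.
    """

    def __init__(self) -> None:
        self.muted = Counter()  # times the user muted under a condition
        self.seen = Counter()   # total observations of a condition

    def observe(self, condition: tuple[str, str], user_muted: bool) -> None:
        """Record one instance of the user's choice under a condition."""
        self.seen[condition] += 1
        if user_muted:
            self.muted[condition] += 1

    def should_mute(self, condition: tuple[str, str]) -> bool:
        """Mute automatically once the estimated probability passes 0.5.

        Laplace (add-one) smoothing keeps the estimate near 0.5 when a
        condition has only been seen a few times, so the phone does not
        overreact to a single observation.
        """
        p = (self.muted[condition] + 1) / (self.seen[condition] + 2)
        return p > 0.5

learner = RingerLearner()
for _ in range(5):
    learner.observe(("dark", "quiet"), user_muted=True)   # e.g. a cinema
learner.observe(("bright", "loud"), user_muted=False)     # e.g. a street

print(learner.should_mute(("dark", "quiet")))   # True: (5+1)/(5+2) ≈ 0.86
print(learner.should_mute(("bright", "loud")))  # False: (0+1)/(1+2) ≈ 0.33
```

A real system would of course weigh more signals (calendar, location, time of day) and let the user override it, but the core loop–observe, count, act on the learned odds–is this small.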
What prompted this line of thought for me was the story about the Outbreaks Near Me application for the iPhone. It struck me that a system that provided near-real-time weather, pollution, pollen, and flu (etc.) information based on watching where you are–and learning where you typically go, to give you early warnings–was well within our capabilities.
Or a system that listened for coughing–how many different voices, how often, how intense, where–to add to health maps used by epidemiologists (and other mobile apps).
And, of course, there are the misuses and abuses, whether by malicious hackers (listening for Social Security numbers and credit card numbers) or by government agencies.
Most of these are technically possible today, although they would probably be too much of a drain on the batteries of smaller devices. Nonetheless, the question isn’t “can this happen?” but “will we want it?” Are you ready for your phone, your laptop, your digital environment to be paying attention to everything you do?
Read more of Jamais Cascio’s Open the Future blog.