The real shift comes when we move away from direct interaction and input, towards a world of ambient interaction and awareness.
Our laptops, mobile phones, and sometimes desktop computers increasingly come with built-in microphones, cameras, accelerometers, and even GPS. For the most part, these sensory technologies only come into play when we call upon them directly by launching a related application (to take a picture, or find something on a map, etc.). The rest of the time, these senses are turned off. Battery life probably plays a role in keeping the senses off, but I suspect a bigger reason is that we’re simply not accustomed to thinking about our tools as always “paying attention.”
Imagine a desktop with a camera that knows to shut down the screen and eventually go to sleep when you walk away (but stays awake when you’re sitting there reading something or thinking), and will wake up when you sit down in front of it (no mouse-jiggling required).
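The rule behind that scenario is simple enough to sketch. Below is an illustrative Python fragment (the `PresenceMonitor` class, its threshold, and the "wake"/"sleep" signals are all hypothetical; a real version would sit on top of a platform face-detection and power-management API):

```python
ABSENCE_BEFORE_SLEEP = 60.0  # assumed: seconds away before the screen shuts off

class PresenceMonitor:
    """Illustrative sketch: track whether someone is in front of the
    camera and decide when to sleep or wake the display."""

    def __init__(self):
        self.last_seen = 0.0
        self.display_on = True

    def update(self, face_seen: bool, now: float):
        """Feed in one camera reading; returns "wake", "sleep", or None."""
        if face_seen:
            self.last_seen = now
            if not self.display_on:
                self.display_on = True
                return "wake"      # user sat back down, no mouse-jiggling
        elif self.display_on and now - self.last_seen > ABSENCE_BEFORE_SLEEP:
            self.display_on = False
            return "sleep"         # user walked away long enough ago
        return None
```

Note that a sitting, reading user still registers as a face, so the screen stays awake even with no keyboard or mouse activity.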
Or a system with a microphone that listens for the combination of a phone ringing (sudden loud noise) followed by a nearby voice saying “hello” (or similar greeting), and will mute the system automatically.
Perhaps a “sudden motion sensor” for phones, not to detect when the phone is dropped, but to detect when the phone has gone too quickly from freeway speed to zero (perhaps with the microphone picking up collision noises, or sounds of distress), and auto-dialing a 911-like service.
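That crash rule, too, reduces to a few lines. Here is a hedged sketch (the function, the thresholds, and the sample format are all assumptions for illustration, not any real phone API):

```python
FREEWAY_SPEED = 25.0   # m/s, roughly 55 mph -- assumed threshold
CRASH_WINDOW = 2.0     # seconds: freeway-to-zero faster than this is suspect
LOUD_NOISE_DB = 100.0  # assumed level for collision-grade sound

def looks_like_crash(speed_samples, mic_peak_db):
    """speed_samples: list of (timestamp_s, speed_m_s) pairs, newest last.
    Flags a too-quick drop from freeway speed to a stop, corroborated
    by a loud noise picked up on the microphone."""
    if not speed_samples or speed_samples[-1][1] > 1.0:
        return False  # still moving; nothing to flag
    t_stop = speed_samples[-1][0]
    was_fast_recently = any(
        speed >= FREEWAY_SPEED and t_stop - t <= CRASH_WINDOW
        for t, speed in speed_samples
    )
    return was_fast_recently and mic_peak_db >= LOUD_NOISE_DB
```

Requiring both signals (sudden deceleration *and* a loud noise) is what keeps a hard braking stop from auto-dialing emergency services.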
These are just a few simple examples, relying on some fairly basic rules. But imagine if you combine the sensory awareness with a more complex Bayesian-style learning system. What if your digital device could learn your habits, and adjust accordingly?
Imagine a phone that pays attention to what kinds of lighting and noise conditions typically cause the user to turn off the ringer (or perhaps turn it up), in order to eventually do so automatically.
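A minimal version of that learning loop could be a naive-Bayes-style classifier over observed conditions. The sketch below is purely illustrative (the class, feature names, and setting labels are invented for the example); it counts which lighting and noise conditions accompany each manual ringer change, then predicts the likely setting for new conditions:

```python
from collections import Counter, defaultdict

class RingerLearner:
    """Toy naive-Bayes-style learner: watch manual ringer changes
    alongside ambient conditions, then predict the setting."""

    def __init__(self):
        self.setting_counts = Counter()
        self.feature_counts = defaultdict(Counter)

    def observe(self, light, noise, setting):
        """Record one manual ringer change and the conditions around it."""
        self.setting_counts[setting] += 1
        self.feature_counts["light"][(light, setting)] += 1
        self.feature_counts["noise"][(noise, setting)] += 1

    def predict(self, light, noise):
        total = sum(self.setting_counts.values())

        def score(setting):
            p = self.setting_counts[setting] / total
            for feat, val in (("light", light), ("noise", noise)):
                # Laplace smoothing so unseen combinations don't zero out
                p *= (self.feature_counts[feat][(val, setting)] + 1) / (
                    self.setting_counts[setting] + 2)
            return p

        return max(self.setting_counts, key=score)

learner = RingerLearner()
# The user silences the phone in dark, quiet rooms (a theater, say)...
learner.observe("dark", "quiet", "silent")
learner.observe("dark", "quiet", "silent")
# ...and turns it up outdoors in traffic noise.
learner.observe("bright", "loud", "loud")
learner.observe("bright", "loud", "loud")
```

After a handful of observations, `learner.predict("dark", "quiet")` returns `"silent"`: the phone has "learned" the theater habit without being told about theaters.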
Or a mobile device that could keep track of the user’s location, changing settings (network, mail servers, desktop image, even available applications) automatically.
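The location case is essentially geofencing: check whether the device's position falls inside a known zone and apply that zone's settings. A sketch, with made-up coordinates and profile names (the zone table and helper functions are assumptions, not a real platform API):

```python
import math

PROFILES = {
    # Hypothetical zones: center (lat, lon), radius, and settings to apply
    "home":   {"center": (37.774, -122.419), "radius_m": 200,
               "settings": {"network": "home-wifi", "mail": "personal"}},
    "office": {"center": (37.789, -122.401), "radius_m": 150,
               "settings": {"network": "corp-wifi", "mail": "work"}},
}

def distance_m(a, b):
    """Equirectangular approximation; accurate enough at city scale."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6371000  # Earth radius in meters

def profile_for(lat, lon):
    """Return the matching zone name and its settings, or (None, {})."""
    for name, zone in PROFILES.items():
        if distance_m((lat, lon), zone["center"]) <= zone["radius_m"]:
            return name, zone["settings"]
    return None, {}
```

Swapping desktop images or available applications would just be more entries in each zone's settings dictionary.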
Or a system that listens for coughing (how many different voices, how often, how intense, where) to feed health maps used by epidemiologists (and other mobile apps).
And, of course, there are the misuses and abuses, whether by malicious hackers (listening for Social Security numbers and credit card numbers) or by government agencies.
Most of these are technically possible today, although they would probably be too much of a drain on the batteries of smaller devices. Nonetheless, the question isn’t “can this happen?” but “will we want it?” Are you ready for your phone, your laptop, your digital environment to be paying attention to everything you do?
Read more of Jamais Cascio’s Open the Future blog.