One of the underappreciated repercussions of transitioning from a world of gears, pistons, and flywheels to one more concerned with the movement of electrons is the elimination of noise. Sound, after all, is vibration propagated through some kind of medium (usually air), so the fewer moving parts there are disturbing the atmosphere around us, the fewer vibrations there are to reach our ears.
When I think back to the old “beige box” computers many of us had, I find that my memories are just as auditory as they are visual or tactile. I recall the sounds of 5.25-inch drive doors snapping open, the 3.5-inch floppies ejecting, and the crunching of mechanical hard drives; the rush of CPU fans; the shrill whine of dot matrix printers; and most of all, the dial tone issuing forth from my external modem’s speaker, followed by the multi-frequency melody of a bulletin board being dialed, the expectant ringing, and finally, the otherworldly hisses, ticks, and sputters that signified the miracle of one computer connecting to another somewhere else in the world.
A New Sensory Void
Looking around me today, I’m surrounded by computers with CPUs that are either cooled by static heat sinks, or that have fans so carefully engineered that they’re hardly noticeable; laptops with no discernible moving parts other than screens secured by permanent magnets, and muted scissor- or butterfly-switch keyboards; and devices with gigabytes to terabytes of completely silent solid-state storage. In fact, despite having staggering amounts of computing power and storage capacity constantly within arm’s reach, my home office is so still and silent that I sometimes have to run a fan just to generate a little auditory cover so I can concentrate on writing.
With our technology becoming much less likely to produce noise through its normal operation, manufacturers, industrial designers, and end users are faced with an interesting question: should our devices fill this new sensory void with intentional, artificial sound?
In some cases, it might be considered a matter of safety. Although there’s something extremely seductive about gradually reducing the noise of traffic to nothing more than displaced air and rubber against asphalt, many believe that electric cars moving at low speeds through areas where pedestrians are likely should be required to broadcast their presence with an auditory alert. In other cases, personal privacy might be the concern. Without physical shutters to snap open and shut, and film to advance from spool to spool, some believe that digital cameras should be required to make enough imitation racket that people in the immediate vicinity will know that a picture has just been taken.
Noise As Engineered Aesthetic
These issues are natural reactions to a world that sometimes seems to be changing faster than humanity is able to adapt to it. Once our collective psyches finally begin to catch up with our technological advances, such auditory anachronisms will probably go the way of visual skeuomorphism. Therefore, once we’ve finished reducing—or even entirely eliminating—the noises that machines and devices have to make, and once we’ve removed the noises that we’ve come to believe machines and devices should make, what’s left are the most interesting sounds of all: those which we actually want them to make. Noise becomes an engineered aesthetic; an essential fourth dimension of design; a collection of auditory cues with the potential to improve our interactions, and even help facilitate emotional connections.
Although many of us seldom use our phones for actually talking to each other anymore, we take it for granted that we can make and receive voice calls when necessary, and that we’ll be promptly notified of time-sensitive events like messages, appointments, and navigational instructions. Since devices like phones, tablets, and now smart watches are also sophisticated media devices, we also expect to be able to listen to audio books, music, news, and podcasts. These are all audio capabilities that I would put into a category called “essential sounds”: sounds without which a device would feel broken, buggy, or woefully lacking in functionality.
The more interesting category is secondary sounds, or sounds which devices optionally make for no other reason than to augment our interactions with them. One of the best-known examples is probably the “keyboard clicks” preference that most mobile operating systems support.
Features like keyboard clicks and haptic feedback (in this case, your device vibrating as you tap a virtual key) aren’t just attempts at emulating the tactility of a physical keyboard; they’re also attempts to improve accuracy and increase engagement by activating more senses. Although, like many other people, I find other people’s keyboard clicks incredibly annoying, I also find that they somehow help keep me oriented while I type, and while I generally turn them off out of respect for those around me, I do enjoy engaging my ears and/or my sense of touch while interacting with a phone or tablet.
Recalibrating the Sensory Experience
The iPhone played a critical role in making virtual keyboards mainstream, but its predecessor, the iPod, employed clicks as well. The iPod classic was well known for its user interface, and in particular, for its click wheel, which made it possible not only to navigate hierarchical menus efficiently, but also to scroll through thousands of tracks, albums, and artists with surprising ease. Part of the click wheel’s success was due to its auditory feedback, which was designed to convey a sense of navigational velocity, assisting users in creating a mental model of the information they were scrolling through (or, more importantly, of their movement through that information) and in developing a kind of interaction intuition. Although multi-touch interfaces have largely obviated the need for components like click wheels, their influence clearly lives on.
Our devices have gone from strictly utilitarian (alarm clocks designed to do little more than ring or buzz loudly enough that it’s impossible to sleep through them) to far more experiential (pleasant crescendos of electronic loops). The fact that we continue to replace both incidental and purely functional dissonance with carefully engineered and composed harmonics, and to artfully integrate it with other types of sensory experiences, shows that we are developing an increasingly sophisticated understanding not only of our technology, but of ourselves.