Resolving such relationship issues is the realm of Susan Brennan, a professor at SUNY Stony Brook, whose research draws from anthropology, psychology, and technology to figure out — and improve — how we communicate with our silicon-based counterparts.
Brennan’s pressing concern these days is speech — how we relate to the voice-recognition applications that drive automated reservations and customer-service systems. “A computer scientist trying to create a natural-language or spoken application has this big problem,” she says, because the things we take for granted in conversation — intonation, inflection, even hand gestures and eye movement — can’t easily be replicated or understood by software.
Brennan and her SUNY colleagues are in the midst of a five-year study of variability in language — how and why people tailor what they say to fit the needs of their audience. The way people and computers communicate, they’ve found, is sensitive to mutual expectations, programmed or otherwise. In one case, Brennan observed that when a computer asked for input using “please” (as in, “Please enter your request”), people were more likely to respond with a “please” — a word the computer wasn’t programmed to comprehend.
Other unexpected pathologies crop up as well. When people think they’re being misunderstood by a computer, they tend to hyperarticulate — that is, to e-nun-ci-ate ev-er-y sin-gle syl-la-ble. Problem is, computer systems are programmed to recognize relaxed speech. So the more a person enunciates, the harder it becomes for a computer to understand what he’s saying. It’s worse when a system instructs users to speak clearly, encouraging people to slow down even more.
There’s no easy fix, but much of the solution, Brennan believes, comes down to context. Communication stripped of context can seem mysterious and confused. “If you put that language in context,” she says, “it’s often very clear what they mean.” That’s the point of her study: to document and account for the conversational cues we typically take for granted.