Brennan and her SUNY colleagues are in the midst of a five-year study of variability in language — how and why people tailor what they say to fit the needs of their audience. The way people and computers communicate, they’ve found, is sensitive to mutual expectations, programmed or otherwise. In one case, Brennan observed that when a computer asked for input using “please” (as in, “Please enter your request”), people were more likely to respond with a “please” — a word the computer wasn’t programmed to comprehend.
Other unexpected pathologies crop up as well. When people think they’re being misunderstood by a computer, they tend to hyperarticulate — that is, to e-nun-ci-ate ev-er-y sin-gle syl-la-ble. The problem is that speech-recognition systems are trained on relaxed, natural speech. So the more a person enunciates, the harder it becomes for the computer to understand what they’re saying. It’s worse when a system instructs users to speak clearly, encouraging them to slow down even more.
There’s no easy fix, but much of the solution, Brennan believes, comes down to context. Communication stripped of context can become mysterious and confusing. “If you put that language in context, it’s often very clear what they mean,” she says. That’s the point of her study: to document and account for the conversational cues we typically take for granted.