Why Your Software Should Act Like People

We have had experience design and persuasive design–but one sociologist says we need to design our social interactions with software to be more personable. Software is a social actor, says Clifford Nass, and developers need to teach it to respect our social mores.

Have you ever gotten angry with your iPhone or given a name to your car? Sociologist Clifford Nass has spent his career investigating how people interact with technology and concluded that we subconsciously expect software to respect the same social rules as other people. Software is a social actor, and developers need to design it to respect social mores.

Nass was studying computer science when he took a sociology course, thinking it was the easiest way to earn the credits needed to finish his degree. He soon realized that a sociologist who knew how to code could do new types of research. As he puts it: “Galileo was one of the first people in the world to have a telescope. He was bound to discover something.” FastCo.Labs sat down with Nass to learn his philosophy on properly socialized code.

What do developers need to know about the social rules we apply to technology?

To a remarkable degree, people approach technology using the same social rules, expectations, and heuristics they use when interacting with other people. Very frequently, technology behaves very badly from a social perspective. When any interface is designed, the simple question people should ask is “How would I feel if a person did this?”–and it’s remarkable how often you would do things very differently if you just asked yourself that question.

Can you give me some examples of software behaving badly?

Almost all of it. If you do something and someone says to you “That’s wrong!” without giving any guidance or help, that would be considered extremely impolite and uncaring. In a lot of query or help systems, you type in a query and it gives you some answers. If you type the same request again, it’s implicit that you don’t want the same answer, and yet that is what software will frequently do. No matter how many times you ask, it will give you the same answer. That comes off as passive-aggressive.
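
Nass’s point about repeated queries can be sketched in a few lines. This is a minimal illustration, not code from any real help system–the names (`last_query`, `answer`, `lookup`) are all hypothetical:

```python
# Illustrative sketch: if a user submits the same query twice in a row,
# don't repeat the identical answer--acknowledge the repetition and
# offer a different path instead.

last_query: dict[str, str] = {}  # remembers each user's previous query

def answer(user: str, query: str, lookup) -> str:
    """Answer a query, varying the response on an exact repeat."""
    result = lookup(query)
    if last_query.get(user) == query:
        # Repeating the query implies the first answer didn't help.
        return (f"I gave you '{result}' before; that may not be what "
                "you need. Would you like more detail, or to try "
                "phrasing it differently?")
    last_query[user] = query
    return result
```

A first call returns the answer straight; an identical second call acknowledges the earlier answer and offers alternatives rather than stonewalling.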

What about newer personal assistant systems like Siri?

The voice interface increases the power of social response. There are some things that Siri does well in the social realm, some things not so well. One problem is that when Siri doesn’t understand something, it blames itself. It says “I didn’t understand that” or “Could you repeat that?” It turns out to be better not to blame yourself, but instead to leave blame unassigned or even to blame a third party. For example, “There was noise on the line. Can you repeat that?”
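
The blame-deflection rule is easy to demonstrate. The sketch below assumes a hypothetical retry-prompt helper (the function name and message strings are illustrative, not from Siri or any real assistant API); it blames a neutral third party and, per Nass’s earlier point, varies the wording across repeated failures:

```python
# Sketch of Nass's advice: when a voice interface fails to parse input,
# blame a neutral third party (line noise) rather than itself or the
# user, and rotate the phrasing so consecutive failures don't repeat
# the identical sentence.

NEUTRAL_EXCUSES = [
    "There was noise on the line. Could you repeat that?",
    "The connection dropped a few words there. One more time?",
    "Something garbled that last request. Please try again.",
]

def retry_prompt(attempt: int) -> str:
    """Return a retry message for the given failed attempt number."""
    return NEUTRAL_EXCUSES[attempt % len(NEUTRAL_EXCUSES)]
```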

What techniques should developers use to make people like their software more?

We don’t get much praise from technology. That’s a significant omission. In the language space, as technology starts using full sentences rather than simple words and commands, there are plenty of opportunities–for example, mirroring the person. We love people and technology that mirrors us. Here in the Bay Area we call San Francisco “The City.” Hardly anyone says San Francisco. So if I use the phrase “The City,” the software should not only understand that but use it back at me. That’s what humans do to each other to indicate caring. People like to feel that they are being cared for, that the technology cares about them.
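
Mirroring a user’s vocabulary, as Nass describes with “The City,” could look something like this. It’s a deliberately naive sketch–the names (`learn`, `mirror`, `user_terms`) are invented for illustration, and a real system would need smarter matching than plain substring replacement:

```python
# Hypothetical sketch of vocabulary mirroring: remember the phrase the
# user actually used for an entity, and echo that phrase back in later
# responses instead of the canonical name.

user_terms: dict[str, str] = {}  # canonical name -> user's own phrase

def learn(canonical: str, spoken: str) -> None:
    """Record the user's own phrase for a canonical entity."""
    user_terms[canonical] = spoken

def mirror(response: str) -> str:
    """Rewrite a response to use the user's vocabulary where known."""
    for canonical, spoken in user_terms.items():
        response = response.replace(canonical, spoken)
    return response
```

So after `learn("San Francisco", "The City")`, a response mentioning San Francisco would come back phrased in the user’s own words.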

We talked in a previous interview about social prosthetics–software to make us more likeable. Do any such systems exist?

We are getting closer to the social prosthesis of technology giving advice. Google Glass is a great opportunity for this. It could refer me to information about you. It could remind me of things you did that I should refer to, not just trivial things like “It’s her birthday” but more complex things like “She really likes talking about such and such so you should discuss that.” Because Glass is constantly available, you wouldn’t have the awkwardness of pulling out a phone. The whole wearable movement allows the technology to be a prosthetic in a much more powerful way because it’s seamless.

What about interaction with robots, which seem to be headed for the mainstream as consumer products?

Robots up the social ante. When you have a body, that suggests humanness. You have higher social response and social expectations. A piece of software that violates a social rule is not as annoying as a robot that breaks a social rule. Failures to do it right have much greater penalties. For example, if the robot is small it has to adopt a subservient role. The language it uses has to be subservient, its voice has to be subservient. It’s very bad to have a robot which is small and diminutive but then speaks in a rather aggressive way. It’s very very important that the voice, language, etc. of the robot conforms with its size, its appearance, and whether it looks male or female.

You once said that technology companies shouldn’t hand out T-shirts since it interferes with team bonding. What social strategies can companies use to build teams?

In all societies there are markers of belonging. Having things that say “we are bonded, we are similar, we are tied” is very important. Most groups will have inside jokes, or things they say, or certain ways of doing certain things. Those all exist to create a feeling of bonding, and that’s exactly why companies have T-shirts. It’s supposed to remind them that they are part of a team. However, if everyone has a T-shirt it’s no longer bonding. If you share your inside joke with everyone, it’s no longer an inside joke, and it therefore loses its power and meaning.

To create a team you need two things. You need identification, clear reminders that we are part of a team, and dependence–the notion that we need each other. Incentive structures that reward individuals versus other individuals work against that. You want to emphasize similarity, shared interests, and a shared stake in outcomes.

Have people praise each other. Cross-supporting praise is very powerful. The problem is that we think critics are more intelligent than people who praise. We like people who praise more, but we think our critics are smarter. So the trick to make this work is to develop these mutual admiration societies. I praise you, so you might think I am not that cool–but then a third party says “Hey, Cliff is smart.” That’s how you do it.

[Image: Flickr user Noli Fernan “Dudut” Perez]

About the author

Lapsed software developer, tech journalist, wannabe data scientist. Ciara has a B.Sc. in Computer Science and an M.Sc. in Artificial Intelligence.