A lot has been written in the last year about Silicon Valley’s lack of diversity and the recent steps taken by major players like Google and Intel to address the problem.
Initiatives to strengthen the pipeline of diverse applicants are a sensible place to start, but the pipeline won’t help you if you can’t get diverse workers to stay. And that is a problem you can’t fix until you understand the way bias works and how our brains see people who are different.
Psychologists have been studying how we make sense of other people–an area of research we call person perception–for the better part of a century. It turns out that almost entirely below our awareness, our brains transform bits of disparate information–what someone looks like, how they sound, what they say, and their behaviors, along with what we know or think we know about them–into a cohesive impression.
We feel like we simply see people as they are, but nothing could be further from the truth. Impressions are just as much about the perceiver–and his or her expectations, assumptions, and memories–as they are about the perceived. Beauty really is in the eye of the beholder–and so is everything else.
The process of perception is, not surprisingly, a biased one. We have loads of biases hardwired into our brains: preferences for people who are similar to us or who are in our group; wariness of those who are different; a tendency to save mental energy by using shortcuts like stereotypes to fill in the blanks about others.
And while many of these unconscious biases might have made sense and helped keep us alive in our hunter-gatherer days, in the modern world–and the modern workplace–they cause all sorts of trouble.
The first step to addressing the problem is for everyone to get past the idea that only blatant racists, misogynists, and homophobes are biased. If you have a brain, you are biased. End of story.
The good news is that more and more organizations are accepting this fact and raising awareness through “unconscious bias training.” This typically involves teaching people about how bias works and giving them experiences that reveal some of their own biases, like Harvard’s online Implicit Association Test (IAT).
Unfortunately, while teaching people about unconscious bias is a good first step, it doesn’t actually break bias.
That’s because bias is still unconscious, a word that needs to be taken very seriously here.
You can’t access the mental processes that create bias, even when you know they exist. It’s a little like teaching someone all about how their pancreas works, and then asking them to use that knowledge to change how much insulin their pancreas makes. Simply knowing how something works doesn’t give you conscious access to its operations–and that’s as true of your brain as it is for your other organs.
As Daniel Kahneman, the world’s foremost expert on bias, has written, “It’s difficult for us to fix errors we can’t see.”
So any strategy that requires you to catch yourself being biased in the moment and correct it is bound to fail. What we need, instead, is to find strategies that break bias even when we don’t think we are biased. Which brings us to the really good news: we have found these strategies.
Psychologists and neuroscientists have learned a great deal about fighting bias in the last few decades. We’ve learned that there are different kinds of bias, based on different neural underpinnings or cognitive quirks, and that each needs a different kind of strategy to break it.
For instance, biases that are caused by mental laziness can be corrected by creating simple step-by-step processes for people to use when making decisions that will get them to be more deliberate and thorough.
Biases that are motivated by our unconscious negativity towards those who seem different–like women in male-dominated tech companies–can be broken by focusing on similarities or shared goals.
Take a recent study by social neuroscientists Jay Van Bavel and Will Cunningham as an example. In it, white participants were asked to quickly categorize words as “good” (e.g., love, puppy) or “bad” (e.g., hate, garbage). Before each word, the face of a white or black male was briefly shown on the computer screen. The idea here is that seeing something positive before a positive word will make it easier to categorize, but seeing something positive before a negative word will lead to more mistakes. The same logic holds true for seeing something negative before a word.
Participants’ performance showed the typical effect of unconscious bias–namely, that they had a harder time categorizing negative words when they appeared after white faces, but not after black ones. In other words, because they unconsciously saw white faces–not black faces–as positive, their performance on negative words suffered.
Next, the researchers took another set of participants through the same task, only this time, they were shown the faces before it began and were told that some of the white and black faces belonged to students who would be on their team on a later task, while the other faces were from the opposing team.
White participants showed the same positivity bias toward the black faces of fellow team members that they had previously shown only toward white faces. In other words, knowing that someone is on your team completely wipes out the unconsciously biasing effect of race.
Taken together with results from hundreds of other studies, it’s clear that where raising awareness alone can fail, simple strategies–like taking a moment to focus on similarities and common identities, or slowing down to weigh all the evidence–can go a long way toward not only increasing the diversity of hires in organizations, but also creating the kind of inclusive environment that will make those hires feel it’s worth sticking around.