Over the past decade, a growing number of organizations, including Google, Starbucks, and Facebook, have embarked on a mission to implement unconscious bias training, hoping to improve their formal diversity and inclusion (D&I) initiatives, which are now found in virtually all Fortune 500 companies.
Broadly speaking, these interventions, which are also referred to as implicit bias training, focus on making people aware of their prejudices, such as racism. The idea is that this will enable people to eliminate them, or at least keep them in check. Advocates of this approach argue that 95% of the population is affected by unconscious biases. And many successful D&I consultants swear by this multimillion-dollar business.
Now, as the corporate world unites to take a strong stand against racism, can unconscious bias training be the way to eradicate prejudice and discrimination from the workplace? Is there evidence that this training actually works? How robust is its underlying science?
While these questions have already been answered by independent academic reviews, as well as systematic general reviews on the effectiveness of different D&I initiatives, there is usually a disconnect between science and practice, particularly in talent management.
As the polymath, Nobel Prize-winning physicist Richard Feynman famously noted, “Science is the belief in the ignorance of experts.” From a scientific perspective, there are reasons to be cautious that unconscious bias training will have a significant impact on racism, sexism, and other forms of workplace discrimination.
Most biases are conscious rather than unconscious
Contrary to what unconscious bias training programs would suggest, people are largely aware of their biases, attitudes, and beliefs, particularly when they concern stereotypes and prejudices. Such biases are an integral part of their self and social identity. As eloquently summarized by the great poet and Civil Rights activist Maya Angelou: “When someone shows you who they are, why don’t you believe them?”
This is why, if you want to find out what people think about something, you should just ask them. Sure, there are instances in which they may not want to answer truthfully, but that is quite different from saying they are unaware of their thoughts. Generally speaking, people are not just conscious of their biases, but also quite proud of them. They have nurtured these beliefs over many years, often starting in childhood, when their parents, family, friends, and other well-meaning adults socialized the dominant cultural stereotypes into them. We are what we believe, and our identity and self-concept are ingrained in our deepest personal biases. (The euphemism for these is core values.)
That’s not to say that we are always honest with ourselves. Self-deception is one of the most universal attributes of human cognition. But prejudiced attitudes tend to represent the explicit rather than implicit side of self-deception. For example, as a middle-aged white-ish male, I can deceive myself into thinking that I was unfairly overlooked for a promotion because I am neither black nor female. I am explicitly drawing on my prejudices to boost my fragile self-esteem. When I was growing up, I embraced the delusional belief that being Argentinean was far better than being Chilean or Brazilian, even though I never fully managed to believe this. Perhaps this means that people can be explicitly prejudiced but implicitly open-minded, too?
There is only a weak relationship between attitudes and behaviors
Contrary to popular belief, our beliefs and attitudes are not strongly related to our behaviors. Psychologists have known this for over a century, but businesses seem largely unaware of it. Organizations care a great deal about employee attitudes, both good and bad. That’s only because they assume attitudes are strong predictors of actual behaviors, notably job performance.
However, there is rarely more than 16% overlap (correlation of r = 0.4) between attitudes and behavior, and even lower for engagement and performance, or prejudice and discrimination. This means that the majority of racist or sexist behaviors that take place at work would not have been predicted from a person’s attitudes or beliefs. The majority of employees and managers who hold prejudiced beliefs, including racist and sexist views, will never engage in discriminatory behaviors at work. In fact, the overlap between unconscious attitudes and behavior is even smaller (merely 4%). Accordingly, even if we succeeded in changing people’s views—conscious or not—there is no reason to expect that to change their behavior.
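A quick sketch of the arithmetic behind these overlap figures: the percentage overlap is simply the squared correlation, known as the coefficient of determination. The helper function below is illustrative only, not drawn from any cited study:

```python
def shared_variance(r: float) -> float:
    """Coefficient of determination: the fraction of variance in behavior
    statistically shared with attitudes, given their correlation r."""
    return r ** 2

# A correlation of r = 0.4 between attitudes and behavior:
print(f"{shared_variance(0.4):.0%}")  # 16% overlap

# The ~4% overlap cited for unconscious attitudes corresponds to r ~ 0.2:
print(f"{shared_variance(0.2):.0%}")  # 4% overlap
```

In other words, even the stronger of the two correlations leaves the large majority of variance in behavior unexplained by attitudes.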
There is no accurate way to measure unconscious bias
Ever since Freud first argued that our behaviors are driven by unconscious (and deeply repressed) fantasies, there has been wide acceptance of the notion that we are all dominated by an inner dark side. Freud’s version of the unconscious has been discredited by science, but those who believe in it must undergo years of psychotherapy to tame their inner beast, or at least come to terms with their embarrassing self.
Another fantasy is the idea that advances in neuroscience and brain scanning technologies may somehow help us detect racism or sexism in the brain, so that we can objectively determine someone’s unconscious prejudices. Unfortunately, or perhaps fortunately, none of this is real. The closest science has come to measuring unconscious biases is via so-called Implicit Association Tests (IAT), like Harvard’s racism or sexism test. (Over 30 million people have taken it, and it is freely available online.) These tests assume that you can model prejudice by measuring whether you are quicker to respond positively to terms associated with women or men, or whether you are quicker at connecting positive words with being white or black.
Ingenious as this may sound, it’s easy to fake responses to these tests, and they have come under significant academic criticism for being weak predictors of actual behaviors. For example, on race questions (black vs. white), the reported meta-analytic correlations range from 0.15 to 0.24, which means the test accounts for only about 2% to 6% of the variance in actual behavior.
Furthermore, if the test tells you what you already knew, then what is the point of measuring your implicit or unconscious biases? And if it tells you something you didn’t know, and do not agree with, what next? Suppose you see yourself as open-minded (non-racist and nonsexist), but the test determines that you are prejudiced. What should you do? For instance, estimates for the race IAT suggest that 50% of black respondents come up as racially biased against blacks.
Research shows that trying to avoid implicit stereotyping actually makes people project their biases, through overcompensation or unsuccessful thought suppression. Try not to think of a white bear, and you will probably think of a white bear. Try to ignore that the job candidate you are interviewing is female or black, and you will probably think of nothing else.
And what are the ethical implications of labeling someone a racist or a sexist when they aren’t? Humans have a general tendency to label one another, but there are serious ethical concerns when we do so in the name of science, backed only by flimsy evidence and a disputed scientific tool.
It’s hard to change people’s beliefs, especially when they don’t want to
The hardest thing to influence through any D&I initiative is how people feel about concepts such as gender or race. A comprehensive scientific review of 260 studies concluded: “Someone who is prejudiced against African Americans before taking diversity training may experience a positive shift in attitudes and become less prejudiced. Yet, their attitudes may shift back closer to what they were pretraining in response to media accounts of riots and unrest.” Other systematic reviews of diversity training concluded: “The positive effects of diversity training rarely last beyond a day or two, and a number of studies suggest that it can activate bias or spark a backlash.”
It’s clear that in order to evaluate the effectiveness of any intervention focused on changing people’s biases, attitudes, or beliefs, we need a before-and-after comparison. In the case of D&I interventions, when people volunteer to sign up for training, we would expect them to be open to the content and to amplify whatever benevolent or prosocial attitudes they had to begin with. We can also expect them to be more curious and open-minded than those who don’t sign up. This is why voluntary training works better than mandatory training. However, training should not just work for those who need it the least.
Although these facts would appear to make for a gloomy conclusion, there is no reason to feel discouraged, let alone defeated. After all, what these facts indicate is that there might be better ways to reduce racism and improve D&I interventions.
The ultimate proof of ROI or training efficacy must come from the measures organizations obtain within their own cultures. That is, to find out whether your D&I approach works, you should look for evidence that it works. This may sound obvious, yet rigorous experimental evaluations of D&I training are rare not just among companies but also among researchers.
There are no clear-cut criteria, outcomes, or undisputed benchmarks to determine whether racism and other forms of prejudice and discrimination have been significantly mitigated by training. Companies will probably want to look at improvements in people’s perceptions of the culture (via surveys), measure inclusivity and diversity (recruitment, promotions, leadership representation), and track a decline in counterproductive work behaviors, including a reduction in formal lawsuits, claims, settlements, and other reputational fiascos.
Whatever metrics are chosen, they will certainly need to go beyond what is by far the most common criterion for judging the efficacy of D&I interventions today: asking participants whether they enjoyed the training. In the case of implicit bias training, that criterion is rather ironic.
Finally, let’s not forget that pointing the finger at individual employees for their biases, whether unconscious or not, may reflect an attempt by the organization to dodge its own accountability. Yes, individuals come with biases, and we should try to contain them to avoid serious and shameful antisocial or counterproductive work behaviors, such as racism, and to eliminate toxicity from the workplace.
And yes, it is also true that there is bias at the level of society and culture, in any society and culture, which companies may not be able to change. But leaders must think harder about shaping their own organizational cultures, creating truly ethical, fair, and inclusive environments, and acting as real-life examples of the prosocial and moral values they want to promote in their own organizations.