We all know people who have suffered by trusting too much: scammed customers, jilted lovers, shunned friends. Indeed, most of us have been burned by misplaced trust. These personal and vicarious experiences lead us to believe that people are too trusting, often verging on gullibility.
In fact, we don’t trust enough.
Take data about trust in the United States (the pattern would hold in most other wealthy democracies). Interpersonal trust, a measure of whether people think others are in general trustworthy, is at its lowest in nearly 50 years. Yet it is unlikely that people have actually become less trustworthy: the massive drop in crime over recent decades suggests the opposite. Trust in the media is also at its lowest levels, even though mainstream media outlets have an impressive (if not unblemished) record of accuracy.
Meanwhile, trust in science has held up comparatively well, with most people trusting scientists most of the time. Still, in some areas at least, from climate change to vaccination, a share of the population doesn’t trust science enough, with devastating consequences.
Social scientists have a variety of tools to study how trusting, and how trustworthy, people are. The most popular is the trust game, played between two participants, usually anonymously. The first participant is given a small sum, say $10, and decides how much of it to transfer to the second. The amount transferred is then tripled, and the second participant chooses how much to give back to the first. In Western countries at least, trust is rewarded: the more money the first participant transfers, the more money the second participant sends back, and thus the more money the first participant ends up with. In spite of this, first participants on average transfer only half of their endowment. In some studies, a variant was introduced in which participants knew each other’s ethnicity. Prejudice led participants to mistrust certain groups—Israeli men of Eastern origin (Asian and African immigrants and their Israeli-born offspring), or black students in South Africa—transferring them less money, even though these groups proved just as trustworthy as more esteemed groups.
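The arithmetic of the trust game is easy to sketch in code. The following is a minimal illustration, not drawn from any of the studies mentioned; the function name and the even-split example are my own assumptions.

```python
# Toy model of the trust game described above. The $10 stake and the
# tripling of the transfer follow the article; everything else is a
# simplifying assumption for illustration.

def trust_game(endowment, sent, returned_fraction, multiplier=3):
    """Compute both participants' final payoffs.

    sent: amount the first participant transfers (0..endowment).
    returned_fraction: share of the tripled transfer that the second
    participant sends back (0..1).
    """
    pot = sent * multiplier              # the transfer is tripled
    returned = pot * returned_fraction   # second participant's give-back
    first = endowment - sent + returned  # keeps the rest, receives the return
    second = pot - returned
    return first, second

print(trust_game(10, 10, 0.5))  # full trust, even split -> (15.0, 15.0)
print(trust_game(10, 0, 0.5))   # no trust at all       -> (10.0, 0.0)
```

With a $10 endowment, full trust and an even split of the tripled pot leave both players with $15, while withholding trust leaves the first player with only the original $10 and the second with nothing.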
If people and institutions are more trustworthy than we give them credit for, why don’t we get it right? Why don’t we trust more?
In 2017, the social scientist Toshio Yamagishi was kind enough to invite me to his flat in Machida, a city in the Tokyo metropolitan area. The cancer that would take his life a few months later had weakened him, yet he retained a youthful enthusiasm for research, and a sharp mind. On this occasion, we discussed an idea of his with deep consequences for the question at hand: the informational asymmetry between trusting and not trusting.
When you trust someone, you end up figuring out whether your trust was justified or not. An acquaintance asks if he can crash at your place for a few days. If you accept, you will find out whether or not he’s a good guest. A colleague advises you to adopt a new software application. If you follow her advice, you will find out whether the new software works better than the one you were used to.
By contrast, when you don’t trust someone, more often than not you never find out whether you should have trusted them. If you don’t invite your acquaintance over, you won’t know whether he would have made a good guest or not. If you don’t follow your colleague’s advice, you won’t know if the new software application is in fact superior, and thus whether your colleague gives good advice in this domain.
This informational asymmetry means that we learn more by trusting than by not trusting. Moreover, when we trust, we learn not only about specific individuals; we learn more generally about the type of situations in which we should or shouldn’t trust. We get better at trusting.
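This asymmetry can be made concrete with a toy simulation. Assume (purely for illustration; none of these numbers come from Yamagishi’s work) a hidden base rate of trustworthy strangers, and an agent that observes an outcome only when it chooses to trust:

```python
# Toy simulation of the informational asymmetry described above.
# All numbers are illustrative assumptions, not data from the studies cited.
import random

random.seed(42)
TRUE_RATE = 0.8  # assumed (hidden) share of trustworthy strangers

def estimate_trustworthiness(trust_threshold, rounds=5000):
    """Estimate how trustworthy strangers are, learning only from
    the encounters in which the agent actually trusted."""
    good = observed = 0
    for _ in range(rounds):
        impression = random.random()         # noisy, uninformative cue
        if impression >= trust_threshold:    # decide to trust
            observed += 1
            if random.random() < TRUE_RATE:  # partner proves trustworthy
                good += 1
        # declining to trust yields no feedback at all
    return (good / observed if observed else None), observed

trusting_estimate, trusting_n = estimate_trustworthiness(0.2)   # trusts often
wary_estimate, wary_n = estimate_trustworthiness(0.95)          # rarely trusts
print(trusting_n, wary_n)  # the trusting agent collects far more feedback
```

Because the wary agent rarely trusts, it accumulates few observations, and its estimate of how trustworthy people are stays noisy; the trusting agent’s estimate converges on the true rate. That is the sense in which trusting more makes us better at trusting.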
Yamagishi and his colleagues demonstrated the learning advantages of being trusting. Their experiments were similar to trust games, but participants could interact with each other before deciding whether to transfer money. The most trusting participants were better at figuring out who would prove trustworthy, and thus to whom they should transfer money.
We find the same pattern in other domains. People who trust the media more are more knowledgeable about politics and the news. The more people trust science, the more scientifically literate they are. Although this evidence is correlational, it makes sense that people who trust more should get better at figuring out whom to trust. In trust as in everything else, practice makes perfect.
Yamagishi’s insight provides us with a reason to be trusting. But then the puzzle only deepens: If trusting provides such learning opportunities, we should trust too much, rather than not enough. Ironically, the very reason why we should trust more—the fact that we gain more information from trusting than from not trusting—might make us inclined to trust less.
When our trust is disappointed—when we trust someone we shouldn’t have—the costs are salient, and our reaction ranges from annoyance all the way to fury and despair. The benefit—what we’ve learnt from our mistake—is easy to overlook. By contrast, the costs of not trusting someone we could have trusted are, as a rule, all but invisible. We don’t know about the friendship we could have struck up (if we’d let that acquaintance crash at our place). We don’t realize how useful some advice would have been (had we followed our colleague’s tip about the new software application).
We don’t trust enough because the costs of mistaken trust are all too obvious, while the (learning) benefits of mistaken trust, as well as the costs of mistaken mistrust, are largely hidden. We should consider these hidden costs and benefits: Think of what we learn by trusting, the people whom we can befriend, the knowledge that we can gain.
Giving people a chance isn’t only the moral thing to do. It’s also the smart thing to do.
This article was originally published at Aeon and has been republished under Creative Commons.