Let me tell you a story about strawberry jam. In 1991, the psychologists Timothy Wilson and Jonathan Schooler decided to replicate a Consumer Reports taste test that carefully ranked forty-five different jams. Their scientific question was simple: Would random undergrads have the same preferences as the experts at the magazine? Did everybody agree on which strawberry jams tasted the best?
Wilson and Schooler took the 1st, 11th, 24th, 32nd, and 44th best-tasting jams (at least according to Consumer Reports) and asked the students for their opinions. In general, the preferences of the college students closely mirrored the preferences of the experts. Both groups thought Knott’s Berry Farm and Alpha Beta were the two best-tasting brands, with Featherweight a close third. They also agreed that the worst strawberry jams were Acme and Sorrel Ridge. When Wilson and Schooler compared the preferences of the students with those of the Consumer Reports panelists, they found a statistical correlation of .55. When it comes to judging jam, we are all natural experts. We can automatically pick out the products that provide us with the most pleasure.
But that was only the first part of the experiment. The psychologists then repeated the jam taste test with a separate group of college students, only this time they asked the students to explain why they preferred one brand over another. As the undergrads tasted the jams, they filled out written questionnaires, which forced them to analyze their first impressions and to consciously explain their impulsive preferences. All this extra analysis seriously warped their jam judgment. The students now preferred Sorrel Ridge—the worst-tasting jam according to Consumer Reports—to Knott’s Berry Farm, which was the experts’ favorite. The correlation plummeted to .11, which means there was virtually no relationship between the rankings of the experts and the opinions of these introspective students.
What happened? Wilson and Schooler argue that “thinking too much” about strawberry jam causes us to focus on all sorts of variables that don’t actually matter. Instead of just listening to our instinctive preferences, we start searching for reasons to prefer one jam over another. For example, we might notice that the Acme brand is particularly easy to spread, and so we’ll give it a high ranking, even if we don’t actually care about the spreadability of jam. Or we might notice that Knott’s Berry Farm has a chunky texture, which seems like a bad thing, even if we’ve never really thought about the texture of jam before. But having a chunky texture sounds like a plausible reason to dislike a jam, and so we revise our preferences to reflect this convoluted logic.
And it’s not just jam: Wilson and others have since demonstrated that the same effect can interfere with our choice of posters, jelly beans, cars, IKEA couches and apartments. We assume that more rational analysis leads to better choices but, in many instances, that assumption is exactly backwards.
These studies represent an important reevaluation of the human reasoning process. Instead of celebrating our analytical powers, these experiments document our foibles and flaws. They explore why human reason can so often lead us to believe blatantly irrational things, or why it’s reliably associated with mistakes like cognitive dissonance or confirmation bias. And this leads me to a wonderful new paper by Hugo Mercier and Dan Sperber (I found it via this insightful talk by Jonathan Haidt) that marshals a wide range of evidence – such as the strawberry jam study above – to argue that human reason has nothing to do with finding the truth, or locating the best alternative. Instead, it’s all about argumentation. Here’s their abstract:
Reasoning is generally seen as a means to improve knowledge and make better decisions. Much evidence, however, shows that reasoning often leads to epistemic distortions and poor decisions. This suggests rethinking the function of reasoning. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given human exceptional dependence on communication and vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively with the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow the persistence of erroneous beliefs. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all of these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: look for arguments that support a given conclusion, and favor conclusions in support of which arguments can be found.
Needless to say, this new theory paints a rather bleak portrait of human nature. Ever since the Ancient Greeks, we’ve defined ourselves in terms of our rationality, the Promethean gift of reason. It’s what allows us to make sense of the world and uncover all sorts of hidden truths. It’s what separates us from other Old World primates. But Mercier and Sperber argue that reason has nothing to do with reality. Instead, it’s rooted in communication, in the act of trying to persuade other people that what we believe is true. And that’s why thinking more about strawberry jam doesn’t lead to better jam decisions. What it does do, however, is provide us with more ammunition to convince someone else that the chunky texture of Knott’s Berry Farm is really delicious, even if it’s not.
The larger moral is that our metaphors for reasoning are all wrong. We like to believe that the gift of human reason lets us think like scientists, so that our conscious thoughts lead us closer to the truth. But here’s the paradox: all that reasoning and confabulation can often lead us astray, so that we end up knowing less about what jams/cars/jelly beans we actually prefer. So here’s my new metaphor for human reason: our rational faculty isn’t a scientist – it’s a talk radio host. That voice in your head spewing out eloquent reasons to do this or do that doesn’t actually know what’s going on, and it’s not particularly adept at getting you nearer to reality. Instead, it only cares about finding reasons that sound good, even if the reasons are actually irrelevant or false. And this is why it’s so important to be aware of our cognitive limitations. Unless we take our innate biases into account, the blessing of human reason can easily become a curse.