Blatant bias and blood libel

Biases are, arguably, experimental psychology’s best export. Many a psychologist has built a successful career exploring, cataloguing, and attempting to explain the myriad biases supposed to plague human cognition (for a taste, see the Wikipedia list of cognitive biases).

This is not a healthy development. It has helped spread a reign of error [1] in psychology, fed by ‘gotcha experiments’ suggesting that humans are broadly irrational and quite a bit dumber than, say, rats. On the contrary, human cognition is extraordinarily efficient and adaptive—not to pat ourselves on the back too much, but, cognitively, we’re pretty dope. With a keen sense of irony, Gerd Gigerenzer, one of the stalwarts of human rationality, has decried a bias bias [2,3] that mistakes adaptive heuristics for biases.

One of the most often decried biases is the ‘belief bias’ (sometimes also called confirmation bias), which is said to taint the way we acquire information, making us more inclined to believe information that fits our preconceived ideas and to reject information that doesn’t. As you can imagine, such a bias could lead to polarization—as supposedly demonstrated in a classic 1979 experiment by Lord, Ross, and Lepper. In their own words,

“subjects supporting and opposing capital punishment were exposed to two purported studies, one seemingly confirming and one seemingly disconfirming their existing beliefs about the deterrent efficacy of the death penalty. As predicted, both proponents and opponents of capital punishment rated those results and procedures that confirmed their own beliefs to be the more convincing and probative ones, and they reported corresponding shifts in their beliefs as the various results and procedures were presented. The net effect of such evaluations and opinion shifts was the postulated increase in attitude polarization.” [4]

Their demonstration has recently come under attack. A few years back, Andrew Guess and Alexander Coppock [5] performed a Bayesian analysis of the experiment, showing that participants were behaving rationally enough. After all, taking priors into account is a very sensible thing to do, and it needn’t lead to any epistemically hazardous outcomes. More recently, Ben Tappin and Stephen Gadsby, using similar methods [6], have questioned other results taken to demonstrate belief bias.
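
To see how rational updating can coexist with growing disagreement, here is a minimal sketch in Python (with made-up numbers; this is an illustration of the general point, not Guess and Coppock’s [5] actual model): two agents apply the same Bayes’ rule to the same piece of evidence, yet the gap between their beliefs widens simply because they started from different priors.

```python
# Toy illustration (not Guess & Coppock's actual analysis): two agents with
# different priors about H ("the death penalty deters crime") update on the
# same piece of pro-H evidence using textbook Bayes' rule.

def update(prior, likelihood_ratio):
    """Posterior P(H | e), given the likelihood ratio P(e | H) / P(e | not-H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

proponent, opponent = 0.50, 0.10    # hypothetical priors
lr = 3.0                            # hypothetical evidence mildly favouring H

post_proponent = update(proponent, lr)   # 0.75
post_opponent = update(opponent, lr)     # 0.25

print("gap before:", proponent - opponent)             # 0.40
print("gap after: ", post_proponent - post_opponent)   # 0.50
# Both agents moved towards H, yet the distance between their beliefs grew.
```

The numbers and the mechanism are mine; the broader point, as in the post above, is simply that taking priors seriously can make perfectly sensible updating look like bias.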

I love that stuff (indeed, in a forthcoming book, I defend exactly this stance regarding belief bias, so big thanks to the guys who did the math). However, in our work on human reason [7], Dan Sperber and I have defended the existence of a myside bias: a tendency to find reasons that support our priors. Even though we claim this bias is an adaptive feature of reason, it remains a bias in the statistical sense, in that, on its own, it leads to poor epistemic outcomes (we believe it’s fine because, in the right context—proper group discussion in particular—these bad outcomes turn into good ones).

In our book, we take the example of Bertillon, the respected criminal-identification expert who went bonkers attempting to frame Dreyfus. The whole case against Dreyfus rested on a piece of paper—a bordereau—written by a French officer spying for the Germans. Bertillon, hired to ascertain whether (i.e. prove that) the handwriting was Dreyfus’s, came up with a jewel of sophistry. Allow me to quote at length:

Bertillon’s mind works tirelessly with a single purpose: proving that Dreyfus wrote the bordereau. Here’s what he has to work with: two letters—the bordereau and a sample of Dreyfus’s writing—that have some similarities but also marked differences. These differences are sufficient for real experts to conclude that the two letters have not been written by the same person. But Bertillon is smarter than that. Only by imagining what clever deceptions Dreyfus has devised will this connoisseur of the criminal mind be able to prove the traitor’s guilt.

Bertillon wonders: What kind of spy would write such a compromising message in his own hand? (The real spy, as it turns out, but no one knows this yet.) In Bertillon’s mind Dreyfus, a spy, and a Jew to boot, is too shrewd to make such a glaring mistake. He must have disguised his hand. This explains the differences between Dreyfus’s normal writing and the bordereau.

But now Bertillon has another problem: How to account for the similarities? Why hasn’t that shrewd spy simply used a completely different writing? To answer this question Bertillon comes up with his chef-d’œuvre, the keystone of his system: the theory of the auto-forgery.

Imagining what a shrewd spy might do, Bertillon realizes that transforming one’s writing would work only if the potentially incriminating document were found in a non-incriminating place. Then Dreyfus could use the disparities to claim that he was not the author of the bordereau. However, if the letter were discovered on Dreyfus’s person or in his office, he could not simply claim that it wasn’t his. Instead, this master of deception would have to say that he was being framed, that someone had planted the bordereau. But if someone were to try to frame Dreyfus, surely they would be careful to reproduce his handwriting. And so Dreyfus set out to imitate his own handwriting—he engaged in auto-forgery.

Whether the bordereau matches Dreyfus’s handwriting or not, it points to Dreyfus’s guilt. I believe Bertillon managed to outsmart any attempt to make his reasoning sound, by Bayesian means or otherwise.
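
A quick way to see why (my gloss, not the book’s): call G “Dreyfus wrote the bordereau” and M “the handwriting matches”. By the law of total probability, the prior P(G) is a weighted average of P(G | M) and P(G | not M), so the two conditionals cannot both exceed the prior. If a match and a mismatch are treated as equally damning, as Bertillon treats them, the handwriting comparison carried no evidential weight at all; the verdict was baked into the prior. A minimal numerical check in Python:

```python
# My gloss on the Bertillon problem, not a passage from the book.
# G = "Dreyfus wrote the bordereau", M = "the handwriting matches".
# Law of total probability: P(G) = P(G|M) * P(M) + P(G|not M) * P(not M),
# so the prior is a weighted average of the two conditionals and they
# cannot both exceed it.

p_match = 0.6                 # P(M); any value in (0, 1) gives the same lesson
p_guilt_if_match = 0.9        # P(G | M)     -- "the writing is his"
p_guilt_if_mismatch = 0.9     # P(G | not M) -- "he disguised it / auto-forgery"

prior = p_guilt_if_match * p_match + p_guilt_if_mismatch * (1 - p_match)
print(prior)   # 0.9 (up to floating point): if both conditionals are 0.9,
               # the prior was already 0.9, so neither outcome of the
               # handwriting comparison could have been evidence at all.
```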

The point of this post is to mention that I found a much earlier version of a similar reasoning while listening to the lovely podcast Medieval Death Trip (medievaldeathtrip.com). Episode 11 reads from The Life and Miracles of St. William of Norwich [8], which has the sad distinction of describing the first instance of blood libel in English history. In 1144, a twelve-year-old boy was found dead in Norwich, and the local Jewish community was accused of ritual murder. How did the folks of Norwich know the Jews were to blame? The body exhibited marks of torture. In particular, one hand and one foot showed signs of having been pierced with a nail, pointing to the (alleged) Jewish practice of mock crucifixion. But why only one hand and one foot? The good people of Norwich aren’t fooled. Had the signs of crucifixion been perfect, everybody would have believed the Jews had done it. And so the Jews attempted to hide their mischief by making it look like a mere half-crucifixion. Such an attempt to conceal the true nature of the crime is further proof of the Jews’ guilt — them being so devious and all that. This is very Bertillonesque reasoning: if it looks like a crucifixion, the Jews did it; if it doesn’t quite look like a crucifixion, the Jews did it even more.

It can be argued (as we have) that the cognitive mechanisms giving rise to such reasoning are broadly adaptive, and yield, on the whole, epistemically sound outcomes. Yet they do so by being biased, and when the biases aren’t compensated for in some way—for example, by someone pointing out how stupid this reasoning is—they can precipitate epistemically, and sometimes practically, disastrous results. People really can be biased, and the long history of antisemitism offers a depressing treasure trove of evidence.


[1] Kruger, J., & Savitsky, K. (2004). The “reign of error” in social psychology: On the real versus imagined consequences of problem-focused research. Behavioral and Brain Sciences, 27(3), 349-350.

[2] Brighton, H., & Gigerenzer, G. (2015). The bias bias. Journal of Business Research, 68(8), 1772-1784.

[3] Gigerenzer, G. (2018). The bias bias in behavioral economics. Review of Behavioral Economics, 5(3-4), 303-336.

[4] Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109.

[5] Guess, A., & Coppock, A. (2015). Back to bayes: Confronting the evidence on attitude polarization. Unpublished Paper, Yale University.

[6] Tappin, B. M., & Gadsby, S. (2019). Biased belief in the Bayesian brain: A deeper look at the evidence. Consciousness and Cognition, 68, 107-114.

[7] Mercier, H., & Sperber, D. (2017). The enigma of reason. Harvard University Press.

[8] Jessopp, A., & James, M. R. (Eds.). (1896). The life and miracles of St. William of Norwich. Cambridge University Press.

3 Comments

  • Carles Salazar 29 January 2019 (12:19)

    Does argumentative reason always lead to optimal results?
    That’s an excellent post, a sound defence of the argumentative theory of reason. I really enjoyed Hugo and Dan’s theory of human reason and I think it is an insightful approach to this slippery topic of human rationality. But I wonder why they do not push their argument to its, to my mind, logical but less optimistic conclusion. If the evolutionary logic of human reason is social rather than cognitive, that is, if humans make use of reason in order to convince others rather than to achieve a better result, why do Hugo and Dan insist (and Hugo’s post provides a nice example of this position) that argumentative reason leads, in the end, to some form of overall cognitive enhancement, for which the interactionist procedure seems to constitute a necessary ingredient? In other words, if I understood them correctly, argumentative reason leads in the end to better decisions precisely because it takes place in a social milieu in which all positions can be freely discussed and evaluated. But is this really always the case? Why should the need to convince others, or the need to find ‘good reasons’ for one’s actions, always lead to an optimal decision? Let me bring in the example of family businesses. It is a real mystery for rational choice theory why family businesses have been so successful in the history of capitalism. If decisions in a family business are systematically biased towards non-rational or non-economic objectives under the pressure of kinship obligations, hierarchies and emotional bonds, how is it that they have not been wiped out of the market by their more rational competitors? Different perspectives on decision-making in family businesses have tried to tackle this issue. But an interesting hypothesis, which seems to contradict Hugo and Dan’s optimistic conclusion, is that better decisions are made in family businesses than in non-family businesses precisely because in the former decision-makers do not have to submit ‘reasons’ for their behaviour to an executive board of objective, rational calculators, at the risk of losing their jobs if those reasons do not sound convincing enough. It is very often the case, in business as in other aspects of life, that experienced people have a hunch as to what the best course of action could be, one that cannot easily be explained in strictly rational terms. Thus, it is the fact that executives in non-family businesses are more concerned with convincing their colleagues or their bosses of the soundness of their decisions than with making the genuinely right (but hard-to-justify) decision that puts them at a disadvantage relative to their family-business competitors. Notice that this does not contradict the argumentative theory of reason per se, but only the optimistic conclusion that argumentation always leads to a better decision.

  • Hugo Mercier 29 January 2019 (13:48)

    Evaluation matters
    Thank you for your comment, and for the opportunity to clarify a crucial point about our theory. As you say, we claim that reason evolved because it serves social ends, such as convincing others. However, under our theory, reason would also have evolved because it allows us to evaluate the reasons provided by others, so as to be convinced only by good enough reasons. What explains why reason often works better in social, dialogic contexts than in solitary contexts is largely the fact that more evaluation goes on in the former, as people evaluate each other’s reasons (while they don’t really evaluate their own). Some more details are provided here: https://43e24fb9-a-62cb3a1a-s-sites.googlegroups.com/site/hugomercier/Mercier%20The%20Argumentative%20Theory-%20Predictions%20and%20Empirical%20Evidence.pdf?attachauth=ANoY7co3qQdSFDFGUFZ1Sn-f7gcL0Vk1mbVHIx3S7-eWoj4FzrTKxvJL7OabcNs827Tckmgd5mGPNANTAdlTRQfraoRU-Aad3NhGVvqoAtKqUnMo2DrVYf798hvYBZyJPuKq3gCTwLX8ZM4U-h-TNPe0NyfOqhYwBgmHUjrgRMVGSGgBRAoDcxpVSwsuvWH0l_etB8GBs58a-U4Yo_WprjBPMAVZmRQWz3xAz06D7f-hKBzCCZiYlKrED07Tm_RH1a-XxzT3lxv-9bBf3OGRxVOp2N6s1J-m3zCs9zCBixWWav9ugMYIQmM%3D&attredirects=0

    More generally, you’re right that most of the variation in decision making quality as a function of context will be down to people having better intuitions to start with, rather than being better able to use reason.

  • Burt 14 February 2019 (14:35)

    Mechanisms of Bias
    Back in the late 80s I began reading the literature on heuristics and biases as background for a course I was developing in scientific reasoning. I was particularly interested in the three high-level heuristics of representativeness, availability, and anchoring posited by Tversky and Kahneman. At the time, the literature seemed focused on how use of these heuristics led to errors in thinking when compared to logical and probabilistic methods of decision making. What I realized, however, was that these three heuristics were the necessary cognitive mechanisms for any expression of experience in language. That is, one must represent entities, have an available background of similar entities for comparison, and have an “anchoring” framework within which to assign meaning. So, the point was not to find better mechanisms but rather to learn how to use these necessary tools without falling into the sorts of cognitive illusions to which they were vulnerable. From this perspective I began a study of the history of science in terms of the development of means to use these heuristics while avoiding illusions. What appeared was a view in which, although dealing with each of the three heuristics was always important, there have been particular high-level “crisis” periods in the evolution of science in which a particular heuristic became the focus of concern. With the ancient Greeks, this was representativeness, with the necessity of discovering proper rules and criteria for reasoning categorically (resolved, roughly speaking, by Aristotle); then, during the scientific revolution, it was availability, with the necessity of developing rules and criteria for the evaluation of empirical data, leading to statistical tests and error analysis. The further projection of this is that today the main issue is with anchoring, with the need to develop rules and criteria for reasoning about different paradigmatic approaches (as in different cultural assumptions, etc.). At some point I hope to publish a book about this, although I keep getting distracted.