Truth is not always the point
Thank you, Alberto, for writing this book! In addition to being a captivating read, it shed light on some thoughts I had about how people share information on social media, and what truth has to do with it. I outline these thoughts here, hoping that they can be refined in the general discussion.
One of the objectives of Alberto’s book is to show that one can have a reading of digital phenomena very different from the usual one, which emphasises the difficulty of keeping a critical mind in a world overwhelmed by information of uneven quality. And indeed, the book gathers a lot of convincing evidence against the fashionable idea that, for lack of time and critical thinking, humans of the 21st century are, in general, more vulnerable to misinformation. If an idea successfully spreads on the web, it is because favourable ground was present in the first place, not because we readily accept information encountered on social media (as argued by Hugo Mercier in “Not Born Yesterday”). This also applies to misinformation. If someone shares a piece of fake news (for example, that a study demonstrates the toxicity of a vaccine), it is probably because this particular information resonates with her priors and values – which simply means that she endorses at least some aspects of the information, not that she doesn’t care about truth.
We have been repeatedly told that we have entered an era of post-truth, in which truth no longer matters and political debates are based on fake stories. But truth has not become an obsolete value. Even Trump and his supporters, who rely massively on misinformation, accuse their opponents and the mass media of lying and of intentionally reporting falsehoods to deceive the public. However, the accuracy of a piece of information is not always on the radar. We certainly want to appear as interesting as we can, and we don’t want to share things we know to be of poor quality. But, as Alberto points out, the quality of an idea or a piece of information doesn’t always lie in its truth value: “We are not sensitive to an abstract notion of truth, but to various cues that point to the importance of the context and which may be associated only on average with truthfulness” (p. 184).
Sometimes the truth value of a piece of information is what makes it valuable. Mainstream media and political, scientific, and educational institutions are expected to relay true information in an objective way, and I think my friends on Facebook expect me to do the same on my wall (they would probably tell me if I were to spread any fake news). An article from The Onion is funny (and relevant) even if it’s not true, but when a politician falls for it, the fact that it is fake makes it even funnier. Truth may be a necessary condition for the relevance of a piece of news, but it is never a sufficient one. As Wilson & Sperber (2002) stressed, verbal communication is governed not by expectations of truthfulness but by expectations of relevance.
Sometimes people share information not in order to draw attention to the truth of particular facts, but rather to highlight the relevance of the ideas these facts may serve to illustrate. In this case, their intention is to point to an idea or an attitude and make it more manifest through this illustration. Most people who share misinformation on the internet don’t do it with the intention of fooling others, nor do they “forget” to be careful – although misinformation can obviously slip through the net of our epistemic vigilance. False information can highlight a particular feature of the world, which can be informative in itself: Se non è vero, è ben trovato! [Even if it is not true, it is well conceived], as they say in Italy.
In a recent study, Pennycook and colleagues suggested that individuals spread false news about Covid-19 because they failed to think sufficiently about whether the content was accurate when deciding what to share. Participants were indeed worse at discerning true from false content when deciding what they would share on social media than when they were asked directly about accuracy. This is an interesting result, but an alternative explanation is that, while people are generally able to tell the difference between true and false information, they don’t necessarily share and interpret information on social media with an assumption of accuracy. It’s not that we don’t care about truth, but we also use the internet to exchange views, opinions and attitudes. When caught spreading misinformation, we may feel embarrassed because we care about our reputation. But more often than not, accuracy was not really the point.
The Internet has promoted a new culture of communication, in which the attitude (say, derision) towards the content that is shared can matter more than the content itself, even if that content is a recycled meme, a bad joke or just old news. For instance, a clever status accompanying the sharing of a piece of fake news can be of no informational value, and yet be seen as relevant and witty. Because content can be produced and shared at minimal cost – it is possible to share an article without reading it – sharing content can be interpreted in multiple ways (as endorsing its point of view, as pointing to its existence, or as a joke, a hyperbole, and so on). As a consequence, content shared on social media often doesn’t come with a strong presumption of truthfulness, but rather with the expectation of conveying a relevant attitude on some issue.
The relevance and attractiveness of a piece of information depend not only on its content, but also on the values with which it is associated. When information is shared on social media, higher-level ideas such as political opinions can be implicitly conveyed as well. For example, a news item reporting evidence of the toxicity of a vaccine is closely associated with the idea that the interests of “Big Pharma” often go against the well-being of people. I wonder whether social media actually contribute to reinforcing the link between social and political values on the one hand and, on the other, specific stances on concrete issues, precisely because they encourage this kind of association. Dissociating one’s opinion on a scientific issue from the political set of ideas it is generally associated with is not an easy task, and social media might make it even more difficult.
We are obviously able to switch to a more local accuracy mode. In real life, we don’t fact-check everything – it would be very impolite to begin with – but we do so when it is relevant to us. We can focus our attention on the accuracy of some information (whether it is the efficacy of a treatment against Covid-19 or the spelling of a given word) when needed to improve our model of the world so as to make it both practically and socially functional – or when we want to prove that we are right, which is always a nice feeling.
The logical conclusion of Cultural Evolution in the Digital Age is that we should not worry too much about online misinformation. I’m happy with this positive conclusion, but it makes me even more curious about how humans so readily switch from being wary, hard-to-convince learners to being overly generous towards ideas that fit the values they are committed to uphold.