Exploiting the wisdom of others: A bumpy road to better decision making

(link to the article)

Ilan Yaniv & Shoham Choshen-Hillel

While decision makers often consult other people’s opinions to improve their decisions, they fail to do so optimally. One main obstacle to incorporating others’ opinions efficiently is one’s own opinion. We theorize that decision makers could improve their performance by suspending their own judgment. In one study, participants used others’ opinions to estimate uncertain quantities (the caloric value of foods). In the full-view condition, participants could form independent estimates prior to receiving others’ opinions, while participants in the blindfold condition could not form prior opinions. We obtained an intriguing blindfold effect such that the blindfolded participants provided more accurate estimates than did the full-view participants. Several policy-capturing measures indicated that the advantage of the blindfolded participants was due to their unbiased weighting of others’ opinions. The full-view participants, in contrast, adhered to their prior opinion and thus failed to exploit the information contained in others’ opinions. The results from these two conditions document different modes of processing and consequences for accuracy.
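A minimal numerical sketch can make the statistical logic behind the blindfold effect concrete. This is not the paper's data or analysis: the true value, noise level, number of advisors, and the 70/30 egocentric split below are all illustrative assumptions. It simply shows that when individual estimates are independent and roughly unbiased, an equal-weight average of one's own and others' opinions has a smaller error than an estimate that keeps most of the weight on one's own prior judgment.

```python
# Illustrative simulation (not from the paper): why unbiased, equal weighting of
# independent estimates tends to beat egocentric discounting.
# All numbers (true value, noise level, weights, sample sizes) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

TRUE_VALUE = 250.0      # e.g., the true caloric value of a food item
NOISE_SD = 60.0         # spread of individual (unbiased) estimates
N_ADVISORS = 3
N_TRIALS = 100_000

own = TRUE_VALUE + rng.normal(0, NOISE_SD, N_TRIALS)
advisors = TRUE_VALUE + rng.normal(0, NOISE_SD, (N_TRIALS, N_ADVISORS))

# Unbiased-weighting benchmark: simple average of one's own and the advisors' estimates.
equal_weight = (own + advisors.sum(axis=1)) / (N_ADVISORS + 1)

# Egocentric discounting: most of the weight (an arbitrary 70%) stays on one's own estimate.
egocentric = 0.7 * own + 0.3 * advisors.mean(axis=1)

def mae(estimates):
    """Mean absolute error relative to the true value."""
    return np.mean(np.abs(estimates - TRUE_VALUE))

print(f"own estimate alone : MAE = {mae(own):.1f}")
print(f"egocentric (70/30) : MAE = {mae(egocentric):.1f}")
print(f"equal weighting    : MAE = {mae(equal_weight):.1f}")
```

With these assumed numbers, the equal-weight average of four independent estimates roughly halves the error of a lone estimate, while the 70/30 egocentric weighting recovers only part of that gain.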

3 Comments

  • Hugo Mercier 9 May 2011 (17:39)

    Thanks a lot for a great paper! I’ve been very interested in egocentric discounting for a while, and this is a very neat demonstration of its potentially deleterious effects. I’m also inclined to accept your interpretation in terms of the accessibility of reasons. I would simply like to suggest that this explanation may be usefully complemented by an evolutionary one. When you say that preferential access to one’s own reasons for a given belief causes egocentric discounting, you’re giving an explanation at the proximate level. But one can still wonder why we have this bias; after all, it’s easy to imagine that it could be fixed: there is nothing computationally complex here. An evolutionary explanation of the same finding would be that people have an egocentric bias because communicated information is dangerous. While our own cognitive mechanisms work for our own good, other people can try to communicate misleading information. A priori, it makes good evolutionary sense to favor our own beliefs over communicated information, at least to some extent. Dan Sperber has written a lot about this (see, for instance, Epistemic Vigilance: http://www.dan.sperber.fr/wp-content/uploads/EpistemicVigilance.pdf).

    Moreover, since heavy discounting can lead to suboptimal decisions, evolution has found a way to improve on advice taking: argumentation (see this paper by Sperber and me, Why Do Humans Reason? Arguments for an Argumentative Theory: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1698090). As Julia Minson’s work shows, dyads are very good at aggregating opinions if they are able to exchange reasons (see her publications at http://opimweb.wharton.upenn.edu/people/faculty.cfm?id=459). Again, this explanation and yours are not mutually exclusive; on the contrary, they complement each other very well.

    Along similar lines: you mention at the beginning that people tend to aggregate many opinions and weight them as if they were only one. This clearly leads to poor outcomes in many cases, but could it also protect one against the possibility of dependent data points? After all, if everyone else got their ideas from the same source, it makes sense to treat their opinions as one. The exception would be if people held opinions that are very different from one another but also very different from that of the participant; there we should observe less aggregation. But it’s hard to see that on a numerical scale, because the advisors’ opinions are bound to bracket the participant’s (if they are not all on the same side, obviously), in which case it makes sense to provide a final answer close to one’s original answer. Maybe something different could be observed with problems that have more than two dimensions.

    Thanks again for the paper; I’m looking forward to having your views on this!

  • Ilan Yaniv 17 May 2011 (16:33)

    Thank you for your insightful comments, Hugo. Your point on the role of perceived dependence among advisory opinions is well taken. Incidentally, some of our previous experimental results speak exactly to this issue (please see “Spurious Consensus and Opinion Revision: Why Might People Be More Confident in Their Less Accurate Judgments?” http://psychology.huji.ac.il/.upload/Ilan/Yaniv_et_al_JEP.pdf). In that article we find, however, that people tend to underestimate the informative value of independently drawn opinions if these appear to conflict with one another, yet they overestimate the informative value of a “spurious consensus” – that is, of opinions that are sampled interdependently (i.e., from correlated sources). Thus that article suggests that, if anything, people tend to be (incorrectly) influenced more (rather than less) by correlated opinions (a small numerical sketch of this independence point appears after the comments below). It should be noted, though, that in our setting participants had no reason to assume any “conspiracy” among the advisors; thus the consensus opinions induced a degree of comfort and excessive confidence.

    This brings us to your main point, namely, that decision makers’ discounting arises from their distrust of their advisors. Interestingly, this hypothesis could be tested experimentally. Suppose decision makers are given the opinions of advisors who have been promised bonuses that depend on the decision makers’ achievements. If egocentric discounting is due to distrust, then such an incentive structure should eliminate the egocentric bias. This is indeed a worthy project and an intriguing research direction.

    One (a priori) reservation we have about explanations based on all-around suspicion, though, is that such an approach seems to violate the Gricean analysis of communication, whereby speakers and listeners are supposedly bound by the (implicit) cooperative principle. This raises a fundamental question about people’s basic assumptions in social interactions. In particular, under what conditions, do you think, would people trust (or cooperate, in the Gricean sense), and under what conditions would they suspect any communication or advice of being misleading?

    Ilan & Shoham

  • Hugo Mercier 18 May 2011 (15:01)

    Dear Ilan and Shoham, thank you for your answers. Thanks for pointing out this very relevant paper; it addresses my question about opinion aggregation perfectly.

    You make a very interesting suggestion in trying to find a situation in which the incentives of advisor and advisee would coincide perfectly. While this would make for a great experiment anyway, it’s not clear it would be as definitive as one might like regarding the evolutionary hypothesis. If it turned out that people still discounted advice in that case, one could argue that our minds are not equipped to deal with such rare situations as perfectly overlapping interests. To take an analogy, altruism in economic games is sometimes explained by the unnaturalness of the setting, the fact that our minds are not designed to cope with perfect anonymity. Maybe a more ‘evolutionary’ way to proceed would be to have close family members communicate on a topic about which their interests overlap (but even family members are often in conflict…). In any case, on evolutionary grounds one might predict a lower rate of advice discounting in such settings.

    Regarding Grice, Sperber made an argument that goes in the other direction. Instead of holding that everybody must be trustworthy because communication wouldn’t work without the maxim of quality (i.e., do not lie), he suggested that the maxim of quality cannot hold because people do communicate lies very effectively. That was one of the many reasons to abandon Grice’s framework. The argument is spelled out in (among other places) ‘Epistemic Vigilance’ (http://www.dan.sperber.fr/wp-content/uploads/EpistemicVigilance.pdf), which also addresses your last question: people are always vigilant, and it is because they are always vigilant that they can trust most of the time. Trust and vigilance are not opposites; on the contrary, the latter enables the former.

    But I would really advise you to read the paper; you may find many things of interest there (I hope you won’t discount this advice too much; I want to believe that researchers work together towards a common purpose 🙂).
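On the point about correlated opinions raised in the exchange above, here is a small Python sketch (illustrative assumptions only, not the authors' data or design). It shows why an average of a fixed number of advisory opinions is much more informative when the advisors err independently than when their errors largely come from a shared source; this is the statistical core of the “spurious consensus” point. The error correlation `rho`, noise level, and panel size are all made-up parameters.

```python
# Illustrative sketch (assumed numbers, not the authors' analysis): the same number
# of advisor opinions is worth less when the advisors' errors are correlated,
# e.g., because they draw on a shared source.
import numpy as np

rng = np.random.default_rng(1)

TRUE_VALUE = 250.0
NOISE_SD = 60.0
N_ADVISORS = 3
N_TRIALS = 100_000

def advisor_panel(rho: float) -> np.ndarray:
    """Advisor estimates whose errors have pairwise correlation `rho`;
    each advisor is equally (in)accurate on their own regardless of rho."""
    common = rng.normal(0, NOISE_SD, (N_TRIALS, 1))            # one shared error per trial
    private = rng.normal(0, NOISE_SD, (N_TRIALS, N_ADVISORS))  # independent errors
    return TRUE_VALUE + np.sqrt(rho) * common + np.sqrt(1 - rho) * private

for rho in (0.0, 0.5, 0.9):
    mean_opinion = advisor_panel(rho).mean(axis=1)
    mae = np.mean(np.abs(mean_opinion - TRUE_VALUE))
    print(f"error correlation {rho:.1f}: MAE of averaged advice = {mae:.1f}")
```

As the correlation rises, the error of the averaged advice grows toward that of a single advisor: at `rho` near 1 the three opinions are effectively one opinion, which is the intuition behind treating a consensus from correlated sources with caution.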