The cost of collaboration

The cost of collaboration: Why joint decision-making exacerbates rejection of outside information

(link to the article)

Julia Minson & Jennifer Mueller

Existing research asserts that specific group characteristics cause members to disregard outside information, which leads to diminished performance. In the present study we demonstrate that the very process of making a judgment collaboratively rather than individually contributes to such myopic disregard of external viewpoints. Dyad members exposed to the numerical judgments made by another dyad gave significantly less weight to those judgments than did individuals exposed to the judgments of another individual. The difference in the willingness to use peer input shown by individuals versus dyads was fully mediated by the greater confidence that the dyad members reported in the accuracy of their estimates. Consequently, although dyad members made more accurate initial estimates than individuals, they were less able to benefit from peer input.
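The abstract does not spell out how "weight" on the peer judgments is measured. In the advice-taking literature the standard index is the weight of advice (WOA): the fraction of the distance from the judge's initial estimate to the advisor's estimate that the final estimate moves. Below is a minimal Python sketch under that assumption; the function and the numbers are illustrative, not taken from the paper.

    # Standard weight-of-advice (WOA) index from the advice-taking literature;
    # assumed here for illustration -- the paper may operationalize weighting differently.
    def weight_of_advice(initial: float, advice: float, final: float) -> float:
        """Fraction of the distance toward the advice that the judge moved:
        0 = advice ignored, 0.5 = the two estimates averaged, 1 = advice fully adopted."""
        if advice == initial:
            raise ValueError("WOA is undefined when the advice equals the initial estimate")
        return (final - initial) / (advice - initial)

    # Illustrative numbers: a dyad first estimates 120, sees a peer dyad's estimate
    # of 100, and revises to 115 -- it moved a quarter of the way toward the advice.
    print(weight_of_advice(initial=120.0, advice=100.0, final=115.0))  # prints 0.25

If this is the measure, the finding in the abstract is that dyads' WOA is significantly lower than individuals'.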

2 Comments

  • Hugo Mercier 22 May 2011 (23:27)

    Thank you for a great paper! I think your results fit very well with Yaniv’s explanation for the discounting of advice. His point is that people discount advice because they can easily find reasons for their own opinion but not for that of other people. This is presumably why dyads who can discuss perform much better than individuals: through argumentation, they are able to sift through each other’s reasons and focus on the best ones. But once a dyad is exposed to outside advice, the phenomenon that happened within individuals happens in the group: neither member finds it easy to come up with reasons against their solution or for the advice. And with two of them, they are even better at finding reasons to discount the advice.

    Maybe things would turn out a bit differently if the advice were given to the dyad prior to discussion. Even on a one-dimensional scale, several things could happen. If the two dyad members have substantially different opinions, the advice will either fall between those of the two participants, in which case it would probably have had little effect anyway, or it will fall squarely on the side of one participant, in which case it may help the participants determine which of their initial answers is closer to the truth. In that case, the advice may carry more weight than if it were given afterwards, since it is endorsed by one of the dyad members. Even if both dyad members start from similar opinions and the advice is very different, they may move more toward the advice if they have to take it into account before they can realize they agree with their dyad partner.

    This reminds me of individual vs. group polarization. People reasoning on their own, as well as groups of like-minded people, are known to polarize. It seems that the confirmation bias (the relative ease with which we find reasons supporting our own opinion as opposed to those of others) can explain both polarization and your results. However, I don’t know of any study directly comparing individual to group polarization. In light of your results, I wouldn’t be surprised if group polarization were stronger than individual polarization.

  • Julia Minson 31 May 2011 (15:56)

    I agree that our results don’t contradict Yaniv’s proposed mechanism, but greater access to one’s own reasons is not necessary for our finding (nor, for that matter, for the basic under-weighting of advice phenomenon). In the paper we point to the increase in confidence that comes from making judgments collaboratively as mediating the effect. Dyads are more accurate not only because they can discuss their judgments, but also because their judgments are based on a sample of two estimates rather than one. For example, if dyads were to make a joint judgment without discussion (say, by following a procedure where they simply exchange numbers until those numbers converge on something they can both agree on; Minson, Liberman & Ross, 2011, PSPB), they would still be more accurate than individuals.

    Regarding the access-to-reasons story, Jack Soll and Al Mannes have a paper in IJF that tested that explanation and didn’t find support for it. I have a manuscript in prep right now in which we increase access to a peer’s reasons by asking participants to explicitly consider them, and we actually find an increase in underweighting (because participants think that their partner’s reasons are worse than their own, and thinking about them only exacerbates that impression). So basically, I am not sure that the access-to-reasons mechanism really works in the one-on-one case, and therefore I am not ready to use it to explain why dyads underweight advice even more.

    I agree that offering peer input prior to the dyad making a joint judgment would yield a different result. It would probably look something like an anchoring effect. The effect of this third piece of information on accuracy would depend on whether it is “real” advice from another participant in the sample or has been artificially generated by the experimenter to have particular properties. If the advice is “real” and falls close to the judgment of one of the dyad members, that is good evidence that this dyad member’s judgment is correct. If the advice is not “real”, either because it was experimentally generated or because you asked your closest friend for their judgment knowing full well that they would agree with you, then it does not carry the same informational value. Yaniv and his students have some really interesting work on whether people realize that the opinions of similar others are less informative than the opinions of randomly selected others, and the answer seems to be that no, people don’t realize this.
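The “sample of two” point above can be illustrated with a short simulation: mechanically averaging two independent, unbiased estimates reduces expected error even with no discussion at all. This is a sketch of the statistical argument only; the Gaussian error model and all parameter values are assumptions chosen for illustration, not taken from the papers.

    # Averaging two independent, unbiased estimates beats either one on average,
    # even without any discussion. Error model and parameters are assumed.
    import random

    random.seed(0)
    TRUTH, SD, TRIALS = 100.0, 15.0, 100_000

    solo_err = dyad_err = 0.0
    for _ in range(TRIALS):
        a = random.gauss(TRUTH, SD)            # one member's private estimate
        b = random.gauss(TRUTH, SD)            # the other member's private estimate
        solo_err += abs(a - TRUTH)             # error of an individual judgment
        dyad_err += abs((a + b) / 2 - TRUTH)   # error of the mechanical average

    print(f"mean individual error:   {solo_err / TRIALS:.2f}")  # about 11.97 (= SD * sqrt(2/pi))
    print(f"mean dyad-average error: {dyad_err / TRIALS:.2f}")  # about 8.46, smaller by sqrt(2)

The sqrt(2) reduction is exactly what averaging two independent estimates buys, which is why a no-discussion convergence procedure can already outperform individuals.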