Words or Deeds

Words or deeds? Choosing what to know about others

(link to the article)

Erte Xiao & Cristina Bicchieri

Social cooperation often relies on individuals’ spontaneous norm obedience when there is no punishment for violation or reward for compliance. However, people do not consistently follow pro-social norms. Previous studies have suggested that an individual’s tendency toward norm conformity is affected by empirical information (i.e., what others did or would do in a similar situation) as well as by normative information (i.e., what others think one ought to do). Yet little is known about whether people have an intrinsic desire to obtain norm-revealing information. In this paper, we use a dictator game to investigate whether dictators actively seek norm-revealing information and, if so, whether they prefer to get empirical or normative information. Our data show that although the majority of dictators choose to view free information before making decisions, they are equally likely to choose empirical or normative information. However, a large majority (more than 80%) of dictators are not willing to incur even a very small cost to obtain information. Our findings help explain why norm compliance is context-dependent, and highlight the importance of making norm-revealing information salient in order to promote conformity.

5 Comments

  • Hugo Mercier 28 March 2011 (14:55)

    Thanks Erte and Cristina for this very interesting paper! I have a few questions/comments for you, in no particular order. Have you thought about doing something similar, but with receivers’ expectations? Do you think it would make any difference? Could saying “we will reveal what 60% of the dividers did/think is right to do” have introduced some kind of effect, as opposed to saying 80% or 100%? For instance, if we think that the two most salient norms are to give 5 (or close to it) or 0 (or close to it), then information about 60% of the people is not all that useful: if there are two choices and you know that 60% of the population made one of them, the population is nearly split, so you don’t learn much more by learning which choice the 60% made. Maybe people would have paid more for 80% or 100% (which would have indicated a very strong norm, one that would be more costly to disregard). Also, I’d be curious to see what would happen if people thought they could use the result as an excuse or if, on the contrary, they thought it might drive them in a direction they didn’t want to go. For instance, if people want to keep the money but think others would have given, then they may want to ignore the information. By contrast, if they think others wouldn’t give either, then they should seek the information that may excuse their behavior. Such effects may have been more likely to appear if the existence of a strong norm (80%, 90%) had been suggested.
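
    To put that intuition in numbers: how much a revealed majority tells you can be read off the entropy of the announced split. A quick sketch (the percentages are just illustrative, not from the paper):

    [code]
    from math import log2

    def entropy(p: float) -> float:
        """Shannon entropy (in bits) of a two-choice split p : 1 - p."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    # Residual uncertainty about a random divider's choice, given the split.
    for p in (0.5, 0.6, 0.8, 0.9, 1.0):
        print(f"{p:.0%} split -> {entropy(p):.3f} bits of uncertainty left")
    [/code]

    A 60/40 split leaves 0.971 of the maximal 1 bit of uncertainty, i.e. it is barely more informative than a coin flip and signals only a weak norm; at 80% or 90% (0.722 and 0.469 bits) the revealed majority would be far more diagnostic, and plausibly worth paying for.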

  • Dan Sperber 28 March 2011 (19:53)

    Interesting study, thank you! But how easy is it to generalize from it? Participants may assume that, in this situation, there is no shared norm to guide their behavior, and therefore feel free to do what they want. Or participants may assume that, in this situation, their intuitive sense of what they should do is likely to converge with the relevant social norm. If participants make such assumptions, what others did and thought is of very marginal relevance to them. When the information is free, why not take it? On the other hand, why pay a cost for it? Given this possible interpretation of the participants’ choices in this particular experimental situation, can we generalize the results to other, more real-life-like situations? In situations where people would assume that there is a social norm and not feel confident that they know it intuitively, isn’t it likely that they would be more willing to pay for relevant information? Here are possible examples of such situations: you wonder how much to tip a waiter in a foreign country; it is the first time your child is invited to the birthday of a school friend, and you wonder how expensive a present to give; you are a member of a jury and have to determine punitive damages; you have to advise a friend who is hesitant about giving a kidney to a sibling she is not particularly close to. When you do want information, should you prefer empirical or normative information? In many cases you may assume that the two converge, the normative being a bit more demanding and a bit less ‘realistic’ than the empirical. In such a case, both types of information would be roughly equally relevant. The economic game situation may be a case in point. In other situations, there may be a serious mismatch between the norm and the practice. A case in point is the mismatch between high expressed norms on the one hand and low actual and expected practices on the other, described by Diego Gambetta and Gloria Origgi in their paper “L-worlds: The curious preference for low quality and its norms” [url=http://www.sociology.ox.ac.uk/documents/working-papers/2009/2009-08.pdf]here[/url], with examples such as academic interactions in Italy (and, I would add, in many other places, including my country, France). In such situations, knowing both the norms and the practices is relevant, but if you must choose, you would be better off knowing about the practices (if only because you are more likely to be able to infer the norms from the practices than the other way around).

  • Nicolas Baumard 29 March 2011 (16:47)

    Erte and Cristina’s experiment raises a very interesting question. Their experiment suggests that individuals have a conditional preference for following a norm. In some cases, however, it seems that individuals follow norms unconditionally: even if I do not know what others think about murder, or even if I have doubts about their willingness to refrain from killing others, I will abstain from doing so. So it seems that there are also unconditional norms, or moral norms. Is there a difference in nature? Or, put differently, do these norms rely on different psychological mechanisms? Some seem to think there is indeed a difference in nature. Jon Elster, in More Nuts and Bolts for the Social Sciences, distinguishes between moral norms and quasi-moral norms: [i]”Moral norms include the norm to help others in distress, the norm of equal sharing, the norm of “everyday Kantianism” (do what would be best if everyone did the same), and others. What I shall call quasi-moral norms include the norm of reciprocity (help those who help you and hurt those who hurt you) and the norm of conditional cooperation (cooperate if others do, but not otherwise). Quasi-moral norms are conditional, in the sense that they are triggered by the presence or behavior of other people. (…) quasi-moral norms [are triggered] when the agent can observe what other people are doing. Moral norms, by contrast, are unconditional. What they tell us to do may, to be sure, depend on what others do. If I have a utility-based philosophy of charity, how much good I can do (and hence how much I will give) depends on how much others are giving. The norm itself, however, makes no reference to other donors, only to the recipients.”[/i] Elster’s reason for thinking that these norms differ in nature is that they do not rest on the same mechanisms: [i]”Quasi-moral norms can obviously be powerful in inducing altruistic behavior. Do they merely mimic altruism or are they altruistic motivations? The reason I refer to them as quasi-moral and not as moral is also why I lean to the first answer. The norm of reciprocity allows you not to help others in distress unless they have helped you previously. A typical moral norm is to help others in distress unconditionally, even if there is no prior history of assistance. The norm of conditional cooperation allows one to use normal amounts of water if nobody else is reducing their consumption, whereas both utilitarianism and everyday Kantianism would endorse unilateral reduction. Moral norms, one might say, are proactive; quasi-moral norms only reactive.”[/i] In her Grammar of Society, Cristina seems to agree: [i]“The point [in distinguishing conditional and unconditional norms] is that under normal conditions, expectations of other people’s conformity to a moral rule are not a good reason to obey it.” (p.21)[/i] [i]”There is nothing inherently good in our fairness norms (…). However, many of us would feel there is something inherently bad in taking a life (…). What needs to be stressed here is that what makes a social or a moral norm is our attitude toward it.” (p.21)[/i] However, in a footnote, Cristina suggests that there may be borderline cases: [i]“Imagine finding oneself in a community where violence and murder are daily occurrences (…). One would probably at first resist violence, then react to it and finally act it out oneself.” (p.20)[/i] I agree that these cases are borderline, but they may tell us something about justice.
    Imagine, as some think, that we have an instinct of fairness that leads us to act in mutually advantageous ways, to share the benefits of cooperation equally, and to help each other when we think it is useful to do so. This instinct of fairness would sometimes need information about what others think. Indeed, there are cases where mutual advantage is quite easy to figure out: you help someone who is drowning in a pond because it does not cost you much and it saves this person’s life. But other cases might be more difficult. Take Elster’s example about water reduction: [i]”In Bogotá, under the imaginative mayorship of Antanas Mockus, people followed a quasi-moral norm when reducing their consumption of water. Although individual monitoring was not feasible, the aggregate water consumption in the city was shown on TV, so that people could know whether others were for the most part complying. It appears that enough people did so to sustain the conditional cooperation. People were saying to themselves, “Since other people are cutting down on their consumption, it’s only fair that I should do so as well.”[/i] Without information about what others think about water, it is hard to know where mutual advantage lies: would it make sense to reduce your water consumption on your own if you were the only inhabitant of Bogotá to do so? It is such an improbable collective action that you need at least some cues to know whether you have a duty to do so, and that is why you are a conditional cooperator. Take another case: should you buy drinks for others at the pub? The question is debatable. You might think that you should, as it is more pleasant to share drinks, or, on the contrary, if, say, you do not drink a lot, you might think it would be better if each person bought her own drink. Among the information needed to solve this difficult question is what other people do. Imagine that you think it is much more pleasant to buy drinks for each other. But what if no one does that? In that case, it might not be as pleasant as you had thought. Conversely, you might think it is better if everyone pays for his own consumption (people do not drink equally, etc.). But what if others buy you some drinks? It is now clearly fair that you reciprocate. And what if you are broke and people buy champagne? In this case, people would excuse you, because they would understand that, in this particular circumstance, although it is usually mutually advantageous to share drinks, it would be unfair to ask you to buy champagne for others. So, to sum up, my hunch is that the difference is one of degree rather than of nature: some actions are unfair in most circumstances, while others are much more dependent on the circumstances. But, in each case, the same psychological mechanism is involved. I guess it would be useful to have more experiments like the one presented by Erte and Cristina, in which circumstances and information make a difference in people’s behaviours!
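
    The Bogotá dynamic can be made concrete with a simple threshold model of conditional cooperation (a sketch of my own, in the spirit of Granovetter-style threshold models; all numbers are assumed): each resident conserves water in the next round only if the broadcast city-wide conservation rate meets her personal threshold.

    [code]
    import random

    random.seed(1)

    # Hypothetical threshold model of conditional cooperation (illustrative
    # only): resident i conserves water next round iff the broadcast
    # city-wide conservation rate is at least thresholds[i].
    N = 10_000
    thresholds = [random.uniform(0.2, 0.8) for _ in range(N)]  # assumed spread

    rate = 0.55  # assumed initial compliance, e.g. just after the TV campaign
    for day in range(10):
        rate = sum(1 for t in thresholds if rate >= t) / N
        print(f"day {day}: {rate:.1%} conserving")
    [/code]

    With these made-up numbers the city tips upward to near-universal conservation; start the same population at 0.45 instead and cooperation unravels to zero. That tipping point is exactly the sense in which the norm is conditional on information about what others are doing.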

  • cristina bicchieri 29 March 2011 (18:39)

    I have thought quite a bit about the difference between moral and social norms. They are on a continuum, but I still believe that the conditional/unconditional preference makes a difference. I had a discussion with Shaun Nichols about it, since Shaun claims that even in the case of what he calls “personal norms” (and he means moral norms) there is a conditional element. I do not deny that, but I restrict it to certain conditions. If you go to the site http://upenn.academia.edu/CristinaBicchieri/Papers, you may look at my reply to Nichols on this point. I would be interested in knowing your opinion about it. The article’s title is “Norms, Preferences, and Conditional Behavior”.

  • cristina bicchieri 8 April 2011 (07:45)

    The Dictator game is a particular case, for two reasons. On the one hand, it is not obvious what ought to be done. Indeed, I ran a survey years ago at CMU, and the result is a bimodal distribution: allocations are just (10,0) or (5,5). In other words, there is no clear convergence to a unique, ‘right’ allocation. On the other hand, there is no punishment for choosing an ‘unfair’ allocation, as opposed to the Ultimatum game. In the UG, it may make sense to pay a small fee to avoid a big mistake. So I would predict that, in a UG, participants would want to learn, with or without a fee. But of course in this case I would not be able to tease apart the intrinsic value of information from an opportunistic desire to acquire it. In the DG, the opportunistic element is absent. I may be curious to know what others did or think should be done, but this information is not constraining (even if it turns out that participants are influenced by it in their subsequent choices). After all, I am not going to suffer if I behave differently. Why, then, do most people take the information when it is free, but ‘refuse to know’ when it carries a small cost? I suspect that paying for the information is perceived as a commitment to follow up on what is learned, whereas the free info is not as binding (even if in the end people are influenced by what they learn, they cannot predict that beforehand). Paying for information seems to suggest that what I learn will count, and weigh on my subsequent choices. It may bind me to a costly choice (5,5) that I do not want to make. Here I am close to Hugo’s suggestion that “it might drive them in a direction they didn’t want to go.” In this case, I do not want to pay, not because of Dan’s suggestion (that the information is really irrelevant to me), but because I fear my action will make it (if only symbolically) relevant, and I may not like what I learn. This is a testable hypothesis, and it can be generalized to many real-life situations. As to descriptive vs. normative information, I agree that in the opportunistic case the empirical would have greater weight, since when we learn that “most people do x” the injunction to “avoid x” loses force. In the DG case, if my hypothesis that the free information is non-binding is correct, then there is no point in differentiating empirical and normative. Regarding Hugo’s suggestion about the percentages, I have not thought about the consequences of learning that 60%, as opposed to, say, 80%, acted in a particular way. We all have different thresholds, and depending on our greater or lesser commitment to a norm, we may be swayed in one direction or another. The results of our experiments, however, suggest that 60% is typically enough to push people in the direction of the majority.
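
    To see why paying a small fee can be rational in the UG but not in the DG, here is a back-of-the-envelope expected-value sketch (all numbers are made up for illustration; nothing here is from the experiment):

    [code]
    # Assumed stakes: a $10 pie, a $0.50 fee for viewing norm information.
    ENDOWMENT = 10
    FEE = 0.5

    # Ultimatum game: an 'unfair' 8/2 offer risks rejection (both earn 0).
    p_reject = 0.5  # assumed rejection rate for unfair offers
    ev_blind_unfair = (1 - p_reject) * 8   # offer 8/2 without information
    ev_informed_fair = 5 - FEE             # learn the norm, offer 5/5 safely
    print(f"UG: blind unfair offer EV = {ev_blind_unfair:.2f}, "
          f"informed fair offer EV = {ev_informed_fair:.2f}")

    # Dictator game: nothing can be rejected, so information never raises the
    # dictator's payoff; at best it (symbolically) binds her to a costlier split.
    print(f"DG: keep everything = {ENDOWMENT}, pay fee and split = {5 - FEE}")
    [/code]

    Under these assumptions the fee pays for itself in the UG (4.50 > 4.00) because it averts the ‘big mistake’ of a rejected offer, while in the DG it can only lower the dictator’s payoff, which is consistent with the ‘refusal to know’ once information carries any cost.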