Moral Compensation and the Environment

Moral Compensation and the Environment: Affecting Individuals’ Moral Intentions Through How They See Themselves as Moral.


Ann Tenbrunsel, Jennifer Jordan, Francesca Gino & Marijke Leliveld
To maintain a positive moral self-image, individuals engage in compensation: current moral behavior licenses future immoral behavior and current immoral behavior stimulates future moral behavior. In this paper, we argue that moral compensatory effects are a function of changes to one’s moral self-image. In two studies, we examine the relationship between behaviors that stimulate changes to one’s moral self-image and to ethical actions. In Study 1, we have individuals recall either few or many (im)moral behaviors that they take in regards to the environment. In Study 2, we provide individuals with either minor or extreme feedback about the states of their moral selves. We then examine their intent to engage, as well as their actual engagement in, in various moral or immoral behaviors. We find that having people engage in extreme, but not moderate, moral recalls leads to compensatory environment-related moral behavior. We propose that this effect is due to the ability of extreme moral behavior to alter individuals’ moral self-images and hence their desires to alter these states via moral behavior.

4 Comments

  • Hugo Mercier 25 April 2011 (16:14)

    Thanks for this very interesting paper! I have a few questions / comments.

    Do you think that by increasing the number of moral behaviors to remember, the effect could be reversed (as in the metacognition research showing that people like, say, a BMW less after giving 10 rather than 2 reasons to like it, because finding reasons becomes hard after a while)?

    Do you think there may be cases in which the opposite effect may happen because people want to feel consistent? For instance, in the case of a behavior that is only ambiguously bad, people may be tempted to persevere.

    When you say that you may have grounds to predict people’s actions, that seems a bit counterintuitive to me. To the extent that these actions lie far in the future, the only difference between the four groups is the question you asked them, not their actual past behavior. In the future, if their past behavior guides their actions, the memories should be, on average, the same for everybody, and so their effects on behavior, no?

    Finally, you talk about ‘self-image’, but I don’t see any data there showing that it’s not in fact other people that are the ‘real’ target. What I mean is that even though preserving a good self-image may be a plausible proximal cause, the ultimate, or evolutionary, source of the mechanism must lie in self-presentation. It might be interesting to see if social variables impact the outcome of such manipulations (for instance, cases in which people’s own moral judgments on the behaviors differ from those of an esteemed audience).

  • Nicolas Baumard 9 May 2011 (19:18)

    Very interesting! What are people’s motivations in compensating for a misdeed? Throughout the article, the authors argue that they are motivated by preserving their self-image. Although this seems possible, it also seems to imply that people are just hypocrites: they do not compensate because they have genuine concerns for the environment but because they want to preserve their self-image. In a way, the authors suggest, they are just like corporations such as Wal-Mart (the example is taken from the authors). Wal-Mart (considered as an agent) does not care about the environment or about morality in general. It only cares about its benefits (this is the goal of the corporation) and it has moral concerns only insofar as they impact its self-image and its profit.

    Although I do not dispute the amoral character of Wal-Mart, I have more doubt when it comes to humans. Indeed, as Trivers noted a while ago, being a hypocrite and caring only about one’s self-image might not be the best strategy: [quote]’One can imagine, for example, compensating for a misdeed without any emotional basis but with a calculating, self-serving motive. Such an individual should be distrusted because the calculating spirit that leads this subtle cheater now to compensate may in the future lead him to cheat when circumstances seem more advantageous (because of unlikelihood of detection, for example, or because the cheated individual is unlikely to survive). Guilty motivation, in so far as it evidences a more enduring commitment to altruism, either because guilt teaches or because the cheater is unlikely not to feel the same guilt in the future, seems more reliable.’ (1971, p. 51)[/quote]

    In a recent [url=https://sites.google.com/site/nicolasbaumard/Publications/Baumard%26Sperber2010Draft.doc?attredirects=0]paper[/url], Dan Sperber and I develop this idea further. We argue in particular that aiming directly at achieving a moral reputation carries cognitive costs and practical risks. [quote]’From a practical point of view, an error–for instance, mistakenly assuming that no one is paying attention to a blatantly selfish action–may compromise an agent’s reputation. Such a mistake may not only cause direct witnesses to lower their opinion of the agent, but is also likely, given the typically human way of spreading information, to influence many more people. From a cognitive point of view, a Machiavellian strategy is a demanding one. It is often difficult to tell whether others are paying attention to our behaviour, and to predict how they might interpret it and what they would think or say about us as a result. Even if a Machiavellian agent cleverly manages to avoid being caught cheating, she might still behave in a way that suggests she is being clever rather than moral, and compromise her reputation as a result. A number of studies in behavioural economics confirm that it is not that easy to pretend to be genuinely moral. Participants in experiments are able to predict in advance whether or not their partners intend to cooperate (Brosig, 2002; Frank, Gilovich, & Regan, 1993). They base their judgments on the likely motivations of others (Brehm & Cole, 1966; Schopler & Thompson, 1968), on the costs of their moral actions (Ohtsubo & Watanabe, 2008), or on the spontaneity of their behaviour (Verplaetse, Vanneste, & Braeckman, 2007). More generally, many studies suggest that it is difficult to completely control the image one projects, and that there are numerous indirect cues to an individual’s propensity to cooperate (Ambady & Rosenthal, 1992; Brown, 2003).[/quote]

    Thus, we conclude: [quote]”Machiavellian strategies for securing a good moral reputation without paying the cost of morality are thus both hard to follow and risky. Is there a cognitively easier and safer way of securing such a reputation? Yes: it consists in deserving it, that is, in having a genuine, non-instrumental preference for moral behaviour and a disposition to act on the basis of this preference. At the cost of missing a few opportunities for profitable cheating, a genuinely moral person is in a uniquely good position to be regarded as such.”[/quote]

    Of course, ultimately, people compensate to improve their self-image. But at the proximal level, they may not do so because they care about their self-image.

  • Jennifer Jordan 10 May 2011 (00:10)

    Thanks for this very interesting paper! [b]You’re welcome. Our pleasure to share it with you :)[/b]

    I have a few questions / comments. Do you think that by increasing the number of moral behaviors to remember, the effect could be reversed (as in the metacognition research showing that people like, say, a BMW less after giving 10 rather than 2 reasons to like it, because finding reasons becomes hard after a while)? [b]This is a great question, Hugo, and indeed a possible alternative hypothesis. But unlike reasons to love my Bimmer, coming up with 8 things I do to help/harm the environment might not be too tough for people (particularly given that our participants were already pretty environmentally conscious folks). Another reason is that, given the current focus on environmental issues (e.g., oil spills, global warming), coming up with even 4 or 5 things that one does to harm the environment might be particularly shameful.[/b]

    Do you think there may be cases in which the opposite effect may happen because people want to feel consistent? For instance, in the case of a behavior that is only ambiguously bad, people may be tempted to persevere. [b]What a great suggestion and thought! In fact, that is a question that Francesca, Ann, and I are currently pursuing: when does bad action/good action perpetuate further bad/good action?[/b]

    When you say that you may have grounds to predict people’s actions, that seems a bit counterintuitive to me. To the extent that these actions lie far in the future, the only difference between the four groups is the question you asked them, not their actual past behavior. In the future, if their past behavior guides their actions, the memories should be, on average, the same for everybody, and so their effects on behavior, no? [b]Ah! Not so, Hugo. If you can determine how a question, a recall, or an action affects the individual’s moral self, then, according to our theory, you can predict their future behavior. However, as we also propose, that question, recall, or action must be one that is deeply personal.[/b]

    Finally, you talk about ‘self-image’, but I don’t see any data there showing that it’s not in fact other people that are the ‘real’ target. What I mean is that even though preserving a good self-image may be a plausible proximal cause, the ultimate, or evolutionary, source of the mechanism must lie in self-presentation. It might be interesting to see if social variables impact the outcome of such manipulations (for instance, cases in which people’s own moral judgments on the behaviors differ from those of an esteemed audience). [b]This is an excellent question, Hugo. And one could say that such compensatory action could be chalked up to mere self-presentation. However, as some of the studies that we are currently running show, even when the individual is pretty certain that no one will witness their compensatory or stimulating behavior, individuals still compensate. This notion of the “public witness” is still not completely resolved, but we are pursuing evidence. The issue is raised in a paper that I wrote with Liz Mullen and Keith Murnighan, which just came out this month in Personality & Social Psychology Bulletin: Striving for the moral self: The effects of recalling past moral actions on future moral behavior. However, Francesca, Ann, and I are currently trying to find some empirical resolution on the question.[/b]

  • Jennifer Jordan 11 May 2011 (23:29)

    Very interesting! [b]Thank you, Nicolas.[/b]

    What are people’s motivations in compensating for a misdeed? Throughout the article, the authors argue that they are motivated by preserving their self-image. Although this seems possible, it also seems to imply that people are just hypocrites: they do not compensate because they have genuine concerns for the environment but because they want to preserve their self-image. In a way, the authors suggest, they are just like corporations such as Wal-Mart (the example is taken from the authors). Wal-Mart (considered as an agent) does not care about the environment or about morality in general. It only cares about its benefits (this is the goal of the corporation) and it has moral concerns only insofar as they impact its self-image and its profit. Although I do not dispute the amoral character of Wal-Mart, I have more doubt when it comes to humans.

    [b]Hypocrites? Not at all. We are not implying that this compensatory mechanism is conscious. In other words, it is not a conscious calculation of: “I did something environmentally destructive. Now, I must do something environmentally good.” We also do not claim that it requires a public witness (of either the initial or the compensatory behavior) in order to take place. But even if it were a conscious process, such thinking is not necessarily seen as hypocritical in modern society. Isn’t that why there is a market for carbon offsets? A company (or country) can purchase away its environmental sins.[/b]

    Indeed, as Trivers noted a while ago, being a hypocrite and caring only about one’s self-image might not be the best strategy: Quote: ‘One can imagine, for example, compensating for a misdeed without any emotional basis but with a calculating, self-serving motive. Such an individual should be distrusted because the calculating spirit that leads this subtle cheater now to compensate may in the future lead him to cheat when circumstances seem more advantageous (because of unlikelihood of detection, for example, or because the cheated individual is unlikely to survive). Guilty motivation, in so far as it evidences a more enduring commitment to altruism, either because guilt teaches or because the cheater is unlikely not to feel the same guilt in the future, seems more reliable.’ (1971, p. 51)

    In a recent paper, Dan Sperber and I develop this idea further. We argue in particular that aiming directly at achieving a moral reputation carries cognitive costs and practical risks. Quote: ‘From a practical point of view, an error–for instance, mistakenly assuming that no one is paying attention to a blatantly selfish action–may compromise an agent’s reputation. Such a mistake may not only cause direct witnesses to lower their opinion of the agent, but is also likely, given the typically human way of spreading information, to influence many more people. From a cognitive point of view, a Machiavellian strategy is a demanding one. It is often difficult to tell whether others are paying attention to our behaviour, and to predict how they might interpret it and what they would think or say about us as a result. Even if a Machiavellian agent cleverly manages to avoid being caught cheating, she might still behave in a way that suggests she is being clever rather than moral, and compromise her reputation as a result. A number of studies in behavioural economics confirm that it is not that easy to pretend to be genuinely moral. Participants in experiments are able to predict in advance whether or not their partners intend to cooperate (Brosig, 2002; Frank, Gilovich, & Regan, 1993). They base their judgments on the likely motivations of others (Brehm & Cole, 1966; Schopler & Thompson, 1968), on the costs of their moral actions (Ohtsubo & Watanabe, 2008), or on the spontaneity of their behaviour (Verplaetse, Vanneste, & Braeckman, 2007). More generally, many studies suggest that it is difficult to completely control the image one projects, and that there are numerous indirect cues to an individual’s propensity to cooperate (Ambady & Rosenthal, 1992; Brown, 2003).’ Thus, we conclude: Quote: “Machiavellian strategies for securing a good moral reputation without paying the cost of morality are thus both hard to follow and risky. Is there a cognitively easier and safer way of securing such a reputation? Yes: it consists in deserving it, that is, in having a genuine, non-instrumental preference for moral behaviour and a disposition to act on the basis of this preference. At the cost of missing a few opportunities for profitable cheating, a genuinely moral person is in a uniquely good position to be regarded as such.” Of course, ultimately, people compensate to improve their self-image. But at the proximal level, they may not do so because they care about their self-image.

    [b]Great points! But again, we are not claiming that this is a conscious calculation. Perhaps it would be better to refer to what we call the [i]moral self-image[/i] as one’s [i]moral self[/i]. Gollwitzer’s Theory of Self Completion explains our assertion nicely. It claims that we have parts of the self (e.g., our academic self, our parental self) that we highly value and want to complete. It views these parts of our self as goals that we seek to achieve. Thus, we aim to collect symbols of this self. When we gain those symbols, we relax our strivings. And when we lack those symbols, we work to gain them. The moral self is no exception (Jordan, Mullen, & Murnighan, 2011).[/b]