What’s wrong, in the end, with Homo Œconomicus?

Everyone likes to bash Homo œconomicus – not one stone was left uncast at the poor chap. Now, don’t get me wrong, I enjoy a good stoning just as much as the next religious fanatic, but this may be a case in which we executed the right fellow for the wrong reasons.

Most people argued that Homo oeconomicus (henceforth HE), as described in economics textbooks, was way too smart for his own good. He had complete information and all the computing power required to hold that information; he also had the cognitive capacities to derive all valid inferences from that information, to ignore the trivial or irrelevant ones, and to do all of this instantaneously; and although he was supposed to be dealing with scarcity, these tremendous computational feats cost him essentially nothing.

This was clearly pushing it. To take him down a peg, many people pointed out that actual economic agents are rather more limited. They do not have full rationality but some form of “bounded rationality”. They are swayed by biases and heuristics [1] in situations of uncertainty. Behavioral economics suggested that in many cases they simply ignore their own interest and are motivated by norms of fairness and moral sentiments [2] (although this last point may be misleading; see the comments by Nicolas Baumard below).

This is all wrong, wrong, wrong. The problem with HE is not that he is too smart – but that he lacks smart instincts. It is not that he is super-human, but that he is infra-cognitive. To explain economic behaviour, we need a model of human beings that is more sophisticated than HE, not less.

It would take a whole book or series of books and articles to justify this – and I expect that these books are being written as we speak, as these are fairly simple points – but here are two illustrations, admittedly far from standard market processes, but clearly relevant to economic theory.

Rioting as collective action

In economics, collective action is a difficult problem. A typical collective action is one in which various agents may engage in some coordinated behavior that is costly to each individual, that would yield positive payoffs to individuals if successful, and whose success depends on a minimal number of people choosing to participate. The problem, obviously, is that the payoff to any individual is even better if many others participate while she abstains, which should guarantee, by recursion, that no one ever participates. Ever since Mancur Olson’s seminal work [3] on the question, a variety of models have been put forward to formalize these difficulties. There is no straightforward solution to the problem in economic theory, only some rather awkward fixes. Olson, for instance, suggested that people are more likely to join a trade union (a form of collective action for the provision of public goods) if they receive direct rewards for just joining (e.g. a discount in the union shops).
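As a purely illustrative aid (not taken from Olson or from any of the models cited here), the payoff structure can be sketched as a simple threshold public-goods game; the group size, threshold, cost and benefit below are made-up numbers:

```python
# A toy threshold public-goods game (illustrative numbers, not from the text).
# Each of N agents chooses to participate (at cost c) or abstain (at no cost).
# If at least K agents participate, every agent receives benefit b.

N, K = 10, 6      # hypothetical group size and participation threshold
c, b = 2.0, 5.0   # hypothetical cost of participating, benefit if the action succeeds

def payoff(participates: bool, others_participating: int) -> float:
    """Payoff to one agent, given how many of the N - 1 others participate."""
    total = others_participating + (1 if participates else 0)
    benefit = b if total >= K else 0.0
    return benefit - (c if participates else 0.0)

# Except in the knife-edge case where the agent is exactly pivotal
# (others_participating == K - 1), abstaining pays at least as well:
# either the threshold is reached without her (she collects b for free)
# or it is missed anyway (she at least saves the cost c).
for others in range(N):
    print(f"others={others}  participate={payoff(True, others):+.1f}  "
          f"abstain={payoff(False, others):+.1f}")
```

Running the sketch shows the free-riding logic described above: outside the single pivotal case, a self-interested agent always does at least as well by staying home, whatever the others do.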

This may seem puzzling, as collective action behaviors are extraordinarily common, indeed ubiquitous in social life and have been observed in all human societies for as long as the record goes.

In their descriptions of these issues, economists often take as a prime, and supposedly pure, example of collective action the occurrence of a spontaneous uprising against a hated regime [4], of the kind seen at Tiananmen in 1989, or in Tunisia and Egypt these past weeks.

I will not try to explain why such events occur, but simply point to the reasons why standard economic models cannot explain their occurrence. To consider just one problem, Medina points out that in most game-theoretic treatments of the collective action problem, his own included, [a] the agents do not know or care about the identity of the other agents involved in the collective action, [b] they are not affected by how the final result is achieved, and [c] they are not interested in the payoffs to other agents (they are only motivated by their own payoffs).

These are all, note, straightforward consequences of the characterisation of HE in rational choice models.

These three assumptions stand in stark contrast to what we observe in actual human coalitional enterprises, where [a] the identity of individuals acting for or against “us” is a matter of great importance, [b] the ways results are achieved may have important effects on how committed people are, and, most important, [c] what other agents may or do get out of the coalitional enterprise is a matter of great importance and of constant scrutiny.

In other words, in such a situation the standard HE seems to process less information than the standard human being.

Benoit Dubreuil’s “Paleolithic Public Goods Games”

In an excellent article [5], Benoit Dubreuil has argued that various forms of extensive cooperation among humans certainly pre-dated the cultural explosion of the Late Paleolithic. Dubreuil mentions collective hunting and allo-parenting. Both clearly result in the provision of public goods. They also require the kind of cooperative preferences and capacities that a great deal of anthropological theory is trying to explain. Yet both developed in Homo heidelbergensis long before the appearance of expansive trade, cultural markers, sophisticated technology and the other hallmarks of the Late Paleolithic cultural explosion.

This poses a problem if you assume, as some do [6], that cooperation and the provision of public goods require culturally transmitted norms of “strong reciprocity”, norms that are themselves the outcome of a process of cultural group selection won by the more cooperative groups. It is a problem because it suggests that people engaged in cooperative activities long before they had all the marks of cumulative, group-specific cultures of the kind envisaged in group-selection models. (Dubreuil graciously opines [7] that it is not an insuperable problem; I beg to differ. But that is neither here nor there.)

What matters here is the specificity of cooperative norms. That is, at some stages of evolution, some Homo lineages evolved the capacity for cooperation in breeding, and the capacity for cooperation in hunting, without apparently evolving a capacity for cooperation in general, for cooperation as a general norm.

Again, think of our standard HE. To the extent that HE is capable of cooperation (that is, not much), that capacity applies to all domains. He thinks in terms of rewards and sanctions. But if punishment is what maintains a norm, it can maintain any odd norm.

Humans do not seem to be like that. At some stages of evolution, the sight of an infant triggers cooperative behaviours, but not the sight of a less fortunate forager or that of a diseased person. So the cognitive system seems to contain more specification than that of HE.

Parsimony requires more evolved structure, not less

Not to put too fine a point on it, Homo sapiens in his economic decision-making seems to be not at all the dumbed-down version of Homo oeconomicus suggested by many, but on the contrary a baroque collection of highly intelligent and highly specific capacities. For some reason, that is anathema to many social scientists, who prefer to take as a starting point the description of a multi-domain agent whose capacities and preferences have some structure but less content – that is, are applied the same way to all domains.

If I wanted to be polemical, I would say that this is mostly a product of evolution-blindness and cognition-blindness, two endemic maladies of the social sciences. Even a short study of mice, say, will convince one that these animals have a foraging psychology and a danger-avoidance system and a reproductive strategy, but not a unified utility-maximization process. Even the most cursory glance at visual perception will convince one that cognition is a highly domain-specific business. But this will be for another post, and at ICCI we do not want to encourage controversy.


[1] Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.

[2] Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (Eds.). (2005). Moral sentiments and material interests: The foundations of cooperation in economic life (Vol. 6). MIT Press.

[3] Olson, M. (1965). The logic of collective action: Public goods and the theory of groups. Cambridge, MA: Harvard University Press.

[4] Medina, L. F. (2007). A unified theory of collective action and social change. University of Michigan Press.

[5] Dubreuil, B. (2010). Paleolithic public goods games: Why human culture and cooperation did not evolve in one step. Biology & Philosophy, 25, 53-73.

[6] Henrich, J. (2004). Cultural group selection, coevolutionary processes and large-scale cooperation. Journal of Economic Behavior & Organization, 53(1), 3-35.

[7] Dubreuil, B. (2008). Strong reciprocity and the emergence of large-scale societies. Philosophy of the Social Sciences, 38(2), 192-210.

4 Comments

  • Konrad Talmont-Kaminski 3 February 2011 (03:09)

    While I thoroughly agree with your conclusion that parsimony requires more structure, I would argue that bounded rationality theory makes precisely the same point and, therefore, does not deserve to be tarred with the heuristics and biases brush. Indeed, I think this is a very useful way of understanding the disagreement between Kahneman and Gigerenzer – one sees our limitations, the other how clever we have been in coping with them. Herbert Simon’s concept of a heuristic is that of a procedure that exploits the structure of the environment in which it operates in order to simplify the task it is meant to perform. In effect, heuristics allow us to arrive at (fallible & satisficing) solutions to problems that would be beyond our abilities if we tried to use a universal procedure such as utility maximisation. Of course, different heuristics are needed for different tasks and different task environments, giving rise to the “highly intelligent and highly specific capacities” that you talk about.

  • Nicolas Baumard 4 February 2011 (18:35)

    I totally agree with Pascal. It’s unfair to reproach economists for the unrealistic aspects of HE. They knew very well the limits of their models, and they knowingly chose these limited models as a first step to understand social phenomena. Their logic was Adam Smith’s: “I know that humans are moral – actually, I wrote a whole book on that – but let’s assume they are only interested in their own interests; can we then explain complex things like prices, supply and demand, etc.?” Thus, it’s unfair to reproach economists for their wrong vision of humankind, as many behavioural scientists do, building economic straw men only to then demonstrate obvious truths (try telling your mother: “behavioural scientists have recently discovered that people are really moral!”). Economists never said that their hypotheses were true representations of reality. So the right thing to do is not to act as if they were doing psychology – they never did – but to point out, as Pascal does, that their psychological assumptions are far too simple to explain real phenomena like collective action or religious beliefs.

  • Christophe Heintz 22 February 2011 (18:57)

    Pascal points out that there are two ways to take into account the fact that real humans do not have the infinite cognitive powers that were attributed to the old Homo Economicus. These two ways lead either to the Wrong Program or to the Right Program. The Wrong Program consists in putting limits on the old Homo Economicus: you then get a new Homo Economicus, with the usual domain-general processes of maximisation of expected utility, but with maximisation made under constraints of memory, time, processing power, etc. This is the theory of bounded rationality as understood by, e.g., Gintis. The Right Program consists in understanding human rationality as resulting from multiple domain-specific cognitive mechanisms. Pascal illustrates this with the puzzle of cooperative behaviour: a puzzle, he suggests, that is not properly solved by the Wrong Program. Konrad is right to point out that bounded rationality denotes both the Wrong Program (personified, for Konrad, by Kahneman) and the Right Program (personified by Gigerenzer). But Nicolas wants to be even more generous with theories of rational decision-making: he points out that, anyway, Homo Economicus is not meant to be a psychological theory. So why throw stones at it on the basis that it is not a good account of how people actually think? This rephrases Pascal’s initial question so that we get: what is wrong with HE, given that it is not a psychological theory? One could answer that what is wrong is precisely that it is not a psychological theory: you had better have one if you want to account for and predict human behaviour, be it economic or not. What, indeed, is the use of HE if it is not a psychological theory? It might be that HE, or rather the theory of expected utility maximisation, provides one of the best means to *arrive at* a good psychological theory. In particular, if the mind is constituted of many domain-specific functional mechanisms, then the methods for discovering these mechanisms include calculating expected utility and asking whether behaviour/decisions/cognitive mechanisms maximise it, and in which conditions. In other words, the Right Program is well pursued by following some kind of Wrong Program.

    1) In many cases, HE provides the best null hypothesis in town. The heuristics and biases program is a very fruitful program for that reason: it derived from HE very predictive and yet interesting null hypotheses. HE is one of the best targets for throwing stones at. And good science mainly consists in throwing good big stones at good big targets: it is the surest way to be constructive (I guess Pascal would enthusiastically endorse this claim).

    2) In many conditions, behaviour will indeed maximise expected utility. In some others, it does not. Faced with this unsettling data, several attitudes can be adopted:

    – The Truly Wrong Program: stick to HE and tinker with the variables. Be realistic about it or just don’t care about the actual psychological mechanisms.

    – The Nice Wrong Program: stick to HE as much as you can for good pragmatic reasons, knowing that it will get you just as far as you need. It is the “as if” program. To be a good “as if”, you need to specify the conditions under which HE is predictive. Maybe it is because of the institutional environment (against the evolutionary psychologists who assert that the only environment where maximisation occurs is the Pleistocene environment). Maybe it is because agents, although unable to know at first how to maximise, learned maximising routines (but how did they do that?). Or maybe it is because our cognitive apparatus is just ‘naturally’ good at dealing with the tasks at hand. Wondering which of these cases applies is part of the Good Wrong Program, but the Nice Wrong Program just doesn’t care.

    – The Good Wrong Program: specify what is maximised and in which conditions, deduce that some functional cognitive mechanism is at work, and make hypotheses about its properties. In the process, tinker with utility (what is it supposed to refer to? Hedonistic pleasure or satisfaction? Inclusive fitness? For instance, some human ethologists had hypothesised that certain hunters were maximising the number of calories brought home when hunting, but then realised that other factors needed to enter the utility function). Specify the domain where maximisation occurs: that is part of the characterisation of domain-specific cognitive mechanisms. Wonder why the mechanisms do not maximise in other situations, in order to gain insights about the implementation of the mechanism (some psychologists talk about the ‘signature’ of a mechanism).

    – All this eventually gets you to the Right Program: make hypotheses about the mechanisms that maximise in some conditions but not in others. The Right Program attempts to discover cognitive mechanisms by assuming that they perform a function: i.e., given the resources, these cognitive mechanisms lead to maximising utility in some specific situations.

    I agree with Pascal that, when studying human behaviour, you need to side with the Right Program or with the Wrong Program. But emphasising this difference could prevent us from seeing the real diversity of attitudes among scientists who use the theory of expected utility maximisation. The calculation of expected utility and the assumption that it is maximised can serve many purposes, and not all of them are reprehensible.

  • Konrad Talmont-Kaminski 24 February 2011 (03:36)

    While I agree with much that Christophe suggests, I would reject the idea that HE provides us with a real null hypothesis. The problem is that when EUT is actually used, simplifying assumptions are generally necessary in order to bring the machinery of the theory to bear on the situation at hand. The resultant solution need not be optimal, though if the assumptions have been chosen appropriately, it will be satisficing. Had no simplifications been necessary, EUT might have guaranteed the optimal result – but that is generally not a tractable option. In effect, any use of EUT that includes such simplifying assumptions should not be thought of as providing us with the gold standard of optimality but, instead, as merely one of the competing satisficing results. It can still be compared with the results obtained by the mechanisms under investigation, and differences can still potentially be explained in terms of the characteristic ‘footprints’ of the heuristics applied – to use Bill Wimsatt’s phrase. But the hypothesis is not null.