Modularity and Decision Making

Robert Kurzban

Mechanisms that are useful are often specialized because of the efficiency gains that derive from specialization. This principle is in evidence in the domain of tools, in artificial computational devices, and across the natural biological world. Some have argued that human decision making is similarly the result of a substantial number of functionally specialized, or “modular”, systems brought to bear on particular decision making tasks. Which system is recruited for a given decision making task depends on the cues available to the decision maker. A number of research programs have advanced using these ideas, but the approach remains controversial.

3 Comments

  • Hugo Mercier 4 July 2011 (16:38)

    Thanks for a very interesting paper. I’m going to have trouble commenting, since I agree with most of what you’ve said… A detail: in the information integration section, you only give a fairly shallow example of information integration. Being put in a different position activates different modules, or maybe provides different input to the same module. I’m not sure this is the type of situation that critics of modularity have in mind when they argue from information integration.

    Mentioning metarepresentational modules might have been more convincing. Metarepresentational modules may look ‘domain general’ since they can take as input representations of just about any representation one is able to hold. Yet they are arguably functionally specialized, each having a different function, such as theory of mind, pragmatics or reasoning. Moreover, they are also specialized in the sense of attending only to one particular type of object: representations (as opposed to predators, food, mates, etc.). See, for instance, Sperber (2000).

    Sperber, D. (2000). Metarepresentations in an evolutionary perspective. In D. Sperber (Ed.), Metarepresentations: A Multidisciplinary Perspective (pp. 117-137). Oxford: Oxford University Press.

  • Dan Sperber 5 July 2011 (15:10)

    A couple of remarks on this excellent paper: Thinking of decision in the light of modularity highlights one of the least plausible of Fodor’s criteria of modularity, viz., that the operations of a module are not only automatic (in the sense that no decision need be made for them to take place) but also mandatory (in the sense that, once an input for the module is present, the modular process will start and follow its course). This assumes that there are no energy constraints on the operations of modules, when in fact two kinds of such constraints are likely to be involved.

    Firstly, as a matter of efficient design, processing costs should be incurred only in proportion to expected cognitive benefit. So a stimulus with no expected relevance should be less likely to be processed. This is indeed the case when a stimulus is repeated with no new relevance and elicits a lower and lower cognitive response (what is called ‘habituation’). Secondly, there may be too many inputs fitting the input conditions of diverse modules for each and every one of these modules to run its course. This is well illustrated by inattentional blindness, as in the famous Simons and Chabris experiment where a gorilla in full view is not seen by participants attending to the passing of a ball among players in a video. In such a case, modules are in competition with one another for brain resources.

    In a typical decision situation, different decisions (e.g. fight or run?) drawing on different modular abilities are in competition, and, as suggested by Cosmides and Tooby, cited in this paper, emotion can favour one module and inhibit another. I would like to draw attention in this context to the work of George Ainslie and in particular his book [b]Breakdown of Will[/b] (see also the [url=http://picoeconomics.org/]picoeconomics site[/url]), where he argues for the view that human action results from an internal conflict of what he calls “interests”. Ainslie is a behaviourist and not a modularist evolutionary psychologist, but his “interests” are very much like action-guiding modules that enter into competition for control of the organism, and I find his work highly relevant to the understanding of decision and action in a modularist perspective.

  • Robert Kurzban 6 July 2011 (18:20)

    [b]Hugo[/b]: That’s a very well taken point. This paper is for an edited volume, and since I have a chance to revise it, I will make changes to reflect this idea. Thanks much.

    [b]Dan[/b]: Thanks for the kind words. And I agree. The notion of “automaticity” might not be nearly as useful as other ways to talk about (modular) processes, including thinking in terms of which processes require, or do not require, the same computational resources to execute one or more steps. By a somewhat odd coincidence (or perhaps not), a colleague of mine recently drew my attention to Ainslie’s work as well. This was in the context of issues surrounding “self control,” which I believe has a convenient modular interpretation (modules designed to bring about immediate gains vs. modules designed to bring about gains extended in time).