How automatic are human social skills?

This January in Biology & Philosophy [1], philosopher Mitch Parsell questions the view that some parts of social cognition, such as face perception or gaze following, are self-contained mechanisms operating independently of other cognitive processes – what philosophers call “informational encapsulation”. I have cut and pasted a few excerpts.

This is a face with a pair of eyes. Your attentional response is sensitive to the label attached to this picture; it would have been different had the label read “this is a car” (Kingstone et al. 2004).

“Our success as a species depends on efficient, real-time processing of social information. Recognizing a conspecific as a threat or an opportunity needs to be done fast, before the threat has been visited upon us or the opportunity has passed…

“For defenders of SCT, efficiency entails modularity. Specialized, modular systems will, due to basic computational considerations, deliver results more efficiently than general-purpose, Quinean systems. Thus for any significant evolutionary skill, selective pressures are likely to result in the development of a specialized, modular system for that problem domain. But significant evolutionary problems also demand reliable solutions. There is simply no point coming to fast conclusions if they are mostly wrong.”

So evolutionary pressures on social cognition pull in two directions: it should be encapsulated, but not too much. This seems to imply, at least, that the most basic building blocks of social cognition, such as directing our eyes towards social stimuli, would be encapsulated.

“Humans are sensitive to gaze from birth. The development of gaze abilities follows a strict development path. Only specific social stimuli fully engage the eye-gaze system(s). The neural real estate that supports these skills shows preferential responses to social stimulus. Further, we share many of these capacities and abilities with other primates. If such low-level basic skills turn out to be unencapsulated there seems little hope for higher-level, socially significant abilities being encapsulated.”

The author then reviews some interesting pieces of evidence against the encapsulation of gaze direction, for example:

“The sensitivity of the gaze-cuing response to top-down modulation is demonstrated by experiments using ambiguous figures. Ristic and Kingstone (2005) showed subjects a stimulus that could be perceived as representing either a face (with eyes) or a car (with wheels). Automatic attentional reorientation only occurred when the stimulus was referred to as a face possessing eyes. Thus reflexive attentional reorientation is sensitive to top-down influence.”

Other beautiful examples from the study of face processing are mentioned in the paper. I am not really convinced, however, that visual attention to social stimuli is the best place to look for low-level, encapsulated mechanisms of social cognition. Attention is a special faculty: it has to do with allocating processing time and prioritizing certain stimuli over others. If attention as a faculty exists at all, it cannot help but survey several distinct kinds of cognitive processes and compare them. To that extent, attention seems incompatible with encapsulation – so I am not sure that challenging modularists with visual attention is such a feat. Still, Mitch Parsell’s paper is well argued, well documented, and exciting.
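To make that worry concrete, here is a minimal sketch of attention as a central arbiter; it illustrates my point, and is not anything from the paper – the module names and salience values are made up. The arbiter must read and compare the states of several distinct processes to prioritize among them, so it cannot be encapsulated from any of them.

```python
# Minimal sketch: attention as a central arbiter. Module names and
# salience values are purely illustrative.

def allocate_attention(processes):
    """Choose which process to prioritize by comparing all of them.

    The arbiter must inspect every process's current state, so it
    cannot be informationally encapsulated from any of them.
    """
    return max(processes, key=lambda p: p["salience"])

candidates = [
    {"name": "face_perception", "salience": 0.8},
    {"name": "object_recognition", "salience": 0.3},
    {"name": "language", "salience": 0.5},
]

winner = allocate_attention(candidates)
print(f"Attend to: {winner['name']}")  # -> Attend to: face_perception
```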


[1] Parsell, M. (2009). Quinean social skills: Empirical evidence from eye-gaze against information encapsulation. Biology & Philosophy, 24(1), 1–19.

1 Comment

Dan Sperber 29 January 2009 (13:18)

Thank you, Olivier, for attracting our attention to this interesting paper. Talking of attention, is it really a ‘faculty’ that cognitively compares alternatives (as we sometimes consciously do when, say, we hesitate between listening to the lecturer and discreetly reading our mail)? Or is attention the aggregated outcome of various processes that determine the allocation of cognitive resources?

I am more attracted to this second view, and to the idea that modules compete for such resources without any arbitrating mechanism judging who the winner is (just as in a street fight). Modules are in the competition once some input meeting their input conditions has pushed up their level of activation. To get the resources for full activation, they need further factors of activation, such as their initial level of activation when the input became available, the level of activation of other modules with which they are connected upstream or downstream, and so on. I assume that activation tends to spread to some degree to connected modules.

Now, an ambiguous stimulus can be seen as one that meets the input conditions of two modules – face detection and car detection, for instance – that now enter into competition. When one of these two modules is given some extra activation through the activation of a neighbouring module – by talking either of faces or of cars, for instance – then the ambiguity of the stimulus may be resolved in favour of that module. This story may be wrong, but it is not incoherent (I hope), and therefore suffices to show that the kind of evidence adduced by Parsell is quite compatible with modularity.
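To make this story concrete, here is a minimal sketch of competition-by-activation. It is only a gloss on the comment above, not Sperber’s actual model: the module names, activation numbers, and spreading rule are all assumptions.

```python
# Minimal sketch of modules competing for resources with no arbitrating
# mechanism. Names, numbers, and the spreading rule are assumptions.

class Module:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.neighbours = []  # connected modules, upstream or downstream

    def receive_input(self, strength):
        """An input meeting this module's input conditions enters it
        into the competition by pushing up its activation."""
        self.activation += strength
        # Activation spreads, attenuated, to connected modules.
        for neighbour in self.neighbours:
            neighbour.activation += 0.5 * strength

face = Module("face_detection")
car = Module("car_detection")
face_talk = Module("face_talk")  # e.g. hearing "this is a face"
face_talk.neighbours.append(face)

# The ambiguous stimulus meets the input conditions of both modules.
face.receive_input(1.0)
car.receive_input(1.0)

# Talking of faces activates a neighbouring module, whose activation
# spreads to face detection and resolves the ambiguity in its favour.
face_talk.receive_input(1.0)

# No arbiter decides; "winning" is just ending up most active.
most_active = max([face, car], key=lambda m: m.activation)
print(most_active.name)  # -> face_detection
```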