Language faculty? Semiotic system? Or what?

To what extent does the use of language involve a language-specific ability, and to what extent is it subserved by a more general symbolic or semiotic system? This is an old and ongoing controversy to which an article pre-published online in PNAS on Nov. 18, 2009 (doi: 10.1073/pnas.0909197106) and freely available, “Symbolic gestures and spoken language are processed by a common neural system” by Jiang Xu, Patrick J. Gannon, Karen Emmorey, Jason F. Smith, and Allen R. Braun, makes an interesting contribution. Their abstract:

“Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating “be quiet”), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. … Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain’s language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.”

Examples of pantomime (top row, English gloss: unscrew jar) and of emblem (bottom row, English gloss: I’ve got it!)

The authors distinguish different types of meaningful gestures. With good reason, they focus on gestures that are neither linguistic, as in sign language, nor peri-linguistic, like the gesticulations that accompany speech. As in the two examples illustrated above, they look at what they call ‘pantomimes’ and ‘emblems’. These cause patterns of brain activation similar to those caused by their linguistic glosses.

I am not competent enough to interpret brain imaging evidence, let alone criticise it. Still, I would like to raise two issues.

First, while I understand that this is evidence for a competence that handles both such gestures and linguistic utterances, I don’t see how it weighs against the view that there is also a more specific competence that handles the linguistic, and in particular syntactic, properties of utterances. In any case, the choice is not between just two possibilities, a general ‘semiotic’ capacity and a more specific linguistic capacity: a third possibility is that both are at work. Moreover, as the authors themselves note, the linguistic stimuli they used were syntactically extremely simple. Stimuli of more ordinary syntactic complexity might better reveal specifically linguistic processing.

Second, the gestures they chose for their clearly non-linguistic symbolic character have another interesting property: they are produced intentionally, with the overt intention of communicating. They are what Deirdre Wilson and I called ‘ostensive stimuli’ (in Relevance: Communication and Cognition, 1986/1995). So is speech, of course. This raises an interesting question: is the system they think they have identified a ‘semiotic system’, or is it an ostensive communication system (of the kind Deirdre and I have suggested might exist – see our ‘Pragmatics, modularity and mind-reading’, Mind and Language, 2002, 17: 3-23)?

To decide, one would have to test with non-ostensive communicative stimuli. Many kinds of communicative behaviour can be used either ostensively or non-ostensively, for instance smiling, sighing, or crying. These would not provide conclusive evidence unless the ostensive interpretation could be blocked, but I see no easy way to do so. There are, however, some types of communicative stimuli that, unless they are ostensively displayed, are typically non-ostensive, for instance blushing, the eyebrow flash of recognition, or the widening of the eyes in fear. Would these non-ostensive stimuli cause the same pattern of activation as their ostensive verbal gloss (or as an ostensive gesture with the same interpretation)?

Here is the hypothesis I would like to see tested: Verbal comprehension activates (inter alia) a pragmatic mechanism geared to the interpretation of ostensive stimuli that works as well for non-linguistic ostensive stimuli such as those used by the authors of this paper. It should not work, on the other hand, for non-ostensive communicative stimuli.

1 Comment

  • Royston Snart, 29 November 2009 (09:54)

    Another aspect I find interesting is the question of identity, i.e. the corresponding identities of the originator and recipient of any particular item of communication. Could the mental processing systems being investigated (and suggested to be identical in the paper cited) be influenced by whether the originator can be identified as a particular individual (friend, acquaintance, ally, enemy, …) or not, and by whether the intended recipient is me, you, someone else, everyone else, …? I have a feeling this factor could be crucial. For myself, I think I process a communication differently depending on whether I know who it’s from and whether it’s intended solely for me.