Is there a language instinct?
Chomsky’s theory has played a pivotal role in the cognitive revolution and is often seen as one of the pillars of the cognitive sciences, especially in the cognition and culture field. It is therefore quite exciting to see some cognitive scientists attacking such a venerable theory and proposing a radical alternative. That’s exactly what Nicholas Evans and Stephen Levinson do in a recent article with comments and reply in Behavioral and Brain Sciences. The debate presents a unique confrontation of widely different opinions ranging from Pinker and Jackendoff (“The reality of a universal language faculty”) to Michael Tomasello (“Universal Grammar is dead”), and offers a rare epistemological discussion about what counts as proof or a theory in cognitive sciences. Finally, by questioning the need for substantive cognitive universals, the article is the occasion to launch a debate about the future of the cognitive sciences and their relation to culture. A must-read in cognition and culture!
An alternative to Universal Grammar
Of course, the debate is far from new (see Tomasello’s review of Pinker’s The language instinct for instance). But the article takes a fresh start by presenting a summary of several decades of cross-linguistic work by typologists and descriptive linguists, and its consequences for a cognitive approach to language. The authors open the summary by questioning the linguistic universals put forward by Pinker and Bloom in their BBS article (1990) and claim that there are counter-examples to all of them:
1. “Major lexical categories (noun, verb, adjective, preposition)” (sect. 2.2.4)
2. “Major phrasal categories (noun phrase, verb phrase, etc.)” (sect. 5)
3. “Phrase structure rules (e.g., “X-bar theory” or “immediate dominance rules”)” (sect. 5)
4. “Rules of linear order” to distinguish, for example, subject from object, or “case affixes” which “can take over these functions” (sect. 5)
5. “Verb affixes” signaling “aspect” and “tense” (including pluperfects) (sect. 2.2.3)
6. “Auxiliaries” (Many languages lack auxiliaries (e.g., Kayardild, Bininj Gun-wok)).
7. “Anaphoric elements” including pronouns and reflexives (Many languages (e.g. Mwotlap) lack dedicated reflexive or reciprocal constructions altogether. Some Southeast Asian languages lack clear personal pronouns).
8. “Wh-movement” (Not all languages (e.g., Chinese, Japanese, Lakhota) move their wh-forms).
In response to the Universal Grammar (UG) difficulties, Evans and Levinson offer an alternative based on population-thinking and evolutionary approaches. For them, each language’s particular features are
“evolutionarily stable strategies, local minima as it were, that are recurrent solutions across time and space, such as the tendency to distinguish noun and verb roots, to have a subject role, or mutually consistent approaches to the ordering of head and modifier.”
These solutions result from myriad interactions between communicative, cognitive, and processing constraints which reshape existing structures through use. For instance, the device of ‘subject’, whether in English, Warlpiri, or Malagasy, is a way of streamlining grammars to take advantage of the fact that three logically distinct tasks correlate statistically. In a sentence like “Mary is trying to finish her book,” the subject “Mary” is:
(a) a topic – what the sentence is about;
(b) an agent – the semantic role of the instigator of an action;
(c) the “pivot” – the syntactic broker around which many grammatical properties coalesce.
Having a subject relation is an efficient way to organize a language’s grammar because it bundles up different sub-tasks that most often need to be done together (I used the same example about the evolution of the Na’vi language; see also Tallerman’s commentary to Chater and Christiansen for a very pedagogical example of cognitive constraints on language evolution).
Evans and Levinson thus use the inverse strategy to Pinker and Bloom’s. In a similar article (also in BBS), Chater and Christiansen made the same point:
“Instead of puzzling that humans can only learn a small subset of the infinity of mathematically possible languages, we take a different starting point: the observation that natural languages exist only because humans can produce, learn, and process them. In order for languages to be passed on from generation to generation, they must adapt to the properties of the human learning and processing mechanisms; the structures in each language form a highly interdependent system, rather than a collection of independent traits. The key to understanding the fit between language and the brain is to understand how language has been shaped by the brain, not the reverse.”
(This viewpoint does not rule out the possibility that language may have played a role in the biological evolution of hominids. Good language skills may indeed enhance reproductive success. But the pressures working on language to adapt to humans are significantly stronger than the selection pressures on humans to use language.)
Such a theory can thus explain at the same time why some linguistic structures are rare (albeit existent) and some others so widespread (albeit not universal). Indeed, rare structures are not totally absent because they are not cognitively impossible (there are speech communities that learn them). However, it may be that the immediately preceding springboard state requires such specific and improbable collocations of rare features that there is a low statistical likelihood of such systems arising. By contrast, conditional universals almost always turn out to be mere tendencies rather than absolute universals because there are always alternative strategies.
Of course, beyond their particular approach, Evans and Levinson launch a huge debate about Universal Grammar. Their response to the commentaries is particularly interesting, as they question the falsifiability of the Chomskyan program. Indeed, they argue that it is becoming more and more difficult to explain cross-linguistic data in terms of an abstract capacity for language.
“Abstractness has a cost: the more unverifiable unobservables, the greater the explanatory payoff we expect. Judging the point where explanatory superstructure becomes epicyclic and unproductive may be tough (…). But the increasingly abstruse theoretical apparatus is like a spiralling loan that risks never being paid by the theory’s meagre empirical income (cf. Edelman & Christiansen 2003). Even attempts to deal with the growing evidence of variability through the theory of parameters – projecting out diversity by limited number of “switches” pre-provided in Universal Grammar (UG) – has empirically collapsed.”
In a nutshell, the Chomskyan program is degenerating (sensu Lakatos):
“Generative theory is just one version of a theory of linguistic structure and representation, and it is marked by a lack of external explanatory variables, making no reference to function, use, or psychological or neural implementation. It has delivered important insights into linguistic complexity, but has now run into severely diminishing returns. It is time to look at the larger context and develop theories that are more responsive to “external” constraints, be they anatomical and neural, cognitive, functional, cultural, or historical.”
I am not an expert in language so I am not going to take part in this debate but, whether or not you agree with Evans and Levinson, it is refreshing to read such a fundamental debate where theories are explicitly evaluated in their own right, at a time when empirical articles are more and more exclusively focused on data (see also Tomasello’s comment below).
In contrast to universal grammar, Evans and Levinson claim that their approach is very parsimonious in terms of adaptation. For them, there is no specific cognitive adaptation to language except two key elements:
– the vocal apparatus and the capacity for vocal learning (both biological properties unique in our immediate biological family, the Hominidae)
– the refinement of our social cognition (in particular communicative intention recognition).
The epidemiological turn in cognitive sciences
Finally, Evans and Levinson’s approach to language may reflect an interesting move in cognitive sciences.
First, instead of opposing nativism and culturalism, they propose to embrace both a strong nativist stance and a strong culturalist stance.
“Cognition is less like the proverbial toolbox of ready-made tools than a machine tool, capable of manufacturing special tools for special jobs. The wider the variety of tools that can be made, the more powerful the underlying mechanisms have to be. Culture provides the impetus for new tools of many different kinds – whether calculating, playing the piano, reading right to left, or speaking Arabic.”
In a previous post, I commented on Dehaene’s “neuronal recycling hypothesis,” according to which cultural inventions invade evolutionarily older brain circuits and inherit many of their structural constraints (see his talk at the LSE). In the case of reading, for example, while the occipito-temporal cortex could not have evolved for reading, the shapes used by our writing systems underwent cultural evolution toward faster learnability by matching the elementary intersections already used in the primate visual system for object and scene recognition.
Second, Evans and Levinson argue that the cognitive approach should embrace a population stance or epidemiological perspective (and those familiar with cultural epidemiology will have recognised in Chater and Christiansen’s scheme a variation on Sperber’s Cultural Cognitive Causal Chains).
“Embedding cognitive science into what is, in a broad sense including cultural and behavioral variation, a population biology perspective, is going to be the key to understanding these central puzzles.”
Such a perspective is of crucial importance if we want to reconcile cultural variations (due to historical and contingent factors) with universal attractors (due to cognitive constraints).
Finally, they note that the strong integration between the cognition and culture perspectives implies that while cognition has a role to play in the explanation of cultural phenomena, the reverse is also true. The study of cultural phenomena such as languages, writing systems, or norms of politeness may indeed help us discover how our brain works!
Universal grammar is dead
Universal grammar is, and has been for some time, a completely empty concept. Ask yourself: what exactly is in universal grammar? Oh, you don’t know – but you are sure that the experts (generative linguists) do. Wrong; they don’t. And not only that, they have no method for finding out. If there is a method, it would be looking carefully at all the world’s thousands of languages to discern universals. But that is what linguistic typologists have been doing for the past several decades, and, as Evans & Levinson (E&L) report, they find no universal grammar.
I am told that a number of supporters of universal grammar will be writing commentaries on this article. Though I have not seen them, here is what is certain. You will not be seeing arguments of the following type: I have systematically looked at a well-chosen sample of the world’s languages, and I have discerned the following universals . . . And you will not even be seeing specific hypotheses about what we might find in universal grammar if we followed such a procedure. What you will be seeing are in-principle arguments about why there have to be constraints, how there is a poverty of the stimulus, and other arguments that are basically continuations of Chomsky’s original attack on behaviorism; to wit, that the mind is not a blank slate and language learning is not rat-like conditioning. Granted, behaviorism cannot account for language. But modern cognitive scientists do not assume that the mind is a blank slate, and they work with much more powerful, cognitively based forms of learning such as categorization, analogy, statistical learning, and intention-reading. The in-principle arguments against the sufficiency of “learning” to account for language acquisition (without a universal grammar) assume a long-gone theoretical adversary.
Given all of the data that E&L cite, how could anyone maintain the notion of a universal grammar with linguistic content? Traditionally, there have been three basic strategies. First, just as we may force English grammar into the Procrustean bed of Latin grammar – that is how I was taught the structure of English in grade school – the grammars of the world’s so-called exotic languages may be forced into an abstract scheme based mainly on European languages. For example, one can say that all the world’s languages have “subject.” But actually there are about 30 different grammatical features that have been used with this concept, and any one language has only a subset – often with almost non-overlapping subsets between languages. Or take noun phrase. Yes, all languages may be used to make reference to things in the world. But some languages have a large repertoire of specially dedicated words (nouns) that play the central role in this function, whereas others do not: they mostly have a stock of all-purpose words which can be used for this, as well as other, functions. So are subjects and noun phrases universal? As you please.
Second, from the beginning a central role in universal grammar has been played by the notion of transformations, or “movement.” A paradigm phenomenon in English and many European languages is so-called wh-movement, in which the wh-word in questions always comes at the beginning no matter which element is being questioned. Thus, we ask, “What did John eat?”, which “moves” the thing eaten to the beginning of the sentence (from the end of the sentence in the statement “John ate X”). But in many of the world’s languages, questions are formed by substituting the wh-word for the element being questioned in situ, with no “movement” at all, as in “John ate what?”. In classic generative grammar analyses, it is posited that all languages have wh-movement, it is just that one cannot always see it on the surface – there is underlying movement. But the evidence for this is, to say the least, indirect.
The third, more recent, strategy has been to say that not all languages must have all features of universal grammar. Thus, E&L note that some languages do not seem to have any recursive structures, and recursion has also been posited as a central aspect of universal grammar (in a very different way than such notions as noun phrase). The response has been that, first of all, these languages do have recursive structures, it is just that one cannot see them on the surface. But even if they do not have such structures, that is fine because the components of universal grammar do not all apply universally. This strategy is the most effective because it basically immunizes the Universal Grammar (UG) hypothesis from falsification.
For sure, all of the world’s languages have things in common, and E&L document a number of them. But these commonalities come not from any universal grammar, but rather from universal aspects of human cognition, social interaction, and information processing – most of which were in existence in humans before anything like modern languages arose. Thus, in one account (Tomasello 2003a; 2008), human linguistic universals derive from the fact that all humans everywhere: (1) conceive nonlinguistically of agents of actions, patients of actions, possessors, locations, and so forth; (2) read the intentions of others, including communicative intentions; (3) follow into, direct, and share attention with others; (4) imitatively learn things from others, using categorization, analogy, and statistical learning to extract hierarchically structured patterns of language use; and (5) process vocal-auditory information in specific ways. The evolution of human capacities for linguistic communication drew on what was already there cognitively and socially ahead of time, and this is what provides the many and varied “constraints” on human languages; that is, this is what constrains the way speech communities grammaticalize linguistic constructions historically (what E&L call “stable engineering solutions satisfying multiple design constraints”; target article, Abstract, para. 2).
Why don’t we just call this universal grammar? The reason is because historically, universal grammar referred to specific linguistic content, not general cognitive principles, and so it would be a misuse of the term. It is not the idea of universals of language that is dead, but rather, it is the idea that there is a biological adaptation with specific linguistic content that is dead.
Case-marking systems evolve to be easy to learn and process
All languages employ some morphosyntactic means of distinguishing the core noun phrase (NP) arguments within a clause. The two basic predicate types are intransitive and transitive verbs, giving three core grammatical functions: S indicates intransitive subjects (The girl slept); A, “agent” of a transitive verb (The girl saw a pig); and P, “patient” (The girl saw a pig). Some languages (e.g., Chinese, English) distinguish A and P using word order: thus, we know which mammal saw which, because A always precedes the verb and P follows.
However, many languages employ case-marking to distinguish A and P, as in Latin:
1a. Puella venit.
“The girl comes.”
1b. Puella puer-um audit.
girl.(NOM) boy-ACC hear.PRES.3SG
“The girl hears the boy.”
1c. Puella-m puer audit.
girl-ACC boy.(NOM) hear.PRES.3SG
“The boy hears the girl.”
Since S (intransitive subject) never co-occurs in a clause with either A or P, it needs no unique marking. Conversely, A and P always co-occur, and therefore must be marked differently to avoid confusion. Assuming a resolution of the tension between speaker effort (only produce essential morphemes) and listener comprehension (keep potentially ambiguous forms distinct), there are two major solutions. To distinguish A and P morphologically, it is most economical to either group S and A together, using the same case-marking for both, or else group S and P together, again using the same case for both.
These groupings, maximizing economy and comprehensibility, are exactly what we find: only two major morphosyntactic systems occur in the world’s languages. The “accusative” system groups all subjects together (nominative), as opposed to all objects (accusative), as in Latin, Turkish, and Japanese, giving an [SA][P] pattern. Conversely, the “ergative” system groups intransitive subjects and objects together (absolutive case), as opposed to transitive subjects (ergative), giving an [SP][A] pattern. Here is an illustration of this from the Australian language Yalarnnga (taken from Blake 1977):
2a. ngia wakamu
2b. nga-tu kupi walamu
me-ERG fish.(ABS) killed
“I killed a fish.”
2c. kupi-ngku ngia tacamu
fish-ERG me.(ABS) bit
“A fish bit me.”
Strikingly, languages rarely mark each of the three core functions; clearly, only one member of the opposition needs overt marking. We therefore usually find a morphologically unmarked case: in the accusative system, the SA grouping (i.e., nominative), and in the ergative system, the SP grouping (i.e., absolutive). Thus, in the Latin examples, only accusative P has an overt suffix, while nominative SA is unmarked; and in Yalarnnga, only ergative A has a case suffix, while absolutive SP is unmarked (the parentheses in examples (1) and (2) shown earlier indicate this null case morphology on SA and SP, respectively). In both systems, the unambiguous argument S is typically unmarked, again maximizing economy and clarity.
Both the accusative and the ergative systems are widespread among languages that are areally and genealogically diverse. Clearly, humans are not genetically adapted for one or the other system; moreover, since these are the major, but not the only, systems that occur cross-linguistically, it would be incoherent to suggest that they are parametrized. It seems reasonable to conclude, then, that languages have generally adapted to maximize learnability and economy by utilizing the major systems.

Logically, other possible alignments of A, S, and P exist. For instance, [AP][S] marks A and P in the same way, but S differently; this, however, would set up exactly the confusion between A and P which the major attested systems neatly avoid. This system occurs in a restricted set of pronominals in some Iranian languages; however, Comrie (1989, p. 118) notes that it is not stable, instead representing the change from an earlier ergative system to an accusative system. Such marking is unattested for core NPs. Since it does not occur, it is most likely unlearnable – hardly surprising, since it is dysfunctional.

Three broad possibilities remain. First, a tripartite system consistently uses a distinct form to mark each of A, S, and P. This lacks the economy of the two major attested systems, and is vanishingly rare. One or two Australian languages are reported as having a tripartite system for all NPs: Warrungu (Comrie 2005) and Wangkumara (Breen 1976). Clearly, this system is learnable, but is strongly dispreferred by human learners; as predicted, then, languages have generally not adopted this system. Second, a neutral system would not differentiate between A, S, and P at all, either by position within the clause, case-marking, or head-marking (i.e., verbal morphology indicating the person/number of the core arguments).
Although this occasionally occurs, in a very restricted manner, for pronominals (Comrie 2005), it is again unattested as a system for marking core NPs, and is thus, we can speculate, unlearnable. The third possibility is the split-S, or active system, which case-marks S (intransitive subjects) differently according to whether they are semantically agents or patients. This case system does occur, but is cross-linguistically rare (Blake 2001, p. 124), arguably, again, because it lacks the economy of the two major systems.
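The combinatorics behind this argument can be made explicit. The sketch below (my own illustration, not from the commentary) enumerates every way of grouping A, S, and P into case-marking classes and labels each with the cross-linguistic status reported above; the split-S system is omitted, since it subdivides S by semantic role rather than grouping the three functions.

```python
# Enumerate the logically possible case-marking alignments of the three
# core grammatical functions: A (transitive agent), S (intransitive
# subject), and P (patient).

ROLES = ("A", "S", "P")

def groupings(roles):
    """Yield every partition of `roles` into case-marking classes."""
    if not roles:
        yield []
        return
    first, rest = roles[0], roles[1:]
    for part in groupings(rest):
        # `first` joins an existing class, or founds a new one
        for i in range(len(part)):
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield part + [{first}]

# Cross-linguistic status of each alignment, as reported in the text.
STATUS = {
    ("AS", "P"): "accusative [SA][P]: widespread",
    ("A", "PS"): "ergative [SP][A]: widespread",
    ("A", "P", "S"): "tripartite: learnable but vanishingly rare",
    ("AP", "S"): "conflates A and P: unattested for core NPs",
    ("APS",): "neutral: unattested for core NPs",
}

def classify(partition):
    """Look up the attested status of one alignment."""
    key = tuple(sorted("".join(sorted(g)) for g in partition))
    return STATUS[key]

if __name__ == "__main__":
    for part in groupings(ROLES):
        classes = " ".join("[" + "".join(sorted(g)) + "]" for g in part)
        print(f"{classes:12} -> {classify(part)}")
```

Only five alignments are logically possible, and the two widespread ones are exactly those that both keep A distinct from P and reuse one class for S.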
Mixed systems, however, occur frequently. Both major systems exhibit DIFFERENTIAL CASE-MARKING (see Jäger 2007), meaning that NPs receive different – or zero – marking according to their position on a hierarchy of animacy and/or definiteness, sketched in Example (3) below (Blake 2001, p. 137):
3. 1st person pronoun > 2nd person pronoun > 3rd person pronoun > proper noun > full NP
Accusative languages typically mark P overtly only towards the top of the hierarchy; English has case distinctions for a subset of pronouns (I/me, etc.), but none for full NPs. In fact, accusative languages nearly always have differential object-marking (Blake 2001, p. 119). P arguments lower on the hierarchy are zero-marked, while higher ones are overtly accusative. Conversely, ergative systems work upwards, typically confining overt ergative marking to full NPs, or a subset thereof: Blake (2001, p. 192) notes that the Australian language Mangarayi marks ergative only on inanimate nouns, lowest on the hierarchy.
In both systems, restricting overt marking to a subset of arguments achieves greater economies. Interestingly, most ergative languages are actually “split ergative,” often marking NPs high on the hierarchy (pronouns) by the accusative system, but lower NPs as ergative. This alignment may appear difficult to learn, but Jäger (2007) demonstrates, using evolutionary game theory, that split ergative is actually highly efficient and stable, and fully functionally motivated.
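The differential-marking logic can be sketched as a toy decision rule. The hierarchy is the one from Blake (2001) quoted above, but the single `cutoff` parameter is a simplification of my own: attested languages place the split at different points, so this is an illustration of the pattern, not a description of any particular language.

```python
# Toy model of differential case-marking over the animacy/definiteness
# hierarchy. The `cutoff` splitting "high" from "low" NPs is a
# hypothetical parameter introduced here for illustration only.

HIERARCHY = [
    "1st person pronoun",
    "2nd person pronoun",
    "3rd person pronoun",
    "proper noun",
    "full NP",
]

def has_overt_case(np_type, role, system, cutoff="3rd person pronoun"):
    """Does an NP of this type, in role "A" or "P", take an overt case affix?

    - "accusative": overt ACC only on P arguments at or above the cutoff
      (differential object-marking); A and S stay unmarked (nominative).
    - "split-ergative": NPs at or above the cutoff follow the accusative
      pattern; lower NPs take overt ERG in A function, while absolutive
      S and P stay unmarked.
    """
    high = HIERARCHY.index(np_type) <= HIERARCHY.index(cutoff)
    if system == "accusative":
        return role == "P" and high
    if system == "split-ergative":
        return role == "P" if high else role == "A"
    raise ValueError(f"unknown system: {system}")

# English-like accusative pattern: "me" is overtly case-marked,
# a full NP object like "the girl" is not.
print(has_overt_case("1st person pronoun", "P", "accusative"))  # True
print(has_overt_case("full NP", "P", "accusative"))             # False
```

The same function reproduces the split-ergative pattern described above: high-ranking pronouns mark P (accusatively), while low-ranking full NPs mark A (ergatively), so no argument type is ever marked redundantly.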
Finally, case-marking and head-marking often co-occur within a language as grammatical function-signalling strategies for core NPs. Crucially, though, these strategies typically conspire, ensuring that no function is marked twice: a case/agreement hierarchy has subjects at the top (generally signalled by verb agreement) and indirect objects and other non-core NPs at the bottom (often signalled by case). This is another highly efficient system, and again illustrates the way languages apparently evolve to be learnable.