• 'Speaking Our Minds' Book Club

Cats, tacs and kunvenshuns

First of all, thanks to Thom for his excellent book. I agree completely that pragmatics has been under-represented in discussions of the evolution of language (with the notable exceptions you mention). I was, I recall, the only pragmaticist speaking at Evolang in Paris in 2001. I recall also that I was advised in the strongest possible terms not to go by a certain person: he knows who he is, but shall remain nameless! Thanks also to Tiffany and Olivier, and to cognitionandculture.net, for inviting me to participate.

As someone whose interest in relevance theory has come via linguistics, rather than say, psychology, anthropology or cognitive science, I will not address the areas of the book with which I broadly agree – the centrality of ostensive-inferential communication, the emergence of language as a tool to make that more explicit, mindreading, cultural attractors etc. Much of the book is, as far as I can see, right. However, there is one thing I’d like to take issue with.

On page 19 Thom introduces two pieces of terminology from my 2003 Mind and Language article (and 2009 book [1]): ‘natural codes’ and ‘conventional codes’. These, he admits, he has adapted to suit his own purposes, but the definitions he offers are in pretty much the same spirit as mine. ‘Natural’ codes are those used in communication that relies on strict coding and decoding (bee-dancing, bull-frog calls, etc.). In the class of natural codes I include human behaviours such as smiling, with the proviso that these can be recruited for use in ostensive-inferential communication by being either deliberately shown (in the Gricean sense) or even faked.

Conventional codes, on the other hand, are those regularities perpetuated by tacit agreement between members of a particular community: driving on the left (or, inexplicably, the right…); Morse code; leaving a gratuity in a restaurant; the person who initiated a phone-call calling back in the event that you are both cut off. Thom then goes on to define language as ‘the rich, structured collection of conventional codes that augment ostensive-inferential communication within a given community’ (p. 20).

But here is where we differ. You see, I presented the notions of ‘natural’ and ‘conventional’ codes not in order to point out that language is an example of the latter, but rather to point out that the human linguistic code is, crucially, neither.

Let me explain: I could just about be persuaded that we, as members of the same speech community, could all agree to call ‘cats’ ‘tacs’, or even spell ‘convention’ ‘kunvenshun’. Indeed, something along these lines goes on when young people decide as a group (and to the exclusion of old fuddy-duddies such as myself) to describe a positive experience as ‘really bad’ or a really cool band as ‘really sick’ (or, for that matter, adopt a novel spelling of the word ‘cool’).

But there are properties of language – the headedness of phrases, the fact that dependencies are local, so-called ‘island’ effects and many more – that surely cannot be the result of tacit agreement among members of a speech community. Moreover, these properties of language cannot be induced by children acquiring that language because they are simply not there in the data they hear.

I’ll quote William Lycan, as I do in my 2003 paper:

“…most sentences of a language are never tokened at all; since hearers instantly understand novel sentences, this cannot be in virtue of pre-established conventions or expectations directed on those sentences individually” (1991: 84).

So, what I’d like to ask is this: How does ‘language’ as defined as a set of conventional codes fit with the notion of language viewed from an internalist, modular, domain-specific perspective, or, for that matter, with the myriad advances made by generative linguists? Relevance theory, as I understand it, was conceived as a framework intended to complement such a view of language.

And relatedly, to touch on some points raised in Chapter 6, to what extent does the view sketched in Speaking Our Minds really still allow room for some innate specification in language evolution? Thom doesn’t appear to rule out the possibility of an evolved Universal Grammar-like mental faculty, but in the end he sits on the fence. Then, on p. 136 he proposes that the cultural attractor account is an alternative to Universal Grammar. This makes me feel uneasy and I’d like to know more.


[1] Wharton, T. (2009). Pragmatics and non-verbal communication. Cambridge University Press.

6 Comments

  • Thom Scott-Phillips 3 July 2015 (13:47)

    Tim, you tease! You must tell! Who told you not to go to Evolang? And why? (Email me the answers!)

    Whoever it was, Tim ignored the advice, and attended the 2000 (Paris) and 2002 (Harvard) conferences. His non-attendance since then has been language evolution’s loss. Here is why: it was Tim who first drew attention to the distinction between natural codes and conventional codes, and greater awareness of the difference between these would without question represent progress in language evolution. Perhaps SOM will help to facilitate this progress.

    In the comments here Tim suggests, contrary to the view I put forward in SOM, that (at least some parts of) languages cannot be conventional codes: “there are properties of language – the headedness of phrases, the fact that dependencies are local, so-called ‘island’ effects and many more – that surely cannot be the result of tacit agreement among members of a speech community.” Why not? Tim endorses William Lycan’s elaboration of the problem: “most sentences of a language are never tokened at all; since hearers instantly understand novel sentences, this cannot be in virtue of pre-established conventions or expectations directed on those sentences individually”. Perhaps an important first point to make is that, contrary to this quote, it’s not sentences that are conventions, it’s the component parts, including the structural elements. This includes words, phonological patterns, morphosyntactic operations, and so on. Critically, all of these are tokened. It’s in this way that languages are sets of conventions.

    But I suspect this is missing the point. Tim may have something else in mind. He draws attention to some textbook examples of universal, or near-universal, patterns in the world’s languages (headedness, local dependencies, island effects), and comments: “these properties of language cannot be induced by children acquiring that language because they are simply not there in the data they hear”. This is a classically Chomskyan poverty-of-the-stimulus argument.

    But who says these are properties of language? They are properties of languages, but we cannot assume that the properties of languages derive directly from a cognitive mechanism that we would willingly call language. Just as, say, systems of kinship cannot be assumed to be direct manifestations of a faculty of kinship, we cannot assume that language structure is a direct manifestation of a faculty of language. General cognitive constraints may suffice. Moreover, culture is not simply the design of the mind writ large. So the question is: are these properties of languages the causal consequence of a faculty of language worthy of the name, or are they the causal consequence of less domain-specific aspects of the mind?

    This latter hypothesis is certainly plausible. Arguably the most substantial empirical finding of the field of language evolution to date is one that I summarised in §5.7 of SOM, that semantic compositionality can be explained in large part by two factors that are not domain-specific cognitive mechanisms: expressivity (languages should be good for communication), and learnability (languages should be as easy to learn as possible). It is certainly possible that there is a similar story for all properties of languages. See, for instance, the examples I listed in a preview of SOM that I wrote for Replicated Typo. Regarding the sort of properties that Tim draws attention to, one hypothesis would be that learners modify languages in the direction of forms that are easier to memorise, in which case ease of memorability would be an important factor of attraction (this is just a speculative hypothesis to illustrate the point). Whatever the relevant factors, the important point to make here is that in the process of acquiring a language, individuals sign up to conventions, and sometimes contribute to changing the conventions at the same time. In fact, I’d say this is true by definition: acquiring a language just is learning a community’s given set of conventions (with perhaps some intelligible modifications), and tacitly agreeing to abide by them.

    Where does this leave Chomskyan Universal Grammar? In §5.7 of SOM I argued that the agenda for research into the cultural evolution of language should focus on the following question: what are the relevant factors of attraction for each linguistic feature of interest? As I said above, for one feature (semantic compositionality), we have already identified two critical factors, and for other features, some other factors have been identified. What I leave open is the possibility that a Chomskyan Universal Grammar is an important factor of attraction in some cases. At the same time, it may be the case that there is no such Universal Grammar, and that none of the factors of attraction that influence language evolution are language-specific. It is in this way that the cultural evolution view is a possible alternative to Chomskyan Universal Grammar.

  • Dan Sperber 3 July 2015 (14:09)

    I find the issue of conventionality a difficult one, both for conceptual and for theoretical reasons. Here I just want to make two points in haste:

    Agreeing with the insightful idea that the cultural evolution of languages adjusts them to evolved psychological dispositions leaves open the issue of whether (or better: the degree to which) these dispositions are language-specific. For the sake of comparison, take right-handedness, an evolved feature of the species and a potent factor of cultural attraction. Many cultural practices, shaking hands, cleanliness usages, placing of cutlery, and so on, have culturally evolved in line with this statistical regularity. Could the advantage in coordination provided by a side bias have been a factor in the evolution of this bias? In principle yes (this is by way of illustration; I am not claiming that such is the case). In language too, it is possible - and some of Chomsky's linguistic arguments are highly relevant here - that some of the psychological dispositions that are factors of attraction in the cultural evolution of languages evolved biologically in part because of their influence on the cultural evolution of languages.

    In thinking about the notion of convention, I find Ruth Millikan's 1998 paper, "Language Conventions Made Simple" (http://philosophy.uconn.edu/wp-content/uploads/sites/365/2014/02/Language-Conventions-Made-Simple.pdf) more on the right track than David Lewis's and related views.

  • tim wharton 3 July 2015 (16:35)

    Many thanks to Thom for his generous response. I’ll reveal the identity of the person who told me not to go to Evolang Paris (yes, it was 2000…) when we finally meet. And meet soon we shall: SOM has spurred me into action and I plan to dedicate my next book to evolutionary issues. This has to be a quick post, so apologies if I don’t deal with everything you raise: the retiming of the club has not been ideal for me.

    Firstly, yes, as you say, sentences are not conventions. But if language is a conventional code, the structures internal to those sentences (read utterances) – including, as you also say, phonology, morpho-syntax etc. – are conventional, and tacitly agreed on by language users. Those structures, you point out, “are all tokened”. But they’re not: children are not exposed to every linguistic structure; despite this they acquire them all. That’s why it’s taken linguists so long to fathom out the kind of universals I mentioned. I thank Dan for pointing me in the direction of Millikan 1998 on conventions: I will revisit it.

    Secondly, Thom writes of the language universals I identify: “But who says these are properties of language? They are properties of languages, but we cannot assume that the properties of languages derive directly from a cognitive mechanism that we would willingly call language.” I think there is some arguing past each other going on here, so let me return to a point Deirdre made. I say these have to be properties of language because language is the cognitive mechanism. The object of study is not languages, but language. And this, I submit, is much, much more than a mere terminological quibble. Accounts that criticize Chomsky’s views on the evolution of language nearly always mistake what he means by language.

    Here’s a nice joke from Georges Rey, who writes some interesting stuff in this area: 'A linguist asked him [Chomsky]: “Noam, I like your new work, but you can say the following in Welsh” [and he produced a Welsh sentence that would have been excluded on Chomsky's view]. To which Chomsky replied: “Well, that just shows you Welsh is not a natural language. In fact, come to think of it, it's commonly presumed that people speak natural languages. There's not a shred of reason to believe it."'

    Finally, and returning once more to your response to Deirdre, I think you over-simplify Merge. (And I thank David Adger for useful communication about this.) Yes, Merge is indeed the recursive operation that works on syntactic objects, but Merge comes in different varieties: set Merge, list Merge, pair Merge. And as well as Merge, there are the general principles that are required to tell you to project the information in a certain way (which gives you headedness), and the cyclicity that leads to island effects and more. I don’t know of any other species that has anything like the human ability to put things together the way Merge, together with these general principles, puts things together.

    Thanks again Thom, and thanks to everyone for an illuminating, enriching book club. (And I've never been called a tease in an online forum before. I think I like it.)

  • David Adger 4 July 2015 (16:50)

    Thanks to Tim for pointing me in the direction of this interesting discussion. I think, though, that Thom, and many others, underestimate the problems in deriving universal properties of language as a cognitive mechanism from externalities interacting with general (but innate) principles of learnability and communication following Simon (Kirby)'s work.

    For example, bound variable anaphora (things like `every child thinks he deserves a present', with the covariant reading of the pronoun, where the quantifier binds the variable) are subject to conditions across languages (whenever the phenomenal lay of the land lets it show), where the covariant reading tracks the scope of the quantifier, and the scope of the quantifier is determined by the finiteness of the clause that contains it. The poverty of stimulus issue here is particularly sharp, but even without it, the basic empirical issues are, I think, already decisive. The best (in fact only empirically successful) accounts of this all need to refer to principles that, while they may be at play elsewhere in cognition, are not principles of communication or learnability at all (I have a recent paper on lingbuzz at http://ling.auf.net/lingbuzz/002511 about this particular issue that's currently being revised, so any comments welcome). They are principles of structure generation and periodicity, and chunking of interpretation, etc. That is, they are basically principles of the internal manipulation of configurations (in the case at hand, linguistic configurations), and their interpretations.

    I couldn't really care less whether these were language specific (so it may be that the set of language-specific, non-data-derived properties of human language could be zero, so no UG in the technical sense of the term), but they are, I think, deep principles of certain aspects of cognition. My guess is that these principles interact with equally general principles of learnability, memory, processing, social structure etc., to give rise to the panoply of phenomena we see. But without them, there's not a snowball's chance in hell of explaining relative clauses, parasitic gaps, bound variable anaphora, scopal properties of modals and negation, constraints on subject extraction, ... and basically the huge literature of phenomena discovered over the years in generative grammar. You can see this by looking at non-generative frameworks, like cognitive or construction grammar, which, when they analyse these linguistic phenomena, posit structures of quite immense complexity that are just stated to be extra-linguistic, but which actually still require a great deal of specificity; it's just specificity that's tied to, say, the terms of gestalt psychology.

    Now, there's a certain amount of nostra culpa here, as we generative grammarians haven't been good enough in explaining, in accessible terms, the nature and robustness of our results, but nevertheless, these results tell us about crucial properties of languages that reveal deep regularities in language (the underlying mechanism), and they are very robust.

  • Thom Scott-Phillips 12 July 2015 (22:06)

    Thank you David for joining the discussion.

    Let me respond directly.

    “Thom, and many others, underestimate the problems in deriving universal properties of language as a cognitive mechanism from externalities interacting with general (but innate) principles of learnability and communication following Simon (Kirby)'s work… I couldn't really care less whether these were language specific… but they are, I think, deep principles of certain aspects of cognition. My guess is that these principles interact with equally general principles of learnability, memory, processing, social structure etc, to give rise to the panoply of phenomena we see.”

    I actually agree that learnability and communication are not the only factors at play here. (Simon might disagree; I don’t know.) Indeed, I said as much in SOM: “Clearly, numerous attractors are important for the cultural evolution of languages. I identified two above, for the purposes of exposition… but there will be many more” (p.124). (There is - mea culpa - a small mistake here: I should have written “Clearly, numerous factors of attraction are important for the cultural evolution of languages”. Still, I think my meaning was clear.) So I totally agree that deep principles of cognition are critical. The challenge is to identify exactly what these principles are. As such, one important subsequent question is: are these principles language-specific? (You may not care what the answer to this question is, but many people do, including many generativists.)

  • David Adger 14 July 2015 (11:36)

    Thanks for the reply Thom, I guess I'd assumed that the attractors you were discussing weren't the kind of computational principles I had in mind, but I'm probably wrong about that! You say that the challenge is to identify exactly what these principles are. But I gave a few in my comment and in general I think we have a pretty good idea about at least some of these principles (principles of constituency formation, of interpretive domain, of structure alteration etc). These principles may not be language specific, in that they may be used elsewhere in cognition (e.g. music, arithmetic, whatever), but it's pretty hard to explain various phenomena of human language without them and our best understanding of them comes from investigation of syntax/semantics across languages. So I guess the question for you would be, is something like Merge, or say cyclic interpretation of structure, a possible attractor in your view?