{"id":833,"date":"2015-06-24T13:21:45","date_gmt":"2015-06-24T11:21:45","guid":{"rendered":"http:\/\/cognitionandculture.local\/?p=833"},"modified":"2023-07-23T19:55:36","modified_gmt":"2023-07-23T17:55:36","slug":"combinatoriality-and-codes","status":"publish","type":"post","link":"https:\/\/cognitionandculture.local\/webinars\/speaking-our-minds-book-club\/combinatoriality-and-codes\/","title":{"rendered":"Combinatoriality and codes"},"content":{"rendered":"
I read this book as part of an interdisciplinary reading group at Cardiff. There’s a lot to agree with in the book, but the commentary below focuses on two points that we found confusing.
**Combinatorial communication**

Chapter 2 claims to be about the impressive expressive power of language: we can construct an infinite number of sentences, expressing new ideas and capturing a huge range of meanings with a finite set of building blocks. At least, this is what Pinker, Fodor and others find extraordinary about language. The apparent target of this chapter is to explain whether code communication or ostensive communication is the more likely route to it. But instead of trying to explain the productivity and expressivity of language by focusing on compositionality and systematicity, the chapter focuses on ‘combinatorial’ communication of a particular kind. However, this fits the usual definition of neither combinatoriality (combining meaningless units) nor compositionality (combining meaningful units, where the meaning of the whole is composed of the sub-meanings of its parts), and I think it ends up answering a very different sort of question.

Combinatorial communication as defined in the chapter is where two (or more?) meaningful signals are combined to form a new signal whose meaning is not the sum of the meanings of its subparts (p. 27). Instead, a new signal is formed with a totally different meaning. So, adding monkey ‘pyows’ (leopard!) to ‘hacks’ (eagle!) results in a ‘pyow-hack’, which means that the group will soon move to a new location. This is therefore not a case of compositionality or combinatoriality. Scott-Phillips admits that this is not really a case of ‘combination’ either (p. 50), since it is effectively just adding two holistic signals to form another distinct holistic signal, rather than a ‘combinatorial’ signal.

As far as I can tell, the most plausible way to interpret the claim in this chapter is that it is difficult to combine signals to (non-compositionally) form new signals if there is nothing obvious in the environment which correlates with the meaning of the new signal (so for which the new signal cannot be either a cue or a coercive behavior). This means that building up new, non-compositional vocabulary by adding together existing signals is unlikely under code communication.

This seems plausible… but it’s hard to identify just what question this chapter is actually addressing. The massive flexibility and expressivity of language has very little to do with whether we can add existing signals together and get a signal with a totally different meaning. Instead, it is usually taken to be related to our ability to add signals together to form new signals whose meanings are composed of the meanings of their subparts, and to how we can add those together further in systematic ways to form, e.g., sentences where, again, the meaning of the whole is at least strongly related to the meanings of its subparts. What Scott-Phillips instead seems to be addressing in this chapter is a form of vocabulary building – how to get new holistic signals from existing holistic signals, whose meanings are all unrelated.

The section on ostensive communication also seems aimed at vocabulary building – if you’re good at ostension, then you can come up with signals for whatever meaning you want. Here though, instances of “combining” (e.g. p. 43) really are instances of compositionality – new signals are generated whose meaning is composed of the meaning of their subparts.
Here then, a) we are not always comparing like with like, and b) the roles of both codes and ostension are related specifically to vocabulary building, not directly to the compositional nature of language.

So, while vocabulary building probably is easier under ostensive communication than under the code model, and so ostensive systems might be more expressive in this sense, the compositionality of language, and so its impressive productivity, is not addressed. Further, to the extent that proto-language demands compositionality (does it?), this does not show that code communication is inadequate to build proto-language.

**The relationship between ostensive-inferential communication and communication via codes**

This is obviously supposed to be the theme throughout the book: that you cannot get to language via an elaboration of the code model, even by adding in ostension. Instead, the claim is that in some sense ostensive communication is primary, and made more expressive via conventional codes (e.g. p. 16, and elsewhere). However, I found it hard to track what exactly this claim amounts to throughout the book (is it a claim about the actual development of linguistic systems? a conceptual claim? just a plea for a shift in research attitudes?).

The best reconstruction of the claim I can offer is this: code communication with ostension tacked on is not as flexible as language actually is; to get real flexibility, you have to start with ostension first, which is helped along by conventional codes. Given the claims made throughout the book that language is not code-like anyway (using ostension you can get words to mean whatever you want), this seems like a straightforward statement. But I do wonder how it plays out in the actual development of early linguistic systems. There it seems less straightforward.

First, the ‘code-like’ features of language are incredibly useful. Even the metaphorical or more flexible uses we put language to are often based on nets of semantic associations. Further, while we can in theory use words in radically flexible ways, a huge amount of communication does rely on meanings being fairly stable (conventional codes). This is because codes not only make communication more powerful, they also make it (cognitively) much easier.

This seems particularly relevant in the development of early linguistic systems, where, given that the first forays into ostensive communication were likely to be hit and miss, you’d need all the help you could get. Communicating with someone in the absence of a shared language is hard work, even in fairly simple contexts, and even if you both have A+ mind-reading abilities. What you need are ways of minimizing ambiguity to a level where mind-reading has a reasonable shot.

Accordingly, one way of minimizing ambiguity in communication that is often discussed in language evolution is iconicity (e.g. here in Chapter 5). Iconic signals ‘look like’ the things they represent, which should make it easier to grasp their meaning. Discussions of the important role that iconicity may have played in early communication systems are predicated on precisely the idea that linguistic communication is hard, even with ostension, so signs probably started off iconic and then later became arbitrary.

Another way of minimizing ambiguity is of course with codes. It then seems reasonable that at least some conventional codes were likely to have been derived or adapted from existing natural codes, with others added on via different means (e.g. perhaps conventionalized iconic signs). In this case, there would have been some ‘continuity’ (p. 48) between earlier code communication and later ostensive communication. Indeed, one of the questions that kept coming up in the reading group was what happened to earlier natural codes – surely hominids would not just drop them entirely, but, as great apes do, use them in ever more flexible ways.
Something like this reliance on codes is also found in e.g. Section 5.4 – where proto-language includes a set of “more-or-less stable communicative conventions” (p. 117).

In this case, early language users would presumably have relied on, among other things, both complex coding/decoding mechanisms (association) and mind-reading abilities (metapsychology) to get early proto-linguistic systems off the ground – not just one of them in isolation. On a conceptual level, and rather obviously, all you need for ostensive communication is ostension, but practically, associative mechanisms would also have been crucial in getting linguistic ostensive systems going.

However, it is hard to tell whether this picture, in which both mechanisms have a role to play and neither route (pure code or pure ostension) looks particularly plausible in isolation, amounts to a challenge to Scott-Phillips’ view. Perhaps the difficulty in identifying claims here (and elsewhere in the literature) stems from ambiguities about the explanatory roles that particular mechanisms are supposed to play, and at exactly which stage of the evolution of language. Clearly, if the primary marker of the difference between linguistic communication and non-linguistic communication is deemed to be the use of ostensive-inferential abilities, then these will play a key role in explaining the emergence of “language proper”, but if linguistic communication is identified in some other way (e.g. displaced reference), then the focus may well be elsewhere.