Exit Ghost?

Biological Reviews publishes a 25-author paper led by Simon Townsend and titled “Exorcising Grice’s ghost: an empirical approach to studying intentional communication in animals.” I was quite amazed to find such a huge number of authors, coming from different horizons, agreeing to sign such a provocative paper. Post-workshop, co-authored contributions like this one tend to be bland; that certainly can’t be said here. Add to this the exceptional stature of the contributors, and the result is a fascinating paper.

Animal communication researchers have long argued for characterisations of animal communication that treat animal signals on their own terms. Humans, of course, have such an idiosyncratic communication system that they make a very odd reference point to study the biology of information transfers. Things get even more worrying if our standard definition of communication only applies to some humans (adults, proficient mind-readers). It is hard, then, not to sympathise with the paper’s agenda: why not aim for a theory-free, species-neutral, minimal definition of communication? Why not, indeed; and the paper comes very close to achieving this goal—for instance, by providing a yardstick that puts gestural and vocal communication on an equal footing.

On the face of it, the authors have nothing against Grice’s legacy, per se. They’re simply trying to get his ghost out of the methodology section of ethology papers. (He might creep back into the Discussion section—fine. That’s what ghosts tend to do.) Put this way, that’s surely a laudable goal—but the details of the discussion leave an altogether different impression—the impression that the authors want to make a ghost of Grice, even where his ideas on human communication are concerned.

When explaining, for instance, that metarepresentations are difficult for young children and effortful for adults, the authors choose to ignore the side of the controversy that argues exactly the opposite. Young children have been claimed to compute others’ perspectives automatically, to pass implicit versions of the false belief task, to master truth-functional negation. The standard false belief task has been thoroughly critiqued as a yardstick of metarepresentational capacities. All of this is debated, of course, but opting so clearly against one side of the debate is not exactly the theory-neutral move we’d been led to expect. Likewise, the view that human communication aims at affecting other people’s mental states is dismissed because it “has been challenged (Moore 2015).” (Questioned by Richard Moore = forever tarnished by doubt?)

Then there are the loopholes left open by the paper’s definition of communication. As I understand it, it is sufficient that an animal acts voluntarily with a certain goal, by means of producing a signal that will change another animal’s behaviour in a way that suits that goal. At this point, we expect the authors to provide a definition of what counts as a signal; but none really comes forth. Instead, we’re told that signals must be produced voluntarily in a way that pays attention to whether the audience is paying attention. Which sounds suspiciously meta-representational—but that’s not the main worry. The main worry is that if we go by this definition, we should be calling communication a series of behaviours that aren’t usually called that, and for good reasons.

I am fly fishing. I drop some bait in the water to attract graylings. The graylings come. Only now do I set my lure flying. One of them bites! Communication?

http://tinyurl.com/jvagg69

A pass by Miguel Angel Perera. Would you call this communication between man and animal?

The torero waits until he has the bull in his sight. He then produces the red muleta from behind his back. Now he definitely has the bull’s attention. He waves the muleta. As intended… the bull charges! Communication?

If we apply the paper’s “new framework” to the letter, then the answer has to be Yes, both times. A voluntary behaviour is produced, with the intention of inducing another animal to perform another behaviour, by means of a signal that is voluntarily produced (if and when the audience is in the right state to process it). On the other hand, though, counting these episodes as communicative obscures an important distinction between manipulation and communication (as theorised by John Maynard Smith and David Harper). There is nothing in the bull’s brain that evolved to be affected by muletas; nothing in the grayling’s brain whose response to lures ever increased the grayling’s fitness. Quite the opposite: both would fare better if they paid no heed to human traps. Yet there is arguably something in your brain and in mine that did evolve to be receptive to human words.

Is there really? And if there is, what might this communicative capacity look like? The Gricean perspective has answers to all these questions that the new, theory-neutral perspective ignores (so far). Something tells me Paul Grice’s chains will still be heard clanking in the corridors of animal communication labs a few years from now.

6 Comments

  • Richard Moore 2 September 2016 (13:13)

    A few quick comments, before I disappear into the Polish countryside for the weekend. (With hindsight, the response turned out to be longer than I planned.)

    Olivier’s bull counter-example here is a very nice one. I think he’s right that a lot of the relevant work is going to be done by the appeal to signalling, and that in appealing to signalling here the authors are smuggling in a concept that is doing a lot of work for them. Presumably they would want to avoid the graylings counter-example by just stating that breadcrumbs are not signals. And perhaps the authors’ answer to the question ‘What is a signal?’ would be: you know one when you see it.

    Obviously that isn’t good enough, though. I suspect that in the background was the idea that a signal could be defined in the terms used by primate communication researchers to identify gestures and calls. This turns on the idea that a sign is causally inefficacious; but causal inefficacity is independent of concerns about whether it works because of some prior adaptation. (Given that, I’m not sure I get the point of Olivier’s invoking Maynard Smith.)

    Of course, an appeal to causal inefficacity won’t really solve the problem either. While it might suffice to rule out the case of the graylings, it’s not clear that it rules out the case of the bull. At least not cleanly.

    I think this points to a much deeper problem in all of our characterisations of communication, though – going back to Grice’s third clause.

    When Sperber and Wilson describe communication as depending on intentions to affect mental states, one way to read that is in terms of an attempt to deal with Grice’s original third clause. This is the clause that stipulated that H’s response to S should be a rational, and not merely a causal, response (like tickling or bleeding would be). There are very good reasons to reject this third clause, though – as many (notably Sperber and Wilson) have pointed out.

    By introducing the idea of changing mental states into the content of the first clause, Sperber and Wilson gain a way both to rule out certain counterexamples to what should count as communication and to reject Grice’s original third clause. For example, I can’t stab you with the intention that you bleed, and with the intention that you recognise my intention that you bleed, because intending that you bleed isn’t the right sort of content for a communicative act. Communicative acts are only those that intentionally change mental states.

    But here is where my objection (developed in both a Current Anthropology response to Thom Scott Phillip’s pragmatics paper, and in my Philosophical Quarterly paper; and mentioned by Olivier) comes in. The problem is that on Sperber and Wilson’s formulation, it looks like communication requires that speakers and hearers have mental state concepts; and that these are deployed in formulating the contents of their utterances. However, this looks too strong. One thing that is undeniable is that communication works because communicators change one another’s mental states, and they do this intentionally. But it’s not clear why I need to change your mental states under a mentalistic description or conception of what I’m doing. For example, consider the following case: S utters the words “Go away!”, intending that:
    1. H go away, and
    2. H recognise that S intends (1).
    If it is insisted that S intends (1) in a way that deploys mental state concepts, it seems like RT is committed to treating (1) as some sort of ellipsis for:
    1*. H form a belief that S wants him to go away.
    This looks intellectualised. For it may be that H does form that belief on the basis of S’s utterances – but why require that S entertain a content like (1) and not (1*)? Why require that S consider the content of her own utterance in mentalistic terms? Problematically, the alternative move seems to be to deny that one can form a communicative intention by uttering the words “Go away!” with the intention that H go away (and (2)). This looks absurd.

    So the Sperber and Wilson restriction on contents seems like it saves us from the problem of giving a good account of the causal inefficacity of communicative acts, but at the cost of dramatically intellectualising their contents. That’s why I have argued that we ought to drop the requirement that communicators intend to change their interlocutors’ mental states under a mentalistic description.

    Of course, by dropping it, one loses the response to the problem of causal inefficacity. And this is a problem. But spelling out what this problem amounts to is a problem for anyone. For it’s very hard to spell out the right notion of causal inefficacity in more than intuitive but inadequate ways. For example – how can one reconcile the fact that communication is causally inefficacious with the RT claim that much processing of speaker meaning is subpersonal and automatic? Aren’t automatic subpersonal processes paradigmatically causal processes?

    In short, then, Townsend et al.’s challenge will be to spell out what signalling amounts to in a way that distinguishes it from Olivier’s counterexamples. One way to do this is by appeal to the notion of causal inefficacity, but this is fraught with difficulty. Another way would be to argue for a mentalistic conception of content, but this strikes me as equally problematic. I don’t claim to have the answers, but I don’t think it’s a given that Olivier’s solution is better.

    One thing I should say: those who have read the paper will know that I am a co-author. I wasn’t at the original workshop, though; I came on board at the end just because I helped to clean up some of the formulations. In that sense (as the title probably indicates) it isn’t my paper in the way that it is Simon and Sonja’s.
    At the same time, I still feel protective of the content, and so I want to acknowledge that Olivier’s challenge is a good one, and that meeting it is going to be important for the authors of this paper (me included).

    P.S. I just finished a draft of a paper on Grice’s original third clause. It doesn’t deal with these issues directly, but may be interesting to some anyway. It’s on my Academia page.

  • Richard Moore 2 September 2016 (15:40)

    P.P.S. There may be a route between Scylla and Charybdis; a compromise between Dan and Deirdre’s position and my own. The contents of utterances might be specified in terms of their necessarily working by changing the mental states of H, and not some other states of H (e.g., physical or emotional states).

    This wouldn’t require that the causal basis of the success of S’s utterance be part of the content of her utterance. So S could still utter intending H to go away, without having to entertain some intellectualised reformulation of that goal. But there still seem to be a few problems with this approach.

    First, even here there will be lots of blurry cases – including, I suspect, Olivier’s bull. (Which of the bull’s states does the bull fighter intend to change?)

    Second, perhaps now it still seems like there should be some feature of S’s psychology that reflects the appropriate constraints on content. That is, S should be able to grasp that there was some difference between cases of communication and cases of, say, beating someone up; that they work by somehow different mechanisms. (Robert Thompson makes a suggestion like this in his 2013 M&L paper.) But specifying these constraints will also be difficult.

    Perhaps one could formulate a workaround, in which S grasps that her utterance works by changing H’s [soul*] and not his [body*]. But here the contents of the square brackets would only ever be placeholders – something like rigid designators picking out we know not what.

  • Richard Moore 2 September 2016 (15:42)

    Sorry, important typo: “First, even here there will be lots of blurry cases – including, I suspect, Olivier’s bull. (Which of the bull’s states are changed by the bullfighter, relevantly contributing to its charging?)”

  • Olivier Morin 5 September 2016 (09:57)

    Thanks for the comments Richard; good to read you on this site. My ambition in this post wasn’t to provide a full-blown defence of the relevance-theoretic conception of communication (others have done it much better than I could, and we had a book club on the topic). The issue deserves more than a blog comment; after all, there’s so much ground to clear. We don’t agree on the evidence for or against metarepresentational abilities in young children, and we don’t agree on what counts as “overintellectualising communication.”

    I know this is not the fashionable point of view, but most of the major advances in cognitive science seemed, at first, to “overintellectualise” cognition. As they should. After all, most cognitive processes happen under the threshold of consciousness or at its edge. Like good software, a good cognitive process is one that feels easy, fluent, intuitive, and is anything but. How could vision be as complex as a Bayesian algorithm when it feels so immediate? How could syntactic processing involve so many trees when sentences don’t feel in any way shaped like trees? Because these things happen, in part, under the hood. Intuitions mislead, and intuitions about how intellectualised cognition can be mislead us almost systematically (I would argue). To me the slogan “let’s not over-intellectualise communication” makes as much sense as “let’s not over-intellectualise cognition” (though I know that is a plausible aim for many).

    But then again, I doubt we’ll settle this in a blog comment, so let me just point out that you don’t seem to disagree with the substance of this particular post.

    Townsend et al. offer a new characterisation of communication. It provides, they claim, a reasonably complete list of criteria, and one that is not weighted by controversial theoretical commitments (“a more theory-neutral approach to studying intentional communication (…) a less theory-laden approach to intentionality”). I think you and I agree that the paper doesn’t, in fact, deliver either.

    Townsend et al.’s account is incomplete (so far), since at least some important non-communicative interactions still fall under it. As you point out, we could accept this incompleteness as a price to pay for having a better theory of communication: a theory where [added after Richard’s comment: fourth-order] meta-representations are not required for communication (even in humans).

    You may or may not be right on this, but I think you’ll agree: the theory-neutral ship sailed long ago. We are embarked on a theory-laden (indeed: philosophical) debate. That is not how you and your coauthors presented this contribution. It seems important to make clear that the “theory-neutral” view is in fact wedded to a very specific theoretical agenda.

    ps. I’ll make sure I read your paper on the third clause.

  • Richard Moore 5 September 2016 (10:19)

    Thanks for the reply Olivier. A few very quick thoughts.

    I spent some time over the weekend wondering if it was right to characterise Townsend et al. as trying to give a new account of communication. They could answer that question better than I can, but I’m not really sure that that is what they are trying to do. At least, I don’t think their account is metaphysical. Rather, I think they are trying to make explicit an account of what they are committing themselves to when they talk about intentional communication in animals; and they are trying to do this in a way that frees them from issues about Grice and high orders of metarepresentation.

    So perhaps this paper should be viewed as something of a hybrid: a reiteration of some basic features that are thought central to the identification of communicative acts, and an account of the cognition this is thought to presuppose. But I don’t think it’s intended to be an analysis of the nature of communication in the way that Grice attempted; and I see no reason for your attributing to the authors the intention to do away with appeals to Grice in human communication.

    Second, a couple of times you set up the issue as a choice between an account of communication that turns on metarepresentations and one that does not. I think that’s a false dichotomy – at least here.

    It may be that some accounts (perhaps Dorit Bar-On’s? Or Mitch Green’s? Or some new work by Bart Geurts?) do or will try to argue that communication can be explained without recourse to any metarepresentational abilities at all. But that wasn’t claimed in this paper, and has certainly not been claimed on my account – which is explicitly metarepresentational. The issues I raise are about how much metarepresentation is needed (one layer or four?).

    With respect to infants and apes, there is very good evidence that they can attribute goals to others (a first order metarepresentation, or something like it), but no evidence whatsoever that they can attribute fourth order metarepresentations. Indeed, in kids there is robust evidence that ten-year-olds fail to attribute such metarepresentations. So I don’t know what evidence you think we disagree about here; by all means elaborate if you think I’m missing something.

    Perhaps you think that engaging in Gricean communication alone is sufficient evidence of the fourth order states, but I’d want to see independent evidence of this ability. I don’t think a controversial theoretical analysis is itself empirical evidence of much.

  • Martin Stehberger 5 September 2016 (11:34)

    Communication is not a topic where I’ve read much literature, and perhaps what I’m going to say is already discussed or refuted elsewhere, but f.w.i.w. here is a proposal for a definition of communication that aims to capture the notion of causally inefficacious signals that Richard is discussing in his comments. Define communication as an attempt to change the audience’s representation of a part of the world without changing the represented part of the world itself.

    As to what constitutes an “attempt” and what the audience’s representation of the world looks like, that’s for us as observers to determine, using the intentional stance (Dennett) towards the communicator and towards the audience. Olivier says that the authors of the paper he is discussing aim for a minimal and theory-free definition, but also that they use the communicator’s “goal” in their definition, which means they adopt the intentional stance as well, so I’m not making it more complicated. And notice there would be no requirement for communicator or audience to be using the intentional stance or to have any meta-representational abilities at all.

    The definition as proposed rules out Olivier’s fly-fishing case quite clearly. The same act that lets the fish get the idea there is food also acts on the supposed food itself. If the fish only thought something like “I shall eat now”, then it would be communication, but clearly they have a more precise representation of what they are going to eat and where it is. Otherwise the fishing ruse would not work.

    By contrast, Richard’s example “Go away!” does qualify as communication. The hearer now knows that the speaker wishes him to go away, a wish that had already existed before the utterance and was not changed by it.

    As for Olivier’s bull case, it could qualify as communication if the bull reacts to a challenge that it sees as being issued from the torero, and I would be fine with that. But I guess in reality the bull just gets irritated without thinking that far. So no communication there.

    Olivier’s two counterexamples, the fly fishing and the bull, work well, but I think their manipulative aspect might be a red herring. (Probably this is also what Richard means when he says that he does not get the point of Olivier’s invoking Maynard Smith.) We could imagine the bull as blind, so that the torero needs an assistant who (somehow) lets the bull know a red muleta is being waved. That would be equally manipulative but now seems to be a proper case of communication.