Can teleology explain why very young children help a mistaken agent?

Three-year-olds fail to accurately predict where a mistaken agent is likely to look for her toy when they are explicitly asked to do so. However, preverbal infants (who are not asked anything in implicit tasks) have been widely shown to expect a mistaken agent to act in accordance with the content of her false belief (cf. Baillargeon et al., 2010, for a review). This is the puzzle of the discrepant developmental findings.

One of the most influential attempts at resolving this puzzle is Josef Perner’s account, based on the distinction between implicit and explicit understanding of another’s mind. On Perner’s account, young children who are unable to correctly predict the likely action of a mistaken agent when explicitly requested to do so by an experimenter lack an explicit understanding of others’ false beliefs. Infants have an implicit, but not an explicit, understanding of others’ false beliefs. What experiments based on infants’ looking behavior show, on Perner’s view, is that infants can implicitly represent the content of another’s false belief and on this basis form expectations about her likely behavior. But, unlike explicit beliefs about others’ false beliefs, these merely implicit representations cannot guide infants’ own intentional social actions: even if they could speak, they could not correctly predict the likely action of a mistaken agent when explicitly requested to do so. Nor could they intentionally help a mistaken agent (cf. Clements and Perner, 1994, 2001).

One serious challenge for Perner’s account is a famous study by Buttelmann et al. (2009), who provide evidence that 18-month-olds used their representation of the content of an agent’s false belief in order to actively help the agent find her toy (cf. Carruthers, 2013). [1] Thus, it is understandable that Priewasser, Rafetseder, Gargitter and Perner (2017) have recently challenged Buttelmann and colleagues’ (2009) findings.

In Buttelmann et al.’s (2009) original study, an agent faces a pair of opaque boxes A and B (a pink and a yellow box), one of which contains a toy while the other is empty. In the false-belief (FB) condition, the agent falsely believes that the toy is still in the box where she last placed it before it was moved in her absence; Buttelmann and colleagues found that toddlers reliably helped the agent find the toy by opening the non-empty box (which the agent was not trying to open). In the true-belief (TB) condition, in which the agent was present when the toy was moved from one box to the other, the toddlers reliably helped the agent open the empty box (which the agent was trying to open). Buttelmann and colleagues take the different helping behaviors displayed by young children in the FB and TB conditions as evidence that 18-month-olds can represent the content of the agent’s true or false belief.

Priewasser and colleagues’ challenge is two-tiered. On the one hand, they offer an alternative interpretation of Buttelmann and colleagues’ findings, based on a teleological rather than a mentalistic understanding of others’ actions. According to Perner and Roessler (2010, 2012) and Roessler and Perner (2013), teleology makes an agent’s action intelligible in terms of objective reasons, i.e. non-represented facts, not in terms of subjective reasons, which are mental representations of facts. On the other hand, Priewasser and colleagues report new findings, which they take to refute the prediction based on Buttelmann and colleagues’ mentalistic interpretation of their own findings.

1. A non-mentalistic alternative

1.1. The behavioral asymmetry between the FB and the TB conditions

If correct, Priewasser et al.’s (2017) non-mentalistic account of toddlers’ helping behavior would be more parsimonious than the mentalistic account in that it does not require toddlers to represent the content of the agent’s relevant (true or false) belief. This non-mentalistic account rests on three significant differences between the FB and the TB scenarios highlighted by Priewasser et al. (2017). In the FB scenario, but not in the TB scenario, the experimenter “sneakily” moves the toy from one box to the other in the agent’s absence, which suggests that (i) the experimenter is “playing a trick” on the agent, (ii) the agent owns the toy, and (iii) the agent will be strongly motivated to find her toy. According to the non-mentalistic account, in the FB condition, when the agent tries to open the empty box, it is clear to young children that her goal is to find her toy. As a result, the children help the agent fulfill her goal by opening the non-empty box. In the TB condition, it is less clear than in the FB condition that the agent’s goal is to find the toy. When the agent unsuccessfully tries to open the empty box, her goal might instead be to open that box for some unknown reason. As a result, the children help the agent fulfill her goal by opening the empty box. Priewasser and colleagues argue that their account “provides teleological reasons for children to show a distinct helping pattern in the two conditions that are not based on belief reasoning.” In a nutshell, their non-mentalistic claim is that they can account for the children’s distinct pattern of helping without positing that young children represent the content of the agent’s belief in each condition.

1.2. How non-mentalistic is the non-mentalistic account? 

Priewasser et al.’s claim raises two separable issues, the first of which is that in the FB scenario children are required to understand that the experimenter is “playing a trick” on the agent. The question is whether children could understand that the experimenter is playing a trick on the agent without understanding that the experimenter’s goal is to cause the agent to have a false belief. As Priewasser et al. further put it (p. 3), “when [the agent] is looking for the toy in the wrong box children have good reason to help her find the toy in [the other] box… When she is trying to open box A children recognize her error and correct her by redirecting her to her toy in box B” [my emphasis]. How could children represent the empty box as the wrong box, let alone recognize the agent’s error and correct it, unless they could represent the content of the agent’s false belief?

1.3. How teleological is the non-mentalistic account?

Secondly, the question is: to what extent could the non-mentalistic account of the toddlers’ helping behavior rest on the toddlers’ teleological understanding of the agent’s action? Teleology in the sense of Perner and Roessler (2010, 2012) and Roessler and Perner (2013) is a mode of understanding others’ intentional actions primarily based on others’ objective reasons for their actions, at the expense of their subjective reasons. The hypothesis that most children before 4.5 years of age are teleologists has been put forward to explain why they fail explicit FB tasks about object location. They fail when explicitly asked to predict the mistaken agent’s likely action for two related reasons: the question itself makes sense only if the agent’s action is intentional, and in order to answer it children must appreciate where it would be rational for the agent to look for her toy. However, young children primarily think of an agent’s intentional action in terms of objective reasons provided by “worldly” facts and find it difficult to distinguish the agent’s objective reasons from her subjective reasons (based on the contents of her mental states, in particular her beliefs). Could young children’s ability to attend to an agent’s objective reasons at the expense of her subjective reasons shed light on the children’s helping behavior? There are three related reasons for skepticism.

(i) It is unclear in what sense the three-point contrast between the FB and the TB scenario — in particular, the fact that the agent can naturally be represented as the owner of the toy in the FB scenario, but not in the TB scenario — fits the teleological understanding of others’ actions. (ii) What is distinctive of the teleological understanding of another’s action is that it is based on a representation of the agent’s objective reasons rather than of her subjective reasons. In the TB scenario, it is a puzzle what the agent’s objective reason could be for opening the box she knows to be empty. Maybe there is one, but it is unclear what it is. In the FB scenario, only the agent’s subjective reason (her false belief) for opening the empty box is manifest, not her objective reason. In fact, if young children are teleologists, they should be mystified by the agent’s action in the FB condition precisely because it lacks objective reasons. If so, then it is quite unclear how they could provide efficient help to the mistaken agent. (iii) In the FB condition, the toy’s actual location, which is known to the children, far from being an objective reason for the agent’s action, affords an objective reason for the children’s own action of helping the mistaken agent. But teleology is supposed to shed light on young children’s understanding of an agent’s objective reasons, at the expense of her subjective reasons, not on the children’s own objective reasons for their own actions. All of this, I think, casts serious doubt on the claim that the “non-mentalistic” alternative is a genuine instance of a teleological understanding of the agent’s action.

2. The refutation of the mentalistic interpretation

Priewasser and colleagues have not merely offered a tentative non-mentalistic alternative to the mentalistic interpretation of Buttelmann et al.’s (2009) findings. They have also offered new evidence that they claim refutes the prediction derived from the mentalistic interpretation.

They ran a new pair of FB and TB conditions, involving not two boxes (as in the previous study) but three boxes: A, B and C. In both the new FB and the new TB conditions, after the agent’s toy has been moved by the experimenter to box B (either in the absence or in the presence of the agent), the agent now unsuccessfully tries to open the third box C, not box A (in which she first placed her toy).

Priewasser and colleagues report that in the new FB condition, most children reliably helped the mistaken agent by opening the non-empty box B that contained the toy, and that in the new TB condition, most of them reliably helped the agent open the empty box C that she was unsuccessfully trying to open. They plausibly argue that since the early stages of the new FB scenario are similar to the early stages of the old FB condition, children in the new FB condition, just as in the old FB condition, should be expected to open the non-empty box B in order to help the mistaken agent find her toy. They also argue that in the new TB condition, just as in the old TB condition, children should be expected to help the agent open the empty box that she is unsuccessfully trying to open, namely box C. They further argue that the mentalistic interpretation should make the same prediction about children’s helping behavior in the new TB condition, but not in the new FB condition. As they put it (p. 3), “for the new-FB condition the two theories make different predictions. If children use [the agent’s] belief and knowledge to infer what she wants…, then children should behave in the new-FB condition in the same way as in the original TB condition: they should help open box C, since [the agent] knows that box C does not contain her toy anymore and she cannot be looking for it” [my emphasis]. Since the findings show that in the new FB condition most children reliably helped the mistaken agent by opening box B, not box C, they take their findings to refute the mentalistic prediction.

When Priewasser and colleagues try to justify their claim that the mentalistic prediction is committed to treating the new FB condition on a par with the old TB condition, they characteristically write that children “should help open box C, since the agent knows that box C does not contain her toy anymore.” Is this a mistake (a ‘thinko’) or a typo? Box C never contained the agent’s toy at all. This is a major difference between the old TB and the new FB condition: in the old TB condition, the agent knows that box A does not contain her toy anymore. But in the new FB condition, the agent cannot be said to know that box C does not contain her toy anymore, since it never did. Whether it is a mistake or a typo, it casts doubt, I think, on Priewasser and colleagues’ grounds for imputing to the mentalistic interpretation the prediction that children should behave in the new FB condition in the same way they behave in the old TB condition.

But I see no convincing reason why the mentalistic prediction should accept the burden of assuming that children in the new FB condition should behave as they did in the old TB condition. Nor do I see any reason why the mentalistic prediction should be prohibited from making good use of the three-point contrast between the TB and the FB condition highlighted by Priewasser and colleagues. There are at least three differences between the old TB condition and the new FB condition, the first of which is the main bone of contention between advocates and critics of the mentalistic account, namely the difference between the agent’s having a true belief and her having a false belief.

The second difference is that in the old TB condition, the agent first placed her toy in box A, not in box B, before it was moved in her presence by the experimenter to box B. The fact that the agent selected box A to place her toy in the old TB condition is consistent with her having some unknown motivation to open box A. When the agent unsuccessfully tries to open box A while knowing that her toy is in box B, young children may assume that she has some reason or other for trying to open box A, based on her prior selection of box A over box B for placing her toy, although they do not know what her reason is. By contrast, in the new FB condition, when the mistaken agent tries unsuccessfully to open box C, children cannot draw on the fact that the agent earlier placed her toy in box A (not C) in order to infer that the agent must have some unknown reason for trying to open box C (in which she never placed her toy).

Finally, as Priewasser and colleagues rightly emphasize, there is a third relevant difference between the early stages of the old TB condition and those of the new FB condition. Only the early stages of the FB conditions (not the early stages of the TB conditions) are consistent with the assumption that the agent owns the toy and is strongly motivated to find it.

In light of the second difference between the old TB and the new FB condition, the mentalistic account is likely to predict that the children will be baffled by the fact that the agent’s attempt at opening box C cannot be justified by her false belief that her toy is in box A (as the agent’s attempt at opening box A was in the old FB condition). They should also be more baffled by the agent’s action in the new FB condition than by the agent’s action in the old TB condition. In light of the fact that in the new FB condition (but not in the old TB condition) the agent holds a false belief about her toy’s location and is also naturally construed as eager to find the toy that she owns, the children are likely to reason that if their goal is to help the agent, then the most efficient means at their disposal is to provide her with her toy (about whose location she has a false belief).

In this short note, I have argued that the non-mentalistic account of the findings based on active helping (put forward by Priewasser and colleagues) does not comfortably count as an instance of teleology (in the sense of Roessler and Perner, 2013). I have further argued that their new findings do not squarely refute the mentalistic prediction. [2]


References

Baillargeon, R., Scott, R. M., & He, Z. (2010). False-belief understanding in infants. Trends in Cognitive Sciences, 14(3), 110–118.

Buttelmann, D., Carpenter, M., & Tomasello, M. (2009). Eighteen-month-old infants show false belief understanding in an active helping paradigm. Cognition, 112(2), 337–342.

Carruthers, P. (2013). Mindreading in infancy. Mind & Language, 28(2), 141–172.

Clements, W. A., & Perner, J. (1994). Implicit understanding of belief. Cognitive Development, 9, 377–397.

Clements, W. A., & Perner, J. (2001). When actions really do speak louder than words – but only implicitly: young children’s understanding of false belief in action. British Journal of Developmental Psychology, 19, 413–432.

Perner, J., & Roessler, J. (2010). Teleology and causal reasoning in children’s theory of mind. In J. Aguilar & A. Buckareff (Eds.), Causing human action: New perspectives on the causal theory of action (pp. 199–228). Cambridge, MA: Bradford Book, The MIT Press.

Priewasser, B., Rafetseder, E., Gargitter, C., & Perner, J. (2018). Helping as an early indicator of a theory of mind: Mentalism or teleology? Cognitive Development, 46, 69–78.

***

[1] In Buttelmann et al.’s (2009) study, the agent was male. However, I will refer to a female agent in accordance with the study by Priewasser et al. (2017), in which they first replicated the study by Buttelmann and colleagues before providing new data.

[2] Thanks to Gyuri Gergely and Dan Sperber for their comments.

5 Comments

  • Beate Priewasser 20 November 2017 (11:39)

    Response to “Can teleology explain why very young children help a mistaken agent”
    Dear Pierre,

    Here are some of our questions/responses:

    (1) “The question is whether infants could understand that the experimenter is playing a trick on the agent without understanding that the experimenter’s goal is to cause the agent to have a false belief.” As we have emphasised in the paper, young children do not understand this but love playing the hide-and-seek game. They even seem oblivious to having to keep the other person ignorant of their location. In Buttelmann’s procedure they may realise that E2 does not know where the object is, but they do not need to represent his false belief.

    (2) “How could infants represent the empty box as the wrong box, let alone recognize the agent’s error and correct it, unless they could represent the content of the agent’s false belief?” Once children assume that E2 comes back to get his toy–an idea they formed before E2 had left the room–E2 going to the empty box is an error because the toy is not there, not because he thinks it’s there.

    (3) “In the FB scenario, only the agent’s subjective reason (her false belief) for opening the empty box is manifest, not her objective reason. In fact, if infants are teleologists, they should be mystified by the agent’s action in the FB condition precisely because it lacks objective reasons.” In the FB condition, so we claim, children assume that E2 comes back to get his toy. They perceive as the objective goal that E2 should get to his toy. With this goal there are objective reasons to go to box B. Since E2 goes to box A, children conclude that he must have made a mistake, since there are no objective reasons to go to box A in order to get the toy. Moreover, teleologists consider the perceived goal as objectively good and thus feel compelled to contribute by helping correct E2’s error. No belief needed.

    (4) You spotted a stupid mistake. Many thanks for that. We asked the journal to include an erratum changing “does not contain the toy anymore” to “never contained the toy.” We are eternally grateful. Nevertheless, we cannot quite understand your conclusions from it: “Whether it is a mistake or a typo, it casts doubt, I think, on Priewasser and colleagues’ grounds for imputing to the mentalistic interpretation the prediction that infants should behave in the new FB condition in the same way they behave in the old TB condition. … But I see no convincing reason why the mentalistic prediction should accept the burden of assuming that infants in the new FB condition should behave as they did in the old TB condition. In fact, I do not see why the mentalistic prediction should be prohibited from making good use of the three-point contrast between the TB and the FB condition”.

    Of course the mentalist may assume that children are influenced by the “three-point contrast”. But if so, the mentalist also has to explain why children make different assumptions just because E2 is trying to open box C instead of A. Without such an explanation the whole enterprise is just cherry picking. But it is exactly this part of a complete explanation that we have not yet grasped.
    If we assume that children, as you suggest, make both the mentalist and the 3-point assumptions, things won’t work: one would have to assume that children form the pre-transfer assumption that E2 is likely to come back to continue playing with his toy. It is difficult to explain why this should happen in the newFB but not the oldFB condition, since the pre-transfer phases are pretty much the same. With the transfer, children realise that E2 thinks that the toy is still in box A. This is also the same for the old and new FB conditions. Then E2 returns, approaches box A and tries to open it. Children direct E2 to box B for two reasons: (1) they know from earlier on that E2 will be looking for his toy, and/or (2) he is trying to open the box where he believes his toy is, which indicates that he is looking for his toy. Already at this juncture we have ruined the force of Buttelmann’s demonstration, since the typical logic in this field is that understanding of X is demonstrated if the observed behaviour could not occur without X.

    The further question is whether our data speak against the use of belief attribution. In fact, belief attribution to figure out what E2 is after does not work for the newFB condition, since E2 is not trying to open box A. By the mentalist logic of the old TB condition, trying to open an empty box for which E2 has no belief about its content makes children help with that box. So on the mentalist account children should do the same in the newFB condition, unless they have reason to switch to the other approach. But what does mentalism have to say about why they would switch? That in the oldTB condition E2 looks in A “where the toy is not anymore” while in the newFB condition E2 looks in C “where the toy has never been” doesn’t give me an explanation for why they should change tack.

    Hope this makes sense and thanks again for spotting the misplaced “anymore”.
    Josef, Eva & Beate

  • Pierre Jacob 21 November 2017 (11:43)

    Goal ascription in active helping rests on prior attribution of false belief
    Dear Josef, Eva & Beate,

    Thank you for your informative and thoughtful replies. I think we disagree about four related issues. I hope you don’t think we are talking at cross-purposes. Sorry for my lengthy answer.

    (A) Our deepest disagreement is about whether in the old FB condition (but not in the old TB condition), children could ascribe to E2 the goal of getting his toy when he returns unless they had antecedently ascribed to E2 a false belief about his toy’s location. In point (1), you write of the old FB condition that children “may realize that E2 does not know where the object is, but they do not need to represent his false belief.” In point (2), you write that “E2 going to the empty box is an error because the toy is not there, not because he thinks it’s there.” In point (3), you write that “no [ascription of] belief needed.” I disagree.
    You convincingly argued in your paper that in the early stage of the old FB condition alone, children ascribe to E2 the goal of getting his toy when he returns from the fact that in E2’s absence, they see E1 sneakily move the toy from box A (where E2 had placed it before leaving) to box B. You further convincingly argued that children are likely to infer from E1’s sneaky behavior that E2 is the owner of the toy and that on his return, E2’s goal is likely to get his toy back. Thus, the crucial step from which children ascribe to E2 the goal of getting his toy on his return is their observation of E1’s sneaky behavior in E2’s absence.
    I take it that children could only recognize E1’s behavior as sneaky if they can ascribe to E1 the goal of causing E2 to have a false belief, not just to be ignorant, about his toy’s location. Children can only recognize E1’s behavior as sneaky if they take note of the fact that E2 first placed his toy in box A before leaving and that in E2’s absence, E1 moved the toy from box A to box B. If E2 had not placed his toy in box A before E1 moved it to box B, but instead if E1 had simply placed the toy in box B in E2’s absence, then E2 would indeed be ignorant of the toy’s location. E2 would not have a false belief. But then E1’s behavior would not qualify as ‘sneaky’. Nor would children have grounds for thinking that E2 is the owner of the toy and that his goal is likely to retrieve it on his return.

    (B) I also disagree with you (a minor disagreement, but still worth mentioning) about when in the old FB condition, children can ascribe to E2 the goal of getting his toy when he returns. You write in point (2) that “once children assume that E2 comes back to get his toy–an idea they formed before E2 had left the room.” (You also write in your penultimate paragraph that “children form the pre-transfer assumption that E2 is likely to come back to continue playing with his toy.”) It seems to me uncontroversial that in the old FB condition, children can only ascribe to E2 the goal of getting his toy back when he returns, not before but after E2 has left the room, when they see E1 sneakily move the toy from box A to box B.

    (C) We further disagree about whether or not the advocate of the mentalistic interpretation of the findings in the old FB and TB conditions is committed to predicting that children will behave in your new FB condition on the model of how they behave in the old TB condition. You now write in your last paragraph that “by the mentalist logic of the old TB condition, trying to open an empty box for which E2 has no belief about its content makes children help with that box. So on the mentalist account children should do the same in the new FB condition unless they have reason to switch to the other approach.” I disagree. As you acknowledge, “of course the mentalist may assume that children are influenced by the three-point contrast.” This allows the mentalist to sharply distinguish the new FB from the old TB condition on at least two grounds about which we should, I think, agree. First, in the new FB condition (as in the old FB condition), but not in the old TB condition, children ascribe to E2 the goal of getting his toy when he returns. Secondly, in the old TB condition, E2 tries to open box A, in which he first placed his toy. But in the new FB condition, E2 tries to open box C, in which he did not place his toy before leaving the room. If, as you recognize, the mentalist is entitled to make use of these two distinctions between the old TB and the new FB conditions, then you should recognize, I think, that the mentalist is not committed to predicting that children will behave in the new FB condition the way they behave in the old TB condition.

    (D) In both the new TB and the new FB condition, in which E2 tries to open box C where he never placed his toy in the first place, E2’s behavior is teleologically more opaque than E2’s behavior in either the old TB or the old FB condition. What matters, however, is the observed difference between children’s responses in the new TB and the new FB condition. You predict and explain the difference by the fact that in the new FB condition (but not in the new TB condition), children have ascribed to E2 the goal of retrieving his toy when he returns.
    If I am right, and if you grant me point (C) that the mentalist is not committed to predicting that children will behave in the new FB condition as they behave in the old TB condition, then what follows, I think, is that you are wrong to claim that the new FB condition is a test between the teleological account that you favor and the mentalistic interpretation of the Buttelmann study. In other words, according to the mentalist, there is one efficient action that children can perform that will provide help to the agent whose action is teleologically opaque in the new FB condition, but whose goal is to get his toy when he comes back: open box B, where his toy actually is. This efficient action is not open to children in the new TB condition because, although the agent’s action is teleologically opaque, they have not ascribed to the agent the goal of retrieving the toy when he comes back.

  • Hal Morris 26 November 2017 (03:09)

    A suggestion: 18 month olds “read” others’ minds but are far from reasoning about them.
    I agree with Pierre Jacob’s critique of the significance of subjects showing the agent that the toy is in box B when the agent is trying to open box C. Despite the fact that the agent is looking neither in the right place nor in box A, where, from the child’s perspective, the agent “should” think it is, this does not force us to give up “mentalism”, as it is called here.

    What does it mean to say the child thinks the agent thinks the toy is in box A? I don’t believe the “meaning” is any propositional statement such as an adult, or even a 5-year-old child, might have in mind.

    Much of the incredulity about mindreading in toddlers and infants may be due to how complex and implausible it sounds when expressed in propositional language. But the alternative, simulation theory, takes some radical rethinking, and even Michael Tomasello, who shows clear leanings, seems not to want to get much embroiled in that controversy, maybe picking his fights and focusing on his controversial key points. All the coauthors of “Buttelmann et al.” are affiliated with one department of the Max Planck Institute for Evolutionary Anthropology, which Tomasello has co-directed (not the department but the institute) since 1998, while also co-directing the Wolfgang Köhler Primate Center since 2001.

    Though neither a philosopher nor an experimenter, but one who has read widely in the many disciplinary approaches to human nature, I will try to assemble some of the points in favour of simulation theory, including some points which I doubt have appeared in the same argument. I hope the weight of the sources will somewhat make up for what I lack.

    How does an 18-month-old think? They are still essentially non-verbal despite knowing a few words. I will jump to a proposal that may seem overly radical and detailed, and later on assemble some pieces of a case for it. Suppose the toddler perceives something like a space of possibilities in a largely visual way (help open box A, or help open box B), visualizing that the toy is actually there, even if hidden. Suppose also that the child can imagine the physical perspective of another, maintain a simulation of what has and hasn’t been seen when looking through the simulated other’s eyes, and “see” where the other should, as a result, visualize the toy. The “wrongness” of the other might be perceived as a kind of dissonance, not as the proposition “the agent thinks the toy is in box A”. If the agent moves towards box A, the dissonance might be stronger, but there is still dissonance when, in the “new FB” case, the agent approaches the other box where the toy isn’t, box C, and this might trigger an urge to resolve the dissonance between the toddler’s “correct” view of things and the other’s mistaken view.

    Aside from a few mentions of “Theory of Mind”, my first exposure to mindreading and its controversies came from Alvin Goldman’s Simulating Minds. I believe he made a compelling case that mindreading is not “theory theory” or “folk psychology”. The explanation seemed intuitive to me. I could relate it to the familiar experience of having an argument end on a painful and frustrating note, and at some later time replaying the argument with attempted “improvements” on my side, sometimes met with new crushing arguments by the other. This was not voluntary. I would have liked to be able to turn it off.

    The workings of our dreams also demonstrate some kind of ability to simulate the appearance and behavior of other people, while at the same time simulating being ourselves, in some familiar or novel environment. This simultaneous multi-character simulation of an evolving situation or story line suggests that, at some seemingly unconscious level, the brain somehow generates a sort of bird’s-eye-view imagining of a situation involving multiple people. The ability of fiction writers to imagine communities of characters living out some imaginary history, and the fairly common reports of some authors that they must “wait and see” what the characters will do next, suggest, like my automated “rematches” of arguments, that this dream-like function may play a role in waking life, at least for some people.

    “The Avatars in the Machine: Dreaming as a Simulation of Social Reality” by Antti Revonsuo, Jarno Tuominen & Katja Valli (2015, https://open-mind.net/papers/the-avatars-in-the-machine-dreaming-as-a-simulation-of-social-reality) uses a massive set of studies of dream reports, including those from people in preliterate simple societies. It examines correlations between waking and dreaming life. It is also a good exercise in method, emphasizing the need for falsifiable assertions in further research.

    “Avatars” makes a strong case for a “Social Simulation Theory” of dreams in humans; i.e., they are “offline” explorations of social situations that contribute to our ability to handle them. Revonsuo et al. was a product of Thomas Metzinger’s and Jennifer Windt’s Open Mind project, in which the OP’s author, Pierre Jacob, was also a participant. The Open Mind papers deal with issues of consciousness, dreaming, intersubjectivity, neurology, etc., and Windt is the author of Dreaming: A Conceptual Framework for Philosophy of Mind and Empirical Research (MIT Press, 2015), according to Daniel Dennett “the most comprehensive book on dreaming that I have ever encountered … well written, superbly researched, imaginative, and very astutely reasoned. It has my highest recommendation”. While “Avatars in the Machine” is still pretty obscure (I can’t find it using PhilPapers), maybe due to Open Mind’s experimental approach to publishing, Revonsuo gets 10 works cited in Windt’s Dreaming and his name appears 80 times altogether in that book.

    I believe evidence is converging on the idea that the places, things, and beings in dreams represent things dealt with in waking life, that a function of dreams is to refine and integrate our conceptions of these entities, and that the same simulation facility used by dreaming also serves mindreading in some way. In the case of people we regularly meet, dreaming helps to shape our waking interpretations of these individuals, and helps bring about the aspects of communication described in Wilson & Sperber’s _Relevance_ and in Thom Scott-Phillips’ _Speaking Our Minds_.

    Michael Tomasello et al., “Understanding and sharing intentions: The origins of cultural cognition” (Behavioral and Brain Sciences, 2005) is only the most concise statement of a thesis that Tomasello has reiterated many times.

    “Human beings are the world’s experts at mind reading. As compared with other species, humans are much more skillful at discerning what others are perceiving, intending, desiring, knowing, and believing. Although the pinnacle of mind reading is understanding beliefs – as beliefs are indisputably mental and normative – the foundational skill is understanding intentions.”

    From extensive data, he concludes that young children, besides having some ability to mind-read, have a primary urge to create situations in which they and another (or others) are seeing and appreciating the same phenomenon; conditions for satisfaction include similar excited affect in the other, and alternating gaze between the thing observed and the child. Language develops in large part because it fills the need to know that you and another person are sharing psychological states, which come to include simple knowledge, and humans’ extreme form of collaboration arises from the need to share intentions.

    Besides the 17 pages of the original statement, “Understanding and Sharing Intentions” includes 31 responses, many argumentative, usually from not just one author but a team of authors, together with Tomasello’s replies, which are valuable for engaging a great number of often irreconcilable frames for understanding human minds and sociality.

    E.g. “[In contrast to Hobson’s view and critique of simulation theory] For infants to simulate the psychological states of another (e.g., to imagine what the other is feeling when he is frustrated in his actions toward a goal), … they do not need to conceptualize the self or the other at all, where conceptualize means something like ‘take an outside perspective on.’”

    Tomasello’s book The Cultural Origins of Human Cognition (Harvard, 1999) goes into detail to argue that children bootstrap their way to complex cognition through years spent talking with others and, after a while, thinking in words. I would infer that for 18-month-olds, just starting this process, reasoning about others’ beliefs is more alien than reasoning about ontologies is for the average person, and that it is through this process that “system 1” mindreading, so to speak, evolves into “system 2” mindreading.

  • Pierre Jacob 29 November 2017 (20:01)

    Is mental simulation the solution?
    I am grateful to Hal Morris for his interesting comments and for his suggestion that mental simulation might shed light on the process whereby toddlers attribute a false belief to the mistaken agent in the old FB condition of the active helping paradigm, which, if I am right, is one of the necessary conditions for their being able to attribute to her the goal of getting her toy back on her return.

    In accordance with Alvin Goldman’s (2006) overall approach, to which Hal approvingly refers, I wish, however, to emphasize that simulation of an agent’s mental state (e.g. a desire) involves entertaining a mental state that is similar to the agent’s (e.g. a desire). If so, then simulating another’s mental state is likely to fall short of attributing it to the agent. Entertaining a desire may be sufficient for simulating an agent’s desire, but one can only attribute a desire to an agent if one forms a higher-order belief about the agent’s lower-order desire. This, I think, is why Alvin Goldman himself endorsed the view that simulation is only one stage of the full process whereby a mental state is attributed to an agent, a process which also involves the further stage that he calls “projection.”

  • Hal Morris 1 December 2017 (03:35)

    Difficulties of interpreting “teleology” and “implicit/explicit” understanding
    I admire your advocating that “mindreading has a genetic basis and is part of human core cognition” (“Why reading minds is not like reading words”), and am somewhat baffled by the implicit/explicit understanding distinction, which seems to track the contrast between cases in which we *infer* a child’s understanding through their eye movements and cases in which the child *explicitly* states some understanding. These seem like two ways of inferring understanding, not two kinds of understanding.

    In Tomasello’s 2005 BBS forum, his first reply to commentators, addressing some behaviorists, winds up with “infants understand that people are not happy when their goals are unsatisfied which must be based on some kind of teleological reasoning in which the actor compares the real state of affairs as he perceives it to some desired state of affairs represented internally, a process that rocks simply do not engage in.” Interestingly, he is calling *his* interpretation teleological; he is also being rather sharp-tongued, which might affect some people’s attitude towards him. W.r.t. “teleology”, I am used to construing it as projecting agency onto inanimate objects, e.g. “Galileo did not believe the ball came to a rest because it desired to be in its natural state.” This is one person’s paraphrase of Aristotle’s Physics, or maybe the sharp-tongued Galileo threw in the “desire”, though if so, Aristotle’s “natural states” of inanimate objects also make me think of extrapolation from the desires of sentient beings.

    He next addresses Perner and Dougherty, who in his words “believe that infants have some kind of externalist way of explaining the behavior of others that does not rely on an understanding of others as goal directed agents with internal goals”, a sort of seeing causality, but not intention. This seems to be the sense in which Priewasser et al. speak of teleology. They clarify it for me in stating: “Hence, enabling her to do so will make for a better situation (she’ll be happy; a goal to be achieved) than preventing her from doing so (she’ll be nervous and grumpy).” So a “goal” stripped of belief is like some kind of attraction between objects? As I said before, I agree with you that the child indicating box B when the agent is trying to open box C fails to imply what they conclude.

    I also wonder whether you or Priewasser et al. have any thoughts on [Buttelmann, D., Over, H., Carpenter, M., & Tomasello, M. (2014). Eighteen-month-olds understand false beliefs in an unexpected-contents task. Journal of Experimental Child Psychology, 119, 120–126.], which seems like an attempt to get around some of their objections.