How relevant to the psychology of mindreading is knowledge-first epistemology?
Some epistemic mental states with propositional content (e.g. knowing, perceiving, remembering) are commonly said to be factive on the grounds that one cannot know, see, hear or remember what is not a fact. Others (e.g. believing, thinking, guessing, suspecting) are commonly said to be non-factive on the grounds that one’s beliefs, thoughts, guesses and suspicions need not map onto facts. In short, unlike belief attribution (e.g. ‘Mara believes that it is raining’), the attribution of knowledge (e.g. ‘Mara knows that it is raining’) presupposes the truth of its embedded clause (‘it is raining’). One of the linguistic criteria taken to demonstrate the factivity of knowledge is that generally (if not in every case) the transformation of a knowledge attribution (e.g. ‘Mara knows that it is raining’) into the corresponding question (e.g. ‘Does Mara know that it is raining?’) preserves the presupposition of the truth of the embedded clause (‘it is raining’). One thorny and controversial issue is whether factivity is best construed as being primarily a property of verbs standing for some psychological states or a property of the psychological states themselves.
In his influential (2000) book entitled Knowledge and its Limits, the philosopher Timothy Williamson, who endorses the view that factivity is primarily a property of the mental states themselves, has highlighted the significance of the contrast between factive and non-factive mental states for epistemology. Williamson advocates a knowledge-first approach to epistemology: he takes the notorious failure to define knowledge in terms of belief plus something else as evidence that it is a mistake to assume that belief is an ingredient of knowledge. As the philosopher Jennifer Nagel (2017) has put it, “factive mental states (like knowing) can link an agent only to truths and non-factive states (like believing) link an agent to either truths or falsehoods.” The set of conditions in which one knows a proposition is a restricted subset of the set of conditions in which one believes it. The conditions that must be met for knowing are stronger (because more restricted) than the conditions for having a corresponding true belief. If Mara knows that it is raining, then it is raining and Mara believes it. But the converse does not hold: Mara may have formed the true belief that it is raining by sheer luck. If so, then even though it is raining and Mara believes it, she does not know that it is raining.
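The entailment and its failure just described can be summarized schematically. (The notation below, with K_a for "a knows that" and B_a for "a believes that", is my own illustrative shorthand, not Williamson's or Nagel's.)

```latex
% Knowledge entails true belief:
K_a\,p \;\rightarrow\; (p \wedge B_a\,p)
% The converse fails: a belief may be true by sheer luck without being knowledge:
(p \wedge B_a\,p) \;\not\rightarrow\; K_a\,p
```

The failure of the converse is exactly Mara's lucky-guess case: the conditions on the left of the second line obtain while knowledge does not.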
Knowledge unquestionably matters to epistemology by definition or etymology. In light of the knowledge-first approach to epistemology, a number of questions arise about the psychological investigation of the mindreading (or theory of mind) capacity to attribute mental states to self and others. In fact, prior questions arise: should knowledge be construed as a genuine psychological state? To what extent does the difference between knowledge and true belief matter to scientific psychology? Should scientific psychology attend to the fact that a mental state meets the conditions that epistemologists take to be central for it to qualify as an instance of knowledge in their sense?
As the following illustrates, the issues are complex and delicate. Suppose Bob tells Jane: “Mara knows that her key is in her purse.” What is the psychological significance of Bob’s willingness to attribute to the agent knowledge of this fact, as opposed to a true belief about it? If Jane trusts Bob, then Bob’s utterance is likely to decrease Jane’s propensity to question the epistemic accuracy of Mara’s representation of the location of her key. Yet considering the contribution of Mara’s representation to the success of her goal-directed action (e.g. of opening her door), which is clearly relevant to psychology, granting Mara knowledge of the location of her key is still not sufficient to guarantee the success of her action.
In the past forty years or so, the capacity for false belief attribution has commonly been heralded by experimental psychologists as the hallmark of mindreading. In light of the knowledge-first approach to epistemology, the question arises in particular whether the orthodox approach to the psychology of mindreading is right to take it for granted that non-factive, rather than factive, mental state attribution is the hallmark of mindreading. Or so it might seem to advocates of the view that factivity is primarily a property of mental states. Should not the psychology of mindreading follow the lead of knowledge-first epistemology and acknowledge the primacy of factive mental state attribution? Does not the computation of another’s factive state of knowledge or lack thereof, i.e. ignorance, take phylogenetic and ontogenetic precedence over the computation of another’s non-factive state of true or false belief? If so, then should not the capacity for tracking others’ ignorance be regarded as the hallmark of mindreading?
In a recent (2019) intriguing paper, two philosophers, Jonathan Phillips and Aron Norby, who also take factivity to be primarily a property of mental states, propose to redress the balance in favor of “factive theory of mind” (which is the title of their paper). More precisely, they purport to replace the orthodox assumption that false belief attribution is the hallmark of mindreading by the two-tiered proposal that an individual’s capacity for tracking others’ factive epistemic mental states is sufficient for mindreading, provided that she can keep her representations of others’ factive epistemic mental states separate from her own representations of the world. As they put it, “theory of mind […] is a matter of both tracking and separation […] If the core capacities of theory of mind are tracking and separation, then the representation and attribution of factive attitudes (S’s knowing that p or S’s not knowing that p) is sufficient for theory of mind.” To erect the capacity to represent others’ false beliefs as the hallmark of mindreading is, in Phillips and Norby’s own terms, to “confuse the ability to represent a particular kind of (non-factive) content with the more general capacity to represent another agent’s understanding of the world.” (As it turns out, one major problem faced by Phillips and Norby’s proposal is that while there is no doubt that an agent’s false belief is a genuine mental state, it is far from obvious that an agent’s failure to know a fact is a mental state.)
If correct, Phillips and Norby’s two-tiered defense of factive theory of mind would have several interesting psychological consequences, of which I mention two. First, while there is some evidence (reported by Krupenye et al., 2016) that great apes can track others’ false beliefs, there is no evidence that monkeys can. On the other hand, there is substantial evidence that both monkeys and apes can appropriately attribute to either a competing dominant or a human experimenter either knowledge or ignorance of either the location of a piece of food or their own attempted action of stealing food (Melis et al., 2006; Martin & Santos, 2014; Martin & Santos, 2016; Santos et al., 2006). The same evidence also shows that they can keep their representation of others’ ignorance separate from their own relevant knowledge. In short, the evidence strongly suggests that, if Phillips and Norby are on the right track, then monkeys (whose last common ancestor with humans goes back much further in evolutionary history than the last common ancestor between humans and apes) have mindreading capacities.
Secondly, Phillips and Norby argue that much of the evidence that has widely been taken to show that human adults can automatically perform tasks of both Level-1 visual perspective-taking (e.g. Samson et al., 2010) and false belief attribution (e.g. Kovács et al., 2010) is in fact poor evidence of mindreading because what it really shows is the influence of others’ representations of the world on participants’ own responses. Such evidence accordingly fails to meet Phillips and Norby’s condition of separation.
The questions are: are Phillips and Norby on the right track? Is their picture of the contrast between what they call “factive theory of mind” and “non-factive theory of mind” correct? Are they right to hold that the joint capacity to represent an agent’s ignorance of something one knows and to keep this representation separate from one’s own knowledge is sufficient for the mindreading capacity? Does an agent’s ignorance of some fact qualify as a factive mental state in the sense of knowledge-first epistemology? In order to answer these questions, one must scrutinize two of their fundamental analogies. First, they appeal to the analogy between mindreading and building maps. Secondly, they take the distinction between factive and non-factive mindreading to be analogous to the distinction between hypothetical and counterfactual reasoning. While both analogies are interesting, they are also misleading.
I start with the claim that the distinction between factive and non-factive mindreading is analogous to the distinction between hypothetical and counterfactual reasoning. While factive mindreading is taken to require only hypothetical reasoning, non-factive mindreading is taken to require counterfactual reasoning. Phillips and Norby argue that the difference between factive and non-factive theory of mind “is not essentially about theory of mind” because “the ability that allows for non-factive theory of mind representations” is simply “the same ability [that] also allows for other completely non-social representations,” namely counterfactual reasoning. Both hypothetical and counterfactual reasoning enable one to make “predictions about what would happen when conditions are different from the way you take them to actually be. However, only counterfactual reasoning requires constructing and maintaining a representation that is inconsistent with your own understanding of the world.” Only counterfactual reasoning (not hypothetical reasoning) is taken to require the capacity to entertain “a non-factive representation in precisely the same way as [does] success on the false belief task.”
In short, what factive theory of mind and hypothetical reasoning are taken by Phillips and Norby to have in common is the capacity for constructing and maintaining a set of first-order representations of the world that differs from one’s own set of first-order representations: for example one’s altercentric map lacks the representation of some fact that is being represented in one’s own map. But there is no contradiction between the two maps (or systems of first-order representations). What non-factive theory of mind (which Phillips and Norby take to be necessary for success on false belief tasks) and counterfactual reasoning are taken to have in common is the capacity for constructing and maintaining an altercentric map that contains at least one representation whose content is inconsistent with the content of your own map.
I agree that the capacity for false belief attribution (so-called non-factive mindreading) involves the capacity to entertain a first-order representation of the world that is inconsistent with one’s own first-order representation of (or belief about) the world. But it need not involve recognition of the inconsistency. Suppose for example that Sally first placed the banana in the basket before she left and Anne moved it to the box in Sally’s absence. Now Anne is likely to believe both that the banana is in the box and that Sally falsely believes that the banana is in the basket. In order to attribute to Sally the belief that the banana is in the basket, Anne need not accept the first-order belief that the banana is in the basket, whose content would directly contradict the content of her own first-order belief that the banana is in the box. All she needs to be able to do is to entertain the thought that the banana is in the basket, but she need not believe it. Assuming that there is a single banana that can either be in the box or in the basket, Sally’s first-order belief that the relevant banana is in the basket is inconsistent with Anne’s first-order belief that it is in the box and perfectly consistent with her higher-order belief that Sally believes that the banana is in the basket. Anne’s capacity to form the higher-order belief that Sally believes that the banana is in the basket, however, does not require Anne to recognize that Sally’s belief about the location of the banana is inconsistent with her own belief that the banana is in the box, and is therefore false by her own lights.
What, I think, would be really disputable would be the further claim that the capacity for genuine counterfactual reasoning is required for false belief attribution. Phillips and Norby go on to mention well-known evidence that preschoolers fail both verbal false belief tasks and verbal counterfactual reasoning tasks in which they are asked to discriminate between a correct and an incorrect conclusion from a given counterfactual premise. As they put it, “it […] should not be so surprising that young children who cannot pass the simple verbal counterfactual reasoning tasks also cannot pass the verbal false belief task.” However, this negative evidence fails to demonstrate preschoolers’ inability to either attribute false beliefs to others or to engage in pretense, both of which rest on the capacity to entertain a first-order representation of the world that is inconsistent with the contents of some of one’s own beliefs.
The point is that being able to entertain a first-order representation that is inconsistent with one’s own belief is necessary for both false belief attribution and counterfactual reasoning, but it is not sufficient for counterfactual reasoning. Counterfactual reasoning further requires the capacity to recognize that both the antecedent and the consequent of a true counterfactual conditional are false. The complexities of counterfactual reasoning have been highlighted by Quine, who playfully pointed out that two distinct counterfactual conditionals may both be true although they involve one and the same false antecedent: “If Bizet and Verdi had been compatriots, Bizet would have been Italian” and “If Bizet and Verdi had been compatriots, Verdi would have been French.” There is (contested) experimental evidence that preverbal infants can attribute false beliefs to others, but there is no corresponding experimental evidence that preschoolers (let alone infants) can discriminate between a true and a false counterfactual conditional, both of which have a false antecedent and a false consequent. What is being probed by false belief tests (whether verbal or not) is the capacity to attribute to others beliefs that happen to be false, not the capacity to assess others’ false beliefs as false. In short, whereas one can attribute a false belief to another without assessing it as false, one could not engage in counterfactual reasoning unless one could recognize that both the antecedent and the consequent of a true counterfactual conditional are inconsistent with one’s own beliefs, and therefore false by one’s own lights.
I now turn to the analogy between map building and mindreading. What do Phillips and Norby call a map? Basically, an individual’s map is the set of all his or her first-order representations of the world. As they put it, “my own map is just my representation of the world, the way that I take the world to be. Other agents have their own maps, each of which captures what the world appears to be like from that agent’s perspective.” While every individual faces the task of building her own map of the world for guiding her own actions, some can build only one map. Individuals who can build only a single map (their own) are incapable of mindreading. All they can do to predict another’s behavior is use their own map. In particular, what they cannot do is track another’s ignorance of some fact known to themselves.
Now suppose that Anne knows that there is a banana in the opaque box in front of her because she just put it there. So the fact that there is a banana in the box is part of Anne’s own map. Suppose that this fact is not part of Sally’s map because Sally was away when Anne placed the banana in the box and Anne knows this. If Anne can build an altercentric map for Sally (i.e. a map centered on Sally, not on Anne), a very simple way she can do so is by removing (i.e. subtracting) from her own map the fact that there is a banana in the box. Suppose that she can keep the two maps separate. If so, then her own map provides her with the information relevant for retrieving the banana if she wants to. Moreover, she can also infer from the information encoded in her altercentric map that, whether or not Sally likes bananas, she is unlikely to look for one in the box.
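The subtraction picture just described can be given a toy formalization. The sketch below is my own illustration, not anything in Phillips and Norby's paper: it models a map as a set of first-order, non-mental facts, builds the altercentric map by subtraction, and keeps the two maps separate so that each can be queried independently.

```python
# Toy model of the map analogy (illustrative only): a "map" is a
# frozen set of first-order, non-mental facts.

def build_altercentric_map(own_map, facts_unwitnessed_by_other):
    """Subtraction picture: the altercentric map is one's own map
    minus the facts the other agent had no access to."""
    return frozenset(own_map) - frozenset(facts_unwitnessed_by_other)

# Anne placed the banana in the box while Sally was away.
anne_map = frozenset({"banana in box", "box on table"})
anne_map_for_sally = build_altercentric_map(anne_map, {"banana in box"})

# Separation: the two maps are distinct objects.
# Anne's own map guides her own action of retrieving the banana...
assert "banana in box" in anne_map
# ...while her altercentric map predicts that Sally, whatever her
# taste for bananas, is unlikely to look for one in the box.
assert "banana in box" not in anne_map_for_sally
```

The point of the sketch is only that tracking and separation, so construed, require nothing more than maintaining two disjoint first-order fact-sets.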
The map analogy raises at least two separable issues. First of all, it is unclear whether the notion of factivity that is central to knowledge-first epistemology can also serve the purposes of the psychology of mindreading. One of the major expected psychological outcomes of Phillips and Norby’s plea for “factive theory of mind” is to grant mindreading capacities to non-human animals (e.g. monkeys and birds) on the grounds that they have been shown to be able to appropriately attribute to others knowledge and ignorance of some fact known to the attributor. Clearly, for knowledge-first epistemologists, knowledge, unlike belief, is the paradigmatic instance of a factive mental state. But what about ignorance? There are at least three ways in which an agent may be truly said to fail to know that there is a banana in the box: (i) She may falsely believe that the box is empty while the banana is in fact in the box. (ii) She may fail to represent the fact that the banana is indeed in the box. (iii) She may truly believe but fail to know that the banana is in the box if her belief is accidentally true. Clearly an agent’s false belief is a paradigmatically non-factive mental state. While it may be psychologically relevant for one individual to track another’s failure to mentally represent some fact known to her, it is far from clear that an agent’s failure to mentally represent some relevant fact should itself be construed as a mental state at all, let alone a factive mental state (in the sense of knowledge-first epistemology).
Secondly, on the map analogy, Anne’s altercentric map for Sally is the output of the removal of some fact from her own map. Clearly, there are infinitely many non-mental facts that are not, and could not be, represented on Anne’s altercentric map for Sally, none of which Anne is aware of because they are not represented on Anne’s own map either. Let us say that Sally’s ignorance of non-mental facts that are not known by Anne is irrelevant to Anne. However, Sally’s ignorance of the non-mental fact that there is a banana in the box (a fact thus absent from Anne’s altercentric map for Sally) is relevant to Anne precisely because this non-mental fact is part of Anne’s own map and may guide her own action of retrieving the banana.
Here is the second problem for the map analogy. In the relevant situation involving Sally and the presence of a banana in the box, Anne has two separate maps available — her own map and her altercentric map for Sally — both of which constitute a set of first-order representations of non-mental facts, such that her own map contains one specific non-mental fact that her altercentric map lacks. The problem is that the map analogy does not seem able to capture the fact that Sally’s not knowing that there is a banana in the box is itself a relevant mental fact to be putatively represented by Anne, if she has mindreading capacities. This mental fact is relevant to the extent that Anne and Sally might have competing desires with respect to the banana. If Anne has mindreading capacities, then Sally’s ignorance of there being a banana in the box is a relevant mental fact that is part of Anne’s social environment. So Anne’s own map might be suited to represent her own first-order knowledge of (or true belief about) the non-mental fact that there is a banana in the box. But the mental fact that Sally does not know (i.e. is ignorant of) the non-mental fact that there is a banana in the box could not be represented by Anne’s altercentric map. In short, while not a factive mental state (in the sense of knowledge-first epistemology), Sally’s failure to know that there is a banana in the box may nonetheless be a mental fact sufficiently relevant for Anne to track — even if tracking it is not tracking a mental state, let alone a factive one.
In other words, Anne’s higher-order belief about Sally’s ignorance could not belong to some altercentric map — at least, not if maps are defined as sets of first-order representations of non-mental facts. If Anne’s altercentric map for Sally is going to help Anne make predictions about Sally’s behavior, not Anne’s own behavior, then Anne must be able to attribute to Sally her altercentric map for Sally. It is one thing to be able to entertain an altercentric map comprised of a set of first-order representations of non-mental facts that constitutes a sub-set of the set of non-mental facts represented on one’s own map. It is another to be able to form second-order representations needed for the attribution of an altercentric map to another. The latter is required for mindreading. If tracking another’s failure to know some fact indeed matters to mindreading, but failing to know a fact is not a factive mental state, then being able to track factive mental states and to keep them distinct from one’s own representations of the world is unlikely to be sufficient for mindreading.
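Continuing the toy formalization above (again, my own illustration, not Phillips and Norby's), the point can be made concrete: if a map is by definition a set of first-order, non-mental facts, then the mental fact of Sally's ignorance is an element of neither map; it only becomes representable once a second-order structure attributes maps to agents.

```python
# Illustrative sketch: why first-order maps alone cannot encode
# Anne's higher-order representation of Sally's ignorance.

anne_map = frozenset({"banana in box"})
anne_map_for_sally = frozenset()  # the fact has been subtracted

# The mental fact "Sally does not know that the banana is in the box"
# is not an element of either first-order map...
ignorance_fact = "Sally does not know that the banana is in the box"
assert ignorance_fact not in anne_map
assert ignorance_fact not in anne_map_for_sally

# ...it only becomes representable via a second-order attribution
# layer that pairs each agent with a map attributed to her.
attributions = {"Anne": anne_map, "Sally": anne_map_for_sally}
sally_is_ignorant = "banana in box" not in attributions["Sally"]
assert sally_is_ignorant
```

Entertaining the altercentric fact-set is captured by the two maps alone; attributing it to Sally requires the extra `attributions` layer, which is precisely what the map analogy, as stated, leaves out.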
The last problem is this. Whether or not factive mindreading in Phillips and Norby’s sense turns out to be a viable idea, the capacity to attribute to others factive mental states is a capacity to attribute or represent epistemic mental states. To the extent that one of the functions of mindreading is to predict an agent’s likely action, mindreading could only contribute to predicting another’s action if it also encompassed the capacity to attribute to others motivational states (e.g. desires and intentions). An agent’s desires or intentions are representations of possible (or impossible) non-actual states of affairs (with a world-to-mind, not a mind-to-world, direction of fit). No such motivational state could be a factive mental state. The question is: how should factive mindreading make room for the required capacity to attribute non-factive motivations to others?
The goal of Phillips and Norby’s paper is not merely to advocate the ontogenetic and/or phylogenetic priority of the capacity for factive mental state attribution over the capacity for non-factive mental state attribution. Their more ambitious goal is to argue instead that the joint capacity to track others’ factive mental states and to keep the representations of others’ factive mental states separate from one’s own representations of the world is sufficient for mindreading. Phillips and Norby’s picture of the contrast between factive and non-factive mindreading crucially rests on two analogies: the analogy between mindreading and building maps and the analogy between counterfactual reasoning and non-factive mindreading. I have argued that both analogies are questionable. If I am right, then their picture of the contrast between factive and non-factive mindreading is also questionable. So is their major claim that the capacities for tracking others’ factive mental states and for keeping them separate from one’s own representations of the world are jointly sufficient for mindreading. 
Kovács, A. M., Téglás, E., and Endress, A. D. (2010). The social sense: Susceptibility to others’ beliefs in human infants and adults. Science, 330:1830–34.
Krupenye, C., Kano, F., Hirata, S., Call, J., and Tomasello, M. (2016). Great apes anticipate that other individuals will act according to false beliefs. Science, 354(6308):110–114.
Martin, A. and Santos, L. R. (2014). The origins of belief representation: Monkeys fail to automatically represent others’ beliefs. Cognition, 130:300–8.
Martin, A. and Santos, L. R. (2016). What cognitive representations support primate theory of mind? Trends in Cognitive Sciences, 20(5):375–382.
Melis, A. P., Call, J., and Tomasello, M. (2006). Chimpanzees (pan troglodytes) conceal visual and auditory information from others. Journal of Comparative Psychology, 120(2):154.
Nagel, J. (2017). Factive and non-factive mental state attribution. Mind and Language, 32(5):525–544.
Phillips, J. and Norby, A. (2019). Factive theory of mind. Mind and Language, https://doi.org/10.1111/mila.12267
Samson, D., Apperly, I. A., Braithwaite, J. J., Andrews, B. J., and Scott, S. E. B. (2010). Seeing it their way: Evidence for rapid and involuntary computations of what other people see. Journal of Experimental Psychology: Human Perception and Performance, 36(5):1255–66.
Santos, L. R., Nissen, A. G., and Ferrugia, J. A. (2006). Rhesus monkeys, Macaca mulatta, know what others can and cannot hear. Animal Behaviour, 71(5):1175–1181.
Williamson, T. (2000). Knowledge and its Limits. New York: Oxford University Press.
 Thanks to Jennifer Nagel and Dan Sperber, both of whose comments have led me to significant revisions.