{"id":365,"date":"2010-09-30T01:00:01","date_gmt":"2010-09-29T23:00:01","guid":{"rendered":"http:\/\/cognitionandculture.local\/?p=365"},"modified":"2023-07-24T11:57:54","modified_gmt":"2023-07-24T09:57:54","slug":"epistemic-trust-in-scientific-practice-the-case-of-primates-studies","status":"publish","type":"post","link":"https:\/\/cognitionandculture.local\/blogs\/helen-de-cruzs-blog\/epistemic-trust-in-scientific-practice-the-case-of-primates-studies\/","title":{"rendered":"Epistemic trust in scientific practice: The case of primates studies"},"content":{"rendered":"
A few days ago, I received a favorable review of a paper of mine. The reviewer suggested some minor improvements, one of which led me to reflect on epistemic trust in scientific practice. In the paper, I cited a recent study on which Marc Hauser was the lead author. The reviewer suggested that I replace this reference with a similar study on primate cognition. Fortunately, in this case, it turns out that other studies have reached similar findings. My paper was a revision of an earlier submission that I had been asked to 'revise and resubmit'. At the time of that earlier submission, the Hauser investigation had not yet been made public.

The paper I cited was not compromised in the recent Harvard investigation, but it is nevertheless tainted, since it appeared during the period when the scientific misconduct took place. I would have changed the reference anyway, even if the reviewer had not brought it up. For some researchers, the consequences of this affair may be much more dramatic, if they relied directly on Hauser's findings in their experimental designs or conclusions. I am thinking in particular of his language research, which has led to the retraction of the 2002 paper in Cognition.

There is a deep breach of confidence, which casts doubt on all of Hauser's results and suggests that large amounts of research money have been wasted. What can be done to mitigate the consequences of this situation? One (expensive) possibility is to replicate all of Hauser's experiments to see what can be salvaged. There are, however, very few primate labs, and even fewer with cotton-top tamarins (Hauser's main study species).

The epistemologist Michael P. Lynch, in a paper entitled "Epistemic Circularity and Epistemic Disagreement" (in press), suggests that we do something akin to Rawlsian political philosophy when we decide upon methods and research practices: suppose you are working from behind a "veil of ignorance," not knowing which outcome you would prefer; which research methods would you favor?

Lynch thinks that we would converge on the following: "Were we to play the method game, it would seem in our self-interest to favor privileging those methods that, to the greatest degree possible, were repeatable, adaptable, public and widespread. Repeatable methods are those that in like cases produce like results. It would be in our interest to favor repeatable methods because such methods could be used over and over again by people with different social standings. Adaptable methods are those that can be employed on distinct kinds of problems and which produce results given a variety of kinds of inputs. It would be in our interest to favor such methods because we don't know what sort of problems we'll face."

How does current primate research measure up to such a demand? It seems to me that the methods used in this research are often not repeatable, let alone widespread, given the scarcity of primate labs. Regarding repeatability, the skill of the experimenter often turns out to be crucial: for example, the fact that Hauser's monkeys could, or so it seemed, pass the Gallup test (mirror self-recognition) was attributed to experimenter skill, the unfortunately dubbed 'Hauser effect'. Even if the Hauser case had not come to light, the problem of epistemic trust would remain in a field where so few experiments are replicated (and where there is little incentive to replicate studies anyway).

What to conclude from this?
Should we limit our epistemic trust in studies that are difficult to replicate or repeat, such as primate studies? Should primate studies be better funded and less subject to restrictive laws? I am not sure.