Random drift and culture change

In April we chose to read Bentley, Hahn and Shennan’s paper Random drift and culture change (2004).

This very interesting study applies a neutral model to cultural variants in three real-world datasets. The model predicts the effect of random drift on the frequency statistics of the variants from the population size and the innovation (mutation) rate, or simply from the number of new variants per time slice. Because it requires a minimal number of free parameters, the authors propose this very simple model as a null model against which hypotheses can be tested, to determine whether the variants of a given trait in a population are neutral, which often seems to be the case.
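The copying-plus-innovation process behind the neutral model can be sketched in a few lines of Python. This is a toy Wright-Fisher-style simulation with illustrative parameter values of my own choosing, not the authors' code:

```python
import random
from collections import Counter

def neutral_model(n_agents=1000, mu=0.01, n_steps=200, seed=42):
    """Toy simulation of the neutral (random-copying) model.

    Each time step, every agent either copies the variant of a randomly
    chosen agent from the previous generation (probability 1 - mu) or
    invents a brand-new variant (probability mu). Parameter values here
    are illustrative only.
    """
    rng = random.Random(seed)
    population = list(range(n_agents))  # everyone starts with a unique variant
    next_variant = n_agents             # counter used to label new inventions
    for _ in range(n_steps):
        new_population = []
        for _ in range(n_agents):
            if rng.random() < mu:
                new_population.append(next_variant)        # innovation
                next_variant += 1
            else:
                new_population.append(rng.choice(population))  # copying
        population = new_population
    return Counter(population)  # variant -> frequency in final generation

freqs = neutral_model()
# Under neutrality the resulting variant-frequency distribution is highly
# skewed: a few common variants and a long tail of rare ones.
```

Despite having only two free parameters (population size and innovation rate), runs of this kind reproduce the skewed frequency distributions the paper compares against real data.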

Let us know what you think about this paper and the model it suggests. Do you think you could find it useful for your own research?


  • Piers Kelly 17 May 2017 (09:21)

    Where next for the neutral model?
    I was going to wait for somebody else to comment on this paper first, only because I found it difficult to understand on a first read, but upon reading it again I think I get the gist: the ‘neutral model’ is the best default interpretive schema for understanding cultural variation. Humans tend to reproduce from the pool of variants at their disposal in their immediate context, and cultural fads are rare. The datasets (patents, names, pottery motifs) are sufficiently clean and quantifiable, and thus make it possible to test this model effectively.

    My main question for this paper is this: if we can safely assume that the neutral model works and is reliable, how informative is it for areas in which the data is incomplete? E.g., what if we have a series of motifs from an archaeological site dating to 3000 BP, and another set of related motifs from the same cultural complex dating to 1000 BP, but zero artefacts recovered for the intervening period? Can the neutral model be used to infer the range of motifs in the missing time period? Perhaps this is not the best example to illustrate my point, but I’m always interested in trying to imagine potential real-world applications for these kinds of models rather than simply testing them for their own sake (which is of course important too, but one can think big at the same time). Ideas?

  • Olivier Morin 25 May 2017 (15:46)

    about Piers’ question
    One possible application that I can think of is the detection of faked, or severely flawed, datasets. Humans are notoriously bad at creating truly random patterns, and power-law distributions aren’t immediately intuitive. People who have produced big repertoires of artistic designs on their own (for instance when creating fake archaeological findings or fake art) may not respect power-law distributions. If such distributions appear as a result of cultural transmission, then individual fakers shouldn’t reproduce them. Thus, it might be possible to use such distributions, in conjunction with other cues, to detect fraud. (This would only be possible for cases where a complete set of “findings” has been faked, e.g. the Glozel tablets).
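    One rough way to operationalize this fraud-detection idea is to look at a repertoire’s rank-frequency distribution on log-log axes: drift-generated repertoires are highly skewed (a roughly straight, steep line), whereas a single “faker” producing designs too evenly yields a nearly flat line. A minimal sketch, where the least-squares slope fit and the uniform “faker” are my own illustrative assumptions, not anything from the paper:

    ```python
    import math
    import random
    from collections import Counter

    def loglog_slope(counts):
        """Least-squares slope of log(frequency) vs log(rank).

        A steep, roughly straight log-log rank-frequency line is a coarse
        signature of power-law-like data; a slope near zero suggests the
        variants were produced too evenly.
        """
        freqs = sorted(counts.values(), reverse=True)
        xs = [math.log(rank + 1) for rank in range(len(freqs))]
        ys = [math.log(f) for f in freqs]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        return cov / var

    # A hypothetical "faker" drawing 1000 designs near-uniformly from a
    # repertoire of 50: the rank-frequency line comes out almost flat,
    # unlike the skewed distributions drift produces.
    rng = random.Random(0)
    fake = Counter(rng.choice(range(50)) for _ in range(1000))
    slope = loglog_slope(fake)
    ```

    On its own this is only a cue, as Olivier says; it would need to be combined with other evidence before calling anything a fake.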