Modularity and Recombination in Technological Evolution

In May we read Mathieu Charbonneau’s 2016 article, Modularity and Recombination in Technological Evolution. The article describes a key property of cultural variation, recombination, and its central property, modularity, in order to avoid a circular understanding of their interaction, and to show how the study of recombination and modularity depends on an understanding of the generative processes involved in producing public displays during social transmission. Using the transition from the Oldowan to the Acheulean stone tool industries as an example, the author argues that recombination depends on the interface produced between modules and on the complex recipes for their recombination, and that such interfaces result from the generative processes involved in producing those recipes.

How do you think this paper benefits the theory of cultural evolution? Do you agree with the concept of cultural modules of recombination (CMRs)? Can you think of examples of studies that could benefit from applying this theory?

7 Comments

  • Olivier Morin 6 June 2017 (07:48)

    comment
    One reason I like this paper so much is because it addresses a major problem for cultural evolution researchers. Most of us in the field have been taught to be extremely suspicious towards crude varieties of Darwinism in biology. I mean that most cultural evolutionists are definitely not happy with the formula

    “Blind variation + selection and drift acting on genes = Everything worth knowing about evolution.”

    They smile at versions of evolutionary theory that put a premium on developmental constraints, guided variation, niche construction, and the like. Changes in gene frequency by way of selection and drift are only a part of the evolutionary story, or rather one angle from which it can be described. This puts them at odds with the view that all complex adaptive structures we see in living forms originate as Darwinian adaptations only.

    If we now take a look at culture, my impression (perhaps uncharitable) is that we’ll find that many students of cultural evolution are, in practice, more or less satisfied with

    “Blind variation + differential imitation + drift = Everything worth knowing about culture.”

    I am, of course, exaggerating; but few people would deny, I think, that a lot more ink (and pixels) has been spent theorising developmental constraints, adaptive landscapes, plasticity or guided mutations in biology. The cultural counterparts of these phenomena (if they exist at all) remain under-theorised, to the point that “Blind Variation + Selective Retention” may seem an acceptable starting point to model the evolution of technology. I read Mathieu’s paper as a precious corrective to this trend.

    I’m so enthusiastic about this paper that I might be tempted to ask too much of it. For instance, I am tempted to ask for general predictions. The “chaînes opératoires” school (started by Leroi-Gourhan and his followers), which visibly inspires this paper, was successful at establishing descriptive standards for, say, lithic technologies, but it didn’t set itself up as a theory with predictive purchase. This it might become if, for instance, it discovers that some action recombinations are impossible or highly unlikely (in the way that Chomskyan linguists claim to have discovered constraints on the structure of syntactic trees). A tall order, of course, but it would provide a strong validation of this approach.

  • Thomas Müller 8 June 2017 (11:39)

    Recombination
    I find the specific points of this paper plausible and quite convincing: that recombination processes are constrained by the mechanisms underlying the production of public displays, if they are to be successfully transmitted, and that for something to be a cultural module, it has to show specific characteristics. I also agree that recombination can be a productive source of innovation; however, not every innovation comes about by recombining old ideas, as these ideas have to be created in the first place.

    Thinking further from this, it is reasonable to assume that not every action has to be part of a cultural module: At first it would be unorganized, just an unintegrated unit that doesn’t fulfill a greater purpose yet. So my question would be: How do cultural modules get created in the first place? Do individuals just combine single actions via trial and error to see what works, and subsequently transmit these as a module? Or does cognition play a larger role, in that there needs to be a plan to constrain the space of possible combinations?

  • Barbara Pavlek 8 June 2017 (15:04)

    tools & brains
    I think this is a clear and well-written theoretical paper and, coming from an archaeology background, I liked that the points are illustrated with examples from a very simple and very old material culture. However, as material culture in these early periods evolved in parallel with the humans producing it, I would be interested to see what caused stone tool production to become more and more complex – was it variation in production techniques, which resulted in more components to recombine, or were bigger and more complex brains a prerequisite for developing specialized, refined tools?

    The “triangles of social transmission” made me think about artistic depictions of the same subject (e.g., Venus and Amor), which change across time, space, materials, and cultures. Does art, being a kind of public display but having a different function than tools, also follow cultural recipes? Could motif recombination also be studied in this kind of framework?

  • Mathieu Charbonneau 11 June 2017 (14:39)

    Modules, cognition, and cultural evolution
    Thank you very much, Olivier, Thomas, and Barbara, for your comments and interesting questions.
    I have tried to answer you in order, but there will be some overlap between the answers.
    OLIVIER: I want to develop two points that you mention. First, I think you have adequately underlined one key aspect of the argument of the paper: that there is more to cultural evolution than “blind variation + selection”, and I would, in fact, add “blind GRADUAL variation + selection”. A general assumption in much work modeling and describing the introduction of novel variation in cultural evolution is that of small copying errors, which would support a strong Neo-Darwinian version of cultural evolution. Following this model, people would miscopy (or mis-imitate) one another once in a while, and the slight errors would then add up over the long run to allow the evolution of traditions (technological or otherwise). These errors have to be small, however, as too large an error would be very apparent to the individuals involved in the process of transmission (e.g., Eerkens has developed a copying-error model where individuals make unconscious errors of about 5% when copying some quantifiable morphological feature of artifacts). Intentional retention and rejection of such changes appear to go against a Darwinian approach to cultural evolution, as adaptations (I’m specifically thinking of functional technologies here) would instead rely on the intentional decisions of the individual inventors and users, thus contradicting the blind clause in the formula. Some formal modeling has integrated the idea of modularity and recombination into the blind-variation-plus-selection paradigm as a way to deal with larger variations, yet still fundamentally blind ones. In these models, different ‘modules’ would randomly (blindly) be recombined with one another and, following some rule of fitness distribution, some such combinations would be more or less adaptive. In other words, even apparently “larger” variations could then be blind.
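    To make the two processes described above concrete, here is a minimal, purely illustrative sketch: small copying errors accumulating along a transmission chain, and blind recombination of modules filtered by a fitness rule. The module names, the 5% error rate, and the fitness function are invented for this example; this is not Eerkens’ actual model nor any specific published recombination model.

        import random

        # Toy sketch of the blind-variation models described above. All numbers,
        # module names, and the fitness rule are illustrative assumptions only.

        def copy_with_error(value, error_rate=0.05):
            # Eerkens-style unconscious copying error of roughly 5 percent.
            return value * (1 + random.gauss(0, error_rate))

        MODULES = ["strike", "rotate", "retouch", "grind"]

        def fitness(technique):
            # Hypothetical fitness rule: reward module diversity, penalize length.
            return len(set(technique)) - 0.1 * len(technique)

        def blind_recombination(current, n_trials=1000):
            # Blindly shuffle modules into new sequences; selection keeps the best.
            best = current
            for _ in range(n_trials):
                variant = tuple(random.choices(MODULES, k=random.randint(1, 4)))
                if fitness(variant) > fitness(best):
                    best = variant
            return best

        # Small copying errors add up along a chain of 20 learners...
        trait = 10.0
        for _ in range(20):
            trait = copy_with_error(trait)
        print(round(trait, 2))

        # ...and blind recombination explores larger, but still undirected, jumps.
        print(blind_recombination(("strike",)))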
    One main point of the paper is that modules do not and cannot work that way: if you put two bicycles, a motor, and a coach seat in a (big) box, and then shake the box, you do not get Henry Ford’s Quadricycle. Instead, the modules need to be organized into some functional whole – through what I have termed the production of an ‘interface’ – which depends on the cognitive, material, and functional constraints involved in their production (this also answers THOMAS’s worry: even assembling lower-level actions together needs to be done in a functional way, i.e., through an interface that makes sense of the sequence of actions). It is difficult to reconcile such fragile structuring of complex techniques with a blind variation model. In fact, I would even say that the only way to study the introduction of cultural variation is to take these constraints into account, instead of imposing some default theoretical model of cultural variation based on evolutionary biology.
    And this brings me to the second point, your ‘asking too much’, which I don’t think is the case. In a paper in press, I try to tackle part of your demands. I argue that we can make the specific structures (or techniques) discussed in the Modularity paper predictive rather than only descriptive, by systematically testing different technical variants and examining whether or not they can satisfy a specific functional goal. By such means, one can vary actions and higher-level aspects of a complex behavior, and identify the boundaries (constraints) between functional and dysfunctional techniques (what I call a ‘technospace’). In this sense, the method is the opposite of identifying the possible chaînes opératoires behind the production of a specific artifact form (or artifact class), as is currently done in actualistic studies. Instead, we vary techniques to see which forms they can produce, and consequently which forms cannot be produced, and which sorts of modifications are required to move from one functional form to the next.
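    In the simplest possible terms, such a systematic exploration could look like the sketch below: enumerate candidate techniques, map each onto the form it produces, and test that form against a functional goal, so that the boundary between functional and dysfunctional regions of a tiny “technospace” becomes visible. The actions, the production function, and the functional test are placeholders invented for illustration, not anything taken from the in-press paper.

        from itertools import product

        # Toy "technospace": enumerate techniques, then test which ones produce
        # a functional form. All definitions below are hypothetical placeholders.

        ACTIONS = ["strike", "rotate", "retouch"]

        def produced_form(technique):
            # Hypothetical generative mapping from a technique to an artifact form.
            return (technique.count("strike"), technique.count("retouch"))

        def is_functional(form):
            # Hypothetical functional goal: at least two strikes and one retouch.
            flakes, edges = form
            return flakes >= 2 and edges >= 1

        # Enumerate every technique up to length 4 and test what it produces.
        technospace = {}
        for length in range(1, 5):
            for technique in product(ACTIONS, repeat=length):
                technospace[technique] = is_functional(produced_form(technique))

        functional = [t for t, ok in technospace.items() if ok]
        print(f"{len(functional)} functional techniques out of {len(technospace)}")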
    As for general predictions, although I haven’t made any in the modularity paper (other than, implicitly, that recombination is not a blind process of variation), I do offer some in other related papers, especially in All innovations are equal, but some more than others (https://link.springer.com/article/10.1007/s13752-015-0227-x), where I argue that the search spaces (or technospaces) of non-humans and early humans might have been constrained in that blind gradual variation would not allow any cumulative step beyond what a single individual could invent, and that we instead need to wait for higher cognitive capacities for innovating (namely, being able to modify the hierarchical structures of complex behaviors) for cumulative culture to take off. In other words (and this addresses part of BARBARA’s worry), it’s only when an individual is capable of altering the higher-level (hierarchical) structure of a complex technique that larger jumps in technospace become possible and that material and functional constraints can be “leaped over” by a culture. In addition, the more complex the modules, the farther one can subsequently jump in such spaces (so cognition AND culture together would explain how technologies became more and more complex).
    In turn, this allows me to address a concern raised by THOMAS. You are absolutely right: not every innovation comes about by recombining old ideas, as these ideas have to be created in the first place. Recombination, in this sense, depends on the previous existence of available cultural variation (just as copying-errors do, see above) and so recombination cannot produce novel traditions tout court, i.e., novel cultural lineages with no ancestry. What you refer to could be called ‘innovation from scratch’: inventing novel behaviors that only subsequently become cultural and form traditions.
    So how do we make such innovations? That is an empirical question I do not know enough about to give you a satisfactory answer, and I guess part of the answer is to be found in studies of creativity. But here is a tentative answer: I would say we have a set of basic actions that we need not learn through others but learn on our own, and that we have some capacity for insight that lets us find basic solutions to some problems by combining these actions. Sometimes these good solutions can be used for different things: hitting a fruit on a tree with a stick in order to make the fruit fall down may lead one to use the same behavior on a bee’s nest or on the head of a prey animal. These are speculations, but I guess the point I am making is that we are probably capable of recombining actions together whether we have learned them on our own or through others, and that an increasing capacity to combine actions into more and more complex structures is what fuels cumulative culture first and foremost.
    Finally, BARBARA, you asked, “Could motif recombination also be studied in this kind of framework?” I would answer: YES, and I invite you to read what I have to say about it (specifically, about the modularity of monsters and their representation) in a webinar here at ICCI that I participated in last year: http://cognitionandculture.net/webinars/the-origins-of-monsters-book-club/your-very-own-monster-creation-kit
    Thank you all for taking the time to read, think about, and comment on the paper. Please feel free to continue, I’ll be happy to follow up.

  • Piers Kelly 12 June 2017 (16:38)

    creativity
    Hi all, I’ve really enjoyed reading the paper as well as all the comments here, and I don’t really have much to add. In a recent conversation with colleagues we discussed the fact that so much discussion of cultural change involves post-hoc description: this invention took place, became popular, diffused as an innovation, etc. What we don’t tend to examine with as much attention are the underlying dynamics of creativity. Clearly, combinatoriality is at the heart of this process, be it linguistic, artistic, or technological change, and I like how this paper is able to extract the unit of recombination without resorting to circular reasoning.

  • James Winters 13 June 2017 (10:10)

    Modularity as a constraint on exploring the adjacent possible
    I had two comments on what was a genuinely thought-provoking paper by Charbonneau. First, we find in linguistic structure that the underlying generative system is often guided by pressures for compression (e.g., dependency length minimisation) and uncertainty reduction (e.g., making the order of a sequence more predictable). Reaching a tradeoff between these two pressures explains why language structure takes certain configurations and not others. My question to Charbonneau is: Do you expect similar pressures to constrain recombination in the creation of CMRs?

    My second comment is about the way we recombine and use modular components, and whether this creates blind spots in innovation. By this, I mean instances where we have a conceptually simple innovation, such as combining wheels with transportation devices, which nonetheless appears to be hard to innovate (as evidenced by its puzzlingly late appearance in the historical record) and hard to generalise beyond restricted domains. For instance, the wheelbarrow is a super simple labour-saving device, yet it only started appearing in China thousands of years after the wheel was invented (and it took a further thousand years before its appearance in Europe). Not only was it functionally advantageous, in terms of minimising labour, it was also conceptually simple in its underlying recipe (i.e., reverse-engineering the concept of a wheelbarrow is relatively straightforward). Now, I realise that there are ecological and historical constraints that might explain the late appearance of the wheelbarrow, as well as other unusually late innovations, but it left me wondering: Are there specific ways in which recombination and modularity might create blind spots in exploring adjacent possible innovations? And is this because modular components black-box important steps required for certain innovations?

    FYI: Apologies if these questions have already been answered above. I meant to post this comment sooner, but have been away for the last four days.

  • Mathieu Charbonneau 20 June 2017 (14:13)

    Compression, recombination, and rotation
    Thank you, Piers and James, for your input. I only just saw your posts, so I’m sorry it took so long to answer.

    JAMES: Your first question is a very interesting one. It is an empirical question, and I haven’t done the empirical work to answer it. But here are some expectations: I would expect that modules of different kinds may have different sets of constraints. In the cases that I consider in the paper – techniques or functional behaviors – I would expect a pressure towards higher functionality for the integrated modules, such as energetic/cognitive waste reduction promoting redundant modules and eliminating heterogeneity of modules (e.g., if doing A + A is pretty much the same as A + B, then prefer doing A + A), eliminating actions that have no functional role, and I would also expect techniques to evolve in such a way that they can be enacted by different individuals through some division of labor. This is all very speculative, of course. Then, other pressures may apply to other forms of modules, such as artefactual modules (e.g., standardization of parts in free markets vs. heterogeneity of parts in protectionist markets) or visual modules (about which the Mint probably knows much more than I do).
    Regarding your second question, it just so happens that I’m reading a book on the cultural evolution of rotary devices at the moment! I do believe that modular components can create blind spots, but that this is mainly because cultural traits – especially technological ones – are very complex and span multiple levels of possible modularity, each of which may not find its boundaries at the same place as the technology itself. Consider the wheel: it seems a rather simple device, a case of an artefactual module if there ever was one. Yet although the wheel may seem a module par excellence, it in fact requires a lot of tinkering to construct a proper interface for its integration into a new technology, and I believe this is where the blind spots emerge: in generating an interface into which modules can be combined.

    Consider the underlying constraints in the design, production, and use of the wheel. The invention of the wheel is really the invention of the wheel-and-axle mechanism, and with it come several constraints on the modularity of the mechanism:
    (1) issues of friction between the wheel and the axle, which require specific lubricants/materials, repair strategies, perhaps bearings, etc.;
    (2) construction of the wheel so that it can support specific weights, which involves many parameters such as its width, its diameter, whether it is a full disk, a glued/nailed disk, or a spoked design, the specific materials, etc. (e.g., New World wheels were used as toys and so did not have to deal with these constraints);
    (3) the nature of the surface on which the wheel is used (e.g., bumpy vs. flat, soft vs. hard, with or without rails, the need to turn or just to move in a straight line, etc.); and
    (4) finally, since wheels are meant to transport things, how to manage the center of gravity of the whole apparatus, which is especially important for (a) how much force is needed to move the apparatus at a certain speed (should a horse, an ox, or a person do it?) and (b) balance issues.
    The Chinese wheelbarrow solves both of these latter issues by placing a central wheel directly under the center of gravity of the wheelbarrow, so that most of the weight is supported by the wheel itself rather than by the user (http://krisdedecker.typepad.com/.a/6a00e0099229e888330162fdd8a0b0970d-pi), whereas European/American wheelbarrows have their center of gravity towards the front of the barrow (https://upload.wikimedia.org/wikipedia/commons/thumb/1/1f/2008-07-15_Construction_wheelbarrow_at_Duke.jpg/1200px-2008-07-15_Construction_wheelbarrow_at_Duke.jpg).
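    To make the center-of-gravity point explicit, a simple moment balance helps (an idealization added here for illustration, not part of the original comment): treat the barrow as a rigid lever supported at the wheel axle and at the user's hands, with the load's center of gravity at horizontal distance d_w from the axle and d_h from the hands.

        % Idealized moment balance about the wheel axle.
        % W: total load; d_w: distance from the load's center of gravity to the axle;
        % d_h: distance from the load's center of gravity to the user's hands.
        \[
        F_{\text{user}} = W\,\frac{d_w}{d_w + d_h},
        \qquad
        F_{\text{wheel}} = W - F_{\text{user}} = W\,\frac{d_h}{d_w + d_h}.
        \]
        % With the wheel placed under the center of gravity (d_w -> 0), F_user -> 0:
        % nearly all of the load rests on the wheel, as in the Chinese design; a
        % front-mounted wheel (large d_w) shifts much of the load onto the user.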

    The point I am making here is that all these constraints can be met by playing around with modules at different levels of the technology (techniques to produce some parts, techniques of use, the available environment, modules of the artifact, etc.), but the right combination can be very complex to arrive at. It will usually involve many trade-offs between the different levels of the technology, and as modules can themselves be complex (composed of other modules that need to be interfaced), it is not always obvious how they need to be adapted so that they can be recombined. So, I would answer YES, it is because modular components black-box important steps required for certain innovations: they do not necessarily offer an easy or obvious solution to the production of a functional interface, and as with any innovation process, it may require a lot of work to get at what looks on the surface (or conceptually) like a very simple recombination.