Evolution of a polyphenism


Here’s some very cool news: scientists have directly observed the evolution of a complex, polygenic, polyphenic trait by genetic assimilation and accommodation in the laboratory. This is important, because it is simultaneously yet another demonstration of the fact of evolution, and an exploration of mechanisms of evolution—showing that evolution is more sophisticated than changes in the coding sequences of individual genes spreading through a population, but is also a consequence of the accumulation of masked variation, synergistic interactions between different alleles and the environment, and perhaps most importantly, changes in gene regulation.

Unfortunately, it’s also an example of some extremely rarefied terminology that is very precisely used in genetic and developmental labs everywhere, but probably makes most people’s eyes glaze over and wonder what the fuss is all about. I’ll try to give a simple introduction to those peculiar words, and explain why the evolution of a polyphenic pigment pattern in a caterpillar is a fascinating and significant result.

Students are usually taught a grossly oversimplified version of genetics. Everyone who has gone through basic biology has heard of Mendel and his pea plants, and the simple traits that assort independently and can be traced back to a single locus by their pattern of inheritance. There is one wrinkled gene, for instance, and it makes peas wrinkled. Unless you are defining things solely on a molecular level, however, there is no such thing as a phenotypic property that is solely the product of a single gene. Traits are polygenic, meaning that multiple genes cooperate to produce a phenotype. One pet peeve I (and many other biologists) have is the media shortcut of describing an identified gene as a “gene for X”, whether X is breast cancer, schizophrenia, or hematopoiesis. Multiple genes contribute to all of those phenomena, and that’s what we mean by polygenic.

(A complementary word is pleiotropy: a single gene contributes to multiple aspects of the phenotype. Mutations in single genes typically ripple through many elements of the phenotype, and cause surprising changes in multiple features of the organism.)

Looking at it simplistically, one might think these properties would make evolution difficult: everything is coupled to everything else, and when selection tugs on one parameter of the organism, it’s also pulling on all of the other parameters as well. The polygenic and pleiotropic nature of the organism and genome reflect widespread integration, however, and that coupling is a good thing—it means changes in a gene don’t just yank it out of the regulatory structure of the genome, but instead the plastic nature of genetic interactions means other genes follow and compensate for or potentiate changes in the one.

i-8c2e1ff90f8c050ffeefb87d87b97903-precis_almana.jpg
Seasonal polyphenism in the tropical butterfly Precis almana. The wet-season form (top) has a rounded wing margin and colorful ventral pattern. The dry-season form (bottom) has a more angular wing shape and a dull brown color pattern that resembles a dead leaf.

The output of a gene, the phenotype, is dependent on the output of other genes (that polygenic property), and it’s also dependent on non-genetic components of the environment. Another totemic word is polyphenism, which refers to irreversible environment-specific alternative phenotypes. Polyphenic traits develop in distinctly different ways depending on the environmental context: the figure to the right is of two forms of a single species of butterfly, in which individuals that eclose during the tropical wet season develop more colorful wings, while those that eclose during the dry season are more drab and brown. The environmental factor that triggers the different forms is temperature or humidity, and the two animals may have very similar genotypes (but not necessarily; different genes may predispose development of particular phenotypes) that respond in different ways to those factors, producing a very different form.

This sounds tricky to evolve, and superficially seems complicated. The argument is that it requires the acquisition of sophisticated genetic control elements that sense aspects of the environment and selectively activate different sets of genes depending on the conditions. Putting it that way makes it sound unlikely and awkward. However, all phenotypes are conditionally sensitive and dependent on interactions between genes and between genes and the environment—the control elements aren’t novel introductions, they’re already there! The evolution of polyphenic traits may be more a matter of shifting conditional responses quantitatively in particular directions.

Let me introduce one more critical term: genetic assimilation. This is a concept that has been around for a long time, and has often been maligned or more often neglected by old-school evolutionists like Dobzhansky or Mayr or Simpson; whether you think it important or not is a good indicator of where you stand in the ongoing evo-devo revolution. Simply put, genetic assimilation is the fixation of a phenotype by a genetic change in the regulation of the genes involved. Remember, as mentioned for polyphenic traits above, that regulation can be initially due, in part, to environmental factors, so what this effectively suggests is that a phenotype can be environmentally induced first, and later ‘hardwired’ into the genome by changes in regulatory elements of the DNA. Genetic accommodation is a related concept that differs in that, while genetic assimilation works to stabilize a particular phenotype, making it more robust, accommodation can increase the responsiveness of the phenotype to changes in environmental conditions.

i-14c7f8aded7b1df1892fc43be4db3c1d-hornworm_morphs.jpg

That’s the background. Now let’s get into the specifics of this experiment.

That beautiful green beast to the left is the larva of the tobacco hornworm, Manduca sexta. The one on top is the form you’ll usually see, the wildtype green caterpillar—it’s always green like that. On the bottom is the black mutant of Manduca sexta, which is always heavily pigmented. This is not a polyphenism! There is no environmental component to trigger the development of the different pigment forms.

On the other hand, there is another related species, Manduca quinquemaculata, which does exhibit a polyphenism: when raised at 20°C, the larvae are all black, and when raised at 28°C, the larvae are all green. The goal of this experiment is to take a strain of M. sexta and induce it to evolve a pigment polyphenism like that of M. quinquemaculata.

If you want to select for an interesting property, though, the first thing you need is some variation to work from. Both the green and the black M. sexta are uniform populations—they’re either all black or all green, so there isn’t much visible variation! One way to reveal variation between individuals, though, is to expose them to stress: make life difficult, and subtle differences are amplified and show through. The stress chosen was to expose the animals to heat shock, a 6-hour long exposure to the harsh temperature of 42°C. The normally green M. sexta sailed through the heat shock and emerged still green, but the black M. sexta suddenly exhibited new variations: some were still black, others turned green, and others were all shades in between. The authors developed a scoring system from 0 to 4, with 0 being a caterpillar that was virtually solid black, while 4 was a completely green caterpillar.

Now the selection experiment begins. Take 300 black M. sexta, and zap them with a heat shock. Pull out the larvae that turn the greenest, getting the highest color score, and let them grow up and breed, producing another generation; this is the polyphenic line, the animals that switch from black to green. Also pull out the larvae that stay black, let them grow up and breed with each other, producing the monophenic line…the caterpillars that have only one phenotype, black. As a control the authors heat-shocked one line, the unselected line, and picked random members to breed without regard to their color.
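The logic of that breeding design is simple enough to spell out in code. Here is a minimal simulation sketch in Python; the polygenic liability model and all of its numbers are illustrative assumptions (the actual loci involved weren’t identified in the study), and only the scheme itself, truncation selection on heat-shocked color scores, follows the experiment described above.

```python
import random

N_LOCI, POP = 20, 300

def make_founder():
    # Founders all look black, but carry hidden small-effect variation.
    return [random.gauss(0.0, 0.5) for _ in range(N_LOCI)]

def color_score(genome):
    # Heat shock exposes summed allelic effects as a 0 (black) to 4 (green) score.
    liability = sum(genome) + random.gauss(0.0, 1.0)  # environmental noise
    return max(0.0, min(4.0, 2.0 + liability))

def breed(parents, n):
    # Free recombination: each locus drawn from one of two random parents.
    kids = []
    for _ in range(n):
        mom, dad = random.sample(parents, 2)
        kids.append([random.choice(pair) for pair in zip(mom, dad)])
    return kids

def next_generation(pop, greenest, keep=30):
    # Truncation selection on the heat-shocked color score.
    ranked = sorted(pop, key=color_score, reverse=greenest)
    return breed(ranked[:keep], POP)

def mean_score(pop):
    return sum(color_score(g) for g in pop) / len(pop)

founders = [make_founder() for _ in range(POP)]
poly, mono = founders, founders
for gen in range(1, 14):
    poly = next_generation(poly, greenest=True)
    mono = next_generation(mono, greenest=False)
    print(f"gen {gen:2d}  polyphenic {mean_score(poly):.2f}  monophenic {mean_score(mono):.2f}")
```

Run it and the two lines diverge within a handful of generations, purely by re-sorting variation that was present in the founders from the start.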

Here’s the result. The polyphenic line showed more and more reliable switching from black to green with each generation, while the monophenic line became more and more resistant to change with each generation. It happened fast: the monophenic line stopped producing any green caterpillars at all after the seventh generation, while the polyphenic line became more and more reliably green, achieving virtually perfect green caterpillars by the thirteenth generation. That’s evolution in action.

i-d30c4529cff8f4c9dc37733991a31100-hw_color_evo.gif
Effect of selection on temperature-mediated larval color change. Changes in the mean coloration of heat-shocked larvae in response to selection for increased (green) and decreased (black) color response to heat-shock treatments, and no selection (blue).

What exactly is going on in these animals? Larvae from each line at generation 13 were set aside and raised at different temperatures to assess their sensitivity to heat shock. If larvae were maintained at a cool temperature of 20°C and then heat-shocked, relatively few switched to green; if they were raised all the time at 35°C and then heat-shocked, they were more likely to switch. The plot below is of the reaction norms for each line, or of the responsiveness of each line to the temperature of their environment. What you see is that the polyphenic line has become more sensitive, developing a more switch-like response to temperature. Below the inflection point of 28.5°C, the polyphenic larvae stay black, while above 28.5°C, they’re much more likely to switch to green.

i-d89242d050fa60ea4323cb83dd3fd91e-hw_reaction_norm.gif
Effect of selection on temperature-mediated larval color change. The reaction norm of generation 13 lines reared at constant temperatures between 20°C and 33°C, and heat-shocked at 42°C. The curves are sigmoidal regressions on the mean data points. Error bars represent 1 SE.
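For the curve itself, here is a small Python sketch of a sigmoidal reaction norm with its inflection at 28.5°C, as described above. The slope is an invented value for illustration, not a parameter fitted to the paper’s data.

```python
import math

def p_green(rearing_temp_c, inflection=28.5, slope=1.5):
    # Chance that a heat-shocked larva switches to green, as a function
    # of the temperature it was reared at (logistic reaction norm).
    return 1.0 / (1.0 + math.exp(-slope * (rearing_temp_c - inflection)))

for t in range(20, 34):
    print(f"{t:2d} C  {p_green(t):.2f}  " + "#" * int(20 * p_green(t)))
```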

How the color change occurs is diagrammed below. Basically, one of the important regulators of pigmentation is an insect hormone called juvenile hormone (JH). When it’s high, melanization is suppressed and the larva is green; when it’s low, melanization occurs and the larva turns black. Heat shock has the effect of increasing the levels of JH. Looking at just A (the top row) of the figure below, the white, yellow, and orange curves represent the levels of JH at cold, warm, and heat-shocked temperatures. In the wildtype M. sexta, all three curves still fall within the range above a threshold, T2, at which the melanization enzymes are inactive. Any variation in the responsiveness to temperature is masked and invisible, because JH levels are so high that they swamp out any detectable differences.

The second row, B, illustrates the effect of the black mutation—it reduces JH levels overall. We still have the temperature-dependent variation, though, and now the increase in JH from heat shock is enough to push the JH levels above the T1 threshold, high enough that some animals will suppress melanization.

i-e19b24d97d64d44fad4780d0bddd737c-nijhout_summary.gif

(Left) Model for the evolution of a threshold trait at the phenotypic level. The evolutionary process required for the evolution of a threshold trait depends on the proximity of the population to the two thresholds (T1 and T2). Below T1, the phenotype is all black. Above T2, the phenotype is all green. Between T1 and T2, individuals express some intermediate phenotype. If the physiological control lies far from the phenotypic threshold (A), a mutation of larger effect or a sensitizing mutation is required to bring the population closer to the threshold (B). Once the population is closer to the threshold, the population can evolve a threshold response through genetic accommodation (C) or become canalized through genetic assimilation (D). (Right) The corresponding changes at the genetic/physiological level observed in this study. Unidirectional arrows indicate high-temperature-induced (yellow) and heat-shock-induced (orange) shifts. Bidirectional arrows indicate polyphenic shifts induced by temperature shifts.

C and D show the effects of selection in this experiment. C is the polyphenic line; what has happened there is that there has been an amplification of their sensitivity to temperature. Now when the animal is warmed up, the levels of JH increase greatly, putting them above the T2 threshold—they turn green. In D, the monophenic line, their levels of JH have been reduced further still, and their sensitivity to temperature reduced—heat shock no longer raises the JH levels above T1, so they stay black. C is an example of genetic accommodation, while D is an example of genetic assimilation.
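The two-threshold logic is compact enough to state as a little program. This is just a schematic reading of the figure, with made-up JH numbers and a linear temperature response standing in for the real physiology; running it walks each line through cool, warm, and heat-shock temperatures and reproduces the qualitative pattern of panels A through D.

```python
T1, T2 = 3.0, 6.0  # arbitrary JH units: below T1 black, above T2 green

def phenotype(jh):
    if jh < T1:
        return "black"
    if jh > T2:
        return "green"
    return "intermediate"

# (baseline JH, gain in JH per degree above 20 C): all values invented
lines = {
    "wildtype":   (8.0, 0.10),  # JH always above T2, so always green (A)
    "black":      (1.5, 0.15),  # low JH; heat shock gives mixed results (B)
    "polyphenic": (1.5, 0.40),  # amplified temperature sensitivity (C)
    "monophenic": (0.5, 0.02),  # desensitized: JH never clears T1 (D)
}

for name, (base, gain) in lines.items():
    for temp in (20, 28, 42):  # cool, warm, heat shock
        jh = base + gain * (temp - 20)
        print(f"{name:10s} {temp:2d} C  JH={jh:5.1f}  {phenotype(jh)}")
```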

What this tells us about evolution is that there can be a reservoir of ‘invisible’ variation in populations, which is typically buffered by developmental mechanisms. The buffering allows the variants to accumulate without compromising the viability of carriers. Enabling mutations or changes in the environment, however, can rapidly shift the effect of these variants out of the range that can be buffered, exposing new phenotypic effects that can then be subject to selection. This can be fast, fast, fast, since we aren’t waiting for a single new mutation (or worse, for polygenic traits, many mutations) to expand into a population, but are exploiting a large pool of diversity that is already present, mixing extant alleles by recombination to produce new phenotypes.


Suzuki Y, Nijhout HF (2006) Evolution of a polyphenism by genetic accommodation. Science 311:650-652.

Comments

  1. P J Evans says

    I’d never seen a black tomato-worm before. Neat!
    (BTW: I know it’s a tobacco hornworm, but they eat tomatoes just fine. I’ve also seen the very spectacular moth called the vine sphinx (likes grape leaves).)

  2. Steviepinhead says

    Eloquently explained, PZ.
    I also appreciate the way you are able to pull the key graphics, photos, and charts together in support of your thesis. I’m not always able to tell whether you are–somehow–uploading these from the original articles (clearly you do some of this), pulling them from various available sources (which you somehow keep catalogued in your head), or “recreating” them with your own tools, but however you do it, it sure adds luster to the presentation and helps me make sense of what is going on.
    This all takes some talent at “visual thinking” that one would not necessarily expect to find in the same bundle as your several other skill sets. Thanks!

  3. wswilso says

    I have another nominee for the most intimidating evo/paleo/bio technical term. This one is almost enough to make me chuck it all and go for ID. It is from Zimmer’s The Loom article “Irish Elk of the Jurassic”. The term is at the end of this excerpt:

    T rex and its kin–known as tyrannosaurids–were a particularly distinctive bunch. They had big heads with fearsome teeth, little arms ending in two-fingered hands, long legs, and a massive overall body. That distinctiveness made it hard for paleontologists to determine exactly where in the dinosaur family tree they fit. Thus Guanlong comes as such a delight. In a paper published today in Nature, a team of paleontologists from China and the United States describe two skeletons of Guanlong, discovered in western China. It lived about 160 million years ago, and measured up to nine feet long. It sported a number of traits that are found only in tyrannosaurids. Some are pretty easy to recognize, like the fusion of the nasal bones in the skull into a single unit. Others are a bit more esoteric, like the “centropostzygapophyseal lamina on cervicodorsal vertebrae.”

  4. Sean Foley says

    …what this effectively suggests is that a phenotype can be environmentally induced first, and later ‘hardwired’ into the genome by changes in regulatory elements of the DNA.

    So Lysenko was right!

    Speaking of quote-mining, how long do you reckon it will take some folks in the ID crowd to misinterpret the “reservoir of ‘invisible’ variation” as “genetic front-loading”?

  5. says

    No, Lysenko was wrong. The mechanism doesn’t involve the environment imposing variation on the heritable elements of the genome.

  6. says

    Paul wrote:

    “the control elements aren’t novel introductions, they’re already there!”

    Indeed.

    The genome is actively responsive to environmental changes and has the hardware and software already on board to deal with these perturbations. In other words, the intelligence is already there that allows existing highly organized structures, processes and systems to respond to changes in environment. The regulatory switches are being reset by the molecular machinery of the genome.
    The question is not “did evolution occur?” (it did); the question is “what is the mechanism that causes this to happen, and what is the origin of that mechanism?”
    While environmental stresses, changes and perturbations are random, the hardware and software already present in the genome is clearly the product of intelligent input.

    (BTW, did you see today’s New York Times?

    Low-Fat Diet Does Not Cut Health Risks, Study Finds

    http://www.nytimes.com/2006/02/08/health/08fat.html

    Published: February 8, 2006

    “The largest study ever to ask whether a low-fat diet reduces the risk of getting cancer or heart disease has found that the diet has no effect.”

    I wonder where I heard that before?

  7. Steviepinhead says

    Ha, Sean Foley beat (shrug) Charlie Wagner to the punch!

    Sorry, Charlie, you’ll have to find your hi-fat calories somewhere else beside the hornworm punchbowl…!

  8. Johnny Vector says

    One pet peeve I (and many other biologists) have is the media shortcut of describing an identified gene as a “gene for X”

    The version that gets me steamed is when they embrace the unspoken assumption that all traits are a linear combination of exactly two components, one heritable and one environmental. This results in statements like “This research shows that blah blah blah is 75% genetic.”

    Next time anyone says something like that I’m going to point them to this article and ask what percent of that moth’s coloring is genetic.
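    A toy calculation makes the point concrete. The scores below are loosely based on the hornworm results above (the heat-shocked value for the selected black line is an invented illustration): the “effect of genotype” changes with the environment, so no single percentage can describe how “genetic” the trait is.

    ```python
    color = {  # (genotype, environment) -> color score, 0 = black .. 4 = green
        ("wildtype", "20C"): 4, ("wildtype", "heat shock"): 4,
        ("black",    "20C"): 0, ("black",    "heat shock"): 3,
    }
    for env in ("20C", "heat shock"):
        diff = color[("wildtype", env)] - color[("black", env)]
        print(f"in {env}: swapping genotypes changes the score by {diff} points")
    # 4 points of 'genetic effect' in one environment, 1 point in the other.
    ```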

  9. says

    I spent a week trying to write about this study in anything approaching a coherent manner; I gave up in the end – I wrote it in far too complicated and convoluted a way for a blog post. Thank you for doing this – now I can just link to this post (and I will in about one minute).

    Temperature-dependent sex determination in the leopard gecko is another example of a polyphenism. Males are produced predominantly at warm incubation temperatures (around 32 degrees C). Females are produced predominantly at cooler incubation temperatures (e.g., 28 degrees C) as well as at very high temperatures (35-36 C – above this eggs get fried). Many other reptilian species have their own patterns and thresholds of temperature-dependent sex determination. Different temperatures trigger expression of different genes, which turn on the development of either testes or ovaries, which in turn churn out either androgens or estrogens, which in turn shape the sex-specific properties of all other structures of the body, from skin color and body-size to behavior.
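    A toy Python encoding of those gecko thresholds (the cutoffs at 31°C and 35°C are rough interpolations of the numbers above; real TSD reaction norms are continuous and much messier):

    ```python
    def predominant_sex(incubation_c):
        if incubation_c > 36:
            return "inviable (eggs get fried)"
        if incubation_c >= 35:
            return "mostly female"   # very high temperatures
        if incubation_c >= 31:
            return "mostly male"     # around 32 C
        return "mostly female"       # cooler temperatures, e.g. 28 C

    for t in (28, 32, 35.5):
        print(t, "C ->", predominant_sex(t))
    ```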

  10. says

    So, does this experiment help support the idea of Punctuated Equilibrium? I.e., during stable periods, genetic drift encourages these sorts of stable mutations in the population, while an ecological change could result in stressors that could result in apparently major genetic alterations within a few short generations?

  11. says

    from Paul:

    The genome is actively responsive to environmental changes and has the hardware and software already on board to deal with these perturbations. In other words, the intelligence is already there that allows existing highly organized structures, processes and systems to respond to changes in environment. The regulatory switches are being reset by the molecular machinery of the genome.

    WHAT? and i suppose the code to allow this is Gödelized as well?

    great experiment. great explanation, PZ.

    it’s almost as if the “reservoir[s] of ‘invisible’ variation in populations” are like symptomless populations of microbes (including viruses) in individual carriers. you can’t tell they are there by looking at them, but given the right circumstances they can be expressed. microbes are probably more “pre-cooked” but, well, some viruses are pretty plastic.

  12. Classicalclarinet says

    Hi, first comment-

    The study is really fascinating (and easy to understand by virtue of Dr. PZ) but I have some questions- first, would this be likely to happen in nature (that is, would the polyphenic line be more likely to mate with the polyphenic line) since the control doesn’t seem to go any one way- maybe natural selection would favor the green worms (less likely to overheat) and cause the same effect, although slower? Second, how is the polyphenism, or non-polyphenism, amplified through successive generations? And third, are some traits more likely to exhibit polyphenism with big changes to the environment?

  13. Torbjorn Larsson says

    So how come the slow Charlie doesn’t understand that the control elements evolved earlier to be there?

    Wait, he is a crackpot – his mistakes aren’t novel introductions, they’re already there!

  14. says

    What a wonderfully clear piece of exposition, PZ. Glorious.

    The trick to Gödel and other words with accented characters in them — I’ll look a prat if this doesn’t work — is to spell out the character entities, which all start with “&” and end with “;”. So Gödel is produced by writing G&ouml;del.
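    A quick way to check an entity like that is Python’s standard html module (the entity name for o-umlaut really is ouml):

    ```python
    import html

    print(html.unescape("G&ouml;del"))  # -> Gödel
    print(html.escape("Gödel & co."))   # escape() touches only &, <, > and quotes
    ```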

  15. says

    Andrew,

    at first you did look a prat, I’m afraid; but then I changed my character encoding from ‘Western’ to ‘Unicode’, and suddenly you looked quite clever.

  16. says

    first, would this be likely to happen in nature (that is, would the polyphenic line be more likely to mate with the polyphenic line) since the control doesn’t seem to go any one way- maybe natural selection would favor the green worms (less likely to overheat) and cause the same effect, although slower?

    All you need is conditions that select for black young larvae and green older larvae. It would be interesting to see what the environment of M. quinquemaculata is like, and find out if their coloring is adaptive.

    Second, how is the polyphenism, or non-polyphenism, amplified through successive generations?

    There are regulatory elements that respond to heat shock by increasing the activity of their genes. Via recombination and selection, they’ve sorted out the individuals that have the most responsive control elements.

    What exactly those genes are hasn’t been worked out yet.

    And third, are some traits more likely to exhibit polyphenism with big changes to the environment?

    I suppose. All traits are responsive to varying degrees to the environment, but some are more robustly buffered or canalized than others.

  17. says

    Speaking of genetics and plasticity, is anybody here familiar with work in cognitive neuroscience on using genetic algorithms to evolve (simulated) neural networks?

    The basic idea is that you use evolution to fine-tune a neural network design, but then the neural network is not fixed—it then learns by normal neural network techniques such as backpropagation.

    One version of this is to use a fully connected multilayer neural network design, i.e., with every neuron in one layer connected to every neuron in the next layer. The simulated chromosome(s) basically just directly encode the initial weighting of the network, which is the initial state of a highly plastic phenotype. There is no genetic regulatory network in between.

    In this kind of model, the evolution of initial weights amounts to putting a bunch of biases into something that then learns from experience. Under selection pressure, you generally get certain initial biases that become strong and fixed very early; these amount to a basic design of a nervous system. You also get a bunch of (varying) subtle biases encoded by weaker and more variable weightings between the simulated neurons.
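    A runnable toy version of that scheme, for the curious. Nothing here comes from any particular study: the XOR task, the 2-2-1 network, and every parameter are invented for illustration, and cheap numerical gradients stand in for backpropagation. The one essential feature is that the genome encodes only the initial weights, and fitness is scored after a bout of lifetime learning.

    ```python
    import math, random

    DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR task

    def net(w, x):
        # A 2-2-1 network: two tanh hidden units, sigmoid output; 9 weights.
        h1 = math.tanh(w[0]*x[0] + w[1]*x[1] + w[2])
        h2 = math.tanh(w[3]*x[0] + w[4]*x[1] + w[5])
        return 1.0 / (1.0 + math.exp(-(w[6]*h1 + w[7]*h2 + w[8])))

    def loss(w):
        return sum((net(w, x) - y) ** 2 for x, y in DATA)

    def learn(w, steps=20, lr=0.5, eps=1e-3):
        # 'Lifetime learning' by crude numerical gradient descent.
        w = list(w)
        for _ in range(steps):
            base = loss(w)
            grad = []
            for i in range(len(w)):
                bumped = list(w)
                bumped[i] += eps
                grad.append((loss(bumped) - base) / eps)
            w = [wi - lr * g for wi, g in zip(w, grad)]
        return w

    def fitness(genome):
        # Fitness is judged *after* learning, so easy-to-tune initial
        # weights win: the Baldwin effect in miniature.
        return -loss(learn(genome))

    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(40)]
    for gen in range(30):
        pop.sort(key=fitness, reverse=True)
        print(f"gen {gen:2d}  best post-learning loss {loss(learn(pop[0])):.3f}")
        parents = pop[:10]
        pop = [[random.choice(alleles) + random.gauss(0, 0.1)  # crossover + mutation
                for alleles in zip(*random.sample(parents, 2))]
               for _ in range(40)]
    ```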

    One way of viewing this, in fairly abstract evolutionary terms, is that it’s an oversimplification of what happens in the evolution of things with nervous systems, where we “leave out” the genetic regulatory network that controls the basic wiring of the nervous system. Heavy and not-very-variable patterns of weightings that emerge early in evolution amount to culling a basic design from a vast universe of possible basic designs. These roughly correspond to the initial wiring of a real organism with a nervous system, which is actually done by genetic regulatory networks.

    Further “learning” can happen either by “evolution” (random mutation and recombination of the subtler initial biasing patterns) or by simple learning in the phenotype stage.

    One interesting property of such a model is that there is strong evolutionary pressure to yield phenotypes that “can easily learn” what they most need to learn, without necessarily “hardwiring” those things as fixed “instincts.” The overall system can “learn to learn,” evolving things that “know just enough” to learn the rest by experience.

    A rather different way of viewing this is to think of the neural network not so much as modeling a real neural network, but as being a very rough analogue of a highly adaptive genetic regulatory network. The evolved weightings between neurons roughly model the interactions of genes, which regulate each others’ expression. Within a population, you get variations in which genes affect which other genes, and how strongly. Early in evolution, you get quick fixation toward a basic design of a genetic regulatory network, with the relatively strong and relatively fixed weighting patterns representing the usual design of the genetic regulatory scheme. Beyond that, you can have a lot of variation, with certain genes interacting in some individuals, but not in others—or only weakly in most individuals, but in ways that can be rapidly amped up under natural selection.

    And this makes me wonder if there are any relatively general learning mechanisms evolved into our genetic regulatory networks, very roughly analogous to backpropagation in neural networks. Are there respects in which it’s better to view genetic regulatory networks, or some parts/aspects of them, as learning mechanisms that start with some initial propensities and then “learn”, as the organism develops, how to “solve the problems” of integrating the various relatively fixed things that emerge?

    That may be way too vague, but I find it thought-provoking. If it’s true, it would seem to imply that there would be large-scale feedbacks in genetic regulation, which amount to something roughly similar to backpropagation in a neural network, controlling the gradual convergence toward a particular “solution” to a problem of integrating the various plastic aspects of development. If such things are there, they might be extremely important, but very easy to miss if you’re not looking for them.

  18. says

    Aren’t “discontinuities” nifty? It seems to me that similar things occur in purely chemical systems, though. I’m trying to think of a good example … some case where a small concentration change drastically affects the “end state” or something. Drat … it has been too long.

  19. says

    Aren’t “discontinuities” nifty? It seems to me that similar things occur in purely chemical systems, though. I’m trying to think of a good example … some case where a small concentration change drastically affects the “end state” or something. Drat … it has been too long.

    If I understand what you’re asking about, I think Stuart Kauffman gives some good examples in At Home in the Universe.

    One is the infection of certain chemical manufacturing processes with bad by-products that auto-catalyze; once you exceed a certain threshold concentration, you get some minor subsidiary reactions that produce more of the catalyst, and you’re screwed. You get bad batch after bad batch, because the subsidiary reactions screw up the chemical you’re trying to produce.

    This can effectively ruin a whole factory—it’s almost impossible to clean all the equipment well enough to get rid of the infection. It may be easier to build a new, clean factory.

    There’s some common industrial chemical that had this problem, with factory after factory eventually getting infected and ruined; they had to find a new process to synthesize it, which didn’t have that problem.

    The prions that cause Mad Cow Disease and the other transmissible spongiform encephalopathies are interestingly similar and different. In that case, it’s not so much a chemical discontinuity as a physical one, like crystal formation. Certain proteins fold “the wrong way,” very rarely, but serve as a template to encourage other copies of the same protein to mis-fold in the same way. So you get an “infection” of domino mis-foldings, and it’s progressive and degenerative.

    (And in the realm of fiction, there’s Kurt Vonnegut’s Ice-9… water molecules can crystallize in an extremely rare way, at room temperature, but once some of it forms and escapes the lab, it’s the end of the world: the oceans freeze solid, etc.)

    Not sure these examples count as “nifty,” though. :-)

  20. Timothy Chase says

    Some of these ideas are dealt with in fair detail in “The Plausibility of Life.” For example, in line with the Baldwin Effect, phenotypic developmental plasticity results in an organism’s somatic adaptability to the environment — which is needed simply for being able to respond to a changing environment.

    However, as a result of this plasticity, organisms are able to extend their reach into new ecological niches. Such extension will tend to stress the organism, increasing the pressure of natural selection until stabilizing mutations occur. As you mentioned on Panda’s Thumb, this would seem to suggest that not only has life evolved, but so has the evolvability of life, although in part simply in response to the need for a certain robustness, that is, the ability to adapt during development to an environment which is subject to change.

  21. Timothy Chase says

    Paul W. wrote:

    One way of viewing this, in fairly abstract evolutionary terms, is that it’s an oversimplification of what happens in the evolution of things with nervous systems, where we “leave out” the genetic regulatory network that controls the basic wiring of the nervous system. Heavy and not-very-variable patterns of weightings that emerge early in evolution amount to culling a basic design from a vast universe of possible basic designs. These roughly correspond to the initial wiring of a real organism with a nervous system, which is actually done by genetic regulatory networks.

    Well, I am not as familiar with the simulation of networks by means of genetic algorithms, but from what I have been reading (e.g., “The Plausibility of Life”), much of the construction and maintenance of organisms involves the existence of exploratory processes. For example, the microtubules in the cytoskeletons of cells (with a few exceptions — such as the flagella of the male gamete) are highly dynamic even when the cell is largely “static.” Microtubules tend to have a lifetime of only a few minutes after they have been constructed. If and when they make the right connections, they will temporarily stabilize; otherwise they will be torn down with their tubulin being returned to the cytoplasm. This activity while at rest (which is also found in cellular membranes) is something which I have come to think of as “dynamic stasis.”

    Similarly (and perhaps more in line with your interests), they have recently discovered that a retrotransposon is apparently responsible for the over-production of neurons in the mammalian brain. Those which make “the right” connections are preserved, and those which do not die off. This gives the mammalian brain a fair degree of plasticity — which likewise appears to be an instance of “evolved evolvability” (see my earlier post above).

  22. Timothy Chase says

    Paul quoted Charlie (who then quoted him):

    In other words, the intelligence is already there that allows existing highly organized structures, processes and systems to respond to changes in environment… the control elements aren’t novel introductions, they’re already there!

    According to what I have been reading (e.g., the book “The Plausibility of Life” and technical papers on the web), what we are finding are indeed protein networks, where combinatorial switches will be activated or deactivated depending upon proteins which are in a cell’s environment. Moreover, this is so much like electronic or electrical switches that it is oftentimes diagrammed in much the same way.

    However, I understand that one of the major differences between these biological circuits and electrical circuits is that the biological circuits tend to be highly redundant and of low fidelity. As a result, they have a great deal of robustness, permitting some biological circuits to be co-opted for other functions. This leads to a further degree of evolvability which itself appears to have evolved.

    Charlie, do you suppose you could ever introduce us to that towering intellect, Marshal Nelson, the discoverer of Nelson’s Law? I understand the two of you are very close…

  23. Timothy Chase says

    Sorry Paul (PZM), Paul W.

    I misunderstood who and what Charlie was (mis)quoting. Fortunately, his selection of phrases demonstrates high redundancy, and will surely act as a red flag if he ever decides to undergo asexual reproduction by means of the experimental process known as “cyber-fission.”

  24. says

    Timothy, thanks for your comments. I will read The Plausibility of Life.

    (My own interests are fairly general, by the way, including a basic fascination with evolution and curiosity about the computer science of genetic regulatory networks; I’m at least as interested in the basic issues of evolving evolvability into genetic regulatory networks as I am in the cognitive science stuff per se.)

    One reason I thought that biologists might be interested in evolution of neural networks is that it’s one way in which you can easily simulate evolution of things with polypheny and genetic assimilation. Even if neither the evolutionary algorithms nor the neural networks are terribly biologically realistic—and they’re not—you at least have the right qualitative properties to see things like the Baldwin effect pop right out.

    Which makes me curious. Do people in evolutionary biology have other simulation models with those properties? (E.g., likely not evolving “nervous systems,” but evolving some other kind of plastic developmental phenotype.) I don’t know much about what kinds of modeling actual evolutionary biologists do or don’t do.

    However, I understand that one of the major differences between these biological circuits and electrical circuits is that the biological circuits tend to be highly redundant and of low fidelity. As a result, they have a great deal of robustness, permitting some biological circuits to be co-opted for other functions. This leads to a further degree of evolvability which itself appears to have evolved.

    Right; it’s this kind of consideration that made me think of viewing evolutionary algorithms generating neural networks as pretty similar to evolution generating genetic regulatory networks. Both kinds of networks are low-fi, highly redundant, and robust in a certain sense. I was wondering if you could get anything interesting out of that kind of simulation model by replacing the (simple, idealized) “neural networks” with comparably simple, idealized genetic networks.

    In “neural networks” work, there are some standard neural network models that everybody knows are horribly oversimplified, but which are still considered significant and scientifically interesting idealizations. (They also have practical applications, independent of their biological plausibility.) One advantage of using such off-the-shelf idealizations in evolutionary simulations is that it’s clear people didn’t just make up an idealization about neural networks for the purpose of getting a rigged result out of the evolutionary algorithm. There’s really something interesting going on, because even if it isn’t very realistic, it isn’t just obviously ad hoc, either. (Neither the basic evolutionary algorithms nor the basic neural network models was designed with an eye toward demonstrating the Baldwin effect, but you can plug them together, and they do that “naturally.”)

    I don’t know if there are similarly simple, abstract models of gene action that people use as reference points in that way, though. If so, I’d be interested in hearing about them from anybody familiar with the biology literature.

    Similarly (and perhaps more in line with your interests), they have recently discovered that a retrotransposon is apparently responsible for the over-production of neurons in the mammalian brain. Those which make “the right” connections are preserved, and those which do not die off. This gives the mammalian brain a fair degree of plasticity — which likewise appears to be an instance of “evolved evolvability” (see my earlier post above).

    I’m very vaguely familiar with the over-growth and culling of neurons and synapses, but I don’t have any deep understanding of how it really works. (Less than used to, I think; must have culled the wrong ones again.) I’m also unclear on the significance of this being controlled by a retrotransposon; could you elaborate?

  25. Torbjorn Larsson says

    Paul,
    It seems our interests here overlap, but you are far more studied on this. So I have some naive questions.

    You give an example: “The basic idea is that you use evolution to fine-tune a neural network design, but then the neural network is not fixed—it then learns by normal neural network techniques such as backpropagation.

    One version of this is to use a fully connected multilayer neural network design, i.e., with every neuron in one layer connected to every neuron in the next layer.”

    I seem to remember that those fully connected multilayers had problems with learning (learning convergence and/or stability?), so I thought they were not as successful as partially connected ones? Three layers with many-few-many connections is vaguely on my mind as practical; am I totally off?

    Which leads me to my next question: Your proposal doesn’t seem to include the partial connectivity that brain structures, or in your case, gene structures, have. I.e., fine that some connections are made weak, but in reality they are zero from the start. And as Timothy describes, those disconnects may change, at least for nerves.

    Is it a possibility that weak vs zero makes a difference? My naive guess is: yes. What do you think?

  26. Timothy Chase says

    Paul W. wrote:

    I’m very vaguely familiar with the over-growth and culling of neurons and synapses, but I don’t have any deep understanding of how it really works. (Less than used to, I think; must have culled the wrong ones again.) I’m also unclear on the significance of this being controlled by a retrotransposon; could you elaborate?

    Here is one of the pop-articles which touches on this:

    Helpful junk
    Jun 16th 2005
    From The Economist print edition
    Brain development may be influenced by genetic parasites

    I can try looking up some of the other articles. I should at least be able to find one or two technical articles (or at least references) over the weekend. But it is not simply the over-production of brain cells which the retrotransposon is presumably responsible for (the article doesn’t really go into it in detail, but I would presume that either retrotransposition or — since this process involves an L1 — amplification are involved), but quite possibly also the variety of cells found in the brain, as there are more cell types in the mammalian brain than in any other organ. But the latter of these seemed more tentative.

    But why did I emphasize the fact that a retrotransposon is involved? Basically I see retroelements as having played a very important part in the evolution of organisms. For example, they seem to have played a very important role in the development of placenta in mammals. Several ERVs (endogenous retroviruses) play complementary roles in the creation of a barrier to the mother’s immune system, and at least in humans are involved in embryonic tissue development in a number of different organs, including the kidneys, testes, lungs, and nervous system. Likewise, retroelements appear to have had an even greater role in primate evolution.

    Similarly, retrotransposons are responsible for most gene duplication and thus make possible the subsequent subfunctionalization and neofunctionalization. Moreover, something like retrotransposons (or endogenous retroviruses, which are essentially nothing more than retrotransposons which include an ENV gene) were responsible for making the leap from the RNA-world to the DNA-world, and likewise, part of telomerase (an enzyme responsible for the lengthening of the telomeres of the chromosomes, which is required for sexual reproduction due to the linear structure of the chromosomes) is functionally homologous to reverse transcriptase. In the case of Drosophila, if I remember correctly, an actual reverse transcriptase is employed in place of telomerase.

    So I guess the long and the short of it (LINEs and SINEs?) is that retroviruses (or their relics) seem to have had a great deal of influence upon the evolution of life. One clear sign of their importance lies in the fact that retroelements together compose approximately 49 percent of DNA in the human genome, and these retroelements are almost entirely relics from past retroviral infections. (However, there exists a plant where this number is closer to 80 percent.) Incidentally, transposons also seem to be viral in origin, although this seems more tentative. If I remember correctly, their likely origin is in single-stranded DNA viruses, but I would have to check.

    One of my obsessions…

  27. says

    It seems our interests here overlap, but you are far more studied on this.

    I think you’re overestimating my expertise and my memory. My knowledge of this stuff comes mostly from conversations with a colleague, and reading a few papers, years ago.

    You’re likely right about randomly-connected vs. fully connected networks, etc.

    I shouldn’t have been so specific—I was just trying to get across that it was fairly simple, off-the-shelf neural network technology those guys were already using, not a special kind invented for demonstrating polyphenism, the Baldwin Effect, etc.

    Now that I think of it, the researchers in question likely used sparse networks, or didn’t use plain backpropagation but some faster, cheaper training algorithm. The major concern there was simply the speed of the simulations, where training the network is the “phenotypic development” you have to do to each candidate genome before applying the fitness function and culling the population.

    At the time, I regarded the neural network part as mostly a black box; the discussions I had with the colleague were mostly about the genetic algorithms. They were using the simple everybody-sleeps-with-everybody, single-fitness-function kind of GA, which was fast. I was interested in the possibility of something similar in a richer, more realistic “environment,” where you have varying fitness functions in different niches, so that things would likely evolve via different pathways as they wandered through different niches, rather than always competing head-on. This should let you solve harder problems, while being much less efficient for the easy-enough problems.

    (A conventional GA is very “greedy,” weeding out anything that doesn’t show a lot of promise very quickly. It overcommits to one or a few basic designs early on, once it’s good enough, and a “new design” generally gets clobbered because it isn’t given any time to “get up to speed” before competing head-on with old, more-refined designs. Real evolution isn’t like that, of course, except within local niches that are interconnected. Most novelties do die off right away, but the ones that survive often do so because they move into a somewhat different niche.)

    So, for example, if you were trying to evolve a fast and agile little robot, you could have a two-dimensional environment where things could evolve in the “fast” direction first, and then in the “agile” direction, while other things were evolving in the “agile” direction first. They might only compete directly when they both tried to spread into the very-fast-and-very-agile corner.

    I never actually did any research on that. Partly because I never got to the point of being clear on exactly what scientific points I wanted to prove, but mostly because I was busy with a bunch of completely unrelated research. (It’s much easier to keep doing what you’re already an expert on, like a greedy GA.) I did find out that some people in the artificial life community were contrasting the greediness of conventional GA’s with the more general search you get from evolving through a richer environment.

  28. says

    But why did I emphasize the fact that a retrotransposon is involved? Basically I see retroelements as having played a very important part in the evolution of organisms […].

    One question this raises for me is whether retroviruses and retrotransposons are important for a simple, basic reason, like providing a basic mutation mechanism of a convenient sort, or if there’s something much weirder and deeper going on.

    For example, thinking about PZ’s articles about parasites lately, I got to wondering about coevolution of viruses and their hosts in genetic algorithm terms.

    One of the inefficiencies of conventional (sexual) genetic algorithms is that once things speciate, any work done in developing a feature in one species has to be re-done to evolve the same feature in a different species. A neat hack discovered by one species has to be laboriously re-evolved by another, if it’s lucky.

    Suppose that a major mechanism of evolution was viruses evolving to be more or less symbiotic with their hosts—they may modify the host in mostly self-serving, exploitative ways, and be bad on average, but they’re also more than randomly likely to evolve to confer a mitigating benefit.

    So if evolution of the hosts is significantly by incorporation of viral DNA that modifies its genome, and if those viruses then jump a species barrier, and the mitigating benefit transfers to them, a lot of work could be saved.

    In effect, viruses would act like plug-ins for software packages, like a video codec for a browser. It might be developed for one browser, and then re-used in another browser, even if the code bases for the two browsers have diverged substantially in many respects. (Rather like the swapping of plasmids between quite different types of microbes.)

    This could potentially save a tremendous amount of redundant evolutionary “software development effort,” with viruses acting as public domain software that can be incorporated into a variety of more proprietary software packages. If it worked well, it might tremendously accelerate evolution.

    If this were true in its simple and obvious form, I’d expect that we’d have a whole lot of symbiotic transmissible viruses around, not just retroviral relics stuck in our DNA and restricted to our descendant lineages. I don’t know much about that, but from my biologically ignorant position at the moment, it doesn’t seem right—my impression is that there are not a lot of transmissible viruses that most people have, and must have to develop properly. (E.g., if your family doesn’t transmit the virus, you don’t develop some common feature properly.)

    It’s also my impression that endogenous retroviruses tend to be either eliminated or become stuck in our DNA. They don’t generally jump back out into a transmissible virus. (Although it’s my impression that that can happen, with viruses occasionally picking up bits of DNA from their hosts, or several viruses picking up each other’s DNA within a host.)

    So a potential problem with this theory is that for the code-sharing system to work optimally, you’d want to solve some big commons problems. For example, a given species should not generally sequester useful viral DNA in its own genome—it should be freely shared so that other lineages can use it. But it seems that the obvious selection pressures would oppose that—if the virus is good for you, you want to hang onto it in your sequestered DNA rather than relying on the vagaries of viral transmission. And you don’t much care about other lineages, which are likely competitors.

    (But maybe not. If they’re your prey, what’s good for them is good for you, unless it’s good for them because it keeps them from being your lunch. For example a toxoplasmosis-like infection might be good for rats and cats in general, even though it gets rats eaten by cats sometimes; it could pay for itself in some way for rats, reducing the selection pressure for rats to be immune to it. But in each case, the virus would need to be transmitted selectively—e.g. to likely prey but not likely competitors.)

    Another problem for this scheme working optimally would be that you’d probably want viruses to be big and able to code for entire software modules with a number of interacting parts. (E.g., in an extreme case, a how-to-build-an-eye module, as opposed to just a particular tweak on relatively standard eye development.)

    So far as I know, viruses are generally quite small; they may tweak several things in a genetic regulatory network, encoding several changes that need to be made simultaneously for any of them to make sense, but they probably don’t implant very big software modules into the host. E.g., nothing of the complexity of a video codec, or an eyeball from scratch. That doesn’t mean that you couldn’t get some really interesting transmissible modules, which would amount to an elegant combination of subroutine calls that built something very cool by efficiently reusing standard code. And maybe that’s enough for it to be very important.

    A subtler and weaker version of this would be changes to an organism that just shift certain developmental or cognitive biases, “directing” its evolutionary path such that normal evolution does the real problem-solving, given this “strong hint” as to which direction to go in searching for a solution. That kind of “good hint” might direct evolution, with genetic assimilation doing the grunt work.

    Unfortunately, that’s suboptimal in a couple of major ways. One is that if a good hint directs one species to develop in a certain general way, that may well work for another species, too, and you’ll get directed convergent phenotypic evolution, but their code bases will diverge, because they’ll fill in the genetic blanks in different minor ways that are incompatible. This will limit the ability to “reuse code” across lineages in the future.

    The other way this directed convergence is suboptimal is that it decouples the fitness function of the virus from the fitness function of the host to some extent. A virus might direct an organism in a good direction, but if it takes too long for the organism to get very far in that direction and proliferate, it may not do the virus as much good. And if the genetic assimilation eliminates the host’s dependency on the virus, it increases the selection pressure on the host to eliminate the virus, which is no longer useful and may be harmful.

    Many of these problems could be solved for a genetic algorithm, by fiat—just rig the algorithm in a way that solves the commons problems that would actually occur in nature. I could freely add big viruses or plasmids to a basically sexual GA, and rig the fitness function to reward producers of code that’s reused in other lineages. Maybe that would yield a useful algorithm, but at this point I’m sure I don’t understand this stuff well enough—deep facts about what makes evolution work well or badly—and I’m really more interested in the scientific questions than in engineering solutions.

  29. Timothy Chase says

    Note: this is picking up where Paul W. was responding to Torbjorn Larsson.

    I know very little regarding genetic algorithms — the broad outlines, sure — but, while I am a programmer, this certainly isn’t an area that I have gotten into or even studied that closely. However, I have heard of circuits being designed by means of genetic programming which were highly efficient, except that no one seemed to understand how exactly they worked once they had evolved as solutions to problems. Similarly, when the program is required to achieve efficient solutions to varying problems (a given set of problems, but encountered repeatedly yet in different order), modularity (which reduces pleiotropy) naturally results. Something similar, it was hypothesized, may have resulted in the modularity found in living organisms.

    Paul W. wrote:

    At the time, I regarded the neural network part as mostly a black box; the discussions I had with the colleague were mostly about the genetic algorithms. They were using the simple everybody-sleeps-with-everybody, single-fitness-function kind of GA, which was fast. I was interested in the possibility of something similar in a richer, more realistic “environment,” where you have varying fitness functions in different niches, so that things would likely evolve via different pathways as they wandered through different niches, rather than always competing head-on. This should let you solve harder problems, while being much less efficient for the easy-enough problems.

    Sounds like the problem you are interested in may be considerably more complex than that which is normally studied. From what I can tell, you are interested in the evolution of ecological systems and of species within evolving ecological systems. Another thing you might want to consider, perhaps, (and please keep in mind that this is just a stray thought) is the possibility of self-culling populations within systems where there exist limited resources.

    I remember, in reading “Fifty Years of Genetic Load,” Bruce Wallace suggesting that such self-culling could be very important in the preservation of populations — by culling the weakest throughout the developmental process — thus preventing the catastrophic decline of the population (which might mean extinction) if all were to be culled at the same time once nearly all of the resources have been used up. Then again, there are plenty of other things to model as well, I am sure.

    Paul W. wrote:

    (A conventional GA is very “greedy,” weeding out anything that doesn’t show a lot of promise very quickly. It overcommits to one or a few basic designs early on, once it’s good enough, and a “new design” generally gets clobbered because it isn’t given any time to “get up to speed” before competing head-on with old, more-refined designs. Real evolution isn’t like that, of course, except within local niches that are interconnected. Most novelties do die off right away, but the ones that survive often do so because they move into a somewhat different niche.)

    This reminds me of recent results they have been getting through the statistical analysis of populations. It appears that, in line with the nearly neutral theory of molecular evolution by Ohta (she was a student of Motoo Kimura), slightly detrimental mutations will tend to accumulate in smaller populations; thus, for example, one should expect to find larger genomes in the smaller populations of freshwater fish than in the genomes of larger populations of saltwater fish. Similarly, introns are largely absent from prokaryotes, but play a very large role in gene regulation in multicellular eukaryotes. Likewise, combinatorial switches are much less common and usually a great deal simpler in prokaryotes than they are in eukaryotes.

    All of this suggests that smaller populations accumulate more slightly deleterious mutations in their genomes, but this in essence gives the genome more to play with in the long run, resulting in greater novelty and in developments which themselves require higher levels of complexity. This is something which some have expected since the mid-1990s, but research is only now beginning to bear it out, although I understand it is still somewhat controversial.
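    (Ohta’s point can be made quantitative with Kimura’s standard diffusion approximation for the probability that a new mutation fixes; the numbers below are purely illustrative.)

    ```python
    import math

    def fixation_prob(s, N):
        """Kimura's diffusion approximation for the probability that a new
        mutation with selection coefficient s fixes in a diploid population
        of effective size N (initial frequency 1/(2N))."""
        if s == 0:
            return 1.0 / (2 * N)
        return (1.0 - math.exp(-2 * s)) / (1.0 - math.exp(-4 * N * s))

    # A slightly deleterious mutation (s = -0.0001) behaves almost neutrally
    # in a small population but is efficiently purged in a large one:
    for N in (100, 1_000, 100_000):
        print(f"N={N:>7}: P(fix)={fixation_prob(-1e-4, N):.3e}, "
              f"neutral baseline={1.0 / (2 * N):.3e}")
    ```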

  30. Timothy Chase says

    I wrote:

    But why did I emphasize the fact that a retrotransposon is involved? Basically I see retroelements as having played a very important part in the evolution of organisms […].

    Paul W. responded:

    One question this raises for me is whether retroviruses and retrotransposons are important for a simple, basic reason, like providing a basic mutation mechanism of a convenient sort, or if there’s something much weirder and deeper going on.

    Providing a basic mechanism for mutation, particularly in terms of gene duplication and, to a lesser extent, rearrangement, would seem to be the most conventional role possible for them. And this does not necessarily mean that the effect isn’t important, either, especially inasmuch as viruses are able to mutate a million times faster than their eukaryotic hosts. But yes, I myself have been wondering about more exotic possibilities as well.

    For example, thinking about PZ’s articles about parasites lately, I got to wondering about coevolution of viruses and their hosts in genetic algorithm terms.

    One of the inefficiencies of conventional (sexual) genetic algorithms is that once things speciate, any work done in developing a feature in one species has to be re-done to evolve the same feature in a different species. A neat hack discovered by one species has to be laboriously re-evolved by another, if it’s lucky.

    Well, this would certainly seem to be important in the case of bacteria. Phages will often carry genetic material from one bacterium to another. Moreover, there is a great deal of lateral gene transfer taking place at the bacterial level, such as in the acquisition of resistance to antibiotics. Interestingly enough, bacteria seem to have formed efficient, small-world networks which facilitate the transfer of such innovations throughout the entire network. So, for example, if a gene for resistance to a given antibiotic makes it to one of the hubs (perhaps a species of bacteria living in the soil), this gene can quickly propagate. Bad news for us. But one related piece of news is that bacteria also create their own antibiotics, and by testing our antibiotics against bacterial populations found in the soil, we may be able to determine which antibiotics they will have more difficulty acquiring resistance to.
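    (The small-world point is easy to illustrate: a handful of long-range links collapses the number of hops a resistance gene needs to pervade the network. A toy sketch using networkx; the population count and rewiring probability are made up, not estimates for real soil communities.)

    ```python
    import networkx as nx

    # 200 bacterial populations on a ring, each linked to its 4 nearest
    # neighbours; rewiring 5% of links adds the long-range "shortcuts"
    # characteristic of a small-world network.
    ring  = nx.watts_strogatz_graph(n=200, k=4, p=0.0, seed=1)
    small = nx.connected_watts_strogatz_graph(n=200, k=4, p=0.05, seed=1)

    # A gene hopping one link per "generation" needs roughly the average
    # shortest path length to reach a typical member of the network:
    print(nx.average_shortest_path_length(ring))   # ~25 hops on the plain ring
    print(nx.average_shortest_path_length(small))  # far fewer with shortcuts
    ```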

    Suppose that a major mechanism of evolution was viruses evolving to be more or less symbiotic with their hosts—they may modify the host in mostly self-serving, exploitative ways, and be bad on average, but they’re also more than randomly likely to evolve to confer a mitigating benefit.

    Agreed. In fact, even without endosymbiosis, HIV-2 confers a benefit of sorts: in addition to being less virulent than HIV-1, infection by HIV-2 confers some resistance to HIV-1. Another possibility is that HERVs, while adapting to the genome, may also take over certain regulatory functions which are thus modified in a way that benefits the host. Or, as in the case of the placenta, they may perform certain roles that the host itself is ill-equipped for, but which viruses are especially well-suited for: creating the barrier to the mother’s immune system.

    So if evolution of the hosts is significantly by incorporation of viral DNA that modifies its genome, and if those viruses then jump a species barrier, and the mitigating benefit transfers to them, a lot of work could be saved.

    In effect, viruses would act like plug-ins for software packages, like a video codec for a browser. It might be developed for one browser, and then re-used in another browser, even if the code bases for the two browsers have diverged substantially in many respects. (Rather like the swapping of plasmids between quite different types of microbes.)

    In principle, this is certainly possible. But one of the questions this raises for me is how the virus would “know” that it had an entire set of genes which are needed for a given protein network. It would suggest a level of complexity on the part of viruses which they simply don’t have the genomic size for. Viruses are typically quite small: influenza, for example, has only eight RNA segments, and HIV only nine genes, and the number of base pairs is typically quite small relative to the host.

    This could potentially save a tremendous amount of redundant evolutionary “software development effort,” with viruses acting as public domain software that can be incorporated into a variety of more proprietary software packages. If it worked well, it might tremendously accelerate evolution.

    Actually, this seems to have occurred, largely in terms of the generation of the initial proteins and protein networks which were needed for various functions (e.g., digestion, the ability to sense light, chemical gradients, pressure, temperature), largely within bacterial communities. The genetic basis for these networks, it would appear, became integrated into the eukaryotic genome prior to the division of cells into somatic cells and gametes, and became the basis for the various functions which are now performed by specialized organs, though the coupling became much looser in the process. For example, rhodopsin proteins are responsible for converting light into chemical signals, and thus play a central role in the perception of light in the cones and rods of our eyes. In bacteria, they are part of a tightly coupled network responsible for phototaxis. But indirection and the consequent loose coupling make possible the far more complicated behavior of multicellular organisms.

    If this were true in its simple and obvious form, I’d expect that we’d have a whole lot of symbiotic transmissible viruses around, not just retroviral relics stuck in our DNA and restricted to our descendant lineages. I don’t know much about that, but from my biologically ignorant position at the moment, it doesn’t seem right—my impression is that there are not a lot of transmissible viruses that most people have, and must have to develop properly. (E.g., if your family doesn’t transmit the virus, you don’t develop some common feature properly.)

    Well, if they need the DNA to develop properly, then it should be part of their genome so as to ensure that they do develop properly. You wouldn’t want to have to depend upon chance infections to ensure proper development. Nevertheless, something along these lines does in fact take place at birth — with the acquisition of bacteria which are necessary for digestion.

    It’s also my impression that endogenous retroviruses tend to be either eliminated or become stuck in our DNA. They don’t generally jump back out into a transmissible virus. (Although it’s my impression that that can happen, with viruses occasionally picking up bits of DNA from their hosts, or several viruses picking up each other’s DNA within a host.)

    True — they start out exogenous, then wind up endogenous, probably having worked their way into the germline originally via the male gamete — although there is the potential exception of HERV-K113, which may still be capable of some lateral transmission.

    So a potential problem with this theory is that for the code-sharing system to work optimally, you’d want to solve some big commons problems. For example, a given species should not generally sequester useful viral DNA in its own genome—it should be freely shared so that other lineages can use it. But it seems that the obvious selection pressures would oppose that—if the virus is good for you, you want to hang onto it in your sequestered DNA rather than relying on the vagaries of viral transmission. And you don’t much care about other lineages, which are likely competitors.

    Exactly. Sequestration will be the rule, since viral transmission is chancy.

    (But maybe not. If they’re your prey, what’s good for them is good for you, unless it’s good for them because it keeps them from being your lunch. For example, a toxoplasmosis-like infection might be good for rats and cats in general, even though it gets rats eaten by cats sometimes; it could pay for itself in some way for rats, reducing the selection pressure for rats to be immune to it. But in each case, the virus would need to be transmitted selectively—e.g. to likely prey but not likely competitors.)

    I have been thinking along these lines myself. Viruses are aggressive — they have to be in order to defeat the immune system of their host and reproduce. However, coevolution results in a special relationship between the virus and its natural host, where the virus is more or less asymptomatic, and the immune system leaves it alone. Nevertheless, there are benefits which a virus could confer on its natural host which would consequently benefit itself insofar as it depends upon this natural host. For example, a virus may remain fatal to a predator of its natural host, or it could remain fatal to a competitor of its natural host. In the latter case, between two populations which have been separated for some time, a virus in one of those populations might also act as a barrier to the mixing of the two populations, and thus prevent the exchange of genetic information, thereby promoting speciation — in some cases. (This would also tend to argue against their having played a large role in lateral gene transmission among multicellular eukaryotic hosts.)

    Another problem for this scheme working optimally would be that you’d probably want viruses to be big and able to code for entire software modules with a number of interacting parts. (E.g., in an extreme case, a how-to-build-an-eye module, as opposed to just a particular tweak on relatively standard eye development.)

    So far as I know, viruses are generally quite small; they may tweak several things in a genetic regulatory network, encoding several changes that need to be made simultaneously for any of them to make sense, but they probably don’t implant very big software modules into the host. E.g., nothing of the complexity of a video codec, or an eyeball from scratch. That doesn’t mean that you couldn’t get some really interesting transmissable modules, which would amount to an elegant combination of subroutine calls that built something very cool by efficiently reusing standard code. And maybe that’s enough for it to be very important.

    Exactly. The size will be a problem for transporting an entire module. Plus how would it know where one module ended and another began?

    A subtler and weaker version of this would be changes to an organism that just shift certain developmental or cognitive biases, “directing” its evolutionary path such that normal evolution does the real problem-solving, given this “strong hint” as to which direction to go in searching for a solution. That kind of “good hint” might direct evolution, with genetic assimilation doing the grunt work.

    Not something I have really thought of before, but it may be related to the kind of regulatory function which I suggested above.

    Unfortunately, that’s suboptimal in a couple of major ways. One is that if a good hint directs one species to develop in a certain general way, that may well work for another species, too, and you’ll get directed convergent phenotypic evolution, but their code bases will diverge, because they’ll fill in the genetic blanks in different minor ways that are incompatible. This will limit the ability to “reuse code” across lineages in the future.

    Agreed.

    The other way this directed convergence is suboptimal is that it decouples the fitness function of the virus from the fitness function of the host to some extent. A virus might direct an organism in a good direction, but if it takes too long for the organism to get very far in that direction and proliferate, it may not do the virus as much good. And if the genetic assimilation eliminates the host’s dependency on the virus, it increases the selection pressure on the host to eliminate the virus, which is no longer useful and may be harmful.

    I doubt that the genetic assimilation would occur as quickly as the adaptation of the virus to the host. The virus will probably become largely asymptomatic before that, and therefore there will be no need to eliminate it. However, viruses undoubtedly have fairly narrowly defined ranges for what they “consider” their natural host. Outside of that, assuming they are able to defeat the immune system of a host, it will undoubtedly be because they are fairly aggressive, which will mean population culling.

    Many of these problems could be solved for a genetic algorithm, by fiat—just rig the algorithm in a way that solves the commons problems that would actually occur in nature. I could freely add big viruses or plasmids to a basically sexual GA, and rig the fitness function to reward producers of code that’s reused in other lineages. Maybe that would yield a useful algorithm, but at this point I’m sure I don’t understand this stuff well enough—deep facts about what makes evolution work well or badly—and I’m really more interested in the scientific questions than in engineering solutions.
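    (As a rough illustration of the kind of rigging described above, here is a hypothetical plasmid-style operator for such a GA: it copies a contiguous module from a donor in one lineage into a recipient in another, and credits the donor’s lineage, so that a fitness function could then reward producers of reused code. Every name here is invented for the sketch.)

    ```python
    import random

    def plasmid_transfer(pop, lineage, reuse_credit, rate=0.05, module_len=8):
        """Copy a contiguous 'module' of genome from a donor in one lineage
        into a recipient in another, crediting the donor's lineage so that a
        rigged fitness function can reward reusable code."""
        for i in range(len(pop)):
            if random.random() < rate:
                j = random.randrange(len(pop))
                if lineage[j] != lineage[i]:               # cross-lineage only
                    start = random.randrange(len(pop[i]) - module_len)
                    pop[i] = (pop[i][:start]
                              + pop[j][start:start + module_len]
                              + pop[i][start + module_len:])
                    reuse_credit[lineage[j]] += 1          # reward the donor
    ```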

    I have little doubt that both areas will offer a great many fascinating insights in the years ahead, but yes, I am more interested in how evolution actually took place. And there is little doubt that viruses have played a very significant role in the evolution of their hosts.

  31. Steviepinhead says

    The new Discover magazine has an article about a really LARGE virus that was recently discovered in some odd place (gunk scoured off a cooling tower), so there may be lots of viruses out there that are larger than we are used to thinking about.
    The same article suggests that viruses may have played major roles in early evolution–back in the RNA world, or even earlier–and even suggests that the eukaryotic cell nucleus may have originated as a virus that accommodated to its host…
    It’s clear that I need to learn more about viruses.

  32. Timothy Chase says

    steviepinhead wrote:

    The new Discover magazine has an article about a really LARGE virus that was recently discovered in some odd place (gunk scoured off a cooling tower), so there may be lots of viruses out there that are larger than we are used to thinking about.

    The same article suggests that viruses may have played major roles in early evolution–back in the RNA world, or even earlier–and even suggests that the eukaryotic cell nucleus may have originated as a virus that accommodated to its host…

    It’s clear that I need to learn more about viruses.

    Yes — it was a mistake on my part to think of virus size in terms of the size of RNA viruses. They are what I tend to focus on, principally the retroviruses, but double-stranded DNA viruses can get quite large. I remember running across information on the Mimivirus a number of times; it was originally thought to be a bacterium on account of its size and staining. There are other giant dsDNA viruses, particularly in the oceans. If I remember correctly, they have run into at least one which is motile outside of any host. Similarly, some include genes for photosynthesis, although what those genes are used for (assuming they are used) is a different question. Likewise, some viruses include enzymes for self-repair. And I have run across material on a virus which is able to infect dead bacteria, then reanimate them for the purpose of self-replication.

    Yep, you heard it here first: there are zombie bacteria which have been brought back to life by viruses!

    Here are a couple of articles which may be of interest:

    Are viruses driving microbial diversification and diversity?
    Markus G. Weinbauer and Fereidoun Rassoulzadegan
    Environmental Microbiology, Volume 6, page 1, January 2004

    Common Lineage Suggested For Viruses That Infect Hosts From All Three Domains Of Life
    December 6, 2004

  33. Timothy Chase says

    A couple more articles which may be of interest…

    A popularized article mentioning the viruses which bring bacteria back to life:
    Are Viruses Dead or Alive? by Luis P. Villarreal, Dec 2004

    Incidentally, Luis P. Villarreal suggests that a dsDNA virus may have been responsible for the transition from prokaryotes to eukaryotes.

    Short piece describing a “Science” article from 2004 on the Mimivirus…
    Giant Virus Genome Blurs Life Lines

  34. says

    OK, lemme see if I understand this…

    On this bigass virus theory, eukaryotic cells are basically prokaryotic cells that were taken over by a bigass virus, in a microscopic Invasion of the Body Snatchers.

    Yikes. We have met the enemy, and he is us.

  35. says

    I don’t know much about either dinoflagellates or plastids yet — it’s on my reading list — but apparently, in terms of endosymbiosis, the dinoflagellates are really active, pursuing plastids in algae and elsewhere. The captured and incorporated plastids even have an evocative name: kleptoplastids. The euglenids, subjects of personal study a long time ago, can apparently indulge in this practice, too.

    If viruses are remarkable in this behavior, to me it’s all the more remarkable that these larger organisms do it regularly and for life functions “bigger” than genetic variation.

  36. Torbjorn Larsson says

    “I think you’re overestimating my expertise and my memory. My knowledge of this stuff comes mostly from conversations with a colleague, and reading a few papers, years ago.”

    Well, that makes you far more studied; I have only read a few papers, years ago. ;-)

    I enjoyed your and Timothy’s commentaries, inasmuch as I was able to follow them. One of the impressions they leave is that, as probably for neural networks, too much shared information gives inferior solutions from GAs, but ‘just enough’ has advantages. I can’t help but think, in my naiveté, that this could be part of the explanation for why the brain is so differentiated, apart from the obvious interfaces to different sensor inputs and motor outputs.

    “All of this suggests that smaller populations accumulate more slightly deleterious mutations in their genomes, but this in essence gives the genome more to play with in the long run, resulting in greater novelty and in developments which themselves require higher levels of complexity.”

    Interesting info on the adaptation of mutation rate to environment. So not only will there be an increased number of different interactions when species split off, but the smaller populations themselves help increase the speed of evolution.

    Again, in my naiveté, I can’t help but think of this as perhaps one of the reasons why evolution accelerated after the first eons of unicellular life. Or why we should try even harder to keep different ecologies intact – not only are species evolutionary resources, but the small and diverse population structures themselves are.

  37. Anonymous says

    “Gouldians” tend to harp on the meaninglessness of “genes-for-X-ism”. Gould said there was no one gene “for” your fifteenth eyelash, or your right index finger. But, like much of what Gould wrote (such as the Cambrian explosion producing fully-formed phyla), it was a bit of a red herring.

    To use an analogy of Pat Bateson’s: OF COURSE no one gene is responsible for one phenotypic trait, any more than one word in a cake recipe is responsible for a particular piece of the cake. The cake’s taste is the result of a complex interaction of components — butter, sugar, baking soda, etc., exposed to certain conditions for certain periods. The whole recipe “maps” onto the whole cake. There is no “one-to-one mapping” from the recipe’s words to “bits” of cake (although the cherry DOES map directly to the surface!)

    HOWEVER, if you omit a certain key word in the recipe, then the cake will not get cooked properly, and a specific, more-or-less-predictable result will ensue in the oven. In THAT limited sense, it DOES make sense, perfect sense, to speak of “genes for X”.

  38. Anonymous says

    And do we know this mutation was not just a low-frequency version of the same allele found in the related species, one whose frequency had been lowered by selection beforehand, only to have it brought back up after new selective pressures were added that mimicked the environment the related species lives in?