More on the origins of complexity

My earlier post on the origins of complexity sparked an interesting and informative discussion in the comments, so I thought I would pass along another article on the question that I came across later, written by Amy Maxmen, which looks at it in light of what we know about the seemingly lowly amoeba. She starts by posing the problem:

Amoebas are puny, stupid blobs, so scientists were surprised to learn that they contain 200 times more DNA than Einstein did. Because amoebas are made of just one cell, researchers assumed they would be simpler than humans genetically. Plus, amoebas date back farther in time than humans, and simplicity is considered an attribute of primitive beings. It just didn’t make sense.

The idea of directionality in nature, a gradient from simple to complex, began with the Greeks, who called nature physis, meaning growth. That idea subtly extended from changes over an organism’s lifetime, to changes over evolutionary time after Charles Darwin argued that all animals descend from a single common ancestor. When his contemporaries drew evolutionary trees of life, they assumed increasing complexity. Worms originated early in animal evolution. Creatures with more complex structures originated later. Biologists tweaked evolutionary trees over the following century, but generally, simple organisms continued to precede the complex.

She says that new studies of genomes cast doubt on earlier ideas that complexity, at least as measured in the number of genes possessed by an organism, grew in a linear fashion with time. “The results suggested that complex body parts evolved multiple times and had also been lost. One study found that winged stick insects evolved from wingless stick insects who had winged ancestors.”

She also looks at comb jellies, an organism more complex than sponges that predated the latter, and other animals that reveal that evolution can drive systems to more or less complexity, depending on which is more suited. As a result, things that we think of as singular events in evolution, like the emergence of a central nervous system, may have evolved more than once.

“Traditional views are based on our dependence on our nervous system,” says [evolutionary biologist Joseph] Ryan. “We think the nervous system is the greatest thing in the world so how could anything lose it,” he says. “Or, it’s the greatest thing in the world, so how could it happen twice.”

It is an interesting article that contained fascinating new information (for me at least) about creatures and their features that I was not aware of before.


  1. Reginald Selkirk says

    Plus, amoebas date back farther in time than humans, and simplicity is considered an attribute of primitive beings.

    She comes pretty close to screwing up in several ways right here. She is assuming genome size is the same for prehistoric amoebae and modern amoebae.

    Also, she is assuming that one super-large amoeba is representative of all amoebae. From Genomes large and small:

    The smallest eukaryotic genome known to date is that of the protist Encephalitozoon intestinalis, a parasitic microsporidian with a genome size of only 2.3 million base pairs, which is smaller than that of many bacteria (Vivarès and Méténier 2000). The smallest free-living eukaryote genome size is found in Ostreococcus tauri at 12.6 million base pairs (Derelle et al. 2006). The largest reliable protozoan genome size estimate reported to date is 97.8 billion base pairs in the dinoflagellate Gonyaulax polyedra (Shuter et al. 1983). That is a more than 33,000-fold range among protists.

    Then there is the mistake of assuming that her information is reliable. (ibid)

    It should be pointed out that the largest published eukaryote genome size estimate is 1,400 billion base pairs (400 times larger than human) in the free-living amoeba Chaos chaos (Friz 1968), although the largest genome size is often attributed to Amoeba dubia at 700 billion base pairs based on the same study. These data are not generally considered reliable, for several reasons. First, these values for amoebae were based on rough biochemical measurements of total cellular DNA content, which probably includes a significant fraction of mitochondrial DNA. Second, Friz’s (1968) value of 300pg for Amoeba proteus is an order of magnitude higher than those reported in subsequent studies (Byers 1986). Third, some amoebae (e.g., A. proteus) contain 500-1000 small chromosomes and are quite possibly highly polyploid (Byers 1986), in which case these values would be inappropriate for a comparison of haploid genome sizes among eukaryotes.

    (Bold added by me for emphasis).

    So, it appears that she is setting up a straw man.

  2. invivoMark says

    This article does a fair job of representing current science, I think. But it perpetuates the meme that scientists were surprised to find 25,000 genes in the human genome, because everyone expected our complexity to be explained by having so many more genes than fruit flies.

    This is a very popular story, but at best, it’s a half truth. Many scientists in the 1980s and ’90s knew that there was likely a low upper bound for the number of genes we would find in humans, and these upper bounds were estimated as early as the late ’60s. Even the popular textbook Molecular Biology of the Cell (a venerable volume whose current editions still find use in classrooms) included in its 1983 edition an estimate of 30,000 genes. More on historical gene estimates:

    Geneticists have thought for a long time that complexity was a result not of having more genes, but of having more complicated gene regulatory networks. Eric Davidson and Sean Carroll (not the physicist) have been two leading authors promoting this view, the former working in sea urchins and mapping out their regulatory programming. Genetic complexity frequently arises from adaptation and co-option of existing gene modules.

    The modular nature of gene regulation means that complexity can arise with a relatively low number of genetic mutations. You don’t need to add whole genes to make a new function. Rather, you simply need the right 5 to 8 base pair sequence at the right spot before an existing gene, making a new transcription factor binding site to turn on the gene in a new location, potentially causing a cascade of other genes to begin functioning in a novel cellular environment. And likewise, complexity is equally simple to lose -- simply mutate one or two bases in that same transcription factor binding site.
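    The point above can be made concrete with a toy model. This is only a sketch: the motif `TGACGT` is a made-up stand-in for a transcription factor binding site, and real promoters involve many interacting sites, but it shows how a handful of base changes can switch a gene's expression on or off without adding any genes.

    ```python
    # Toy model: a gene is "expressed" if its promoter contains a binding
    # site for a transcription factor. The motif below is hypothetical.
    MOTIF = "TGACGT"

    def gene_is_expressed(promoter: str) -> bool:
        """The gene turns on if the promoter contains the binding site."""
        return MOTIF in promoter

    # A promoter that lacks the site: the gene is silent.
    ancestral = "AACCTTGGAACCTTGG"
    print(gene_is_expressed(ancestral))   # False

    # Inserting the short motif creates the site: new expression, no new genes.
    gained = ancestral[:8] + MOTIF + ancestral[8:]
    print(gene_is_expressed(gained))      # True

    # A single point mutation inside the site destroys it again.
    lost = gained.replace("TGACGT", "TGACCT")
    print(gene_is_expressed(lost))        # False
    ```

    Gaining the function took one short insertion; losing it took a single base change, which is exactly why complexity is "equally simple to lose."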

    Hence, evolution can easily vary in complexity like a drunk person’s stagger. I do appreciate this article’s link to one of my favorite websites.

  3. Reginald Selkirk says

    She does it again, and again:

    Even more perplexing: Sea anemones evolved before flies and humans, some 560 million years ago.

    Late last year, the animal evolutionary tree quaked at its root. A team led by Joseph Ryan, an evolutionary biologist who splits his time between the National Genome Research Institute in Bethesda, Md. and the Sars International Center for Marine Molecular Biology in Bergen, Norway, analyzed the genome from a comb jelly, Mnemiopsis leidyi, a complex marine predator with muscles, nerves, a rudimentary brain, and bioluminescence, and found that the animals may have originated before simple sponges, which lack all of those features.

    Not the anemone whose genome was sequenced. And the early comb jellies did not necessarily have all those features found in modern versions.
    I’m glad she’s learning something, but her initial knowledge set is problematic.

    My version of what happened: the recent finding was that the lineage for the comb jellies split off from the lineage for all the other types of multicellular animals first. Comb jellies have been evolving ever since. The main trunk later split into all kinds of things, including sponges and us.

    “We think the nervous system is the greatest thing in the world so how could anything lose it,” he (Ryan) says.

    O crikey. Tunicate larvae have a brain, and are free-swimming. When they settle down into sessile adulthood, they lose their brain as an unnecessary expense. This is so well known (among biologists) that it can be found on Wikipedia.

  4. Reginald Selkirk says

    A term that should be found in any good discussion of that subject: left wall saturation.

  5. Reginald Selkirk says

    Pelagibacter ubique

    It is an abundant member of the SAR11 clade in the phylum Alphaproteobacteria. SAR11 members are highly dominant organisms found in both salt and fresh water worldwide — possibly the most numerous bacterium in the world… P. ubique and its relatives may be the most abundant organisms in the ocean, and quite possibly the most abundant bacteria in the entire world. It can make up about 25% of all microbial plankton cells, and in the summer they may account for approximately half the cells present in temperate ocean surface water. The total abundance of P. ubique and relatives is estimated to be about 2 × 10^28 microbes.

    The genome of P. ubique strain HTCC1062 was completely sequenced in 2005, showing that P. ubique has the smallest genome (1,308,759 bp) of any free-living organism, encoding only 1,354 open reading frames (1,389 genes total). The only species with smaller genomes are intracellular symbionts and parasites, such as Mycoplasma genitalium or Nanoarchaeum equitans.

    1) Parasites can have smaller genomes than similar free-living organisms if they have certain resources supplied by their host.

    2) These things have huge population numbers, and generation time is pretty short, so selection is much keener than for larger, low population organisms.

    3) Getting finally to left-wall saturation. I forget where I picked up the term, possibly from one of Stephen Jay Gould’s books. With these huge population numbers, how would any sub-population of P. ubique break out and constitute a new species? Possibilities:

    a) Lower costs. Perform essentially the same job, fill the same ecological role, but do it more efficiently. Get rid of any unnecessary genes, for example. If this were likely, it would have happened already. So you can expect that P. ubique is probably about as efficient as it can get, barring any really huge innovation of the sort that is evolutionarily unlikely (e.g. requiring a huge leap).

    b) Add new functionality. Fill a new role, so that you no longer have to compete with the cost-cutters. This of course would require adding at least a bit of complexity.

    Thus, over time, life may become more complex just because things are already crowded at the “left wall.” (I.e., the x-axis is ‘complexity.’)
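    The left-wall argument (which Gould made in Full House) can be illustrated with a small simulation. The sketch below makes some arbitrary assumptions: complexity is a single integer, the "wall" sits at 1, and each generation a lineage's complexity drifts down, stays, or drifts up with equal probability. There is no bias toward complexity at all, yet mean complexity rises, simply because lineages cannot cross the wall.

    ```python
    import random

    random.seed(42)  # fixed seed so the sketch is reproducible

    LEFT_WALL = 1     # assumed minimum viable complexity (arbitrary units)
    POPULATION = 500  # number of independent lineages
    STEPS = 2000      # generations

    # Every lineage starts at the wall. Each generation its complexity
    # changes by -1, 0, or +1 with equal probability (no upward bias),
    # but it can never fall below the wall.
    lineages = [LEFT_WALL] * POPULATION
    for _ in range(STEPS):
        lineages = [max(LEFT_WALL, c + random.choice((-1, 0, 1)))
                    for c in lineages]

    mean = sum(lineages) / POPULATION
    print(f"mean complexity after {STEPS} generations: {mean:.1f}")
    print(f"lineages still pinned at the wall: {lineages.count(LEFT_WALL)}")
    ```

    The mode stays at or near the wall (most life remains simple), while the unbounded right tail drags the average upward, which is the whole point of the argument.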

  6. says

    The whole ‘complexity’ angle seems to me to be a symptom of creationists’ not really thinking things through. You can have massive ‘complexity’ emerge by taking a few simple rules and maybe a pseudorandom number generator, and letting it run: the output is complex! But that doesn’t mean that the complexity is anything more than an output from a very simple rule!
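    The classic demonstration of this point is an elementary cellular automaton. The sketch below runs Rule 30, whose entire "design" is an 8-entry lookup table, yet whose output is irregular enough that it has been used as a pseudorandom generator; the wrap-around boundary and grid size here are arbitrary choices.

    ```python
    # Rule 30: each new cell depends only on the three cells above it.
    # The rule number 30 encodes the 8-entry lookup table in binary.
    RULE = 30

    def step(cells: list[int]) -> list[int]:
        """Apply one generation of the automaton (wrap-around edges)."""
        n = len(cells)
        new = []
        for i in range(n):
            left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            pattern = (left << 2) | (mid << 1) | right  # 3-bit neighborhood
            new.append((RULE >> pattern) & 1)           # look up the new cell
        return new

    width, rows = 63, 30
    cells = [0] * width
    cells[width // 2] = 1  # start from a single live cell

    for _ in range(rows):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
    ```

    The rule fits on one line; the output is the famous chaotic triangle. All the visible "complexity" is an emergent property of that one tiny table, which is exactly the point.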

    Consider, for example, the electrical code and a simple model for how human-built buildings get wired. From that, with some random variation (# of floors, width, floorplan, gaps, stairs, bathroom ratio, etc) you get extremely complex wiring systems in each building. But that does not mean that there was a single master designer that designed the entire wiring architecture of New York City down to the outlet -- the insane complexity of New York City’s wiring is an emergent property of some relatively simple rules, and those rules were not cleverly designed, they simply came from the properties of the wiring itself (2 wires in buildings built before 1920, 3 in buildings built after 1970, roll 1d100 for anything in between…) and how electrical machinery works.

    What’s odd to me is that the creationists who focus so intensely on ‘complexity’ don’t realize that all they wind up with is a great god the absurd micromanager because otherwise, delegating how a process works to simple rule-sets results in reality as an emergent property of simple rules. Kind of like how it appears to be.

    By the way the proper creationist answer to the issue I raise above is that “god really wanted to make the ultra-complex wiring system of New York City” and created humans as a finite state machine to produce that wiring so he didn’t have to micromanage the fucking mess.

  7. says

    Speaking of complexity:
    The ‘B’ in “Benoit B. Mandelbrot” stands for “Benoit B. Mandelbrot”

    And that’s all you need to say about apparently complex things that arise from simple rules.
