I’ve been informed that I’ve been at war for a while. I was surprised. Apparently, Perry Marshall thinks he’s been firing salvo after salvo at me…I just hadn’t noticed.
— Evolution 2.0 (@cfingerprints) January 9, 2016
Oh, OK. I would just ignore him, but he’s presenting some fascinatingly common misconceptions. One of his boogeymen is chance, and I’ve noticed that a lot of people hate the idea of chance. Uncle Fred got hit by lightning? He must have done something very bad. It can’t just have been an accident. There are no accidents!
Yes, Virginia, there are accidents, chance events, and random happenings, and solid scientific explanations have to include chance variation as a component. Even consistently predictable events on a macro scale often have a strong stochastic element to their underlying mechanisms.
Marshall, unfortunately, has this wrong idea that invoking chance is a cop-out — that randomness is bad and unscientific. So one of his salvos is a whole page of
synonyms for random, which actually do more to reveal his ignorance than expose any problems with chance.
I don’t know
I don’t care
Can we go to lunch now
Flying Spaghetti Monster
It wasn’t God so it must have been something else
Vague un-testable assertion that excuses me from doing my science job
Judging a pattern of variation to be random is determined by the actual properties of the data set. Randomness is an empirical conclusion.
Here’s a simple example. I have a six-sided die. I throw it 666 times and record the result of each throw. What do you expect?
You probably expect about 111 “1”s, 111 “2”s, 111 “3”s, 111 “4”s, 111 “5”s, and 111 “6”s. But not exactly 111 of each; you expect some deviation from that number. You might actually suspect a non-random result if you got exactly 111 of each, because that would suggest some regularity. If the data showed that the first 111 throws all produced “1”, the second 111 produced “2”, etc., you’d immediately recognize that as non-random. The fact that we can identify non-random series implies that there are measurable properties we can examine to determine randomness.
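You can see this for yourself with a quick simulation (a toy sketch, not anything rigorous; the seed is fixed only so the run is reproducible):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Throw a fair six-sided die 666 times and tally the results.
counts = {face: 0 for face in range(1, 7)}
for _ in range(666):
    counts[random.randint(1, 6)] += 1

print(counts)
# Each face comes up roughly 111 times, but almost never exactly 111.
```

Run it a few times with different seeds and you’ll see counts scattering around 111, typically by ten or twenty in either direction, which is exactly the deviation you should expect from a fair die.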
We can even quantitatively predict properties of random sets of data; there are statistical theories and tests that can estimate how much variation we might expect from a given number of trials, how often and how long a run of repeated results should occur, etc. We can use these parameters to test for faked data, for instance.
To demonstrate this to beginning students of probability, I often ask them to do the following homework assignment the first day. They are either to flip a coin 200 times and record the results, or merely pretend to flip a coin and fake the results. The next day I amaze them by glancing at each student’s list and correctly separating nearly all the true from the faked data. The fact in this case is that in a truly random sequence of 200 tosses it is extremely likely that a run of six heads or six tails will occur (the exact probability is somewhat complicated to calculate), but the average person trying to fake such a sequence will rarely include runs of that length.
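The claim about runs of six is easy to check by brute force. Here’s a minimal Monte Carlo sketch (the helper name `longest_run` and the trial count are my own choices for illustration):

```python
import random

random.seed(0)  # fixed seed for reproducibility

def longest_run(flips):
    """Length of the longest run of identical consecutive results."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# How often does a truly random sequence of 200 tosses contain a run
# of at least six identical results?
trials = 10_000
hits = sum(
    longest_run([random.choice("HT") for _ in range(200)]) >= 6
    for _ in range(trials)
)
print(f"run of six or more in {hits / trials:.0%} of trials")
```

The simulation lands in the mid-90s percent: a run of six heads or six tails is almost inevitable in 200 honest tosses, which is why fakers who avoid long runs give themselves away.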
I’ve read whole books on the mathematical properties of randomness. I got into it for a while because of a problem that bugged me: I was watching the formation of peripheral sensory networks in zebrafish, which has both random and non-random aspects. Random: neurons grow out and branch in unpredictable ways; they don’t form a methodical pattern, like a fishnet stocking covering the animal. The specifics of branching vary from animal to animal. Non-random: the dispersal of the branches has to demonstrate adequate spacing — you shouldn’t have clumping, or large gaps in the coverage. There were rules, but they played out on a game board where chance drove the particulars.
Or on a grander scale, read David Raup’s Extinction: Bad Genes or Bad Luck? The answer to the question is both, but luck is the best way to describe some large-scale events in the history of life.
That’s the important thing: many phenomena have an underlying basis in chance, and are subsequently shaped by non-chance processes: you can’t model enzyme kinetics without acknowledging random molecular interactions given a direction by the laws of thermodynamics, for instance, or understand evolution without seeing the importance of chance variation, winnowed by selection.
And contra Marshall and a thousand other creationists, chance isn’t simply the answer we give when we don’t know what is going on. There are criteria. We have statistical tests for randomness and non-randomness, and we also use chance as a tool.
For example, one of the things I’ve been doing over this winter break is prepping some fly lines for mapping crosses my students will be doing in genetics next term. We use chance events to peek at the structure of the chromosome. Here’s how it works.
We set up flies with pairs of traits (actually, we’re doing three at a time, but that’s more complicated to explain), and we generate heterozygotes that we are going to cross to homozygotes (or in this case, because these are X-linked traits, we can cross to hemizygous males…but see, it’s already getting complicated). To keep it simple, here’s an example of a fly that is heterozygous for two genes, one for body color and one for eye color. Wild type flies are gray-bodied, and we have a recessive mutant yellow that gives the body a yellowish cast. Wild type eyes are red, and we have another recessive mutant that has white eyes.
The female, on the left, has a gray body and red eyes, because those traits are dominant, but she’s heterozygous, or a carrier for both recessive traits. The male has only one X chromosome, so he can only pass on the yellow and white traits, and he is also yellow bodied and white eyed.
As I’ve drawn them, if there is no other process in play, the female can pass on either a chromosome carrying yellow and white, or a chromosome carrying the gray and red traits (Note that I’ve faded out the male contribution: he’s donating only recessive alleles to allow the female contribution to be expressed in the phenotype, so you can ignore him*). So by default, we’d expect that the result of this cross would be that half the progeny would be gray bodied and red eyed, and the other half would be yellow bodied and white eyed. Note that the half and half distribution is also a product of chance — in most situations, which X chromosome gets passed on is random.
But there is another process in play! During meiosis, the female can swap around portions of her two X chromosomes in a process called recombination. This is a chance event. One chromosome is broken at a random point along its length, and the other chromosome is broken at the equivalent position (a non-random choice), and they’re re-stitched together to form an intact chromosome with a different arrangement of the alleles. That allows some of the progeny to express a different pair of traits.
Every time you see a fly with red eyes and yellow body, or white eyes and gray body, in this cross, you are seeing the phenotypic expression of a recombination event between the two genes.
We can use this chance event to make a map of genes, an insight that came to Thomas Hunt Morgan and Alfred Sturtevant around 1911. Yes, genetics has been using chance to study genes for over a hundred years.
The way this works…imagine you have a barn. On the side of the barn, you paint two targets: one is a square 2 meters on a side, and the other is a smaller square, 1 meter on a side. You then blindfold a person with a gun, and tell them to blaze away in the general direction of the barn. At the end of the afternoon, after they’ve gone through a few boxes of ammo, you tell them the results: most of the time they completely missed the targets, but they hit the first one 18 times, and the second one 4 times.
Can they estimate the relative size of the two targets?
Of course they can. And the more shots they take, the more accurate their estimate will be.
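Here’s a sketch of the blindfolded-shooter experiment as a simulation. The wall dimensions and target positions are made up for illustration; the point is only that uniformly random shots hit each target in proportion to its area:

```python
import random

random.seed(2)  # fixed seed for reproducibility

# Shots land uniformly on a hypothetical 10 m x 10 m barn wall.
# Target A is a 2 m square (4 square meters), target B a 1 m square
# (1 square meter); positions are arbitrary and non-overlapping.
def shoot():
    return random.uniform(0, 10), random.uniform(0, 10)

hits_a = hits_b = 0
shots = 100_000
for _ in range(shots):
    x, y = shoot()
    if 1 <= x <= 3 and 1 <= y <= 3:      # inside the 2 m square
        hits_a += 1
    elif 5 <= x <= 6 and 5 <= y <= 6:    # inside the 1 m square
        hits_b += 1

# The ratio of hit counts estimates the ratio of target areas (4:1).
print(hits_a / hits_b)
```

With a hundred thousand shots the estimated ratio comes out very close to 4; with only the twenty-two hits in the story, the estimate (18/4 = 4.5) is noisier but still in the right neighborhood.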
That’s what Sturtevant and Morgan were doing. They couldn’t see genes, they could barely see the chromosome, but they could use recombination to take random shots at the arrangement of alleles on the chromosome, and they could see whether they hit that spot between two genes, like yellow and white, by looking for the rearrangement in the phenotype. The frequency of those rearrangements relative to misses also told them the relative size of the target — how far apart yellow and white are on the chromosome.
(FYI, yellow and white are fairly close together, and we see recombination between them in only 1.5% of the progeny of the cross. This gets reported as a map distance of 1.5 map units, or centimorgans, between yellow and white.)
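The whole mapping cross can be sketched as a toy model. The 1.5% crossover rate is the figure from the text; the rest (the `egg` helper, the simplifying assumption that the mother’s gamete alone determines the progeny phenotype, since the father contributes only recessives) is my own illustrative scaffolding:

```python
import random

random.seed(3)  # fixed seed for reproducibility

# The mother carries one X with (yellow body, white eyes) and one with
# (gray body, red eyes). With probability 0.015 a crossover falls
# between the two genes, producing a recombinant X.
def egg():
    parental = random.random() < 0.5       # which X she starts from
    crossover = random.random() < 0.015    # crossover between y and w?
    body, eye = ("yellow", "white") if parental else ("gray", "red")
    if crossover:
        eye = "red" if eye == "white" else "white"  # swap the eye allele
    return body, eye

progeny = [egg() for _ in range(100_000)]
recombinants = sum(
    1 for body, eye in progeny
    if (body, eye) in {("yellow", "red"), ("gray", "white")}
)
print(f"map distance = {100 * recombinants / len(progeny):.2f}")
```

Count the flies with mismatched trait combinations, divide by the total, and the recombination frequency falls right back out — the simulated map distance hovers around 1.5, just as the real cross reports.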
You cannot predict ahead of time whether a specific individual produced in this cross will be wild type, or white-eyed and gray-bodied, or any particular possibility. It is a random process with a stochastic distribution of results that has some general predictability, just like an individual bullethole produced by the blindfolded shootist.
This seems to baffle creationists. They have a deep antipathy to randomness on principle, but even worse, they seem incapable of realizing that scientists can be simultaneously studying chance events that have statistically predictable outcomes — like genetics, or evolution, or the physics of sub-atomic particles. It’s a guaranteed way to blow their minds to point out that on one scale a phenomenon might be chance driven, but stepping back and looking at the whole reveals a regularity and pattern.
At which point they stagger back and declare that the small-scale events have to be determined and specified and predictable too, and therefore nothing is random. They just don’t get it.