Where Bigotry Thrives

All of us strive to be rational. We believe that reality does not contradict itself: something cannot both exist and not exist at the same time. So when we discover a contradiction in our beliefs, we discard one side to align ourselves more closely with reality. But there’s another, more human reason to weed out contradictions in our views.

Charles Murray, in his interview with Sam Harris, was grilled a bit on universal basic income.

[1:53:17] HARRIS: I’ve heard you talk about it and this is a surprise because, in “Coming Apart” you are fairly critical of the welfare state in all its guises and you- you just said something that at least implied disparagement of the welfare state in Europe, as we know it, so tell me why you are an advocate for universal basic income.

[1:53:40] MURRAY: Well, I first wrote [a] book back in two thousand five or six, called “In Our Hands,” but I did it initially for the same reason that Milton Friedman was in favor of a negative income tax, the idea is that you replace the current system with the universal basic income and, that, you leave people alone to make their decisions about how to use it.

And yet, back in 1984, Murray was singing a different tune.

In Losing Ground, Charles Murray shows that the great proliferation of social programs and policies of the mid-’60s made it profitable for the poor to behave in the short term in ways that were destructive in the long term.

Murray comprehensively documents and analyzes the disturbing course of Great Society social programs. Challenging popular notions that Great Society programs marked the beginning of improvement in the situation of the poor, Murray shows substantial declines in poverty prior to 1964-but slower growth, no growth, and retreat from progress as public assistance programs skyrocketed.

If we truly want to improve the lot of the poor, Murray declares, we should look to equality of opportunity and to education and eliminate the transfer programs that benefit neither recipient nor donor.

Murray was influential in Reagan’s war on the poor, which held that poor people would unwisely spend their government assistance cheques, yet now he’s arguing that the poor should be given government assistance with no strings attached?! He never acknowledges his about-face, but I think this part of the interview is telling.

[2:00:11] MURRAY: There will be work disincentives, but we are already at a point, Sam, where something more than 20 percent of working-age males with high school diplomas, and no more [education than that], are out of the labor force. So we already have a whole lot of guys, sitting at home, in front of a TV set or a gameboy, probably stoned on meth, or- or opioids, doing nothing. We got a problem already and I see a lot of ways in which the moral agency that an income would give could make the problem less.

[2:00:46] HARRIS: Did the dysfunction you, you see in white and largely rural America now, is it analogous to the dysfunction that we were seeing in the in the black inner-city starting a few decades ago? Are there important differences, or- or how do you how do you view that?

[2:01:05] MURRAY: In some ways it followed pretty much the same trajectory. Way back in nineteen ninety two, or three it was, I had an op-ed in the Wall Street Journal called “Becoming a White Underclass,” and I was simply tracking the growth in a non-marital births among white working-class people, and I said to myself, along with Pat Moynihan who said it better and first, that if you have communities in which large numbers of young men are growing to adulthood without a male figure, you asked for and get chaos. And I assume that what had happened in the black community when non-marital births, uh, kept on going up is going to happen in the white community. So in that sense they follow pretty much a predictable trajectory.

In the 1980s, the face of poverty was black and addicted to drugs. Now it’s white and addicted to drugs. Changing the race of the impoverished may have changed Murray’s views on poverty.

We dug into a contradiction Murray held, and found bigotry hiding underneath. This is no coincidence: persistent contradictions in your worldview are fertile ground for bigotry. All the atheists in the crowd know this.

To evade the charge of bigotry, you need to do more than say that you sincerely believe that the Bible is against gay marriage. You need to explain why you take the clobber verses as something important and relevant to today, while the statements like “Let the man with two tunics share with him who has none,” aren’t.

There are arguments against taking the missional verses and the poverty verses and trying to apply them today. Of course, many of those arguments could be turned against the clobber verses as well. Can it be shown that there is a consistent method of interpretation that leads to the clobber verses being taken literally while the charity verses are basically ignored?

Or think of it this way: would the hypothetical “man from Mars” who was innocent of Christianity and the culture wars really look at the Bible and come away saying, “Wow, we’ve really got to do something to stop gay marriage”?

Think about how this looks from the outside. The parts of the Bible that you believe apply today are the ones that require other people to make sacrifices. The parts of the Bible that would require YOU to make big sacrifices are not considered relevant. Look at it this way, and you’ll see why “bigot” is one of the nicer things you could be called.

Contradictions let you pick and choose which rules you follow, so that you benefit while others come to harm. They also provide a great shield against criticism.

[59:06] MURRAY: Dick and I, our- our crime in the book was to have a single, solitary paragraph that said – after talking about the patterns that I’m about to describe – “if we’ve convinced you that either the environmental or the genetic explanation has won out to the exclusion of the other, we haven’t done a good enough job presenting the evidence for one side of the other. It seems to us highly likely that both genes and the environment have something to do with racial differences.” And we went no farther than that. There is an asymmetry between saying “probably genes have some involvement” and the assertion that it’s entirely environmental and that’s what the, that’s the assertion that is being made. If you’re going to be upset at “The Bell Curve,” you are obligated to defend the proposition that the black/white difference in IQ scores is 100% environmental, and that’s a very tough measure.

Hit Murray with the charge that he’s promoting genetic determinism, and he’ll point to that paragraph in “The Bell Curve” and say you’re straw-personing his views. Argue that intelligence is primarily driven by environment, and he’ll either point to the hundreds of pages and dozens of charts that he says demonstrate a genetic link much stronger than environment, or he’ll equivocate between “primarily driven by environment” and “100% environmental.” Nor is this an isolated incident. Remember his bit about “large numbers of young men are growing to adulthood without a male figure, you asked for and get chaos?”

[40:23] MURRAY: … the thing about the non-shared environment is it’s not susceptible to systematic manipulation. It’s … idiosyncratic. It’s non-systematic … there are no obvious ways that you can deal with the non-shared environment, in the way that you could say “Oh, we can improve the schools, we can teach better parenting practices, we can provide more money for – …” [those] all fall into the category of manipulating the shared environment and when it comes to personality, as you just indicated, it’s 50/50 [for genes and environment] but almost all that 50 is non-shared.

[41:02] HARRIS: Yeah, which seems to leave parents impressively off the hook for … how their kids turn out.

[41:10] MURRAY: Although it is true that parents – and I’m a father of four – uh, we resist that. … and with the non-shared environment and the small role left for parenting, I will say it flat out: I read [the research of Judith Rich Harris] with *the* most skeptical possible eye. I was looking for holes in it, assiduously. …

[41:57] MURRAY: … the book was very sound, it was very rigorously done, and … at this point I don’t know of anybody who’s familiar with literature, who thinks there’s that much of a role left of the kind of parents thought they had in shaping their children.

[42:15] HARRIS: Right, well I’m not gonna stop trying, I think, it’s [a] very hard illusion to cut through… as I read Harry Potter tonight to my eldest daughter.

[42:23] MURRAY: … You know that, but I think that it’s good to reflect on that: reading Harry Potter to your eldest daughter is a good in itself.

[42:32] HARRIS: Yeah.

[42:35] MURRAY: And the fact that she behaves differently 20 years from now is not the point.

[42:38] HARRIS: No, exactly, and it is an intrinsic good, and it’s for my own pleasure that I do it largely at this point.

Murray also thinks that nothing a parent does will change their child’s development. His ability to flip between both sides of a contradiction is Olympic.

[43:12] HARRIS: That’s the one thing that it just occurred to me people should also understand is that, in addition to the fact that IQ doesn’t explain everything about a person’s success in life and … their intellectual abilities, the fact that a trait is genetically transmitted in individuals does not mean that all the differences between groups, or really even any of the differences between groups in that trait, are also genetic in origin, right?

[43:41] MURRAY: Critically important, critically important point.

[43:42] HARRIS: Yeah, so the jury can still be out on this topic, and we’ll talk about that, but to give a clear example: so if you have a population of people that is being systematically malnourished – now they might have genes to be as tall as the Dutch, but they won’t be because they’re not getting enough nourishment. And, in the case that they don’t become as tall as the Dutch, it will be entirely due to their environment and yet we know that height is among the most heritable things we’ve got – it’s also like 60 to 80 percent predicted by a person’s genes.

[44:15] MURRAY: Right. Uh, the comparison we use in the book … is that, you take a handful of genetically identical seed-corn, and divide it into two parts, and plant one of those parts in Iowa and the other part in the Mojave Desert, you’re going to get way different results. Has nothing whatsoever to do with the genetic content of the corn.

It’s no wonder that when Harris asks him whether anything discovered since publication has changed his claims, his response is no. As he inhabits both sides of a contradiction, nothing could falsify his views.

Contradictions are also a way to change your views without acknowledging you did. Consider this small bit of trivia Murray throws out (emphasis mine):

[1:40:53] HARRIS: If my life depended on it, I could not find another person [besides Christopher Hitchens] who smoked cigarettes in my contact list, you know, and let’s say there’s a thousand people in there, right?

[1:41:04] MURRAY:  Hmm mm-hmm.

[1:41:05] HARRIS: That’s an amazing fact in a society where something like 30% of people smoke cigarettes.

[1:41:12] MURRAY: That’s a wonderful illustration of how isolated [we are within our classes]… because, in my case, I do know people who smoke cigarettes but that’s only because I go play poker at Charleston West Virginia casino and there, about 30% of the guys I played poker with smoked. But that’s ok. In terms of [the] American Enterprise Institute, where I work, [I] don’t know anybody who smokes there, I don’t… social circles, no.

If you have a long memory, that small tidbit packs quite a punch.

Let’s begin by referring to the basic objectives of the program:

  1. To show that the basic social cost changes are bad economics.
  2. To illustrate how smoking benefits society and its members.
  3. To show that anti-smoking groups, who are promoting the social cost issue, have self-serving ends, and are not representative of the general society.

In short, we took as our goals a defense which would undermine the concepts of the social cost issue, and an offense which would stress the social benefits of smoking and freedom to smoke.

In 1980, the American Enterprise Institute was preparing reports and training videos arguing that smoking is a net benefit to society. Among other things, worker productivity was better when people took regular smoke breaks, and restrictions on cigarettes harmed personal liberty.

In 2017, the proportion of smokers at the American Enterprise Institute is far lower than in the general population. If you value being free of contradictions, a reversal like this should prompt some tough introspection about who you allow into your think-tank. If you don’t, no introspection is necessary. There’s no need to criticize yourself, no need to submit to annoying audits; you can just carry on being awesome.

Like Sam Harris. Emphasis mine.

[1:39] HARRIS: Human intelligence itself is a taboo topic; people don’t want to hear that intelligence is a real thing, and that some people have more of it than others. They don’t want to hear that IQ tests really measure it. They don’t want to hear that differences in IQ matter because they’re highly predictive of differential success in life, and not just for things like educational attainment and wealth, but for things like out-of-wedlock birth and mortality. People don’t want to hear that a person’s intelligence is in large measure due to his or her genes, and there seems to be very little we can do environmentally to increase a person’s intelligence, even in childhood. It’s not that the environment doesn’t matter, but genes appear to be 50 to 80 percent of the story. People don’t want to hear this, and they certainly don’t want to hear that average IQ differs across races and ethnic groups. Now, for better or worse, these are all facts.

[5:32] HARRIS: Whatever the difference in average IQ is across groups, you know nothing about a person’s intelligence on the basis of his or her skin color. That is just a fact. There is much more variance among individuals in any racial group than there is between groups.

If the mean IQs of people grouped by skin colour are different, then you must know something about a person’s intelligence by knowing their skin colour. Head over to R Psychologist’s illustration of Cohen’s d and keep a close eye on the “probability of superiority.” For instance, when d = 0.1, the fine print tells me “there is a 53 % chance that a person picked at random from the treatment group will have a higher score than a person picked at random from the control group (probability of superiority),” which means that if I encounter someone from group A I can state they have a higher intelligence than someone from group B with odds slightly better than chance. There’s only one situation where knowing someone’s skin colour tells me nothing about their intelligence, and that’s when the mean IQs of both groups are equal.
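That 53% figure is easy to check directly. Here is a minimal sketch (my own, not code from the linked illustration) that computes the probability of superiority from Cohen’s d, assuming two normal distributions with equal variance:

```python
import math

def probability_of_superiority(d: float) -> float:
    """P(X > Y) for X drawn from the higher-mean group and Y from the
    other, assuming both groups are normal with equal variance and
    their means differ by d standard deviations (Cohen's d)."""
    # X - Y ~ Normal(d, sqrt(2)), so P(X > Y) = Phi(d / sqrt(2)).
    # With Phi(z) = 0.5 * (1 + erf(z / sqrt(2))), this reduces to erf(d / 2).
    return 0.5 * (1 + math.erf(d / 2))

# d = 0.1 reproduces the ~53% figure from the illustration
print(round(probability_of_superiority(0.1), 3))  # ~0.528
# d = 0 is the only case where group membership carries no information
print(probability_of_superiority(0.0))            # 0.5
```

With d = 0 the probability collapses to exactly 50%: only when the group means are equal does knowing someone’s group tell you nothing.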

You could counter “so what, that 53% chance is so small as to be no different than 50/50,” and I’d agree with you. But if Murray demonstrated group differences of the same magnitude, his conclusion should not have been “IQ differs between races,” it should have been “IQ is effectively equal across racial lines.” By taking this counter, you’ve abandoned the ability to say mean IQ varies across groups. “Average IQ differs across races” and “skin colour conveys information about IQ” are equivalent statements, so Sam Harris is contradicting himself.

Contradictions are a chronic problem for him. It should come as no surprise that Sam Harris is always right, and that entire websites are wrong.

A few of the subjects I explore in my work have inspired an unusual amount of controversy. Some of this results from real differences of opinion or honest confusion, but much of it is due to the fact that certain of my detractors deliberately misrepresent my views. The purpose of this article is to address the most consequential of these distortions. […]

Whenever I respond to unscrupulous attacks on my work, I inevitably hear from hundreds of smart, supportive readers who say that I needn’t have bothered. In fact, many write to say that any response is counterproductive, because it only draws more attention to the original attack and sullies me by association. These readers think that I should be above caring about, or even noticing, treatment of this kind. Perhaps. I actually do take this line, sometimes for months or years, if for no other reason than that it allows me to get on with more interesting work. But there are now whole websites—Salon, The Guardian, Alternet, etc.—that seem to have made it a policy to maliciously distort my views.

Disagreement is due to misunderstanding, not genuine error. Ergo, he cannot be a bigot.

This, then, is a strong second reason to examine yourself for contradictions. Don’t just do it to stay in line with reality; do it to help rid yourself of bigotry against your fellow person.

Belling Sam Harris

I wrote off Sam Harris long ago, and currently ignore him as best as I can. Still, this seems worth the exception.

In this episode of the Waking Up podcast, Sam Harris speaks with Charles Murray about the controversy over his book The Bell Curve, the validity and significance of IQ as a measure of intelligence, the problem of social stratification, the rise of Trump, universal basic income, and other topics.

For those unaware, Charles Murray co-wrote The Bell Curve, which carried this explosive claim among others:

There is a mean difference in black and white scores on mental tests, historically about one standard deviation in magnitude on IQ tests (IQ tests are normed so that the mean is 100 points and the standard deviation is 15). This difference is not the result of test bias, but reflects differences in cognitive functioning. The predictive validity of IQ scores for educational and socioeconomic outcomes is about the same for blacks and whites.

Alas, it was written with dubious sources, based on the notion that intelligence is genetically determined (I touch on the general case here), supported by dubious organizations, and even how it was published was designed to frustrate critics.

The Bell Curve was not circulated in galleys before publication. The effect was, first, to increase the allure of the book (There must be something really hot in there!), and second, to ensure that no one inclined to be skeptical would be able to weigh in at the moment of publication. The people who had galley proofs were handpicked by Murray and his publisher. The ordinary routine of neutral reviewers having a month or two to go over the book with care did not occur. Another handpicked group was flown to Washington at the expense of the American Enterprise Institute and given a weekend-long personal briefing on the book’s contents by Murray himself (Herrnstein had died very recently), just before publication. The result was what you’d expect: The first wave of publicity was either credulous or angry, but short on evidence, because nobody had had time to digest and evaluate the book carefully. [..]

The debate on publication day was conducted in the mass media by people with no independent ability to assess the book. Over the next few months, intellectuals took some pretty good shots at it in smaller publications like the New Republic and the New York Review of Books. It wasn’t until late 1995 that the most damaging criticism of The Bell Curve began to appear, in tiny academic journals.

Entire books have been written debunking The Bell Curve.

Richard Herrnstein and Charles Murray argued that intelligence largely determined how well people did in life. The rich were rich mostly because they were smart, the poor were poor mostly because they were dumb, and middle Americans were middling mostly because they were of middling intelligence. This had long been so but was becoming even more so as new and inescapable economic forces such as global trade and technological development made intelligence more important than ever before. In a more open economy, people rose or sank to the levels largely fixed by their intelligence. Moreover, because intelligence is essentially innate, this expanding inequality cannot be stopped. It might be slowed by government meddling, but only by also doing injustice to the talented and damaging the national economy. Inequality is in these ways “natural,” inevitable, and probably desirable. [..]

Yet decades of social science research, and further research we will present here, dispute the claim that inequality is natural and increasing inequality is fated. Individual intelligence does not satisfactorily explain who ends up in which class; nor does it explain why people in different classes have such disparate standards of living.

So why was Sam Harris resurrecting this dead horse?

[9:35] HARRIS: The purpose of the podcast was to set the record straight, because I find the dishonesty and hypocrisy and moral cowardice of Murray’s critics shocking, and the fact that I was taken in by this defamation of him and effectively became part of a silent mob that was just watching what amounted to a modern witch-burning, that was intolerable to me. So it is with real pleasure (and some trepidation) that I bring you a very controversial conversation, on points about which there is virtually no scientific controversy. […]

[11:30] HARRIS: I’ve- since, in the intervening years, ventured into my own controversial areas as a speaker and writer and experienced many hysterical attacks against me in my work, and so I started thinking about your case a little – again without ever having read you – and I began to suspect that you were one of the canaries in the coal mine that I never recognized as such, and seeing your recent treatment at Middlebury, which many of our listeners will have heard about, where you were prevented from speaking and and your host was was physically attacked – I now believe that you are perhaps the intellectual who was treated most unfairly in my lifetime, and it’s, it’s just an amazing thing to be so slow to realize that. And at first I’d just like to apologize to you for having been so lazy and having been taken in to the degree that I was by the rumors and lies that have surrounded your work for the last 20 years, and so I just want to end- I want to thank you doubly for coming on the podcast to talk about these things.

Sigh.

Tell me, Robert Plomin, is intelligence hereditary?

Genes make a substantial difference, but they are not the whole story. They account for about half of all differences in intelligence among people, so half is not caused by genetic differences, which provides strong support for the importance of environmental factors. This estimate of 50 percent reflects the results of twin, adoption and DNA studies.

It’s déjà vu all over again; there are good reasons to think twin studies overstate heritability, and adoption studies are not as environmentally pure as they’re thought to be. As for DNA studies,

The literature on candidate gene associations is full of reports that have not stood up to rigorous replication. This is the case both for straightforward main effects and for candidate gene-by-environment interactions (Duncan and Keller 2011). As a result, the psychiatric and behavior genetics literature has become confusing and it now seems likely that many of the published findings of the last decade are wrong or misleading and have not contributed to real advances in knowledge. The reasons for this are complex, but include the likelihood that effect sizes of individual polymorphisms are small, that studies have therefore been underpowered, and that multiple hypotheses and methods of analysis have been explored; these conditions will result in an unacceptably high proportion of false findings (Ioannidis 2005).[1]

Ah yes, the replication crisis. I know it well. Genetic studies can easily have millions of data points yet draw from fewer than a few hundred volunteers, making them particularly ripe for false correlations. But according to Angry White Men, Sam Harris was ignorant of all of the above.

Harris didn’t bat an eye when Murray accused critics of race realism — or human biodiversity, or whatever the alt-right calls its racist junk science nowadays — of elitism and compared them to modern-day flat Earthers. As Murray put it: “But at this point, Sam, it’s almost as if we are in the opposite position of conventional wisdom versus elite wisdom that we were, say, when Columbus was gonna sail to America. … It’s the elites who are under the impression that, oh, IQ tests only measure what IQ tests measure, and nobody really is able to define intelligence, and this and that, they’re culturally biased, on and on and on and on. And all of these things are the equivalent of saying the Earth is flat.”

By now, I’m convinced he doesn’t want to hear the counter-arguments. He’d rather pretend to be rational and scientific, because then he can remain bigoted without fear of challenge.


[1] Hewitt, John K. “Editorial Policy on Candidate Gene Association and Candidate Gene-by-Environment Interaction Studies of Complex Traits.” Behavior Genetics 42, no. 1 (January 1, 2012): 1–2. doi:10.1007/s10519-011-9504-z.

The Sinmantyx Posts

It started off somewhere around here.

Richard Dawkins: you’re wrong. Deeply, profoundly, fundamentally wrong. Your understanding of feminism is flawed and misinformed, and further, you keep returning to the same poisonous wells of misinformation. It’s like watching creationists try to rebut evolution by citing Kent Hovind; do you not understand that that is not a trustworthy source? It’s a form of motivated reasoning, in which you keep returning to those who provide the comfortable reassurances that your biases are actually correct, rather than challenging yourself with new perspectives.

Just for your information, Christina Hoff Sommers is an anti-feminist. She’s spent her entire career inventing false distinctions and spinning fairy tales about feminism.

In the span of a month, big names in the atheo-skeptic community like Dawkins, Sam Harris, and DJ Grothe lined up to endorse Christina Hoff Sommers as a feminist. At about the same time, Ayaan Hirsi Ali declared “We must reclaim and retake feminism from our fellow idiotic women,” and the same people cheered her on. Acquaintances of mine who should have known better defended Sommers and Ali, and I found myself arguing against brick walls. Enraged that I was surrounded by the blind, I did what I always do in these situations.

I researched. I wrote.

The results were modest and never widely circulated, but it caught the eye of M.A. Melby. She offered me a guest post at her blog, and I promised to append more to what I had written. And append I did.

After that was said and done, Melby left me a set of keys and said I could get comfortable. I was officially a co-blogger. I started pumping out blog posts, and never really looked back. Well, almost; out of all that I wrote over at Sinmantyx, that first Christina Hoff Sommers piece has consistently been the most popular.

I’ll do the same thing here as with my Sinmantyx statistics posts: keep the originals intact and in place, and create an archive over here.

Fake Journals, Too

Myers beat me to the punch with his post on fake peer reviewers, so I’ll zag and mention the other side of the fence.

The rapid rise of predatory journals—publications taking large fees without providing robust editorial or publishing services—has created what some have called an age of academic racketeering. Predatory journals recruit articles through aggressive marketing and spam emails, promising quick review and open access publication for a price. There is little if any quality control and virtually no transparency about processes and fees. Their motive is financial gain, and they are corrupting the communication of science. Their main victims are institutions and researchers in low and middle income countries, and the time has come to act rather than simply to decry them.

Clark, Jocalyn, and Richard Smith. “Firm action needed on predatory journals.” BMJ 350.jan16 1 (2015): h210-h210.

How prevalent are these journals?

Over the studied period, predatory journals have rapidly increased their publication volumes from 53,000 in 2010 to an estimated 420,000 articles in 2014, published by around 8,000 active journals. Early on, publishers with more than 100 journals dominated the market, but since 2012 publishers in the 10–99 journal size category have captured the largest market share. The regional distribution of both the publisher’s country and authorship is highly skewed, in particular Asia and Africa contributed three quarters of authors. Authors paid an average article processing charge of 178 USD per article for articles typically published within 2 to 3 months of submission.

Shen, Cenyu, and Bo-Christer Björk. “‘Predatory’ open Access: A Longitudinal Study of Article Volumes and Market Characteristics.” BMC Medicine 13, no. 1 (2015): 230.

The rise of predatory journals is an unfortunate collision of the open-access model with the pressure to publish; young researchers desperate to get something on their CV are drawn to them, or are simply unaware of their predatory nature.

One of our findings is that authors who publish in so called “predatory” journals have little to no history of previous publications and citations. This may indicate that they are young researchers, which is indeed supported by the author information. [..]

The demands stimulate a multiplying of new OA journals, particularly in developing countries. A low submission acceptance standard provides an opportunity for non-elite members of the scholarly community to survive in the “publish or perish” culture found in both the West and many developing countries. Most of the “predatory” journals initiated and operated in the developing countries charge a fee affordable to local submissions, enabling researchers to publish quickly. Publishing in such journals is much less costly than conducting expensive studies and attempting to publish without fees in a prestigious foreign non-OA journal. This is by no means only an open access problem, but is a prevalent dilemma in the current scholarly communication system.

Xia, Jingfeng, Jennifer L. Harmon, Kevin G. Connolly, Ryan M. Donnelly, Mary R. Anderson, and Heather A. Howard. “Who Publishes in ‘predatory’ Journals?” Journal of the Association for Information Science and Technology 66, no. 7 (2015): 1406–1417.

You might think there’s an easy solution to this: do extensive research on any journal interested in your paper, and be suspicious of any journal that approaches you or isn’t up-front about costs. You’d be wrong, though.

During the last 2 years, cyber criminals have started to imitate the names of reputable journals that publish only printed versions of articles. [..]

Unfortunately, such fake websites can be created by almost anyone who has even minimal knowledge of how to design a website can do so by using open-source Content Management Systems (CMSs). However, we believe that the academic cyber criminals who are responsible for the propagation of hijacked journals are completely familiar with the academic rules of upgrading lecturers, qualifying Ph.D. candidates, and applying for admission to postgraduate programs or any professorship positions. These criminals may be ghost writers or they may be the experts who used to help scholars write and publish their research work before they decided to become full-scale “ghost publishers”. Whoever they are, it is apparent that they have the knowledge required to design a website and to hide their identities on the Internet. In addition, they definitely are familiar with authors’ behaviors, and they know that many of authors are in urgent need of publishing a couple of “ISI papers” (i.e. articles published in journals that are indexed by Thomson Reuters/Institute for Scientific Information-ISI) within a limited time. Therefore, the new version of academic cyber criminals knows what to do and how to organize a completely fake conference or hijack a printed journal.

Jalalian, Mehrdad, and Hamidreza Mahboobi. “Hijacked Journals and Predatory Publishers: Is There a Need to Re-Think How to Assess the Quality of Academic Research?” Walailak Journal of Science and Technology (WJST) 11, no. 5 (2014): 389–394.

These “hijacked” journals are good enough to fool experienced researchers.

One of our students submitted a manuscript to the International Journal of Philosophy and Theology. This is a prestigious peer-reviewed journal, founded in 1938 by Jesuit Academics at the University of Louvain in Belgium. Initially a Dutch language journal, Bijdragen, it was internationalized in 2013 and is now published by Taylor and Francis. Within a few weeks our student received a message from the journal that his contribution had been reviewed and accepted: the topic was relevant, the methodology sound, and the relevant literature engaged. His manuscript could be published rather quickly. As soon as the publication had materialized, the student received an invoice of $200 to be paid to a bank account in Bangladesh. [..]

Our first student did not know — and neither did we — that there are in fact two journals with the same name International Journal of Philosophy and Theology. The [fake] one refers to a fancy website with an impressive name: American Research Institute for Policy Development. This organization publishes 52 journals in areas such as Arts, Humanities and Social Science, as well as Science and Technology. The journals have fancy names, often identifying an international scope.

Have, Henk ten, and Bert Gordijn. “Publication Ethics: Science versus Commerce.” Medicine, Health Care and Philosophy, April 11, 2017. doi:10.1007/s11019-017-9774-1.

The fact that I’ve made this post just by quoting scientific papers should tell you there’s extensive literature on faux literature, from people much more knowledgeable than I. Unfortunately, that also means none of it offers easy solutions or quick fixes. At the root of it all is the “publish or perish” model of science, and unfortunately that’s firmly embedded in modern scientific practice.

We’re overdue for a complete overhaul of how science is done.

Proof from Design, or the Teleological Proof (2)

A Hat Tip to Mandelbrot

Grab a sheet of paper. Near the top, write a number, any one you want. In your head or with a calculator, multiply that number by itself, then add the original number you had before multiplying, and write the result below that first number. Now take the number at the bottom of the column, multiply it by itself, add the number at the top of the column, and write the result at the bottom of the column. Continue that last step until you’re bored, then start a new column with a new first number and repeat it all again. If you won’t or can’t play along, I’ll share what’s on my paper: [Read more…]
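If you’d rather let a computer do the squaring, the pen-and-paper procedure above boils down to one rule: each new entry is the previous entry squared, plus the number at the top of the column. A minimal sketch (the function name and starting values are my own illustration):

```python
def iterate(start, steps):
    """The column of numbers from the exercise: each new entry is
    the previous entry squared, plus the number at the top."""
    column = [start]
    for _ in range(steps):
        column.append(column[-1] ** 2 + start)
    return column

print(iterate(1.0, 3))   # 1.0 -> 2.0 -> 5.0 -> 26.0: racing off to infinity
print(iterate(0.2, 3))   # hovers near 0.26: some starting numbers stay bounded
```

Whether a starting number stays bounded or explodes is exactly the question behind the picture on my paper.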

Saving the World, One Silly Dance at a Time

I think I get Bill Nye’s plan.

Currently he’s caping around Netflix, promising to “save the world.” One of the two episodes I watched was on nutrition, and it was unceasingly awful. Over and over again, he hammered home the point that fad diets were useless: problem is, he didn’t explain why. He didn’t bring up the shady practices and lousy science, he didn’t give a lecture on human physiology; he did burn food with a blowtorch, interview a cave person, and host a deliberately awkward school play about nutrition. His “expert panel” consisted of a comedian, a personal trainer, and a psychologist. As someone who prefers the process and facts, I was left deeply unsatisfied. How exactly was this saving the world?

In this paper, we report the results of two rounds of experiments investigating the extent to which corrective information embedded in realistic news reports succeeds in reducing prominent misperceptions about contemporary politics. In each of the four experiments, which were conducted in fall 2005 and spring 2006, ideological subgroups failed to update their beliefs when presented with corrective information that runs counter to their predispositions. Indeed, in several cases, we find that corrections actually strengthened misperceptions among the most strongly committed subjects.[1]

Enter the Backfire Effect. I’m not yet convinced it exists, thanks to the current replication crisis, but I do know it is widely believed in the skeptic circles Nye is familiar with. Let’s say it does exist; how then do we dispel myths?

A common explanation for the Backfire Effect is competing arguments.[2] The idea is that when someone hears a refutation of a myth they hold dear, they work hard to swat it down. In doing so, they bring up their prior knowledge and remind themselves of its strength. Weighing the (supposedly) defused refutation against the (supposedly) iron-clad evidence for the myth, people chalk up more evidence in favor of the myth. In hindsight, they’ll remember the evidence in favor of the myth rather than the evidence opposed.

If true, then one approach is to avoid bringing evidence against the myth, as that will cause people to work less to refute it and thus dredge up less counter-argument. Never bring up evidence in favor of it either, as you’ll remind people it exists. In fact, why bring up evidence at all when you can use peer pressure and mockery to exploit our social tendencies? Another two approaches are repetition and entertainment; make sure people remember your talking points, instead of the evidence against them.

Bill Nye did all of that.

He’s not trying to engage people like me, who already know fad diets are bogus; he’s trying to convince the people who think fad diets are legit. By tackling the harder problem, carefully refuting the myths people hold, he is indeed trying to save the world. This isn’t science or the discovery of novel truths; it’s the spread of those truths to the masses and the battle against misinformation.

Alas, some people didn’t get the memo. Like Jerry Coyne.

It’s no secret that I am not a big fan of Bill Nye, regarding him as a buffoon who will engage in any shenanigans that keep him in the public eye and help him retain the fame he desires—fame accrued as “The Science Guy”.

Spoken like someone who’s never read Bill Nye’s CV. I’m sure the current CEO of The Planetary Society, who’s designed sundials for Mars landers and took Obama to the Florida Everglades to discuss climate change and education, is consumed by a need for fame.

Well, Nye has a new show humbly called “Bill Nye Saves the World”, which apparently still has the goal of promoting science. Here’s a new video from the show. Featuring comedian and actor Rachel Bloom singing “My vagina has its own voice,” it’s an arrant travesty.

Or a memorable way to drive home the point that how you have sex doesn’t matter, nor what body parts you use or how they’re shaped. One that will be shared far and wide by people who argue the contrary, who seem genuinely frightened of what Nye is saying.

Now this may be social justice stuff, but it ain’t science …

Social justice is the promotion of a fair and just society. It is universal health care, progressive taxation, international trade policy, and discounted tuition. It is eliminating discrimination based on sex or race. If you consider mass misinformation as a social injustice, then yes, educating people on the best science is a form of social justice, but that’s a more tenuous form than guaranteed minimum income programs.

And yes, studying sex is science. Coyne himself agrees on this.

I think the size dimorphism of humans is more likely a result of male “battling” for dominance and access to females than simply female preference for large males, though of course both factors can be involved. […]

I also adduced four other bits of evidence predicted by the sexual selection hypothesis, which you can see at my earlier post. Those predictions were made before the data were collected, and they were confirmed.

That’s got all the basic trappings of science: hypotheses, evidence, and a methodology for combining the two. Next, we have to establish if the scientific consensus is that sex is a spectrum instead of a binary.

The idea of two sexes is simplistic. Biologists now think there is a wider spectrum than that. […]

Since the 1990s, researchers have identified more than 25 genes involved in DSDs [differences of sex development], and next-generation DNA sequencing in the past few years has uncovered a wide range of variations in these genes that have mild effects on individuals, rather than causing DSDs. “Biologically, it’s a spectrum,” says [Eric] Vilain, [a clinician and the director of the Center for Gender-Based Biology at the University of California, Los Angeles].[3]

The influence of the XX/XY model of chromosomal sex has been profound over the last century, but it’s founded on faulty premises and responsible for encouraging reductive, essentialist thinking. While the scientific world has moved on, its popular appeal remains.[4]

Sex determination exists on a spectrum, with genitals, chromosomes, gonads, and hormones all playing a role. Most fit into the male or female category, but about one in a hundred may fall in between.[5]

Easy peasy. Even Adam Savage is aware that science promotes a sex spectrum. But Coyne offers up a weak counter-argument against the scientific consensus.

… not even if you construe it as promoting a “spectrum of sexuality,” which is misleading because most people bunch at either end of the “spectrum.”

Riiiiiiight, so we should ditch the idea of a spectrum because people don’t fall along it in a uniform fashion. Does this mean I can declare all prime numbers to be odd? Most of them are, after all. Or maybe we should dispense with the visual spectrum, since our eyes tend to lump colours into discrete categories?

As always, I wonder what Coyne thinks of people who don’t fall into the binary. Are they “defects” in need of “correction?” Should we trim the clitoris of a newborn baby if it is longer than we feel comfortable with? Should a baby with a micropenis have it lengthened? I know Coyne is vocal over the mutilation of genitals for religious reasons, so I’m curious if he’s fine with “correcting” them for social ones.

On April 18, 2006, when M was 16 months old, Dr. Ian Aaronson operated on him at the Medical University of South Carolina (MUSC). He reduced M’s penis to look more like a clitoris, cut up his scrotum to form labia, and removed his internal testicle tissue. Two other specialists also treated M: Dr. Yaw Appiagyei-Dankah, who worked at MUSC, and Dr. James Amrhein from Greenville Hospital.

In a letter to M’s pediatrician, Dr. Amrhein wrote that initially, M’s condition was “confusing.” He had been identified as a boy at birth because of his “rather large” penis. Routine blood tests showed his testosterone levels were extremely elevated. However, he had a small vaginal opening beneath his penis and both ovarian and testicular tissue. “Surgical correction” was necessary, the doctors noted in medical records. [6]

Let’s do the math: roughly 1 in 2,000 children are born with an ambiguous sex. Surgical “correction” has been a common response since the 1950s. Between 1960 and 2009, about 175 million Americans were born. If all those figures are accurate, roughly 87,500 Americans had their genitals “corrected” by doctors to fit into the binary.
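The back-of-the-envelope figure works out like so (every input is one of the rough estimates above, not precise data):

```python
# Rough estimates from the text, not precise data.
ambiguous_rate = 1 / 2000          # children born with ambiguous sex
births_1960_2009 = 175_000_000     # Americans born between 1960 and 2009

affected = births_1960_2009 * ambiguous_rate
print(round(affected))   # 87500
```

Treat the result as an order-of-magnitude estimate; both inputs are fuzzy, as the next paragraph explains.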

Now, we have no way of getting accurate numbers here. No-one tracks the number of intersex children born (how can we, when we can’t even define “intersex?”), doctors rarely if ever publicly discussed the practice (so as to preserve the social taboo), and they usually told parents to never discuss these surgeries with their kids (and sometimes never informed the parents at all). But even with the fuzzy math it’s obvious that our society’s binary view of sex carries a terrible cost.

Try telling that to Coyne, though.

I’m not sure what this is doing on a science show. It’s not even funny, […]

Defend this travesty if you want, but I’ll never admit it promotes anything but ideology.

The irony is that Coyne is fine with the science of sex within the context of Evolutionary Psychology, he’s fine with social justice when it comes to separation of church and state, and he’s fine with eliminating unnecessary surgeries prompted by religion. Shift the context slightly and suddenly these topics are “ideologies” that he can safely ignore, even if the variations are well grounded in science and of benefit to everyone.

Lighten up, Coyne, and try talking to a vagina. You might learn something from the experience.


[1] Nyhan, Brendan, and Jason Reifler. “When Corrections Fail: The Persistence of Political Misperceptions.” Political Behavior 32, no. 2 (2010): 303–330.
[2] Trevors, Gregory J., et al. “Identity and Epistemic Emotions during Knowledge Revision: A Potential Account for the Backfire Effect.” Discourse Processes 53, no. 5–6 (2016): 339–370.
[3] Ainsworth, Claire. “Sex Redefined.” Nature 518, no. 7539 (February 18, 2015): 288–91. doi:10.1038/518288a.
[4] Steadman, Ian. “Sex Isn’t Chromosomes: The Story of a Century of Misconceptions about X & Y.” New Statesman, February 23, 2015.
[5] “How Science Helps Us Understand Gender Identity.” National Geographic, January 2017. http://www.nationalgeographic.com/magazine/2017/01/how-science-helps-us-understand-gender-identity/

Fame and Citations

Remember that “Rock Stars of Science” ad campaign? I thought it was dreadful. Science is supposed to be the pursuit of knowledge through experimentation and rigorous methodology. When you focus on the personalities behind the science, you push all that to the side and turn it into a purely creative task, mysterious and luck-dependent. You start to get situations like Lord Kelvin’s opinion of the age of the Earth.

The result of Kelvin’s assumptions about the deep interior of the Earth, without any sound evidence, was unfortunately quite significant. Because the timeframe he provided was far too brief to allow for known geological processes to produce the current topographical features of the Earth. Even worse, Kelvin then made significant attacks on the science of geology and it’s practitioners, but most of the geologists in that era were intimidated by Kelvin’s stature within the overall scientific community (Lewis, 2000). Kelvin was regarded as possibly the most well regarded and imposing scientific figure of the day (Lewis, 2000). […]

Physics was regarded as a more mature and noble field than geology (Hallam, 1989), which was still perceived as immature and without the (apparent) certainty provided by the more mathematically-oriented physics and chemistry. Kelvin derived his estimate from quantitative and repeatable measurements, physical principles of the known natural laws of the time, and elegant math (Dalrymple, 2004). That method, combined with his arguments about the uncertainty of geologic data analysis, provided Kelvin with a tremendous amount of swagger over his theory’s potential opponents. He was enthusiastic and persuasive, and was perhaps the leading scientific celebrity of his time, and this made him an exceptionally difficult opponent for Lyell and Darwin (Hallam, 1989); Darwin referred to Kelvin as his “sorest trouble” (Dalrymple, 2004; Lewis, 2000). The end result was that most scientists sought agreement rather than conflict with Kelvin (Lewis, 2000). Archibald Geikie (Hallam, 2009), James Croll, Lyell, and Samuel Haughton all adjusted their theories to make allowances for Kelvin. Additionally, P.G. Tait, T. Mellard Reade, Clarence King, and John Joly (Hallam, 1989) all reached conclusions concordant with Kelvin through their own methods. This is unfortunate and could be concluded as an effect of peer pressure biasing the scientific method, and perhaps a little bit of an inferiority complex on the part of the geologists in comparison with their 19th century physics peers.

“Rock Star science” harms productivity, too; one study found that when a “superstar” in a field dies, the output of their collaborators drops 5-8%. Instead, I prefer a “Wonder of Science” approach where cool facts are mixed with play and experimentation. When everyone has the tools to do science, anyone can pick up where someone else left off and we’re not stuck waiting for a “big name” to come along and save us.

When I entered the field of psychological science, what excited me was that, historically, the field was full of big thinkers—scholars like Sigmund Freud and Carl Rogers in psychotherapy, Edward Tolman and B. F. Skinner in learning, Herbert Simon and more recently Daniel Kahneman in cognition, and Abraham Maslow and David McClelland in personality. They represented psychological science in the large—a kind of “big psychology.” A concern I have developed over the years is that our field is moving toward a kind of psychological science in the small—a kind of “small psychology.” […]

For example, Sigmund Freud has an h index of 265, B. F. Skinner of 98, Herbert Simon of 163, and Daniel Kahneman of 123. Their total citations are prodigious, for example, 450,339 for Freud, 277,573 for Simon, and 254,326 for Kahneman. In today’s scientific climate, it may be challenging to be a “big psychological scientist,” but I believe big thinking pays off in the kind of impact (with accompanying citation statistics) that lasts over generations, not merely over the duration of one’s career or a part of one’s career. In the long run, the big thinkers are the ones who most create a lasting legacy.

That’s Robert J. Sternberg offering his counterpoint. Still,

in comments to us, some psychological scientists, including some from our book, challenged the criteria or the weighting of the criteria, which led us to wonder just how eminence, or performance at any level, should be judged. What is the future of such evaluations of scientific merit?

Don Foss and I then decided—regrettably, Susan Fiske was unavailable to participate at the time—to pursue this universally important issue by creating the present symposium for Perspectives on Psychological Science. We invited several distinguished psychological scientists who have worked on the problem of merit and eminence in psychological science and asked them each if they would write an essay for Perspectives.

The answer was “yes,” and so seven prominent male scientists weighed in on how we should judge the prominence of a scientist. The one woman allowed in, Alice H. Eagly, was graciously allowed to share a by-line with a male author so she could ask “where the women at?”

Yeeeeaaah. I’ll let Katie Corker tell the tale of Perspectives‘ second attempt.

The new call was issued in response to a chorus of nasty women and other dissidents who insisted that their viewpoints hadn’t been represented by the scholars in the original special issue. The new call explicitly invited these “diverse perspectives” to speak up (in 1,500 words or less).

Each of the six of us independently rose to the challenge and submitted comments. None of us were particularly surprised to receive rejections – after all, getting rejected is just about the most ordinary thing that can happen to a practicing researcher. Word started to spread among the rejected, however, and we quickly discovered that many of the themes we had written about were shared across our pieces. That judgments of eminence were biased along predictable socio-demographic lines. That overemphasis on eminence creates perverse incentives. That a focus on communal goals and working in teams was woefully absent from judgments of eminence.

And so all six posted their opinions online, free for anyone to read. Simine Vazire, for instance, argues that

The drive for eminence is inherently at odds with scientific values, and insufficient attention to this problem is partly responsible for the recent crisis of confidence in psychology and other sciences. The replicability crisis has shown that a system without transparency doesn’t work. The lack of transparency in science is a direct consequence of the corrupting influence of eminence-seeking. If journals and societies are primarily motivated by boosting their impact, their most effective strategy will be to publish the sexiest findings by the most famous authors. Humans will always care about eminence. Scientific institutions and gatekeepers should be a bulwark against the corrupting influence of the drive for eminence, and help researchers maintain integrity and uphold scientific values in the face of internal and external pressures to compromise.

Alas, Perspectives on Psychological Science’s mulligan has yet to be published. But it should be obvious that this argument strikes right to the heart of how science is done.

Gimmie that Old-Time Breeding

Full disclosure: I think Evolutionary Psychology is a pseudo-science. This isn’t because the field endorses a flawed methodology (relative to the norm in other sciences), nor because they come to conclusions I’m uncomfortable with. No, the entire field is based on flawed or even false assumptions; it doesn’t matter how good your construction techniques are, if your foundation is a banana cream pie your building won’t be sturdy.

But maybe I’m wrong. Maybe EvoPsych researchers are correct when they say every other branch of social science is founded on falsehoods. So let’s give one of their papers a fair shake.

Ellis, Lee, et al. “The Future of Secularism: a Biologically Informed Theory Supplemented with Cross-Cultural Evidence.” Evolutionary Psychological Science: 1-19. [Read more…]

Proof from Design, or the Teleological Proof (1)

The sun that we circle outputs light across a wide range of wavelengths, but its output peaks in the yellow-green. Our eyes are most sensitive to yellow-green light.

Our bodies cannot create vitamin C; without it, we fall apart in about two months. Fortunately, that vitamin is in some of the food we eat, in enough quantities to save us from a slow, painful death.

We come equipped with a staggeringly complex defence system, that can detect and tag potential invaders for immediate removal. It has many layers, ranging from white blood cells that roam the body to the simple act of raising our body’s temperature, which inhibits some common attackers while boosting the effectiveness of some other immune components.

All three of these are clear evidence of design. But how can such complicated systems arise from simple molecules and proteins? Does this not point to the “guiding hand” of a higher deity?

Cranes and Skyhooks

Like so many proofs, this one dates back to the Greeks. I’m going to pin this one on Socrates; his mouthpieces Plato and Xenophon[124] claim that he believed the way human eyelids protected human eyes was no accident. It was very much designed by a grand designer.

Does it not strike you then that he who made man from the beginning did for some useful end furnish him with his several senses–giving him eyes to behold the visible word, and ears to catch the intonations of sound? Or again, what good would there be in odours if nostrils had not been bestowed upon us? what perception of sweet things and pungent, and of all the pleasures of the palate, had not a tongue been fashioned in us as an interpreter of the same? And besides all this, do you not think this looks like a matter of foresight, this closing of the delicate orbs of sight with eyelids as with folding doors, which, when there is need to use them for any purpose, can be thrown wide open and firmly closed again in sleep? and, that even the winds of heaven may not visit them too roughly, this planting of the eyelashes as a protecting screen? this coping of the region above the eyes with cornice-work of eyebrow so that no drop of sweat fall from the head and injure them? again this readiness of the ear to catch all sounds and yet not to be surcharged? this capacity of the front teeth of all animals to cut and of the “grinders” to receive the food and reduce it to pulp? the position of the mouth again, close to the eyes and nostrils as a portal of ingress for all the creature’s supplies? and lastly, seeing that matter passing out of the body is unpleasant, this hindward direction of the passages, and their removal to a distance from the avenues of sense? I ask you, when you see all these things constructed with such show of foresight can you doubt whether they are products of chance or intelligence?

(“The Memorabilia,” Xenophon, Book I.4, translated by H. G. Dakyns)

And that settled it for about two thousand years. There was no other way to explain biological design, so most people declared religion the winner by default. The most famous example comes to us via William Paley.[125]

In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there; I might possibly answer, that, for any thing I knew to the contrary, it had lain there for ever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer which I had before given, that, for any thing I knew, the watch might have always been there. Yet why should not this answer serve for the watch as well as for the stone? why is it not as admissible in the second case, as in the first? For this reason, and for no other, viz. that, when we come to inspect the watch, we perceive (what we could not discover in the stone) that its several parts are framed and put together for a purpose, e. g. that they are so formed and adjusted as to produce motion, and that motion so regulated as to point out the hour of the day; […] This mechanism being observed (it requires indeed an examination of the instrument, and perhaps some previous knowledge of the subject, to perceive and understand it; but being once, as we have said, observed and understood), the inference, we think, is inevitable, that the watch must have had a maker: that there must have existed, at some time, and at some place or other, an artificer or artificers who formed it for the purpose which we find it actually to answer; who comprehended its construction, and designed its use.

(“Natural Theology, or Evidences of the Existence and Attributes of the Deity collected from the Appearances of Nature,” William Paley. 1802.)

You can probably guess the rest; Paley points to evidence of design in nature, and argues that there must have been a designer for it all, and again we wind up invoking a god.

Right off the bat, we stumble across some fishy logic. Suppose you have no idea how the planets formed around the sun. Does this mean you have to accept my theory as truth, that they were all deposited there by a giant space clam? Of course not. A theory that doesn’t explain the data or is riddled with internal inconsistencies can be safely discarded, even if it’s the only game in town. It’s not enough to say “God did it,” you have to describe how God did it. How did God put most marsupial mammals in Australia, and not elsewhere? How did he keep placentals away, even though they could comfortably live there? How did the long-dead remains of marsupials get to Antarctica, given the hostile climate and huge ocean walling that continent off? If it can’t explain data like that, we’ll be forced to look for an alternative to the God theory instead of blindly accepting it.

The publication of “De Humani Corporis Fabrica,” by Andreas Vesalius, was the first of many hints that there was another challenger out there. Scientists began looking at human beings and other animals in some detail, instead of parroting the claims of long-dead Greeks, and discovered odd bits that didn’t make sense. Why, for instance, do we have a disabled third eyelid?[126] You’d think an intelligent designer would either have given us a functional one, or none at all. Why are some people unable to see colour, and why is it far more common in men than women? The “clean” design of the ancient Greeks was dissolving into a “complex” design, which seemed eager to stir in a few contradictions and poor choices too.

David Hume is the first philosopher I can find who argued against a supernatural designer. In his “Dialogues Concerning Natural Religion,” published in 1779, the character Cleanthes invokes the Design proof. His foil, Philo, slowly begins to dismantle it.

But can a conclusion, with any propriety, be transferred from parts to the whole? Does not the great disproportion bar all comparison and inference? From observing the growth of a hair, can we learn any thing concerning the generation of a man? Would the manner of a leaf’s blowing, even though perfectly known, afford us any instruction concerning the vegetation of a tree?
But, allowing that we were to take the operations of one part of nature upon another, for the foundation of our judgement concerning the origin of the whole, (which never can be admitted,) yet why select so minute, so weak, so bounded a principle, as the reason and design of animals is found to be upon this planet? What peculiar privilege has this little agitation of the brain which we call thought, that we must thus make it the model of the whole universe? Our partiality in our own favour does indeed present it on all occasions; but sound philosophy ought carefully to guard against so natural an illusion.

(Part II)

After pointing out this particular air gap, he invents some fanciful alternate explanations for the design of the world.

But to waive all objections drawn from this topic, I affirm, that there are other parts of the universe (besides the machines of human invention) which bear still a greater resemblance to the fabric of the world, and which, therefore, afford a better conjecture concerning the universal origin of this system. These parts are animals and vegetables. The world plainly resembles more an animal or a vegetable, than it does a watch or a knitting-loom. Its cause, therefore, it is more probable, resembles the cause of the former. The cause of the former is generation or vegetation. The cause, therefore, of the world, we may infer to be something similar or analogous to generation or vegetation. […]
The BRAHMINS assert, that the world arose from an infinite spider, who spun this whole complicated mass from his bowels, and annihilates afterwards the whole or any part of it, by absorbing it again, and resolving it into his own essence. Here is a species of cosmogony, which appears to us ridiculous; because a spider is a little contemptible animal, whose operations we are never likely to take for a model of the whole universe. But still here is a new species of analogy, even in our globe. And were there a planet wholly inhabited by spiders, (which is very possible,) this inference would there appear as natural and irrefragable as that which in our planet ascribes the origin of all things to design and intelligence, as explained by CLEANTHES. Why an orderly system may not be spun from the belly as well as from the brain, it will be difficult for him to give a satisfactory reason.

(Part VII)

But were this world ever so perfect a production, it must still remain uncertain, whether all the excellences of the work can justly be ascribed to the workman. If we survey a ship, what an exalted idea must we form of the ingenuity of the carpenter who framed so complicated, useful, and beautiful a machine? And what surprise must we feel, when we find him a stupid mechanic, who imitated others, and copied an art, which, through a long succession of ages, after multiplied trials, mistakes, corrections, deliberations, and controversies, had been gradually improving? Many worlds might have been botched and bungled, throughout an eternity, ere this system was struck out; much labour lost, many fruitless trials made; and a slow, but continued improvement carried on during infinite ages in the art of world-making. In such subjects, who can determine, where the truth; nay, who can conjecture where the probability lies, amidst a great number of hypotheses which may be proposed, and a still greater which may be imagined?

(Part V)

Alas, Hume didn’t realize he was on to something; at the end of “Dialogues,” his skeptic Philo mysteriously concedes the argument to Cleanthes, despite the latter’s poor replies.

The second serious alternative arrived in 1795, thanks to Erasmus Darwin, [127] and was then fleshed out by Jean-Baptiste Lamarck in 1809’s “Philosophie Zoologique.” Lamarck proposed that any changes made to an animal during its life were passed on to its offspring. The classic example is that a blacksmith’s son will inherit the thick arms of his father. It’s almost entirely wrong, [128] but at least it got biologists thinking. The third edition of “On the Origin of Species,” published half a century later in 1861, lists nineteen other biologists who nearly figured out evolution for themselves, two who did come up with it (Alfred Wallace and Patrick Matthew), and one person who claimed to but probably didn’t.

Ah yes, evolution. The concept is very simple. You start with something that is capable of making copies of itself. These copies must be imperfect, at least some of the time, leading to differences from the original. [129] If those differences, or even just the surrounding environment, impose limits on these things, then the version which copes best with those limits will be able to create more copies of itself than the other variations. Repeat this many, many, many times, and the result is something that seems designed for its environment.

No really, that’s it. The process that spawned the vast diversity of life on this planet, that made creatures as wildly different as bacteria and human beings, can be comfortably laid out in a single paragraph.
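The copy-vary-select loop really is short enough to write down. Here is a sketch in Python, a variant of Richard Dawkins’ well-known “weasel” toy model; the target string stands in for the environment’s limits, and every name and parameter here is illustrative rather than anything from a real genome:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"  # stand-in for the environment's demands
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # How well a variant "copes": count of positions matching the environment.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Imperfect copying: each character has a small chance of being miscopied.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def evolve(offspring=100, seed=None):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
    generations = 0
    while parent != TARGET:
        # Each generation: many imperfect copies are made, and only the
        # best-coping variant survives to reproduce.
        copies = [mutate(parent) for _ in range(offspring)]
        parent = max(copies + [parent], key=fitness)
        generations += 1
    return generations
```

Cumulative selection is what makes this fast: a purely random 28-character string almost never matches, but keeping the best copy each round homes in on the target in a few hundred generations.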

It’s incredible, in that it strains credibility. How can a simple set of rules lead to such complicated results? I think I can answer that by using an even simpler example.


[124]  Socrates spent most of his life wandering the streets, asking people annoying questions, instead of writing things down. Incredibly, this sort of behaviour kept him fed and earned him students and admirers. It’s probably for the best that he detested writing; spreading that sort of knowledge around would crash any economy.

[125]  Paley may not have been the first to make this argument, actually. Bernard le Bovier de Fontenelle may have beaten him to the punch in 1686; refer to “The Story of Civilization: The age of Louis XIV, 1648-1715” by Will and Ariel Durant for the details.

[126]  Ever wonder about that little pinkish thing in one corner of your eye? That’s the remains of it. The remains might still have a use, by helping to clear out grime from your eye, but it’s a shadow of what it once was.

[127]  Every writer who mentions Charles Darwin must point out that his grandfather Erasmus nearly scooped his discovery of evolution, or pay a fine and do 42 hours of community service.

[128]  It turns out that genes can be overridden by the environment or organism, though this doesn’t directly alter the genes themselves. These changes usually don’t pass to the next generation, but there are exceptions.

[129]  In biology, errors are very rare, only matter in the few cells that are devoted to reproduction, and have a nasty habit of killing the organism. That last part means the mutations in the surviving animals are not completely random, in practice. Sex complicates things further by merging two plans into one, which helps spread beneficial errors more rapidly.

Proof from Morality (5)

Fuzzy Logic

For the moment, let’s assume there is a deity helping us with tricky morals. All religions that I know of state that their god or gods are much smarter than us, in some cases infinitely smart and capable of seeing future events. If we are being guided, then we should easily find clear, consistent answers to these questions.

Instead, we find the contrary.

George Tamarin conducted a fascinating study in 1966. He presented a few thousand Israeli children with the Old Testament’s telling of the battle for the ancient city of Jericho:

And at the seventh time, when the priests had blown the trumpets, Joshua said to the people, “Shout, for the LORD has given you the city.

And the city and all that is within it shall be devoted to the LORD for destruction. Only Rahab the prostitute and all who are with her in her house shall live, because she hid the messengers whom we sent.

But you, keep yourselves from the things devoted to destruction, lest when you have devoted them you take any of the devoted things and make the camp of Israel a thing for destruction and bring trouble upon it.

But all silver and gold, and every vessel of bronze and iron, are holy to the LORD; they shall go into the treasury of the LORD.”

So the people shouted, and the trumpets were blown. As soon as the people heard the sound of the trumpet, the people shouted a great shout, and the wall fell down flat, so that the people went up into the city, every man straight before him, and they captured the city.

Then they devoted all in the city to destruction, both men and women, young and old, oxen, sheep, and donkeys, with the edge of the sword.

But to the two men who had spied out the land, Joshua said, “Go into the prostitute’s house and bring out from there the woman and all who belong to her, as you swore to her.”

So the young men who had been spies went in and brought out Rahab and her father and mother and brothers and all who belonged to her. And they brought all her relatives and put them outside the camp of Israel.

And they burned the city with fire, and everything in it. Only the silver and gold, and the vessels of bronze and of iron, they put into the treasury of the house of the LORD.


(Joshua 6:16-24, English Standard Version)

Half of them were given this passage with no changes. The other half were given the same passage, but with the names and locations changed to suit ancient China instead. Tamarin then polled the students on the actions of Joshua (or “General Lin”): did they completely approve of them, partially approve, or completely disapprove?

Of those given the unaltered passage, 66% of them completely approved and 8% partially approved.

Of those given the altered passage, 7% of them completely approved and 18% partially approved.

We already consider moral questions involving murder, arson, and theft to be relatively easy to answer. So why would our answers depend so heavily on the name of the person committing these acts, and so little on the actions themselves? What’s worse, Israel has a high concentration of religious believers; 88% of the total population identified themselves as Jewish as of 1972. Their closer contact with God should make them better judges of moral issues than non-believers, if we assume a god is helping us with moral questions. That is clearly not the case.

If you think this is just a sign that Jews are immoral, let me counter with a secular version. In 2000, the Republican Party of the United States was deciding whom it would nominate for the presidency. John McCain was the front-runner, having scored an unexpected victory in New Hampshire over his main rival, George W. Bush, and was expected to win the critical state of South Carolina.

As he started campaigning in that state, tens of thousands of voters received a call. The person on the other side of the line claimed to be conducting a poll, and asked a few questions related to their current voting preference. They then asked:

“Would you be more or less likely to vote for John McCain… if you knew he had fathered an illegitimate black child?”

This touched off rumours that McCain had had a child out of wedlock, which were in turn picked up by the press. McCain defended himself against the accusations, but the damage had been done: George W. Bush won South Carolina instead, and would go on to become both the Republican presidential nominee and, eventually, President of the United States.

What happened? Cindy McCain was moved by the plight of two baby girls while helping out in a Bangladesh orphanage. After flying both children to the United States for treatment, she decided to adopt one of them, who was renamed Bridget McCain. This is clearly a moral act; indeed, John talked about his adopted daughter while on the campaign trail and brought her on-stage several times as a show of his moral strength.

This telephone “poll” was insinuating something more sinister: that Bridget was his illegitimate child from an affair, and that McCain had hidden this by inventing the adoption story. While even this could be moral under certain conditions (say, if the affair was approved and encouraged by Cindy), most people would consider those conditions unlikely, and would judge the situation immoral until proven otherwise.

Superficially, both cases have the same evidence going for them. Rationally, we should either reach for Ockham’s Razor and believe McCain, since the adoption story is far more likely, or dig for more evidence.

Instead, the voters went on instinct. We don’t want to be taken advantage of, so we tend to be pessimistic when we have something at stake. Voters didn’t know which situation was true, but didn’t want to assume he was clean, only to learn after he’d earned their precious vote that they’d been suckered.[123]

South Carolina is considered a conservative state; most residents place an emphasis on traditional marriage, and are more likely to hold racist views than the average person in the United States. The idea of an illegitimate black child was obscene to many of its residents, which made them even more likely to choose someone else at the polls.

While there was a rationale behind the voters’ decision, it wasn’t rational.

We could shore up the god hypothesis by adding to it. Perhaps our lack of clarity is due to something else interfering with that god, such as “free will” or another god. But these extra assumptions only make the hypothesis easier to cut down with Ockham’s Razor. So what else could explain the rest of our morality?

The Monkey Wrench

Ironically, the answer to this is also Game Theory. Not the consequences of it, however, but the fact that it exists.

Intelligence allows us to overcome problems that evolution hasn’t developed a solution to. In the chapter on the Intelligence proof, I mentioned Betty the crow, who was able to bend a metal wire to retrieve a tasty morsel of food from a tube.

Metal wires are not natural. Crows do not get their food by sticking things into tubes. Yet none of that mattered; the crow was able to understand the situation, come up with a plan that it could pull off, then put it into action. Intelligence is swifter and more flexible than raw evolution.

It can even override it. Gandhi was a strong believer in celibacy, at one point deliberately sleeping next to two nude women to prove his self-control. He thought that sexual desire caused suffering, and that suffering kept humans from achieving spiritual enlightenment.

I disagree. Gandhi was reasoning that because some sexual desires are harmful, all of them are. This is not true; while sex can be taken too far, it can also be a wonderful show of affection, with no consequences for those not involved. Gandhi was doing this in the name of spiritual purity, yet never gave evidence that it made him “pure.” What if his view of the supernatural was wrong, having been planted by a daemon, and the tantric pursuit of sex was the real path to purity? He would have tossed his life away blindly.

Invoking intelligence to explain morality makes a lot of sense. Like big claws and long legs, big brains are expensive to grow and maintain. Given enough time, the result is an animal only as smart as it needs to be to get by. This explains why we don’t see intelligence everywhere, why it plays second fiddle to instinct, and why it’s so easily mangled. Gandhi or I could be wrong, because neither of us is especially good at rational thought.

Even if we were absolutely smart, we might still have different morals due to different information. John McCain’s situation seems moral, but what if we learned the adoption tale was really a cover story, and Bridget was conceived in a steamy affair? The moral verdict changes dramatically, yet the facts of this reality are nearly identical to the old view. Those Israeli schoolchildren had been taught, by family or society, that a devout Jew with a divine mandate can do no wrong, and their morality reflects this “fact.”

These intelligence-based morals will be as universal as our commonalities. I assume you’re conscious while reading this; based on that, can we agree that forcibly ending consciousness is worth banning? Yes? Then can’t we also agree that this should be a general rule, applied to all conscious beings? From that simple exchange, we’ve generated a rule which appears universal, without once needing to invoke anything beyond ourselves, let alone a god.

I’ll admit I haven’t absolutely proven that our morality does not come from the divine. I don’t need to; so long as that mix of evolution and intelligence is at least as plausible, we can invoke Ockham’s Razor and declare the god explanation unlikely. The small scraps of evidence that point to the simpler theory are just icing on the cake, and the argument that a god cannot provide an absolute morality seals the deal.

Two big flaws remain: I’m assuming that intelligence does not come from the divine, and that no one has found evidence for a god. Thankfully, intelligence has already been given its own chapter, and the second assumption is handled nicely in the last chapter.


[123]  Aaand we’re back to the Prisoner’s Dilemma. The only differences are the introduction of multiple players, and the payoffs and costs for each choice.