How to Become a Radical

If I had a word of the week, it would be “radicalization.” Some of why the term is hot in my circles is due to offline conversations, some of it stems from yet another aggrieved white male engaging in terrorism, and some from yet another study confirming that Trump voters were driven by bigotry (specifically, a fear of losing the privilege that would come with giving up superiority to promote equality).

Some just came in via Rebecca Watson, though, who pointed me to a fascinating study.

For example, a shift from ‘I’ to ‘We’ was found to reflect a change from an individual to a collective identity (…). Social status is also related to the extent to which first person pronouns are used in communication. Low-status individuals use ‘I’ more than high-status individuals (…), while high-status individuals use ‘we’ more often (…). This pattern is observed both in real life and on Internet forums (…). Hence, a shift from “I” to “we” may signal an individual’s identification with the group and a rise in status when becoming an accepted member of the group.

… I think you can guess what Step Two is. Walk away from the screen, find a pen and paper, write down your guess, then read the next paragraph.

The forum investigated here is one of the largest Internet forums in Sweden, called Flashback (…). The forum claims to work for freedom of speech. It has over one million users who, in total, write 15 000 to 20 000 posts every day. It is often criticized for being extreme, for example in being too lenient regarding drug related posts but also for being hostile in allowing denigrating posts toward groups such as immigrants, Jews, Romas, and feminists. The forum has many sub-forums and we investigate one of these, which focuses on immigration issues.

The total text data from the sub-forum consists of 964 Megabytes. The total amount of data includes 700,000 posts from 11th of July, 2004 until 25th of April, 2015.

How did you do? I don’t think you’ll need pen or paper to guess what these scientists saw in Step Three.

We expected and found changes in cues related to group identity formation and intergroup differentiation. Specifically, there was a significant decrease in the use of ‘I’ and a simultaneous increase in the use of ‘we’ and ‘they’. This has previously been related to group identity formation and differentiation to one or more outgroups (…). Increased usage of plural, and decreased frequency of singular, nouns have also been found in both normal, and extremist, group formations (…). There was a decrease in singular pronouns and a relative increase in collective pronouns. The increase in collective pronouns referred both to the ingroup (we) and to one or more outgroups (they). These results suggest a shift toward a collective identity among participants, and a stronger differentiation between the own group and the outgroup(s).
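The cue they’re tracking is easy to sketch. This isn’t the authors’ code, just a toy illustration of per-post pronoun rates, with abbreviated dictionaries and made-up example posts:

```python
import re

# LIWC-style pronoun dictionaries, abbreviated for illustration
FIRST_SINGULAR = {"i", "my", "me", "mine"}
FIRST_PLURAL = {"we", "our", "us", "ours"}
THIRD_PLURAL = {"they", "them", "their", "theirs"}

def pronoun_rates(post):
    """Return the fraction of a post's words in each pronoun category."""
    words = re.findall(r"[a-z']+", post.lower())
    if not words:
        return {"i": 0.0, "we": 0.0, "they": 0.0}
    n = len(words)
    return {
        "i": sum(w in FIRST_SINGULAR for w in words) / n,
        "we": sum(w in FIRST_PLURAL for w in words) / n,
        "they": sum(w in THIRD_PLURAL for w in words) / n,
    }

# A hypothetical newcomer post versus a hypothetical veteran post
early = pronoun_rates("I think my post will get me banned")
late = pronoun_rates("We know what they want and we will stop them")
```

Track those rates per user across years of posts, correlate them with time, and you have the basic shape of the study’s analysis.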

Brilliant! We’ve confirmed one way people become radicalized: by hanging around in forums devoted to “free speech,” the hate dumped on certain groups gradually creates an in-group/out-group dichotomy, bringing out the worst in us.

Unfortunately, there’s a problem with the staircase.

| Category | Dictionary | Example words | Mean r |
| --- | --- | --- | --- |
| Group differentiation | First person singular | I, my, me | -.0103 *** |
| | First person plural | We, our, us | .0115 *** |
| | Third person plural | They, them, their | .0081 *** |
| Certainty | | Absolutely, sure | .0016 NS |

***p < .001. NS = not significant. n=11,751.

Table 2 tripped me up, hard. I dropped by the ever-awesome R<-Psychologist and cooked up two versions of the same dataset. One has no correlation, while the other has a correlation coefficient of 0.01. Can you tell me which is which, without resorting to a straight-edge or photo editor?

Comparing two datasets, one with r=0, the other with r=0.01.

I can’t either, because the effect size is waaaaaay too small to be perceptible. That’s a problem, because it can be trivially easy to manufacture a bias at least that large. If we were talking about a system with very tight constraints on its behaviour, like the Higgs Boson, then uncovering 500 bits of evidence over 2,500,000,000,000,000,000 trials could be too much for any bias to manufacture. But this study involves linguistics, which is far less precise than the Standard Model, so I need a solid demonstration of why this study is immune to biases on the scale of r = 0.01.
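You can reproduce the comparison yourself. Here’s a minimal sketch (not the R&lt;-Psychologist tool, just numpy with an arbitrary seed) that draws two datasets at the study’s sample size, one with a true correlation of 0 and one with a true correlation of 0.01:

```python
import numpy as np

def correlated_pair(true_r, n, rng):
    """Draw (x, y) pairs whose population correlation is true_r."""
    x = rng.standard_normal(n)
    noise = rng.standard_normal(n)
    # Classic trick: mix x into y in proportion to the desired correlation
    y = true_r * x + np.sqrt(1 - true_r**2) * noise
    return x, y

rng = np.random.default_rng(42)  # arbitrary seed
n = 11_751  # the n reported under Table 2

for true_r in (0.0, 0.01):
    x, y = correlated_pair(true_r, n, rng)
    print(f"true r = {true_r:.2f}, sample r = {np.corrcoef(x, y)[0, 1]:+.4f}")
```

At this sample size the sampling noise in r is roughly 1/√n ≈ 0.009, the same order as the effect itself, which is why the two scatterplots are indistinguishable by eye.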

The authors do try to correct for how p-values exaggerate the evidence in large samples, but they do it by plucking p < 0.001 out of a hat. Not good enough; how does that p-value relate to studies of similar subject matter and methodology? Also, p-values stink. Also also, I notice there’s no control sample here. Do pro-social-justice groups exhibit the same trend over time? What about the comment section of sports articles? It’s great that their hypotheses were supported by the data, don’t get me wrong, but it would be better if they’d tried harder to swat down those hypotheses themselves. I’d also like to point out that none of my complaints falsify their hypotheses; they merely demonstrate that the study falls well short of “confirmed” or “significant,” contrary to what I typed earlier.
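That fixed cutoff also hides how mechanically significance scales with sample size. As a rough sketch (the standard t statistic for a Pearson correlation, with a normal approximation that’s fine at these sizes, and hypothetical n values), r = 0.01 goes from unremarkable to “highly significant” purely by adding data:

```python
from math import erfc, sqrt

def p_value_for_r(r, n):
    """Two-sided p-value for a Pearson correlation of r at sample size n.

    Uses the standard t statistic, t = r * sqrt((n - 2) / (1 - r**2)),
    approximating the t distribution with a normal (accurate for large n).
    """
    t = abs(r) * sqrt((n - 2) / (1 - r**2))
    return erfc(t / sqrt(2))

for n in (1_000, 10_000, 100_000, 700_000):
    print(f"n = {n:>7,}: r = 0.01 gives p ~ {p_value_for_r(0.01, n):.3g}")
```

The coefficient never changes; only the amount of data does. That’s exactly why a fixed p threshold tells you little about whether an effect this small is real or an artifact of bias.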

Alas, I’ve discovered another path towards radicalization: perform honest research about the epistemology behind science. It’ll ruin your ability to read scientific papers, and leave you in despair about the current state of science.

Bayes Bunny iz trying to cool off after reading too many scientific papers.

To A Burnt-Out Activist

The scandal brewing at the end of my post has come to pass. This one hurt a little bit; publicly, at least, Silverman seemed to be in favor of policies that would reduce sexual assault, and spoke out against the bigots in our movement. In reality, given the evidence, he was talking the talk but not walking the walk.

That comes on top of my growing unease over that last blog post. There’s nothing in there worth changing, that I’m aware of; the problem is more with what it doesn’t say, and whom it mentions in passing but otherwise leaves at the margins.

See, there’s a pervasive belief that minorities are responsible for bringing about social justice, either by claiming they created the problem or by demanding they educate everyone. That falls apart if you spend half a second dwelling on it. The majority, by definition, hold most of the power in society. If they acknowledged the injustice done to the minority, they’d use that power to help resolve it. In reality, they tend to bury their heads in the sand, ignoring the evidence of injustice or finding ways to excuse it, so their power is often wielded against the minority. The result is that the minority has to spend an enormous amount of time and energy educating and agitating the majority.

So you can see why calling for people to fight harder for the change they’d like to see, as I did last blog post, can seem clueless and even heartless. Yes, I placed a few lines in there to hint that I was talking to the majority, but those have to be weighed against the context I outlined above. This time around, I’d rather focus on the burnt-out activist than the clueless white guy.

Put bluntly, life is short. You should spend your time doing things you find rewarding; endlessly quoting painful testimony of sexual assault, or the science and statistics of how tragically common it is, or giving an embarrassingly basic lecture on consent, doesn’t stay in that category for long. The resulting feelings of burnout or frustration are entirely valid, and worthy of taking seriously.

Human beings are also complex; we exist in many cultures and movements. I sometimes advocate for secularism, but I’ve also written about science, statistics, and even dabbled in art from time to time. If one aspect of my life becomes frustrating, I can easily switch to another, and there’s nothing wrong with that switch. This may seem like a betrayal; how can you leave your sisters behind as they carry on fighting the good fight?

But it’s extremely rare for a single person to change a culture; in practice, change comes via a sustained, coordinated effort from multiple people. At worst, the loss of one person may slow things down, and even that is debatable: there’s an unstated premise here that once you’ve dropped out of a culture, you can’t come back. That should be obviously false (and if it isn’t, run). If you can return, though, then why not use the time away to recharge? You’ll get a helluva lot more done ducking out from time to time to fight burnout than you would if you stuck around when you don’t care to.

I have tremendous sympathy for the people who are sick of arguing against all the sexism, racism, ableism, and so on within the atheist and skeptic movements. Take as long a break as you need to, come back if or when you feel it’s time. There should be an empty seat waiting for you, and if there isn’t you’ll be in a better place to flip everyone the bird and create a new culture that gets this shit right.


(As a side-note, I found it amusing when I began working through the OrbitCon talks and heard Greta Christina laying out similar points. She has been a big influence on my views on activism for several years, so the overlap is less surprising in hindsight.)

Sean Hannity?!?!

Context first. Trump has been fuming since investigators raided Michael Cohen’s offices and hotel room. It quickly became public that Cohen was under criminal investigation for “business dealings” possibly related to squashing reports of sexual improprieties. Meanwhile, Federal prosecutors have been secretly reading his email communication as part of said investigation.

There’s also been more reporting about Elliott Broidy, who used Cohen to pay $1.6 million to a woman he impregnated. Some interesting details started emerging: two of the women who had affairs with Trump, plus this third woman, all had Keith Davidson as their lawyer, who happens to know Cohen. The contract was with the same LLC Cohen set up to funnel money to Stormy Daniels, and it used similar wording right down to the code names. This adds to speculation that Cohen silenced so many stories of sexual assault that he had an organized system in place.

Both Michael Cohen and Trump have asked for “first dibs” in determining which documents are protected by attorney-client privilege, rather than the conventional “taint team.” More interestingly, in response to allegations by Federal prosecutors that Cohen had no real clients, Cohen provided a list of three: Trump, Elliott Broidy, and [REDACTED]. The latter explicitly asked Cohen and his lawyers to keep his name quiet. That came to a head during a court hearing today. Over on the Political Madness thread, SC and I have been tuning in via Twitter.

And, as you may have guessed by now, the judge ruled that Cohen’s lawyers couldn’t keep [REDACTED]’s name sealed, and he was outed as Sean Hannity. Mayhem ensued; after all, Sean Hannity has been a big defender of Trump and condemned the raid on Cohen’s offices without disclosing his relationship. Hannity has had sexual harassment allegations leveled against him, and Fox News has promoted myths about sexual assault as well as a culture which tolerates sexual harassment.

There’s nothing public about Hannity that’s of the same scale as Trump or Broidy, however. Not yet, anyway; I expect a dozen investigative reporters are working to change that.

The Tuskegee Syphilis Study

Was it three years ago? Almost to the day, from the looks of it.

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

Get out of the way.

A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future.

That was Steven Pinker arguing that biomedical research is too ethical. Follow that link and you’ll see my counter-example: the Tuskegee syphilis study. It is a literal textbook example of what not to do in science. Pinker didn’t mention it back then, but it was inevitable he’d have to deal with it at some time. Thanks to PZ, I now know he has.

At a recent conference, another colleague summed up what she thought was a mixed legacy of science: vaccines for smallpox on the one hand; the Tuskegee syphilis study on the other. In that affair, another bloody shirt in the standard narrative about the evils of science, public health researchers, beginning in 1932, tracked the progression of untreated latent syphilis in a sample of impoverished African Americans for four decades. The study was patently unethical by today’s standards, though it’s often misreported to pile up the indictment. The researchers, many of them African American or advocates of African American health and well-being, did not infect the participants as many people believe (a misconception that has led to the widespread conspiracy theory that AIDS was invented in US government labs to control the black population). And when the study began, it may even have been defensible by the standards of the day: treatments for syphilis (mainly arsenic) were toxic and ineffective; when antibiotics became available later, their safety and efficacy in treating syphilis were unknown; and latent syphilis was known to often resolve itself without treatment. But the point is that the entire equation is morally obtuse, showing the power of Second Culture talking points to scramble a sense of proportionality. My colleague’s comparison assumed that the Tuskegee study was an unavoidable part of scientific practice as opposed to a universally deplored breach, and it equated a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century in perpetuity.

What horse shit.

To persuade the community to support the experiment, one of the original doctors admitted it “was necessary to carry on this study under the guise of a demonstration and provide treatment.” At first, the men were prescribed the syphilis remedies of the day — bismuth, neoarsphenamine, and mercury — but in such small amounts that only 3 percent showed any improvement. These token doses of medicine were good public relations and did not interfere with the true aims of the study. Eventually, all syphilis treatment was replaced with “pink medicine” — aspirin. To ensure that the men would show up for a painful and potentially dangerous spinal tap, the PHS doctors misled them with a letter full of promotional hype: “Last Chance for Special Free Treatment.” The fact that autopsies would eventually be required was also concealed. As a doctor explained, “If the colored population becomes aware that accepting free hospital care means a post-mortem, every darky will leave Macon County…”

  • “it equated a one-time failure to prevent harm to a few dozen people”: In reality, according to that last source, “28 of the men had died directly of syphilis, 100 were dead of related complications, 40 of their wives had been infected, and 19 of their children had been born with congenital syphilis.” As of August last year, 12 of those children (now adults) were still receiving financial compensation.
  • “the prevention of hundreds of millions of deaths per century in perpetuity”: In reality, the Tuskegee study wasn’t the only scientific study looking at syphilis. Nor even the first. Syphilis was discovered in 1494, named in 1530, the causative organism was found in 1905, and the first treatments were developed in 1910. The science was dubious at best:

The study was invalid from the very beginning, for many of the men had at one time or another received some (though probably inadequate) courses of arsenic, bismuth and mercury, the drugs of choice until the discovery of penicillin, and they could not be considered untreated. Much later, when penicillin and other powerful antibiotics became available, the study directors tried to prevent any physician in the area from treating the subjects – in direct opposition to the Henderson Act of 1943, which required treatment of venereal diseases.

A classic study of untreated syphilis had been completed years earlier in Oslo. Why try to repeat it? Because the physicians who initiated the Tuskegee study were determined to prove that syphilis was “different” in blacks. In a series of internal reviews, the last done as recently as 1969, the directors spoke of a “moral obligation” to continue the study. From the very beginning, no mention was made of a moral obligation to treat the sick.

Pinker’s response to the Tuskegee study is to re-write history to suit his narrative, again. No wonder he isn’t a fan of ethics.

Steven Pinker, “Historian”

It’s funny, if you look back over my blog posts on Steven Pinker, you’ll notice a progression.

Ignoring social justice concerns in biomedical research led to things like the Tuskegee experiment. The scientific establishment has since tried to correct that by making it a critical part. Pinker would be wise to study the history a bit more carefully, here.


Setting aside your ignorance of the evidence for undercounting in the FBI’s data, you can look at your own graph and see a decline?

When Sargon of Akkad tried and failed to discuss sexual assault statistics, he at least had the excuse of never having gotten a higher education, never studying up on the social sciences. I wonder what Steven Pinker’s excuse is.


Ooooh, I get it. This essay is just an excuse for Pinker to whine about progressives who want to improve other people’s lives. He thought he could hide his complaints behind science, to make them look more digestible to himself and others, but in reality just demonstrated he understands physics worse than most creationists. What a crank.

You’ll also notice a bit of a pattern, too, one that apparently carries on into Pinker’s book about the Enlightenment.

It is curious, then, to find Pinker breezily insisting that Enlightenment thinkers used reason to repudiate a belief in an anthropomorphic God and sought a “secular foundation for morality.” Locke clearly represents the opposite impulse (leaving aside the question of whether anyone in the period believed in a strictly anthropomorphic deity).

So, too, Kant. While the Prussian philosopher certainly had little use for the traditional arguments for God’s existence – neither did the exceptionally pious Blaise Pascal, if it comes to that – this was because Kant regarded them as stretching reason beyond its proper limits. Nevertheless, practical reason requires belief in God, immortality and a post-mortem existence that offers some recompense for injustices suffered in the present world.

That’s from Peter Harrison, a professional historian. Even I was aware of this, though I am guilty of a lie of omission. I’ve brought up the “Cult of Reason” before, which was a pseudo-cult set up during the French Revolution that sought to tear down religion and instead worship logic and reason. What I didn’t mention was that it didn’t last long; Robespierre shortly announced his “Cult of the Supreme Being,” which promoted Deism as the official religion of France, and had the leaders of the Cult of Reason put to death. Robespierre himself was executed shortly thereafter, for sounding too much like a dictator, and after a half-hearted attempt at democracy France finally settled on Napoleon Bonaparte, a dictator everyone could get behind. The shift to reason and objectivity I was hinting at back then was more gradual than I implied.

If we go back to the beginning of the scientific revolution – which Pinker routinely conflates with the Enlightenment – we find the seminal figure Francis Bacon observing that “the human intellect left to its own course is not to be trusted.” Following in his wake, leading experimentalists of the seventeenth century explicitly distinguished what they were doing from rational speculation, which they regarded as the primary source of error in the natural sciences.

In the next century, David Hume, prominent in the Scottish Enlightenment, famously observed that “reason alone can never produce any action … Reason is, and ought only to be the slave of the passions.” And the most celebrated work of Immanuel Kant, whom Pinker rightly regards as emblematic of the Enlightenment, is the Critique of Pure Reason. The clue is in the title.

Reason does figure centrally in discussions of the period, but primarily as an object of critique. Establishing what it was, and its intrinsic limits, was the main game. […]

To return to the general point, contra Pinker, many Enlightenment figures were not interested in undermining traditional religious ideas – God, the immortal soul, morality, the compatibility of faith and reason – but rather in providing them with a more secure foundation. Few would recognise his tendentious alignment of science with reason, his prioritization of scientific over all other forms of knowledge, and his positing of an opposition between science and religion.

I’m just skimming Harrison’s treatment, the rest of the article is worth a detour, but it really helps underscore how badly Pinker wants to re-write history. Here’s something the man himself committed to electrons:

More insidious than the ferreting out of ever more cryptic forms of racism and sexism is a demonization campaign that impugns science (together with the rest of the Enlightenment) for crimes that are as old as civilization, including racism, slavery, conquest, and genocide. […]

“Scientific racism,” the theory that races fall into a hierarchy of mental sophistication with Northern Europeans at the top, is a prime example. It was popular in the decades flanking the turn of the 20th century, apparently supported by craniometry and mental testing, before being discredited in the middle of the 20th century by better science and by the horrors of Nazism. Yet to pin ideological racism on science, in particular on the theory of evolution, is bad intellectual history. Racist beliefs have been omnipresent across history and regions of the world. Slavery has been practiced by every major civilization and was commonly rationalized by the belief that enslaved peoples were inherently suited to servitude, often by God’s design. Statements from ancient Greek and medieval Arab writers about the biological inferiority of Africans would curdle your blood, and Cicero’s opinion of Britons was not much more charitable.

More to the point, the intellectualized racism that infected the West in the 19th century was the brainchild not of science but of the humanities: history, philology, classics, and mythology.

As I’ve touched on, this is so far from reality it’s practically creationist. Let’s ignore the implication that no-one used science to promote racism past the 1950s, which ain’t so, and dig up more data points on the dark side of the Enlightenment.

… the Scottish philosopher David Hume would write: “I am apt to suspect the Negroes, and in general all other species of men to be naturally inferior to the whites. There never was any civilized nation of any other complection than white, nor even any individual eminent in action or speculation.” […]

Another two decades on, Immanuel Kant, considered by many to be the greatest philosopher of the modern period, would manage to let slip what is surely the greatest non-sequitur in the history of philosophy: describing a report of something seemingly intelligent that had once been said by an African, Kant dismisses it on the grounds that “this fellow was quite black from head to toe, a clear proof that what he said was stupid.” […]

Scholars have been aware for a long time of the curious paradox of Enlightenment thought, that the supposedly universal aspiration to liberty, equality and fraternity in fact only operated within a very circumscribed universe. Equality was only ever conceived as equality among people presumed in advance to be equal, and if some person or group fell by definition outside of the circle of equality, then it was no failure to live up to this political ideal to treat them as unequal.

It would take explicitly counter-Enlightenment thinkers in the 18th century, such as Johann Gottfried Herder, to formulate anti-racist views of human diversity. In response to Kant and other contemporaries who were positively obsessed with finding a scientific explanation for the causes of black skin, Herder pointed out that there is nothing inherently more in need of explanation here than in the case of white skin: it is an analytic mistake to presume that whiteness amounts to the default setting, so to speak, of the human species.


Indeed, connections between science and the slave trade ran deep during [Robert] Boyle’s time—all the way into the account books. Royal Society accounts for the 1680s and 1690s show semi-regular dividends paid out on the Society’s holdings in Royal African Company stock. £21 here, £21 there, once a year, once every two years. Along with membership dues and the occasional book sale, these dividends supported the Royal Society during its early years.

Boyle’s early “experiments” with the inheritance of skin color set an agenda that the scientists of the Royal Society pursued through the decades. They debated the origins of blackness with rough disregard for the humanity of enslaved persons even as they used the Royal African Company’s dividends to build up the Royal Society as an institution. When it came to understanding skin color, Boyle used his wealth and position to help construct a science of race that, for centuries, was used to justify the enslavement of Africans and their descendants globally.


This timeline gives an overview of scientific racism throughout the world, placing the Eugenics Record Office within a broader historical framework extending from Enlightenment-Era Europe to present-day social thought.

All this is obvious via a glance at a history book, something which Pinker is apparently allergic to. I’ll give Harrison the final word:

If we put into the practice the counting and gathering of data that Pinker so enthusiastically recommends and apply them to his own book, the picture is revealing. Locke receives a meagre two mentions in passing. Voltaire clocks up a modest six references with Spinoza coming in at a dozen. Kant does best of all, with a grand total of twenty-five (including references). Astonishingly, Diderot rates only two mentions (again in passing) and D’Alembert does not trouble the scorers. Most of these mentions occur in long lists. Pinker refers to himself over 180 times. […]

… if Enlightenment Now is a model of what Pinker’s advice to humanities scholars looks like when put into practice, I’m happy to keep ignoring it.

EvoPsych and Scientific Racism

I’m not a fan of EvoPsych. It manages the feat of misunderstanding both evolution and psychology, its researchers are prone to wild misrepresentation of fields they clearly don’t understand, and it has all the trappings of a pseudo-science. Nonetheless, I’ve always thought they had enough sense to avoid promoting scientific racism, at least openly.

[CONTENT WARNING: Some of them don’t.]

[Read more…]

Social Justice Is Core To Atheism/Skepticism

As Jordan Peterson’s become more and more influential, there’s been an uptick in the number of people writing about him. The most astute analysis I’ve seen yet comes in podcast form via CANADALAND, but a recent blog post by PZ Myers directed me to Peter Coffin’s excellent contribution as well as Shiv’s ongoing coverage.

I see a common thread to it all, though. Coffin in particular points out just how poor Peterson is at coming to the point. That twigged a memory; forgive this copy-paste from The Skeptic’s Dictionary, but the entire text is important.

Shotgunning: Shotgunning is a cold reading trick used by pseudo-psychics and false mediums. To convince one’s mark that one is truly in touch with the other world, one provides a large quantity of information, some of which is bound to seem appropriate. Shotgunning relies on subjective validation and selective thinking.

Peterson is using a fairly old trick: blast out as many points as you can, and hope that a few of them stick. It doesn’t matter if some or even most of them fail, because people will cling to the handful that resonate with them. As an added bonus, shotgunning adds cognitive load to anyone hoping to critique you. If you want to make your critique solid, you need to grasp the entire argument and hold it in memory, which is nearly impossible when it’s a small fraction of a wandering, wide-ranging rant. Coffin needed to piece together small segments of three separate interviews to illuminate one of Peterson’s beliefs, for instance, making it easy to toss out the “out-of-context” card. Some reviews of Peterson’s latest book back up Coffin’s observation, though.

the reader discovers that each of Peterson’s 12 rules is explained in an essay delivered in a baroque style that combines pull-your-socks-up scolding with footnoted references to academic papers and Blavatskyesque metaphysical flights. He likes to capitalise the word “Being” and also to talk about “fundamental, biological and non-arbitrary emergent truth”. Within a page, we are told that “expedience is cowardly, and shallow, and wrong” and “meaning is what emerges beautifully and profoundly like a newly formed rosebud opening itself out of nothingness into the light of sun and God”. The effect is bizarre, like being shouted at by a rugby coach in a sarong. […]

What makes this book so irritating is Peterson’s failure to follow many of the rules he sets out with such sententiousness. He does not “assume that the person he is listening to might know something he doesn’t”. He is far from “precise in his speech”, allowing his own foundational concepts (like “being” and “chaos”) to slide around until they lose any clear meaning. He is happy to dish out a stern injunction against straw-manning, but his “Postmodernists” and Marxists are the flimsiest of scarecrows, so his chest-thumping intellectual victories seem hollow.


Peterson has a knack for penning sentences that sound like deep wisdom at first glance but vanish into puffs of pseudo-profundity if you give them more than a second’s thought. Consider these: “Our eyes are always pointing at things we are interested in approaching, or investigating, or looking at, or having”; “In Paradise, everyone speaks the truth. That is what makes it Paradise.” It is no defence to say there are truths here clumsily expressed: rule 10 is “Be precise in your speech”.


  1. The content does not justify the length of the book. When you strip away the pseudo-profundity and verbosity, you’re left with rather simple ideas you could find in any self-help book or discover on your own. Rule # 1, for instance, essentially states that females prefer males with confidence and that success breeds confidence and further success. This is rather obvious without having to understand the evolutionary history of lobsters.
  2. The introduction of the book presents the author as an objective investigator of the truth, disillusioned by dogmatic ideology and prepared to demonstrate its dangers. He then proceeds to incessantly quote from the bible, perhaps the most dogmatic text ever written. I didn’t purchase the book to be preached at, and found it unexpected and highly obnoxious. I understand that the author is interested in story and “archetypes,” but the bible is quoted out of proportion. There are many ancient stories to choose from, each with endless interpretive possibilities, but the bible is, for some reason, the primary text. Now I’m sure this is fine with many people, but I was unpleasantly surprised that I had purchased a book on biblical criticism or theology.

The latter review brings up another good point: Peterson’s morality is fundamentalist Christian and heavily influenced by the Christian Bible. He recently denied believing in their god during an interview about his book, which obsessively quotes from their Holy Writ. Less than a year ago, he was arguing in apocalyptic tones that everyone is secretly Christian.

… and that brings me to the last line which is that “so that we can all stumble forward to the Kingdom of God” Why would I put it that way? The reason I put it that way is because, well first of all, everybody does really know what that means, even though they may not believe in it, but that doesn’t really matter, being I think that to believe is to act and not to spout a set of statements and it is certainly possible to act as if what you are attempting to do is to bring about the Kingdom of God, and I would say that doing so is something that will radically justify your miserable existence and that’s really what you need, is radical justification for your miserable existence, and because human beings are so powerful, really powerful beyond the limits of our imagination, we have no idea where our ultimate destiny might be, that it’s not clear what our limits are and then if we decided to improve the place, let’s say, and I would say, starting with ourselves, because that is the safest and humblest way to begin, that there’s no telling where we might end up.

And since we’re all fragile and vulnerable creatures, and we’re going to lose everything anyway, we might as well risk everything to obtain the highest possible good, and then that would make the misery that constitutes our life bearable as a consequence of our intrinsic nobility. And there’s nothing in that except the good. And so then why not do it? And so that’s what I would enjoy and encourage, encourage everyone to do because there’s nothing better to do than that and we might as well all do that which there is nothing better than!

You’d think the mix of fundamentalist Christianity and self-help psychobabble would set off alarm bells in atheist and skeptic minds. As PZ points out, though, it hasn’t. Even some big names in the movement are falling for Peterson, like Michael Shermer. Sam Harris is more resistant, yet has talked with him twice already and has a third event upcoming (tickets start at $79!). Some of YouTube Atheism is all agog over Peterson, too. This seems paradoxical at first glance.

But there’s a simple calculus at work here: Peterson’s stance against social justice is so highly valued by these people that it either doesn’t trigger their skepticism or easily swamps it. We see the same thing in evangelical Christians’ support for Donald Trump: he may be highly immoral according to their worldview, but stacking the courts in their favor is so important that they’ll wallpaper over his moral failings. This segment of skeptics and atheists values social justice over skepticism and atheism, albeit as a target of scorn.

If we combine that segment of anti-social-justice skeptics/atheists with the pro- side, however, isn’t that more than half of all atheists and skeptics? And if more than half of us value social justice that highly, isn’t it fair to argue social justice is a core part of the atheist/skeptic movement? Even if my numbers are off, it’s amusing to compare the PZ Myers of 2009 to that of 2017. The Culture Wars have made us all more mindful of social justice, no matter what subculture we belong to, and in the case of atheism/skepticism gradually turned it into a defining issue. You can’t understand our subculture without knowing about the social justice Deep Rift, and that makes social justice critical to understanding us.

I Thought We All Agreed On This One

The UN Population Fund puts it well:

Child marriage is a human rights violation. Despite laws against it, the practice remains widespread, in part because of persistent poverty and gender inequality. In developing countries, one in every four girls is married before reaching age 18. One in nine is married under age 15.

Child marriage threatens girls’ lives and health, and it limits their future prospects. Girls pressed into child marriage often become pregnant while still adolescents, increasing the risk of complications in pregnancy or childbirth. These complications are a leading cause of death among older adolescents in developing countries.

UNICEF is even more forceful:

Marriage before the age of 18 is a fundamental violation of human rights. Many factors interact to place a girl at risk of marriage, including poverty, the perception that marriage will provide ‘protection’, family honour, social norms, customary or religious laws that condone the practice, an inadequate legislative framework and the state of a country’s civil registration system. Child marriage often compromises a girl’s development by resulting in early pregnancy and social isolation, interrupting her schooling, limiting her opportunities for career and vocational advancement and placing her at increased risk of domestic violence. Child marriage also affects boys, but to a lesser degree than girls.

Under the law, these children are too immature to vote or drive, yet they can freely consent to a possibly life-long legal partnership? Ridiculous. It’s not hard to find stories of girls who felt like slaves to their husbands, or had to endure years of abuse before they could legally divorce. And the excuses given to protect the practice defy belief at times.

Last year, 17-year-old Girl Scout Cassandra Levesque campaigned to change the New Hampshire law that allows girls as young as 13 to get married if their parents approve. “My local representative introduced a bill that raised the minimum age to 18. But a couple of male representatives persuaded the others to kill the bill and to prevent it from being discussed again for some years,” she says. “One of them said that a 17-year-old Girl Scout couldn’t have a say in these matters.”

“So they think she’s old enough for marriage, but not old enough to talk about it,” says Reiss. “I think that reasoning is terrifying.”

My point exac-

… Wait. “New Hampshire?!”

In 2013, [Sherry] Johnson was working at a barbecue stand in Tallahassee when she told her story to a senator who was one of her regular customers. “She listened to me and decided to do something,” Johnson recalls. “She presented a bill to restrict child marriage in 2014, but it failed. That was because nobody understood the problem at the time. People thought: this can’t happen in Florida. The minimum marriage age is 18; what’s the problem? But they didn’t know about the loopholes. Between 2001 and 2015, 16,000 children were married in Florida alone. A 40-year-old man can legally marry a five-year-old girl here.”

Child marriage is semi-legal in New Hampshire and Florida? That can’t be right.

In most US states, the minimum age for marriage is 18. However, in every state exceptions to this rule are possible, the most common being when parents approve and a judge gives their consent. In 25 states, there is no minimum marriage age when such an exception is made.

… What the hell, USA? How could you allow over 200,000 children to get married between 2000 and 2015? How could your judges approve marriages before one party can even consent to sex?! Between this and your poor maternal health outcomes, your unsustainable military spending, your dysfunctional political system, and your growing reputation as a tax haven and money launderer, I have half a mind to invalidate your “developed country” card.

(Thanks to blf for pointing me to this outrage.)