The Laziness of Steven Pinker

I know, I know, I should have promoted that OrbitCon talk on Steven Pinker before it aired. I was a bit swamped developing material for it; ironically, most of that material never made it to air. Don’t worry, I’ll be sharing the good bits via blog post. Amusingly, this first example isn’t from that material. I wound up reading a lot of Pinker and developed a hunch I wasn’t able to track down before air time. In a stroke of luck, Siggy handed me the material I needed to properly follow up.

Enough suspense: what’s your opinion of self-plagiarism, or copying your own work without flagging what you’ve done?

… self-plagiarism does carry with it some level of dishonesty, at least in some situations. The problem is that, when an author, artist or other creator presents a new work, it’s generally expected to be all-new content, unless otherwise clearly stated. … with an academic paper, one is generally expected to showcase what they have learned most recently, meaning that self-plagiarism defeats the purpose of the paper or the assignment. On the other hand, in a creative environment, however, reusing old passages, especially in a limited manner, might be more about homage and maintaining consistency than plagiarism.

It’s a bit of a gray area, isn’t it? The US Office of Research Integrity declares it unethical, but also declares that self-plagiarism isn’t misconduct. Nonetheless, it could be considered misconduct in an academic context, and the ORI itself outlines the case:

For example, in one editorial, Schein (2001) describes the results of a study he and a colleague carried out which found that 92 out of 660 studies taken from 3 major surgical journals were actual cases of redundant publication. The rate of duplication in the rest of the biomedical literature has been estimated to be between 10% to 20% (Jefferson, 1998), though one review of the literature suggests the more conservative figure of approximately 10% (Steneck, 2000). However, the true rate may depend on the discipline and even the journal and more recent studies in individual biomedical journals do show rates ranging from as low as just over 1% in one journal to as high as 28% in another (see Kim, Bae, Hahm, & Cho, 2014). The current situation has become serious enough that biomedical journal editors consider redundancy and duplication one of the top areas of concern (Wager, Fiack, Graf, Robinson, & Rowlands, 2009) and it is the second highest cause for articles to be retracted from the literature between the years 2007 and 2011 (Fang, Steen, & Casadevall, 2012).

But is it misconduct in the context of non-academic science writing? I’m not sure, but I think it’s fair to say self-plagiarism counts as lazy writing. Whatever the ethics, let’s examine an essay by Pinker that Edge published sometime before January 10th, 2017, and match it up against Chapter 2 of Enlightenment Now. I’ve checked the footnotes and preface of the latter, and failed to find any reference to that Edge essay, while the former does not say it’s excerpted from a forthcoming book. You’d have no idea one copy existed if you’d only read the other, so any matching passages count as self-plagiarism.

How many passages match? I’ll use the Edge essay as a base, and highlight exact duplicates in red, sections only present in Enlightenment Now in green, paraphrases in yellow, and essay-only text in black.

The Second Law of Thermodynamics states that in an isolated system (one that is not taking in energy), entropy never decreases. (The First Law is that energy is conserved; the Third, that a temperature of absolute zero is unreachable.) Closed systems inexorably become less structured, less organized, less able to accomplish interesting and useful outcomes, until they slide into an equilibrium of gray, tepid, homogeneous monotony and stay there.

In its original formulation the Second Law referred to the process in which usable energy in the form of a difference in temperature between two bodies is inevitably dissipated as heat flows from the warmer to the cooler body. (As the musical team Flanders & Swann explained, “You can’t pass heat from the cooler to the hotter; Try it if you like but you far better notter.”) A cup of coffee, unless it is placed on a plugged-in hot plate, will cool down. When the coal feeding a steam engine is used up, the cooled-off steam on one side of the piston can no longer budge it because the warmed-up steam and air on the other side are pushing back just as hard.

Once it was appreciated that heat is not an invisible fluid but the energy in moving molecules, and that a difference in temperature between two bodies consists of a difference in the average speeds of those molecules, a more general, statistical version of the concept of entropy and the Second Law took shape. Now order could be characterized in terms of the set of all microscopically distinct states of a system (in the original example involving heat, the possible speeds and positions of all the molecules in the two bodies). Of all these states, the ones that we find useful from a bird’s-eye view (such as one body being hotter than the other, which translates into the average speed of the molecules in one body being higher than the average speed in the other) make up a tiny sliver of the possibilities, while the disorderly or useless states (the ones without a temperature difference, in which the average speeds in the two bodies are the same) make up the vast majority. It follows that any perturbation of the system, whether it is a random jiggling of its parts or a whack from the outside, will, by the laws of probability, nudge the system toward disorder or uselessness —not because nature strives for disorder, but because there are so many more ways of being disorderly than of being orderly. If you walk away from a sand castle, it won’t be there tomorrow, because as the wind, waves, seagulls, and small children push the grains of sand around, they’re more likely to arrange them into one of the vast number of configurations that don’t look like a castle than into the tiny few that do. [Enlightenment Now adds five sentences here.]


I could carry on (and have!), demonstrating that almost all of that essay reappears in Pinker’s book. Maybe half of the reappearance is verbatim. I figure he copy-pasted the contents of his January 2017 essay into the manuscript for his 2018 book, and expanded it to fill an entire chapter. Whether I’m right or wrong, I think the similarities make a damning case for intellectual laziness. It also sets a bad precedent: if Pinker can get this lazy with his non-academic writing, how lazy can he be with his academic work? I haven’t looked into that, and I’m curious if anyone else has.
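If you’d like to check my numbers, this sort of overlap is easy to quantify. Here’s a minimal sketch using Python’s standard-library difflib; the file names are hypothetical placeholders, and counting matching runs of eight-plus words is only a crude proxy for the colour-coded comparison above.

```python
# A rough way to estimate how much of one text reappears verbatim in
# another. Everything here is a sketch: the file names are hypothetical,
# and the eight-word threshold is an arbitrary cutoff for what counts as
# a "matching passage" rather than a shared turn of phrase.
import difflib

def verbatim_overlap(text_a: str, text_b: str, min_words: int = 8) -> float:
    """Fraction of text_a's words that sit inside runs of at least
    min_words consecutive words also found, in order, in text_b."""
    words_a = text_a.split()
    words_b = text_b.split()
    matcher = difflib.SequenceMatcher(a=words_a, b=words_b, autojunk=False)
    shared = sum(block.size
                 for block in matcher.get_matching_blocks()
                 if block.size >= min_words)
    return shared / len(words_a) if words_a else 0.0

if __name__ == "__main__":
    essay = open("edge_essay.txt").read()      # hypothetical file
    chapter = open("chapter_2.txt").read()     # hypothetical file
    print(f"{verbatim_overlap(essay, chapter):.0%} of the essay reappears verbatim")
```

Word-level matching ignores paraphrase entirely, so a figure like this is a floor, not a ceiling; the yellow passages above are exactly what it would miss.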

The Tuskegee Syphilis Study

Was it three years ago? Almost to the day, from the looks of it.

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

Get out of the way.

A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future.

That was Steven Pinker arguing that biomedical research is too ethical. Follow that link and you’ll see my counter-example: the Tuskegee syphilis study. It is a literal textbook example of what not to do in science. Pinker didn’t mention it back then, but it was inevitable he’d have to deal with it at some point. Thanks to PZ, I now know he has.

At a recent conference, another colleague summed up what she thought was a mixed legacy of science: vaccines for smallpox on the one hand; the Tuskegee syphilis study on the other. In that affair, another bloody shirt in the standard narrative about the evils of science, public health researchers, beginning in 1932, tracked the progression of untreated latent syphilis in a sample of impoverished African Americans for four decades. The study was patently unethical by today’s standards, though it’s often misreported to pile up the indictment. The researchers, many of them African American or advocates of African American health and well-being, did not infect the participants as many people believe (a misconception that has led to the widespread conspiracy theory that AIDS was invented in US government labs to control the black population). And when the study began, it may even have been defensible by the standards of the day: treatments for syphilis (mainly arsenic) were toxic and ineffective; when antibiotics became available later, their safety and efficacy in treating syphilis were unknown; and latent syphilis was known to often resolve itself without treatment. But the point is that the entire equation is morally obtuse, showing the power of Second Culture talking points to scramble a sense of proportionality. My colleague’s comparison assumed that the Tuskegee study was an unavoidable part of scientific practice as opposed to a universally deplored breach, and it equated a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century in perpetuity.

What horse shit.

To persuade the community to support the experiment, one of the original doctors admitted it “was necessary to carry on this study under the guise of a demonstration and provide treatment.” At first, the men were prescribed the syphilis remedies of the day — bismuth, neoarsphenamine, and mercury — but in such small amounts that only 3 percent showed any improvement. These token doses of medicine were good public relations and did not interfere with the true aims of the study. Eventually, all syphilis treatment was replaced with “pink medicine” — aspirin. To ensure that the men would show up for a painful and potentially dangerous spinal tap, the PHS doctors misled them with a letter full of promotional hype: “Last Chance for Special Free Treatment.” The fact that autopsies would eventually be required was also concealed. As a doctor explained, “If the colored population becomes aware that accepting free hospital care means a post-mortem, every darky will leave Macon County…”

  • “it equated a one-time failure to prevent harm to a few dozen people”: In reality, according to that last source, “28 of the men had died directly of syphilis, 100 were dead of related complications, 40 of their wives had been infected, and 19 of their children had been born with congenital syphilis.” As of August last year, 12 of those children were still receiving financial compensation.
  • “the prevention of hundreds of millions of deaths per century in perpetuity”: In reality, the Tuskegee study wasn’t the only scientific study looking at syphilis, nor even the first. Syphilis was discovered in 1494 and named in 1530; the causative organism was found in 1905, and the first treatments were developed in 1910. The science was dubious at best:

The study was invalid from the very beginning, for many of the men had at one time or another received some (though probably inadequate) courses of arsenic, bismuth and mercury, the drugs of choice until the discovery of penicillin, and they could not be considered untreated. Much later, when penicillin and other powerful antibiotics became available, the study directors tried to prevent any physician in the area from treating the subjects – in direct opposition to the Henderson Act of 1943, which required treatment of venereal diseases.

A classic study of untreated syphilis had been completed years earlier in Oslo. Why try to repeat it? Because the physicians who initiated the Tuskegee study were determined to prove that syphilis was “different” in blacks. In a series of internal reviews, the last done as recently as 1969, the directors spoke of a “moral obligation” to continue the study. From the very beginning, no mention was made of a moral obligation to treat the sick.

Pinker’s response to the Tuskegee study is to rewrite history to suit his narrative, again. No wonder he isn’t a fan of ethics.

Steven Pinker, “Historian”

It’s funny, if you look back over my blog posts on Steven Pinker, you’ll notice a progression.

Ignoring social justice concerns in biomedical research led to things like the Tuskegee experiment. The scientific establishment has since tried to correct that by making it a critical part. Pinker would be wise to study the history a bit more carefully, here.


Setting aside your ignorance of the evidence for undercounting in the FBI’s data, you can look at your own graph and see a decline?

When Sargon of Akkad tried and failed to discuss sexual assault statistics, he at least had the excuse of never having gotten a higher education, never studying up on the social sciences. I wonder what Steven Pinker’s excuse is.


Ooooh, I get it. This essay is just an excuse for Pinker to whine about progressives who want to improve other people’s lives. He thought he could hide his complaints behind science, to make them look more digestible to himself and others, but in reality just demonstrated he understands physics worse than most creationists. What a crank.

You’ll notice a bit of a pattern, too, one that apparently carries on into Pinker’s book about the Enlightenment.

It is curious, then, to find Pinker breezily insisting that Enlightenment thinkers used reason to repudiate a belief in an anthropomorphic God and sought a “secular foundation for morality.” Locke clearly represents the opposite impulse (leaving aside the question of whether anyone in the period believed in a strictly anthropomorphic deity).

So, too, Kant. While the Prussian philosopher certainly had little use for the traditional arguments for God’s existence – neither did the exceptionally pious Blaise Pascal, if it comes to that – this was because Kant regarded them as stretching reason beyond its proper limits. Nevertheless, practical reason requires belief in God, immortality and a post-mortem existence that offers some recompense for injustices suffered in the present world.

That’s from Peter Harrison, a professional historian. Even I was aware of this, though I am guilty of a lie of omission. I’ve brought up the “Cult of Reason” before, a pseudo-cult set up during the French Revolution that sought to tear down religion and instead worship logic and reason. What I didn’t mention was that it didn’t last long; Robespierre soon announced his “Cult of the Supreme Being,” which promoted Deism as the official religion of France, and had the leaders of the Cult of Reason put to death. Robespierre himself was executed shortly thereafter, for sounding too much like a dictator, and after a half-hearted attempt at democracy, France finally settled on Napoleon Bonaparte, a dictator everyone could get behind. The shift to reason and objectivity I was hinting at back then was more gradual than I implied.

If we go back to the beginning of the scientific revolution – which Pinker routinely conflates with the Enlightenment – we find the seminal figure Francis Bacon observing that “the human intellect left to its own course is not to be trusted.” Following in his wake, leading experimentalists of the seventeenth century explicitly distinguished what they were doing from rational speculation, which they regarded as the primary source of error in the natural sciences.

In the next century, David Hume, prominent in the Scottish Enlightenment, famously observed that “reason alone can never produce any action … Reason is, and ought only to be the slave of the passions.” And the most celebrated work of Immanuel Kant, whom Pinker rightly regards as emblematic of the Enlightenment, is the Critique of Pure Reason. The clue is in the title.

Reason does figure centrally in discussions of the period, but primarily as an object of critique. Establishing what it was, and its intrinsic limits, was the main game. […]

To return to the general point, contra Pinker, many Enlightenment figures were not interested in undermining traditional religious ideas – God, the immortal soul, morality, the compatibility of faith and reason – but rather in providing them with a more secure foundation. Few would recognise his tendentious alignment of science with reason, his prioritization of scientific over all other forms of knowledge, and his positing of an opposition between science and religion.

I’m just skimming Harrison’s treatment; the rest of the article is worth a detour. It really helps underscore how badly Pinker wants to rewrite history. Here’s something the man himself committed to electrons:

More insidious than the ferreting out of ever more cryptic forms of racism and sexism is a demonization campaign that impugns science (together with the rest of the Enlightenment) for crimes that are as old as civilization, including racism, slavery, conquest, and genocide. […]

“Scientific racism,” the theory that races fall into a hierarchy of mental sophistication with Northern Europeans at the top, is a prime example. It was popular in the decades flanking the turn of the 20th century, apparently supported by craniometry and mental testing, before being discredited in the middle of the 20th century by better science and by the horrors of Nazism. Yet to pin ideological racism on science, in particular on the theory of evolution, is bad intellectual history. Racist beliefs have been omnipresent across history and regions of the world. Slavery has been practiced by every major civilization and was commonly rationalized by the belief that enslaved peoples were inherently suited to servitude, often by God’s design. Statements from ancient Greek and medieval Arab writers about the biological inferiority of Africans would curdle your blood, and Cicero’s opinion of Britons was not much more charitable.

More to the point, the intellectualized racism that infected the West in the 19th century was the brainchild not of science but of the humanities: history, philology, classics, and mythology.

As I’ve touched on, this is so far from reality it’s practically creationist. Let’s ignore the implication that no one used science to promote racism past the 1950s, which ain’t so, and dig up more data points on the dark side of the Enlightenment.

… the Scottish philosopher David Hume would write: “I am apt to suspect the Negroes, and in general all other species of men to be naturally inferior to the whites. There never was any civilized nation of any other complection than white, nor even any individual eminent in action or speculation.” […]

Another two decades on, Immanuel Kant, considered by many to be the greatest philosopher of the modern period, would manage to let slip what is surely the greatest non-sequitur in the history of philosophy: describing a report of something seemingly intelligent that had once been said by an African, Kant dismisses it on the grounds that “this fellow was quite black from head to toe, a clear proof that what he said was stupid.” […]

Scholars have been aware for a long time of the curious paradox of Enlightenment thought, that the supposedly universal aspiration to liberty, equality and fraternity in fact only operated within a very circumscribed universe. Equality was only ever conceived as equality among people presumed in advance to be equal, and if some person or group fell by definition outside of the circle of equality, then it was no failure to live up to this political ideal to treat them as unequal.

It would take explicitly counter-Enlightenment thinkers in the 18th century, such as Johann Gottfried Herder, to formulate anti-racist views of human diversity. In response to Kant and other contemporaries who were positively obsessed with finding a scientific explanation for the causes of black skin, Herder pointed out that there is nothing inherently more in need of explanation here than in the case of white skin: it is an analytic mistake to presume that whiteness amounts to the default setting, so to speak, of the human species.


Indeed, connections between science and the slave trade ran deep during [Robert] Boyle’s time—all the way into the account books. Royal Society accounts for the 1680s and 1690s show semi-regular dividends paid out on the Society’s holdings in Royal African Company stock. £21 here, £21 there, once a year, once every two years. Along with membership dues and the occasional book sale, these dividends supported the Royal Society during its early years.

Boyle’s early “experiments” with the inheritance of skin color set an agenda that the scientists of the Royal Society pursued through the decades. They debated the origins of blackness with rough disregard for the humanity of enslaved persons even as they used the Royal African Company’s dividends to build up the Royal Society as an institution. When it came to understanding skin color, Boyle used his wealth and position to help construct a science of race that, for centuries, was used to justify the enslavement of Africans and their descendants globally.


This timeline gives an overview of scientific racism throughout the world, placing the Eugenics Record Office within a broader historical framework extending from Enlightenment-Era Europe to present-day social thought.

All this is obvious from a glance at a history book, something Pinker is apparently allergic to. I’ll give Harrison the final word:

If we put into the practice the counting and gathering of data that Pinker so enthusiastically recommends and apply them to his own book, the picture is revealing. Locke receives a meagre two mentions in passing. Voltaire clocks up a modest six references with Spinoza coming in at a dozen. Kant does best of all, with a grand total of twenty-five (including references). Astonishingly, Diderot rates only two mentions (again in passing) and D’Alembert does not trouble the scorers. Most of these mentions occur in long lists. Pinker refers to himself over 180 times. […]

… if Enlightenment Now is a model of what Pinker’s advice to humanities scholars looks like when put into practice, I’m happy to keep ignoring it.

Computational Propaganda

Sick of all this memo talk? Too bad, because thanks to Lynna, OM in the Political Madness thread, I discovered a new term: “computational propaganda,” or the use of computers to help spread talking points and generate “grassroots” activism. It’s a lot more advanced than running a few bots, too. You’ll have to read the article to learn the how and why, but I can entice you with its conclusion:

The problem with the term “fake news” is that it is completely wrong, denoting a passive intention. What is happening on social media is very real; it is not passive; and it is information warfare. There is very little argument among analytical academics about the overall impact of “political bots” that seek to influence how we think, evaluate and make decisions about the direction of our countries and who can best lead us—even if there is still difficulty in distinguishing whose disinformation is whose. Samantha Bradshaw, a researcher with Oxford University’s Computational Propaganda Research Project who has helped to document the impact of “polbot” activity, told me: “Often, it’s hard to tell where a particular story comes from. Alt-right groups and Russian disinformation campaigns are often indistinguishable since their goals often overlap. But what really matters is the tools that these groups use to achieve their goals: Computational propaganda serves to distort the political process and amplify fringe views in ways that no previous communication technology could.”

This machinery of information warfare remains within social media’s architecture. The challenge we still have in unraveling what happened in 2016 is how hard it is to pry the Russian components apart from those built by the far- and alt-right—they flex and fight together, and that alone should tell us something. As should the fact that there is a lesser far-left architecture that is coming into its own as part of this machine. And they all play into the same destructive narrative against the American mind.

Democracies have not faced a challenge like this since yellow journalism.

Winning Hearts and Minds

I’ll forgive you if you haven’t heard of Ian Danskin, if only because he’s primarily known on YouTube as Innuendo Studios. You know, the person behind “Why Are You So Angry?” and more recently “The Alt-Right Playbook.” The latter project is aimed at sharpening the rhetoric of progressives to better defend against the “playbook” the Alt-Right uses in online arguments. It’s still a work in progress, but recently Danskin tried to jump ahead and compress it all into a single lecture.

[Read more…]

The Power Of Representation

I kicked around a number of titles for this one: The Persistence of Bias, Science is Social, Beeing Blind. It’s amazing just how many themes can be packed into a Twitter thread.

Hank Campbell: Resist the call to make science about social justice. Astronomers should not be enthusiastic when told that their cosmic observations are inevitably a reflex of the power of the socially privileged.

Ask An Entomologist: Although we disagree with this tweet…it gives us an opportunity to explore a really interesting topic. What we now call ‘queen’ bees (the main female reproductive honeybees) were erroneously called ‘kings’ for nearly 2,000 years. Why? Let’s explore the history of bees!

We’ve been keeping bees for 5,000 years+ and what we called the various classes of bees was closely tied to the societies naming those classes. For instance, in a lot of societies it was very common to call the ‘workers’ slaves because slavery was common at the time. For a while, the big head honcho in the biological sciences was Aristotle, whose book The History of Animals was the accepted word on animal biology in Europe until roughly the 1600s. This book was published around 350 BCE, and discussed honeybees in quite some detail …and is a good reflection of what was known at the time. […] I’d recommend reading the whole thing…it’s really interesting for a number of reasons.

…but in particular, let’s look at how Aristotle described the swarming process. Bees reproduce by swarming: They make new queens, who leave to set up a new hive. The queens take a big chunk of the colony’s workers with them.

“Of the king bees there are, as has been stated, two kinds. In every hive there are more kings than one; and a hive goes to ruin if there be too few kings, not because of anarchy thereby ensuing, but, as we are told, because these creatures contribute in some way to the generation of the common bees. A hive will go also to ruin if there be too large a number of kings in it; for the members of the hives are thereby subdivided into too many separate factions.”

Aristotle didn’t know what we know about bees now…but it was widely accepted that the biggest bees in the colony led the hive somehow and were essential for reproduction and swarming. …but we now know the queens are female. Why didn’t Aristotle?

Well it turns out that Aristotle, frankly, had some *opinions* about women. He was…uh, a little sexist. Which was, like, common at the time. Without going into all of his views on the topic, it’s apparent his views on women pretty heavily influenced what he thought was going on in the beehive. He thought of reproduction as a masculine activity, and thought of women as property. He…just wasn’t very objective about this. So, when he saw a society led almost entirely by women…it actually makes a lot of sense as to why he saw the ‘queen’ bees as male and called them kings. These ideas of women in his circle were so ingrained that a female ruler literally wouldn’t compute.

Moving on through the middle ages, the name ‘king’ kind of stuck because the biological sciences were stuck on Aristotle’s ideas for a very long time. Beekeepers *knew* the queens were female; they were observed laying eggs…but their exact role was controversial outside beekeeping circles. In fact, in most circles, it was commonly accepted that the workers gathered the larvae, which grew on plants. Again, this is from Aristotle’s work.

So…today it’s completely and 100% accepted that queen bees are, in fact, female…and that the honeybee society is led by women. What changed in Western Society to get this idea accepted?

The exact work which popularized the (scientifically accurate) idea of the honeybee as a female-led society was The Feminine Monarchy, by Charles Butler. However, I’d argue Queen Elizabeth, who ruled England from 1558 until her death in 1603, also played a role. Charles Butler (1560-1647) published The Feminine Monarchy in 1609, and had lived under Queen Elizabeth’s reign for most of his life. This is largely a ‘right place, right time’ situation. At this point, there was a lot of science that was just starting up. There had been female rulers before, but not at the exact point where people were rethinking their assumptions. The fact that Charles Butler was interested in bees, *and* lived under a female monarch for most of his life, I think played a major role in his decision to substitute one simple word in his book.

That substitution? He called ‘king bees’ ‘queen bees’…and it stuck.

At this point in Europe’s history, there had been several female monarchs so the idea of a female leader didn’t seem so odd. Society was simply primed to accept the idea of a female ruler.

…but this thread isn’t just about words, it’s also about *sex*.

How so? Sorry, you’ll have to click through for that one. Bee sure to read to the end for the punchline, too. Big kudos are due to @BugQuestions for such an expansive, deep Twitter thread.

The Gender Inclusivity of Diverse White Privilege Equity

Blame Shiv for this one; she posted about someone at Monsanto inviting Jordan Peterson to talk about GMOs, and it led me down an interesting rabbit hole. For one thing, the event already happened, and it was the farce you were expecting. This, however, caught my eye:

Corrupt universities—and Women’s Studies departments in particular, he says—are responsible for turning students into activists who will one day tear apart the fabric of society. “The world runs on ideas. And the ideas that are in the universities are the ideas that are going to be in the general public in five to ten years. And there’s no shielding yourself from it,” he said.

Peterson also shared a trick for figuring out whether or not a child’s school has been affected by the coming crisis: If a schoolteacher uses any of the five words listed on his display screen—”diversity,” “inclusivity,” “equity,” “white privilege,” or “gender”—then a child has been “exposed.”

What’s Peterson’s solution for all this? “The answer to the ills that our society still obviously suffers from,” he said, is that “people should adopt an ethos of responsibility rather than continually clamoring about their rights, which is something that we’ve been talking about for about four decades too long, as far as I can surmise.”

Four decades puts us back in the 1970s, when women’s liberation groups were calling for the right to bodily autonomy, to freedom from violence, to equal pay for equal work, and to equal custody of their kids. If Peterson is opposed to that, then he’s more radical than most MRAs, who are generally fine with Second Wave feminism. I wonder if he’s a lost son of Phyllis Schlafly.

But more importantly, he appears to be warning us of a crisis coming in 5-10 years, one that invokes those five terms as holy writ. That’s … well, let’s step through it.

[Read more…]

FEMINISM ISN’T SCIENCE!11!

Gawd, that line annoys me. I’m sure you’ve heard it or some variation of it: “feminism is a religion,” “feminism destroys science,” “feminism is opposed to science,” and so on. Yet if you take a dip in the social sciences literature, you realise there’s quite a bit of science behind feminist perspectives. While reading up on sexual assault and trauma, for instance, I came across this delightful passage in one paper:

In conjunction with the SES and similar measures, scholars have argued that feminist perspectives have had a profound impact on the sexual assault literature (Adams-Curtis & Forbes, 2004). For example, Brownmiller (1975) emphasized the role of patriarchy and helped shift attention further away from internal pathology to systemic and social issues. Advancements such as the SES and the development of Burt’s (1980) instruments to assess rape-supportive beliefs added weight to feminist perspectives of sexual assault by demonstrating that many men did not label specific sexual encounters against a woman’s will as rape, and even held favorable attitudes toward rape (e.g., Burt, 1980; Malamuth, 1981). Feminist perspectives emphasizing systemic devaluation of women and gender inequality as major contributors to college men’s sexual assault perpetration continue to be widely embraced in the literature.

Feminist perspectives also appear to be a driving force in different approaches to studying men and masculinities in relation to sexual assault perpetration. For instance, our narrative review identified several distinct areas of research, such as general descriptive studies of college sexual assault perpetration rates (e.g., Koss et al., 1987), characteristics of sexual assault perpetration (e.g., Krebs et al., 2007), and key features of sexual assault offenders (e.g., Abbey & McAuslan, 2004). Consistent with previous systematic reviews (e.g., Tharp et al., 2013), findings across each of these domains indicate that, although there are several avenues that may lead a man toward sexual assault perpetration, certain factors are associated with increased risk, such as living in a fraternity (e.g., Murnen & Kohlman, 2007), viewing violent pornography (e.g., Carr & VanDeusen, 2004), using alcohol on dates or believing alcohol increases the chances for sexual access (Abbey, 2011), endorsing violence toward women or accepting rape myths (e.g., Murnen, Wright, & Kaluzny, 2002), engaging in past sexual assault perpetration (e.g., Loh, Gidycz, Lobo, & Luthra, 2005), feeling entitled to sex (e.g., Widman & McNulty, 2010), associating with men who endorse rape-supportive ideologies (Abbey, McAuslan, Zawacki, Clinton, & Buck, 2001; Swartout, 2013), and perceiving that peers endorse rape myths (e.g., Swartout, 2013). In general, investigators emphasized men’s socialization (i.e., masculinities) as a driving force behind each of these risk factors.

McDermott, Ryon C., et al. “College male sexual assault of women and the psychology of men: Past, present, and future directions for research.” Psychology of Men & Masculinity 16.4 (2015): 355-366.

That last paragraph is greatly amusing to me. MRAs love to bring up all the problems men face, but don’t seem to realise that feminists were the first to recognise and study those problems, using frameworks they’d created. Many don’t realise how big a debt they owe. (#notallmen!)

Stephanie Zvan on Recovered Memories

I’ve been hoping for a good second opinion on this topic, and Zvan easily delivers. She has some training in psychology (unlike me), has been dealing with this topic for longer than I have, and by waiting longer to weigh in she’s had more time to craft her arguments. I place high weight on her words, so if you liked what I had to say be sure to read her take as well.

When we look more generally at how memory works, it quickly becomes apparent that focusing exclusively on the recovery of false memories produces lessons that aren’t generally applicable for evaluating memories of traumatic events. We need to continue to be on our guard for the circumstances that produce induced memories, and we have skeptics to thank for very important work on that topic.

However, it’s equally important that we, as skeptics, don’t fall into thinking every memory that people haven’t been shouting from the rooftops from the moment of trauma is induced. Recovered false memories are unusual events that happen under unusual circumstances. Abuse is a common occurrence, typically subject to normal rules of memory.

She also takes a slightly different path than I did. As weird as it may sound, I didn’t cover recovered memories very much in an argument supposedly centred around them; between the science on trauma, the obvious bias of Pendergrast and Crews, the evidence for bias from Loftus, the signs of anomaly hunting, and those court transcripts, I didn’t need to. I could blindly accept their assumptions of how those memories worked, and still have a credible counter-argument. Zvan’s greater familiarity with psychology allows her to take on that angle directly, and it adds much to the conversation. A taste:

Not everyone is susceptible to [false recovered memories]. Brewin and Andrews, writing for The British Psychological Society, characterize the situation thus: “Rather than childhood memories being easy to implant, therefore, a more reasonable conclusion is that they can be implanted in a minority of people given sufficient effort.” Estimates in the studies they look at (including Elizabeth Loftus’s work) show an effect in, on average, 15% of study participants, though they caution actual belief in those memories may be lower.

But enough from me, go read her.