Dictionary Atheism and Morality

I’m quite late to the party, I see. Hopefully I can make up for it with a slightly different angle.

There is no shortage of atheists who fetishize the dictionary. “It’s just a lack of belief, nothing more!” they cry, “there’s no moral code attached to it!”

Bullshit. If there is no moral system, why then are dictionary atheists so insistent on being atheist?

Moral codes are prescriptive, while assertions and bare facts are descriptive. One tells us how the world ought to behave, the others how the world is or might be. This can get confusing, I’ll admit. Science is supposed to be in the “descriptive” bin, yet scientists make predictions about how the world ought to behave. That sounds very prescriptive, but what happens when reality and your statement conflict? Say I calculate the trajectory of an asteroid via Newtonian Mechanics, but observe it wandering off my predicted path. Which of the two must change to resolve the contradiction, reality or Newtonian Mechanics? Surely the latter, and that reveals it, and scientific laws in general, to be descriptive: if the description is wrong, or in conflict with reality, it gets tossed.

But this division is further tested by things like evolution. If we ever did find something that broke that theory, like a fossil rabbit in the Precambrian, we would not be justified in tossing evolution. The weight of all other evidence in favor of evolution makes it more likely we got something wrong than that evolution should be dust-binned. We again seem to be prescriptive.

That pile of evidence is our ticket back to descriptiveness, though. One bit of counter-evidence may fall flat, but a large enough heap would not. There is only a finite amount of evidence favoring evolution, so in theory I could still pile up enough counter-evidence to be forced to give that theory up in favor of reality, even if that’s impossible in practice.

In contrast, no amount of evidential persuasion can force me to give up a moral. This too may seem strange; it may not be moral to kill a person, but wouldn’t it be moral to kill Hitler? The information we have about a scenario can dramatically shift the moral action.

But, importantly, it doesn’t shift the moral code. No sane moral system will hold you accountable for honest ignorance, and even the non-sane ones provide an “out” via (for instance) penitence or another loop on the karmic wheel. Instead, you apply the moral code to the knowledge you do have, a code that does not change over time. Slavery was just as bad in the past as it is now; what’s changed instead is us. We as moral agents have progressed, through education, reason, and the occasional violent rebellion. The moral code hasn’t changed; we have adjusted our reality to better match it. Again, we find morality is prescriptive.

So what are we to make of atheists who argue they can only follow the evidence? “Do not hold false beliefs” is prescriptive, because it tells us what to do, yet it’s a necessary assumption behind “I cannot believe in the gods, because there is insufficient evidence to warrant belief.” Having a moral code is an essential prerequisite for every atheist who isn’t that way out of ignorance, and that ignorance dissipates within seconds of hearing someone attempt to describe what a god is.

But… is it true that black people deserve to be paid less than whites? Is it true that women who dress provocatively deserve to be raped? Is it true that the poor are lazy and shiftless? All it takes to believe in any form of social justice is the moral “do not hold false beliefs” and evidence to support “claim X is false.” The minimal moral system for a hardline dictionary atheist is no different than the minimal moral system of a feminist!

Of course, there’s no reason you can’t toss extra morals into the mix. Social justice types would quickly add “allowing false beliefs to persist in others is wrong,” but so too would the dictionary atheist. How else could they justify trying to persuade others away from religion? No doubt those atheists would disavow any additional morals, but so too could a feminist. That one extra premise is enough to justify actively changing the culture we live in.

There might be other differences in the moral code between dictionary atheists and those promoting social justice, but they amount to little more than window dressing. Not only does being an atheist require a moral code, even the “dictionary” brand; the smallest possible code also supports feminists and others engaging in social justice.

So knock off the “atheism has no moral code” crap. It just ain’t true.

It’s About Ethics in Biomedical Research

I’m a bit surprised this didn’t get more play. From what I hear, Pinker has some beef with bioethics.

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

Get out of the way.

A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future.

This path leads to very dark places. I’ll quote a summary I wrote of Blumenthal (2004).[1]

Booker T. Washington had an ambitious plan around the turn of the century: rapidly advancing the health and welfare of African Americans in and around Tuskegee, Alabama. His Tuskegee Institute revived agriculture in the South, built schools and business alliances, created a self-sustaining architectural program, and developed a Black-owned-and-operated hospital.

It also took a keen interest in health issues, and after World War I it faced a major crisis in syphilis. Soldiers returning home led to a dramatic spike in cases; as of 1926, as many as 36% of the population of surrounding Macon County was infected. The best cure at the time was a six-week regimen of toxic drugs with a depressing 30% success rate. Something had to be done.

A short study of six to eight months was proposed, the idea being to track the progression of the disease in African Americans and learn more about it, then administer treatment. It got the full approval of the government, health officials, and local leaders in the African-American community. Substantial outreach was done to bring in patients, explain what the disease was, and even give them free rides to reach the clinic.

But then… circumstances changed. The newly appointed leader of the project, Dr. Raymond Vonderlehr, became fascinated with how syphilis changed people’s bodies. The Great Depression hit, and as of 1933 there wasn’t a lot of money available for treatment. So Vonderlehr decided to make the study longer, and to provide less than the recommended treatment. He also faced the problem of getting subjects to agree to the toxic treatments and painful diagnostic tools, but that was easily solved: stretch the truth, just a bit. The spinal taps used to diagnose whether syphilis had spread to the nervous system became “free special treatment,” even though no actual treatment was given. Disaster struck when other scientists discovered the first effective cure, penicillin; elaborate “procedures” were developed to keep the patients from getting their hands on the drug, even if other infectious diseases threatened their lives.

And the entire time, the project had the full support of the government, and published their results openly.

After the entire incident exploded in the press, a commission of experts was formed to advise the US government on bioethical legislation. The result was the Belmont Report, and one of the three core principles it rested on was

Justice. — Who ought to receive the benefits of research and bear its burdens? This is a question of justice, in the sense of “fairness in distribution” or “what is deserved.” […]

Questions of justice have long been associated with social practices such as punishment, taxation and political representation. Until recently these questions have not generally been associated with scientific research. However, they are foreshadowed even in the earliest reflections on the ethics of research involving human subjects. For example, during the 19th and early 20th centuries the burdens of serving as research subjects fell largely upon poor ward patients, while the benefits of improved medical care flowed primarily to private patients. […]

Against this historical background, it can be seen how conceptions of justice are relevant to research involving human subjects. For example, the selection of research subjects needs to be scrutinized in order to determine whether some classes (e.g., welfare patients, particular racial and ethnic minorities, or persons confined to institutions) are being systematically selected simply because of their easy availability, their compromised position, or their manipulability, rather than for reasons directly related to the problem being studied. Finally, whenever research supported by public funds leads to the development of therapeutic devices and procedures, justice demands both that these not provide advantages only to those who can afford them and that such research should not unduly involve persons from groups unlikely to be among the beneficiaries of subsequent applications of the research.

Ignoring social justice concerns in biomedical research led to things like the Tuskegee experiment. The scientific establishment has since tried to correct that by making those concerns a core part of research ethics. Pinker would be wise to study this history a bit more carefully.

But don’t just take my word for it. Others have also called him out, like Matthew Beard:

Let’s put aside the fact that one paragraph later Pinker casts doubt on our ability to make accurate predictions at all. Because there is an interesting question here.

Let’s assume that hand-wringing ethicists slow progress that cures diseases. As a result, animals aren’t subjected to painful experiments, patients’ autonomy is respected, and “justice” is upheld. At the same time, lots of people died who could otherwise have been saved. Surely, Pinker suggests, this is unethical.

Only under a certain framework, known as utilitarianism, in which the right action is the one that does the most good. And even then, only under certain conditions. For instance, although some research might have saved more lives without ethical constraints, Pinker wants all oversight removed.

Thus, even bad research will operate without ethical restraint. For each pioneering piece of research that saves lives there will be much more insignificant research. And each of these insignificant items will also entail ethical breaches. This makes Pinker’s utilitarian matrix much harder to compute.

… and Wesley J. Smith.

These general principles [that Pinker excludes] are essential to maintaining a moral medical research sector! Indeed, without them, we would easily slouch into a crass utilitarianism that would blatantly treat some human beings as objects instead of subjects.

Bioethics is actually rife with such proposals. For example, one research paper published in a respected journal proposed using unconscious patients as “living cadavers” to test the safety of pig-to-human organ xenotransplantation.

The best defenses of Pinker I’ve seen ignored the bit where he dismissed “social justice” and pretended he was discussing less basic things. It doesn’t reflect well on Pinker.


[1] Blumenthal, Daniel S., and Ralph J. DiClemente, eds. Community-Based Health Research: Issues and Methods. Springer Publishing Company, 2004. pp. 48–53.

Christina Hoff Sommers: Blatant Science Denialist

So, how’d my predictions about Christina Hoff Sommers’ video pan out?

The standard approach for those challenging rape culture is either to avoid defining the term “rape culture” at all, or to define it as actively encouraging sexual assault instead of passively doing so, setting up a strawperson from the get-go.

Half points for this one. Sommers never defined “rape culture,” but thanks to vague wording she made it sound like “rape culture” was synonymous with “beliefs that encourage the sexual assault of women on college campuses”:

[1:12] Now, does that mean that sexual assault’s not a problem on campus? Of course not! Too many women are victimized. But it’s not an epidemic, and it’s not a culture.

Continuing with myself:

Sommers herself is a fan of cherry-picking individual studies or case reports and claiming they’re representative of the whole, and I figure we’ll see a lot of that.

Success kid: NAILED IT

There’s also the clever technique of deliberately missing the point or spinning out half-truths […] I don’t think Sommers will take that approach, preferring to cherry-pick and fiddle with definitions instead, but as a potent tool of denialists it’s worth keeping in mind.

Oooooo, almost. Almost.

While there are a lot of things I could pick apart in this video, I’d like to focus on the most blatant example of her denialism: her juggling of sexual assault statistics.

The first study she cites is an infamous one in conservative circles, the Campus Sexual Assault Study of 2007. Ever since Obama made a big deal of it, they’ve cranked up their noise machine and dug in deep to discredit the study. Sommers benefits greatly from that, doing just a quick hit-and-run.

[0:50] The “one in five” claim is based on a 2007 internet study, with vaguely worded questions, a low response rate, and a non-representative sample.

Oh, how many ways is that wrong? Here’s the actual methodology from the paper (pg 3-1 to 3-2):

Two large public universities participated in the CSA Study. Both universities provided us with data files containing the following information on all undergraduate students who were enrolled in the fall of 2005: full name, gender, race/ethnicity, date of birth, year of study, grade point average, full-time/part-time status, e-mail address, and mailing address. […]

We created four sampling subframes, with cases randomly ordered within each subframe: University 1 women, University 1 men, University 2 women, and University 2 men. […]

Samples were then drawn randomly from each of the four subframes. The sizes of these samples were dictated by response rate projections and sample size targets (4,000 women and 1,000 men, evenly distributed across the universities and years of study) […]

To recruit the students who were sampled to participate in the CSA Study, we relied on both recruitment e-mails and hard copy recruitment letters that were mailed to potential respondents. Sampled students were sent an initial recruitment e-mail that described the study, provided each student with a unique CSA Study ID#, and included a hyperlink to the CSA Study Web site. During each of the following 2 weeks, students who had not completed the survey were sent a follow-up e-mail encouraging them to participate. The third week, nonrespondents were mailed a hard-copy recruitment letter. Two weeks after the hard-copy letters were mailed, nonrespondents were sent a final recruitment e-mail.

Krebs, Christopher P., Christine H. Lindquist, Tara D. Warner, Bonnie S. Fisher, and Sandra L. Martin. “Campus Sexual Assault (CSA) Study, Final Report,” October 2007.

The actual number of responses was 5,446 women and 1,375 men, above expectations. Yes, the authors expected a low response rate with a non-representative sample, and already had methods in place to deal with that; see pages 3-7 to 3-10 of the report for how they compensated, and then verified their methods were valid. Note too that this “internet study” was quite targeted and closed to the public, contrary to what Sommers implies.

As to the “vaguely worded” questions, that’s because many people won’t say they were raped even if they were penetrated against their will (e.g. Koss, Mary P., Thomas E. Dinero, Cynthia A. Seibel, and Susan L. Cox. “Stranger and Acquaintance Rape: Are There Differences in the Victim’s Experience?” Psychology of Women Quarterly 12, no. 1 (1988): 1–24). Partly that’s because denial is one way to cope with a traumatic event, and partly because society has told them it isn’t a crime. So researchers have to tip-toe around “rape culture” just to get an accurate view of sexual assault, yet more evidence that the beast exists after all.

Sommers champions another study as more accurate than the CSA, one from the US Bureau of Justice Statistics which comes to the quite-different figure of one in 52. Sommers appears to be getting her data from Figure 2 in that document, and since that’s on page three either she or a research assistant must have read page two.

The NCVS is one of several surveys used to study rape and sexual assault in the general and college-age population. In addition to the NCVS, the National Intimate Partner and Sexual Violence Survey (NISVS) and the Campus Sexual Assault Study (CSA) are two recent survey efforts used in research on rape and sexual assault. The three surveys differ in important ways in how rape and sexual assault questions are asked and victimization is measured. […]

The NCVS is presented as a survey about crime, while the NISVS and CSA are presented as surveys about public health. The NISVS and CSA collect data on incidents of unwanted sexual contact that may not rise to a level of criminal behavior, and respondents may not report incidents to the NCVS that they do not consider to be criminal. […]

The NCVS, NISVS, and CSA target different types of events. The NCVS definition is shaped from a criminal justice perspective and includes threatened, attempted, and completed rape and sexual assault against males and females […]

Unlike the NCVS, which uses terms like rape and unwanted sexual activity to identify victims of rape and sexual assault, the NISVS and CSA use behaviorally specific questions to ascertain whether the respondent experienced rape or sexual assault. These surveys ask about an exhaustive list of explicit types of unwanted sexual contact a victim may have experienced, such as being made to perform or receive anal or oral sex.

Langton, Lynn, and Sofi Sinozich. “Rape and Sexual Assault Among College-age Females, 1995-2013,” December 11, 2014.

This information is repeated in Appendix A, which even includes a handy table summarizing all of the differences. That it was also shoved onto page two suggests plenty of people have tried to leverage this study to “discredit” others, without realizing the different methodologies make that impossible. The study authors painted these differences in bright neon to guard against stat-mining, but alas, Sommers has no qualms about ignoring all of it to suit her ends. Even the NCVS authors suggest going with the other surveys’ numbers for prevalence, and only using theirs for differences between student and non-student populations:

Despite the differences that exist between the surveys, a strength of the NCVS is its ability to be used to make comparisons over time and between population subgroups. The differences observed between students and nonstudents are reliable to the extent that both groups responded in a similar manner to the NCVS context and questions. Methodological differences that lead to higher estimates of rape and sexual assault in the NISVS and CSA should not affect the NCVS comparisons between groups.

In short, Sommers engaged in more half-truths and misleading statements than I predicted. Dang. But hold onto your butts, because things are about to get worse.

[2:41] The claim that 2% of rape accusations are false? That’s unfounded. It seems to have started with Susan Brownmiller’s 1975 feminist manifesto “Against Our Will.” Other statistics for false accusations range from 8 to 43%.

Hmph, so how did Brownmiller come to her 2% figure for false reports? Let’s check her book:

A decade ago the FBI’s Uniform Crime Reports noted that 20 percent of all rapes reported to the police were determined by investigation to be unfounded. By 1973 the figure had dropped to 15 percent, while rape remained, in the FBI’s words, “the most underreported crime.” A 15 percent figure for false accusations is undeniably high, yet when New York City instituted a special sex crimes analysis squad and put policewomen (instead of men) in charge of interviewing complainants, the number of false charges in New York dropped dramatically to 2 percent, a figure that corresponded exactly to the rate of false reports for other crimes. The lesson in the mystery of the vanishing statistic is obvious. Women believe the word of other women. Men do not.

Brownmiller, Susan. Against Our Will: Men, Women and Rape. Open Road Media, 2013. pg. 435.

…. waaaitaminute. Brownmiller never actually says the 2% figure is the false reporting rate; at best, she merely argues it’s more accurate than figures of 15-20%. And, in fact, it is!

In contrast, when more methodologically rigorous research has been conducted, estimates for the percentage of false reports begin to converge around 2-8%.

Lonsway, Kimberly A., Joanne Archambault, and David Lisak. “False reports: Moving beyond the issue to successfully investigate and prosecute non-stranger sexual assault.” (2009).

That’s taken from the third study Sommers cites, or more accurately a summary of other work by Lisak. Of the three studies in that summary showing rates above 8%, she quotes two. The odd study out gives an even higher false reporting rate than the 8% one Sommers quotes, and should therefore have made better evidence for her, but look at how Lisak describes it:

A similar study was then again sponsored by the Home Office in 1996 (Harris & Grace, 1999). This time, the case files of 483 rape cases were examined, and supplemented with information from a limited number of interviews with sexual assault victims and criminal justice personnel. However, the determination that a report was false was made solely by the police. It is therefore not surprising that the estimate for false allegations (10.9%) was higher than those in other studies with a methodology designed to systematically evaluate these classifications.

That’s impossible to quote-mine. And while Lisak spends a lot of time discussing Kanin’s study, which is the fifth one Sommers presents, she references it directly instead of pulling from Lisak’s summary. A small sample may hint at why that summary was snubbed:

As a result of these and other serious problems with the “research,” Kanin’s (1994) article can be considered “a provocative opinion piece, but it is not a scientific study of the issue of false reporting of rape. It certainly should never be used to assert a scientific foundation for the frequency of false allegations” (Lisak, 2007, p. 1).

Well, at least that fourth study wasn’t quote-mined. Right?

internal rules on false complaints specify that this category should be limited to cases where either there is a clear and credible admission by the complainants, or where there are strong evidential grounds. On this basis, and bearing in mind the data limitations, for the cases where there is information (n=144) the designation of false complaint could be said to be probable (primarily those where the account by the complainant is referred to) in 44 cases, possible (primarily where there is some evidential basis) in a further 33 cases, and uncertain (including where victim characteristics are used to impute that they are inherently less believable) in 77 cases. If the proportion of false complaints on the basis of the probable and possible cases are recalculated, rates of three per cent are obtained, both of all reported cases (n=67 of 2,643), and of those where the outcome is known (n=67 of 2,284). Even if all those designated false by the police were accepted (a figure of approximately ten per cent), this is still much lower than the rate perceived by police officers interviewed in this study.

Kelly, Liz, Jo Lovett, and Linda Regan. A Gap or a Chasm? Attrition in Reported Rape Cases. London: Home Office Research, Development and Statistics Directorate, 2005.

Bolding mine. It’s rather convenient that Sommers quoted the police false report rate of 8% (or “approximately ten per cent” here), yet somehow overlooked the later section where the authors explain that the police inflated the false report figure. In the same way they rounded the 8% up to ten, Liz Kelly and her co-authors also rounded up the “three per cent” figure; divide 67 by 2,643, and you get within fingertip distance of 2%, a false report rate of roughly 2.5%.
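For the curious, here’s a minimal sketch of that arithmetic in Python, using only the counts quoted in the excerpt above (the variable names are mine, not the report’s):

```python
# Back-of-the-envelope check of the Kelly et al. (2005) figures quoted above.
# The counts come straight from the excerpt; the variable names are my own.
probable_false = 44      # "probable" false complaints
possible_false = 33      # "possible" false complaints
all_reported = 2643      # all reported rape cases in the sample
outcome_known = 2284     # cases where the final outcome is known

false_complaints = probable_false + possible_false  # 67

# Both denominators land in the 2-3% range, well below the ~10% the police claimed.
print(f"{false_complaints / all_reported:.1%} of all reported cases")    # ~2.5%
print(f"{false_complaints / outcome_known:.1%} of outcome-known cases")  # ~2.9%
```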

Lisak did not get the low-end of his 2-8% range from Brownmiller; he got it from two large-scale, rigorous studies that concluded a 2% false report rate was reasonable. In his scientific paper, in fact, he explicitly discards Brownmiller’s number:

Another source, cited by Rumney (2006) and widely referenced in the literature on false allegations is a study conducted by the New York City police department and originally referenced by Susan Brownmiller (1975) in her book, Against Our Will: Men, Women and Rape. According to Brownmiller, the study found a false allegation rate of 2%. However, the only citation for the study is a public remark made by a judge at a bar association meeting, and, therefore, no information is available on the study’s sample or methodology.

Lisak, David, Lori Gardinier, Sarah C. Nicksa, and Ashley M. Cote. “False Allegations of Sexual Assault: An Analysis of Ten Years of Reported Cases.” Violence Against Women 16, no. 12 (2010): 1318–34.

That 2% number is actually quite well founded, and Sommers must have known that. Feminists also know of the 2-8% stat, and cite it frequently.

In hindsight, this is a blatant example of the embrace-extend-extinguish pattern of Sommers that I discussed earlier. She took one extreme of the feminist position, then painted it as the typical one while cherry-picking the evidence in her favor. She took the other extreme as her low point, so she had the option of invoking a false concession, and then extended her false report range to encompass the majority of false rape report studies out there, most of which are useless.

very few of these estimates are based on research that could be considered credible. Most are reported without the kind of information that would be needed to evaluate their reliability and validity. A few are little more than published opinions, based either on personal experience or a non-systematic review (e.g., of police files, interviews with police investigators, or other information with unknown reliability and validity).

Lisak (2009), pg. 1

Sommers then claims this “middle ground” as her own, riding the Appeal to Moderation for all it’s worth. This is denialism so blatant that no skeptic should take it seriously.

Alas, quite a few do.

Christina Hoff Sommers: Science Denialist?

In a bizarre coincidence, just three days before my lecture on rape culture, Christina Hoff Sommers happened to weigh in on the topic. I haven’t seen the video yet, which puts me in a great position to lay a little groundwork and make some predictions.

First off, we’ve got to get our definitions straight. “Rape culture” is the cloud of myths about sexual assault that exist within our society, which make it easier to excuse that crime and/or tougher for victims to recover or seek justice. Take Burt’s 1980 paper on the subject:

The burgeoning popular literature on rape (e.g., Brownmiller, 1975; Clark & Lewis, 1977) all points to the importance of stereotypes and myths — defined as prejudicial, stereotyped, or false beliefs about rape, rape victims, and rapists — in creating a climate hostile to rape victims. Examples of rape myths are “only bad girls get raped”; “any healthy woman can resist a rapist if she really wants to”; “women ask for it”; “women ‘cry rape’ only when they’ve been jilted or have something to cover up”; “rapists are sex-starved, insane, or both.” Recently, researchers have begun to document that rape myths appear in the belief systems of lay people and of professionals who interact with rape victims and assailants (e.g., Barber, 1974; Burt, 1978; Feild, 1978; Kalven & Zeisel, 1966). Writers have analyzed how rape myths have been institutionalized in the law (Berger, 1977) […]

Much feminist writing on rape maintains that we live in a rape culture that supports the objectification of, and violent and sexual abuse of, women through movies, television, advertising, and “girlie” magazines (see, e.g., Brownmiller, 1975). We hypothesized that exposure to such material would increase rape myth acceptance because it would tend to normalize coercive and brutal sexuality.
Burt, Martha R. “Cultural Myths and Supports for Rape.” Journal of Personality and Social Psychology 38, no. 2 (1980): 217.
http://www.excellenceforchildandyouth.ca/sites/default/files/meas_attach/burt_1980.pdf

You can see how the definition has shifted a little over time; objectification certainly helps dehumanize your victim, but it’s not a strict necessity, and while in all modern societies that I know of women are disproportionately targeted for gender-based violence, there’s still a non-trivial number of male victims out there.

There are two ways to demonstrate “rape culture” is itself a myth. The most obvious route is to challenge the “rape myth” part, and show either that those myths are in line with reality or are not commonly held in society. For instance, either good girls do not get raped, or few people believe that good girls do not get raped. Based on even a small, narrow sample of the literature, this is a tough hill to climb. I did a quick Google Scholar search, and even when I asked specifically for “rape myth acceptance” I had no problem pulling a thousand results, with Google claiming to have another 2,500 or so it wouldn’t show me. There must be a consensus on “rape culture,” based merely on volume, and to pick a side opposing that consensus is to be a science denialist.

The less obvious route is to challenge the “help perpetrators/harm victims” portion. Consider the “rubber sheet model” of General Relativity; we know this is wrong, and not just because it depends on gravity to explain gravity, but nonetheless the model is close enough to reality that non-physicists get the gist of things without having to delve into equations. It’s a myth, but the benefits outweigh the harms. Sommers could take a similar approach to sexual assault, not so much arguing that rape myths are a net benefit but instead riding the “correlation is not causation” line and arguing the myths don’t excuse perpetrators or harm victims. This approach has problems too, as correlation can be evidence for causation when there’s a plausible mechanism, and past a point this approach also becomes science denialism. Overall, though, I think it’s Sommers’ best route.

If she gets that far, of course. The standard approach for those challenging rape culture is either to avoid defining the term “rape culture” at all, or to define it as actively encouraging sexual assault instead of passively doing so, setting up a strawperson from the get-go. Sommers herself is a fan of cherry-picking individual studies or case reports and claiming they’re representative of the whole, and I figure we’ll see a lot of that. There’s also the clever technique of deliberately missing the point or spinning out half-truths: take this video about date rape drugs by her partner-in-crime Caroline Kitchens, for instance. Her conclusion is that date rape drugs are over-hyped, and having looked at the literature myself I agree with her… so long as we exclude alcohol as a “date rape drug.” If you include it, then the picture shifts dramatically.

Numerous sources implicate alcohol use/abuse as either a cause of or contributor to sexual assault. … Across both the literatures on sexual assault and on alcohol’s side effects, several lines of empirical data and theory-based logic suggest that alcohol is a contributing factor to sexual assault.
George, William H., and Susan A. Stoner. “Understanding acute alcohol effects on sexual behavior.” Annual review of sex research 11.1 (2000): 92-124.

General alcohol consumption could be related to sexual assault through multiple pathways. First, men who often drink heavily also likely do so in social situations that frequently lead to sexual assault (e.g., on a casual or spontaneous date at a party or bar). Second, heavy drinkers may routinely use intoxication as an excuse for engaging in socially unacceptable behavior, including sexual assault (Abbey et al. 1996b). Third, certain personality characteristics (e.g., impulsivity and antisocial behavior) may increase men’s propensity both to drink heavily and to commit sexual assault (Seto and Barbaree 1997).

Certain alcohol expectancies have also been linked to sexual assault. For example, alcohol is commonly viewed as an aphrodisiac that increases sexual desire and capacity (Crowe and George 1989). Many men expect to feel more powerful, disinhibited, and aggressive after drinking alcohol. … Furthermore, college men who had perpetrated sexual assault when intoxicated expected alcohol to increase male and female sexuality more than did college men who perpetrated sexual assault when sober (Abbey et al. 1996b). Men with these expectancies may feel more comfortable forcing sex when they are drinking, because they can later justify to themselves that the alcohol made them act accordingly (Kanin 1984).

Attitudes about women’s alcohol consumption also influence a perpetrator’s actions and may be used to excuse sexual assaults of intoxicated women. Despite the liberalization of gender roles during the past few decades, most people do not readily approve of alcohol consumption and sexual behavior among women, yet view these same behaviors among men with far more leniency (Norris 1994). Thus, women who drink alcohol are frequently perceived as being more sexually available and promiscuous compared with women who do not drink (Abbey et al. 1996b). … In fact, date rapists frequently report intentionally getting the woman drunk in order to have sexual intercourse with her (Abbey et al. 1996b).
Abbey, Antonia, et al. “Alcohol and sexual assault.” Alcohol Research and Health 25.1 (2001): 43-51.
http://pubs.niaaa.nih.gov/publications/arh25-1/43-51.htm

I don’t think Sommers will take that approach, preferring to cherry-pick and fiddle with definitions instead, but as a potent tool of denialists it’s worth keeping in mind.

With that preamble out of the way, we can begin….