Christina Hoff Sommers: Blatant Science Denialist


So, how’d my predictions of Christina Hoff Sommers’ video pan out?

The standard approach for those challenging rape culture is either to avoid defining the term “rape culture” at all, or to define it as actively encouraging sexual assault instead of passively doing so, setting up a strawperson from the get-go.

Half points for this one. Sommers never defined “rape culture,” but thanks to vague wording she made it sound like “rape culture” was synonymous with “beliefs that encourage the sexual assault of women on college campuses”:

[1:12] Now, does that mean that sexual assault’s not a problem on campus? Of course not! Too many women are victimized. But it’s not an epidemic, and it’s not a culture.

Continuing with my predictions:

Sommers herself is a fan of cherry-picking individual studies or case reports and claiming they’re representative of the whole, and I figure we’ll see a lot of that.

[Success Kid: NAILED IT]

There’s also the clever technique of deliberately missing the point or spinning out half-truths […] I don’t think Sommers will take that approach, preferring to cherry-pick and fiddle with definitions instead, but as a potent tool of denialists it’s worth keeping in mind.

Oooooo, almost. Almost.

While there are a lot of things I could pick apart about this video, I’d like to focus on the most blatant example of her denialism: her juggling of sexual assault statistics.

The first study she cites is an infamous one in conservative circles, the Campus Sexual Assault Study of 2007. Ever since Obama made a big deal of it, they’ve cranked up their noise machine and dug in deep to discredit the study. Sommers benefits greatly from that, doing just a quick hit-and-run.

[0:50] The “one in five” claim is based on a 2007 internet study, with vaguely worded questions, a low response rate, and a non-representative sample.

Oh, how many ways is that wrong? Here’s the actual methodology from the paper (pg 3-1 to 3-2):

Two large public universities participated in the CSA Study. Both universities provided us with data files containing the following information on all undergraduate students who were enrolled in the fall of 2005: full name, gender, race/ethnicity, date of birth, year of study, grade point average, full-time/part-time status, e-mail address, and mailing address. […]

We created four sampling subframes, with cases randomly ordered within each subframe: University 1 women, University 1 men, University 2 women, and University 2 men. […]

Samples were then drawn randomly from each of the four subframes. The sizes of these samples were dictated by response rate projections and sample size targets (4,000 women and 1,000 men, evenly distributed across the universities and years of study) […]

To recruit the students who were sampled to participate in the CSA Study, we relied on both recruitment e-mails and hard copy recruitment letters that were mailed to potential respondents. Sampled students were sent an initial recruitment e-mail that described the study, provided each student with a unique CSA Study ID#, and included a hyperlink to the CSA Study Web site. During each of the following 2 weeks, students who had not completed the survey were sent a follow-up e-mail encouraging them to participate. The third week, nonrespondents were mailed a hard-copy recruitment letter. Two weeks after the hard-copy letters were mailed, nonrespondents were sent a final recruitment e-mail.

Christopher P. Krebs, Christine H. Lindquist, Tara D. Warner, Bonnie S. Fisher, and Sandra L. Martin. “Campus Sexual Assault (CSA) Study, Final Report,” October 2007.

The actual number of responses was 5,446 women and 1,375 men, above expectations. Yes, the authors expected a low response rate with a non-representative sample, and already had methods in place to deal with that; see pages 3-7 to 3-10 of the report for how they compensated, and then verified their methods were valid. Note too that this “internet study” was quite targeted and closed to the public, contrary to what Sommers implies.
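
For the procedurally minded, the sampling design the report describes is ordinary stratified random sampling, and it fits in a few lines of Python. The four subframes and the 4,000-women/1,000-men response targets come straight from the quoted methodology; the toy roster and the 40% projected response rate are my own illustrative stand-ins, since the report’s actual projections aren’t reproduced here:

```python
import random

random.seed(2005)  # reproducible toy example

# Toy roster standing in for the enrollment data files the two universities
# provided (the real records also carried name, e-mail, year of study, GPA, etc.).
roster = [{"id": n, "university": u, "gender": g}
          for n, (u, g) in enumerate(
              [(u, g) for u in ("University 1", "University 2")
                      for g in ("F", "M")] * 10_000)]

def draw_subframe_sample(university, gender, response_target,
                         projected_response_rate=0.4):  # 40% is an assumed rate
    """Randomly draw enough students from one subframe to hit a response target."""
    subframe = [s for s in roster
                if s["university"] == university and s["gender"] == gender]
    random.shuffle(subframe)  # "cases randomly ordered within each subframe"
    n_to_invite = round(response_target / projected_response_rate)
    return subframe[:n_to_invite]

# Response targets: 4,000 women and 1,000 men, evenly split across the
# two universities, per the quoted methodology.
targets = {("University 1", "F"): 2_000, ("University 1", "M"): 500,
           ("University 2", "F"): 2_000, ("University 2", "M"): 500}

samples = {key: draw_subframe_sample(*key, response_target=n)
           for key, n in targets.items()}
```

The point of drawing from randomly ordered subframes, rather than posting an open link, is that every invited student is known in advance; that’s the opposite of a self-selected survey open to the public.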

As to the “vaguely worded” questions, that’s because many people won’t say they were raped even if they were penetrated against their will (e.g. Koss, Mary P., Thomas E. Dinero, Cynthia A. Seibel, and Susan L. Cox. “Stranger and Acquaintance Rape: Are There Differences in the Victim’s Experience?” Psychology of Women Quarterly 12, no. 1 (1988): 1–24). Partly that’s because denial is one way to cope with a traumatic event, and partly because society has told them what happened to them isn’t a crime. So researchers have to tip-toe around “rape culture” just to get an accurate view of sexual assault: yet more evidence that the beast exists after all.

Sommers champions another study as more accurate than the CSA, one from the US Bureau of Justice Statistics which arrives at the quite different figure of one in 52. Sommers appears to be getting her data from Figure 2 in that document, and since that’s on page three, either she or a research assistant must have read page two.

The NCVS is one of several surveys used to study rape and sexual assault in the general and college-age population. In addition to the NCVS, the National Intimate Partner and Sexual Violence Survey (NISVS) and the Campus Sexual Assault Study (CSA) are two recent survey efforts used in research on rape and sexual assault. The three surveys differ in important ways in how rape and sexual assault questions are asked and victimization is measured. […]

The NCVS is presented as a survey about crime, while the NISVS and CSA are presented as surveys about public health. The NISVS and CSA collect data on incidents of unwanted sexual contact that may not rise to a level of criminal behavior, and respondents may not report incidents to the NCVS that they do not consider to be criminal. […]

The NCVS, NISVS, and CSA target different types of events. The NCVS definition is shaped from a criminal justice perspective and includes threatened, attempted, and completed rape and sexual assault against males and females […]

Unlike the NCVS, which uses terms like rape and unwanted sexual activity to identify victims of rape and sexual assault, the NISVS and CSA use behaviorally specific questions to ascertain whether the respondent experienced rape or sexual assault. These surveys ask about an exhaustive list of explicit types of unwanted sexual contact a victim may have experienced, such as being made to perform or receive anal or oral sex.

Lynn Langton and Sofi Sinozich. “Rape and Sexual Assault Among College-age Females, 1995–2013,” December 11, 2014.

This information repeats in Appendix A, which even includes a handy table summarizing all the differences. That it was also shoved onto page two suggests many people have tried to leverage this study to “discredit” others, without realizing the different methodologies make that impossible. The study authors painted these differences in bright neon to guard against stat-mining, but alas, Sommers has no qualms about ignoring all of it to suit her ends. Even the NCVS authors suggest going with the other surveys’ numbers for prevalence, and using theirs only for comparisons between student and non-student populations:

Despite the differences that exist between the surveys, a strength of the NCVS is its ability to be used to make comparisons over time and between population subgroups. The differences observed between students and nonstudents are reliable to the extent that both groups responded in a similar manner to the NCVS context and questions. Methodological differences that lead to higher estimates of rape and sexual assault in the NISVS and CSA should not affect the NCVS comparisons between groups.

In short, Sommers engaged in more half-truths and misleading statements than I predicted. Dang. But hold onto your butts, because things are about to get worse.

[2:41] The claim that 2% of rape accusations are false? That’s unfounded. It seems to have started with Susan Brownmiller’s 1975 feminist manifesto “Against Our Will.” Other statistics for false accusations range from 8 to 43%.

Hmph, so how did Brownmiller come to her 2% figure for false reports? Let’s check her book:

A decade ago the FBI’s Uniform Crime Reports noted that 20 percent of all rapes reported to the police were determined by investigation to be unfounded. By 1973 the figure had dropped to 15 percent, while rape remained, in the FBI’s words, the most underreported crime. A 15 percent figure for false accusations is undeniably high, yet when New York City instituted a special sex crimes analysis squad and put policewomen (instead of men) in charge of interviewing complainants, the number of false charges in New York dropped dramatically to 2 percent, a figure that corresponded exactly to the rate of false reports for other crimes. The lesson in the mystery of the vanishing statistic is obvious. Women believe the word of other women. Men do not.

Brownmiller, Susan. Against Our Will: Men, Women and Rape. Open Road Media, 2013. pg. 435.

…. waaaitaminute. Brownmiller never actually says the 2% figure is the false reporting rate; at best, she merely argues it’s more accurate than figures of 15-20%. And, in fact, it is!

In contrast, when more methodologically rigorous research has been conducted, estimates for the percentage of false reports begin to converge around 2-8%.

Lonsway, Kimberly A., Joanne Archambault, and David Lisak. “False Reports: Moving Beyond the Issue to Successfully Investigate and Prosecute Non-Stranger Sexual Assault” (2009).

That’s taken from the third study Sommers cites, or more accurately, from a summary of other work by Lisak. She quotes two of the three studies in that summary which show rates above 8%. The odd study out gives an even higher false reporting rate than the 8% one Sommers quotes, and should therefore have been better evidence for her, but look at how Lisak describes it:

A similar study was then again sponsored by the Home Office in 1996 (Harris & Grace, 1999). This time, the case files of 483 rape cases were examined, and supplemented with information from a limited number of interviews with sexual assault victims and criminal justice personnel. However, the determination that a report was false was made solely by the police. It is therefore not surprising that the estimate for false allegations (10.9%) was higher than those in other studies with a methodology designed to systematically evaluate these classifications.

That’s impossible to quote-mine. And while Lisak spends a lot of time discussing Kanin’s study, which is the fifth one Sommers presents, she references it directly instead of pulling from Lisak. A small sample of Lisak’s write-up may hint at why he’s been snubbed:

As a result of these and other serious problems with the “research,” Kanin’s (1994) article can be considered “a provocative opinion piece, but it is not a scientific study of the issue of false reporting of rape. It certainly should never be used to assert a scientific foundation for the frequency of false allegations” (Lisak, 2007, p. 1).

Well, at least that fourth study wasn’t quote-mined. Right?

internal rules on false complaints specify that this category should be limited to cases where either there is a clear and credible admission by the complainants, or where there are strong evidential grounds. On this basis, and bearing in mind the data limitations, for the cases where there is information (n=144) the designation of false complaint could be said to be probable (primarily those where the account by the complainant is referred to) in 44 cases, possible (primarily where there is some evidential basis) in a further 33 cases, and uncertain (including where victim characteristics are used to impute that they are inherently less believable) in 77 cases. If the proportion of false complaints on the basis of the probable and possible cases are recalculated, rates of three per cent are obtained, both of all reported cases (n=67 of 2,643), and of those where the outcome is known (n=67 of 2,284). Even if all those designated false by the police were accepted (a figure of approximately ten per cent), this is still much lower than the rate perceived by police officers interviewed in this study.

Kelly, Liz, Jo Lovett, and Linda Regan. A Gap or a Chasm? Attrition in Reported Rape Cases. London: Home Office Research, Development and Statistics Directorate, 2005.

Bolding mine. It’s rather convenient that Sommers quoted the police false report rate of 8% (or “approximately ten per cent” here), yet somehow overlooked the later section where the authors explain that the police inflated that figure. And just as they rounded the 8% up to ten, Liz Kelly and her co-authors also rounded up the “three per cent” figure: divide 67 by 2,643, and you land within fingertip distance of 2%, a false report rate of 2.5%.
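
To make that arithmetic explicit, here’s the recalculation in a few lines of Python, with the numbers taken straight from the Kelly et al. passage above:

```python
# Figures quoted from Kelly, Lovett & Regan (2005).
false_reports = 67      # complaints the authors rate "probable" or "possible" false
all_reported = 2_643    # all reported rape cases in the sample
outcome_known = 2_284   # cases where the outcome is known

print(f"Of all reported cases:  {false_reports / all_reported:.1%}")   # -> 2.5%
print(f"Of outcome-known cases: {false_reports / outcome_known:.1%}")  # -> 2.9%
```

Both figures round up to the authors’ “three per cent,” and both sit far closer to 2% than to the 8% Sommers quoted.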

Lisak did not get the low end of his 2-8% range from Brownmiller; he got it from two large-scale, rigorous studies that concluded a 2% false report rate was reasonable. In his scientific paper, in fact, he explicitly discards Brownmiller’s number:

Another source, cited by Rumney (2006) and widely referenced in the literature on false allegations is a study conducted by the New York City police department and originally referenced by Susan Brownmiller (1975) in her book, Against Our Will: Men, Women and Rape. According to Brownmiller, the study found a false allegation rate of 2%. However, the only citation for the study is a public remark made by a judge at a bar association meeting, and, therefore, no information is available on the study’s sample or methodology.

Lisak, David, Lori Gardinier, Sarah C. Nicksa, and Ashley M. Cote. “False Allegations of Sexual Assault: An Analysis of Ten Years of Reported Cases.” Violence Against Women 16, no. 12 (2010): 1318–34.

That 2% number is actually quite well founded, and Sommers must have known that. Feminists also know of the 2-8% stat, and cite it frequently.

In hindsight, this is a blatant example of the embrace-extend-extinguish pattern of Sommers’ that I discussed earlier. She took one extreme of the feminist position and painted it as the typical one, cherry-picking the evidence in her favor. She took the other extreme as her low point, giving herself the option of invoking a false concession, and then extended her false report range to encompass the majority of false rape report studies out there, most of which are useless:

very few of these estimates are based on research that could be considered credible. Most are reported without the kind of information that would be needed to evaluate their reliability and validity. A few are little more than published opinions, based either on personal experience or a non-systematic review (e.g., of police files, interviews with police investigators, or other information with unknown reliability and validity).

Lisak (2009), pg. 1

Sommers then claims this “middle ground” as her own, riding the Appeal to Moderation for all it’s worth. This is denialism so blatant that no skeptic should take it seriously.

Alas, quite a few do.