Stephanie Zvan on Recovered Memories

I’ve been hoping for a good second opinion on this topic, and Zvan easily delivers. She has some training in psychology (unlike me), has been dealing with this topic for longer than I have, and by waiting longer to weigh in she’s had more time to craft her arguments. I place high weight on her words, so if you liked what I had to say be sure to read her take as well.

When we look more generally at how memory works, it quickly becomes apparent that focusing exclusively on the recovery of false memories produces lessons that aren’t generally applicable for evaluating memories of traumatic events. We need to continue to be on our guard for the circumstances that produce induced memories, and we have skeptics to thank for very important work on that topic.

However, it’s equally important that we, as skeptics, don’t fall into thinking every memory that people haven’t been shouting from the rooftops from the moment of trauma is induced. Recovered false memories are unusual events that happen under unusual circumstances. Abuse is a common occurrence, typically subject to normal rules of memory.

She also takes a slightly different path than I did. As weird as it may sound, I didn’t cover recovered memories very much in an argument supposedly centred around them; between the science on trauma, the obvious bias of Pendergrast and Crews, the evidence for bias from Loftus, the signs of anomaly hunting, and those court transcripts, I didn’t need to. I could blindly accept their assumptions of how those memories worked, and still have a credible counter-argument. Zvan’s greater familiarity with psychology allows her to take on that angle directly, and it adds much to the conversation. A taste:

Not everyone is susceptible to [false recovered memories]. Brewin and Andrews, writing for The British Psychological Society, characterize the situation thus: “Rather than childhood memories being easy to implant, therefore, a more reasonable conclusion is that they can be implanted in a minority of people given sufficient effort.” Estimates in the studies they look at (including Elizabeth Loftus’s work) show an effect in, on average, 15% of study participants, though they caution actual belief in those memories may be lower.

But enough from me, go read her.

Where Have You Been?

Thomas Smith released a podcast episode about his time at MythCon. I have a few nitpicks about it; the bit where he chastised people for calling the organizers “Nazis” because it didn’t help him came across as tone policing and a touch self-absorbed, and I was miffed he didn’t mention Monette Richards when he listed off people who’d been right about what would happen. But that needs to be weighed against the rest of what he said on that podcast, and in particular an honest-to-goodness ultimatum he issued to Mythicist Milwaukee: change and disavow your problematic board members, or he’ll do everything he can to discourage people from attending their events. Never thought I’d hear something like that from him.

The kudos and love he’s getting right now are deserved. His performance at MythCon was the best anyone could hope for, based on the few scraps I’m seeing. And yet, those kudos come with a bitter taste. Steve Shives beat me to the reason why, and Smith himself has suggested he agrees with Shives, so in some sense what follows is redundant. But it’s a point that needs emphasis and repetition until it fully sinks in.

Christina Hoff Sommers: Blatant Science Denialist

So, how’d my predictions about Christina Hoff Sommers’ video pan out?

The standard approach for those challenging rape culture is either to avoid defining the term “rape culture” at all, or to define it as actively encouraging sexual assault instead of passively doing so, setting up a strawperson from the get-go.

Half points for this one. Sommers never defined “rape culture,” but thanks to vague wording she made it sound like “rape culture” was synonymous with “beliefs that encourage the sexual assault of women on college campuses”:

[1:12] Now, does that mean that sexual assault’s not a problem on campus? Of course not! Too many women are victimized. But it’s not an epidemic, and it’s not a culture.

Continuing with myself:

Sommers herself is a fan of cherry-picking individual studies or case reports and claiming they’re representative of the whole, and I figure we’ll see a lot of that.

[Image: Success Kid meme, captioned “NAILED IT”]

There’s also the clever technique of deliberately missing the point or spinning out half-truths […] I don’t think Sommers will take that approach, preferring to cherry-pick and fiddle with definitions instead, but as a potent tool of denialists it’s worth keeping in mind.

Oooooo, almost. Almost.

While there are a lot of things I could pick apart in this video, I’d like to focus on the most blatant examples of her denialism: her juggling of sexual assault statistics.

The first study she cites is an infamous one in conservative circles, the Campus Sexual Assault Study of 2007. Ever since Obama made a big deal of it, they’ve cranked up their noise machine and dug in deep to discredit the study. Sommers benefits greatly from that, doing just a quick hit-and-run.

[0:50] The “one in five” claim is based on a 2007 internet study, with vaguely worded questions, a low response rate, and a non-representative sample.

Oh, how many ways is that wrong? Here’s the actual methodology from the paper (pg 3-1 to 3-2):

Two large public universities participated in the CSA Study. Both universities provided us with data files containing the following information on all undergraduate students who were enrolled in the fall of 2005: full name, gender, race/ethnicity, date of birth, year of study, grade point average, full-time/part-time status, e-mail address, and mailing address. […]

We created four sampling subframes, with cases randomly ordered within each subframe: University 1 women, University 1 men, University 2 women, and University 2 men. […]

Samples were then drawn randomly from each of the four subframes. The sizes of these samples were dictated by response rate projections and sample size targets (4,000 women and 1,000 men, evenly distributed across the universities and years of study) […]

To recruit the students who were sampled to participate in the CSA Study, we relied on both recruitment e-mails and hard copy recruitment letters that were mailed to potential respondents. Sampled students were sent an initial recruitment e-mail that described the study, provided each student with a unique CSA Study ID#, and included a hyperlink to the CSA Study Web site. During each of the following 2 weeks, students who had not completed the survey were sent a follow-up e-mail encouraging them to participate. The third week, nonrespondents were mailed a hard-copy recruitment letter. Two weeks after the hard-copy letters were mailed, nonrespondents were sent a final recruitment e-mail.

Christopher P. Krebs, Christine H. Lindquist, Tara D. Warner, Bonnie S. Fisher, and Sandra L. Martin. “Campus Sexual Assault (CSA) Study, Final Report,” October 2007.

The actual number of responses was 5,446 women and 1,375 men, above expectations. Yes, the authors expected a low response rate with a non-representative sample, and already had methods in place to deal with that; see pages 3-7 to 3-10 of the report for how they compensated, and then verified their methods were valid. Note too that this “internet study” was quite targeted and closed to the public, contrary to what Sommers implies.
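For readers curious what that kind of compensation looks like in practice, here is a minimal sketch of non-response weighting, using made-up numbers; it is only a generic illustration of the technique, not the CSA’s actual procedure (the report’s pages 3-7 to 3-10 describe what the authors really did).

```python
# A minimal sketch of non-response weighting. Generic illustration only --
# NOT the CSA's actual procedure, and all counts below are invented.

frame = {        # hypothetical sampling-frame counts, by subframe
    "U1 women": 10000, "U1 men": 9000,
    "U2 women": 11000, "U2 men": 9500,
}
respondents = {  # hypothetical respondent counts, by subframe
    "U1 women": 2700, "U1 men": 700,
    "U2 women": 2746, "U2 men": 675,
}

total_frame = sum(frame.values())
total_resp = sum(respondents.values())

# Up-weight groups that under-responded relative to their share of the frame,
# down-weight groups that over-responded.
weights = {
    group: (frame[group] / total_frame) / (respondents[group] / total_resp)
    for group in frame
}

for group, weight in sorted(weights.items()):
    print(f"{group}: weight {weight:.2f}")
```

Weighted this way, an under-responding group counts for more per respondent, so the weighted totals line up with the composition of the full student body rather than with whoever happened to answer.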

As to the “vaguely-worded” questions, that’s because many people won’t say they were raped even if they were penetrated against their will (e.g. Koss, Mary P., Thomas E. Dinero, Cynthia A. Seibel, and Susan L. Cox. “Stranger and Acquaintance Rape: Are There Differences in the Victim’s Experience?” Psychology of Women Quarterly 12, no. 1 (1988): 1–24). Partly that’s because denial is one way to cope with a traumatic event, and partly because they’ve been told by society that it isn’t a crime. So researchers have to tip-toe around “rape culture” just to get an accurate view of sexual assault, which is yet more evidence that the beast exists after all.

Sommers champions another study as more accurate than the CSA, one from the US Bureau of Justice Statistics which comes to the quite-different figure of one in 52. Sommers appears to be getting her data from Figure 2 in that document, and since that’s on page three either she or a research assistant must have read page two.

The NCVS is one of several surveys used to study rape and sexual assault in the general and college-age population. In addition to the NCVS, the National Intimate Partner and Sexual Violence Survey (NISVS) and the Campus Sexual Assault Study (CSA) are two recent survey efforts used in research on rape and sexual assault. The three surveys differ in important ways in how rape and sexual assault questions are asked and victimization is measured. […]

The NCVS is presented as a survey about crime, while the NISVS and CSA are presented as surveys about public health. The NISVS and CSA collect data on incidents of unwanted sexual contact that may not rise to a level of criminal behavior, and respondents may not report incidents to the NCVS that they do not consider to be criminal. […]

The NCVS, NISVS, and CSA target different types of events. The NCVS definition is shaped from a criminal justice perspective and includes threatened, attempted, and completed rape and sexual assault against males and females […]

Unlike the NCVS, which uses terms like rape and unwanted sexual activity to identify victims of rape and sexual assault, the NISVS and CSA use behaviorally specific questions to ascertain whether the respondent experienced rape or sexual assault. These surveys ask about an exhaustive list of explicit types of unwanted sexual contact a victim may have experienced, such as being made to perform or receive anal or oral sex.

Lynn Langton and Sofi Sinozich. “Rape and Sexual Assault Among College-Age Females, 1995–2013.” Bureau of Justice Statistics, December 11, 2014.

This information repeats in Appendix A, which even includes a handy table summarizing all of the differences. That it was also shoved onto page two suggests many people have tried to leverage this study to “discredit” others, without realizing the different methodologies make that impossible. The study authors tried to paint these differences in bright neon to guard against any stat-mining, but alas, Sommers has no qualms about ignoring all that to suit her ends. Even the NCVS authors suggest going with other numbers for prevalence and only using theirs for differences between student and non-student populations:

Despite the differences that exist between the surveys, a strength of the NCVS is its ability to be used to make comparisons over time and between population subgroups. The differences observed between students and nonstudents are reliable to the extent that both groups responded in a similar manner to the NCVS context and questions. Methodological differences that lead to higher estimates of rape and sexual assault in the NISVS and CSA should not affect the NCVS comparisons between groups.

In short, Sommers engaged in more half-truths and misleading statements than I predicted. Dang. But hold onto your butts, because things are about to get worse.

[2:41] The claim that 2% of rape accusations are false? That’s unfounded. It seems to have started with Susan Brownmiller’s 1975 feminist manifesto “Against Our Will.” Other statistics for false accusations range from 8 to 43%.

Hmph, so how did Brownmiller come to her 2% figure for false reports? Let’s check her book:

A decade ago the FBI’s Uniform Crime Reports noted that 20 percent of all rapes reported to the police were determined by investigation to be unfounded. By 1973 the figure had dropped to 15 percent, while rape remained, in the FBI’s words, the most underreported crime. A 15 percent figure for false accusations is undeniably high, yet when New York City instituted a special sex crimes analysis squad and put policewomen (instead of men) in charge of interviewing complainants, the number of false charges in New York dropped dramatically to 2 percent, a figure that corresponded exactly to the rate of false reports for other crimes. The lesson in the mystery of the vanishing statistic is obvious. Women believe the word of other women. Men do not.

Brownmiller, Susan. Against Our Will: Men, Women and Rape. Open Road Media, 2013. pg. 435.

…. waaaitaminute. Brownmiller never actually says the 2% figure is the false reporting rate; at best, she merely argues it’s more accurate than figures of 15-20%. And, in fact, it is!

In contrast, when more methodologically rigorous research has been conducted, estimates for the percentage of false reports begin to converge around 2-8%.

Lonsway, Kimberly A., Joanne Archambault, and David Lisak. “False Reports: Moving Beyond the Issue to Successfully Investigate and Prosecute Non-Stranger Sexual Assault.” (2009).

That’s taken from the third study Sommers cites, or more accurately a summary of other work by Lisak. She quotes two of the three studies in that summary which show rates above 8%. The odd study out gives an even higher false reporting rate than the 8% one Sommers quotes, and should therefore have been better evidence, but look at how Lisak describes it:

A similar study was then again sponsored by the Home Office in 1996 (Harris & Grace, 1999). This time, the case files of 483 rape cases were examined, and supplemented with information from a limited number of interviews with sexual assault victims and criminal justice personnel. However, the determination that a report was false was made solely by the police. It is therefore not surprising that the estimate for false allegations (10.9%) was higher than those in other studies with a methodology designed to systematically evaluate these classifications.

That’s impossible to quote-mine. And while Lisak spends a lot of time discussing Kanin’s study, which is the fifth one Sommers presents, she references it directly instead of pulling from Lisak. A small sample may hint at why Lisak has been snubbed:

As a result of these and other serious problems with the “research,” Kanin’s (1994) article can be considered “a provocative opinion piece, but it is not a scientific study of the issue of false reporting of rape. It certainly should never be used to assert a scientific foundation for the frequency of false allegations” (Lisak, 2007, p. 1).

Well, at least that fourth study wasn’t quote-mined. Right?

internal rules on false complaints specify that this category should be limited to cases where either there is a clear and credible admission by the complainants, or where there are strong evidential grounds. On this basis, and bearing in mind the data limitations, for the cases where there is information (n=144) the designation of false complaint could be said to be probable (primarily those where the account by the complainant is referred to) in 44 cases, possible (primarily where there is some evidential basis) in a further 33 cases, and uncertain (including where victim characteristics are used to impute that they are inherently less believable) in 77 cases. If the proportion of false complaints on the basis of the probable and possible cases are recalculated, rates of three per cent are obtained, both of all reported cases (n=67 of 2,643), and of those where the outcome is known (n=67 of 2,284). Even if all those designated false by the police were accepted (a figure of approximately ten per cent), this is still much lower than the rate perceived by police officers interviewed in this study.

Kelly, Liz, Jo Lovett, and Linda Regan. A Gap or a Chasm? Attrition in Reported Rape Cases. London: Home Office Research, Development and Statistics Directorate, 2005.

Bolding mine. It’s rather convenient that Sommers quoted the police false report rate of 8% (or “approximately ten per cent” here), yet somehow overlooked the later section where the authors explain that the police inflated that figure. In the same way they rounded the 8% up to ten, Liz Kelly and her co-authors also rounded up their “three per cent” figure; divide 67 by 2,643, and you get within fingertip distance of 2%: a false report rate of roughly 2.5% of all reported cases (or 2.9% of the cases with a known outcome).
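The arithmetic is easy to check; plugging the figures from the excerpt above into a couple of lines of Python:

```python
# Quick check of the false-complaint rates quoted from Kelly et al. above
false_complaints = 67    # complaints the authors rate probable or possible false
all_reported = 2643      # all reported rape cases
known_outcome = 2284     # cases where the outcome is known

print(f"{false_complaints / all_reported:.1%}")   # ~2.5%
print(f"{false_complaints / known_outcome:.1%}")  # ~2.9%
```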

Lisak did not get the low end of his 2-8% range from Brownmiller; he got it from two large-scale, rigorous studies that concluded a 2% false report rate was reasonable. In his own scientific paper, in fact, he explicitly discards Brownmiller’s number:

Another source, cited by Rumney (2006) and widely referenced in the literature on false allegations is a study conducted by the New York City police department and originally referenced by Susan Brownmiller (1975) in her book, Against Our Will: Men, Women and Rape. According to Brownmiller, the study found a false allegation rate of 2%. However, the only citation for the study is a public remark made by a judge at a bar association meeting, and, therefore, no information is available on the study’s sample or methodology.

Lisak, David, Lori Gardinier, Sarah C. Nicksa, and Ashley M. Cote. “False Allegations of Sexual Assault: An Analysis of Ten Years of Reported Cases.” Violence Against Women 16, no. 12 (2010): 1318–34.

That 2% number is actually quite well founded, and Sommers must have known that. Feminists also know of the 2-8% stat, and cite it frequently.

In hindsight, this is a blatant example of the embrace-extend-extinguish pattern of Sommers that I discussed earlier. She took one extreme of the feminist position, the 2% figure, and painted it as the typical one while cherry-picking the evidence in her favor. She took the other extreme, 8%, as the low point of her own range, giving herself the option of invoking a false concession, then stretched that range to encompass the majority of false rape report studies out there, most of which are useless:

very few of these estimates are based on research that could be considered credible. Most are reported without the kind of information that would be needed to evaluate their reliability and validity. A few are little more than published opinions, based either on personal experience or a non-systematic review (e.g., of police files, interviews with police investigators, or other information with unknown reliability and validity).

Lisak (2009), pg. 1

Sommers then claims this “middle ground” as her own, riding the Appeal to Moderation for all it’s worth. This is denialism so blatant that no skeptic should take it seriously.

Alas, quite a few do.

Christina Hoff Sommers: Science Denialist?

In a bizarre coincidence, just three days before my lecture on rape culture Christina Hoff Sommers happened to weigh in on the topic. I haven’t seen the video yet, which puts me in a great position to lay a little groundwork and make some predictions.

First off, we’ve got to get our definitions straight. “Rape culture” is the cloud of myths about sexual assault that exists within our society, making it easier to excuse that crime and/or tougher for victims to recover or seek justice. Take Burt’s 1980 paper on the subject:

The burgeoning popular literature on rape (e.g., Brownmiller, 1975; Clark & Lewis, 1977) all points to the importance of stereotypes and myths — defined as prejudicial, stereotyped, or false beliefs about rape, rape victims, and rapists — in creating a climate hostile to rape victims. Examples of rape myths are “only bad girls get raped”; “any healthy woman can resist a rapist if she really wants to”; “women ask for it”; “women ‘cry rape’ only when they’ve been jilted or have something to cover up”; “rapists are sex-starved, insane, or both.” Recently, researchers have begun to document that rape myths appear in the belief systems of lay people and of professionals who interact with rape victims and assailants (e.g., Barber, 1974; Burt, 1978; Feild, 1978; Kalven & Zeisel, 1966). Writers have analyzed how rape myths have been institutionalized in the law (Berger, 1977) […]

Much feminist writing on rape maintains that we live in a rape culture that supports the objectification of, and violent and sexual abuse of, women through movies, television, advertising, and “girlie” magazines (see, e.g., Brownmiller, 1975). We hypothesized that exposure to such material would increase rape myth acceptance because it would tend to normalize coercive and brutal sexuality.
Burt, Martha R. “Cultural Myths and Supports for Rape.” Journal of Personality and Social Psychology 38, no. 2 (1980): 217.
http://www.excellenceforchildandyouth.ca/sites/default/files/meas_attach/burt_1980.pdf

You can see how the definition has shifted a little over time; objectification certainly helps dehumanize your victim, but it’s not a strict necessity. And while women are disproportionately targeted for gender-based violence in every modern society I know of, there’s still a non-trivial number of male victims out there.

There are two ways to demonstrate “rape culture” is itself a myth. The most obvious route is to challenge the “rape myth” part, and show either that those myths are in line with reality or are not commonly held in society. For instance, either good girls do not get raped, or few people believe that good girls do not get raped. Based on even a small, narrow sample of the literature, this is a tough hill to climb. I did a quick Google Scholar search, and even when I asked specifically for “rape myth acceptance” I had no problem pulling a thousand results, with Google claiming to have another 2,500 or so it wouldn’t show me. There must be a consensus on “rape culture,” based merely on volume, and to pick a side opposing that consensus is to be a science denialist.

The less obvious route is to challenge the “help perpetrators/harm victims” portion. Consider the “rubber sheet model” of General Relativity; we know this is wrong, and not just because it depends on gravity to explain gravity, but the model is nonetheless close enough to reality that non-physicists get the gist of things without having to delve into equations. It’s a myth, but the benefits outweigh the harms. Sommers could take a similar approach to sexual assault: not so much arguing that rape myths are a net benefit, but instead riding the “correlation is not causation” line and arguing the myths don’t excuse perpetrators or harm victims. This approach has problems too, as correlation can be evidence for causation when there’s a plausible mechanism, and past a point it also becomes science denialism. Overall, though, I think it’s Sommers’ best route.

If she gets that far, of course. The standard approach for those challenging rape culture is either to avoid defining the term “rape culture” at all, or to define it as actively encouraging sexual assault instead of passively doing so, setting up a strawperson from the get-go. Sommers herself is a fan of cherry-picking individual studies or case reports and claiming they’re representative of the whole, and I figure we’ll see a lot of that. There’s also the clever technique of deliberately missing the point or spinning out half-truths: take this video about date rape drugs by her partner-in-crime Caroline Kitchens, for instance. Her conclusion is that date rape drugs are over-hyped, and having looked at the literature myself I agree with her… so long as we exclude alcohol as a “date rape drug.” If you include it, the picture shifts dramatically.

Numerous sources implicate alcohol use/abuse as either a cause of or contributor to sexual assault. … Across both the literatures on sexual assault and on alcohol’s side effects, several lines of empirical data and theory-based logic suggest that alcohol is a contributing factor to sexual assault.
George, William H., and Susan A. Stoner. “Understanding Acute Alcohol Effects on Sexual Behavior.” Annual Review of Sex Research 11.1 (2000): 92-124.

General alcohol consumption could be related to sexual assault through multiple pathways. First, men who often drink heavily also likely do so in social situations that frequently lead to sexual assault (e.g., on a casual or spontaneous date at a party or bar). Second, heavy drinkers may routinely use intoxication as an excuse for engaging in socially unacceptable behavior, including sexual assault (Abbey et al. 1996b). Third, certain personality characteristics (e.g., impulsivity and antisocial behavior) may increase men’s propensity both to drink heavily and to commit sexual assault (Seto and Barbaree 1997).

Certain alcohol expectancies have also been linked to sexual assault. For example, alcohol is commonly viewed as an aphrodisiac that increases sexual desire and capacity (Crowe and George 1989). Many men expect to feel more powerful, disinhibited, and aggressive after drinking alcohol. … Furthermore, college men who had perpetrated sexual assault when intoxicated expected alcohol to increase male and female sexuality more than did college men who perpetrated sexual assault when sober (Abbey et al. 1996b). Men with these expectancies may feel more comfortable forcing sex when they are drinking, because they can later justify to themselves that the alcohol made them act accordingly (Kanin 1984).

Attitudes about women’s alcohol consumption also influence a perpetrator’s actions and may be used to excuse sexual assaults of intoxicated women. Despite the liberalization of gender roles during the past few decades, most people do not readily approve of alcohol consumption and sexual behavior among women, yet view these same behaviors among men with far more leniency (Norris 1994). Thus, women who drink alcohol are frequently perceived as being more sexually available and promiscuous compared with women who do not drink (Abbey et al. 1996b). … In fact, date rapists frequently report intentionally getting the woman drunk in order to have sexual intercourse with her (Abbey et al. 1996b).
Abbey, Antonia, et al. “Alcohol and sexual assault.” Alcohol Research and Health 25.1 (2001): 43-51.
http://pubs.niaaa.nih.gov/publications/arh25-1/43-51.htm

I don’t think Sommers will take that approach, preferring to cherry-pick and fiddle with definitions instead, but as a potent tool of denialists it’s worth keeping in mind.

With that preamble out of the way, we can begin….