Surprise Weddings are Nonconsensual and Icky

Okay, I promise I’ll actually write something for this blog soon, but for now I have another Daily Dot piece, this time about “surprise weddings.” (It’s as icky as it sounds.) Here’s an excerpt:

It’s incredibly ironic that an event meant to celebrate the joining of two people in marriage would be so one-sided, and that consent would be deemed so irrelevant. Relationships aren’t—or shouldn’t be—about one person deciding and creating things for another. They should be about two people building a life together.

In case my reference to “consent” doesn’t make sense, consider this: expressing a desire to have sex with someone doesn’t mean they get to decide unilaterally when and where and how the sex will happen. Agreeing to marry someone doesn’t mean they get to decide unilaterally when and where and how you’ll get married and who the guests will be and what music you’ll have and what types of hors d’oeuvres will be served. Unless, of course, you tell your partner that you don’t really care about these details and they’re free to do whatever they want with the wedding planning.

Weddings, like the marriages they are meant to celebrate, should be collaborative. That collaboration can mean “We make all the decisions together,” or it can mean “I don’t care, it’s all up to you!”, or it can mean anything in between. Personally, if someone sprang a wedding on me like that, I’d have to have a serious conversation with them about why they don’t think my own wedding preferences matter enough to be taken into account.

You can read the rest here.

One thing I didn’t really have space to get into in the article was the romanticization of surprise itself, and why it is that people find surprises so romantic. I think part of it is simply that many people find it fun to be surprised, so it’s nice when a partner surprises them. It also implies a certain amount of effort; secrecy can be hard, and doing things without your partner’s input can be especially hard (such as planning a birthday party they’d like with the friends they’d want to see, or buying them a gift they’ll love without asking them what they want).

On the other hand, surprising your partner also means–you guessed it–not having to communicate with them about their desires and preferences. It means being let off the hook if they don’t like it because, well, how were you supposed to know! Communication can be fun and exciting, but it can also be difficult and not very exciting. Especially communication about wedding planning.


Against Role Models

Whenever a famous person does something of which the general public disapproves, much is made of that person’s status as a “role model,” of how that status should influence the public’s judgment of their behavior, and of whether it is time to revoke that status.

It seems that celebrities cannot escape being seen as “role models” no matter what made them famous. We expect an athlete or a singer or an actor to be good at not just sports or singing or acting, but at upstanding, ethical behavior, too. The assumption is that children should look up to these figures not just because they represent talent and achievement that (supposedly) comes from lots of hard work and sacrifice, but because their behavior in the rest of their lives is something to emulate, too.

This makes sense to an extent. We know that children learn by modeling the behavior of adults, and we want them to have adults whose behavior they can model. While a parent is normally the one expected to serve that function, most parents hope for their children to achieve more than they (the parents) have been able to in their own lives. Choosing and fixating upon a random successful but unknown doctor or lawyer or scientist or writer seems odd, but famous people already serve the role of entertaining the public simply by existing. So, perhaps some parents hope that celebrities can be good role models for their children and inspire them to both professional and personal success.

In fact, there is absolutely no reason why someone’s success at sports or music should be taken to mean that that person’s treatment of others is just as admirable. There’s no reason why being a great actor means you keep your promises to your partners and respect the law. There’s no reason why being in a famous band means you are very careful about your health and avoid dangerous drugs. Expecting celebrities to be able to model these types of “good behavior” makes no sense.

And even when we try to see someone as a role model in a specific domain only, it never seems to quite work. We fall victim to black-and-white thinking–people are either “good” or “bad,” and if a talented, successful athlete cheats on his wife, he goes from “good” to “bad” very quickly. Even though many people cheat, and even though occasional bad behavior doesn’t necessarily mean someone is a “bad person.”

The expectation of being a role model places undue pressures on celebrities, especially women. Tracy Moore writes:

Critiquing famous (or any) women’s behavior in terms of whether what they do is good for the girls or not is a sticky trap. It prevents them from being complicated, actual people working themselves out — you know, individuals? The thing we want women to be seen as? It keeps us in an endless loop of chasing after this One Correct Way for Women to Conduct Themselves. It’s exhausting, and I refuse to buy into it, and I don’t want to help christen it.

I also think it insults girls, who are more individual, and already far more developed as people than we give them credit for by treating them like blank slates who will copy and absorb every thing they ever see on command. That may be true for fashion, and I’m not disputing that teens copy famous people’s behavior too (and yes I’m staring down a princess phase with a toddler), but that doesn’t mean they instantly absorb the values and ideology of everyone they admire.

What I want is for women to be seen as human, which means, flawed, misguided, shitty, awesome, talented, cool, all of the above. In order to be treated like equal people, we have to have the latitude to have the same range of profound greatness and disturbing awfulness as men. We have to be ordinary, boring, fascinating, idiotic and brilliant.

Moore notes that female celebrities seem to bear a greater burden for Making Sure Our Children Turn Out Okay than male ones do, and male celebrities do seem to have an easier time recovering from Scandals with their popularity mostly intact (see: Bill Clinton, Charlie Sheen, Chris Brown, R. Kelly).

And what about non-celebrities? What happens when they’re expected to be role models?

I don’t know how this plays out in other professions or contexts, but within social work and mental healthcare, there is an immense amount of pressure put on professionals to be role models. We’ve talked about this in my social work classes.

People look to social workers and mental health professionals for more than just “help me fix my brain bugs.” They also look to them as examples of how to live well, and they often expect them to be wearing the same professional “face” even if they encounter them randomly outside of the office.

Our professors ask us what we would do if we encountered a client, say, at a bar or on public transit or even at a party. How would we balance their expectations of us against our desire to behave as we usually would at a bar or on the subway or at a party? Would it harm our relationships with our clients if they saw us acting like, well, normal people?

It’s true that if our clients think that we’re always the way we are in a session–calm, empathic, curious, mature, “wise”–it might disturb them to see us drinking at a bar or kissing a significant other in public or dancing at a party. They might wonder if we’re “faking” when we’re in a session with them. They might wonder who we “really” are.

For some professionals, this seems to be enough of a reason to significantly alter their behavior if they see a client out in public, or to leave a bar or party where a client happens to be. They might even question whether going to bars and parties after hours is compatible with who they are as professionals.

When we discussed this in class, I was glad that most of my classmates reacted with minor indignation. Why should we be expected to be professional 24/7? Why does everyone else get to take off their work persona when they leave the office, but we don’t? Why is it our fault if our clients judge us as immature or irresponsible just because we go to bars on the weekends?

I think there are two reasons why expecting therapists to act like therapists 24/7 is harmful. One is that, on the individual level, it’s stressful and takes a toll on one’s mental health and freedom to live life the way one wants to. Deciding to be a therapist should not be a life sentence to never behave like a normal person outside of work again. That’s too much of a burden for someone whose work is already very stressful and difficult.

Second, part of our role as mental health professionals is encouraging clients to think rationally, accurately, and adaptively about other people and their relationships with them. “This person is drinking at a bar therefore they are immature and I can’t trust them as my therapist” is not a rational, accurate, or adaptive thought. (Well, it could be accurate, but you’d need more evidence to come to that conclusion.) Neither is, “This person is behaving differently after hours than they are at work, and therefore the way they behave at work is totally fake and they’re just lying to me.”

But speaking as someone who’s been on both sides of that relationship, I have to say that we are really, really patronizing our clients if we think that they are incapable of realizing that we have selves outside of the office. We are treating them like children if we presume that they need to be carefully prevented from seeing any part of our non-therapist persona, including kissing a partner in public or getting tipsy at a bar.

But it’s possible that some clients might be confused or bothered by seeing a therapist acting non-therapisty out in public. I think that the best course of action then is to discuss that in therapy, not laboriously alter one’s public behavior so that such an issue never comes up to begin with.

Because our classes are mostly discussion-based and there’s little in the social work code of ethics about situations like this (dual relationships, though, are a different matter), my professor never gave a definitive answer on whether or not we should endeavor to be role models to our clients no matter where we encounter them. His intent, I think, was mostly to spark discussion and let us know that this is something to consider.

The examples of celebrities and mental health professionals are two very different examples, but my conclusion is largely the same for each: being expected to be a “role model” in every context, at work and outside of it, in one’s chosen domain (be it sports or entertaining or counseling) and in every other domain in which it’s possible to judge a person’s behavior, is too much.

A final reason why holding people up as “role models” is harmful: the criteria by which we judge them are largely based on social norms, which can be a very poor barometer for determining how ethical an action is. That’s why, when Miley Cyrus was vilified for her performance at the VMAs and reprimanded by many commentators for not being a good enough “role model,” the focus of most of the criticism was not the racism inherent in her performance, but the fact that she dressed revealingly and shook her ass. And she shook it…at a married man! How dare she. The married man, by the way, made a clear show of enjoying it, and he’s the one who’s married. And the one who sings a song about “blurred lines.”

It’s also why, when Kristen Stewart cheated on Robert Pattinson (to whom she was not married) with Rupert Sanders (who is married), it was Stewart on whom the majority of the public opprobrium fell, and who was finally compelled to publicly apologize. (A hopefully unnecessary disclaimer: I think breaking a promise to a partner is wrong, but I also wish people didn’t make promises they couldn’t keep in the first place, and I don’t think cheating is the worst thing a person could do and I don’t think a person who cheats owes an apology to anyone but the person they cheated on.)

And women of color in particular are held to impossibly high standards as “role models,” as public reactions to Beyonce and Rihanna attest.

Sometimes the intersections between the expectation of role model behavior and various types of prejudice affect people’s livelihoods in really crappy ways. To return to the example of therapists, I’ve been reading this blog by a woman who is studying to be a therapist and also works as a stripper. The faculty of her program are pressuring her to either quit sex work or leave the program, because they consider doing both to be necessarily an ethical violation. They also told her that being a stripper “contributes to further injustice in the world,” and is therefore incompatible with her other role as a therapist.

That’s a slightly different type of role model that she’s being expected to be, but the demand that therapists be perfect in every aspect of their lives is still there. The role of therapist is supposed to take precedence over everything else she may want to do in her life, including making enough money to get by and finish her education. And in this case, these expectations are intersecting with stigma and prejudice against sex workers.

So, whether you’re a celebrity or just a regular person trying to make the world better, it’s rarely a neutral expectation that one be a “role model.” Like all social expectations, it comes with lots of baggage. And it’s incredible how often, for women, being a “role model” means having no sexuality.

Children may need adults to look up to and clients may need therapists to learn from, but that’s not a good enough reason, in my opinion, to expect or demand perfection from people.

I think a more realistic view is that almost everyone can teach us something, and almost everyone has done things we probably shouldn’t emulate*.

~~~

*And to be clear, wearing revealing clothing and/or being a sex worker are not the sorts of things I’m particularly desperate to discourage.


Strawmanning Rape Culture (Part One)

[Content note: sexual assault]

Rape culture is a very difficult concept for many people to understand, perhaps because, like many sociological constructs, it works in such a way as to make itself invisible. Understanding rape culture, especially if you are someone who isn’t affected by it very much, requires a keen attention to detail and a willingness to examine your own complicity in things you’d rather not believe that you’re complicit in.

For a great introduction to rape culture, read the Wikipedia page and this Shakesville piece. If you’re not familiar with it, read these things before you read this post, because this is not a 101-level post. Here’s another definition, from the book Transforming a Rape Culture, that may be useful (although you’ll notice that I’ll expand on it a bit later):

A rape culture is a complex of beliefs that encourages male sexual aggression and supports violence against women. It is a society where violence is seen as sexy and sexuality as violent. In a rape culture, women perceive a continuum of threatened violence that ranges from sexual remarks to sexual touching to rape itself. A rape culture condones physical and emotional terrorism against women as the norm.

In a rape culture both men and women assume that sexual violence is a fact of life, inevitable as death or taxes. This violence, however, is neither biologically nor divinely ordained. Much of what we accept as inevitable is in fact the expression of values and attitudes that can change.

Many people hear about rape culture briefly, perhaps online or in a text assigned in a sociology or gender studies class, and don’t really read about or grasp the nuances of it. This makes it very easy to strawman the rape culture argument, to reduce it to clearly absurd and obviously inaccurate claims that are easy to strike down–and, crucially, that nobody who claims that rape culture exists ever made to begin with.

Here are some common strawman versions of rape culture, and why they are inaccurate.

“So you’re saying that people think rape is okay.”

When many people hear “rape culture,” they assume this is supposed to imply that we live in a society where people actually think rape is okay and/or good. That’s an easily falsifiable claim. After all, rape is illegal. We do, in some cases, punish people for committing it. If someone is known to be a rapist, that person’s reputation often takes a huge nosedive. We teach nowadays that “no means no.” People obviously resist being identified as rapists, and they wouldn’t resist it if it weren’t generally considered a bad thing to be.

So how could we really have a rape culture? More to the point, if people who say we live in a rape culture are not claiming that people literally think rape is okay, what exactly are we claiming?

One way rape gets shrugged off and thus accepted in our culture is by constantly shifting the goalposts of what rape is. If you flirted with someone, it’s not rape. If you had an orgasm, it’s not rape. If you dressed sluttily, it’s not rape. If you’re a sex worker, it’s not rape. If it was with your partner or spouse, it’s not rape. If you’re a prisoner, it’s not rape. If you’re fat or unattractive, it’s not rape (because you must’ve wanted it). If no penis was involved, it’s not rape. If you were unconscious, it’s not rape. The fact that we have politicians debating what is and is not “legitimate rape” is evidence that we do not consider all rape to be legitimate. And, unsurprisingly, studies show that people will admit to having committed sexual assault provided it’s not called “sexual assault” in the survey.

Another way rape gets excused is through victim blaming, which I’ll discuss a bit later. Even when we admit that what happened to someone is rape, we still often blame them for it, thus implying that, in some cases, rape isn’t really so wrong because the victim was “asking for it.”

One more related way in which rape gets excused is through claims that rapists (male rapists, generally) “can’t help themselves.” By framing rape as the inevitable result of masculinity, hormones, sexual tension, and so on, we’re implying that rape is a normal part of our society that we’re not going to do anything about. The hypocrisy of a society that pays lip service to the idea that rape is bad while also suggesting that in some cases it’s not “really” rape and in some cases it’s just what you’d expect and ultimately it’s inevitable anyway is emblematic of rape culture.

Remember, though, that some people do actually think rape is good and/or okay. Some men do openly admit to wanting to rape women, and even if they’re attempting to make a so-called “joke,” their choice of joke says a lot about their beliefs about rape.

“So you’re saying that without rape culture, there would be no more rape.”

People also misinterpret the rape culture argument as a claim that all rape is caused directly by rape culture. While some people probably do believe that there would be no rape in a society free from rape culture, I don’t. I think that rape culture drastically increases the prevalence of rape by encouraging attitudes that lead to it, reducing penalties for rapists, and making it more difficult for victims to speak out and seek justice.

Strawmanning the rape culture argument in this way makes it seem patently ridiculous. After all, we don’t claim that there’s a “car theft culture,” but people steal plenty of cars. We don’t wring our hands over “identity theft culture,” but lots and lots of people fall victim to identity theft. Same, unfortunately, with murder. So if you think we’re saying that rape culture is the entire reason rape exists as a phenomenon at all, it’s easy to refute that claim by pointing to other crimes, and also by pointing out that people often commit crimes because it gives them some sort of advantage.

If rape culture did not exist, rape would still exist, but things would look very different. Rape would be much rarer. When there was enough evidence to show that someone committed rape, that person would go to jail. Although there might still be a bit of stigma surrounding being a rape victim, that stigma would not be any greater than it is for being the victim of any other crime (right now, it’s much greater). Rape would not constantly be threatened and used as “punishment” for being queer, for being a woman who speaks out, and so on. There would still be researchers trying to understand what causes people to become rapists and activists trying to stop them from doing so, but the key difference would be that when someone got raped, we’d ask more questions about the person who raped them than about the person who was raped. We’d ask what led the rapist to do such a thing, not what led the victim to be so careless.

“So you’re saying that the fact that a given crime exists means that ‘[crime] culture’ exists. Why isn’t there a murder culture, then, huh?!”

Closely related to the previous one. The existence of a given type of crime is not sufficient to show that a “culture” exists that encourages and excuses that crime. The reason there is a rape culture but not a murder culture is that, overall, our culture does not claim that murder is acceptable, okay, inevitable, or even commendable in certain cases. Are there individual people who believe this about murder? Certainly. But for the most part, these people lack institutional backing. Police officers and judges and jury members are not constantly going on record saying that, well, it wasn’t really murder in this case, or that the victim’s past behavior suggests they have a tendency to lie about these things.

It’s still absolutely reasonable to say that we have a problem with murder or theft or [other crime] in our society without having to make the claim that a [crime] culture exists. These crimes do have sociological causes, not just individual ones. Economic inequality, for instance, tends to contribute a lot to these types of crimes; they are not simply personal failings as we often dismiss them to be.

Culturally, however, rape gets a lot more support and excuses than theft or murder do. Victims of rape are blamed to a greater extent than victims of any other crime; and not only that, but that blame is used by people in positions of authority to avoid finding, trying, and sentencing the rapist.

The second half of this post will be up tomorrow. If you have more strawmen to add in the comments, try to hold on to them until that post comes out and you see the rest of them.


[guest post] Dictionary Arguments, and Why They Suck

CaitieCat, a frequent and awesome commenter around here, has a guest post!

It’s not news to any activist for any cause that people just love to whip out dictionary definitions as ostensibly authoritative guides to what words mean. Even so august a person as a fellow whose name may or may not rhyme with Shmichard Shmawkins has been known to whip out (pun intended) the old Oxford English when he doesn’t like someone else’s usage being different from the one he learned.

What’s disappointing about it is that it’s really just a common logical fallacy: the appeal to authority.

Now, I hear the defenders of that fellow who may or may not have that rhyming name leaping to their feet, cursing at me and their screens and the perfidy of anyone (especially a much-despiséd FEMINIST OMFFSM!) who’d dare to suggest that the emperor might turn out to be naked–that is, that he’d committed a logical fallacy–but let me (the irony should delight you) tell you as a linguist why it’s exactly that.

First, it’s perhaps valuable to look at what a dictionary actually is. A dictionary is a compilation of language-objects (usually words, sometimes other related entities) – and this is the important bit – compiled by humans at a particular time.

Yes, they’re genuine experts in their field, and yes, they work by consensus, sort of. See, they only work by consensus within the field of dictionary writers who use the same language variety as they do. We tend to view a dictionary as a collection of objective facts: word X means meaning Y (and possibly Z, J, F, and Q). In fact, though, any dictionary is bounded by several biases which we tend not to think about when citing them as major authorities.

First, it is bounded in time. The date of publication provides an absolute limit for when the meanings are considered definitely valid; for proof, consider trying to use Mr. Johnson’s original dictionary for your English assignment today. That we can see this in Mr. Johnson’s 1755 effort, but not in the 2000 Oxford Online, has to do with our monkey-brained habit of filtering out the everyday, in order to make best use of the meat-computer we come pre-installed with. The instant it is sent to print, a dictionary is already badly out of date: words can change their meaning significantly in a very short time, as the quill flies/pixel pulses.

The filters have to do with more than time, though. There is also the makeup of the editorial board to consider – does it accurately reflect the state of the whole of English, with its different sociolects, dialects, jargons, cants, argots? There is a known demographic issue in academia generally, lexicography included, whereby the majority of tenured positions are currently held by white, able-bodied, middle- or upper-class cis men. Are they always going to catch all possible meanings of a given word, all the nuances, when they aren’t users of a given word themselves? More diversity in such an endeavour would be evidently useful in making the dictionary more accurately reflect the true state of the language, if that were a goal of the project. It’s not: the OED is an attempt at defining what the prestige version of the language is.

Consider the privilege in dictionary-making accorded to written (i.e., “documentable”) usages, and in fact only certain types of documentable usages, which predominantly exclude those who stand outside the power-structure in today’s society. Emails between people, chat usage, comic books, zines, samizdat generally, erotic fiction/pornography, fan fiction, maledicta, rap lyrics, acronyms, and languages for the Deaf, among many other types of language usage, tend to be recorded only in the most conservative ways, and often ignored entirely, dismissed as vulgar ephemera, unworthy of inclusion in the Pantheon of English Wordhood.

Verbal usages are ignored almost entirely, as being “undocumentable”. Yet written language is only ever at its very best a vague approximation of the true richness and beauty of the spoken/signed language; great numbers of expressions and words will never make it into any dictionary, despite their usage in the millions of times daily all over the world, because they’re never written down in “acceptable” documentation. And yet we’re accumulating a humanity-wide store of thousands of hours of video of native and non-native speakers using their languages every day; there’s no particular reason to privilege written communication over spoken any more, now that the data are more readily available. Yes, it would cause a lot more work, but that’s only because we’ve only been doing half the job up til now, not a good argument for not doing it.

The primary problem, of course, is that these nominally objective (but in actuality, wildly subjective) works are cited as prescriptive authorities: they purport to describe the language as it ought to be spoken. I’m hoping that with the paragraphs above, I don’t need to describe the ways in which that view of language offers only a tiny window on a huge, living phenomenon: it’s like carefully describing every pitch and swing of a baseball at-bat, and claiming that only that one at-bat, by one player, at one time, counts as a real at-bat, and that all at-bats are like that one, or aren’t real at-bats at all1.

There are real and invisible-to-us biases we all bear, having been raised in societies which are basically giant machines for inculcating invisible-to-us bias: the privilege of having a background such that one speaks the very high-prestige Received Pronunciation (the so-called “Queen’s English”) might lead one to assume that, since the OED agrees exactly with one’s internal definitions, then it must necessarily be a valid and unshakeable authority. Call it the oligoanthropic principle: “all the highly-educated Oxbridge graduates I know agree with the OED, and nobody I consider important disagrees, therefore it is an inerrant objective compilation of facts.”

But this requires not noticing that there are a lot of people in the world speaking English as their main language, and every single one of them has as much claim to say that their version is “the real one” as any Oxbridge-trained-mouthful-of-marbles has.

Like any compilation of subjective human knowledge, then, a dictionary definition is a poor premise to base an argument upon: it is too easily falsified by the simple act of noting that there are a noticeable number of people in the world using those words differently. As a linguist, I lean naturally toward a descriptive approach to language: to me, language is what the people who speak it want it to be. Language changes, as we adapt it – like any tool, as the good tool-using primates we are – to the needs we’re facing with it today. Insistence on some apocryphal golden-age idea that a given dictionary definition is a universally valid, and objectively and eternally true, premise reveals only a weak ability to recognize the numerous bounds and biases which make it at best subjective, and at worst revealing only a small part of the meaning a given language-object might carry to other people.

And that means, I’m afraid, that showing that your dictionary has a famous person’s or institution’s name on it doesn’t make it any more important or objective an arbiter of language than any random two speakers of that dictionary’s language: thus, the fallacy of appeal to authority.

Sorry, rhymes-with-Shmawkins fans: the emperor’s butt’s hanging out2.

For the turquoise ungulate crowd:

FALLIBLE PEOPLE WRITE DICTIONARIES THAT ARE SUBJECTIVE PRODUCTS OF THEIR PHYSICAL AND TEMPORAL CONTEXT.

FALLIBLE PEOPLE WRITE HOLY BOOKS THAT ARE SUBJECTIVE PRODUCTS OF THEIR PHYSICAL AND TEMPORAL CONTEXT.

WHY DO WE REVERE ONE AND DENIGRATE THE OTHER?

1 Wow, that’s a crap analogy. Anyone got a better one?

2 Big thanks to Miri the Amazing Professional Fun-Ruiner of Awesomeness for the opportunity to guest-post. 🙂

CaitieCat is a 47-year-old trans bi dyke, outrageously feminist, and is a translator/editor for academics by vocation. She also writes poetry, does standup comedy, acts and directs in community theatre, paints, games, plays and referees soccer, uses a cane daily, writes other stuff, was raised proudly atheist, is both English by birth and Canadian by naturalization, a former foxhole atheist, a mother of four, and a grandmother of four more (so far). Sort of a Renaissance woman (and shaped like a Rubens!).


On Useful and Not-So-Useful Definitions of Racism

[Update 10/22/13: If you’ve found this post through a racist hate forum, don’t bother commenting. Your comment’s going straight to the trash and nobody will ever read it. :)]

Richard Dawkins, whose Twitter feed never fails to amuse, has lately been discussing racism–specifically, against white people:

[Here’s the link in case you can’t see this]


Dawkins sounds eerily like my high school self here–desperate to stick to his own definitions of things and reject the definitions of others, all while claiming that everyone needs to be using the same definition in order for a discussion to be productive. Dawkins assumes that a dictionary definition is by default more legitimate than a definition provided by people who actually study the subject in question and presumes that what is written in a dictionary is “true” in the same sense as, say, the periodic table or the speed of light. Consider that dictionaries have historically been written by those least likely to understand what racism actually is and how it actually works, because if you’re a white person, racism isn’t something you’re ever forced to give serious thought to.

It is true that if you define racism as “not liking someone based on their race,” then people of color can be just as racist as white people. If you define racism this way, then it is true that the person who dismissed Dawkins’ opinion at the beginning was being racist. If you define racism this way, then it is true that a white person who is treated rudely by a Black person is a victim of racism, and it is true that, strictly speaking, affirmative action is racist.

But the fact is that this isn’t a very useful definition. You might as well make up a word for “not liking someone based on the color of their hair” or “not liking someone based on whether they wear boxers or briefs.” I don’t deny that it’s hurtful when someone doesn’t like you based on something arbitrary like your skin color, but when you’re white, this doesn’t carry any cultural or institutional power. When you’re not white, it does. Because then it’s not just a random asshole who doesn’t like your skin color.

I have had a person of color express prejudice towards me because I’m white exactly once in my life. Once. (And for what it’s worth, it was a stranger on the train who apparently just felt like yelling at people that day.) I have never been denied a job because I’m white. I have never been followed around or stopped and frisked by the police because I’m white. I’ve never been told I’m ugly because I’m white. I have never been told I’m stupid because I’m white, and I’ve never been told that I’m unusually intelligent for a white person.

Disliking someone based on their skin color is not enough for it to be racism. In fact, it’s not even a necessary condition. You can like people of color a lot while still maintaining that they’re just different from white people or that they need protection or that they’re perhaps better suited by nature for servile roles (this was an attitude commonly expressed during slavery). Likewise, you can just loooooove women while still supporting patriarchal laws and cultural norms, which is why I have to laugh when someone’s all like “But how can I be sexist? I LOOOOVE women! ;)”

As a scientist, Dawkins must realize how difficult it is when people take technical terms and use them too generally. For instance, a “chemical” is any substance that has a constant composition and that is characterized by specific properties. Elements are chemicals. Compounds are chemicals. Basically, tons of substances are chemicals, including water. Yet most people use “chemical” to mean “awful scary synthetic substance put into our food/water/hygienic products.” You see products being advertised as “chemical-free,” a laughable concept, and people talking about how “chemicals” are bad for you.

So yes, it’s important to recognize that many people use the word “chemical” in a particular way that conflicts with the definition used by chemists. But that doesn’t suddenly mean that this lay definition becomes the “real” definition and the chemists are suddenly “wrong.” And if you want to rant about the dangers of chemicals with your friends (I’d advise you not to, but whatever), it doesn’t matter if you use the lay definition.

But the way the lay public uses the word “chemical” is essentially meaningless, because they basically use it to mean “substances that may or may not be dangerous; we don’t really know, we just know that we can’t pronounce them.” It doesn’t even necessarily refer to synthetic substances: most people would probably say that cyanide is a chemical, yet it’s naturally occurring (in fact, it’s produced in certain fruit seeds). So if you want to discuss chemicals with a chemist, you’d better use the actual definition, because the terms used by chemists are more precise and useful.

Of course, when it comes to race it’s not quite as benign as people taking chemistry terms and using them haphazardly. It’s important to remember that white people have a vested interest in ignoring the structural causes and effects of racism–the kind that are best encapsulated in the definition of racism preferred by sociologists and activists. It’s uncomfortable to talk about racism this way. It’s painful and guilt-inducing to acknowledge that you (as a white person) have benefited from unearned privileges at the expense of people of color. It’s awkward to admit that affirmative action is not “bias in favor of people of color”; it’s an attempt to correct for the fact that college admissions and hiring practices are actually prejudiced in favor of whites, and this has been shown by controlled studies over and over again.

What’s significantly more comfortable is claiming that “everyone can be racist” and “Blacks can be racist too” and “some Blacks are even more racist toward whites than whites are toward them.” That is a definition of racism that white folks can deal with. But that doesn’t make it useful for actually talking about the things that matter.


Richwine and the Inherent Goodness of Intelligence

[Content note: racism]

In news that should surprise absolutely no one, conservatives have once again embarrassed themselves by attempting to “prove” with “science” that people of color are stupider than white people. Yup, again.

You’ve probably read this story elsewhere so I’ll make my recap brief: It has come to light that Jason Richwine (I’m not making this name up, folks), the lead author of a study on immigration from the conservative Heritage Foundation, wrote his 2009 PhD dissertation on…why Hispanics are genetically stupider than whites and will therefore continue to have children who are stupider than whites:

Richwine’s dissertation asserts that there are deep-set differentials in intelligence between races. While it’s clear he thinks it is partly due to genetics — ‘the totality of the evidence suggests a genetic component to group differences in IQ’ — he argues the most important thing is that the differences in group IQs are persistent, for whatever reason. He writes, ‘No one knows whether Hispanics will ever reach IQ parity with whites, but the prediction that new Hispanic immigrants will have low-IQ children and grandchildren is difficult to argue against.’

In case you’re wondering at which podunk school Richwine wrote such a dissertation, well, it was Harvard.

(Awkwardly, the very next day after WaPo broke this story, a Pew Research Center report was released that showed that Hispanic students’ rate of college enrollment is now greater than whites’. LOLZ. [However, note that Hispanic =/= Latino.])

Why are conservatives so goddamn obsessed with trying to “prove” that people of color are stupid? Zack Beauchamp at ThinkProgress has a great analysis:

These spats don’t generally endear conservatism to the general public, so it’s not like this is a political move. So why is it that the right-of-center intelligentsia keeps coming back to this topic? I’d suggest two reasons: first, a link between race and IQ moots the moral imperative for public policy aimed at addressing systemic poverty; second, it allows conservatives to take up the mantle of disinterested, dispassionate intellectual they so love.

One mistake that all of these people make–aside from the glaring one of being racist, that is–is that they treat the distinction between “IQ” and “intelligence” as completely irrelevant. Scrupulous research psychologists are quick to acknowledge that the measures they use are imperfect and can only provide an approximation of the actual abstractions they are trying to assess. So if you score higher on a scale of depression, we don’t say you are “more depressed”; we say that you “scored higher on the Such-and-Such Depression Scale.” If you score higher on a scale of extroversion, we don’t say that you are “more extroverted”; we say that you “scored higher on the Blah-Blah-Blah Extroversion-Introversion Scale.” At least, that’s what careful, conscientious psychologists do.

Many believe that intelligence is a much more concrete (and therefore measurable) quality than extroversion or how depressed you are. They may be right; I’m not a cognitive psychologist so this is not my specialty. However, serious criticism of IQ as a measure of intelligence has been made–and by “Real Scientists,” too, not just by Bleeding-Heart-Tree-Hugging-I’m-Mixing-Metaphors Liberals. And in terms of race, some researchers have suggested that IQ tests are biased against Mexican Americans because the tests contain “cultural influences” that reduce the validity of the test when assessing these students’ cognitive ability.

Back to Beauchamp’s analysis of conservatives and why they’re so obsessed with race and IQ:

This vein of argument was pioneered by Richwine’s mentor, Bell Curve author Charles Murray. Murray’s research focused more on the purported unintelligence of African-Americans, but his conclusions about its role in sustaining poverty were similar. Murray has taken this conclusion and used it to argue against everything from affirmative action to essentially all policy interventions aimed at reducing economic inequality. It’s easy to see how this argument works — if some people are less intelligent than others, as a consequence of either genetics or “underclass culture,” then government programs aren’t likely to help equalize society — creating an economically more level playing field will only cause the most talented to rise to the top again. Inequality is thus natural and ineradicable; poverty might be helped at the margins, but helping the unintelligent will be fraught with unintended consequences.

Moreover, this framing allows conservatives to explain the obviously racial character of American poverty without having to concede the continued relevance of racism to American public life. If it’s really the case that people with certain backgrounds simply aren’t as smart as others, then it makes sense that they’d be less successful as a group. What strikes progressives as offensively racial inequality thus becomes naturalized for conservatives in the same way that inequality and poverty writ large do.

It makes sense, doesn’t it? People of color are disproportionately likely to be poor compared to white people. People of color are stupider than white people. Ergo, there’s no need to try to alleviate poverty and economic inequality because it’s natural.

Hopefully you noticed the big honkin’ naturalistic fallacy in that argument. Even if it’s natural for people of color to be poor (because they’re stupid and therefore can’t get off the couch and get a job), that doesn’t mean that this is a good way for society to be. It does not follow that we should just allow things to continue this way.

The other big flaw is that these conservatives are also succumbing–as, to be fair, most people do–to the notion that people with higher IQs/more intelligence are inherently better than people with lower IQs/less intelligence: that it is okay for people with little intelligence to struggle just to get by, to be unable to give their children a better life (whether those children have low IQs or not), to be unable to afford basic healthcare, to have to eat cheap, unhealthy food, to have to choose between dangerous, dehumanizing, low-pay work (or none at all) and breaking the law to make money, to have to live as second-class citizens. All because they are “less intelligent,” which is supposedly mostly genetic and therefore not something they chose.

I wish liberals talked about this more. I wish that when conservatives started trotting out these reprehensible arguments, liberals would, rather than simply emphasizing that there is no proof that people of color are “naturally” dumber than white people and that this is a racist argument, also ask why it is that intelligence should determine whether or not you have access to food, shelter, and healthcare.

There are, of course, many other important things to discuss here. We could talk about how there are so many different types of intelligence and IQ tests only measure a certain type. We could talk about how growing up in poverty drastically reduces one’s opportunities for intellectual enrichment and growth. We could talk about how you don’t necessarily need to be “smart” to contribute to society; we do need service-sector workers and other types of unskilled laborers, and they should be able to live on what they make, too.

But I think we need to talk about this idea that having a lot of “intelligence” (whatever that even means) makes you better than those who do not have a lot of it. So much better, in fact, that those without sufficient “intelligence” do not deserve to live above the poverty line.

~~~

Edit: Not quite related to the main point of this article, but the conservative response to this controversy and Richwine’s subsequent firing/resignation from the Heritage Foundation is veeery interesting. I won’t link to any because you can Google it yourself, but it’s all about Richwine’s “crucifixion” and how liberals are trying to “destroy” him and so on.

Conservatives have this interesting theory in which, when someone does something wrong, it is the fault of the person who calls attention to it that the wrong-doer experiences negative consequences. It’s not that Richwine did something wrong, it’s that the meanie liberals are trying to destroy him. Similarly, when someone accuses someone–say, an up-and-coming football player–of sexual assault, many conservatives accuse the victim of “ruining” the rapist’s life by bringing what he did to light.

The fact that people’s reputation suffers when they do something terrifically stupid or harmful is not a bad thing. That is, indeed, society working as it should. It is a feature, not a bug.


[blogathon] Restorative Justice for Sexual Assault

This is the eighth and last post in my SSA blogathon. It was requested by a reader. Don’t forget to donate!

[Content note: sexual assault]

Restorative justice is a term you sometimes hear in discussions about how to reform our criminal justice system. It refers to “an approach to justice that focuses on the needs of the victims and the offenders, as well as the involved community, instead of satisfying abstract legal principles or punishing the offender.” As you can see, it would probably look quite different from the system we have now.

Someone asked me to write about what restorative justice might look like from the perspective of a rape survivor. To be clear, I am not a survivor of rape, although I am a survivor of sexual assault. In any case, I can only speak for myself.

But when I think about justice, this is what comes to mind.

I would want a perpetrator of sexual assault to have to learn about the roots of what they did. It’s not as simple as “Sexual assault is bad, don’t sexually assault people.” I would want them to understand rape culture. I would want them to understand all of the factors that might have contributed to their decision (because, yes, it was their decision) to sexually assault someone. I would want them to understand that their socialization has prepared them to become a person who sexually assaults people, but that this can be undone.

I would want the perpetrator to listen to the survivor talk about what they went through (if the survivor is comfortable with that). This doesn’t need to be a face-to-face conversation, of course, and I don’t think that many survivors would be willing for it to be. It could be an audio- or video-taped recording. It could even be a written account.

If prison is involved, I would want the prison to be humane. Regardless of whether or not we switch to a system of restorative justice, prison violence (including rape) must be addressed. This isn’t (just) because I’m concerned for the welfare of prisoners; it’s also because violent environments are much more likely to create violent individuals. For both selfish and altruistic reasons, I want perpetrators to serve their sentences feeling healthy and safe.

I would want the perpetrator to receive help with integrating back into their community afterwards–with finding a job, getting a place to live, and so on. Again, this is not because I think they “deserve” help. This is not about what they do and do not deserve. This is about what will make them the least likely to offend again.

But enough about the perpetrator. What about the survivor?

I think it goes without saying that in a system of restorative justice, there will be no victim blaming. The past “behavior” of a victim should have no bearing on the outcome of a trial. Not even if they had been sexually “promiscuous” (whatever that even means) in the past. Not even if they are a sex worker. Not even if they have committed crimes. Not even if they are an undocumented immigrant. Nothing makes someone deserving of sexual assault, and nothing makes it not worthwhile to pursue justice following an assault.

In a system of restorative justice, a survivor should not have to pursue any legal action that they don’t want to pursue. If a survivor doesn’t want to testify, they shouldn’t have to. That’s what it would mean to prioritize the needs of the survivor over our desire to punish the perpetrator.

Hopefully, in a system that focuses on reforming the perpetrator rather than punishing them, community members would be much less likely to blame the survivor for “ruining” the perpetrator’s life–which, tragically, often happens now when survivors of sexual assault speak out. But in any case, a system of restorative justice would also help community members support and affirm the survivor. Friends and family of the survivor would learn–both directly from the survivor and in general–what sorts of challenges survivors of sexual assault may face in dealing with the aftermath of their trauma. Rather than blaming the survivor for their feelings and expecting them to “get over it,” community members would learn how to help them cope.

Of course, this is all probably incredibly naive and the cultural shifts it would require are immense. But that’s a bit of what it would look like for this survivor of sexual assault.

~~~

That’s the end of my SSA Blogathon. If you haven’t yet, please donate to the SSA. Thank you for reading!


Viewing History Skeptically, Part 2: Beauty

Joan Jacobs Brumberg's "The Body Project"
One of the first things one learns in a college-level history or sociology course is that the ways we define and think about various human attributes and qualities—sexual orientation, mental illness, gender, race, virginity—are never static. They vary geographically and temporally, and even though it may seem that the way we currently conceptualize a particular aspect of human experience is the “right” one, the one that’s accurate and supported by the research evidence, that’s pretty much what people always think.

This is what I discussed in a previous post, where I promised to write some followups about specific examples of this sort of thing. So here we go!

Beauty is a good example of shifting cultural attitudes—not only in the sense that beauty standards have changed over the decades, but also in terms of what meaning and significance we attribute to beauty as a quality. In her book The Body Project: An Intimate History of American Girls, Joan Jacobs Brumberg discusses these shifting meanings. Brumberg notes in her chapter on skincare that in the 19th century, acne and other facial blemishes were considered a sign of moral or spiritual impurity. In fact, many people believed that people got blemishes as a result of masturbating, having “promiscuous” sex, or simply having “impure” thoughts. She writes, “In the nineteenth century, young women were commonly taught that the face was a ‘window on the soul’ and that facial blemishes indicated a life that was out of balance.”

By the mid-20th century, however, Americans had already started to think of beauty very differently. Brumberg writes of perceptions of acne in the postwar period:

Although acne did not kill, it could ruin a young person’s life. By undermining self-confidence and creating extreme psychological distress, acne could generate a breakdown in social functioning. Acne was considered dangerous because it could foster an “inferiority complex,” an idea that began to achieve wide popularity among educated Americans.

Facial blemishes were no longer considered a sign of inner weakness or impurity; they were a potentially dangerous blow to a young person’s self-esteem. They were something to be dealt with swiftly, before they could cause any serious damage:

In magazines popular with the educated middle class, parents were urged to monitor teenagers’ complexions and to take a teenager to a dermatologist as soon as any eruptions appeared: “Even the mildest attack is best dealt with under the guidance of an understanding medical counselor.” Those parents who took a more acquiescent view were guilty of neglect: “Ignoring acne or depending upon its being outgrown is foolish, almost wicked.”

Whereas worrying about one’s appearance and trying to correct it was once viewed as improper for young women, it was now considered acceptable and even productive. Even state health departments issued pamphlets urging young people to make sure that they were “as attractive as nature intended you to be.” It was understood that beauty was an important and necessary quality to have, not only because it opened doors for people but because it was just another aspect of health and wellbeing.

Today, our views on beauty seem much more rife with contradictions. Obviously beauty is still important. Women (and, to a lesser but growing extent, men) are still encouraged and expected to spend money, time, and energy on improving their appearance. We know from research that the halo effect exists, and that lends a certain practicality to what was once viewed as a frivolous pursuit—trying to be beautiful.

At the same time, though, we insist that beauty “doesn’t matter,” that “it’s what’s on the inside that counts.” It’s difficult for me to imagine a modern middle-class parent immediately rushing their child to the dermatologist at the first sign of pimples; it seems that they would be more likely to encourage the child to remember that “beauty is only skin deep” and that one’s “real friends” would never make fun of them for their acne. (Of course, I grew up with no-nonsense immigrant parents who rejected most forms of conformity, so maybe my experience was different.) Nowadays, costly medical interventions to improve teenagers’ looks are more associated with the upper class than the middle class, and we tend to poke fun (or shudder in disgust) at parents who take their children to get plastic surgery and put them on expensive weight loss programs.

It appears that our culture has outwardly rejected—or is in the process of trying to reject, amid much cognitive dissonance—the idea that beauty is a good way to judge people, that it reveals anything about them other than how they happen to look thanks to genetics or their environment. No longer do we consider beauty a sign of purity and spiritual wellbeing, as in the Victorian era, or of health and social success, as in the postwar years.

Of course, that’s just outwardly. Although we’re loath to admit it, beauty still matters, and people still judge others by their appearance, and we still subscribe to the notion that anyone can be beautiful if they just try hard enough (which generally involves investing a sufficient amount of money). While people are likely to tell you that beauty is a superficial thing that shouldn’t matter, their actions suggest otherwise.

An interesting contrast to this is Brazil, where plastic surgery, or plástica, is generally covered by the state healthcare system. As anthropologist Alexander Edmonds describes, many in Brazil believe that beauty is a “right” that everyone deserves, not just those who can afford it. One surgeon says:

In the past the public health system only paid for reconstructive surgery. And surgeons thought cosmetic operations were vanity. But plástica has psychological effects, for the poor as well as the rich. We were able to show this and so it was gradually accepted as having a social purpose. We operate on the poor who have the chance to improve their appearance and it’s a necessity not a vanity.

Brazilians, too, have been influenced by Alfred Adler’s concept of the “inferiority complex,” and in this sense the meaning of beauty in Brazil is similar to its meaning in postwar America, with a few differences. Like Americans in the 1950s, many Brazilians believe that improving one’s appearance is an important form of healthcare that heightens self-esteem and confidence. It’s not a matter of vanity.

However, unlike Americans, Brazilians (at least the ones profiled in Edmonds’ study) believe that self-esteem is important for the poor as well as for those who are better-off. In the United States, people tend to scoff at the idea that those living in poverty need (let alone deserve) entertainment, pleasure, or really anything beyond what they require to survive, and in the postwar years the focus on adolescents’ appearance seemed to be confined to the middle and upper classes. But in Brazil it’s accepted as a “right”–a right to be beautiful.

Looking at how Americans in the past viewed beauty, as well as how people in other cultures view it, exposes the contradictions in our own thinking about it. Our outward dismissal of beauty as vain and unimportant clashes with our actual behavior, which suggests that beauty is quite important. This tension probably emerged because we have abandoned our earlier justifications for valuing beauty, such as the Victorian view of beauty as a sign of morality and the postwar view of beauty as a vital component of health. Now that we know that beauty has nothing to do with morality and relatively little to do with health, we’re forced to declare that it “doesn’t matter.” But, of course, it does.


Viewing History Skeptically, Part 2: Beauty

Viewing History Skeptically: On Shifting Cultural Assumptions and Attitudes

I’ve been reading Odd Girls and Twilight Lovers, Lillian Faderman’s sweeping social history of lesbians in 20th-century America (this is the sort of thing I do for fun). At the beginning of the chapter on World War II, Faderman offers this insight:

If there is one major point to be made in a social history such as this one, it is that perceptions of emotional or social desires, formations of sexual categories, and attitudes concerning “mental health” are constantly shifting–not through the discovery of objectively conceived truths, as we generally assume, but rather through social forces that have little to do with the essentiality of emotions or sex or mental health. Affectional preferences, ambitions, and even sexual experiences that are within the realm of the socially acceptable during one era may be considered sick or dangerous or antisocial during another–and in a brief space of time attitudes may shift once again, and yet again.

This is probably the single most important thing I’ve learned through studying history and sociology in college. For many reasons that I’ll get into in a moment, many people assume that the cultural attitudes and categories they’re familiar with are that way “for a reason”: that is, a reason that can be logically explicated. This requires a certain amount of reverse engineering–we note our attitudes and then find reasons to justify them, not the other way around. We don’t want gay couples raising kids because that’s bad for the kids. We don’t want women getting abortions because fetuses are human beings. We don’t want women to breastfeed in public because it’s inappropriate to reveal one’s breasts. We don’t want women to be in sexual/romantic relationships with other women because that’s unhealthy and wrong. That last idea is the one Faderman addresses in the next paragraph (emphasis mine):

The period of World War II and the years immediately after illustrate such astonishingly rapid shifts. Lesbians were, as has just been seen [in the previous chapter], considered monstrosities in the 1930s–an era when America needed fewer workers and more women who would seek contentment making individual men happy, so that social anger could be personally mitigated instead of spilling over into social revolt. In this context, the lesbian (a woman who needed to work and had no interest in making a man happy) was an anti-social being. During the war years that followed, when women had to learn to do without men, who were being sent off to fight and maybe die for their country, and when female labor–in the factories, in the military, everywhere–was vital to the functioning of America, female independence and love between women were understood and undisturbed and even protected. After the war, when the surviving men returned to their jobs and the homes that women needed to make for them so that the country could return to “normalcy,” love between women and female independence were suddenly manifestations of illness, and a woman who dared proclaim herself a lesbian was considered a borderline psychotic. Nothing need have changed in the quality of a woman’s desires for her to have metamorphosed socially from a monster to a hero to a sicko.

“Nothing need have changed in the quality of a woman’s desires”–and neither did lesbianism need a PR campaign–in order for love between women to gain acceptance during the war. All that needed to happen was for lesbianism to become “useful” to mainstream American goals, such as manufacturing sufficient military supplies while all the male factory workers were off at war. And since having a male partner simply wasn’t an option for a lot of young women, the idea that one might want a female lover suddenly didn’t seem so farfetched. And so, what was monstrous and anti-social just a few years before suddenly became “normal” or even good–until the nation’s needs changed once again.

Once I got to college and learned to think this way, I quickly abandoned my socially conservative beliefs and got much better at doing something I’d always tried to do, even as a child–questioning everything. I also started seeing this phenomenon all over the place–in the labels we use for sexual orientation, in the assumptions we make about the nature of women’s sexuality, in the way we define what it means to be racially white.

Unfortunately, though, the way history is usually taught to kids and teens isn’t conducive to teaching them to be skeptical of cultural assumptions. (That, perhaps, is no accident.) The history I learned in middle and high school was mostly the history of people and events, not of ideas. In Year X, a Famous Person did an Important Thing. In Year Y, a war broke out between Country A and Country B.

When we did learn about the history of ideas, beliefs, and cultural assumptions, it was always taught as a constant, steady march of progress from Bad Ideas to Better Ideas. For instance, once upon a time, we thought women and Black people weren’t people. Now we realize they’re people just like us! Yay! Once upon a time we locked up people who were mentally ill in miserable, prison-like asylums, but now we have Science to help them instead!

Of course, it’s good that women and Black people are recognized as human beings now, and we (usually) don’t lock up mentally ill people in miserable, prison-like asylums. But 1) that doesn’t mean everything is just peachy now for women, Black people, and mentally ill people, and 2) not all evolutions of ideas are so positive.

This view of history precludes the idea that perhaps certain aspects of human life and society were actually better in certain ways in the past than they are now–or, at least, that they weren’t necessarily worse. And while very recent history is still fresh in the minds of people who may be wont to reminisce about the good ol’ days when there weren’t all these silly gadgets taking up everyone’s time and wives still obeyed their husbands, nobody seems to particularly miss the days when a man could, under certain circumstances, have sex with other men without being considered “homosexual,” or when people believed that in order for a woman to get pregnant, she had to actually enjoy sex and have an orgasm.

Societal factors, not objective physical “reality,” create social categories and definitions. I believe that understanding this is integral to a skeptical view of the world.

In a followup post (hopefully*), I’ll talk about some specific examples of these shifting cultural attitudes, such as the invention of homosexuality and the definition of “normal” female sexuality.

*By this I mean that you should pester me until I write the followup post, or else I’ll just keep procrastinating and probably never do it.

Viewing History Skeptically: On Shifting Cultural Assumptions and Attitudes

Blaming Everything On Mental Illness

The Associated Press has revised its AP Stylebook, the guide that most journalists use to standardize their writing, to include an entry on mental illness. Among the many important things the entry includes (you should read it in full here), it says:

Do not describe an individual as mentally ill unless it is clearly pertinent to a story and the diagnosis is properly sourced.

And:

Do not assume that mental illness is a factor in a violent crime, and verify statements to that effect. A past history of mental illness is not necessarily a reliable indicator. Studies have shown that the vast majority of people with mental illness are not violent, and experts say most people who are violent do not suffer from mental illness.

That first one is important because there is a tendency, whenever a person who has done something wrong also happens to have a mental illness, to attempt to tie those two things together.

Some things I have seen people (and, in some cases, medical authorities) try to blame on mental illness:

  • being violent
  • being religious
  • being an atheist
  • abusing children
  • spending money unwisely
  • raping people
  • stealing
  • bullying or harassing people
  • being upset by bullying and harassment
  • enjoying violent video games
  • being shy
  • being overly social
  • being too reliant on social approval
  • having casual sex
  • being into BDSM
  • not being interested in sex
  • dating multiple people
  • not wanting to date anyone
  • not wanting to have children
  • being attracted to someone of the same sex
  • being trans*
  • wanting to wear clothing that doesn’t “belong” to your gender

You’ll notice that these things run the gamut from completely okay to absolutely cruel. Some of them involve personal decisions that affect no one but the individual, while others harm other people immeasurably. All of them, though, are things that our culture has deemed inappropriate to varying degrees.

That last point, I believe, explains why these things (and many others) are so often attributed to mental illness. It is comforting to believe that people who flout social norms, whether those transgressions are as minor as wearing the wrong clothing or as severe as abusing and killing others, do so for individual reasons or personal failings of some sort. It’s comforting because it means that such transgressions are the acts of “abnormal” people, people we could never be. It means that there are no structural factors we might want to examine and try to change because they contribute to things like this, and it means that we don’t have to reconsider our condemnation of those behaviors.

It’s easier to say that people who won’t obediently fit into one gender or the other are “sick” than to wonder if we’re wrong to prescribe such strict gender roles.

It’s easier to say that a mass shooter is “sick” than to wonder if we’ve made it too easy to access the sort of weapons that nobody would ever need for “self-defense.”

It’s easier to say that a rapist is “sick” than to wonder if something in our culture suggests to people over and over that rape isn’t really rape, and that doing it is okay.

It’s easier to say that a bully is “sick” than to wonder why we seem to be failing to teach children not to torment each other.

It’s easier to say that a compulsive shopper is “sick” than to wonder why consuming stuff is deemed so important to begin with.

Individual factors do exist, obviously, and they are important too. Ultimately people have choices to make, and sometimes they make choices that we can universally condemn (although usually things aren’t so black and white). Some things are mental illnesses, but even mental illnesses do not exist in some special biological/individual vacuum outside of the influence of society. In fact, in Suicide, one of the most well-known works of sociology ever published, Émile Durkheim presents evidence that even suicide rates are influenced by cultural context.

In any case, it’s an understandable, completely human impulse to dismiss all deviant behaviors as the province of “mentally ill” people, but that doesn’t make it right.

It’s wrong for many reasons. It dilutes the concept of “mental illness” until it is almost meaningless, leading people to proclaim things like “Well, everyone seems to have a mental illness these days” and dismiss the need for more funding, research, and treatment. It increases the stigma of mental illness when people inaccurately attribute behaviors that are universally considered awful, like mass shootings, to it. It causes those who have nothing “wrong” with them, such as asexual, kinky, and LGBTQ people, to keep trying to “fix” themselves rather than realizing that it’s our culture that’s the problem. And it prevents us from working to change the factors that actually contribute to these problems, such as rape culture, lack of gun control, and consumerism, because it keeps those factors invisible to us.

People disagree a lot regarding the role of the media in society. Should it merely report the facts as accurately as possible, or does it have a responsibility to educate people and promote change? Regardless of your stance on that, though, I think most people would agree that the media should at the very least do no harm. Blaming everything from murder to shyness on mental illness absolutely does harm, which is why I’m happy to see the Associated Press take a stand against it.

That said, it’s not enough for journalists to stop attributing everything to mental illness. The rest of us have to stop doing it too.

Blaming Everything On Mental Illness