You’re not “a racist”; you’re just racist

This past weekend I was chatting with a friend of mine about a variety of topics, including the tragic shootings in Norway. He was trying to establish that the event was an isolated incident by one crazy person, while I was suggesting that those kinds of things don’t happen in a vacuum. I pointed to a parallel argument I had when it comes to hate groups like Blood and Honour – the extremists are often the outliers of a group that holds similar views but would stop short of violence.

His response was fairly typical: “well there are always going to be some racists out there, but that doesn’t mean everyone is responsible.”

He was wrong, for reasons that I discuss in the linked post above, but it was the language he used that particularly irked me. “Some racists” is not a phrase I could ever see myself using, except in an unthinking moment. Not only is it an unwieldy phrase that could be convicted of abusing the English language, it tips its hand as to how deeply the speaker misunderstands the origins and mechanisms of racism. I’ve touched on this discussion before, but I would like to talk explicitly about why this phrase is either a) meaningless, or b) profoundly ignorant.

First, we must revisit our operational definition of racism. Please note that I am using the term ‘operational definition’ intentionally – I use this definition for my own purposes, but it means many different things to different people. I think that my definition is the most accurate I’ve come across (obviously), but others would disagree. The chief component of my definition is that racism happens when attitudes or beliefs about a racial group are ascribed to an individual. Essentially, it makes the assumption that a person’s racial background provides sufficient information to predict their behaviours, which is not supported by evidence. This is to say nothing of the fact that the attitudes or beliefs about a group could be (and often are) fundamentally flawed.

It becomes fairly clear, when we consider this definition, that all people are potentially susceptible to this kind of heuristic thinking. I am sure that I have gone on rants about what “conservatives” do and do not believe, when conservatism does not necessitate given beliefs on any topic – rather, conservative thinking tends to lead to a cluster of beliefs, many of which are often shared by those who describe themselves as “conservative”. It is a cognitive shortcut, but one that oversimplifies a process that is important to understand – the mental scaffolding that supports conservative (or liberal) beliefs. Simply labeling people as “conservatives” masks that thought process, putting effect in the place of cause.

Similarly, I rankle whenever someone uses the phrase “a racist”, because it commits the same error. Racism is a cognitive process, and as such exists as the engine behind actions and attitudes, rather than their essential component. Calling someone “a racist” suggests that there is some kind of binary state of ‘racist’ and ‘not racist’ in which people can exist. It supposes further that when someone performs an action or voices an attitude that is itself racist, that it is their existence in the first of these binary conditions that is primarily responsible – as though there is something organically racist within them that doesn’t exist in the general population. You know, the general population of ‘not racists’.

Of course it’s trivially easy to recognize the fallacious thinking at work here. All we have to do is look back over the last few decades and note the monumental rate of spontaneous remission that happened in ‘racists’. A sudden seroconversion that has removed all the malignant racist cells and replaced them with healthy non-racistocytes. Or, perhaps racism isn’t quite so simple as that. When we see racism as simply a product of human cognitive shortcuts, the idea of being “a racist” starts to fall apart. After all, if we’re all susceptible to racist thoughts and behaviours (that are, for most of us, subconscious), then can anyone be described as “not racist”? Does it exist on some kind of continuum like the DSM where people that exhibit a certain pattern of behaviour can be diagnosed with “racist personality disorder”?

No. Racism is best understood as the product of ideas, both conscious and unconscious, about other people, and our tendency to try and reduce people to convenient labels (like… oh, I dunno… ‘a racist’). I can certainly understand why people like to use this term, because it allows them to preserve their self-concept of being a good person and scapegoat racist activities as the product of “racists”. Once blame has been assigned in this way, then the speaker can dust her/his hands off and say “it’s not my problem – I’m not a racist.” However, that simply means the problems never get solved, because the only people whose self-concept allows them to brand themselves as being “a racist” are proud of that appellation.

This is why I am in favour of using my own definition of racism, because it renders the idea of being ‘a racist’ completely ridiculous. While it may be convenient to describe people as being ‘a racist’, it distracts from what is actually happening behind the scenes in such a way as to increase societal inertia when it comes to dealing with race issues. It is far more accurate and useful to think of racism as a set of cognitive conditions that encourage a certain kind of behaviour – conditions that are present in us all. What this allows us to do is confront our own biases – no matter how uncomfortable they might make us – and in so doing, make positive changes to minimize the harms they may cause.

Like this article? Follow me on Twitter!

Same planet, different worlds

“Intersectionality” is a word that is new to my lexicon – a lexicon that constantly expands as I delve deeper into the anti-racist and feminist literature. The word intersectionality refers to (so far as I can tell) the way in which identical variations in one variable can elicit a differential result based on a third variable that doesn’t seem to be related. For example, men and women have good reason to react differently to seemingly-innocuous stimuli, like being approached for sex late at night on an elevator. It is not the nature of the stimulus on its own, but the intersection of the stimulus with the third variable of gender that determines the nature of the response.

Those of us familiar with multivariate regression modeling (yes – this is the single lamest thing I have ever bragged about on the internet) can easily wrap our heads around this concept. For others, it can become quite difficult to grasp how something that might seem completely unrelated to an event could completely change the way we react to that event. To help illustrate the concept, and to tip my hat to one of my favourite comic artists, I am entitling this post Same Planet, Different Worlds.
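For those who want the nuts and bolts, here is a minimal sketch of what an interaction (intersection) term looks like in a toy regression model. The function and every number in it are made up purely for illustration – the only point is that the identical stimulus yields a different predicted response depending on group membership:

```python
# Toy linear model with an interaction term (all coefficients made up):
#   response = b0 + b1*stimulus + b2*group + b3*(stimulus * group)
# The b3 term is the 'intersection': the effect of the stimulus
# depends on the value of a second, seemingly unrelated variable.

def predicted_response(stimulus: float, group: int) -> float:
    b0, b1, b2, b3 = 1.0, 0.5, 0.2, 2.0
    return b0 + b1 * stimulus + b2 * group + b3 * stimulus * group

# Identical stimulus, different group membership:
print(predicted_response(1.0, 0))  # -> 1.5
print(predicted_response(1.0, 1))  # noticeably larger, thanks to b3
```

Without the b3 term, the two groups would differ only by a constant offset; with it, the slope itself changes – which is exactly the "same planet, different worlds" effect.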

For historical reasons, race and religion in the United States are not independent variables. In a scientific sense, there is no biological or chemical reason why, for instance, black people would be more religious than white people. And yet we do see an interesting intersection between race, religion, and attitude toward interracial marriage:

Pew’s February Political Typology Poll asked people about recent trends in American society. Pew asked if “more people of different races marrying each other” was good or bad for society. Overall, only nine percent of Americans said it was bad for society. However, 16 percent of white evangelicals said this, more than twice the opposition found among other Americans (7 percent). The survey found that 27 percent of Americans overall said more interracial marriage was good for society, compared to 17 percent of evangelicals.

The first thing I want to draw your attention to in the above excerpt and figure is the difference that simply being religious makes on one’s attitude toward interracial marriage. When compared to those who reported having no religion, far fewer Christians look at an increase in marriages that transcend racial barriers as a positive outcome for society. There is nothing inherent in Christianity stating that racial groups are created separate. That kind of idea has been imprinted onto Christianity in the United States since the days of Emancipation, but it is not biblically doctrinal. That being said, because it has become doctrine in many branches of American Christianity, it is no surprise to me that religion would have this effect.

The second thing to look at, however, is the effect that being black and religious has on these attitudes. While the number who view such marriages positively is more or less neck-and-neck with their coreligionists, the number who view them negatively is tiny. It is the intersection of the dueling identities of ‘black’ and ‘Protestant’ that fuels this outcome. Because ‘miscegenation’ is still anathema to the American Christian,* there can be no approval of race mixing. At the same time, because of the prevailing societal attitudes about the different races, an interracial marriage carries remarkably different social implications for a black person than it does for a white person – and black people’s attitudes toward such marriages differ accordingly.

I have tried my best so far to avoid using judgmental language in this discussion. It’s difficult, because obviously the subject of interracial marriage is very personal to me. However, I have to remain mindful of the fact that these people’s opinions are the product of their environments, rather than some deficit in their character (more on that on Monday). That being said, I can definitely attack the ideas they hold with no restraint, which I will do now.

The kind of evil that fuels the nearly 20% of white evangelical Christians who oppose these marriages is possible only when you think your small-mindedness is justified by some kind of divine mandate. While there will always be some hateful people in every group, please let these findings put to rest the idea that Christianity makes people more tolerant or better people. What it does, what all religions do, is give people permission to throw aside introspection and thought-based ethics in favour of easy answers and a false sense of superiority. Considering the insular nature of many evangelical communities, the lack of exposure to dissenting opinions simply serves to make matters worse.

I have a sneaking suspicion that most people I know would think that “doesn’t make much of a difference” is the ‘correct’ answer. After all, we are told we are not supposed to have feelings about race, either positive or negative. Personally, if I were asked this question I’d say that more intermingling of racial groups is definitely a good thing for society, since it furthers the erosion and blurring of the lines separating racial groups. When you have kids whose parents are two different ‘things’, then it’s kind of difficult to see either one or the other as superior (though God knows South Africa tried).

To bring it back to my original point, it’s important to recognize that ‘intersectionality’ is a real force, and understanding it is key to understanding why members of a group might have different reactions to an event. It’s certainly important to understand if you, for instance, want to increase the number of visible minorities in your political movement (wink wink, nudge nudge).

*It is important at this point to note that I don’t think that all Christians in the United States are race-baiting hate mongers. I am merely making the point that this type of ‘safeguarding’ of ‘racial purity’, when couched in religious language, comes from a uniquely American brand of Christianity.

Banking on poverty

So at various points in the past I’ve talked about the pernicious lie that is the idea of Africa as a barren wasteland. Because Africa’s people are poor, we assume that the continent itself is poor. After all, isn’t that what we see in the charity commercials? People (mostly children) poking through rubble, having to walk miles across a barren wasteland for fresh water, dry savannah with no resources to exploit? It’s a lie, all of it: Africa isn’t poor because it lacks resources; it is poor because it is kept poor:

Hedge funds are behind “land grabs” in Africa to boost their profits in the food and biofuel sectors, a US think-tank says. In a report, the Oakland Institute said hedge funds and other foreign firms had acquired large swathes of African land, often without proper contracts. It said the acquisitions had displaced millions of small farmers.

When colonial powers officially left Africa, they left behind a long legacy of abuse and destabilization of local government. The lack of domestic education and infrastructure meant that newly-minted African leaders were woefully unprepared to resist sweet-sounding offers that came from foreign corporate entities, promising high-paying jobs and modern conveniences. What people didn’t realize was that, much in the same way European powers had taken control of American land from its native people, Africans were signing their lands away.

Africa is incredibly resource-rich, but lacks the human capital to exploit its own resources in the way that, say, the United States was able to do to become a world power (of course, the fact that outside Mauritania Africa doesn’t really have a thriving slave trade prevents it from really matching the USA’s rise to dominance). The result is that Africans have a choice – work for foreign corporate powers or starve. Whatever political will there is for change is tamped down by well-funded and armed warlords who act as political leaders, but reap the rewards of selling their people back into slavery chez nous.

Of course, with no real options for self-improvement, people who wish to survive in Africa agree to work for the corporations. It is only by allowing the conditions to remain oppressive and hopeless that the corporations can maintain an economic stranglehold on the nations of Africa. That is why I am particularly skeptical when one of these same hedge funds – one that owns African land roughly equal in acreage to the country of France (wait… isn’t colonialism over?) – says something like this:

One company, EmVest Asset Management, strongly denied that it was involved in exploitative or illegal practices. “There are no shady deals. We acquire all land in terms of legal tender,” EmVest’s Africa director Anthony Poorter told the BBC. He said that in Mozambique the company’s employees earned salaries 40% higher than the minimum wage. The company was also involved in development projects such as the supply of clean water to rural communities. “They are extremely happy with us,” Mr Poorter said.

Anyone who knows about the existence of a “company town” knows to be wary of statements like this. When the entire economic health of a municipality is dependent on jobs from one source, the citizens of the town basically become 24/7 employees. Without strong labour unions and the rule of law, this kind of arrangement can persist in perpetuity, or at least until the company decides that there’s no more value to be squeezed from that area and the entire town collapses, creating generations of impoverished people.

Much like we saw in yesterday’s discussion of First Nations reserves, when there is not a strong force for domestic development – whether governmental or otherwise – people are kept trapped in a cycle of poverty. Poverty goes beyond simply not having money – it means having no hope of pulling yourself out. When you lack the means, the education, and the wherewithal to “pull yourself up by your bootstraps” (a term I hate for both rhetorical and mechanical reasons – wouldn’t you just flip your feet over your own head and land up on your ass?), all of the Randian/Nietzschean fantasies of some kind of superman building his fortune from scratch can’t save you.

Which is why well-fed free-market capitalist ideologues annoy me so much. The private sector is not bound by ethics, and most of the companies doing this kind of exploitation aren’t the kind of things you can boycott (as though boycotts actually work, which they don’t – just ask BP). When profit is your only motive and law is your only restraint, you’ll immediately flock to places with the least laws and most profits. I’m not suggesting that more government is necessarily the answer – most of the governments in Africa are so corrupt that they simply watch the exploitation happen and count their kickbacks – but neither is rampant and unchecked free market involvement.

Like Canada’s First Nations people, Africans must be given not only the resources but the knowledge and tools to learn how to develop their own land. They must be treated as potential partners and allies, rather than rubes from whom a buck can be wrung. Small-scale development projects that put the control in the hands of the community rather than the land-owners are the way to accomplish this. Not only does it build a sense of psychological pride and move the locus of control back into people’s hands, but there are effects that echo into the future, as new generations of self-sufficient people grow up with ideas and the skills to make them happen.

While it’s all well and good to talk about bootstraps, when there’s a boot on your neck then all the pulling in the world won’t get you onto your own feet.

Mining the depths of “reverse racism”

A version of this post appears at Phil Ferguson’s ‘Skeptic Money’ blog.

In the past I have spoken, a couple of times actually, about the phenomenon of “regression to the mean”. Basically, this describes the process whereby repeated observations tend to distribute around the average value. Extreme values – those that lie far away from the average – tend to ‘move’ toward the middle. However, if you’re looking from the perspective of this extreme value, it might look like movement toward the middle is you losing something. It’s a completely understandable misapprehension, born of an inability to see the full field from any perspective other than your own (also known as privilege).
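For the statistically curious, regression to the mean is easy to demonstrate with a quick simulation (Python, entirely made-up numbers): if a score is part stable ability and part luck, the people who post extreme scores on a first try will, on average, score closer to the middle on a second try – without anything actually being ‘taken away’ from them.

```python
import random
import statistics

random.seed(42)

# Each observed score = stable 'true' value + independent noise.
true_skill = [random.gauss(100, 10) for _ in range(10_000)]
score1 = [s + random.gauss(0, 10) for s in true_skill]
score2 = [s + random.gauss(0, 10) for s in true_skill]

# Pick out the top 5% on the first measurement...
cutoff = sorted(score1)[int(0.95 * len(score1))]
extreme = [(a, b) for a, b in zip(score1, score2) if a >= cutoff]

# ...and compare that same group's averages across the two measurements.
mean1 = statistics.mean(a for a, _ in extreme)
mean2 = statistics.mean(b for _, b in extreme)
print(f"extreme group, first score:  {mean1:.1f}")
print(f"same group, second score:    {mean2:.1f}")  # closer to the mean of ~100
```

From inside the extreme group, the second measurement looks like a loss; from outside, it’s just the luck component washing out.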

I’ve talked about this issue in terms of religious privilege – the mistaken belief that religious people are being “persecuted” when secular authority insists on enforcing laws equally for everyone, instead of giving the majority religious group their accustomed preferential treatment. However, it’s easier to spot this phenomenon in the case of what is called “reverse racism”. My problem with this term is twofold: first, it assumes that “racism” flows from white people to people of colour (PoCs), and that anything else is the “reverse” of normal; second, it’s patently ridiculous. While it is undoubtedly true that white people can face racial discrimination at an individual level, they still comprise the majority group in this part of the world (and hold a great deal of power in others).

And yet, whenever one talks about any step being taken to either treat white people according to the same standard that everyone else is treated, or to allow targeted preferential treatment for marginalized ethnic groups, the cry of “reverse racism” goes up, and it appears to have taken deep root in the common psyche:

Whites believe that they have replaced blacks as the primary victims of racial discrimination in contemporary America, according to a new study from researchers at Tufts University’s School of Arts and Sciences and Harvard Business School. The findings, say the authors, show that America has not achieved the “post-racial” society that some predicted in the wake of Barack Obama’s election.

Both whites and blacks agree that anti-black racism has decreased over the last 60 years, according to the study. However, whites believe that anti-white racism has increased and is now a bigger problem than anti-black racism.

I’m going to be honest with you: I didn’t think that the average person was this dumb. Given what we know about rates of incarceration, employment, home ownership, relative wealth, and proposed legislation that disproportionately targets PoCs, I thought for sure that people would realize that it’s still a burden to be dark-skinned in the United States. However, it seems as though white America (if you’ll forgive the term) has bought wholesale into the idea that, despite all indications, they are the group most discriminated against.

It is centrally important to note that this is about perceived racial discrimination, not observed discrimination. This cannot be used to demonstrate the actual existence of racism against white people, let alone to the extent that it outweighs racism against blacks or latinos. These kinds of findings are useful only in understanding what the public perception of a phenomenon is – not the strength of the phenomenon. We should be, and have reason to be, extremely skeptical of the claim that white people are the most discriminated against ethnic group – they disproportionately represent the political and economic power in the United States, and it would be quite something if that’s somehow completely reversed out among ‘the little people’.

Perhaps the most interesting and potentially revealing finding from the study, and potentially a place where work can be done, is this:

Both within each decade and across time, White respondents were more likely to see decreases in bias against Blacks as related to increases in bias against Whites—consistent with a zero-sum view of racism among Whites—whereas Blacks were less likely to see the two as linked.

Whereas both groups tended to see anti-black discrimination decreasing over the years, blacks saw this as the two groups getting closer together. Whites, on the other hand, seem to view any improvement of non-white groups as taking ‘their’ resources away. In essence, there have to be winners and losers in the game of life, and if black people are getting closer to winning then whites must be losing by definition.

The problem with this type of reasoning is that it is entirely possible for groups to grow and improve together. A higher rate of, for example, black home ownership means a reduction in crime, improvements in education, and increased entrepreneurship. This means a stronger economy, as white and black consumers alike begin innovating and producing more wealth. Having a large group of poor black people means not only that racial groups stay segregated, but that the status quo of black people on the bottom remains (with all the negative aspects associated with that).

It is entirely possible that minority ethnic groups have become more vocal in their criticism of white people. Most of this criticism comes in the form that you see here – descriptions of phenomena that fall along racial lines not because of inherent genetic differences between groups, but because those trivial genetic differences collide with social structures. Some of this is due to the fact that PoCs are less afraid of speaking up and becoming politically active. Some of it, to be sure, is legitimate anti-white racism based on resentment or misunderstandings of history or whatever dumb reasons anyone has to be racist.

However, the mere existence of legitimate anti-white racism does not grant the majority group victim status. What we’re seeing is the idea of “reverse racism” coming to full fruition – white people aren’t supposed to be discriminated against, and therefore any discrimination is the worst thing that’s ever happened. Hopefully by learning to re-frame racial issues in terms of mutual benefit for all groups, we can begin to finally do away with this oh-so-stupid of ideas.

Movie Friday: A Girl Like Me – unpacking societal racism

On Wednesday I talked a bit about the subconscious realm in which racist ideologies often lie. If we’re careful, we can measure and observe exactly how these thoughts and ideations affect our decision-making. The question then arises as to where these ideas come from in the first place. Do secret cabals of white supremacists slip into our rooms as children and whisper hate-speech in our ears as we sleep? (Well, maybe that’s the case for some of us – I have no idea.) More likely, we notice patterns of behaviour and external stimuli, and our minds form patterns and ideas about them long before we are able to put them into words.

We have these ideas sitting in our brains, doing work on our minds without our even noticing them. This may be particularly true for black women, as the above video may suggest, simply because we simultaneously have such a negative view of black features and place such a premium on appearance in women. This kind of implicit attitude formation happens to us as children, as we are surrounded by images that imply the superiority of whiteness and the inferiority of colour. It is only natural that not only would white children think negatively of children of colour, but that children of colour would similarly internalize these attitudes and think poorly of themselves.

Of course these kinds of things are hard to unpack, and as we get older our conscious minds can be taught to recognize these attitudes and reverse them. However, if we are so hell-bent on denying our own racist thoughts in some fit of arch-liberal self-righteousness, we will never learn to check our own assumptions. When the chips are down and we’re under pressure, we will continue to make decisions based on these gut instincts that we learn as children.

It’s not a black/white issue either:

Society gives us narratives about the people around us, and we internalize them without thinking. Evolutionarily, this is a useful trait for ensuring group cohesion – we will tend to reach consensus and can do so instinctively. However, when it comes to trying to break out of the evolutionary mould and design a society that is equitable to all people, we run into serious problems if we rely on these instincts rather than consistent introspection and vigilance. That kind of constant self-monitoring isn’t easy (trust me, I have a propensity to say stupid misogynistic stuff in the service of getting a laugh – deprogramming yourself is hard work), but it’s the only way to overcome biases that might otherwise go completely unnoticed.

Cracking the code

I screwed up. A couple of weeks ago I introduced a new term into the discussion – “coded racism” – without doing my usual thought-piece beforehand:

To the list of code words that don’t sound racist but are, I would add ‘personal responsibility’. While personal responsibility is a good thing, its usage in discussions of race inevitably casts black and brown people as being personally irresponsible, as though some genetic flaw makes us incapable of achievement (which, in turn, explains why we deserve to be poor and why any attempt to balance the scales is ‘reverse racism’).

I have danced around the idea, and I have made occasional reference to the concept behind it, but I haven’t really explained what coded racism is. I will have to do that in next Monday’s post, so stay tuned for that. As a teaser explanation, I will simply point out that oftentimes phrases are used to identify groups in a sort of wink/nudge way, where everyone listening knows who the speaker is really talking about. It’s phrases like “Welfare queen” and “illegal immigrant” that do not explicitly name the group being criticized, but still carry with them the image of a particular race. It is not, as is the common objection, simply a phrase describing any criticism of racial minority groups.

Before we can really delve too deeply into coded racism, there is a truth that we must acknowledge and grok – that racism (like all cognitive biases) can happen at levels not available to our conscious mind. The second part of the grokking is that even though we are not aware of it, racism can influence the decisions we make. As much as we like to believe that we are free-willed agents of our own decision-making, closer to the truth is that a wide variety of things operate in our subconscious before we are even aware that a decision is being made. This is why an artist, an engineer and a physicist could all look at the same blank piece of canvas and see completely different things (a surface upon which to draw, a flat planar surface with coefficient of friction µ, a collection of molecules). We then build conscious thoughts on top of the framework of our subconscious impressions and arrive at a decision.

So when we tell ourselves “I don’t have a racist bone in my body”, what we are really referring to are those conscious thoughts. Most people refuse to entertain overtly racist attitudes, because those attitudes have become wildly unpopular and people recognize that racism is destructive. However, our decisions are only partially decided by our overt ideas, and we can end up engaging in patterns of behaviour that may surprise even us:

You are more likely to land a job interview if your name is John Martin or Emily Brown rather than Lei Li or Tara Singh – even if you have the same Canadian education and work experience. These are the findings of a new study analyzing how employers in the Greater Toronto Area responded to 6,000 mock résumés for jobs ranging from administrative assistant to accountant.

Across the board, those with English names such as Greg Johnson and Michael Smith were 40 per cent more likely to receive callbacks than people with the same education and job experience with Indian, Chinese or Pakistani names such as Maya Kumar, Dong Liu and Fatima Sheikh. The findings not only challenge Canada’s reputation as a country that celebrates diversity, but also underscore the difficulties that even highly skilled immigrants have in the labour market.

This phenomenon is well-known to people who study race disparity, but it is rare to see it make the pages of a paper like The Globe and Mail – hardly a leftist rag. People of colour (PoCs), or in this case people who seem non-Anglo, are at a disadvantage not because of how they look, or how they act, but simply because they have funny-sounding names. Now one would have to be particularly cynical to think that a human resources professional is sitting there saying “Fatima Sheikh? I don’t want no towel-head working for ME!” and throwing résumés in the trash. As I said, that kind of overt racism is rare, even in the privacy of one’s own head. What is far more likely is that, given a situation in which a choice had to be made between a number of potential candidates, the HR person made a ‘gut instinct’ decision to call back the person that they felt most comfortable with.

The problem is that when we feel different levels of comfort with people of different ethnic backgrounds, our aggregate decisions tend to benefit white people and disadvantage PoCs. This isn’t because we’re all card-carrying KKK members, but because we are products of a racist society. This kind of thinking isn’t relegated to how we hire, either:

An experiment was conducted to demonstrate the perceptual confirmation of racial stereotypes about Black and White athletes… Whereas the Black targets were rated as exhibiting significantly more athletic ability and having played a better game, White targets were rated as exhibiting significantly more basketball intelligence and hustle. The results suggest that participants relied on a stereotype of Black and White athletes to guide their evaluations of the target’s abilities and performance.

In a situation where an athlete is identified to study participants as either black or white, but performance is kept exactly the same (they listen to a radio broadcast), what is considered ‘athletic ability’ in a black player is ‘basketball intelligence’ and ‘hustle’ in a white player. The identical stimulus is perceived in different ways, based on racial ideas that are not readily available to the subjects (and, by extension, the rest of us). This finding on its own may be benign enough, but extrapolate the fact that innate ‘athletic talent’ in one race is seen as ‘intelligence and hustle’ in another – the black players are just naturally good; the white ones had to work for it. Poor white folks are ‘down on their luck’, poor black folks are ‘waiting for a handout’. Jobless white folks are ‘hit hard by the economy’; jobless brown folks are ‘lazy’.

And so, when we discuss the idea of words that are simply coded racial evaluations, we have to keep in mind that it is this subconscious type of racism that these phrases appeal to. Far from simply being a macro description of a real problem, the way they are used bypasses our conscious filters and taps right into the part of our mind we don’t know is there and would rather deny.

Like this article? Follow me on Twitter!

There’s no justice, there’s just us

There is a concept in psychology called the “just world hypothesis”, also known as the “just world fallacy”. In essence, this concept refers to our tendency to infer that the world operates as it should – goodness is rewarded and iniquity is punished. The fallacy arises when we allow this thought process to operate in reverse: those who are punished or rewarded must have deserved it, because the world just works that way.

This is a particularly attractive heuristic for a number of reasons. First, it is reassuring to think that we live in a universe where things exist in a state of balance – chaos is unsettling and potentially dangerous. Second, and perhaps most compellingly, it gives us a sense of satisfaction to think that the hard work we put in will be rewarded. It gives us even more satisfaction to think that those who do wrong will get their comeuppance in the end, a phenomenon called schadenfreude.

There is no place in which the just world fallacy is more obvious than in theology. Regardless of which deity we are talking about, there is always a balance between the forces of good and the forces of evil, with the good guys eventually winning out in the end. Christianity goes down this path most egregiously, with its account of a final battle and judgment that is the stuff of great myth; however, all the great religious traditions put great faith in the idea of ultimate balance. The very concept of an afterlife is an implicit reward for a good life or punishment for a life used for ill.

This fallacy pops up outside the realm of religion, however. It is this fallacy that allows us to look at the horrendous disparity between the living conditions of First Nations people, of women, of people living in starvation in southeast Asia and Africa, and rationalize it. Take a look at the comments section of any news report from those regions (particularly about what is currently happening in the Ivory Coast), and you’ll undoubtedly come across someone with a brilliant statement like “well all of those African leaders are corrupt – what do they expect?”

It’s nice to be able to explain away injustice with such a simple wave of the hand. Doing so removes any sense of responsibility you might feel for the way corporations from which we purchase goods exploit and devastate those countries, destabilizing them to a point where corruption becomes de rigueur. It removes any feelings of guilt for the fact that our cities are built on First Nations land, much of which was obtained through dishonest treaty processes. It prevents us from having to feel remorse for propping up a misogynistic system that rewards men for fictitious “superiorities” that we have been told to believe we have. We can then go about our lives without having to constantly examine our every thought and assumption, which is an exhausting process that can prevent anything from actually getting accomplished.

The problem with belief in the just world hypothesis is that it blinds us to the world as it truly is. Consider this figure for a moment:

Anyone who has studied classical mechanics (called ‘physics’ in high school) will immediately recognize this as a free body diagram. The various forces at work on the rectangular object are presented. When we can identify the direction and magnitude of these forces, we can make meaningful predictions about the behaviour of the object. However, if we misjudge one of the forces in either its magnitude or its direction, our predictions – indeed, our very understanding of the object – are fundamentally flawed (e.g., if we forget about friction, we would expect the block to slide down the ramp, when friction may in fact keep it exactly where it is).
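The friction example can be made concrete with a quick back-of-the-envelope sketch (the symbols here are generic, not taken from the figure above): for a block of mass m resting on a ramp inclined at angle θ, the block stays put whenever static friction can cancel the component of gravity along the slope.

```latex
% Component of gravity pulling the block down the slope:
F_{\parallel} = mg\sin\theta
% Maximum static friction available to resist sliding (N is the normal force):
F_{f,\max} = \mu_s N = \mu_s mg\cos\theta
% The block stays put whenever friction can balance gravity:
mg\sin\theta \le \mu_s mg\cos\theta
\quad\Longleftrightarrow\quad
\tan\theta \le \mu_s
```

Leave out the friction term and the model predicts sliding on any incline at all; include it and the same block sits motionless on any ramp shallower than arctan(μ_s) – exactly the point about how one neglected force wrecks the whole prediction.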

Society and the people who make it up can be thought of in much the same way. When we neglect to take into account the forces that are at work on us, our predictions and understanding of the world are meaningfully misconstrued. If we add in other forces that aren’t actually there, then we’re really in trouble. The just world fallacy is just such an addition – it postulates the existence of an outside influence that inherently balances other forces that may result in unjust disparity. We are then relieved of any sense of responsibility to correct injustices.

The ultimate manifestation of this is the bromide “everything happens for a reason”. Starving kids in Ethiopia? Illegal wars? Abuse and deprivation? Exploitation of vulnerable peoples? Don’t worry, everything happens for a reason. Justice will win out in the end, without any need for action from you, safe behind your wall of fallacy.

It’s not exactly difficult to see why this view of the world is fundamentally dangerous. The world is not a fair place. In fact, “fairness” is an essentially human construction – sometimes animals are predated into extinction, sometimes entire ecosystems are destroyed by natural disasters, it’s entirely possible that entire planetary civilizations were wiped out by a supernova in some far-flung corner of the galaxy. These things are only “unfair” to human eyes – as far as the universe is concerned, them’s the breaks. I suppose there is some truth to the statement that “everything happens for a reason” – it’s just that this reason is that we live in a random, uncaring universe.

If we wish to live in a fair world – and I’d like to hope that we do – then it is incumbent upon us to make it that way. The only force for justice that exists is in the hands of human beings, and the only strength behind that force is the level of responsibility we feel to make it so. It is of no use to cluck our tongues and say “well that’s the way it goes” or “things will work out” – making statements like that is the same as saying “I don’t care about the suffering of those people”. If that’s the case (and oftentimes it is), we should at least be honest with ourselves and say it outright.

It is for this reason that I identify as a liberal – I am not content to let the universe sort things out. The universe doesn’t care, and there’s no reason to believe that the unfairness of random chance will result in justice for those that centuries of neglect have left behind. If we care about justice, then it’s up to us to make it happen.


TL/DR: The world is not a fair place, although we like to try and convince ourselves that it is. If we want to live in a fair world, then we have to make it that way.

Psychology beats “bootstraps”

Crommunist is back from vacation, at least physically. I will be returning to full blogging strength by next week. I appreciate your patience with my travel hangover.

Here’s a cool thing:

You don’t have to look far for instances of people lying to themselves. Whether it’s a drug-addled actor or an almost-toppled dictator, some people seem to have an endless capacity for rationalising what they did, no matter how questionable. We might imagine that these people really know that they’re deceiving themselves, and that their words are mere bravado. But Zoe Chance from Harvard Business School thinks otherwise.

Using experiments where people could cheat on a test, Chance has found that cheaters not only deceive themselves, but are largely oblivious to their own lies.

Psychology is a very interesting field. If I weren’t chasing the get-rich-quick world of health services research, I would probably have gone into psychology. One of the basic axioms of psychology, particularly social psychology, is that self-report and self-analysis are particularly terrible methods of gaining insight into human behaviour. People cannot be relied upon to accurately gauge their motivations for engaging in a given activity – not because we are liars, but because we genuinely don’t know.

Our consciousness exists in a constant state of being in the present, but making evaluations of the past and attempting to predict the future. As a result, we search for explanations for things that we’ve done, and use those to chart what we’d do in the future. However, as careful study has indicated, the circumstances in which we find ourselves are far and away a more reliable predictor of how we react to given stimuli than is our own self-assessment. This isn’t merely a liberal culture of victimhood, or some kind of partisan way of blaming the rich for the problems of the poor – it is the logical interpretation of the best available evidence that we have.

Part of the seeming magic of this reality of human consciousness is the fact that when we cheat, we are instantaneously able to explain it away as due to our own skill. Not only can we explain it away, but we instantly believe it too. A more general way of referring to this phenomenon is internal and external attribution – if something good happens it is because of something we did; conversely, bad things that happen are due to misfortune, or a crummy roll of the dice. When seen in others, this kind of attitude is rank hypocrisy. When seen in ourselves, it is due to everyone else misunderstanding us. This is, of course, entirely normal – everyone would like to believe the best about themselves, and our minds will do what they can to preserve that belief.

The researchers in this study explored a specific type of self-deception – the phenomenon of cheating. They were able to show that even when there was monetary incentive to be honest about one’s performance and cheating, people preferred to believe their own lies rather than be honest self-assessors. However, the final result tickled me in ways that I can only describe as indecent:

This final result could not be more important. Cheaters convince themselves that they succeed because of their own skill, and if other people agree, their capacity for conning themselves increases.

There is a pervasive lie in our political discourse that people who enjoy monetary and societal privilege do so because of their own hard work and superior virtue. This type of thinking is typified by the expression “pulled up by her/his bootstraps” – that rich people applied themselves and worked hard to get where they are. The implication is that anyone who isn’t rich, or who has the galling indecency to be poor, is where they are because of their own laziness and nothing more. It does not seem to me to be far-fetched at all that these people are operating under the same misapprehension that plagued the study’s participants – they succeed by means that are not necessarily due to their own hard work, and then back-fill an explanation that casts themselves in the best possible light.

Please do not interpret this as me suggesting that everyone who is rich got there by illegitimate means. If we ignore for a moment anyone who was born into wealth, there are a number of people who worked their asses off to achieve financial success – my own father is a mild example of that (although he is not rich by any reasonable measure). However, there are a number of others who did step on people, or used less-than-admirable means to accumulate their wealth. Yet they are likely to provide the same “up by my bootstraps” narrative that people who genuinely did build their own wealth would, and they’ll believe it too! When surrounded by others who believe the same lie, it becomes a self-sustaining ‘truth’ that only occasionally resembles reality.

The problem with this form of thinking is that it motivates not only our attitudes but our behaviours as well. It becomes trivial to demonize poor people as leeches living off the state, and cut funding for social assistance programs as a result. People who live off social assistance programs often believe this lie too, considering themselves (in the words of John Steinbeck) to be “temporarily embarrassed millionaires” who will be rich soon because of their furious bootstrap tugging. While it is an attractive lie, it is still a lie that underlies most conservative philosophy – which isn’t to say that liberals aren’t susceptible to the same cognitive problems; we just behave in a way that is more consistent with reality, so it doesn’t show as much.


Why do all the black kids sit together?

I attended a conference in Ottawa last week that was related to work. I arrived early and picked a spot at the row of tables completely arbitrarily. Other people filtered in a bit later, and when I looked up from my computer, I realized that all of the black people in the room (well, there were only three so maybe ‘all’ is a bit misleading) were sitting in the same area as me.

It’s a phenomenon that you can observe pretty much anywhere, where members of a minority group tend to flock together. It even spawned the title of a book on racism and psychology.

Okay, and?

My job straddles a line between epidemiology, statistics and economics. While I can’t really claim to be an expert in any one of those fields individually, I can at least speak semi-intelligibly about them. A central concept in economics is the idea of an “incentive” – decisions are made by rational agents to gain something they value. By increasing the value gained by making a particular choice, you make that choice more appealing. For example, if you have the choice between two hamburgers, and I slap a piece of delicious bacon on one (but not the other), you’re more likely to choose the one with extra value.

The converse of an incentive is what is termed a “disincentive” – an additional feature that makes a rational agent less likely to make a choice. Suppose you are a vegan, and you are forced to choose between those same two hamburgers. All of a sudden, the addition of delicious bacon makes that sandwich less appealing.

This is an incredibly simplistic description of the concept, obviously, but hopefully it is clear.

Wait… what?

There is an illusion that we carry around in our minds that we have a “true self” – that we have a personality that is the “real me” version. The fact is that our personality is more strongly determined by the surrounding social environment and other external stimuli than it is by our intentions. As a result, when our environment changes, different aspects of our “self” become more apparent.

There is a classic example of this called “stereotype threat”, in which a person’s performance is (positively or negatively) affected by making a stereotype about them apparent. This is commonly seen when discussing the differential performance of women in science and mathematics. Women are inundated with a prevailing stereotype that “girls are not good at science”. As a result, when women are reminded of their gender before testing, they do worse than if they are not made aware of it.

What does this have to do with anything?

Social pressure exists. The presence of others is a real environmental cue that makes us aware of various aspects of our identity. As a direct result, we will switch over to one of our various “selves”. At this workshop, everyone in the room was similar in most ways – we all have similar careers, similar education, probably similar interests. However, my presence in the room reminded the other two black guys of their “black guy self”, creating an ad hoc group. This happened completely passively – I didn’t walk up to them and say “welcome, fellow black man.” It happened all by itself – all they had to do was notice that there was another black person around.

There’s another level that this operates on though. Imagine the converse – you are a physicist in a room full of actors. You are trying to have a conversation about beauty, but every time you slip into physics-speak, you are met by blank stares. Another physicist joins the conversation – your life immediately becomes easier. Even though you might not ordinarily gravitate toward this particular person, this arbitrary similarity makes her/him highly attractive to you.

It’s the same way for members of any minority group – when they feel different from the rest of the group, they are more likely to gravitate toward those who are similarly different.

So?

This ability to make certain identities more apparent can be used as an incentive to make decisions. If I would like you to donate to my women’s rights charity, I might do well to remind you that you have a sister, or a mother, or that you are a woman yourself. By bringing an aspect of your “self” to the foreground of your mind, I am able to influence you (as a rational agent) into making one decision (donating your money) rather than another (keeping it).

It is for this reason that things like the Atheist Bus Ads and the Out Campaign are useful – not for antagonizing the religious (although that is certainly what the faithful are claiming), but for bringing atheists out into the open. By making nonbelievers aware of their nonbelief, it brings that aspect of their “self” more apparent and helps motivate their behaviour.

Why is that good? Shouldn’t everyone consider themselves equal?

This kind of counterexample is appealing, and is commonly used to paint those who talk about racism as “the real racists”. After all, by pointing out that there are treatment inequalities between different racial groups, aren’t you reinforcing the idea that races are different?

Describing reality is not the same as creating that reality. My usual go-to example is blaming someone for yelling “look out!”, and thereby causing a passerby to get hit by a bus. The bus was there to begin with, and would have hit the person regardless of the warning. The purpose of the warning is to make the passerby aware of the problem so she/he can take steps to avoid or fix it.

Atheists who are reminded of their atheism aren’t suddenly turned into atheists – they were already. Making that reality more apparent is not creating a difference, it’s just highlighting it for the explicit purpose of motivating people to consider their “atheist self”.

The bizarre thing about this whole phenomenon is that we often aren’t aware that these social forces play such a role. It wasn’t until I commented on our seating arrangement that the other two guys smiled and said “oh yeah”. Once aware of it, we can recognize it intuitively, but sometimes it happens without our even knowing.


The problem of morality

There’s a popular recurring question that often comes up in discussions with religious people who wish to challenge atheists: namely, why should we be moral if there is no god? If atheists don’t believe that there is a judge who oversees the world, why bother doing good things? After all, if there are no eternal consequences to our actions, what could possibly be the atheist’s motivation to either do good things or refrain from evil ones?

My issue with this question is that on its own, it seems like an interesting line of discussion – what makes us be moral? If I abandon my beliefs, what would motivate me to continue to do good things? Is there another source of human morality? However, it is rarely asked in this spirit. Usually, it comes in a more snarky form – “if you don’t believe in God, why don’t you go rape and murder babies?”

The usual response is that if the only thing holding you back from raping and murdering babies is your belief in God, you should probably be under psychiatric evaluation. This response, while sufficiently dismissive of a stupid question, is not really an answer to what would be a reasonable criticism if not for the invocation of infant rape. If I, as an atheist, don’t believe that someone is keeping an omniscient record of all my misdeeds, what prevents me from engaging in minor (or major) transgressions when I am reasonably certain I can get away with it? Why, for example, would I turn in a wallet I find on the ground to the police instead of just stealing it? Why not lie to a woman at the bar in order to convince her to sleep with me? Why contribute to charity or volunteer in the community if there is no reward for my good deeds later?

Evolutionary biologists speak of genetic sources for altruism, pointing to analogues in the animal kingdom in which non-human animals show group cohesion and empathy. They lay out a reasonable pathway by which genes for altruism might prevail over genes for selfishness at the expense of others, through a process of natural selection. Philosophers point to Kant, Hume, Rawls, and other secular ethical writers as providing a basis upon which a general non-religious moral code can theoretically be built. Assuming that maximum human happiness is the point of any worthwhile moral code (and yes, this is an assumption, but what else would be better?), then a reasonable and increasingly evidence-based system can be derived from philosophy. Legal authorities point to the evolving code of law as a way of incentivizing good behaviour and punishing bad behaviour. Psychologists note that ethical instruction often leads to ethical behaviour below the level of conscious awareness –  we “naturally” act morally because we’ve been taught to do so.

Suffice it to say, there are a variety of ways to answer the question of why someone would be moral without belief in God. Any one of these on its own would be a sufficient rejoinder, and having them all operate in parallel is certainly reassuring to someone who is particularly interested in the question. However, the question embeds a certain assumption that often goes unquestioned – does belief in God make people behave better? After all, the implication of the poverty of morality in the godless is that there is in fact a moral code inherent in belief.

I’ve had religious instruction, which includes learning a list of things that are right and wrong. This should not be confused with legitimate ethical or moral instruction, which I didn’t receive until late high school. I was, for example, taught that extramarital sex is wrong, as are masturbation and homosexuality – pretty much anything that isn’t face-to-face sex with the lights off with my wife is morally wrong. I was taught that abortion is morally equivalent to murder. I was taught that faithfulness was a virtue. Now I was also taught a lot of things that I still agree with – murder is wrong; charity is good; forgiveness, justice and prudence are high ideals. However, with all of these things, I was told that the reason they were good is because they had been rubber-stamped by Yahweh. I should perhaps note that I was not taught that condoms or homosexuality were wrong in school – those things came from Rome but I felt perfectly justified in ignoring any Papal edicts that were bat-shit insane.

It would be, as I said, many years before I learned the processes by which I could evaluate why I believed in the things I did. I had of course by this time rejected the idea of Biblical truth – the story of Onan says it’s wrong to masturbate but it’s okay to fuck your daughter-in-law as long as you think she’s a prostitute. It was abundantly clear to me that there may be some morality in the Bible, but it is definitely not the source of that morality. I would later learn that much of what we call “Christian Ethics” was actually written by Greek philosophers and later adopted by the church.

Of course all of this is somewhat inconsequential to the central question of whether belief in god is accompanied by better behaviour. Does the idea of an omniscient god really motivate people to refrain from evil actions? Does the promise of eternal reward really motivate people to do good deeds? The answer to the first question seems to be ‘no’, or at least ‘not necessarily’. Anecdotally, we know that religious people are responsible for some of the greatest atrocities throughout history (far from atheists, it is the priests who are the baby rapers). In fact, the more one adheres to religious doctrine, the crazier she/he becomes and the more likely she/he is to commit (what she/he thinks is justified) violent acts. Although there is a clear path from religious belief to violence, these are anecdotes only.

CLS reviewed a Pew Forum survey on religion and found that those US states with the greatest levels of religiosity performed worse on measures of self-restraint and morality toward others than those states with lower levels of belief. There is most certainly a chicken-and-egg problem in this analysis, but it does sufficiently demonstrate that there is no reliable correlation between level of religious belief and morality, at least for the population at large. If people do in fact believe that there is a god watching them, it doesn’t seem to affect their behaviour in a meaningful way. I’m sure if this blog were more popular I’d have trolls inundating me with stories about how Jesus saved their crack-addict cousin’s life, or how Allah saved them from prison, or what-have-you. I am as uninterested in anecdotes that refute my point as I am in those that support it.

How about the second question? Does religious belief make people more charitable in anticipation of a future reward? A study of European countries and their willingness to donate to poorer countries seems to suggest that those with more closely held religious beliefs do in fact donate more money than those who are less religious. The findings are incredibly nebulous and hard to interpret, but it seems from the general findings that while it varies from country to country, religious people are more charitable than the non-religious.

This is in no way disconcerting to me – as I suggested above, there is a relationship between instruction and behaviour. If you are constantly exhorted to give money to the poor, and your social environment is structured such that there is strong normative pressure to do so, it is unsurprising that you will comply. A study I’d like to see is to take people with similar levels of religiosity, show them identical videos of starving children, have one video narrated in a secular fashion and the other in a religious fashion, and see if there is a difference in pledged funds.

At any rate, there are a variety of reasons why an atheist would choose to be moral, not the least of which is the fact that moral actions often benefit the giver as well as the receiver, whether that is in the form of feeling good about yourself, or in the form of making a contribution to society. There is no reason why an atheist would be less moral than a religious person, and despite all their vitriolic assertions to the contrary, it is easier to justify abhorrent cruelty from a religious standpoint than it is from an atheist one. The so-called “problem” of morality is only a problem if you assume that YahwAlladdha is the author of all goodness in the universe, completely contrary to any evidence that can be found either in scripture (aside from all of the passages just asserting it – look at His actions) or in observations of the world. Human beings have to struggle every day to be good, and leaning on the broken crutch of religion doesn’t seem to help any.

TL/DR: Believers accuse atheists of having no basis for morality. This accusation is unfounded – biology, philosophy, law and psychology all provide explanations why people would be good without belief. There does not seem to be a strong relationship between religiosity and morality, except insofar that being instructed to do something that your peers are all doing might motivate you to perform some specific behaviours.
