Academic Blogging


For those of you who have never visited the excellent sociological blog Family Inequality, I suggest that you stop reading right now, and head on over there for an hour or two. Don’t worry about me, I’ll wait.

Welcome back! I hope you enjoyed your stay there; I know that I spend quite a bit of time there (often time I don’t have) digging through the blog’s many, many entries. A while back, the blog’s author, Philip Cohen, wrote a piece called “Should every sociologist blog?” that I shamelessly linked to from my own blog. The thesis of the article was simple: is it in the interests of both sociologists and the public to encourage sociologists to blog about their research? Since I’m already doing that (in my own, rather less articulate way), I of course answered in the affirmative. Cohen, for his part, seems to have a bit more reservation about the whole idea, but I’ll leave it to him to make his own points. He’s better at it than I am.

Cohen’s question got me thinking, though: should blogging at the academic level – in all disciplines – be encouraged as a side project for scholars? As it stands, blogging means precisely nothing on any academic’s CV, and there is therefore no real incentive for any academic to engage in it, outside of maybe their own passion for writing. But aside from filling a line on a CV, could academic blogging serve a purpose?

Hell yes, and here’s why.

Traditionally, academia has had the (fairly exaggerated) reputation of being the guardian of knowledge; a society’s culture, traditions, and memory are gathered, catalogued, and maintained by armies of scholars and researchers: scientists of all stripes, historians, anthropologists, literary theorists, etc. Of course the reputation is bullshit – much of a civilization’s knowledge is contained in the minds of non-academics everywhere – but while academics might not be the sole holders of knowledge, we certainly do hold a lot of it. It’s our job, and many of us are quite good at it.

Part of the disconnect between academia and the broader population rests in the severing of academic knowledge from other, more ‘everyday’ forms of knowing. How often do we hear the expression “oh sure, they’re book-smart, but they’re not really street-smart”, or hear academia described as an ‘ivory tower’, far removed from the affairs of the proles toiling away below? But the distinction between these two forms of knowing is largely artificial, and it’s maintained in large part by the arcane jargon of the various academic disciplines. We love to make up new words and hurl them at each other, and each time we do – and each time we fail to explain to everyone else what those words mean – we further distance ourselves from our fellow citizens. Some academics, like the infamous Jacques Derrida, almost seemed to revel in their obscurity, as though being cryptic were synonymous with being profound. And while the navel-gazing and word-play continued, many of our neighbours and friends lost interest in what we were talking about in the first place. Our debates take place in closed conferences and behind the paywalls of peer-reviewed journals, and our discoveries and carefully crafted ideas only ever interface with the public as short sound-bites – and only after they’ve been distorted and abridged by others. In short, academics are often dreadfully isolated from society, and it’s our own fault.

Blogging allows us to begin to change that. Blogs are a handy medium for allowing researchers and scholars to speak to the public in their own way, with their own idiosyncrasies, without having to go through someone else. Blogs allow the public to ask us questions or to challenge us directly, with some hope that we will respond (well, at least some of us will). In other words, blogging can help to facilitate the return of academics to the mainstream of public discourse. Granted, people like Neil deGrasse Tyson are already doing this, but he and people like him represent only the tiniest fraction of academics who ever bring their research and love of educating to the public. Other venues, like TED talks, can also help to make current research available to others, but they’re not the only way.

I’m not photogenic enough for television, and I don’t know that anyone would want to listen to me talk at them for any length of time, but I’m handy with a keyboard, and I can usually convey the substance of my own research to a broad audience fairly easily. As an academic, I feel that I have a moral obligation to take the knowledge and training I’ve received at public institutions and share it with my fellow citizens. Blogging helps me to do that.

So will blogging, or some other form of (usually) non-paying, non-peer-reviewed public scholarship, be embraced by researchers any time soon? Well, probably not; as universities become increasingly corporatized and professors’ salaries and advancement are increasingly tied to their ability to churn out papers and books, the demands on individual academics’ time grow as well. Many universities now demand that the lion’s share of a professor’s day be dedicated to research and publication; only a minority of their time is devoted to the actual practice of educating. At the graduate level, fewer universities are even offering their students courses on how to be a good teacher. Instead, graduate students – MA and PhD alike – are indoctrinated into the world of ‘Publish or Perish’, and if they happen to pick up some skill at teaching while they’re at it, well, that’s just a happy bonus. Many current professors learned to teach only by emulating a past favourite professor, and few will ever take a formal class on pedagogy. At some universities, a professor’s teaching ability isn’t even a factor when determining tenure, promotion, or salary adjustment. Think about that for a second: your professors – if you’ve attended university – have likely received zero attention or accolades for their teaching ability, because to the university, teaching ability is not important. It is a grim picture, if you are of the opinion that university professors ought to be responsible for educating their fellow citizens.

And so I’ll continue to blog. These days it seems that it’s the only way I can meet what I see as my obligation to educate.

The Need for Utopia


Adam Swift, in his article “Would Perfect Mobility be Perfect?”,* posed the following question to researchers of social inequality and advocates of social justice activism: should social justice be concerned with achieving ‘perfect’ social mobility** – and therefore ‘perfect’ social equality – or should we simply accept that such a goal is fantasy and instead focus on ‘sufficient’ social mobility? In other words, should those of us who are concerned with social justice waste our time on utopian ideals, or should we instead focus on the nitty-gritty of more realistic struggles?

Swift argues that the energy spent pining after the ever just-beyond-the-horizon goals of a social justice utopia blinds activists to the realities ‘on the ground’; that hoping for the day when all people can be free to pursue their dreams prevents activists from engaging in more grounded projects aimed at ameliorating more immediate problems.

This question, at least in academia, isn’t a rhetorical one. Many graduate-level courses (and not a few undergraduate-level ones too), especially in sociology, concentrate on the prosaic ‘nuts and bolts’ of contemporary social science research, and leave the more big-picture style of thinking to advanced theory courses or to the individual student to discover on their own. For the most part, this approach is a sensible one, as social science researchers are expected to contribute to the ever-growing bodies of research that make up the vast bulk of contemporary social science literature. The rapidly expanding constellation of academic journals demands that researchers publish more or less constantly, and for many of them, advancement and salary depend on producing a corpus of published work – to the point that modern academia can sometimes seem like a game of quantity over quality. The demands on researchers are so great that more than a few will succumb to searching for the ‘SPU’, or ‘Smallest Publishable Unit’: researchers will report even the tiniest, most inconsequential findings – regardless of relevance to any overarching research program – if there is even the slightest chance that such a report could find a place in a journal or two. In a very real sense, social science researchers are often guilty of missing the forest for the trees. In such an environment, where the minutiae of our individual sub-sub-specializations can seem to overwhelm us, is it any surprise that many give up on the search for utopia?

The same can sometimes be said of social justice activism; if we need to budget our already-precious time and energy, do we choose to spend it on philosophical ruminations about the type of ideal society we wish to strive for, or do we instead quickly acknowledge that the end-goal is some form of nebulous ‘equality’, and then spend our time countering the rhetoric of bigots of all stripes, either online or in the physical world? Do we sacrifice reflection and reflexivity for the need to see ‘real-world’ results aimed at eliminating real and pressing inequalities that exist all around us? Time, unfortunately, is a zero-sum game; what we spend on one project must necessarily come from time we could have budgeted for something else.

But where Swift gets his analysis wrong is in thinking that the point of utopian thinking is to achieve the imagined end-state. Many of the foundational thinkers of modern sociology, like Marx and Durkheim, imagined utopian societies, and they, like more contemporary thinkers such as John Rawls, offered what they saw as roadmaps to achieving them. But implicit in each and every one of their visions was the idea that every step towards the end-goal necessarily improved the lives of the groups of people they were concerned with helping. Marx, for example, seemed to believe that every step on the road to a stateless society would bring the proletariat closer to freedom; Rawls believed that the very act of deeply and seriously considering the shape of a perfect society from behind the veil of ignorance would grant the thinker insight into contemporary social ills. Utopian visions of the future provide us with a glimpse of the world as it would be once all the solutions provided by the utopian project have been applied. If I want to build the perfect car, or the perfect computer, or write the perfect poem, play, or script, it helps if I first have an idea of what the end-state ought to look like, because knowing that can show me the steps I need to take to get there. Many authors will say that the act of writing often includes taking unexpected turns, but I’ve yet to meet an author who didn’t have at least a sketched-out idea of the ending before they began writing the first chapter.

Utopian thinking gives us an arena within which we can conceive, test, and challenge our normative frameworks; it can reveal to us the specifics of our moral codes, and can give us insight into our own problem-solving strategies. But more than anything else, utopian thinking teaches us the importance of committing to the long haul, and it reminds us that, in a society that values disposability and instant gratification, the diligent pursuit of deeply held convictions has worth. Utopia is not simply about the endgame; the promise of utopia is found in the striving.

* Swift, Adam. “Would Perfect Mobility be Perfect?” European Sociological Review, Vol. 20, Issue 1, 2004, pp. 1–11.

** Social mobility concerns the extent to which a person’s chances and opportunities in life are tied to the circumstances of their birth. If a person is born into a poor family, what are their chances of ‘moving up’ in society? In a world of ‘perfect mobility’, the circumstances of a person’s birth and their social opportunities would be unrelated; everyone would have an equal opportunity to achieve their goals.
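To make ‘perfect mobility’ a little more concrete, here’s a toy sketch (entirely my own illustration, with invented numbers – nothing here comes from Swift’s paper): one common way to summarize mobility is the statistical association between parents’ social position and their children’s. Perfect mobility would mean an association of zero.

```python
# Toy sketch of 'perfect' vs. imperfect mobility. All numbers are
# invented for illustration; this is not a model from Swift's paper.
import random

random.seed(1)

def parent_child_correlation(persistence: float, n: int = 10_000) -> float:
    """Correlation between parents' and children's 'status' scores.

    persistence = 0.0 models perfect mobility (a child's status is
    unrelated to their parent's); persistence = 1.0 models a caste
    system in which status is inherited outright.
    """
    parents = [random.gauss(0, 1) for _ in range(n)]
    noise = (1 - persistence ** 2) ** 0.5
    children = [persistence * p + noise * random.gauss(0, 1) for p in parents]
    # Pearson correlation, computed by hand to avoid extra dependencies.
    mp, mc = sum(parents) / n, sum(children) / n
    cov = sum((p - mp) * (c - mc) for p, c in zip(parents, children)) / n
    sd_p = (sum((p - mp) ** 2 for p in parents) / n) ** 0.5
    sd_c = (sum((c - mc) ** 2 for c in children) / n) ** 0.5
    return cov / (sd_p * sd_c)

print(round(parent_child_correlation(0.0), 2))  # ~0.0: perfect mobility
print(round(parent_child_correlation(0.6), 2))  # ~0.6: status partly inherited
```

In a world of perfect mobility, the printed correlation would hover around zero no matter how large the sample; the question Swift raises is whether driving that number all the way to zero is a goal worth holding onto.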

Mayans, Moral Panic, and the Narrative of the Apocalypse


A couple of years ago, Harold Camping rocketed to infamy when he predicted that the end of the world would come via divine Revelation on May 21st, 2011. When the day came and went without incident, Camping retreated from public view for a time before re-emerging and claiming that, actually, May 21st was merely an ‘invisible judgement’, and that the real end would come five months later, on October 21st. Again, the date came and went, and to the surprise of almost no one, the world did not end. Camping was hardly unique in his predictions: throughout the history of the human species, countless millions of us have held deep, unwavering convictions that the end of the world would come in our lifetimes; clearly every single one of us has been wrong… so far…

A little more than a decade ago, those who believed in the coming end-times set their sights on the dawn of the new millennium, conveniently forgetting that according to the Chinese calendar, the year was 4697, and that if we all measured time by the Jewish calendar, January 1st, 2000 would have been marked as the 23rd of Tevet, 5760. Nevertheless, I remember the anxiety that surrounded Y2K, not only because of the supposed collapse of global banking and communications systems, but because of the heightened millenarian fervour that surrounded that particular date. One way or another, some people believed, the world was going to end, and we had all best get right with God/Allah/Thor/the Universe.

We humans are funny creatures; we design arbitrary systems of timekeeping, and then affix deep symbolic meaning to particular points in those systems. We invent a system of counting based, say, on the fact that we have 10 fingers, and then decide that measurements divisible by ten have some sort of divine meaning. We have evolved, so we are told, brains that include hardwired pattern-recognition systems, yet we apparently lack any sort of evolutionary safeguard to tell us when the patterns we see are illusory. We are strange, strange animals.

These sorts of social phenomena are extremely interesting, from a sociological point of view, for a number of reasons. One of the top reasons for me is that they serve as a handy point of focus for those who study the concept of moral panic. The reason for this is simple: for those who believe – fervently – in a given end-times scenario (Mayan prophecies, Y2K, Revelation, etc.), the end is often coming for a reason. Of course a purposeful annihilation isn’t always part of the story, but let’s consider some of the more common ‘theories’ about what the Mayan ‘prophecies’ mean. The world will end because of environmental collapse (brought about by rampant consumerism, reliance on fossil fuels, etc.), or because of global thermonuclear holocaust (in some version of “The United States versus Nation X”); or maybe the world will end because of some sort of spiritual crisis or event, or because Jesus is angry, or because Shiva has had enough already.

Behind each of these possible examples of how we’re all going to die is some explanatory narrative or another, which tells us why the environment is collapsing, or why Shiva is on the warpath, or why the Mayans foresaw this time and place as the site of Armageddon. In other words, beneath the trappings of almost any millenarian belief you will find a laundry list of things the believer is afraid of or disgusted by. Jesus is coming back to judge the living and the dead? You can bet that he’s going to judge all of the people whose lifestyles you hate. Is the world too sinful/corrupt/consumerist/complacent, in your view? Well, good news! Catastrophe ‘X’ is coming to wash it all away and let you and the other survivors begin anew.

There can often be a touch of fantasizing on the part of the believer too; since it’s their end-times belief, they will most likely count themselves among the survivors (if they’ve not been raptured away, that is), due to some learned or innate property that makes them ‘worthy’ of survival. They can watch all those ‘weaker’ or ‘inferior’ people vanish, and then they can build their perfect society on the ashes of the old.

But none of this will happen – at least not right now. Today will come and go, and the world will remain. Civilization (by which we of course mean our civilization, the only one worth mentioning /sarcasm) will not have been destroyed; Christmas will come and go, then New Year’s after that. The people who believed in the Mayan end-times will continue to believe in a reckoning to come; only the date will change and maybe, if a cooler-looking doomsday comes along, the form. Perhaps, once their disappointment or embarrassment over their end-time of choice failing to materialize abates, they’ll move on to embrace a new apocalypse; maybe they’ll start buying into Nibiru, or begin warning the world about the coming doom from the ‘planet-killer’ asteroid Apophis. After all, Apophis is an Egyptian name, the name of a god – the god of dissolution, non-being, and the void; surely that means something, right?

Like this post? Then check out my blog over at the Skeptical Cubefarm, or follow me on Twitter!

Being Manly

Whenever I need a break from whatever studying or grading I happen to be doing, I often go on little adventures around the internet. I type a random word into the Googles and then click on one of the resultant links at random; I then randomly click on links found on those pages, thus winding my way through blogs, tumblrs, forums, and other strange and wondrous environments on the digital frontier. And just as all roads in the ancient world led to Rome*, all digital roads eventually lead to Reddit.

Reddit is a strange place. It’s like every clichéd bazaar in every orientalist (of the Said variety) movie ever made; anything and everything can be found there, from pics of kitties, to pics of corpses, to pedophile-apologism and ever-so-edgy racist jokes. There are also a few smaller subreddits where interesting questions are asked by genuinely curious people. The other day, someone asked the following: “Is there a problem with me, as a man, liking to do manly things?” The questioner was trying to reconcile what seemed to be genuinely feminist beliefs with his predilection for doing ‘manly’ things. Rather quickly, someone answered his question in a way that I found myself in solid agreement with: the problem doesn’t lie in doing manly things; the problem lies in thinking those activities are ‘manly’ in the first place.

One of the ways that society ensures that ‘acceptable’ gender roles are maintained is by firmly – and often invisibly – policing gendered divisions of labour. Consider farming; the typical image of the modern farmer seen in advertisements for everything from cranberries to eggs to cereal is that of the white, tough frontiersman (and sometimes his quiet and supportive wife and family) who provides for his family by the sweat of his brow and the skill of his hands. Farming = manly.

The same sorts of divisions are present in other, primarily blue-collar fields, such as manufacturing; factory workers are most commonly depicted as male, as are miners, heavy equipment operators, and other tradespeople. There is a reason for this that has little to do with the worn-out ‘b-but women are unsuited for such work, because biology’ argument, and a lot to do with social expectations about labour. Men are supposed to be the outdoorsmen, the builders, the factory workers, the tillers of soil; women are supposed to be data-entry workers, secretaries, or housewives; they are supposed to work in the front office, away from the scary, loud machines. But, as I’m sure many of you already know, there’s nothing biological about any of this. Women have always been capable of doing the same jobs as men; and how do we know this? Because history tells us so. History gives us innumerable examples of women who farmed (and who still do today), women who worked in the mines, and women who built the tanks and pressed the ammunition that won the Second World War for the Allies. We know that women can be warriors, because women have been warriors; and because of all of this, we know that there is nothing intrinsically ‘manly’ about ‘manly’ things.

What there are, however, are myriad social cues that hint – both subtly and blatantly – to men and women what their ‘natural’ roles should be in society. These are normative cues; they exist to convince us of what we ought to do, how we ought to live, and what forms of labour we ought to think are acceptable for our gender. And many of these signals are contradictory: is kitchen work ‘manly’ or is it ‘woman’s work’? And if it is ‘woman’s work’, then why are industrial kitchens almost always male-dominated? If women ‘lack’ the ability to do the work required of coal miners, then how is it that so many women worked in the brutal conditions of Industrial Revolution-era coal pits?

What I am getting at, in a roundabout way, is that gendered divisions of labour are hardly ‘natural’ or derived from biology; there is nothing intrinsically ‘manly’ about the sorts of work most commonly associated with male labour today. Like so many other aspects of social life, the sorts of labour that are considered ‘manly’ become that way because society (through any number of different social institutions) concludes that engaging in certain forms of labour is part of the project of becoming men. If I wished to be a ‘manly’ man – the kind of man often associated with dominant, hegemonic forms of masculinity – I’d be engaged in tough, demanding physical labour, or I’d be involved in tough, ‘practical’ technical trades like engineering, mechanics, etc. As Kris Paap points out in her book “Working Construction”, the sorts of activities and rituals engaged in by men who work in dangerous, traditionally masculine trades often have little to do with improving the quality of their work, and a great deal to do with reinforcing established gender norms. Men are not simply engaging in labour; they are engaging in a project of building men.

As I’m sure many of you have noticed by now, this discussion has pivoted around the notion that gender is a binary, that to be a man is to not be a woman. This is because for vast swaths of society, the gender binary is all that there is. Of course we know that such binaries aren’t really very accurate, and there are literally millions of people in society whose lives reveal the hollowness of gender dimorphism, but as is the case with so many of our social institutions, even socially constructed and maintained fantasies have very real effects. To be ‘manly’ in North American society (and Canadian society more specifically) often entails subordinating other forms of masculine identity (such as gay or PoC masculinities), to say nothing of how such hegemonic forms of masculinity demand the subordination of virtually all expressions of femininity. What’s more, the most commonly understood patterns of manliness are actively hostile to trans* persons, whose very existence strikes at the heart of contemporary hegemonic masculinity; how can one ‘truly’ be a man without the ‘correct’ genitals and, even more terrifying, how can a ‘real man’ know that they are dating a ‘real woman’ and not some kind of ‘imposter’? How can ‘real men’ recognize other ‘real men’ with all of this deviant gender-bending taking place all around them? What’s a ‘manly man’ to do?

My final point is simply this: the project of becoming men is unending, and it is as subject to social pressure as any other social institution. Over time, what is considered ‘manly’ will change; what it changes into, well, that’s up to us.

[QUICK EDIT] I should probably also make the point that in a perfect world, actions, activities, emotions, etc. wouldn’t be gendered at all; in a perfect world, concepts like ‘manly’, ‘feminine’, etc. would be considered anachronisms best left behind. Despite my generally optimistic worldview, however, I remain rather cynical about the likelihood of us ever reaching that particular goal – but that doesn’t mean we shouldn’t strive.

* Yes, I know, they really didn’t, but I didn’t make up the expression.

The New Fascism

This past Sunday, I went with my partner to our city’s Remembrance Day ceremony, as I do every year. My brother has served two tours in Afghanistan with the Canadian Forces, and a tour in Bosnia after the civil war there, and many other members of my family have also served. I’m not one for overt expressions of nationalism, and I have numerous issues with the overtly Christian themes on display there, but for me, Remembrance Day is about my brother and the sacrifices he made. I think about his friends – some of whom did not make it home alive – and I remember that whether I agree with the mission in Afghanistan or not, the government that represents me sent them there. If nothing else, it reminds me of the absolute necessity of striving to elect people who understand the concept of ‘Just War’, and who recognize the heavy cost of sending young people to fight and die in our name. When we decide that the past is no longer something to remember and learn from, the old, crappy ideas that caused so much damage begin to seem a lot less crappy, and they start to be rediscovered by a whole new generation.

Writing from Privilege

I haven’t submitted any posts lately and for that I’m sorry; school has been kicking me around and my health hasn’t been stellar. At any rate, here’s a new submission. Enjoy!

PS: I’d also like to say how happy I am that Jamie is now a contributor to the blog. Looking forward to reading more of Jamie’s posts!

Writing from a position of privilege is easy. All I have to do is put my fingers to a keyboard; after all, along almost any axis you wish to examine me by, I’m about as privileged as you can get. I’m white, able-bodied with an invisible impairment that is generally manageable (I have clinical depression and a mild anxiety disorder), cis-gendered, heterosexual, and in an age cohort that is often the most valorised in society (late twenties to mid-thirties). Granted, my income isn’t high at the moment, but my expected earnings – thanks to a top-tier education – sit comfortably in the higher income tax brackets.

Writing from a position of privilege while trying to critique and challenge that privilege is quite a bit more difficult. The most powerful force that keeps privilege in place is its ubiquity; it’s everywhere – in all aspects of my life – and when something that totalizing has been experienced for a lifetime, recognizing it becomes all the more challenging.

In writing for this blog, I have to engage in a near-constant form of self-criticism in order to identify and counter any examples of privileged language or thought processes that might inadvertently cause offense or harm. This is a good thing. It is a very good thing. It is the sort of self-analysis that ought to be the number one tool in any skeptic’s or critical thinker’s tool-kit. Pointing out logical fallacies or contradictions in the beliefs of others is a fairly simple thing to do, and many of us do it all the time. But to turn that critical eye inward is monumentally challenging – especially when doing so turns up things that you might wish had stayed hidden. Uncovering a false or unjustified belief carries with it the demand that it be abandoned or modified such that the resultant, edited belief can be justified. This rejection or modification affects any number of other, related or contingent beliefs, and the problem becomes magnified the more foundational the belief you are challenging actually is. Modifying one’s belief that yellow Starbursts are not only a great flavour of Starburst but the best flavour of Starburst is nowhere near as difficult as changing, say, from believing in God to not believing in God. Most people who believe in God do so at a foundational level – that belief is the font from which a myriad of moral, epistemological, and even scientific or sociological beliefs spring, and to remove that font is to collapse any other beliefs that rely on it for support. That’s a weighty proposition.

And that’s how privilege works; those of us who have it often seek to maintain it because by doing so, we are effectively maintaining a web of beliefs that are reliant upon it. Our beliefs in large part define us, and help us to build a society that we want to live in – a society that reflects back to us our beliefs writ large. When activists, supporters, and members of marginalized and vulnerable communities attempt to change society, they (we) are in effect attempting to distort the social mirror that reflects the beliefs of the privileged. And just like the mirrors in funhouses at crappy fairs the world over, the distorted mirrors reflect back images of ourselves we might not like to see; in the place of a ‘perfect’ and sanitized image of ourselves (or at least an idealized notion of ourselves), we might instead see in the reflections all the ways that we are assisting in the marginalization and oppression of others.

For my part, I see my efforts to confront and check my own privilege as a work in progress; I try to scrutinize what I say (and the beliefs behind the words that spurred me to speak in the first place) and how my actions – or lack thereof – might serve to either help or hinder people who weren’t dealt the hand I was at birth. And I screw up. All the time. I sometimes lapse into speaking for others when I should be trying to provide the space to let them speak for themselves; I sometimes slip and use ableist slurs without thinking. I don’t always get it right, but I do try.

And therein lies the greatest challenge of making people aware of their privilege: it has to be voluntary, and many, many people simply don’t want to try. For those who do, the process is one that may take a lifetime. Progress on the social justice front seems to me to be an effort that is measured in generations, rather than years; it may have been almost 50 years since Martin Luther King’s impassioned “I have a dream” speech, but that is a mere two or three (or three or four, depending on how you measure) generations. We will not know the extent of our successes or failures until the generation of children we have raised begin to raise their own, and the society they build reflects the substance of their beliefs. The passing of laws that support and protect vulnerable populations is important, but laws are not the end of a struggle; the fact that gay marriage is legal in Canada hasn’t ended homophobia, nor has the election of Barack Obama ended racism in America. But they are both steps in the right direction. An even more challenging step might be influencing society’s privileged to take a closer look at their social status and maybe start to question it a little.

A Question of Authenticity

So the other day, I wrote a post that was supposed to be about the strange dance away from logic that seems to be common on the fringes of the raw, organic food lifestyle (ROFL). What I ended up with, however, was an extended detour into the social fascination with the concept of ‘authenticity’. I’m sure you know what I’m talking about here: the sort of ‘authenticity’ that leads people to buy clothing made by hand in the Peruvian mountains by semi-nomadic alpaca herders, because by doing so they are being more ‘authentic’ and less ‘fake’ or ‘consumerist’.

I have no problem with Peru, or mountains, or nomads or alpacas, and I have nothing but good things to say about herders of all kinds. My fascination was with the people who buy their products – or perhaps more specifically, with those particular beliefs that spur people on to seek out ever-more ‘alternative’ lifestyles. There’s a scene in the Ben Stiller movie “Zoolander” where Hansel, a model played by Owen Wilson, is describing a particularly vivid memory he has of his wild and adventurous life. He describes mountain climbing in some faraway nation, and recounts the extreme danger that he was in, before revealing that he was in fact remembering a particularly vivid hallucination brought on by the heavy use of peyote. He was confused because, we are told, Hansel’s life is so wild and adventurous – so real – that the fictional mountain climbing adventure could have been something he had actually done. Hansel has put his money and status to good use by embarking on a campaign of living life authentically. Oh, and while he was remembering this experience, he was baking artisan bread in a wood-fired stone oven located in his industrial-loft apartment. Wilson’s character was the embodiment of nearly every trope and cliché associated with the idea of living ‘authentically’.

Settling in, Leftist Identity Politics, and Ideological Purity

I’ve been absent from the blogging world (or blogosphere, or blogodrome) for a while now, due almost entirely to having spent the better part of two weeks moving myself and my partner to a new city to begin the penultimate phase of my education. The move was rather stressful; I am not, by nature, a nomadic person. I enjoy stability and order, and moving – and everything it entails – disturbs that order.

In addition to attending a new school, I have also begun a new job as a TA for a first-year sociology course. I’ve also been assigned to a new cohort of graduate students – most of whom are easily as liberal as I am – and that always involves a period of getting to know the new folks, and letting them get to know me. Part of that ‘getting to know’ process inevitably involves learning about each other’s political and social positions, and during the course of this process I’ve discovered something about myself that I apparently didn’t know: I’m not ‘really’ a leftist.

Let me rephrase that: I’m not really a leftist according to some of the people I’ve met. For me, being situated on the left-hand side of the political spectrum has come about as a result of my political and economic beliefs; I align myself rather closely with the philosophies behind social democracy, and I generally have a ‘live and let live’ attitude about other people and their beliefs. But apparently that isn’t enough to establish my leftist bona fides – at least in the eyes of some.

This isn’t unique to where I am currently; pretty much anywhere I go in social justice circles – conferences, workshops, etc. – I find people who feel that since I don’t support their particular pet passion, I ought to be disqualified from the group of people who generally inhabit the orange part of the political spectrum. Basically, what I’m getting at is that just as the right wing has its ‘purity tests’ to determine a person’s level of conservatism or republicanism, so too does the ideological left.

I point this out only because it’s become something of a favourite pastime for many of us who call ourselves progressive to mock or ridicule movements like the Tea Party, which, while claiming to be all about fiscal libertarianism, often employ litmus tests as a way of ensuring the ‘correct’ level of ideological purity. But those same litmus tests are lurking everywhere, even among those of us on the progressive end of the spectrum. I know that for many of you who are reading this blog, this is something of a broadcast from planet obvious, but I am often surprised at how many people never really stop to think about it.

But what kinds of things do some people feel need to be attached to a leftist orientation? Well, the most obvious ones that come to mind are the distrust of the medical establishment and of ‘Big Pharma’ more generally. There is also the environmentalist-born assertion that GMOs are bad – even if there’s not a lot of research to indicate that this is so (or a handy definition of what ‘bad’ means to them) – or the insistence that farming organically and buying locally are the ‘appropriate’ ways for a person to ‘live ethically’. Those concepts, of course, are rife with their own problems.

So what if we don’t agree with these positions? What if we’re not bothered by Wi-Fi? What if we happen to think that vaccinations ought to be mandatory – and that they’re pretty good things to get, actually? What if we happen to think that chiropractors, acupuncturists, naturopaths, and homeopaths are bloody fools at best and dangerous snake-oil peddlers at worst? What if I enjoy eating meat, or am an advocate of increased reliance on nuclear power as opposed to fossil fuels? Is it truly the case that unless I embrace that other, additional suite of social, moral, or political views, I cannot rightly call myself a leftist?

Of course not. Being a leftist doesn’t mean that I must forego the use of showers, toiletry supplies, and shoes (although if you want to, well, that’s cool too; just stay downwind of me, please); it means being able to think both deeply and empathetically about the society we live in. It means thinking about how to order society beyond simply asking how it might be ordered to best serve me. I don’t need to be a vegetarian or an anti-science conspiracist or a level five laser-lotus or whatever in order to be a part of the social/political left; I just have to think that the institutions of society can be made to work for the betterment of all, not just for the betterment of me.

(EDIT 22/09/12 9:51PST) Changed the direction of the wind.

The Value of an Education

I began my university career a decade ago. I had grown tired of working for terrible wages in a hot and smelly kitchen, and I felt that the kind of challenge I was looking for in life would be found on some campus somewhere. Initially I had no idea what I wanted from a post-secondary education – I didn’t even know what sort of education I wanted. And so I drifted for the first two years between Biology, Astronomy, History, Literature, Philosophy, and Political Science. As it turned out, my fascination with biology was eclipsed by my passion for politics and philosophy, and I graduated with a degree in Political Science. I went on to do a year of undergraduate-level sociology, where I discovered gender studies – and in particular the study of masculinities – and I finally went on to do a Master’s degree that allowed me to combine both political science and gender studies. By this time I had spent more time and money on education than I had ever thought possible – especially considering that the plan I had formulated during my last year of high school would have seen me complete a two-year diploma in computer science. I was a youth of widely divergent interests.

I was also a conservative. I’m not talking about some high-minded, philosophical conservatism fuelled by an appreciation for tradition and a belief in prudent fiscal planning; I was a conservative of another sort. I believed poor people to be weak-willed failures deserving of their sad lots in life. I believed that expeditionary warfare ought to be pursued not only for the national interest, but because sometimes other nations simply deserved to be destroyed utterly – especially ones that I considered to be ‘barbaric’. I believed that women were inherently less intelligent than men, and that oftentimes they could not be trusted to make the ‘right’ decisions for themselves. I believed that because I had a black friend, I could never be racist. I distrusted and disliked ‘Indians’ and felt that the best thing we (as smart, advanced, white folk) could do for them would be to dismantle the reservation system and force them to assimilate into our obviously superior culture. Capitalism was good, socialism was bad. Welfare was bad. Criminals should be put to death. I was the very model of a knee-jerk, authoritarian, proto-fascist conservative. I was so ridiculous in my outlook that I was approaching self-parody with a speed that bordered on the superluminal. I knew that I was right.

By the time I had completed my undergraduate degree, I was a socialist. I had embraced many of the key tenets of feminism. I considered Foucault to be a sort of hero of mine – though I would later come to be critical of parts of his work. While I retained a respect for the purpose and history of the armed forces, I also recognized that they – like any weapon – should only be deployed in the direst of situations. I volunteered time with charities of many types, and I began to see the less fortunate members of our society as worthy of dignity, respect, and assistance. I had lost what vestiges of faith I had carried with me from my childhood, and I had embraced the analytical tools that my philosophical education had furnished me with. It wouldn’t be until I had almost completed my Master’s degree that I began to see myself as any sort of ally to social justice movements – in large part because I was still uncomfortable with the idea of standing out in a crowd. In a few rather short years, I had changed not only my political views, but my entire epistemology. I knew only that I knew very little.

A change, I think, for the better.

So what’s my point in all of this? It is only that education has value. My education in the arts and humanities changed the way I saw the world and interacted with it. It fundamentally deconstructed my old character and built in its place a person more able to empathize with – and more willing to assist – those members of my society who had been forgotten or discarded. My education stripped away uncountable layers of assumptions, false beliefs, and faulty heuristics that had coloured my perceptions of reality, and replaced them with a set of powerful tools that could be used to gain a far more accurate understanding of the world around me, and of the society I lived in.

Results may vary. Not everyone who pursues an education will end up a progressive or a leftist, and that’s not a bad thing at all. I’m not one of those people who argue that conservatism is evil or wrong, or that if we were all smart and rational and wise, we’d be progressives; a healthy society is one that thrives on healthy debate between as many viewpoints as possible – at least in my view. I tend to think that conservatism is a necessary component of a healthy body politic; conservatives remind the rest of us that sometimes traditions are important, that prudence can often be a virtue, and that progress might sometimes benefit from a little bit of caution. These sorts of things are classical conservative values, and their importance doesn’t change just because those who call themselves conservatives today are at best only casually familiar with the values of their ideological forebears. I think the contemporary conservative movement has strayed from its roots a fair bit – to the point where it might more accurately be called the ‘regressive movement’ or perhaps ‘the recidivist movement’. But I digress. Again.

Universities are not ‘indoctrination centres’ or ‘liberal brainwashing facilities’; they are crucibles that can, if we are willing to let them, burn away our preconceptions. Degrees in the arts and humanities are not wasted; they allow us to perceive the world in novel and challenging ways. Will my degree set me up with a career in the same way that an engineering degree can? Probably not, but that’s not why I spent all those years earning it. My education taught me how to think and how, I believe, to be a better person. That has value. That has worth.

Skepticism and Social Justice

One of the arguments that I often hear from skeptics and the skeptical community is that while skepticism is a powerful tool for analyzing truth claims or the efficacy of medical modalities, it is poorly suited to examining issues of social policy or politics. Examining claims about the efficacy of homeopathy is relatively easy (it doesn’t work) but, the argument goes, it is far more difficult for skeptics to draw any conclusions about specific policy goals or initiatives. How ought skeptics to examine abortion? What about capital punishment? Should skeptics have anything to say about political platforms?

This argument isn’t unique to skepticism either. As Jen McCreight and others have pointed out, the same sorts of assertions are often made by so-called ‘dictionary atheists’ who argue that atheism is only ever about not believing in god, and that topics like feminism or social justice lie far outside atheism’s bailiwick. Why should atheists or skeptics concern themselves with the issues of feminism; why should skeptics look at economics or jurisprudence? Why shouldn’t skeptics stay holed up in the ‘hard’ sciences and leave issues of social policy and social justice to the sociologists, anthropologists, and other social scientists?

Because those issues matter; that’s why those of us who call ourselves skeptics ought to become involved. Well, that and the fact that many of us in the skeptical movement are anthropologists, sociologists, economists, or political scientists. Is homeopathy harmful? Obviously; it encourages people to abandon tested and proven medical treatments in favour of eating candy. But you know what else is harmful? How about lopsided justice systems that impose harsher penalties on some segments of the population based on their skin colour or heritage? How about political campaigns that aim to strip women of the right to seek abortions – even in cases of rape or incest? How about relying on economic models that benefit the ultra-wealthy at the expense of the poorest members of society – or models that reject empirical research or statistical modelling?

Sure, my questions are built on fundamental social biases – I believe that women ought to be able to control their own bodies and their own destinies; I believe that even the poorest members of our society deserve to be treated fairly and ought to be able to obtain help from those of us with the means to provide it (yes, I like the idea of taxation to pay for social safety nets). I believe that people ought to be protected from predatory business practices that prey on the uninformed and ignorant. We all have these sorts of underlying biases, and we should be debating them too – that’s sort of the whole point of skepticism, isn’t it?

Skeptical inquiry alone may not be able to tell us if, for example, capital punishment for violent crimes is a good thing (‘good’ in the moral sense – should we put people to death for killing other people?), but it can allow us to examine the claim that it is a successful deterrent (it isn’t). Similarly, skeptical inquiry can help us determine the extent to which comprehensive sex education has helped to lower teen pregnancy rates in Canada (it has). Once we know the answers to these questions, we have gone a long way towards answering the follow-up question: what steps should we take now? If the stated aim of the ‘War on Drugs’ was to reduce the consumption of illegal substances and therefore dry up the market for them, then we can demonstrate, empirically, that it was a failure. Is it reasonable to continue funding a failed policy? No? Then why is the United States still doing it? Skeptical inquiry gives us the tools required to tug at the threads of social policy, and by doing so, we can follow those threads all through the social fabric in order to see what other policies and initiatives they are bound to.

I think that at least part of the reason there are relatively few prominent social-scientist skeptics is the worn-out cliché that the social sciences are ‘subjective’, or that “there really isn’t a right or wrong way of looking at ‘X’”. A large part of the reason for the existence of this cliché probably has something to do with how the social sciences and the traditional sciences have interacted over the last few decades. There was a lot of fallout from the ‘science wars’ of the 1990s; the ‘Sokal Affair’ and similar conflicts certainly helped to foster the growing rift between the sciences and the social sciences. In some universities, it isn’t uncommon for the social sciences and the traditional sciences not to talk to each other at all. Some of this is certainly related to funding; money is a finite resource, and there are often strong disagreements over where it should be spent. But there is also the problem of language; in a very real sense, these disciplines all speak different languages, and sometimes the same words mean different things to different people. It takes time and effort for each party to understand the other, and in academia, time is often in very short supply.

Whatever the reasons, the fact remains that, for the most part, the luminaries of the skeptical movement are firmly ensconced in the sciences, while other fields of inquiry go overlooked. And they shouldn’t be. Homeopathy is harmful, sure. But so was the repeal of Glass-Steagall. So is institutionalized racism, or discrimination against members of the LGBT communities. So is the perpetuation of toxic patterns of masculinity.

The social sciences are every bit as important as the traditional sciences for skeptical inquiry, and social justice is just as important a topic as medical claims or creationism. As skeptics, we need to realize this and begin turning our attention to these often-overlooked areas.* Just as the atheist movement has begun to talk about issues of social justice, the skeptical movement needs to loudly begin doing the same – skepticism+**, if you will. Why not? We’re a big movement now, and we should be able to tackle many different topics at once. Surely we can walk and chew bubble-gum at the same time?

PS: I feel that it’s necessary to point out that I could be entirely wrong about there being a lack of focus on social justice in the skeptical movement. I just haven’t really seen it outside of some blogs and a podcast or two. I’d love to see it at TAM; I’d love to see it at NECSS. I’d love to hear more about it on the Skeptic’s Guide to the Universe.

* This is not to minimize or ignore the fantastic work done by people like the Skepchicks. They’ve been leading the charge against sexism and harassment within the movement. I’m saying that they need backup and support; they cannot do it alone, and they shouldn’t have to.

** While I understand that many people within atheist communities are also skeptics, the two terms are not synonymous, nor are the communities the same. I’m sure that most of you reading this already know this, but I’m pedantic and I feel the need to point it out anyway.