Movie Friday: STORM (Tim Minchin)

I’m not going to lie, this might be one of the greatest things ever written.

I’ve had too many first dates with the Storms of the world, especially since moving to Vancouver where critical thinking skills seem to be out of fashion. It’s perfectly reasonable to say that we don’t know how to explain everything in the world. That’s just the facts. However, that doesn’t mean we get to make up fairy stories to fill in the blanks, and until you can provide me with some facts or some logic to demonstrate the veracity of your claims, you’re just telling me made-up fairy stories. You don’t deserve any more respect than Mother Goose for believing in ghosts, spirits, reincarnation, astrology, or any of the other millions of fairy stories people try to pass off as fact. If you’ve got some evidence to show they’re right, I’ll happily look at it. If the evidence conflicts with my beliefs, I’ll happily change my mind. But don’t expect me to fake some kind of respect for your belief in magic wands or Feng Shui unless you’ve got real evidence to show me.

Also, how awesome is it that the whole thing is in rhyme?

Do you believe in flying teapots?

I grow very tired of hearing people tell me that atheism is the same as religion. “I believe there is a God, and you believe there isn’t. We both BELIEVE something – it’s the same!” This is the problem when one makes assertions based on “common sense” (a.k.a. not thinking before you speak), and is somewhat reminiscent of the “science is religion” fallacy that I’ve talked about previously. There is a difference, and not simply a semantic one, between the statements “I believe there is no God” and “I don’t believe there is a God”. The first is indeed a statement of belief – a belief in non-Godness. The second is a statement of lack of belief – a failure to believe in the existence of God.

To illustrate this difference, I am going to resurrect the oft-disturbed ghost of Bertrand Russell and his celestial teapot. For those of you who aren’t familiar with this thought experiment, Russell invites you to imagine that there is a teapot floating out in space, somewhere between the Earth and Mars, in an elliptical orbit around the sun. He further states that, even with the most powerful telescopes, it is impossible to detect the teapot – it is going too fast, there’s no light shining on it, it’s too small; the important thing is that it is impossible to detect by any means. But since you cannot detect it, you cannot prove that it isn’t there. He then invites you to consider the proposition that since you can’t prove it’s not there, you are required to believe and behave as though it is.

Of course reasonable people will dismiss this teapot out of hand. The idea that there could somehow be a teapot – a manufactured item of human origin – floating out in space is patently ridiculous. How would it have gotten there? “No, no, no” you are happy to say “even though we can’t prove there is no teapot, I’m perfectly willing to accept the position that, in the absence of any confirming evidence of a teapot, in all likelihood it isn’t there.”

“But no!” says Russell “the teapot is THERE! How else do you explain why the lawn is wet in the morning? It’s because water from the teapot pours over the atmosphere and gets on the lawn!”

“Bushwah!” you retort. “We know where dew comes from – condensation of water vapour when the air cools overnight. And besides, any water that would come from space would evaporate instantly once it hit the outer atmosphere, and would never reach the ground.”

“Folly!” Russell comes back. “Why else would tea be so popular all over the world, if not for the fact that there is a subconscious recognition in all cultures of the existence of a teapot out there somewhere.”

“Fiddlesticks and balderdash!” say you. “We also know why tea is so popular – part of it has to do with the expansion of an empire that drank tea for historical, agricultural and climate reasons. Part of it has to do with the fact that tea is tasty. Besides, not every culture in the world drinks tea!”

But Russell keeps coming at you with facile explanations of real-life phenomena, invoking the intervention of an invisible teapot. He goes further and describes the colour and shape of the teapot (it’s white with blue flowers, medium-sized, and has a small chip on the handle), despite the fact that it is, by its nature, impossible to see. He even goes so far as to say the teapot demands that we wear used tea bags on our ears, and get together once every week to sing “I’m a little teapot, short and stout”, lest we tempt its ceramic wrath.

Eventually you get so tired of this clown that you slug him in the face and walk away – not a very teapot-like thing to do, says Russell.

I have stretched the metaphor beyond its original context, and made obvious allegorical reference to belief in God. But this is precisely what any faith requires you to do. In the mildest form, it demands that you believe completely in the existence of something for which there is absolutely no evidence, and never can be. In its next form, it twists observable phenomena to fit a blind belief, despite far more reasonable alternative explanations for which there are mountains of evidence. Eventually, it makes wild assertions about this evidenceless entity’s characteristics, and what it wants from humans (but not other animals). Any attempt to introduce reason into the conversation will inevitably be met with “well you can’t prove it’s wrong, so therefore it must be right.”

I want to pause for a second here and talk about that statement. “You can’t prove it’s wrong” is a ludicrous standard to hold anything to. It’s literally impossible (not just really really hard, but actually impossible) to prove that something is or isn’t there. I can’t prove to you that I exist, that you’re reading these words, that your computer is in front of you. If you’re creative enough, you can explain away pretty much everything (except your own existence). All we can do is look at the evidence and test alternative explanations. You could be hallucinating this whole thing, but you haven’t had any psychotropic drugs and don’t have a history of vivid hallucinations (plus, how lame a hallucination is this?). It’s far more reasonable to conclude, until there is evidence to the contrary, that the world is as it seems. Once there is evidence to the contrary, then you evaluate it and change your ideas accordingly. The part that really grinds my gears is the “… so therefore” part. Just because I can’t prove you wrong, that doesn’t mean you’re right. The fact that I can’t prove that the food in the fridge doesn’t disappear when the door is closed is not proof that gremlins eat it and poop it out again exactly as it was. It’s not proof of anything. You don’t just get to make shit up because there’s no way to prove you’re wrong.

But it turns out that Russell is very persuasive, and people start to believe in the celestial teapot. When you say “well I don’t believe in a magical flying teapot that nobody can see”, they begin to call you an “a-pot-ist” (or if they’re clever, an a-pot-ate). They tell you that you secretly do believe in the pot, you are just bitter and angry at it, or your life has been bad and you resent the teapot, or that your belief in the absence of the teapot is just as facile as their belief in it. None of those things are the case – you are simply being reasonable and saying that in the absence of any evidence whatsoever, you don’t think there’s a pot there. And you’re right to do so. You might even go so far as to say “there is no evidence that there is a pot, and since it’s highly unlikely that a pot could get into space on its own, there probably isn’t one there.”

Your friend calls himself teapot-agnostic. “We can’t know if it’s there or not,” he says “so I’m not taking a stand on either side.” You then ask him directly if he believes in the existence of the teapot. He says “I don’t know if it’s there or not, it’s impossible to know.” But you press him – does he think there might be a dragon in his back yard? “Well no,” he says “dragons aren’t real.” But they might be, you remind him. There’s no way to know for sure. “Fine,” he says “there might be a dragon in my back yard that I just can’t see.” Does he believe in anything, you ask? Does he, for example, believe that the money in his pocket is real? “It’s impossible to know,” he says “and I refuse to take a position.” Fine, you say. Give me all the money in your wallet, since you don’t know whether it exists or not. See how far his ‘not taking a stand on belief’ goes. Scratch the surface of a systematic agnostic, and you’ll find someone who is actually a non-believer but just isn’t ready to say so. I would invite so-called ‘agnostics’ everywhere to (WARNING: Pun ahead) shit or get off the teapot.

This is the case for skeptical atheism. It is the result of following a simple philosophy: if there is no evidence for something, then it might as well not exist. If evidence appears later, then it probably does exist, and that’s great. But if there’s something out there that has no effect on the observable universe, whose effects are completely invisible, and without the existence of whom absolutely nothing would change, it’s perfectly fine to say it doesn’t exist, and spend your time on the stuff that you can see. You don’t have to believe that the teapot isn’t there, you just don’t see any evidence that it is.

I am not my ideas

A few years ago I met a girl at a party, and we hit it off pretty well. She was an anthropology student, and I had just finished reading Bill Bryson’s A Short History of Nearly Everything, so we were talking about how history intersects anthropology. Basically it was a night full of nerd foreplay. For a number of reasons that aren’t germane to the story, we ended up not dating, but stayed in contact anyway. One night we were talking and the discussion somehow turned to a debate on moral relativism vs. absolutism with me taking the ‘absolute’ side. The conversation went something like this (shortened for clarity):

Her: But what right do we have to go into another culture and tell them their beliefs are wrong?

Me: We interact and trade with those cultures. We are invested both in their economies and their populations, and so what affects them affects us. We have every right to express objections to things like government-sanctioned rape and female genital mutilation.

Her: But those are by our standards of right and wrong. They don’t necessarily see it as wrong.

Me: Okay, give me a circumstance in which rape could possibly be justifiable. Where any reasonable moral code could permit something like that to happen.

Her: See, this is why things never worked out between us!

I was stunned (mostly because that had nothing to do with the reason she had told me it wouldn’t work out). What I thought was a lively debate about two opposing positions on morality was, in her mind, a bitter personal fight. I had mistaken her spirited defense of her position for a deep level of interest in the topic. However, what she saw was me saying “your beliefs are stupid, and you are stupid by extension.” This couldn’t be further from the truth; at the time I thought she was actually reasonably smart (although that impression changed as I got to know her better). It was then that I realized something that is key to being able to have actual discussion and debate about issues.

We need to stop thinking of our ideas as reflections of ourselves.

Here’s what I mean by that. Cara (the girl) failed to separate criticism of her position from criticism of her as a person. As a result, when I argued against her position, I wasn’t so much saying that moral relativism was wrong as saying that she was wrong – not just wrong about the topic, but a generally wrong person. The more I argued, and the more counter-examples I provided, the more insulted she became. Because I was unaware of this, I just kept going on in my debate (if I was interested in sparing her feelings, I would have taken the cop-out position that we’d just have to “agree to disagree”).

Now there were a whole host of other reasons I’m sure that Cara wasn’t exactly thrilled with me. We were talking online, and tone is very difficult to convey. I also tend to be a bit of an asshole, which is great when picking up girls at the bar, but not exactly endearing when having a debate. But the thing I took away was the idea that with many people it’s not possible to separate their ideas from their sense of self-worth. The Catch-22 of this whole thing is that people are often very reluctant to be rigorously critical of their self-concept, which then retards their ability to make good decisions about their closely-held ideals.

As an illustrative example, suppose my self-worth is tied into the idea that I am a popular person. It’s important for me to be well-liked by others and to fit in with the group. When I have to make a decision, the first thing I think of is whether or not those around me will approve. This isn’t necessarily a negative self-concept – fitting in with others is important to societal cohesion (imagine a world where nobody cared at all about others… deodorant sales would plummet). However, if I am not self-critical about this trait, I’m likely to be highly susceptible to peer pressure, both positive and negative. If I don’t say “well it’s good to be well-liked, but I’ve got to watch out for myself as well”, I’m probably going to go along with the crowd – possibly to a Justin Bieber concert. Furthermore, when someone says to me “hey man, Justin Bieber sucks. Why the hell are you wasting your money?”, I’m likely not going to be too receptive to that argument. Even though the idea of Justin Bieber might be completely neutral in my life, an attack on my decision to go to the show is an attack on my entire self-concept.

If, however, I am able to separate concert attendance from my most important sense of self-worth, I will be more able to dispassionately assess the idea. “Maybe I don’t have to pay $95 to see a mini Jonas Brother clone. My friends are important, but surely they won’t disown me for this one thing.” It also allows me to be more open to the idea that maybe doing what is important to others is important, but so is not going broke. Maybe sometimes I need to balance the wishes of others against my own needs.

This is a hyperbolic example, clearly. Self-worth is a much more complex psychological phenomenon than I’m able to illustrate here, but I hope the point remains clear. Ideas are good and useful things to have. However, some ideas are bad ideas – some ideas make you vulnerable to things that can hurt you. In Cara’s case, her idea that all morality is relative made her unwilling to accept the fact that things can be wrong, forcing her to argue on behalf of rapists. Nobody likes to lose an argument, obviously. The difference is how you react to being wrong. If you’re able to shrug it off and say “you know, this might be a better way of looking at it, and I have a lot to think about”, you’re more likely to walk away from losing an argument having learned something. If your reaction to losing is “if I lose this argument, it means that I am a bad or unfit person”, you’ll twist and turn and set up any number of cognitive dissonances to block yourself from even the possibility of learning.

I’m often wrong. I try to be wrong at least 3 times before I brush my teeth in the morning. It’s important to personal development to make mistakes and to learn from them. One of the most useful things about putting my ideas out there for everyone to see is that at various times, people disagree with me. Most of the people who read these are personal friends. I am lucky to have some very smart friends. If a smart person disagrees with me, my eyes light up. If they disagree and I can convince them that my ideas are correct, then aside from the ego boost of winning an argument, I’ve got some supportive evidence that I might be on to something good. If they convince me that I’m wrong (i.e. I lose an argument), then I’ve learned something useful and it gives me an opportunity to reflect, refine my good ideas, and throw out the bad ones. It’s win-win for me: no matter the outcome of an argument, I become a better person.

There’s a scene in Kevin Smith’s movie Dogma where Rufus (played by Chris Rock) says to Bethany (the main character):

“I think it’s better to have ideas. You can change an idea. Changing a belief is trickier. Life should be malleable and progressive; working from idea to idea permits that. Beliefs anchor you to certain points and limit growth; new ideas can’t generate. Life becomes stagnant.”

If we’re able to separate our ideas from how we see ourselves as people, it allows us to abandon bad ideas more easily. If, however, abandoning an idea also means abandoning our sense of self, any attack on that belief is going to be extremely emotionally jarring, and we’ll resist it at all costs. The first step towards making progress in any discussion of competing ideas is to make the debate about the idea, not the person.

The Pope comes soooo close to getting it right

Richard Dawkins has a really funny line about how Christianity is “better” than Hinduism because it’s much closer to recognizing the actual number of gods – Christians just overestimate by one. It’s amazing how tantalizingly close you can get to the truth with religion, but fail to make that final leap across the chasm of rationality (to borrow unashamedly from Kierkegaard).

After watching the Catholic church blame isolated pockets of individuals, the media, and finally “the gays” (it always seems to come down to them), Pope Benedict finally came close to actually acknowledging that the systemic sexual abuses taking place in the Catholic Church were the fault of… THE CHURCH:

Critics have previously accused the Vatican of attempting to blame the media and the Church’s opponents for the escalation of the scandal. But the Pope made clear its origin came from within the Church itself, and said forgiveness “does not replace justice”.

I’m not a demagogue. I am completely willing to recognize when someone I disagree with does something noble. Recognizing that the church had a role in the abuse and saying that having God’s forgiveness (note: evidence not shown) does not replace earthly justice is a marvelous and courageous admission. It takes a great deal of humility and respect for others to stand up and say “I have made a mistake, and the fault is mine.”

Which is almost what Benedict did here. Now I am not trying to suggest that Benedict (as his Clark Kent alter-ego, Cardinal Ratzinger) himself is solely or even primarily responsible for covering up the sexual abuse, although there is evidence to suggest that his office was complicit. I am not expecting him to go out and own up for all of the abuse that’s ever happened in the church. However, there’s one final step that the Pope needs to take if he’s interested in being honest – he needs to stop blaming “Sin”.

Sin is a ridiculous ephemeral concept. It’s a disembodied entity that sneaks into the souls of righteous people and influences their acts. It’s like blaming the devil for possessing you and making you get drunk and beat your kids. Saying that sins within the Church are responsible for its actions is creating a non-corporeal scapegoat. It’s like Jeffy from Family Circus and his ghost pal “Not Me”. You can’t confront “Sin” and take it to task for its actions. You can’t remedy “Sin”. “Sin” is just out there, and there’s nothing to be done about it.

I’m waiting for the pope to recognize that wearing a cloak of impenetrable infallibility is going to lead to corruption. Insisting that the “good of the church” should trump doing the right thing raises the question – how do you know that what’s good for the church is good for anyone else? What we see over and over is that the more power and secrecy a group has, the bigger the potential for abuse. That isn’t because of “Sin” or because of bad people who sneak in under the radar. It’s the inevitable outcome of an establishment that refuses to play by society’s rules and insists on its own superiority without evidence. The reason the RCC is catching all the attention right now is because it’s the biggest organized religious entity – I’d be shocked to learn it isn’t happening in other places.

As I said, I applaud the Pope for coming close to getting it right. His office’s unrepentant actions immediately following this pseudo-apology are contemptible and I am still no friend of Benedict, but I am willing to recognize when steps are made in the right direction.

Cognitive Semantics – pt. II

This post originally appeared on Facebook on May 25th, 2009

After re-reading my post of a couple days ago, I realize there is a big piece missing from my discussion of “smart” decision-making: the concept of Value. I allude to it in my example about driving 2 hours to run for 30 minutes, but I feel it needs more explanation.

What I seemed to suggest in my last post is that there is a process of intellect/sagacity/acumen calculation that will help people make smart decisions. If I were to try and boil that process down into a mathematical equation, it would look something like this:

Good = A + B + C + D + E + …

Where “Good” is a ranking of how positive or negative the decision is for the decider, and “A – E” are the various likely outcomes of the decision (i.e., A is money spent, B is fun had, C is social status gained, and so on hypothetically). However, this assumes that all of the outcomes are equally important, which they may not be. For example, a person on welfare can obtain a great deal of social status and fun from buying a brand new car. However, the amount of money spent is prohibitive (indeed, it would be impossible for this person to eat or pay rent or do anything… even buy gas). On the other hand, a millionaire would not mind paying for a new car, but may not gain as much social status (“Oh, you bought a new Camry. How… common. Excuse me, I need to finish my arugula and monocle sandwich”). Clearly each outcome does not carry the same weight for each person. Two people with different values may reach the same (or indeed, different) decisions using the same intellect and acumen, but via very distinct processes.

Perhaps a better equation might look like this:

Good = Aa + Bb + Cc + Dd + Ee + …

Where “a – e” represent the VALUE the decider places on each outcome. Consider our two would-be car buyers: the welfare recipient places a much higher value on making prudent financial decisions (insofar as a $20,000 purchase is concerned) than on achieving social status and having fun; the millionaire, by contrast, treats social status as real currency. Consequently, the raw outcomes A, B and C are the same for both buyers, but a(welfare) is extremely large, while a(millionaire) is much smaller. Conversely, b(welfare) and c(welfare) are small, while being larger values for the millionaire.

This form of the equation quickly becomes ridiculous as one realizes that there are an immeasurable number of potential outcomes that rank from trivial to potentially catastrophic. Giving each one of these outcomes an equal weight in the decision-making process would necessarily give preference to decisions which had extremely small but positive outcomes, and made no impact at all. A third piece is required, which is the PROBABILITY of each outcome occurring.

An equation might then look like this:

Good = Aap1 + Bbp2 + Ccp3 + Ddp4 + Eep5 + …

Where “p1 – p5” represent the PROBABILITY of each outcome occurring.

Fans of British philosophy will recognize this as a re-hashing of Utilitarian calculus – the principle of “the greatest good for the greatest number” – without the ethical connotations. This isn’t confined to making moral decisions; it’s a suggestion for a crude model of how people make decisions (or should make them). It is fairly evident that the value people place on different outcomes is a significant component of decision-making that is completely independent of intellect, wisdom or intelligence.
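To make the arithmetic concrete, here is a minimal sketch of this weighted model in Python. Everything below – the outcome magnitudes, value weights, and probabilities for the two hypothetical car buyers – is invented purely for illustration:

```python
# Weighted-sum decision model: Good = sum(outcome * value * probability)
# All names and numbers are hypothetical, for illustration only.

def goodness(outcomes):
    """Each outcome is a tuple: (magnitude, value weight, probability)."""
    return sum(a * v * p for a, v, p in outcomes)

# Buying a $20,000 car, scored by two hypothetical deciders.
# Outcomes: money spent (negative), fun had, social status gained.
welfare_recipient = [
    (-20.0, 5.0, 1.0),   # money spent weighs very heavily
    (4.0, 1.0, 0.9),     # fun matters, but less
    (6.0, 1.0, 0.5),     # status matters, but less
]
millionaire = [
    (-20.0, 0.2, 1.0),   # money spent barely registers
    (4.0, 1.0, 0.9),
    (6.0, 3.0, 0.5),     # status is "real currency"
]

print(goodness(welfare_recipient))  # strongly negative: a bad decision
print(goodness(millionaire))        # positive: a reasonable decision
```

Same outcomes, opposite conclusions – which is the point: the disagreement comes from the value weights, not from any difference in cognitive ability.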

So who cares? I guess I just wanted to point out that a decision that seems stupid by some standards (most usually my own standards) might in fact be motivated by the value the decider places on different outcomes. If I don’t think something is important (for example, I don’t put a lot of value on fitting into a crowd), I will question (and usually insult) the decision that someone else has made. However, this is a difference in values, not in cognitive ability.

HOWEVER, this issue of values does not side-step the first post’s point, which is that when making decisions, one should take the time to become aware of the ramifications of the decision, and then consider the value placed on each. Making decisions from gut feeling or “emotional reasoning” will cause you to arrive at different courses of action from the same set of principles, instead of making the decision that has the greatest good the greatest number of times.

Obviously decision-making is far more complicated, and we are people, not computers. However, if the goal is to make well-informed and prudent decisions, it would benefit us to put more time into thinking about why we do the things we do, rather than just doing them and sorting out the problems afterward.

Cognitive Semantics

This post originally appeared on Facebook on May 23rd, 2009

Because it’s come up in a couple of conversations recently, and because I’ve been thinking about it, I thought I’d share some of my thoughts on some stuff. Specifically, the lines I draw between the concepts of being “Wise”, “Intelligent”, “Intellectual” and “Smart”.

Wisdom is first on this list because it is probably the most easily-defined of these concepts. When I say someone is “wise” or their judgment exhibits a quality of wisdom, I am referring to their perceptive grasp of the relationships between entities. Conventionally, wisdom is a result of having a rich life experience; that is to say, a person who has lived through a lot is more wise than someone who has not. I think that experience is one method by which wisdom can be gained, but it is not the only one. Wisdom is knowing how different things (people, events, forces) interact with each other, and what kind of result one can expect to see from a given set of circumstances. As an example, a wise person knows that putting their hand on a stove will result in pain. They may know that from personal experience, or from a deeper grasp of the underlying concepts of the interplay between hot objects and body parts, and what is likely to happen when those come together. To use a different example, a wise person may know that marrying a person with a fundamentally different value system is probably not a great idea; not because he/she has experienced it before (or indeed, had friends who did), but because he/she understands something about human relationships and what makes them work well.

Intelligence is a concept that dovetails with Wisdom, but is a separate entity. Intelligence is a quality of intuitive grasp and adaptability. An intelligent person solves problems in novel situations because he/she has the ability to see things from a number of angles, and to create innovative solutions that are not guided by any previous experience. In a recent debate with a friend, she suggested to me that if a person from North America was dropped in an African jungle, his/her intelligence would be useless to him/her. I argued that their knowledge would not help them survive, but if they were particularly intelligent, they would be able to adapt and solve problems, without the need for specific instruction. Like Wisdom, the application of Intelligence requires an underlying grasp of the relationships between things, but Intelligence itself is not tied to any observed phenomenon. One cannot learn to be intelligent, though one can, conceivably, be tutored in skills to apply the intelligence he/she does have.

Intellectual refers to a concept I learned in a social psychology course I took back in Waterloo called “need for cognition”. This is the willingness (or tendency) of a person to spend time thinking about things. Some people have a propensity to look at things in dry, rational terms, or engage in pursuits that have a purely cognitive element to them. Others prefer to experiment in the real world, or to look at the way things happen in a pragmatic sense rather than distilling them to their constituent parts. As an example, two people attend a concert (for the sake of argument, we’ll say it’s me and Joel, although this is not really a very accurate abstraction). Both of us like the music the band is playing. Joel likes it because he likes it. His practical experience of the music is a positive one. I, on the other hand, am impressed by the innovative chord structures, the use of harmony, how it departs from other music I have heard, and other more empirical measures of quality. While we both arrived at the same conclusion – the music was good – my appreciation was influenced by intellectual appraisal, whereas Joel just likes what he likes.

The final concept is that of being Smart. I judge Smartness by the quality of decisions a person makes. Given a set of circumstances, a person has a number of options on how to proceed. A Smart person makes choices that have the greatest long-term utility (or, if you disagree with that, fill in your own definition of what a smart decision is). Smart is the most difficult of these ideas to really put a solid definition around, because it overlaps a lot with the idea of “Taste”. I may think that driving 2 hours so you can enter a 30-minute running event is not a very smart use of time, but that’s because I’m really lazy and don’t like running. However, someone who places value on the experience, the running community, and variety may think it is completely worth going to a new place.

It is important to differentiate these concepts, because while they are often used interchangeably, they are not the same thing. I know many people who are very wise, but when confronted with a new situation where they cannot draw on their experience or grasp, they struggle. I know a number of very intelligent people who don’t think about things before doing them, trying out new solutions all the time like a person trying to assemble a hang-glider after falling off a cliff.

For me, a smart person makes decisions that are guided by the above three qualities. He/she looks at prospective options and evaluates whether he/she knows how the elements will interact, and what the long-term repercussions will be. If the situation is a novel one (i.e., no experience of how it’s solved), he/she uses his/her intelligence to intuit some novel approaches, and then intellectually tests what is likely to happen if he/she follows each option. This brings me to the antonym, stupid. A stupid person blindly makes decisions without considering the outcome beforehand. Stupid decisions are made out of a lack of wisdom (not knowing what will happen if X meets Y), a lack of intelligence (not being able to guess how X might affect Y in light of a lack of prior experience), and/or a lack of intellect (not spending the time to imagine how differences in X, like the presence of Z, might change Y).

A stupid person is a person who makes stupid decisions. It does not necessarily mean a lack of any of the above qualities, but it does mean the lack of the use of at least one. I don’t think stupid should be considered as a pejorative term; that is to say, stupidity isn’t inherently negative, it just tends to lead to negative results. It is my opinion that we should all try to exercise each of the three underlying concepts to accomplish the fourth.

The above are all, of course, simply my thoughts, opinions, and semantics. A big chunk of the inspiration for this (aside from the fact that it’s come up in conversation a few times over the past couple of weeks) came from reading “Zen and the Art of Motorcycle Maintenance” by Robert M. Pirsig. It’s a good ‘un if you haven’t read it already.

Why “everyone’s entitled to their opinion” is a lie

As children we were inducted into some terrible and damaging lies perpetrated by society (usually at the hands of our parents). Some of these were benign, like Santa and the Easter Bunny; some were intended as comfort but led us astray – I’m thinking here of guardian angels, or of monsters under the bed being afraid of the light; but there was one that was truly abhorrent, and it is responsible for a whole host of problems in modern life.

That lie, my friends, is the oft-parroted platitude: “everyone’s entitled to their opinion.”

Before I get started tearing this idiotic statement down, I would like to formally acknowledge the hypocrisy present in the fact that I am presenting my opinion that not everyone is entitled to an opinion. Hopefully by the end of this post I will successfully demonstrate that there are some people who are entitled; not by virtue of their specialness but by the way in which they arrive at their opinions. There, it has been disclaimed.

A great anecdote pops into my head, which harkens back to Grade 13 (OAC) English class with Ms. Mooney. A classmate of mine presented some half-thought-through metaphorical interpretation of something in Robertson Davies’ Fifth Business. Ms. Mooney pointed out that such an interpretation did not seem to fit the overall theme of the book, and in fact ran completely contrary to other sections in the work. Haughtily, the classmate shot back “well, everyone’s entitled to their opinion,” to which Ms. Mooney replied “yes, but yours is wrong.”

Colloquially used, “everyone is entitled to their opinion” means that anyone can think whatever they want, and they have the right to express that opinion, have it listened to, and be considered alongside other opinions. In a legal sense, I suppose it is true that it is against the principles of free speech to restrict someone from expressing an opinion – in that specific context I suppose everyone does have the right to say whatever they want. However, this is taken much too far. There are any number of people out there who should not be expressing an opinion on anything. I’m not suggesting they aren’t allowed to; I’m saying that their opinions are so damaging and so retarding of actual thought and progress that they erode the opinions of people who actually do know what they’re talking about.

I’m talking in circles here, so I am going to back up to first principles. I want to start by providing my definition of opinion. The Free Dictionary provides the following definition:

A belief or conclusion held with confidence but not substantiated by positive knowledge or proof.

Which, I suppose, is close to the colloquial meaning. Obviously if there is positive knowledge or proof of an idea it becomes a fact, not merely an opinion. The problem with this definition is that the standard of “proof” is an illusory one. People today have finally caught up to the methodological skeptic philosophers of the 17th century and have happily begun telling everyone (with faux smugness) that “it’s impossible to prove anything.” In a certain metaphysical sense that is an accurate statement – outside of mathematics it is impossible to have 100% proof that anything is true. The definition of proof then comes under fire, because nothing is completely unassailable. For example, I can doubt the existence of the sun, preferring instead to believe that there is a giant light bulb floating in the air that is controlled by guy wires. “Evidence, reason and logic be damned,” I might say, “I know a floating light bulb when I see one and you can’t prove otherwise.”

So we need to establish a standard for “proof” first. I would offer the following:

Sufficient evidence and/or logic to establish beyond reasonable doubt that an explanation for an event or phenomenon is an accurate description of reality.

Sure, there are huge problems with it: what is “reasonable” doubt? What is “reality”? I am happy to dispense with these as questions suitable for contemplation by metaphysicians, who in fact have a great deal to say on the matter. For the purposes of this discussion we can define reasonable as “in accordance with fair-minded logic” and reality as “the state of affairs as seen by an independent observer” and end the ontological portion of this exercise.

Having established this standard for proof –  which does not, by the way, preclude the idea that new evidence could disprove something – we can begin to have a meaningful discussion about what an opinion is. In this case, an opinion is a world-view (I like Piaget’s term schema for a world view, and will use it hereafter unitalicized) for which there is insufficient evidence one way or another to demonstrate beyond reasonable doubt that an event is, in fact, reality.

There are any number of forms such an opinion can take. For example, there is no “right answer” when it comes to economic theory. There are examples of times when allowing privatization of a field produces better outcomes than government control would – a great example of this is the Human Genome Project, in which Celera developed a private venture that was much faster and cheaper than the international body in sequencing the genome. However, there are similarly examples of circumstances where public administration is far superior to private competition – the 407 Highway in Southern Ontario is such an example, in which a toll highway was constructed with public funds, was subsequently privatized, then had to be bailed out by the government. In addressing any given problem (for example, ‘should the government privatize pharmaceutical insurance?’) there are opinions to be had on either side. Neither approach can prove its superiority in the hypothetical, and so legitimate debate can take place.

What is common to the opinions in the private/public debate is that there is (in addition to presumably some evidence) a logical and reasoned progression from agreed-upon first principles that diverges at some point to form two partially contradictory views on the same issue. For example, advocates of both public and private can agree on the meaning of terms like “money” and “savings” and “benefit.” While they may forecast the outcomes of those terms differently, they can at least agree to be using the same dictionary.

In this context, I would like to offer a clearer and more precise definition of opinion:

A belief or conclusion held with confidence, borne by logic and common first principles, that has not been or cannot be definitively proven.

All I’ve done is shoehorn in the thesis of my argument, which is that an opinion ought to be based on fact or, failing that, on evidence-supported lines of reasoning. If it’s not possible to prove something (to reference my earlier example of the classmate in English class), your opinion should be consistent with reality and you should be able to “show your work” – your audience should be able to see your thought process. I also put in the “first principles” thing simply because there are people who love to shift goalposts mid-argument and say “well that’s not what _____ means to me.” If you can’t agree a priori on what you’re talking about then the argument is a waste of time and precious consonants (vowels are abundant and freeeeeeeeeee).

Why is this a better definition, or at least a more useful one? Consider the alternative case, in which opinion means merely whatever idea crosses your mind at a given moment. It must be given the same consideration as a hypothesis that has been scrupulously and carefully worked out. My brainwave that the sun is a giant floating light bulb is therefore equally valid (in an “all opinions are valid” sense) as your conjecture that it is in fact a ball of gas burning millions of miles away. The problem with tolerating my hare-brained pseudo-opinion in a fit of political magnanimity is that my interpretation has wildly different consequences than yours (yours allows space travel; mine necessitates the construction of a factory to build a replacement “sun” in anticipation of when this one burns out, and the giant A-frame ladder needed to install it). If my opinion is granted credence and equal time to yours (since everyone is entitled), I might be able to persuade some gullible fools into going along with it. Given a charismatic enough spokesperson and enough political pressure, we will be forced to “teach the controversy” of sun vs. bulb in schools.

I am clearly drawing a parallel here, which should illustrate to you that this kind of thing can and does happen, with disastrous results.

Furthermore, I think it’s useful to recognize that some opinions trump others in our own lives. One of the most over-used and perhaps most moronic statements of our common parlance is “agree to disagree” when it comes to matters of clear right and wrong. A friend of mine delights in tormenting another friend (the two are roomies) by saying something completely outlandish and then, when challenged, smirking and saying “whatever, I guess we’ll just have to agree to disagree.” There is only one circumstance in which “agree to disagree” is a valid statement, and that’s in a deontological argument (ethics, discussions of “the good”). People have conflicting values, and it’s impossible to establish which values are “right” and “wrong”. Further explaining the difference would require another explanation nearly as long as this post, so I’ll save it for another time and get back to my original point.

In order to progress as a society, to develop new ideas and solve new problems, we must do away with this pernicious lie that everyone deserves to have their opinion heard. I have attempted to show the logic and reasoning from first principles behind why I feel this way. I once had a roommate say to me “the only reason you’re right more than I am is that you never say anything that people can really object to!” as though it were some sort of vice to have thought things through. However, I want to make it clear that I am not advocating the censorship of dissenting opinions or even of crackpot lunatics. What I am advocating is that we stop lying to our children when we tell them that every opinion is equally valid. What if we told them instead “provided there’s something reasonable behind it, every opinion has at least some validity”? Definitely not as catchy, but not as destructive either.

At least, that’s my opinion.

The Forces of Stupid

Battle lines have been drawn in the intellectual plains. The respective armies have gathered and are unleashing holy hell on each other. This is not the oft-referenced battle between the Forces of Good and the Forces of Evil. No this battle is much more insidious. This is a battle between the Forces of Good and the Forces of Stupid.

Who are represented on these two sides? Warriors for the Forces of Good (FoG) include scientists, secular humanists, and those in all fields who make a genuine effort to be conscientious and thoughtful in all issues before picking a side.

Representing the Forces of Stupid are:

  • Creationists
  • Anti-vaccinationists
  • Tea-Partiers
  • Conservatives (surely not all of them, but definitely the ones who are doing all of the talking)
  • 9/11 Truthers, Holocaust Deniers, Illuminati conspiracy theorists, etc.

These are fights where there is a clear right and a clear wrong. Legitimate disagreement is possible when two sides have a philosophical difference in interpreting the same set of facts (most ethical dichotomies, the actual nature of subatomic physics, whether to get pizza or Chinese for dinner). In some fights, the controversy is resolved when new facts come to light that clearly define what is real, and what isn’t. Not so, in the minds of the Forces of Stupid. What is common to those on the FoS side is that they believe they know the “real truth” without taking any time to examine any evidence whatsoever.

By way of weaponry, these two opposing forces seem almost completely mismatched. The FoG use fact and reasoning as their chief weapons. They are able to craft logical, precise and nuanced arguments that cut as close to the heart of truth as is humanly possible. The FoS, on the other hand, are armed simply with wild, unsupported assertions and every logical fallacy under the sun, including personal attacks, false equivalences, and their favourite tactic: straw men.

The difference, however, comes into play when one examines the defensive armaments available to each side. The FoG, believing that their weaponry is so far superior to that of their opponents, use it also as their chief defense. They counter the flimsy and weak attacks of the armies of Stupid by cutting those attacks to pieces, parrying each volley of half-baked accusation and allegation with razor-sharp deductive precision, rendering their foes’ attacks harmless. The FoS, on the other hand, are shielded in an impregnable fortress of denial and lethe, first refusing to believe that their attacks have been utterly defeated and then turning around, forgetting that it happened at all, and re-launching their original, refuted, attack.

Why is this battle happening? Aside from the obvious fact that people disagree about things, and some of those things are highly important, who are these two opposing forces fighting for? In any war, those doing the actual fighting make up only a small percentage of the general population, being strongly outnumbered by civilians. This struggle is no different. There are a large number of people who are undecided on these issues, whether through benign ignorance or cautious equivocation. The more of these people either side can win over to their way of thinking, the stronger the force becomes and the more that side can sway decisions.

So why do these two forces appear to be on equal footing? Why don’t the FoG just rout their opponent, having completely dismantled their attack apparatus? The sad truth is, because people are stupid. Now, I don’t mean stupid as in unintelligent or as a necessarily pejorative term, I simply mean that the average person does not latch on to reasoned thought as being the only way to make decisions. This happens for a number of reasons – thinking is hard work, reality is more nuanced than a soundbite can encapsulate, people are not educated enough to use logical tools (this is the biggie, in my opinion) – but at least part of it has to do with the fact that religion has elevated “faith” to be equivalent to logic. The argument is that evidence and reason are good for some things, but it’s equally valid to simply believe in something.

I posit that the FoS are able to appeal to that type of thinking. It gives people all the satisfaction of “knowing” that something is “true” without having to do any of the hard work required to establish verifiable truth. The FoS believe in their heart of hearts that what they believe is 100% un-nuanced reality and that anyone who believes differently is insane. This explains why when an argument is soundly defeated, the FoS simply shift the goalposts and say that the “real truth” is still there, it’s just a little different than they were saying before (or worse, that they’d been saying that all along, completely ignoring/forgetting their previous statements). This is absolutely because of faith-based “reasoning” – just look up Thomas Aquinas’ “proofs” of the existence of God, or any theological argument for that matter. This also explains the phenomenon of what has been called Crank Magnetism, where people who believe in one crackpot theory often believe in, and/or come to the aid of those who believe in, many other types of unsupported/unsupportable assertions and belief systems.

I’m not saying that belief is wrong. FoS foot soldiers often point out that people believe in science and then try their damnedest to forge a false equivalence between religious belief and belief in the evidence. However, these two types of belief are not the same. Scientific beliefs and tenets come from observing phenomena in the world, noting how they behave, discerning a pattern, and then drawing a conclusion (yes, I am aware that scientists often go in with a model in their minds already which can bias the conclusion, but that is the flaw of the scientist, not the science). Contrast this to religious belief, wherein the conclusions are drawn first, and evidence is tortured, teased, stretched and cajoled to fit the prescribed pattern. In scientific belief, evidence that does not fit the model is evidence that the model doesn’t explain reality well or is wrong and the model is abandoned; whereas in religious belief, evidence must be changed to fit the model, which can never be abandoned.

In order for the Forces of Good to triumph, it is necessary to take a number of steps. I will detail these in another post (as this one is already getting a bit lengthy) but they are, in brief:

  • Understand your own position
  • Be consistent
  • Counter value arguments with value arguments
  • Speak to the audience (those undecideds)
  • Refuse to compromise truth
  • Be respectful of the opposing side’s humanity, if not (and definitely not) their beliefs

The true path to winning this war is to educate the populace, since educated people are more likely and more able to use logic as a decision-making tool. Ever notice how conservatives want to gut education spending, or leave higher education only within the reach of the rich? Ever wonder why? It’s because uneducated people are where their votes come from.

This battle is far from over, but the smarter we get, the less likely we are to end up fighting for the Stupid.

The danger of the downward comparison

A downward comparison is a psychological/philosophical phenomenon in which a person evaluates the goodness of some object by contrasting it with an object he/she deems to be worse (or, in all technicality, “less good”). This is useful in ethics when evaluating “the lesser of two evils” or even in economics when trying to make a decision between different, unwanted, but ultimately necessary outcomes.

It is more dangerous when it occurs in a person’s self-appraisal. A downward comparison does not tell one how good he is, only whether or not there are others worse off. While occasionally useful, downward comparisons must be balanced with their counterpart, upward comparisons, to give an idea of where you stand in terms of the things you care about.

For example, it might be very important to me that I am an ethical person. I put great personal value on making the right decision in ethically tempting situations (I wouldn’t, for example, steal money from a blind person, not because I can’t but because I feel that I shouldn’t). I put such great value on this trait, in fact, that it is central to my self-concept – it’s very important that I see myself as an ethical person. I maintain my sense of self by constantly comparing myself to infamous historical dictators. After all, I am much more ethical than Idi Amin, or Stalin, or Pol Pot… the list can go on. Since, my reasoning goes, I have not committed the wholesale slaughter of thousands of innocent people (nor could I imagine myself doing so if given the opportunity), I must be an ethical person.

It doesn’t take a lot of brain power to see how quickly my reasoning can be picked apart – being better than Stalin simply means that I’m not one of the most brutal despots in the history of the world. This fact says absolutely nothing about my absolute standing as an ethical person. I could be cheating on my wife, victimizing my employees, or voting for the Conservative party. All of these are clearly unethical acts that are not in any way comparable to mass murder, but still pretty heartless. However, because I am relying on downward comparisons to inform my self-image, I don’t ever have to consider whether or not my self-opinion is justified (or at least not until I’ve murdered a few hundred people). All I have to do is make sure I am not the worst, and I can continue to believe anything I want about myself.

The same argument can be made about entirely upward comparisons – that you’d feel terrible about yourself for not being the best. I would argue that it is unlikely that someone would completely despair of ever being good enough when compared to the best, but that’s simply a belief statement, not a rational argument. The fact is that without making both upward and downward comparisons, it is not possible to have an accurate self-assessment.

Why am I talking about this? Two words:

Jersey Shore

Who watches this crap? Why on Earth would anyone want to give up valuable time watching orange monkeys parade around with behavior that is only matched in its ridiculousness by their haircuts? What possible benefit could one gain from viewing this show?

Don’t get me wrong: I’m all for the entertainment value of television. Not every show needs to educate its audience or deal with heavy, hard-hitting issues, but you should at the very least walk away having learned some sort of lesson – whether it be the resolution of some ethical situation or a new way of dealing with your friends more positively… even the Naked Man had some value!

When I asked this question to friends, the response I got was invariably “it’s just harmless fun” or “they’re so stupid it’s funny”, but what I heard most of all was “they are more stupid than I am, and that makes me feel better.” You want to know how I know this? Because I do it too. I used to watch Maury Povich on days when I didn’t feel like going into the office on time. Almost without exception, there would be an unemployed, illiterate, lazy moron who had, against all laws of nature, managed to spawn a child with some equally repulsive woman who now was “900% sure” that this particular waste of skin, and not the 4-5 other wastes of skin she’d slept with that month, was the sperm donor. Why did I watch this show? Aside from my deep-seated fear of accidentally fathering a child and cheering when DNA proved that the dude was not the father, it made me feel better about myself. Even though I was sitting on the couch in my bathrobe at 10:00 am on a weekday, surely I was better off than these throwbacks!

Again, it doesn’t take a lot of work to pick apart the gigantic holes in my logic. So what if I was better than they were? So what if I wasn’t scraping the bottom of the barrel of humanity? I saw my smug self-satisfaction reflected on the faces of the audience members, whose lives were so incomplete as to attend a taping of the Maury Povich show (unless they went for the lulz). I switched my perspective, and realized that I was exactly the same as the audience, and there were a lot of people who were doing much more with their lives. So I got my ass off the couch, showered, and went to get some work done.

“Well that’s great”, you might be saying, “but it’s just a harmless television show”. I disagree with your use of the term “harmless”. There is harm in watching these kinds of shows, insofar as it encourages us to think of ourselves as superior. We become complacent in our search for excellence. We allow opportunities to improve to slip through our fingers because ‘at least we’re not as bad as _____.’ My reply: so what?

There’s a much more drastic example of the dangers of downward comparisons – Canada’s health care system. Compared to other OECD countries, health care in Canada costs far more per capita and delivers, at best, equal-quality care. However, instead of taking dramatic steps to improve the state of our system, we rest on our laurels and say “at least we’re not as bad as the USA.” The American system sucks; nobody’s denying that. But to compare ourselves to the worst and think that somehow that justifies our near-total inaction on wholesale change is the same logic that kept me unshowered and on the couch.

Here’s my point. While it’s important to feel good about yourself, that kind of reassurance is best for all when it comes from positive identification with those we wish to emulate, not from distancing ourselves from those we hate. Simple downward comparison will never move us out of the status quo of mediocrity. While not everyone can be the best, that’s not an excuse for not trying our best. The more positive examples we surround ourselves with, the more motivation we have to improve (and the more models of improvement we have at our disposal). The more we soothe ourselves by allowing ourselves to be lulled by downward comparisons, the more likely we are to stay exactly where we are, and the less likely we are to make life better for ourselves or for others.

The Placebo Effect

This post originally appeared on Facebook on January 27th, 2010.

Those of you who are not scientists may not be familiar with the term “placebo.” It is often equated in common language with “sugar pills”, or some sort of fake drug that doesn’t do anything. This is a reasonable proxy for what a placebo actually is. In a nutshell, a placebo is something that mimics the outward characteristics of an actual entity while having no real effect. This definition is imprecise, as placebos do have an effect, which is the whole point. The so-called “placebo effect” occurs when someone, believing that the placebo is actually the entity it is mimicking, undergoes some change that is attributed to the placebo, but is actually no more than their own psychosomatic response (or naturally-occurring events). The key to this effect is that the person believes that what they are receiving is genuine.

Placebos are most commonly associated with clinical trials for medicines. One group, the experimental group, is given a new drug while the other, the control group, is given a placebo (often a sugar pill or, in the case of intravenous drugs, a saline solution). Once again, it is important to note that the patients (and in high-quality studies, the physicians) are not aware whether they are receiving the medicine or the placebo. Nowadays, placebo-only trials are less common, since medical ethics require that all patients receive at least the standard treatment that would be available if they weren’t in the trial.

There is a very good reason for doing this. The human mind is incredibly powerful. Sometimes merely the act of believing you’ve been given something that will help causes you to feel better. Indeed, there is marked symptom improvement even in some cases of terminal or chronic painful disease simply due to believing that the “treatment” you’re getting is fixing the problem. Thus, in order to determine concretely what effect, if any, a new treatment has, it is necessary to control for the placebo effect – make sure all patients are experiencing it. Any significant difference seen after the placebo effect has been accounted for is, therefore, a result of the real effects of the treatment.
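The logic of “controlling for” the placebo effect can be sketched in a few lines of code. This is a toy simulation with entirely invented numbers, not real trial data: I assume both groups feel a baseline “placebo” improvement, and that only the treated group gets an additional real drug effect on top of it. The point is that the control group’s improvement is not zero, and it is only the difference between the two groups that isolates the drug’s real effect.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

# Hypothetical effect sizes, invented purely for illustration
PLACEBO_EFFECT = 2.0  # improvement everyone feels from believing they're treated
DRUG_EFFECT = 3.0     # the drug's additional, real effect
N = 1000              # patients per group

def improvement(receives_drug):
    # Every patient gets the placebo effect plus individual random noise;
    # only treated patients also get the drug's real effect.
    noise = random.gauss(0, 1)
    return PLACEBO_EFFECT + (DRUG_EFFECT if receives_drug else 0.0) + noise

control = [improvement(False) for _ in range(N)]    # placebo group
treatment = [improvement(True) for _ in range(N)]   # drug group

def mean(xs):
    return sum(xs) / len(xs)

# The control mean is ~2, not 0: the placebo effect alone "works".
# Subtracting it out leaves an estimate of the drug's real effect, ~3.
print(f"control mean improvement:   {mean(control):.2f}")
print(f"treatment mean improvement: {mean(treatment):.2f}")
print(f"estimated real drug effect: {mean(treatment) - mean(control):.2f}")
```

If you naively compared the treatment group to no baseline at all, you would credit the drug with the full ~5-point improvement; the control group is what lets you see that ~2 of those points would have happened with a sugar pill.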

(I’ve used the word “real” a couple of times here, and I anticipate that the more new-agey of you reading this will object to my co-opting that word for science. When I say “real”, I am using it in the metaphysical sense – the real/non-real dichotomy – which states that those things which can be directly observed, measured, etc. are “real” while all other things are non-real. Please note that, although linguistically similar in English, non-real is not the same as “not real”. “Not real” means fictional, imaginary, having no basis in reality; whereas “non-real” simply means that the concept is not a measurable, physically-based one. Admittedly, a lot of things that are “non-real” are also “not real”, but that’s the subject of a different discussion. Think of it this way: unicorn farts are “real” in a metaphysical sense, but “not real” in a “WTF, UNICORNS?” sense.)

What all this means is that the simple act of believing something to be true causes our minds to behave as though it is true, even in those cases when the object of belief has no actual effect. Belief is absolutely essential to this process – if I tell you “hey, eat this sugar pill”, you’re not going to feel any better (unless you had low blood sugar, but then it’s no longer a placebo, innit?).

Anyway, I said all of this as a preamble to the statement that’s been rattling around in my brain for a couple of months. It seemed particularly important to me. Maybe I am vastly overestimating the impact that my ideas have on people – maybe nobody cares about my inane ramblings and will just say “c’mon Ian, get to the swearing!” Anyway, here’s my fucking thesis:

If you have to believe in it for it to work, it’s a placebo.

Nobody intelligent denies the existence of the placebo effect. It’s been observed countless times in many different guises. However, we seem to be happy with confining it to the field of pharmaceuticals, even though it’s much bigger than that. It’s not a scientific thing, present only in beakers and pills; it’s a psychological phenomenon that occurs in the larger world around us, not only in terms of health but in the way we see the world. We carry good-luck charms, we have little personal rituals and idiosyncrasies, we talk about “fate” and “destiny”, we read horoscopes, the list goes on. This is stuff we all do, not just the crazy superstitious bunch. Remember that Seinfeld episode where George eats the éclair from the garbage? It was sitting right on top, only one bite out of it. It’s not as though coming in contact with the garbage can infused the food with virulent disease, but we all identified with the idea. That’s just a modified version of the placebo effect – we believe it’s dirty even though, rationally, we know it’s not.

So why am I talking about this? Why is this important? A placebo is given in a clinical trial as a kind of benign deception on the part of the experimenters. However, a patient in a hospital would never be given a placebo instead of real medicine in a treatment setting – we wouldn’t accept allowing someone to suffer when we have the ability to help. Why, then, are we completely willing to accept placebos in other forms – in some cases clamoring for them? Faith healing, homeopathy, crystals, reiki, tarot cards, psychics, chakras, qi, “The Secret”, placebos, placebos, placebos all. These are all examples of things that don’t work unless you believe they work.

I have, many times, heard the argument that there are other “ways of knowing” or “ways of measuring” that “Western science” can’t account for. This little fallacy will perhaps be discussed in another post, as this one is already getting really long. I’ll boil down my argument as concisely as possible here. There’s no such thing as “Western science”, there’s just “science”. Science is the act of observing the causal chain of a phenomenon to identify the “real”. If you’re not doing that, you’re not doing science. While we can argue metaphysics, ontology, theology, and all those good things from an East/West perspective, there’s only one kind of science. Everything else is sleight-of-hand and superstition, washed down with a big handful of placebos.

This is the part where I provide my full-throated defence of all of the things I just attacked. It may come across in the previous paragraphs as though I think that placebos are bad, or that the only stuff that matters is the “real”. Some might believe this to be true, but I don’t. As I said, the mind is incredibly powerful. Sometimes when you’re faced with an incredibly difficult situation (such as a terminal illness, a big speech, a first date), you need to believe that you can get through it. Belief in ourselves is crucial, as otherwise we’d be far too realistic about our limitations and never try anything new or difficult. However, when we throw ourselves into the breach, come out alive, and then give all the credit to our lucky rabbit’s foot, we’re doing ourselves a great disservice. When you do something good, take a victory lap! You overcame the odds and prevailed!

And, if you try something and you fail, well you can always blame immigrants, I guess.