I talk about science a lot in this blog. I am passionate about science, especially for someone who’s only studied it as a humanities major and an educated layperson. Scientists are my heroes — most obviously scientists like Galileo or Darwin, who’ve forced people to radically rethink the universe and our place in it, but also Joe and Jane Nerdiac slogging away in a lab or a swamp, trying to figure out some minute detail about the world with more patience and diligence than I could ever muster up.
And periodically, both in this blog and elsewhere, I run into people who try to convince me that my faith in science is misplaced. I hear/read people say things like, “Scientists are human, therefore science is flawed… therefore science is not to be trusted, and/or can’t really tell us anything useful about the world.”
The thing is? The first part of that is absolutely true. Science isn’t perfect. It’s a human endeavor, and it’s therefore fraught with imperfection. It’s shaped by bias, and arrogance, and the intense desire to be right, and the ability to be fooled, and the difficulty people have in seeing or imagining what they don’t expect.
I’ve never met, or read, a scientist who thought otherwise.
Which is exactly why the scientific method has developed the way it has.
People talk a lot about science as if it were a set of beliefs — like a religion, a body of theories and opinions about how things are. But while there’s some truth to this on a practical day-to-day basis, it really isn’t the big picture, or even the medium-sized picture. What science is, ultimately, is a method — a method for observing the world, and trying to explain it.
And here’s the thing about the scientific method: It’s been developed over the years to do one very specific thing — to minimize the effects of human error and bias, as much as is humanly possible.
See, scientists KNOW that they, like the rest of the human race, are arrogant, stubborn bastards who crave recognition and have axes to grind. Believe me: when you point out that many scientists are arrogant, you’ll get a dozen or more scientists laughing and saying, “Buddy, you don’t know the half of it.” And they have therefore developed this method for trying to figure out what is and isn’t real about the world — one that goes as far as we know how to go in minimizing the effects of that arrogance and stubbornness and the rest of it.
It doesn’t do it perfectly. And it takes time, not to mention extremely hard, often tedious work. But I would argue that it does this job better than any other method we have of gathering information about the world and coming up with theories to explain it.
So I want to talk a little about the scientific method — what exactly it is, and how it works, and why it’s done the way it’s done. (FYI, this isn’t meant to be a comprehensive summary of the scientific method — just a quickie tour of the features that I think are most pertinent to these conversations.)
Transparency, of both results and methodology. When scientists publish papers, they don’t just report the results of experiments. They also report — in mind-numbingly boring detail — exactly how those experiments were done.
They do this for two reasons. They do it so other people can repeat the experiment and see if they get the same results (see Replicability below). And they do it so other people can examine and analyze their methodology, and point out any problems there might be with it. Scientists know that outside observers can often spot mistakes that an insider can’t — especially when that insider has been working on their research for years, and has a certain rabid attachment to the outcome.
Replicability. One of the first things that happens when a scientist reports a surprising result is that a hundred other scientists run to their labs to repeat the experiment and see if they get the same result. So even if one scientist gets a particular result because they expected or wanted it and somehow skewed their experiment to make it happen… when the hundred other scientists repeat the experiment and try to replicate the results, it’s not going to come out the same. (BTW, this doesn’t just work to screen out bias — it also works to screen out fraud.)
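The winnowing effect of replication is easy to see in a toy simulation. In this sketch (every number is invented), one lab’s bias inflates its measurement of an effect that is actually zero; a hundred unbiased replications average out to roughly nothing:

```python
import random

def run_trial(rng, n=100, true_effect=0.0):
    """One experiment: compare treated vs. control outcomes and report
    the observed difference in means. Here the true effect is zero."""
    treated = [true_effect + rng.gauss(0, 1) for _ in range(n)]
    control = [rng.gauss(0, 1) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

rng = random.Random(42)

# One lab, through bias or wishful tinkering, inflates its measurement
# (modeled here as a flat +1.0 bump on a true effect of zero).
original_claim = run_trial(rng) + 1.0

# A hundred independent labs repeat the experiment without the bias.
replications = [run_trial(rng) for _ in range(100)]
mean_replication = sum(replications) / len(replications)

print(f"original claim:           {original_claim:+.3f}")
print(f"mean of 100 replications: {mean_replication:+.3f}")  # hovers near zero
```

The one biased result stands out as an outlier against the pile of replications, which is the whole point: no single lab’s enthusiasm survives a hundred independent repeats.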
Peer review. Again, scientists know that outside observers can often spot mistakes that an insider can’t, either because that insider cares too passionately about the outcome, or because they’re simply too close to the work to have perspective on it. So before it’s even published, research has to be reviewed by other scientists in the field — scientists who don’t have the same personal stake in the outcome as the researcher, and some of whom may even have opposing or competing stakes.
Careful control groups. As much as is humanly possible, scientists set up control groups for their experiments that are identical in every way to the testing group except in the area being tested. (And if they don’t do a good job with this, it’s likely to get caught in the peer review process — and even more likely to get caught in the attempts to replicate the research.) It’s impossible to do this perfectly — especially when you’re doing your testing on human beings and not, say, hydrogen atoms — but they do it as well as they can, and they run it by their peers to see if they missed anything (see Peer Review above). They do this because they know, from experience and history, that a hundred different variables can affect the outcome of an experiment — and a variable that you thought was trivial could turn out to be crucial.
I learned about a wonderful example of the importance of careful controls when I was in middle-school science class. We were learning about the polio vaccine, and our teacher explained that when the vaccine was first being tested, the researchers went to the schools and asked parents for permission to test this experimental vaccine on their kids. Some parents said yes, some said no… so the researchers said, “Great. We’ll test the vaccine on the kids whose parents said Yes, and the ones whose parents said No will be our control group.” But when they went to publish their results, they were told that the experiment was flawed and they had to repeat it. There was an important difference between their control group and their testing group, one that hadn’t occurred to them — namely, whether the parents had said Yes or No to the experiment. So they repeated the study, this time splitting the kids whose parents said Yes into a testing group and a control group.
And when they compared their results to the results of the original experiment, they found that, in fact, kids whose parents had agreed to the experiment WERE more likely to get polio than kids whose parents refused it. Regardless of whether they’d gotten the vaccine or not. They would never in a hundred years have expected that outcome — but that’s the outcome they got. And they got it — as well as an accurate answer to the rather more important question of whether the polio vaccine worked — because of the combination of peer review and careful use of controls. (I don’t have space here to go into why they think this outcome happened — if you’re curious, ask me in the comments.)
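The polio story can be sketched as a toy simulation (every probability below is invented, not historical data): suppose some hidden family trait raises both the chance that parents consent and the child’s underlying risk. Then comparing vaccinated consenters against refusers is biased, while randomizing within the consent group gives a clean comparison:

```python
import random

rng = random.Random(7)

vax_sick = vax_n = 0          # consenters randomized to the vaccine
placebo_sick = placebo_n = 0  # consenters randomized to a placebo
refuser_sick = refuser_n = 0  # refusers (the flawed "control group")

for _ in range(500_000):
    # A hidden trait makes parents more likely to consent AND raises
    # the child's underlying risk (all probabilities made up).
    hidden = rng.random() < 0.5
    consents = rng.random() < (0.8 if hidden else 0.3)
    base_risk = 0.004 if hidden else 0.001
    if consents:
        if rng.random() < 0.5:                          # coin-flip assignment
            vax_n += 1
            vax_sick += rng.random() < base_risk * 0.5  # vaccine halves risk
        else:
            placebo_n += 1
            placebo_sick += rng.random() < base_risk
    else:
        refuser_n += 1
        refuser_sick += rng.random() < base_risk

vax_rate = vax_sick / vax_n
placebo_rate = placebo_sick / placebo_n
refuser_rate = refuser_sick / refuser_n

# The honest comparison (vaccine vs. placebo) shows the vaccine working;
# the flawed comparison (vaccine vs. refusers) makes it look useless,
# because refusers were lower-risk to begin with.
print(f"vaccinated {vax_rate:.4f}, placebo {placebo_rate:.4f}, "
      f"refusers {refuser_rate:.4f}")
```

In this made-up setup the vaccine genuinely halves the risk, and the placebo comparison shows it; but the refusers’ rate happens to land close to the vaccinated rate, so the flawed design would have hidden a real effect. A badly chosen control group doesn’t just add noise — it can point you at the wrong answer.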
Double-blind and placebo-controlled testing. Scientists know — especially when it comes to doing tests on people, such as medical or psychological research — that unconscious biases of the testers can influence the results of the tests. (You jiggle the test tubes of your experimental group just a little harder than your control group, and your results are fucked.) And when it comes to medical testing, scientists know about the placebo effect. So as much as possible, experiments are carefully set up so that even the researchers don’t know, for instance, which batch of blood samples came from the group that got the drug, and which batch came from the group that got the placebo — until the testing is all completed.
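Here’s a minimal sketch of the bookkeeping behind that kind of blinding (the subject IDs and group sizes are made up for illustration): assignment is random, and the subject-to-group key stays sealed until all the data are in.

```python
import random

def double_blind_assignment(subject_ids, seed=0):
    """Toy sketch: randomly split subjects into 'drug' and 'placebo'
    groups and return the assignment as a sealed key. Until the trial
    is over, neither subjects nor experimenters look inside it."""
    rng = random.Random(seed)
    order = subject_ids[:]
    rng.shuffle(order)  # random assignment: no one chooses who gets what
    half = len(order) // 2
    sealed_key = {sid: ("drug" if i < half else "placebo")
                  for i, sid in enumerate(order)}
    return sealed_key

subjects = [f"S{i}" for i in range(8)]
key = double_blind_assignment(subjects, seed=1)

# During the trial, everyone handles identical-looking pills and samples;
# only after the last measurement is recorded does anyone open the key.
print(sorted(key.values()))
# ['drug', 'drug', 'drug', 'drug', 'placebo', 'placebo', 'placebo', 'placebo']
```

The design choice worth noticing is that the randomness and the secrecy do different jobs: the shuffle prevents anyone from stacking the groups, and the sealed key prevents anyone’s expectations from leaking into the measurements.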
Falsifiability. This is one of the most important principles of science. If you have a theory that can’t be disproven — if any evidence at all can be made to fit into your theory — then you don’t have a useful theory. It has no predictive power, no explanatory power. So when you offer a theory, you have to be willing to say, “If A, B, or C happens, that would support my theory; if X, Y, or Z happens, that would contradict it.”
This is one of the reasons so many science-lovers and skeptics get so frustrated with so many religious or spiritual beliefs (not all of those beliefs, but many). Anything at all that could ever happen can get twisted around somehow to fit into the belief system. And from a scientific method point of view, that makes the belief system useless.
Which is what I was trying to get at before (somewhat clumsily) in my Lattice of Coincidence post, when I was asking, “If paranormal phenomena were ‘shy’ (i.e., inconsistent and unpredictable and tending to disappear when tested) but real, how would that information be useful?” If you have a theory about the paranormal or metaphysical (or about anything else), and no possible result or evidence — or lack thereof — could contradict that theory or convince you that it’s wrong… then it’s not a useful theory. It has no power to explain past results or predict future ones. And that’s not just a practical problem. It’s a philosophical problem, and a big one. If you have no way of knowing whether you’re wrong, then you have no way of knowing whether you’re right.
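As a toy illustration, here are two “theories” written as Python functions that judge whether an observation is consistent with them (the numbers are invented). The first makes a concrete, refutable prediction; the second, like a “shy” phenomenon, accepts everything — which is exactly why it tells us nothing:

```python
def gravity_theory(observed_acceleration):
    """A falsifiable claim: dropped objects accelerate downward at
    roughly -9.8 m/s^2. It sticks its neck out; plenty of conceivable
    observations would refute it."""
    return abs(observed_acceleration - (-9.8)) < 0.5

def shy_theory(observed_acceleration):
    """An unfalsifiable claim: 'the phenomenon is real, it just hides
    when tested.' Every possible observation counts as consistent."""
    return True

observations = [-9.8, -9.7, +3.0, 0.0]
print([gravity_theory(a) for a in observations])  # [True, True, False, False]
print([shy_theory(a) for a in observations])      # [True, True, True, True]
```

A theory that can return False is one the world can argue with; a theory that only ever returns True has quietly stopped saying anything about the world at all.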
Does this system sometimes screw up? Fuck, yeah. Especially in the short run. Early results can seem promising but don’t pan out. Surprising new evidence gets explained by boatloads of new theories that turn out to be ca-ca. And I’m sure everyone can probably think of (or Google) many, many examples of times when scientists have taken one or more of the abovementioned principles and massively screwed it up.
But when the method is followed, it works. Slowly, in the long run, with lots of stops and slowdowns and detours along the way, it works. And even when it isn’t carefully followed by an individual scientist, the method works in the long run to catch that scientist’s mistakes — and to catch the mistaken assumptions and incorrect theories shared by scientists generally, and to replace them with new and more accurate ones.
And maybe more to the point:
What else do we have? What other method do we have for gathering information about the world, and coming up with explanations of what that information means, that has anywhere near the same power to minimize bias, and the desire to be right, and the difficulty in seeing what you don’t expect, and all the other obstacles our brains put in the way of understanding the world?
Intuition and inspiration are great. Scientists rely on them heavily to come up with ideas in the first place. But intuition is a starting place — not a final answer. We KNOW that intuition is heavily slanted by bias and expectations and what we want to be true. Intuition gives us ideas, gets us started on roads to explore — but if we want to be really, really sure that our ideas reflect reality, as sure as we can be with our imperfect brains and our huge and mystifying world, then we need a method to test those inspired, intuitive ideas. And as imperfect as it is, I think the scientific method is the best one we have.
In tomorrow’s post: Common objections to science and the scientific method — and my replies to them. If you have arguments against my little love letter, I’d like to ask you to hold them until then.