It would be a shock if it did, but a series of nine experiments done in 2011 by Daryl Bem, a professor at Cornell University, and published in the prestigious Journal of Personality and Social Psychology seemed to find this remarkable effect.
Steve Novella describes the experiment:
In the 9th study, for example, subjects were given a list of words in sequence on a computer screen. They were then asked to recall as many of the words as possible. Following that, they were given two practice sessions with half of the words, chosen by the computer at random. The results were then analyzed to see if practicing the words improved the subjects' recall for those words in the past. Bem found that they did, with the largest effect size of any of the 9 studies.
Got it? The subjects recalled better those words that they were later given to study.
Needless to say, these results sparked considerable skepticism, and other researchers tried to replicate them. When one group failed to do so, they tried to get their negative results published by the same journal and failed, a common problem that I wrote about earlier. After much difficulty, they got it published elsewhere.
Now the Journal of Personality and Social Psychology has decided, after all, to publish a different replication attempt that also failed to reproduce the Bem study, pretty much sinking the idea that the future can influence the present.
What might have gone wrong with the original Bem study? Novella has a good run-down of the possible reasons, some of which surprised me because they indicate a lack of rigor that is disturbing. I hope such practices are not widespread in psychology research.
Andrew G. says
I’ve seen expressed somewhere the idea that fields like parapsychology or some types of alt-med (e.g. homeopathy) can be considered in a sense as being the “control group” for science -- fields in which all the non-null hypotheses are false, and whose success is therefore an indication of systemic problems such as publication biases, researcher biases and so on.
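Andrew G.'s point can be illustrated with a quick simulation (the numbers here are hypothetical, not Bem's actual data): if every hypothesis in a field is false, roughly 5% of studies will still cross the conventional p < 0.05 threshold by chance alone, so a journal that publishes mostly "hits" ends up measuring its own publication bias rather than any real effect.

```python
import random
import statistics

random.seed(42)

def run_null_study(n=50):
    """Simulate one study where the true effect is exactly zero:
    draw a sample from a standard normal and test its mean against 0."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    t = mean / se
    # Rough two-sided cutoff for p < 0.05 at this sample size (t ~ 2.01)
    return abs(t) > 2.01

studies = 1000
hits = sum(run_null_study() for _ in range(studies))
print(f"{hits} of {studies} null studies came out 'significant'")
```

Running this yields on the order of 50 "significant" results out of 1000, even though every single effect is zero. If only those hits get written up and published, the literature in an all-null field still looks full of confirmed effects.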
eric says
Here I thought you were going to talk about quantum erasers. Flawed human subject psychological experiments are a lot more boring.
machintelligence says
The Novella article was great but…
From pharyngula http://freethoughtblogs.com/pharyngula/2012/08/28/the-price-of-a-hoax-is-too-high
He must have precognition!
*/sarcasm*
Chiroptera says
I hope such practices are not widespread in psychology research.
Peer review only works if the vast majority of peers understand how research is supposed to be done.
If poor practices are common in a field, then peer review, I would think, will tend to reinforce sloppy research.
JJ says
Well, Bem,
He of the hypothesis of a connection between ESP and quantum mechanics. After he presented that idea without running any (I mean not even the most basic) interference equations showing how it is supposed to work, I have to tell you I am shocked, horribly shocked, that he presented an experiment that reputable scientists could not replicate.
But, I think the real important question is, has the “Journal of Personality and Social Psychology” published that they now recognize the errors of their ways and have put in place safeguards to keep it from happening again?
DuWayne says
I think the problem here is that Bem actually produced some extremely important work on the formation of self-perception -- a fundamental part of how we (psychologists) understand the ways in which our beliefs about ourselves develop. He has also produced moderately important work on the long-term development of children who engage in certain abnormal behaviors and contributed to the body of knowledge about the etiology of homosexuality. He is unquestionably a brilliant scientist -- who happens to be a Believer in the paranormal.
Psychology is still an infant (or possibly more accurately, a toddler) science, struggling out of a cult of personality, and Bem has a great deal of real power in this profession -- especially in social psychology. While he couldn't destroy careers, he could certainly make it hard for one to succeed. And it is doubly hard to fight it, because he is a well-respected scientist for damned good reasons.
The other problem is that psychology has some very serious systemic problems. It isn't even a problem of standards so much as a problem with how science is done. The standards wouldn't be a problem if the science were allowed to function the way it must to develop solid evidence -- i.e., allowing people to replicate the shit out of everything and publishing the results. We all have biases, and even when we recognize *some* of our biases, it can be hard to compensate. And of course it is impossible to compensate for the biases we aren't aware of. With psych experiments, the results are almost always ambiguous in some way or another -- whether in the actual results or, more often, in the experimental design. This is why psychologists tackle problems from every angle they can.
Ideally, we would also replicate experiments exactly as described with great frequency and also replicate experiments with minor divergences -- accounting for more or different variables, in an attempt to discover if there were unknown, but consistently applied variables that might account for the results of the original experiment. Because of the complex nature of human behavior, it is absolutely essential for good science to exhaustively seek out confounding influences -- the more important a given piece of work, the more exhaustive that search must be.
Unfortunately, the way academic science (and nearly all psych research is academic science) actually works in the U.S. requires that researchers tell a good story, get published, and get more grant money. This problem exists in biology (I know too many biologists to accept any assertion that this is a problem unique to psych research) and even exists in the much cleaner, clearer world of physics. Even while I sit in psych classes, having instructors pound the importance of replication in all its glorious facets into my head, they themselves are playing the game they have to play to get grants. They also stringently avoid talking smack about even out-and-out frauds, because some of those frauds have power, or friends with power, who can affect careers.
DuWayne says
JJ -- this isn't a problem with The Journal of Personality and Social Psychology; it's a systemic problem that even the "holy grail" of science pubs, Nature, falls prey to. This kind of crap happens every single day and will continue to happen until the entire system is changed, and changed rather dramatically. The more power a given scientist's name commands, the less scrutiny their work is given -- that's the way the game is played.
Jared A says
Just because it has the highest impact factor doesn't mean that Nature is the "holy grail" in terms of scientific excellence. I thought everyone knew that Nature and Science are for publishing the sexiest science, but not necessarily the highest-quality science. Those journals are like the Nobel Prize: they exist for PR reasons, at that interface between science and the rest of the public.
If you want the most rigorous, sophisticated science experiments you go one or two tiers down.
robb says
since 5 out of 4 people have trouble with statistics, it is not surprising the study was of dubious quality.
DuWayne says
I didn’t mean scientific excellence, I meant in terms of playing the game of science. Indeed that is my point. The publication of Bem’s parapsych paper in an important psych journal is indicative of a larger problem for science in general (not just psychology).