The news that some neutrinos may travel faster than the speed of light caused a sensation when it broke in September of last year.
The OPERA experiment that caused such a flurry of interest with its reports of faster-than-light neutrinos has been repeated in a way that takes into account one of the criticisms, and the experimenters find that the neutrinos still seem to be traveling faster than the speed of light. You can read the paper on the revised experiment here. (For previous posts in this series, see here.)
In the earlier experiments, the neutrinos were sent in clusters that spanned 10 microseconds, much longer than the 60-nanosecond time difference that signaled the faster-than-light effect, and thus the experimenters had to do some fancy statistical analyses to extract the time of flight of each neutrino. Some skeptics had suggested that those statistical analyses were flawed. The new experiment uses clusters that last only 3 nanoseconds, thus ruling out that particular source of systematic error.
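As a rough sanity check on the scale of the effect (this is not the collaboration's own analysis), one can compute the fractional speed excess implied by a 60-nanosecond early arrival over the roughly 730 km CERN-to-Gran Sasso baseline; both numbers here are the widely quoted approximate figures, not the precise surveyed values:

```python
# Back-of-the-envelope size of the claimed OPERA effect.
# baseline_m is the approximate CERN -> Gran Sasso distance;
# the collaboration's surveyed value is known far more precisely.
C = 299_792_458.0      # speed of light in vacuum, m/s
baseline_m = 730e3     # approximate baseline, meters
early_s = 60e-9        # reported early arrival, seconds

flight_time = baseline_m / C       # light-speed time of flight, seconds
excess = early_s / flight_time     # fractional speed excess (v - c)/c

print(f"time of flight at c : {flight_time * 1e3:.3f} ms")
print(f"(v - c)/c           : {excess:.2e}")
```

The result, a few parts in 100,000, shows why nanosecond-level timing and meter-level distance surveying are both essential to the measurement.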
The other potential sources of error will take longer to check out.
(For previous posts in this series, see here.)
A lot of things need to happen before the extraordinary claims of faster-than-light neutrinos are accepted as true. As Carl Sagan once said, “Extraordinary claims require extraordinary evidence.” The required evidence needs to take many forms: the results should be consistent and reproducible, corroborating evidence will have to be found, consistency with other phenomena will have to be established, and alternative explanations for the phenomenon based on traditional physics will have to be ruled out. All this is going to take some time.
But if the result seems to hold up, even then it is not usually the case that scientists completely discard a highly respected old theory and start from scratch. While a few bolder scientists may take this opportunity to try and create a completely new theory, the majority of them usually seek to find minimal changes in the existing theory that would accommodate the new result.
As physicist Heinrich Pas says:
Even if true, this result neither proves Einstein wrong nor implies that causality has to be violated and time travel is possible. Things can move faster than the speed of light without violating Einstein if either the speed of light is not the limiting velocity as one can observe it for light propagation in media such as, for example, water. This can be modeled with background fields in the vacuum as has been proposed by [Indiana University physicist] Alan Kostelecky.
Or spacetime could be warped in a way so that neutrinos can take a shortcut without really being faster than the speed of light. As our three space plus one time dimensions look pretty flat, this would require an extra dimension (as proposed by [University of Hawaii at Manoa physicist] Sandip Pakvasa, [Vanderbilt University physicist] Tom Weiler and myself).
It was Einstein who suggested in 1905 that there is a limiting speed in nature and that this is the speed of light in a vacuum. I have already discussed in connection with Cherenkov radiation that when traveling in a medium such as water or glass or even air, the speed of light is reduced and it is possible to have other particles travel at speeds greater than light in that medium.
So one possible explanation for the OPERA neutrino results is to decouple the speed of light from the limiting speed. Perhaps what we call the vacuum has properties that slow down light from this potentially larger limiting value, and it is this new upper limit that should appear in the theory of relativity. If so, then having neutrinos travel faster than the speed of light in the vacuum would simply mean that neutrinos are slowed down less than light by the vacuum, similar to what happens in other media like the Sun or water or glass. This would require some additional adjustments to theory. Einstein said that the limiting speed must be an invariant for all observers and equated this limiting speed to the speed of light because it overcame some problems of consistency with Maxwell’s electromagnetic theory. Decoupling those two speeds may require us to refine Maxwell’s laws as well, at the very minimum. As is well known, there is no free lunch in science. You cannot make changes in one scientific theory without having to make adjustments in other theories so that they all fit together again.
This series has tried to explain why the proper scientific response to reports of a major discovery is skepticism. This should not be equated with dogmatic obstructionism because in the case of dogma, one starts with a belief that cannot be changed whatever the evidence. Skepticism, on the other hand, is merely resistance that can be overcome with sufficient evidence and reason.
Major theories in science are rarely overthrown on the basis of a single experimental result, though textbooks sometimes tend to give that erroneous image of scientific progress. Usually what happens when a surprising result crops up is that a few people start to look at it closely to see if the results can be replicated by other people in different contexts, and if the ancillary consequences of the new result are also seen.
If none of these pans out, then the original result is deemed to be due to an error (usually a subtle one in the case of careful scientists) or to some factor that was overlooked in the data collection or analysis. The latter is often referred to as a systematic error and is more common because it is hard to be sure that you have accounted for all the possible factors that can influence an experiment, especially if you are working at the frontiers of knowledge, pushing the limits. Sometimes, as in the case of cold fusion, an adequate explanation of the phenomenon within the standard framework is not discovered for a long time and a few scientists believe they do have a new effect and continue to work on it. Such theories die only when their advocates die out.
I doubt that the faster-than-light neutrino story will remain similarly ambiguous for too long, but it is a difficult experiment and so may take years to sort out. The quickest resolution to such controversies is when the original experimenters find some error that causes them to withdraw their claim. The OPERA team already has plans to repeat the neutrino experiment with modifications designed to address at least a few of the concerns expressed so far. Another group, known as MINOS, plans to repeat the experiment at locations in the US, with neutrinos produced at Fermilab near Chicago and detectors in northern Minnesota or even South Dakota, the latter being a longer baseline than that between CERN and Gran Sasso.
Whatever the final outcome, the faster-than-light neutrino reports have shone a light onto how science really works and that is always a good thing.
Just for fun, I am ending this series with a word cloud made out of this series of posts. (Ignore the href and em items since these are merely html tags and have nothing to do with the content.)
(For previous posts in this series, see here.)
Suppose that the claim that neutrinos can travel faster than light holds up. What are the implications?
As I said earlier in the series, this does not mean that Einstein’s theory of relativity is overthrown, since it always allowed for faster-than-light particles, though we had never observed them. But it does mean that Einstein causality will have to go: the idea that if two events are causally connected by a signal traveling from one event to the other, then all observers’ clocks will agree that the signal left the source before it arrived at the other end.
How hard would it be to keep the theory of relativity but abandon the idea of Einstein causality? It is not impossible. The idea that if A causes B, then A must occur before B is, after all, just another hypothesis subject to empirical testing. As Victor Stenger points out, long before Einstein came along, the whole idea of causality, that we can know that one event causes another, was challenged by philosopher David Hume (1711-1776).
Wikipedia has a nice synopsis of Hume’s views on the relationship of the problem of induction to that of causality:
First, Hume ponders the discovery of causal relations, which form the basis for what he refers to as “matters of fact.” He argues that causal relations are found not by reason, but by induction. This is because for any cause, multiple effects are conceivable, and the actual effect cannot be determined by reasoning about the cause; instead, one must observe occurrences of the causal relation to discover that it holds. For example, when one thinks of “a billiard ball moving in a straight line toward another,” one can conceive that the first ball bounces back with the second ball remaining at rest, the first ball stops and the second ball moves, or the first ball jumps over the second, etc. There is no reason to conclude any of these possibilities over the others. Only through previous observation can it be predicted, inductively, what will actually happen with the balls. In general, it is not necessary that causal relation in the future resemble causal relations in the past, as it is always conceivable otherwise; for Hume, this is because the negation of the claim does not lead to a contradiction.
Next, Hume ponders the justification of induction. If all matters of fact are based on causal relations, and all causal relations are found by induction, then induction must be shown to be valid somehow. He uses the fact that induction assumes a valid connection between the proposition “I have found that such an object has always been attended with such an effect” and the proposition “I foresee that other objects which are in appearance similar will be attended with similar effects.” One connects these two propositions not by reason, but by induction. This claim is supported by the same reasoning as that for causal relations above, and by the observation that even rationally inexperienced or inferior people can infer, for example, that touching fire causes pain. Hume challenges other philosophers to come up with a (deductive) reason for the connection. If the justification of induction cannot be deductive, then it would beg the question for induction to be based on an inductive assumption about a connection. Induction, itself, cannot explain the connection.
In this way, the problem of induction is not only concerned with the uncertainty of conclusions derived by induction, but doubts the very principle through which those uncertain conclusions are derived.
What Hume pointed out is that what we actually observe is always just a sequence of events, and just because in the past we have always seen one event preceding another does not mean that it will always do so in the future or that the first event is the cause of the second. The past is not a predictor of the future, something known as the problem of induction. Just because our neighbor has, ever since we have known him, picked up the morning paper from his driveway in his bathrobe does not mean that he will do so tomorrow. He may appear in a tuxedo. This is true even for events that we consider to be driven by natural laws. Just because the Sun has come up every day of our lives does not allow us to infer that it will do so tomorrow. Just because when I release my pen it falls and hits the ground, and this happens over and over again, does not allow me to conclude that it will do so the very next time I try it. Because inductive thinking is so appealing, we have developed laws that explain correlated sequential phenomena in terms of cause and effect. But just because such laws are so successful does not mean that we can ignore the fact that causality is merely an inference based on an idea of induction that has not been a priori justified.
Hume argued that our ideas of causality suffer for the same reasons that induction does. Going back to our shooting example, if person A fires a gun and the bullet enters person B and causes B to die, we say that A’s actions caused the death of B. But all that we actually observed is that there was a temporal sequence of events in which the gun was fired, and the bullet then traveled and entered B who died and so we impute causality to the process. Our belief in causality is so strong because we have constructed laws that explain those temporal correlations in behavior that enable us to predict that if the first event is repeated, the second will too. So A shooting B in the same way will always result in the death of B. But what Hume says is that we cannot be sure of this. Maybe the next time A shoots at B, the bullet will, like a boomerang, stop half way and go back and hit A.
If we view a film and see a bleeding person lying on the floor and blood flowing back inside him followed by the person standing up and a bullet emerging from his body and going back into a gun held by another person, we would conclude that the film was being run backwards because all these things seem to violate causality. If we looked at the clock readings at the locations of the two events, we would expect to find that the clock reading with the man lying on the floor would be later than the clock reading in which the bullet was in the gun even when the film was run backwards. But can we be sure of this?
Hume’s idea that causality is merely an assumption that may not always hold true received support in modern physics when it was found that the basic laws of physics are (almost always) time-reversal invariant. This means laws of physics are such that if one looked at a film showing reactions between elementary particles, one would not be able to deduce whether the film was being run forwards or backwards, because the basic laws of physics do not discriminate between the two. The only exception we have to date that violates this conclusion is the decay of an elementary particle known as the neutral kaon.
Furthermore, modern physics has shown that not every effect need be associated with a cause. For example, the decay of a radioactive nucleus appears to be totally spontaneous and unpredictable. Nothing causes it, it just happens. So if we can have acausal events and cannot even in principle assign a causal connection between two events, then Einstein causality may turn out to be just one of those convenient assumptions that seemed at one time to be self-evidently true but that we now need to outgrow and replace with a more sophisticated way of thinking, just like we grew out of assuming that the Earth was flat or that it was at the center of the universe.
Not that doing so will be easy. Causality, like the belief in free will, is so deeply ingrained in our psyches that abandoning it will be difficult.
Next: Some concluding thoughts
(For previous posts in this series, see here.)
The ‘cold fusion’ episode from back in 1989 illustrates the danger of issuing press releases announcing a major scientific discovery before the scientific community has had a chance to weigh in and sift through the evidence. Two respected scientists at the University of Utah, Stanley Pons and Martin Fleischmann, reported reactions producing enormous amounts of heat when the metal palladium was immersed in what is known as ‘heavy water’, which contains a large fraction of water molecules in which the ordinary hydrogen atom has been replaced by the heavier isotope deuterium. The experimenters thought that chemical reactions could not account for the scale of the energy release and were convinced that they had discovered a way to produce nuclear fusion reactions at room temperature, thus opening the way to a vast, cheap, and clean new energy source. Needless to say, this would be a revolutionary discovery, both scientifically and practically.
In March 1989 they announced their results at a press conference to loud fanfare. I remember hearing the announcement on BBC news over my short-wave radio and thinking “Wow! This is huge.” As in the current case of faster-than-light neutrinos, the initial surprise was quickly followed by considerable skepticism within the scientific community because cold fusion went completely against all that we thought we knew about nuclear fusion. For two nuclei to come close enough to fuse, they have to overcome the strong repulsive forces due to both having positive charges. For the nuclei to overcome this ‘Coulomb barrier’, they have to have high energies that are associated with high temperatures as found in the Sun and other stars, which is what enables fusion to be their energy source. What Pons and Fleischmann were suggesting would require some new mechanism to overcome the well-known and well-understood obstacles to low-temperature fusion.
Other scientists pointed out that even if we ignored the Coulomb problem, the byproducts of fusion, which should have been copiously produced, were not observed either, throwing doubt on whether fusion was actually occurring. This objection was countered by claiming that perhaps this was a new form of nuclear reaction that did not produce those specific byproducts. As I pointed out in my series on the logic of science, almost any theory can be salvaged by the introduction of such auxiliary hypotheses. But adopting such stratagems tends to weaken the case for a new theory unless they too can be corroborated with other evidence.
If the claims of Pons and Fleischmann were true, the practical benefits and the revolutionary science they would spawn were enormous, and this persuaded enough scientists to take the cold fusion claims seriously enough to spend considerable time, effort, and money investigating them. As far as I am aware, over two decades later there is still no consistent account of the cold fusion reactions, though some scientists continue to work on them and there are periodic resurgences of enthusiasm, enough that the Pentagon is funding further studies. In 2009, 60 Minutes did a program giving the history of cold fusion and some new developments.
One problem with cold fusion is that the heat reactions cannot be reliably reproduced. “The experiments produce excess heat at best 70 percent of the time; it can take days or weeks for the excess heat to show up. And it’s never the same amount of energy twice.” This is always a troubling sign. Scientific laws are not idiosyncratic. If they work, they should work all the time in the same way with no exceptions. If there are exceptions, these should also be law-like in that you should be able to predict exactly under what conditions they will or will not occur. Results that occur sometimes with no understanding why are signs that there are some unknown factors at work that are skewing the results.
So what has all this history to do with the recent neutrino story? The fact that this result was also announced via what was essentially a press release, and not at a scientific meeting or in a peer-reviewed journal article, aroused some concern. Press releases do not face the same degree of scrutiny as a journal article, where a sensational claim of this sort would be closely examined before being approved for publication. In the above video, Fleischmann recognized this mistake, saying that he had two regrets: calling the nuclear effect ‘fusion’, a name coined by a competitor, and having that news conference, something he says the University of Utah wanted.
It is not the case that scientists are hidebound dogmatists, determined to cling on to old ideas, as is sometimes claimed by non-scientists when their pet theories (such as intelligent design) are rejected. As I said before, part of the strength of science is that because scientific knowledge is the product of a consensus-building process, it does not get easily swayed by each and every claim of a big discovery. It initially views reports of revolutionary developments with skepticism, waiting to see if the results hold up and corroborating evidence is produced. If so, the community can and does accept the new idea. For example, the fact that this year’s Nobel prize for physics was awarded for the discovery that distant galaxies are not only moving away from us (which agreed with existing theories) but are actually accelerating (which flatly contradicted everything we had thought and has led to the highly counter-intuitive idea of so-called ‘dark energy’ permeating and dominating all of space) shows that the community can change its collective mind and accept radically new ideas, and fairly quickly. But the reason such a seemingly outlandish result as dark energy became the conventional wisdom within the short space of less than two decades is that the proponents were able to marshal evidence in its favor that survived close scrutiny and was corroborated.
The history of cold fusion, despite not becoming mainstream, also puts the lie to the claims of the so-called intelligent design movement that scientists conspire to suppress those ideas that challenge conventional wisdom. Despite the fact that most of the scientific community is highly skeptical of it being a real effect, cold fusion advocates actually do have a research program in which they do experiments, produce data, and publicize their results. All that members of the intelligent design community do is write books and articles and give talks whining that the scientific community refuses to give them a platform to promote their ideas and that this is because the community is hidebound and refuses to even consider their bold new idea that challenges the accepted ‘dogma’ of evolution.
The actual explanation for why the scientific community rejects intelligent design is simple and mundane. More than two decades after the idea was first proposed, intelligent design advocates still have not done a single experiment, nor do they even have a research program to do any.
Next: What if Einstein causality has to be abandoned?
(For previous posts in this series, see here.)
Scientists want their work to influence the field and so they would like it to gain the widest possible audience. Most of the time, their peers (and funding agencies) are their target audience because they are the only ones who really understand what they do. But when the work also appeals to the general public because of its practical applicability or its revolutionary implications, tensions can arise over how the work is publicized, and in the case of the OPERA experiment on faster-than-light neutrinos, there has been considerable unease about how this whole episode was handled with respect to the media.
The usual process when scientists have something new to say is that they write up a paper with their results and send it to a journal. The journal then sends the paper to referees who work in the same field (the number of referees depends on the journal and the discretion of the editor) who provide feedback to the editor. The referees do not usually check the results or repeat the calculations and experiments. What they do is check that the paper makes sense, that the methodology is correct, that the authors have taken into account all the relevant factors and provided all the necessary information so that readers know exactly what was done (and how) and could repeat and check the results if they are so inclined, and that proper credit has been given for prior related work. Based on this feedback, the editor decides whether to accept the paper, reject it, or send it back to the authors for revisions and/or additional work. Good referees and editors can improve a paper enormously by providing the authors with valuable feedback and useful information and suggestions.
In the sciences, authors also usually simultaneously send out copies of the paper (known as preprints) to colleagues in the field. This serves to give their colleagues advance notice of their work (since a paper can often take over a year to appear in a journal), to get feedback, and to establish priority for any discovery. All this occurs out of the public eye. Once the paper has been accepted and published by a journal, then it enters the public discussion and the media can publicize it. If the paper has significant implications, the journals may alert the media and give reporters a copy of the paper before it appears in print so that they can research and prepare an article about it, but the reporter is under an embargo to not publish until the journal article actually appears. Some of the more influential journals will refuse to publish an article if the authors release the information to the media before the journal prints it.
In the pre-internet days, and for research results that do not have revolutionary implications, this system worked reasonably well. Due to the cost of mailing, not too many preprints went out so the pre-publication discussions remained within a fairly small circle. With the internet, it became much easier to send out preprints to huge numbers of people at no cost and it was not long before it was realized that it made sense to create a system that could serve as a permanent archive that would allow scientists to post their preprints online so that anyone could gain access to them and search for those results that interested them. Currently the most popular venue for such preprints is arXiv and Wikipedia has a good article about its history and how it works.
The articles that are found on arXiv are preprints and thus have not been peer-reviewed but the system is minimally moderated to keep out rubbish. In general, scientists are concerned about their reputations among their peers and so most are careful to only post articles that they think would meet the standards of quality required if they were submitting to a peer-reviewed journal. Almost all of them do simultaneously submit their articles to such journals. As a result, the papers that appear on arXiv tend to be of pretty good quality. All the papers associated with the faster-than-light OPERA experiment are on arXiv.
A few scientists feel that peer-reviewed print journals are an anachronism and do not bother to try to even get their work into journals, feeling that the quality of the work will speak for itself. They think that if their work is correct and important, the community of scientists will accept it and build on it, while if it is wrong the community will criticize and reject it. Possibly the worst fate is that the community will think it is useless and a waste of time and completely ignore it. It may well be the case that in the future, expensive peer-reviewed print journals will disappear and that this kind of open-source publication will become the norm, with quality being determined by the consensus judgment of the scientific community. We are not there yet.
In the case of the OPERA experiment, the system broke down somewhat for several reasons. The OPERA experiment is very difficult and is a huge enterprise involving many collaborators and lasting over three years, with the paper having over 150 authors. Given the culture of the free sharing of information in science, it is very hard to keep preliminary results under wraps, and it was pretty much an open secret that these faster-than-light results had been obtained. But this knowledge stayed within the community. What the OPERA team did, however, was issue a press release the day after they posted their preprint on arXiv on September 22, announcing their results and promoting a big press conference the next day with media and scientists present.
This rubbed some scientists the wrong way. Scientists can be as publicity hungry as celebrities, but there are norms and there is a discreet way of making one’s name known. Holding press conferences or issuing press releases so early in the game, before the scientific community has had time to pass its verdict on the research, is considered bad form, and the OPERA team has received some criticism on this score.
While some of the carping may be due to jealousy, it is also the case that trumpeting that a scientific revolution has occurred can harm the image of science if the claim has to be later retracted. The reliable knowledge that science produces tends to be the consensus verdict of the community, achieved after a lot of behind-the-scenes work has smoothed out the rough edges and corrected mistakes. Bypassing that filtering process and going public too soon can lead to embarrassing reversals and give ammunition to the critics of science that its results cannot be trusted.
Next: Recalling an earlier public relations debacle
(For previous posts in this series, see here.)
In my series on the logic of science, I recounted how philosopher of science Pierre Duhem had pointed out as far back as 1906 that the theories of science are all connected to each other and changes in one area will have unavoidable effects on others that should be discernible. In this case, if neutrinos in the OPERA experiment did in fact travel faster than the speed of light, then we should be able to look at some other effects that should occur and see if they are observed.
One of them is the ‘Cherenkov effect’. This effect says that when something travels faster than the speed of light in the surrounding medium, it should emit a certain kind of radiation that is analogous to the shock waves that are produced when something travels faster than the speed of sound. This is the ‘sonic boom’ that we can hear when jet planes break the sound barrier. It also occurs when bullets are fired at speeds greater than the speed of sound, but because bullets are so small the sonic boom is too weak for us to hear.
The Cherenkov effect is well known and has been studied and confirmed. How can this be if it requires something to travel faster than the speed of light? Recall that the speed of light barrier in Einstein’s theory is the speed of light in a vacuum. When light travels through any medium (glass, water, air), it is slowed down by the interactions of the medium with the light particles. Other particles such as electrons are also slowed down by the medium, but not necessarily to the same extent, in which case it can be possible for some particles in a medium to travel faster than the speed of light in that same medium. If they do so, they should emit the light equivalent of the sonic boom, and this is called Cherenkov radiation. The spectrum of light emitted lies mainly in the ultraviolet region and its overlap with the visible spectrum produces a characteristic blue glow. One can see this in the cooling water that surrounds nuclear reactors, as in the image on the right, and in this video of a pulse of radiation being sent into the cooling liquid.
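The threshold condition can be made concrete with a small calculation. Assuming the textbook refractive index of water (n ≈ 1.33) and the electron rest energy of 0.511 MeV, here is a sketch of the minimum speed, and the corresponding kinetic energy, an electron needs before it emits Cherenkov light in water:

```python
import math

# Cherenkov threshold in water: a charged particle radiates only
# when its speed exceeds c/n, the speed of light in the medium.
N_WATER = 1.33              # textbook refractive index of water
ELECTRON_MASS_MEV = 0.511   # electron rest energy, MeV

beta_threshold = 1.0 / N_WATER                    # v/c at threshold
gamma = 1.0 / math.sqrt(1.0 - beta_threshold**2)  # Lorentz factor there
ke_threshold = (gamma - 1.0) * ELECTRON_MASS_MEV  # kinetic energy, MeV

print(f"threshold speed    : {beta_threshold:.3f} c")
print(f"electron KE needed : {ke_threshold:.3f} MeV")
```

So even a modest-energy electron (a fraction of an MeV) outruns light in water, which is why the blue glow around reactor cores is so readily produced.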
In a paper, Andrew Cohen and Sheldon Glashow calculate that high energy, faster-than-light neutrinos as produced in the OPERA experiment would lose much of their energy due to Cherenkov radiation, mainly by the production of electron-positron pairs, on their way from CERN to Gran Sasso. But that does not seem to have happened, according to a different experiment at Gran Sasso (known as ICARUS) that works with the same neutrino source as the OPERA experiment.
Another concern involving consistency is with the supernova SN1987A that was observed in 1987. It turned out that a cluster of 24 neutrinos was detected in three different detectors on the Earth about three hours before the supernova was observed, i.e. before the light signals reached Earth. That difference was not put down to the neutrinos traveling faster than the speed of light but to the fact that the neutrinos, while created at the same time as the light, escaped from the exploding star three hours before the light did due to their low interactivity with matter, and so had a head start on the journey to Earth, even though they traveled in free space at the same speed as light. The measured time difference was consistent with our understanding of the processes involved in a supernova.
If the neutrinos had speeds greater than that of light by even the small amount given by the OPERA experiment, then because of the huge distance of the supernova from Earth (about 168,000 light years), the supernova neutrinos should have reached Earth about 4.7 years before we saw the supernova. If neutrinos in the OPERA experiment had, in fact, been traveling faster than the speed of light, why had they not done so in other situations, such as the 1987 supernova?
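The arithmetic behind that estimate is simple. Taking the 168,000 light-year distance quoted above and a fractional speed excess of roughly 2.5 × 10⁻⁵ (approximately the OPERA value; the more precise inputs behind the 4.7-year figure differ slightly, so this is an order-of-magnitude sketch only):

```python
# Order-of-magnitude estimate of how early SN1987A neutrinos would
# arrive if they had the OPERA-like fractional speed excess.
distance_ly = 168_000   # distance to SN1987A, light years (as quoted)
excess = 2.5e-5         # (v - c)/c, roughly the OPERA value

# A speed excess delta over a light-travel time of T years
# shortens the trip by about delta * T years.
early_years = excess * distance_ly
print(f"early arrival: about {early_years:.1f} years")
```

Either way, the discrepancy would be measured in years, not the three hours actually observed.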
The working model of science is that things behave in a law-like, repeatable manner and not idiosyncratically. If we observe something in one situation, we expect to see it happening again in similar situations. If a deviation from law-like behavior is observed, we assume that this is due to the existence of another, deeper, hitherto unknown law whose effect only became apparent because of some conditions that had been incorrectly assumed to be unimportant.
In this case, one could postulate that since the OPERA neutrinos have a thousand times as much energy as the supernova neutrinos, faster-than-light speeds only arise for such high-energy neutrinos. Of course, such a new explanation requires new corroborative evidence and so the discussion will go on as explanations and evidence play out their dialectical relationship until a consensus emerges. That is how science works.
Next: Science and public relations
To understand the role of Einstein’s general theory of relativity, recall that the original OPERA experiment claimed to have detected neutrinos traveling faster than the speed of light. This posed a challenge to Einstein’s theory of special relativity, proposed in 1905, which said that the relationship between the clock and ruler readings of two observers moving relative to one another would differ from the seemingly obvious relationships derived by Galileo centuries earlier. According to Einstein’s theory, it is the speed of light that would be the same for all observers, while clock readings could differ, and Einstein causality (the temporal ordering of any two events that are causally connected by a signal traveling from one to the other) would be preserved for all observers. One inference that follows from Einstein causality is that no causal signal can travel faster than the speed of light, and this was what was seemingly violated by the OPERA experiment.
But Einstein had a later, more general theory, proposed in 1915 and called the general theory of relativity, that included the effects of gravity. He showed that clock readings were affected not only by the speed with which the clock was moving but also by the strength of the gravitational field in which the clock found itself. This is the source of what is referred to as the ‘gravitational red shift’ encountered in cosmology, which causes the light emitted by distant stars and galaxies to be shifted towards longer wavelengths as it escapes the gravitational field of those objects on its journey to us.
To understand what is going on, recall that when we measure the elapsed time between two events, what we are really doing is measuring the number of clock ticks that occur between the events. According to general relativity, the stronger the gravitational field, the slower the rate at which a clock ticks. The slower the rate at which a clock ticks, the less time that it records as having elapsed between two events.
So, for example, since we know that the Earth’s gravitational field decreases as we go up, this means that if we take two identical clocks, one on the floor and the other on the ceiling, the one on the floor would have fewer ticks between two events than the one on the ceiling, even if both are stationary. So the clock on the floor would ‘run slower’ than the one on the ceiling, and hence the time interval between two events measured by clocks on the floor will be less than that measured by clocks on the ceiling.
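The floor-to-ceiling effect can be estimated with the weak-field formula, in which the fractional rate difference between two clocks separated by height h is approximately gh/c². A sketch, assuming a 3 meter floor-to-ceiling height (my assumption):

```python
# Weak-field gravitational time dilation between two clocks separated
# by height h: fractional rate difference ~ g*h/c^2.
g = 9.81                 # m/s^2, surface gravity
h = 3.0                  # m, assumed floor-to-ceiling height
c = 299_792_458.0        # m/s

fractional_rate = g * h / c**2      # ceiling clock runs faster by this
seconds_per_year = 365.25 * 24 * 3600.0
drift_per_year = fractional_rate * seconds_per_year   # seconds per year

print(f"fractional rate difference: {fractional_rate:.2e}")
print(f"ceiling clock gains ~{drift_per_year*1e9:.0f} ns per year")
```

An effect of a few parts in 10^16 sounds hopelessly small, yet modern atomic clocks can detect it, and at GPS altitudes it grows large enough to matter operationally.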
In the OPERA experiment, the time measurements were made using GPS satellites. These are whizzing by at both high speeds (about 4 km/s) and high altitudes (about four Earth radii). Typically, the signals are handed off from one satellite to another as they appear and disappear over the horizon and the transition is almost seamless and produces such small errors that we do not notice it. But the OPERA experiment requires such high precision that they arranged to do the experiment during the transit time of just a single satellite so that even that source of error was eliminated.
Because the rate at which clocks run depends upon the size of the gravitational field, one has to make corrections to allow for the fact that the time readings given by clock readings of the satellites will be different from the time readings given by clocks on the Earth, and so one needs to make extremely subtle corrections to the GPS time stamp to get the correct clock readings on the Earth. This is why much of the attention has focused on this aspect. It is not that the OPERA experimenters overlooked this obvious feature (such general relativistic corrections are routinely made by GPS software in order to make the GPS system function with sufficient accuracy) but whether they have made all the necessary corrections to the extremely high level of precision required by this experiment.
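The sizes of the routine GPS corrections give a feel for what is involved. A sketch using textbook values for the GPS orbit (the orbital radius of about 26,560 km and the other constants are my inputs, not figures from the post):

```python
# The two relativistic corrections routinely applied to GPS clocks:
# gravitational blueshift (satellite clock runs fast relative to the
# ground) and special relativistic time dilation (it runs slow).
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0        # m/s
R_earth = 6.371e6        # m, mean Earth radius
r_gps = 2.656e7          # m, GPS orbital radius (approximate)

# Gravitational: fractional rate difference between orbit and ground
grav = GM / c**2 * (1.0 / R_earth - 1.0 / r_gps)

# Velocity: circular orbital speed, time dilation ~ v^2 / (2 c^2)
v = math.sqrt(GM / r_gps)
vel = v**2 / (2.0 * c**2)

day = 86400.0
print(f"gravitational: +{grav * day * 1e6:.1f} us/day")
print(f"velocity:      -{vel * day * 1e6:.1f} us/day")
print(f"net:           +{(grav - vel) * day * 1e6:.1f} us/day")
```

A net drift of tens of microseconds per day would, uncorrected, translate into position errors of kilometers within a day; the corrections are built into the system. OPERA’s question is whether every such effect has been handled at the nanosecond level.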
Carlo Contaldi at Imperial College, London has suggested that the clocks at CERN and Gran Sasso were not synchronized properly due to three effects, one of which is the fact that the gravitational field experienced by the satellite is not the same at all points on its path since the Earth is not a perfect sphere. He says that the errors that would be introduced are of the size that could produce the OPERA effect. (You can read Contaldi’s paper here.)
Ronald A. J. van Elburg at the University of Groningen has argued that subtle effects due to the motion of the detectors with respect to the satellite could have shifted the time measurements at each clock on the ground by 32 nanoseconds in the directions required to explain the 60 nanosecond discrepancy. (You can read van Elburg’s paper here and reader Evan sent me a link to a nice explanation of this work.)
The OPERA researchers (and some others) have challenged some of these explanations and said that they will provide a revised paper that explains more clearly all the things they did.
There has been no shortage of ideas and papers pointing out problems and possible alternative explanations for the OPERA results. Sorting and sifting through them all before we arrive at a consensus conclusion will take some time.
The reactions to the reports of the CERN-Gran Sasso discovery of possibly faster-than-light neutrinos open a window into how science operates, and into the different ways the scientific community and the media and general public react whenever a result emerges that contradicts the firmly held conclusions of a major theory.
The initial reaction within the scientific community is almost always one of skepticism, that some hitherto unknown and undetected effect has skewed the results, while the media and public are much more likely to think that a major revolution has occurred. There are sound reasons for this skepticism. Science would not have been able to advance as much if the community veered off in a new direction every time an unusual event was reported.
What usually happens is that most of the community goes on as before as if nothing had occurred while a relatively small number who are experts in that area examine the new results closely. Some will try to identify possible sources of systematic errors that the original experimenters did not consider. The experimenters who reported the possibility of faster-than-light neutrinos are reportedly careful people and if any errors occurred, we can be sure that they are not trivial ones that will be uncovered easily or quickly. Others will examine if any of the side effects that would accompany faster than light travel are also seen. If those two efforts fail to turn up any problems, other groups will try to repeat the basic experiment with different experimental set-ups, measuring the time and distance using different techniques so that the likelihood of systematic biases pushing the results in the same direction is reduced. The last option is very expensive and time-consuming, since these experiments are very difficult to do, which is why it is usually the last resort. During this period, there will often be claims and counter-claims and some confusion until the dust settles and a consensus emerges. But it is this painstaking investigation seeking replicability and consistency that characterizes science and enables it to be confident that once a consensus emerges, that it has produced reliable knowledge.
In this case, recall that the original experiment (which has the acronym OPERA) that aroused such interest involved sending neutrinos over a distance of 730 km and measuring their speed, where the distance and time measurements used GPS satellite technology. Assuming that 730 km was the exact distance, if the neutrinos traveled at exactly the speed of light, it should take them 2.435 milliseconds to make the trip. What was observed was that the neutrinos arrived 60 nanoseconds earlier than expected, thus violating Einstein causality, though not overthrowing the theory of relativity. This effect would go away if there were a 60 nanosecond error in the time measurement and/or an 18 meter error in the distance measurement of the journey, and searching for hitherto unconsidered factors that could produce effects of that size has been the initial focus.
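The arithmetic behind those numbers is worth checking directly. A sketch:

```python
# Light-speed flight time over the OPERA baseline, and the distance
# light covers in the reported 60 ns early arrival.
c = 299_792_458.0        # m/s
baseline = 730e3         # m, CERN to Gran Sasso (approximate)

flight_time_ms = baseline / c * 1e3          # milliseconds at speed c
early_ns = 60.0                              # reported early arrival, ns
distance_equiv_m = early_ns * 1e-9 * c       # meters light covers in 60 ns

print(f"flight time at c: {flight_time_ms:.3f} ms")
print(f"60 ns of light travel: {distance_equiv_m:.0f} m")
```

So the claimed effect is a timing discrepancy of about 25 parts per million, or equivalently an 18 meter error budget over a 730 km baseline, which is why such exquisite metrology is required.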
There have already been some developments. When it comes to sources of systematic error, Lubos Motl has a long discussion of the possibilities and has compiled a partial list of potential sources that need to be examined closely.
Notice that a lot of the suggested errors focus on the GPS or the Global Positioning System. This currently consists of 31 orbiting satellites that are continuously emitting signals that include the time the signal was sent as well as the orbital information of the satellite. Receivers on the ground (such as in your car) take that information and calculate the position of the receivers. The OPERA experiment used such signals to pinpoint the locations of the detectors at CERN and Gran Sasso and the time of travel. Most everyday situations do not require very high levels of accuracy. But since time interval errors of just 60 nanoseconds or distance errors of 18 meters could nullify the results, people have been looking into the possible sources of subtle errors, especially those associated with Einstein’s general theory of relativity.
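The basic positioning idea can be sketched in two dimensions: each measured signal travel time fixes the receiver’s distance from one satellite, and intersecting those circles fixes the receiver’s position. A toy example (the coordinates and distances here are made up purely for illustration):

```python
# Toy 2D trilateration: given three known "satellite" positions and the
# receiver's distance to each (obtained from c * signal travel time),
# subtracting pairs of circle equations yields a linear system.
import math

sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # known positions
true_pos = (3.0, 4.0)                            # receiver (to recover)
dists = [math.dist(s, true_pos) for s in sats]   # stand-in for c * time

(x1, y1), (x2, y2), (x3, y3) = sats
d1, d2, d3 = dists

# Subtracting circle 1 from circles 2 and 3 removes the quadratic terms:
# 2(xi-x1)x + 2(yi-y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2

det = a11 * a22 - a12 * a21
x = (b1 * a22 - b2 * a12) / det
y = (a11 * b2 - a21 * b1) / det
print(f"recovered position: ({x:.1f}, {y:.1f})")
```

Real GPS solves the three-dimensional version, and needs a fourth satellite to pin down the receiver’s clock offset, since an ordinary receiver’s clock is nowhere near good enough to measure the travel times on its own.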
Next: General relativity effects.
In the previous post in this series, I said that Einstein’s claim that the speed of light must be the same when measured by all observers irrespective of how they were moving led to the conclusion that the rate at which time elapsed must depend on the state of motion of the observer. But if time is not an invariant entity, then we need to be more precise about how we measure it for observers in relative motion to one another so that we can better determine how their measurements are related.
What we now postulate is that associated with each observer is a grid of rulers that spreads out into all space in all directions. At each point in space are also a clock and a recorder. It is assumed that all the rulers and clocks of all the observers are constructed to be identical to each other, the clocks are properly synchronized, and the recorders never make errors. When an event occurs anywhere at any time, the location and time of that event are those noted by that recorder who happens to be exactly at the location of the event and who notes the ruler and clock readings located at the place at the instant when the event occurred. This rules out the need to make corrections for the time that elapses for the light to travel from the location of the event to the recorder.
If there is another observer who is moving with respect to the first, that person too will have her own set of rulers and clocks and recorders spread out through all space, and the location and time of an event will be that noted by her recorder using her rulers and clocks at the location where the event occurs. This set up seems rather extravagant in its requirement of infinite numbers of rulers and clocks and recorders but of course all these rulers and clocks and recorders are merely hypothetical except for the ones we actually need in any given experiment. The key point to bear in mind is that the location and time of an event for any observer is now unambiguously defined to be that given by that observer’s ruler and clock readings at the location of the event, as noted by the observer’s recorder located right there.
What ‘Einstein causality’ says is that if event A causes event B, then event A must have occurred before event B and this must be true for all observers. If one observer said that one event caused another and thus the two events had a particular ordering in time, all observers would agree on that ordering. Thus causality was assumed to be a universal property.
What we mean by ’causes’ is that event B occurs because of some signal sent by A that reaches B. So when the person at B is shot by the person at A, the signal that caused the event is the bullet that traveled from A to B. Hence the clock reading at event A must be earlier than the clock reading at event B, and this must be true for every observer’s clocks, irrespective of how that observer is moving, as long as (according to Einsteinian relativity) the observer is moving at a speed less than that of light. The magnitude of the time difference between the two events will vary according to the state of motion of the observer, but the sign will never be reversed. In other words, it will never be the case that any observer’s clocks will say that event B occurred at a clock reading that is earlier than the clock reading of event A.
But according to Einstein’s theory of relativity, this holds only if the signal that causally connects event A to B travels at speeds less than that of light. If event B is caused by a signal that is sent from A at a speed V that is greater than that of light c (as was claimed to be the case with the neutrinos in the CERN-Gran Sasso experiment) then it can be shown (though I will not do so here) that an observer traveling at a speed of c2/V or greater (but still less than the speed of light) will find that the clock reading of when the signal reached B would actually be earlier than the clock reading of when the signal left A. This would be a true case of the effect preceding the cause. The idea that different observers would not be able to agree on the temporal ordering of events that some observers see as causally connected would violate Einstein causality and this is what the faster-than-light neutrino reports, if confirmed, would imply.
Note that this violation of Einstein causality occurs even though the observer is moving at speeds less than that of light. All it requires is that the signal that was sent from A to B to be traveling faster than light.
(If the observer herself can travel faster than the speed of light (which is far less likely to occur in reality than having an elementary particle like a neutrino doing so), then one can have other odd results. For example, if the speed of light is 1 m/s and I could travel at 2 m/s, then one can imagine the following scenario. I could (say) dance for five seconds. The light signals from the beginning of my dance would have traveled 5 meters away by the time my dance ended. If at the end of my five-second dance, I traveled at 2 m/s for 5 seconds, then I would reach a point 10 meters away at the same time as the light that was emitted at the beginning of my dance. So if I look back to where I came from, I could see me doing my own dance as the light from it reaches me. So I would be observing my own past in real time. This would be weird, no doubt, but in some sense would not be that much different from watching home movies of something I did before. It would not be, by itself, a violation of Einstein causality since there is no sense in which the time ordering of causal events has been reversed.)
So the violation of Einstein causality, not the theory of relativity itself, is really what is at stake in the claims that neutrinos traveling at speeds faster than light have been observed. This is still undoubtedly a major development, which is why the community is abuzz and somewhat wary of immediately accepting it as true.
Next: What could be other reasons for the CERN-Gran Sasso results?