Could the speed of light have been larger during the very early universe?


A new paper by João Magueijo and Niayesh Afshordi suggests that the speed of light has not had the same value over the age of the universe but instead could have been much higher, even infinite, at the very beginning, when the cosmic temperature was extremely high. (You can see the paper without a journal subscription here.) Their suggestion is made in response to the well-known problem that the universe appears to be remarkably homogeneous and isotropic across its entire extent. That uniformity suggests that all the parts of the universe were in contact at one time, so that this kind of equilibrium could be reached. The problem is that the fastest possible signal travels at the speed of light, and that speed is not sufficient, given the age of the universe, to have produced that kind of homogeneity.
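To put rough numbers on this, here is a back-of-the-envelope sketch in Python (my own illustration, not from the paper, using approximate textbook values and deliberately ignoring the details of the expansion history):

    import math

    t_rec_yr  = 3.8e5   # age of the universe at recombination, ~380,000 years
    z_rec     = 1100    # redshift of recombination
    d_lss_gly = 45      # comoving distance to the last-scattering surface, ~45 Gly

    # Naive causal horizon at recombination: the light-travel distance c * t
    # (with c = 1 light-year per year), stretched by (1 + z) to its comoving
    # size today.
    horizon_gly = t_rec_yr * (1 + z_rec) / 1e9

    # Angle that causally connected patch subtends on today's CMB sky.
    theta_deg = math.degrees(horizon_gly / d_lss_gly)
    print(f"causally connected patch ~ {theta_deg:.1f} degrees across")
    # Roughly half a degree, yet the CMB is uniform to ~1 part in 100,000
    # over the entire sky.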

Think of the analogy of a cup of coffee into which you pour some milk and let it sit. After some time, the liquid will be homogeneous. The time taken for this to happen depends on the average speed of the particles in the liquid and the size of the cup. If the liquid is found to be homogeneous, that means there was sufficient time for complete mixing to occur. If the cup is very large and the liquid is nevertheless found to be homogeneous, despite there not having been enough time for particles to travel from one side to the other at their average speeds, then some other mechanism must have caused the mixing.

The theory of cosmic inflation was invented to address this problem. It suggests that every region of the early universe was close enough to every other for equilibrium to be reached, but that space then underwent an extraordinarily rapid expansion, faster than the speed of light, in the first fraction of a second, creating the illusion that our present universe is too large for all its parts to have ever been in contact. In our analogy, the cup itself was initially so small that mixing took place quickly, but then the cup underwent a massively rapid expansion, much faster than the speed of motion of the particles in it, so that it now looks as if there was not enough time for the particles to get from one end to the other. (The universe did not become perfectly homogeneous; if it had, you could not explain how parts of it clumped together to form stars and galaxies. There were minute fluctuations in density that allowed that to happen, and these fluctuations are also observed in the cosmic microwave background, which is a good indicator of the state of the early universe after it had cooled enough that photons no longer had enough energy to be absorbed by matter and so decoupled from it.)

The authors of this new paper claim that their theory of a much larger early light speed solves the homogeneity problem without invoking inflation. They also argue that the inflation model is too flexible, in that it can be adjusted to accommodate new observations, while theirs makes a specific prediction for a quantity known as the spectral index of the scalar fluctuations, n_S = 0.96478 (with an uncertainty of 0.00064), that can be measured and used to test it. The currently measured value of this quantity is 0.9667 with an uncertainty of 0.0040, so their prediction lies within the measured range.
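As a quick illustration of how such a comparison works (my own check, not the authors'), one can express the gap between prediction and measurement in units of the combined uncertainty:

    import math

    n_pred, sig_pred = 0.96478, 0.00064   # the paper's prediction, quoted above
    n_meas, sig_meas = 0.9667, 0.0040     # the measured value, quoted above

    tension = abs(n_meas - n_pred) / math.hypot(sig_pred, sig_meas)
    print(f"difference = {tension:.1f} combined standard deviations")
    # ~0.5 sigma: the prediction is comfortably consistent with the measurement.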

The Guardian has an article on this new claim. It is annoying that the article mentions only Stephen Hawking as a proponent of the inflation theory. The people who can rightly be considered the originators of that theory are Alan Guth, Andrei Linde, Paul Steinhardt, Andreas Albrecht, and others. Reporters like to attach Hawking’s name to everything, since he is well known to the public and it attracts attention, but this is unfair to the real originators.

But what does changing the speed of light mean? Ever since Einstein’s theory of special relativity became accepted in the early 20th century, the speed of light in vacuum has been regarded as an invariant: it has the same value when measured by any observer, irrespective of how fast the observer or the source of light is moving. This constant value has been measured precisely to be 299,792,458 m/s. But if you look up standard tables of physical constants, you will see that, unlike other physical constants, that value comes with no associated uncertainty but is given as exact. How can that be, when every measured quantity carries with it some element of uncertainty?

The reason is that the units of length and time used to be defined first, as exact, and speeds were then measured as derived quantities. The standard used for length has changed over time as our need for precision has increased. The standard meter was first established in 1799 as one ten-millionth of the length of the meridian arc through Paris from the equator to the North Pole. It was changed in 1889 to the distance between two marks on a platinum-iridium bar kept near Paris. This too was not precise enough, and with the realization that wavelengths of light are unchanging quantities, in 1960 the meter was redefined as 1,650,763.73 vacuum wavelengths of the light emitted in the 5d5 to 2p10 atomic transition of krypton-86.

Meanwhile the second was also being defined with greater precision, starting as 1/86,400 of the mean solar day before being fixed in 1967 as exactly “9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom”. Given the need for greater precision in the unit of length and the invariance of the speed of light, it was decided in 1983 to give the speed of light a fixed, exact value of 299,792,458 m/s and make length a derived unit, with the meter defined as the distance light travels in vacuum in 1/299,792,458 of a second.
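Here is a minimal sketch of how the current definitions fit together; the numbers are the exact defined values quoted above:

    c = 299_792_458                      # m/s, exact by definition

    one_metre = c * (1 / 299_792_458)    # distance light travels in 1/299,792,458 s
    print(one_metre)                     # 1.0, by construction

    # A handy consequence: light covers about 30 cm in a nanosecond.
    print(f"{c * 1e-9:.3f} m per nanosecond")    # 0.300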

So if the speed of light is now defined to have a fixed value, ‘increasing’ it really means that the speeds of other things change relative to it. Something that was previously thought to travel at the speed of light (say, gravitational waves) would now be traveling slower than light. If the speed of light were much greater in the early universe, that effectively means the meter, as a unit of distance, was larger then; the size of the early universe as measured in meters was therefore effectively smaller, and matter and radiation could have traversed it quickly enough to solve the homogeneity problem.
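Continuing the crude sketch from earlier (again my own illustration, not the paper's actual model), one can see how scaling up the early speed of light expands the causally connected patch:

    import math

    t_rec_yr, z_rec, d_lss_gly = 3.8e5, 1100, 45   # same rough values as before

    def patch_deg(c_boost):
        # Naive patch angle if light were c_boost times faster before
        # recombination (pure scaling; the small-angle formula breaks
        # down for large boosts).
        horizon_gly = c_boost * t_rec_yr * (1 + z_rec) / 1e9
        return math.degrees(horizon_gly / d_lss_gly)

    for boost in (1, 1e3, 1e6):
        print(f"c x {boost:g}: patch ~ {patch_deg(boost):.3g} degrees")
    # Any result far beyond 360 degrees means the whole sky was in causal
    # contact, which is how a much larger early c dissolves the homogeneity
    # problem.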

Needless to say, this is by no means definitive. There will be criticisms and new measurements that will test it. But I thought it provided a good occasion to discuss some important physics.

Comments

  1. Johnny Vector says

    If I’m understanding the terminology correctly, they predict r=0, which I think means no B-modes in the CMB polarization. I note, with absolutely no ulterior motive, that CMB polarization is already the highest priority unfunded midscale mission in the current Decadal Survey, and this can only push it higher.

    So it’s pretty likely that by, oh, let’s say 2026, we’ll have launched a CMB polarization mission capable of seeing B-modes unambiguously, and collected enough data to do so. That will pretty clearly falsify either this model or inflation.

  2. OverlappingMagisteria says

    Oh great.. now we can look forward to Young Earth Creationists misusing this.

    One of the (many) problems with the universe being 7,000 years old is that we see galaxies that are much more than 7,000 light years away. The Creationist response to this is usually speculating that maybe the speed of light used to be faster in the past, so that it had enough time to travel more than 7,000 light years in 7,000 years.

    Of course this new idea kinda presupposes the Big Bang so you’d think it would be awkward for a Creationist to use it.. but I doubt that’ll stop them.

  3. Pierce R. Butler says

    … the speed of light has not had the same value over the age of the universe but instead could have been much higher, even infinite, at the very beginning …

    Supposing this hypothesis bears out, the question I can’t get out of my very-non-physicist head is: what was the speed of light (can we even call it “c” in this context?) immediately after it dropped below “infinite”?

  4. Rob Grigjanis says

Pierce @3: My impression from a quick read of some papers:

    The important value is not so much the speed of light, but the ratio of the speed of light to that of gravitational waves. In the early universe, the ratio was supposedly much greater than one (like around 10^30 maybe), and at some critical temperature, there was a very rapid phase transition* during which the ratio dropped to one.

    *Moffat estimated the duration of the transition as about 10^-48 seconds.

  5. Pierce R. Butler says

Rob G @ # 4 -- Thanks!

    Moffat estimated the duration of the transition as about 10^-48 seconds.

    Nothing so nice & tidy as a Planck interval -- aw shucks!

    Does this tie in with the “surprising weakness of gravitational force” question?

  6. Rob Grigjanis says

Pierce @5:

    Does this tie in with the “surprising weakness of gravitational force” question?

    It ties in because Moffat is using Planck units as natural order-of-magnitude values at the phase transition, and the Planck mass implicitly reflects this weakness because of its hugeness (10^-5 grams is a macroscopic mass).
