Consciousness, measurement, and quantum mechanics – Part 7


(See Part 1, Part 2, Part 3, Part 4, Part 5, and Part 6. Also I am going to suspend the limit of three comments per post for this series of posts because it is a topic that benefits from back and forth discussions.)

In order to fully appreciate the role of Heisenberg’s uncertainty principle in the questions of objective reality and measurement, a highly truncated history of quantum mechanics might help.

The theory traces its beginnings to 1900 when Max Planck decided to assume that the material that made up the walls of the cavity inside a body that was at a uniform temperature (called a blackbody) could be treated as oscillators that could only absorb and radiate energy in discrete amounts (‘quanta’) and not continuously as had been previously assumed. The size of these quanta depended upon the frequency of oscillation as well as a new constant he introduced that has come to be known as Planck’s constant h. The value of this constant was very small, which is why it had long seemed that the energy could be absorbed and radiated in any amount. This was a purely ad hoc move on his part that had no theoretical justification whatsoever except that it gave the correct result for the radiation spectrum of the energy emitted by the blackbody. Planck himself viewed it as a purely mathematical trick that had no basis in reality but was just a placeholder until a real theory came along. But as time went on and the idea of quanta caught on, he began to think that it could represent something real.
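In modern notation, Planck’s assumption was that an oscillator of frequency ν could only exchange energy in integer multiples of a basic quantum:

$$E = nh\nu, \qquad n = 0, 1, 2, \ldots, \qquad h \approx 6.626 \times 10^{-34}\ \mathrm{J\,s}.$$

The smallness of h is what makes this granularity invisible at everyday scales.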

In 1905 Einstein proposed that light energy also came in quanta, an idea he used to explain the photoelectric effect and for which he was awarded the Nobel Prize in 1921. Then in 1913 Niels Bohr used the idea of quantization to come up with a model of simple atoms that explained some of their radiation spectra. Both of their works used Planck’s constant.

Erwin Schrodinger proposed his eponymous equation in 1926, and it set in motion the field of quantum mechanics because it laid the foundations of a real theory that enabled one to systematically set about calculating observables. Almost simultaneously, Werner Heisenberg came up with an alternative formulation based on matrices. (Later on P. A. M. Dirac showed that the two formulations were equivalent.) But Schrodinger’s theory was in the form of a differential equation that enabled one to calculate the wave function of a particle that was moving under the influence of a potential. Differential equations and wave behavior were both very familiar to physicists, and thus Schrodinger’s approach was more easily accessible and more widely used.
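For reference, the time-dependent equation for a single particle in one dimension is

$$i\hbar\,\frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi(x,t)}{\partial x^2} + V(x)\,\psi(x,t),$$

where ℏ = h/2π, m is the particle’s mass, V is the potential it moves in, and ψ is the wave function.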

But there was from the beginning confusion about what the wave function in Schrodinger’s equation meant, what information it carried, and what it told us about the world. The fact that it was a complex quantity was a hindrance to creating a physical picture. It was Max Born’s interpretation that the square of the absolute value of the wave function (a real quantity) represented a probability density that enabled the connection of the wave function to observables.
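Concretely, Born’s rule says that for a particle in one dimension the probability of finding it between x and x+dx at time t is

$$P(x,t)\,dx = |\psi(x,t)|^2\,dx,$$

which also requires the wave function to be normalized so that the total probability ∫|ψ|²dx equals 1.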

What came to be known as the Copenhagen interpretation, advocated by Bohr, became the dominant view, and it placed heavy emphasis on the measuring device playing the role of the observer. Bohr advocated something called ‘complementarity’ in dealing with measurements.

By 1935 conceptual understanding of the quantum theory was dominated by Niels Bohr’s ideas concerning complementarity. Those ideas centered on observation and measurement in the quantum domain. According to Bohr’s views at that time, observing a quantum object involves an uncontrollable physical interaction with a measuring device that affects both systems. The picture here is of a tiny object banging into a big apparatus. The effect this produces on the measuring instrument is what issues in the measurement “result” which, because it is uncontrollable, can only be predicted statistically. The effect experienced by the quantum object limits what other quantities can be co-measured with precision. According to complementarity when we observe the position of an object, we affect its momentum uncontrollably. Thus we cannot determine both position and momentum precisely. A similar situation arises for the simultaneous determination of energy and time. Thus complementarity involves a doctrine of uncontrollable physical interaction that, according to Bohr, underwrites the Heisenberg uncertainty relations and is also the source of the statistical character of the quantum theory.

In the early days, much of the discussion centered on the maximum information that one could glean from measurements on a single particle. But this posed a problem when it came to simultaneously obtaining both the position and momentum of that particle, as illustrated in a thought experiment posed by Heisenberg in his proposal concerning the uncertainty principle.

The problem can be seen by asking what we mean by measuring the position of an object. We can represent the act of measurement as looking at the object through a microscope. How precisely the position can be located is set by what is known as the resolving power of the microscope, given by Δx, which depends upon the diameter of the microscope lens and the wavelength of the light used.

But what does ‘looking’ mean in practice? It means that we send in a photon from some light source that bounces off the particle and enters the lens of a microscope. But when the photon bounces off the particle, it changes that particle’s momentum. By measuring the recoil of the photon, we can calculate the original momentum of the particle. But since the lens of the microscope has a certain width, that limits the precision of the calculation of the recoil, and this leads to an uncertainty in the calculated momentum of the particle given by Δp that also depends upon the width of the microscope lens and the wavelength of light used. What Heisenberg showed was that the product ΔxΔp was independent of the lens width and light frequency but had to be at least of the order of Planck’s constant h. It could not be the case that Δx=Δp=0, i.e., we could never know x and p exactly simultaneously. Hence if you measure one of them exactly so that its uncertainty tends to zero, the other one is completely unknown, i.e., its uncertainty tends to infinity, so that the product of uncertainties is non-zero as required.
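Schematically, for light of wavelength λ entering a lens that subtends a half-angle θ at the particle, the standard estimates (up to numerical factors of order one) are

$$\Delta x \sim \frac{\lambda}{\sin\theta}, \qquad \Delta p \sim \frac{h\sin\theta}{\lambda}, \qquad \text{so that} \qquad \Delta x\,\Delta p \sim h,$$

with the lens angle and the wavelength cancelling out of the product, just as described above.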

The thorny question was whether the particle did have an exact position and momentum (that we could not determine because of the unavoidable disturbances caused by the measurement) or whether the particle did not have a position or momentum until it was measured. Those who believed in an objective reality thought it was the former, while the Copenhagen approach assumed the latter. There did not seem to be any way of distinguishing the two approaches experimentally. But since the wave function solution of Schrodinger’s equation had the property that the position and momentum could not both be precisely specified simultaneously, it was assumed that those two properties did not exist until they were measured. Hence there seemed to be no objective reality.

Einstein was a firm believer in objective reality. He particularly disliked the idea of the instantaneous collapse of the wave function everywhere upon measurement. After all, it was he who showed that nothing could travel faster than the speed of light, which was why he referred to this collapse as ‘spooky action at a distance’. The 1935 EPR paper that Einstein co-wrote with Boris Podolsky and Nathan Rosen is a beautiful example of the use of a thought experiment to make this particular point. (The arguments in the paper are often attributed to Einstein alone and poor Podolsky and Rosen are shunted to the background, even though Podolsky is reputed to have played a major role in developing the ideas in the paper.)

The EPR paper suggested a way to show that a particle does indeed have an exact position and momentum prior to measurement, and thus that Heisenberg’s uncertainty principle was merely a statement about the limits of measurement, not about the limits placed on what exists as reality. EPR first set about defining what they mean by objective (or what they call ‘physical’) reality, saying: “If, without in any way disturbing a system, we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.” In other words, 100% predictability of the value of a quantity implied that the quantity was as good as having been measured, even if the measurement had not actually been carried out.

To show this they constructed an entangled wave function for two particles. The two particles then move in opposite directions so that they no longer interact. They then show that if we make a measurement of the exact momentum of particle 1, the wave function collapses in such a way that particle 2 must have exactly the opposite momentum, with 100% predictability; i.e., the momentum of particle 2 will have objective reality. If instead we make a measurement of the exact position of particle 1, the wave function collapses in such a way that particle 2 will have an exact location, with 100% predictability; i.e., the position of particle 2 will have objective reality. Thus, by measuring either the momentum or the position of particle 1 alone, we are able to predict with certainty the corresponding exact quantity for particle 2, even though we have not made any measurements on particle 2.
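In the spirit of the original paper, the entangled state can be written schematically (ignoring questions of normalization) as

$$\Psi(x_1, x_2) = \int_{-\infty}^{\infty} e^{\,ip(x_1 - x_2 + x_0)/\hbar}\,dp \;\propto\; \delta(x_1 - x_2 + x_0),$$

a state in which the total momentum is exactly zero (so measuring p₁ fixes p₂ = −p₁) and the separation of the particles is exactly x₀ (so measuring x₁ fixes x₂ = x₁ + x₀).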

EPR take the next crucial step and argue that the measurements on particle 1 could have no effect on what is going on with particle 2, since the two are widely separated. They argue that what properties of particle 1 we choose to measure cannot in any way disturb particle 2. But since particle 2’s exact position and momentum can each be predicted with 100% certainty by measuring particle 1, they must be simultaneously real. In other words, even though neither property of particle 2 has been directly measured, both must exist exactly. Thus for particle 2, Δx=Δp=0 and the uncertainty principle is violated. They say that the only way this conclusion can be avoided is if the reality of particle 2’s position and momentum depends upon the act of measurement carried out on particle 1 far away, and that “No reasonable definition of reality could be expected to permit this.”

The EPR paper caused consternation in the physics community because the uncertainty principle was seen as an integral part of quantum theory, and for it to not hold would cast doubt on the very foundations of the theory. But there seemed to be no way of testing this. It took another thirty years for John Bell to propose a way to do so by making measurements on both particles, and another twenty years for the results to start coming in. Alas for EPR, as discussed in Part 6, the experiments showed that if the EPR claim that the measurements on particle 1 had no effect on particle 2 were true, then the correlations one would observe disagree with the statistical predictions of quantum mechanics, and it is the quantum predictions that the experiments confirm. Even Einstein never disputed the validity of the calculations obtained using quantum mechanics. Thus quantum mechanics does not provide a ‘reasonable definition of reality’, if by that we mean that an object’s properties must exist independently of and prior to any measurement made on it.
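For concreteness, one widely used modern form of Bell’s proposal (the CHSH inequality, a descendant of Bell’s original one) combines the measured correlations E(a,b) for detector settings a, a′ on particle 1 and b, b′ on particle 2 into

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b').$$

Any theory in which particle 2’s outcomes are fixed independently of the setting chosen for particle 1 requires |S| ≤ 2, whereas quantum mechanics predicts values as large as 2√2 ≈ 2.83 for suitably entangled particles, and it is the quantum value that the experiments find.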

But what Bell took away from Einstein, he also gave back. We see that the instantaneous collapse of the wave function occurs everywhere simultaneously, and not just at the location where the measurement takes place. Although quantum mechanics is a local theory in that it does not allow for information to propagate faster than the speed of light, this particular aspect of wave function collapse is a non-local phenomenon that leads to the denial of the idea of objective reality. But Bell showed that you could recover objective reality if you added in another non-local effect, and that is to allow for the results of measurements on particle 2 to also depend on the way that the detector at particle 1’s location is set up, however far away it may be. If you did that, you could recover agreement with the statistical predictions of quantum mechanics. David Bohm had in fact already constructed, in 1952, a ‘pilot wave’ model that explicitly showed this. One might have expected Einstein to welcome it, but he was dismissive, calling it ‘too cheap’. He seemed to have hoped for a more sophisticated theory.
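For the curious, in Bohm’s model the particle always has a definite position and the wave function guides it: writing ψ = Re^{iS/ℏ}, the velocity of a particle of mass m is

$$\mathbf{v} = \frac{\nabla S}{m} = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla\psi}{\psi}\right),$$

and because ψ for an entangled pair depends on the positions of both particles at once, each particle’s motion responds instantaneously to the other’s circumstances, which is precisely the non-local effect described above.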

So there things stand. Unless one is willing to adopt one of the alternatives such as non-local theories or the Many-Worlds Interpretation or the spontaneous collapse models that were discussed in Part 4 and Part 6 of this series, and few in the mainstream physics community have done so, then one has to conclude that quantum mechanics precludes the existence of objective reality.

That is the end of this series of posts on this topic.

Comments

  1. file thirteen says

    Great stuff Mano. I really enjoyed the series of posts, especially this one.

    If there is no objective reality at the quantum scale, has anyone proposed an explanation as to why the reality we experience is so, well, solid? Is it similar to an infinite series converging to a finite sum? (1 + 1/2 + 1/4 + 1/8 + … = 2)

  2. Mano Singham says

    file thirteen @#1,

    There is a somewhat hand-waving argument to explain that.

    It is based on Louis de Broglie’s revolutionary idea (introduced in 1924 in his PhD thesis) that just as waves had particle-like properties that we call quanta, all particles also had wave-like properties, and that it is the wave-like behavior of small particles like electrons that leads to all the quantum effects we have been discussing. de Broglie postulated that the wavelength associated with any particle is given by h/p, where h is Planck’s constant and p is the momentum of the particle. Since h is so tiny, we can observe the wave effects, such as interference, diffraction, the uncertainty principle, etc., only when we look at small objects on the microscopic scale that have tiny momenta. That has been confirmed in some ingenious experiments.

    The claim is that those effects also exist for macroscopic objects, but the momentum of any macroscopic object is so large that its wavelength is far too tiny for those wave effects to be observable. That is why macroscopic objects appear to behave classically: their wavelengths are infinitesimally small.
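    As a rough back-of-the-envelope illustration (the masses and speeds below are just representative choices):

    ```python
    # de Broglie wavelength: lambda = h / p = h / (m * v)
    h = 6.626e-34  # Planck's constant in J*s

    examples = [
        ("electron at 10^6 m/s", 9.109e-31, 1.0e6),
        ("baseball, 0.145 kg at 40 m/s", 0.145, 40.0),
    ]

    for name, mass, speed in examples:
        wavelength = h / (mass * speed)
        print(f"{name}: wavelength ~ {wavelength:.1e} m")

    # electron: ~7.3e-10 m -- atomic scale, so interference is observable
    # baseball: ~1.1e-34 m -- absurdly small, hence classical behavior
    ```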

  3. Deepak Shetty says

    @Mano
    Good explanations for a complex topic!
    One question -- are there any current advancements in this area? Most of what you describe I dimly recollect reading about a decade or two ago -- has any progress been made on more definitive answers, or has the scientific community by and large just accepted the lack of objective reality, as you state in your conclusion, so that this isn't really an area of interest?

  4. Mano Singham says

    Deepak @#3

    In science, there are almost always competing schools of thought. Working on theories other than the standard paradigm carries risks because nothing might come of it. It is always harder to go against an accepted paradigm. So most scientists work within that standard paradigm, and this is true for quantum theory too.

    There are people looking at alternatives but they tend to be a minority. Often they are established physicists who have secured their careers or young people who wish to make a name for themselves by overturning the dominant paradigm. Sometimes they do this work on the side. For example, John Bell was a scientist at the CERN accelerator working on standard physics problems. His work in this area was, at least initially, not part of his main job. But once his paper took off and he became famous, he was able to devote more time to it.

  5. Jean says

    Since certain characteristics of particles only become real when measured, does that also mean that they cease to be real when the measurement is done with? If time symmetry works at that level, it would seem to make sense that things becoming real also means that things can cease to be.

  6. file thirteen says

    I feel I’m going to regret asking this, but where does gravity feature in all this? It seems that its range is unlimited, but is that just because there are “gravitons”, like light photons but undetectable to us, that absolutely everything that has gravitational attraction is spewing out all the time, like tiny suns? That sounds absurd, and I thought there was currently no quantum theory of gravity, but if there are no carriers of gravitational force, doesn’t the existence of gravity break locality?
