Consciousness, measurement, and quantum mechanics – Part 2


(See Part 1 here. Also I am going to suspend the limit of three comments per post for this series of posts because it is a topic that benefits from back and forth discussions.)

It looks like I may not have been sufficiently precise in my first post, leading to some confusion, so I am going to take a slight detour from my series of posts on this topic to address an issue that came up in the comments: the nature of the probability and statistics used in quantum theory and how it differs from what we use in everyday life, in particular the nature of the uncertainty in predicting outcomes. (As always with quantum mechanics, since the phenomena involved are invisible to our senses and often counter-intuitive, we have to use analogies and metaphors to try to bring out the ideas, with the caveat that these never exactly represent the reality.)

Let’s start with the classical statistics that we use in everyday life, in a situation where the results of a measurement are binary. Suppose that we want to know what percentage of a population has heights less than five feet. If we measure the height of a single person, that height will be either more or less than five feet. It will not give us a probability. How do we find that? We take a random sample of people and measure their heights. From those results, we can calculate the fraction of people shorter than five feet by dividing the number in that category by the size of the sample. That fraction also now represents the probability that if we pick any future person at random, that person will be shorter than five feet. When we pick a random person, we do not know which category they will fall into, but we do know that it will be one or the other. What we also believe is that, in the classical world, each person’s height was fixed before we measured it. We just did not know it beforehand.
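The sampling procedure above is easy to simulate. This is only an illustrative sketch: the population, its mean height, and the sample size are invented numbers, not data.

```python
import random

random.seed(0)

# Hypothetical population: heights in inches drawn from a normal
# distribution (mean 65, standard deviation 4 -- invented numbers).
population = [random.gauss(65, 4) for _ in range(100_000)]

# Measure a random sample and count how many fall below five feet (60 in).
sample = random.sample(population, 1_000)
fraction_short = sum(h < 60 for h in sample) / len(sample)

# That fraction estimates the probability that a randomly chosen person
# is shorter than five feet.  Each height was fixed before we "measured"
# it; we just did not know it beforehand.
print(f"Estimated P(height < 5 ft) = {fraction_short:.3f}")
```

The key classical feature is in the last comment: the heights all exist before any measurement; the sampling only reveals them.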

It is possible that we may already have a prediction for that probability because (say) the population that we are studying is a subset of a larger population where the result has already been determined and we have no reason to think that this subset differs in any significant way. So our study of the subset could be taken as a test of the hypothesis that its properties are the same as those of the larger population.

Moving on to the quantum case, we want to find out the probability that, on measurement, an electron (or photon) that was prepared in a superposition of spin up and spin down states will be found to be in (say) the spin up state. A single measurement will again give us only a yes or no answer, not a probability. We cannot repeat the measurement on that same electron because once it is put into the spin up state, it will remain in that state. To get a probability, what we need to do is create a large number of identically prepared electrons, all in the same superposition of states, and take measurements on each. Some will be found spin up and others will be found spin down. From that information, we can calculate the probability that any future measurement on another identically prepared state will find it spin up. If the original superposition of states was also a solution of the Schrödinger equation (the quantum equivalent of Newton’s laws of motion in classical physics), it would predict the probability of getting that result, and so such measurements can serve as a test of our theories of quantum mechanics. Up until now, quantum mechanics has generally passed those tests, which is why we have such confidence in its correctness.
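The same estimate-by-repetition idea can be mimicked in code. A classical pseudo-random generator only imitates quantum indeterminacy, of course; here the Born-rule probability |a|² is put in by hand for an equal superposition.

```python
import random

random.seed(1)

# State a|up> + b|down> with |a|^2 + |b|^2 = 1.  Equal superposition:
amp_up = 2 ** -0.5           # a = 1/sqrt(2)
p_up = abs(amp_up) ** 2      # Born rule: P(spin up) = |a|^2 = 0.5

# A single measurement yields only "up" or "down", never a probability,
# so we "prepare" many identical states and measure each one once.
n = 100_000
n_up = sum(random.random() < p_up for _ in range(n))

print(f"Fraction found spin up: {n_up / n:.3f}  (Born rule predicts {p_up})")
```

Note the contrast with the height example: every trial here starts from the *same* state, yet the outcomes still differ.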

There are two major differences between the classical and quantum cases discussed above. One is that in the classical case, when we pick a random person and measure their height, we believe that the person already had that height (more or less than five feet) even before the measurement, while in the quantum case, we believe that the result only came into being as a result of the measurement. We did not know what result we would get; we only had probabilities. The second is that in the classical human case, each person is assumed to be different. If they were all identical, the results of measurements would also be identical. But in the quantum case, despite all the initial states being identically prepared, they give different results upon measurement. In other words, in the classical case, the population being measured consists of different states (i.e., heights), while in the quantum case, the population being measured consists of identically prepared states.

A question that will naturally arise is how we know that the electron did not have the property prior to measurement, just like the humans. After all, all we have are the results of measurements, not anything prior to them. This was a very difficult question that was hotly debated from the inception of quantum mechanics in the first couple of decades of the twentieth century, and for the longest time there seemed to be no way of experimentally distinguishing the two views. It took the work of John S. Bell in 1964 to show that it was possible. He derived an inequality (now referred to as Bell’s theorem or Bell’s inequality) which could be used to distinguish the two views. However, the experiments were very difficult to do and it took another two decades before results started coming in, and they generally supported the idea that the measured value came into being as a result of the measurement and did not exist beforehand. In other words, there is no objective reality in the quantum world. (I have to add that not everyone agrees with this. There have been some ingenious efforts to find ways of bringing back objective reality into the quantum world while being consistent with all the other results, but these efforts have not gained broad support. Ironically, Bell himself was a supporter of those efforts, despite his theorem seeming to work against them.)

I would like to add a personal note. I believe that John Bell was one of the most profound thinkers on quantum mechanics ever, and his sudden death in 1990 at the young age of 62 was a major blow. He was also an exceedingly modest and genial man, a quality that shines through in his writings. He was devoid of the massive ego and arrogance that other prominent physicists unfortunately often have. I had the good fortune to meet him at a physics conference a year before his death, where he was given a major award. After he had given his talk, I encountered him in the lobby and asked him about some ideas he had presented, and we sat down on some chairs and had a wonderful conversation. Even though he was famous and I was a mere unknown postdoctoral researcher, he seemed quite content to spend time with me and also to autograph my copy of the book Speakable and unspeakable in quantum mechanics, which is a collection of all his published papers on the foundations of quantum mechanics. I am not a collector, but that is the one book of mine that I will never lend to anyone for fear of losing it.

Comments

  1. Jean says

    Mano,

    Thank you for taking the time to make that post and clarify some things. I guess that the fungibility of particles is a big part of what bothers me and I am more inclined to see superdeterminism as an attractive hypothesis. At least from what I understand of it and quantum physics in general.

    I look forward to reading the rest of this series of posts.

  2. Alan G. Humphrey says

    An explanation of how quantum experiments are set up to isolate them from, and account for, the rest of the universe (vacuum chambers, vibration dampers, and so on) may help in grasping the complexity of these experiments, and also why it takes decades to get from a conception of what may be happening to evidence in support of it. I imagine that an experiment that flipped a coin, similarly set up to isolate itself from the universe, would probably get over 90% heads if set up to do so, maybe even approaching 100%.

  3. Rob Grigjanis says

    A bit of a quibble: As I mentioned in the last post, it was the Kochen-Specker theorem (1966-67, which Bell also worked on; it’s also called the Bell-KS theorem) which established that you can’t assign definite values to properties like spin before measurement.

    As you say, Bell’s theorem leaves two options: you can have hidden variables which specify values pre-measurement, or you can have locality (the way you measure at one location doesn’t affect a measurement at another location). But you can’t have both.

    At the risk of taking up too much bandwidth, I’m going to paste something I wrote for a discussion some years ago:

    In Bell’s paper, considering entangled fermions, locality is taken as meaning (call this assumption #1) that the axis chosen to measure the spin of particle 1 cannot affect the measurement of particle 2. In other words, if the axes were originally the same (so that the measurements, if done, would be antiparallel), fiddling with the relative angle for the axis used for particle 1 doesn’t affect the measurement of particle 2.

    In addition, the notion of hidden variables is concretized by a set of parameters λ, such that (call this assumption #2) the outcome of a measurement of 1 or 2 is determined by the axis chosen plus the value(s) of λ.

    So, if the axis chosen for particle 1 is the vector a, then A(a,λ) gives the spin of particle 1 relative to a. If the axis chosen for particle 2 is b, B(b,λ) gives the spin of particle 2 relative to b.
    The entanglement gives us the relation:

    A(a,λ) = −B(a,λ) = B(−a,λ)

    Bell shows you can’t satisfy #1 and #2. If you have ‘hidden variables’ which determine any measurement, the setting of the axis used to measure one particle must instantaneously affect the measurement of the other particle. If you don’t have ‘hidden variables’ you can still have locality in the sense of assumption #1.

    Bell’s paper is here:
    https://cds.cern.ch/record/111654/files/vol1p195-200_001.pdf
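For readers who want to see the inequality fail numerically: Bell’s form of the inequality, |E(a,b) − E(a,c)| ≤ 1 + E(b,c), can be checked against the quantum prediction E = −cos θ for singlet-state spin measurements. The 0°/60°/120° axis angles below are one standard illustrative choice, not the only one.

```python
from math import cos, radians

def E(theta_deg):
    """Quantum correlation for singlet-state spin measurements along
    axes separated by theta degrees: E = -cos(theta)."""
    return -cos(radians(theta_deg))

# Measurement axes a, b, c at 0, 60 and 120 degrees.
E_ab, E_ac, E_bc = E(60), E(120), E(60)

lhs = abs(E_ab - E_ac)   # Bell: |E(a,b) - E(a,c)| <= 1 + E(b,c)
rhs = 1 + E_bc

print(f"|E(a,b) - E(a,c)| = {lhs:.2f},  1 + E(b,c) = {rhs:.2f}")
print("Inequality violated:", lhs > rhs)  # 1.00 > 0.50, so True
```

Any local hidden-variable model of the kind Bell considered must satisfy the inequality for all angle choices; the quantum correlation breaks it here, which is the whole point of the theorem.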

  4. Rob Grigjanis says

    I’ll add that, if you have a bit of calculus, Bell’s paper is remarkably accessible. The sign of a truly great mind.

  5. jenorafeuer says

    Hopefully Mano won’t mind, but here’s a quick and possibly oversimplified description of Bell’s Inequality, based on my reading of Nick Herbert’s Quantum Reality thirty years ago.

    So, let’s start with light polarization. Polarized light can be blocked by a filter with a different polarization, and it has some odd properties. The amount of light that gets through is proportional to the square of the cosine of the difference in angle between the incoming light’s polarization and the filter’s. Any light that makes it through a filter is polarized in the direction of that filter, so as far as polarization is concerned, any photons that have been through a filter have been ‘observed’ already.

    This leads to some of the known peculiarities of polarization, that if you have two filters at right angles to each other, no light will get through; but if you have a third filter at a 45 degree angle and slide it between those two, light will get through the three filters that wouldn’t get through the two. (Roughly a quarter of it.)
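The two-filter and three-filter arithmetic follows directly from the cos² rule (Malus’s law). A quick sketch, taking the light entering the stack to be already polarized at 0 degrees:

```python
from math import cos, radians

def malus(theta_deg):
    """Fraction of polarized light passing a filter at angle theta to
    its polarization (Malus's law): cos^2(theta)."""
    return cos(radians(theta_deg)) ** 2

# Crossed filters at 0 and 90 degrees: essentially nothing gets through.
crossed = malus(90)

# Slide a 45-degree filter in between: each of the two remaining steps
# passes cos^2(45) = 1/2, so a quarter of the (already 0-degree
# polarized) light entering the stack now emerges.
three = malus(45) * malus(45)

print(f"crossed: {crossed:.3f}, with 45-degree filter inserted: {three:.3f}")
```

Inserting an extra filter *increasing* the transmitted light is the counter-intuitive part; it works because each filter re-polarizes whatever passes it.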

    Now, there are events which will release two photons in different directions, and are guaranteed that both photons will have the same polarization. So if you have two filters with the same orientation, both photons will go through them.

    Assuming both filters are at 0 degrees to start with, I’ll have ‘Emitted’ for whether a photon would normally pass through a 0 degree detector, ‘Filter A’ for passing through one filter, and ‘Filter B’ for the other. An ‘O’ means the photon made it through the filter, an ‘X’ means it didn’t. We’re also assuming classical mechanics here, in that a photon has a specific polarization whether we’ve observed it yet or not:

    Filter A: XOOX|XXOX|OOXO|OOXX
    Emitted : XOOX|XXOX|OOXO|OOXX
    Filter B: XOOX|XXOX|OOXO|OOXX
    Matches : YYYY|YYYY|YYYY|YYYY

    So here we have everything matching up as expected. Now, we twist filter A 30 degrees. Because of the square of the cosine mentioned above, this means roughly one quarter of the photons will no longer get through:

    Filter A: XXOO|XXOX|OOXO|XOXO
    Emitted : XOOX|XXOX|OOXO|OOXX
    Filter B: XOOX|XXOX|OOXO|OOXX
    Matches : YNYN|YYYY|YYYY|NYYN

    So we have four bits different this time. Now, if we put filter A back and twist filter B 30 degrees the opposite direction, we get:

    Filter A: XOOX|XXOX|OOXO|OOXX
    Emitted : XOOX|XXOX|OOXO|OOXX
    Filter B: OOOX|XOOX|XOXO|XOXX
    Matches : NYYY|YNYY|NYYY|NYYY

    And again we have four bits different. But then, if we do both:

    Filter A: XXOO|XXOX|OOXO|XOXO
    Emitted : XOOX|XXOX|OOXO|OOXX
    Filter B: OOOX|XOOX|XOXO|XOXX
    Matches : NNYN|YNYY|NYYY|YYYN

    So we have seven bits different. In fact, under classical mechanics, it is impossible to have more than eight ‘errors’ at the end since each change only created four errors. Each rotated filter blocked 1/4 of the photons, so rotating both shouldn’t be able to block more than 1/2, and if the two rotations are independent we should expect 7/16.

    Quantum mechanics, and the actual effects of polarization, say that the number of errors from mismatched photons should be the square of the sine of 60 degrees (that is, 1 minus cos² 60°)… which is 3/4, much more than the 1/2 maximum of classical mechanics. That is Bell’s Inequality.

    Quantum mechanics turns out to match what we actually measure when doing this… even if we literally rotate the filters randomly while the photons are in flight, so that it’s not possible for one photon to know how the other photon’s filter was set, because no information can travel faster than light. (That last is what ‘locality’ refers to, and the complexity of changing the filters during the travel time is part of why it took so long to really test this properly.)

    The net effect of this is that we cannot have both a world where quantum particles have definite properties and a world where Special Relativity works and no information can travel faster than light. One or the other can be true, but not both. And frankly, given the number of very carefully curated holes you have to poke in locality to get ‘definite properties’ to match the math and observations of quantum mechanics (e.g., Bohm’s ‘pilot wave’ interpretation), Occam’s Razor suggests that ‘quanta do not have properties until measured’ is the simpler solution, no matter how counter-intuitive it is.
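The counting argument in this comment boils down to comparing two numbers, which is easy to verify. Here the quantum mismatch rate sin²θ (i.e., 1 − cos²θ) is simply asserted rather than derived:

```python
from math import sin, radians

# Quantum rule: entangled photons through filters at relative angle
# theta disagree at rate sin^2(theta) (asserted here, not derived).
mismatch_30 = sin(radians(30)) ** 2   # one filter rotated 30 deg: 1/4
mismatch_60 = sin(radians(60)) ** 2   # both rotated, 60 deg apart: 3/4

# Local-realist counting: each 30-degree rotation flips at most 1/4 of
# the outcomes, so the two rotations together can flip at most
# 1/4 + 1/4 = 1/2 of them.
classical_bound = mismatch_30 + mismatch_30

print(f"classical maximum mismatch rate: {classical_bound:.2f}")
print(f"quantum prediction:              {mismatch_60:.2f}")  # 0.75 > 0.50
```

The experiments side with the 3/4, which is exactly the violation of the classical bound that the tables above illustrate.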

  6. jenorafeuer says

    And my attempts at using <pre> to get the text blocks to line up got stripped out by the site. Ah well, it should still be readable.

  7. Snowberry says

    I haven’t gotten too deep into physics so I don’t fully understand the more advanced parts, but sometimes I wonder if the language we use can sometimes be a stumbling block to figuring out exactly what’s going on.

    Q: “Is a photon a particle or a wave?”
    A: “Yes.”
    Q: “Okay, but which?”
    A: “No.”
    Q: “I don’t understand.”
    A: “You test for a rabbit, and you get ‘rabbit’. You test for a duck, and you get ‘duck’. You examine closely the thing being tested, and you see a something which has the traits which you consider most essential to each, but isn’t clearly either a mammal or a bird and may not even be an ‘animal’ in the normal sense.”
    Q: “How would I even ‘examine closely’ a photon in that metaphor?”
    A: ¯\_(ツ)_/¯

    Also, you can definitely tell whether the cat is dead or alive without opening the box. Just whack the box and listen for angry cat noises. /s
