Anyone who has taken a course on quantum mechanics and learned the dominant Copenhagen interpretation has encountered the so-called measurement problem: systems can exist simultaneously in a superposition of mutually exclusive states until an actual measurement is made, at which point the superposition collapses in ways that are unknown to the observer and the system is found in just one of the possible states. What happened to all the other states? No one knows. They are assumed to simply cease to exist. The most famous example of this is Schrödinger’s cat which, while still in the unopened box, is both alive *and* dead, but when the box is opened is found either alive *or* dead.

The collapse process, which requires treating the observer as a classical system and the observed as a quantum mechanical one, has never been explicated in a consistent mathematical form and is one of the unsatisfying features of this interpretation. In his doctoral dissertation in physics, Hugh Everett came up with a bold solution to this problem, but at what seems like an extravagant cost.

In pursuing this endeavor, Everett boldly tackled the notorious measurement problem in quantum mechanics, which had bedeviled physicists since the 1920s. In a nutshell, the problem arises from a contradiction between how elementary particles (such as electrons and photons) interact at the microscopic, quantum level of reality and what happens when the particles are measured from the macroscopic, classical level. In the quantum world, an elementary particle, or a collection of such particles, can exist in a superposition of two or more possible states of being. An electron, for example, can be in a superposition of different locations, velocities and orientations of its spin. Yet anytime scientists measure one of these properties with precision, they see a definite result—just one of the elements of the superposition, not a combination of them. Nor do we ever see macroscopic objects in superpositions. The measurement problem boils down to this question: How and why does the unique world of our experience emerge from the multiplicities of alternatives available in the superposed quantum world?

…The Schrödinger equation delineates how a quantum system’s wave function will change through time, an evolution that it predicts will be smooth and deterministic (that is, with no randomness). But that elegant mathematics seems to contradict what happens when humans observe a quantum system, such as an electron, with a scientific instrument (which itself may be regarded as a quantum-mechanical system). For at the moment of measurement, the wave function describing the superposition of alternatives appears to collapse into one member of the superposition, thereby interrupting the smooth evolution of the wave function and introducing discontinuity. A single measurement outcome emerges, banishing all the other possibilities from classically described reality. Which alternative is produced at the moment of measurement appears to be arbitrary; its selection does not evolve logically from the information-packed wave function of the electron before measurement. Nor does the mathematics of collapse emerge from the seamless flow of the Schrödinger equation. In fact, collapse has to be added as a postulate, as an additional process that seems to violate the equation.
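As an aside to the passage above, the contrast between smooth Schrödinger evolution and the collapse postulate can be made concrete with a two-state toy model (my own sketch, not from the quoted article; the Hamiltonian and time step are arbitrary choices for illustration):

```python
import numpy as np

# Toy two-state system: |psi> = a|0> + b|1>, here an equal superposition.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Schrodinger evolution over a time step t is a unitary map:
# smooth, deterministic, and norm-preserving. For the toy
# Hamiltonian H = sigma_x, exp(-iHt) = cos(t) I - i sin(t) sigma_x.
t = 0.1
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * sigma_x
psi_t = U @ psi

print(np.linalg.norm(psi_t))   # still normalized: evolution preserved the superposition

# "Collapse" is not generated by U. It has to be added by hand as a
# projection onto one outcome, with only its Born-rule probability
# (not the outcome itself) supplied by the theory.
p0 = abs(psi_t[0]) ** 2                      # probability of finding |0>
collapsed = np.array([1, 0], dtype=complex)  # state after the outcome |0> is seen
print(p0)
```

The unitary step is fully determined by the Hamiltonian; the projection step is the extra postulate that interrupts it.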

…Everett’s radical new idea was to ask, What if the continuous evolution of a wave function is not interrupted by acts of measurement? What if the Schrödinger equation always applies and applies to everything—objects and observers alike? What if no elements of superpositions are ever banished from reality? What would such a world appear like to us?

Everett saw that under those assumptions, the wave function of an observer would, in effect, bifurcate at each interaction of the observer with a superposed object. The universal wave function would contain branches for every alternative making up the object’s superposition. Each branch has its own copy of the observer, a copy that perceives one of those alternatives as the outcome. According to a fundamental mathematical property of the Schrödinger equation, once formed, the branches do not influence one another. Thus, each branch embarks on a different future, independently of the others.

The idea that on each measurement the universe splits into other universes corresponding to all the possible superposition states ‘solves’ the measurement problem, but it seems extravagant, though hard to refute. Most people just try to forget about the problem.

String theorist Juan Maldacena of the Institute for Advanced Study in Princeton, N.J., reflects a common attitude among his colleagues: “When I think about the Everett theory quantum mechanically, it is the most reasonable thing to believe. In everyday life, I do not believe it.”

I had not thought about this for a while but came across a surprising reference to it in an article on false memories and something called the ‘Mandela effect’ in which many people share the same false memory.

Would you trust a memory that felt as real as all your other memories, and if other people confirmed that they remembered it too? What if the memory turned out to be false? This scenario was named the ‘Mandela effect’ by the self-described ‘paranormal consultant’ Fiona Broome after she discovered that other people shared her (false) memory of the South African civil rights leader Nelson Mandela dying in prison in the 1980s.

…Is a shared false memory really due to a so-called ‘glitch in the matrix’, or is there some other explanation for what’s happening? Broome attributes the disparity to the many-worlds or ‘multiverse’ interpretation of quantum mechanics.

…It’s important to keep in mind that the many-worlds interpretation was developed to explain the results of physics experiments and not the Mandela effect. Nonetheless, Broome believes that her shared memory isn’t actually false, and that she and others who remember a different past were actually in a parallel reality with a different timeline that somehow got crossed with our current one.

While the many-worlds interpretation of Everett is extravagant, suggesting that shared false memories are a possible example of that is even more extravagant. False memories are more likely to be due to a plausible but wrong idea being spread rapidly through social interactions.

Back to Everett. I did not know much about him other than this one idea, and the biographical article says that it did not gain much headway, at least initially, in the world of physics. He left physics research after getting his PhD and had a successful career working for the Pentagon on nuclear war scenarios, later starting his own consulting company.

But he had an unhappy personal life.

Despite all these successes, Everett’s life was blighted in many ways. He had a reputation for drinking, and friends say the problem seemed only to grow with time. According to Reisler, his partner usually enjoyed a three-martini lunch, sleeping it off in his office—although he still managed to be productive.

Yet his hedonism did not reflect a relaxed, playful attitude toward life. “He was not a sympathetic person,” Reisler says. “He brought a cold, brutal logic to the study of things. Civil-rights entitlements made no sense to him.”

…Everett was egocentric. “Hugh liked to espouse a form of extreme solipsism,” says Elaine Tsiang, a former employee at DBS. “Although he took pains to distance his [many-worlds] theory from any theory of mind or consciousness, obviously we all owed our existence relative to the world he had brought into being.”

And he barely knew his children, Elizabeth and Mark.

…Everett died in bed on July 19, 1982. He was just 51. His son, Mark, then a teenager, remembers finding his father’s lifeless body that morning. Feeling the cold body, Mark realized he had no memory of ever touching his dad before. “I did not know how to feel about the fact that my father just died,” he told me. “I didn’t really have any relationship with him.”

…Mark’s sister, Elizabeth, made the first of many suicide attempts in June 1982, only a month before Everett died. Mark discovered her unconscious on the bathroom floor and got her to the hospital just in time. When he returned home later that night, he recalled, his father “looked up from his newspaper and said, ‘I didn’t know she was that sad.’” In 1996 Elizabeth killed herself with an overdose of sleeping pills, leaving a note in her purse saying she was going to join her father in another universe.

Very sad.

Marcus Ranum says

I did one semester of physics in college, so I got the cat, and Wigner’s Friend, but beyond that I’m an ignoramus. That disclaimed:

It always seemed to me that the whole many universes/Schrödinger’s cat thing was a side effect of humans not having a framework in which to understand the uncertainty principle. Just because we have a problem measuring something doesn’t mean necessarily that there is an infinite branching multiverse, or superposed cats — it means: we don’t know. When I observe reality around me, as it appears to be, it seems pretty consistent, constant, persistent, and non-weird. It appears to me to be easier to say “we don’t understand some shit” rather than to make up flights of fancy that are theoretical extrapolations which are consistent with our presuppositions but which don’t appear to have much to do with observable reality. The idea of universes splitting whenever there’s a measurement appears to me to be an attempt to privilege cognition -- coincidentally, because we measure and think -- but if Wigner’s friend can collapse a quantum superposition, why can’t Schrödinger’s cat do it, too? Is this just more silly humans privileging themselves as the most important things in the universe? Aren’t cats’ eyes and brains as capable of doing a “measurement” as a human’s? And what about a wolf spider’s? It seems to me that “the whole universe splits over and over so that humans can remain the most important thing in the universe” is a bit over the top. Why can’t we just say “we don’t understand this shit.”

Wounded King says

Mark Everett is quite famous in his own right as a musician, principally as the frontman of the band Eels. A lot of his work has a strong autobiographical flavour and indeed one of his songs is titled ‘Elizabeth on the bathroom floor’.

Dunc says

This is why that damn cast is one of the worst ideas in popular physics… The thing you have to remember is that it’s just an analogy -- it’s not an accurate description of how the physics works. In reality, the device in the box is what makes the “observation” and collapses the waveform (according to the Copenhagen interpretation). Even the use of the term “observation” is misleading, and the result of a questionable translation of one of Schrodinger’s papers from German -- it would be better to say “interaction”, but “observation” has stuck, leading to the misunderstanding that consciousness has something to do with it. It doesn’t.

Pierce R. Butler says

Marcus Ranum @ # 2:

“It seems to me that ‘the whole universe splits over and over so that humans can remain the most important thing in the universe’ is a bit over the top.”

Doesn’t some non-Copenhagen (Stockholm? Helsinki? Estonia? Schleswig-Holstein?) interpretation have it that interaction with any other particle/wave constitutes an “observation”?

I kind of enjoy, hard-agnostically, the idea that each Planck-length twitch of a quark in any direction instantaneously creates an entire and otherwise indistinguishable universe (multiply number of quarks by number of possible locations by number of, say, Planck intervals since start of Time for, um, a really big number). This of course produces a gigantic conservation-of-energy-and-mass problem, solved for further amusement by positing a plethora of anti-universes with exactly-opposite values; attempted transits from one type to the other resulting in what we naively call supernovae.

Ignorant amateur physics is fun!

Dunc says

Cast? Cat, obviously…

Pierce R. Butler says

Jeez, the “Mandela Effect” suggestor (“paranormal consultant” Fiona Broome) has a whole blog about it with (apparently -- I didn’t read ’em all) hundreds of comments supporting the idea of a “shadow effect” or “bleed through” of events in a parallel universe.

I personally don’t recall any such story about Mandela going around in the ’80s, but between the Swiss-cheesiness of my memory and that I spent much of that decade way out in the woods isolated from media, this means nothing. Interestingly, even the Snopes page on this concept doesn’t cite any public rumors about Mandela dying on Robben Island.

Marcus Ranum says

(proudly waving a flag that reads: “IGNORANT ABOUT ADVANCED PHYSICS, MAKE PHILOSOPHY GREAT AGAIN!”)

Doesn’t the “universes split whenever there’s an observation/measurement” theory violate conservation laws all to fuckandgone? Or is it OK to just wave away the creation of twice as much universe, zillions of times per nano-instant as somehow irrelevant? Even YHWH can’t pull a stunt like that. Come to think of it, can YHWH make a cosmic string so strong that he cannot snap it?

Marcus Ranum says

(I am deliberately trolling all the physicists and our good host, in the preceding comment. I’m hoping to elicit corrections that allow me to learn, I admit it!)

KG says

The “typo” was obviously a bleed-through from a universe where “cat” is spelled “cast”.

Blake Stacey says

“Doesn’t some non-Copenhagen (Stockholm? Helsinki? Estonia? Schleswig-Holstein?) interpretation have it that interaction with any other particle/wave constitutes an ‘observation’?”

That is a theme one finds, for example, in the “relational quantum mechanics” interpretation espoused by Carlo Rovelli.

The business of interpreting quantum mechanics is such a mess that one physicist who tried to chart out the possibilities called the result a “map of madness.” And the problem is worse than the chart suggests at first glance, because there is no one single “Copenhagen interpretation,” and no one single “many-worlds interpretation.” (I like to compare it to putting two rabbis in a room and getting out three opinions.) In the name of eliminating Copenhagen-ish vagueness, the many-worlders reimport plenty of their own, and end up disagreeing with each other.

Blake Stacey says

(“Madness” is a strong and perhaps uncomfortably ableist word to use in this context, I should add, but passions do run high.)

Rob Grigjanis says

Marcus@8: No. Each possible pre-splitting outcome comes with a (generally complex) amplitude, and the absolute square of this amplitude is the probability of that outcome. It corresponds to the “weight” of that outcome, such that all weights must add up to 1. So someone in a “universe” defined by a particular outcome will still see total energy E, the same value as before splitting, but from the universal wavefunction point of view, that universe is carrying energy E times the weight factor. Same for other conserved quantities.
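Rob’s accounting of branch weights can be sketched in a few lines (my own toy illustration, not part of the comment; the amplitude values are arbitrary):

```python
import numpy as np

# Three branches with arbitrary complex amplitudes, then normalized.
amps = np.array([0.5 + 0.5j, 0.6j, 0.3 - 0.2j])
amps = amps / np.linalg.norm(amps)

weights = np.abs(amps) ** 2   # absolute squares = branch weights
print(weights.sum())          # the weights add up to 1

E = 2.7                       # total energy before "splitting"
# An observer in any one branch still measures the full energy E,
# but from the universal-wavefunction point of view each branch
# carries E times its weight, and those contributions sum back to E.
print(np.sum(weights * E))
```

The point is that conservation is bookkept by the weights: no branch ever sees less than E, yet the universal wave function distributes E across branches according to |amplitude|².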

Ketil Tveiten says

@10 That’s right, except missing language about what a dirty hack/kludge this is. It’s an elegant mathematical solution, but as physics, it doesn’t really save the Multiple Worlds interpretation from being total bollocks. Like the Copenhagen interpretation is complete bollocks.

(The only interpretation I’ll accept as non-bollocks is the Shut Up And Calculate interpretation; every physicist I know agrees with this.)

Anton Mates says

Yeah, that doesn’t make sense. Under many-worlds, branches can merge as well as split, but the merging process requires erasing all previous measurements that would have differed between branches. (Any given observation in QM provides knowledge about some variables of a system, but invalidates previous knowledge about other variables.) That means all “memories” specific to one branch or the other are gone.

False memories have nothing to do with quantum, they’re just example #9372 of our brains not being infallible truth-finding devices. These folks are basically saying “When I’m wrong about something, it’s only because I’m actually from a universe where I’m *right* about it!” It’s an ingenious excuse when you’re twelve years old, but….

bluerizlagirl says

It seems to me, intuitively, that the universe would tend to merge timelines — that just feels like it ought to be a lower-energy state than having many bifurcations. So probabilities in the intersecting universes would be skewed in favour of a course of events that would heal the discontinuities, by “re-routing” towards the same ultimate outcome.

Mano Singham says

Marcus @#7,

You can produce a universe with matter that has zero energy because the positive energy matter cancels out the gravitational potential energy which is negative. In fact, estimates of our own universe show that it could have zero total energy.

Mano Singham says

Sorry, I was out the whole day and did not realize that so many questions/comments had piled up!

Mano Singham says

The Copenhagen Interpretation is not the only game in town but won out and has become a kind of dogma. Mara Beller in her exhaustive book *Quantum Dialogue* discusses the history of this issue and all the problems with it, and how the proponents of CI managed to beat down the rivals even though their arguments were not conclusive. (I quote her a lot in my forthcoming book.)

David Bohm’s alternative view, which does not have this problem of treating the observer differently from the observed, has been marginalized despite the efforts of people like the late, great John Bell to keep the flame alive.

Marcus Ranum says

Rob Grigjanis@#12:

OK. I’m clearly in over my head!!

file thirteen says

Can someone please explain to me why there’s such an emphasis on measurement causing a split-off of universes? If the multiverse exists at all, could our observed universe be merely the perspective of the individual (you who are respectively reading this) travelling through it?

No collapse of the wave function in the “wider” multiverse, only a specific path taken when things are forced by, for example, something being measured that requires taking a path through the possibilities to obtain a result. That would explain why further measurements reflect only the path that was taken.

Rob Grigjanis says

file thirteen@20: First of all, “measurement” isn’t a very good term (like “observation”, as Dunc points out in #3), since it implies intelligent agents doing stuff. It really refers to any situation in which a quantum superposition resolves to a definite value/set of values via interaction with the environment. That would certainly include a human measuring the spin of an electron, or doing a double-slit experiment, but it could also be a neutrino interacting with a nucleus somewhere across the galaxy, unobserved by anyone.

The emphasis is on measurement (as defined above), because that resolution (aka wave function collapse) is what Many Worlds is trying to address and explain. Why does the smooth probability function (which is the absolute square of the wave function) for one particle in the double-slit experiment “collapse” to a single dot on the detector screen?

John Morales says

Shorter Rob: observation (measurement) entails the environment interacting with the quantum superposition.

(The interaction is the thing, not whether it’s done by a conscious agent)

John Morales says

PS https://en.wikipedia.org/wiki/Quantum_computing#Quantum_decoherence

John Morales says

Dunc @3,

From the Wikipedia article (my bold, italics in original):

Brian English says

@18 Mano

I see what you did there.

@21 Rob

So much woo have been based on the idea that a conscious agent is required in QM.

Brian English says

From Susskind’s lectures:

Mano Singham says

To add to other explanations here, the point about the Copenhagen Interpretation of ‘measurement’ is that as a result of an interaction with the environment, a quantum system in which two states are entangled (so that their properties are connected and one affects the other) becomes disentangled (i.e., the properties of one exist independently of the other). These two conditions are referred to as ‘coherent’ and ‘decoherent’ states.

The environment need not be a human observer or consciousness or whatever. But in the CI, it has to be a system that obeys classical, not quantum, laws. The problem has been where to draw the line in an experiment between what should be treated classically (i.e., the ‘observer’) and what should be treated quantum mechanically (i.e., the ‘observed’). The theory does not specify it. But drawing the line has become an art that practitioners have mastered quite successfully.

Everett was saying that no such drawing of lines should be necessary, that everything should be treated quantum mechanically. This is not a frivolous idea. For example, when the system of interest is the entire universe, there is no ‘environment’ because the universe, by definition, is all there is and no dividing line can be drawn since any observer lies within the observed system.

To get back to the troublesome cat, the coherent state is when the dead state and live state are in an entangled superposition and the decoherent state is when that coherence is broken as a result of an interaction with the environment (i.e., a measurement). In actual practice, the larger the system, the harder it is to create the necessary overlap between two states to create an entangled, coherent state.

Rob Grigjanis says

Mano@27: Terminology quibble: that’s not how “entangled” is generally used. So an electron can be in a coherent state

α|+> + β|->

where |+> is spin-up for a choice of axis, and |-> is spin-down, and the coefficients tell us the probabilities of measuring up or down: |α|² and |β|² respectively. But |+> and |-> are not usually called entangled in this case.

Entanglement will occur, for example, when a spin zero particle decays to two spin half particles. Denoting the decay particles 1 and 2, the spin part of the final state is

|1+>|2-> ± |1->|2+>

This is an entangled state because it can’t be written as a product of the state of particle 1 and the state of particle 2, only as a sum of products.

And if the electron in the first example has its spin measured, decoherence occurs with this state entangling with the environment (in this case, the detector).
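Rob’s separability criterion, that an entangled state cannot be written as a product of one-particle states, can be checked numerically via the Schmidt decomposition (a sketch I am adding for illustration; `schmidt_rank` is my own helper, not from the comment):

```python
import numpy as np

def schmidt_rank(coeffs):
    """Schmidt rank of a two-qubit state with amplitudes c[i, j] in
    |state> = sum_ij c[i, j] |i>|j>. Rank 1 means the state factors
    into a product of one-particle states; rank 2 means entangled."""
    c = np.asarray(coeffs, dtype=complex).reshape(2, 2)
    singular_values = np.linalg.svd(c, compute_uv=False)
    return int(np.sum(singular_values > 1e-12))

# Product state |1+>|2->: it factors, so Schmidt rank 1.
product = [0, 1, 0, 0]

# (|1+>|2-> - |1->|2+>)/sqrt(2): a sum of products that cannot be
# rewritten as a single product, so Schmidt rank 2 (entangled).
entangled = np.array([0, 1, -1, 0]) / np.sqrt(2)

print(schmidt_rank(product))    # -> 1
print(schmidt_rank(entangled))  # -> 2
```

The singular-value test is exactly the “can it be written as one product?” question: a rank-1 coefficient matrix factors into an outer product of two one-particle state vectors.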

Mano Singham says

Rob,

You are right. I was conflating a mixed state with an entangled one.

Brian English says

This is probably late, but I wonder if Mano or someone else can clear up a confusion I have.

This post got me back to reading Susskind’s QM book.

In the first lecture, he talks about preparing the spin of a particle by measuring along an axis. The measurement will be +/-1. Then if we rotate the measuring device some amount and measure, we will get either +1 or -1, but a statistical average will appear on repeated measurements, equal to something like cos(theta) = n·m. This is not a problem for me, but it implies that repeated measurement along the new axis contains information about the original axis, otherwise we wouldn’t be getting an average which is the dot product of the two axes.

Later in the lecture, he says all experiments at the QM level are invasive, and that should we measure along an axis (z) and get +1 (prepare the spin), then rotate the measuring device and measure again along another axis (x), then turn the measuring device back to the original axis (z), the subsequent measurement will not confirm the original measurement on the original axis. The intermediate measurement along a different axis will leave the spin in a completely random state as far as the next measurement is concerned.

We can get the statistical average along a different axis (x), which is cos(theta) of the angle with the first axis. Is the randomness only in the measurement, in that we don’t know if it will be +1 or -1? Because the average doesn’t appear random if we repeat the measurement on the x axis.

Or is it that because we’ve measured along the x axis, the next measurement along the z axis ‘remembers’ we measured along the x axis but forgets our last measurement on the z axis?

Brian English says

I guess a first hurdle to understanding is this: if measuring along the z-axis prepares the spin, then subsequent measurements along the x-axis give the statistical average of the dot product of the x and z axes. Why does this happen, instead of the first measurement along the x-axis preparing the spin to be either always +1 or always -1 along the x-axis, as it did with the z-axis?

EnlightenmentLiberal says

Depends on what you mean. I hope that you are familiar with GRW models. GRW models are the obvious mathematical formulation of the Copenhagen “interpretation”. Of course, you lose linearity with GRW, but that should have been an obvious consequence of the Copenhagen “interpretation” itself.

…

IIRC, “Schrodinger’s cat” thought experiment was first proposed to show the absurdity of the Copenhagen “interpretation”.

…

Which device in which box? Are you sure it’s the whole device? Could it just be half of the device? Or just the measuring probe part of the device? Or just the tip of the measuring probe of the device?

The measurement problem can be stated shortly as follows: Run a quantum experiment with a measuring device, where the measuring device has a literal yardstick that tilts left or right to indicate the outcome of the quantum experiment. Run the experiment several times, and you observe a roughly 50%:50% breakdown on the yardstick leaning to the left or to the right. Run the quantum mechanics math on the microscopic experiment state, and you find that, under the Copenhagen interpretation, you predict a 50%:50% breakdown, matching the observed results. Finally, consider taking your quantum mechanical description of the microstate, and expanding the scope of the experiment to include the measuring apparatus itself. The measuring apparatus is nothing more than atoms et al, which are governed by Schrodinger’s equation. When you run that math, you no longer have the separation of “experiment” and “measurer”, and the math says that the yardstick exists in a superposition of states, left and right.

The measurement problem is that the division of “experiment” from “measurer” is entirely arbitrary.

Further, AFAIK, you can devise experimental descriptions where the predicted result of the experiment will be different if you draw the line of separation at point A vs at point B. Imagine doing the experiment, and deciding that the line of separation will be at point A, so you collapse the model at point A, then continue the experiment *from the collapsed state* until point B, and then you collapse it again, and measure the result. Vs: do the experiment, but don’t collapse it at point A, keep it in a superposition of states, and keep running that superposition of states until point B, and then collapse it. For some systems, that will have different results. That’s the fundamental problem of the measurement problem. AFAIK, this is actually the logical basis behind proposed tests of GRW theories.

For the best pop-sci lecture that I’ve ever seen on this topic, I suggest the following video.

Rob Grigjanis says

Brian@30: I don’t have Susskind’s book, but I’ll have a look online for his lecture notes to see what exactly he’s getting at. Until then: since you’re saying the spin can only be +/-1, you’re talking about spin 1/2. Measuring isn’t “preparing” unless you’re selecting, say, the + spins along that axis to pass them on. If you then measure those (the +z) spins along an axis at an angle θ to z, you’ll get (if I remember the signs correctly) the spin state

cos(θ/2)|+> - sin(θ/2)|->

where |+> and |-> are the + and - states along the new axis. If it’s the x axis, θ=π/2, so cos(θ/2) = sin(θ/2) = 1/√2, and the spin state is

(1/√2)(|+> - |->)

If you’re talking about a particular particle, once you’ve measured its spin along a certain axis, it certainly doesn’t remember its past. More tomorrow.
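Rob’s amplitudes can be sanity-checked with a couple of lines (my own sketch, not part of the comment): the probabilities cos²(θ/2) and sin²(θ/2) sum to 1, and the expectation value of the ±1 outcomes works out to cos θ, the dot product Brian mentions.

```python
import numpy as np

theta = np.pi / 3            # angle between the old (z) axis and the new axis

# Amplitudes of a +z spin expressed along the new axis,
# per the expression above.
amp_plus = np.cos(theta / 2)
amp_minus = -np.sin(theta / 2)

p_plus = amp_plus ** 2       # probability of measuring +1
p_minus = amp_minus ** 2     # probability of measuring -1
print(p_plus + p_minus)      # the two probabilities sum to 1

# The expectation of the +/-1 outcomes reproduces cos(theta):
# cos^2(theta/2) - sin^2(theta/2) = cos(theta).
expectation = p_plus - p_minus
print(np.isclose(expectation, np.cos(theta)))  # -> True
```

Individual outcomes are still random ±1; only the ensemble average carries the cos θ information about the original axis.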

Mano Singham says

Brian @#30 and #31,

If I understand your question correctly, this is my explanation. What is happening when you talk of the average along the second x-axis is that you are not taking repeated x-measurements on the *same* system. If you did, then if the first one results in +1, every subsequent one will always be +1, exactly as you say you would expect.

What the experiment actually envisages is preparing a *large number of identical systems*, all of which have spins of (say) +1 along the z-axis. Then you take each of those systems and measure the spin along the x-axis. Some will be +1 (and on subsequent x-measurements will always be +1) and some will be -1 (and on subsequent x-measurements will always be -1), but the average will be cos(theta).

Does that answer your question?
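Mano’s ensemble picture is easy to simulate (a sketch of my own: many systems prepared spin-up along z, each measured once along an axis at angle θ; I use θ = π/4 rather than the z-to-x case so the average is nontrivial):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.pi / 4        # angle between z and the measurement axis
n = 200_000              # number of identically prepared systems

# Each system is prepared with spin +1 along z. Measured along the
# new axis, it yields +1 with probability cos^2(theta/2), else -1.
p_plus = np.cos(theta / 2) ** 2
outcomes = np.where(rng.random(n) < p_plus, 1, -1)

# Each individual outcome is a random +/-1, but the ensemble
# average converges to cos(theta) ~ 0.707, as described above.
print(outcomes.mean())
```

Repeating the x-measurement on any *one* of these systems would keep giving its first x-result; the cos θ only appears across the ensemble.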

Brian English says

@Rob 33

That makes sense to me, as the act of measuring perturbs the system.

@Mano 34.

This makes sense. I think it does answer my question. It could be I didn’t comprehend Susskind’s explanation, or it wasn’t clear. I’ll go back and check, but I suspect it’s the former, a lack of comprehension.

Looks like it

I missed that the whole experiment was being repeated, not just the measuring. My bad.

Thanks for your help!

Regarding measuring/preparing

When measuring the same particle on the same axis:

Rob Grigjanis says

Me @33:

Actually that’s wrong, since if it is spin 1 (rather than +/-1 meaning simply spin up/down, as I thought) it could be photons, which don’t have a spin zero component.

Anyway, I found a set of intro QM lectures by Susskind which probably correspond to your book. Now *there’s* a good use of YouTube.

Rob Grigjanis says

EL@32: The Geiger–Müller tube in the Geiger counter. You could say that the quantum-classical boundary is wherever the number of degrees of freedom first increases dramatically.

The math says no such thing. When the simple coherent superposition interacts with the environment (the detector), it becomes entangled with the environment states, and superposition is lost. See here. That doesn’t “solve” the measurement problem, since left or right are still possible, but it’s a choice of classical probabilities after decoherence, and left and right are no longer in quantum superposition.

That you can get different results if you first measure at A or not is a triviality of quantum mechanics, and has nothing to do with the measurement problem. The scenario you describe is two different experiments, so I’m not sure what your point is.

EnlightenmentLiberal says

What? Again, consider a simple quantum experiment and detector, where there is a literal yardstick that will tilt left or tilt right to describe the outcome of the experiment. Consider a real-world experiment of this kind, and we can observationally verify that the direction the yardstick tilts is about 50% left, about 50% right, and it’s seemingly a simple random variable; in particular it has independence of results.

If you run Schrodinger’s wave equation on the original quantum experiment, and if you include the whole measuring device as simply part of the experiment, and you don’t apply Born’s rule nor any other collapse of the wave function, then the math of Schrodinger’s wave equation says that the yardstick will end up in a simple superposition of two states: leaning left and leaning right. This is obviously not a description of the real world, and therefore we need some additional thing, whether that is the Everett idea where our consciousness follows one of the “branches” of the superposition chosen at random, or the Copenhagen “interpretation” aka GRW theory where collapse is a real phenomenon.

GRW theory is a different theory than Everett and Bohmian theories. GRW makes actually different predictions, which can be tested, although such experimental apparatus are difficult to construct.

The other problem with Copenhagen is that, when doing the mathematical modeling of the experiment, it is the experimenter's choice where to put the wave collapse. For certain interesting systems, there are multiple reasonable places where you can put the wave collapse, and depending on where you put it in the mathematical model, you will get different predictions. AFAICT, the sorts of systems where this happens are also the same sorts of systems where you can test GRW against Everett and Bohmian models, aka the same sorts of systems where GRW gives different predictions than Everett and Bohmian models.

EnlightenmentLiberal says

I misspoke.

I should have said:

[…] whether that is the Everett idea where the universe multiplies, and there are two copies of me, one for each of the two “branches” of the superposition, who will forever be out of causal connection, and from either one’s classical perspective, it seems as if the branch was chosen at random, […]

Rob Grigjanis says

EL@38: The “math of Schrodinger’s wave equation” says no such thing. There’s nothing simple about the final state, once the original superposition has interacted with the environment. And there is no actual “collapse”; just the entanglement of the original “simple” state with the much more complicated environment, which effectively separates the two possibilities into classical probabilities. I suggest you read about decoherence, starting at the link I provided. And if you’re interested or ambitious enough, try this.

As for GRW, it’s a dog’s breakfast. Wake me when it has anything interesting to say.

EnlightenmentLiberal says

To Rob

I understand quite well enough.

I’m sorry, I forgot. Do you subscribe to one of the particular “interpretations”? Based on what I just read, I’m leaning towards the guess that you follow Everett, but I can also see how someone of the Copenhagen bent could say that. If you say “Copenhagen”, then I think what you wrote is nonsensical: The Schrodinger wave equation on its own comes to a description of reality where a macro-object is in a superposition of spatial positions. This alone is not a proper description of observable reality.

I understand that this superposition of states is going to have two peaks, with extremely small tails. It doesn’t solve the problem on its own (but it does aid other solutions to the problem).

This sort of superposition of a macro-object is not a proper description nor prediction of observable reality. Because it’s not a proper description nor prediction of observable reality, one therefore needs some rule in addition to and beyond the Schrodinger wave equation in order to go from this “description” of reality where the macro-object is in a superposition of spatial positions, to a proper description of reality where the macro-object is in a single spatial position.

One such approach is to apply Born’s rule, and say that in some situations observable reality corresponds to the probability given by Born’s rule (macroscopic states), while in other cases observable reality corresponds to the Schrodinger wave equation’s evolution without Born’s rule (microscopic states). Of course, this approach is fundamentally flawed, for all of the obvious reasons. In particular, as GRW models show, for particular experimental apparatus, differences in this “choice” of where the experimenter draws the line between microscopic and macroscopic will drastically change the predicted macroscopic observations, and manifestly that means the Copenhagen approach is an unsatisfactory theory.
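For reference, Born's rule in its simplest form says outcome probabilities are the squared magnitudes of the state's amplitudes. A toy two-state example (the state vector here is illustrative, not tied to any apparatus in this thread):

```python
import numpy as np

# Born's rule: P(outcome i) = |<i|psi>|^2 for a normalized state psi.
psi = np.array([1, 1j]) / np.sqrt(2)   # equal superposition with a relative phase
probs = np.abs(psi) ** 2
print(probs)  # [0.5, 0.5]
```

The dispute above is not about this arithmetic, but about *when* one is licensed to apply it.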

I take it that you still disagree with what I have said. I’d be curious to know why, and I hope that I can get a proper answer out of you.

EnlightenmentLiberal says

PS:

Are there not proposed tests of the theory? Isn’t that incredibly interesting!? Isn’t it a fascinating experimental question how to devise apparatus that can tease out the differences between GRW-style theories and Everett and Bohmian theories? I think that’s well worth saying, especially because we seem tantalizingly close to creating apparatus that can rule out large swathes of GRW theories -- or confirm such theories and thereby falsify Everett and Bohmian theories, and also have GRW replace Copenhagen as a refinement and formalization of the collapse rule of Copenhagen.

EnlightenmentLiberal says

To be exceptionally clear:

I understand that in this approach, the portion of the universe’s state that is “me” can be thought of as being duplicated in the two branches, and due to the wonder that is decoherence, the me #1 on branch #1 will only be able to interact with the part of the universe that “observes” outcome #1, and the me #2 on branch #2 will only be able to interact with the part of the universe that “observes” outcome #2. No part of the universe in branch #1 can observe outcome #2, and vice versa.

However, as I already said, this doesn’t buy you anything on its own. You still need something in addition to the Schrodinger wave equation. You still need that little additional Everett magic that says that both branches are equally real, and you are just one copy in one of the branches, and some sort of rule that allows you to derive the prior subjective probability that you will experience winding up in branch #1 vs branch #2, based on Born’s rule.

If you want to go that approach, I have no philosophical complaints. It’s also well-defined, without the arbitrariness of the Copenhagen interpretation. It’s a satisfactory theory in this sense. It might also be true, and it’s a testably different theory than GRW, and it’s arguably a different theory than Copenhagen, which should be understood as an informal version of a formal theory such as GRW.

Rob Grigjanis says

EL@41: What I “subscribe” to is that decoherence is probably all we need to talk about the quantum-classical boundary, and the rest (“left or right”) is essentially random. You keep bringing up the Schrödinger equation, so I assume you’re familiar with the idea of a Hamiltonian which defines the time evolution of a state. If you “understand quite well enough”, you should know that there will be three Hamiltonians in the case of a simple quantum system interacting with a complicated environment: the system Hamiltonian, the environment Hamiltonian, and the interaction Hamiltonian between the two. If you “understand quite well enough”, you should know that the coupling of the system to the environment can cause decoherence in the original system, and the apparent “collapse” of the wave function to a set of classical probabilities. No more superposition!
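The three-Hamiltonian split described here is conventionally written as follows (a schematic form; the subscripts S and E for system and environment are generic labels, not tied to any specific apparatus):

```latex
H \;=\; H_S \otimes I_E \;+\; I_S \otimes H_E \;+\; H_{\text{int}},
\qquad
i\hbar\,\frac{\partial}{\partial t}\,\lvert\Psi_{SE}\rangle \;=\; H\,\lvert\Psi_{SE}\rangle
```

It is the $H_{\text{int}}$ term that entangles system with environment and drives the decoherence being described.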

And I’ll tell you a horrible little secret about myself. I got into theoretical physics because I loved the beauty of the fundamental equations and models. That goes for Newton, Maxwell, Dirac (his eponymous equation is my favourite), Einstein (SR and GR), and so on up to the Standard Model. If the fundamental equations are as ugly and inelegant as those of GRW (or Bohmian Mechanics), I would be grossly offended. But I’m not too worried; neither GRW nor BM seems able to deal in a natural way with special relativity or quantum field theory. Still, keep flying the flag!

EnlightenmentLiberal says

A set of classical probabilities, e.g. a classical probability distribution, is not a proper description of reality after the experiment has been done.

Imagine this experimental setup:

Start quantum experiment #1, measure result, result is described by “result A” or “result B” according to a macroscopic measuring device.

Start quantum experiment #2, which has an input of “left / right”. Choose “left / right” according to the result of the previous experiment. By previous testing, we know that experiment #2 will produce “result C” or “result D” when started with input “left” with about 50% odds, and we know that it will produce “result E” or “result F” when started with input “right” with about 50% odds.

Imagine naively modeling this larger experiment, which encompasses experiments #1 and #2, according solely to the Schrodinger wave equation. It’ll predict final results probabilistically: { “result C, 25%”, “result D, 25%”, “result E, 25%”, “result F, 25%” }. That would be correct from the perspective of a person at a time before the experiment was run.

However, imagine a person at a point in time in the middle of the experiment, after #1 completed, but before #2 started. This person will calculate very different final probabilities, precisely because this person has access to the result of experiment #1. He will calculate: { “result C, 50%”, “result D, 50%” } or { “result E, 50%”, “result F 50%” }.
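The arithmetic in the two preceding paragraphs can be checked with a quick simulation. This is a purely classical Monte Carlo sketch; the outcome labels, the left/right wiring, and the 50/50 odds are taken from the setup as described above.

```python
import random

def run_once():
    """One run of the two-stage experiment described above."""
    result1 = random.choice(["A", "B"])              # experiment #1
    branch = "left" if result1 == "A" else "right"   # feed its result into #2
    if branch == "left":
        result2 = random.choice(["C", "D"])          # left input: C or D, 50/50
    else:
        result2 = random.choice(["E", "F"])          # right input: E or F, 50/50
    return result1, result2

N = 100_000
outcomes = [run_once() for _ in range(N)]

# Before the experiment: each of C, D, E, F shows up about 25% of the time.
for r in "CDEF":
    print(r, sum(r2 == r for _, r2 in outcomes) / N)  # each roughly 0.25

# Mid-experiment, conditioning on result1 == "A": C and D are each about 50%,
# and E, F never occur -- the revised estimate described above.
given_A = [r2 for r1, r2 in outcomes if r1 == "A"]
for r in "CD":
    print(r, given_A.count(r) / len(given_A))         # each roughly 0.5
```

As the later replies note, this conditioning step is itself classical probability; the disagreement in the thread is over what that implies for the underlying quantum description.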

Thus, our perception of observable reality cannot be properly modeled by the Schrodinger wave equation alone. There must be some additional rule of evolution which allows this second experimenter to revise their estimates in the fashion described above.

In Copenhagen and GRW, the proposed solution is that the Schrodinger wave equation is an approximation, and there is some other rule of evolution, collapse. The physically real collapse happens, which gives an experimenter after #1 and before #2 additional information which allows them to compute different estimates, as described above.

In Bohmian, the proposed solution is that there is an non-local hidden variable theory, and the Schrodinger wave equation is the best description that we can obtain in our situation of limited access to the underlying hidden variables. Thus, an experimenter after #1 and before #2 gains additional access, additional knowledge, concerning the hidden variables, which allows them to compute different estimates, as described above.

In Everett, the proposed solution is: after experiment #1, our observable reality is properly described by exactly one of the several decoherent components of the solution computed before experiment #1. In this way, an experimenter after #1 and before #2 gains additional knowledge, which allows them to compute different estimates, as described above. Before #1, an experimenter won’t know whether they will happen to “be” in decoherent component “result A” or “result B”, and after #1, the experimenter knows which decoherent component they’re in for “A vs B”, which allows them to refine their estimates for experiment #2. The Schrodinger wave equation is not modified, and there is no collapse, and no new math is introduced, but we pile up these extra decoherent branches that we never observe (which leads some to conclude that they’re equally real, and others to conclude that this sort of colossal abundance of extra unobservable stuff is a sign that something is going horribly wrong in the theory).

In all of these approaches, additional theoretical framework needs to be added on top of the Schrodinger wave equation. I want to emphasize: even for the Everett approach, observable reality is *not* described by the mere simple evolution of the Schrodinger wave equation. Instead you need some rule that says that future observable reality is correctly described by *one of the future decoherent histories* of the Schrodinger wave equation, and the particular decoherent history that correctly describes the future can be determined in advance only probabilistically, according to Born’s rule. This rule may be simple and “obvious” to many, but it is still a slight alteration of what it means for the Schrodinger wave equation to describe reality: only a very small part of the future wave function accurately describes future reality, and once we determine which small part is accurate, the rest, which is the bulk of it, is useless for describing additional observable reality, and it is discarded.

Rob Grigjanis says

EL@45: I hate to break it to you, but that’s all any theory consistent with QM *can* do. If you have a theory which actually predicts the final outcome, publish it and claim your Nobel Prize.

Your example shows you don’t really understand QM. The input to experiment #2 is not a quantum superposition; it’s one of the classical values A, B (I’m assuming you left out something like “if it’s A, choose left as input to #2”, etc.), so the calculation is always classical, and it gives (0.5){0.5“C” + 0.5“D”} + (0.5){0.5“E” + 0.5“F”}. And the two ways you wrote it are not “very different”. They’re exactly the same. Tell me how

{ “result C, 50%”, “result D, 50%” } or { “result E, 50%”, “result F 50%” }

*doesn’t* mean that the probability of seeing “C” is 25%, since the probability of seeing either of the curly brackets is 50%.

*With a probability of 50% each.

Rob Grigjanis says

Ignore last asterisked line. Bad editing.

Rob Grigjanis says

Oh wait, I see the distinction you’re making. The experiment is only done once. Then yes, the person in the middle knows something the outsider doesn’t know, and will get the result you quote. But it’s still a classical calculation. Nothing quantum about it.

If, however, it’s done over and over again, you’ll still see 25% for each outcome.

EnlightenmentLiberal says

To Rob

Well, I’m both glad that I got the science right, and we’re finally on the same page regarding something. I do believe that I have enough grasp of the concepts to be a meaningful participant, albeit I will try to defer to your expertise where appropriate.

So, did I say anything objectionable? I hope not and think not.

Rob Grigjanis says

EL@49: You were presenting a classical experiment, performed *once*, as demonstrating something about quantum interpretations! I’m not sure what you think you got right.

EnlightenmentLiberal says

Well, at least you didn’t notice anything incorrect in what I said most recently, and so I’ll take it.

Rob Grigjanis says

EL@51: Sorry, I was a bit cranky last night. Yes, you summarized the interpretations well enough.

EnlightenmentLiberal says

To Rob.

Thank you. It’s also ok. I don’t take it personally, and it’s not a big deal. I think you’re a little harsh with me sometimes, but we usually have productive conversations. I generally enjoy our interactions. I can name much worse people whom I regularly interact with on these blogs.