(See Part 1, Part 2, and Part 3. Also I am going to suspend the limit of three comments per post for this series of posts because it is a topic that benefits from back and forth discussions.)
Even if we decide to treat the microscopic and macroscopic worlds as separate and governed by different laws, there is one place where the two worlds collide that we cannot ignore. Recall that we said that in the quantum world, many results do not come into existence until they are measured. Any contact at all between a quantum superposition of states and a macroscopic object, however small that object may be, can cause the collapse of the wave function. But in order for the result to be useful to us, we need to know what it was, and that means we need a measurement involving a measuring device whose results we can see, such as a fluorescent screen, photometer, bubble chamber, Geiger counter, and so on. So when we measure (say) the spin or location of an electron, we unavoidably have an interaction between an object that belongs to the classical world (the detector) and an object that belongs to the quantum world, and this leads to what is called the measurement problem.
To understand the measurement problem, recall that we start with a quantum system that is prepared so that a particle (say an electron or photon) is created such that we cannot predict which state (spin up or spin down) it will be found in upon measurement. We describe the wave function of this particle as being in a superposition of two states, one spin up and one spin down. (Such a superposition of states is said to be coherent.) This superposition will continue to exist as long as the particle does not interact with anything that can be considered macroscopic, however small. When it does, the wave function is said to abruptly shift from being in a superposition of the two states to just one of the states. (This process is referred to as decoherence.) We can’t predict with certainty which state it will collapse into, but if we know the initial wave function (say because it is a solution of the Schrödinger equation that we are able to obtain), we can predict the probability of collapsing into each one.
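To make that last point concrete, here is a minimal sketch (in Python, with amplitudes chosen purely for illustration rather than taken from any particular experiment) of how the squared magnitudes of the amplitudes in a two-state superposition give the probabilities of the two possible measurement outcomes:

```python
import numpy as np

# A spin-1/2 state written as a superposition a|up> + b|down>.
# The amplitudes are illustrative; any pair with |a|^2 + |b|^2 = 1 will do.
a = 1 / np.sqrt(3)           # amplitude for spin up
b = np.sqrt(2 / 3) * 1j      # amplitude for spin down (amplitudes may be complex)

# The Born rule: the probability of each outcome is the squared magnitude
# of its amplitude. These probabilities are all the wave function predicts;
# which outcome occurs on any single measurement is not determined.
p_up = abs(a) ** 2
p_down = abs(b) ** 2
print(f"P(spin up)   = {p_up:.3f}")            # 0.333
print(f"P(spin down) = {p_down:.3f}")          # 0.667
print(f"Total        = {p_up + p_down:.3f}")   # 1.000 (normalization)

# Simulating many repeated measurements: each run 'collapses' to one outcome
# at random, with frequencies approaching the probabilities above.
rng = np.random.default_rng(0)
outcomes = rng.choice(["up", "down"], size=10_000, p=[p_up, p_down])
print("observed spin-up fraction:", (outcomes == "up").mean())
```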
So an act of measurement is when a particle in a quantum state whose wave function is a superposition of states meets a macroscopic object (a detector that serves as the observer) that causes the wave function to suddenly collapse into one of its states. This collapse is sometimes referred to as a quantum jump. The measurement problem is that we do not know what exactly happens during this ‘jump’, even though quantum theory seems to require it to happen all the time, whenever a measurement occurs. The mysterious and unknown nature of this ubiquitous process is distasteful to pretty much everyone. This was true not just for Einstein but also for Erwin Schrödinger, who gave us the eponymous equation we use to calculate the wave function. He too hated the idea that the wave function abruptly jumped from one state to another and famously said, “If we have to go on with these damned quantum jumps, then I’m sorry that I ever got involved.” We tolerate it because quantum mechanics works so well and nothing better has come along that has convinced most physicists to adopt it.
Not that there have not been attempts to find alternatives. There have been strenuous efforts to remove this ugly feature of mysterious quantum jumps upon measurement by getting rid of the role of the observer (detector) that causes them. Some of these efforts come under the heading of hidden variable theories, which introduce additional variables that, contrary to standard quantum theory, determine the outcome before the measurement is made, even though we may not know precisely what those variables are.
The famous mathematician John von Neumann purported in 1932 to have proved on general grounds that such hidden variables were impossible within the framework of quantum theory, and because of his eminence, that claim carried a lot of weight and put a real damper on attempts in that direction. But while there were some private grumblings that von Neumann’s ‘impossibility proof’ might contain a faulty assumption that invalidated his strong conclusion, it was John Bell who in 1966 published a paper showing that it was flawed. Hidden variables were possible, at the price of the theories being non-local. Bell said that he was inspired to investigate von Neumann’s claim when he read a paper published by David Bohm in 1952 (following up an idea by Louis de Broglie in 1927) which explicitly contradicted von Neumann’s assertion of impossibility by constructing just such a supposedly impossible model. In this model an electron (say), in addition to its wave function, is guided by an additional ‘pilot wave’ (now called the de Broglie-Bohm pilot wave) that makes its motion completely determined, like that of a classical particle, while still giving results that are consistent with quantum mechanical experiments. One consequence, as Bell showed in 1964, is that this theory, like all hidden variable theories (a term that Bell disliked), is necessarily non-local, which is also somewhat distasteful.
But there are other theories. In one model published in 1986 by Ghirardi, Rimini, and Weber, the wave function collapse is not due to any interaction with a detector. Instead, the wave function is randomly, spontaneously, and repeatedly collapsing on its own into a single observable state, then evolving quantum mechanically again so that it could be in a superposition, then collapsing again, and so on. The collapse is built into the system and not caused by any external detector. It is influenced by the size of the difference between the two superposed states. So for a small system like an electron in spin up and spin down states, the time between these spontaneous collapses could be of the order of millions of years, whereas for a macroscopic object containing say 10^20 particles or more, the time between collapses is of the order of 10^-5 seconds or less. So macroscopic objects keep collapsing after infinitesimally small amounts of time, which is why they look like they behave classically and are never observed in superposed quantum states, while microscopic objects like electrons can have a large amount of time between collapses, which is why we observe quantum effects only in microscopic systems.
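As a rough illustration of that scaling, here is a short sketch in Python. The per-particle collapse rate used below (about 10^-16 per second) is the value commonly quoted for the GRW model, and treating the effective rate of a composite object as simply proportional to its particle number is a simplification, so the output should be read as order-of-magnitude only:

```python
# Order-of-magnitude sketch of the GRW scaling described above.
# Assumptions: each particle spontaneously localizes at roughly 1e-16 collapses
# per second (the commonly quoted GRW value), and the effective collapse rate of
# a composite object is that rate times the number of particles it contains.

LAMBDA_PER_PARTICLE = 1e-16   # collapses per second, per particle
SECONDS_PER_YEAR = 3.15e7

def mean_collapse_time(n_particles: float) -> float:
    """Mean time between spontaneous collapses, in seconds."""
    return 1.0 / (LAMBDA_PER_PARTICLE * n_particles)

t_single = mean_collapse_time(1)       # a lone electron
t_macro = mean_collapse_time(1e20)     # a just-visible speck of matter

print(f"single particle: {t_single:.1e} s (~{t_single / SECONDS_PER_YEAR:.0e} years)")
print(f"1e20 particles : {t_macro:.1e} s")
```

With these numbers a single particle goes hundreds of millions of years between collapses, while the speck collapses every ten-thousandth of a second or so, which is the qualitative behaviour described above.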
Perhaps the most imaginative and attention-grabbing of the alternatives is the many worlds interpretation (MWI), first proposed by Hugh Everett in 1957. In this model the standard quantum formalism is retained, but there are no quantum jumps. Instead, at any stage in which an electron (say) is in a superposition of two states and is then detected to be in one of them, it is not the case that the wave function abruptly collapsed or jumped into that state. What is said to have happened is that the universe split into two, with each universe having one of the two results. So when we say that a measurement gave us spin up, what happened is that we find ourselves in a universe where the spin was up, but there is now a parallel universe in which the parallel ‘we’ found the electron to be spin down. But since the two universes do not communicate with each other, we think that the wave function collapsed to just the one observable state that we see.
The many-worlds interpretation implies that there are many parallel, non-interacting worlds. It is one of a number of multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realized. This is intended to resolve the measurement problem and thus some paradoxes of quantum theory, such as Wigner’s friend, the EPR paradox, and Schrödinger’s cat, since every possible outcome of a quantum event exists in its own world.
As you can see, some of these theories are quite imaginative. This is because when an important problem in physics is proving to be intractable (and the measurement problem definitely fits that bill), people are willing to entertain ideas that are quite far from the orthodox. This is common in science. This ambiguous state of affairs continues until there is a preponderance of evidence that shifts the consensus firmly in one direction.
There is no such consensus now about what to do about quantum measurements and the associated jumps. What there is a consensus about is that whatever final theory is adopted must incorporate the results of standard quantum theory into its framework, because those results are so robust and extensive. All these alternatives allow for that, if not exhaustively, at least minimally to the extent that they have been investigated. But there is no convincing evidentiary basis for any of them. As a result, where one ends up depends on personal preference; all of these theories have their adherents, and one cannot say that any of them is wrong. Standard quantum theory with inexplicable jumps and nothing more is probably the most popular, but hidden variable theories and the MWI have many supporters. Sean Carroll is one advocate of the MWI and he addresses some of the common misconceptions about it.
In the next and final post, I will get to another imaginative alternative, the one that triggered this whole series of posts, and that is the possible role of consciousness in measurement.

This is all way over my head, but I’ll ask my stupid question anyway. 🙂 Is the superposition analogous to a coin that lands on its edge or a person exactly five feet tall? And observation then somehow tips the coin over or decides which group the person is assigned to?
My plug for MWI: It’s the only interpretation (that I know of) that has no ‘spooky action at a distance’ between entangled particles.
Even though Copenhagen doesn’t allow FTL communication between the particles, there is still a non-local correlation which is indeed rather spooky.
Ridana @1: There’s no classical analogue. The superposition is in no sense an ‘edge condition’. It’s simply telling us that either outcome is possible on measurement.
Ridana @#1,
Your question is not at all stupid.
It is true that the coin might land on its edge and thus be neither heads nor tails. But that is not an analogy to superposition, and neither is that of a person being exactly five feet tall.
I had thought of including that possibility as well, but that would have meant treating the tossed coin as being in a superposition of three states, with the probability of finding one of those states (on the edge) being very small. I decided to make it binary for the sake of simplicity. I could still have made the outcome binary by saying that one outcome is the coin being found as heads while the other outcome is it being either tails or on the edge. But that just becomes more wordy without adding anything to the basic issue. Similarly with the heights, I could have said that the options are either less than five feet or equal to or greater than five feet.
Superposition does not mean merely ignorance of which state the system is in; it means that the entity is in both states at once.
Do not be worried that this idea is hard to absorb. We all struggle with it because it is so counter-intuitive, going against our common expectations. It is just that over time, physicists have learned to live with it, even if we are not at ease with it.
so schrodinger’s cat isn’t that magical at all. if we assume the observation that decoheres the wave (?) is the human opening the box, the cat is in the wacky alive/dead state, but in actuality the second the coherent thingus interacts with the detection thingus, whether or not a human or cat is watching, the decision is made, and cat is definitively alive or dead before you look. it’s less conscious observation than any interaction with the macroscopic world that requires it to be one way or another.
given that, you could just say there’s a (mystery metric seemingly related to size and interaction) past which waves can’t stay coherent, and it doesn’t look magic at all. just annoyingly mysterious. something that makes this feel less screwy is that there are plenty of macroscopic/intuitively reasonable cases of tipping points. this much heat makes water turn to steam, this much mass makes a star collapse into a black hole. it’s just another tipping point, but one that is resistant to scientific observation by its nature -- not anything that imbues any significance at all on consciousness, which was always my main beef with this.
apologies to whatever extent that’s broken.
Bébé Mélange @5:
Spot on.
How is this different from taking a red card and a black card from a deck; having someone select one at random, climb in a spaceship and travel halfway across the universe; and as soon as I look at my card, say it happens to be red, I know at once that their card is black? They have always been opposites from the outset, so as soon as you know the state of either one, then you automatically know the state of the other one, by the property of oppositeness.
As for whether the Moon is still there when I’m not looking at it, what with there being a small but non-zero probability that a particle might — by dint of quantum tunnelling, or borrowing a quantity of energy and paying it back before the Laws of Thermodynamics notice — be somewhere else and all, the probability of enough particles from a system to be missed being somewhere outside the system gets smaller and smaller, the greater the number of particles in the system. And there are so many particles in the Moon, that the probability of enough of the Moon still being there for me not to miss whatever is somewhere else is almost one.
What is said to have happened is that the universe split into two …
Which struck me, decades ago, with two brain-stretching (or -exploding) thoughts:
1) Consider the number N of superpositional events at a precise point in time in “the” universe. As each collapses, we have N-squared universes, every one of them with ~N impending collapsations (apparently not a word). Multiply by the number of such successive collapse-events per second, and multiply that by (at least) 13.7 billion years worth of seconds…
2) Even if we just consider one universe-mitosis, doesn’t that confront us with a massive violation of the “law” of conservation of matter and energy? Where did everything in the “daugher” universe come from?
I have had the opportunity to ask the second question of someone who knew much more than I about all this, and was told I just don’t understand. They sure got that right!
bluerizlagirl @7: You’ve made the same point before (took a bit of googling to find it);
https://freethoughtblogs.com/pharyngula/2025/06/01/dualism/#comment-2267050
If my answer was unsatisfactory, you could have said so then. Not sure what I could add.
me @ # 8 -- ahem! “… daughter universe…” Look -- a squirrel!
Pierce @8:
No violation. Energy is conserved along any branch. It’s also conserved in the superposition of split states. Suppose you start with a universe containing energy E. Denote that state symbolically by |0>. It splits into two states |1> and |2> each with energy E, such that the final state is expressed as
a|1> + b|2>, where |a|² + |b|² = 1 (a and b are generally complex)
The energy of the initial state is E, the energy of each of the final states is E, and the energy of the superposition is E.
Bloody hell! I tried to write more equations, but there’s some unknown (to me) feature which erases part of what I wrote.
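For anyone who wants the arithmetic in the comment above spelled out numerically, here is a minimal sketch (the energy E and the amplitudes a and b are placeholders chosen for illustration):

```python
import numpy as np

# The setup from the comment above: an initial state |0> with energy E splits
# into a|1> + b|2>, where |1> and |2> each have energy E and |a|^2 + |b|^2 = 1.
# The numerical values of E, a, and b below are placeholders for illustration.

E = 2.5                                 # energy of each branch (arbitrary units)
a = np.sqrt(0.75)                       # amplitude of branch |1>
b = np.sqrt(0.25) * np.exp(0.4j)        # amplitude of branch |2> (a complex phase is allowed)
assert np.isclose(abs(a)**2 + abs(b)**2, 1.0)   # normalization

# Hamiltonian in the {|1>, |2>} basis: both branches have the same energy E.
H = np.diag([E, E]).astype(complex)
psi = np.array([a, b], dtype=complex)

# Expectation value <psi|H|psi> = |a|^2 E + |b|^2 E = E:
# the superposition carries the same energy as either branch alone.
print(np.vdot(psi, H @ psi).real)       # 2.5
```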
@8: I’m not sure whether practicing physicists consider “many-worlds” and “no collapse” the same or different, but I find they have different causes for incredulity. “No collapse” doesn’t explain why we observe a classical universe; “many worlds” is (as you note) extravagant with its multiplication of entities.
Larry Niven proposed a refutation of “many worlds”. If there is a new universe for each possible outcome, then we should observe each outcome to be equally probably. His counterexample, a loaded die, is flawed, as rolling a die is not a simple observation of one feature (which face is uppermost), but also an observation of many other features, such as where the die comes to a halt. But you can instead consider an observation of which decay branch a radioactive isotope follows. The contrast between probabilities can be quite large (extremely large if proton decay to a positron and a neutral pion is considered amongst the branches). To my untutored eye Larry Niven’s argument seems valid with this example. If it is, and the probabilities are rational numbers, then one can rescue “many worlds” by splitting the universe into “the lowest common denominator of the probabilities” branches. But I would be mildly surprised if the probabilities were never irrational or even transcendental, with the result that every observation spawns an infinite number of new universes, which I find to be an extremely extravagant hypothesis.
Given that “many worlds” has a substantial number of supporters I have been led to wonder whether I misunderstand “many worlds” : is it just a bad popularisation of “no collapse”?
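A small sketch of the ‘lowest common denominator’ rescue described in the comment above (the probabilities 3/4 and 1/4 are illustrative; they happen to match the spin example given further down the thread):

```python
from fractions import Fraction
from math import lcm   # requires Python 3.9+

# If every outcome has a rational probability, split the universe into enough
# equally weighted branches that each outcome gets a whole number of them.
probabilities = {"spin up": Fraction(3, 4), "spin down": Fraction(1, 4)}

n_branches = lcm(*(p.denominator for p in probabilities.values()))
print(f"split into {n_branches} equally weighted branches:")
for outcome, p in probabilities.items():
    print(f"  {p * n_branches} branch(es) observe {outcome}")

# If any probability is irrational, no finite branch count works, which is
# exactly the objection raised above.
```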
another stewart @12:
I dare say that no two-word phrase explains much of anything. There is the appearance of collapse in the interaction between a coherent state and the extremely complicated environment.
Not sure what the problem is with ‘multiplication of entities’ unless you are daunted by large numbers.
bluerizlagirl @#7,
While what you are describing is an interesting point that on the surface is similar to what was done in the famous paper that has come to be known as the Einstein-Podolsky-Rosen (EPR) paradox, it differs in one crucial respect.
It will take me a bit to prepare a full explanation of how it differs and why it matters but will do so and make it a new post sometime next week.
Rob Grigjanis @2, @13
Many Worlds is a bit of an understatement isn’t it? Wouldn’t Infinite Universes, or even the Infinite Tapestry of Possibility be more correct? I can conceptualise our timeline being a path through a (insert your preferred term for it here -- “existing structure” sounds so inadequate if not plain incorrect) a lot easier than the creation of two universes at every juncture, one of which is ours. But is that what you mean by multiplication? If so, where do those other universes “go”? Why don’t we ever see them affect ours?
MW (IU, ITOP) seems like a massive (such an understatement) idea to take on board in order to avoid non-locality, as unsatisfactory as the latter may be.
What stops the apparent randomness of our universe being not random but determined by a formula? Then there is no need for MW; the thread of our universe through the infinite tapestry of possibilities is very like the thread of pi through the infinite line of real numbers. The next digit of pi at any point may appear random but it isn’t. To my lay brain that seems just as possible as MW. What am I missing?
Rob Grigjanis @ # 11 -- thanks!
To the limited extent that I follow your explanation, that applies _within_ each of the split universes. I was attempting to visualize this within a sort of metaspace -- not the Zuckerberg/VR kind, more of a godseye point of view -- but that surely just reflects my un(der)educated perspective. It still seems to me that _something_ would be increased by each split (or that something would be decreased with each halving, even if imperceptible within both branchings).
And yes, I am daunted by the sort of large numbers implied by my first point.
Some years ago you explained to me that the “virtual particle” phenomenon might be better considered as a wave, which at least soothed my feelings of confusion even if lessening my ignorance only infinitesimally. Now I find myself wondering if the same many-worlds considerations apply in every “particle/antiparticle” case (incidentally raising the exponents of my # 1’s N, um, exponentially).
another stewart @ # 12: If there is a new universe for each possible outcome, then we should observe each outcome to be equally probabl[e].
Doesn’t the probability of any _observed_ outcome = 1?
Honestly, the classic ‘branching tree’ idea of the Many Worlds Interpretation makes it harder for the interpretation to actually make sense. The whole question of conservation of energy comes up, and that’s a distraction.
The way I look at MWI is more as a wavefront washing a ball along. The ball can bob from side to side effectively doing a random ‘drunkard’s walk’ while the wave pushes it forward. The wave is time, and the side-to-side position of the ball is the universe we observe. Ideally the wave would be rippling outward from a central point and getting bigger as it goes out as more and more possibilities emerge. Of course, this would require a near-infinitely-dimensional phase space to actually work as an explanation.
But the point is, there isn’t really a formal ‘splitting’ of universes so much as a drunkard’s walk through possibilities where the only required direction is forward, and different positions along the ‘wavefront’ of time can potentially look the same if they’re close enough together but still possibly lead to different locations later.
@another stewart:
Niven’s argument goes down the tubes if you assume either that the different universes have different inherent probabilities, or that (taking my ‘phase space’ approach to many worlds) each possibility represents a region within that phase space, and those regions can have different volumes and thus different probabilities.
jenorafeuer @17: As I pointed out in #11, there is no problem with conservation of energy in MW. See also here;
https://physics.stackexchange.com/a/268707
another stewart @12:
That’s not how it works. Suppose I prepare an electron such that its spin is +1/2 along the z-axis. Then I propose to measure its spin along an axis at θ = 60 degrees to the z-axis. The probability of measuring spin +1/2 is now cos²(θ/2) = 3/4, and that of measuring spin -1/2 is sin²(θ/2) = 1/4.
In other words: the spin +1/2 world has a probability of 3/4, while the spin -1/2 world has probability 1/4.
Any observer of the measurement can only end up in one of these worlds.
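A quick numerical check of those numbers (a sketch; the preparation along z and the 60 degree measurement axis are the ones given in the comment above):

```python
import numpy as np

# Electron prepared spin +1/2 along z, then measured along an axis tilted by
# theta from z. The outcome probabilities are cos^2(theta/2) and sin^2(theta/2).
theta = np.radians(60)

p_plus = np.cos(theta / 2) ** 2     # probability of measuring spin +1/2
p_minus = np.sin(theta / 2) ** 2    # probability of measuring spin -1/2
print(f"P(+1/2) = {p_plus:.3f}")    # 0.750
print(f"P(-1/2) = {p_minus:.3f}")   # 0.250

# Equivalently, via the overlap of the prepared state with the +1/2 eigenstate
# of spin along the tilted axis (taken to lie in the x-z plane):
up_z = np.array([1.0, 0.0])
up_tilted = np.array([np.cos(theta / 2), np.sin(theta / 2)])
print(abs(np.dot(up_tilted, up_z)) ** 2)   # 0.75 again
```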
Why to feel daunted by large numbers.
This also seems relevant, especially the tooltip/title text. 🙂