In the previous post, I explained the early attempts to derive the Born Rule for the MWI. These attempts required assumptions for which no justification was given; as a result, critics of the MWI pointed to the lack of justification for the Born Rule as a major weakness of the interpretation.
MWI supporters often had to resort to simply postulating the Born Rule as an additional law of physics. That is not as good as a derivation, which would be a great advantage for the MWI, but it at least puts the MWI on the same footing as most other interpretations. However, it is by no means clear that it is legitimate to do that, either. Many people think that branch-counting (or some form of observer-counting) must be the basis for probabilities in an MWI, as Graham had suggested. Since branch-counting gives the wrong probabilities (as Graham failed to realize), a critic might argue that experiments (which confirm the Born rule) show the MWI must be false.
Thus, MWI supporters were forced to argue that branch-counting did not, in fact, matter. The MWI still had supporters due to its mathematical simplicity and elegance, but when it came to the Born Rule, it was in a weak position.
In the famous Everett FAQ of 1995, Price cited the old 'infinite measurements frequency operator' argument. That was my own first encounter with the problem of deriving the Born Rule for the MWI, and despite being an MWI supporter, I immediately saw the hole in the infinite-measurements argument: real experiments involve only a finite number of measurements.
5) The decision-theoretic approach to deriving the Born Rule
In 1999, David Deutsch created a new approach to deriving the Born Rule for the MWI, based on decision theory. He wrote "Previous attempts ... applied only to infinite sets of measurements (which do not occur in nature), and not to the outcomes of individual measurements (which do). My method is to analyse the behaviour of a rational decision maker who is faced with decisions involving the outcomes of future quantum-mechanical measurements. I shall prove that if he does not assume [the Born Rule], or any other probabilistic postulate, but does believe the rest of quantum theory, he necessarily makes decisions as if [the Born Rule] were true."
Deutsch's approach quickly attracted both supporters and critics. David Wallace came out with a series of papers that defended, simplified and built on the decision theory approach, which is now known as the Deutsch-Wallace approach.
Deutsch's derivation contained an implicit assumption, which Wallace made explicit and called 'measurement neutrality'. Basically, it means that the details of how a measurement is made don't matter. For example, if a second measurement is made along with the first, it is assumed that the probabilities for the outcomes of the first won't be affected. This implies that unitary transformations, which preserve the amplitudes, don't matter. That in turn implies 'equivalence', which states that two branches of equal amplitude have equal probabilities, and which is essentially equivalent to the Born Rule. The Born Rule is then derived from 'equivalence' using simple assumptions cast in the language of decision theory.
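For concreteness, here is the standard fine-graining sketch of how 'equivalence' extends to unequal amplitudes (a schematic reconstruction of the usual argument, not Wallace's exact axioms):

```latex
% Take a state with rational squared amplitudes:
%   |psi> = a|0> + b|1>,  with |a|^2 = m/n and |b|^2 = (n-m)/n.
% A unitary fine-graining splits each outcome into equal-amplitude
% sub-branches:
\[
  |\psi\rangle \;\longmapsto\;
  \sum_{i=1}^{m} \frac{1}{\sqrt{n}}\,|0\rangle|i\rangle
  \;+\; \sum_{i=m+1}^{n} \frac{1}{\sqrt{n}}\,|1\rangle|i\rangle .
\]
% 'Equivalence' makes the n equal-amplitude branches equiprobable, so
%   P(0) = m/n = |a|^2,
% i.e. the Born Rule for rational squared amplitudes; irrational values
% follow by a continuity assumption.
```

Note that the fine-graining step is exactly where 'measurement neutrality' does its work: it is what licenses treating the fine-grained and coarse-grained measurements as the same decision problem.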
Wallace acknowledged that 'measurement neutrality' was controversial, admitting "The reasons why we treat the state/observable description as complete are not independent of the quantum probability rule." Indeed, if probabilities depend on something other than amplitudes, then clearly they can change under unitary transformations.
So he offered a direct defense of the 'equivalence' assumption. That defense formed the basis of the paper that was long considered the best statement of the DW approach, certainly as of the 2007 conferences. New Scientist magazine proclaimed that his derivation of the Born Rule in the MWI was "rigorous" and was forcing people to take the MWI seriously.
His basic argument was that things the person making a decision doesn't care about can't affect that person's rational choices. This included the number of sub-branches, but he also took care to argue that the number of sub-branches can't matter because it is not well-defined.
Consider Albert's hypothetical fatness rule, in which probabilities are proportional both to the squared amplitudes and to the observer's mass. This obviously violates 'equivalence'. According to Wallace's argument, the decider should ignore his mass unless it comes into play for the decision, so such a rule is impossible. But the argument is circular: the decider should care about his mass if it in fact affects the probabilities.
My critique of Wallace's approach is presented in more detail here, where I also cover his more recent paper.
In his 2009 paper, Wallace takes a different approach. Perhaps recognizing that assuming 'equivalence' is practically the same as just assuming the Born Rule, he makes some other assumptions instead, couched in the language of decision theory, which allow him to derive 'equivalence'. The crucial new assumption is what he calls 'diachronic consistency'. In addition to consistency of desires over time, it contains the assumption that measure is conserved as a function of time, for which no justification is given. Of course, the classical version of diachronic consistency is unproblematic, and only a very careful reading of the paper would reveal the important difference, were it not for the fact that Wallace helpfully notes that Albert's fatness rule violates it.
6) Zurek's envariance
W. Zurek attempted to derive the Born Rule using symmetries that he called 'envariance', or environment-assisted invariance. While interesting, his assumptions are not justified. The most important assumption is that all parts of a branch, and all observers in a branch, have the same "probability". Albert's fatness rule provides an obvious counterexample. I also note that a substate with no observers in it cannot meaningfully be assigned any effective probability.
He uses this, together with another unjustified assumption that is similar to locality of probabilities, to obtain what Wallace called 'equivalence' and then the Born Rule from that. Because the latter part of Zurek's derivation is similar to the DW approach, the two approaches are sometimes considered similar, although Zurek does not invoke decision theory.
7) Hanson's Mangled Worlds
Robin Hanson came up with a radical new attempt to derive the Born Rule in 2003. It was similar to Graham's old world-counting proposal in that Hanson proposed to count sub-branches of the wavefunction as the basis for the probabilities.
The new element Hanson proposed was that the dynamics of sub-branches of small amplitude would be ruined, or 'mangled', by interference from larger sub-branches of the wavefunction. Thus, rather than simply count sub-branches, he would count only the ones with large enough amplitude to escape the 'mangling'.
Due to microscopic scattering events, each branch's squared amplitude undergoes a multiplicative random walk, so the sub-branches acquire a log-normal distribution of squared amplitudes. Interference ('mangling') from large-amplitude branches imposes a minimum amplitude cutoff. If the cutoff is in the right numerical range and is uniform across all branches, then due to the mathematical form of the log-normal function, the number of branches above the cutoff is proportional to the square of the original amplitude, yielding the Born Rule.
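A toy simulation can illustrate the mechanism (this is my own sketch with illustrative parameters, not Hanson's actual calculation): repeated uneven splittings make the log of the squared amplitude a random walk, and a fixed cutoff then passes more sub-branches of a high-weight main branch than of a low-weight one.

```python
import math
import random

def sub_branch_log_weights(w0, steps, rng):
    """Return ln(squared amplitude) for all 2**steps sub-branches of a
    branch whose initial squared amplitude is w0. Each splitting divides
    the weight unevenly at random, so ln(w) performs a random walk and
    the final weights are approximately log-normally distributed."""
    logs = [math.log(w0)]
    for _ in range(steps):
        new = []
        for lw in logs:
            f = rng.uniform(0.1, 0.9)  # random uneven split of the weight
            new.append(lw + math.log(f))
            new.append(lw + math.log(1.0 - f))
        logs = new
    return logs

def count_above_cutoff(w0, cutoff, steps=12, seed=0):
    """Count sub-branches whose squared amplitude survives a fixed
    'mangling' cutoff (the uniform-cutoff assumption of Mangled Worlds)."""
    rng = random.Random(seed)
    logs = sub_branch_log_weights(w0, steps, rng)
    return sum(1 for lw in logs if lw > math.log(cutoff))

# A main branch with larger initial squared amplitude leaves more
# sub-branches unmangled under the same (uniform) cutoff.
big = count_above_cutoff(0.9, 1e-6)
small = count_above_cutoff(0.1, 1e-6)
print(big, small)
```

The simulation only demonstrates the qualitative mechanism; Hanson's quantitative claim of proportionality to the squared amplitude depends on the cutoff landing in the right part of the log-normal tail, which is exactly where the uniformity objection below bites.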
Unfortunately, this Mangled Worlds picture relies on many highly dubious assumptions; most importantly, the uniformity of the ‘mangling’ cutoff. Branches will not interfere much with other branches unless they are very similar, so there will be no uniformity; small-amplitude main branches will have smaller sub-branches but also smaller interference from large main branches and thus a smaller cutoff.
Even aside from that, while the idea of branch-counting has some appeal, it is clear that observer-counting (with computationalism, implementation-counting) is what is fundamentally of interest. Nonetheless, 'Mangled Worlds' is an interesting proposal, and is the inspiration for a possible approach to attempt to count implementations of computations for the MWI, which will be discussed in more detail in later posts. That does require some new physics though, in the form of random noise in the initial conditions which acts to provide the uniform cutoff scale that is otherwise not present.
In the next post, proposals for MWIs that include modifications of physics will be discussed.
I hadn't seen this post before. The uniformity of a mangling cutoff is a sufficient condition, but is not obviously a necessary condition. It is just not clear how to do an analysis without it.
Hi, Robin. Welcome to my qm blog.
Actually I think it's rather clear that the amplitude cutoff would be larger for large-amplitude branches, because only very similar worlds would affect each other non-negligibly. Thus, branch amplitude wouldn't matter much in terms of the number of non-mangled worlds in each branch. I'd say that there is no chance this would result in the Born Rule.
I thought I'd mention this to you.
A few days ago I ended up in a discussion with Sascha Vongehr over this article on his blog:
www.science20.com/alpha_meme/many_worlds_tautological_truth_be_or_not_be_not_question-85195
After some back-and-forth he ended up defending the approach by Deutsch.
I referred him to papers by Huw Price, Alastair Rae, Peter Lewis and you.
At first he seemed to welcome the literature, but after having read through them (at least that's what he said he did), he called it "crackpottery".
When I responded by calling the decision-theoretic approach crackpottery in a context of MWI he started censoring my posts.
Eventually I got to ask him what was wrong with the papers I had supplied him with and he responded that the author (you) did not understand measurement neutrality and equivalence.
I was puzzled by this, since you do address these issues, and inquired further, but he refused to defend his statements (perhaps unable to) and ended up deleting the bulk of our correspondence in his comment section.
I would really love to see you post a comment on his site asking him how he can defend decision theory in MWI, and also what makes your paper "crackpottery".
qpdb, that science 2.0 blogger seems a bit cranky in more ways than one.
Since he already deleted those statements, I see little point in asking him to defend them. Of course he wouldn't be able to, but I'm sure he would never admit it. What he would probably do is just repeat himself - a lot of people do that! He might even delete the whole discussion again.
I saw that blog post already, and there's a lot wrong with it, but he's clearly committed to his way of thinking. On other posts he does little but complain that no one takes his views seriously. I wonder why ...
BTW, David Albert also wrote some good criticism of the decision theory approach.
You might be right.
My impression of his "self consistent worlds" is that it reminds me more of some sort of "everything hypothesis", but when I confronted him with the resemblance to Tegmark's MUH he said that Tegmark's ideas were naive.
So.... I am not sure.
I think if you posted there it might yield something.
Do you have a specific paper of David Albert in mind ?
Albert's chapter in the book "Many Worlds?: Everett, Quantum Theory, and Reality" has an explanation of his "fatness measure", p. 360-363.
Thanks
I let Matt Leifer know of your paper on his blog; he wrote quite a long comment in reply on decision theory. You might want to check it out and weigh in your two cents.
http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/comment-page-2/
It's been a very busy week, but I just revised my arxiv eprint on the decision theory approach to clarify some points. I may post a comment to that blog once the revision is available on the arxiv.
Hey, what is your opinion on Vaidman's derivation of the Born Rule? http://philsci-archive.pitt.edu/8558/1/Pit3.pdf
Vaidman's attempt has the same flaws that are common to a lot of attempts: not taking into account that measure, not probability, is the basic quantity; that a justification is needed for why it's approximately conserved; and that factors other than amplitude could well come into play.
ReplyDeleteI have to give Vaidman credit though for admitting that his argument is "suggestive but not rigorous".
In detail (section 5 of that paper) he assumes that "probabilities" are attached to worlds, not to observers. That is incorrect. For example, it might be that each observer has a 50% "probability" to find the particle. This can be achieved by giving equal measure to the Bob who sees it and to the Bob who didn't. The Bob who didn't find it would further split when he compares notes with Charlie and John; perhaps this further split would double his measure. If so, then in the end the chance that Bob found the particle is 1/3, but in the meantime it is 1/2. This kind of variation of probability over time is characteristic of situations in which measure of consciousness isn't conserved.
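The bookkeeping in that example can be made explicit (illustrative numbers for the hypothetical non-conserving scenario described above, treating effective "probability" as a ratio of observer measures):

```python
# Effective "probability" that Bob found the particle = measure of the
# Bobs who found it, divided by the total measure of all the Bobs.

# Right after the measurement: equal measure for the Bob who found the
# particle and the Bob who didn't, so the chance is 1/2.
found, not_found = 1.0, 1.0
p_before = found / (found + not_found)

# The Bob who didn't find it splits further when he compares notes with
# Charlie and John; suppose that doubles his measure. Measure is not
# conserved, and the chance that Bob found the particle drops to 1/3.
not_found *= 2.0
p_after = found / (found + not_found)

print(p_before, p_after)
```

Nothing here could be expressed with ordinary classical probabilities, which must sum to 1 at every time; it only makes sense for measure ratios.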
Another example of where Vaidman went wrong is to consider Albert's fatness measure.
http://onqm.blogspot.com/2009/09/early-attempts-to-derive-born-rule-in.html
Suppose that if Bob finds the particle, he'll celebrate by eating a pizza. In that case, with the fatness measure, the "probability" that Bob found the particle becomes larger than the Born Rule predicts. In this case, Fat Bob has higher measure than lean Charlie even if they are in the same world, in the same room, talking to each other. The "probability" that Bob found the particle is then NOT equal to the "probability" that the Bob that Charlie interacts with is the one that found the particle. That would make no sense with actual classical probabilities, but is perfectly OK for effective "probabilities" which are really just measure ratios.
http://onqm.blogspot.com/2009/09/measure-of-consciousness-versus.html
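A quick calculation with made-up masses shows how the fatness measure skews the numbers away from the Born value (the masses are purely illustrative; the rule weights each branch by squared amplitude times observer mass):

```python
# Albert's hypothetical 'fatness' measure: effective "probability"
# proportional to Born weight times the observer's mass.

born = {"found": 0.5, "not_found": 0.5}      # equal Born weights
mass = {"found": 100.0, "not_found": 70.0}   # Bob eats a pizza if he finds it

weight = {k: born[k] * mass[k] for k in born}
p_found = weight["found"] / sum(weight.values())  # 100/170, above Born's 1/2
print(p_found)
```

So with the fatness measure the "probability" that Bob found the particle exceeds 1/2 even though the Born weights are equal, which is the violation of 'equivalence' discussed above.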
Thanks Mallah, your site is definitely the best when it comes to showing the inadequacy of the Everettian Born Rule programme.
I feel safe in saying that there will be no resolution to this issue, and that "pure wave mechanics" will need supplementation (i.e. new physics) and will most likely end up completely different from Many Worlds altogether!
Have you given up on your own computational MWI?
LOL. The pure wave mechanics might need supplementation, but as long as the wavefunction is one ingredient - and without it or something sufficiently similar you couldn't produce the predictions of QM - there will be many branches, they will give rise to observers, and thus there are many worlds. Anyone who thinks there could be just one world is deluding himself. That's why for example the pilot wave interpretation doesn't work, except for a many-worlds version of it.
http://onqm.blogspot.com/2009/07/why-mwi.html
I know I've been slow in writing up the blog posts on it, but I've not given up at all on the MCI. Perhaps I'd be more motivated to write if more people ask about it :)
I think you are being a bit narrow-minded, to be honest. You still have psi-epistemic options like retrocausality and holographic-illusory-non-locality (read: local determinism). But if you want to waste all your potential on an interpretation of math which you yourself have so eloquently shown to fail (Born Rule), then be my guest.
Good luck!
I haven't shown my own interpretations (plural since I have more than one hypothesis) to fail, obviously, so you are just being a jerk. But the main thing is that I will always report my results honestly. My thinking's been forced to change in the past, and it may well happen again, and again. That's how science works.
I will say that retrocausality has no plausibility. There are some toy models out there but nothing at all impressive.
I'm sorry if I offended you, this was not my intention at all.
Calling me a jerk seems way too personal for a discussion surrounding highly speculative physics.
You state that "retrocausality" has no plausibility, because all you've seen are toy models? What the fuck sort of logic is that?
All you've got out there is toy models of Everett too; there is no rigorous model, and Everett has received 20 times the attention that retrocausality has received. Give Huw Price and the other people working on retrocausal interpretations a few years before you dismiss it out of hand.
Also do not forget 't Hooft, Spekkens, Leifer, Weinberg and others who are working on alternatives.
Since you seem so hellbent on saving the spirit of Everett and Born at the same time, I wonder what your thoughts are on this physicist's "new model".
He claims to derive Born Rule from pure unitary evolution: http://aquantumoftheory.wordpress.com
Also as I see you are considering Bohmian Worlds, then I wonder what you'd think of this new paper: http://arxiv.org/pdf/1208.5632v2.pdf
Q, it sure didn't take long to confirm your jerkishness, now did it? If you wish to post further here, you will watch your language, or I will delete your posts.
If the toy models had attractive features, I wouldn't have said they weren't at all impressive. I'm not going to tell those people not to work on whatever approach catches their interest, but I expect nothing worthwhile to come of it. By contrast, as soon as Everett's original paper came out, though it certainly had problems, it was obvious that the relative state approach was insightful and important, and it did help lead to the modern understanding of decoherence.
re: http://aquantumoftheory.wordpress.com: Even in the arxiv paper http://arxiv.org/abs/1205.0293 he (Tell) does not justify his assumptions.
re: http://arxiv.org/pdf/1208.5632v2.pdf: This is mostly just a reinvention of Continuum Bohmian Mechanics (CBM). He (Boström) adds the assumption that the wavefunction is made of the particle sets (as opposed to there just being both things), which is an attractive idea since there is an unnatural-seeming complexity in having both things, but he does nothing to explain it - and it would indeed need a detailed explanation in terms of a nonlinear interaction between particle sets, especially since phases, not just densities, come into play.
Other than that, the paper is a pretty good overview of CBM, which I do think is a worthwhile approach but not the final answer. However, as such, it suffers greatly from his ignorance of the history of the interpretation, which he thinks is his new invention. Unfortunately I don't know who first proposed CBM or when, but I do remember seeing it discussed in the literature prior to 2000.
See http://onqm.blogspot.com/2009/10/mwi-proposals-that-include.html
Interesting, which of the assumptions in that paper/blog do you think requires further justification? I can't spot any wild assumptions on a first (and second) glance.