Saturday, September 26, 2009

Decision Theory & other approaches to the MWI Born Rule problem, 1999-2009

In the previous post, I explained the early attempts to derive the Born Rule for the MWI. These attempts required assumptions for which no justification was given; as a result, critics of the MWI pointed to the lack of justification for the Born Rule as a major weakness of the interpretation.

MWI supporters often had to resort to simply postulating the Born Rule as an additional law of physics. That is not as good as a derivation, which would be a great advantage for the MWI, but it at least puts the MWI on the same footing as most other interpretations. However, it is by no means clear that it is legitimate to do that, either. Many people think that branch-counting (or some form of observer-counting) must be the basis for probabilities in an MWI, as Graham had suggested. Since branch-counting gives the wrong probabilities (as Graham failed to realize), a critic might argue that experiments (which confirm the Born rule) show the MWI must be false.

Thus, MWI supporters were forced to argue that branch-counting did not, in fact, matter. The MWI still had supporters due to its mathematical simplicity and elegance, but when it came to the Born Rule, it was in a weak position.

In the famous Everett FAQ of 1995, Price cited the old 'infinite measurements frequency operator' argument. That was my own first encounter with the problem of deriving the Born Rule for the MWI, and despite being an MWI supporter, I immediately saw the hole in the infinite-measurements argument: only a finite number of measurements can ever actually be performed.

5) The decision-theoretic approach to deriving the Born Rule

In 1999, David Deutsch created a new approach to deriving the Born Rule for the MWI, based on decision theory. He wrote "Previous attempts ... applied only to infinite sets of measurements (which do not occur in nature), and not to the outcomes of individual measurements (which do). My method is to analyse the behaviour of a rational decision maker who is faced with decisions involving the outcomes of future quantum-mechanical measurements. I shall prove that if he does not assume [the Born Rule], or any other probabilistic postulate, but does believe the rest of quantum theory, he necessarily makes decisions as if [the Born Rule] were true."

Deutsch's approach quickly attracted both supporters and critics. David Wallace came out with a series of papers that defended, simplified and built on the decision theory approach, which is now known as the Deutsch-Wallace approach.

Deutsch's derivation contained an implicit assumption, which Wallace made explicit and called 'measurement neutrality'. Basically, it means that the details of how a measurement is made don't matter. For example, if a second measurement is made along with the first, it is assumed that the probabilities for the outcomes of the first won't be affected. This implies that unitary transformations, which preserve the amplitudes, don't matter. That in turn implies 'equivalence', which states that two branches of equal amplitude have equal probabilities, and which is essentially equivalent to the Born Rule. The Born Rule is then derived from 'equivalence' using simple assumptions cast in the language of decision theory.
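
For concreteness, 'equivalence' can be stated as follows (notation mine): for a superposition of two macroscopically distinct branches,

\[
|\psi\rangle = \alpha\,|A\rangle + \beta\,|B\rangle, \qquad |\alpha| = |\beta| \;\Longrightarrow\; \Pr(A) = \Pr(B).
\]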

Wallace acknowledged that 'measurement neutrality' was controversial, admitting "The reasons why we treat the state/observable description as complete are not independent of the quantum probability rule." Indeed, if probabilities depend on something other than amplitudes, then clearly they can change under unitary transformations.

So he offered a direct defense of the 'equivalence' assumption itself; the resulting paper was for a long time considered the best statement of the DW approach, certainly as of the 2007 conferences. New Scientist magazine proclaimed that his derivation of the Born Rule in the MWI was "rigorous" and was forcing people to take the MWI seriously.

His basic argument was that things that the person making a decision doesn't care about won't matter. This included the number of sub-branches, but he also took care to argue that the number of sub-branches can't matter because it is not well-defined.

Consider Albert's hypothetical fatness rule, in which probabilities are proportional both to the squared amplitudes and to the observer's mass. This obviously violates 'equivalence'. According to Wallace's argument, the decider should ignore his mass because it doesn't enter into the decision, so such a rule is impossible. But the argument is circular; the decider should care about his mass if it does in fact affect the probabilities.

My critique of Wallace's approach is presented in more detail here, where I also cover his more recent paper.

In his 2009 paper, Wallace takes a different approach. Perhaps recognizing that assuming 'equivalence' is practically the same as just assuming the Born Rule, he makes some other assumptions instead, couched in the language of decision theory, which allow him to derive 'equivalence'. The crucial new assumption is what he calls 'diachronic consistency'. In addition to consistency of desires over time, it contains the assumption that measure is conserved as a function of time, for which no justification is given. Of course, the classical version of diachronic consistency is unproblematic, and only a very careful reading of the paper would reveal the important difference, were it not for the fact that Wallace helpfully notes that Albert's fatness rule violates it.

6) Zurek's envariance

W. Zurek attempted to derive the Born Rule using symmetries that he called 'envariance', or environment-assisted invariance. While interesting, his assumptions are not justified. The most important assumption is that all parts of a branch, and all observers in a branch, have the same "probability". Albert's fatness rule provides an obvious counterexample. I also note that a substate with no observers in it can not meaningfully be assigned any effective probability.

He uses this, together with another unjustified assumption that is similar to locality of probabilities, to obtain what Wallace called 'equivalence' and then the Born Rule from that. Because the latter part of Zurek's derivation is similar to the DW approach, the two approaches are sometimes considered similar, although Zurek does not invoke decision theory.

7) Hanson's Mangled Worlds

Robin Hanson came up with a radical new attempt to derive the Born Rule in 2003. It was similar to Graham's old world-counting proposal in that Hanson proposed to count sub-branches of the wavefunction as the basis for the probabilities.

The new element Hanson proposed was that the dynamics of sub-branches of small amplitude would be ruined, or 'mangled', by interference from larger sub-branches of the wavefunction. Thus, rather than simply count sub-branches, he would count only the ones with large enough amplitude to escape the 'mangling'.

Due to microscopic scattering events, the squared amplitudes of sub-branches undergo a multiplicative random walk, so they acquire an approximately log-normal distribution. Interference ('mangling') from large-amplitude branches imposes a minimum amplitude cutoff. If the cutoff is in the right numerical range and is uniform for all branches, then, due to the mathematical form of the log-normal distribution, the number of branches above the cutoff is proportional to the square of the original amplitude, yielding the Born Rule.
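
The counting claim can be checked numerically. The following toy sketch (my construction with assumed parameters, not Hanson's code) treats sub-branch log squared amplitudes as Gaussian after N multiplicative splits and scans for a uniform cutoff at which the surviving counts of two parent branches stand in the Born ratio:

```python
import math

# Toy check of the Mangled Worlds counting claim (my construction, not Hanson's).
# After N multiplicative splitting events, the count-weighted distribution of a
# parent branch's sub-branch log squared amplitudes is roughly Normal with
#   mean = log(w) + N*mu,   std = sigma*sqrt(N),
# where w is the parent's squared amplitude and mu < 0, sigma describe one split.
N, mu, sigma = 400, -0.7, 0.5          # assumed toy parameters
s = sigma * math.sqrt(N)

def count_above(cutoff, w):
    """Expected number of the 2^N sub-branches above the amplitude cutoff."""
    z = (cutoff - (math.log(w) + N * mu)) / s
    return 2.0 ** N * 0.5 * math.erfc(z / math.sqrt(2))

w1, w2 = 0.9, 0.1                      # parent squared amplitudes; Born ratio 9
for k in range(1, 15):                 # scan uniform cutoffs across the tail
    c = math.log(w1) + N * mu + k * s
    ratio = count_above(c, w1) / count_above(c, w2)
    print(f"cutoff {k:2d} sigma into the tail: count ratio = {ratio:6.2f}")
# The ratio sweeps upward from ~1 and passes through 9 only near one special
# cutoff; the Born Rule emerges only if the (uniform) cutoff happens to sit
# there, which is why the uniformity assumption carries all the weight.
```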

Unfortunately, this Mangled Worlds picture relies on many highly dubious assumptions; most importantly, the uniformity of the ‘mangling’ cutoff. Branches will not interfere much with other branches unless they are very similar, so there will be no uniformity; small-amplitude main branches will have smaller sub-branches but also smaller interference from large main branches and thus a smaller cutoff.

Even aside from that, while the idea of branch-counting has some appeal, it is clear that observer-counting (with computationalism, implementation-counting) is what is fundamentally of interest. Nonetheless, 'Mangled Worlds' is an interesting proposal, and is the inspiration for a possible approach to attempt to count implementations of computations for the MWI, which will be discussed in more detail in later posts. That does require some new physics though, in the form of random noise in the initial conditions which acts to provide the uniform cutoff scale that is otherwise not present.

In the next post, proposals for MWIs that include modifications of physics will be discussed.

Wednesday, September 23, 2009

Early attempts to derive the Born Rule in the MWI

When Everett wrote his thesis in 1957 on the '"Relative State" Formulation of Quantum Mechanics', he certainly needed to address how the Born Rule probabilities fit into his new interpretation of QM. While the MWI remains provocative even today, it was not taken seriously in 1957 except by a few people, to the extent that Everett had to call it "Relative State" rather than "Many Worlds". So it is perhaps fortunate that he did not realize the true challenges of fitting the Born Rule into the MWI, which could have derailed his paper. Instead, he came up with a short derivation of the Born Rule, using assumptions that he did not realize lacked justification.

Of course, the Born Rule issue has long since returned to haunt the MWI. Historically, what has happened several times was that a derivation of the Born Rule that seemed plausible to MWI supporters was produced, but soon it attracted critics. After a few years it became clear to most physicists that the critics were right, and the MWI fell into disrespect until a new justification for the Born Rule was produced. This cycle continues today, with the decision-theoretic Deutsch-Wallace approach being considered the best by many, and now attracting growing (and deserved) criticism.

When considering claimed derivations of the Born Rule in the MWI, it is often useful to keep in mind an 'alternative rule' that is being ruled out, and to question the justification for doing so. Two useful ones are as follows:

a) The unification rule: All observations that exist have the same measure. In this case, branch amplitudes don't matter, as long as they are nonzero (and they always are, in practice).

b) David Albert's fatness rule: The measure of an observer is proportional to the squared amplitude (of the branch he's on) multiplied by his mass. Here, amplitudes matter, but so does something else. This one is especially interesting because it illustrates that not all observers necessarily have the same measure, even if they are on the same branch of the wavefunction. While it is obviously implausible, it's a useful stand-in for other possibilities that may seem more justifiable, such as using the number of neurons in the observer's brain instead of his mass, or any other detail of the wavefunction. (See the sketch after this list.)
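
To make the contrast concrete, here is a minimal sketch (toy numbers mine) of how the three rules assign measures to a two-branch state with one observer per branch:

```python
# A toy comparison (numbers mine) of the three candidate rules on a
# two-branch state with amplitudes a_A, a_B and one observer per branch.
amp  = {"A": 0.8, "B": 0.6}            # branch amplitudes (0.64 + 0.36 = 1)
mass = {"A": 70.0, "B": 140.0}         # hypothetical observer masses

rules = {
    "Born":        {b: amp[b] ** 2 for b in amp},
    "unification": {b: 1.0 for b in amp if amp[b] != 0},  # any nonzero branch
    "fatness":     {b: amp[b] ** 2 * mass[b] for b in amp},
}

def normalized(measures):
    total = sum(measures.values())
    return {b: m / total for b, m in measures.items()}

for name, measures in rules.items():
    print(name, normalized(measures))
# Born: A 0.64 / B 0.36; unification: 0.5 / 0.5; fatness: A 0.47 / B 0.53
```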

Another useful thing to keep in mind is the possibility of a modified counterpart to quantum mechanics, in which squared amplitude would not be a conserved quantity. We would expect that the Born Rule might no longer hold, but some other rule should, even in the absence of conserved quantities. Presumably, if the modification is small, so would be any departure from the Born Rule. Thus, one should not think that conserved quantities must have any special a priori importance without which no measure distribution is possible.

Let us examine a few of the early attempts to derive the Born Rule within the MWI:

1) Everett's original recipe

In Everett's 1957 paper, he models an observer in a fairly simple way, considering only a set of memory elements. This is a sort of rough approximation of a computational model, but without the dynamics (which are crucial for a well-defined account of computation). Thus, Everett was a visionary pioneer in applying computationalist thinking to quantum mechanics, but he never confronted the complexity of what would be required to do a satisfactory job of it.

He assumed that the measure of a branch would be a function of its amplitude only, and thus would not depend on the specific nature of that branch. This is a very strong assumption, and arguably contains his next assumption as a special case already. [A more general approach would allow other properties to be considered, such as in Albert's fatness rule.]

[Note: Everett's use of the term 'measure' is not stated to refer specifically to the amount of consciousness, but in this context, the role it plays is essentially the same as if it did. Some authors use 'measure of existence' to specifically mean the squared amplitude by definition; obviously Everett did not, since he wanted to prove that his measure was equal to the squared amplitude. I recommend avoiding overly suggestive terms (like 'weight') for the squared amplitude.]

Next, he assumed that measure is 'additive' in the sense that if two orthogonal branches are in superposition, they can be regarded as a single branch, and the same function of amplitude must give the same total measure in either case.

If the definition of a 'branch' is arbitrary in allowing combinations of orthogonal components, the 'additivity' assumption makes sense, since it means that it does not matter how the branches are considered to be divided up into orthogonal components. [An argument similar to that would be presented years later in Wallace's 2005 paper, in which Wallace defended the assumption of 'equivalence' (branches of equal amplitude must have equal measure) against the idea of sub-branch-counting, based on the impossibility of defining the specific number of sub-branches. Everett did not get into such detail.]

With the previous assumption, 'additivity' would only hold if the measure is proportional to the squared amplitude; thus, he concluded that the Born Rule holds.
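
Spelled out (in my notation, not Everett's), additivity requires

\[
f(a) + f(b) = f\!\left(\sqrt{|a|^2 + |b|^2}\right)
\]

for all amplitudes a and b, since the combined branch has amplitude \(\sqrt{|a|^2 + |b|^2}\). Writing \(g(x) = f(\sqrt{x})\) turns this into \(g(|a|^2) + g(|b|^2) = g(|a|^2 + |b|^2)\), whose well-behaved solutions are linear, \(g(x) = kx\); hence \(f(a) = k|a|^2\).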

Everett considered the additivity requirement equivalent to saying that measure is conserved; thus, when a branch splits into two branches, the sum of the new measures is equal to the measure of the original branch. He gave no justification for the conservation of measure, perhaps considering it self-evident.

In classical mechanics, conservation of probability is self-evident because the probability just indicates something about what state the single system is likely to be in. If the probabilities summed to 2, for example, a single system couldn't explain it; perhaps there would have to be 2 copies instead of one. Yet the existence of multiple copies is precisely what the MWI of QM describes, and in this case, there is no a priori reason to believe that the total measure can not change over time.

Everett's attempted derivation of the Born Rule is not considered satisfactory even by other supporters of the MWI, because he did not justify his assumptions. Soon, other attempts to explain the probabilities emerged.

2) Gleason's Theorem

Also discovered in 1957, Gleason's theorem shows that if probabilities are non-contextual, meaning that the probability of a term in the superposition does not depend on what other terms are in the superposition, then the only formula which could give the probabilities is based on squared expansion coefficients. It is straightforward to argue that the correct expansion to use is that for the current wavefunction; thus, these coefficients are the amplitudes, which gives Born's Rule.

Unfortunately, there is no known justification for assuming non-contextuality of the probabilities. If measure is not conserved, the probabilities can not generally be noncontextual. Gleason's theorem is sometimes cited in attempts to show that the MWI yields the Born Rule, but it is not a popular approach since usually those attempts make (unjustified) assumptions which are strong enough to select the Born Rule without having to rely on the more complicated math required to prove Gleason's theorem.
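
For reference, the theorem itself says (for Hilbert spaces of dimension at least 3) that any non-contextual probability assignment over projections P, additive over mutually orthogonal projections, must take the form

\[
\mu(P) = \mathrm{Tr}(\rho P)
\]

for some density operator \(\rho\); for a pure state \(\rho = |\psi\rangle\langle\psi|\), this reduces to the squared amplitudes.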

3) The infinite-measurements limit and its frequency operator

The frequency operator is the operator associated with the observable that is the number of cases in a series of experiments in which a particular result occurs, divided by the total number of experiments. If it is assumed that just the frequency itself is measured, and if the limit of the number of experiments is taken to infinity, the eigenvalue of this frequency operator is unique and equal to the Born Rule probability. The quantum system is then left in the eigenstate with that frequency; all other terms have zero amplitude, as shown by Finkelstein (1963) and Hartle (1968).
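
In symbols (my paraphrase of the standard construction): with \(|\psi\rangle = a\,|\mathrm{up}\rangle + b\,|\mathrm{down}\rangle\) and \(P_k\) projecting the k-th of N identically prepared systems onto \(|\mathrm{up}\rangle\),

\[
F_N = \frac{1}{N}\sum_{k=1}^{N} P_k, \qquad
\left\| \left( F_N - |a|^2 \right) |\psi\rangle^{\otimes N} \right\| \to 0 \quad (N \to \infty),
\]

so only in the strict limit is the state an eigenstate of the frequency operator, with eigenvalue \(|a|^2\).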

This scheme is irrelevant for two reasons. First, an infinite number of experiments can never be performed. As a result, terms of all possible frequencies remain in the superposition. Unless the Born Rule is assumed, there is no reason to discard branches of small amplitude. Assuming that they just disappear is equivalent to assuming collapse of the wavefunction.

Second, in real experiments, individual outcomes are recorded as well as the overall frequency. As a result, there are many branches with the same frequency, and the amplitude of any one branch tends towards zero as the number of experiments is increased. If one discards branches that approach zero amplitude in the limit of infinite experiments, then all branches should be discarded. Furthermore, prior to taking the infinite limit, the very largest individual branch is the one in which the highest-amplitude outcome occurred in every individual experiment, if there is one.
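
To make both points concrete, here is a small finite-N check (my own illustration, with toy numbers):

```python
import math

# A small finite-N check (my own illustration) of both objections. Each trial
# has an outcome with squared amplitude a2 ('up') or b2 ('down'); N trials.
a2, N = 0.7, 100                       # toy values
b2 = 1.0 - a2

def frequency_weight(k):
    """Total squared amplitude of all branches with k 'up' results."""
    return math.comb(N, k) * a2**k * b2**(N - k)

# (1) Branches of every frequency k/N persist with nonzero total weight;
#     nothing actually disappears at finite N.
print("weight at frequency 0:", frequency_weight(0))   # tiny but nonzero

# (2) Any SINGLE branch (a definite outcome sequence with k 'up' results) has
#     squared amplitude a2**k * b2**(N-k), which -> 0 as N grows for every k.
#     Even the largest single branch (all 'up', since a2 > b2) vanishes:
print("largest single branch:", a2**N)                 # ~3.2e-16
```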

A more detailed critique of the frequency operator approach is given here. The same basic approach of using infinite ensembles of measurements has been taken recently by certain Japanese physicists, Tanaka (who seems unaware of Hartle's work) and (separately) Wada. Their work contains no significant improvements on the old, failed approach.

4) Graham's branch counting

Neil Graham came out with a paper in 1973 that appears in the book "The Many Worlds Interpretation of Quantum Mechanics" along with Everett's papers and others.

Graham claimed that the actual number of fine-grained branches is proportional to the total squared amplitude of a coarse-grained, macroscopically defined branch. Such sub-branches would be produced by splits due to microscopic scattering events and so on, which act as natural analogues of measurements.

If it were true, it could also begin to give some insight into why the Born Rule would be true, beyond just a mathematical proof; that is, each fine-grained branch would presumably support the same number of copies of the observer. (That assumption would still need to be explained, of course.)

Unfortunately, and even aside from the lack of precise definition for fine-grained branches, he failed to justify his statistical claims, which stand in contradiction to straightforward counting of outcomes. He simply assumed that fine-grained branches would on average have equal amplitudes regardless of the amplitude of the macroscopic branch that they split from.

In the next post, the more recent attempts (other than my own) to derive the Born Rule within the MWI will be described.

Monday, September 21, 2009

Why 'Quantum Immortality' is false

In the previous posts, I explained that effective 'probabilities' in an MWI are proportional to the amount (measure) of consciousness that sees the various outcomes. Because this measure need not be a conserved quantity, this can lead to nonclassical selection effects, with 'probabilities' for a given outcome still changing as a function of time even after the outcomes have been observed and recorded. That can lead to an illusion of nonlocality, which can only be properly understood by thinking in terms of the measures directly, as opposed to thinking only in terms of 'probabilities'.

The most extreme example in which it is crucial to think in terms of the measures, rather than 'probabilities' only, is the so-called 'Quantum Suicide' (QS) experiment. Failure to realize this leads to a literally dangerous misunderstanding. The issue is explained at length in my eprint "Many-Worlds Interpretations Can Not Imply 'Quantum Immortality'".

The idea of QS is as follows: Suppose Bob plays Russian Roulette, but instead of using a classical revolver chamber to determine if he lives or dies, he uses a quantum process. In the MWI, there will be branches in which he lives, and branches in which he dies. The QS fallacy is that, as far as he is concerned, he will simply find himself to survive with no ill effects, and that the experiment is therefore harmless to him.

A common variation is for him to arrange a bet, such that he gets rich in the surviving branches only, which would thus seem to benefit him. Of course in the branches where he does not survive, his friends will be upset, and this is often cited as the main reason for not doing the experiment.

That it is a fallacy can be seen in several ways. Most basically, the removal of copies of Bob in some branches does nothing to benefit the copies in the surviving branches; they would have existed anyway. Their measure is no larger than it would have been without the QS - no extra consciousness magically flows into the surviving branches, while the measure in the dead branches is removed. If our utility function states that more human life is a good thing, then clearly the overall measure reduction is bad, just as killing your twin would be bad in a classical case.

It is true that the effective probability (conditional on Bob making an observation after the QS event) of the surviving branches becomes 1. That is what creates the QS confusion; in fact, it leads to the fallacy of "Quantum Immortality" - the belief that since there are some branches in which you will always survive, then for practical purposes you are immortal.

But such a conditional effective probability being 1 is not at all the same as saying that the probability that Bob will survive is 1. Effective probability is simply a ratio of measures, and while it often plays the role we would expect a probability to play, this is not a case in which such an assumption is justified.

We can get at what does correspond for practical purposes to the concept of 'the probability that Bob will survive' in a few equivalent ways. In a case of causal differentiation, it is simple: the fraction of copies that survive is the probability we want, since the initial copy of Bob is effectively a randomly chosen one.

A more general argument is as follows: Suppose Bob makes an observation at 12:00, undergoes a QS event with a 50% survival chance at 12:30, and his surviving copies make an observation at 1:00. Given that Bob is observing at either 12:00 or 1:00, what is the effective probability that it is 12:00? (Perhaps he forgets the time, and wants to guess it in advance of looking at a clock, so that the Reflection Argument can be used here.) The answer is the ratio of the measure of observations at 12:00 to the total measure at both times, which is therefore 2/3.

That is just what we would expect if Bob had a 50% chance to survive the QS: Since there are twice as many copies at 12:00 compared to 1:00, he is twice as likely to make the observation at 12:00.
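
In numbers: if Bob's measure at 12:00 is m, his surviving measure at 1:00 is m/2, so

\[
\Pr(\text{12:00}) = \frac{m}{m + m/2} = \frac{2}{3}, \qquad \Pr(\text{1:00}) = \frac{1}{3}.
\]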

Most of your observations will be made in the span of your normal lifetime. Thus QI is a fallacy; for practical purposes, people are just as mortal in the MWI as in classical models.

It's worth mentioning another argument against a person's measure being constant:

1) "MWI immortality" believers typically think that a person's total amount of consciousness does not change even if their quantum amplitude changes, while I argue that the contrary is true.

2) In the MWI, there are definitely some (very small but nonzero) amplitudes for branches that contain Boltzmann brains (brains formed by uncoordinated processes such as thermal fluctuations) very early on. The exact amplitudes are irrelevant to the point being made.

3) Once a Boltzmann brain that matches yours has some amplitude, you start to exist. It's true that evolution, much later, will also cause _much larger amplitude_ branches to also contain versions of you. But if the belief described in point #1 were true, that would _not_ mean that your amount of consciousness increased. Thus, you would still be on even footing with the other Boltzmann brains. That is not plausible, so the immortality belief is not plausible.

Next up: Early attempts to derive the Born Rule in the MWI

Wednesday, September 16, 2009

Measure of Consciousness versus Probability

In the last post, Meaning of Probability in an MWI, it was explained that in a deterministic Many-Worlds model, with known initial conditions, that which plays the role of a probability for practical purposes is the ratio

(the measure (amount) of consciousness which sees a given outcome)
/ (the total measure summed over outcomes)

I call that the effective probability of the outcome.

Although the effective probability is quite similar to what we normally think of as a probability in terms of its practical uses, there are also important differences, which will be explored here.

The most important differences stem from the fact that measure of consciousness need not be a conserved quantity. By definition, probabilities sum to 1, but that is not all there is to it. In a traditional, single-world model, a transfer of probability indicates causality, while the total measure remains constant over time. This is not necessarily so in a MW model.

For example, suppose there are two branches, A and B. A has 10 observers at all times. B starts off with 5 observers at T0, which increases to 10 observers at T1 and to 20 observers at T2. All observers have the same measure, and observe which branch they are in.

So the effective probability of A starts off at 2/3 at T0, while the effective probability of B is 1/3. At T1, A and B have effective probabilities of 1/2 each. At T2, the effective probability of A is 1/3 and that of B is 2/3.
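
In code, with the numbers from this example (the function name is mine):

```python
# A minimal sketch of the example above (names mine).
measures = {                           # measure (observer count) per branch
    "T0": {"A": 10, "B": 5},
    "T1": {"A": 10, "B": 10},
    "T2": {"A": 10, "B": 20},
}

def effective_probability(branches):
    total = sum(branches.values())     # total measure summed over outcomes
    return {b: m / total for b, m in branches.items()}

for t in ("T0", "T1", "T2"):
    print(t, effective_probability(measures[t]))
# T0 {'A': 0.67, 'B': 0.33}   T1 {'A': 0.5, 'B': 0.5}   T2 {'A': 0.33, 'B': 0.67}
```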

There are two important effects here. First, the effective probability of B increased with time. In a single-world situation, that would mean that a system which was actually in A was more likely to change over to B as time passes. But in this MW model, there is no transfer of systems, just changes in B itself.

This means that probability changes that would require nonlocality in a single-world model don't necessarily mean nonlocality in a MW model. If A is localized at X1, and B is localized at X2 which is a light-year away, there need not be a year's delay before the effective probability of B suddenly increases.

In a single-world local hidden variable model, probability must be locally conserved, so that the change of probability in a region is equal to the transitions into and out of adjacent regions only. This need not be so in an MW model.

The second important effect of nonconservation of measure in a MW model is that total measure changes as a function of time. Observers can measure, not only what branch they are on, but also what time it is. They will be more likely to observe times with higher measure than with lower measure, just as with any other kind of observation.

A good example of this is a model proposed by Michael Weissman - a modification of physics designed to make world-counting yield the Born Rule. His scheme involved sudden splitting of existing worlds into proposed new degrees of freedom, with a higher rate of such splitting events for higher amplitude worlds. The problem with it is that if new worlds are constantly being produced, then the number of observers would be growing exponentially. The probability of future observations, as far into the future as possible, would be much greater than that of our current observations. Thus, the scheme must be false unless we are highly atypical observers, which is highly unlikely.
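
A toy version of the objection (growth factor and horizon are my own illustrative numbers):

```python
# Toy version of the typicality objection: if total measure grows by factor g
# per epoch, the fraction of all observer-moments occurring at or before any
# fixed early epoch collapses toward zero, so observations like ours (made
# early in the history) would be wildly atypical.
g, T = 2.0, 50                         # assumed growth factor, epochs observed
measure = [g ** t for t in range(T)]
fraction_early = sum(measure[:11]) / sum(measure)
print(f"fraction of observer-moments at t <= 10: {fraction_early:.2e}")  # ~2e-12
```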

Edit (2/2/16): See however this post. If the SIA is correct, the above argument against Weissman's idea fails, since the SIA gives extra likelihood to theories with more observers, exactly cancelling out the effect of reducing the fraction of observers which have observations like ours. However, as discussed in that post, I don't think the SIA is the right thing to use for comparing MWIs.

It is important to realize that since changes in measure mean changes in the number of observers, decreases in measure are undesirable. This will be discussed further in the next post.

Friday, September 11, 2009

Meaning of Probability in an MWI

The quantitative problem of whether the Born Rule for quantum probabilities is consistent with the many-worlds interpretation is the key issue for interpretation of QM. Before addressing that, it is important to understand in general what probabilities mean in a many-worlds situation, because ideas from single-world thinking can lead to unjustified assumptions regarding how the probabilities must behave. Many failed attempts to derive the Born Rule make that mistake.

The issue of what probabilities mean in a Many-Worlds model is covered in greatest detail in my eprint "Many-Worlds Interpretations Can Not Imply 'Quantum Immortality'". Certain work by Hilary Greaves is directly relevant.

First, note that for a single-world, deterministic model, such as classical mechanics provides, probabilities are subjective. The classic example is tossing a coin: the outcome will depend deterministically on initial conditions, but since we don't know the details, we have to assign a subjective probability to each outcome. This may be 50%, or it may be different, depending on other information we may have such as the coin's weight distribution or a historical record of outcomes. Bayes' rule is used to update prior probabilities to reflect new information that we have.

In such a model, consciousness comes into play in a fairly trivial way: As long as we register the outcome correctly, our experienced outcome will be whatever the actual outcome was. Thus, if we are crazy and always see a coin as being heads up, then the probability that we see "up" is 100%. Physics must explain this, but the explanation will be grounded in details of our brain defects, not in the physics of coin trajectories.

By contrast, in any normal situation, the probability that we see "up" is simply equal to the probability that the coin lands face up. [Even this is really nontrivial: it means that randomly occurring "Boltzmann brains" are not as common as "normal people". As we will see, if we believe in computationalism, it also means that rocks don't compute everything that brains do, which is nontrivial to prove.]

In a many-worlds situation, it may still be the case that we don't know the initial conditions. However, even if we do know the initial conditions, as we do for many simple quantum systems, there would still be more than one outcome and there is some distribution of observers that see those outcomes.

Assume that we do know the initial conditions. The question of interest becomes (roughly speaking): 'What is the probability of being among the observers that see a particular given outcome?'

It is important to note that in a many-worlds situation, the total number of observers might vary with time, which can lead to observer selection effects not seen in single-world situations. Because of this, the fundamental quantity of interest is not probability as such, but rather the number, or quantity, of observers that see each outcome. The amount of conscious observers that see a given outcome will be called the measure (of consciousness) for that outcome.

In a deterministic MWI with known initial conditions, it will be seen that what plays the role of the “probability” of a given observation in various situations relates to the commonness of that observation among observers.

Define the 'effective probability' for a given outcome as (the measure of observers that see a given outcome) divided by (the total measure summed over observed outcomes).

1) The Reflection Argument

When a measurement has already been performed, but the result has not yet been revealed to the experimenter, he has subjective uncertainty as to which outcome occurred in the branch of the wavefunction that he is in.

He must assign some subjective probabilities to his expectations of seeing each outcome when the result is revealed. He should set these equal to the effective probabilities. For example, if 2/3 of his copies (or measure) will see outcome A while the other 1/3 see B, he should assign a subjective probability to A of 2/3.

Why? Because that way, the amount of consciousness seeing each outcome will be proportional to its subjective probability, just as one would expect on average for many trials with a regular probability.

See Why do Anthropic Arguments work? for more details.

2) Theory Confirmation

It may be that an experimental outcome is already known, but the person does not know what situation produced it. For example, suppose a spin is measured and the result is either “up” or “down”. The probability of each outcome depends on the angle that the preparation apparatus is set to. There are two possible preparation angles; angle A gives a 90% effective probability for spin up, while angle B gives 10%. Bob knows that the result is “up”, but he does not know the preparation angle.

In this case, he will probably guess that the preparation angle was A. In general, Bayesian updating should be used to relate his prior subjective probabilities for the preparation angle to take the measured outcome into account. For the conditional probability that he should use for outcome “up” given angle A, he should use the effective probability of seeing “up” given angle A, and so on.

This procedure is justified on the basis that most observers (the greatest amount of conscious measure) who use it will get the right answer. Thus, if the preparation angle really was B, then only 10% of Bob’s measure would experience the guess that A is more likely, while the other 90% would see a “down” result and correctly guess that B is more likely.
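
A minimal sketch of this update in code (the numbers follow the example above):

```python
# Bayesian update over preparation angles, using effective probabilities as
# the conditional probabilities (numbers from the example above).
prior = {"A": 0.5, "B": 0.5}           # subjective prior over preparation angles
p_up  = {"A": 0.9, "B": 0.1}           # effective probability of "up" per angle

def update(prior, p_up, saw_up=True):
    unnorm = {h: prior[h] * (p_up[h] if saw_up else 1 - p_up[h]) for h in prior}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

print(update(prior, p_up))             # {'A': 0.9, 'B': 0.1}: guess angle A
```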

3) Causal Differentiation

It may be the case that some copies of a person have the ability to affect particular future events such as the fate of particular copies of the future person. The observer does not know which copy he is. Pure Causal Differentiation situations are the most similar to classical single-world situations, since there is genuine ignorance about the future, and normal decision theory applies. Effective probabilities here are equal to subjective probabilities just like in the Reflection Argument.

4) Caring Coefficients

As opposed to Causal Differentiation, which may not apply to the standard MWI, the most standard way to think of what happens to a person when a “split” occurs is that of personal fission. Perhaps this is the most interesting case when an experiment has not yet been performed. Decision theory comes into play here: In a single-world case, one would make a decision so as to maximize the average utility, where the probabilities are used to find the average. What is the Many-Worlds analogue?

If it is a deterministic situation and the decider knows the initial conditions, including his own place in the situation, it is important to note that he should not use some bastardized form of ‘decision theory in the presence of subjective uncertainty’ for this case. It is a case in which the decider would know all of the facts, and only his decision selects what the future will be among the options he has. He must maximize, not a probability-weighted average utility, but simply the actual utility for the decision that is chosen.

Rationality does not constrain utility functions, so at first glance it might seem that the decider’s utility function might have little to do with the effective probabilities. However, as products of Darwinian evolution and members of the human species, many people have common features among their utility functions. The feature that is important here is that of “the most good for the most people”. Typically, the decider will want his future ‘copies’ to be happy, and the more of them are happy the better.

In principle he may care about whether the copies all see the same thing or if they see different things, but in practice, most believers in the MWI would tend to adopt a utility function that is linear in the measures of each branch outcome:

U_total = Σ_i Σ_p m_ip[Choice] q_ip

where i labels the branch, p denotes the different people and other things in each branch, m_ip is the measure of consciousness of person (or animal) p which sees outcome i, and is a function of the Choice that the decider will make, and q_ip is the decider’s utility per unit measure (quality-of-life factor) for that outcome for that person.

The measures here can be called “caring measures” since the decider cares about the quality of life in each branch in proportion to them.

Utility here is linear in the measures. For cases in which measure is conserved over time, this is equivalent to adopting a utility function which is linear in the effective probabilities, which would then differ from the measures by only a constant factor. In such a case, effective probabilities are used to find the average utility in the same way that actual probabilities would have been used in a single-world model in which one outcome occurs randomly.
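
As a minimal sketch (with toy numbers of my own), a decision under this utility function just computes U_total for each available choice and takes the maximum; no probability-weighted averaging is involved:

```python
# A minimal sketch (toy numbers mine) of a decision using caring measures:
# U_total = sum over branches i and persons p of m_ip[Choice] * q_ip.
choices = {                            # per choice: list of (m_ip, q_ip) pairs
    "experiment": [(0.9, 10.0), (0.1, -50.0)],
    "abstain":    [(1.0, 1.0)],
}

def total_utility(branches):
    return sum(m * q for m, q in branches)

utilities = {c: total_utility(b) for c, b in choices.items()}
best = max(utilities, key=utilities.get)
print(utilities, "-> choose", best)    # experiment: 4.0, abstain: 1.0
```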

Next: Measure of Consciousness versus Probability

Monday, September 7, 2009

Interlude: The 2007 Perimeter Institute conference Many Worlds @ 50

As explained in the previous post, I had long been anticipating a conference on the MWI in 2007, and attended the Perimeter Institute conference Many Worlds at 50, armed with a copy of my then-new eprint on the Many Computations Interpretation.

When I arrived at my hotel the night before the conference, an older couple was checking in at the same time as I was. Someone asked the clerk for directions to the Perimeter Institute. It turned out that this couple was also attending the conference, and they were a couple of the friendliest and most interesting people I met there.

George Pugh had worked with Hugh Everett (founder of the MWI) at a defense contractor, Lambda Corp. (The work Everett did there is not so famous as his MWI but was actually important during the Cold War.) George and his impressive wife Mary had talked about the MWI with Everett himself, and they support it. They asked me which side I was on, as both pro- and con- people were attending the conference. I told them I was in favor of the MWI. They liked to hear that. We ended up having meals together on several occasions over the course of the conference.

The conference itself consisted mostly of lectures in a classroom-like atmosphere, followed by questions from the audience. Appropriately, most of the talks focused on the question of probability in the MWI.

However, and unfortunately, they mainly focused on the attempt to derive the Born Rule from decision-theoretic considerations. That approach was proposed by David Deutsch in 1999, and further developed by Simon Saunders and especially by David Wallace. Saunders and Wallace gave talks that mainly reiterated what is in their papers. There were also talks that (correctly, though of course this was not accepted by Wallace's supporters) pointed out the failures of that approach, such as those by Adrian Kent and David Albert.

The only other approach to the Born Rule that was presented at a talk was that of W. Zurek, who talked about his (equally fallacious) 'envariance' approach. Most people seemed to agree that Zurek's approach was similar to Wallace's. There was little discussion of it beyond that. When Zurek was asked about Wallace's approach during an informal discussion, he basically said that he didn't know if Wallace's approach was also correct, but he didn't seem to think it mattered much, because his own approach showed that the Born Rule followed from the MWI. When I tried to point out to him why his approach fails - a task made all the more difficult by his somewhat intimidating large physical presence and lion-like bearded appearance - he didn't understand my point and soon ended the conversation.

Max Tegmark was a speaker, and he briefly discussed his hierarchy of many-worlds types, up to the Everything Hypothesis for which he is known.

Besides that, the only other controversy addressed in the talks was the legitimacy and meaning of talking about probability in the deterministic MWI, which is a separate question from the quantitative problem of deriving the Born Rule. This focused on Hilary Greaves' 'caring measure' approach. She is sometimes lumped in with the decision-theoretic approach to the Born Rule, because she uses decision theory in another way, but in fact her ideas are independent of that and are basically correct, though not the full story.

The official speakers were basically divided into two camps: Those MWI-supporters who supported Wallace's attempted derivation of the Born Rule or who were considered allies of it (like Zurek and Greaves), versus those who not only rejected it but also were against the MWI in general (like Kent and Albert). Tegmark was neither but his one talk was largely ignored, and he did not address the Born Rule controversy.

Among the attendees, however, the situation was more complicated. I was not the only one who supported some kind of MWI, and considered understanding the Born Rule to be the key issue of interest, but utterly rejected the approaches to the Born Rule that had been presented. The alternatives that we wanted to discuss involved some form of observer-counting as the basis for probabilities in an MWI, even if it required some new physics. This led to a minor rebellion, in which a few of us tried to talk about our ideas during a lunch period in the room set aside for the conference lunch. The only official speaker that we got any help from was Hilary Greaves. We were able to speak in the lunchroom for a little while, but it didn't get much attention.

There was another young woman by the name of Hillary, I think a physicist studying at the Institute, who also helped us set up the lunchtime discussion.

The 'counter' camp included Michael Weissman, who proposed a modification of physics in order for world-counting to yield the Born Rule. His scheme involved sudden splitting of existing worlds into proposed new degrees of freedom, with a higher rate of such splitting events for higher amplitude worlds. This was interesting, but I was skeptical, and after thinking about it for a while I found the fatal flaw in it. If new worlds were constantly being produced, then the number of observers would be growing exponentially. The probability of future observations, as far into the future as possible, would be much greater than that of our current observations. Thus, the scheme must be false unless we are highly atypical observers, which is highly unlikely. While false, Mike's model serves as a good way to discuss the need for approximate conservation of measure for a successful model. In any case, Mike proved to be a good guy to talk to.

Also among the 'counters' was David Strayhorn, who proposed that an indeterminacy in General Relativity could lead to a Many Worlds model in which spacetime topologies were distributed according to, and formed the basis for, the Born Rule. His ideas did not seem fully developed, and I was skeptical of them as well, but we had interesting discussions.

Another guy with us was Allan Randall. He supports Tegmark's Everything Hypothesis, and is also interested in transhumanism and immortality. As I explained to Allan and to Max Tegmark, I wasn't sure about the Everything Hypothesis, because of the problem of what would determine a unique measure distribution, but I used to support it and still like it. I think it's important and maybe useful. After all, like many supporters of the hypothesis, I discovered a version of it on my own long before I ever heard of Tegmark.

Which brings me to a subject that received little official mention at the conference, the 'Quantum Immortality / Quantum Suicide' fallacy which Tegmark had publicized. This is the belief, which many MWI supporters have come to endorse, that the MWI implies that people always survive because some copies of them survive in branches of the wavefunction. I had always regarded this as the worst form of crackpot thinking, and had hoped to discuss it at the conference as something that MWI supporters must crush before it gets out of hand. My brief discussions about it at the conference convinced me that it was not getting the condemnation that it deserves. This ultimately led me to write my own eprint against it, Many-Worlds Interpretations Can Not Imply 'Quantum Immortality', despite my misgivings that even discussing the subject could give the dangerous idea extra publicity.

I also had interesting discussions with Mark Rubin, who had shown an explicit local formulation of the MWI using the Heisenberg picture, which is something I still need to study more. Mark and I had dinner with the Pughs. I liked the Swiss Chalet restaurant and Canadian beer.

I also happened to run into a friend of mine from NYU, where I got my Ph.D. in physics. Andre is a Russian who came to the US to study, and he had a postdoc at the Perimeter Institute. He's not an MWI supporter or really into interpretation of QM, but he knew that I am, so he was not too surprised that I showed up at the conference. I was lucky to run into him, because the next day he was heading to England for a postdoc there, studying quark-gluon plasmas using the methods he learned from models of string theory. He said he might never return to the US.

All in all, it was certainly an interesting experience. Ultimately, though, it was disappointing because I didn't get to discuss my paper much, and I never was able to have a substantive discussion with the well-known figures in the field who were there to present their own work. It was largely a lecture series rather than an egalitarian discussion group. Some discussion took place on the sidelines, such as at meals, but that was limited in who you happened to be next to. Well-known people mainly talked to each other.

One thing that grew out of the discussions on observer-counting was that a group of us decided to continue the discussion on-line. This led to the creation of the OCQM yahoo group, which included David Strayhorn, Michael Weissman, Allan Randall, Robin Hanson, and myself. Robin had not been at the conference, but he was the originator of the Mangled Worlds approach to the Born Rule, and accepted our invitation to join the group. In practice, however, posts to the group largely came from just David and myself. We all supported some form of observer-counting, but our approaches were quite different. We had some very interesting discussions, and it was a good place to 'think out loud', but ultimately even David's posting to the group petered out and it seems dead at this point.

I gave the Pughs my printed copy of the MCI paper. They were compiling a book in which they would quote various people about why Everett's interpretation of QM was important, so I wrote a few lines for them. Ultimately they decided not to use it though. I think they didn't like my criticism of the current status of the Born Rule in the MWI.
