The issue of what probabilities mean in a Many-Worlds model is covered in greatest detail in my eprint "Many-Worlds Interpretations Can Not Imply 'Quantum Immortality'". Certain work by Hilary Greaves is directly relevant.

First, note that for a single-world, deterministic model, such as classical mechanics provides, probabilities are subjective. The classic example is tossing a coin: the outcome will depend deterministically on initial conditions, but since we don't know the details, we have to assign a subjective probability to each outcome. This may be 50%, or it may be different, depending on other information we may have such as the coin's weight distribution or a historical record of outcomes. Bayes' rule is used to update prior probabilities to reflect new information that we have.
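As a toy illustration of this kind of subjective updating (the specific prior and hypotheses here are invented for the example), Bayes' rule can be applied to a coin whose weight distribution is uncertain:

```python
# Toy Bayesian update for a possibly biased coin (numbers are illustrative).
# Two hypotheses: the coin is fair (P(heads) = 0.5) or weighted (P(heads) = 0.7).

prior = {"fair": 0.5, "weighted": 0.5}      # subjective prior over hypotheses
p_heads = {"fair": 0.5, "weighted": 0.7}    # likelihood of heads under each

# Observe one head; update via Bayes' rule: P(h|D) is proportional to P(D|h) P(h).
unnorm = {h: prior[h] * p_heads[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}

print(posterior)  # the "weighted" hypothesis gains credence after seeing heads
```

Each new toss would repeat the same update, with the old posterior serving as the new prior.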

In such a model, consciousness comes into play in a fairly trivial way: As long as we register the outcome correctly, our experienced outcome will be whatever the actual outcome was. Thus, if we are crazy and always see a coin as being heads up, then the probability that we see "up" is 100%. Physics must explain this, but the explanation will be grounded in details of our brain defects, not in the physics of coin trajectories.

By contrast, in any normal situation, the probability that we see "up" is simply equal to the probability that the coin lands face up. [Even this is really nontrivial: it means that randomly occurring "Boltzmann brains" are not as common as "normal people". As we will see, if we believe in computationalism, it also means that rocks don't compute everything that brains do, which is nontrivial to prove.]

In a many-worlds situation, it may still be the case that we don't know the initial conditions. However, even if we do know the initial conditions, as we do for many simple quantum systems, there would still be more than one outcome and there is some distribution of observers that see those outcomes.

Assume that we do know the initial conditions. The question of interest becomes (roughly speaking): 'What is the probability of being among the observers that see a particular given outcome?'

It is important to note that in a many-worlds situation, the total number of observers might vary with time, which can lead to observer selection effects not seen in single-world situations. Because of this, the fundamental quantity of interest is not probability as such, but rather the number, or quantity, of observers that see each outcome. The number of conscious observers that see a given outcome will be called the *measure* (of consciousness) for that outcome.

In a deterministic MWI with known initial conditions, it will be seen that what plays the role of the “probability” of a given observation in various situations relates to the *commonness* of that observation among observers.

Define the 'effective probability' for a given outcome as (the measure of observers that see a given outcome) divided by (the total measure summed over observed outcomes).
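This definition is just a normalization of the measures, and can be sketched directly (the specific measures below are invented for illustration):

```python
# Effective probability of an outcome = (measure of observers that see it)
# divided by (total measure summed over outcomes). Measures are illustrative:
# here 2/3 of the total measure sees outcome A and 1/3 sees outcome B.

measure = {"A": 2.0, "B": 1.0}
total = sum(measure.values())
effective_prob = {outcome: m / total for outcome, m in measure.items()}

print(effective_prob)  # A gets effective probability 2/3, B gets 1/3
```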

1) The Reflection Argument

When a measurement *has already been performed*, but the result has not yet been revealed to the experimenter, he has **subjective uncertainty** as to which outcome occurred in the branch of the wavefunction that he is in.

He must assign some subjective probabilities to his expectations of seeing each outcome when the result is revealed. He should set these equal to the effective probabilities. For example, if 2/3 of his copies (or measure) will see outcome A while the other 1/3 see B, he should assign a subjective probability to A of 2/3.

Why? Because that way, the amount of consciousness seeing each outcome will be proportional to its subjective probability, just as one would expect on average for many trials with a regular probability.

See Why do Anthropic Arguments work? for more details.

2) Theory Confirmation

It may be that an experimental *outcome is already known*, but the person does not know what situation produced it. For example, suppose a spin is measured and the result is either “up” or “down”. The probability of each outcome depends on the angle that the preparation apparatus is set to. There are two possible preparation angles; angle A gives a 90% effective probability for spin up, while angle B gives 10%. Bob knows that the result is “up”, but he does not know the preparation angle.

In this case, he will probably guess that the preparation angle was A. In general, Bayesian updating should be used to relate his prior subjective probabilities for the preparation angle to take the measured outcome into account. For the conditional probability of outcome “up” given angle A, he should use the *effective probability* of seeing “up” given angle A, and so on.

This procedure is justified on the basis that most observers (the greatest amount of conscious measure) who use it will get the right answer. Thus, if the preparation angle really was B, then only 10% of Bob’s measure would experience the guess that A is more likely, while the other 90% would see a “down” result and correctly guess that B is more likely.
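Bob's update can be sketched as an ordinary Bayesian calculation with effective probabilities standing in as likelihoods. The 90%/10% figures are from the example above; the uniform prior over the two angles is an added assumption:

```python
# Bob's inference about the preparation angle after seeing "up".
# Effective probabilities serve as the likelihoods; the 50/50 prior
# over the angles is an assumption made for this example.

prior = {"A": 0.5, "B": 0.5}
p_up = {"A": 0.9, "B": 0.1}   # effective probability of "up" given each angle

# Bayes' rule: P(angle | up) is proportional to P(up | angle) P(angle).
unnorm = {angle: prior[angle] * p_up[angle] for angle in prior}
total = sum(unnorm.values())
posterior = {angle: unnorm[angle] / total for angle in unnorm}

print(posterior)  # angle A becomes far more likely than angle B
```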

3) Causal Differentiation

It may be the case that some copies of a person have the ability to affect particular future events such as the fate of particular copies of the future person. The observer does not know which copy he is. Pure Causal Differentiation situations are the most similar to classical single-world situations, since there is genuine ignorance about the future, and normal decision theory applies. Effective probabilities here are equal to subjective probabilities just like in the Reflection Argument.

4) Caring Coefficients

As opposed to Causal Differentiation, which may not apply to the standard MWI, the most standard way to think of what happens to a person when a “split” occurs is that of personal fission. Perhaps the most interesting case is that in which an experiment has not yet been performed. Decision theory comes into play here: In a single-world case, one would make a decision so as to maximize the average utility, where the probabilities are used to find the average. What is the Many-Worlds analogue?

If it is a deterministic situation and the decider knows the initial conditions, including his own place in the situation, it is important to note that he should *not* use some bastardized form of ‘decision theory in the presence of subjective uncertainty’ for this case. It is a case in which the decider would know all of the facts, and only his decision selects what the future will be among the options he has. He must maximize, not a probability-weighted average utility, but simply the actual utility for the decision that is chosen.

Rationality does not constrain utility functions, so at first glance it might seem that the decider’s utility function might have little to do with the effective probabilities. However, as products of Darwinian evolution and members of the human species, many people have common features among their utility functions. The feature that is important here is that of “the most good for the most people”. Typically, the decider will want his future ‘copies’ to be happy, and the more of them are happy the better.

In principle he may care about whether the copies all see the same thing or if they see different things, but in practice, most believers in the MWI would tend to adopt a utility function that is linear in the measures of each branch outcome:

U_total = Σ_i Σ_p m_ip[Choice] q_ip

where i labels the branch, p denotes the different people and other things in each branch, m_ip is the measure of consciousness of person (or animal) p which sees outcome i, and is a function of the Choice that the decider will make, and q_ip is the decider’s utility per unit measure (quality-of-life factor) for that outcome for that person.

The measures here can be called “caring measures” since the decider cares about the quality of life in each branch in proportion to them.

Utility here is linear in the measures. For cases in which measure is conserved over time, this is equivalent to adopting a utility function which is linear in the effective probabilities, which would then differ from the measures by only a constant factor. In such a case, effective probabilities are used to find the average utility in the same way that actual probabilities would have been used in a single-world model in which one outcome occurs randomly.
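The sum above can be sketched as a direct computation. The measures and quality-of-life factors below are invented for illustration; the point is that the decider compares the actual utility of each choice rather than a probability-weighted expectation:

```python
# U_total = sum over branches i and persons p of m_ip(choice) * q_ip,
# where m_ip is the caring measure and q_ip the utility per unit measure.
# All numbers below are invented for illustration.

def total_utility(measure, quality):
    """Utility linear in the caring measures of each (branch, person) cell."""
    return sum(measure[i][p] * quality[i][p]
               for i in measure for p in measure[i])

# Two branches, one person; the measure in each branch depends on the choice.
quality = {"branch1": {"alice": 1.0}, "branch2": {"alice": -0.5}}

m_choice_x = {"branch1": {"alice": 0.8}, "branch2": {"alice": 0.2}}
m_choice_y = {"branch1": {"alice": 0.3}, "branch2": {"alice": 0.7}}

# The decider picks whichever choice yields the higher actual utility:
# choice X gives 0.8*1.0 + 0.2*(-0.5), choice Y gives 0.3*1.0 + 0.7*(-0.5).
print(total_utility(m_choice_x, quality))
print(total_utility(m_choice_y, quality))
```

Because the measures sum to 1 in each case here, the same numbers can be read as effective probabilities, matching the observation that a linear utility function makes the two bookkeeping conventions equivalent when measure is conserved.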

Next: Measure of Consciousness versus Probability
