Friday, May 25, 2012

Why do Anthropic arguments work?

See "Meaning of Probability in an MWI"

Anthropic arguments set the subjective probability of a type of observation equal to the fraction of such observations within a reference class. This is what I use for "effective probabilities" in the MWI after a split has occurred (the 'Reflection Argument' in the previous post).

There is sometimes some confusion and controversy about such arguments, so I will go into more detail here about how and why the argument works.

The anthropic 'probability' of an observation is equal to what the probability of obtaining that observation would be if an observation were chosen at random from the reference class.

Does this imply that a random selection is being assumed? Is it implied that there is some non-deterministic process in which observers are randomly placed among the possible observations?

No! I am not assuming any kind of randomness at all. All I am doing is using a general procedure - known as anthropic reasoning - to maximize the amount* of consciousness that correctly guesses the overall situation.

Suppose that, prior to making any observations, X is thought 50% likely to be true. If X is true then one person sees red and nine people see blue. If X is false, then nine people see red and one person sees blue. If you see red, you should think that X is probably false, with 90% confidence.

If people always follow this advice, then in cases like this, 90% of the people will be right. True, 10% will be wrong, but it’s the best we can do. The given confidence level should be used for betting and/or used as a prior probability for taking into account additional evidence using Bayes' theorem.
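The counting behind that 90% figure can be sketched in a few lines of Python. This is just a sketch of the example above, weighting the number of red-seers under each hypothesis by the 50/50 prior:

```python
# Red/blue example: under each hypothesis, count the people who see red,
# weighted by the 50% prior on that hypothesis. Betting on "X false" after
# seeing red then makes 90% of red-seers correct.

# (hypothesis, prior, number who see red, number who see blue)
scenarios = [("X true", 0.5, 1, 9), ("X false", 0.5, 9, 1)]

# Prior-weighted amount of "red-seeing" under each hypothesis
red_weight = {name: prior * red for name, prior, red, blue in scenarios}
total_red = sum(red_weight.values())

# Effective probability of "X false" given that you see red
p_x_false_given_red = red_weight["X false"] / total_red
print(p_x_false_given_red)  # 0.9
```

The same tally with blue-seers gives 90% confidence in X after seeing blue; in either case, 90% of the total population guesses correctly.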

That is why the “effective probability” is proportional to the number of people or amount of consciousness; it is not because of some kind of ‘random’ selection process.

The next point is that "number of people" is not always the right thing to use for the anthropic effective probabilities. In fact, it works only as an approximation, and even then only in classical mechanics. The reason is that the amount of consciousness is not always the same for each "person". This is especially true if we consider effective probabilities in quantum mechanics, which are proportional to the squared amplitude of the branch of the wavefunction. In such cases, we must set effective probabilities proportional to the "amount of consciousness", which is a generalization of the idea of "number of people". I call this amount the "measure of consciousness" (MOC), or simply "measure".

Note: In my interpretation of QM - the many computations interpretation - I do assume that the measure is proportional to the number of implementations of the computation, which can be thought of as the number of observers. However, many of the points I make in posts here do not rely on that interpretation, so the more general concept of measure is generally used.

There is no reason not to apply the same kind of reasoning to cases in which time is involved: In such cases, this maximizes the fraction* of consciousness which is associated with correct guesses. In a large enough population (which is certainly the case with the MWI), this is the same as maximizing the amount of consciousness associated with correct guesses at a given global time.

With all of this talk about consciousness, am I assuming any particular hypothesis about what consciousness is? No, I am not.

What about eliminativism - the idea that consciousness as it is commonly understood does not really exist? That's no problem either! I am just using consciousness as a way to talk about the thing that observers do when they observe. Even the most radical eliminativist does not deny that there is something about the brain that is related to observational processes; whatever that is, more people would have more of it.

Rather than "consciousness", perhaps it would be more precise to talk about "observations" or "queries". Remember, effective probability maximizes the fraction of correct answers; this implies that queries are being made. What about the quantum case, in which the "amount of queries" is proportional to squared amplitude? To make sense of this in an eliminativist view, it may be necessary to take a computationalist view, and let the "amount of queries" be the number of implementations of an appropriate computation. On the other hand, for a dualist, the effective probability should be set proportional to the amount of consciousness that sees a given "query".

Given these different philosophies, without implying any position on whether "consciousness" really exists or not, I will continue to use the term "amount of consciousness" to stand for whatever the quantity of interest is that generalizes the notion of "number of people" to give anthropic effective probabilities.

* When considering the consequences of a fixed model of reality, there is no difference between maximizing the amount of people who guess correctly as opposed to maximizing the fraction of people who guess correctly. However, if different hypotheses which predict different amounts of people are compared, there is a difference. This is closely tied to the philosophical arguments known as the Sleeping Beauty Problem and the Doomsday Argument. I discuss this important topic in the following post.

2 comments:

  1. Interesting, I was just thinking about this yesterday, and now today I happen to read your post. I mostly understand the ideas of measure and effective probabilities. Like, if I were to ask "What are the chances of me pulling an ace of spades randomly out of a deck of cards?" then I could think of it in terms of how many "me's" will or will not pull an ace of spades out of that deck.

    It starts to get more confusing when you think about confidence levels of memory, or similar things like that. Like, what if I was at work, and there was a new girl who started working there, and she told me her name. The next day, however, I forget her name, but I'm pretty sure it's Samantha. If I say that I'm 70% sure her name is Samantha, then it becomes more unclear on how I should think about this in terms of measure.

    MWI has helped me visualize and understand probabilities better, but this is one thing that has confused me for a while. In the deck of cards example, since there are 52 cards in a deck, I could say that about 1.9% (1/52) of my measure will pull out the ace of spades. What do I say, though, about the confidence of memory example? If I feel 70% confident her name is Samantha, can I say that in 70% of my measure her name is Samantha, and in the other 30% her name is something different?

    I think I'm starting to understand though. I think the answers to my questions are right here in this post, but it takes me a while to compute these ideas in my mind. I usually solve these puzzles in my dreams, oddly enough..

    Damn, this stuff really gives me a headache sometimes. I'm determined to understand it, though, I don't care how long it takes me. BTW, sorry I haven't finished the graphs I'm making for you yet, I'm struggling with them. I have the ideas down, but I'm having a hard time channelling them creatively. I keep getting to a point to where I'm almost finished, and then scrap the whole thing because I see a better way I could do it. I'm too much of a perfectionist sometimes..

    Cheers!

  2. Cheese, thanks for your comment. This subject is actually something I need to clarify in the context of making my general argument against the possibility of immortality understood by those who believe the 'quantum immortality' fallacy.

    If you are not familiar with subjective probabilities, you need to learn about Bayes' Theorem and Bayesian reasoning. See
    http://en.wikipedia.org/wiki/Bayes%27_theorem

    P(B|A) is the conditional probability of B given A. (Read P(B|A) as "P of B given A").

    P(B|A) P(A) = P(A and B) = P(A|B) P(B)

    P(A) = P(A|B) P(B) + P(A|not B) P(not B)

    Bayes' Theorem solves for P(B|A) = P(A|B) P(B) / [P(A|B) P(B) + P(A|not B) P(not B)]
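As a sanity check, the formula above can be transcribed directly into a short function. This is a minimal sketch; the argument names are mine, not standard notation:

```python
# Bayes' theorem exactly as written above:
# P(B|A) = P(A|B) P(B) / [P(A|B) P(B) + P(A|not B) P(not B)]

def bayes(p_a_given_b, p_b, p_a_given_not_b):
    """Return P(B|A) from the prior P(B) and the two likelihoods."""
    p_not_b = 1.0 - p_b
    numerator = p_a_given_b * p_b
    denominator = numerator + p_a_given_not_b * p_not_b
    return numerator / denominator

# Example: evidence A is 9x more likely if B is true than if it is false,
# starting from a 50/50 prior on B.
print(round(bayes(0.9, 0.5, 0.1), 3))  # 0.9
```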

    "Effective probabilities", as I use the term, are measure ratios. They should be used to set _conditional_ subjective probabilities, like P(A|B), where the condition is some possible overall multiverse situation.

    "A" is the experimental evidence that you have. "B" is what you want to know is true or not.

    Your confidence of memory example seems best understood in terms of regular subjective probability. It is not a measure ratio. Of course there is some measure ratio of people like you in worlds where her name is or isn't Samantha, but in general it will not be the same as your confidence.

    An example that may clarify it is this: Suppose that someone told you that the 100th digit of pi after the decimal point is N, and suppose, for the sake of the example, that the true value is N=7. You are 70% confident that N=7, but you don't remember for sure. This is NOT a measure ratio; it is just a regular subjective probability. In 100% of worlds it is true that N=7; you just don't know that.

    Now, suppose that you know that a quantum experiment will be done using the digits of pi to set up initial conditions. A spin will be measured, up or down. Suppose that if N=7, then the result is seen to be spin up with 90% effective probability; and otherwise, up or down results would have equal effective probabilities. What is your subjective probability for the spin being up in your world, assuming that the experiment has been done already but you have yet to learn the result?

    It is P(up) = P(up|7) P(7) + P(up | not 7) P(not 7) = {0.9}(0.7) + {0.5}(0.3) = 0.78

    The numbers I put in {} curly brackets above, the conditional subjective probabilities, are what the effective probabilities gave us given each possible overall situation.

    What is the actual effective probability for spin up? 90%, since in fact N=7, but you don't know that :)

    What is the _actual probability_ that the spin is up? 100% - the same as the actual probability that the spin is down - since BOTH outcomes always occur in the MWI.

    Now, suppose that you saw the outcome and it was spin up, but you still don't know if N=7. In that case you could use Bayes' theorem to improve your prior guess about how likely N=7 is, using the new evidence (spin up).
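    The whole pi-digit example, including that final Bayesian update, can be run as a short script using exactly the numbers above:

```python
# The pi-digit spin example worked end to end.

p7 = 0.7              # prior subjective confidence that N = 7
p_up_given_7 = 0.9    # effective probability of spin up if N = 7
p_up_given_not7 = 0.5 # effective probability of spin up otherwise

# Subjective probability of seeing spin up (law of total probability)
p_up = p_up_given_7 * p7 + p_up_given_not7 * (1 - p7)
print(round(p_up, 2))  # 0.78

# Having actually seen spin up, update the belief in N = 7 via Bayes
p7_given_up = p_up_given_7 * p7 / p_up
print(round(p7_given_up, 3))  # 0.808
```

    So seeing spin up should raise your confidence that N=7 from 70% to about 81%.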

