Thursday, December 29, 2011

Interlude: The Partial Brain thought experiment

Mark Bishop's 2002 paper "Counterfactuals Cannot Count" attacked the use of counterfactuals in computationalism using a neural replacement scenario, in which the components (e.g. neurons) of a brain (which may already be an artificial neural net) are replaced one at a time with components that merely pass through a predetermined sequence of states that happens to be the same sequence that the computational components would have passed through. Eventually, the whole brain is replaced with the equivalent of a player piano, unable to perform nontrivial computations.

There is a long history in philosophy of mind of using the neural replacement thought experiment to argue that various factors (such as what the components are made of) can't affect conscious experiences. For example, David Chalmers used it as an argument for computationalism. It works like this: the rest of the brain can't have any reaction to the replacement process, since by definition the replaced components provide the same signals to the rest of the brain. It has been argued that the brain could not plausibly be very much mistaken about its own experiences, so a gradual change in, or vanishing of, consciousness is taken to be implausible. A sudden change isn't plausible either, since there is no reason why any particular threshold of how far the replacement has gone should be singled out.

Bishop's argument is really no different from other neural replacement thought experiments, except in the radical (to a computationalist) nature of its conclusions. So, if neural replacement thought experiments do establish that consciousness must be invariant in these scenarios, then computationalism must be rejected.

My Partial Brain thought experiment shows that neural replacement thought experiments are completely worthless. It works like this: Instead of replacing the components of the brain with (whatever), just remove them, but provide the same inputs to the remainder of the brain as the missing components would have provided.

What would it be like to be such a partial brain? Some important features seem obvious: it is not plausible that as we let the partial brain decrease in size, consciousness would vanish suddenly. But now it's not even possible (unlike in neural replacement scenarios) that consciousness will remain unchanged; it must vanish when the removal of the brain is complete.

Therefore, progressively less and less of its consciousness will remain. In a sense it can't notice this - its beliefs will disappear as the corresponding parts of the brain vanish, but they won't otherwise change - but that just means its beliefs will become more and more wrong until they vanish. For example, if the higher-order belief center remains intact but the visual system is gone, the partial brain will believe it is experiencing vision but will in fact not be.

The same things would happen - by definition - in any neural replacement scenario in which the new components don't support consciousness; the remaining brain would have partial consciousness. So neural replacement scenarios can't show us anything about what sorts of components would support consciousness.

The partial brain thought experiment also shows that consciousness isn't a unified whole. It also illustrates that the brain can indeed be wrong about its own conscious experiences; for example, just because a brain is sure that it has qualitative experiences of color, that is not strong evidence in favor of the idea that it actually does, since a partial brain with higher-order thoughts about color but no visual system would be just as sure that it does.

Wednesday, December 14, 2011

Restrictions on mappings 2: Transference

In the previous post, Restrictions on mappings 1: Independence and Inheritance, the "inheritance" of structured state labels was explained; it allows the same group of underlying variables to be mapped to more than one independent formal variable. In the example there, a function on a 2-d grid was mapped to a pair of variables.

Transference is something like the reverse process: It allows a set of simpler variables to be mapped to a structured state function on a grid.

This allows ordinary digital computers to implement wave dynamics in a 3-d space, which could matter for the question of whether the universe could be ultimately digital. The AdS/CFT correspondence in some models of string theory would need something similar if the bulk model is to be implemented on the boundary in the computational sense.

Transference can be Direct or Indirect. It works like this:

Direct Transference could be used in a mapping by taking the value from a given variable and turning it into a label for structuring a set of new variables.

For example, if there is a single integer variable I(t), we can transfer its value to a label for a set of bits B(j), each of which depends only on whether I(t) equals the value of its label, e.g.

B(j) = 1 if I = j
B(j) = 0 if I does not = j

These bits can be considered an ordered series of "occupation tests" of the different regions that the underlying variable's value could be in.
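To make this concrete, here is a minimal sketch in Python (the function name occupation_bits and the use of a dict are my own illustrative choices, not part of any formalism):

def occupation_bits(I, labels):
    # Direct Transference: the value of the integer variable I is
    # transferred to a set of labeled bits B(j), where B(j) = 1 iff I == j.
    return {j: 1 if I == j else 0 for j in labels}

# Exactly one occupation test succeeds at a time.
bits = occupation_bits(I=3, labels=range(8))
assert sum(bits.values()) == 1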

Of course, only one of these bits at a time will be nonzero. But they are to be considered independent variables. At this point you might object: If you know the value of the nonzero one, don't you know the other bit values must be zero? But just as Inheritance carved out an exception to the rule for independence, so would Direct Transference carve out an exception to it.

Going the other direction is no problem: if we restrict a mapping such that only one bit in an ordered set B(i) is nonzero, then a new variable I can be constructed whose value equals the index i of the nonzero bit. Direct Transference simply does the reverse of that uncontroversial construction.
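In code, the reverse construction might look like this (again just an illustrative sketch):

def index_of_nonzero(B):
    # Given an ordered set of bits with exactly one nonzero entry,
    # construct the variable I as the index of that entry.
    [i] = [k for k, b in enumerate(B) if b]
    return i

assert index_of_nonzero([0, 0, 1, 0]) == 2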

We can't double count, though; if we make the new set of variables b(i), we can't make a second independent new set of variables c(i) which gets its label transferred from the same underlying variable I(t) for the same values of I.

If we have two underlying variables I and J, we could similarly use Direct Transference to map them to a 2-d grid of bits, B(i,j), in which only one bit is nonzero.

If we then re-map this grid using inheritance we could arrive back at our original I and J variables. So, basically, what Direct Transference is saying is that these two pictures are really equivalent.
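A sketch of the round trip, under the same illustrative conventions as above: two variables are transferred to a 2-d grid of bits, and re-mapping the grid recovers them.

def grid_bits(I, J, n):
    # Direct Transference of two variables to an n x n grid of bits;
    # only the bit at position (I, J) is nonzero.
    return [[1 if (i, j) == (I, J) else 0 for j in range(n)] for i in range(n)]

def recover(B):
    # Re-mapping the grid (inheritance) recovers the original pair.
    [(I, J)] = [(i, j) for i, row in enumerate(B) for j, b in enumerate(row) if b]
    return I, J

assert recover(grid_bits(2, 5, 8)) == (2, 5)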

We could also map the two of them to a single 1-d series of variables, e.g. one whose elements are the sums of the corresponding elements of the two 1-d series of occupation bits. (Since the sum becomes 2 when I = J, these variables are trits, not bits.)
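For example (continuing the sketch; trit_series is my own illustrative name):

def trit_series(I, J, n):
    # Elementwise sum of the two 1-d series of occupation bits.
    # Each element k is 0, 1, or 2; it equals 2 only when I == J == k.
    return [(1 if I == k else 0) + (1 if J == k else 0) for k in range(n)]

assert trit_series(3, 3, 8)[3] == 2
assert trit_series(3, 6, 8)[3] == 1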

Can the variables that were obtained using Direct Transference be used to make a mapping so flexible that it must not be allowed? Something like a clock and dial mapping? The answer to that certainly appears to be no. And that may be justification enough for allowing it; my philosophy is to be liberal in allowing mappings, as long as those mappings don't allow implementations of arbitrary computations.

Indirect Transference is a little more complicated. Consider a computer simulation of dynamics on a 2-d grid, f(x,y). When the value of f is updated at the pair of parameters x and y, this can be done by setting one variable equal to x, another equal to y, and using them to find the memory location M in the computer at which the corresponding value of f is to be changed. Since updates of f at (x,y) always involve fixed values for each of those parameters, f(x,y) can be labeled by those values. In this way, the mapping of the stored values to the actual function of x and y, f(x,y), is considered valid, even though the computer's memory is not laid out in that manner. This is an example of Indirect Transference. It can be generalized to any case in which a parameter of a function is used.
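A minimal sketch of the addressing step (the row-major layout and the names WIDTH, memory, and update_f are illustrative assumptions, not a claim about any particular computer):

WIDTH, HEIGHT = 16, 16
memory = [0.0] * (WIDTH * HEIGHT)  # flat memory, not laid out as a grid

def update_f(x, y, value):
    # The parameters x and y are used to compute the memory location M;
    # Indirect Transference lets memory[M] be labeled as f(x, y).
    M = y * WIDTH + x
    memory[M] = value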
