Thursday, December 29, 2011

Interlude: The Partial Brain thought experiment

Mark Bishop's 2002 paper "Counterfactuals Cannot Count" attacked the use of counterfactuals in computationalism using a neural replacement scenario, in which the components (e.g. neurons) of a brain (which may already be an artificial neural net) are replaced one at a time with components that merely pass through a predetermined sequence of states that happens to be the same sequence that the computational components would have passed through. Eventually, the whole brain is replaced with the equivalent of a player piano, unable to perform nontrivial computations.

There is a long history in philosophy of mind of using the neural replacement thought experiment to argue that various factors (such as what the components are made of) can't affect conscious experiences. For example, David Chalmers used it as an argument for computationalism. It works like this: the rest of the brain can't have any reaction to the replacement process, since by definition the replaced components provide the same signals to the rest of the brain. It has been argued that the brain could not plausibly be very much mistaken about its own experiences, so a gradual change in, or vanishing of, consciousness is taken to be implausible. A sudden change isn't plausible either, since there's no reason why any particular threshold of replacement progress should be singled out.

Bishop's argument is really no different from other neural replacement thought experiments, except in the radical (to a computationalist) nature of its conclusions. So, if neural replacement thought experiments do establish that consciousness must be invariant in these scenarios, then computationalism must be rejected.

My Partial Brain thought experiment shows that neural replacement thought experiments are completely worthless. It works like this: Instead of replacing the components of the brain with (whatever), just remove them, but provide the same inputs to the remainder of the brain as the missing components would have provided.

What would it be like to be such a partial brain? Some important features seem obvious: it is not plausible that as we let the partial brain decrease in size, consciousness would vanish suddenly. But now it's not even possible (unlike in neural replacement scenarios) that consciousness will remain unchanged; it must vanish when the removal of the brain is complete.

Therefore, progressively less and less of its consciousness will remain. In a sense it can't notice this - its beliefs will disappear as the corresponding parts of the brain vanish, but they won't otherwise change - but that just means its beliefs will become increasingly wrong until they vanish. For example, if the higher-order belief center remains intact but the visual system is gone, the partial brain will believe it is experiencing vision but will in fact not be.

The same things would happen - by definition - in any neural replacement scenario in which the new components don't support consciousness; the remaining brain would have partial consciousness. So neural replacement scenarios can't show us anything about what sorts of components would support consciousness.

The partial brain thought experiment also shows that consciousness isn't a unified whole. It also illustrates that the brain can indeed be wrong about its own conscious experiences: for example, just because a brain is sure that it has qualitative experiences of color, that is not strong evidence that it actually does, since a partial brain with higher-order thoughts about color but no visual system would be just as sure.
