Wednesday, August 30, 2017

Counting implementations: The Problem of Size

In order to get predictions out of a computationalist model, several ingredients are required:
1) Some way of determining which computations are implemented;
2) some way of determining which ones give rise to consciousness;
3) some prediction (which need not be exhaustive) about what such a consciousness would observe;
4) and some way of determining the effective probability of different types of such observations.

The implementation problem has already been addressed in previous posts on this blog. Problems 2) and 3) are hard in principle, but for present purposes, it should suffice to assume that the human brain performs the appropriate computations (whether analog, digital, or mixed) to give rise to human-like observations, and that systems that are not brain-like (or AI-like) give rise to no observations. Going beyond that assumption is not something that I will pursue, mainly because I have no way to do so.

The effective probability question is quantitative, and the answer could have important implications. To start with, it would have to provide the link between computationalism and the Born Rule of quantum mechanics; if the Born Rule were shown to be incompatible with the effective probabilities of computationalism, the model would have to be discarded (either by modifying the assumed physics or by rejecting computationalism). In practice, it may be enough to find a formula such that the computationalist probabilities need not be incorrect, but it would be better to derive them on independent grounds.

A formula for determining effective probabilities - which is to say, relative amounts of consciousness - could have other implications. For example, certain types of brain structures might have more consciousness than others, and if we knew that they did, an argument could be made that those brains ought to receive more privileges. This is especially relevant to the prospects of humans peacefully coexisting with potentially conscious AIs. This could be a dangerous thing to study, but a sufficiently intelligent AI could probably study the question on its own, so it might be wise for us to study it before such AIs even exist.

The effective probability of an observation is the fraction of consciousness that makes that observation. The amount of consciousness will be assumed to be proportional to the "number of independent implementations" of each type of conscious computation. Criteria for "independence" in this context remain to be chosen; they may or may not resemble the criteria for substate independence within a computation, although such similarity is certainly a hypothesis worth exploring.
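
Stated schematically (the notation here is mine, not anything established above): if N(O) denotes the number of independent implementations of conscious computations whose observation is of type O, then the effective probability of O would be

\[
P_{\mathrm{eff}}(O) = \frac{N(O)}{\sum_{O'} N(O')},
\]

where the sum runs over all observation types, and the counts must be regularized if infinite (as discussed below).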

In principle, the amount of consciousness might also depend on other characteristics of a computation, such as its complexity.

In the case of a system with continuous variables and/or infinite extent, there might be an infinite variety of independent implementation mappings. In such a case a "correct" regularization must be used: compute the ratio with finite, cut-off counts, and then take the limit as the cutoff goes to infinity.
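
As a toy illustration of such a regularization (entirely my own, assuming for the sake of argument that the two implementation counts grow linearly with a cutoff size L): both counts diverge as the cutoff is removed, but their ratio converges.

    # Toy regularization: truncate the system at size L, count
    # implementations of two observation types inside the cutoff,
    # and watch the ratio converge as L -> infinity.

    def count_type_a(L):
        return 3 * L + 7   # assumed growth law, for illustration only

    def count_type_b(L):
        return 5 * L + 2   # assumed growth law, for illustration only

    for L in (10, 1000, 100000, 10000000):
        ratio = count_type_a(L) / count_type_b(L)
        print(f"L = {L:>8}: ratio = {ratio:.6f}")
    # The regularized ratio tends to 3/5 = 0.6 in the limit.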

If there were distinct systems, each of which implements exactly one computation, we would need only count the systems of each type - in other words, just do a head count for each type of observation.
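
In that simple case the calculation is trivial; a minimal sketch (the observation labels and the one-implementation-per-system tally are hypothetical):

    from collections import Counter

    # Each distinct system implements exactly one computation, so the
    # effective probability of an observation type is just its share
    # of the head count.
    systems = ["sees_red", "sees_red", "sees_red", "sees_blue"]
    counts = Counter(systems)
    total = sum(counts.values())
    for observation, n in counts.items():
        print(f"P({observation}) = {n}/{total} = {n / total:.2f}")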

But in general, every system implements many computations. It is quite possible for one system to implement the same computation many times over - and indeed, that is what must happen if the MWI of QM is true. Even in classical mechanics, though, it could certainly happen.

Is there any sense in which the size of a computer affects the number of implementations it performs? One might think so, because a larger system provides more ways to make mappings. On the other hand, it is not generally believed that the size of a computer would affect the amount of consciousness it gives rise to, and if it did, that would have strange implications. Yet in the MWI of QM, the "size" (in the sense of squared amplitude) of each branch of the wavefunction must indeed affect the effective probabilities if that model is correct.
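
For reference, the Born Rule assigns each branch an effective probability equal to its normalized squared amplitude; writing the wavefunction as a sum over branches,

\[
|\psi\rangle = \sum_i a_i \, |\phi_i\rangle, \qquad P(i) = \frac{|a_i|^2}{\sum_j |a_j|^2}.
\]

Any implementation-counting rule would ultimately have to reproduce this weighting in order to be consistent with observation.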

Consider a simple computer that operates using the collisions of balls to create logic gates. There may be several parallel channels through which the balls can pass.

A variety of mappings could be made by including or not including each channel, with the one actually used always included. The number of such mappings would grow exponentially with the number of channels. Clearly such a strong dependence on size is problematic. It makes sense that these mappings would not be "independent" in the appropriate way, and so this exponential growth of measure can be ruled out. The exact criteria for such independence are still to be determined, but should reproduce that result.
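
To make the exponential growth concrete, here is a toy count (my own illustration, under the stated assumption that the channel actually used is always included and each of the other channels may be freely included or excluded):

    from itertools import combinations

    def count_channel_mappings(n_channels):
        # One channel actually carries the ball and must be included;
        # each of the remaining n_channels - 1 channels is optional.
        optional = n_channels - 1
        # Enumerate every subset of the optional channels.
        n_subsets = sum(
            1
            for k in range(optional + 1)
            for _ in combinations(range(optional), k)
        )
        assert n_subsets == 2 ** optional  # doubles with each added channel
        return n_subsets

    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} channels -> {count_channel_mappings(n):5d} candidate mappings")

Since the count doubles with every added channel, treating these mappings as independent would make the measure scale exponentially with hardware size - exactly the result the independence criteria must forbid.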

What about the size of the ball itself? A mapping can be made for each small part of the ball to an appropriate computational substate, and these can be combined to form an (exponential) multitude of overall implementation mappings, each of which will have the correct causal behavior given appropriate restrictions. But these all must "implement" the same computation at the same time; any difference would fragment the balls and ruin the causal behaviors. These mappings, too, should not be considered independent.

Similarly, consider the light that reflects off of the ball. It will be correlated with the position of the ball, so it provides an alternative thing to make an implementation mapping with - at least for final computational states. But implementations based on these mappings will not be independent of each other or of those based on mappings from the ball itself.

Next, I'll consider some possible Independence Criteria for Implementations (ICI) that meet these restrictions, and the implications of each of these flavors of ICI.
