Previous: The Putnam-Searle-Chalmers Theorem
Note: My new paper about implementation of computations will appear in the proceedings of the 7th AISB Symposium on Computing and Philosophy (2014). It is largely compatible with the ideas presented on this blog but contains a few new ideas and explanations:
Structure and Dynamics in Implementation of Computations preprint
Powerpoint for my talk at the symposium. It's a handy summary :)
Chalmers proposed that a computation state be represented by a string of numbers, where each part of the string is "independent"; he proposed that independence can be achieved if each part of the string depends on a different part of the underlying system. He called this a Combinatorial State Automaton (CSA), because the state is given not by a single number but by the combination of the different parts of the string. The CSA avoids the false "clock and dial" implementations discussed in the previous post.
However, as Chalmers noted, there are still false implementations that can be found for the CSA. Suppose that each part of the underlying system which is mapped to part of the string at the current time step contains all of the relevant information about every other relevant part of the system at the previous time step.
In that case, a mapping can be made in which - although each part of the string depends only on its own part of the underlying system - a sequence of values can be assigned to each part of the string in such a way as to mimic any desired computation: enough information is available for the value assigned to that part to depend on any desired combination of the values of the parts of the string at the previous time step.
The amount of information that must be stored would grow exponentially with time, which leads to systems that quickly become physically unrealistic after a few time steps, but the fact that the problem exists in principle is enough to show that a different restriction on mappings is needed.
An implementation criterion will be given for a formal system consisting of a structured set of variables (also called substates), and a transition rule. Such a system will be called a Combinatorial Structured State Automaton (CSSA). It differs from Chalmers' CSA in that the structure of the state is not restricted to that of a string of values. For example, the substates could be arranged into a rectangular matrix, or a higher-dimensional array, and this arrangement would be considered part of the specification of the CSSA.
One reason for including such structure is to allow label inheritance (explained below) from an underlying computation to another computation. While the ultimate underlying system is assumed to be the physical universe (or whatever underlies that!), the definitions must work in general so that the underlying system can be anything that can be described mathematically, because the criteria for implementation of a computation should not depend on experimentally discovered facts of physics but rather on pure mathematics.
It will also be useful in analyzing systems to allow computations to implement other computations. For example, if an object implements a "Windows machine" computation, then we only need to check variables on the level of that computation (rather than looking at fundamental systems variables such as the wavefunction's behavior in terms of electron positions) to see if it also implements the "Firefox" computation. In this case the "Windows machine" is now treated as the underlying system. Many of the simple examples I will use will involve a simple underlying digital system.
The criteria given below are a modification of the ones given in my MCI paper, and will be incorporated into a revised paper.
The basic rule for independence is that it must not be possible to determine, from knowledge of the underlying states that are mapped to the current value of a given substate together with knowledge of the system dynamics,
- the values of the substates (at the previous time step) that determined the given substate's current value, except when the current value itself provides enough information; or
- the values of the other substates (at the same time step), except when the current value itself provides enough information.
This rules out clock and dial mappings, because the clock and dial physical state which any substate depends on would reveal all formal values of the other substates and the previous substates. It also rules out the other false implementations discussed above in which the variables record information that would reveal the values of the previous states that determine them.
However, it is sensible to allow cases in which interacting formal substates share the underlying variables that they depend on. This is especially important with quantum mechanics (see below).
In these cases, independence is instead established with the help of “inheritance” of labels. An underlying label index is inherited when a substate depends on the values of that index, associating the substate with that index.
Knowledge of the variables that determine the other substates - modulo permutations (or in other words, limited knowledge that would not be affected by such swapping) of the labeling indexes that are inherited by the given substate - must not reveal the value that the given substate has (at the same time step).
The restriction that knowledge of the underlying variables that determine the substate must not reveal the values of the substates at the previous time step that determine its value (with the same exceptions as in the basic rule above) still applies as well, but modulo permutations of the inherited indexes for the respective substates.
An example helps show what is involved here:
Bits on a 2-d grid plus time --> (mapped to) 2 integers as functions of time
b(i,j,t) --> I(t), J(t)
The computation would then involve the dynamics of I and J, such as
I(t+1) = f(I(t),J(t))
J(t+1) = g(I(t),J(t))
where f and g are appropriate functions.
Each bit takes on the value 0 or 1. If the number of nonzero bits is not equal to 1, this mapping will be undefined (there is no need for the mapping to cover all possible states of the underlying system).
If only one bit is nonzero at a given time, then let
I(t) = the value of i at which b(i,j,t) is 1
J(t) = the value of j at which b(i,j,t) is 1
If we were to swap around the values of the i-index, it would affect the value of I(t) but not of J(t), and similarly for the j-index affecting J(t) but not I(t). Therefore the conditions are met so that I(t) “inherits” the i label and J(t) the j label.
To verify independence - even though both I and J depend on all bits on the grid in the sense that only one nonzero bit is allowed - knowing the full state of the 2-d grid modulo a permutation of the j-index (that is, knowing it only up to a possible swapping around of the j-index values) reveals the value of I(t), but not J(t), and vice versa. Thus they are independent.
In this example, an underlying system which consists of a function ON a 2-dimensional grid was reduced to a system of two variables, which together define a single point IN a 2-d space. This can be summarized as “on --> in”.
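The grid example and its permutation test can be sketched in a few lines of code. This is a minimal illustration (the helper names are mine, not part of the formal criteria): a grid of bits at one time step is decoded to (I, J), and a swap of two j-index values changes J while leaving I fixed, which is the sense in which I inherits the i label and J the j label.

```python
# Sketch of the mapping b(i,j,t) -> I(t), J(t) at a single time step,
# plus the permutation test used to establish label inheritance.

def decode(grid):
    """Map a 2-d grid of bits to (I, J). Undefined (None) unless
    exactly one bit is nonzero."""
    ones = [(i, j) for i, row in enumerate(grid)
            for j, bit in enumerate(row) if bit == 1]
    return ones[0] if len(ones) == 1 else None

def swap_j(grid, j1, j2):
    """Swap two values of the j-index everywhere on the grid."""
    return [[row[j2] if j == j1 else row[j1] if j == j2 else row[j]
             for j in range(len(row))] for row in grid]

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]  # the single nonzero bit is at i=1, j=1

I, J = decode(grid)                  # (1, 1)
I2, J2 = decode(swap_j(grid, 1, 2))  # j-swap: I unchanged, J changed
assert (I2, J2) == (I, 2)
```

A swap of i-index values would, symmetrically, change I but not J.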
In cases where the underlying system has other variables, e.g. the k index in b(i,j,k,t), a mapping might not be symmetric with respect to the other variables. For example, we could have
X(t) = b(1,1,1,t)+ b(1,2,2,t) - b(2,1,3,t) - b(2,2,4,t)
Y(t) = b(1,1,1,t)+ b(2,1,3,t) - b(1,2,2,t) - b(2,2,4,t)
In this case, are X and Y independent, with X inheriting from i and Y from j? If two values of j (say j=1 and j=2) are swapped as indexes of b(), then the value of X could change: b(1,1,1,t) + b(1,2,2,t) is not b(1,2,1,t) + b(1,1,2,t). On the other hand, if we only had a limited knowledge of b() and that knowledge is symmetric with respect to such swaps, we wouldn't know that. This mapping is OK. It is perhaps better to think of it as if the terms in X(t) are labeled by different i-values and j-values (the values the mapping is supposed to inherit from) while ignoring their functional dependence on k (which is not being inherited by anything).
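The point about full versus limited knowledge can be made concrete. In this sketch (the dict encoding of b at one fixed time step, and the single-bit choice of values, are my assumptions for illustration), b is not symmetric under swapping j=1 and j=2, so someone with full knowledge of b could detect the swap through X:

```python
# The asymmetric mapping above, with b (at one fixed t) stored as a
# dict keyed by (i, j, k).

def X(b):
    return b[(1, 1, 1)] + b[(1, 2, 2)] - b[(2, 1, 3)] - b[(2, 2, 4)]

def Y(b):
    return b[(1, 1, 1)] + b[(2, 1, 3)] - b[(1, 2, 2)] - b[(2, 2, 4)]

def swap_j(b, j1, j2):
    """Relabel the j-index: the value at (i, j1, k) moves to (i, j2, k)."""
    return {(i, j2 if j == j1 else j1 if j == j2 else j, k): v
            for (i, j, k), v in b.items()}

b = {(i, j, k): 0 for i in (1, 2) for j in (1, 2) for k in (1, 2, 3, 4)}
b[(1, 1, 1)] = 1  # one nonzero bit; not symmetric under j=1 <-> j=2

assert X(b) == 1
assert X(swap_j(b, 1, 2)) == 0  # the swap moved the bit to (1, 2, 1)
```

Limited knowledge of b that is itself invariant under such swaps would not distinguish the two cases, which is why the mapping still qualifies.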
The concept of inheritance can be generalized to cases in which more than one underlying variable is responsible for the distinction being inherited. For example, here X inherits from i, but Y is to inherit from j and k:
X(t) = b(1,1,1,t)+ b(1,2,1,t) - b(2,1,1,t) - b(2,1,2,t)
Y(t) = b(1,1,1,t)+ b(2,1,1,t) - b(1,2,1,t) - b(2,1,2,t)
Inheritance can also occur with multiple levels of functionals. For example, suppose the underlying variables are a functional g of function f(i,j), e.g. if i and j are bits, here g=g(f(0,0),f(0,1),f(1,0),f(1,1)). An intermediate mapping could be made with substates that inherit from f and are functions on (i,j). This can then be mapped to a pair of substates X,Y that inherit from i and j respectively.
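A toy sketch of this two-level inheritance (the encoding is my assumption: g's argument list stands in for the underlying variable): first recover intermediate substates F(i,j) that inherit the f label, then map those to X and Y, which inherit i and j respectively, exactly as in the grid example.

```python
# Two levels of inheritance: g = g(f(0,0), f(0,1), f(1,0), f(1,1)).

def intermediate(g_args):
    """Unpack g's arguments (in the fixed order f(0,0), f(0,1),
    f(1,0), f(1,1)) into substates indexed by (i, j)."""
    return {(i, j): g_args[2 * i + j] for i in (0, 1) for j in (0, 1)}

def to_XY(F):
    """Defined only when exactly one intermediate substate is 1."""
    ones = [ij for ij, v in F.items() if v == 1]
    return ones[0] if len(ones) == 1 else None

X, Y = to_XY(intermediate((0, 0, 1, 0)))  # only f(1,0) is nonzero
assert (X, Y) == (1, 0)
```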
In quantum mechanics, the wavefunction is a function ON a high dimensional configuration space. In classical mechanics, the set of particle positions defines a point IN configuration space. It certainly seems that, in the classical limit, the computations that would be performed by the related classical system (such as a system of classical switches) would in fact be performed by the actual quantum system. Inheritance of labels - from quantum directions in configuration space to classical position variables - permits that. Quantum field theory, an even more realistic model, involves functional dependency of the wavefunctional on a function of space.
Next: Restrictions on mappings 2: Additional considerations