Previous: Basic idea of an implementation

If a computation is implemented, there must be a mapping from states of the underlying system to formal states of the computation, and the states must have the correct behavior (transition rule) as they change over time.

For example, for an electronic digital computer, we can map a high-voltage state of a transistor lead to a "1" and a low voltage state to a "0". By doing the same for various other circuit elements, we can obtain an ordered string of 1's and 0's. This string will change over time as the voltages change. If, due to the laws of physics and the way the circuit is connected, this string must always change in accordance with the transition rules for computation C, then the system implements C.
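The voltage-to-bit mapping described above can be sketched in a few lines of Python. This is only a toy illustration; the 2.5 V threshold is an arbitrary choice for the example, not a claim about real logic families.

```python
HIGH_THRESHOLD = 2.5  # volts; any dividing line between "high" and "low" would do

def to_bits(voltages):
    """Map each circuit element's voltage to '1' (high) or '0' (low),
    producing the ordered string of bits the mapping assigns to the system."""
    return ''.join('1' if v >= HIGH_THRESHOLD else '0' for v in voltages)

print(to_bits([4.9, 0.1, 3.3, 0.2]))  # '1010'
```

As the voltages evolve under the circuit's physics, the string returned by `to_bits` changes with them; implementation requires that those changes always follow C's transition rules.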

We must allow as much flexibility as possible in the choice of mapping, because we are trying to understand the behavior of the system without any reference to things outside the system. Human convenience in recognizing the mapping is not a consideration.

The obvious way to try to do this is simply to allow any mapping that is mathematically possible. This leads to what I will call the naive implementation criterion: it may sound reasonable at first, but it is not a viable option for a satisfactory criterion. Chalmers' paper "Does a Rock Implement Every Finite-State Automaton?" explains why in detail.

The following is my version of what I'll call the "Putnam-Searle-Chalmers (PSC) Theorem", which shows that unrestricted mappings are not a viable option:

Suppose that a system consists of two parts, S and T, each of which has a numerical value, and that the dynamics of our system are as follows:

S(t+1) = S(t)

T(t+1) = T(t) + 1

These dynamics are fairly trivial; S is a dial that maintains a constant setting, while T is a clock.

We will check to see if this system implements a computation, C, which has the transition rule X(t+1) = F(X(t)) where t is a time index.

Here X need not be a single number; it might be a string of bits, for example.

F could be a complicated function, such as F(X) = the Xth prime number (expressed in base 2, where X is an integer expressed in base 2).
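To make the example concrete, here is one way the "Xth prime" choice of F could be written out, working with ordinary Python integers rather than explicit base-2 strings (the internal representation does not matter for the argument). The trial-division primality test is just the simplest thing that works, not an efficiency claim.

```python
def nth_prime(n):
    """Return the nth prime, 1-indexed: nth_prime(1) == 2."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # Trial division: candidate is prime if no d in [2, sqrt(candidate)] divides it.
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

F = nth_prime
print(F(1), F(2), F(5))  # 2 3 11
```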

Now make a mapping M going from (S,T) to X with the following properties:

X = M(S,T)

M(S,T+1) = F(M(S,T))

We do need to make sure that any possible starting value of X is allowed by the mapping, which we can always do if our system has enough possible dial values.
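The construction of M can be made fully explicit: let the dial value S encode the starting value of X, and let M(S,T) be the result of applying F to S a total of T times. Then M(S,0) = S (so every starting value is allowed, given enough dial values) and M(S,T+1) = F(M(S,T)) holds by construction. A minimal sketch, using an arbitrary stand-in for F since any total function works:

```python
def F(x):
    # Stand-in transition function; the argument goes through for any total F.
    return x * x + 1

def M(S, T):
    """Map the system state (S, T) to the computational state X.
    The dial S fixes the starting value; the clock T counts how many
    times F has been applied, so M(S, T+1) = F(M(S, T)) automatically."""
    x = S
    for _ in range(T):
        x = F(x)
    return x

# The trivial dynamics: S stays fixed, T ticks up by one each step.
S = 3
X = [M(S, t) for t in range(5)]

# Under the mapping, X nonetheless obeys the transition rule X(t+1) = F(X(t)).
assert all(X[t + 1] == F(X[t]) for t in range(4))
```

All of the work is being done inside M, not by the dial-and-clock system, which is exactly the problem the theorem exposes.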

Now, according to the mapping, X will change as a function of time based on the dynamics of the system as follows:

X(t+1) = M(S(t+1),T(t+1))

= M(S(t),T(t)+1)

= F(M(S(t),T(t)))

= F(X(t))

Therefore, the system would implement the computation if this mapping is allowed. But the system's dynamics are trivial while the computation can be a very complicated function. Obviously, this is not acceptable; the computation does not characterize the behavior of the system at all. This is what I call a "false implementation". All of the complicated dynamics of the computation have been put into the mapping. It is therefore necessary to put restrictions on what mappings are allowed.

One possibility is to require that each part of a string that defines a formal computational state (e.g. each bit in a bit string) takes its value based on a different part of the underlying system. That is basically what Chalmers proposed to overcome the problem.

Somewhat counter-intuitively, this rules out implementations that rest on the value of a single number (since a single number can still be mapped to any other single number, just as in the clock construction above). Even so, it goes a long way towards ruling out false implementations, while still allowing important standard examples of implementations that we want to retain, such as mapping switch positions to bit values (assuming classical physics).

However, it is not quite right. There are some systems for which it still allows false implementations, and there are other cases where it rules out what seem to be legitimate implementations - and these become clearly important when quantum mechanics is brought into the picture. In classical mechanics, different particles are different parts of the system; in quantum mechanics, different particles are different directions in the shared configuration space on which the wavefunction evolves.

Next: Restrictions on mappings 1: Independence and Inheritance
