In conceptual terms, to say that a system implements a given computation is to say that something about that system - some aspect of its behavior - is described by the computation. It is thus a way of characterizing the system.
For example, if a system can take two numbers as inputs and produce their sum as an output, we could characterize it as an "adder". If instead it can take two numbers as inputs and produce their product as an output, we could characterize it as a "multiplier". "Adding" and "multiplying" are two different _types_ of computations. These behaviors are fairly general, so they can be further divided into more specific computations if more details are specified, such as what format the inputs are in, the internal functioning during intermediate steps, and so on.
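The idea of characterizing a system by its input/output behavior can be sketched in code. Everything here is invented for illustration: the `characterize` function and the trial inputs are not from any real library, just a minimal way of labeling a black-box two-input system as an "adder" or "multiplier" based on how it behaves.

```python
def characterize(system, trials):
    """Label a two-input system by its behavior alone: 'adder' if its
    output matches addition on every trial, 'multiplier' if it matches
    multiplication, otherwise 'unknown'."""
    if all(system(a, b) == a + b for a, b in trials):
        return "adder"
    if all(system(a, b) == a * b for a, b in trials):
        return "multiplier"
    return "unknown"

# The trial set is chosen so that addition and multiplication give
# different answers, allowing the two characterizations to be told apart.
trials = [(2, 3), (10, 7), (0, 5)]
print(characterize(lambda a, b: a + b, trials))  # adder
print(characterize(lambda a, b: a * b, trials))  # multiplier
```

Note that this characterization is behavioral only: two systems labeled "adder" here could still differ completely in what they are made of and in their intermediate steps.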
Two "multipliers" with different intermediate steps would implement different computations. Thus, while the word "computation" suggests something that is done in order to obtain a result, that connotation is misleading in the context of the computationalist view of consciousness, which instead focuses on the internal functioning of the system in question.
If we find a third system and discover that it behaves like an "adder", then it has a lot in common with our other "adder": we now know a lot about how it can behave. But there is also a lot that this characterization does NOT tell us. It doesn't tell us what the system is made of; what its other behaviors might be; or what internal processes it uses to perform the additions. Nor does it tell us whether the system performs the addition redundantly - for example, carrying out the same addition a second time and sending that (same) result to some other person.
In addition to its capabilities, a system's behavior is further characterized by the exact sequence of states that are actually involved in that behavior. For example, an adder which added the numbers 45 and 66 and got 111 is characterized by those particular numbers, in addition to being characterized as an adder. Such a sequence of numbers (or more precisely, the sequence that includes all intermediate states of the computation) can be called the "activity" of the computation, and together with the computation it specifies what is known as a run of the computation.
It is important to note that neither the computation alone nor the activity alone are enough to adequately describe the behavior of the system; both are needed.
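The computation/run distinction above can be made concrete with a toy sketch (all names and the particular stepping scheme are my own invention, not from the original text): the *computation* is the transition rule, while the *activity* is the actual sequence of states a particular run passes through.

```python
def adder_step(state):
    """One step of a toy adder: move the second addend into the
    accumulator one unit at a time. This choice of intermediate steps
    is arbitrary; an adder with different intermediate steps would be
    a different computation."""
    acc, remaining = state
    return (acc + 1, remaining - 1)

def run(initial_state):
    """Return the full activity of one run: every state from the
    initial state to the final one."""
    states = [initial_state]
    while states[-1][1] > 0:
        states.append(adder_step(states[-1]))
    return states

# The run starting from (45, 66): the activity includes all
# intermediate states, ending with the sum 111.
activity = run((45, 66))
print(activity[-1][0])  # 111
```

Here `adder_step` alone describes the computation, and `activity` alone lists the states; as the text notes, it takes both together to characterize the behavior.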
Often, I will speak only of computations, but it should be understood from the context that "runs of computations" are often meant as well. For example, if I say that a particular computation gives rise to a particular conscious observation, I mean that a particular run of that computation - corresponding to a particular starting state - gives rise to that observation. A different run of the same computation (such as the same type of brain except in a different starting state) might then give rise to a different kind of observation or to none at all.
In the case of programmable systems, the distinction between computations and runs becomes somewhat arbitrary, but this is not a problem as we can always specify what computations we are interested in and use that to make the choice.
A system could be both an adder and a multiplier; for example, the universe is both if at least one of each exists as a sub-system of it. Yet a hypothetical universe with 100 adders and 1 multiplier would be significantly different from one with 1 adder and 100 multipliers. A more detailed way of characterizing such systems would be to state how many instances of each kind of computation are performed by it: in other words, to give a measure distribution on computations. Such a measure distribution (or more exactly, a measure distribution on runs of computations) is the main tool needed to evaluate the computationalist version of the Many Worlds Interpretation of QM, and will be addressed in later posts.
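A measure distribution of the sort just described can be sketched very simply. The toy "universes" below are invented purely to mirror the 100-adders-vs-100-multipliers example; a measure distribution just assigns to each kind of computation its share of the instances performed.

```python
from collections import Counter

# Toy universes: lists of the kinds of computations they perform.
universe_a = ["adder"] * 100 + ["multiplier"]
universe_b = ["adder"] + ["multiplier"] * 100

def measure_distribution(universe):
    """Assign each kind of computation its fraction of all instances
    performed in the given universe."""
    counts = Counter(universe)
    total = sum(counts.values())
    return {kind: n / total for kind, n in counts.items()}

# The two universes contain the same kinds of computations but have
# very different measure distributions over them.
print(measure_distribution(universe_a))
print(measure_distribution(universe_b))
```

The point of the example is that a mere list of which computations occur (both universes contain adders and multipliers) misses the difference that the measure captures.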
While artificial computers tend to operate on command, there is nothing about the concept of implementing computations that requires that. For example, a robot that walks around and listens for numbers, then once it has heard two of them, adds them and says the result, then starts again, would still be an "adder". Such a robot would behave according to its internal agenda, ignoring the desires of the people around it even if they beg it to stop.
It's also important to note that "outputs" need not be distinguished from internal parts of the system. Any parts of the system that produce the behavior in question are the relevant parts, regardless of how they interact with things outside the system.
For a computation with more than one step, it is useful to define "inputs" as substates that are affected by influences outside the scope of the computation in such a way that their values are not determined by the previous state of the computation. Influences outside the scope of the computation are not necessarily outside of the system (such as the universe) which performs that computation.
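This notion of an input as an undetermined substate can be illustrated with a small invented example. Below, the state has two components: an accumulator that follows deterministically from the previous state, and a second component that is overwritten at each step by an influence outside the computation's scope (which, as noted above, may still lie inside the larger system).

```python
def step(state, external):
    """One step of a two-component computation. 'acc' is determined by
    the previous state; the second component is not - it is set from
    outside the computation's scope, so it counts as an input."""
    acc, inp = state
    return (acc + inp, external)

state = (0, 0)
for external in [3, 4, 5]:
    state = step(state, external)

# The accumulator holds the sum of the inputs already consumed
# (the last input, 5, is still pending in the state).
print(state)  # (7, 5)
```

The accumulator's value at each step is fixed by the computation's own previous state; the input substate is the only part whose value the transition rule leaves open.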
Those familiar with computer science might be surprised that the concept of the Turing Machine will play no role in using computations to characterize systems. A Turing Machine is a type of computer that is useful for specifying which functions could in principle be calculated by digital computers (given unlimited memory - an infinite number of substates - though with a transition rule for each substate that depends on only a finite number of substates) and how easily (setting bounds on how memory needs and processing time scale with the size of a problem).
Because this class of computers is so ubiquitous in computer science, those computer scientists who dare venture into the swamps of philosophy far enough to read a paper or attend a lecture on the implementation problem sometimes completely lose interest in the problem when they realize that no one is mentioning Turing Machines. In fact, it would be quite easy to describe Turing Machines as a special case of the computers that I will describe.
But here we are concerned with characterizing the behavior of actual systems as they exist, not with finding what size of problems they could be programmed to handle. It is also important to note that digital computing is just a special case of the behaviors that could be characterized as computation; analog computing can certainly be considered as well, and most of the definitions I will make are general enough to cover all cases. However, I will focus on digital computation in most of the examples I look at.
The next step is to formalize the idea of implementation by giving a mathematical criterion for whether a computation is implemented by a given mathematically described system. This will be done by requiring a mapping from states of the underlying system to formal states of the computation, and requiring the correct behavior of the states as they change over time.
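The criterion just described can be sketched in miniature. All details here are invented for illustration: a "system" is reduced to a finite trace of its states, the mapping is an explicit dictionary, and the check is simply that whenever the system evolves one step, the mapped formal states follow the computation's transition rule.

```python
def implements(system_trace, mapping, transition):
    """Check that the mapping sends each step of the system's trace to
    a correct step of the formal computation: for every consecutive
    pair of system states, the mapped formal states must obey the
    computation's transition rule."""
    formal = [mapping[s] for s in system_trace]
    return all(transition[a] == b for a, b in zip(formal, formal[1:]))

# A toy system passing through physical states p0 -> p1 -> p2,
# mapped onto a formal two-state computation that alternates 0 -> 1 -> 0.
system_trace = ["p0", "p1", "p2"]
mapping = {"p0": 0, "p1": 1, "p2": 0}
transition = {0: 1, 1: 0}
print(implements(system_trace, mapping, transition))  # True
```

Note that nothing here restricts which mappings are allowed, which is precisely what opens the door to the triviality problem discussed next: with enough freedom in choosing the dictionary, almost any trace can be mapped onto almost any computation.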
However, as will be seen in the next post, this approach quickly runs into a problem: without restrictions on allowed mappings, 'almost any' physical system would seem to implement 'almost any' computation. This absurd result would imply that a rock implements the same computations as a brain. The likely solution is to impose restrictions on the allowed mappings, but finding a fully satisfactory set of restrictions has proven to be a difficult task. My proposals for this will be presented in subsequent posts; I think that I have been successful in finding (at least close to) the right set of restrictions.