In 1936, Alan Turing publishes a paper entitled “On computable numbers, with an application to the Entscheidungsproblem”. The problem referred to in the title is one posed by David Hilbert (not one of the legendary 23 posed at the 1900 Paris congress – he stated this one in 1928, together with Wilhelm Ackermann): is there any effective (i.e. mechanical & finite) way of determining, of any given statement in a formal system, whether or not it is provable in that system? Because he had set out to show that there is no such thing, i.e. that mathematics is, in this sense, undecidable, Turing needed a precise definition of what “effective & finite” meant in this context. Turing came up with this: there is an “effective & finite” way of calculating the values of a mathematical function if it can be carried out by a ‘computer’ – at that time the term meant ‘a human who computes’ – whose step-by-step work he abstracted into an idealized device, later named the Turing machine in his honor. This was the first precise description of a very basic computer and its program.
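To make the idea concrete, here is a minimal sketch in Python of the kind of device Turing described: a finite table of (state, symbol) instructions driving a read/write head over an unbounded tape. The particular machine (a unary increment) and all the names below are illustrative choices of mine, not anything from the 1936 paper.

```python
# A minimal Turing machine simulator (an illustrative sketch, not Turing's notation).
# The "program" is a transition table: (state, symbol) -> (new state, write, move).

def run_tm(table, tape, state="start", halt="halt", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, blank)
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1}[move]    # move the head one cell left or right
    return "".join(cells[i] for i in sorted(cells))

# Example machine: append one '1' to a unary numeral, i.e. compute n + 1.
increment = {
    ("start", "1"): ("start", "1", "R"),   # scan right across the 1s
    ("start", "_"): ("halt",  "1", "R"),   # write a 1 on the first blank, halt
}

print(run_tm(increment, "111"))            # -> "1111"
```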
Along with the description of this machine came a further, crucial idea: the instruction table of any one of these machines can itself be written down as input to another Turing machine, which can then simulate it – and so on, ad infinitum. Although it is impossible to actually construct or operate such machines (their tape is unbounded), the idea offered those who believed in it an inspiring prototype of unlimited storage and perfect reliability.
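The sketch above already shows why: the increment machine’s instruction table is itself just a finite string of symbols, so it can be written out, handed to another machine, and read back. A toy serialization (not Turing’s actual “standard description”) makes the point visible:

```python
# The machine's table is plain data, so it can itself be written onto a tape
# and reconstructed by another machine -- the germ of the universal machine.
# (A toy serialization of mine, not Turing's "standard description".)

encoded = repr(sorted(increment.items()))  # the machine as a string of symbols
decoded = dict(eval(encoded))              # another machine "reads" it back

print(run_tm(decoded, "11"))               # -> "111": same behavior, run from data
```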
But even assuming some functions are undecidable, what is this subset of decidable functions and how is it to be identified? The American logician Alonzo Church had posited, in the mid-1930s, the idea that this set could be identified with the recursive functions – a mathematically precise notion, soon proved equivalent to definability in his own lambda-calculus.
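For a flavor of what lambda-definability amounts to, here is the classic Church encoding rendered in Python lambdas: a number n is represented as the function that applies another function n times. This is an illustrative translation of mine, not Church’s own notation.

```python
# Church numerals: the number n is the function that applies f to x n times.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))                  # n + 1
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))     # m + n

def to_int(n):
    """Decode a Church numeral by counting applications of +1."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # -> 5
```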
The Church–Turing thesis was born out of the unification of these two views: the set of recursive functions – which turns out to be exactly the set of functions computable by Turing machines – is the same as the set of functions which can be computed by any effective, mechanical, finite method.
In 1950, Turing publishes “Computing Machinery and Intelligence”, in which he recasts the can-computers-think question into the form of a game: we can speak of a computer as intelligent if it makes it impossible for a human “interrogator” to tell whether the output coming from it is that of a machine or of a human mind.
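The structure of the game is simple enough to sketch as a protocol. Everything below – the function names, the toy players, the naive judge – is a placeholder of mine, not anything from Turing’s paper; it only shows the shape of the test: hidden labels, a transcript, a verdict.

```python
import random

def imitation_game(questions, human, machine, judge):
    labels = {"X": human, "Y": machine}
    if random.random() < 0.5:                   # hide who is behind each label
        labels = {"X": machine, "Y": human}
    transcript = {L: [(q, player(q)) for q in questions]
                  for L, player in labels.items()}
    verdict = judge(transcript)                 # the judge names the machine: "X" or "Y"
    return labels[verdict] is machine           # True iff the machine was identified

# Toy players: if the judge can do no better than chance over many games,
# the machine passes (on this operational reading of the test).
human_player   = lambda q: "hmm, " + q.lower()
machine_player = lambda q: "hmm, " + q.lower()  # a perfect mimic, by construction
naive_judge    = lambda t: random.choice(["X", "Y"])

trials = [imitation_game(["What is 2+2?"], human_player, machine_player, naive_judge)
          for _ in range(1000)]
print(sum(trials) / len(trials))                # ~0.5: indistinguishable from chance
```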
Many attacked the psychological assumptions behind this test, the Turing test. [Rather preoccupied with gluing his surname to each of his productions, isn’t he? I wouldn’t be surprised to find out his colleagues started calling him Turing’s body…] It was accused of being too behavioristic – that is, of assuming that the meaning of statements about one’s psychological states can be reduced to statements about one’s bodily motions. It was also accused of being too operationalist – the epistemological view that the concepts of science (“intelligence”, here) should be defined in terms of the operations we need to apply in order to verify them.
The idea of the Turing test, together with the criticisms it received, was taken over by the American philosopher Hilary Putnam, who stated what came to be known as the functionalist view of the mind. According to this view, mentality is a matter of functioning, not of substance. In other words, it doesn’t matter what the mind is made of; what matters is what it does, how it works, and what its capacities are. By “to matter”, here, Putnam meant mostly “to be able to speak meaningfully of”. Mental concepts, in other words, are functional concepts.
But then, if this is the case, whatever simulates mental phenomena amounts to duplicating them – since all there is to mental phenomena is their functioning. There is no reason, Putnam implies, not to credit computers running appropriate programs with mentality. In fact, both minds and computers can be described in two ways: by referring to their hardware (this impulse passed from here to there, thereby enabling this and that, which caused such-and-such, which gave a certain output) or to their software (the computer calculated 2+2). Hardware/software – brain/mind: the analogy worked perfectly.
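The two-level description is easy to see in miniature in Python: the `dis` module displays, for the very same act that the software-level gloss calls “calculating 2+2”, the lower-level instructions the interpreter actually steps through.

```python
import dis

def calculate():
    return 2 + 2     # the "software" description: the computer calculated 2+2

print(calculate())   # -> 4

# The same event redescribed at a lower, "hardware-ish" level: the bytecode
# the interpreter executes (exact instructions vary across CPython versions,
# e.g. the addition may be constant-folded into a single RETURN_CONST 4).
dis.dis(calculate)
```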
Of course, many contested this view as well. It was primarily the serial workings of the Turing machine that troubled scientists. We know that the mind is capable of being in more than one state at a time (at least a state and its introspective duplicate). A Turing machine, however, can only be in one state at a time – even if its program can interleave several computations serially.
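That concession is worth making concrete: a strictly serial machine can still host several computations by interleaving their steps, one at a time. A toy round-robin scheduler over Python generators (the names are mine, for illustration) shows the trick:

```python
from collections import deque

def counter(name, n):
    """A toy 'process' that advances one step per resumption."""
    for i in range(n):
        yield f"{name}: step {i}"

def round_robin(tasks):
    queue = deque(tasks)
    while queue:                   # strictly serial: one step of one task at a time
        task = queue.popleft()
        try:
            print(next(task))
            queue.append(task)     # back in line for its next step
        except StopIteration:
            pass                   # task finished, drop it

round_robin([counter("A", 3), counter("B", 3)])
# A: step 0, B: step 0, A: step 1, B: step 1, A: step 2, B: step 2
```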
Another, more profound, criticism of this idea was the point that psychological states are incompletely described if one does not take into consideration their causal connections with sensory inputs and behavioral outputs.
Artificial Intelligence: Strong & Weak
Starting in the mid-1950s, but producing observable results only some ten years later, the artificial intelligence research programme – if one can speak of such a unitary entity – started by looking into the connections between neurons and symbols. First concerned with the classic tasks of computing, AI researchers gradually became interested in games (chess, guessing games) and then, further, in what might be called “understanding” (language, planning, learning, etc.).
Depending on the view researchers held as to the aim of this programme, two versions of AI have been distinguished. Weak AI is committed merely to understanding human psychological phenomena and to replicating them, so that a computer can produce the same results – without the implication that such a computer would thereby have the psychological states themselves. The more radical view, known as Strong AI, is committed to the idea that such a program would genuinely have those psychological states (remember “functionalism”?). As Searle later put it, such a program would “have a mind in the same sense you and I have”.
[What seems apparent to me is that these “versions” of AI are different neither in their empirical commitments nor in their theoretical scope. They are methodologically different; that is, they differ in what they take to be good or bad method. For one, it is cool to say that the super-computer is having this or that psychological state. For the other, it is not. I also think that the methodological import of these versions can easily be overlooked. One of Preston’s (2002) quotes is an example of how this can happen:
Searle, on the other hand, claims to have found an argument that undercuts the idea that electronic digital computers can be said to exhibit any of the contested psychological capacities purely in virtue of their programs. Philosophers certainly have no insight into what technical tasks programmed machines might be able to perform, or when. But they can have a say about how it makes sense to characterize the abilities in question. (p. 16)
So far, from the italics above, it is clear (I think) that the bone of contention is methodological. Whether concerned with the propriety of terms such as “psychological states” or with what is neat and profitable to do as a scientist, these are prescriptive standpoints. Preston, however, seems to conclude the paragraph by crediting the CRA (the Chinese Room Argument) with having made a descriptive point about what can and cannot be the case (as opposed to what can and cannot “be said to be” the case). He continues:
Like anyone else they may, by using thought experiments for example, establish or refute theses about what is logically possible.
*The series “Reconstructing the Chinese Room” follows some of the articles in:
Preston, J. & Bishop, M. (eds.) (2002). Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Clarendon Press.
The first three posts will follow:
Preston, J. (2002). Introduction. In J. Preston & M. Bishop (eds.), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence (pp. 1–51). Oxford: Clarendon Press.