Searle’s theoretical & philosophical stance
In his last article on the subject (Searle, 2002), the Chinese Room’s builder expounded a bit on the overall philosophical motivation behind the argument. Let us begin by taking note of Searle’s unchanged outlook on the matter: “My reason for having so much confidence that the basic argument is sound is that in the past twenty-one years I have not seen anything to shake its fundamental thesis” (p. 50).
The “absolutely fundamental logical” side
This fundamental thesis is spelled out quite plainly at the beginning:
… the purely formal or abstract or syntactical processes of the implemented computer program could not by themselves be sufficient to guarantee the presence of mental content or semantic content of the sort that is essential to human cognition.
It is my job here to see whether this is the best formulation of the main standpoint Searle is putting forward, but let’s leave it at that for the moment.
(1) No “purely formal computer program” can guarantee “human cognition”
Searle does not deny that sometime in the future a system might have semantic content for other reasons. Some little-known branch of physics might discover how and by what means mental states are caused, and we might be able to duplicate that process. But no program will do simply in virtue of being a digital program, regardless of how wonderfully complicated and powerful and reliable it is. Now why is that? And how does that make a point against “Strong AI”?
The argument “rests on” – it remains to be seen whether this is meant premise-wise – two other claims which Searle deems “absolutely fundamental logical truths”. Since, as we have seen, both of these absolutely fundamental logical truths have been (more or less successfully) contested, it’s fair to say Searle is overstating a bit at this juncture, but here they are:
(2) Syntax is not semantics
(3) Simulation is not duplication
OK. Form is not sufficient for guaranteeing content, and structure is not sufficient for guaranteeing function. If I have symbols ordered and produced by syntactical rules I do not thereby have content and, in Searle’s words, the simulation of digestion on a computer would not thereby digest beer and pizza.
Why would anyone claim otherwise?
A “verificationist reductionist urge” is how Searle terms it. Quite an embarrassing pimple, if you’re unlucky enough to possess one. It consists of attempting “to treat the epistemic basis for a phenomenon […] as somehow logically sufficient to guarantee the presence of the phenomenon” (p. 52). What a bunch of dupes, right? Well, hold your horses. In the case of the study of mind the reductionist urge is exemplified by behaviourism, whose principles told one it is OK to study the mind via behavior – instead of concerning yourself with what happens in people’s mental states (or, even worse, what they consist of), concern yourself with how these mental states reflect upon behavior. Because at the end of the day it is behavior you can observe, not mental states. As Russell puts it in The problems of philosophy (1912): “What goes on in the minds of others is known to us through our perception of their bodies, that is, the sense-data in us which are associated with their bodies”. If, however, it is matter you want to study, then sense-data – the “behavior” of matter – is what you should study. But why is this so vehemently rejected by Searle?
Because they cheat, that’s what they do – Searle seems to say.
But the reductionist also wants to continue to track the intuitive idea of the phenomenon that was supposed to be reduced in the first place. The intuitive notion of a mental state or a material object has somehow to be preserved within the reductionist enterprise.
How is this cheating? Well, nobody asks you to go the whole way – there might be more or less radical reductionists. But when you say that a computer would have a mind since it would implement the right program – you are thereby committing yourself to the idea that the right program, if implemented, is sufficient for, and constitutive of, a mind (or some of the mental states it produces). In other words, there’s nothing else to a mental state than implementing the right program. The cheating, then, consists of saying “there’s nothing more to it” and “oh, but there’s still something else to it” at the same time.
The idea behind the Turing Test is an exact expression of this attitude: if it gets everybody to think that it is an intelligent being, then it is an intelligent being. So, if Strong AI’s response to the CRA followed this idea, its proponents would have to admit that the person in the room does, in fact, understand Chinese (since he can pass the test). But is that their response? No.
“The problem for all these forms of reductionism is the same: are there really two things or just one? The thesis of reductionism is that there is just one thing – behavior, or computer programs, or sense-data, or whatever – but in the face of the counter-examples the reductionist says that the other thing must be there too” (p. 53)
Now, just to strengthen his position, Searle spices everything with a bit of ad hominem:
One wonders, therefore, why the debate continues. Well, of course, there are a number of reasons. Many have a professional commitment to Strong AI. To put the point bluntly, in many cases their careers and the funding of their research projects depend on the continued belief that they are “creating minds”.
The difference between a program and a brain
Because of the differences between syntax and semantics, the program is not sufficient to guarantee mental content. That is based on the “absolutely fundamental logical truth” that syntax is not sufficient for semantics. The other “absolutely fundamental logical truth” is that simulation is not duplication. But why is it not?
The answer to that question can be given in one word: causation. (p. 54)
The brain causes the mind in a way a computer program – being defined independently of the physics of its implementation – could never do it simply by virtue of being a computer program. Thus, we would say that the computer program qua program hasn’t got any physical nature. The brain does have a physical nature and it is this physical nature (in conjunction with activity which might be describable in purely formal terms) that causes higher-level features of the brain.
This, Searle stresses, must be clearly separated from the question “Can machines think?” There’s definitely one class of machines that can do it: humans. Okay, so humans are biological machines but so what? Once we are capable of duplicating the process, we will have thereby constructed another “version” of a thinking machine. This might sound improbable – but there’s no philosophical or logical reason why it cannot be the case. But again, this thinking-machine will not be thinking merely in virtue of it being a program. The hardware that implements it will matter. In other words, such a machine must be able to duplicate – not merely simulate – what the brain is doing when it is conscious.
“… computation as standardly defined does not name a machine process. Oddly enough, the problem is not that computational processes are too much machine-like to be conscious, it is rather that they are too little machine-like. [It] is defined purely formally or abstractly in terms of the implementation of a computer algorithm, not in terms of energy transfer” (p. 57)
The philosophical twist
Behaviourism and verificationism are both responses to the same basic philosophical question that now makes up the intellectual trade product of epistemology: What is true knowledge? As Searle pictures them, they both stem from a decisive Cartesian separation between mind and matter. Descartes’ famous distinction between res cogitans and res extensa left the former to religion and mysticism and the latter to proper science. Here, it was method that mattered (pun intended)! What is so special about science is that it applies the proper critical apparatus to its already restricted subject matter. People like Newton and even early empiricists like Locke were of the opinion that whatever cannot be quantified (stated in terms of math) had better be left to the other camp.
To this philosophy was gradually added – culminating with Karl Popper – a rejection of the notion of truth. Again, the idea that we get only partial truth about the external world was already present in the early empiricist movement (e.g., theologically spiced up, in Berkeley), but it came to bear upon scientific method only in the nineteenth and twentieth centuries.
Searle sets his Chinese cannons against these three features of intellectual activity, namely dualism, “obsession with method”, and tacit rejection of truth.
We are interested in the fact of internal mental states, not in the external appearance. Our claims, if true, have to meet more than an instrumental test, they have to correspond to facts in the world. (p. 61)
Searle, J. R. (2002). Twenty-one years in the Chinese Room. In J. Preston & M. Bishop (Eds.), Views into the Chinese Room: New essays on Searle and artificial intelligence (pp. 51-69). Oxford: Clarendon Press.