An informal look at the argument*
The actual text of the CRA (Chinese Room argument) can be found almost anywhere on the Internet where the subject is tackled. I know I should not add yet another copy, but here it is. This version is a revised one, taken from the (1984) Reith Lectures Searle delivered on the BBC:
Imagine that a bunch of computer programmers have written a program that will enable a computer to simulate the understanding of Chinese. So, for example, if the computer is given a question in Chinese, it will match the question against its memory, or data base, and produce appropriate answers to the questions in Chinese. Suppose for the sake of argument that the computer's answers are as good as those of a native Chinese speaker. Now then, does the computer, on the basis of this, understand Chinese, does it literally understand Chinese, in the way that Chinese speakers understand Chinese? Well, imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating these Chinese symbols. […] Now the point of the story is simply this: by virtue of implementing a formal computer program from the point of view of an outside observer, you behave exactly as if you understood Chinese, but all the same you don't understand a word of Chinese. But if going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese. (Searle, 1984, pp. 31-32)
In the original paper, an elaboration appeared in which the same man also receives cards with signs in English (i.e., letters), but the purpose of this elaboration was merely to make the contrast clearer. The structure of the argument was in every respect the same.
So what is the CRA about? Preston’s (2002) “Introduction” seems very sharp on this point. Here is his informal summary:
The central claims of Searle’s original paper are clear. Something is a digital computer in virtue of performing computations. But computations alone cannot, in principle, give rise to genuine cognition. Computation being nothing but the manipulation of symbols in accordance with purely formal or syntactical rules, something that is only computing cannot be said to have access to or know or understand the ‘content’, the semantic properties of the symbols it happens to be manipulating. […] But it is a conceptual or logical truth that syntax is not sufficient for semantics. Computer, therefore, cannot be credited with understanding. (p. 19)
What I see in his commentary is a very subtle glide between two fundamentally different interpretations. Since they are different, and since the difference is never made explicit but surfaces on and off throughout Preston’s contribution, I think the central claims of Searle’s original paper are not in the least “clear”. Informally, we could say that the CRA might be putting forward two different claims (one, the other, or perhaps both):
(1) While something digital can simulate intelligent performance, nothing of the sort can duplicate it.
(2) While something digital can simulate intelligent performance, nothing of the sort can be said to have duplicated it.
Notice also some of Preston’s choices: “can be credited with understanding” (as opposed to “understands”, simpliciter) or “cannot be said to have access to…” (as opposed to “have access to”, simpliciter).
Interpretations and implications
There has been a lot of interest in distinguishing different types of replies to the CRA. Some, however, are more noteworthy than others. It has often been stressed that ‘understanding a language’ is not a relevant component of a Turing test if the computer is not “packed” with a language-learning program. But, of course, this would only mean that Searle need only construct an analogous scenario in which the task in question is learning Chinese, not understanding Chinese.
Also, some replies have focused on the difference between “if squiggle-squiggle, then squaggle-squaggle” and (a) the actual formalisms in computer programs, and anyway (b) the actual pragmatics of language. To both, however, there is an easy rebuttal: (a) although computer programs become more and more complex, they become so by accumulating collections of just such rules, which run in virtue of how squaggle-squaggle is related to squiggle-squiggle; (b) of course it is not, but this is an assumption that both parties to the debate grant, namely, that computer programmers will be able to replicate language in that way. [In other words, AI says, “If …, then yes …”, while Searle says, “Even if …, it would still not …”; but both are ready to grant the if-part.]
Each of the more serious replies will be spelled out in later sections.
What is curious is that even though Searle put forward a reconstruction of his own of what the CRA is intended to establish as an argument, critics have rejected that very reconstruction. The three premises (and the conclusion) Searle himself ascribes to his own thought experiment are the following:
(1a) Programs are purely formal
(2a) Minds have mental content
(3a) Form is never the same as or sufficient for semantics
(4a) Therefore, programs are not sufficient for minds
This reconstruction is, I think, deceptively simplistic – maybe part of the reason why some have dubbed it ‘the Brutally Simple Argument’. For one thing, the elements of the Chinese Room (the man, the squiggles and squaggles, the deceiving output, etc.) do not appear in it. How, then, is this a reconstruction of how Searle set out his argument, if almost the whole of his argument is not present?
Here is the reconstruction Preston chooses to put forward. It is not utterly different from Searle’s, but it at least brings elements from the thought experiment itself into the reconstruction. (Still, one might wonder why it is not the same as Searle’s):
(1b) The person in the room has access only to the formal features
(2b) To understand Chinese, the person would need access to semantic features
(3b) No set of formal features is sufficient for understanding
None of the reconstructions above seems very loyal to the actual text, and in any case neither is justified in any way. They are simply put forward, either by Searle or by his critics.
*The series “Reconstructing the Chinese Room” follows some of the articles in:
Preston, J., & Bishop, M. (Eds.). (2002). Views into the Chinese Room: New essays on Searle and artificial intelligence. Oxford: Clarendon Press.
The first three posts will follow:
Preston, J. (2002). Introduction. In J. Preston & M. Bishop (Eds.), Views into the Chinese Room: New essays on Searle and artificial intelligence (pp. 1-51). Oxford: Clarendon Press.