Feb 10, 2012
In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence.
Perhaps the simplest and most important point about ethics is purely logical. I mean the impossibility of deriving non-tautological ethical rules – imperatives; principles of policy; aims; or however we may describe them – from statements of facts. Only if this fundamental logical position is realized can we begin to formulate the real problems of moral philosophy, and to appreciate their difficulty.
1. Jones uttered the words “I hereby promise to pay you, Smith, five dollars.”
2. Jones promised to pay Smith five dollars.
3. Jones placed himself under the obligation to pay Smith five dollars.
4. Jones is under the obligation to pay Smith five dollars.
5. Jones ought to pay Smith five dollars.
But, interestingly, Kuhn:
Feb 9, 2012
Picha, M. (2011). How to reconstruct thought experiments. Organon F, 18(2), 154–188.
The empiricist hassle with thought experiments, according to Picha (2011), can be simply put in the following dictum: thought experiments cannot lead to knowledge of contingent things by means of rational inquiry. The dilemma is that that is exactly what thought experiments seem to be doing. [By the way, Picha is using an interesting term for the genus of thought experiments, namely “structure”. I think it is appropriately inexplicit and philosophically-sounding. Might use it myself in the future.] Ernst Mach, a rather moderate empiricist (an empiricist nevertheless!), was the first to propose a way out of this dilemma.
Mach supposes that not all of the information obtained through sensory perception is used to form explicit beliefs, much of it is processed on the unconscious level for which Mach uses the term ‘instinctive’. Our minds contain imaginary stocks with the well-lit areas filled with reflected, explicitly embraced beliefs. Besides those, there are, however, dark corners, whose contents are unknown but which influence our behavior and decisions. Thought experiments are one way of bringing beliefs from the dark corners into the light. (Picha, 157)
[In his syrupy manner, Sorensen (1992) referred to this stock of unprocessed beliefs as the inner cognitive Africa.] By means of thought experimentation, knowledge coming to the unaware subject by sensory paths is being structured, conceptualized, made intelligible etc.
More recent accounts hold to an empiricist notion of knowledge by putting forward an eliminativist thesis. Norton (1996) is the most well-known – maybe first? – to entertain this thesis explicitly.
Thought experiments can be reconstructed as arguments based on hidden or explicit assumptions. The resulting belief can be considered justified only to the extent that the reconstructed argument is capable of justifying its conclusion. (Norton, 2004)
We notice that this position towards thought experiments is not necessarily (or not fully) depreciatory. Thought experiments are seen as reliable sources of knowledge, with the caution that they come into being in these narrative, rhetorical costumes which might hinder direct criticism. The possible “hiddenness” of the assumptions Norton refers to is, we understand, not always beyond the thought experimenters’ control.
James Brown (1991) and Tamar Gendler (1998) challenged the eliminativist thesis. For Brown, thought experiments are tools that enable direct (!) access (!) to the ideal (!) world of physical laws. For Gendler, there are two readings of the eliminativist thesis. The dispensability thesis says that whatever a thought experiment can do, an argument can do; put differently, if you have proven or disproven something with a thought experiment, then you are sure to be able to do the same with a (“normal”?) argument. The derivativity thesis says that insofar as thought experiments can produce knowledge, they produce it by argumentative means. In other words, the dispensability thesis says that a good thought experiment can be replaced by its underlying argumentative structure, while the derivativity thesis says a thought experiment is good because of its underlying argumentative structure.
Galileo’s thought experiment is often brought forward by “enthusiasts” (i.e. those who hope to show thought experiments are not reducible to arguments) as an example that would plead their cause. The only thing enthusiasts need to point to is some non-argumentative features that “resist elimination” (Picha, 161), that is, elements which, if taken away, would bring down the whole “structure”. It is important for them to show that such a feature contributes to whatever function thought experiments might have, so that its “epistemic value” is there, but not in a premise-conclusion form.
Brown devised a typology of thought experiments in which he identified some as examples of non-argumentative thought-experimental prowess. For those thought experiments that are intended as refutations (Brown calls them “destructive”), the eliminativist thesis might hold. But, Brown purports to show, there are thought experiments that do not have a definite (i.e. explicit) theory in the backdrop, a theory against which the thought experimenter sets himself. Picha (2011) notes:
Brown believes that the only adequate structure of a reconstructed argument is the following: Considering phenomenon P under theory T, conclusion C follows. I believe this conception of argumentative reconstruction, that is, the conception of what kind of argument the reconstruction should be, is too narrow. It seems that Brown means by reconstruction (a) the formulation of a deductive argument where (b) all premises must already be explicitly formulated in the unreconstructed form. (p. 162)
In other words, the debate appears to hinge very much on what one means by “reconstruction”. If boiling down to a deductive form is what you mean, then of course Brown must be right and the eliminativists’ claims hopeless. But neither Norton nor any of the subsequent proponents seems to have committed himself to such a narrow view of reconstruction.
Anyway, to follow Brown’s argument, Galileo’s thought experiment has a leap from “The speed of falling bodies is not proportional to their weight” to “All bodies fall alike”. And this leap – Brown calls it a “Platonic leap” – is made over an argumentative gap that cannot be put into a premise-conclusion form.
The move […] is neither an inference nor an inductive generalization grounded empirically. Nevertheless, after careful consideration of the experiment, this move is believed to be justified and hardly anyone would hesitate to make it. (p. 165)
The enthusiasts’ claim is then that the thought experiment makes “All bodies fall alike” acceptable even though it is, strictly speaking – that is, dialectically speaking – still unacceptable in the context of that discussion. Norton’s answer to this is that, indeed, from Brown’s reconstruction the leap would seem Platonic and not argumentatively supported – but the reconstruction is unfortunately poorly conducted, for it misses
[…] a hidden assumption that to determine natural speed, it is not necessary, according to Aristotelian physics, to consider any quantities other than the weights of the falling bodies. In other words, natural speed depends solely on the weights of the falling bodies. Norton believes that if we put this hidden assumption into the reconstruction, no Platonic leap is needed and the conclusion can be reached by a simple inference. (p. 166)
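As I read Norton, the point is that once the hidden assumption is made explicit, the whole thought experiment collapses into an ordinary deduction. A schematic paraphrase of my own (labels and wording are mine, not Norton’s or Picha’s):

```latex
\begin{align*}
&\text{(A1)}\ \text{Hidden Aristotelian assumption: natural speed depends solely on weight.}\\
&\text{(A2)}\ \text{Aristotelian thesis: heavier bodies fall faster than lighter ones.}\\
&\text{(P1)}\ \text{Strap a light body } l \text{ to a heavy body } h\text{: } l \text{ retards } h,\\
&\qquad\ \text{so the composite } h+l \text{ falls slower than } h \text{ alone.}\\
&\text{(P2)}\ \text{But } h+l \text{ is heavier than } h\text{, so by (A2) it falls faster than } h \text{ alone.}\\
&\text{(C1)}\ \text{(P1) and (P2) contradict each other, so (A2) is false.}\\
&\text{(C2)}\ \text{Given (A1), speed cannot vary with weight at all: all bodies fall alike.}
\end{align*}
```

On this rendering the step from (C1) to (C2) – the move Brown calls a “Platonic leap” – is licensed by (A1): if weight is the only variable that could make a difference, and it makes none, nothing is left to differentiate the falls.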
Tamar Gendler responded to this by adding that, unlike the argument, the thought experiment is still epistemically isolated. Galileo’s conclusion, she maintains, has different degrees of justification according to whether it is defended by the argument or by the thought experiment. While the argument offers merely a number of ways in which the Aristotelian theory can be replaced, the thought experiment narrows them down to only one, namely, that all objects fall alike. As Picha puts it, “the thought experiment can tell us that something is wrong with the original theory, as well as reveal the problematic point.” It is also a matter of dialectical prowess: by pointing to the solution more precisely, the thought experiment thereby closes possible refutational paths to the party that might try to defend the Aristotelian view. These possibilities, Gendler claims, are rejected by (or in) the conducting of the experiment. The argument – or, to be more precise, the reconstructed version of the underlying argument – can very well achieve the same goal, but it needs some abstract, controversial premises like “Entification is not physically determined” (?). Gendler never shows how this experimental shielding takes place; she just acknowledges that the thought experiment is more potent in that respect.
In the rest of the article Picha counters Gendler’s objection. First, he admits that the thought experiment – by the particulars it introduces – might generate additional beliefs that would be lacking in the case of an argument, but he denies that these beliefs are also justified by the thought experiment. Thus: introduced or produced, not justified. If this is the case, then Norton is right: although thought experiments are rhetorically more compelling – owing to our intellectual inclination to grasp particular things more easily – they remain, epistemically, nothing more than arguments.
The core of Gendler’s critique of eliminativism is the objection that the recipient obtains a belief in a thought experiment that she may not obtain in a straightforward argumentative reconstruction. There is no doubt about that, since the information presented in the form of a thought experiment is easier to grasp than when in the form of a straightforward argument. Experiments no doubt make obtaining new information easier and their didactic value is beyond dispute. We commonly and successfully use thought experiments in this way. The question is whether the obvious difference in reception is only caused by the individual intellectual abilities of the audience, or whether a contributing factor is that there is an epistemic difference between thought experiments and their argumentative reconstructions. (Picha, 2011, p. 172)
Picha’s move is to re-reconstruct the thought experiment with a more powerful analytical tool – Toulmin’s model – and to show that if the unstructured set of premises that both Norton and Gendler usually throw into what they term “the reconstruction” is properly modified, the same dialectical paths are blocked by the thought experiment as by the underlying argument.
To start with, here’s Brown’s list of premises with more structure added.
Gendler’s reconstruction is even more problematic because for her, the underlying argument does not even have a reductio structure. For Gendler, what Galileo is to be committed to is this: (I) Natural speed is mediative. (II) Weight is additive. (III) Thus, natural speed is not directly proportional to weight.
Picha then attempts to organize all this by asking the beautifully fundamental question: “What are the requirements that a successful reconstruction of Galileo’s experiment should meet?” (p. 177)
First, the condition of generalization must be met: the reconstruction must not contain premises with particular details, as those distinguish straightforward arguments from experiments. If we want to defend the view that the absence of details does not affect the epistemic power, we must do without them, of course. The reconstruction must also be adequate: the straightforward argument must be an instance of the same scheme of reasoning as the thought experiment. […] Third, there is a condition of plausibility: the reconstruction must work with premises whose plausibility, which is responsible for blocking the ways out, is the same as the plausibility of the relevant premises of the thought experiment.
I wouldn’t accept any of the three criteria immediately, for much depends on what Picha means by generality, adequacy and plausibility. Why shouldn’t the reconstruction contain particular details – isn’t the thought experimenter using them as a starting point for his generalizations? And why must the straightforward argument be “an instance of the same scheme of reasoning as the thought experiment” – couldn’t the thought experiment contain non-argumentative speech acts which one must leave aside in one’s reconstruction? In any case, this is the “compromise” solution he proposes, generated by applying the Toulminian apparatus:
In the final “phase” of the argument, (4) & (6) are shown to be incompatible. Picha says that “its conclusion, which states the incompatibility of 4 and 6, is not important for our purposes” (p. 180). This is a bit surprising. I believe the exact opposite should have been the case. Regardless of how (4) & (6) are themselves shown to be supported, what was at stake was to show how this incompatibility gives sufficient ground to propose the new theory. That, as far as I understand, was Gendler’s critique in the first place – she said the thought experiment, or the thought-experimental version of the argument, moves from (4) & (6) to “All bodies fall alike” in a way the argument does not. When it came to the reconstruction, this was somehow left aside by Picha without further ado. Quite unjustly then, Picha concludes:
The experiment with particular details, in her [Gendler’s] opinion, makes the recipient accept certain principles and thus excludes some possibilities for criticism. The straightforward argument does not enable this and all possibilities are open if they are not blocked by further, controversial premises. I have shown that this opinion is based on an inadequate reconstruction of Galileo’s thought experiment. The alleged difference between the experiment and the argument is illusory.
As much as I support an argumentative perspective on thought experiments, I do not believe Picha has managed to show the analytical fruitfulness of such a perspective.
Not to be overly theoretical about this, but there must be some sort of reason why spam email is written the way it is. A couple of posts ago I briefly discussed some perspectives for which the central notion of pragmatics is that of choice – and how this might turn pragmatic analysis into a rhetorical one. In that sense, the text below exhibits some remarkable choices as to topic, shared info and lexicon.
hmm... Who are you? My name is Angelina, beautiful and young 25 years girl.
I have received the letter with yours email and I want to learn what for
I receive it. Probably it is an error or someone's joke, I do not know...
If you are interested in acquaintance, write to me on email. Now I single
and not against to meet the gallant man.
My email: firstname.lastname@example.org
Feb 2, 2012
Searle’s theoretical & philosophical stance
In his last article on the subject (Searle, 2002), the Chinese Room’s builder expounded a bit on the overall philosophical motivation behind the argument. Let us begin by taking note of Searle’s unchanged outlook on the matter: “My reason for having so much confidence that the basic argument is sound is that in the past twenty-one years I have not seen anything to shake its fundamental thesis” (p. 50).
The “absolutely fundamental logical” side
This fundamental thesis is spelled out quite plainly at the beginning:
… the purely formal or abstract or syntactical processes of the implemented computer program could not by themselves be sufficient to guarantee the presence of mental content or semantic content of the sort that is essential to human cognition.
It is my job here to see whether this is the best formulation of the main standpoint Searle is putting forward, but let’s leave it at that for the moment.
(1) No “purely formal computer program” can guarantee “human cognition”
Searle does not deny that sometime in the future a system might have semantic content for other reasons. Some little-known branch of physics might discover how, and by what means, mental states are caused, and we might be able to duplicate that process. But simply in virtue of being a digital program, no program will do, regardless of how wonderfully complicated and powerful and reliable it is. Now why is that? And how does that make a point against “Strong AI”?
The argument “rests on” – it remains to be seen whether this is meant premise-wise – two other claims which Searle deems “absolutely fundamental logical truths”. Since, as we have seen, both of these absolutely fundamental logical truths have been (more or less successfully) contested, it’s fair to say Searle is overstating a bit at this juncture, but here they are:
(2) Syntax is not semantics
(3) Simulation is not duplication
OK. Form is not sufficient for guaranteeing content, and structure is not sufficient for guaranteeing function. If I have symbols ordered and produced by syntactical rules, I do not thereby have content and, in Searle’s words, the simulation of digestion on a computer would not thereby digest beer and pizza.
Why would anyone claim otherwise?
A “verificationist reductionist urge” is how Searle terms it. Quite an embarrassing pimple, if you’re unlucky enough to possess one. It consists of attempting “to treat the epistemic basis for a phenomenon […] as somehow logically sufficient to guarantee the presence of the phenomenon” (p. 52). What a bunch of dupes, right? Well, hold your horses. In the case of the study of mind the reductionist urge is exemplified by behaviourism, whose principles told one it is OK to study mind via behavior – instead of concerning yourself with what happens in people’s mental states (or, even worse, what they consist of), concern yourself with how these mental states reflect upon behavior. Because at the end of the day it is behavior you can observe, not mental states. As Russell puts it in The problems of philosophy (1912): “What goes on in the minds of others is known to us through our perception of their bodies, that is, the sense-data in us which are associated with their bodies”. If, however, it is matter you want to study, then sense-data – the “behavior” of matter – is what you should study. But why is this so vehemently rejected by Searle?
Because they cheat, that’s what they do – Searle seems to say.
But the reductionist also wants to continue to track the intuitive idea of the phenomenon that was supposed to be reduced in the first place. The intuitive notion of a mental state or a material object has somehow to be preserved within the reductionist enterprise.
How is this cheating? Well, nobody asks you to go the whole way – there might be more or less radical reductionists. But when you say that a computer program would have a mind since it would implement the right program, you are thereby committing yourself to the idea that the right program, if implemented, is sufficient for – constitutive of – a mind (or some of the mental states it produces). In other words, there’s nothing else to a mental state than implementing the right program. The cheating, then, consists of saying “there’s nothing more to it” and “oh, but there’s still something else to it” at the same time.
The idea behind the Turing Test is an exact expression of this attitude: if it gets everybody to think that it is an intelligent being, then it is an intelligent being. So, if the Strong AI response to the CRA were to follow this idea, then its proponents would have to admit that the person in the room has, in fact, understanding of Chinese (since the system can pass the test). But is that how they respond? No.
“The problem for all these forms of reductionism is the same: are there really two things or just one? The thesis of reductionism is that there is just one thing – behavior, or computer programs or sense-data or whatever – but in the face of the counter-examples the reductionist says that the other thing must be there too” (p. 53)
Now, just to strengthen his position, Searle spices everything up with a bit of ad hominem:
One wonders, therefore, why the debate continues. Well, of course, there are a number of reasons. Many have a professional commitment to Strong AI. To put the point bluntly, in many cases their careers and the funding of their research projects depend on the continued belief that they are “creating minds”
The difference between a program and a brain
Because of the differences between syntax and semantics, the program is not sufficient to guarantee mental content. That is based on the “absolutely fundamental logical truth” that syntax is not sufficient for semantics. The other “absolutely fundamental logical truth” is that simulation is not duplication. But why is it not?
The answer to that question can be given in one word: causation. (p. 54)
The brain causes the mind in a way a computer program – being defined independently of the physics of its implementation – could never do it simply by virtue of being a computer program. Thus, we would say that the computer program qua program hasn’t got any physical nature. The brain does have a physical nature and it is this physical nature (in conjunction with activity which might be describable in purely formal terms) that causes higher-level features of the brain.
This, Searle stresses, must be clearly separated from the question “Can machines think?” There’s definitely one class of machines that can do it: humans. Okay, so humans are biological machines but so what? Once we are capable of duplicating the process, we will have thereby constructed another “version” of a thinking machine. This might sound improbable – but there’s no philosophical or logical reason why it cannot be the case. But again, this thinking-machine will not be thinking merely in virtue of it being a program. The hardware that implements it will matter. In other words, such a machine must be able to duplicate – not merely simulate – what the brain is doing when it is conscious.
“… computation as standardly defined does not name a machine process. Oddly enough, the problem is not that computational processes are too much machine-like to be conscious, it is rather that they are too little machine-like. [It] is defined purely formally or abstractly in terms of the implementation of a computer algorithm, not in terms of energy transfer” (p. 57)
The philosophical twist
Behaviourism and verificationism are both responses to the same basic philosophical question that now makes up the intellectual trade product of epistemology: What is true knowledge? As Searle pictures them, they both stem from a decisive Cartesian separation between mind and matter. Descartes’ famous distinction between res cogitans and res extensa left the former to religion and mysticism and the latter to proper science. Here, it was method that mattered (pun intended)! People like Newton and even early empiricists like Locke were of the opinion that whatever cannot be quantified (stated in terms of math) had better be left to the other camp.
To this philosophy was added – gradually, and culminating with Karl Popper – a rejection of the notion of truth. Again, the idea that we only get partial truth about the external world was still present in the early empiricist movement (e.g. theologically spiced up, in Berkeley), but it came to bear upon scientific method only later, in the nineteenth and twentieth centuries.
Searle sets his Chinese cannons against these three features of intellectual activity, namely dualism, “obsession with method”, and tacit rejection of truth.
We are interested in the fact of internal mental states, not in the external appearance. Our claims, if true, have to meet more than an instrumental test, they have to correspond to facts in the world. (p. 61)
Searle, J. (2002). Twenty-one years in the Chinese Room. In Preston, J. & Bishop, M. (Eds.), Views into the Chinese Room: New essays on Searle and artificial intelligence (pp. 51–69). Oxford: Clarendon Press.