Below are some notes from Östen Dahl’s (1975) “On generics”. I’ve seen this paper discussed and criticized on many occasions (in Carlson, 1980 and Krifka et al., 1995 especially). Here I’ll only lay down the main points. Ten years after this paper, Dahl published another one, “Remarques sur le générique” (1985), in a special issue of Langages dedicated to genericity. I’ll end the post with some of the remarques from that paper. Before diving in, I recommend peeking at Dahl’s agreeably informal homepage.
On generics (1975)
By “generics” Dahl means generic statements: (1) “Beavers build dams”, (2) “The sun rises in the east”, (3) “A gentleman does not offend a lady”, (4) “John does not speak German” etc. We can readily notice that the presence of generic noun phrases (bare plurals, indefinite/definite articles) is not a necessary condition for genericity to occur. “John does not speak German” is one such instance of a generic sentence lacking a generic noun phrase.
The common property of these expressions, Dahl argues, “is that they are used to express law-like, or nomic, statements” (p. 99). This sort of law-like statement should be distinguished from accidental generalizations: the latter concern “a set of actual cases”, whereas the former can transcend these and speak about “possible, non-actual cases” (p. 100). Taking this as a starting point, Dahl proposes to use modal logic to formalize this distinction between accidental and nomic generalizations. We should like to say, in this case, that nomic statements involve the modal operator of ‘necessity’, while accidental ones do not. Nonetheless, speaking of ALL possible worlds is, for concrete purposes, an impractical business, and therefore, Dahl stresses, we must always consider a relation of alternativeness (which, if I’m not mistaken, is more or less the classic relation of accessibility between worlds in modal logic). One can now express the property of nomic statements as involving the consideration of alternative worlds.
From this basic level of theoretical input, we can distinguish nomic generalizations as being “used for making predictions” (p. 101). Dahl gives the following example:
(5) My friends vote for the Socialists. Hence, when you have become my friend, you’ll vote for the Socialists.
Understood as an accidental generalization, e.g. “It just happens that my friends such-and-such”, the first sentence of (5) will not support the conclusion, whereas the nomic reading will. The nomic reading is very close to the readings of dispositional properties (e.g. ‘soluble in water’), which remain true even if they are never actualized, to the effect that “saying something has a dispositional property is tantamount to making a nomic statement” (p. 102).
All that has been said so far is complicated by another semantic property which seems to go in the opposite direction; that of restricting the statement, instead of expanding it – as was the case for the difference between accidental and nomic. When we say (1) Beavers build dams, there are some covert normalcy conditions which allow us to identify the “normal” beaver across worlds. What we ultimately mean is something like under normal conditions, all beavers build dams. Another such restrictive property is that some generic sentences like (4) speak about a law or principle which is in force at a certain time, and that the application of this principle may vary – hence the tendency to also see some covert quantification (over occasions, events etc.). Both these properties are integrated in the case of generic noun phrases and generic sentences respectively:
“A generic statement means that a certain law or principle is valid at a certain time, i.e. it characterizes a certain set of worlds defined by some alternativeness relation.” (p. 105)
“My claim about indefinite generic noun phrases is that they always involve a quantification over possible objects rather than over actual ones” (p. 108)
In my short review of Carlson’s (1977, 1980 and 1982) papers, I have briefly sketched the criticisms brought forward by Carlson to both of Dahl’s claims (see here).
Remarques sur le générique (1985)
This short paper is important not for its theoretical novelty but for two caveats which it brings to light. First, Dahl is anxious to point to the distinction between truth-conditional semantics and a “verificationist theory of meaning”. I think the difference Dahl makes could be spelled out as that between the questions (a) “How is the truth-value of this sentence-type determined?” and (b) “How do we determine the actual truth-value of this proposition?” The latter, (b), amounts to verifying the statement. In the case of generic statements this is particularly important. As already observed in (1975), and reasserted in (1985), it is impossible to verify universal nomic statements simply because they speak of a spatio-temporally infinite number of worlds. “Salt is soluble in water” is, in this sense, never verifiable, but the predictions it allows – and prediction, as shown above, is precisely the function of nomic statements – might be. However this may be, we should not be distracted: in trying to find a semantic representation, we are not interested in the way we go about answering (b).
The second issue, to which Dahl adduces new insights, is that of quantification. In some contexts, Dahl admits, “it might seem as irrelevant to talk of quantification” (p. 57), and he gives an example of Carlson’s kind-reference: (6) Professor Smith discussed the sabre-toothed tiger. There is “some truth” in Carlson’s idea that one should desist from seeking quantification in reference to kinds, but Dahl points out that the bare plural and other generic noun phrases – (1)-(3) above – do not behave in the exact same way as Carlson would like us to think. Not only does the generic use of the definite article seem to convey something different from the same sentence formulated with a bare plural as NP, but in some languages Carlson’s “unified analysis” is not even applicable.
Dahl concludes the article in a more pessimistic tone, borrowing one of the affirmations from the previous article: “Trop de choses dans les expressions génériques demeurent encore obscure” (p. 60) – too many things in generic expressions still remain obscure.
 Most of the items in that volume (1985) can be downloaded from here.
 My attentive reader will sense the closeness between Dahl’s representation of nomic statements as illustrated in (5) and the implicit motivation of some pragmatic accounts of enthymemes (especially the ones stemming from informal logic and pragma-dialectics).
 Also, notice here the closeness between Dahl’s “making a nomic statement” and Carlson’s “reference to kind” at least in the case of dispositional properties. Be that as it may, there are important differences that hinder further similarity.
 The alleged difference is between e.g. “The 1928 T Ford is cheap” vs. “1928 T Fords are cheap”, where the first sentence might appear as strange because it “seems to imply that T Fords 1928 are still available”. I see no such implication – neither as ‘entailed’ nor ‘implicated’.
The debate over the impossibility of a theory of fallacy goes back at least as far as the sixteenth century, although the claim itself didn’t receive much attention until Hamblin’s (1970) Fallacies. Usually, some of the ideas from Peter Ramus’s (1555) Dialectique and DeMorgan’s (1847) Formal Logic are found similar in this respect; they both refuse, in some way or another, the very possibility of a fallacy theory. As a sum-up, Hamblin quotes a somewhat famous phrase belonging to H. W. B. Joseph, better known for his letters than for his (1906) Introduction to Logic: “Truth may have its norms, but error is infinite in its aberrations, and they cannot be digested in any classification” (Hamblin, 1970, p. 13).
In the second half of the twentieth century, such an idea simply couldn’t have made a career among logicians and philosophers, and especially among those either acquainted with Hamblin’s (at that time) promising dialectical approach or engaged in furthering it. Considering this, one should say Gerald J. Massey can be credited with turning the hot air of “truth is one, error is infinite” into a serious claim worthy of careful attention. His essay “The Fallacy Behind Fallacies” (1981) made enough waves to receive mindful (negative, but mindful) consideration from Govier, in her article “Four reasons there are no fallacies” (1987). I’ll try to briefly make both their cases here.
G. J. Massey - “The Fallacy Behind Fallacies”
Formal Fallacies. Lamenting the sorry state of fallacy theory, Massey harshly concludes that there is no theory of fallacy whatsoever and sets about showing why this is (inevitably) so. A first obvious counterattack to the no-theory attack could be made out of the concept of a formal fallacy. It would seem that in the case of formal fallacies, more than anywhere else, we have the theory (logic), we have the argument, and from this point it shouldn’t be too hard to pinpoint wherein the fallaciousness lies. Take any two sentences you wish, put them into a form which logicians know is invalid, and you’ll get an invalid argument – hence, a formal fallacy.
Not so fast. The “theory” here (i.e. any breed of symbolic logic) is formal; the “argument” is not. Logic works with forms. It speaks of formal validity. Arguments, as one should like to use the term, have content – the kind of content we convey to and fro in natural language. So how do we ‘void’ arguments of content? We translate them – that is, we render them in a formal language. Can all natural languages be translated into formal ones? Good question. Some thought such an enterprise is, in principle, possible (see here), but this is beside our present point. Massey’s line of reasoning goes as follows: in propositional logic, ((p → q) & q) → p, a.k.a. affirming the consequent, is invalid. But in other formal languages, e.g. first-order predicate logic, we could have an instance of affirming the consequent that is perfectly valid. His example is (p. 161):
If something has been created by God, then everything has been created by God
The point here is that, since everything has been created by God, then a fortiori something must have been created by God, and this is skimmed over by propositional logic, where quantification does not matter. Therefore, we should be able to formulate:
(2) Arguments that instantiate valid argument forms [as did our (1)] are valid (Principle of Logical Form)
and maybe even
(2’) Valid arguments instantiate valid argument forms (Converse Principle of Logical Form)
if we’re ready to assume that we have a good theory of translation (semantics) that might grant:
(3) Translations of valid arguments are valid, and translations of invalid arguments are invalid (Translation Principle)
The problem with all this is the fact that, fine as it may work for proving validity, the method is deficient the other way around. As Massey puts it, “The naive account of formal fallacy uncritically supposes that proofs of argument invalidity go like proofs of argument validity. That is, it supposes that one proves an argument invalid by showing that it instantiates some invalid argument form” (p. 162). The average logician allegedly proves invalidity by doing little more than showing how his efforts to translate the argument into a valid argument form had failed. This, according to Massey, is a mistake.
The Asymmetry Thesis. Before answering the question of why translation into a formal language (‘content-voiding’, we might call it) is incapable of providing a decision of invalidity, Massey briefly dismisses what he calls the trivial logic-indifferent method as a possible counterexample. Apart from such methods, which are outside logical theory, there is no other method of proving invalidity because, in virtue of (2) & (2’), “an argument is invalid if and only if there is no valid argument form that it instantiates” (p. 164). And how can we ever know that there is NO valid form our argument might instantiate? We could, if there were only one (or at least a finite number of) logical languages. As we have seen with our example (1), this is not the case. Moreover, Massey goes on, we have no reason to suppose that the formal languages we do have now, the ones we “know and respect” (p. 164), will not be superseded by better formal languages in which our ‘invalid’ argument becomes valid (just as (1) became when we translated it into first-order predicate logic). Massey gives other examples of, on the one hand, intuitively valid arguments which could not be translated into valid argument forms until some appropriate semantic theory was developed, and on the other, intuitively invalid arguments which are still subject to “universal failure to find valid-form translations”. For the former cases, what the average logician did was “suspend judgment”; for the latter, pronounce invalidity. Since the only thing that differed was intuition, these variations in the decisions were, quite literally, based on intuition. “There is nothing wrong with such appeal to intuition,” Massey writes, “but it must not be allowed to masquerade as theory” (p. 166). That is, as a principle akin to (2’’).
Moreover, we shouldn’t become overnight skeptics just because the Asymmetry Thesis is a sound judgment about the capabilities of our formal languages, for in daily life the thesis “is exactly counterbalanced by pragmatic asymmetry in burden of proof”: it is your job to find the valid form, if you think your argument instantiates such form.
Fallacies, Rules, and Inferential Practice. The way logicians went about the subject of fallacies was particularly harmful for the distinction between formal and natural language, between argument form and argument. Taking bad argument forms and seeing them being applied here and there in arguments, they readily concluded that the result of such a product must be a bad argument: an invalid one, a fallacy. They assume not only that applying inference rules (that is, forms of the type ‘premises, therefore conclusion’) is possible, but that it is “real” – that it describes what people “actually applied in composing particular arguments” (p. 169). Such ideas constitute a fundamental mistake. Although we could, in an empirical sense, predict which form an arguer used when producing a certain argument (by way of choosing the “easiest”, “simple” or “best” explanation), this would not amount to saying that (1) is fallacious because it instantiates ((p → q) & q) → p. If we truly believe in the limited set of inferential rules (and formal languages thereof), then we should try to describe them starting (empirically) from inferential practice. From arguments to forms, not the other way around. In light of what has been said, Massey’s conclusion is not at all unreasonable: “Fallacies, therefore, are perhaps of more interest to psychologists and psychiatrists, than to logicians and philosophers” (p. 171).
T. Govier - “Reply to Massey”
Trudy Govier reconstructs (pp. 176-177) Massey’s position as:
(1) Whatever else fallacies are, they are invalid arguments
This, again, is underpinned by the asymmetry thesis, which tells us that even though proving the formal validity of an argument in natural language might be done by way of translation, proving formal invalidity does not function in the same way (as explained above, due to the endlessness of logical systems). Some informal logicians, Govier notices, might be inclined to take the asymmetry thesis as actually confirming their position: precisely under these circumstances, i.e. given that we have no principled, formal way of telling invalidity, an informal approach is all the more advisable. Such a response, although maybe carrying some weight – as we shall see – is not what Massey had in mind: “such ‘low level’ nonformal judgments do not amount to a theory of invalidity […] Massey would require formal grounding for theoretical security” (p. 175).
This interpretation aside, should one grant the passage from (1) to (5)? Not quite, Govier argues. (1) is problematic in two ways: invalidity is neither necessary nor sufficient. It is not necessary because some classic fallacies, like begging the question or the straw man, are valid. Their fallaciousness lies not within their form but within their content – there’s something dialectically (as opposed to logically) wrong with them. Properly cast in an argument form, ad hominem too could be seen as deductively valid, but this would also disregard the dialectical aspects (e.g. relevance of the attack within a discussion, the arguer, the opponent etc.). Nor is invalidity a sufficient criterion. Any “good” or “cogent” (p. 178) inductive argument would suffice to show this.
So what seems to be at the basis of (1) is a certain conception of formal (i.e. deductive) validity. This, it has been shown, simply will not do – given the properties arguments have in natural language, tying them to deductivism is inadequate. Nor does a semantic approach to validity (i.e. true premises, necessarily true conclusion) improve things: strong inductive arguments would still be unfairly rejected by such a criterion. Thus, what fallacy theory seems to work with – and what Massey seems to disregard – is an “Umbrella Validity” (p. 178), which is spelled out by Govier as:
“An argument is valid if its premises are properly connected to its conclusion and provide adequate reasons for it. It is invalid otherwise.” (p. 178)
The Umbrella Validity gives rise to pluralist accounts of fallaciousness and constitutes a necessary and sufficient criterion for identifying fallacies. Moreover, if we endorse this principle, we will lose the aspiration to have a formally adequate method behind every decision of validity – a non-formal, but theoretically adequate, method will do just as well in some cases. We should grant philosophers and laymen interpreting argumentation their intuitive appraisal of natural arguments even in those cases where formal (deductive) validity is not at stake. Such “intuitions” will not be formally adequate, yes. But this does not mean that they will not be adequate whatsoever.
Going back to Govier’s introductory passage (which I chose to skip, because it will make more sense now), we can now see why, according to the author, one should rethink the standard definition of fallacy from this pluralist (formal + nonformal) viewpoint. A fallacy, thus construed, is “a mistake in reasoning”. This is the genus. Any other differentia specifica should start from this. Deductive invalidity is nothing more than one (possible, not particularly far-reaching) differentia.
Instead of drawing a conclusion, let me clear the table and see what has just been eaten. Few would dare to criticize Massey’s asymmetry thesis, and clearly this is not what Govier responded to. Govier’s claim was that the formalist approach itself (so the approach of those holding both to (1) in her list and to a semantic & formal definition of validity) is inadequate for fallacy theory. Not that it is inadequate in and of itself. In other words, the reply in Govier’s “Reply to Massey” was not something like “you’re wrong” but something more like “we don’t share definitions [of the basic concepts: fallacy and validity]”. But what if we did? What if we replaced ‘invalidity’ with Govier’s ‘mistake in reasoning’? Would that work? In any case, I think this is what Govier should have responded to. Her answer defends the informal logicians’ approach against Massey’s claim, which happened to be about formal validity, not against any Massey-like charges.
 A free downloadable version of Joseph’s book can be found here.
 Not to be confused with the Victorian poet Gerald Massey.
 All reference here is to their reprinted versions in Hansen & Pinto (Eds., 1995). For a full reference, see (Massey, 1995) and (Govier, 1995).
 Or, in a more formal turn of phrase, the second premise alone entails the conclusion: ∀x(God.made(x)) → ∃x(God.made(x)) is a tautology (granted a nonempty domain). The fact that this translation is valid while ((p → q) & q) → p is not speaks of the deficiencies of propositional logic, not of the “possible invalidity” of ∀x(α(x)) → ∃x(α(x)). This is, in one of its instances, Massey’s “asymmetry thesis”.
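For readers who want to check this footnoted entailment mechanically, here is a sketch of my own in Lean 4 (the theorem name and hypothesis labels are mine, not Massey’s): the conclusion ∃x C(x) is derived from the second premise ∀x C(x) alone, so this first-order instance of affirming the consequent is valid in any nonempty domain.

```lean
-- Massey's God example in first-order form. `C x` stands for
-- "x was created by God". Note that the conditional premise _h₁ is
-- never used: the second premise alone entails the conclusion.
theorem massey_instance {α : Type} [Inhabited α] (C : α → Prop)
    (_h₁ : (∃ x, C x) → (∀ x, C x)) (h₂ : ∀ x, C x) : ∃ x, C x :=
  ⟨default, h₂ default⟩
```

The `[Inhabited α]` assumption makes the nonempty-domain requirement explicit; in a language that allowed empty domains, even this instance would fail.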
 The point of formulating the converse of (2) into (2’) is to strengthen it into an equivalence. The method of translation is thus based on the stronger (2’’) which could be read: Valid argument form ↔ Valid argument. Surely, the more dubious side of it is the right to left reading, which corresponds to (2’).
 The method in question is the classic: (a) I know the premises are true & (b) I know the conclusion is false, therefore (c) I know the reasoning is invalid. This is hardly proving anything, and in this sense it is “independent of logical theory” (p. 164). My (a) and (b) do not tell me anything about the argument form.
 “Classic” here should mean something like (1) traditionally recognized and studied, (2) “enjoying” a certain degree of repeatability.
 At this point, calling the umbrella definition that of “validity” is simply a matter of reflex. Depending on how you interpret the “properly” in “properly connected”, the principle could be applied to anything from formal dialectical rules to lexical standards of ambiguity. I will follow Govier’s terminology, but validity in this latter case should be taken cum grano salis.
I will gather here some notes from three of G. Carlson’s earlier works on generic NPs and generic sentences (terms to be explained): “A Unified Analysis of the English Bare Plural” (1977), the first four chapters of Reference to Kinds in English (1980, pp. 6-135) and “Generic Terms and Generic Sentences” (1982). These papers are not in any way “final” with respect to the phenomena they treat, but they have been highly influential. I should be able to unfold at least the main contentions.
So what’s the problem?
For sentences like
(0) The man ate a sandwich
(0.1) All men ate their sandwiches,
we have a clear syntactical description of the form Det + N + VP. This means that the Determiner – the definite article in case of (0), a quantificational determiner in case of (0.1) – combines with the noun, thus forming a noun phrase (NP), which combines with the verb phrase (two-place predicates) to form a complete sentence. We know what (0) and (0.1) would mean in model-theoretic semantics, because the sentence meaning gives us the truth conditions by way of being a well-formed formula.
So far, so good. But consider
(1) Dogs bark
(1.1) Lions roar
(1.2) John smokes.
The NP is not (at least not overtly) formed by adding a determiner to the noun – it’s a complete NP; other complete NPs of this sort are mass nouns (e.g. milk) and in some instances abstract names (e.g. redness, goodness). These, as Carlson notes from the very beginning, have something of a reputation for defying consistent semantic analysis. Their meaning seems to change with the context of use. So we can say, at first sight, that:
(1) Dogs bark ≅ (1a) Most dogs bark
(2) Dogs are mammals ≅ (2a) All dogs are mammals
(3) Dogs are sitting on my lawn ≅ (3a) Some dogs are sitting on my lawn
(4) Dogs are common ≅ (4a) ? dogs are common
From this viewpoint, we would have to posit either a hidden, multiply ambiguous quantifier or a multiplicity of quantifiers in order to account for the differences between statements (1a) to (4a). This is certainly the easy way out. Easy or not, however, it runs into systematic problems. For instance, it is traditionally considered that quantificational NPs do not denote – if we read them as quantifying over variables, not individuals. This would mean that, covert quantifier revealed, bare plurals would be such non-denoting expressions. Unfortunately, this does not fit with many semantic and syntactic environments in which we find bare plurals (Carlson, 1980, pp. 43-55; 1982, p. 151) and their overall un-quantifier-like behavior. I’ll take two of the more palpable examples.
First, the quantificational approach would give incorrect truth-conditions for sentences like (4) – which cannot be expressed in terms of variable-binding (nor can, e.g., (4b) Whiskey bottles come in different shapes and sizes). Predicates like numerous, be in short supply, be in many shapes and sizes, common, widespread, extinct, rare (as Quine, Putnam and others repeatedly observed) etc. seem to select the bare plural and “refuse” the attribution of the predicate to individuals. So, for instance, I cannot go from (4) to saying that (4c) Lassie [or any other dog] is common, just as (4d) That bottle on the table comes in different shapes and sizes cannot be true of any one actual bottle.
Second, some syntactical aspects point to bare plurals as being referring (thus non-quantificational) expressions. The most salient one in this category is, I think, their failure to interact with modal operators. Denoting expressions, e.g. proper names, do not participate in scope ambiguities, whereas quantifying expressions do. Consider the examples from Carlson (1982, p. 150):
(5) Many people like several/all/a few dogs (scopally ambiguous)
A similar case can be made about opacity-inducing operators:
(6) Minnie wishes to talk to a few/twelve/most psychiatrists (scopally ambiguous)
I will not go about exemplifying every one of Carlson’s arguments against the idea that an ambiguous null-determiner (or quantifier) is in place, but let me enumerate: (a) they can serve as antecedents of pronouns, (b) participate in ‘so-called’ constructions, (c) can always be replaced by ‘this-kind-of-…’ expressions, (d) they participate in de dicto – de re ambiguities, (e) they are “quite natural as vocatives” (1980, p. 60) etc.
An interesting point is worth mentioning before briefly going into Carlson’s answer to all this, namely the way he construes the origins of the misguided previous approaches. In a more historical turn of phrase, he observes how the four basic Aristotelian sentence types (the existential Some and the universal All, plus their contradictories) have survived in modern symbolic logic in the form of a penchant to interpret all logical subjects and predicates in terms of the existential ∃ and the universal ∀. Since formal languages have come to treat (10)-(13) as all falling, despite their differences, under the universal operator, it was “a small leap of faith, and not an unreasonable one at first sight” that (14) “does not represent a gap in the paradigm” (p. 26). It too, along with all the others, will be formalized (in first-order languages) as (15):
(10) Any dog is a mammal.
(11) All dogs are mammals.
(12) Each dog is a mammal.
(13) Every dog is a mammal.
(14) Dogs are mammals
(15) ∀x[Dog(x) → Mammal(x)]
Kinds as individuals
Carlson’s answer to all this is a particularly elegant one. However, due to a certain degree of formalism which I can neither expand upon nor presume here, I’ll only present the basics. The varying truth conditions of (1)-(4) should not be dragged into the semantic representation of those statements. Similar variations for sentences like (16) John walks to school vs. (17) John celebrates Hanukkah, although epistemologically significant, are not to be brought into the way we translate the sentences into a formal language. And the bare plural is just that – a John. An individual. This is not a wavering metaphysical stipulation, Carlson holds; it is the kind of postulate language imposes on one’s representation (1982, p. 151), due to the arguments sketchily presented above.
We must therefore distinguish between two types of individuals: OBJECTS and KINDS. John, in this sense, denotes an OBJECT; dogs (just as ‘this kind of animal’) denotes a KIND. Both the person John and Canis lupus familiaris are to be seen as individuals, which means they are both basic entities in our semantic representations, which means they are unanalyzable wholes. The way we do however analyze their “presence” in language is by sometimes referring not to them per se, but to STAGES of them. Following Quine, Carlson construes stages as being “a spatially and temporally bounded manifestation of something” (1980, p. 68). This should not scare any realists. It does not mean that there are out there such things as KINDS, no more than there are out there such things as STAGES – or, according to some views of individuation, not more than there are out there such things as OBJECTS. It simply means that we make this distinction in language and that by recognizing this distinction we are able to account for the “anomalies” found above.
In this new sense, although both John and dogs are individuals we can refer to, we also have such things as John-stages or dog-stages. John, in this sense, is whatever-it-is that ties a series of John-stages together, just as dogs is whatever-it-is that ties a series of dog-stages together. In this framework, the ambiguity between the ‘existential’ and ‘universal’ readings of bare plurals can be easily shown to be nothing more than the ambiguity between the ‘episodic’ and ‘habitual’ readings of sentences that do not contain bare plurals. So, just as
(16) Bill ran.
might be ambiguous between
(16a) Bill ran (every day when he was young) – a generic reading
and (16b) Bill ran (while they were watching) – an episodic reading,
we can now simply posit that the ‘episodic’ reading of the verb selects the existential reading of the bare plural. Therefore,
(17) Dogs are intelligent
cannot behave in the same manner because ‘to be intelligent’ does not have an episodic reading. Or, in other words, the VP ‘to be intelligent’ refers to the individual, not to any dog-stages. Notice that (16c) Bill is intelligent is likewise unambiguous between an episodic and a habitual reading. The whole ontology could be represented in the following way.
Two things should be clear from this. First, what the picture should make obvious is that objects are spatio-temporally bound; that is, the first ‘set’ of stages and the second ‘set’ of stages are separate. In the most neutral sense, one can speak of dogs now being present in Africa as well as in America but cannot do the same for John now being present in Africa as well as in America. The second point one should note is that, while kinds have two levels below, objects have only one. Let’s follow Carlson’s terminology and say that, if the relation between an individual and its stage is that of REALIZATION, then kinds have two realizations on two levels, while objects have only one realization on one level. That is why our first observation makes sense – the dog-stage from America and the dog-stage from Africa are stages of the same kind, but temporally bound stages of different objects.
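The two-level REALIZATION relation can be made concrete with a small sketch. This is my own illustration in Python, assuming nothing beyond the two observations above; the class names Kind, Object and Stage and the helper stages() are hypothetical labels, not Carlson’s notation:

```python
# Illustrative sketch only: the class names are mine, not Carlson's formalism.
from dataclasses import dataclass, field

@dataclass
class Stage:
    """A spatially and temporally bounded manifestation of an individual."""
    location: str
    time: str

@dataclass
class Object:
    """An ordinary individual (e.g. John, Lassie), realized by its stages."""
    name: str
    stages: list = field(default_factory=list)

@dataclass
class Kind:
    """A kind-level individual (e.g. dogs): realized by objects, which are
    in turn realized by stages -- two levels of realization."""
    name: str
    objects: list = field(default_factory=list)

    def stages(self):
        # A kind's stages are reached via the objects that realize it.
        return [s for o in self.objects for s in o.stages]

dogs = Kind("dogs")
lassie = Object("Lassie", [Stage("America", "now")])
rex = Object("Rex", [Stage("Africa", "now")])
dogs.objects += [lassie, rex]

# The kind 'dogs' is staged in Africa and in America at the same time...
assert {s.location for s in dogs.stages()} == {"America", "Africa"}
# ...but each object's stages remain spatio-temporally bound to that object.
assert all(len({s.location for s in o.stages}) == 1 for o in dogs.objects)
```

The two asserts mirror the two observations: the kind is realized in Africa and America at once through different objects, while no single object is.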
A lot needs to be said about how one goes about formalizing these issues. Although the framework itself is a bit difficult, if Carlson’s arguments so far are accepted, the translation should only be, well, a formality. At the basis of it, one will see the distinction between kind-referring and stage-referring predicates, that between (18) Dogs are intelligent and (19) Dogs are available.
 Turning to argumentation theory for a moment, I find it interesting that Hamblin (1970) chooses to quote Quine and Copi’s observations with respect to these “kind-predicates” (extinct, common, widespread etc.) when he discusses the fallacy of division (pp. 18-20). Hamblin is of course interested not so much in the semantic representation as in the logical properties of such sentences. In order for a proper transfer of predication to occur from the whole to its parts (or the other way around), he says, “we need to distinguish physical collections, like piles of sand, from functional collections, like football teams, and these in turn from conceptual collections, like the totality of butterflies.” (p. 21). I’d say the idea of ‘conceptual collection’ is very close to what we refer to here as Carlson’s kinds, since the example of invalid argument he borrows from Copi is: “American Indians are disappearing. That man is an American Indian. Therefore, that man is disappearing.” (p. 22).
 Really. Try it. And if they behave in all respects, it would be normal – following the train of thought – that it too, the ‘this-kind-of…’, has a hidden determiner. But if this is so, where should we put it? Where do you put the quantifier in “This kind of animal is raised in cages”? A negative answer also points towards a non-quantifier-like behavior of bare plurals.
 Consider also the following behavior of bare plurals in anaphoric processes (Carlson, 1980, pp. 24-25). If we distinguish the ‘existential reading’ of (3) Dogs are sitting on my lawn from the ‘generic reading’ of (1) Dogs bark (roughly corresponding with the distinction between ≅ some and ≅ most/all), then we should expect (7) to be like (8) in not changing the readings in case of pronominalization.
(7) Several critics [≅ some] left the movie, even though they [≅ some] had strong stomachs.
(8) Bill trapped eagles [≅ some] last night even though he knows they [≅ all] are on the verge of extinction.
This is not an isolated phenomenon and the reading can change not only from an existential to a universal one, but also the other way around. Consider:
(9) My brother thinks snakes [≅ all] are nasty creatures, but that hasn’t stopped me from having them [≅ some] as pets
Note that my ad hoc notational system “[≅ all] vs. [≅ some]” is just a matter of emphasizing the different interpretations of bare plurals and in fact goes counter to Carlson’s final analysis – which treats bare plurals as kinds. He himself, however, in order to illustrate the inadequacies of the quantificational approach, sometimes uses ϕG vs. ϕ∃, which stand for the generic reading and the existential reading respectively. The ϕ itself is what is eventually left aside.
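To give a rough idea of what is at stake (my own sketch, simplifying Carlson's actual machinery), the two readings can be written with a kind d (‘dogs’) and the realization relation R:

```latex
% Generic reading of (1) "Dogs bark": the property is predicated of the kind
% itself (Carlson derives this via a generic operator; details omitted):
\mathrm{bark}^{G}(d)
% Existential reading of (3) "Dogs are sitting on my lawn": existential
% quantification over stages realizing the kind:
\exists y\,[\,R(y,d) \wedge \mathrm{sit\_on\_lawn}(y)\,]
```

The point is that the bare plural contributes the same thing (the kind d) in both cases; the existential force comes from the stage-level predicate, not from a hidden quantifier in the NP.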
 There are other approaches to the problem. If the quantificational tradition can be traced back to Quine (1960), Lawler (1973), Montague (1970) and Copi (1967) [sic!], a different approach could be seen as stemming from accounts such as that of Dahl (1975), which interprets generic sentences as speaking about “law-like” regularities regarding a ‘normal’ domain of quantification. So for instance a non-barking dog will not falsify our (1) because he would not be, after a fashion, normal. This is not a very satisfying theoretical answer, because bare plurals can come in many different forms. Carlson stresses the cranky behavior of NPs like ‘abnormal lions’ under this view: what is the normal abnormal lion?
So yes: why study those who have been proven wrong? And I mean this not in an overarching, “why this?, why that?” sort of asking, but in the strong practical sense: why spend time reading (understanding, taking notes, remembering etc.) those whose ideas are in your present climate… obsolete. In fact, if we dig deeper into this question, I think we might rephrase it as: Why study philosophies which have been, in one way or another, rendered useless by other, more recent philosophies? In this format, one might see the question as only applying to ancient (or at least pre-modern) philosophies and theories – and to the study of those philosophies en tant que philosophies. [I’m using the words “philosophy” and “theory” carelessly]. Of course we could look at the writings of Aristotle from any number of views: we could check their grammar, lexicon, or literary merits. But en tant que philosophies, how can the Nicomachean Ethics apply to us, the YouTube & iPad generation?
The question, nonetheless, applies to any (and I think it’s possible here to speak of ANY-any) idea which is, in some way, out of date. Think of logical positivism. Among all the noisy, for the most part ideologically-driven ideas, some are good, some are bad. But qua philosophy, logical positivism has long been thwarted even by some of its main admirers. In this case, aside from the more profitable byproducts, why bother?
The answer that “you have to understand the supporter to understand the contender” is, again practically speaking, somewhat old-fashioned. With encyclopedias, the internet, and fast-search websites popping up everywhere, it might be gratifying, but it’s hardly necessary to read Out of Date Scholar X, cover to cover, in order to fully understand and appreciate Up to Date Detractor Y of Out of Date Scholar X. There is a risk of unfair straw men getting heavier and heavier by such practices – but I’d say that most of the time, the bluffs are called and writers know that. Access to sources, constantly improving since the invention of printing, works to prevent (or at least unveil in the aftermath) blatant dishonesties.
Speaking about Heraclitus, Bertrand Russell involuntarily gives his personal answer to our question. The passage might also be read as the motivation for undertaking the gigantic project of writing a history of philosophy. Biased by my interests as I might be, I find much sense in the idea that from an argumentative point of view, any sane, consistent set of arguments that compile (or not) into a philosophy, is a “readable” intellectual product.
In studying a philosopher, the right attitude is neither reverence nor contempt, but first a kind of hypothetical sympathy, until it is possible to know what it feels like to believe in his theories, and only then a revival of the critical attitude, which should resemble, as far as possible, the state of mind of a person abandoning opinions which he has hitherto held. Contempt interferes with the first part of this process, and reverence with the second. Two things are to be remembered: that a man whose opinions and theories are worth studying may be presumed to have had some intelligence, but that no man is likely to have arrived at complete and final truth on any subject whatever. When an intelligent man expresses a view which seems to us obviously absurd, we should not attempt to prove that it is somehow true, but we should try to understand how it ever came to seem true. This exercise of historical and psychological imagination at once enlarges the scope of our thinking, and helps us to realize how foolish many of our own cherished prejudices will seem to an age which has a different temper of mind. (Russell, 1946, p. 39)
Not to mention the ever-threatening risk that Out of Date Scholar X was right after all.
François Récanati’s (1989) “The Pragmatics of What is Said” (reprinted as Recanati, 1991) is arguably part of what some call Post- or Neo-Gricean pragmatics. Reasons for such a label will, I hope, become clear from the review. Before I begin, however, let me be honest for a few seconds and express my worries about the notion of semantic underspecification. I’ll use the word gingerly (avoid it when I can, that is) since I’m convinced I don’t understand its full significance. This is compounded by the fact that some authors (including Récanati) use it interchangeably with “semantic underdetermination” or plainly “underdetermination”. I can imagine it has something to do with a proposition being incomplete in the sense of not being (a) a truth-evaluable propositional form and (or?) (b) capable of undergoing logical operations (Carston, 1991, p. 49). So, for instance, “You are German” is such a sentence, where the pronoun acts as a variable whose value must be ascribed contextually (indeed, pragmatically), and until such ascriptions are made, “You are German” cannot be evaluated. But do all semantic theories calculate truth-conditions in the same way? They all evaluate truth-conditions, admittedly, but is there really a consensus about what counts (or not) as an evaluation of that “propositional form”? It seems so. That is, in any case, what I’ve gathered so far. Sometimes the notion is referred to as the Atlas-Kempson thesis and is opposed to the Davidson-Harman thesis (use these as conversation starters at the next party!) but this is not of much help. I’ll go on with this caveat in the back of my head.
The Gricean Picture
Récanati starts by delineating the classical Gricean approach (see this post), according to which the distinction between what is said (WIS, henceforth) and what is implicated (WII, henceforth) corresponds roughly with the distinction between conventional meaning and conversational implicatures. WIS is thus determined linguistically by the meaning of the sentence (SM, henceforth) and some other, still conventional but non-truth conditional elements. Indeed Grice’s exact description of WIS in “Logic and Conversation” (Grice, 1975) is as being “closely related” to SM. What Récanati sets about tackling is just this neat distinction between WIS and WII, between conventionally vs. pragmatically determined aspects of meaning. The gap, he says, between SM and WIS is bigger than the classic picture might lead one to think. Describing the Gricean approach, Robyn Carston (1991, p. 39) points out how, apart from the linguistically determined content of WIS, the classic approach is ready to “let in just whatever is necessary […] to bring the representation up to a complete propositional form” (i.e. reference assignment, disambiguation, supplying empty grammatical categories, e.g. two-place predicates, etc. see p. 41ff).
Pragmatic, though part of WIS?
Some aspects of WIS, as we have seen, have been traditionally thought of as pragmatic (or pragmatically-driven), although the exact nature of such influence was not considered problematic. According to Kaplan (1978), a sentence conventionally determines the aspects of the proposition if it (the sentence) functions as a kind of rule from context to content, instantiating as it were the SM into WIS. “Neat and attractive though it is,” Récanati adds, “this view of the matter is quite unrealistic” (p. 99). As is the case with most of these pragmatic propositional ‘aspects’, the SM is not sufficient to determine (“conventionally”) what needs to be added to form WIS. In these cases, what is indeed added, according to a principle I’ll review below, is considered ‘free’ – which stands for something like ‘free from the rule provided by SM’ (assuming such a rule is always in place) – and is not added because of a semantic need for propositional form.
To offer what seems to be the canonical example (though not without its problems, see pp. 101-103), when someone utters (1) I’ve had breakfast, the literal meaning seems to be something like (1a) [The speaker] has eaten breakfast at least once in his life. The (1a) version, however, is not quite WIS – though it is a complete propositional form. If it were, it would be strange that (1) can be used to implicate “I’m not hungry” while (1a) can be used to implicate the exact opposite; strange because, semantically, (1) and (1a) are equivalent – assuming reference assignment to [The speaker]. A closer version of WIS seems to be (1b) I’ve had breakfast this morning. The difference between (1) and (1b) is given neither by semantic factors, nor by pragmatic factors (in the sense Grice thought of the “pragmatic factor” which implicature is), but by different pragmatic factors – the pragmatics of what is said.
But now we have two pragmatic aspects – how do we distinguish them? To go back to Carston’s article again, one might wonder “What is to stop all elements of communicated meaning being interpreted as part of the explicature [i.e. WIS]?” (1991, p. 42). According to the most basic principle of Gricean pragmatics, the minimalist principle, the only thing one has to check is that WIS is a complete propositional form. As we have seen, this leads to problems. Not only is the gap bigger than previously thought (as Carston had already argued), but Récanati holds that there might be a touch of inconsistency at the heart of the minimalist principle itself. At least in its stronger, biconditional formulation, the idea that “A pragmatically determined aspect is part of WIS iff it is needed to complete an incomplete proposition” presupposes that one already knows the complete proposition. The possibility of such knowledge, while WIS is still unavailable, is doubtful.
With the minimalist principle out of the list (the list of criteria that might be used to distinguish WIS from WII, that is), Robyn Carston proposes the functional independence principle. According to this principle, WIS (explicatures, in Carston’s terminology) must be independent of WII in the sense that the latter must not entail the former. In our example, the implicature of (1b) “I had breakfast this morning” entails the explicature of (1a) “I had breakfast at least once in my lifetime” – which, she argues, is psychologically wrong. In Récanati’s formulation, the functional independence principle ultimately tells us that “When an alleged implicature does not meet this condition [1b, in our case], it must be considered as part of what is said” (p. 109).
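Schematically (my own paraphrase, not a formula from either paper), the offending configuration can be put as follows, with ⊨ for entailment:

```latex
% Classical analysis of "I've had breakfast":
% explicature (what is said):
\text{(1a)}\quad \text{I have had breakfast at least once in my life}
% alleged implicature:
\text{(1b)}\quad \text{I had breakfast this morning}
% But the implicature entails the explicature, violating functional independence:
\text{(1b)} \models \text{(1a)}
```

On Carston's view an implicature that entails the explicature does no independent work, so the classical division of labor for this example must be wrong.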
This principle is useful in pointing out that many aspects of meaning previously classified as conversational implicatures (generalized conversational implicatures, in particular, such as those allegedly triggered by (2) John has three children, forming (2b) John has exactly three children) are better seen as part of WIS.
Nonetheless, Récanati argues that there is one problem which should in the end lead to the rejection of Carston’s principle. The “independence” Carston has in mind is not entirely functional – in the psychological sense – but indeed logical – WII must not entail WIS – and “any formal principle of this sort is mistaken, and cannot but make wrong predictions” (p. 110). Récanati’s counterexample is the following dialogue, which I must quote in full:
A: Was there anybody rich at the party, who might be asked to pay for the damages?
So here we have a case where the implicature of A’s last reply (“Somebody was there”) is that “Jim was there” – which explains the “So, …” – and this implicature entails what is said, namely, that somebody was there. To this, Récanati adds: “who would accept the extraordinary conclusion, imposed by Carston’s Independence Principle, that A, having Jim in mind and uttering “Somebody was there,” has actually said that Jim was there?”
If this counterexample works properly, then the independence principle fails to account for some uses of language, which means that as a theoretical explanation it should be dropped. Récanati’s next step is to propose another principle, called the availability principle. I will review the main points of this idea here, but postpone more detailed accounts until the review of Récanati’s Literal Meaning (Récanati, 2004). The availability principle proposes a distinction between WIS & WII “merely” on the basis of our intuitions. “This, I believe, is what most theorists have always done” (p. 107). The speaker himself should, in principle, recognize WIS as what has actually been said; he would, however, be surprised to learn that the explicature of “The door is closed” is the Russellian “There is one and only one door in the world and it is closed” (to give the simpler formulation) – which is precisely why such analyses fail the test. Previous analyses of (2) “John has three children” divided into the WIS of (2a) “John has at least three children” and the implicature of (2b) “John has no more than three children” are easily rejected based on the availability principle – (2a) being too uninformative.
Now two adjustments go along with this new principle. First, the Gricean picture above must be rearranged: WIC, in this view, is no longer a higher-order level of meaning, with WIS & WII lying below, constituting it. According to Récanati, “what is communicated consists of what is said and what is implicated, instead of being over and above what is said and what is implicated”. WIC, in this sense, is merely a label or a name for WIS+WII. The second adjustment is a deeper one and it is not addressed by Récanati: namely, the nature of the principle. It seems to me, at least so far, that the change from the minimalist principle to the independence principle is one from one analytical criterion to another – the latter slightly more capable of accounting for linguistic phenomena. In the leap from the first two to the availability principle, the idea underlying the principle is no longer an analytic one, but an empirical one. This, I think, is obvious from the fact that, while the first two spoke of “units of discourse”, the latter speaks of the analyst’s or the language user’s intuitions. These are, at least in a sense, real – or, in any case, empirically refutable. If I am right about this, then simply supplying units of discourse that are not covered by the first two, but are covered by the latter – although necessary – will not be sufficient.
 Récanati does not use these abbreviations.
 The views expressed in that article are, I believe, very close though not precisely the same as those of Récanati. Just for the sake of labeling, I’d say Carston’s approach, too, is to be thought of as Post-Gricean – though not Contextualist in the same way Récanati is.
The term genericity, despite its academic-sounding suffix, should at this point be taken to mean nothing more than this: general statements. Leaving behind some philosophical complications, we can even intuitively place general (or “generic”) statements somewhere between universal and singular ones. To give a few examples:
(1) Dogs bark
(2) A lion has a mane
(3) The potato was brought to Europe in the fifteenth century
(4) John walks to school
What distinguishes these sentences (sometimes called “habitual”, “characterizing”, “dispositional” or “gnomic” sentences) is that they do not make reference to a single, bounded instance or occurrence: to a bounded number of dogs or to a bounded number of barks in (1), a “specific” lion and some specific manes in (2), to a bounded number of potatoes in (3), to a bounded number of occurrences of John walking to school in (4) etc. The term bound, for now, should be taken just as pre-theoretically as the term genericity (though it should be distinguished from the concept of binding, as used for variable binding).
What is often stressed with respect to genericity (Carlson, 1980; Declerck, 1986; Krifka et al., 1995) is that at least two, not mutually exclusive classes might be distinguished: (A) those involving kind-referring NP’s (bare plurals such as “dogs”, indefinite singulars such as “a lion” etc.) and (B) characterizing sentences (that do not have kind-referring NP’s as a logical subject, as in (4) above). What ties the two classes together seems to be something like (again, pre-theoretically) a process of abstraction: with kind-referring NP’s we abstract from particular objects, with characterizing sentences we abstract from particular facts/events. It is obvious now why A and B need not be mutually exclusive – we can easily abstract in both ways in the same sentence. (1) is in fact such an example.
Another issue often remarked upon is that these sentences have, as Carlson put it, “notoriously erratic truth-conditions” (Carlson, 1977, p. 441). Finding one dog that does not bark does not seem to falsify (1), just as knowing that female or very young lions don’t have manes does not seem to falsify (2). Similarly, we feel that John not walking to school on Sunday has little to do with the truth conditions of (4). To be sure, even now the word falsify must be taken in a rather simple, intuitive way. Nonetheless, semanticists are anxious to stress that we should not leap from these remarks either to readily positing semantic (or syntactic) ambiguity (e.g. a covert quantifier that changes mysteriously), or to a lack of truth conditions. Krifka et al. (1995, p. 3, but see Carlson, 1995, for a full discussion) made this commitment in bold:
“Much of our knowledge of the world, and many of our beliefs about the world are couched in terms of characterizing sentences. Such sentences, we take it, are either true or false – they are not ‘indeterminate’, or ‘figurative’ or ‘metaphorical’ or ‘sloppy talk’. After all, we certainly would want to count the classic Snow is white as literally having a truth value”
However, ambiguity does arise in some instances. If we say (5) Dogs are common we are clearly (indeed, inevitably) referring to ‘the kind dogs’, but our previous example (1) is ambiguous between a “generic” and an “episodic” (“existential”) reading. To make this more precise, consider the difference between:
(1a) Dogs bark
(1b) Dogs bark while he’s watching
It is somehow intuitive that (1a) refers generically to a kind, while (1b) is used to say something about a subset of that kind. Note that only (1b) is paraphrasable by: (1c) Some dogs bark while he’s watching. Generic sentences are underspecified for a number of different reasons (nine such “underspecifications” were enumerated in Declerck, 1986), which led some to posit, instead of semantic ambiguity, the influence of pragmatic (i.e. co- and contextual) factors.
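For a rough idea of how such readings are often represented, here is a sketch using the dyadic GEN operator of Krifka et al. (1995) – restrictor before the semicolon, matrix after; the exact choice of variables and predicates is my own simplification:

```latex
% Generic reading (1a) "Dogs bark": quasi-universal quantification
% over (normal) dogs, tolerant of exceptions:
\mathrm{GEN}[x;]\,(\,\mathrm{dog}(x)\,;\ \mathrm{bark}(x)\,)
% Existential reading (1b) "Dogs bark while he's watching":
% plain existential quantification over dogs:
\exists x\,[\,\mathrm{dog}(x) \wedge \mathrm{bark\_while\_watching}(x)\,]
```

The GEN operator is precisely what allows (1a) to survive the odd silent dog, whereas (1b) is verified by a single barking one.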
Last thing I believe it’s safe to note here as a preliminary is that the subject of genericity made a great comeback in the seventies, due mainly to Lawler’s articles “Generic to a fault” (1972) and “Tracking the generic toad” (1973), Dahl’s “On Generics” (1975) and Carlson’s “A Unified Analysis of the English Bare Plural” (1977) and Reference to Kinds in English (1980). Most of the views expressed in these works have been quite extensively amended over time (see Abbott, 2010, Chap. 7; and Pelletier, 2010 for the most recent studies), but they still remain important works. This brings us to our last preliminary question: where, if anywhere, is genericity as a scientific subject to be located? There is no one place. But the most salient frameworks are those offered by semantics, logic & pragmatics.
Gregory Carlson’s (1977) influential dissertation on the semantics of generic terms (republished as Carlson, 1980) starts with the following foreword:
In the spring of 1976, Terry Parsons and Barbara Partee taught a course on Montague grammar, which I attended. On the second to the final day of the class, Terry went around the room asking the students if there were any questions at all that remained unanswered, and promised to answer them on the last day of the class. I asked if he really meant ANY question at all, which he emphatically said that he meant. As I had encountered a few questions in my lifetime that remained at least partially unresolved, I decided to ask one of them. What is life? What is the meaning of life? After all, Barbara and Terry had promised to provide answers to any question at all.
On the final day of the class Barbara wore her Montague grammar T-shirt, and she and Terry busied themselves answering our questions. At long last, they came to my question. I anticipated a protracted and involved answer, but their reply was crisp and succinct. First Barbara, chalk in hand, showed me the meaning of life.
Terry then stepped up and showed me what life really is.
As we were asked to show on a homework assignment earlier in the year, this is equivalent to: lifeˈ.
Leaving me astounded that I had been living in such darkness for all these years, the class then turned to the much stickier problem of pronouns.
Semanticists and their dry humor … Anyway, Carlson’s dissertation as well as other papers on genericity can be found here: http://www.ling.rochester.edu/people/carlson/carlson.html
If you’re wondering what a “Montague grammar T-shirt” is, here’s a footnote from (Partee, 2004, p. 6):
The unicorn is the “mascot” of Montague Grammar because of one of Montague’s key examples, John seeks a unicorn. Bob Rodman put a unicorn on the cover of the UCLA volume (Rodman, 1972); I did the same with Partee (1976), and unicorn T-shirts proliferated at Montague Grammar workshops. Clever unicorn-pictures head the chapters of Jansen (1983)
There. Now you know. And knowing is half the battle. You can spend the next couple of seconds staring at this bizarre painting.
Think about it. Richard Montague (1930-1971) was a young American logician and mathematician. H. P. Grice (1913-1988) was a somewhat older British philosopher or, we might say, ordinary language philosopher. Yet more or less at the same time they produced these strikingly similar paragraphs. And it wasn’t that formalists, by then, had already failed. Not at all. I think predicate logic (with all its later “add-ons”) was, at that time, successful if not fashionable. The general mood was to try to improve, not criticise, its assumed potential. Yet these two fragments seem to have been pulled out of the same brain:
There is in my opinion no important theoretical difference between natural languages and the artificial languages of logicians; indeed, I consider it possible to comprehend the syntax and semantics of both kinds of languages within a single natural and mathematically precise theory. (Montague, 1974, p. 222)
I wish rather to maintain that the common assumption […] that the divergences do in fact exist is (broadly speaking) a common mistake, and that the mistake arises from an inadequate attention to the nature and importance of the conditions governing conversation. (Grice, 1975, p. 42)
Of course, Grice and Montague took very different, if not downright opposite, paths – one could sense that even from the few sentences above. But talk about “ideas floating in the air”!