 
16:40 

23 December


Let it be a test for the future!


 


 




 



Source: 23 December

15:59 

The conditions for satisfying the latter are patently specifiable without reference to the former, viz. by enumerating the shapes, colours, functions, and the like that doorknobs typically have.


 


It's actually sort of remarkable that all of this is so. Pace Chapter 5, concepts really ought to be stereotypes. Not only because there's so much evidence that having a concept and having its stereotype are reliably closely correlated (and what better explanation of reliable close correlation could there be than identity?) but also because it is, as previously noted, generally stereotypic examples of X-ness that one learns X from. Whereas, what you'd expect people reliably to learn from stereotypic examples of X isn't the concept X but the X stereotype.84 A stereotypic X is always a better instance of the X stereotype than it is of X; that is a truism.85

How much such experience? And under what conditions of acquisition? I assume that there are (lots of) empirical parameters that a formulation of the laws of concept acquisition would have to fill in. Doing so would be the proprietary goal of a serious psychology of cognitive development. Which, to quote a poet, “in our case we have not got”.


Interesting Digression


The classic example of this sort of worry is the puzzle in psycholinguistics about ‘Motherese’. It appears that mothers go out of their way to talk to children in stereotypic sentences of their native language; in the case of English, relatively short sentences with NVN structure (and/or Agent Action Object structure; see Chapter 3). The child is thereby provided with a good sample of stereotypic English sentences, from which, however, he extracts not (anyhow, not only) the concept STEREOTYPIC ENGLISH SENTENCE, but the concept ENGLISH SENTENCE TOUT COURT. But why on Earth does he do that? Why doesn't he instead come to believe that the grammar of English is S → NVN, or some fairly simple elaboration thereof, taking such apparent counter-examples as he may encounter as not well-formed? Remember, on the one hand, that Mother is following a strategy of screening him from utterances of unstereotypic sentences; and, on the other hand, that he'll hear lots of counter-examples to whatever grammar he tries out, since people say lots of ungrammatical things. I think the answer must be that it's a law about our kinds of minds that they are set up to make inductions from samples consisting largely of stereotypic English sentences to the concept ENGLISH SENTENCE (viz. the concept sentences satisfy in virtue of being well-formed relative to the grammar of English) and not from samples consisting largely of stereotypic English sentences to the concept STEREOTYPIC ENGLISH SENTENCE (viz. the concept sentences satisfy in virtue of being NVN).
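The underdetermination at issue can be put in miniature. The sketch below is a toy, not anything the text commits to: the two 'grammars' are invented stand-ins, and the point is only that a Motherese-style sample fails to decide between them, so the projection to the broader hypothesis must come from constraints the learner brings to the task rather than from the data.

# A toy sketch (invented stand-ins, not a model of English): two hypotheses
# that both fit a "Motherese" sample of short NVN sentences, yet diverge on
# sentences the child never hears.

motherese_sample = [
    ("dog", "chases", "cat"),
    ("mommy", "holds", "baby"),
    ("girl", "kicks", "ball"),
]

def stereotypic_english(sentence):
    # Hypothesis A: the grammar is just S -> NVN.
    return len(sentence) == 3

def english_toy(sentence):
    # Hypothesis B: a crude stand-in for the full grammar of English; it
    # accepts NVN but also longer, non-stereotypic forms.
    return len(sentence) >= 3

# Both hypotheses cover every sentence in the sample ...
assert all(stereotypic_english(s) and english_toy(s) for s in motherese_sample)

# ... but they disagree about unheard, non-stereotypic sentences.
unheard = ("dog", "that", "mommy", "likes", "chases", "cat")
print(stereotypic_english(unheard), english_toy(unheard))   # False True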


In short, I do think there's good reason for cognitive scientists to be unhappy about the current status of theorizing about stereotypes. The kinds of worries about compositionality that Chapter 5 reviewed show that the relation a stereotype bears to the corresponding concept can't be constitutive. The standard alternative proposal is that it is simply heuristic; e.g. that stereotypes are databases for fast recognition procedures. But this seems not to account for the ubiquity and robustness of stereotype phenomena; and, anyhow, it begs the sort of question that we just discussed: why is it the concept X rather than the concept STEREOTYPIC X that one normally gets from experience with stereotypic Xs? (Mutatis mutandis, if the way perception works is that you subsume things under DOORKNOB by seeing that they are similar to stereotypic doorknobs, why is it that you generally see a doorknob as a doorknob, and not as something that satisfies the doorknob stereotype?) If our minds are, in effect, functions from stereotypes to concepts, that is a fact about us. Indeed, it is a very deep fact about us. My point in going on about this is to emphasize the untriviality of the consideration that we typically get a concept from instances that exemplify its stereotype.


That a concept has the stereotype that it does is never truistic; and that a stereotype belongs to the concept that it does is never truistic either. In particular, since the relation between a concept and its stereotype is always contingent, no circularity arises from defining ‘the concept X’ by reference to ‘the stereotype of the concept X’.


15:59 


So, suppose that the prototype for NURSE includes the feature female. Pace Smith and Osherson's kind of proposal, you can't derive the prototype for MALE NURSE just by replacing female with male; all sorts of other things have to change too. This is true even though the concept MALE NURSE is ‘intersective’; i.e. even though the set of male nurses is the overlap of the set of males with the set of nurses (just as the set of pet fish is the overlap of the set of pets with the set of fish). I want to stress this point because prototype theorists, in their desperation, are sometimes driven to suggest that MALE NURSE, PET FISH, and the like aren't compositional after all, but it's all right that they aren't, since they are idioms. But surely, surely, not. What could be stronger evidence against PET FISH being an idiom or for its being compositional than that it entails PET and FISH and that PET, FISH entails it?

 


It's perhaps worth mentioning the most recent attempt to salvage the compositionality of prototypes from pet fish, male nurses, striped apples, and the like (Kamp and Partee 1995). The idea goes like this: maybe good examples of striped apples aren't good examples of striped things tout court (compare zebras). But, plausibly, a prototypic example of a striped apple would ipso facto be as good an example of something striped as an apple can be. That is a way of saying that the relevant comparison class for judging the typicality of a sample of apple stripes is not the stripes on things at large but rather the stripes on other apples; it's these that typical apple stripes are typical of. In effect, then, what you need to do to predict whether a certain example of apple stripes is a good example of apple stripes, is to “recalibrate” STRIPES to apples.
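To make the proposal concrete, here is a minimal sketch of what recalibrating STRIPED to apples might amount to. The numbers and the similarity measure are invented purely for illustration; and, as the next paragraph argues, the reference-class statistics the sketch relies on come from the world rather than from the constituent concepts.

# A minimal sketch (toy numbers) of recalibration: the typicality of a
# stripe pattern is judged against the stripes found on the relevant
# reference class, not against striped things at large.

from statistics import mean, pstdev

stripe_widths_mm = {
    "zebras": [40.0, 45.0, 50.0, 55.0],   # hypothetical observations
    "apples": [2.0, 3.0, 3.0, 4.0],
}

def typicality(width, reference_class):
    # Closeness of a stripe width to the reference class's mean width,
    # scaled by that class's spread (1.0 = maximally typical).
    samples = stripe_widths_mm[reference_class]
    spread = pstdev(samples) or 1.0
    return 1.0 / (1.0 + abs(width - mean(samples)) / spread)

# The same 3 mm stripes are highly typical relative to apples but wildly
# atypical relative to a class whose exemplars are zebras.
print(typicality(3.0, "apples"))   # 1.0
print(typicality(3.0, "zebras"))   # roughly 0.1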


A fair amount of algebra has recently been thrown at the problem of how, given the appropriate information about a reference set, one might calculate the typicality of one of its members (for discussion, see Kamp and Partee 1995; Osherson and Smith 1996). But, as far as I can see, the undertaking is pointless. For one thing, it bears emphasis that the appropriate information for recalibrating a complex concept comes from the world, not from the content of its constituents. If it happens that they paint fire engines in funny shades of red, then typical fire engine red won't be typical red. To decide whether the colour of a certain engine is typical, you'd therefore need to recalibrate RED to FIRE ENGINE; and to do that, you'd need to know the facts about what shades of red fire engines are painted. Nothing about the concepts RED or FIRE ENGINE, per se, could tell you this; so nothing about these concepts, per se, could predict the typicality of a given sample of fire-engine red. In this sense, “recalibrated” compositionality, even if we knew how to compute it, wouldn't really be compositionality. Compositionality is the derivation of the content of a complex concept just from its structure and the content of its constituents; that's why compositionality explains productivity and systematicity.


Still worse, if possible: identifying the relevant reference set for a complex concept itself depends on a prior grasp of its compositional structure. In the case of STRIPED APPLE, for example, the reference set for the recalibration of STRIPE is the striped apples. How do we know that? Because we know that what STRIPED APPLE applies to is the intersection of the striped things and the apple things. And how do we know that? Because we know the compositional semantics of STRIPED APPLE. Computing typicality for a complex concept by “recalibrating” its constituents thus presupposes semantic compositionality; it presupposes that we already know how the content of the concept depends on the content of the concept's constituents. So, recalibration couldn't be what makes concepts compositional, so it couldn't be what makes them systematic and productive. So what is recalibration for? Search me.


By the way, these pet fish sorts of arguments ramify in ways that may not be immediately apparent; compositionality is a sharp sword and cutteth many knots.59 For example, it's very popular in philosophical circles (it's the last gasp of Empiricist semantics) to suppose that there are such things as ‘recognitional concepts’; RED and SQUARE, for example, and likewise, I suppose, DOG and TREE, and many, many others.


15:59 


cognitivist according to this criterion, and wouldn't be even if (by accident) the concept DOORKNOB happened to be triggered by doorknobs.) Well, by this criterion, my story isn't cognitivist either. My story says that what doorknobs have in common qua doorknobs is being the kind of thing that our kind of minds (do or would) lock to from experience with instances of the doorknob stereotype. (Cf. to be red just is to have that property that minds like ours (do or would) lock to in virtue of experiences of typical instances of redness.) Why isn't that OK?82


If you put that account of the metaphysics of doorknobhood together with the metaphysical account of concept possession that informational semantics proposes—having a concept is something like “resonating to” the property that the concept expresses—then you get: being a doorknob is having that property that minds like ours come to resonate to in consequence of relevant experience with stereotypic doorknobs. That, and not being learned inductively, is what explains the content relation between DOORKNOB and the kinds of experience that typically mediates its acquisition. It also explains how doorknobhood could seem to be undefinable and unanalysable without being metaphysically ultimate. And it also explains how DOORKNOB could be both psychologically primitive and not innate, the Standard Argument to the contrary notwithstanding.


Several points in a spirit of expatiation:


The basic idea is that what makes something a doorknob is just: being the kind of thing from experience with which our kind of mind readily acquires the concept DOORKNOB. And, conversely, what makes something the concept DOORKNOB is just: expressing the property that our kinds of minds lock to from experience with good examples of instantiated doorknobhood. But this way of putting the suggestion is too weak since experience with stereotypic doorknobs might cause one to lock to any of a whole lot of properties (or to none), depending on what else is going on at the time. (In some contexts it might cause one to lock to the property belongs to Jones.) Whereas, what I want to say is that doorknobhood is the property that one gets locked to when experience with typical doorknobs causes the locking and does so in virtue of the properties they have qua typical doorknobs. We have the kinds of minds that often acquire the concept X from experiences whose intentional objects are properties belonging to the X-stereotype.

Modal footnote (NB): Here as elsewhere through the present discussion, ‘minds like ours’ and ‘the (stereo)typical properties of doorknobs’ are to be read rigidly, viz. as denoting the properties that instances of stereotypic doorknobs and typical minds have in this world. That the typical properties of minds and doorknobs are what they are is meant to be contingent.


Notice that this is not a truism, and that it's not circular; it's contingently true if it's true at all. What makes it contingent is that being a doorknob is neither necessary nor sufficient for something to have the stereotypic doorknob properties (not even in ‘normal circumstances’ in any sense of “normal circumstances” I can think of that doesn't beg the question). Stereotype is a statistical notion. The only theoretically interesting connection between being a doorknob and satisfying the doorknob stereotype is that, contingently, things that do either often do both.


In fact, since the relation between instantiating the doorknob stereotype and being a doorknob is patently contingent, you might want to buy into the present account of DOORKNOB even if you don't like the Lockean story about RED. The classical problem with the latter is that it takes for granted an unexplicated notion of ‘looks red’ (‘red experience’, ‘red sense datum’, or whatever) and is thus in some danger of circularity since “the expression ‘looks red’ is not semantically unstructured. Its sense is determined by that of its constituents. If one does not understand those constituents, one does not fully understand the compound” (Peacocke 1992: 408). Well, maybe this kind of objection shows that an account of being red mustn't presuppose the property of looking red (though Peacocke doubts that it shows that, and so do I). In any event, no parallel argument could show that an account of being a doorknob mustn't presuppose the property of satisfying the doorknob stereotype.


15:59 


I don't, however, think this comes to anything much. In the first place, it's not true (in any unquestion-begging sense) that “virtually any learning algorithm [acts] differently depending on whether it hears X or doesn't hear X”. To the contrary, it's a way of putting the productivity problem that the learning algorithm must somehow converge on treating infinitely many unheard types in the same way that it treats finitely many of the heard types (viz. as grammatical) and finitely many heard types in the same way that it treats a different infinity of the unheard ones (viz. as ungrammatical). To that extent, the algorithm must not assume that either being heard or not being heard is a projectible property of the types.


On the other hand, every treatment of learning that depends on the feedback of evidence at all (whether it supposes the evidence to be direct or indirect, negative or positive, or all four) must “be several layers removed from the input, looking at broad statistical patterns across the lexicon”; otherwise the presumed feedback won't generalize. It follows that, on anybody's account, the negative information that the environment provides can't be “the nonoccurrence of particular sentences” (my emphasis); it's got to be the non-occurrence of certain kinds of sentences.


This much is common ground to any learning theory that accounts for the productivity of what is learned.


Where we've gotten to now: probably there isn't a Baker's Paradox about lexical syntax; you'd need ‘no overgeneralization’ to get one, and ‘no overgeneralization’ is apparently false of the lexicon. Even if, however, there were a Baker's Paradox about the lexicon, that would show that the hypotheses that the child considers when he makes his lexical inductions must be tightly endogenously constrained. But it wouldn't show, or even suggest, that they are hypotheses about semantic properties of lexical items. No more than the existence of a bona fide Baker's Paradox for sentential syntax—which it does seem that children hardly ever overgeneralize—shows, or even suggests, that it's in terms of the semantic properties of sentences that the child's hypotheses about their syntax are defined.


So much for Pinker's two attempts at ontogenetic vindications of lexical semantics. Though neither seems to work at all, I should emphasize a difference between them: whereas the ‘Baker's Paradox’ argument dissolves upon examination, there's nothing wrong with the form of the bootstrapping argument. For all that I've said, it could still be true that lexical syntax is bootstrapped from lexical semantics. Making a convincing case that it is would require, at a minimum, identifying the straps that the child tugs and showing that they are bona fide semantic; specifically, it would require showing that the lexical properties over which the child generalizes are typically among the ones that semantic-level lexical representations specify. In principle, we could get a respectable argument of that form tomorrow; it's just that, so far, there aren't any. So too, in my view, with the other ‘empirical’ or ‘linguistic’ arguments for lexical decomposition; all that's wrong with them is that they aren't sound.


Oh, well, so be it. Let’s go see what the philosophers have.


4 The Demise of Definitions, Part II: The Philosopher's Tale


[A] little thing affects them. A slight disorder of the stomach makes them cheats. [They] may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of underdone potato.


—Scrooge


It's a sad truth about definitions that even their warm admirers rarely loved them for themselves alone. Cognitive scientists (other than linguists; see Chapter 3) cared about definitions because they offered a handy construal of the thesis that many concepts are complex; viz. the concepts in a definition are the constituents of the concept it defines. And cognitive scientists liked many concepts being complex because then many concepts could be learned by assembling them from their parts. And cognitive scientists liked many concepts being learned by assembling them from their parts because then only the primitive parts have to be unlearned. We'll see, in later chapters, how qualmlessly most of cognitive science dropped definitions when it realized that it could have complex concepts without them.


15:59 


To simplify the exposition, I’ll use this notion pretty informally; for example, I’m glossing over the distinction between Boolean sentences and Boolean predicates. But none of this corner-cutting is essential to the argument.


This is not to deny that there are typicality effects for negative categories; as Barsalou remarks, “with respect to birds, chair is a better nonmember than is butterfly” (1987: 101). This observation does not, however, generalize to Boolean functions at large. I doubt that there are more and less typical examples of if it's a chair, then it's a Windsor or of chair or butterfly.


The moral seems clear enough: the mental representations that correspond to complex Boolean concepts specify not their prototypes but their logical forms. So, for example, NOT A CAT has the logical form not (F), and the rule of interpretation for a mental representation of that form assigns as its extension the complement of the set of Fs. To admit this, however, is to abandon the project of using prototype structure to account for the productivity (/systematicity) of complex Boolean predicates. So be it.


(II) The Pet Fish Problem


Prototype theories want to explicate notions like falling under a concept by reference to notions like being similar to the concept's exemplar. Correspondingly, prototype theories can represent conceptual repertoires as compositional only if (barring idioms) a thing's similarity to the exemplar of a complex concept is determined by its similarity to the exemplars of its constituents. However, this condition is not satisfied in the general case. So, for example, a goldfish is a poorish example of a fish, and a poorish example of a pet, but it's a prototypical example of a pet fish. So similarity to the prototypic pet and the prototypic fish doesn't predict similarity to the prototypical pet fish. It follows that if meanings were prototypes, then you could know what ‘pet’ means and know what ‘fish’ means and still not know what ‘pet fish’ means. Which is just to say that if meanings were prototypes, then the meaning of ‘pet fish’ wouldn't be compositional. Various solutions for this problem are on offer in the literature, but it seems to me that none is even close to satisfactory. Let's have a quick look at one or two.
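Before looking at those proposals, the failure itself can be put in toy form. The features and the crude overlap measure below are invented for illustration only; the point is just that a thing's score against the constituent prototypes need not predict its score against the compound's prototype.

# Toy illustration (invented features, crude overlap measure) of the pet
# fish problem: the goldfish scores poorly against PET and FISH but is
# practically the exemplar of PET FISH.

def overlap(thing, prototype):
    # Fraction of the prototype's features that the thing shares.
    return len(thing & prototype) / len(prototype)

pet_prototype      = {"furry", "lives indoors", "affectionate", "named"}
fish_prototype     = {"scaly", "grey", "caught at sea", "eaten"}
pet_fish_prototype = {"scaly", "orange", "lives in a bowl", "named"}

goldfish = {"scaly", "orange", "lives in a bowl", "named"}

print(overlap(goldfish, pet_prototype))       # 0.25
print(overlap(goldfish, fish_prototype))      # 0.25
print(overlap(goldfish, pet_fish_prototype))  # 1.0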


Smith and Osherson (1984) take prototypes to be matrices of weighted features (rather than exemplars). So, for example, the prototype for APPLE might specify a typical shape, colour, taste, size, ripeness, . . . etc. Let's suppose, in particular, that the prototypical apple is red, and consider the problem of constructing a prototype for PURPLE APPLE. The basic idea is to form a derived feature matrix that's just like the one for APPLE, except that the feature purple replaces the feature red and the weight of the new colour feature is appropriately increased. PET FISH would presumably work the same way.
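Read that way, the proposal is easy enough to state as a procedure. The sketch below is one rendering of it, with invented features and weights: copy the APPLE matrix, swap the colour value, and boost the colour weight by some finite factor. The objection that follows turns on exactly that finiteness.

# A sketch of the derived-matrix idea as described above (features and
# weights are invented): PURPLE APPLE is APPLE with the colour feature
# replaced and its weight increased by a finite boost.

APPLE = {
    "shape":  ("round", 0.8),
    "colour": ("red", 0.7),
    "taste":  ("sweet", 0.6),
}

def derive(matrix, dimension, new_value, boost=2.0):
    # Copy the matrix, replace the value on one dimension, and increase
    # that dimension's weight.
    _old_value, weight = matrix[dimension]
    derived = dict(matrix)
    derived[dimension] = (new_value, weight * boost)
    return derived

PURPLE_APPLE = derive(APPLE, "colour", "purple")
print(PURPLE_APPLE["colour"])   # ('purple', 1.4) -- still a finite weight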


It's pretty clear, however, that this treatment is flawed. To see this, ask yourself how much the feature purple weighs in the feature matrix for PURPLE APPLE. Clearly, it must weigh more than the feature red does in the matrix for APPLE since, though there can be apples that aren't red, there can't be purple apples that aren't purple; any more than there can be red apples that aren't red, or purple apples that aren't apples. In effect, purple has to weigh infinitely much in the feature matrix for PURPLE APPLE because ‘purple apples are purple’, unlike ‘typical apples are red’, is a logical truth.


So the Smith/Osherson proposal for composing prototypes faces a dilemma: either treat the logical truths as (merely) extreme cases of statistically reliable truths, or admit that the weights assigned to the features in derived matrices aren't compositional even if the matrices themselves are. Neither horn of this dilemma seems happy. Moreover, it's pretty clear what's gone wrong: what really sets the weight of the purple in PURPLE APPLE isn't the concept's prototype; it's the concept's logical form. But prototypes don't have logical forms.


Another way to put the pet fish problem is that the ‘features’ associated with the As in AN constructions are not, in the general case, independent of the features associated with the Ns.


15:59 


Suppose that we want the following to be a prototypical case where you and I have different but similar concepts of George Washington: though we agree about his having been the first American President, and the Father of His Country, and his having cut down a cherry tree, and so on, you think that he wore false teeth and I think that he didn't. The similarity of our GW concepts is thus some (presumably weighted) function of the number of propositions about him that we both believe, and the dissimilarity of our GW concepts is correspondingly a function of the number of such propositions that we disagree about. So far, so good.

 


But the question now arises: what about the shared beliefs themselves; are they or aren't they literally shared? This poses a dilemma for the similarity theorist that is, as far as I can see, unavoidable. If he says that our agreed upon beliefs about GW are literally shared, then he hasn't managed to do what he promised; viz. introduce a notion of similarity of content that dispenses with a robust notion of publicity. But if he says that the agreed beliefs aren't literally shared (viz. that they are only required to be similar), then his account of content similarity begs the very question it was supposed to answer: his way of saying what it is for concepts to have similar but not identical contents presupposes a prior notion of beliefs with similar but not identical contents.


The trouble, in a nutshell, is that all the obvious construals of similarity of beliefs (in fact, all the construals that I've heard of) take it to involve partial overlap of beliefs.22 But this treatment breaks down if the beliefs that are in the overlap are themselves construed as similar but not identical. It looks as though a robust notion of content similarity can't but presuppose a correspondingly robust notion of content identity. Notice that this situation is not symmetrical; the notion of content identity doesn't require a prior notion of content similarity. Leibniz's Law tells us what it is for the contents of concepts to be identical; Leibniz's Law tells us what it is for any things to be identical.


As I remarked above, different theorists find different rugs to sweep this problem under; but, as far as I can tell, none of them manages to avoid it. I propose to harp on this a bit because confusion about it is rife, not just in philosophy but in the cognitive science community at large. Not getting it straight is one of the main things that obscures how very hard it is to construct a theory of concepts that works, and how very much cognitive science has thus far failed to do so.


Suppose, for example, it's assumed that your concept PRESIDENT is similar to my concept PRESIDENT in so far as we assign similar subjective probabilities to propositions that contain the concept. There are plenty of reasons for rejecting this sort of model; we'll discuss its main problems in Chapter 5. Our present concern is only whether constructing a probabilistic account of concept similarity would be a way to avoid having to postulate a robust notion of content identity.


Perhaps, in a typical case, you and I agree that p is very high for ‘FDR is/was President’ and for ‘The President is the Commander-in-Chief of the Armed Forces’ and for ‘Presidents have to be of voting age’, etc.; but, whereas you rate ‘Millard Fillmore is/was President’ as having a probability close to 1, I, being less well informed, take it to be around p = 0.07 (Millard Fillmore???). This gives us an (arguably) workable construal of the idea that we have similar but not identical PRESIDENT concepts. But it does so only by helping itself to a prior notion of belief identity, and to the assumption that there are lots of thoughts of which our respective PRESIDENTS are constituents that we literally share. Thus, you and I are, by assumption, both belief-related to the thoughts that Millard Fillmore was President, that Presidents are Commanders-in-Chief, etc.

‘Why not take content similarity as primitive and stop trying to construe it?’ Sure; but then why not take content identity as primitive and stop trying to construe it? In which case, what is semantics for?
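The probabilistic construal just sketched is easy to put in toy form, and doing so makes the concession visible. The propositions and numbers below are invented; what matters is that comparing the two assignments proposition by proposition already presupposes that the very same propositions figure in both belief systems, which is the robust notion of content identity that was supposed to be dispensed with.

# A toy rendering of the probabilistic proposal: each person's PRESIDENT
# concept is modelled as subjective probabilities over propositions, and
# concept similarity as agreement on those propositions.

yours = {
    "FDR was President": 0.99,
    "The President is Commander-in-Chief": 0.98,
    "Millard Fillmore was President": 0.97,
}
mine = {
    "FDR was President": 0.99,
    "The President is Commander-in-Chief": 0.95,
    "Millard Fillmore was President": 0.07,   # less well informed
}

def concept_similarity(a, b):
    # Mean agreement (1 minus absolute difference) over shared propositions.
    # Indexing both assignments by the same propositions is what quietly
    # assumes literal sharing of belief contents.
    shared = a.keys() & b.keys()
    return sum(1 - abs(a[p] - b[p]) for p in shared) / len(shared)

print(round(concept_similarity(yours, mine), 2))   # 0.69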


15:59 


C. Smart who, it seems to me, got more of this right than he is these days given credit for: “This account of secondary qualities explains their unimportance in physics. For obviously the discriminations . . . made by a very complex neurophysiological mechanism are hardly likely to correspond to simple and nonarbitrary distinctions in nature” (1991: 172). My point is: this is true not just of colours, but of doorknobs too.

 


15:59 


If ‘doorknob’ has a nominal definition, then it ought to be possible for a competent linguist or analytical philosopher to figure out what its nominal definition is. If ‘doorknob’ has a real definition, then it ought to be possible for a science of doorknobs to uncover it. But linguists and philosophers have had no luck defining ‘doorknob’ (or, as we've seen, anything much else). And there is nothing for a science of doorknobs to find out. The direction this is leading in is that if ‘doorknob’ is undefinable, that must be because being a doorknob is a primitive property. But, of course, that's crazy. If a thing has doorknobhood, it does so entirely in virtue of others of the properties it has. If doorknobs don't have hidden essences or real definitions, that can't possibly be because being a doorknob is one of those properties that things have simply because they have them; ultimates like spin, charm, charge, or the like, at which explanation ends.

 


So, here's the riddle. How could ‘doorknob’ be undefinable (contrast ‘bachelor’ =df ‘unmarried man’) and lack a hidden essence (contrast water = H2O) without being metaphysically primitive (contrast spin, charm, and charge)?


The answer (I think) is that ‘doorknob’ works like ‘red’.


Now I suppose you want to know how ‘red’ works.


Well, ‘red’ hasn't got a nominal definition, and redness doesn't have a real essence (ask any psychophysicist), and, of course, redness isn't metaphysically ultimate. This is all OK because redness is an appearance property, and the point about appearance properties is that they don't raise the question that definitions, real and nominal, propose to answer: viz. ‘What is it that the things we take to be Xs have in common, over and above our taking them to be Xs?’ This is, to put it mildly, not a particularly original thing to say about red. All that's new is the proposal to extend this sort of analysis to doorknobs and the like; the proposal is that there are lots of appearance concepts that aren't sensory concepts.80 That this should be so is, perhaps, unsurprising on reflection. There is no obvious reason why a property that is constituted by the mental states that things that have it evoke in us must ipso facto be constituted by the sensory states that things that have it evoke in us.


All right, all right; you can't believe that something's being a doorknob is “about us” in anything like the way that maybe something's being red is. Surely ‘doorknob’ expresses a property that a thing either has or doesn't, regardless of our views; as it were, a property of things in themselves? So be it, but which property? Consider the alternatives (here we go again): is it that ‘doorknob’ is definable? If so, what's the definition? (And, even if ‘doorknob’ is definable, some concepts have to be primitive, so the present sorts of issues will eventually have to be faced about them.) Is it that doorknobs qua doorknobs have a hidden essence? Hidden where, do you suppose? And who is in charge of finding it? Is it that being a doorknob is ontologically ultimate? You've got to be kidding.31


If you take it seriously that DOORKNOB hasn't got a conceptual analysis, and that doorknobs don't have hidden essences, all that's left to make something a doorknob (anyhow, all that's left that I can think of) is how it strikes us. But if being a doorknob is a property that's constituted by how things strike us, then the intrinsic connection between the content of DOORKNOB and the content of our doorknob-experiences is metaphysically necessary, hence not a fact that a cognitivist theory of concept acquisition is required in order to explain.


To be sure, there remains something about the acquisition of DOORKNOB that does want explaining: viz. why it is the property that these guys (several doorknobs) share, and not the property that those guys (several cows) share, that we lock to from experience of good (e.g. stereotypic) examples of doorknobs. And, equally certainly, it's got to be something about our kinds of minds that this explanation adverts to. But, I'm supposing, such an explanation is cognitivist only if it turns on the evidential relation between having the stereotypic doorknob properties and being a doorknob. (So, for example, triggering explanations aren't


15:59 


30


So, then, which appearance properties are sensory properties? Here's a line that one might consider: £ is a sensory property only if it is possible to have an experience of which £-ness is the intentional object (e.g. an experience (as) of red) even though one hasn't got the concept £. Here the test of having the concept £ would be something like being able to think thoughts whose truth conditions include ... £ ... (e.g. thoughts like that's red). I think this must be the notion of ‘sensory property’ that underlies the Empiricist idea that RED and the like are learned ‘by abstraction’ from experience, a doctrine which presupposes that a mind that lacks RED can none the less have experiences (as) of redness. By this test, DOORKNOB is presumably not a sensory concept since, though it is perfectly possible to have an experience (as) of doorknobs, I suppose only a mind that has the concept DOORKNOB can do so. ‘But how could one have an experience (as) of red if one hasn't got the concept RED?’ It's easy: in the case of redness, but not of doorknobhood, one is equipped with sensory organs which produce such experiences when they are appropriately stimulated. Redness can be sensed, whereas the perceptual detection of doorknobhood is always inferential. Just as sensible psychologists have always supposed.


31


The present discussion parallels what I regard as a very deep passage in Schiffer 1987 about being a dog. Schiffer takes for granted that ‘dog’ doesn't name a species, and (hence?) that dogs as such don't have a hidden essence. His conclusion is that there just isn't (except pleonastically) any such property as being a dog. My diagnosis is that there is too, but it's mind-dependent.


32


Reminder: ‘the X stereotype’ is rigid. See n. 12 above.


33


Except in the (presumably never encountered) case where all the Xs are stereotypic. In that case, there's a dead heat.


34


In principle, they are also epistemically independent in both directions. As things are now, we find out about the stereotype by doing tests on subjects who are independently identified as having the corresponding concept. But I assume that if we knew enough about the mind/brain, we could predict a concept from its stereotype and vice versa. In effect, given the infinite set of actual and possible doorknobs, we could predict the stereotype from which our sorts of minds would generalize to it; and given the doorknob stereotype, we could predict the set of actual and possible objects which our kinds of minds would take to instantiate doorknobhood.


35


Compare Jackendoff: “Look at the representations of, say, generative phonology... It is strange to say that English speakers know the proposition, true in the world independent of speakers [sic], that syllable-initial voiceless consonants aspirate before stress ... In generative phonology . . . this rule of aspiration is regarded as a principle of internal computation, not a fact about the world. Such semantical concepts as implication, confirmation, and logical consequence seem curiously irrelevant” (1992: 29). Note that, though they are confounded in his text, the contrast that Jackendoff is insisting on isn't between propositions and rules/principles of computation; it's between phenomena of the kind that generative phonology studies and facts about the world. But that ‘p’ is aspirated in ‘Patrick’ is a fact about the world. That is to say: it's a fact. And of course the usual logico-semantical concepts apply. That ‘p’ is aspirated in ‘Patrick’ is what makes the claim that ‘p’ is aspirated in ‘Patrick’ true; since ‘p’ is aspirated in ‘Patrick’, something in ‘Patrick’ is aspirated . . . and so forth.


36


In just this spirit, Keith Campbell remarks about colours that if they are “integrated reflectances across three overlapping segments clustered in the middle of the total electromagnetic spectrum, then they are, from the inanimate point of view, such highly arbitrary and idiosyncratic properties that it is no wonder the particular colors we are familiar with are manifest only in transactions with humans, rhesus monkeys, and machines especially built to replicate just their particular mode of sensitivity to photons” (1990: 572—3). (The force of this observation is all the greater if, as seems likely, even the reflectance theory underestimates the complexity of colour psychophysics.) See also J. J. C. Smart (1991).


15:59 


Names, by contrast, succeed in their job because they aren't compositional; not even when they are syntactically complex. Consider ‘the Iron Duke’, to which ‘Iron’ does not contribute iron, and which you can therefore use to specify the Iron Duke even if you don't know what he was made of. Names are nicer than descriptions because you don't have to know much to specify their bearers, although you do have to know what their bearers are called. Descriptions are nicer than names because, although you do have to know a lot to specify their bearers, you don't have to know what their bearers are called. What's nicer than having the use of either names or descriptions is having the use of both. I agree that, as a piece of semantic theory, this is all entirely banal; but that's my point, so don't complain. There is, to repeat, no need for fancy arguments that the representational systems we talk and think in are in large part compositional; you find the effects of their compositionality just about wherever you look.


I must apologize for having gone on at such length about the arguments pro and con conceptual compositionality; the reason I've done so is that, in my view, the status of the statistical theory of concepts turns, practically entirely, on this issue. And statistical theories are now the preferred accounts of concepts practically throughout cognitive science. In what follows I will take the compositionality of conceptual repertoires for granted, and try to make clear how the thesis that concepts are prototypes falls afoul of it.


Why Concepts Can't Be Prototypes


Here's why concepts can't be prototypes: whatever conceptual content is, compositionality requires that complex concepts inherit their contents from those of their constituents, and that they do so in a way that explains their productivity and systematicity. Accordingly, whatever is not inherited from its constituents by a complex concept is ipso facto not the content of that concept. But: (i) indefinitely many complex concepts have no prototypes; a fortiori they do not inherit their prototypes from their constituents. And, (ii) there are indefinitely many complex concepts whose prototypes aren't related to the prototypes of their constituents in the ways that the compositional explanation of productivity and systematicity requires. So, again, if concepts are compositional then they can't be prototypes.


In short, prototypes don't compose. Since this is the heart of the case against statistical theories of concepts, I propose to expatiate a bit on the examples.


(I) The Uncat Problem


For indefinitely many “Boolean” concepts,57 there isn't any prototype even though:

—their primitive constituent concepts all have prototypes,

and

—the complex concept itself has definite conditions of semantic evaluation (definite satisfaction conditions).


So, for example, consider the concept NOT A CAT (mutatis mutandis, the predicate ‘is not a cat’); and let's suppose (probably contrary to fact) that CAT isn't vague; i.e. that ‘is a cat’ has either the value S or the value U for every object in the relevant universe of discourse. Then, clearly, there is a definite semantic interpretation for NOT A CAT; i.e. it expresses the property of not being a cat, a property which all and only objects in the extension of the complement of the set of cats instantiate.


However, although NOT A CAT is semantically entirely well behaved on these assumptions, it's pretty clear that it hasn't got a stereotype or an exemplar. For consider: a bagel is a pretty good example of a NOT A CAT, but a bagel couldn't be NOT A CAT's prototype. Why not? Well, if bagels are the prototypic NOT A CATs, it follows that the more a thing is like a bagel the less it's like a cat; and the more a thing isn't like a cat, the more it's like a bagel. But the second conjunct is patently not true. Tuesdays and erasers, both of which are very good examples of NOT A CATs, aren't at all like bagels. An Eraser is not more a Bagel for being a bad Cat. Notice that the same sort of argument goes through if you are thinking of stereotypes in terms of features rather than exemplars. There is nothing that non-cats qua non-cats as such are likely to have in common (except, of course, not being cats).58
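The contrast can be displayed in a few lines. The items and features below are invented; the point is that the Boolean semantics delivers a perfectly definite extension by complementation, while the things in that extension need share no features around which a prototype could form.

# A small sketch (invented items and features): the semantics of NOT A CAT
# is well behaved, but its instances have nothing prototype-like in common.

universe = {"tabby", "siamese", "bagel", "eraser", "tuesday"}
cats = {"tabby", "siamese"}

not_a_cat = universe - cats          # the complement: a definite extension
print(sorted(not_a_cat))             # ['bagel', 'eraser', 'tuesday']

features = {
    "bagel":   {"edible", "ring-shaped"},
    "eraser":  {"rubber", "stationery"},
    "tuesday": {"abstract", "a day of the week"},
}
shared = set.intersection(*(features[x] for x in not_a_cat))
print(shared)                        # set() -- nothing in common but not being cats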


15:59 


There is, however, a widespread consensus (and not only among conceptual relativists) that intentional explanation can, after all, be preserved without supposing that belief contents are often—or even ever—literally public. The idea is that a robust notion of content similarity would do just as well as a robust notion of content identity for the cognitive scientist's purposes. Here, to choose a specimen practically at random, is a recent passage in which Gil Harman enunciates this faith:


Sameness of meaning from one symbol system to another is a similarity relation rather than an identity relation in the respect that sameness of meaning is not transitive ... I am inclined to extend the point to concepts, thoughts, and beliefs . . . The account of sameness of content appeals to the best way of translating between two systems, where goodness in translation has to do with preserving certain aspects of usage, with no appeal to any more ‘robust’ notion of content or meaning identity. . . [There's no reason why] the resulting notion of sameness of content should fail to satisfy the purposes of intentional explanation. (1993: 169—79)7


It's important whether such a view can be sustained since, as we'll see, meeting the requirement that intentional contents be literally public is non-trivial; like compositionality, publicity imposes a substantial constraint upon one's theory of concepts and hence, derivatively, upon one's theory of language. In fact, however, the idea that content similarity is the basic notion in intentional explanation is affirmed a lot more widely than it's explained; and it's quite unclear, on reflection, how the notion of similarity that such a semantics would require might be unquestion-beggingly developed. On one hand, such a notion must be robust in the sense that it preserves intentional explanations pretty generally; on the other hand, it must do so without itself presupposing a robust notion of content identity. To the best of my knowledge, it's true without exception that all the construals of concept similarity that have thus far been put on offer egregiously fail the second condition.


Harman, for example, doesn't say much more about content-similarity-cum-goodness-of-translation than that it isn't transitive and that it “preserves certain aspects of usage”. That's not a lot to go on. Certainly it leaves wide open whether Harman is right in denying that his account of content similarity presupposes a “‘robust’ notion of content or meaning identity”. For whether it does depends on how the relevant “aspects of usage” are themselves supposed to be individuated, and about this we're told nothing at all.


Harman is, of course, too smart to be a behaviourist; ‘usage’, as he uses it, is itself an intentional-cum-semantic term. Suppose, what surely seems plausible, that one of the ‘aspects of usage’ that a good translation of ‘dog’ has to preserve is that it be a term that implies animal, or a term that doesn't apply to ice cubes, or, for that matter, a term that means dog. If so, then we're back where we started; Harman needs notions like same implication, same application, and same meaning in order to explicate his notion of content similarity. All that's changed is which shell the pea is under.


At one point, Harman asks rhetorically, “What aspects of use determine meaning?” Reply: “It is certainly relevant what terms are applied to and the reasons that might be offered for this application ... it is also relevant how some terms are used in relation to other terms” (ibid.: 166). But I can't make any sense of this unless some notion of ‘same application’, ‘same reason’, and ‘same relation of terms’ is being taken for granted in characterizing what good translations ipso facto have in common. NB on pain of circularity: same application (etc.), not similar application (etc.). Remember that similarity of semantic properties is the notion that Harman is trying to explain, so his explanation mustn't presuppose that notion.


I don't particularly mean to pick on Harman; if his story begs the question it was supposed to answer, that is quite typical of the literature on concept similarity. Though it's often hidden in a cloud of technical apparatus (for a detailed case study, see Fodor and Lepore 1992: ch. 7), the basic problem is easy enough to see.


15:59 


That being so, explaining the doorknob/DOORKNOB effect requires postulating some (contingent, psychological) mechanism that reliably leads from having F-experiences to acquiring the concept of being F. It understates the case to say that no alternative to hypothesis testing suggests itself. So I don't think that a causal/historical account of the locking relation can explain why there is a d/D effect without invoking the very premiss which, according to SA, it can't have: viz. that primitive concepts are learned inductively.

 


Note the similarity of this objection to the one that rejected a Darwinian solution of the d/D problem: just as you can't satisfy the conditions for having the concept F just in virtue of having interacted with Fs, so too you can't satisfy the conditions for having the concept F just in virtue of your grandmother's having interacted with Fs. In both cases, concept acquisition requires something to have gone on in your head in consequence of the interactions. Given the ubiquity of the d/D phenomenon, the natural candidate for what's gone on in your head is inductive learning.


Second Try at a Metaphysical Solution to the d/D Problem


Maybe what it is to be a doorknob isn't evidenced by the kind of experience that leads to acquiring the concept DOORKNOB; maybe what it is to be a doorknob is constituted by the kind of experience that leads to acquiring the concept DOORKNOB. A Very Deep Thought, that; but one that requires some unpacking. I want to take a few steps back so as to get a running start.


Chapter 3 remarked that it's pretty clear that if we can't define “doorknob”, that can't be because of some accidental limitation of the available metalinguistic apparatus; such a deficit could always be remedied by switching metalanguages. The claim, in short, was not that we can't define “doorknob” in English, but that we can't define it at all. The implied moral is interesting: if “doorknob” can't be defined, the reason that it can't is plausibly not methodological but ontological; it has something to do with what kind of property being a doorknob is. If you're inclined to doubt this, so be it; but I think that you should have your intuitions looked at.


Well, but what could it be about being a doorknob that makes ‘doorknob’ not definable? Could it be that doorknobs have a “hidden essence” (as water, for example, is supposed to do); one that has eluded our scrutiny so far? Perhaps some science, not yet in place, will do for doorknobs what molecular chemistry did for water and geometrical optics did for mirrors: make it clear to us what they really are? But what science, for heaven's sake? And what could there be for it to make clear? Mirrors are puzzling (it seems that they double things); and water is puzzling too (what could it be made of, there's so much of it around?). But doorknobs aren't puzzling, doorknobs are boring. Here, for once, “further research” appears not to be required.


It's sometimes said that doorknobs (and the like) have functional essences: what makes a thing a doorknob is what it is (or is intended to be) used for. So maybe the science of doorknobs is psychology? Or sociology? Or anthropology? Once again, believe it if you can. In fact, the intentional aetiology of doorknobs is utterly transparent: they're intended to be used as doorknobs. I don't at all doubt that's what makes them what they are, but that it is gets us nowhere. For, if DOORKNOB plausibly lacks a conceptual analysis, INTENDED TO BE USED AS A DOORKNOB does too, and for the same reasons. And surely, surely, that can't, in either case, be because there's something secret about doorknobhood that depth psychology is needed to reveal? No doubt, there is a lot that we don't know about intentions towards doorknobs qua intentions; but I can't believe there's much that's obscure about them qua intentions towards doorknobs.


Look, there is presumably something about doorknobs that makes them doorknobs, and either it's something complex or it's something simple. If it's something complex, then ‘doorknob’ must have a definition, and its definition must be either “real” or “nominal” (or both).


15:59 


19


Just as it’s possible to dissociate the idea that concepts are complex from the claim that meaning-constitutive inferences are necessary, so too it’s possible to dissociate the idea that concepts are constituted by their roles in inferences from the claim that they are complex. See Appendix 5A.


20


More precisely, only with respect to conceptually necessary inferences. (Notice that neither nomological nor metaphysical necessity will do; there might be laws about brown cows per se, and (who knows?) brown cows might have a proprietary hidden essence.) I don't know what a Classical IRS theorist should say if it turns out that conceptually necessary inferences aren't ipso facto definitional or vice versa. That, however, is his problem, not mine.


21


They aren't the only ones, of course. For example, Keil remarks that “Theories . . . make it impossible ... to talk about the construction of concepts solely on the basis of probabilistic distributions of properties in the world” (1987: 196). But that's true only on the assumption that theories somehow constitute the concepts they contain. Ditto Keil's remark that “future work on the nature of concepts . . . must focus on the sorts of theories that emerge in children and how these theories come to influence the structure of the concepts that they embrace” (ibid.).


22


There are exceptions. Susan Carey thinks that the individuation of concepts must be relativized to the theories they occur in, but that only the basic ‘ontological’ commitments of a theory are content constitutive. (However, see Carey 1985: 168: “I assume that there is a continuum of degrees of conceptual differences, at the extreme end of which are concepts embedded in incommensurable conceptual systems.”) It's left open how basic ontological claims are to be distinguished from commitments of other kinds, and Carey is quite aware that problems about drawing this distinction are depressingly like the analytic/synthetic problems. But in so far as Carey has an account of content individuation on offer, it does seem to be some version of the Classical theory.


23


This point is related, but not identical, to the familiar worry about whether implicit definition can effect a ‘qualitative change’ in a theory's expressive power: the worry that definitions (implicit or otherwise) can only introduce concepts whose contents are already expressible by the host theory. (For discussion, see Fodor 1975.) It looks to me that implicit definition is specially problematic for meaning holists even if it's granted that an implicit definition can (somehow) extend the host theory's expressive power.


24


I don't particularly mean to pick on Gopnik; the cognitive science literature is full of examples of the mistake that I'm trying to draw attention to. What's unusual about Gopnik's treatment is just that it's clear enough for one to see what the problem is.


25


As usual, it's essential to keep in mind that when a de dicto intentional explanation attributes to an agent knowledge (rules, etc.), it thereby credits the agent with the concepts involved in formulating the knowledge, and thus incurs the burden of saying what concepts they are. See the ‘methodological digression’ in Chapter 2.


26


This chapter reconsiders some issues about the nativistic commitments of RTMs that I first raised in Fodor 1975 and then discussed extensively in 1981. Casual familiarity with the latter paper is recommended as a prolegomenon to this discussion. I'm especially indebted to Andrew Milne and to Peter Grim for having raised (essentially the same) cogent objections to a previous version.


27


For discussions that turn on this issue, see Fodor 1986; Antony and Levine 1991; Fodor 1991.


28


Actually, of course, DOORKNOB isn't a very good example, since it's plausibly a compound composed of the constituent concepts DOOR and KNOB. But let's ignore that for the sake of the discussion.


29


Well, maybe the acquisition of PROTON doesn't; it's plausible that PROTON is not typically acquired from its instances. So, as far as this part of the discussion is concerned, you are therefore free to take PROTON as a primitive concept if you want to. But I imagine you don't want to. Perhaps, in any case, it goes without saying that the fact that the d/D effect is widespread in concept acquisition is itself contingent and a posteriori.


15:59 

I hope there to placate such scruples about DOORKNOB and CARBURETTOR as some of you may feel, and to do so within the framework of an atomistic RTM.


 


5. Concepts are public; they're the sorts of things that lots of people can, and do, share.


Since, according to RTM, concepts are symbols, they are presumed to satisfy a type/token relation; to say that two people share a concept (i.e. that they have literally the same concept) is thus to say that they have tokens of literally the same concept type. The present requirement is that the conditions for typing concept tokens must not be so stringent as to assign practically every concept token to a different type from practically any other.


I put it this way advisedly. I was once told, in the course of a public discussion with an otherwise perfectly rational and civilized cognitive scientist, that he “could not permit” the concept HORSE to be innate in humans (though I guess it's OK for it to be innate in horses). I forgot to ask him whether he was likewise unprepared to permit neutrinos to lack mass. Just why feelings run so strongly on these matters is unclear to me. Whereas the ethology of all other species is widely agreed to be thoroughly empirical and largely morally neutral, a priorizing and moralizing about the ethology of our species appears to be the order of the day. Very odd.


It seems pretty clear that all sorts of concepts (for example, DOG, FATHER, TRIANGLE, HOUSE, TREE, AND, RED, and, surely, lots of others) are ones that all sorts of people, under all sorts of circumstances, have had and continue to have. A theory of concepts should set the conditions for concept possession in such a way as not to violate this intuition. Barring very pressing considerations to the contrary, it should turn out that people who live in very different cultures and/or at very different times (me and Aristotle, for example) both have the concept FOOD; and that people who are possessed of very different amounts of mathematical sophistication (me and Einstein, for example) both have the concept TRIANGLE; and that people who have had very different kinds of learning experiences (me and Helen Keller, for example) both have the concept TREE; and that people with very different amounts of knowledge (me and a four-year-old, for example) both have the concept HOUSE. And so forth. Accordingly, if a theory or an experimental procedure distinguishes between my concept DOG and Aristotle's, or between my concept TRIANGLE and Einstein's, or between my concept TREE and Helen Keller's, etc., that is a very strong prima facie reason to doubt that the theory has got it right about concept individuation or that the experimental procedure is really a measure of concept possession.


I am thus setting my face against a variety of kinds of conceptual relativism, and it may be supposed that my doing so is itself merely dogmatic. But I think there are good grounds for taking a firm line on this issue. Certainly RTM is required to. I remarked in Chapter 1 that RTM takes for granted the centrality of intentional explanation in any viable cognitive psychology. In the cases of interest, what makes such explanations intentional is that they appeal to covering generalizations about people who believe that such-and-such, or people who desire that so-and-so, or people who intend that this and that, and so on. In consequence, the extent to which an RTM can achieve generality in the explanations it proposes depends on the extent to which mental contents are supposed to be shared. If everybody else's concept WATER is different from mine, then it is literally true that only I have ever wanted a drink of water, and that the intentional generalization ‘Thirsty people seek water’ applies only to me. (And, of course, only I can state that generalization; words express concepts, so if your WATER concept is different from mine, ‘Thirsty people seek water’ means something different when you say it and when I do.) Prima facie, it would appear that any very thoroughgoing conceptual relativism would preclude intentional generalizations with any very serious explanatory power. This holds in spades if, as seems likely, a coherent conceptual relativist has to claim that conceptual identity can't be maintained even across time slices of the same individual.

Who could really doubt that this is so? Systematicity seems to be one of the (very few) organizational properties of minds that our cognitive science actually makes some sense of.


 


If your favourite cognitive architecture doesn't support a productive cognitive repertoire, you can always argue that since minds are really finite, they aren't literally productive. But systematicity is a property that even quite finite conceptual repertoires can have; it isn't remotely plausibly a methodological artefact. If systematicity needs compositionality to explain it, that strongly suggests that the compositionality of mental representations is mandatory. For all that, there has been an acrimonious argument about systematicity in the literature for the last ten years or so. One does wonder, sometimes, whether cognitive science is worth the bother.


Some currently popular architectures don't support systematic representation. The representations they compute with lack constituent structure; a fortiori they lack compositional constituent structure. This is true, in particular, of ‘neural networks’. Connectionists have responded to this in a variety of ways. Some have denied that concepts are systematic. Some have denied that Connectionist representations are inherently unstructured. A fair number have simply failed to understand the problem. The most recent proposal I've heard for a Connectionist treatment of systematicity is owing to the philosopher Andy Clark (1993). Clark says that we should “bracket” the problem of systematicity. “Bracket” is a technical term in philosophy which means try not to think about.


I don't propose to review this literature here. Suffice it that if you assume compositionality, you can account for both systematicity and productivity; and if you don't, you can't. Whether or not productivity and systematicity prove that conceptual content is compositional, they are clearly substantial straws in the wind. I find it persuasive that there are


quite a few such straws, and they appear all to be blowing in the same direction.


The Best Argument for Compositionality


The best argument for the compositionality of mental (and linguistic) representation is that its traces are ubiquitous; not just in very general features of cognitive capacity like productivity and systematicity, but also everywhere in its details. Deny productivity and systematicity if you will; you still have these particularities to explain away.


Consider, for example: the availability of (definite) descriptions is surely a universal property of natural languages. Descriptions are nice to have because they make it possible to talk (mutatis mutandis, to think) about a thing even if it isn't available for ostension and even if you don't know its name; even, indeed, if it doesn't have a name (as with ever so many real numbers). Descriptions can do this job because they pick out unnamed individuals by reference to their properties. So, for example, ‘the brown cow’ picks out a certain cow; viz. the brown one. It does so by referring to a property, viz. being brown, which that cow has and no other contextually relevant cow does. Things go wrong if (e.g.) there are no contextually relevant cows; or if none of the contextually relevant cows is brown; or if more than one of the contextually relevant cows is brown . . . And so forth.


OK, but just how does all this work? Just what is it about the syntax and semantics of descriptions that allows them to pick out unnamed individuals by reference to their properties? Answer:


i. Descriptions are complex symbols which have terms that express properties among their syntactic constituents; and


ii. These terms contribute the properties that they express to determine what the descriptions that contain them specify.


It's because ‘brown’ means brown that it's the brown cow that ‘the brown cow’ picks out. Since you can rely on this arrangement, you can be confident that ‘the brown cow’ will specify the local brown cow even if you don’t know which cow the local brown cow is; even if you don't know that it's Bossie, for example, or that it's this cow. That, however, is just to say that descriptions succeed in their job because they are compositional. If English didn't let you use ‘brown’ context-independently to mean brown, and ‘cow’ context-independently to mean cow, it couldn't let you use ‘the brown cow’ to specify a brown cow without naming it.
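By way of illustration only (this is my toy sketch, not anything argued in the text), here is a minimal Python model of how a description could compose its denotation from the context-independent meanings of its constituents. The names Individual, LEXICON, and the() are invented for the example; the point is just that ‘the brown cow’ succeeds, or misfires, in exactly the circumstances listed above.

from dataclasses import dataclass

@dataclass(frozen=True)
class Individual:
    name: str
    kind: str
    colour: str

# Context-independent lexical meanings: each word expresses a property.
LEXICON = {
    "brown": lambda x: x.colour == "brown",
    "cow":   lambda x: x.kind == "cow",
}

def the(adjective, noun, context):
    # Denotation of 'the <adjective> <noun>' relative to a set of
    # contextually relevant individuals.
    satisfiers = [x for x in context
                  if LEXICON[adjective](x) and LEXICON[noun](x)]
    if len(satisfiers) != 1:
        # No relevant brown cow, or more than one: the description misfires.
        raise ValueError("description fails to pick out a unique individual")
    return satisfiers[0]

context = [Individual("Bossie", "cow", "brown"),
           Individual("Clover", "cow", "black")]
print(the("brown", "cow", context).name)  # Bossie, picked out without naming her

The design point is the one in the text: 'brown' and 'cow' carry their meanings into every description that contains them, which is why the whole phrase can specify an individual the speaker cannot name.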


But it is surely not tolerable that they should lead by plausible arguments to a contradiction. If the d/D effect shows that primitive concepts must be learned inductively, and SA shows that primitive concepts can't be learned inductively, then the conclusion has to be that there aren't any primitive concepts. But if there aren't any primitive

 


concepts, then there aren't any concepts at all. And if there aren't any concepts at all, RTM has gone West. Isn't it a bit late in the day (and late in the book) for me to take back RTM?


Help!


Ontology


This all started because we were in the market for some account of how DOORKNOB is acquired. The story couldn't be hypothesis testing because Conceptual Atomism was being assumed, so DOORKNOB was supposed to be primitive; and it's common ground that the mechanism for acquiring primitive concepts can't be any kind of induction. But, as it turned out, there is a further constraint that whatever theory of concepts we settle on should satisfy: it must explain why there is so generally a content relation between the experience that eventuates in concept attainment and the concept that the experience eventuates in attaining. At this point, the problem about DOORKNOB metastasized: assuming that primitive concepts are triggered, or that they're ‘caught’, won't account for their content relation to their causes; apparently only induction will. But primitive concepts can't be induced; to suppose that they are is circular. What started as a problem about DOORKNOB now looks like undermining all of RTM. This is not good. I was relying on RTM to support me in my old age.


But, on second thought, just why must one suppose that only a hypothesis-testing acquisition model can explain the doorknob/DOORKNOB relation? The argument for this is, I'm pleased to report, non-demonstrative. Let's go over it once more: the hypothesis-testing model takes the content relation between a concept and the experience it's acquired from to be a special case of the evidential relation between a generalization and its confirming instances (between, for example, the generalization that Fs are Gs and instances of things that are both F and G). You generally get DOG from (typical) dogs and not, as it might be, from ketchup. That's supposed to be because having DOG requires believing (as it might be) that typical dogs bark. (Note, once again, how cognitivism about concept possession and inductivism about concept acquisition take in one another's wash.) And, whereas encounters with typical dogs constitute evidence that dogs bark, encounters with ketchup do not (ceteris paribus). If the relation between concepts and experiences is typically evidential, that would explain why it's so often a relation of content: and what other explanation have we got?


That is what is called in the trade a ‘what-else’ argument. I have nothing against what-else arguments in philosophy; still less in cognitive science. Rational persuasion often invokes considerations that are convincing but not demonstrative, and what else but a what-else argument could a convincing but non-demonstrative argument be? On the other hand, it is in the nature of what-else arguments that ‘Q, if not P’ trumps ‘What else, if not P?’; and, in the present case, I think there is a prima facie plausible ontological candidate for Q; that is, an explanation which makes the d/D effect the consequence of a metaphysical truth about how concepts are constituted, rather than an empirical truth about how concepts are acquired. In fact, I know of two such candidates, one of which might even work.


First Try at a Metaphysical Solution to the d/D Problem


If you assume a causal/historical (as opposed to a dispositional/counterfactual) construal of the locking relation, it might well turn out that there is a metaphysical connection between acquiring DOORKNOB and causally interacting with doorknobs. (Cf. the familiar story according to which it's because I have causally interacted with water and my Twin hasn't that I can think water-thoughts and he can't.) Actually, I don't much like causal/historical accounts of locking (see Fodor 1994: App. B), but we needn't argue about that here. For, even if causally interacting with doorknobs is metaphysically necessary for DOORKNOB-acquisition, it couldn't conceivably be metaphysically sufficient; just causally interacting with doorknobs doesn't guarantee you any concepts at all.



The traditional locus of the inference from finite determination to finite representation is, however, not Mentalese but English (see Chomsky 1965; Davidson 1967). Natural languages are learned, and learning is an ‘act of intellection’ par excellence. Doesn't that show that English has to have a compositional semantics? I doubt that it does. For one thing, as a number of us have emphasized (see Chapter 1; Fodor 1975; Schiffer 1987; for a critical discussion, see Lepore 1997), if you assume that thinking is computing, it's natural to think that acquiring a natural language is learning how to translate between it and the language you compute in. Suppose that language learning requires that the translation procedure be ‘grasped’ and grasping the translation procedure requires that it be finitely


and explicitly represented. Still, there is no obvious reason why translation between English and Mentalese requires having a compositional theory of content for either language. Maybe translation to and from Mentalese is a syntactical process: maybe the Mentalese translation of an English sentence is fully determined given its canonical structural descriptions (including, of course, lexical inventory).


I don't really doubt that English and Mentalese are both productive; or that the reason that they are productive is that their semantics is compositional. But that's faith in search of justification. The polemical situation is, on the one hand, that minds are productive only under a tendentious idealization; and, on the other hand, that productivity doesn't literally entail semantic compositionality for either English or Mentalese. Somebody sane could doubt that the argument from productivity to compositionality is conclusive.


The Systematicity Argument for Compositionality


‘Systematicity’ is a cover term for a cluster of properties that quite a variety of cognitive capacities exhibit, apparently as a matter of nomological necessity.54 Here are some typical examples. If a mind can grasp the thought that P → Q, it can grasp the thought that Q → P; if a mind can grasp the thought that P & Q, it can grasp the thought that P and the thought that Q; if a mind can grasp the thought that Mary loves John, it can grasp the thought that John loves Mary . . . etc. Whereas it's by no means obvious that a mind that can grasp the thought that P → Q can also grasp the thought that R → Q (not even if, for example, (P → Q) → (R → Q)). That will depend on whether it is the kind of mind that's able to grasp the thought that R. Correspondingly, a mind that can think Mary loves John and John loves Mary may none the less be unable to think Peter loves Mary. That will depend on whether it is able to think about Peter.


It seems pretty clear why the facts about systematicity fall out the way they do: mental representations are compositional, and compositionality explains systematicity.55 The reason that a capacity for John loves Mary


54



It’s been claimed that (at least some) facts about the systematicity of minds are conceptually necessary; ‘we wouldn’t call it thought if it weren’t systematic’ (see e.g. Clark 1991). I don't, in fact, know of any reason to believe this, nor do I care much whether it is so. If it's conceptually necessary that thoughts are systematic, then it's nomologically necessary that creatures like us have thoughts, and this latter necessity still wants explaining.


55


It's sometimes replied that compositionality doesn't explain systematicity since compositionality doesn't entail systematicity (e.g. Smolensky 1995). But that only shows that explanation doesn't entail entailment. Everybody sensible thinks that the theory of continental drift explains why (e.g.) South America fits so nicely into Africa. It does so, however, not by entailing that South America fits into Africa, but by providing a theoretical background in which the fact that they fit comes, as it were, as no surprise. Similarly, mutatis mutandis, for the explanation of systematicity by compositionality. Inferences from systematicity to compositionality are ‘arguments to the best explanation’, and are (of course) non-demonstrative; which is (of course) not at all the same as their being implausible or indecisive. Compare Cummins 1996, which appears to be confused about this.


thoughts implies a capacity for Mary loves John thoughts is that the two kinds of thoughts have the same constituents; correspondingly, the reason that a capacity for John loves Mary thoughts does not imply a capacity for Peter loves Mary thoughts is that they don't have the same constituents.
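Purely as an illustrative sketch (mine, not the book's), the following toy Python fragment shows how this pattern falls out of a constituent-structure picture: a repertoire containing the constituents of Mary loves John thereby contains what it takes to compose John loves Mary, but not Peter loves Mary. The function name graspable_thoughts and the triple encoding of thoughts are assumptions made up for the example.

from itertools import product

def graspable_thoughts(individuals, relations):
    # Every agent-relation-patient thought composable from the repertoire.
    return {(agent, relation, patient)
            for relation in relations
            for agent, patient in product(individuals, repeat=2)}

thoughts = graspable_thoughts({"JOHN", "MARY"}, {"LOVES"})
print(("MARY", "LOVES", "JOHN") in thoughts)   # True
print(("JOHN", "LOVES", "MARY") in thoughts)   # True: systematicity falls out
print(("PETER", "LOVES", "MARY") in thoughts)  # False: no PETER constituent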



Couldn’t it be that the very same concept that is expressed by a single word in English gets expressed by a phrase in Bantu, or vice versa? Notice, however, that this could happen only if the English word in question is definable; viz. definable in Bantu. Since it’s going to be part of my story that most words are undefinable—not just undefinable in the language that contains them, but undefinable tout court—I’m committed to claiming that this sort of case can’t arise (very often). The issue is, of course, empirical. So be it.

 


10


It’s important to distinguish the idea that definitions typically capture only the core meaning of a univocal expression from the idea that definitions typically capture only one sense of an ambiguous expression. The latter is unobjectionable because it is responsive to pretheoretic intuitions that are often pretty emphatic: surely ‘bank’ has more than one meaning. But who knows how many “aspects” the meaning of an unambiguous word has? A fortiori, who knows when a theory succeeds in capturing some but not all of them?


11


Examples of this tactic are legion in the literature. Consider the following, from Higginbotham 1994. “[T]he meanings of lexical items systematically infect grammar. For example ... it is a condition of object-preposing in derived nominal constructions in English that the object be in some sense ‘affected’ in the events over which the nominal ranges: that is why one has (1) but not (2)” (renumbered):


1. algebra’s discovery (by the Arabs)


2. *algebra's knowledge (by the Arabs).


Note that ‘in some sense’ is doing all the work. It is what distinguishes the striking claim that preposing is sensitive to the meanings of verbs from the rather less dramatic thought that you can prepose with some verbs (including ‘discover’) and not with others (including ‘know’). You may suppose you have some intuitive grasp of what ‘affecting’ amounts to here, but I think it's an illusion. Ask yourself how much algebra was affected by its discovery. More or less, would you say, than the light bulb was affected by Edison's inventing it?


12


Fodor and Lepore (forthcoming a) provides some independent evidence for the analysis proposed here. Suppose, however, that this horse won’t run, and the asymmetry Pinker points to really does show that ‘keep’ is polysemous. That would be no comfort to Jackendoff, since Jackendoff's account of the polysemy doesn't predict the asymmetry of entailments either: that J2 but not J3 belongs to the semantic field “possession” in Jackendoff's analysis is pure stipulation. But I won't stress this. Auntie says I should swear off ad hominems.


13


Auntie’s not the only one with this grumble; Hilary Putnam has recently voiced a generalized version of the same complaint. “[O]n Fodor’s theory . . . the meaning of . . . words is not determined, even in part, by the conceptual relations among the various notions I have mastered—e.g., between ‘minute’ and my other time concepts—but depends only on ‘nomic relations’ between the words (e.g. minute) and the corresponding universals (e.g. minutehood). These ‘universals’ are just word-shaped objects which Fodor’s metaphysics projects out into the world for the words to latch on to via mysterious ‘nomic relations’; the whole story is nothing but a ‘naturalistic’ version of the Museum Myth of Meaning” (1995: 79; italics and scare-quotes are Putnam’s). This does seem to me to be a little underspecified. Since Putnam provides no further exposition (and, endearingly, no arguments at all), I’m not sure whether I’m supposed to worry that there aren’t any universals, or only that there aren’t the universals that my semantics requires. But if Putnam thinks saying “‘takes a minute’ expresses the property of taking a minute” all by itself puts me in debt for a general refutation of nominalism, then he needs to have his methodology examined. Still, it’s right that informational semantics needs an ontology, and that the one it opts for had better not beg the questions that a semantic theory is supposed to answer. I’ll have a lot to say about all that in Chapters 6 and 7.


14


For an account of language acquisition in which the horse and cart are assigned the opposite configuration—syntax bootstraps semantics—see Gleitman 1990.



This seems as good an opportunity as any to say something about the current status of this line of thought. Of late, the productivity argument has come under two sorts of criticism that a cognitive scientist might find persuasive:



—The performance/competence argument. The claim that conceptual repertoires are typically productive requires not just an idealization to infinite cognitive capacity, but the kind of idealization that presupposes a memory/program distinction. This presupposition is, however, tendentious in the present polemical climate. No doubt, if your model for cognitive architecture is a Turing machine with a finite tape, it's quite natural to equate the concepts that a mind could entertain with the ones that its program could enumerate, assuming that the tape supply is extended arbitrarily. Because the Turing picture allows the size of the memory to vary while the program stays the same, it invites the idea that machines are individuated by their programs.


But this way of drawing a ‘performance/competence’ distinction seems considerably less natural if your model of cognitive architecture is (e.g.) a neural net. The natural model for ‘extending’ the memory of a network (and likewise, mutatis mutandis, for other finite automata) is to add new nodes. However, the idea of adding nodes to a network while preserving its identity is arguably dubious in a way that the idea of preserving the identity of a Turing machine while adding to its tape is arguably not.52 The problem is precisely that the memory/program distinction isn't available for networks. A network is individuated by the totality of its nodes, and the nodes are individuated by the totality of their connections, direct and indirect, to one another.53 In consequence, ‘adding’ a node to a network changes the identity of all the other nodes, and hence the identity


52



If the criterion of machine individuation is I(nput)/O(utput) equivalence, then a finite tape Turing machine is a finite automaton. This doesn’t, I think, show that the intuitions driving the discussion in the text are incoherent. Rather it shows (what’s anyhow independently plausible) that I/O equivalence isn’t what’s primarily at issue in discussions of cognitive architecture. (See Pylyshyn 1984.)


of the network itself. In this context, the idealization from a finite cognitive performance to a productive conceptual capacity may strike the theorist as begging precisely the architectural issues that he wants to stress.
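A toy contrast, offered only as my own illustration of the point just made and not as anything in the text: in a Turing-style setup the program is untouched when the tape grows, whereas in a network the connectivity profile of every existing node changes when a node is added. All the names and data in this Python sketch are invented for the example.

# Turing-style architecture: program and memory are separate, so the
# memory can grow while the machine (its program) stays the same.
program = {("q0", "1"): ("q0", "1", "R")}   # identity goes with the program
tape = ["1", "1"]
tape.extend(["_"] * 100)                    # more memory, same machine

# Network-style architecture: a node is individuated by its connections,
# so 'adding' a node changes every other node's connectivity profile.
def profile(node, edges):
    return frozenset(e for e in edges if node in e)

edges = {("a", "b"), ("b", "c")}
before = {n: profile(n, edges) for n in ("a", "b", "c")}
edges |= {("d", "a"), ("d", "b"), ("d", "c")}   # add node d
after = {n: profile(n, edges) for n in ("a", "b", "c")}
print(before["a"] == after["a"])   # False: a's profile, hence arguably its identity, has changed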


—The finite representation argument. If a finite creature has an infinite conceptual capacity, then, no doubt, the capacity must be finitely determined; that is, there must be a finite set of sufficient conditions, call it S, such that a creature has the capacity if S obtains. But it doesn't follow by any argument I can think of that satisfying S depends on the creature's representing the compositional structure of its conceptual repertoire; or even that the conceptual repertoire has a compositional structure. For all I know, for example, it may be that sufficient conditions for having an infinite conceptual capacity can be finitely specified in and only in the language of neurology, or of particle physics. And, presumably, notions like computational state and representation aren't accessible in these vocabularies. It's tempting to suppose that one has one's conceptual capacities in virtue of some act of intellection that one has performed. And then, if the capacity is infinite, it's hard to see what act of intellection that could be other than grasping the primitive basis of a system of representations; of Mentalese, in effect. But talk of grasping is tendentious in the present context. It's in the nature of intentional explanations of intentional capacities that they have to run out sooner or later. It's entirely plausible that explaining what determines one's conceptual capacities (figuratively, explaining one's mastery of Mentalese) is where they run out.

Jean-marc pizano

One needs to be sort of careful here. I'm not denying that Mentalese has a compositional semantics. In fact, I can't actually think of any other way to explain its productivity, and writing blank checks on neurology (or particle physics) strikes me as unedifying. But I do think we should reject the following argument: ‘Mentalese must have a compositional semantics because mastering Mentalese requires grasping its compositional semantics.’ It isn't obvious that mastering Mentalese requires grasping anything
