
The conditions for satisfying the latter are patently specifiable without reference to the former, viz. by enumerating the shapes, colours, functions, and the like that doorknobs typically have.

 


It's actually sort of remarkable that all of this is so. Pace Chapter 5, concepts really ought to be stereotypes. Not only because there's so much evidence that having a concept and having its stereotype are reliably closely correlated (and what better explanation of reliable close correlation could there be than identity?) but also because it is, as previously noted, generally stereotypic examples of X-ness that one learns X from. Whereas, what you'd expect people reliably to learn from stereotypic examples of X isn't


How much such experience? And under what conditions of acquisition? I assume that there are (lots of) empirical parameters that a formulation of the laws of concept acquisition would have to fill in. Doing so would be the proprietary goal of a serious psychology of cognitive development. Which, to quote a poet, "in our case we have not got".


the concept X but the X stereotype.84 A stereotypic X is always a better instance of the X stereotype than it is of X; that is a truism.85


Interesting Digression


The classic example of this sort of worry is the puzzle in psycholinguistics about 'Motherese'. It appears that mothers go out of their way to talk to children in stereotypic sentences of their native language; in the case of English, relatively short sentences with NVN structure (and/or Agent Action Object structure; see Chapter 3). The child is thereby provided with a good sample of stereotypic English sentences, from which, however, he extracts not (anyhow, not only) the concept STEREOTYPIC ENGLISH SENTENCE, but the concept ENGLISH SENTENCE TOUT COURT. But why on Earth does he do that? Why doesn't he instead come to believe that the grammar of English is S → NVN, or some fairly simple elaboration thereof, taking such apparent counter-examples as he may encounter as not well-formed? Remember, on the one hand, that Mother is following a strategy of screening him from utterances of unstereotypic sentences; and, on the other hand, that he'll hear lots of counter-examples to whatever grammar he tries out, since people say lots of ungrammatical things. I think the answer must be that it's a law about our kinds of minds that they are set up to make inductions from samples consisting largely of stereotypic English sentences to the concept ENGLISH SENTENCE (viz. the concept sentences satisfy in virtue of being well-formed relative to the grammar of English) and not from samples consisting largely of stereotypic English sentences to the concept STEREOTYPIC ENGLISH SENTENCE (viz. the concept sentences satisfy in virtue of being NVN).
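The underdetermination at issue can be put in miniature with a toy sketch. The sample sentences and both "grammars" below are invented stand-ins for illustration, not a model of real acquisition: a finite sample of stereotypic NVN sentences fits both hypotheses equally well, so the data alone cannot force the child's choice.

```python
# Toy illustration (invented sample and predicates): a sample consisting only
# of stereotypic NVN sentences cannot decide between the narrow hypothesis
# ("the grammar is S -> NVN") and the broad one ("English sentence tout court").
sample = ["dog bites man", "man feeds dog", "cat sees bird"]  # all NVN

def is_nvn(sentence):
    """The narrow hypothesis: exactly three words, Noun Verb Noun."""
    return len(sentence.split()) == 3

def is_english(sentence):
    """Stand-in for 'well-formed relative to the grammar of English';
    accepts NVN sentences and much else besides."""
    return len(sentence.split()) >= 3

# Both hypotheses fit the Motherese sample perfectly...
assert all(is_nvn(s) for s in sample)
assert all(is_english(s) for s in sample)

# ...yet they diverge on unheard types, which is where children reliably
# land on the broad hypothesis rather than the narrow one.
assert not is_nvn("the man who fed the dog left")
assert is_english("the man who fed the dog left")
```

The point of the sketch is only that the two hypotheses agree on the training sample and disagree elsewhere; whatever selects between them is not in the sample.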


In short, I do think there's good reason for cognitive scientists to be unhappy about the current status of theorizing about stereotypes. The kinds of worries about compositionality that Chapter 5 reviewed show that the relation a stereotype bears to the corresponding concept can't be constitutive. The standard alternative proposal is that it is simply heuristic; e.g. that stereotypes are databases for fast recognition procedures. But this seems not to account for the ubiquity and robustness of stereotype phenomena; and, anyhow, it begs the sort of question that we just discussed: why is it the concept X rather than the concept STEREOTYPIC X that one normally gets from experience with stereotypic Xs?32, 33 (Mutatis mutandis, if the way perception works is that you subsume things under


DOORKNOB by seeing that they are similar to stereotypic doorknobs, why is it that you generally see a doorknob as a doorknob, and not as something that satisfies the doorknob stereotype?) If our minds are, in effect, functions from stereotypes to concepts, that is a fact about us. Indeed, it is a very deep fact about us. My point in going on about this is to emphasize the untriviality of the consideration that we typically get a concept from instances that exemplify its stereotype.


That a concept has the stereotype that it does is never truistic; and that a stereotype belongs to the concept that it does is never truistic either. In particular, since the relation between a concept and its stereotype is always contingent, no circularity arises from defining 'the concept X' by reference to 'the stereotype of the concept X'.



So, suppose that the prototype for NURSE includes the feature female. Pace Smith and Osherson's kind of proposal, you can't derive the prototype for MALE NURSE just by replacing female with male; all sorts of other things have to change too. This is true even though the concept MALE NURSE is 'intersective'; i.e. even though the set of male nurses is the overlap of the set of males with the set of nurses (just as the set of pet fish is the overlap of the set of pets with the set of fish). I want to stress this point because prototype theorists, in their desperation, are sometimes driven to suggest that MALE NURSE, PET FISH, and the like aren't compositional after all, but it's all right that they aren't, since they are idioms. But surely, surely, not. What could be stronger evidence against PET FISH being an idiom or for its being compositional than that it entails PET and FISH and that PET, FISH entails it?

 


It's perhaps worth mentioning the most recent attempt to salvage the compositionality of prototypes from pet fish, male nurses, striped apples, and the like (Kamp and Partee 1995). The idea goes like this: maybe good examples of striped apples aren't good examples of striped things tout court (compare zebras). But, plausibly, a prototypic example of a striped apple would ipso facto be as good an example of something striped as an apple can be. That is a way of saying that the relevant comparison class for judging the typicality of a sample of apple stripes is not the stripes on things at large but rather the stripes on other apples; it's these that typical apple stripes are typical of. In effect, then, what you need to do to predict whether a certain example of apple stripes is a good example of apple stripes is to "recalibrate" STRIPES to apples.
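The recalibration idea can be sketched in a few lines. This is a toy construction of my own, not Kamp and Partee's actual algebra, and the stripe-width numbers are invented; the point is just that typicality is judged against the statistics of a reference class, so the very same sample scores differently depending on which class you calibrate to.

```python
# Illustrative sketch (not Kamp and Partee's actual algebra): typicality of a
# feature value is judged against a reference class, not against things at large.
from statistics import mean, pstdev

def typicality(value, reference_values):
    """Higher when `value` is close to the reference-class mean,
    measured in units of that class's spread."""
    mu = mean(reference_values)
    sigma = pstdev(reference_values) or 1.0  # avoid division by zero
    return 1.0 / (1.0 + abs(value - mu) / sigma)

# Hypothetical stripe widths: striped things at large vs. striped apples.
stripes_at_large = [5.0, 6.0, 7.0, 8.0, 30.0, 40.0]
stripes_on_apples = [5.0, 5.5, 6.0, 6.5]

sample = 5.8  # a given example of apple stripes
# "Recalibrated" typicality: the same sample, a different reference class.
assert typicality(sample, stripes_on_apples) > typicality(sample, stripes_at_large)
```

Note that the reference-class data (`stripes_on_apples`) is an empirical input: it comes from the world, not from the content of the constituent concepts. That is the hinge of the objection that follows.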


A fair amount of algebra has recently been thrown at the problem of how, given the appropriate information about a reference set, one might calculate the typicality of one of its members (for discussion, see Kamp and Partee 1995; Osherson and Smith 1996). But, as far as I can see, the undertaking is pointless. For one thing, it bears emphasis that the appropriate information for recalibrating a complex concept comes from the world, not from the content of its constituents. If it happens that they paint fire engines in funny shades of red, then typical fire-engine red won't be typical red. To decide whether the colour of a certain engine is typical, you'd therefore need to recalibrate RED to FIRE ENGINE; and to do that, you'd need to know the facts about what shades of red fire engines are painted. Nothing about the concepts RED or FIRE ENGINE, per se, could tell you this; so nothing about these concepts, per se, could predict the typicality of a given sample of fire-engine red. In this sense, "recalibrated" compositionality, even if we knew how to compute it, wouldn't really be compositionality. Compositionality is the derivation of the content of a complex concept just from its structure and the content of its constituents; that's why compositionality explains productivity and systematicity.


Still worse, if possible: identifying the relevant reference set for a complex concept itself depends on a prior grasp of its compositional structure. In the case of STRIPED APPLE, for example, the reference set for the recalibration of STRIPE is the striped apples. How do we know that? Because we know that what STRIPED APPLE applies to is the intersection of the striped things and the apple things. And how do we know that? Because we know the compositional semantics of STRIPED APPLE. Computing typicality for a complex concept by "recalibrating" its constituents thus presupposes semantic compositionality; it presupposes that we already know how the content of the concept depends on the content of the concept's constituents. So, recalibration couldn't be what makes concepts compositional, so it couldn't be what makes them systematic and productive. So what is recalibration for? Search me.


By the way, these pet fish sorts of arguments ramify in ways that may not be immediately apparent; compositionality is a sharp sword and cutteth many knots.59 For example, it's very popular in philosophical circles (it's the last gasp of Empiricist semantics) to suppose that there are such things as 'recognitional concepts'; RED and SQUARE, for example, and likewise, I suppose, DOG and TREE, and many, many others.




cognitivist according to this criterion, and wouldn't be even if (by accident) the concept DOORKNOB happened to be triggered by doorknobs.) Well, by this criterion, my story isn't cognitivist either. My story says that what doorknobs have in common qua doorknobs is being the kind of thing that our kind of minds (do or would) lock to from experience with instances of the doorknob stereotype. (Cf. to be red just is to have that property that minds like ours (do or would) lock to in virtue of experiences of typical instances of redness.) Why isn't that OK?82


If you put that account of the metaphysics of doorknobhood together with the metaphysical account of concept possession that informational semantics proposes—having a concept is something like "resonating to" the property that the concept expresses—then you get: being a doorknob is having that property that minds like ours come to resonate to in consequence of relevant experience with stereotypic doorknobs. That, and not being learned inductively, is what explains the content relation between DOORKNOB and the kinds of experience that typically mediate its acquisition. It also explains how doorknobhood could seem to be undefinable and unanalysable without being metaphysically ultimate. And it also explains how DOORKNOB could be both psychologically primitive and not innate, the Standard Argument to the contrary notwithstanding.


Several points in a spirit of expatiation:


The basic idea is that what makes something a doorknob is just: being the kind of thing from experience with which our kind of mind readily acquires the concept DOORKNOB. And, conversely, what makes something the concept DOORKNOB is just: expressing the property that our kinds of minds lock to from experience with good examples of instantiated doorknobhood. But this way of putting the suggestion is too weak, since experience with stereotypic doorknobs might cause one to lock to any of a whole lot of properties (or to none), depending on what else is going on at the time. (In some contexts it might cause one to lock to the property belongs to Jones.) Whereas, what I want to say is that doorknobhood is the property that one gets locked to when experience with typical doorknobs causes the locking and does so in virtue of the properties they have qua typical doorknobs. We have the kinds of minds that often


Modal footnote (NB): Here as elsewhere through the present discussion, 'minds like ours' and 'the (stereo)typical properties of doorknobs' are to be read rigidly, viz. as denoting the properties that instances of stereotypic doorknobs and typical minds have in this world. That the typical properties of minds and doorknobs are what they are is meant to be contingent.


acquire the concept X from experiences whose intentional objects are properties belonging to the X-stereotype.8


Notice that this is not a truism, and that it's not circular; it's contingently true if it's true at all. What makes it contingent is that being a doorknob is neither necessary nor sufficient for something to have the stereotypic doorknob properties (not even in 'normal circumstances', in any sense of 'normal circumstances' I can think of that doesn't beg the question). Stereotype is a statistical notion. The only theoretically interesting connection between being a doorknob and satisfying the doorknob stereotype is that, contingently, things that do either often do both.


In fact, since the relation between instantiating the doorknob stereotype and being a doorknob is patently contingent, you might want to buy into the present account of DOORKNOB even if you don't like the Lockean story about RED. The classical problem with the latter is that it takes for granted an unexplicated notion of 'looks red' ('red experience', 'red sense datum', or whatever) and is thus in some danger of circularity, since "the expression 'looks red' is not semantically unstructured. Its sense is determined by that of its constituents. If one does not understand those constituents, one does not fully understand the compound" (Peacocke 1992: 408). Well, maybe this kind of objection shows that an account of being red mustn't presuppose the property of looking red (though Peacocke doubts that it shows that, and so do I). In any event, no parallel argument could show that an account of being a doorknob mustn't presuppose the property of satisfying the doorknob stereotype.




I don't, however, think this comes to anything much. In the first place, it's not true (in any unquestion-begging sense) that "virtually any learning algorithm [acts] differently depending on whether it hears X or doesn't hear X". To the contrary, it's a way of putting the productivity problem that the learning algorithm must somehow converge on treating infinitely many unheard types in the same way that it treats finitely many of the heard types (viz. as grammatical) and finitely many heard types in the same way that it treats a different infinity of the unheard ones (viz. as ungrammatical). To that extent, the algorithm must not assume that either being heard or not being heard is a projectible property of the types.


On the other hand, every treatment of learning that depends on the feedback of evidence at all (whether it supposes the evidence to be direct or indirect, negative or positive, or all four) must "be several layers removed from the input, looking at broad statistical patterns across the lexicon"; otherwise the presumed feedback won't generalize. It follows that, on anybody's account, the negative information that the environment provides can't be "the nonoccurrence of particular sentences" (my emphasis); it's got to be the non-occurrence of certain kinds of sentences.


This much is common ground to any learning theory that accounts for the productivity of what is learned.


Where we’ve gotten to now: probably there isn’t a Baker’s Paradox about lexical syntax; you’d need ‘no overgeneralization’ to get one, and ‘no overgeneralization’ is apparently false of the lexicon. Even if, however, there were a Baker’s Paradox about the lexicon, that would show that the hypotheses that the child considers when he makes his lexical inductions must be tightly endogenously constrained. But it wouldn’t show, or even suggest, that they are hypotheses about semantic properties of lexical items. No more than the existence of a bona fide Baker’s Paradox for sentential syntax—which it does seem that children hardly ever overgeneralize—shows, or even suggests, that it’s in terms of the semantic properties of sentences that the child’s hypotheses about their syntax are defined.


So much for Pinker’s two attempts at ontogenetic vindications of lexical semantics. Though neither seems to work at all, I should emphasize a difference between them: whereas the ‘Baker’s Paradox’ argument dissolves upon examination, there’s nothing wrong with the form of the bootstrapping argument. For all that I’ve said, it could still be true that lexical syntax is bootstrapped from lexical semantics. Making a convincing case that it is would require, at a minimum, identifying the straps that the child tugs and showing that they are bona fide semantic; specifically, it would require showing that the lexical properties over which the child generalizes are typically among the ones that semantic-level lexical representations specify. In principle, we could get a respectable argument of that form tomorrow; it’s just that, so far, there aren’t any. So too, in my view, with the other ‘empirical’ or ‘linguistic’ arguments for lexical decomposition; all that’s wrong with them is that they aren’t sound.


Oh, well, so be it. Let’s go see what the philosophers have.


4 The Demise of Definitions, Part II: The Philosopher's Tale


[A] little thing affects them. A slight disorder of the stomach makes them cheats. [They] may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of underdone potato.


—Scrooge


It's a sad truth about definitions that even their warm admirers rarely loved them for themselves alone. Cognitive scientists (other than linguists; see Chapter 3) cared about definitions because they offered a handy construal of the thesis that many concepts are complex; viz. the concepts in a definition are the constituents of the concept it defines. And cognitive scientists liked many concepts being complex because then many concepts could be learned by assembling them from their parts. And cognitive scientists liked many concepts being learned by assembling them from their parts because then only the primitive parts have to be unlearned. We'll see, in later chapters, how qualmlessly most of cognitive science dropped definitions when it realized that it could have complex concepts without them.


To simplify the exposition, I’ll use this notion pretty informally; for example, I’m glossing over the distinction between Boolean sentences and Boolean predicates. But none of this corner-cutting is essential to the argument.


This is not to deny that there are typicality effects for negative categories; as Barsalou remarks, "with respect to birds, chair is a better nonmember than is butterfly" (1987: 101). This observation does not, however, generalize to Boolean functions at large. I doubt that there are more and less typical examples of if it's a chair, then it's a Windsor or of chair or butterfly.


The moral seems clear enough: the mental representations that correspond to complex Boolean concepts specify not their prototypes but their logical forms. So, for example, NOT A CAT has the logical form not (F), and the rule of interpretation for a mental representation of that form assigns as its extension the complement of the set of Fs. To admit this, however, is to abandon the project of using prototype structure to account for the productivity (/systematicity) of complex Boolean predicates. So be it.
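The point about logical form can be made concrete in a few lines. This is a minimal sketch with an invented toy domain: the interpretation of not (F) is fixed entirely by the complement rule, with no appeal to a prototype of non-cathood.

```python
# Minimal sketch (toy domain): the rule of interpretation for not(F) assigns
# the complement of F's extension; no prototype is consulted.
domain = {"tabby", "siamese", "beagle", "sparrow", "chair"}
cats = {"tabby", "siamese"}

def interpret_not(extension, domain):
    """Interpretation rule for the logical form not(F): domain minus F."""
    return domain - extension

not_a_cat = interpret_not(cats, domain)
assert not_a_cat == {"beagle", "sparrow", "chair"}
```

Nothing in the rule ranks a chair as a "better" non-cat than a sparrow; typicality simply does not figure in the semantics of the complex form.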


(II) The Pet Fish Problem


Prototype theories want to explicate notions like falling under a concept by reference to notions like being similar to the concept's exemplar. Correspondingly, prototype theories can represent conceptual repertoires as compositional only if (barring idioms) a thing's similarity to the exemplar of a complex concept is determined by its similarity to the exemplars of its constituents. However, this condition is not satisfied in the general case. So, for example, a goldfish is a poorish example of a fish, and a poorish example of a pet, but it's a prototypical example of a pet fish. So similarity to the prototypic pet and the prototypic fish doesn't predict similarity to the prototypical pet fish. It follows that if meanings were prototypes, then you could know what 'pet' means and know what 'fish' means and still not know what 'pet fish' means. Which is just to say that if meanings were prototypes, then the meaning of 'pet fish' wouldn't be compositional. Various solutions for this problem are on offer in the literature, but it seems to me that none is even close to satisfactory. Let's have a quick look at one or two.


Smith and Osherson (1984) take prototypes to be matrices of weighted features (rather than exemplars). So, for example, the prototype for APPLE might specify a typical shape, colour, taste, size, ripeness, . . . etc. Let's suppose, in particular, that the prototypical apple is red, and consider the problem of constructing a prototype for PURPLE APPLE. The basic idea is to form a derived feature matrix that's just like the one for APPLE, except that the feature purple replaces the feature red and the weight of the new colour feature is appropriately increased. PET FISH would presumably work the same way.


It's pretty clear, however, that this treatment is flawed. To see this, ask yourself how much the feature purple weighs in the feature matrix for PURPLE APPLE. Clearly, it must weigh more than the feature red does in the matrix for APPLE since, though there can be apples that aren't red, there can't be purple apples that aren't purple; any more than there can be red apples that aren't red, or purple apples that aren't apples. In effect, purple has to weigh infinitely much in the feature matrix for PURPLE APPLE because 'purple apples are purple', unlike 'typical apples are red', is a logical truth.
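Here is a toy rendering of the derived-matrix idea, my own construction for illustration rather than Smith and Osherson's actual formalism (the features and weights are invented). It shows the mechanical feature swap the proposal calls for, and the comments flag why no finite choice of weight does the required work.

```python
# Toy sketch (invented features/weights; not Smith and Osherson's formalism):
# compose PURPLE APPLE from APPLE by swapping the colour feature.
apple = {
    "shape": ("round", 0.8),
    "colour": ("red", 0.7),
    "taste": ("sweet", 0.6),
}

def modify_colour(matrix, new_colour, new_weight):
    """Derived matrix: identical to `matrix` except for the colour feature."""
    derived = dict(matrix)
    derived["colour"] = (new_colour, new_weight)
    return derived

# But what should new_weight be? Any finite value treats "purple apples are
# purple" as merely statistically reliable, on a par with "typical apples are
# red" -- whereas the former is a logical truth. No finite weight marks that
# difference, which is the dilemma discussed in the text.
purple_apple = modify_colour(apple, "purple", 0.95)
assert purple_apple["colour"] == ("purple", 0.95)
assert apple["colour"] == ("red", 0.7)  # the base matrix is unchanged
```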


So the Smith/Osherson proposal for composing prototypes faces a dilemma: either treat the logical truths as (merely) extreme cases of statistically reliable truths, or admit that the weights assigned to the features in derived matrices aren't compositional even if the matrices themselves are. Neither horn of this dilemma seems happy. Moreover, it's pretty clear what's gone wrong: what really sets the weight of the purple in PURPLE APPLE isn't the concept's prototype; it's the concept's logical form. But prototypes don't have logical forms.


Another way to put the pet fish problem is that the 'features' associated with the As in AN constructions are not, in the general case, independent of the features associated with the Ns.



Suppose that we want the following to be a prototypical case where you and I have different but similar concepts of George Washington: though we agree about his having been the first American President, and the Father of His Country, and his having cut down a cherry tree, and so on, you think that he wore false teeth and I think that he didn't. The similarity of our GW concepts is thus some (presumably weighted) function of the number of propositions about him that we both believe, and the dissimilarity of our GW concepts is correspondingly a function of the number of such propositions that we disagree about. So far, so good.

 


But the question now arises: what about the shared beliefs themselves; are they or aren't they literally shared? This poses a dilemma for the similarity theorist that is, as far as I can see, unavoidable. If he says that our agreed-upon beliefs about GW are literally shared, then he hasn't managed to do what he promised; viz. introduce a notion of similarity of content that dispenses with a robust notion of publicity. But if he says


that the agreed beliefs aren't literally shared (viz. that they are only required to be similar), then his account of content similarity begs the very question it was supposed to answer: his way of saying what it is for concepts to have similar but not identical contents presupposes a prior notion of beliefs with similar but not identical contents.


The trouble, in a nutshell, is that all the obvious construals of similarity of beliefs (in fact, all the construals that I've heard of) take it to involve partial overlap of beliefs.22 But this treatment breaks down if the beliefs that are in the overlap are themselves construed as similar but not identical. It looks as though a robust notion of content similarity can't but presuppose a correspondingly robust notion of content identity. Notice that this situation is not symmetrical; the notion of content identity doesn't require a prior notion of content similarity. Leibniz's Law tells us what it is for the contents of concepts to be identical; Leibniz's Law tells us what it is for any things to be identical.


As I remarked above, different theorists find different rugs to sweep this problem under; but, as far as I can tell, none of them manages to avoid it. I propose to harp on this a bit because confusion about it is rife, not just in philosophy but in the cognitive science community at large. Not getting it straight is one of the main things that obscures how very hard it is to construct a theory of concepts that works, and how very much cognitive science has thus far failed to do so.


Suppose, for example, it's assumed that your concept PRESIDENT is similar to my concept PRESIDENT in so far as we assign similar subjective probabilities to propositions that contain the concept. There are plenty of reasons for rejecting this sort of model; we'll discuss its main problems in Chapter 5. Our present concern is only whether constructing a probabilistic account of concept similarity would be a way to avoid having to postulate a robust notion of content identity.


Perhaps, in a typical case, you and I agree that p is very high for ‘FDR is/was President’ and for ‘The President is the Commander-in-Chief of the Armed Forces’ and for ‘Presidents have to be of voting age’, etc.; but, whereas you rate ‘Millard Fillmore is/was President’ as having a probability close to 1, I, being less well informed, take it to be around p = 0.07 (Millard Fillmore???). This gives us an (arguably) workable construal of the idea that we have similar but not identical PRESIDENT concepts. But it does so only by helping itself to a prior notion of belief identity, and to the assumption that there are lots of thoughts of which

Jean-marc pizano

'Why not take content similarity as primitive and stop trying to construe it?' Sure; but then why not take content identity as primitive and stop trying to construe it? In which case, what is semantics for?


our respective PRESIDENTS are constituents that we literally share. Thus, you and I are, by assumption, both belief-related to the thoughts that Millard Fillmore was President, that Presidents are Commanders-in-Chief, etc.
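For concreteness, here is a toy version of the probabilistic construal (the numbers and the particular measure are invented for illustration; this is not anyone's actual proposal). Notice where the measure quietly helps itself to content identity: it is computed over propositions that both parties literally share.

```python
# Toy sketch (invented numbers and measure): concept similarity as agreement
# in subjective probabilities over propositions containing the concept.
you = {
    "FDR was President": 0.99,
    "The President is Commander-in-Chief": 0.98,
    "Millard Fillmore was President": 0.97,
}
me = {
    "FDR was President": 0.99,
    "The President is Commander-in-Chief": 0.97,
    "Millard Fillmore was President": 0.07,  # less well informed
}

def similarity(a, b):
    """Mean agreement in probability over shared propositions. The key step:
    `a.keys() & b.keys()` presupposes that the shared beliefs are IDENTICAL,
    which is the robust notion of content identity the construal was
    supposed to dispense with."""
    shared = a.keys() & b.keys()
    return 1.0 - sum(abs(a[p] - b[p]) for p in shared) / len(shared)

s = similarity(you, me)
assert 0.0 < s < 1.0  # similar but not identical PRESIDENT concepts
```

If instead the "shared" propositions were themselves only similar, there would be no principled way to line them up for comparison, which is the regress the text describes.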



J. J. C. Smart who, it seems to me, got more of this right than he is these days given credit for: "This account of secondary qualities explains their unimportance in physics. For obviously the discriminations . . . made by a very complex neurophysiological mechanism are hardly likely to correspond to simple and nonarbitrary distinctions in nature" (1991: 172). My point is: this is true not just of colours, but of doorknobs too.

 


If ‘doorknob’ has a nominal definition, then it ought to be possible for a competent linguist or analytical philosopher to figure out what its nominal definition is. If ‘doorknob’ has a real definition, then it ought to be possible for a science of doorknobs to uncover it. But linguists and philosophers have had no luck defining ‘doorknob’ (or, as we've seen, anything much else). And there is nothing for a science of doorknobs to find out. The direction this is leading in is that if ‘doorknob’ is undefinable, that must be because being a doorknob is a primitive property. But, of course, that's crazy. If a thing has doorknobhood, it does so entirely in virtue of others of the properties it has. If doorknobs don't have hidden essences or real definitions, that can't possibly be because being a doorknob is one of those properties that things have simply because they have them; ultimates like spin, charm, charge, or the like, at which explanation ends.

 


So, here's the riddle. How could ‘doorknob’ be undefinable (contrast ‘bachelor’ =df ‘unmarried man’) and lack a hidden essence (contrast water = H2O) without being metaphysically primitive (contrast spin, charm, and charge)?


The answer (I think) is that ‘doorknob’ works like ‘red’.


Now I suppose you want to know how ‘red’ works.


Well, ‘red’ hasn't got a nominal definition, and redness doesn't have a real essence (ask any psychophysicist), and, of course, redness isn't metaphysically ultimate. This is all OK because redness is an appearance property, and the point about appearance properties is that they don't raise the question that definitions, real and nominal, propose to answer: viz. ‘What is it that the things we take to be Xs have in common, over and above our taking them to be Xs?’ This is, to put it mildly, not a particularly original thing to say about red. All that's new is the proposal to extend this sort of analysis to doorknobs and the like; the proposal is that there are lots of appearance concepts that aren't sensory concepts.30 That this should be so is, perhaps, unsurprising on reflection. There is no obvious reason why a property that is constituted by the mental states that things that have it evoke in us must ipso facto be constituted by the sensory states that things that have it evoke in us.

Jean-marc pizano

All right, all right; you can't believe that something's being a doorknob is "about us" in anything like the way that maybe something's being red is. Surely ‘doorknob’ expresses a property that a thing either has or doesn't, regardless of our views; as it were, a property of things in themselves? So be it, but which property? Consider the alternatives (here we go again): is it that ‘doorknob’ is definable? If so, what's the definition? (And, even if ‘doorknob’ is definable, some concepts have to be primitive, so the present sorts of issues will eventually have to be faced about them.) Is it that doorknobs qua doorknobs have a hidden essence? Hidden where, do you suppose? And who is in charge of finding it? Is it that being a doorknob is ontologically ultimate? You've got to be kidding.31


If you take it seriously that DOORKNOB hasn't got a conceptual analysis, and that doorknobs don't have hidden essences, all that's left to make something a doorknob (anyhow, all that's left that I can think of) is how it strikes us. But if being a doorknob is a property that's constituted by how things strike us, then the intrinsic connection between the content of DOORKNOB and the content of our doorknob-experiences is metaphysically necessary, hence not a fact that a cognitivist theory of concept acquisition is required in order to explain.


To be sure, there remains something about the acquisition of DOORKNOB that does want explaining: viz. why it is the property that these guys (several doorknobs) share, and not the property that those guys (several cows) share, that we lock to from experience of good (e.g. stereotypic) examples of doorknobs. And, equally certainly, it's got to be something about our kinds of minds that this explanation adverts to. But, I'm supposing, such an explanation is cognitivist only if it turns on the evidential relation between having the stereotypic doorknob properties and being a doorknob. (So, for example, triggering explanations aren't


30


So, then, which appearance properties are sensory properties? Here’s a line that one might consider: P is a sensory property only if it is possible to have an experience of which P-ness is the intentional object (e.g. an experience (as) of red) even though one hasn't got the concept P. Here the test of having the concept P would be something like being able to think thoughts whose truth conditions include . . . P . . . (e.g. thoughts like that's red). I think this must be the notion of ‘sensory property’ that underlies the Empiricist idea that RED and the like are learned ‘by abstraction’ from experience, a doctrine which presupposes that a mind that lacks RED can none the less have experiences (as) of redness. By this test, DOORKNOB is presumably not a sensory concept since, though it is perfectly possible to have an experience (as) of doorknobs, I suppose only a mind that has the concept DOORKNOB can do so. ‘But how could one have an experience (as) of red if one hasn't got the concept RED?’ It's easy: in the case of redness, but not of doorknobhood, one is equipped with sensory organs which produce such experiences when they are appropriately stimulated. Redness can be sensed, whereas the perceptual detection of doorknobhood is always inferential. Just as sensible psychologists have always supposed.


31


The present discussion parallels what I regard as a very deep passage in Schiffer 1987 about being a dog. Schiffer takes for granted that ‘dog’ doesn't name a species, and (hence?) that dogs as such don't have a hidden essence. His conclusion is that there just isn't (except pleonastically) any such property as being a dog. My diagnosis is that there is too, but it's mind-dependent.


32


Reminder: ‘the X stereotype’ is rigid. See n. 12 above.


33


Except in the (presumably never encountered) case where all the Xs are stereotypic. In that case, there's a dead heat.


34


In principle, they are also epistemically independent in both directions. As things are now, we find out about the stereotype by doing tests on subjects who are independently identified as having the corresponding concept. But I assume that if we knew enough about the mind/brain, we could predict a concept from its stereotype and vice versa. In effect, given the infinite set of actual and possible doorknobs, we could predict the stereotype from which our sorts of minds would generalize to it; and given the doorknob stereotype, we could predict the set of actual and possible objects which our kinds of minds would take to instantiate doorknobhood.


35


Compare Jackendoff: “Look at the representations of, say, generative phonology . . . It is strange to say that English speakers know the proposition, true in the world independent of speakers [sic], that syllable-initial voiceless consonants aspirate before stress . . . In generative phonology . . . this rule of aspiration is regarded as a principle of internal computation, not a fact about the world. Such semantical concepts as implication, confirmation, and logical consequence seem curiously irrelevant” (1992: 29). Note that, though they are confounded in his text, the contrast that Jackendoff is insisting on isn't between propositions and rules/principles of computation; it's between phenomena of the kind that generative phonology studies and facts about the world. But that ‘p’ is aspirated in ‘Patrick’ is a fact about the world. That is to say: it's a fact. And of course the usual logico-semantical concepts apply. That ‘p’ is aspirated in ‘Patrick’ is what makes the claim that ‘p’ is aspirated in ‘Patrick’ true; since ‘p’ is aspirated in ‘Patrick’, something in ‘Patrick’ is aspirated . . . and so forth.


36


In just this spirit, Keith Campbell remarks about colours that if they are “integrated reflectances across three overlapping segments clustered in the middle of the total electromagnetic spectrum, then they are, from the inanimate point of view, such highly arbitrary and idiosyncratic properties that it is no wonder the particular colors we are familiar with are manifest only in transactions with humans, rhesus monkeys, and machines especially built to replicate just their particular mode of sensitivity to photons” (1990: 572-3). (The force of this observation is all the greater if, as seems likely, even the reflectance theory underestimates the complexity of colour psychophysics.) See also J. J.




Names, by contrast, succeed in their job because they aren't compositional; not even when they are syntactically complex. Consider ‘the Iron Duke’, to which ‘Iron’ does not contribute iron, and which you can therefore use to specify the Iron Duke even if you don't know what he was made of. Names are nicer than descriptions because you don't have to know much to specify their bearers, although you do have to know what their bearers are called. Descriptions are nicer than names because, although you do have to know a lot to specify their bearers, you don't have to know what their bearers are called. What's nicer than having the use of either names or descriptions is having the use of both. I agree that, as a piece of semantic theory, this is all entirely banal; but that's my point, so don't complain. There is, to repeat, no need for fancy arguments that the representational systems we talk and think in are in large part compositional; you find the effects of their compositionality just about wherever you look.


I must apologize for having gone on at such length about the arguments pro and con conceptual compositionality; the reason I've done so is that, in my view, the status of the statistical theory of concepts turns, practically entirely, on this issue. And statistical theories are now the preferred accounts of concepts practically throughout cognitive science. In what follows I will take the compositionality of conceptual repertoires for granted, and try to make clear how the thesis that concepts are prototypes falls afoul of it.


Why Concepts Can't Be Prototypes


Here's why concepts can't be prototypes: whatever conceptual content is, compositionality requires that complex concepts inherit their contents from those of their constituents, and that they do so in a way that explains their productivity and systematicity. Accordingly, whatever is not inherited from its constituents by a complex concept is ipso facto not the content of that concept. But: (i) indefinitely many complex concepts have no prototypes; a fortiori they do not inherit their prototypes from their constituents. And, (ii) there are indefinitely many complex concepts whose prototypes aren't related to the prototypes of their constituents in the ways that the compositional explanation of productivity and systematicity requires. So, again, if concepts are compositional then they can't be prototypes.


In short, prototypes don't compose. Since this is the heart of the case against statistical theories of concepts, I propose to expatiate a bit on the examples.


(I) The Uncat Problem


For indefinitely many “Boolean” concepts,57 there isn't any prototype even though:

—their primitive constituent concepts all have prototypes,


and


—the complex concept itself has definite conditions of semantic evaluation (definite satisfaction conditions).


So, for example, consider the concept NOT A CAT (mutatis mutandis, the predicate ‘is not a cat’); and let's suppose (probably contrary to fact) that CAT isn't vague; i.e. that ‘is a cat’ has either the value S or the value U for every object in the relevant universe of discourse. Then, clearly, there is a definite semantic interpretation for NOT A CAT; i.e. it expresses the property of not being a cat, a property which all and only objects in the extension of the complement of the set of cats instantiate.
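The point that NOT A CAT is semantically well behaved can be made concrete with a toy sketch (the universe of discourse and its members are invented for illustration): once a set of cats is stipulated and CAT is assumed not to be vague, the complement concept's satisfaction conditions are computed without remainder from those of CAT, and every object gets a definite value.

```python
# Illustrative only: a stipulated toy universe of discourse.
universe = {"Tabby", "Felix", "a bagel", "an eraser", "Tuesday"}
cats = {"Tabby", "Felix"}  # assume, with the text, that CAT isn't vague

def is_cat(x):
    return x in cats

def is_not_a_cat(x):
    # NOT A CAT is satisfied by exactly the complement of the set of cats.
    return x in universe and not is_cat(x)

# Every object in the universe gets a definite value; no borderline cases.
not_cats = {x for x in universe if is_not_a_cat(x)}
```

Definite satisfaction conditions, then, come for free from the Boolean construction; as the next paragraph argues, a prototype does not.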


However, although NOT A CAT is semantically entirely well behaved on these assumptions, it's pretty clear that it hasn't got a stereotype or an exemplar. For consider: a bagel is a pretty good example of a NOT A CAT, but a bagel couldn't be NOT A CAT's prototype. Why not? Well, if bagels are the prototypic NOT A CATs, it follows that the more a thing is like a bagel the less it's like a cat; and the more a thing isn't like a cat, the more it's like a bagel. But the second conjunct is patently not true. Tuesdays and erasers, both of which are very good examples of NOT A CATs, aren't at all like bagels. An eraser is not more a bagel for being a bad cat. Notice that the same sort of argument goes through if you are thinking of stereotypes in terms of features rather than exemplars. There is nothing that non-cats qua non-cats are likely to have in common (except, of course, not being cats).58


“[take] the burden of explaining learning out of the environmental input and [put] it back into the child” (1989: 14-15). Only if the child does not overgeneralize lexical categories is there evidence for his “differentiating [them] a priori” (ibid.: 44, my emphasis); viz. prior to environmentally provided information.

 


Pinker's argument is therefore straightforwardly missing a premiss. The logical slip seems egregious, but Pinker really does make it, as far as I can tell. Consider:


[Since there is empirical evidence against the child's having negative information, and there is empirical evidence for the child's rules being productive,] the only way out of Baker's Paradox that's left is . . . rejecting arbitrariness. Perhaps the verbs that do or don't participate in these alterations do not belong to arbitrary lists after all . . . [Perhaps, in particular, these classes are specifiable by reference to semantic criteria.] . . . If learners could acquire and enforce criteria delineating the[se] . . . classes of verbs, they could productively generalize an alternation to verbs that meet the criteria without overgeneralizing it to those that do not. (ibid.: 30)


Precisely so. If, as Pinker's theory claims, the lexical facts are non-arbitrary and children are sensitive to their non-arbitrariness, then the right prediction is that children don't overgeneralize the lexical rules.


Which, however, by practically everybody's testimony, including Pinker's, children reliably do. On Pinker's own account, children aren't “conservative” in respect of the lexicon (see 1989: 19-26, sec. 1.4.4.1 for lots and lots of cases).38 This being so, there's got to be something wrong with the theory that the child's hypotheses “differentiate” lexical classes a priori. A priori constraints would mean that false hypotheses don't even get tried. Overgeneralization, by contrast, means that false hypotheses do get tried but are somehow expunged (presumably by some sort of information that the environment supplies).


At one point, Pinker almost ’fesses up to this. The heart of his strategy for lexical learning is that “if the verbs that occur in both forms have some [e.g. semantic] property . . . that is missing in the verbs that occur [in the input data] in only one form, bifurcate the verbs . . . so as to expunge nonwitnessed verb forms generated by the earlier unconstrained version of the rule if they violate the newly learned constraint” (1989: 52). Pinker admits that this may “appear to be using a kind of indirect negative evidence: it is sensitive to the nonoccurrence of certain kinds of verbs”. To be sure; it sounds an awful lot like saying that there is no Baker's Paradox for the learning of verb structure, hence no argument for a priori semantic constraints on the child's hypotheses about lexical syntax. What happens, on this view, is that the child overgeneralizes, just as you would expect, but the overgeneralizations are inhibited by lack of positive supporting evidence from the linguistic environment and, for this reason, they eventually fade away. This would seem to be a perfectly straightforward case of environmentally determined learning, albeit one that emphasizes (as one might have said in the old days) ‘lack of reward’ rather than ‘punishment’ as the signal that the environment uses to transmit negative data to the learner. I'm not, of course, suggesting that this sort of story is right. (Indeed Pinker provides a good discussion of why it probably isn't; see section 1.4.3.2.) My point is that Pinker's own account seems to be no more than a case of it. What is crucial to Pinker's solution of Baker's Paradox isn't that he abandons arbitrariness; it's that he abandons ‘no negative data’.


Understandably, Pinker resists this diagnosis. The passage cited above continues as follows:


This procedure might appear to be using a kind of indirect negative evidence; it is sensitive to the nonoccurrence of certain kinds of forms. It does so, though, only in the uninteresting sense of acting differently depending on whether it hears X or doesn't hear X, which is true of virtually any learning algorithm . . . It is not sensitive to the nonoccurrence of particular sentences or even verb-argument structure combinations in parental speech; rather it is several layers removed from the input, looking at broad statistical patterns across the lexicon. (1989: 52)


There is, however, a widespread consensus (and not only among conceptual relativists) that intentional explanation can, after all, be preserved without supposing that belief contents are often—or even ever—literally public. The idea is that a robust notion of content similarity would do just as well as a robust notion of content identity for the cognitive scientist's purposes. Here, to choose a specimen practically at random, is a recent passage in which Gil Harman enunciates this faith:


Sameness of meaning from one symbol system to another is a similarity relation rather than an identity relation in the respect that sameness of meaning is not transitive . . . I am inclined to extend the point to concepts, thoughts, and beliefs . . . The account of sameness of content appeals to the best way of translating between two systems, where goodness in translation has to do with preserving certain aspects of usage, with no appeal to any more ‘robust’ notion of content or meaning identity . . . [There's no reason why] the resulting notion of sameness of content should fail to satisfy the purposes of intentional explanation. (1993: 169-79)7


It's important whether such a view can be sustained since, as we'll see, meeting the requirement that intentional contents be literally public is non-trivial; like compositionality, publicity imposes a substantial constraint upon one's theory of concepts and hence, derivatively, upon one's theory of language. In fact, however, the idea that content similarity is the basic notion in intentional explanation is affirmed a lot more widely than it's explained; and it's quite unclear, on reflection, how the notion of similarity that such a semantics would require might be unquestion-beggingly developed. On the one hand, such a notion must be robust in the sense that it preserves intentional explanations pretty generally; on the other hand, it must do so without itself presupposing a robust notion of content identity. To the best of my knowledge, it's true without exception that all the construals of concept similarity that have thus far been put on offer egregiously fail the second condition.


Harman, for example, doesn't say much more about content-similarity-cum-goodness-of-translation than that it isn't transitive and that it “preserves certain aspects of usage”. That's not a lot to go on. Certainly it leaves wide open whether Harman is right in denying that his account of content similarity presupposes a “ ‘robust’ notion of content or meaning identity”. For whether it does depends on how the relevant “aspects of usage” are themselves supposed to be individuated, and about this we're told nothing at all.


Harman is, of course, too smart to be a behaviourist; ‘usage’, as he uses it, is itself an intentional-cum-semantic term. Suppose, what surely seems plausible, that one of the ‘aspects of usage’ that a good translation of ‘dog’ has to preserve is that it be a term that implies animal, or a term that doesn't apply to ice cubes, or, for that matter, a term that means dog. If so, then we're back where we started; Harman needs notions like same implication, same application, and same meaning in order to explicate his notion of content similarity. All that's changed is which shell the pea is under.


At one point, Harman asks rhetorically, “What aspects of use determine meaning?” Reply: “It is certainly relevant what terms are applied to and the reasons that might be offered for this application . . . it is also relevant how some terms are used in relation to other terms” (ibid.: 166). But I can't make any sense of this unless some notion of ‘same application’, ‘same reason’, and ‘same relation of terms’ is being taken for granted in characterizing what good translations ipso facto have in common. NB on pain of circularity: same application (etc.), not similar application (etc.). Remember that similarity of semantic properties is the notion that Harman is trying to explain, so his explanation mustn't presuppose that notion.


I don't particularly mean to pick on Harman; if his story begs the question it was supposed to answer, that is quite typical of the literature on concept similarity. Though it's often hidden in a cloud of technical apparatus (for a detailed case study, see Fodor and Lepore 1992: ch. 7), the basic problem is easy enough to see.



That being so, explaining the doorknob/DOORKNOB effect requires postulating some (contingent, psychological) mechanism that reliably leads from having F-experiences to acquiring the concept of being F. It understates the case to say that no alternative to hypothesis testing suggests itself. So I don't think that a causal/historical account of the locking relation can explain why there is a d/D effect without invoking the very premiss which, according to SA, it can't have: viz. that primitive concepts are learned inductively.

 


Note the similarity of this objection to the one that rejected a Darwinian solution of the d/D problem: just as you can't satisfy the conditions for having the concept F just in virtue of having interacted with Fs, so too you can't satisfy the conditions for having the concept F just in virtue of your grandmother's having interacted with Fs. In both cases, concept acquisition requires something to have gone on in your head in consequence of the interactions. Given the ubiquity of the d/D phenomenon, the natural candidate for what's gone on in your head is inductive learning.


Second Try at a Metaphysical Solution to the d/D Problem


Maybe what it is to be a doorknob isn't evidenced by the kind of experience that leads to acquiring the concept DOORKNOB; maybe what it is to be a doorknob is constituted by the kind of experience that leads to acquiring the concept DOORKNOB. A Very Deep Thought, that; but one that requires some unpacking. I want to take a few steps back so as to get a running start.


Chapter 3 remarked that it's pretty clear that if we can't define “doorknob”, that can't be because of some accidental limitation of the available metalinguistic apparatus; such a deficit could always be remedied by switching metalanguages. The claim, in short, was not that we can't define “doorknob” in English, but that we can't define it at all. The implied moral is interesting: if “doorknob” can't be defined, the reason that it can't is plausibly not methodological but ontological; it has something to do with what kind of property being a doorknob is. If you're inclined to doubt this, so be it; but I think that you should have your intuitions looked at.


Well, but what could it be about being a doorknob that makes ‘doorknob’ not definable? Could it be that doorknobs have a “hidden essence” (as water, for example, is supposed to do); one that has eluded our scrutiny so far? Perhaps some science, not yet in place, will do for doorknobs what molecular chemistry did for water and geometrical optics did for mirrors: make it clear to us what they really are? But what science, for heaven's sake? And what could there be for it to make clear? Mirrors are puzzling (it seems that they double things); and water is puzzling too (what could it be made of, there's so much of it around?). But doorknobs aren't puzzling; doorknobs are boring. Here, for once, “further research” appears not to be required.


It's sometimes said that doorknobs (and the like) have functional essences: what makes a thing a doorknob is what it is (or is intended to be) used for. So maybe the science of doorknobs is psychology? Or sociology? Or anthropology? Once again, believe it if you can. In fact, the intentional aetiology of doorknobs is utterly transparent: they're intended to be used as doorknobs. I don't at all doubt that's what makes them what they are, but that it is gets us nowhere. For, if DOORKNOB plausibly lacks a conceptual analysis, INTENDED TO BE USED AS A DOORKNOB does too, and for the same reasons. And surely, surely, that can't, in either case, be because there's something secret about doorknobhood that depth psychology is needed to reveal? No doubt, there is a lot that we don't know about intentions towards doorknobs qua intentions; but I can't believe there's much that's obscure about them qua intentions towards doorknobs.


Look, there is presumably something about doorknobs that makes them doorknobs, and either it's something complex or it's something simple. If it's something complex, then ‘doorknob’ must have a definition, and its definition must be either “real” or “nominal” (or both).




19


Just as it’s possible to dissociate the idea that concepts are complex from the claim that meaning-constitutive inferences are necessary, so too it’s possible to dissociate the idea that concepts are constituted by their roles in inferences from the claim that they are complex. See Appendix 5A.


20


More precisely, only with respect to conceptually necessary inferences. (Notice that neither nomological nor metaphysical necessity will do; there might be laws about brown cows per se, and (who knows?) brown cows might have a proprietary hidden essence.) I don't know what a Classical IRS theorist should say if it turns out that conceptually necessary inferences aren't ipso facto definitional or vice versa. That, however, is his problem, not mine.


21


They aren't the only ones, of course. For example, Keil remarks that “Theories . . . make it impossible . . . to talk about the construction of concepts solely on the basis of probabilistic distributions of properties in the world” (1987: 196). But that's true only on the assumption that theories somehow constitute the concepts they contain. Ditto Keil's remark that “future work on the nature of concepts . . . must focus on the sorts of theories that emerge in children and how these theories come to influence the structure of the concepts that they embrace” (ibid.).


22


There are exceptions. Susan Carey thinks that the individuation of concepts must be relativized to the theories they occur in, but that only the basic ‘ontological’ commitments of a theory are content constitutive. (However, see Carey 1985: 168: “I assume that there is a continuum of degrees of conceptual differences, at the extreme end of which are concepts embedded in incommensurable conceptual systems.”) It's left open how basic ontological claims are to be distinguished from commitments of other kinds, and Carey is quite aware that problems about drawing this distinction are depressingly like the analytic/synthetic problems. But in so far as Carey has an account of content individuation on offer, it does seem to be some version of the Classical theory.


23


This point is related, but not identical, to the familiar worry about whether implicit definition can effect a ‘qualitative change’ in a theory's expressive power: the worry that definitions (implicit or otherwise) can only introduce concepts whose contents are already expressible by the host theory. (For discussion, see Fodor 1975.) It looks to me that implicit definition is specially problematic for meaning holists even if it's granted that an implicit definition can (somehow) extend the host theory's expressive power.


24


I don't particularly mean to pick on Gopnik; the cognitive science literature is full of examples of the mistake that I'm trying to draw attention to. What's unusual about Gopnik's treatment is just that it's clear enough for one to see what the problem is.


25


As usual, it's essential to keep in mind that when a de dicto intentional explanation attributes to an agent knowledge (rules, etc.), it thereby credits the agent with the concepts involved in formulating the knowledge, and thus incurs the burden of saying what concepts they are. See the ‘methodological digression’ in Chapter 2.


26


This chapter reconsiders some issues about the nativistic commitments of RTMs that I first raised in Fodor 1975 and then discussed extensively in 1981. Casual familiarity with the latter paper is recommended as a prolegomenon to this discussion. I'm especially indebted to Andrew Milne and to Peter Grim for having raised (essentially the same) cogent objections to a previous version.


27


For discussions that turn on this issue, see Fodor 1986; Antony and Levine 1991; Fodor 1991.


28


Actually, of course, DOORKNOB isn't a very good example, since it's plausibly a compound composed of the constituent concepts DOOR and KNOB. But let's ignore that for the sake of the discussion.


29


Well, maybe the acquisition of PROTON doesn't; it's plausible that PROTON is not typically acquired from its instances. So, as far as this part of the discussion is concerned, you are therefore free to take PROTON as a primitive concept if you want to. But I imagine you don't want to. Perhaps, in any case, it goes without saying that the fact that the d/D effect is widespread in concept acquisition is itself contingent and a posteriori.


I hope there to placate such scruples about DOORKNOB and CARBURETTOR as some of you may feel, and to do so within the framework of an atomistic RTM.

 


5. Concepts are public; they're the sorts of things that lots of people can, and do, share.


Since, according to RTM, concepts are symbols, they are presumed to satisfy a type/token relation; to say that two people share a concept (i.e. that they have literally the same concept) is thus to say that they have tokens of literally the same concept type. The present requirement is that the conditions for typing concept tokens must not be so stringent as to assign practically every concept token to a different type from practically any other.


I put it this way advisedly. I was once told, in the course of a public discussion with an otherwise perfectly rational and civilized cognitive scientist, that he “could not permit” the concept HORSE to be innate in humans (though I guess it's OK for it to be innate in horses). I forgot to ask him whether he was likewise unprepared to permit neutrinos to lack mass. Just why feelings run so strongly on these matters is unclear to me. Whereas the ethology of all other species is widely agreed to be thoroughly empirical and largely morally neutral, a priorizing and moralizing about the ethology of our species appears to be the order of the day. Very odd.


It seems pretty clear that all sorts of concepts (for example, DOG, FATHER, TRIANGLE, HOUSE, TREE, AND, RED, and, surely, lots of others) are ones that all sorts of people, under all sorts of circumstances, have had and continue to have. A theory of concepts should set the conditions for concept possession in such a way as not to violate this intuition. Barring very pressing considerations to the contrary, it should turn out that people who live in very different cultures and/or at very different times (me and Aristotle, for example) both have the concept FOOD; and that people who are possessed of very different amounts of mathematical sophistication (me and Einstein, for example) both have the concept TRIANGLE; and that people who have had very different kinds of learning experiences (me and Helen Keller, for example) both have the concept TREE; and that people with very different amounts of knowledge (me and a four-year-old, for example) both have the concept HOUSE. And so forth. Accordingly, if a theory or an experimental procedure distinguishes between my concept DOG and Aristotle's, or between my concept TRIANGLE and Einstein's, or between my concept TREE and Helen Keller's, etc., that is a very strong prima facie reason to doubt that the theory has got it right about concept individuation or that the experimental procedure is really a measure of concept possession.


I am thus setting my face against a variety of kinds of conceptual relativism, and it may be supposed that my doing so is itself merely dogmatic. But I think there are good grounds for taking a firm line on this issue. Certainly RTM is required to. I remarked in Chapter 1 that RTM takes for granted the centrality of intentional explanation in any viable cognitive psychology. In the cases of interest, what makes such explanations intentional is that they appeal to covering generalizations about people who believe that such-and-such, or people who desire that so-and-so, or people who intend that this and that, and so on. In consequence, the extent to which an RTM can achieve generality in the explanations it proposes depends on the extent to which mental contents are supposed to be shared. If everybody else's concept WATER is different from mine, then it is literally true that only I have ever wanted a drink of water, and that the intentional generalization ‘Thirsty people seek water’ applies only to me. (And, of course, only I can state that generalization; words express concepts, so if your WATER concept is different from mine, ‘Thirsty people seek water’ means something different when you say it and when I do.) Prima facie, it would appear that any very thoroughgoing conceptual relativism would preclude intentional generalizations with any very serious explanatory power. This holds in spades if, as seems likely, a coherent conceptual relativist has to claim that conceptual identity can't be maintained even across time slices of the same individual.


Who could really doubt that this is so? Systematicity seems to be one of the (very few) organizational properties of minds that our cognitive science actually makes some sense of.

 


If your favourite cognitive architecture doesn't support a productive cognitive repertoire, you can always argue that since minds are really finite, they aren't literally productive. But systematicity is a property that even quite finite conceptual repertoires can have; it isn't remotely plausibly a methodological artefact. If systematicity needs compositionality to explain it, that strongly suggests that the compositionality of mental representations is mandatory. For all that, there has been an acrimonious argument about systematicity in the literature for the last ten years or so. One does wonder, sometimes, whether cognitive science is worth the bother.


Some currently popular architectures don't support systematic representation. The representations they compute with lack constituent structure; a fortiori, they lack compositional constituent structure. This is true, in particular, of ‘neural networks’. Connectionists have responded to this in a variety of ways. Some have denied that concepts are systematic. Some have denied that Connectionist representations are inherently unstructured. A fair number have simply failed to understand the problem. The most recent proposal I've heard for a Connectionist treatment of systematicity is owing to the philosopher Andy Clark (1993). Clark says that we should “bracket” the problem of systematicity. “Bracket” is a technical term in philosophy which means try not to think about.


I don't propose to review this literature here. Suffice it that if you assume compositionality, you can account for both systematicity and productivity; and if you don't, you can't. Whether or not productivity and systematicity prove that conceptual content is compositional, they are clearly substantial straws in the wind. I find it persuasive that there are quite a few such straws, and they appear all to be blowing in the same direction.


The Best Argument for Compositionality


The best argument for the compositionality of mental (and linguistic) representation is that its traces are ubiquitous; not just in very general features of cognitive capacity like productivity and systematicity, but also everywhere in its details. Deny productivity and systematicity if you will; you still have these particularities to explain away.


Consider, for example: the availability of (definite) descriptions is surely a universal property of natural languages. Descriptions are nice to have because they make it possible to talk (mutatis mutandis, to think) about a thing even if it isn't available for ostension and even if you don't know its name; even, indeed, if it doesn't have a name (as with ever so many real numbers). Descriptions can do this job because they pick out unnamed individuals by reference to their properties. So, for example, ‘the brown cow’ picks out a certain cow; viz. the brown one. It does so by referring to a property, viz. being brown, which that cow has and no other contextually relevant cow does. Things go wrong if (e.g.) there are no contextually relevant cows; or if none of the contextually relevant cows is brown; or if more than one of the contextually relevant cows is brown . . . And so forth.


OK, but just how does all this work? Just what is it about the syntax and semantics of descriptions that allows them to pick out unnamed individuals by reference to their properties? Answer:


i. Descriptions are complex symbols which have terms that express properties among their syntactic constituents; and


ii. These terms contribute the properties that they express to determine what the descriptions that contain them specify.


It's because ‘brown’ means brown that it's the brown cow that ‘the brown cow’ picks out. Since you can rely on this arrangement, you can be confident that ‘the brown cow’ will specify the local brown cow even if you don’t know which cow the local brown cow is; even if you don't know that it's Bossie, for example, or that it's this cow. That, however, is just to say that descriptions succeed in their job because they are compositional. If English didn't let you use ‘brown’ context-independently to mean brown, and ‘cow’ context-independently to mean cow, it couldn't let you use ‘the brown cow’ to specify a brown cow without naming it.


The three-year-old who thinks (perhaps out of Quinean scruples) that ‘eating is acting’ is true but contingent will do just fine, so long as he's prepared to allow that contingent truths can have syntactic reflexes.

 


So much for the bootstrapping argument. I really must stop this grumbling about lexical semantics. And I will, except for a brief, concluding discussion of Pinker's handling of (what he calls) ‘Baker's Paradox’ (after Baker 1979). This too amounts to a claim that ontogenetic theory needs lexical semantic representations; but it makes quite a different sort of case from the one we've just been looking at.


The ‘Baker’s Paradox’ Argument


Pinker thinks that, unless children are assumed to represent ‘eat’ as an action verb (mutatis mutandis, ‘give’ as a verb of prospective possession, etc.), Baker's Paradox will arise and make the acquisition of lexical syntax unintelligible. I'll tell you what Baker's Paradox is in a moment, but I want to tell you what I think the bottom line is first. I think that Baker's Paradox is a red herring in the present context. In fact, I think that it's two red herrings: on Pinker's own empirical assumptions, there probably isn't a Baker's Paradox about learning the lexicon; and, anyhow, assuming that there is one provides no argument that lexical items have semantic structure. Both of these points are about to emerge.


Baker's Paradox, as Pinker understands it, is a knot of problems that turn on the (apparent) fact that children (do or can) learn the lexical syntax of their language without much in the way of overt parental correction. Pinker discerns “three aspects of the problem [that] give it its sense of paradox”, these being the child's lack of negative evidence, the productivity of the structures the child learns (“if children simply stuck with the argument structures that were exemplified in parental speech . . . they would never make errors . . . and hence would have no need to figure out how to avoid or expunge them”), and the “arbitrariness” of the linguistic phenomena that the child is faced with (specifically, “near synonyms [may] have different argument structures” (1989: 8–9)). If, for example, the rule of dative movement is productive, and if it is merely arbitrary that you can say ‘John gave the library the book’ but not *‘John donated the library the book’, how, except by being corrected, could the child learn that the one is OK and the other is not?


That's a good question, to be sure; but it bears full stress that the three components do not, as stated and by themselves, make Baker's Paradox paradoxical. The problem is an unclarity in Pinker's claim that the rules the child is acquiring are ‘productive’. If this means (as it usually does in linguistics) just that the rules are general (they aren't mere lists; they go ‘beyond the child's data’) then we get no paradox but just a standard sort of induction problem: the child learns more than the input shows him, and something has to fill the gap. To get a paradox, you have to throw in the assumption that, by and large, children don't overgeneralize; i.e. that, by and large, they don't apply the productive rules they're learning to license usages that count as mistaken by adult standards. For suppose that assumption is untrue and the child does overgeneralize. Then, on anybody's account, there would have to be some form of correction mechanism in play, endogenous or otherwise, that serves to expunge the child's errors. Determining what mechanism(s) it is that serve(s) this function would, of course, be of considerable interest; especially on the assumption that it isn't parental correction. But so long as the child does something that shows the world that he's got the wrong rule, there is nothing paradoxical in the fact that information the world provides ensures that he eventually converges on the right one.


To repeat, Baker's Paradox is a paradox only if you add ‘no overgeneralizations’ to Pinker's list. The debilitated form of Baker's Paradox that you get without this further premiss fails to do what Pinker very much wants Baker's Paradox to do; viz.



To the extent that we have some grasp on what concepts terms like ‘S’, ‘NP’, ‘ADJ’ express, the theory that children learn by syntactic bootstrapping is at least better defined than Pinker’s. (And to the extent that we don’t, it’s not.)

 


15


When Pinker’s analyses are clear enough to evaluate, they are often just wrong. For example, he notes in his discussion of causatives that the analysis PAINTvtr = cover with paint is embarrassed by such observations as this: although when Michelangelo dipped his paintbrush in his paint pot he thereby covered the paintbrush with paint, nevertheless he did not, thereby, paint the paintbrush. (The example is, in fact, borrowed from Fodor 1970.) Pinker explains that “stereotypy or conventionality of manner constrains the causative . . . This might be called the ‘stereotypy effect’ ” (1984: 324). So it might, for all the good it does. It is possible, faut de mieux, to paint the wall with one’s handkerchief; with one’s bare hands; by covering oneself with paint and rolling up the wall (in which last case, by the way, though covering the wall with the paint counts as painting the wall, covering oneself with the paint does not count as painting oneself even if one does it with a paintbrush; only as getting oneself covered with paint). Whether you paint the wall when you cover it with paint depends not on how you do it but on what you have in mind when you do it: you have to have in mind not merely to cover the wall with paint but to paint the wall. That is, the verb ‘paint’ apparently can’t be defined even in terms of such closely related expressions as the noun ‘paint’. Or, if it can, none of the decompositional analyses suggested so far, Pinker’s included, comes even close to showing how.


16


Sober (1984: 82) makes what amounts to the converse point: “In general, we expect theoretical magnitudes to be multiply accessible; there should be more than one way of finding out what their values are in a given circumstance. This reflects the assumption that theoretical magnitudes have multiple causes and effects. There is no such thing as the only possible effect or cause of a given event; likewise, there is no such thing as the only possible way of finding out whether it occurred. I won’t assert that this is somehow a necessary feature of all theoretical magnitudes, but it is remarkably widespread.” Note the suggestion that the phenomena in virtue of which a “theoretical magnitude” is multiply epistemically accessible are naturally construed as its “causes and its effects”. In the contrasting case, when there is only one access path (or, anyhow, only one access path that one can think of) the intuition is generally that the magnitude at issue isn't bona fide theoretical, and that its connection to the criterion is conceptual rather than causal.


17


Terminological conventions with respect to the topics this chapter covers are unsettled. I’ll use ‘stereotype’ and ‘prototype’ interchangeably, to refer to mental representations of certain kinds of properties. So, ‘the dog stereotype’ and ‘the dog prototype’ designate some such (complex) concept as: BEING A DOMESTIC ANIMAL WHICH BARKS, HAS A TAIL WHICH IT WAGS WHEN IT IS PLEASED, . . . etc. I’ll use ‘exemplar’ for the mental representation of a kind, or of an individual, that instantiates a prototype; so ‘sparrows are the exemplars of birds’ and ‘Bambi is Smith’s exemplar of a deer’ are both well-formed. ‘Sparrows are stereotypic birds’ (/‘Bambi is a prototypic deer’) are also OK; they mean that a certain kind (/individual) exhibits certain stereotypic (/prototypic) properties to a marked degree.


18


Eleanor Rosch, who invented this account of concepts more or less single-handed, often speaks of herself as a Wittgensteinian; and there is, of course, a family resemblance. But I doubt that it goes very deep. Rosch’s project was to get modality out of semantics by substituting a probabilistic account of content-constituting inferences. Whereas I suppose Wittgenstein’s project was to offer (or anyhow, make room for) an epistemic reconstruction of conceptual necessity. Rosch is an eliminativist where Wittgenstein is a reductionist. There is, in consequence, nothing in Rosch’s theory of concepts that underwrites Wittgenstein’s criteriology, hence nothing that’s of use for bopping sceptics with.




But it is surely not tolerable that they should lead by plausible arguments to a contradiction. If the d/D effect shows that primitive concepts must be learned inductively, and SA shows that primitive concepts can't be learned inductively, then the conclusion has to be that there aren't any primitive concepts. But if there aren't any primitive concepts, then there aren't any concepts at all. And if there aren't any concepts at all, RTM has gone West. Isn't it a bit late in the day (and late in the book) for me to take back RTM?


Help!


Ontology


This all started because we were in the market for some account of how DOORKNOB is acquired. The story couldn't be hypothesis testing because Conceptual Atomism was being assumed, so DOORKNOB was supposed to be primitive; and it's common ground that the mechanism for acquiring primitive concepts can't be any kind of induction. But, as it turned out, there is a further constraint that whatever theory of concepts we settle on should satisfy: it must explain why there is so generally a content relation between the experience that eventuates in concept attainment and the concept that the experience eventuates in attaining. At this point, the problem about DOORKNOB metastasized: assuming that primitive concepts are triggered, or that they're ‘caught’, won't account for their content relation to their causes; apparently only induction will. But primitive concepts can't be induced; to suppose that they are is circular. What started as a problem about DOORKNOB now looks like undermining all of RTM. This is not good. I was relying on RTM to support me in my old age.


But, on second thought, just why must one suppose that only a hypothesis-testing acquisition model can explain the doorknob/DOORKNOB relation? The argument for this is, I'm pleased to report, non-demonstrative. Let's go over it once more: the hypothesis-testing model takes the content relation between a concept and the experience it's acquired from to be a special case of the evidential relation between a generalization and its confirming instances (between, for example, the generalization that Fs are Gs and instances of things that are both F and G). You generally get DOG from (typical) dogs and not, as it might be, from ketchup. That's supposed to be because having DOG requires believing (as it might be) that typical dogs bark. (Note, once again, how cognitivism about concept possession and inductivism about concept acquisition take in one another's wash.) And, whereas encounters with typical dogs constitute evidence that dogs bark, encounters with ketchup do not (ceteris paribus). If the relation between concepts and experiences is typically evidential, that would explain why it's so often a relation of content: and what other explanation have we got?


That is what is called in the trade a ‘what-else’ argument. I have nothing against what-else arguments in philosophy; still less in cognitive science. Rational persuasion often invokes considerations that are convincing but not demonstrative, and what else but a what-else argument could a convincing but non-demonstrative argument be? On the other hand, it is in the nature of what-else arguments that ‘Q if not P’ trumps ‘What else, if not P?’; and, in the present case, I think there is a prima facie plausible ontological candidate for Q; that is, an explanation which makes the d/D effect the consequence of a metaphysical truth about how concepts are constituted, rather than an empirical truth about how concepts are acquired. In fact, I know of two such candidates, one of which might even work.


First Try at a Metaphysical Solution to the d/D Problem


If you assume a causal/historical (as opposed to a dispositional/counterfactual) construal of the locking relation, it might well turn out that there is a metaphysical connection between acquiring DOORKNOB and causally interacting with doorknobs. (Cf. the familiar story according to which it's because I have causally interacted with water and my Twin hasn't that I can think water-thoughts and he can't.) Actually, I don't much like causal/historical accounts of locking (see Fodor 1994: App. B), but we needn't argue about that here. For, even if causally interacting with doorknobs is metaphysically necessary for DOORKNOB-acquisition, it couldn't conceivably be metaphysically sufficient: just causally interacting with doorknobs doesn't guarantee you any concepts at all.