I don't, however, think this comes to anything much. In the first place, it's not true (in any unquestion-begging sense) that “virtually any learning algorithm [acts] differently depending on whether it hears X or doesn't hear X”. To the contrary, it's a way of putting the productivity problem that the learning algorithm must somehow converge on treating infinitely many unheard types in the same way that it treats finitely many of the heard types (viz. as grammatical) and finitely many heard types in the same way that it treats a different infinity of the unheard ones (viz. as ungrammatical). To that extent, the algorithm must not assume that either being heard or not being heard is a projectible property of the types.
On the other hand, every treatment of learning that depends on the feedback of evidence at all (whether it supposes the evidence to be direct or indirect, negative or positive, or all four) must “be several layers removed from the input, looking at broad statistical patterns across the lexicon”; otherwise the presumed feedback won't generalize. It follows that, on anybody's account, the negative information that the environment provides can't be “the nonoccurrence of particular sentences” (my emphasis); it's got to be the non-occurrence of certain kinds of sentences.
This much is common ground to any learning theory that accounts for the productivity of what is learned.
Where we’ve gotten to now: probably there isn’t a Baker’s Paradox about lexical syntax; you’d need ‘no overgeneralization’ to get one, and ‘no overgeneralization’ is apparently false of the lexicon. Even if, however, there were a Baker’s Paradox about the lexicon, that would show that the hypotheses that the child considers when he makes his lexical inductions must be tightly endogenously constrained. But it wouldn't show, or even suggest, that they are hypotheses about semantic properties of lexical items. No more than the existence of a bona fide Baker’s Paradox for sentential syntax—which it does seem that children hardly ever overgeneralize—shows, or even suggests, that it’s in terms of the semantic properties of sentences that the child’s hypotheses about their syntax are defined.
So much for Pinker’s two attempts at ontogenetic vindications of lexical semantics. Though neither seems to work at all, I should emphasize a difference between them: whereas the ‘Baker’s Paradox’ argument dissolves upon examination, there’s nothing wrong with the form of the bootstrapping argument. For all that I’ve said, it could still be true that lexical syntax is bootstrapped from lexical semantics. Making a convincing case that it is would require, at a minimum, identifying the straps that the child tugs and showing that they are bona fide semantic; specifically, it would require showing that the lexical properties over which the child generalizes are typically among the ones that semantic-level lexical representations specify. In principle, we could get a respectable argument of that form tomorrow; it’s just that, so far, there aren't any. So too, in my view, with the other ‘empirical’ or ‘linguistic’ arguments for lexical decomposition; all that’s wrong with them is that they aren’t sound.
Oh, well, so be it. Let’s go see what the philosophers have.
4 The Demise of Definitions, Part II: The Philosopher's Tale
[A] little thing affects them. A slight disorder of the stomach makes them cheats. [They] may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of underdone potato.
—Scrooge
It's a sad truth about definitions that even their warm admirers rarely loved them for themselves alone. Cognitive scientists (other than linguists; see Chapter 3) cared about definitions because they offered a handy construal of the thesis that many concepts are complex; viz. the concepts in a definition are the constituents of the concept it defines. And cognitive scientists liked many concepts being complex because then many concepts could be learned by assembling them from their parts. And cognitive scientists liked many concepts being learned by assembling them from their parts because then only the primitive parts have to be unlearned. We'll see, in later chapters, how qualmlessly most of cognitive science dropped definitions when it realized that it could have complex concepts without them.