
after all . . . [Perhaps, in particular, these classes are specifiable by reference to
semantic criteria.] . . . If learners could acquire and enforce criteria delineating
the[se] . . . classes of verbs, they could productively generalize an alternation to
verbs that meet the criteria without overgeneralizing it to those that do not.
(ibid.: 30)
Precisely so. If, as Pinker's theory claims, the lexical facts are non-arbitrary
and children are sensitive to their non-arbitrariness, then the right
prediction is that children don't overgeneralize the lexical rules.
Which, however, by practically everybody's testimony, including
Pinker's, children reliably do. On Pinker's own account, children aren't
'conservative' in respect of the lexicon (see 1989: 19–26, sec. 1.4.4.1 for
lots and lots of cases).15 This being so, there's got to be something wrong
with the theory that the child's hypotheses 'differentiate lexical classes a
priori'. A priori constraints would mean that false hypotheses don't even get
tried. Overgeneralization, by contrast, means that false hypotheses do get
tried but are somehow expunged (presumably by some sort of information
that the environment supplies).
At one point, Pinker almost 'fesses up to this. The heart of his strategy
for lexical learning is that 'if the verbs that occur in both forms have some
[e.g. semantic] property . . . that is missing in the verbs that occur [in the
input data] in only one form, bifurcate the verbs . . . so as to expunge
nonwitnessed verb forms generated by the earlier unconstrained version of
the rule if they violate the newly learned constraint' (1989: 52). Pinker
admits that this may 'appear to be using a kind of indirect negative
evidence: it is sensitive to the nonoccurrence of certain kinds of verbs'. To
be sure; it sounds an awful lot like saying that there is no Baker's Paradox
for the learning of verb structure, hence no argument for a priori semantic
constraints on the child's hypotheses about lexical syntax. What happens,
on this view, is that the child overgeneralizes, just as you would expect,
but the overgeneralizations are inhibited by lack of positive supporting
evidence from the linguistic environment and, for this reason, they
eventually fade away. This would seem to be a perfectly straightforward
case of environmentally determined learning, albeit one that emphasizes
(as one might have said in the old days) 'lack of reward' rather than
'punishment' as the signal that the environment uses to transmit negative
data to the learner. I'm not, of course, suggesting that this sort of story is
right. (Indeed Pinker provides a good discussion of why it probably isn't;
see section 1.4.3.2.) My point is that Pinker's own account seems to be no
more than a case of it. What is crucial to Pinker's solution of Baker's
Paradox isn't that he abandons arbitrariness; it's that he abandons 'no
negative data'.
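
Pinker's procedure, as quoted, has the shape of a simple algorithm, and it may help to see it written out. What follows is a minimal sketch in Python, assuming a toy lexicon and an arbitrary semantic predicate; the function name and the data structures are illustrative inventions, not anything in Pinker's actual model:

# Sketch of the 'bifurcate and expunge' step paraphrased above from
# Pinker (1989: 52). The function name, data structures, and the toy
# semantic criterion are illustrative assumptions, not Pinker's model.

def bifurcate_and_expunge(witnessed, hypothesized, has_property):
    # witnessed: verb -> set of argument structures attested in the input
    # hypothesized: verb -> set of forms generated by the earlier,
    #   unconstrained version of the lexical rule
    # has_property: a semantic criterion over verbs
    alternators = {v for v, forms in witnessed.items() if len(forms) > 1}
    one_form_only = set(witnessed) - alternators

    # Bifurcate only if the criterion separates the two classes: present
    # in every alternating verb, absent from every one-form verb.
    if all(has_property(v) for v in alternators) and \
            not any(has_property(v) for v in one_form_only):
        for verb, forms in hypothesized.items():
            if not has_property(verb):
                # Expunge nonwitnessed forms that violate the constraint.
                forms &= witnessed.get(verb, set())
    return hypothesized

# Toy run on the locative alternation: 'load' is heard in both frames,
# 'pour' in only one, and the child has overgeneralized 'pour'.
witnessed = {'load': {'locative', 'with'}, 'pour': {'locative'}}
overgeneralized = {'pour': {'locative', 'with'}}
print(bifurcate_and_expunge(witnessed, overgeneralized,
                            lambda v: v == 'load'))  # {'pour': {'locative'}}

Notice that the expunging step keys off which forms are absent from the witnessed set; that is exactly the sensitivity to nonoccurrence that the quotation below concedes.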
Understandably, Pinker resists this diagnosis. The passage cited above
continues as follows:
This procedure might appear to be using a kind of indirect negative evidence; it
is sensitive to the nonoccurrence of certain kinds of forms. It does so, though,
only in the uninteresting sense of acting differently depending on whether it hears
X or doesn't hear X, which is true of virtually any learning algorithm . . . It is not
sensitive to the nonoccurrence of particular sentences or even verb-argument
structure combinations in parental speech; rather it is several layers removed from
the input, looking at broad statistical patterns across the lexicon. (1989: 52)
I don't, however, think this comes to anything much. In the first place, it's
not true (in any unquestion-begging sense) that 'virtually any learning
algorithm [acts] differently depending on whether it hears X or doesn't
hear X'. To the contrary, it's a way of putting the productivity problem
that the learning algorithm must somehow converge on treating infinitely
many unheard types in the same way that it treats finitely many of the
heard types (viz. as grammatical) and finitely many heard types in the same
way that it treats a different infinity of the unheard ones (viz. as
ungrammatical). To that extent, the algorithm must not assume that either
being heard or not being heard is a projectible property of the types.
On the other hand, every treatment of learning that depends on the
feedback of evidence at all (whether it supposes the evidence to be direct
or indirect, negative or positive, or all four) must 'be several layers
removed from the input' . . .

15 Though the facts are a little labile, to be sure. For some recent data, see
Marcus et al. 1992.
