The architecture of grammar in artificial grammar learning: Formal biases in the acquisition of morphophonology and the nature of the learning task

[Bloomington, Ind.] : Indiana University
This thesis introduces an experimental paradigm designed to test whether human language learners acquire product-oriented generalizations (e.g., "plurals must end in -i") and/or source-oriented generalizations (e.g., "add -i to the singular to form the plural"). The paradigm is applied to the morphophonological process of velar palatalization. The ecological validity of the paradigm is confirmed by comparison to corpus data from loanword adaptation in Russian. Characteristics of the training task are shown to influence whether the grammar extracted by a learner is largely product-oriented or largely source-oriented. This finding suggests that the shape of the grammar is influenced not only by innate biases of the learner (Universal Grammar) but also by characteristics of the learning situation.

Nonetheless, there are regularities that hold across training tasks and languages. First, learners extract both product-oriented and source-oriented generalizations. Thus, learners exposed to a lexicon of singular and plural forms learn at least (1) what typical plurals and singulars are like, (2) which segments of the singular form must be retained in the plural, and (3) which segments of the plural form must be retained in the singular. Second, learners appear to rely on schemas specifying which form classes and paradigmatic mappings are observed frequently (e.g., "plurals should end in -ti" or "a [k] in the singular corresponds to a [ti] in the plural"), rather than on constraints against underobserved form types (e.g., "plurals must not end in -ki"). Third, competing generalizations are weighted relative to each other stochastically: learners obey competing generalizations in proportion to how much statistical support each competitor receives from the training data, rather than obeying the most strongly supported competitor 100% of the time.
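The stochastic weighting described above amounts to probability matching. A minimal sketch, using the abstract's own "-i" vs. "-ti" competitors with invented support counts (the numbers and function names are illustrative, not from the thesis):

```python
import random

# Invented counts: how often each competing generalization was supported
# by the training data. With 30 vs. 10, "add -i" has 75% of the support.
support = {"add -i": 30, "add -ti": 10}

def choose_generalization(support):
    """Pick a generalization with probability proportional to its support
    (probability matching), rather than always picking the best-supported one."""
    total = sum(support.values())
    r = random.uniform(0, total)
    for rule, weight in support.items():
        r -= weight
        if r <= 0:
            return rule
    return rule  # fallback for floating-point edge cases

random.seed(0)  # fixed seed so the simulation is reproducible
counts = {"add -i": 0, "add -ti": 0}
for _ in range(10000):
    counts[choose_generalization(support)] += 1

# "add -i" wins roughly 75% of trials, not 100% -- matching the claim that
# learners obey competitors in proportion to their statistical support.
```

A maximizing learner would instead return `max(support, key=support.get)` on every trial; the behavioral data summarized here favor the proportional strategy.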
Learners do not obey the Subset Principle, which predicts that learners should induce the most specific generalizations consistent with the training data. The observed overgeneralization patterns are shown to be expected if we assume a Bayesian approach to speech perception and word recognition, in which the output of perception is not the identity of the most likely structure but rather a probability distribution over possible structures.
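The Bayesian point can be illustrated with a toy posterior computation. Assuming (hypothetically) that a learner hears a plural in [-ti] and entertains two candidate singulars, one ending in [k] and one in [t], with invented priors and likelihoods:

```python
# Toy Bayesian word recognition: the output is a normalized posterior over
# candidate structures, not just the single most likely one. All numbers
# and candidate forms are invented for illustration.
prior = {"mak": 0.5, "mat": 0.5}           # P(singular structure)
likelihood = {"mak": 0.3, "mat": 0.9}      # P(perceived plural | structure)

def posterior(prior, likelihood):
    """Return P(structure | percept) for every candidate via Bayes' rule."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

post = posterior(prior, likelihood)
# post == {"mak": 0.25, "mat": 0.75}: the less likely parse keeps nonzero
# probability, so generalizations formed over it can surface as the
# overgeneralization patterns described in the text.
```

An argmax perceiver would discard the [mak] parse entirely, and the Subset Principle's predictions would follow; retaining the full distribution is what licenses the observed overgeneralization.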
Thesis (Ph.D.) - Indiana University, Linguistics, 2009
bias, generalization, learning, morphology, phonology, productivity
Doctoral Dissertation