Christine Soh Yue passes proposal defense

Christine Soh Yue passed her proposal defense on September 12, 2024! Her title was "Learning Variation and Systematicity in Language" and she is advised by Charles Yang. Anna Papafragou, Marlyse Baptista, and Kathryn Schuler served on her committee.


Abstract:

The successful acquisition of a language is a highly complex process in which learners must extract systematic information from a sample of input utterances. The input is filled with variability: some of it is noise that must be filtered out, and some of it is a systematic feature of the language. Learners must acquire early words across contexts in which a word's abstracted meaning is ambiguous, and they must determine the existence and scope of linguistic rules, as well as whether these rules are deterministic or probabilistically variable. This dissertation explores the mechanisms and representations of language acquisition by proposing a two-part cognitive model of learning: the first part can learn both by rote association and by generalization, and the second part learns by generalization.

Language has often been described as "making infinite use of finite means" (Chomsky, 1965, quoting von Humboldt, 1836). Thus, successful users of a language must master both the finite and the infinite: they must learn both the rote associations and the grammars that generate the possible utterances of the language (Chomsky, 1958). Beginning with the case study of early word learning (Chapter 3), the mechanism for learning the "finite" is validated through simulations of existing experimental work as well as two novel cross-situational word learning experiments targeting memory. Next, the dissertation explores two processes for learning the "infinite": regularization and the acquisition of variable rules. Both processes are key to developing native knowledge of a language, but the relationship between them remains unclear. The model proposes that regularization occurs when a single form is generalized, and that generalization of the different forms must occur for variable rules to be acquired. Chapter 4 proposes a pair of artificial language learning experiments to test the model of learning via generalization. Chapter 5 examines an example of a variable rule learned in natural language through a corpus study of differential object marking (DOM) in various dialects of Spanish, where DOM is variable in usage.