Hierarchical Inferences Support Systematicity in the Lexicon
- Matthias Hofer, MIT, Cambridge, Massachusetts, United States
- Tessa Verhoef, Leiden Institute of Advanced Computer Science, Leiden, Netherlands
- Roger Levy, Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Abstract
Language exhibits striking systematicity in its form-meaning mappings: Similar meanings are assigned similar forms. Here we study how systematicity relates to another well-studied phenomenon, linguistic regularization, the removal of unpredictable variation in linguistic variants. Systematicity is ultimately a property of classes of form-meaning mappings, each member of which can be acted upon independently by linguistic regularization. Both are supported by a cognitive bias for simplicity, but this leaves open the question of how they interact to structure the lexicon. Using data from a recent artificial gesture learning experiment by Verhoef, Padden, & Kirby (2016), we formalize cognitive biases at the item level and the language level as inductive biases in a hierarchical Bayesian model. Simulated data from models that lack either one of those biases show how both are necessary to capture subjects' systematicity preferences. Our results bring conceptual clarity about the relationship between regularization and systematicity and promote a multi-level approach to cognitive biases in artificial language learning and language evolution.
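To make the two-level idea concrete, here is a minimal toy sketch (not the paper's actual model) of how a language-level bias and an item-level regularization bias could jointly produce a systematic lexicon. All names, the parameterization, and the binary-variant setup are illustrative assumptions, not details from the study.

```python
import random

def sample_lexicon(n_items, language_bias, item_bias, rng):
    """Toy two-level generative sketch (illustrative assumption, not the
    authors' model). Each item chooses between two form variants (0 or 1).

    language_bias: if True, all items share one language-level variant
        preference mu, coupling the lexicon (systematicity).
    item_bias: if True, each item's variant probability is pushed to an
        extreme (0 or 1), removing within-item variation (regularization).
    """
    mu = rng.random()  # language-level preference, shared across items
    lexicon = []
    for _ in range(n_items):
        p = mu if language_bias else rng.random()  # shared vs. independent
        if item_bias:
            p = round(p)  # regularize: collapse toward a deterministic variant
        lexicon.append(1 if rng.random() < p else 0)
    return lexicon

def systematicity(lexicon):
    # crude proxy: fraction of items using the majority variant
    ones = sum(lexicon)
    return max(ones, len(lexicon) - ones) / len(lexicon)
```

With both biases the sketch always yields a fully systematic, fully regular lexicon, while removing either one reintroduces variation, loosely mirroring the abstract's claim that both levels are needed.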