Word Learning through Sensorimotor Child-Parent Interaction: A Feature Selection Approach

Abstract

This paper presents a computational model of word learning with the goal of understanding the mechanisms through which word learning is grounded in multimodal social interactions between young children and their parents. We designed and implemented a novel multimodal sensing environment consisting of two head-mounted mini cameras placed on the child's and the parent's foreheads, motion tracking of head and hand movements, and recording of the caregiver's speech while child and parent were engaged in a free-play toy-naming interaction. We developed a probabilistic model that predicts the child's learning outcomes from sensorimotor features extracted from the child-parent interaction. More importantly, through the trained regression coefficients of the model, we discovered a set of perceptual and motor patterns that are informatively time-locked to words and their intended referents and that are predictive of word learning. These patterns provide quantitative measures of the roles of various sensorimotor cues that may facilitate word learning.
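As a rough illustration of the kind of model described above, the sketch below fits a logistic regression that predicts whether a word was learned from sensorimotor features measured around its naming events, then inspects the trained coefficients to see which cues are predictive. The feature names and the synthetic data are hypothetical placeholders, not the paper's actual features or results.

```python
# Minimal sketch (not the authors' implementation): a logistic regression
# predicting word learning from hypothetical sensorimotor features
# extracted around naming events.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-naming-event features, e.g. how much the named object
# dominated the child's head-camera view, and child/parent manual activity.
feature_names = [
    "child_visual_dominance",
    "child_hand_on_object",
    "parent_hand_on_object",
    "parent_head_stability",
]

# Synthetic data: 200 naming events with a made-up ground-truth weighting.
X = rng.random((200, len(feature_names)))
w_true = np.array([2.0, 1.5, 0.5, 1.0])
p = 1.0 / (1.0 + np.exp(-(X @ w_true - 2.5)))  # probability word is learned
y = rng.random(200) < p                        # 1 = word learned

model = LogisticRegression().fit(X, y)

# The trained coefficients quantify how strongly each sensorimotor cue,
# time-locked to the naming event, predicts learning of the word.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>24s}: {coef:+.2f}")
```

In this setup, larger positive coefficients mark cues whose presence during naming makes learning more likely, which mirrors how the paper reads patterns off its trained regression coefficients.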

