Cued multimodal learning in infancy: a neuro-computational model

Abstract

We introduce a connectionist model of cued multimodal learning in infants. Its architecture is inspired by computational studies from the fields of both infant habituation and visual attention. The model embodies, in its simplest form, the notion that the attentional system involves competitive networks (Lee et al., 1999). Using this model, we are able to reproduce experimental differences in looking times between cued and non-cued conditions. We then show that the differences between social and non-social cues recently observed in 8-month-old infants by Wu and Kirkham (2010) can be explained by the amount of information let through from non-cued locations. We discuss these results and outline future lines of research based on this computational work.
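To make the competitive-attention idea concrete, the following is a minimal sketch (not the authors' implementation) of how locations might compete for activation while a cue gates how much input from non-cued locations is let through; the gain values, function names, and network size are illustrative assumptions only.

```python
import numpy as np

def competitive_attention(inputs, cued_index, noncued_gain):
    """Return normalized activations after cue-modulated competition.

    inputs        -- raw bottom-up activation per location
    cued_index    -- index of the cued location (gain fixed at 1.0)
    noncued_gain  -- fraction of input let through at non-cued locations
    """
    gains = np.full(len(inputs), noncued_gain)
    gains[cued_index] = 1.0                      # cued location passes fully
    gated = gains * np.asarray(inputs, float)    # cue gates the bottom-up input
    exp = np.exp(gated - gated.max())            # softmax-style competition
    return exp / exp.sum()

# Illustrative comparison: a stronger cue corresponds to a lower non-cued gain
# (more filtering of non-cued locations) than a weaker cue.
stimulus = [1.0, 0.9, 0.8, 0.7]
print(competitive_attention(stimulus, cued_index=0, noncued_gain=0.2))
print(competitive_attention(stimulus, cued_index=0, noncued_gain=0.8))
```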

