Semantic structure of infant first-person scenes changes with development
- Ziyu Xiang, Department of Computer Science, Indiana University, Bloomington, Indiana, United States
- Linda Smith, Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, United States
- David Crandall, Department of Computer Science, Indiana University, Bloomington, Indiana, United States
Abstract
The co-occurrence of objects in visual scenes reflects the semantic structure of the world: cups are more likely to appear in scenes with tables than with airplanes, for example. Both human and machine vision use these co-occurrences to support recognition of individual objects. A reasonable assumption is that these co-occurrences are ubiquitous and present for all perceivers. However, the scenes observed by infants are highly dependent on their body postures and locations, both of which change dramatically over the first year of post-natal life. To measure these changing co-occurrences in infant-perspective scenes, we collected images from infants wearing head cameras in everyday home environments, comparing three age groups: 1-3, 6-8, and 11-12 months. Using graph-theoretical analysis, we conclude that the semantic structure of scenes at 6-8 months differs from that of both younger and older infants.
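To make the co-occurrence-graph idea concrete, the sketch below builds a weighted object co-occurrence graph from per-scene object labels and computes a few standard graph summaries. This is an illustrative sketch only, not the paper's actual pipeline: the `scenes` data, object names, and choice of summary statistics are hypothetical assumptions for demonstration.

```python
import networkx as nx
from itertools import combinations

# Hypothetical per-scene object annotations for one age group;
# in practice these would come from labeled head-camera frames.
scenes = [
    {"cup", "table", "spoon"},
    {"cup", "table"},
    {"toy", "floor", "cup"},
]

# Build a weighted co-occurrence graph: nodes are object categories,
# edge weights count how often two objects appear in the same scene.
G = nx.Graph()
for objects in scenes:
    for a, b in combinations(sorted(objects), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Simple graph-theoretic summaries that could be compared across age groups.
print("nodes:", G.number_of_nodes())
print("edges:", G.number_of_edges())
print("density:", nx.density(G))
print("mean clustering:", nx.average_clustering(G, weight="weight"))
```

Repeating this construction separately for each age group (1-3, 6-8, and 11-12 months) would yield one graph per group whose structural statistics can then be compared.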