How does simulating aspects of primate infant visual development inform training of CNNs?
- Shantanu Jaiswal, Social and Cognitive Computing, Agency for Science, Technology and Research, Singapore, Singapore
- Dongkyu Choi, Social and Cognitive Computing, Agency for Science, Technology and Research, Singapore, Singapore
- Fernando Basura, Social and Cognitive Computing, Agency for Science, Technology and Research, Singapore, Singapore
Abstract: Primate visual development is characterized by low visual acuity and colour sensitivity, alongside high plasticity and synaptic growth, during the first year of infancy, prior to the development of specific visual-cognitive functions. In this work, we investigate the possible synergy between the gradual variation in visual input distribution and the concurrent growth of a statistical model of vision on the task of large-scale object classification. We adopt deep convolutional neural networks (CNNs) as a statistical model of vision and study their performance in 4 training setups, each varying in whether the model is 'static' or 'growing' in parameters and whether the visual input is 'fully-formed' or 'refining' in saturation, contrast and spatial resolution. Our experiments indicate that a setup reflective of infant visual development, wherein a gradually growing model is trained on a refining visual input distribution, converges faster and to better generalization than the other setups.
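The 'refining' input condition described above can be sketched as a simple curriculum that degrades images toward infant-like acuity early in training and relaxes the degradation as training progresses. The sketch below is illustrative only: the schedule shape, the starting values (e.g. 25% resolution, 20% saturation), and the function names are assumptions, not parameters reported in the paper.

```python
import numpy as np

def acuity_schedule(t):
    """Linearly interpolate input parameters from 'infant-like' (t=0)
    to fully formed (t=1). Starting values are illustrative assumptions."""
    res = 0.25 + 0.75 * t        # fraction of full spatial resolution
    contrast = 0.5 + 0.5 * t     # fraction of full contrast
    saturation = 0.2 + 0.8 * t   # fraction of full colour saturation
    return res, contrast, saturation

def refine_input(img, t):
    """Degrade an RGB image (H, W, 3 float array in [0, 1]) to the
    acuity level for training progress t in [0, 1]."""
    res, contrast, saturation = acuity_schedule(t)
    h, w, _ = img.shape
    # Low acuity: nearest-neighbour downsample, then upsample back.
    lh, lw = max(1, int(h * res)), max(1, int(w * res))
    small = img[(np.arange(lh) * h // lh)][:, (np.arange(lw) * w // lw)]
    ys = (np.arange(h) * lh // h).clip(0, lh - 1)
    xs = (np.arange(w) * lw // w).clip(0, lw - 1)
    low = small[ys][:, xs]
    # Reduced contrast: pull pixel values toward the mean intensity.
    low = low.mean() + contrast * (low - low.mean())
    # Reduced saturation: blend each pixel toward its grey value.
    grey = low.mean(axis=2, keepdims=True)
    return grey + saturation * (low - grey)
```

At t=1 the three factors are all 1, so `refine_input(img, 1.0)` returns the image unchanged; a training loop would pass t = epoch / total_epochs (or any monotone schedule) so the network sees a gradually sharpening, recolouring input distribution.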