Can Generic Neural Networks Estimate Numerosity Like Humans?
- Sharon Chen, Computer Science, Fu Foundation School of Engineering and Applied Science, Undergraduate, Columbia University, New York, New York, United States
- Zhenglong Zhou, Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States
- Mengting Fang, School of Mathematical Sciences, Beijing Normal University, Beijing, China
- Jay McClelland, Stanford University, Stanford, California, United States
Abstract
Researchers exploring mathematical abilities have proposed that humans and animals possess an approximate number system (ANS) that enables them to estimate numerosities in visual displays. Experimental data show that estimation responses exhibit a constant coefficient of variation (CV: the ratio of the variability of the estimates to their mean) for numerosities larger than four, and a constant CV has been taken as a signature characteristic of the innate ANS. For numerosities up to four, however, humans often produce error-free responses, suggesting the presence of estimation mechanisms, distinct from the ANS, that are specialized for this 'subitizing range'. We explored whether a constant CV might arise from learning in generic neural networks trained with widely used neural network learning procedures. We find that our networks exhibit a constant CV for numerosities larger than four, but do not do so robustly for smaller numerosities. Our findings are consistent with the idea that approximate number estimation may not require innate specialization for number, while also supporting the view that a process different from the one we model may underlie estimation responses for numerosities of four or fewer.
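As a rough illustration of the constant-CV signature described above (not the networks studied in this work), the sketch below simulates estimation responses whose noise grows in proportion to the true numerosity, a pattern often called scalar variability. The Weber fraction of 0.15, the function names, and the choice of a Gaussian noise model are all illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
WEBER_FRACTION = 0.15  # assumed noise level, chosen only for illustration


def simulate_cv(n, trials=100_000):
    """CV (std / mean) of simulated estimates for true numerosity n.

    Noise scales with n (scalar variability), so the CV should come out
    roughly equal to the Weber fraction regardless of n.
    """
    estimates = rng.normal(loc=n, scale=WEBER_FRACTION * n, size=trials)
    return estimates.std() / estimates.mean()


for n in [5, 10, 20, 40]:
    # Each CV lands near 0.15: a flat CV across numerosities.
    print(n, round(simulate_cv(n), 3))
```

Under this noise model the printed CV is approximately the same at every numerosity, which is the flat-CV profile taken as the ANS signature; a model with fixed (additive) noise would instead show a CV that shrinks as numerosity grows.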