Relation learning in a neurocomputational architecture supports cross-domain transfer
- Leonidas A. A. Doumas, Psychology, University of Edinburgh, Edinburgh, United Kingdom
- Guillermo Puebla, Psychology, University of Edinburgh, Edinburgh, United Kingdom
- Andrea Martin, Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Gelderland, Netherlands
- John Hummel, Psychology, University of Illinois at Urbana-Champaign, Illinois, United States
Abstract

Humans readily generalize, applying prior knowledge to novel situations and stimuli. Advances in machine learning have begun to approximate and even surpass human performance, but these systems struggle to generalize what they have learned to untrained situations. We present a model based on well-established neurocomputational principles that demonstrates human-level generalisation. This model is trained to play one video game (Breakout) and performs one-shot generalisation to a new game (Pong) with different characteristics. The model generalizes because it learns structured representations that are functionally symbolic (viz., a role-filler binding calculus) from unstructured training data. It does so without feedback, and without requiring that structured representations are specified a priori. Specifically, the model uses neural co-activation to discover which characteristics of the input are invariant and to learn relational predicates, and oscillatory regularities in network firing to bind predicates to arguments. To our knowledge, this is the first demonstration of human-like generalisation in a machine system that does not assume structured representations to begin with.