Learning concepts from sketches via analogical generalization and near-misses

Abstract

Modeling how concepts are learned from experience is an important challenge for cognitive science. In cognitive psychology, progressive alignment, i.e., comparing highly similar examples, has been shown to lead to rapid learning. In AI, providing very similar negative examples (near-misses) has been proposed as another way to accelerate learning. This paper describes a model of concept learning that combines these two ideas, using sketched input to automatically encode data and reduce tailorability. SAGE, a model of analogical generalization, is used to implement progressive alignment. Near-miss analysis is modeled by using the Structure-Mapping Engine to hypothesize classification criteria from the differences between a concept and a near-miss. This analysis is applied both to labeled negative examples provided as input and to near-miss examples found via analogical retrieval when only positive examples are provided. Using a corpus of sketches, we show that the model can learn concepts from sketches and that incorporating near-miss analysis improves learning.
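To make the near-miss idea concrete, the following is a minimal toy sketch (an illustrative assumption, not the paper's SAGE/SME implementation): if a concept and a near-miss negative example are represented as sets of symbolic facts, the differences between them suggest candidate classification criteria.

```python
# Toy illustration of near-miss analysis (hypothetical simplification;
# the paper's model uses SAGE and the Structure-Mapping Engine over
# structured sketch encodings, not flat fact sets).

def near_miss_criteria(concept, near_miss):
    """Hypothesize (required, forbidden) facts from the differences."""
    required = concept - near_miss    # facts the near-miss lacks
    forbidden = near_miss - concept   # facts the near-miss adds
    return required, forbidden

def classify(example, required, forbidden):
    """Accept an example only if it meets the hypothesized criteria."""
    return required <= example and not (forbidden & example)

# Winston-style arch example: the near-miss differs in a single relation.
arch = {"on(top,left)", "on(top,right)", "apart(left,right)"}
near_miss = {"on(top,left)", "on(top,right)", "touching(left,right)"}

required, forbidden = near_miss_criteria(arch, near_miss)
```

Because the negative example is highly similar to the positive one, the difference set is small, so the hypothesized criteria are sharply focused — the intuition behind why near-misses accelerate learning.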
