Learning in Social Environments with Curious Neural Agents
- Megumi Sano, Computer Science, Stanford University, Stanford, California, United States
- Julian De Freitas, Psychology, Harvard University, Cambridge, Massachusetts, United States
- Nick Haber, Graduate School of Education, Stanford University, Stanford, California, United States
- Daniel L. K. Yamins, Psychology, Stanford University, Stanford, California, United States
Abstract
From an early age, humans are capable of learning about their social environment, making predictions of how other agents will operate and decisions about how they themselves will interact. In this work, we address the problem of formalizing the learning principles underlying these abilities. We construct a curious neural agent that can efficiently learn predictive models of social environments that are rich with external agents inspired by real-world animate behaviors such as peekaboo, chasing, and mimicry. Our curious neural agent consists of a controller driven by gamma-progress, a scalable and effective curiosity signal, and a disentangled world model that allocates separate networks for interdependent components of the world. We show that our disentangled curiosity-driven agent achieves higher learning efficiency and prediction performance than strong baselines. Crucially, we find that a preference for animate attention emerges naturally in our model, and is a key driver of performance. Finally, we discuss future directions, including applications of our framework to modeling human behavior and designing early indicators for developmental variability.
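To make the gamma-progress signal mentioned in the abstract concrete, here is a minimal sketch. It assumes the common learning-progress formulation: the agent keeps a slow exponential-moving-average ("old") copy of its world model and rewards itself by how much the current model's prediction loss has improved over that old copy. The linear world model, the update rule, and all parameter values below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

class GammaProgressCuriosity:
    """Sketch of a gamma-progress curiosity signal.

    Assumed form: reward = loss of a slow EMA copy of the world model
    minus loss of the current model on the same transition, so reward is
    high where the model is actively improving. The linear model here is
    a hypothetical stand-in for the paper's neural world model.
    """

    def __init__(self, n_features, gamma=0.99, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w_new = rng.normal(scale=0.1, size=n_features)  # current model
        self.w_old = self.w_new.copy()                       # slow EMA copy
        self.gamma = gamma                                   # EMA mixing rate
        self.lr = lr                                         # SGD step size

    def _loss(self, w, x, y):
        # Squared prediction error of a linear model on one observation.
        return float((w @ x - y) ** 2)

    def step(self, x, y):
        # Curiosity reward: improvement of the new model over the old copy.
        reward = self._loss(self.w_old, x, y) - self._loss(self.w_new, x, y)
        # One SGD step on the current model.
        grad = 2.0 * (self.w_new @ x - y) * x
        self.w_new -= self.lr * grad
        # Slow EMA update: w_old <- gamma * w_old + (1 - gamma) * w_new.
        self.w_old = self.gamma * self.w_old + (1 - self.gamma) * self.w_new
        return reward
```

On a learnable input-output mapping the rewards trend positive while the model improves and decay toward zero once it converges; on unpredictable noise the old and new models stay equally bad, so the signal stays near zero. This is the property that lets the controller allocate attention toward animate (learnable but not yet learned) behaviors.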