Active physical inference via reinforcement learning
- Shuaiji Li, Center for Data Science, New York University, New York City, New York, United States
- Yu Sun, Center for Data Science, New York University, New York City, New York, United States
- Sijia Liu, Center for Data Science, New York University, New York City, New York, United States
- Tianyu Wang, Center for Data Science, New York University, New York City, New York, United States
- Todd Gureckis, New York University, New York, New York, United States
- Neil Bramley, Psychology and Data Science, New York University, New York, New York, United States
Abstract

When encountering unfamiliar physical objects, children and adults often perform structured interrogatory actions, such as grasping and prodding, that reveal latent physical properties such as mass and texture. However, the processes that drive and support these curious behaviors remain largely mysterious. In this paper, we develop and train an agent that actively uncovers latent physical properties, such as the masses of objects and the forces acting on them, in a simulated physical “micro-world”. Concretely, we use a simulation-based inference framework to quantify the physical information produced by observing and interacting with the evolving dynamic environment. We then use a model-free reinforcement learning algorithm to train an agent to implement general strategies for revealing latent physical properties. Finally, we compare the behavior of this agent to human behavior observed in a similar task.
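The core idea the abstract describes, quantifying the physical information an interaction produces and using it as a training signal, can be illustrated with a minimal toy sketch. The dynamics, noise model, and candidate masses below are illustrative assumptions, not the paper's actual simulator: a belief over discrete mass hypotheses is updated by Bayes' rule after each observation, and the reduction in belief entropy serves as an information-gain reward.

```python
import math

def entropy(belief):
    """Shannon entropy (in nats) of a discrete belief distribution."""
    return -sum(p * math.log(p) for p in belief if p > 0)

def likelihood(obs, mass, noise=0.5):
    """Toy observation model (an assumption for illustration): a unit prod
    displaces an object by roughly 1/mass, corrupted by Gaussian noise."""
    mu = 1.0 / mass
    return math.exp(-((obs - mu) ** 2) / (2 * noise ** 2))

def bayes_update(prior, masses, obs):
    """Posterior belief over mass hypotheses after one observation."""
    post = [p * likelihood(obs, m) for p, m in zip(prior, masses)]
    z = sum(post)
    return [p / z for p in post]

def information_gain(prior, masses, obs):
    """Reward signal: how much the observation reduced belief entropy."""
    return entropy(prior) - entropy(bayes_update(prior, masses, obs))

masses = [1.0, 2.0, 4.0]      # candidate latent masses (illustrative)
belief = [1/3, 1/3, 1/3]      # uniform prior over hypotheses
obs = 1.0                     # displacement observed after prodding
reward = information_gain(belief, masses, obs)
```

In this framing, an RL agent trained with `information_gain` as its reward is incentivized to choose actions whose outcomes discriminate between hypotheses, which is one simple way to operationalize "actively uncovering" latent properties.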