Learning to refer informatively by amortizing pragmatic reasoning
- Julia White, Electrical Engineering, Stanford University, Stanford, California, United States
- Jesse Mu, Computer Science Department, Stanford University, Stanford, California, United States
- Noah Goodman, Psychology, Stanford University, Stanford, California, United States
Abstract

A hallmark of human language is the ability to effectively and efficiently convey contextually relevant information. One theory for how humans reason about language is presented in the Rational Speech Acts (RSA) framework, which captures pragmatic phenomena via a process of recursive social reasoning (Goodman & Frank, 2016). However, RSA represents ideal reasoning in an unconstrained setting. We explore the idea that speakers might learn to amortize the cost of RSA computation over time by directly optimizing for successful communication with an internal listener model. In simulations with grounded neural speakers and listeners across two communication game datasets representing synthetic and human-generated data, we find that our amortized model is able to quickly generate language that is effective and concise across a range of contexts, without the need for explicit pragmatic reasoning.
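To illustrate the recursive social reasoning that RSA formalizes (and that the amortized speaker learns to bypass), here is a minimal sketch of one RSA recursion step: a literal listener L0 conditions a uniform prior on an utterance being true, and a pragmatic speaker S1 prefers utterances in proportion to how informative they are for L0. The two-object, two-utterance lexicon below is a toy assumption for illustration, not a dataset or model from the paper.

```python
import math

# Toy reference game: two objects, two one-word utterances.
# LEXICON[u][o] = 1.0 if utterance u is literally true of object o.
UTTERANCES = ["blue", "circle"]
OBJECTS = ["blue_circle", "blue_square"]
LEXICON = {
    "blue":   {"blue_circle": 1.0, "blue_square": 1.0},  # true of both objects
    "circle": {"blue_circle": 1.0, "blue_square": 0.0},  # true of one object
}

def normalize(dist):
    """Renormalize a dict of non-negative scores into a probability distribution."""
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()} if total else dist

def literal_listener(u):
    """L0(o | u): uniform prior over objects, conditioned on u being true."""
    return normalize({o: LEXICON[u][o] for o in OBJECTS})

def pragmatic_speaker(o, alpha=1.0):
    """S1(u | o) ∝ exp(alpha * log L0(o | u)): prefer informative utterances."""
    scores = {}
    for u in UTTERANCES:
        p = literal_listener(u).get(o, 0.0)
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    return normalize(scores)
```

For the target `blue_circle`, "blue" is ambiguous (L0 assigns it probability 0.5) while "circle" uniquely identifies it (probability 1.0), so S1 prefers "circle" with probability 2/3 at `alpha=1`. The amortized approach in the paper trains a neural speaker to produce such informative utterances directly, without running this recursion at inference time.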