Approximate Inference through Sequential Measurements of Likelihoods Accounts for Hick’s Law

Abstract

In Bayesian categorization, exactly computing likelihoods and posteriors might be hard for humans. We propose an approximate inference framework inspired by Bayesian quadrature and Thompson sampling. An agent can pay a fixed cost to make a noisy measurement of the likelihood of one category. By sequentially making measurements, the agent refines their beliefs over the likelihoods. When the agent stops measuring and chooses a category, they are rewarded for being correct; the agent therefore chooses the category that maximizes the probability of being correct. To decide whether to make another measurement, the agent simulates one measurement for each category. If any of the resulting gains in expected reward exceeds the measurement cost, the agent makes a real measurement of the category whose simulation yielded the largest gain. We find that the average number of measurements grows approximately logarithmically with the number of categories, reminiscent of Hick's law. Furthermore, our model makes predictions for decision confidence among multiple alternatives.
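The abstract describes the decision rule but not a specific implementation. The following is a minimal Python sketch of that rule under assumptions not stated above: independent Gaussian beliefs over each category's log-likelihood, Gaussian measurement noise, and a Monte Carlo estimate of the probability of being correct. All names and parameter values (sigma_meas, cost, n_samples) are illustrative, not the authors'.

# Sketch of the sequential-measurement model: Gaussian beliefs per category,
# noisy measurements of log-likelihoods, and a stop/measure rule that compares
# simulated gains in expected reward against a fixed measurement cost.
# Assumptions and parameter values are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)


def prob_correct(mu, var, z):
    """Probability of being correct if the agent stops now and picks the category
    with the highest belief mean, estimated by sampling log-likelihoods from the
    current Gaussian beliefs (the shared samples z act as common random numbers,
    which keeps the simulated gains low-noise)."""
    choice = int(np.argmax(mu))
    samples = mu + np.sqrt(var) * z          # one column per category
    return float(np.mean(np.argmax(samples, axis=1) == choice))


def run_trial(n_categories, sigma_prior=1.0, sigma_meas=1.0, cost=0.01,
              n_samples=1000, max_steps=100):
    """One categorization trial; returns the number of measurements taken."""
    true_loglik = rng.normal(0.0, sigma_prior, n_categories)   # hidden from the agent
    mu = np.zeros(n_categories)                                # belief means
    var = np.full(n_categories, sigma_prior ** 2)              # belief variances

    for step in range(max_steps):
        z = rng.standard_normal((n_samples, n_categories))
        current = prob_correct(mu, var, z)

        # Simulate ONE hypothetical measurement per category (drawn from the
        # current predictive distribution), update a copy of the belief, and
        # record the gain in expected reward relative to stopping now.
        gains = np.empty(n_categories)
        for i in range(n_categories):
            y = rng.normal(mu[i], np.sqrt(var[i] + sigma_meas ** 2))
            post_var = 1.0 / (1.0 / var[i] + 1.0 / sigma_meas ** 2)
            post_mu = post_var * (mu[i] / var[i] + y / sigma_meas ** 2)
            mu_sim, var_sim = mu.copy(), var.copy()
            mu_sim[i], var_sim[i] = post_mu, post_var
            gains[i] = prob_correct(mu_sim, var_sim, z) - current

        if gains.max() <= cost:       # no simulated gain justifies another measurement
            return step
        i = int(np.argmax(gains))     # take a real measurement of that category
        y = rng.normal(true_loglik[i], sigma_meas)
        post_var = 1.0 / (1.0 / var[i] + 1.0 / sigma_meas ** 2)
        mu[i] = post_var * (mu[i] / var[i] + y / sigma_meas ** 2)
        var[i] = post_var
    return max_steps


# Average number of measurements as a function of set size; the abstract reports
# approximately logarithmic growth, reminiscent of Hick's law.
for n in (2, 4, 8, 16):
    steps = [run_trial(n) for _ in range(50)]
    print(f"{n:2d} categories: {np.mean(steps):.1f} measurements on average")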

