In noisy domains where category distributions overlap, people categorise better at test after being trained on idealised category structures. One explanation is that humans selectively sample from memory when categorising, so learning idealised categories leads to sampling more appropriate items and hence better performance. Here we propose that idealisation of category distributions arises naturally through a process of forgetting and re-estimating category labels. We model a process in which items' category membership is forgotten and then re-estimated from the remaining distributions. Over time this lowers the variance of the category distributions, which is equivalent to idealising the training data. We test this potential idealisation in a paradigm in which we train participants on overlapping category distributions and withdraw feedback on some trials for one group, thereby forcing re-estimation of category membership. The model predicts that the group receiving less feedback will perform better at test, owing to idealisation.
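The forgetting-and-re-estimation mechanism described above can be illustrated with a minimal simulation sketch. All parameters here (two one-dimensional Gaussian categories, a 30% per-round forgetting probability, re-estimation by assigning a forgotten item to the nearer category mean) are illustrative assumptions, not the paper's actual model specification; the sketch only shows that repeated forget-and-relabel cycles shrink within-category variance, i.e. idealise the distributions.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: two heavily overlapping 1-D Gaussian categories.
MU = {"A": -1.0, "B": 1.0}
items = [(random.gauss(MU[c], 1.0), c) for c in "AB" for _ in range(500)]

def category_sd(items, cat):
    """Within-category spread of the items currently labelled `cat`."""
    return statistics.stdev(x for x, c in items if c == cat)

sd_before = {c: category_sd(items, c) for c in "AB"}

# Forget-and-re-estimate cycle: each round a random subset of items
# loses its label, which is then re-estimated from the remaining
# labelled distribution (here: assigned to the closer category mean).
for _ in range(20):
    means = {c: statistics.mean(x for x, cc in items if cc == c) for c in "AB"}
    relabelled = []
    for x, c in items:
        if random.random() < 0.3:  # category membership forgotten
            c = min(means, key=lambda k: abs(x - means[k]))  # re-estimated
        relabelled.append((x, c))
    items = relabelled

sd_after = {c: category_sd(items, c) for c in "AB"}
print("before:", sd_before)
print("after: ", sd_after)
```

Items near the category boundary are the ones most often relabelled to the other side, so each category progressively sheds its overlapping tail; the within-category standard deviations after the cycles are noticeably smaller than before, mimicking training on idealised (lower-variance) category structures.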