When learners are exposed to inconsistent input, do they reproduce the probabilities in the input (probability matching), or produce some variants disproportionately often (regularization)? Laboratory results and computational models of artificial language learning both suggest that the underlying learning mechanism is essentially probability matching, with regularization arising from additional factors. However, these models were fit to aggregated experimental data, which can exhibit probability matching even if every individual regularizes. To assess whether learning is better characterized as probability matching or as regularization at the individual level, we ran a large-scale experiment. We found substantial individual variation, and the structure of this variation is not predicted by recent beta-binomial models. We introduce a new model, the Double Scaling Sigmoid (DSS) model, fit its parameters on a by-participant basis, and show that it captures the patterns in the data. Prior expectations in the DSS are abstract and do not entirely reflect previous experience.
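
To make the aggregation point concrete, the following minimal sketch (a toy illustration only, not the paper's experiment or any of the models discussed; all parameter values are assumptions) simulates learners who each regularize completely, yet whose pooled productions reproduce the input probabilities, showing why probability matching in aggregated data does not imply probability matching in individuals.

```python
# Toy demonstration (assumed parameters): individuals regularize fully,
# but the aggregate looks like probability matching.
import numpy as np

rng = np.random.default_rng(0)
p_input = 0.7        # proportion of variant A in the inconsistent input (assumed)
n_learners = 1000    # number of simulated learners (assumed)
n_productions = 50   # productions elicited per learner (assumed)

# Each learner fixates on a single variant; for this sketch we assume the
# probability of fixating on A equals the input proportion.
fixated_on_A = rng.random(n_learners) < p_input

# Individual behavior is all-or-none: every learner's own proportion of
# variant A is either 1.0 or 0.0 -- maximal regularization.
individual_props = fixated_on_A.astype(float)

# Pooled behavior: total A productions over total productions comes out
# close to the 0.7 in the input -- apparent probability matching.
counts_A = np.where(fixated_on_A, n_productions, 0)
aggregate_prop = counts_A.sum() / (n_learners * n_productions)

print("individual proportions are all 0 or 1:",
      set(individual_props) <= {0.0, 1.0})
print(f"aggregate proportion of variant A: {aggregate_prop:.3f}")
```
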