We present an algorithmic model for the development of children's intuitive theories within a hierarchical Bayesian framework, where theories are described as sets of logical laws generated by a probabilistic context-free grammar. Our algorithm performs stochastic search at two levels of abstraction: an outer loop in the space of theories, and an inner loop in the space of explanations (or models) that each theory generates for a particular dataset, in order to discover the theory that best explains the observed data. We show that this model is capable of learning correct theories in several everyday domains, and we discuss the dynamics of learning in the context of children's cognitive development.
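The two-level search described above can be illustrated with a minimal sketch. The toy "laws", data, and scoring functions below are illustrative assumptions, not the actual grammar, domains, or likelihood used in the paper; the sketch only shows the outer-loop structure: propose a change to the current theory, score it against the data under a simplicity prior, and accept or reject stochastically.

```python
import math
import random

random.seed(0)

# Hypothetical stand-ins: candidate laws a grammar might generate,
# and observed data the learned theory should explain.
LAWS = ["A->B", "B->C", "A->C", "C->A"]
DATA = {"A->B", "B->C"}

def prior(theory):
    # Simplicity prior: theories with fewer laws are more probable.
    return math.exp(-len(theory))

def likelihood(theory, data):
    # Toy proxy for the inner loop: a theory explains a datum if one
    # of its laws matches it; unexplained data are penalized.
    explained = sum(1 for d in data if d in theory)
    return math.exp(2 * (explained - len(data)))

def score(theory, data):
    return prior(theory) * likelihood(theory, data)

def propose(theory):
    # Outer-loop proposal: toggle a single law in or out of the theory.
    new = set(theory)
    new.symmetric_difference_update({random.choice(LAWS)})
    return frozenset(new)

def search(data, steps=2000):
    # Metropolis-style stochastic search over theories, tracking the
    # best-scoring theory visited.
    current = frozenset()
    best, best_score = current, score(current, data)
    for _ in range(steps):
        candidate = propose(current)
        accept = score(candidate, data) / max(score(current, data), 1e-12)
        if random.random() < accept:
            current = candidate
        if score(current, data) > best_score:
            best, best_score = current, score(current, data)
    return best

print(sorted(search(DATA)))
```

On this toy problem the search settles on the theory containing exactly the two laws that explain the data, since adding a superfluous law lowers the prior and dropping a needed law lowers the likelihood.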