The use of Bayesian models to explain both high- and low-level aspects of cognitive function promises better connections between cognitive science and cognitive neuroscience. Standing in the way, however, are fundamental problems: the computational intractability of Bayesian inference, and the general difficulty of understanding how Bayesian calculation can deal with structural representation. Getting around the problem of intractability seems to require devising effective methods for approximating optimal inference. But there is an alternative: simplifying the interpretation of how inference arises. While the process is normally taken to involve calculation over an implied joint distribution, it can be viewed more simply as the data-driven application of conditional assertions. This naive interpretation has several advantages with regard to tractability and representation. The paper formalizes the model and demonstrates some of its virtues.
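To make the contrast concrete, the sketch below illustrates the two readings of inference on a toy problem. It is a minimal illustration, not the paper's formalism: the variables, probability tables, and function names are assumptions introduced here. One route computes a posterior by marginalizing an explicit joint distribution; the other simply fires whichever stored conditional assertions the incoming data match.

```python
# A minimal sketch, not the paper's formalism: the variables, tables, and
# function names are illustrative assumptions.

# Standard reading: a joint distribution over three binary variables (A, B, C).
# A full joint over n variables has 2**n entries, the source of intractability.
joint = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.10, (0, 1, 0): 0.15, (0, 1, 1): 0.05,
    (1, 0, 0): 0.10, (1, 0, 1): 0.10, (1, 1, 0): 0.10, (1, 1, 1): 0.20,
}

def posterior_from_joint(joint, evidence):
    """P(A=1 | evidence): sum out the full joint table, then normalize."""
    consistent = {k: v for k, v in joint.items()
                  if all(k[i] == val for i, val in evidence.items())}
    z = sum(consistent.values())
    return sum(v for k, v in consistent.items() if k[0] == 1) / z

# Naive reading: conditional assertions stored directly, indexed by the
# condition they respond to; no joint table is implied or consulted.
conditionals = {
    (1, 1): 0.60,  # asserts P(A=1 | B=1) = 0.60
    (2, 1): 0.67,  # asserts P(A=1 | C=1) = 0.67
}

def apply_conditionals(conditionals, evidence):
    """Return the assertions triggered by the observed data."""
    return [p for (var, val), p in conditionals.items()
            if evidence.get(var) == val]

print(posterior_from_joint(joint, {1: 1}))      # 0.6, via the joint
print(apply_conditionals(conditionals, {1: 1})) # [0.6], via direct assertion
```

The tradeoff the sketch suggests: the data-driven route replaces an exponentially large table with a store whose cost is linear in the number of assertions, at the price of answering only those queries for which an assertion exists.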