How the curse of intractability can be cognitive science's blessing


Intractability has been a thorny issue in cognitive science. Informally, intractability refers to the problem that computations that work for small toy domains cannot scale to domains of real-world complexity, because their resource demands grow prohibitively. It is not uncommon for cognitive scientists to try to discredit competing frameworks by pointing out that those frameworks run into intractability issues. For instance, connectionists and Bayesians have argued against symbolic and logic-based approaches, respectively, on the grounds that the latter yield intractable theories of cognition. Yet it is now well known that connectionist and Bayesian theories, too, can be intractable. Moreover, even heuristic and dynamical-systems theories, both often lauded for their tractability, seem unable to live up to that image when forced to scale beyond toy domains. Evidently, intractability is not a problem for specific theories, or even for specific theoretical frameworks. Rather, it appears to be a ubiquitous feature of frameworks with a high degree of generality.
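To make the informal notion concrete, consider a toy sketch (not drawn from any particular cognitive theory; the constraint and function names are hypothetical) of exhaustive inference over hypotheses built from n binary features. The hypothesis space has size 2^n, so the cost of enumerating it doubles with every feature added, which is exactly the kind of scaling that keeps such computations confined to toy domains:

```python
from itertools import product

def count_consistent_hypotheses(n, constraint):
    """Enumerate all 2**n binary hypotheses over n features and count
    those satisfying the constraint. The enumeration itself visits
    every hypothesis, so its cost doubles with each added feature."""
    return sum(1 for h in product([0, 1], repeat=n) if constraint(h))

# Hypothetical toy constraint: at least half of the features are 'on'.
constraint = lambda h: 2 * sum(h) >= len(h)

for n in [4, 8, 12, 16]:
    total = 2 ** n
    consistent = count_consistent_hypotheses(n, constraint)
    print(f"n={n:2d}: enumerated {total:6d} hypotheses, {consistent} consistent")
```

At n = 16 the enumeration already visits 65,536 hypotheses; at n = 40 it would visit over a trillion. The sketch is deliberately naive, but the exponential growth it exhibits is the formal core of the intractability worries discussed above.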
