Weakly Supervised Learning: What Could It Do and What Could It Not?


Weakly supervised learning is not only a typical mode of human concept learning but also has wide real-world applications. Of particular interest to this paper is the theoretical side of weakly supervised learning: (a) Can weakly supervised learning learn the same target concept as fully supervised learning? (b) If so, under what conditions, and how can this be achieved? In other words, this paper investigates what weakly supervised learning can do and what it cannot. The basic idea is that weakly supervised learning can be transformed into an equivalent supervised learning problem, so that it can be understood with the tools of supervised learning. The major results of the paper are: (a) the hardness of weakly supervised learning depends on the properties of the training data and the adopted feature representation; (b) although there is no theoretical guarantee of uniquely identifying the relevant variables, incorporating the minimum description length principle may help infer the target concept; (c) weakly supervised learning can be solved by an EM-style algorithm; this is not a novel idea in itself, but the theoretical analysis suggests that the E-step and the M-step should adopt feature representations with distinct properties rather than sharing the same features.
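To make the EM-style approach in result (c) concrete, the sketch below shows one common weak-supervision setting: partial-label learning, where each example carries a set of candidate labels containing the true one. The E-step estimates soft labels restricted to each candidate set, and the M-step refits a model from those soft labels. This is an illustrative assumption, not the paper's algorithm: the nearest-mean model, the function name `em_partial_label`, and the toy data are all hypothetical stand-ins (and, per the paper's point, the two steps could in principle use different feature representations).

```python
import numpy as np

def em_partial_label(X, candidate_sets, n_classes, n_iter=20):
    """EM-style sketch for weak supervision with candidate label sets.

    Hypothetical illustration: the E-step assigns soft labels restricted
    to each example's candidate set; the M-step refits class means (a
    simple nearest-mean model standing in for a richer learner).
    """
    n = len(X)
    # Initialise soft labels uniformly over each candidate set.
    q = np.zeros((n, n_classes))
    for i, cs in enumerate(candidate_sets):
        q[i, list(cs)] = 1.0 / len(cs)
    # Mask that forbids labels outside each candidate set.
    mask = np.full((n, n_classes), -np.inf)
    for i, cs in enumerate(candidate_sets):
        mask[i, list(cs)] = 0.0
    for _ in range(n_iter):
        # M-step: class means weighted by the current soft labels.
        means = (q.T @ X) / np.maximum(q.sum(axis=0), 1e-12)[:, None]
        # E-step: posterior proportional to exp(-squared distance),
        # restricted to the candidate sets via the mask.
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        logits = -d2 + mask
        logits -= logits.max(axis=1, keepdims=True)
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)
    return q.argmax(axis=1)

# Toy data: two well-separated clusters. A third of the examples are
# precisely labelled anchors; the rest carry a fully ambiguous set.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=-3.0, size=(30, 2)),
               rng.normal(loc=+3.0, size=(30, 2))])
true = np.array([0] * 30 + [1] * 30)
cands = [{t} if i % 3 == 0 else {0, 1} for i, t in enumerate(true)]
pred = em_partial_label(X, cands, n_classes=2)
```

The precisely labelled anchors break the symmetry between the two classes; without them, a uniform initialisation would leave both class means identical and EM would never move, which illustrates the paper's point that the hardness depends on the properties of the training data.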
