Notions of entropy and uncertainty are fundamental to many domains, ranging from the philosophy of science to physics. One important application is to quantify the expected usefulness of possible experiments (or questions or tests). Many different entropy models could be used, and different models do not in general lead to the same conclusions about which tests (or experiments) are most valuable. It is often unclear whether these disagreements reflect different theoretical and practical goals or are merely due to historical accident. We introduce a unified two-parameter family of entropy models that incorporates many existing entropy measures as special cases. This family of models offers insight into heretofore perplexing psychological results and generates predictions for future research.
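For concreteness, one standard two-parameter family with this unifying property is the Sharma-Mittal entropy; taking it as the intended family here is an assumption, but it illustrates how a single expression with an order parameter $r$ and a degree parameter $t$ subsumes several familiar entropy measures. For a probability distribution $P = (p_1, \ldots, p_n)$,
\[
H_{r,t}(P) \;=\; \frac{1}{1-t}\left[\Bigl(\sum_{i} p_i^{\,r}\Bigr)^{\frac{1-t}{1-r}} - 1\right],
\]
which recovers R\'enyi entropy of order $r$ in the limit $t \to 1$, Tsallis entropy at $t = r$, and Shannon entropy as both $r, t \to 1$.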