Abstract
This paper studies, theoretically and empirically, a method of turning machine learning algorithms into probabilistic predictors that automatically enjoys a property of validity (perfect calibration), is computationally efficient, and preserves predictive efficiency. The price to pay for perfect calibration is that these probabilistic predictors produce imprecise probabilities (in practice, almost precise for large data sets). When these imprecise probabilities are merged into precise probabilities, the resulting predictors lose the theoretical property of perfect calibration but consistently outperform existing methods in empirical studies.
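The abstract does not spell out how the imprecise probabilities are merged into precise ones. As a minimal sketch, assuming the predictor outputs a probability interval [p0, p1] for label 1 and assuming the merging rule p = p1 / (1 - p0 + p1), the merging step could look like this; both the rule and the interval values are illustrative assumptions, not taken from the paper:

```python
def merge_interval(p0: float, p1: float) -> float:
    """Merge an imprecise probability interval [p0, p1] for label 1
    into a single precise probability.

    The rule p1 / (1 - p0 + p1) is an assumption for illustration;
    the abstract itself does not specify the merging rule.
    """
    return p1 / (1.0 - p0 + p1)


# Hypothetical interval for one test object; for large data sets the
# interval is narrow ("almost precise"), as the abstract notes.
p0, p1 = 0.62, 0.66
print(f"imprecise: [{p0}, {p1}] -> precise: {merge_interval(p0, p1):.4f}")
```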
| Original language | English |
| --- | --- |
| Title of host publication | NIPS'15 |
| Subtitle of host publication | Proceedings of the 28th International Conference on Neural Information Processing Systems |
| Publisher | MIT Press |
| Pages | 892-900 |
| Number of pages | 9 |
| Volume | 1 |
| Publication status | Published - 7 Dec 2015 |