Small and Large Scale Probabilistic Classifiers with Guarantees of Validity. / Petej, Ivan.

2018. 133 p.

Research output: Thesis (Doctoral Thesis)



Abstract

This thesis addresses and expands research in probabilistic prediction, with a particular emphasis on generating forecasts that are well calibrated. Chapter 1 describes standard techniques in machine learning and outlines two methods, conformal prediction and Venn prediction, both of which serve as important building blocks for the remainder of the results in this thesis. Chapter 2 introduces the field of probabilistic machine learning and highlights some of the advantages and challenges of the methods developed to date. Chapter 3 proposes a new method of probabilistic prediction based on conformal prediction, a machine learning method for generating prediction sets that are guaranteed to have a specified coverage probability. The method is applied to the standard USPS data set with encouraging results. Chapter 4 focuses on the study of Venn prediction, concentrating on binary prediction problems. Venn predictors produce probability-type predictions for the labels of test objects which are guaranteed to be well calibrated under the standard assumption that the observations are generated independently from the same distribution. A new class of Venn predictors, called Venn–Abers predictors, is introduced, based on the idea of isotonic regression. Promising empirical results are demonstrated both for Venn–Abers predictors and for their more computationally efficient simplified version. Chapter 5 studies, theoretically and empirically, a method of turning machine learning algorithms into probabilistic predictors that, like the Venn–Abers predictors described in the preceding chapter, automatically enjoy a property of validity (perfect calibration) but are computationally more efficient. The price to pay for perfect calibration is that these probabilistic predictors produce imprecise probabilities. When these imprecise probabilities are merged into precise probabilities, the resulting predictors, while losing the theoretical property of perfect calibration, are shown in empirical studies to be consistently more accurate than existing methods.
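The conformal-prediction idea described above (prediction sets with a specified coverage probability) can be sketched as follows. This is a minimal illustration of a split-conformal classifier, not the thesis's implementation: the data layout and function names are assumptions. Each candidate label receives a p-value computed from calibration nonconformity scores, and the prediction set keeps every label whose p-value exceeds the significance level epsilon, which gives coverage of at least 1 - epsilon under exchangeability.

```python
def conformal_p_value(nonconformity_cal, nonconformity_test):
    """p-value of a candidate label: the fraction of examples (calibration
    examples plus the test example itself) whose nonconformity score is at
    least as large as the test example's."""
    n = len(nonconformity_cal)
    ge = sum(1 for a in nonconformity_cal if a >= nonconformity_test)
    return (ge + 1) / (n + 1)


def prediction_set(cal_by_label, test_scores_by_label, epsilon):
    """Conformal prediction set at significance level epsilon: every label
    whose p-value exceeds epsilon is included.

    cal_by_label: dict mapping each label to the nonconformity scores of
                  calibration examples with that label.
    test_scores_by_label: dict mapping each candidate label to the test
                          object's nonconformity score under that label.
    """
    return {y for y in test_scores_by_label
            if conformal_p_value(cal_by_label[y], test_scores_by_label[y]) > epsilon}


# Toy example: label 0 fits the test object well, label 1 does not.
cal = {0: [0.1, 0.2, 0.3], 1: [0.7, 0.8, 0.9]}
test = {0: 0.15, 1: 0.95}
print(prediction_set(cal, test, 0.3))  # at a stricter level only label 0 survives
```

Lowering epsilon widens the prediction set (here, epsilon = 0.2 admits both labels), which is how the method trades set size for its coverage guarantee.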
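The Venn–Abers construction and the merging step can also be sketched in pure Python. This is a simplified illustration under assumed names, not the thesis's code: for a test score s, isotonic regression (here, a hand-rolled pool-adjacent-violators fit) is run twice on the calibration scores, once with the test example hypothetically labelled 0 and once labelled 1, yielding an imprecise probability pair (p0, p1); one known way to merge the pair into a single probability, minimising regret under log loss, is p = p1 / (1 - p0 + p1).

```python
def pava(ys):
    """Pool Adjacent Violators: least-squares nondecreasing fit to ys."""
    blocks = []  # each block is [mean, count]
    for y in ys:
        blocks.append([float(y), 1])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            n = n1 + n2
            blocks.append([(m1 * n1 + m2 * n2) / n, n])
    fit = []
    for m, n in blocks:
        fit.extend([m] * n)
    return fit


def isotonic_prob(scores, labels, s_test, y_test):
    """Fit isotonic regression to the calibration data augmented with the
    hypothetical example (s_test, y_test); return the fitted value at s_test."""
    pts = list(zip(scores, labels)) + [(s_test, y_test)]
    pts.sort()
    fit = pava([y for _, y in pts])
    return fit[pts.index((s_test, y_test))]


def venn_abers(scores_cal, labels_cal, s_test):
    """Return the imprecise pair (p0, p1) and a single merged probability."""
    p0 = isotonic_prob(scores_cal, labels_cal, s_test, 0)
    p1 = isotonic_prob(scores_cal, labels_cal, s_test, 1)
    p = p1 / (1.0 - p0 + p1)  # log-loss-regret-minimising merge
    return p0, p1, p
```

For example, `venn_abers([1, 2, 3, 4], [0, 0, 1, 1], 3.5)` gives the pair (0.5, 1.0) and merged probability 2/3; the width of [p0, p1] reflects the imprecision that is the price of the calibration guarantee.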
Original language: English
Qualification: Ph.D.
Award date: 1 Jul 2018
State: Unpublished - 2018
This open access research output is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

ID: 30400648