Small and Large Scale Probabilistic Classifiers with Guarantees of Validity. / Petej, Ivan.

2018. 133 p.

Research output: Thesis › Doctoral Thesis

Unpublished

BibTeX

@phdthesis{0b4f2ee38a2c4ace815f440de3af6858,
title = "Small and Large Scale Probabilistic Classifiers with Guarantees of Validity",
abstract = "This thesis addresses and expands research in probabilistic prediction with a particular emphasis on generating forecasts which are well calibrated. Chapter 1 describes standard techniques in machine learning and outlines two methods - conformal prediction and Venn prediction - both of which serve as important building blocks for the remainder of the results in this thesis. Chapter 2 introduces the field of probabilistic machine learning and highlights some of the advantages and challenges of the methods developed to date. Chapter 3 proposes a new method of probabilistic prediction which is based on conformal prediction - a machine learning method for generating prediction sets that are guaranteed to have a specified coverage probability. The method is applied to the standard USPS data set with encouraging results. Chapter 4 focuses on the study of Venn prediction, concentrating on binary prediction problems. Venn predictors produce probability-type predictions for the labels of test objects which are guaranteed to be well calibrated under the standard assumption that the observations are generated independently from the same distribution. A new class of Venn predictors is introduced, called Venn–Abers predictors, which are based on the idea of isotonic regression. Promising empirical results are demonstrated both for Venn–Abers predictors and for their more computationally efficient simplified version. Chapter 5 studies theoretically and empirically a method of turning machine learning algorithms into probabilistic predictors that, like the Venn–Abers predictors described in the preceding chapter, automatically enjoy a property of validity (perfect calibration) but are computationally more efficient. The price to pay for perfect calibration is that these probabilistic predictors produce imprecise probabilities. When these imprecise probabilities are merged into precise probabilities, the resulting predictors, while losing the theoretical property of perfect calibration, are shown to be consistently more accurate than the existing methods in empirical studies.",
keywords = "Venn machine, reliable probabilistic prediction, additional information, transfer, Conformal Prediction, Probabilistic prediction, Class membership probability",
author = "Ivan Petej",
year = "2018",
language = "English",
school = "Royal Holloway, University of London",

}
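
The abstract's summary of conformal prediction (Chapter 3) - prediction sets guaranteed to cover the true label with a specified probability - can be illustrated with a minimal sketch. This is not the thesis's method, only the standard smoothed-free p-value construction under exchangeability; the nonconformity scores and labels below are hypothetical.

```python
# Minimal sketch of split conformal prediction: include a label in the
# prediction set whenever its conformal p-value exceeds epsilon.  Under
# exchangeability the resulting set covers the true label with
# probability at least 1 - epsilon.

def conformal_set(cal_scores, test_scores, epsilon):
    """cal_scores: nonconformity scores of the calibration examples.
    test_scores: {candidate label: nonconformity score} for one test object.
    Returns the list of labels whose p-value exceeds epsilon."""
    n = len(cal_scores)
    prediction = []
    for label, s in test_scores.items():
        # p-value: fraction of calibration scores at least as nonconforming,
        # counting the test object itself (hence the +1 / n+1).
        p = (sum(1 for c in cal_scores if c >= s) + 1) / (n + 1)
        if p > epsilon:
            prediction.append(label)
    return prediction
```

Lowering epsilon widens the prediction set: the coverage guarantee is bought with less specific predictions.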

RIS

TY - THES

T1 - Small and Large Scale Probabilistic Classifiers with Guarantees of Validity

AU - Petej, Ivan

PY - 2018

Y1 - 2018

N2 - This thesis addresses and expands research in probabilistic prediction with a particular emphasis on generating forecasts which are well calibrated. Chapter 1 describes standard techniques in machine learning and outlines two methods - conformal prediction and Venn prediction - both of which serve as important building blocks for the remainder of the results in this thesis. Chapter 2 introduces the field of probabilistic machine learning and highlights some of the advantages and challenges of the methods developed to date. Chapter 3 proposes a new method of probabilistic prediction which is based on conformal prediction - a machine learning method for generating prediction sets that are guaranteed to have a specified coverage probability. The method is applied to the standard USPS data set with encouraging results. Chapter 4 focuses on the study of Venn prediction, concentrating on binary prediction problems. Venn predictors produce probability-type predictions for the labels of test objects which are guaranteed to be well calibrated under the standard assumption that the observations are generated independently from the same distribution. A new class of Venn predictors is introduced, called Venn–Abers predictors, which are based on the idea of isotonic regression. Promising empirical results are demonstrated both for Venn–Abers predictors and for their more computationally efficient simplified version. Chapter 5 studies theoretically and empirically a method of turning machine learning algorithms into probabilistic predictors that, like the Venn–Abers predictors described in the preceding chapter, automatically enjoy a property of validity (perfect calibration) but are computationally more efficient. The price to pay for perfect calibration is that these probabilistic predictors produce imprecise probabilities. When these imprecise probabilities are merged into precise probabilities, the resulting predictors, while losing the theoretical property of perfect calibration, are shown to be consistently more accurate than the existing methods in empirical studies.

AB - This thesis addresses and expands research in probabilistic prediction with a particular emphasis on generating forecasts which are well calibrated. Chapter 1 describes standard techniques in machine learning and outlines two methods - conformal prediction and Venn prediction - both of which serve as important building blocks for the remainder of the results in this thesis. Chapter 2 introduces the field of probabilistic machine learning and highlights some of the advantages and challenges of the methods developed to date. Chapter 3 proposes a new method of probabilistic prediction which is based on conformal prediction - a machine learning method for generating prediction sets that are guaranteed to have a specified coverage probability. The method is applied to the standard USPS data set with encouraging results. Chapter 4 focuses on the study of Venn prediction, concentrating on binary prediction problems. Venn predictors produce probability-type predictions for the labels of test objects which are guaranteed to be well calibrated under the standard assumption that the observations are generated independently from the same distribution. A new class of Venn predictors is introduced, called Venn–Abers predictors, which are based on the idea of isotonic regression. Promising empirical results are demonstrated both for Venn–Abers predictors and for their more computationally efficient simplified version. Chapter 5 studies theoretically and empirically a method of turning machine learning algorithms into probabilistic predictors that, like the Venn–Abers predictors described in the preceding chapter, automatically enjoy a property of validity (perfect calibration) but are computationally more efficient. The price to pay for perfect calibration is that these probabilistic predictors produce imprecise probabilities. When these imprecise probabilities are merged into precise probabilities, the resulting predictors, while losing the theoretical property of perfect calibration, are shown to be consistently more accurate than the existing methods in empirical studies.

KW - Venn machine

KW - reliable probabilistic prediction

KW - additional information

KW - transfer

KW - Conformal Prediction

KW - Probabilistic prediction

KW - Class membership probability

M3 - Doctoral Thesis

ER -
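
The Venn–Abers construction summarised in the abstract (Chapter 4) can be sketched in a few lines. This is a simplified illustration, not the thesis's implementation: for one test object, isotonic regression (pool adjacent violators) is fitted twice on the calibration scores - once postulating label 0, once postulating label 1 - and the two fitted values at the test score form an interval of imprecise, well-calibrated probabilities. All scores and labels below are hypothetical.

```python
# Sketch of the Venn-Abers idea: two isotonic-regression fits per test
# object yield a lower and upper calibrated probability [p0, p1].

def pav(scores, labels):
    """Isotonic (non-decreasing) regression of labels on scores via
    pool adjacent violators; returns a step-function predictor."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    blocks = []  # each block: [sum of labels, count, smallest score]
    for i in order:
        blocks.append([float(labels[i]), 1, scores[i]])
        # merge while the previous block's mean exceeds the current one's
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, n, _ = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    levels = [(b[2], b[0] / b[1]) for b in blocks]

    def predict(score):
        # value of the step function at the given score
        value = levels[0][1]
        for x0, v in levels:
            if score >= x0:
                value = v
            else:
                break
        return value

    return predict

def venn_abers_interval(cal_scores, cal_labels, test_score):
    """Lower and upper calibrated probability for one test score."""
    p0 = pav(cal_scores + [test_score], cal_labels + [0])(test_score)
    p1 = pav(cal_scores + [test_score], cal_labels + [1])(test_score)
    return p0, p1
```

The width of [p0, p1] reflects the imprecision the abstract mentions as the price of guaranteed calibration; merging the two values (e.g. p1 / (1 - p0 + p1)) gives a single precise probability at the cost of that guarantee.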