Universal algorithms for multinomial logistic regression under Kullback-Leibler game. / Dzhamtyrova, Raisa; Kalnishkan, Yuri.

In: Neurocomputing, Vol. 397, 15.07.2020, p. 369-380.

Research output: Contribution to journal › Article › peer-review

Published


BibTeX

@article{f6afda098b2540bfa61df0fa181d43de,
title = "Universal algorithms for multinomial logistic regression under Kullback-Leibler game",
abstract = "We consider the framework of competitive prediction, where one provides guarantees compared to other predictive models that are called experts. We propose a universal algorithm predicting finite-dimensional distributions, i.e. points from a simplex, under Kullback-Leibler game. In the standard framework for prediction with expert advice, the performance of the learner is measured by means of the cumulative loss. In this paper we consider a generalisation of this setting and discount losses with time. A natural choice of predictors for the probability games is a class of multinomial logistic regression functions as they output a distribution that lies inside a probability simplex. We consider the class of multinomial logistic regressions to be our experts. We provide a strategy that allows us to `track the best expert' of this type and derive the theoretical bound on the discounted loss of the strategy. We provide the kernelized version of our algorithm, which competes with a wider set of experts from Reproducing Kernel Hilbert Space (RKHS) and prove a theoretical guarantee for the kernelized strategy. We carry out experiments on three data sets and compare the cumulative losses of our algorithm and multinomial logistic regression.",
keywords = "online learning, sequential prediction, competitive prediction, universal algorithms, loss bounds, Kullback-Leibler game",
author = "Raisa Dzhamtyrova and Yuri Kalnishkan",
year = "2020",
month = jul,
day = "15",
doi = "10.1016/j.neucom.2019.06.105",
language = "English",
volume = "397",
pages = "369--380",
journal = "Neurocomputing",
issn = "0925-2312",
publisher = "Elsevier",

}

RIS

TY - JOUR

T1 - Universal algorithms for multinomial logistic regression under Kullback-Leibler game

AU - Dzhamtyrova, Raisa

AU - Kalnishkan, Yuri

PY - 2020/7/15

Y1 - 2020/7/15

N2 - We consider the framework of competitive prediction, where one provides guarantees relative to other predictive models, called experts. We propose a universal algorithm for predicting finite-dimensional distributions, i.e. points from a simplex, under the Kullback-Leibler game. In the standard framework for prediction with expert advice, the performance of the learner is measured by means of the cumulative loss. In this paper we consider a generalisation of this setting and discount losses with time. A natural choice of predictors for probability games is the class of multinomial logistic regression functions, as they output a distribution that lies inside a probability simplex. We take the class of multinomial logistic regressions to be our experts. We provide a strategy that allows us to `track the best expert' of this type and derive a theoretical bound on the discounted loss of the strategy. We also provide a kernelized version of our algorithm, which competes with a wider set of experts from a Reproducing Kernel Hilbert Space (RKHS), and prove a theoretical guarantee for the kernelized strategy. We carry out experiments on three data sets and compare the cumulative losses of our algorithm and multinomial logistic regression.

AB - We consider the framework of competitive prediction, where one provides guarantees relative to other predictive models, called experts. We propose a universal algorithm for predicting finite-dimensional distributions, i.e. points from a simplex, under the Kullback-Leibler game. In the standard framework for prediction with expert advice, the performance of the learner is measured by means of the cumulative loss. In this paper we consider a generalisation of this setting and discount losses with time. A natural choice of predictors for probability games is the class of multinomial logistic regression functions, as they output a distribution that lies inside a probability simplex. We take the class of multinomial logistic regressions to be our experts. We provide a strategy that allows us to `track the best expert' of this type and derive a theoretical bound on the discounted loss of the strategy. We also provide a kernelized version of our algorithm, which competes with a wider set of experts from a Reproducing Kernel Hilbert Space (RKHS), and prove a theoretical guarantee for the kernelized strategy. We carry out experiments on three data sets and compare the cumulative losses of our algorithm and multinomial logistic regression.

KW - online learning

KW - sequential prediction

KW - competitive prediction

KW - universal algorithms

KW - loss bounds

KW - Kullback-Leibler game

U2 - 10.1016/j.neucom.2019.06.105

DO - 10.1016/j.neucom.2019.06.105

M3 - Article

VL - 397

SP - 369

EP - 380

JO - Neurocomputing

JF - Neurocomputing

SN - 0925-2312

ER -
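
Note: the abstract above describes a learner and experts that predict points in a probability simplex, scored by Kullback-Leibler loss accumulated with time discounting. As a rough illustration of that setting only (not the paper's aggregating strategy), the following minimal Python sketch scores a single multinomial logistic regression expert under a geometrically discounted KL loss; the weight matrix W, the discount factor alpha, and the synthetic data are hypothetical choices, not values from the paper.

import numpy as np

def softmax(z):
    """Map raw scores to a point in the probability simplex."""
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_loss(outcome, prediction, eps=1e-12):
    """Kullback-Leibler divergence KL(outcome || prediction)."""
    p = np.clip(prediction, eps, 1.0)
    q = np.clip(outcome, eps, 1.0)
    return float(np.sum(q * np.log(q / p)))

# Illustrative only: one multinomial logistic regression expert with a
# fixed hypothetical weight matrix W, scored on synthetic data under a
# geometrically discounted cumulative KL loss.
rng = np.random.default_rng(0)
n_classes, n_features, T = 3, 5, 100
W = rng.normal(size=(n_classes, n_features))  # hypothetical expert weights
alpha = 0.99  # hypothetical discount factor; older losses weigh less

discounted_loss = 0.0
for t in range(T):
    x = rng.normal(size=n_features)                  # feature vector
    y = np.eye(n_classes)[rng.integers(n_classes)]   # one-hot outcome
    gamma = softmax(W @ x)                           # prediction in the simplex
    # discounted accumulation: loss from step t carries weight alpha**(T - t)
    discounted_loss = alpha * discounted_loss + kl_loss(y, gamma)

print(f"discounted cumulative KL loss over {T} steps: {discounted_loss:.3f}")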