On Generating Efficient Data Summaries for Logistic Regression: A Coreset-based Approach

Nery Riquelme Granada, Khuong An Nguyen, Zhiyuan Luo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In the era of datasets of unprecedented size, data compression techniques are an attractive approach for speeding up machine learning algorithms. One of the most successful paradigms for achieving good-quality compression is that of coresets: small summaries of data that act as proxies for the original input data. Even though coresets have proved extremely useful for accelerating unsupervised learning problems, applying them to supervised learning problems may introduce unexpected computational bottlenecks. We show that this is the case for Logistic Regression classification, and hence propose two methods for accelerating the computation of coresets for this problem. When coresets are computed with our methods on three public datasets, computing the coreset and learning from it is 11 times faster than learning directly from the full input data in the worst case, and 34 times faster in the best case. Furthermore, our results indicate that our acceleration approaches do not degrade the empirical performance of coresets.
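For readers unfamiliar with the pattern described in the abstract, the sketch below illustrates the generic coreset workflow for logistic regression: importance-sample a small weighted subset of the data, then fit a weighted model on that subset instead of the full dataset. The sampling scores, synthetic data, and function names are illustrative assumptions only; this is not the paper's proposed acceleration methods.

```python
# Minimal sketch of the generic coreset idea for logistic regression.
# The norm-based importance scores are a simple stand-in, NOT the
# acceleration methods proposed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_coreset(X, y, m, seed=None):
    """Sample m points with probability proportional to an importance
    score; return the subset and its inverse-probability weights."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    norms = np.linalg.norm(X, axis=1)
    # Illustrative scores: a uniform term plus a norm-based term.
    scores = 0.5 / n + 0.5 * norms / norms.sum()
    probs = scores / scores.sum()
    idx = rng.choice(n, size=m, replace=True, p=probs)
    weights = 1.0 / (m * probs[idx])  # reweight so the coreset approximates the full loss
    return X[idx], y[idx], weights

# Usage: fit on the small weighted coreset instead of the full data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 20))
y = (X @ rng.normal(size=20) > 0).astype(int)

Xc, yc, w = build_coreset(X, y, m=2_000, seed=1)
model = LogisticRegression(max_iter=1000).fit(Xc, yc, sample_weight=w)
```

The key design point is that the coreset carries weights: the weighted logistic loss on the 2,000 sampled points is intended to approximate the loss on all 100,000 points, which is what makes learning from the summary so much faster.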
Original language: English
Title of host publication: 9th International Conference on Data Science, Technology and Applications (DATA 2020)
Pages: 78-89
Number of pages: 12
Volume: 1
ISBN (Electronic): 978-989-758-440-4
DOIs
Publication status: Published - Jul 2020
