Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory. / Wang, Wei; Wang, Hao; Zhang, Chen; Gao, Yang.

In: Neural Computation, Vol. 30, No. 3, 03.2018, p. 820-855.

Research output: Contribution to journal › Article › peer-review

Published

Harvard

Wang, W, Wang, H, Zhang, C & Gao, Y 2018, 'Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory', Neural Computation, vol. 30, no. 3, pp. 820-855. https://doi.org/10.1162/neco_a_01053

APA

Wang, W., Wang, H., Zhang, C., & Gao, Y. (2018). Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory. Neural Computation, 30(3), 820-855. https://doi.org/10.1162/neco_a_01053

Vancouver

Wang W, Wang H, Zhang C, Gao Y. Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory. Neural Computation. 2018 Mar;30(3):820-855. https://doi.org/10.1162/neco_a_01053

Author

Wang, Wei ; Wang, Hao ; Zhang, Chen ; Gao, Yang. / Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory. In: Neural Computation. 2018 ; Vol. 30, No. 3. pp. 820-855.

BibTeX

@article{58193de3d8b3493c85f45916a89bf245,
title = "Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory",
abstract = "Learning an appropriate distance metric plays a substantial role in the success of many learning machines. Conventional metric learning algorithms have limited utility when the training and test samples are drawn from related but different domains (i.e., source domain and target domain). In this letter, we propose two novel metric learning algorithms for domain adaptation in an information-theoretic setting, allowing for discriminating power transfer and standard learning machine propagation across two domains. In the first one, a cross-domain Mahalanobis distance is learned by combining three goals: reducing the distribution difference between different domains, preserving the geometry of target domain data, and aligning the geometry of source domain data with label information. Furthermore, we devote our efforts to solving complex domain adaptation problems and go beyond linear cross-domain metric learning by extending the first method to a multiple kernel learning framework. A convex combination of multiple kernels and a linear transformation are adaptively learned in a single optimization, which greatly benefits the exploration of prior knowledge and the description of data characteristics. Comprehensive experiments in three real-world applications (face recognition, text classification, and object categorization) verify that the proposed methods outperform state-of-the-art metric learning and domain adaptation methods.",
author = "Wei Wang and Hao Wang and Chen Zhang and Yang Gao",
year = "2018",
month = mar,
doi = "10.1162/neco_a_01053",
language = "English",
volume = "30",
pages = "820--855",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "3",
}
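
For orientation, the cross-domain Mahalanobis distance named in the abstract belongs to the family d_M(x, y) = sqrt((x - y)^T M (x - y)) with M positive semidefinite. The Python sketch below illustrates that distance family only, assuming a hypothetical placeholder transformation L with M = L^T L; it does not reproduce the paper's information-theoretic objective for learning the metric.

import numpy as np

def mahalanobis_distance(x, y, L):
    # d_M(x, y) = sqrt((x - y)^T M (x - y)) with M = L^T L, which makes M
    # positive semidefinite by construction; equivalently, this is the
    # Euclidean distance after mapping the difference through L.
    diff = L @ (x - y)
    return float(np.sqrt(diff @ diff))

# Toy usage: L here is a stand-in for whatever transformation a
# cross-domain method would learn from source and target data.
rng = np.random.default_rng(0)
L = rng.standard_normal((2, 3))   # rank-2 projection of 3-d inputs
x = rng.standard_normal(3)
y = rng.standard_normal(3)
print(mahalanobis_distance(x, y, L))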

RIS

TY  - JOUR
T1  - Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory
AU  - Wang, Wei
AU  - Wang, Hao
AU  - Zhang, Chen
AU  - Gao, Yang
PY  - 2018/3
Y1  - 2018/3
N2  - Learning an appropriate distance metric plays a substantial role in the success of many learning machines. Conventional metric learning algorithms have limited utility when the training and test samples are drawn from related but different domains (i.e., source domain and target domain). In this letter, we propose two novel metric learning algorithms for domain adaptation in an information-theoretic setting, allowing for discriminating power transfer and standard learning machine propagation across two domains. In the first one, a cross-domain Mahalanobis distance is learned by combining three goals: reducing the distribution difference between different domains, preserving the geometry of target domain data, and aligning the geometry of source domain data with label information. Furthermore, we devote our efforts to solving complex domain adaptation problems and go beyond linear cross-domain metric learning by extending the first method to a multiple kernel learning framework. A convex combination of multiple kernels and a linear transformation are adaptively learned in a single optimization, which greatly benefits the exploration of prior knowledge and the description of data characteristics. Comprehensive experiments in three real-world applications (face recognition, text classification, and object categorization) verify that the proposed methods outperform state-of-the-art metric learning and domain adaptation methods.
AB  - Learning an appropriate distance metric plays a substantial role in the success of many learning machines. Conventional metric learning algorithms have limited utility when the training and test samples are drawn from related but different domains (i.e., source domain and target domain). In this letter, we propose two novel metric learning algorithms for domain adaptation in an information-theoretic setting, allowing for discriminating power transfer and standard learning machine propagation across two domains. In the first one, a cross-domain Mahalanobis distance is learned by combining three goals: reducing the distribution difference between different domains, preserving the geometry of target domain data, and aligning the geometry of source domain data with label information. Furthermore, we devote our efforts to solving complex domain adaptation problems and go beyond linear cross-domain metric learning by extending the first method to a multiple kernel learning framework. A convex combination of multiple kernels and a linear transformation are adaptively learned in a single optimization, which greatly benefits the exploration of prior knowledge and the description of data characteristics. Comprehensive experiments in three real-world applications (face recognition, text classification, and object categorization) verify that the proposed methods outperform state-of-the-art metric learning and domain adaptation methods.
U2  - 10.1162/neco_a_01053
DO  - 10.1162/neco_a_01053
M3  - Article
VL  - 30
SP  - 820
EP  - 855
JO  - Neural Computation
JF  - Neural Computation
SN  - 0899-7667
IS  - 3
ER  -
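
The abstract's second method learns a convex combination of multiple kernels jointly with a linear transformation. The sketch below illustrates the kernel-combination ingredient alone, under the standard multiple-kernel-learning constraints beta_m >= 0 and sum_m beta_m = 1, with fixed toy weights and toy base kernels; the paper's single joint optimization that learns the weights adaptively is not reproduced here.

import numpy as np

def combined_kernel(kernel_mats, beta):
    # Convex combination K = sum_m beta_m * K_m. Any convex combination of
    # positive semidefinite Gram matrices is itself positive semidefinite,
    # so K remains a valid kernel matrix.
    beta = np.asarray(beta, dtype=float)
    assert np.all(beta >= 0) and np.isclose(beta.sum(), 1.0)
    return sum(b * K for b, K in zip(beta, kernel_mats))

# Toy usage with two base kernels on the same five samples; the weights
# are fixed here, whereas an MKL method would learn them from data.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
K_lin = X @ X.T                                           # linear kernel
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K_rbf = np.exp(-sq / 2.0)                                 # RBF kernel, sigma^2 = 1
K = combined_kernel([K_lin, K_rbf], beta=[0.3, 0.7])
print(K.shape)  # (5, 5)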