Explaining Deep Learning Models with Constrained Adversarial Examples. / Moore, Jonathan; Hammerla, Nils; Watkins, Chris.

Pacific Rim International Conference on Artificial Intelligence. Springer, 2019. p. 43-56 (Lecture Notes in Computer Science; Vol. 11670).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

E-pub ahead of print

Standard

Explaining Deep Learning Models with Constrained Adversarial Examples. / Moore, Jonathan; Hammerla, Nils; Watkins, Chris.

Pacific Rim International Conference on Artificial Intelligence. Springer, 2019. p. 43-56 (Lecture Notes in Computer Science; Vol. 11670).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Harvard

Moore, J, Hammerla, N & Watkins, C 2019, Explaining Deep Learning Models with Constrained Adversarial Examples. in Pacific Rim International Conference on Artificial Intelligence. Lecture Notes in Computer Science, vol. 11670, Springer, pp. 43-56. https://doi.org/10.1007/978-3-030-29908-8_4

APA

Moore, J., Hammerla, N., & Watkins, C. (2019). Explaining Deep Learning Models with Constrained Adversarial Examples. In Pacific Rim International Conference on Artificial Intelligence (pp. 43-56). (Lecture Notes in Computer Science; Vol. 11670). Springer. https://doi.org/10.1007/978-3-030-29908-8_4

Vancouver

Moore J, Hammerla N, Watkins C. Explaining Deep Learning Models with Constrained Adversarial Examples. In Pacific Rim International Conference on Artificial Intelligence. Springer. 2019. p. 43-56. (Lecture Notes in Computer Science; Vol. 11670). https://doi.org/10.1007/978-3-030-29908-8_4

Author

Moore, Jonathan ; Hammerla, Nils ; Watkins, Chris. / Explaining Deep Learning Models with Constrained Adversarial Examples. Pacific Rim International Conference on Artificial Intelligence. Springer, 2019. pp. 43-56 (Lecture Notes in Computer Science).

BibTeX

@inproceedings{c282fb2651f24da5a92fa70886e37dad,
title = "Explaining Deep Learning Models with Constrained Adversarial Examples",
abstract = "Machine learning algorithms generally suffer from a problem of explainability. Given a classification result from a model, it is typically hard to determine what caused the decision to be made, and to give an informative explanation. We explore a new method of generating counterfactual explanations, which, instead of explaining why a particular classification was made, explain how a different outcome can be achieved. This gives the recipients of the explanation a better way to understand the outcome, and provides an actionable suggestion. We show that the introduced method of Constrained Adversarial Examples (CADEX) can be used in real world applications, and yields explanations which incorporate business or domain constraints such as handling categorical attributes and range constraints.",
author = "Jonathan Moore and Nils Hammerla and Chris Watkins",
year = "2019",
month = aug,
day = "23",
doi = "10.1007/978-3-030-29908-8_4",
language = "English",
isbn = "978-3-030-29907-1",
series = "Lecture Notes in Computer Science",
publisher = "Springer",
pages = "43--56",
booktitle = "Pacific Rim International Conference on Artificial Intelligence",

}

RIS

TY - GEN

T1 - Explaining Deep Learning Models with Constrained Adversarial Examples

AU - Moore, Jonathan

AU - Hammerla, Nils

AU - Watkins, Chris

PY - 2019/8/23

Y1 - 2019/8/23

N2 - Machine learning algorithms generally suffer from a problem of explainability. Given a classification result from a model, it is typically hard to determine what caused the decision to be made, and to give an informative explanation. We explore a new method of generating counterfactual explanations, which, instead of explaining why a particular classification was made, explain how a different outcome can be achieved. This gives the recipients of the explanation a better way to understand the outcome, and provides an actionable suggestion. We show that the introduced method of Constrained Adversarial Examples (CADEX) can be used in real world applications, and yields explanations which incorporate business or domain constraints such as handling categorical attributes and range constraints.

AB - Machine learning algorithms generally suffer from a problem of explainability. Given a classification result from a model, it is typically hard to determine what caused the decision to be made, and to give an informative explanation. We explore a new method of generating counterfactual explanations, which, instead of explaining why a particular classification was made, explain how a different outcome can be achieved. This gives the recipients of the explanation a better way to understand the outcome, and provides an actionable suggestion. We show that the introduced method of Constrained Adversarial Examples (CADEX) can be used in real world applications, and yields explanations which incorporate business or domain constraints such as handling categorical attributes and range constraints.

U2 - 10.1007/978-3-030-29908-8_4

DO - 10.1007/978-3-030-29908-8_4

M3 - Conference contribution

SN - 978-3-030-29907-1

T3 - Lecture Notes in Computer Science

SP - 43

EP - 56

BT - Pacific Rim International Conference on Artificial Intelligence

PB - Springer

ER -