Explaining Deep Learning Models with Constrained Adversarial Examples

Jonathan Moore, Nils Hammerla, Chris Watkins

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Machine learning algorithms generally suffer from a problem of explainability. Given a classification result from a model, it is typically hard to determine what caused the decision to be made, and to give an informative explanation. We explore a new method of generating counterfactual explanations, which, instead of explaining why a particular classification was made, explains how a different outcome can be achieved. This gives the recipients of the explanation a better way to understand the outcome, and provides an actionable suggestion. We show that the introduced method of Constrained Adversarial Examples (CADEX) can be used in real world applications, and yields explanations which incorporate business or domain constraints, such as handling categorical attributes and range constraints.
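The abstract describes searching for a nearby input that flips the model's decision while respecting domain constraints. The following is a minimal sketch of that general idea, not the authors' CADEX implementation: a logistic-regression stand-in for the deep model (weights `w`, `b` are illustrative), gradient steps on the input toward the target class, and per-feature range constraints enforced by clipping. Categorical-attribute handling, which CADEX also covers, is omitted here.

```python
import numpy as np

# Illustrative stand-in model: logistic regression with made-up weights.
w = np.array([1.5, -2.0, 0.5])
b = -0.25

def predict(x):
    """Probability of class 1 under the stand-in model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=1.0, lr=0.1, steps=500,
                   lo=np.array([0.0, 0.0, 0.0]),
                   hi=np.array([1.0, 1.0, 1.0])):
    """Gradient-based counterfactual search with range constraints.

    Moves x toward the target class by descending the binary
    cross-entropy loss w.r.t. the input, clipping each feature to
    [lo, hi] after every step so the result stays in-domain.
    """
    x = x.astype(float).copy()
    for _ in range(steps):
        p = predict(x)
        if (p > 0.5) == (target > 0.5):
            break  # decision flipped; stop as soon as the class changes
        # For the logistic model, d(BCE)/dx = (p - target) * w.
        x -= lr * (p - target) * w
        x = np.clip(x, lo, hi)  # enforce the range constraints
    return x

x0 = np.array([0.2, 0.9, 0.3])   # classified as class 0 by the stand-in
x_cf = counterfactual(x0)        # nearby input classified as class 1
```

The clipping step is the simplest way to impose range constraints; a real implementation would also need a mechanism to keep categorical (e.g. one-hot) attributes valid during the search.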
Original language: English
Title of host publication: Pacific Rim International Conference on Artificial Intelligence
Number of pages: 14
ISBN (Electronic): 978-3-030-29908-8
ISBN (Print): 978-3-030-29907-1
Publication status: E-pub ahead of print - 23 Aug 2019

Publication series

Name: Lecture Notes in Computer Science
