Reference : Towards Refined Classifications Driven by SHAP Explanations
Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Computer science
Computational Sciences; Finance
http://hdl.handle.net/10993/52114
Towards Refined Classifications Driven by SHAP Explanations
English
Arslan, Yusuf [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX]
Lebichot, Bertrand [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX]
Allix, Kevin [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX]
Veiber, Lisa [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX]
Lefebvre, Clement
Boytsov, Andrey
Goujon, Anne
Bissyande, Tegawendé François D Assise [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX]
Klein, Jacques [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX]
11-Aug-2022
Machine Learning and Knowledge Extraction
Holzinger, Andreas
Kieseberg, Peter
Tjoa, A. Min
Weippl, Edgar
Springer
68-81
Yes
International
978-3-031-14463-9
Cross Domain Conference for Machine Learning & Knowledge Extraction
from 23-08-2022 to 26-08-2022
17th International Conference on Availability, Reliability and Security (ARES 2022)
Vienna
Austria
[en] Interpretable Machine Learning ; SHAP Explanations ; Second-step Classification
[en] Machine Learning (ML) models are inherently approximate; as a result, the predictions of an ML model can be wrong. In applications where errors can jeopardize a company's reputation, human experts often have to manually check the alarms raised by ML models, as wrong or delayed decisions can have a significant business impact. These experts often rely on interpretable ML tools to verify predictions. However, post-prediction verification is also costly. In this paper, we hypothesize that the outputs of interpretable ML tools, such as SHAP explanations, can be exploited by machine learning techniques to improve classifier performance and thereby reduce the cost of post-prediction analysis. To confirm this intuition, we conduct several experiments in which we use SHAP explanations directly as new features. In particular, across nine datasets, we first compare the performance of these "SHAP features" against traditional "base features" on binary classification tasks. We then add a second-step classifier relying on SHAP features, with the goal of reducing the false-positive and false-negative results of typical classifiers. We show that SHAP explanations used as SHAP features can help improve classification performance, especially for false-negative reduction.
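The pipeline summarized in the abstract can be illustrated with a short sketch. The code below is a hypothetical reconstruction, not the authors' implementation: synthetic data stands in for the paper's nine datasets, the model choices (a gradient-boosted base classifier and a logistic-regression second step) are arbitrary, and the second-step classifier is trained here on SHAP features alone.

```python
# Minimal sketch, assuming a binary classification task: derive "SHAP features"
# from a base classifier and feed them to a second-step classifier.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary data stands in for the paper's nine datasets (assumption).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: train a base classifier on the original "base features".
base_clf = GradientBoostingClassifier(random_state=0)
base_clf.fit(X_train, y_train)

# Step 2: compute SHAP explanations of the base classifier and reuse them
# as "SHAP features" (for this model, shap_values returns one contribution
# per sample and feature; shapes may differ with other models/SHAP versions).
explainer = shap.TreeExplainer(base_clf)
shap_train = explainer.shap_values(X_train)
shap_test = explainer.shap_values(X_test)

# Step 3: train a second-step classifier on the SHAP features.
second_clf = LogisticRegression(max_iter=1000)
second_clf.fit(shap_train, y_train)

print("Base accuracy:       ", base_clf.score(X_test, y_test))
print("Second-step accuracy:", second_clf.score(shap_test, y_test))
```

In the paper, the second-step classifier is used to reduce the false positives and false negatives of typical classifiers; this sketch only shows the mechanics of turning SHAP explanations into features for a second model.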

File(s) associated to this reference

Fulltext file(s):

CDMAKE2022.pdf (Publisher postprint, 2.9 MB, Open access)
