Paper published in a book (Scientific congresses, symposiums and conference proceedings)
On the Suitability of SHAP Explanations for Refining Classifications
Arslan, Yusuf; Lebichot, Bertrand; Allix, Kevin et al.
2022. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence (ICAART 2022)
Peer reviewed
 

Files


Full Text
ICAART_2022_106_CR.pdf
Author postprint (1.1 MB)



Details



Keywords :
SHAP Explanations; Shapley Values; Explainable Machine Learning; Clustering; Rule Mining
Abstract :
[en] In industrial contexts, when an ML model classifies a sample as positive, it raises an alarm, which is subsequently sent to human analysts for verification. Reducing the number of false alarms upstream in an ML pipeline is paramount to reducing the workload of experts while increasing customers’ trust. Increasingly, SHAP Explanations are leveraged to facilitate manual analysis. Because they have been shown to be useful to human analysts in the detection of false positives, we postulate that SHAP Explanations may provide a means to automate false-positive reduction. To confirm our intuition, we evaluate clustering and rule detection metrics with ground truth labels to understand the utility of SHAP Explanations in discriminating false positives from true positives. We show that SHAP Explanations are indeed relevant in discriminating samples and are a relevant candidate to automate ML tasks and help detect and reduce false-positive results.
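The evaluation described in the abstract (clustering SHAP explanations of alarm-raising samples and checking whether the clusters align with the true-/false-positive ground truth) can be illustrated with a minimal, hypothetical Python sketch. This is not the authors' code; the dataset, model, and metric choices (make_classification, RandomForestClassifier, shap.TreeExplainer, KMeans, adjusted Rand index) are assumptions made for illustration only.

```python
# Hypothetical sketch (not the paper's implementation): do SHAP explanations
# of alarm-raising samples separate false positives from true positives?
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an imbalanced industrial alarm dataset.
X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# "Alarms" are the test samples the model classifies as positive.
alarm_mask = model.predict(X_test) == 1
X_alarm, y_alarm = X_test[alarm_mask], y_test[alarm_mask]
is_true_positive = (y_alarm == 1).astype(int)  # ground truth: TP vs. FP

# SHAP explanations of the alarms (contributions toward the positive class).
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_alarm)
if isinstance(sv, list):    # older shap versions: one array per class
    sv = sv[1]
elif sv.ndim == 3:          # newer shap versions: (samples, features, classes)
    sv = sv[:, :, 1]

# Cluster the SHAP vectors and measure agreement with the TP/FP ground truth.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sv)
print("Adjusted Rand Index (clusters vs. TP/FP):",
      adjusted_rand_score(is_true_positive, clusters))
```

A high agreement score in such a setup would indicate that the SHAP vectors carry enough signal to discriminate false positives from true positives, which is the kind of evidence the paper reports using clustering and rule-mining metrics.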
Research center :
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Trustworthy Software Engineering (TruX)
Disciplines :
Computer science
Author, co-author :
Arslan, Yusuf ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
Lebichot, Bertrand ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
Allix, Kevin ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
Veiber, Lisa ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
Lefebvre, Clément
Boytsov, Andrey 
Goujon, Anne
Bissyande, Tegawendé François D'Assise ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
Klein, Jacques ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
External co-authors :
no
Language :
English
Title :
On the Suitability of SHAP Explanations for Refining Classifications
Publication date :
February 2022
Event name :
14th International Conference on Agents and Artificial Intelligence
Event date :
from 03-02-2022 to 05-02-2022
Audience :
International
Main work title :
In Proceedings of the 14th International Conference on Agents and Artificial Intelligence (ICAART 2022)
Peer reviewed :
Peer reviewed
Focus Area :
Finance
FnR Project :
FNR13778825 - Explainable Machine Learning In Fintech, 2019 (01/07/2019-30/06/2022) - Jacques Klein
Name of the research project :
ExLiFT
Funders :
FNR - Fonds National de la Recherche [LU]
Available on ORBilu :
since 09 December 2021

Statistics


Number of views
519 (22 by Unilu)
Number of downloads
215 (10 by Unilu)

Scopus citations® :
1
Scopus citations® without self-citations :
0
