Reference : On the Suitability of SHAP Explanations for Refining Classifications
Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Computer science
Finance
http://hdl.handle.net/10993/48926
On the Suitability of SHAP Explanations for Refining Classifications
English
Arslan, Yusuf [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > TruX]
Lebichot, Bertrand [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > TruX]
Allix, Kevin [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > TruX]
Veiber, Lisa [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > TruX]
Lefebvre, Clément
Boytsov, Andrey
Goujon, Anne
Bissyandé, Tegawendé François D'Assise [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > TruX]
Klein, Jacques [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > TruX]
Feb-2022
In Proceedings of the 14th International Conference on Agents and Artificial Intelligence (ICAART 2022)
Yes
International
14th International Conference on Agents and Artificial Intelligence
from 03-02-2022 to 05-02-2022
[en] SHAP Explanations ; Shapley Values ; Explainable Machine Learning ; Clustering ; Rule Mining
[en] In industrial contexts, when an ML model classifies a sample as positive, it raises an alarm that is subsequently sent to human analysts for verification. Reducing the number of false alarms upstream in an ML pipeline is paramount to reducing the workload of experts while increasing customers’ trust. SHAP Explanations are increasingly leveraged to facilitate this manual analysis. Because they have been shown to help human analysts detect false positives, we postulate that SHAP Explanations may provide a means to automate false-positive reduction. To confirm this intuition, we evaluate clustering and rule detection metrics against ground-truth labels to assess how well SHAP Explanations discriminate false positives from true positives. We show that SHAP Explanations are indeed relevant for discriminating samples and are thus a promising candidate for automating ML tasks and for helping to detect and reduce false-positive results.
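The idea sketched in the abstract (clustering samples in SHAP-value space and checking whether the clusters separate true from false positives) can be illustrated with a short Python sketch. The model, synthetic data, explainer, and agreement metric (adjusted Rand index) below are illustrative assumptions, not the paper's experimental setup.

import shap
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import adjusted_rand_score

# Synthetic stand-in for an industrial dataset (hypothetical, for illustration).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# "Alarms" are the samples the model classifies as positive.
alarms = model.predict(X) == 1
X_alarms, y_alarms = X[alarms], y[alarms]  # y_alarms: 1 = true positive, 0 = false positive

# SHAP attributions for the alarm samples (TreeExplainer suits tree ensembles).
explainer = shap.TreeExplainer(model)
shap_vals = explainer.shap_values(X_alarms)
# The return shape differs across shap versions and model types; keep the
# positive-class attributions either way.
if isinstance(shap_vals, list):
    shap_vals = shap_vals[1]
elif shap_vals.ndim == 3:
    shap_vals = shap_vals[:, :, 1]

# Cluster samples in SHAP space and measure agreement with the TP/FP labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(shap_vals)
print("ARI between SHAP clusters and TP/FP labels:",
      adjusted_rand_score(y_alarms, clusters))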
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Trustworthy Software Engineering (TruX)
Fonds National de la Recherche - FnR
ExLiFT
Researchers ; Professionals ; Students
FnR ; FNR13778825 > Jacques Klein > ExLiFT > Explainable Machine Learning In Fintech > 01/07/2019 > 30/06/2022 > 2019

File(s) associated with this reference

Fulltext file(s):

File: ICAART_2022_106_CR.pdf
Version: Author postprint
Size: 1.08 MB
Access: Open access

