References of "Arslan, Yusuf"

Towards Refined Classifications Driven by SHAP Explanations
Arslan, Yusuf; Lebichot, Bertrand; Allix, Kevin et al.

in Holzinger, Andreas; Kieseberg, Peter; Tjoa, A. Min (Eds.) et al., Machine Learning and Knowledge Extraction (2022, August 11)

Machine Learning (ML) models are inherently approximate; as a result, the predictions of an ML model can be wrong. In applications where errors can jeopardize a company's reputation, human experts often have to manually check the alarms raised by ML models, as wrong or delayed decisions can have a significant business impact. These experts often use interpretable ML tools to verify predictions. However, post-prediction verification is also costly. In this paper, we hypothesize that the outputs of interpretable ML tools, such as SHAP explanations, can be exploited by machine learning techniques to improve classifier performance, thereby reducing the cost of post-prediction analysis. To confirm our intuition, we conduct several experiments in which we use SHAP explanations directly as new features. In particular, on nine datasets, we first compare the performance of these "SHAP features" against the traditional "base features" on binary classification tasks. Then, we add a second-step classifier relying on SHAP features, with the goal of reducing the false-positive and false-negative results of typical classifiers. We show that SHAP explanations used as SHAP features can help improve classification performance, especially for false-negative reduction.
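The abstract's second experiment — a second-step classifier trained on "SHAP features" — can be illustrated in a few lines. A minimal sketch, assuming the shap and scikit-learn libraries; the synthetic dataset, the random-forest/logistic-regression pairing, and the splits are illustrative assumptions, not the paper's actual setup:

```python
# Sketch: reuse SHAP explanations of a first classifier as input features
# for a second-step classifier (illustrative setup, not the paper's).
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)

# Step 1: base classifier trained on the original "base features".
base_clf = RandomForestClassifier(n_estimators=100, random_state=0)
base_clf.fit(X_tr, y_tr)

# SHAP values of the base model; TreeExplainer handles tree ensembles.
# Older shap versions return a list per class, newer ones a 3-D array.
explainer = shap.TreeExplainer(base_clf)
sv_tr, sv_te = explainer.shap_values(X_tr), explainer.shap_values(X_te)
feats_tr = sv_tr[1] if isinstance(sv_tr, list) else sv_tr[..., 1]
feats_te = sv_te[1] if isinstance(sv_te, list) else sv_te[..., 1]

# Step 2: second-step classifier trained on the "SHAP features". In
# practice the SHAP features should come from data the base model was not
# fitted on, to avoid leaking its overconfidence on its own training set.
second_clf = LogisticRegression(max_iter=1000).fit(feats_tr, y_tr)
print(classification_report(y_te, second_clf.predict(feats_te)))
```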

On the Suitability of SHAP Explanations for Refining Classifications
Arslan, Yusuf; Lebichot, Bertrand; Allix, Kevin et al.

in Proceedings of the 14th International Conference on Agents and Artificial Intelligence (ICAART 2022) (2022, February)

In industrial contexts, when an ML model classifies a sample as positive, it raises an alarm, which is subsequently sent to human analysts for verification. Reducing the number of false alarms upstream in an ML pipeline is paramount to reduce the workload of experts while increasing customers' trust. Increasingly, SHAP explanations are leveraged to facilitate manual analysis. Because they have been shown to be useful to human analysts in the detection of false positives, we postulate that SHAP explanations may provide a means to automate false-positive reduction. To confirm our intuition, we evaluate clustering and rule-detection metrics against ground-truth labels to understand how well SHAP explanations discriminate false positives from true positives. We show that SHAP explanations are indeed relevant for discriminating samples and are a promising candidate for automating ML tasks, helping to detect and reduce false-positive results.
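The evaluation this abstract describes — do SHAP explanations of the alarms separate false positives from true positives? — can be approximated by clustering the explanations and scoring the clusters against ground truth. A minimal sketch, assuming shap and scikit-learn; the synthetic dataset and the adjusted-Rand/silhouette metrics are illustrative stand-ins for the paper's exact choices:

```python
# Sketch: cluster SHAP explanations of predicted positives ("alarms") and
# measure agreement with ground-truth TP/FP labels (illustrative setup).
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, silhouette_score

X, y = make_classification(n_samples=3000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4,
                                          stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
alarms = clf.predict(X_te) == 1   # samples that would be sent to analysts
is_tp = y_te[alarms]              # 1 = true positive, 0 = false positive

explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X_te[alarms])
shap_alarms = sv[1] if isinstance(sv, list) else sv[..., 1]

# Cluster the explanations; if SHAP values discriminate FPs from TPs,
# the two clusters should line up with the ground-truth split.
clusters = KMeans(n_clusters=2, n_init=10,
                  random_state=0).fit_predict(shap_alarms)
print("Adjusted Rand index vs TP/FP:", adjusted_rand_score(is_tp, clusters))
print("Silhouette of SHAP clusters: ", silhouette_score(shap_alarms, clusters))
```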

Exploiting Prototypical Explanations for Undersampling Imbalanced Datasets
Arslan, Yusuf; Allix, Kevin; Lefebvre, Clément et al.

in 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA) (2022)

Among the reported solutions to the class-imbalance issue, undersampling approaches, which remove insignificant samples from the majority class, are quite prevalent. However, undersampling approaches may discard significant patterns in the datasets. A prototype, which is always an actual sample from the data, represents a group of samples in the dataset. Our hypothesis is that prototypes can fill in the significant patterns discarded by undersampling methods and help to improve model performance. To confirm our intuition, we combine prototypes with undersampling methods in the machine learning pipeline. We show that there is a statistically significant difference between the AUPR and AUROC results of undersampling methods and those of our approach.
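One way to make this combination concrete: undersample the majority class, then add back prototypes — actual majority-class samples nearest to cluster centroids — so that patterns dropped by the undersampling are still represented. A minimal sketch with scikit-learn; the KMeans-based prototype selection and the dataset are illustrative assumptions, not necessarily the paper's procedure:

```python
# Sketch: random undersampling of the majority class, plus prototypes
# (real samples nearest to KMeans centroids) to restore lost patterns.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05],
                           random_state=0)
maj, mino = X[y == 0], X[y == 1]

# Plain random undersampling: keep as many majority samples as minority ones.
rng = np.random.default_rng(0)
kept = maj[rng.choice(len(maj), size=len(mino), replace=False)]

# Prototypes: cluster the *full* majority class, then take the real sample
# nearest to each centroid, so every prototype is an actual instance.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(maj)
prototypes = maj[pairwise_distances_argmin(km.cluster_centers_, maj)]

# Training set = undersampled majority + prototypes + all minority samples;
# a classifier fitted on it can then be compared (AUPR/AUROC) against one
# fitted on the plain undersampled set, as in the abstract's evaluation.
X_bal = np.vstack([kept, prototypes, mino])
y_bal = np.hstack([np.zeros(len(kept) + len(prototypes)), np.ones(len(mino))])
```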

A Comparison of Pre-Trained Language Models for Multi-Class Text Classification in the Financial Domain
Arslan, Yusuf; Allix, Kevin; Veiber, Lisa et al.

in Companion Proceedings of the Web Conference 2021 (WWW '21 Companion), April 19–23, 2021, Ljubljana, Slovenia (2021, April 19)

Challenges Towards Production-Ready Explainable Machine Learning
Veiber, Lisa; Allix, Kevin; Arslan, Yusuf et al.

in Veiber, Lisa; Allix, Kevin; Arslan, Yusuf (Eds.) et al., Proceedings of the 2020 USENIX Conference on Operational Machine Learning (OpML 20) (2020, July)

Machine Learning (ML) is increasingly prominent in organizations. While these algorithms can provide near-perfect accuracy, their decision-making process remains opaque. In a context of accelerating regulation of Artificial Intelligence (AI) and deepening user awareness, explainability has become a priority, notably in critical healthcare and financial environments. The various frameworks developed often overlook integration into operational applications, as we discovered with our industrial partner. In this paper, we present explainability in ML and its relevance to our industrial partner. We then discuss the main challenges we have faced in integrating explainability frameworks into production. Finally, we provide recommendations given those challenges.
