Arslan, Yusuf. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence (ICAART 2022) (2022, February).

In industrial contexts, when an ML model classifies a sample as positive, it raises an alarm, which is subsequently sent to human analysts for verification. Reducing the number of false alarms upstream in an ML pipeline is paramount to reducing the workload of experts while increasing customers' trust. Increasingly, SHAP Explanations are leveraged to facilitate manual analysis. Because they have been shown to be useful to human analysts in the detection of false positives, we postulate that SHAP Explanations may provide a means to automate false-positive reduction. To confirm our intuition, we evaluate clustering and rule-detection metrics against ground-truth labels to understand how well SHAP Explanations discriminate false positives from true positives. We show that SHAP Explanations are indeed relevant in discriminating samples and are a relevant candidate for automating ML tasks and for detecting and reducing false-positive results.

Arslan, Yusuf. In Companion Proceedings of the Web Conference 2021 (WWW '21 Companion), April 19–23, 2021, Ljubljana, Slovenia (2021, April 19).

Veiber, Lisa. In Veiber, Lisa; Allix, Kevin; Arslan, Yusuf (Eds.) et al., Proceedings of the 2020 USENIX Conference on Operational Machine Learning (OpML 20) (2020, July).

Machine Learning (ML) is increasingly prominent in organizations. While those algorithms can provide near-perfect accuracy, their decision-making process remains opaque. In a context of accelerating regulation in Artificial Intelligence (AI) and deepening user awareness, explainability has become a priority, notably in critical healthcare and financial environments. The various frameworks developed often overlook their integration into operational applications, as we discovered with our industrial partner. In this paper, we present explainability in ML and its relevance to our industrial partner. We then discuss the main challenges we have faced in integrating explainability frameworks in production. Finally, we provide recommendations given those challenges.
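As a rough illustration of the idea described in the ICAART 2022 abstract above (clustering SHAP explanation vectors of model-raised alarms and checking how well the clusters separate true from false positives), the sketch below computes SHAP values for a classifier's positive predictions, clusters them with k-means, and compares the clusters to ground-truth TP/FP labels via the adjusted Rand index. The synthetic dataset, random-forest model, and metric choices are illustrative assumptions, not the authors' actual pipeline.

```python
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import adjusted_rand_score

# Synthetic stand-in for an industrial alarm dataset (assumption).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# "Alarms" are the samples the model classifies as positive.
pred = model.predict(X)
alarms, alarm_truth = X[pred == 1], y[pred == 1]  # truth: 1 = TP, 0 = FP

# SHAP explanations for the alarms.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(alarms)
# Depending on the shap version this is a per-class list or a 3-D array;
# keep the attributions for the positive class in either case.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]

# Cluster the explanation vectors and measure agreement with TP/FP labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sv_pos)
print("ARI between SHAP clusters and TP/FP labels:",
      adjusted_rand_score(alarm_truth, clusters))
```

A high agreement between explanation clusters and the TP/FP split would support the paper's claim that SHAP Explanations can help automate false-positive reduction; the paper itself evaluates several clustering and rule-detection metrics rather than this single score.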