[en] Machine Learning (ML) is increasingly prominent in organizations. While ML algorithms can achieve near-perfect accuracy, their decision-making process remains opaque. In a context of accelerating regulation of Artificial Intelligence (AI) and deepening user awareness, explainability has become a priority, notably in critical healthcare and financial environments. As we discovered with our industrial partner, the various explainability frameworks developed to date often overlook their integration into operational applications. In this paper, we present explainability in ML and its relevance to our industrial partner. We then discuss the main challenges we have faced when integrating explainability frameworks into production. Finally, we provide recommendations in light of those challenges.
Disciplines :
Computer science
Author, co-author :
Veiber, Lisa ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)
Allix, Kevin ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Computer Science and Communications Research Unit (CSC)
Arslan, Yusuf ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)
Klein, Jacques ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Computer Science and Communications Research Unit (CSC)
External co-authors :
no
Language :
English
Title :
Challenges Towards Production-Ready Explainable Machine Learning
Publication date :
July 2020
Event name :
2020 USENIX Conference on Operational Machine Learning
Event organizer :
USENIX
Event place :
United States - California
Event date :
28-07-2020 to 07-08-2020
Audience :
International
Main work title :
Proceedings of the 2020 USENIX Conference on Operational Machine Learning (OpML 20)