Keywords :
Deep Neural Networks; Black Box Problem; XAI; explanation; understanding; scientific models
Abstract :
[en] What Deep Neural Networks (DNNs) can do is impressive, yet they are notoriously opaque. Responding to the worries associated with this opacity, the field of XAI has produced a plethora of methods purporting to explain the workings of DNNs. Unsurprisingly, a whole host of questions revolves around the notion of explanation central to this field. This note provides a roadmap of the recent work that tackles these questions from the perspective of philosophical ideas on explanations and models in science.
Disciplines :
Computer science
Author, co-author :
KNOKS, Aleks ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
RALEIGH, Thomas ; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Humanities (DHUM)
External co-authors :
no
Language :
English
Title :
XAI and philosophical work on explanation: A roadmap
Publication date :
2022
Event name :
1st Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming
Event organizer :
University of Udine
Event place :
Udine, Italy
Event date :
December 2, 2022
Audience :
International
Journal title :
CEUR Workshop Proceedings
eISSN :
1613-0073
Publisher :
CEUR-WS.org
Special issue title :
Proceedings of the 1st Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming
References :
J. Burrell, How the machine ‘thinks’: Understanding opacity in machine learning algorithms, Big Data & Society 3 (2016).
R. Guidotti, A. Monreale, D. Pedreschi, F. Giannotti, Principles of explainable artificial intelligence, Springer International Publishing, 2021, pp. 9–31.
T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence 267 (2019) 1–38.
A. Páez, The pragmatic turn in explainable artificial intelligence (XAI), Minds and Machines 29 (2019) 441–459.
W. Fleisher, Understanding, idealization, and explainable AI, Episteme (forthcoming).
E. Sullivan, Understanding from machine learning models, The British Journal for the Philosophy of Science 73 (2022).
A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, S. Thrun, Dermatologist-level classification of skin cancer with deep neural networks, Nature 542 (2017) 115–118.
Y. Wang, M. Kosinski, Deep neural networks are more accurate than humans at detecting sexual orientation from facial images, Journal of Personality and Social Psychology 114 (2018) 246–257.
T. C. Schelling, Dynamic models of segregation, The Journal of Mathematical Sociology 1 (1971) 143–186.
J. M. Durán, Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare, Artificial Intelligence 297 (2021).
J. M. Durán, K. R. Jongsma, Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI, Journal of Medical Ethics 47 (2021) 329–335.
B. Babic, S. Gerke, T. Evgeniou, I. G. Cohen, Beware explanations from AI in health care, Science 373 (2021) 284–286.
Z. Lipton, The mythos of model interpretability, Queue 16 (2018) 31–57.