Keywords:
Deep Neural Networks; Black Box Problem; XAI; explanation; understanding; scientific models
Abstract:
What Deep Neural Networks (DNNs) can do is impressive, yet they are notoriously opaque. Responding to the worries associated with this opacity, the field of XAI has produced a plethora of methods purporting to explain the workings of DNNs. Unsurprisingly, a whole host of questions revolves around the notion of explanation central to this field. This note provides a roadmap of recent work that tackles these questions from the perspective of philosophical ideas on explanations and models in science.
Disciplines:
Computer science
Author, co-author:
KNOKS, Aleks ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
RALEIGH, Thomas ; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Humanities (DHUM)
External co-authors:
No
Document language:
English
Title:
XAI and philosophical work on explanation: A roadmap
Publication date:
2022
Event name:
1st Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming
Event organizer:
University of Udine
Event location:
Udine, Italy
Event date:
2 December 2022
Event scope:
International
Journal title:
Proceedings of 1st Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming
eISSN:
1613-0073
Publisher:
CEUR-WS.org
Special issue title:
Proceedings of 1st Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming