Knoks, Aleks
in Proceedings of 1st Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming (2022), 3319
What Deep Neural Networks (DNNs) can do is impressive, yet they are notoriously opaque. Responding to the worries associated with this opacity, the field of XAI has produced a plethora of methods purporting to explain the workings of DNNs. Unsurprisingly, a whole host of questions revolves around the notion of explanation central to this field. This note provides a roadmap of the recent work that tackles these questions from the perspective of philosophical ideas on explanations and models in science.