Article (Scientific journals)
Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering
ATTAOUI, Mohammed Oualid; FAHMY, Hazem; PASTORE, Fabrizio et al.
2022, in ACM Transactions on Software Engineering and Methodology
Peer reviewed, verified by ORBi

Documents

Full text: DNNExplanation_BlackBoxApproach.pdf
Author postprint (4.44 MB)

All documents in ORBilu are protected by a user license.
Details

Keywords:
DNN Debugging; Transfer Learning; Clustering; DNN Functional Safety Analysis; DNN Explanation
Abstract:
Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning to support many features in safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress regarding automated support for functional safety analysis in DNN-based systems. For example, the identification of root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this paper, we propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors. SAFE relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images. It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images modeling plausible causes of error. Finally, clusters are used to effectively retrain and improve the DNN. The black-box nature of SAFE is motivated by our objective not to require changes or even access to the DNN internals, to facilitate adoption. Experimental results show the superior ability of SAFE in identifying different root causes of DNN errors based on case studies in the automotive domain. It also yields significant improvements in DNN accuracy after retraining, while saving significant execution time and memory when compared to alternatives.
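As a rough illustration of the pipeline the abstract describes (feature extraction followed by density-based clustering of error-inducing images), the sketch below stands in synthetic 2-D feature vectors for the features a pre-trained ImageNet model would extract, and uses scikit-learn's DBSCAN as the density-based clustering step. The data and the `eps`/`min_samples` values are illustrative assumptions, not the settings from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Stand-ins for feature vectors of error-inducing images; in SAFE these
# would come from a transfer-learning model pre-trained on ImageNet.
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=(0.0, 0.0), scale=0.1, size=(40, 2))
cluster_b = rng.normal(loc=(5.0, 5.0), scale=0.1, size=(40, 2))
outliers = rng.uniform(low=-2.0, high=7.0, size=(5, 2))
features = np.vstack([cluster_a, cluster_b, outliers])

# Density-based clustering groups dense regions into arbitrarily shaped
# clusters (candidate root causes) and labels low-density points as
# noise (label -1), without fixing the number of clusters in advance.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)
```

Each resulting cluster collects images that fail for a plausibly common reason; in SAFE, these clusters then drive the selection of additional training data for retraining.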
Research center:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Software Verification and Validation Lab (SVV Lab)
Disciplines:
Computer science
Author, co-author:
ATTAOUI, Mohammed Oualid; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV
FAHMY, Hazem; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV
PASTORE, Fabrizio; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV
BRIAND, Lionel; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV
External co-authors:
yes
Document language:
English
Title:
Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering
Publication date:
July 2022
Journal title:
ACM Transactions on Software Engineering and Methodology
ISSN:
1049-331X
Publisher:
Association for Computing Machinery (ACM), United States
Peer reviewed:
Peer reviewed, verified by ORBi
Focus Area:
Computational Sciences
European project:
H2020 - 694277 - TUNE - Testing the Untestable: Model Testing of Complex Software-Intensive Systems
FNR project:
FNR14711346 - Functional Safety For Autonomous Systems, 2020 (01/08/2020-31/07/2023) - Fabrizio Pastore
Research project title:
BRIDGES2020/IS/14711346/FUNTASY
Funding body:
FNR - Fonds National de la Recherche
EC - European Commission
European Union
Available on ORBilu:
since 10 October 2022

Statistics


Number of views
520 (including 20 from Unilu)
Number of downloads
200 (including 4 from Unilu)

Scopus® citations: 18
Scopus® citations (excluding self-citations): 12
OpenCitations citations: 0
OpenAlex citations: 20
WoS citations: 14
