Scientific journals : Article
Engineering, computing & technology : Computer science
Computational Sciences
http://hdl.handle.net/10993/52372
Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering
English
Attaoui, Mohammed Oualid [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV]
Fahmy, Hazem [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV]
Pastore, Fabrizio [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV]
Briand, Lionel [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV]
Jul-2022
ACM Transactions on Software Engineering and Methodology
Association for Computing Machinery (ACM)
Peer reviewed: Yes
Audience: International
ISSN: 1049-331X
Country: United States
[en] DNN Debugging ; Transfer Learning ; Clustering ; DNN Functional Safety Analysis ; DNN Explanation
[en] Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning in supporting many features of safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress on automated support for functional safety analysis in DNN-based systems. For example, identifying the root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this paper, we propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors. SAFE relies on a transfer-learning model pre-trained on ImageNet to extract features from error-inducing images. It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images that model plausible causes of error. Lastly, the clusters are used to effectively retrain and improve the DNN. The black-box nature of SAFE is motivated by our objective of not requiring changes to, or even access to, the DNN internals, thereby facilitating adoption.

Experimental results show the superior ability of SAFE to identify different root causes of DNN errors, based on case studies in the automotive domain. SAFE also yields significant improvements in DNN accuracy after retraining, while saving significant execution time and memory compared to alternatives.
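The pipeline the abstract describes (pre-trained feature extraction followed by density-based clustering of error-inducing images) can be sketched as below. This is a minimal illustration, not the paper's implementation: the synthetic feature vectors stand in for features a pre-trained ImageNet backbone would produce, and the choice of DBSCAN and its parameter values are assumptions, since the record does not specify them.

```python
# Sketch of a SAFE-style clustering step (illustrative assumptions throughout).
# In the paper, features come from an ImageNet-pretrained model applied to
# error-inducing images; here we use synthetic feature vectors as stand-ins.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Two dense groups of "error-inducing image" features plus a few sparse outliers.
features = np.vstack([
    rng.normal(0.0, 0.05, size=(30, 8)),   # plausible root cause A
    rng.normal(1.0, 0.05, size=(30, 8)),   # plausible root cause B
    rng.uniform(-2.0, 3.0, size=(5, 8)),   # scattered noise images
])

# A density-based algorithm (DBSCAN here) finds arbitrarily shaped dense
# clusters; points labeled -1 are treated as noise rather than forced into
# a cluster. Each resulting cluster models one plausible cause of error.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters, int((labels == -1).sum()))
```

In a retraining step, images from each cluster would then be used to augment the training set, targeting the failure mode the cluster represents.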
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Software Verification and Validation Lab (SVV Lab)
Fonds National de la Recherche - FnR ; European Commission - EC
BRIDGES2020/IS/14711346/FUNTASY
Target audience: Researchers ; Professionals ; Students
10.1145/3550271
https://dl.acm.org/doi/10.1145/3550271
H2020 ; 694277 - TUNE - Testing the Untestable: Model Testing of Complex Software-Intensive Systems
FnR ; FNR14711346 > Fabrizio Pastore > FUNTASY > Functional Safety For Autonomous Systems > 01/08/2020 > 31/07/2023 > 2020

File(s) associated with this reference

Fulltext file(s):

DNNExplanation_BlackBoxApproach.pdf (Author postprint, 4.34 MB, Open access)


All documents in ORBilu are protected by a user license.