Reference : Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning
Scientific journals : Article
Engineering, computing & technology : Computer science
http://hdl.handle.net/10993/46838
Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning
English
Fahmy, Hazem mailto [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV >]
Pastore, Fabrizio mailto [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV >]
Bagherzadeh, Mojtaba [University of Ottawa]
Briand, Lionel mailto [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV >]
In press
IEEE Transactions on Reliability
Institute of Electrical and Electronics Engineers
Special Section on Quality Assurance of Machine Learning Systems
Yes
International
0018-9529
New York
NY
[en] DNN Explanation ; DNN Functional Safety Analysis ; Debugging ; Heatmaps ; AI
[en] Deep neural networks (DNNs) are increasingly important in safety-critical systems, for example in their perception layer to analyze images. Unfortunately, there is a lack of methods to ensure the functional safety of DNN-based components.
We observe three major challenges with existing practices regarding DNNs in safety-critical systems: (1) scenarios that are underrepresented in the test set may pose serious safety violation risks yet remain unnoticed; (2) characterizing such high-risk scenarios is critical for safety analysis; (3) retraining DNNs to address these risks is poorly supported when the causes of violations are difficult to determine.
To address these problems in the context of DNNs analyzing images, we propose HUDD, an approach that automatically supports the identification of root causes for DNN errors. HUDD identifies root causes by applying a clustering algorithm to heatmaps capturing the relevance of every DNN neuron on the DNN outcome. Also, HUDD retrains DNNs with images that are automatically selected based on their relatedness to the identified image clusters.
We evaluated HUDD with DNNs from the automotive domain. HUDD was able to identify all the distinct root causes of DNN errors, thus supporting safety analysis. Also, our retraining approach proved more effective at improving DNN accuracy than existing approaches.
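The core idea described in the abstract, grouping error-inducing images by the similarity of their neuron-relevance heatmaps, can be illustrated with a minimal sketch. This is not the paper's implementation: the synthetic heatmaps, the Euclidean distance, and the Ward-linkage hierarchical clustering (via SciPy) are assumptions chosen for illustration only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical data: one flattened heatmap per error-inducing image,
# where each entry is a relevance score for one DNN neuron.
# Two synthetic groups stand in for two distinct root causes.
rng = np.random.default_rng(0)
heatmaps = np.vstack([
    rng.normal(0.0, 0.1, size=(20, 64)),  # images sharing root cause A
    rng.normal(1.0, 0.1, size=(20, 64)),  # images sharing root cause B
])

# Pairwise distances between heatmaps (Euclidean, assumed for this sketch).
distances = pdist(heatmaps, metric="euclidean")

# Agglomerative clustering; cut the dendrogram into two clusters.
tree = linkage(distances, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")

print(labels)  # images with similar heatmaps share a cluster label
```

Each resulting cluster groups images whose errors are plausibly due to the same root cause; an engineer can then inspect one cluster at a time, and new training images can be selected by their proximity to a cluster.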
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Software Verification and Validation Lab (SVV Lab)
European Commission - EC
BRIDGES2020/IS/14711346/FUNTASY
Researchers ; Professionals
H2020 ; 694277 - TUNE - Testing the Untestable: Model Testing of Complex Software-Intensive Systems
FnR ; FNR14711346 > Fabrizio Pastore > FUNTASY > Functional Safety For Autonomous Systems > 01/08/2020 > 31/07/2023 > 2020


Fulltext file(s):

HUDD.pdf (author postprint, 1.78 MB, open access)

