Article (Scientific journals)
Can Offline Testing of Deep Neural Networks Replace Their Online Testing?
Ul Haq, Fitash; Shin, Donghwan; Nejati, Shiva et al.
2021, in Empirical Software Engineering, 26(5)

Files


Full Text: main.pdf (Author postprint, 1.92 MB)

Details



Keywords :
Deep Learning; Testing; Self-driving Cars
Abstract :
[en] We distinguish two general modes of testing for Deep Neural Networks (DNNs): offline testing, where DNNs are tested as individual units based on test datasets obtained without involving the DNNs under test, and online testing, where DNNs are embedded into a specific application environment and tested in a closed-loop mode in interaction with that environment. Typically, DNNs are subjected to both types of testing during their development life cycle: offline testing is applied immediately after DNN training, and online testing follows once a DNN is deployed within a specific application environment. In this paper, we study the relationship between offline and online testing. Our goal is to determine how offline testing and online testing differ or complement one another, and whether offline testing results can be used to help reduce the cost of online testing. Though these questions are generally relevant to all autonomous systems, we study them in the context of automated driving systems where, as study subjects, we use DNNs automating end-to-end control of the steering function of self-driving vehicles. Our results show that offline testing is less effective than online testing, as many safety violations identified by online testing could not be identified by offline testing, while large prediction errors generated by offline testing always led to severe safety violations detectable by online testing. Further, we cannot exploit offline testing results to reduce the cost of online testing in practice, since we are not able to identify specific situations where offline testing could be as accurate as online testing in identifying safety requirement violations.
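To make the two testing modes concrete, the minimal Python sketch below contrasts an open-loop offline test (a prediction error computed over a fixed test dataset) with a closed-loop online test (a safety check inside a simulation). It is an illustration under assumptions only: the model and simulator interfaces, the lane-offset metric, and the thresholds are hypothetical, not the study's actual subjects, simulators, or metrics.

```python
# Illustrative sketch of offline vs. online testing of a steering DNN
# (hypothetical interfaces; not the tooling used in the paper).
import numpy as np


def offline_test(model, images, true_angles):
    """Offline testing: evaluate the DNN as an isolated unit on a fixed test
    dataset and report an open-loop prediction error (here, mean absolute error)."""
    predictions = np.array([model.predict(img) for img in images])
    return float(np.mean(np.abs(predictions - np.array(true_angles))))


def online_test(model, simulator, max_steps=1000, max_lane_offset=1.5):
    """Online testing: embed the DNN in a closed-loop simulation where its
    steering output feeds back into the environment, and check a safety
    requirement (here, staying within `max_lane_offset` metres of the lane centre)."""
    observation = simulator.reset()
    for _ in range(max_steps):
        steering = model.predict(observation)                 # DNN acts on the current frame
        observation, lane_offset = simulator.step(steering)   # environment reacts to the DNN
        if abs(lane_offset) > max_lane_offset:
            return False  # safety violation detected by online testing
    return True  # no violation observed within the episode
```

The contrast the paper studies is visible in the sketch: offline_test never lets the DNN's errors accumulate, whereas in online_test each prediction changes the next input the DNN sees, which is why the two modes can disagree on whether a DNN is safe.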
Research center :
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Software Verification and Validation Lab (SVV Lab)
Disciplines :
Computer science
Author, co-author :
Ul Haq, Fitash ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV
Shin, Donghwan ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV
Nejati, Shiva ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV ; University of Ottawa
Briand, Lionel ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV ; University of Ottawa
External co-authors :
yes
Language :
English
Title :
Can Offline Testing of Deep Neural Networks Replace Their Online Testing?
Publication date :
05 July 2021
Journal title :
Empirical Software Engineering
ISSN :
1573-7616
Publisher :
Kluwer Academic Publishers, Netherlands
Volume :
26
Issue :
5
Peer reviewed :
Peer Reviewed verified by ORBi
Focus Area :
Security, Reliability and Trust
European Projects :
H2020 - 694277 - TUNE - Testing the Untestable: Model Testing of Complex Software-Intensive Systems
FnR Project :
FNR14711346 - Functional Safety For Autonomous Systems, 2020 (01/08/2020-31/07/2023) - Fabrizio Pastore
Funders :
CE - Commission Européenne [BE]
Available on ORBilu :
since 03 May 2021

Statistics


Number of views: 220 (58 by Unilu)
Number of downloads: 511 (29 by Unilu)

Scopus citations®: 16
Scopus citations® without self-citations: 12
OpenCitations: 2
WoS citations: 14
