Paper published in a book (Scientific congresses, symposiums and conference proceedings)
Enhancing Autonomous Vehicle Safety through N-version Machine Learning Systems
Wen, Qiang; RODRIGUES DE MENDONÇA NETO, Júlio; Machida, Fumio et al.
2024. In Proceedings of the Workshop on Artificial Intelligence Safety 2024 (AISafety 2024)
Peer reviewed
 

Files


Full Text
AISafety_N_version_ML.pdf
Author preprint (12.26 MB) Creative Commons License - Attribution, Non-Commercial, ShareAlike



Details



Keywords :
autonomous driving; fault injection; machine learning system; N-version programming; perception
Abstract :
[en] Unreliable outputs of machine learning (ML) models are a significant concern, particularly for safety-critical applications such as autonomous driving. ML models are susceptible to out-of-distribution samples, distribution shifts, hardware transient faults, and even malicious attacks. To address these concerns, N-version ML systems offer a general solution for enhancing the reliability of ML system outputs by diversifying the ML models and their inputs. However, existing studies of N-version ML systems have focused mainly on classification errors and have not considered their impact in a practical application scenario. In this paper, we investigate the applicability of the N-version ML approach in an autonomous vehicle (AV) scenario within the AV simulator CARLA. We deploy two-version and three-version perception systems in an AV implemented in CARLA, using both healthy ML models and compromised ML models generated with fault-injection techniques, and analyze the behavior of the AV in the simulator. Our findings reveal the critical impact of compromised models on AV collision rates and show the potential of three-version perception systems to mitigate this risk. Our three-version perception system improves driving safety by tolerating one compromised model and by delaying collisions when at least one healthy model remains.
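Illustration (not from the paper): the abstract builds on the classic N-version programming pattern of running diversified model versions on the same input and arbitrating their outputs. The minimal Python sketch below assumes a simple per-frame majority vote over three perception outputs; the Prediction type, the model stubs, and the no-majority fallback are illustrative assumptions, and the paper's actual CARLA perception pipeline and arbitration mechanism may differ.

# Minimal sketch of an N-version arbiter via majority voting (illustrative only).
from collections import Counter
from typing import Callable, List, Optional

Prediction = str  # e.g. a per-frame perception label such as "pedestrian" or "clear"

def n_version_predict(models: List[Callable[[object], Prediction]],
                      frame: object) -> Optional[Prediction]:
    """Run every model version on the same frame and return the majority output.

    With N = 3, one compromised model is outvoted by the two healthy ones.
    If no strict majority exists, return None so the driving stack can fall
    back to a safe action (e.g. braking).
    """
    outputs = [model(frame) for model in models]
    label, votes = Counter(outputs).most_common(1)[0]
    return label if votes > len(models) // 2 else None

if __name__ == "__main__":
    healthy = lambda frame: "pedestrian"
    compromised = lambda frame: "clear"   # e.g. output corrupted by fault injection
    # Two healthy versions outvote the single compromised one:
    print(n_version_predict([healthy, healthy, compromised], frame=None))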
Disciplines :
Computer science
Author, co-author :
Wen, Qiang;  University of Tsukuba > Department of Computer Science
RODRIGUES DE MENDONÇA NETO, Júlio  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CritiX
Machida, Fumio;  University of Tsukuba > Department of Computer Science
VÖLP, Marcus  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CritiX
External co-authors :
yes
Language :
English
Title :
Enhancing Autonomous Vehicle Safety through N-version Machine Learning Systems
Publication date :
04 August 2024
Event name :
AISafety workshop
Event place :
Jeju, South Korea
Event date :
August 4th, 2024
Audience :
International
Main work title :
Proceedings of the Workshop on Artificial Intelligence Safety 2024 (AISafety 2024)
Publisher :
CEUR Workshop Proceedings
Peer reviewed :
Peer reviewed
Focus Area :
Security, Reliability and Trust
FnR Project :
FNR13691843 - Byzrt: Intrusion Resilient Real-time Communication And Computation In Autonomous Systems, 2019 (01/09/2020-31/08/2023) - Marcus Völp
Funders :
FNR - Fonds National de la Recherche
Funding number :
C19-IS-13691843
Available on ORBilu :
since 17 July 2024
