Paper published in a book (Scientific congresses, symposiums and conference proceedings)
Enhancing Autonomous Vehicle Safety through N-version Machine Learning Systems
Wen, Qiang; RODRIGUES DE MENDONÇA NETO, Júlio; Machida, Fumio et al.
2024, in Proceedings of the Workshop on Artificial Intelligence Safety 2024 (AISafety 2024)
Peer reviewed
 

Documents


Full text
AISafety_N_version_ML.pdf
Author preprint (12.26 MB), Creative Commons License: Attribution-NonCommercial-ShareAlike

All documents on ORBilu are protected by a user license.

Details



Keywords:
autonomous driving; fault injection; machine learning system; N-version programming; perception
Abstract:
[en] Unreliable outputs of machine learning (ML) models are a significant concern, particularly for safety-critical applications such as autonomous driving. ML models are susceptible to out-of-distribution samples, distribution shifts, transient hardware faults, and even malicious attacks. To address these concerns, the N-version ML system offers a general solution for enhancing the reliability of ML system outputs by diversifying the ML models and their inputs. However, existing studies of N-version ML systems have mainly focused on classification errors and have not considered their impacts in a practical application scenario. In this paper, we investigate the applicability of the N-version ML approach in an autonomous vehicle (AV) scenario within the AV simulator CARLA. We deploy two-version and three-version perception systems in an AV implemented in CARLA, using both healthy ML models and compromised ML models generated through fault-injection techniques, and analyze the behavior of the AV in the simulator. Our findings reveal the critical impact of compromised models on AV collision rates and show the potential of three-version perception systems to mitigate this risk. Our three-version perception system improves driving safety by tolerating one compromised model and delaying collisions when at least one healthy model remains.
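The abstract does not include code; the core fault-tolerance mechanism it describes, a three-version perception system outvoting one compromised model, can be sketched as majority voting over diversified model outputs. The function and label names below are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter
from typing import Optional, Sequence

def majority_vote(outputs: Sequence[str]) -> Optional[str]:
    """Return the perception label agreed on by a strict majority of
    model versions, or None when no majority exists (the vehicle would
    then fall back to a safe action such as braking)."""
    label, count = Counter(outputs).most_common(1)[0]
    return label if count > len(outputs) / 2 else None

# Three-version system: a single compromised model is outvoted.
print(majority_vote(["pedestrian", "pedestrian", "clear"]))  # pedestrian
# Two-version system: a disagreement cannot be resolved by voting alone.
print(majority_vote(["pedestrian", "clear"]))  # None
```

This illustrates why the two-version system can only detect a disagreement, while the three-version system can mask one faulty output as long as at least two healthy models agree.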
Disciplines:
Computer science
Author, co-author:
Wen, Qiang;  University of Tsukuba > Department of Computer Science
RODRIGUES DE MENDONÇA NETO, Júlio  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CritiX
Machida, Fumio;  University of Tsukuba > Department of Computer Science
VÖLP, Marcus  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CritiX
External co-authors:
yes
Document language:
English
Title:
Enhancing Autonomous Vehicle Safety through N-version Machine Learning Systems
Publication date:
4 August 2024
Event name:
AISafety workshop
Event venue:
Jeju, South Korea
Event date:
4 August 2024
Event scope:
International
Title of the main work:
Proceedings of the Workshop on Artificial Intelligence Safety 2024 (AISafety 2024)
Publisher:
CEUR Workshop Proceedings
Peer reviewed:
Peer reviewed
Focus Area :
Security, Reliability and Trust
FNR project:
FNR13691843 - Byzrt: Intrusion Resilient Real-time Communication And Computation In Autonomous Systems, 2019 (01/09/2020-31/08/2023) - Marcus Völp
Funding body:
FNR - Fonds National de la Recherche
Fund number:
C19-IS-13691843
Available on ORBilu:
since 17 July 2024

Statistics


Number of views
191 (including 10 from Unilu)
Number of downloads
182 (including 2 from Unilu)

Scopus® citations: 1
Scopus® citations excluding self-citations: 0
