Unreliable outputs of machine learning (ML) models are a significant concern, particularly in safety-critical applications such as autonomous driving. ML models are susceptible to out-of-distribution samples, distribution shifts, hardware transient faults, and even malicious attacks. To address these concerns, the N-version ML system offers a general solution for enhancing the reliability of ML system outputs by diversifying ML models and their inputs. However, existing studies of N-version ML systems have mainly focused on classification errors and have not considered their impacts in practical application scenarios. In this paper, we investigate the applicability of the N-version ML approach in an autonomous vehicle (AV) scenario within the AV simulator CARLA. We deploy two-version and three-version perception systems in an AV implemented in CARLA, using both healthy ML models and compromised ML models generated with fault-injection techniques, and analyze the behavior of the AV in the simulator. Our findings reveal the critical impact of compromised models on AV collision rates and show the potential of three-version perception systems to mitigate this risk. Our three-version perception system improves driving safety by tolerating one compromised model and delaying collisions when at least one healthy model remains.
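The fault-tolerance property described above — a three-version system outvoting a single compromised model — can be illustrated with a minimal majority-vote sketch. The function name and the plain label-voting arbitration are illustrative assumptions, not the paper's actual perception-fusion mechanism:

```python
from collections import Counter

def three_version_vote(outputs):
    """Majority vote over the outputs of three model versions.

    Returns the label agreed on by at least two versions, or None
    when all three disagree (no consensus reached).
    """
    label, votes = Counter(outputs).most_common(1)[0]
    return label if votes >= 2 else None

# One compromised model is outvoted by the two healthy ones.
assert three_version_vote(["car", "car", "pedestrian"]) == "car"
# With three distinct answers there is no majority to act on.
assert three_version_vote(["car", "truck", "pedestrian"]) is None
```

With only two versions, a disagreement cannot be arbitrated this way, which is one reason the three-version configuration can tolerate a compromised model while a two-version one cannot.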
Disciplines:
Computer science
Author, co-author:
Wen, Qiang; University of Tsukuba > Department of Computer Science
Rodrigues de Mendonça Neto, Júlio; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CritiX
Machida, Fumio; University of Tsukuba > Department of Computer Science
Völp, Marcus; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CritiX
External co-authors:
Yes
Document language:
English
Title:
Enhancing Autonomous Vehicle Safety through N-version Machine Learning Systems
Publication date:
August 4, 2024
Event name:
AISafety workshop
Event location:
Jeju, South Korea
Event date:
August 4th, 2024
Event scope:
International
Title of the main work:
Proceedings of the Workshop on Artificial Intelligence Safety 2024 (AISafety 2024)
Publisher:
CEUR Workshop Proceedings
Peer reviewed :
Peer reviewed
Focus Area :
Security, Reliability and Trust
FNR project:
FNR13691843 - Byzrt: Intrusion Resilient Real-time Communication And Computation In Autonomous Systems, 2019 (01/09/2020-31/08/2023) - Marcus Völp