Adversarial learning; Dempster–Shafer; Data fusion; Deep learning; Intelligent transportation systems; Advanced technology; Cloud environments; Fusion model; Learning models; Safe mobility; Three-level; Software; Signal Processing; Information Systems; Hardware and Architecture
Abstract :
[en] Intelligent Transportation Systems (ITS) have revolutionized transportation by incorporating advanced technologies for efficient and safe mobility. However, these systems face challenges in ensuring security and resilience against adversarial attacks. This research addresses these challenges by introducing a novel Dempster–Shafer data fusion-based Adversarial Deep Learning (DS-ADL) model for ITS in fog–cloud environments. The proposed model considers adversarial attacks at three levels: the original image level, the feature level, and the decision level; adversarial examples are generated at each level to evaluate the system's vulnerability comprehensively. Several key components strengthen the system. First, Dempster–Shafer-based multimodal sensor fusion combines information from multiple sensors for improved scene understanding, enhancing the system's perception and decision-making abilities. For feature extraction and classification, we use ResNet-101, a deep learning architecture known for its effectiveness in computer vision tasks. To detect adversarial examples, we introduce a novel Monomodal Multidimensional Gaussian Model (MMGM-DD) based adversarial detection approach, which improves the system's ability to identify and mitigate adversarial attacks in real time. Additionally, we incorporate the defensive distillation method for adversarial training, exposing the model to adversarial examples during training to make it robust against attacks. To evaluate the proposed model, we use two datasets: Google Speech Commands version 0.01 and the German Traffic Sign Recognition Benchmark (GTSRB). Evaluation metrics include latency delay and computation time (fog–cloud), accuracy, MSE, loss, and F-score for attack detection and defense.
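The fusion step above relies on Dempster's rule of combination. As a minimal illustrative sketch (not the paper's implementation), the rule combines two mass functions defined over subsets of a frame of discernment, renormalizing by the total non-conflicting mass; the function name `ds_combine` and the toy traffic-sign frame used below are assumptions for illustration.

```python
from itertools import product

def ds_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Mass functions map frozenset focal elements to masses summing to 1.
    Masses of intersecting focal elements multiply; fully conflicting
    mass is discarded and the rest is renormalized by (1 - conflict).
    """
    combined = {}
    conflict = 0.0
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mx * my
        else:
            conflict += mx * my
    k = 1.0 - conflict
    if k <= 0.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: v / k for s, v in combined.items()}

# Two hypothetical sensors reporting belief over a tiny traffic-sign frame
m_camera = {frozenset({"stop"}): 0.6, frozenset({"stop", "yield"}): 0.4}
m_lidar = {frozenset({"stop"}): 0.5, frozenset({"stop", "yield"}): 0.5}
fused = ds_combine(m_camera, m_lidar)
```

Because both sources agree that "stop" is possible, fusion concentrates mass on the singleton {"stop"}; disagreement between sources would instead be absorbed into the conflict term and renormalized away.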
The results and discussion demonstrate the effectiveness of the Dempster–Shafer data fusion-based Adversarial Deep Learning model in enhancing the robustness and security of ITS in fog–cloud environments. The model's ability to detect and defend against adversarial attacks while maintaining low-latency fog–cloud operation highlights its potential for real-world deployment in ITS.
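The defensive distillation step mentioned in the abstract trains a distilled model on temperature-softened outputs of a first model. A minimal sketch of the temperature-scaled softmax that produces those soft labels, assuming the standard formulation (the paper's exact training setup is not given here); the function name `softmax_T` and the example logits are illustrative.

```python
import numpy as np

def softmax_T(logits, T):
    """Softmax with distillation temperature T.

    T = 1 recovers the ordinary softmax; larger T flattens the
    distribution, yielding the soft labels the distilled model is
    trained on. Max-subtraction keeps the exponentials stable.
    """
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical teacher logits for a 3-class traffic-sign example
logits = [4.0, 1.0, 0.0]
hard = softmax_T(logits, T=1.0)   # peaked distribution
soft = softmax_T(logits, T=20.0)  # flattened soft labels for distillation
```

At high temperature the soft labels retain the relative ordering of classes while exposing the teacher's uncertainty, which is what makes the distilled model's decision surface smoother and harder to attack with small input perturbations.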
Disciplines :
Computer science
Author, co-author :
NAGARAJAN, Senthil Murugan ✱; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Mathematics (DMATH)
Devarajan, Ganesh Gopal; Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ghaziabad, India
T.V., Ramana; Department of Computer Science and Engineering, Jain University, Bengaluru, India
M., Asha Jerlin; School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
Bashir, Ali Kashif; Department of Computing and Mathematics, Manchester Metropolitan University, United Kingdom ; Woxsen School of Business, Woxsen University, India ; Department of Computer Science and Mathematics, Lebanese American University, Lebanon
Al-Otaibi, Yasser D.; Department of Information Systems, Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Saudi Arabia
✱ These authors have contributed equally to this work.
External co-authors :
yes
Language :
English
Title :
Adversarial Deep Learning based Dempster–Shafer data fusion model for intelligent transportation system
This research work was funded by Institutional Fund Projects under grant no. IFPIP: 42-830-1443. The authors gratefully acknowledge the technical and financial support of the Ministry of Education and King Abdulaziz University, Jeddah, Saudi Arabia.