2021 • In 34th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2021, Kuala Lumpur, Malaysia, July 26–29, 2021, Proceedings, Part II
The rising quality and throughput demands of the manufacturing domain require flexible, accurate and explainable computer-vision solutions for defect detection. Deep Neural Networks (DNNs) reach state-of-the-art performance on various computer-vision tasks, but widespread application in the industrial domain is blocked by the lack of explainability of DNN decisions. A promising, human-readable solution is given by saliency maps, i.e., heatmaps highlighting the image areas that influence the classifier's decision. This work evaluates a selection of saliency methods in the area of industrial quality assurance. To this end, we propose the distance pointing game, a new metric to quantify the meaningfulness of saliency maps for defect detection. We provide the steps to prepare a publicly available dataset on defective steel plates for the proposed metric. Additionally, the computational complexity is investigated to determine which methods could be integrated on industrial edge devices. Our results show that DeepLift, GradCAM and GradCAM++ outperform the alternatives, while the computational cost remains feasible for real-time applications even on edge devices. This indicates that the respective methods could be used as an additional, autonomous post-classification step to explain decisions taken by intelligent quality assurance systems.
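The abstract does not spell out the distance pointing game, so the following is only an illustrative sketch of a distance-based pointing-game-style score, not the paper's exact definition: the classic pointing game checks whether the saliency maximum falls inside the annotated region, and a distance variant could additionally penalise how far the maximum lands from the nearest defect pixel (here normalised by the image diagonal). The function name and normalisation are assumptions.

```python
import numpy as np

def distance_pointing_score(saliency: np.ndarray, defect_mask: np.ndarray) -> float:
    """Illustrative pointing-game-style score (NOT the paper's exact metric):
    1.0 when the saliency maximum lies inside the annotated defect region,
    decaying linearly with the Euclidean distance from the maximum to the
    nearest defect pixel, normalised by the image diagonal."""
    assert saliency.shape == defect_mask.shape
    h, w = saliency.shape
    # Location of the strongest saliency response.
    peak = np.unravel_index(np.argmax(saliency), saliency.shape)
    defect_pixels = np.argwhere(defect_mask > 0)
    if defect_pixels.size == 0:
        return 0.0  # no annotated defect to point at
    # Euclidean distance from the peak to the closest defect pixel.
    d = np.min(np.linalg.norm(defect_pixels - np.array(peak), axis=1))
    return max(0.0, 1.0 - d / np.hypot(h, w))
```

A saliency map peaking inside the defect mask scores 1.0; a map peaking in the opposite corner of the image scores close to 0, making the score a graded alternative to the binary hit/miss of the classic pointing game.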
Disciplines :
Computer science
Author, co-author :
Lorentz, Joe ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)
Hartmann, Thomas; DataThings S.A.
Moawad, Assaad; DataThings S.A.
Fouquet, Francois; DataThings S.A.
Aouada, Djamila ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
External co-authors :
no
Language :
English
Title :
Explaining Defect Detection with Saliency Maps
Publication date :
19 July 2021
Event name :
34th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems
Event date :
from 26.07.2021 to 29.07.2021
Main work title :
34th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2021, Kuala Lumpur, Malaysia, July 26–29, 2021, Proceedings, Part II
Publisher :
Springer, Cham, Switzerland
ISBN/EAN :
978-3-030-79463-7
Pages :
506-518
Peer reviewed :
Peer reviewed
FnR Project :
FNR14297122 - Towards Edge-optimized Deep Learning For Explainable Quality Control, 2019 (01/01/2020-31/12/2023) - Joe Lorentz
Ancona, M., Ceolini, E., Öztireli, C., Gross, M.: Towards better understanding of gradient-based attribution methods for Deep Neural Networks, arXiv:1711.06104 [cs.LG] (2017)
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7) (2015). https://doi.org/10.1371/journal.pone.0130140
Brosnan, T., Sun, D.W.: Improving quality inspection of food products by computer vision - a review. J. Food Eng. 61(1), 3–16 (2004). https://doi.org/10.1016/S0260-8774(03)00183-3
Chattopadhay, A., Sarkar, A., Howlader, P., Balasubramanian, V.N.: GradCAM++: generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 839–847. IEEE (2018). https://doi.org/10.1109/WACV.2018.00097
Fong, R., Patrick, M., Vedaldi, A.: Understanding deep networks via extremal perturbations and smooth masks. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 2950–2958 (2019)
Fong, R.C., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 3449–3457. IEEE (2017). https://doi.org/10.1109/ICCV.2017.371
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. IEEE (2016). https://doi.org/10.1109/CVPR.2016.90
Jo, J., Jeong, S., Kang, P.: Benchmarking GPU-accelerated edge devices. In: 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), pp. 117–120. IEEE (2020). https://doi.org/10.1109/BigComp48618.2020.00-89
Li, H., Ota, K., Dong, M.: Learning IoT in edge: deep learning for the internet of things with edge computing. IEEE Network 32(1), 96–101 (2018). https://doi.org/10.1109/MNET.2018.1700202
Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48
Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems 32, pp. 8024–8035. Curran Associates, Inc. (2019)
Rai, A.: Explainable AI: from black box to glass box. J. Acad. Mark. Sci. 48(1), 137–141 (2020). https://doi.org/10.1007/s11747-019-00710-5
Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015). https://doi.org/10.1007/s11263-015-0816-y
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: GradCAM: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 618–626 (2017)
Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. In: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, vol. 70, pp. 3145–3153. JMLR.org (2017)
Simonyan, K., Vedaldi, A., Zisserman, A.: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, arXiv:1312.6034 [cs.CV] (2013)
Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: SmoothGrad: Removing noise by adding noise, arXiv:1706.03825 [cs.LG] (2017)
Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for Simplicity: The All Convolutional Net, arXiv:1412.6806 [cs.LG] (2015)
Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 70, pp. 3319–3328. PMLR (2017)
Suzen, A.A., Duman, B., Sen, B.: Benchmark analysis of Jetson TX2, Jetson Nano and Raspberry Pi using deep-CNN. In: 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp. 1–5. IEEE (2020). https://doi.org/10.1109/HORA49412.2020.9152915
Tomsett, R., Harborne, D., Chakraborty, S., Gurram, P., Preece, A.: Sanity checks for saliency metrics. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 6021–6029 (2020)
Tsai, D.M., Lin, C.T.: Fast normalized cross correlation for defect detection. Pattern Recogn. Lett. 24(15), 2625–2631 (2003). https://doi.org/10.1016/S0167-8655(03)00106-5
Wang, H., et al.: Score-CAM: score-weighted visual explanations for convolutional neural networks. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 111–119. IEEE (2020). https://doi.org/10.1109/CVPRW50498.2020.00020
Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10590-1_53
Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2921–2929. IEEE (2016). https://doi.org/10.1109/CVPR.2016.319