Deep learning; LiDAR; Machine learning; Mobile mapping systems; Point clouds; Semantic segmentation; Laser scanning data; Learning methods; Manual labeling; Segmentation algorithms; Engineering (all); Computer Science Applications; Artificial Intelligence; General Engineering
Abstract:
[en] Labelled point clouds are crucial for training the supervised Deep Learning (DL) methods used for semantic segmentation. The objective of this research is to quantify discordances between the labels assigned by different people in order to assess whether such discordances influence the success rates of a DL-based semantic segmentation algorithm. An urban point cloud covering 30 m of road in Santiago de Compostela (Spain) was labelled twice by ten people. Discordances in manual labelling between individuals and between rounds, and their significance, were calculated. In addition, a ratio test to quantify discordance and concordance was proposed. Results show that most points were labelled with the same class by all participants; however, many points were assigned two or more different classes. The class curb presented 5.9% discordant points and 3.2 discordances for each point labelled concordantly by all participants. In addition, the percentage of significant labelling differences for the class curb was 86.7% when comparing all participants within the same round and 100% when comparing the two rounds of each participant. Analysing the semantic segmentation results of a DL-based algorithm, PointNet++, the percentage of concordant points is related to the F-score with R² = 0.765, indicating that manual labelling has a significant impact on the results of DL-based semantic segmentation methods.
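To make the discordance metrics summarised above concrete, the sketch below shows how per-point discordances, the percentage of discordant points per class, the discordance-to-concordance ratio, and a chi-square significance test between two annotators (or two rounds) could be computed. It is a minimal illustration under stated assumptions: the synthetic `labels` array, the choice of four classes, and the NumPy/SciPy implementation are placeholders, not the authors' actual labelling data or code.

```python
import numpy as np
from scipy.stats import chi2_contingency

# labels: (n_points, n_annotators) integer class ids, one column per annotator.
# A small synthetic example stands in for the real annotations (assumption).
rng = np.random.default_rng(0)
n_points, n_annotators, n_classes = 1000, 10, 4
majority = rng.integers(0, n_classes, size=n_points)         # hypothetical majority class per point
labels = np.repeat(majority[:, None], n_annotators, axis=1)
noise = rng.random(labels.shape) < 0.05                       # ~5% of labels disagree
labels[noise] = rng.integers(0, n_classes, size=noise.sum())

# A point is concordant when every annotator assigned it the same class.
concordant = (labels == labels[:, [0]]).all(axis=1)

# Per-point discordance count: labels that differ from the modal (most frequent) class.
def modal_class(row):
    values, counts = np.unique(row, return_counts=True)
    return values[np.argmax(counts)]

modes = np.apply_along_axis(modal_class, 1, labels)
discordances = (labels != modes[:, None]).sum(axis=1)

# Per-class summary, loosely following the ratio described in the abstract:
# discordances per point that was labelled concordantly by all annotators.
for c in range(n_classes):
    in_class = modes == c
    pct_discordant = 100.0 * (in_class & ~concordant).sum() / in_class.sum()
    ratio = discordances[in_class].sum() / max((in_class & concordant).sum(), 1)
    print(f"class {c}: {pct_discordant:.1f}% discordant points, "
          f"{ratio:.2f} discordances per fully concordant point")

# Significance of labelling differences between two annotators (or two rounds of the
# same annotator) via a chi-square test of independence on per-class label counts.
table = np.vstack([np.bincount(labels[:, 0], minlength=n_classes),
                   np.bincount(labels[:, 1], minlength=n_classes)])
chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.3f}")
```

In the study itself, per-class concordance percentages are then compared with the F-scores obtained with PointNet++ (R² = 0.765); the annotator pairing and contingency-table layout in this sketch are illustrative choices only.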
Disciplines:
Civil engineering
Author, co-author:
González-Collazo, Silvia María; CINTECX, Universidade de Vigo, Vigo, Spain
Balado, Jesús; CINTECX, Universidade de Vigo, Vigo, Spain
González, Elena; CINTECX, Universidade de Vigo, Vigo, Spain
NURUNNABI, Abdul Awal Md; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Engineering (DoE)
External co-authors:
yes
Document language:
English
Title:
A discordance analysis in manual labelling of urban mobile laser scanning data used for deep learning based semantic segmentation
The authors would like to thank the point cloud labelling volunteers for their time and effort. This research has received funding from Xunta de Galicia through human resources grant (ED481B-2019-061) and from the Government of Spain through project PID2019-105221RB-C43 funded by MCIN/AEI/10.13039/501100011033. Mr Nurunnabi is with the Project 2019-05-030-24, SOLSTICE - Programme Fonds Européen de Développement Régional (FEDER)/Ministère de l'Economie of the G. D. of Luxembourg. This paper was carried out in the framework of the InfraROB project (Maintaining integrity, performance and safety of the road infrastructure through autonomous robotized solutions and modularization), which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 955337. It reflects only the authors' views. Neither the European Climate, Infrastructure, and Environment Executive Agency (CINEA) nor the European Commission is in any way responsible for any use that may be made of the information it contains. The authors would also like to thank CESGA for the use of their servers. Funding for open access charge: Universidade de Vigo/CISUG.
Abdelaziz Ismael, S.A., Mohammed, A., Hefny, H., An enhanced deep learning approach for brain cancer MRI images classification using residual networks. Artificial Intelligence in Medicine, 102, 2020, 101779 https://doi.org/10.1016/j.artmed.2019.101779.
Armeni, I., Sax, A., Zamir, A. R., Savarese, S., Joint 2D–3D-Semantic Data for Indoor Scene Understanding. 2017, ArXiv E-Prints.
Armeni, I., Sener, O., Zamir, A.R., Jiang, H., Brilakis, I., Fischer, M., Savarese, S., 3D Semantic Parsing of Large-Scale Indoor Spaces. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016 (2016), 1534–1543, 10.1109/CVPR.2016.170.
Balado, J., Martínez-Sánchez, J., Arias, P., Novo, A., Road environment semantic segmentation with deep learning from MLS point cloud data. Sensors (Switzerland), 19(16), 2019, 10.3390/s19163466.
Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., & Gall, J. (2019). SemanticKITTI: A dataset for semantic scene understanding of lidar sequences. Proceedings of the IEEE International Conference on Computer Vision, 9297–9307.
Berman, H. (2022). Chi-Square Test of Independence. https://stattrek.com/chi-square-test/independence.
Boulch, A., ConvPoint: Continuous convolutions for point cloud processing. Computers & Graphics 88 (2020), 24–34 https://doi.org/10.1016/j.cag.2020.02.005.
Charles, R., Su, H., Mo, K., & Guibas, L. (2017). PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. 77–85. https://doi.org/10.1109/CVPR.2017.16.
Choy, C., Gwak, J., & Savarese, S. (2019). 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Chun, L., Doudou, Z., Akram, A., Hangbin, W., Shoujun, J., Zeran, X., & Han, Y. (2021). Tongji-3D-Dataset. https://github.com/ZivKidd/Tongji-3D-Dataset.
Deschaud, J.-E., Duque, D., Richa, J.P., Velasco-Forero, S., Marcotegui, B., Goulette, F., Paris-CARLA-3D: A Real and Synthetic Outdoor Point Cloud Dataset for Challenging Tasks in 3D Mapping. Remote Sensing, 13(22), 2021, 10.3390/rs13224713.
Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., & Koltun, V. (2017). CARLA: An Open Urban Driving Simulator. In S. Levine, V. Vanhoucke, & K. Goldberg (Eds.), Proceedings of the 1st Annual Conference on Robot Learning (Vol. 78, pp. 1–16). PMLR.
González-Collazo, S. M., Balado, J., Garrido, I., Grandío, J., Rashdi, R., Tsiranidou, E., del Río-Barral, P., Rúa, E., Puente, I., & Lorenzo, H. (2022). Santiago Urban Dataset SUD: Combination of Handheld and Mobile Laser Scanning Point Clouds. SSRN.
González, E., Balado, J., Arias, P., Lorenzo, H., Realistic correction of sky-coloured points in Mobile Laser Scanning point clouds. Optics & Laser Technology, 149, 2022, 107807 https://doi.org/10.1016/j.optlastec.2021.107807.
Guiotte, F., Pham, M.-T., Dambreville, R., Corpetti, T., Lefèvre, S., Semantic Segmentation of LiDAR Points Clouds: Rasterization Beyond Digital Elevation Models. IEEE Geoscience and Remote Sensing Letters 17:11 (2020), 2016–2019, 10.1109/LGRS.2019.2958858.
Hackel, T., Savinov, N., Ladicky, L., Wegner, J. D., Schindler, K., & Pollefeys, M. (2017). Semantic3D.net: A new Large-scale Point Cloud Classification Benchmark. ArXiv, abs/1704.0.
Hong, D., Gao, L., Yokoya, N., Yao, J., Chanussot, J., Du, Q., Zhang, B., More Diverse Means Better: Multimodal Deep Learning Meets Remote-Sensing Imagery Classification. IEEE Transactions on Geoscience and Remote Sensing 59:5 (2021), 4340–4354, 10.1109/tgrs.2020.3016820.
Hong, D., Han, Z., Yao, J., Gao, L., Zhang, B., Plaza, A., Chanussot, J., SpectralFormer: Rethinking Hyperspectral Image Classification With Transformers. IEEE Transactions on Geoscience and Remote Sensing 60 (2022), 1–15, 10.1109/tgrs.2021.3130716.
Hu, Q., Yang, B., Khalid, S., Xiao, W., Trigoni, N., Markham, A., Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021 (2021), 4975–4985, 10.1109/CVPR46437.2021.00494.
Hu, Q., Yang, B., Xie, L., Rosa, S., Guo, Y., Wang, Z., Trigoni, N., & Markham, A. (2019). RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds (CVPR 2020 Oral).
Hummel, M., & van Kooten, K. (2019). Leveraging NVIDIA Omniverse for In Situ Visualization (M. Weiland, G. Juckeland, S. Alam, & H. Jagode (eds.); pp. 634–642). Springer International Publishing.
Landrieu, L., Simonovsky, M., Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs. IEEE/CVF Conference on Computer Vision and Pattern Recognition 2018 (2018), 4558–4567, 10.1109/CVPR.2018.00479.
Lê, H.-Â., Guiotte, F., Pham, M.-T., Lefèvre, S., Corpetti, T., Learning Digital Terrain Models From Point Clouds: ALS2DTM Dataset and Rasterization-Based GAN. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15 (2022), 4980–4989, 10.1109/JSTARS.2022.3182030.
Li, M., Xie, Y., Shen, Y., Ke, B., Qiao, R., Ren, B., Lin, S., & Ma, L. (2022). HybridCR: Weakly-Supervised 3D Point Cloud Semantic Segmentation via Hybrid Contrastive Regularization. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14930–14939.
Li, S., Song, W., Fang, L., Chen, Y., Ghamisi, P., Benediktsson, J.A., Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Transactions on Geoscience and Remote Sensing 57:9 (2019), 6690–6709, 10.1109/TGRS.2019.2907932.
Liu, C., Zeng, D., Akbar, A., Wu, H., Jia, S., Xu, Z., Yue, H., Context-Aware Network for Semantic Segmentation Toward Large-Scale Point Clouds in Urban Environments. IEEE Transactions on Geoscience and Remote Sensing 60 (2022), 1–15, 10.1109/TGRS.2022.3182776.
Lu, T., Wang, L., Wu, G., CGA-Net: Category Guided Aggregation for Point Cloud Semantic Segmentation. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021 (2021), 11688–11697, 10.1109/CVPR46437.2021.01152.
Ma, L., Li, Y., Li, J., Wang, C., Wang, R., Chapman, M.A., Mobile Laser Scanned Point-Clouds for Road Object Detection and Extraction: A Review. Remote Sensing, 10(10), 2018, 10.3390/rs10101531.
Mo, K., Zhu, S., Chang, A. X., Yi, L., Tripathi, S., Guibas, L. J., & Su, H. (2018). PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding. CoRR, abs/1812.02713. http://arxiv.org/abs/1812.02713.
NIST. (2022). Comparing multiple proportions: The Marascuillo procedure. https://www.itl.nist.gov/div898/handbook/prc/section4/prc474.htm.
Nurunnabi, A., Teferle, N., Li, J., Lindenbergh, R., & Hunegnaw, A. (2021). An Efficient Deep Learning Approach for Ground Point Filtering in Aerial Laser Scanning Point Clouds. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLIII-B1-2021. https://doi.org/10.5194/isprs-archives-XLIII-B1-2021-31-2021.
Nurunnabi, A., Teferle, N., Li, J., Lindenbergh, R., & Parvaz, S. (2021). Investigation of PointNet for Semantic Segmentation of Large-Scale Outdoor Point Clouds.
Paz Mouriño, S. de, Balado, J., & Arias, P. (2021). Multiview Rasterization of Street Cross-sections Acquired with Mobile Laser Scanning for Semantic Segmentation with Convolutional Neural Networks. IEEE EUROCON 2021 - 19th International Conference on Smart Technologies, 35–39. https://doi.org/10.1109/EUROCON52738.2021.9535645.
Pierdicca, R., Paolanti, M., Matrone, F., Martini, M., Morbidoni, C., Malinverni, E.S., Lingua, A.M., Point Cloud Semantic Segmentation Using a Deep Learning Framework for Cultural Heritage. Remote Sensing, 12(6), 2020, 10.3390/rs12061005.
Qi, C. R., Yi, L., Su, H., & Guibas, L. J. (2017a). PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems (Vol. 30). Curran Associates, Inc. https://proceedings.neurips.cc/paper/2017/file/d8bf84be3800d12f74d8b05e9b89836f-Paper.pdf.
Qi, C. R., Yi, L., Su, H., & Guibas, L. J. (2017b). PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. CoRR, abs/1706.0.
Richter, R., Döllner, J., Concepts and techniques for integration, analysis and visualization of massive 3D point clouds. Computers, Environment and Urban Systems 45 (2014), 114–124 https://doi.org/10.1016/j.compenvurbsys.2013.07.004.
Ros, G., Sellart, L., Materzynska, J., Vazquez, D., Lopez, A.M., The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016 (2016), 3234–3243, 10.1109/CVPR.2016.352.
Roynard, X., Deschaud, J.-E., Goulette, F., Paris-Lille-3D: A Point Cloud Dataset for Urban Scene Segmentation and Classification. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2018 (2018), 2108–21083, 10.1109/CVPRW.2018.00272.
Song, H., Jo, K., Cho, J., Son, Y., Kim, C., Han, K., A training dataset for semantic segmentation of urban point cloud map for intelligent vehicles. ISPRS Journal of Photogrammetry and Remote Sensing 187 (2022), 159–170 https://doi.org/10.1016/j.isprsjprs.2022.02.007.
Tan, W., Qin, N., Ma, L., Li, Y., Du, J., Cai, G., Li, J., Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020 (2020), 797–806.
Tchapmi, L. P., Choy, C. B., Armeni, I., Gwak, J., & Savarese, S. (2017). SEGCloud: Semantic Segmentation of 3D Point Clouds. arXiv. https://doi.org/10.48550/ARXIV.1710.07563.
Thomas, H., Qi, C., Deschaud, J.-E., Marcotegui, B., Goulette, F., & Guibas, L. (2019). KPConv: Flexible and Deformable Convolution for Point Clouds. 6410–6419. https://doi.org/10.1109/ICCV.2019.00651.
Uchida, T., Hasegawa, K., Li, L., Adachi, M., Yamaguchi, H., Thufail, F.I., Tanaka, S., Noise-robust transparent visualization of large-scale point clouds acquired by laser scanning. ISPRS Journal of Photogrammetry and Remote Sensing 161 (2020), 124–134 https://doi.org/10.1016/j.isprsjprs.2020.01.004.
Vallet, B., Brédif, M., Serna, A., Marcotegui, B., Paparoditis, N., TerraMobilita/iQmulus urban point cloud analysis benchmark. Computers & Graphics 49 (2015), 126–133 https://doi.org/10.1016/j.cag.2015.03.004.
Virtanen, J.-P., Daniel, S., Turppa, T., Zhu, L., Julin, A., Hyyppä, H., Hyyppä, J., Interactive dense point clouds in a game engine. ISPRS Journal of Photogrammetry and Remote Sensing 163 (2020), 375–389 https://doi.org/10.1016/j.isprsjprs.2020.03.007.
Xie, Y., Tian, J., Zhu, X.X., Linking Points With Labels in 3D: A Review of Point Cloud Semantic Segmentation. IEEE Geoscience and Remote Sensing Magazine 8:4 (2020), 38–59, 10.1109/MGRS.2019.2937630.
Xu, Y., Tong, X., Stilla, U., Voxel-based representation of 3D point clouds: Methods, applications, and its potential use in the construction industry. Automation in Construction, 126, 2021, 103675 https://doi.org/10.1016/j.autcon.2021.103675.
Yan, X., Zheng, C., Li, Z., Wang, S., & Cui, S. (2020). PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling. CoRR, abs/2003.00492. https://arxiv.org/abs/2003.00492.
Zhang, J., Li, X., Zhao, X., & Zhang, Z. (2022). LLGF-Net: Learning Local and Global Feature Fusion for 3D Point Cloud Semantic Segmentation. In Electronics (Vol. 11, Issue 14). https://doi.org/10.3390/electronics11142191.
Zhang, J., Zhao, X., Chen, Z., Lu, Z., A Review of Deep Learning-Based Semantic Segmentation for Point Cloud. IEEE Access 7 (2019), 179118–179133, 10.1109/ACCESS.2019.2958671.
Zhao, H., Jiang, L., Fu, C.-W., & Jia, J. (2019). PointWeb: Enhancing Local Neighborhood Features for Point Cloud Processing. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5565–5573.
Zhu, J., Gehrung, J., Huang, R., Borgmann, B., Sun, Z., Hoegner, L., Stilla, U., TUM-MLS-2016: An Annotated Mobile LiDAR Dataset of the TUM City Campus for Semantic Point Cloud Interpretation in Urban Areas. Remote Sensing, 12(11), 2020, 10.3390/rs12111875.