Nurunnabi, Abdul Awal Md
Robust Techniques for Building Footprint Extraction in Aerial Laser Scanning 3D Point Clouds (2022, November)
The building footprint is crucial for the volumetric 3D representation of a building, which is applied in urban planning, 3D city modeling, and cadastral and topographic map generation. Aerial laser scanning (ALS) has been recognized as the most suitable means of large-scale 3D point cloud data (PCD) acquisition. PCD can capture the geometric detail of a scanned surface. However, it is almost impossible to acquire point clouds without noise and outliers, and data incompleteness and occlusions are two further common phenomena for PCD. Most existing methods for building footprint extraction employ classification, segmentation, voting techniques (e.g., Hough transform or RANSAC), or Principal Component Analysis (PCA) based methods. Classical PCA is known to be highly sensitive to outliers, and even RANSAC, which is known as a robust technique for shape detection, is not free from outlier effects. This paper presents a novel algorithm that employs MCMD (maximum consistency within minimum distance), MSAC (a robust variant of RANSAC) and robust regression to extract reliable building footprints in the presence of outliers, missing points and irregular data distributions. The algorithm is successfully demonstrated on two ALS PCD sets.

Nurunnabi, Abdul Awal Md
kCV-B: Bootstrap with Cross-Validation for Deep Learning Model Development, Assessment and Selection (2022, October)
This study investigates the inability of two popular data-splitting techniques, train/test split and k-fold cross-validation, which are used to create training and validation data sets, to achieve sufficient generality for supervised deep learning (DL) methods. This failure is mainly caused by their limited ability to create new data. The bootstrap, in contrast, is a computer-based statistical resampling method that has been used efficiently to estimate the distribution of a sample estimator and to assess a model without knowledge of the population. This paper couples cross-validation and the bootstrap to combine their respective advantages in terms of data generation and to achieve better generalization of a DL model. The paper contributes by: (i) developing an algorithm for better selection of training and validation data sets, (ii) exploring the potential of the bootstrap for drawing statistical inference on the necessary performance metrics (e.g., mean square error), and (iii) introducing a method that can assess and improve the efficiency of a DL model. The proposed method is applied to semantic segmentation and is demonstrated via a DL-based classification algorithm, PointNet, on aerial laser scanning point cloud data.
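A minimal sketch of the general idea behind coupling cross-validation with the bootstrap to obtain a distribution of a performance metric such as MSE. The helper `fit_predict`, the fold and replicate counts, and the use of scikit-learn utilities are illustrative assumptions, not the authors' kCV-B implementation.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.utils import resample

def kcv_bootstrap_mse(X, y, fit_predict, k=5, n_boot=50, seed=0):
    """Collect bootstrap MSE estimates over k cross-validation folds.

    fit_predict(X_train, y_train, X_val) -> predictions for X_val; a stand-in
    for training and applying, e.g., a PointNet-style model.
    """
    rng = np.random.RandomState(seed)
    kf = KFold(n_splits=k, shuffle=True, random_state=seed)
    mses = []
    for train_idx, val_idx in kf.split(X):
        for _ in range(n_boot):
            # Bootstrap the training fold: draw a same-sized sample with
            # replacement, so each replicate sees a slightly different set.
            boot_idx = resample(train_idx, replace=True,
                                n_samples=len(train_idx),
                                random_state=rng.randint(10**6))
            y_pred = fit_predict(X[boot_idx], y[boot_idx], X[val_idx])
            mses.append(np.mean((y[val_idx] - y_pred) ** 2))
    return np.asarray(mses)  # summarize with mean and percentile intervals
```

The returned array can then be summarized (mean, standard error, percentile intervals) to compare candidate models rather than relying on a single train/validation split.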
Nurunnabi, Abdul Awal Md
Robust Approach for Urban Road Surface Extraction Using Mobile Laser Scanning Data (2022, June)
Road surface extraction is crucial for 3D city analysis. Mobile laser scanning (MLS) is the most appropriate data acquisition system for the road environment because of its efficient vehicle-based on-road scanning. Many methods are available for road pavement, curb and roadside way extraction, but most of them use classical approaches that do not mitigate problems caused by the presence of noise and outliers. In practice, laser scanning point clouds are not free from noise and outliers, and even a very small portion of outliers and noise can produce unreliable and non-robust results. A road surface usually consists of three key parts: road pavement, curb and roadside way. This paper investigates the problem of road surface extraction in the presence of noise and outliers, and proposes a robust algorithm for road pavement, curb, road divider/island, and roadside way extraction using MLS point clouds. The proposed algorithm employs robust statistical approaches to remove the consequences of noise and outliers. It consists of five sequential steps for road ground and non-ground surface separation and the determination of road-related components. Demonstrations on two different MLS data sets show that the new algorithm is efficient for road surface extraction and for classifying road pavement, curb, road divider/island and roadside way. In one experiment on curb point extraction, the results achieve a precision of 97.28%, a recall of 100% and a Matthews correlation coefficient of 0.986.

Nurunnabi, Abdul Awal Md
E-print/Working paper (2022)
Precise ground surface topography is crucial for 3D city analysis, digital terrain modeling, natural disaster monitoring, high-density map generation, and autonomous navigation, to name a few. Deep learning (DL; LeCun et al., 2015), a division of machine learning (ML), has achieved unparalleled success in image processing and has recently demonstrated huge potential for point cloud analysis. This article presents a feature-based DL algorithm that classifies ground and non-ground points in aerial laser scanning point clouds. Recent advances in remote sensing technologies make it possible to digitize the real world in a nearly automated fashion. LiDAR (Light Detection and Ranging) point clouds, a type of remotely sensed georeferenced data providing detailed 3D information on objects and the environment, have been recognized as one of the most powerful means of digitization. Unlike imagery, point clouds are unstructured, sparse and of irregular format, which creates many challenges but also offers huge opportunities for capturing the geometric detail of scanned surfaces with millimeter accuracy. Classifying and separating non-ground points from ground points largely reduces the data volume for subsequent analyses of either ground or non-ground surfaces, which consequently saves cost and labor and simplifies further analysis.
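As an illustration of the kind of per-point geometric descriptors a feature-based ground/non-ground classifier can consume, here is a minimal sketch that computes covariance (PCA) eigenvalue features over k-nearest neighborhoods. The neighborhood size and the three features chosen are assumptions for illustration, not the exact feature set used in the working paper above.

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points, k=20):
    """points: (N, 3) xyz array -> (N, 3) [linearity, planarity, sphericity]."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)             # k nearest neighbours (incl. self)
    feats = np.empty((len(points), 3))
    for i, nn in enumerate(idx):
        nbrs = points[nn] - points[nn].mean(axis=0)
        # Eigenvalues of the local 3x3 covariance matrix, descending order.
        evals = np.linalg.eigvalsh(nbrs.T @ nbrs / k)[::-1]
        l1, l2, l3 = np.clip(evals, 1e-12, None)
        feats[i] = ((l1 - l2) / l1,              # linearity
                    (l2 - l3) / l1,              # planarity
                    l3 / l1)                     # sphericity
    return feats
```

Features of this kind (often extended with height-based and normal-based measures) are what non-end-to-end classifiers typically take as input instead of raw coordinates.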
Nurunnabi, Abdul Awal Md
Resampling methods for a reliable validation set in deep learning based point cloud classification (2022, June)
A validation data set plays a pivotal role in tweaking a machine learning model trained in a supervised manner. Many existing algorithms select a part of the available data by random sampling to produce a validation set. However, this approach can be prone to overfitting. One should follow careful data splitting to obtain reliable training and validation sets that can produce a generalized model with good performance on unseen (test) data. Data splitting based on resampling techniques involves repeatedly drawing samples from the available data. Hence, resampling methods can give better generalization power to a model, because they can produce and use many training and/or validation sets. These techniques are computationally expensive, but with increasingly available high-performance computing facilities one can exploit them. Though a multitude of resampling methods exist, investigation of their influence on the generality of deep learning (DL) algorithms is limited due to DL's non-linear black-box nature. This paper contributes by: (1) investigating the generalization capability of the four most popular resampling methods, k-fold cross-validation (k-CV), repeated k-CV (Rk-CV), Monte Carlo CV (MC-CV) and the bootstrap, for creating training and validation data sets used for developing, training and validating DL-based point cloud classifiers (e.g., PointNet; Qi et al., 2017a), (2) justifying Mean Square Error (MSE) as a statistically consistent estimator, and (3) exploring the use of MSE as a reliable performance metric for supervised DL. Experiments in this paper are performed on both synthetic and real-world aerial laser scanning (ALS) point clouds.
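A minimal sketch of how the four resampling schemes compared above can be used to generate train/validation index splits with scikit-learn; the split counts and test fraction are illustrative defaults, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import KFold, RepeatedKFold, ShuffleSplit
from sklearn.utils import resample

def make_splits(n_samples, scheme="k-cv", seed=0):
    """Return a list of (train_indices, validation_indices) pairs."""
    idx = np.arange(n_samples)
    if scheme == "k-cv":
        return list(KFold(n_splits=5, shuffle=True, random_state=seed).split(idx))
    if scheme == "rk-cv":
        return list(RepeatedKFold(n_splits=5, n_repeats=3,
                                  random_state=seed).split(idx))
    if scheme == "mc-cv":
        return list(ShuffleSplit(n_splits=15, test_size=0.2,
                                 random_state=seed).split(idx))
    if scheme == "bootstrap":
        splits = []
        for b in range(15):
            train = resample(idx, replace=True, n_samples=n_samples,
                             random_state=seed + b)
            val = np.setdiff1d(idx, train)       # out-of-bag points validate
            splits.append((train, val))
        return splits
    raise ValueError(f"unknown scheme: {scheme}")
```

Each scheme yields many train/validation pairs, so a metric such as MSE can be reported as a distribution rather than a single number.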
Nurunnabi, Abdul Awal Md
A Two-Step Feature Extraction Algorithm: Application to Deep Learning for Point Cloud Classification (2022, March)
Most deep learning (DL) methods that are not end-to-end use several multi-scale and multi-type hand-crafted features, which make the network more complex, more computationally intensive and vulnerable to overfitting. Furthermore, reliance on empirically based feature dimensionality reduction may lead to misclassification. In contrast, efficient feature management can reduce storage and computational complexity, build better classifiers, and improve overall performance. Principal Component Analysis (PCA) is a well-known dimension reduction technique that has been used for feature extraction. This paper presents a two-step PCA-based feature extraction algorithm that employs a variant of feature-based PointNet (Qi et al., 2017a) for point cloud classification. The paper extends the PointNet framework for use on large-scale aerial LiDAR data, and contributes by (i) developing a new feature extraction algorithm, (ii) exploring the impact of dimensionality reduction in feature extraction, and (iii) introducing a non-end-to-end PointNet variant for per-point classification in point clouds. This is demonstrated on aerial laser scanning (ALS) point clouds. The algorithm successfully reduces the dimension of the feature space without sacrificing performance, as benchmarked against the original PointNet algorithm. When tested on the well-known Vaihingen data set, the proposed algorithm achieves an Overall Accuracy (OA) of 74.64% using 9 input vectors and 14 shape features, whereas with the same 9 input vectors and only 5 PCs (principal components built from the 14 shape features) it achieves a higher OA of 75.36%, which demonstrates the effect of efficient dimensionality reduction.
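A minimal sketch of the dimensionality-reduction step described above: projecting a table of per-point shape features onto a few principal components before classification. The 14-feature input and 5-component output follow the abstract; the function name and the surrounding usage are assumptions for illustration.

```python
from sklearn.decomposition import PCA

def reduce_shape_features(shape_features, n_components=5):
    """shape_features: (N, 14) per-point shape descriptors.
    Returns (N, n_components) principal-component scores."""
    # Standardising the features first is a common extra step when they
    # mix units or scales.
    pca = PCA(n_components=n_components)
    return pca.fit_transform(shape_features)

# Usage idea: concatenate the reduced features with the raw per-point input
# vectors and feed the result to a PointNet-style classifier.
```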
Nurunnabi, Abdul Awal Md
An efficient deep learning approach for ground point filtering in aerial laser scanning point clouds (2021, July 02). In: Nurunnabi, Abdul Awal Md; Teferle, Felix Norman; Li, Jonathan (Eds.) et al.
Ground surface extraction is one of the classic tasks in airborne laser scanning (ALS) point cloud processing and is used for three-dimensional (3D) city modelling, infrastructure health monitoring, and disaster management. Many methods have been developed over the last three decades. Recently, Deep Learning (DL) has become the most dominant technique for 3D point cloud classification. DL methods used for classification can be categorized into end-to-end and non-end-to-end approaches. One of the main challenges of supervised DL approaches is obtaining a sufficient amount of training data; the main advantage of a supervised non-end-to-end approach is that it requires less training data. This paper introduces a novel local feature-based non-end-to-end DL algorithm that generates a binary classifier for ground point filtering. It studies feature relevance and investigates three models built from different combinations of features. The method is free from the limitations of point clouds' irregular data structure and varying data density, which are the biggest challenges for using convolutional neural networks, and it does not require transforming the data into regular 3D voxel grids or any rasterization. The performance of the new method is demonstrated on two ALS datasets covering urban environments. The method successfully labels ground and non-ground points in the presence of steep slopes and height discontinuities in the terrain. Experiments in this paper show that the algorithm achieves around 97% in both F1-score and model accuracy for ground point labelling.

A boundary-enhanced supervoxel method for extraction of road edges in MLS point clouds (2020)
Road extraction plays a significant role in the production of high-definition maps (HD maps). This paper presents a novel boundary-enhanced supervoxel segmentation method for extracting road edge contours from MLS point clouds. The proposed method first leverages normal features to obtain the global geometric information of the 3D point cloud, then clusters points, guided by this global geometric information, to enhance the boundaries. Finally, it uses a neighbor spatial distance metric to extract the contours and remove outliers. The proposed method is tested on two datasets acquired by a RIEGL VMX-450 MLS system that contain the major point cloud scenes with different types of road boundaries. The experimental results demonstrate that the proposed method provides a promising solution for extracting contours efficiently and completely. Results show that the precision values are 1.5 times higher than, and approximately equal to, those of the other two existing methods when the recall value is 0, for both tested road datasets.
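As a hedged illustration of one ingredient of contour extraction from point neighborhoods (not the authors' supervoxel method), the sketch below flags likely boundary points where the centroid of the k nearest neighbors is shifted strongly to one side of the point; the neighborhood size and threshold are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def edge_candidates(points, k=15, shift_ratio=0.5):
    """Return a boolean mask of probable boundary points in an (N, 3) cloud."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k)
    mask = np.zeros(len(points), dtype=bool)
    for i, nn in enumerate(idx):
        # Interior points have neighbours spread evenly around them, so the
        # neighbourhood centroid stays close; boundary points show a large
        # one-sided shift of the centroid away from the point.
        centroid_shift = np.linalg.norm(points[nn].mean(axis=0) - points[i])
        mean_spacing = dist[i, 1:].mean()        # skip the zero self-distance
        mask[i] = centroid_shift > shift_ratio * mean_spacing
    return mask
```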