Author: Khaliel, Reda Fekry Abdelkawy
Title: Marker-free registration and fusion of multi-modal point clouds in forests for structural tree parameter estimation
Advisors: Yao, Wei (LSGI)
Ding, Xiao-li (LSGI)
Degree: Ph.D.
Year: 2023
Subject: Forest mapping
Optical radar
Three-dimensional imaging
Hong Kong Polytechnic University -- Dissertations
Department: Department of Land Surveying and Geo-Informatics
Pages: xxi, 115 pages : color illustrations
Language: English
Abstract: Light detection and ranging (LiDAR) has become an important source of high-density three-dimensional (3D) data acquisition. Depending on the acquisition environment and the application, LiDAR devices are mounted on different platforms (e.g., aircraft, vehicles, and drones). As a result, LiDAR is used for a wide range of applications such as urban mapping, object recognition, 3D urban modeling, environmental monitoring, archaeology, architecture, and forest mapping. LiDAR data are acquired in strips or overlapping scans, similar to conventional photogrammetric data acquisition, to provide comprehensive coverage of the area of interest. In addition, the different viewing perspectives and penetration capabilities of existing LiDAR systems favor data integration for improved scene representation. In forest scenarios, for example, airborne LiDAR systems capture the canopy and upper-canopy levels more efficiently than ground-based systems, whereas ground-based LiDAR data provide a more accurate description of near-ground levels and tree trunks. Consequently, integrating airborne and ground-based LiDAR data provides more information than either platform alone. This integration is also expected to improve tree representation and reconstruction, leading to more realistic estimates of structural tree attributes.
However, the literature review on co-registration of LiDAR data in forests has shown that performance is limited in some cases, such as (a) mismatch of tree locations in windy forest areas or across different data collection perspectives (e.g., ground-based and airborne point clouds), especially when the stems are not present in the point cloud (e.g., airborne LiDAR data); and (b) plantation forests in which tree attributes are indistinguishable at the plot level because all trees share the same species, age, growth stage, diameter at breast height (DBH), and height.
Quantitative structure modelling (QSM) of trees from LiDAR data provides detailed and accurate information on tree parameters such as trunk length, tree height, and tree volume. Due to the complementarity of ground-based and airborne LiDAR data, the QSM of trees based on the fusion of ground-based and airborne LiDAR data would improve the accuracy of tree parameter estimation. Existing LiDAR data fusion methods have not used their results for further forest mapping and interpretation. In addition, accurate tree segmentation is of great importance for tree modeling. Unfortunately, the performance of existing tree segmentation methods is limited due to the following factors: (a) the 3D information loss due to point cloud projection; (b) the forest plots with mixed tree species; (c) the high computational requirements of point-based segmentation methods; and (d) the data annotation and conversion of point clouds to raster or multiple views in deep learning-based approaches. Therefore, this work consists of three parts: (a) development of a comprehensive framework for co-registration of LiDAR data in forests; (b) estimation of tree parameters from the fusion of ground-based and airborne LiDAR data using QSM; and (c) segmentation of individual trees using graph neural networks.
The co-registration framework is divided into three main phases: (a) canopy clustering and keypoint extraction; (b) feature similarity and matching; and (c) transformation search. Instead of tree locations, the proposed method uses virtual keypoints derived from canopy clustering and analysis, which mitigates the limitations of tree localization. Moreover, the transformation is found by permuting all possible pair combinations of the correspondence set. The approach shows great potential for unmanned aerial vehicle (UAV) LiDAR strip adjustment and for co-registering LiDAR data from multiple platforms.
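The pair-permutation transformation search can be illustrated with a minimal 2D sketch: every ordered pair of candidate keypoint correspondences defines a rigid transform hypothesis, and the hypothesis with the most inliers wins. This is an illustrative toy (function names, 2D simplification, and the inlier tolerance are assumptions, not details from the thesis):

```python
import math
from itertools import permutations

def rigid_from_pair(a1, a2, b1, b2):
    """2D rotation + translation mapping segment (a1, a2) onto (b1, b2)."""
    ang = (math.atan2(b2[1] - b1[1], b2[0] - b1[0])
           - math.atan2(a2[1] - a1[1], a2[0] - a1[0]))
    c, s = math.cos(ang), math.sin(ang)
    # Translation chosen so that a1 maps exactly onto b1.
    tx = b1[0] - (c * a1[0] - s * a1[1])
    ty = b1[1] - (s * a1[0] + c * a1[1])
    return c, s, tx, ty

def apply_transform(T, p):
    c, s, tx, ty = T
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def best_transform(src, dst, tol=0.5):
    """Exhaustively permute pairs of candidate correspondences and keep the
    transform hypothesis with the most inliers (nearest target within tol)."""
    best, best_score = None, -1
    for i, j in permutations(range(len(src)), 2):
        for k, l in permutations(range(len(dst)), 2):
            T = rigid_from_pair(src[i], src[j], dst[k], dst[l])
            score = sum(
                1 for p in src
                if min(math.dist(apply_transform(T, p), q) for q in dst) < tol
            )
            if score > best_score:
                best, best_score = T, score
    return best, best_score
```

The exhaustive permutation makes the search robust to unknown correspondences, at the cost of combinatorial growth with the number of keypoints.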
In the second part of this work, the co-registration framework is extended to fuse ground-based and airborne LiDAR data based on (a) the removal of noisy points and (b) the elimination of redundant points. The structural tree parameters are then determined using the QSM of the fused point cloud. The results of tree parameter estimation show that tree height, crown volume, and tree volume are among the parameters that benefit most from fusion. Consequently, combining ground-based and airborne LiDAR data would improve the estimation of above-ground biomass (AGB), since tree height is one of its most important variables.
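The two fusion steps can be sketched with simple stand-ins: a neighbor-count filter for noisy points and voxel-grid hashing to drop redundant points in overlap areas. Both functions, the parameter values, and the voxel-hash strategy are illustrative assumptions, not the thesis's actual algorithms:

```python
import math

def remove_isolated(points, radius=0.5, min_neighbors=2):
    """Drop points with fewer than min_neighbors within radius
    (a simple O(n^2) noise filter)."""
    kept = []
    for i, p in enumerate(points):
        n = sum(1 for j, q in enumerate(points)
                if j != i and math.dist(p, q) <= radius)
        if n >= min_neighbors:
            kept.append(p)
    return kept

def fuse_point_clouds(ground_pts, air_pts, voxel=0.05):
    """Merge two clouds, keeping at most one point per voxel cell
    (first seen wins) to eliminate redundant points in overlap areas."""
    seen, fused = set(), []
    for p in list(ground_pts) + list(air_pts):
        key = tuple(int(math.floor(c / voxel)) for c in p)
        if key not in seen:
            seen.add(key)
            fused.append(p)
    return fused
```

Listing the ground-based cloud first gives it priority in overlap cells, reflecting its higher near-ground accuracy.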
The last part of this research deals with the segmentation of individual trees, which is of great importance for tree modeling and parameter estimation, and thus for forest management. The proposed approach is motivated by the graph link-prediction problem: a graph is constructed from the point cloud and fed into a graph convolutional network that predicts the presence of a connection between unconnected node pairs of the input graph. The approach is unsupervised, so no data labeling or other knowledge of forest parameters is required.
Rights: All rights reserved
Access: open access

Files in This Item:
File      Description    Size      Format
6761.pdf  For All Users  11.63 MB  Adobe PDF (View/Open)




Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/12314