Author: Chen, Long
Title: Real-time photogrammetry based on parallel architecture for 3D applications
Advisors: Wu, Bo (LSGI)
Degree: Ph.D.
Year: 2024
Subject: Photogrammetry -- Digital techniques
Three-dimensional imaging
Hong Kong Polytechnic University -- Dissertations
Department: Department of Land Surveying and Geo-Informatics
Pages: xiv, 188 pages : color illustrations
Language: English
Abstract: Photogrammetry is the technique of capturing and reconstructing 3D models of objects and scenes from multiple images. In recent years, with the rising demand for efficient 3D reconstruction, real-time photogrammetry has attracted much attention in various domains, such as unmanned aerial vehicle (UAV) navigation, disaster emergency response, human tracking, and autonomous driving. This research focuses on enhancing the computational efficiency of photogrammetric algorithms by exploiting parallel architectures and combining them with cutting-edge methods such as deep learning to achieve real-time photogrammetry in various scenarios.
Traditional visual navigation algorithms in a GPS-denied environment enable the acquisition of approximate relative camera poses. However, traditional methods such as visual odometry (VO) suffer from attitude estimation errors that accumulate over time and cause the estimated trajectory to drift, and their data processing efficiency is relatively low. To address these challenges, this research first presents a feature-based cross-view image matching and retrieval method for real-time camera pose estimation that incorporates VO and photogrammetric algorithms. Specifically, the method uses deep-learning-based feature extraction and matching to improve the robustness of the relative camera pose estimated by VO. To correct the errors accumulated by VO, the method selects keyframes and applies the photogrammetric space resection algorithm to determine accurate pose information for the keyframes. The accurate keyframe poses are then used to rectify the drift caused by VO. Parallel architectures are implemented to enhance the data processing efficiency. Experimental analysis using real UAV datasets shows that the developed method achieves a root mean square error (RMSE) of 4.7 m in absolute position and 0.33° in rotation compared with ground truth data. The developed method also achieves an efficiency of 12 frames per second (FPS) with the parallel architecture implemented on a regular computer, indicating its real-time performance.
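The keyframe-based drift-rectification step can be illustrated with a minimal sketch (this is not the thesis implementation; the 4x4 homogeneous pose representation and the function names are assumptions): relative VO transforms are chained into absolute poses, and when space resection yields an accurate pose for a keyframe, the trajectory from that keyframe onward is re-anchored to it.

```python
import numpy as np

def compose(poses_rel, T0=np.eye(4)):
    """Chain relative VO transforms (4x4 SE(3) matrices) into absolute poses."""
    out, T = [], T0.copy()
    for dT in poses_rel:
        T = T @ dT
        out.append(T.copy())
    return out

def rectify_with_keyframe(poses_rel, kf_index, kf_pose):
    """Re-anchor the trajectory at a keyframe whose accurate pose was
    obtained by space resection, removing the drift accumulated by VO
    for that keyframe and all frames after it."""
    before = compose(poses_rel[:kf_index + 1])        # drifted poses up to keyframe
    after = compose(poses_rel[kf_index + 1:], T0=kf_pose)  # re-chained from accurate pose
    return before[:-1] + [kf_pose] + after
```

In practice each keyframe correction would be applied as it becomes available, so only the poses after the most recent keyframe carry fresh VO drift.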
Dense image matching in real time is a challenging task because of its high computational demand and the high degree of ambiguity that often occurs in practical situations. State-of-the-art methods such as semi-global matching (SGM) with diverse local similarity metrics offer favourable dense matching results against various types of noise and disturbance, such as illumination variations, and can handle textureless regions and preserve edges. However, the computational burden associated with SGM hinders its real-time processing capability. To overcome these challenges, this research leverages parallel structured systems, specifically graphics processing units (GPUs), to enable real-time dense image matching. A comprehensive disparity estimation pipeline based on a GPU-accelerated device is developed and evaluated. An effective parallel scheme and data layout strategy are proposed for the core functions of the disparity estimation algorithm, and the algorithm code is further optimised to enhance efficiency. The optimised algorithm is deployed on a high-end GPU, utilising the sum of absolute differences (SAD) as the similarity measure, 64 disparity levels, and 8 path directions for the SGM method. As a result, the system achieves high-quality real-time dense matching results for different datasets, including a benchmark dataset, close-range images, and aerial images.
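The core of SGM is a cost-aggregation recurrence evaluated independently along each path direction, which is what makes it amenable to GPU parallelisation. A minimal single-path NumPy sketch (an illustration only, not the thesis's GPU code; the function name and penalty values are assumptions) for one scanline of a SAD cost slice:

```python
import numpy as np

def sgm_aggregate_1d(cost, P1=1.0, P2=8.0):
    """Aggregate matching costs along one path direction (left-to-right).
    cost: (width, disparities) array of per-pixel SAD matching costs.
    Implements L(p,d) = C(p,d) + min(L(p-1,d), L(p-1,d+-1)+P1,
    min_d' L(p-1,d') + P2) - min_d' L(p-1,d')."""
    W, D = cost.shape
    L = np.empty((W, D), dtype=float)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        m = prev.min()
        same = prev                                          # stay at same disparity
        up = np.concatenate(([np.inf], prev[:-1])) + P1      # disparity change of -1
        dn = np.concatenate((prev[1:], [np.inf])) + P1       # disparity change of +1
        jump = np.full(D, m + P2)                            # larger disparity jump
        L[x] = cost[x] + np.minimum.reduce([same, up, dn, jump]) - m
    return L
```

A full SGM implementation sums such aggregated volumes over 8 path directions and takes the per-pixel argmin over disparities; on a GPU, scanlines (and disparities within a step) are processed in parallel.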
With the camera pose information and dense image matching results derived in the previous steps, 3D data (e.g., 3D point clouds) can be generated through photogrammetric space intersection (triangulation). However, existing methods seldom focus on the efficiency of 3D data generation for real-time applications. To overcome this limitation, this research proposes a parallel-architecture-based framework that performs multi-image triangulation based on an optimised angle-based error metric. The proposed framework adopts a one-track-one-line strategy to exploit the parallel computing power of GPUs and can achieve real-time performance. The performance of the proposed 3D data generation framework has been demonstrated in two application scenarios: (1) real-time 3D point cloud generation from aerial images, and (2) real-time 3D human motion acquisition and monitoring. The experimental results show that the proposed framework can process a pair of aerial images in 156 ms on average and generate a 3D point cloud that is incrementally displayed via an optimised grid map in real time. Moreover, the proposed framework was adopted to transfer human body features from 2D to 3D. Experimental results show that the developed methods can capture and monitor 3D human motion at 17 FPS and achieve centimetre-level accuracy within a 15 m distance.
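The thesis framework uses an optimised angle-based error metric on the GPU; as a simpler illustration of multi-image space intersection itself, here is the standard linear (DLT) formulation, which recovers a 3D point from its observations in two or more images with known camera matrices (function name and setup are assumptions, not the thesis method):

```python
import numpy as np

def triangulate(P_list, x_list):
    """Linear space intersection: recover a 3D point from its image
    observations. P_list: 3x4 camera projection matrices; x_list: (u, v)
    image coordinates. Each observation contributes two homogeneous
    equations u*P[2]-P[0] = 0 and v*P[2]-P[1] = 0; the point is the
    null vector of the stacked system, found via SVD."""
    A = []
    for P, (u, v) in zip(P_list, x_list):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                 # right singular vector of smallest singular value
    return X[:3] / X[3]        # dehomogenise
```

Because every track (the set of observations of one point) is independent, tracks map naturally onto GPU threads, which is the kind of parallelism the one-track-one-line strategy exploits.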
In conclusion, real-time photogrammetry offers significant benefits in enabling real-time 3D data acquisition and modelling for diverse applications and domains. This research makes novel contributions to the photogrammetry field by extending it to real-time photogrammetry. The novel approaches and implementations, including real-time cross-view feature matching for camera pose determination, real-time dense image matching, and real-time triangulation for 3D data generation, can serve as foundations for further research and development in real-time photogrammetry. The developed real-time photogrammetric methods and systems have great potential for various applications, such as more intelligent UAV applications based on real-time feedback control, disaster emergency response via real-time 3D mapping, enhanced human tracking and monitoring assisted by real-time 3D data, and autonomous driving supported by real-time 3D pose determination and 3D mapping of the surrounding environment.
Rights: All rights reserved
Access: open access

Files in This Item:
File: 7250.pdf (For All Users), 12.45 MB, Adobe PDF



Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/12799