Author: Forkuo, Eric Kwabena
Title: Automatic fusion of photogrammetric imagery and laser scanner point clouds
Degree: Ph.D.
Year: 2005
Subject: Hong Kong Polytechnic University -- Dissertations
Optical scanners
Image processing -- Digital techniques
Department: Department of Land Surveying and Geo-Informatics
Pages: xviii, 213 leaves : ill. (some col.) ; 30 cm. + 1 computer optical disc
Language: English
Abstract: Close-range photogrammetry and the relatively new technology of terrestrial laser scanning can be considered complementary rather than competing technologies. For instance, terrestrial laser scanners (TLS) can rapidly collect high-resolution 3D surface information about an object. The same type of data can be generated using close-range photogrammetric (CRP) techniques, but the image disparities common to close-range scenes make this an operator-intensive task. Conversely, the imaging systems of some TLSs do not have very high radiometric resolution, whereas the high-resolution digital cameras used in modern CRP do; and TLSs are essentially earth-bound, whereas cameras can be moved at will around the object being imaged. This thesis therefore develops a methodology for fusing terrestrial laser scanner generated 3D data with high-resolution digital images. Four phases of the methodology were investigated: data pre-processing (fusion of data from the two sensors), automatic measurement (feature detection and correspondence matching), mapping (creation of a point cloud visual index), and orientation (calculation of exterior orientation parameters). Each phase was initially investigated in a manually controlled environment, typically using commercial photogrammetric software, and the phases were then combined into a completely automated system. Focusing on the number of geometric primitives, three different scenes (data set A, data set B, and data set C) representing three levels of complexity (low, medium, and high) were scanned with the laser scanner, and for each scan a 2D photographic image was taken with a digital camera. To overcome the differences between the datasets, a hybrid matching algorithm (both feature- and area-based) was successfully developed and implemented.
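To make the hybrid matching idea concrete, the following is a minimal sketch of the area-based half of such an algorithm: interest points are assumed to come from some feature detector, and a patch around each feature is then located in the other image by normalized cross-correlation (NCC). The function names and patch sizes are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def refine_match(template: np.ndarray, search: np.ndarray, step: int = 1):
    """Slide `template` over `search`; return (row, col, score) of the best NCC.

    This is the area-based refinement step of a hypothetical hybrid matcher;
    a real system would restrict `search` to a window predicted by the
    feature-based stage rather than scan the whole image.
    """
    th, tw = template.shape
    best = (-1, -1, -1.0)
    for r in range(0, search.shape[0] - th + 1, step):
        for c in range(0, search.shape[1] - tw + 1, step):
            score = ncc(template, search[r:r + th, c:c + tw])
            if score > best[2]:
                best = (r, c, score)
    return best
```

Because NCC normalizes out local brightness and contrast, this step is tolerant of the radiometric differences between a laser scanner's intensity image and a digital photograph, which is one motivation for combining it with a feature-based stage.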
The fidelity of the synthetic-camera-image concept was tested by determining the exterior orientation of both the synthetic camera images and the real camera images relative to the point cloud. This orientation was first performed manually with existing photogrammetric application software; the results verified that there were no conceptual errors in the developed methods. To meet the objective of this thesis, however, an automatic technique based on a photogrammetric bundle adjustment was then developed. Three different data sets were used to check the validity and reliability of the methodology. Results are presented for interest-point measurement and correspondence matching, and for both manual and automatic exterior orientation. They indicate that the synthetic camera image is a feasible basis for multisensor fusion, with the point cloud visual index offering the greatest promise.
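The synthetic-camera-image idea can be sketched under a simple pinhole model: each laser point is transformed into the camera frame using the exterior orientation (rotation and camera centre) and projected with the collinearity equations. The omega-phi-kappa parameterization and all numeric values below are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def rotation(omega: float, phi: float, kappa: float) -> np.ndarray:
    """Rotation matrix from omega-phi-kappa angles (radians),
    a common photogrammetric convention (assumed here)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(points: np.ndarray, R: np.ndarray,
            centre: np.ndarray, f: float) -> np.ndarray:
    """Collinearity projection of Nx3 object points to Nx2 image coordinates.

    x = -f * Xc / Zc,  y = -f * Yc / Zc, where (Xc, Yc, Zc) is the point
    in the camera frame. Rasterizing these coordinates (with the laser
    intensity as the pixel value) would yield a synthetic camera image.
    """
    cam = (points - centre) @ R.T        # object frame -> camera frame
    return -f * cam[:, :2] / cam[:, 2:3]
```

Determining exterior orientation is then the inverse problem: given measured image points and their 3D counterparts, solve for the six parameters (three angles, three centre coordinates), typically by linearizing the collinearity equations inside a least-squares bundle adjustment as described in the abstract.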
Rights: All rights reserved
Access: open access

Files in This Item:
File: b18181211.pdf | Description: For All Users | Size: 8.97 MB | Format: Adobe PDF

