Full metadata record
DC Field | Value | Language
dc.contributor | Department of Aeronautical and Aviation Engineering | en_US
dc.contributor.advisor | Hsu, Li-ta (AAE) | en_US
dc.contributor.advisor | Wen, Weisong (AAE) | en_US
dc.creator | Leung, Yan Tung | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13141 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | en_US
dc.rights | All rights reserved | en_US
dc.title | Cost-effective camera localization aided by prior point clouds maps for level 3 autonomous driving vehicles | en_US
dcterms.abstract | For navigation tasks, particularly within autonomous driving systems, accurate and robust localization is critical. While global navigation satellite systems (GNSS) are a widespread choice for localization, they suffer from drawbacks such as multipath and non-line-of-sight reception. Vision-based localization offers an alternative that relies on visual cues, circumventing the use of GNSS signals. In this study, we propose a visual localization method aided by a prior 3D LiDAR map. Our approach reconstructs image features into multiple sets of 3D points using a local bundle adjustment-based visual odometry system. These reconstructed 3D points are then aligned with the prior 3D point cloud map, enabling tracking of the user's global pose. The proposed visual localization method offers several advantages. First, the prior map improves robustness to variations in ambient lighting and appearance. Second, it exploits the prior 3D map to achieve viewpoint invariance. The key idea of the point cloud registration in our approach is to use geometric matching to establish the accurate position and orientation of the camera within its surroundings: geometric features extracted from the camera's images are compared with those stored in the reference map, and the matching geometric points between the camera image and the prior 3D point cloud map are identified and aligned. Notably, our method also lends itself to the cost-effective, lightweight camera sensors available to end-users. Experimental results show that the proposed method achieves accurate results at practical frame rates without the need for supplementary information. | en_US
dcterms.extent | 68 pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2024 | en_US
dcterms.educationalLevel | M.Phil. | en_US
dcterms.educationalLevel | All Master | en_US
dcterms.LCSH | Automated vehicles | en_US
dcterms.LCSH | Motor vehicles -- Automatic location systems | en_US
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US
dcterms.accessRights | open access | en_US
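The registration step described in the abstract — aligning 3D points reconstructed from camera images with a prior point cloud map to recover a global pose — can be sketched as a minimal ICP-style rigid alignment. This is an illustrative numpy sketch under standard assumptions, not the thesis implementation; the function names (`icp`, `best_fit_transform`) are hypothetical.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src_i + t - dst_i|| (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Align src (e.g. points triangulated by visual odometry) to dst (prior map)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour matching into the map (fine for small clouds;
        # a k-d tree would be used at scale).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
        # Accumulate: new pose = (R, t) composed with the previous estimate.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The returned `(R_total, t_total)` plays the role of the camera's global pose correction: it maps the locally reconstructed points into the map frame, which is the geometric matching the abstract describes.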

Files in This Item:
File | Description | Size | Format
7594.pdf | For All Users | 2.23 MB | Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13141