Full metadata record
DC Field | Value | Language
dc.contributor | Department of Aeronautical and Aviation Engineering | en_US
dc.contributor.advisor | Lu, Peng (AAE) | en_US
dc.contributor.advisor | Wen, Chih-yung (AAE) | en_US
dc.creator | Duan, Ran | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/11787 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | en_US
dc.rights | All rights reserved | en_US
dc.title | Visual smart navigation for UAV mission-oriented flight | en_US
dcterms.abstract | This thesis addresses the accurate and robust localization problems in UAV visual navigation, covering the localization of both the UAV itself and its destination during mission-oriented flight. Self-localization is carried out by visual odometry (VO), and the destination is localized by object tracking. Both are fundamental yet very challenging tasks in computer vision and robotics. | en_US
dcterms.abstract | To achieve accurate and robust self-localization, our work investigates an inherent problem of long-term VO: why the camera pose estimation occasionally incurs a relatively large error even when the residual of the reprojection errors is well controlled. We demonstrate that the long-term VO process suffers from a biased error distribution of the estimated poses and present a stereo orientation prior (SOP) method that performs bias compensation in each frame. Using the stereo camera extrinsic parameters as the baseline, the SOP measures the bias in each dimension of the 6-DoF pose for every 2D-3D geometric correspondence. Unlike commonly used error metrics that compute the total error of an inlier group, our measurement is based on semidefinite programming of the quadratic polynomials reformulated from the 2D-3D point projection system. This allows us to evaluate whether the error comes mainly from orientation or from translation. The proposed system can thus refine the inlier group by rejecting points with a large orientation error bias, acting like a "soft IMU". We show that the proposed visual odometry system achieves competitive accuracy and robustness even compared with IMU-aided state-of-the-art methods. | en_US
dcterms.abstract | To automatically localize the destination for the UAV, we present a deep-learning-based tracker. It builds a discriminative target appearance model by autonomously selecting representative convolutional neural network (CNN) layers and feature maps. A sub-network is then extracted to perform object detection for the tracked target. To show the versatility of the proposed method, we implement it on VGG-19 and YOLOv3, respectively. The results demonstrate that the proposed tracker is competitive with state-of-the-art CNN-based trackers in terms of accuracy, scale adaptation, robustness, and efficiency for UAV-related applications. | en_US
dcterms.abstract | Finally, we integrate the visual odometry and object tracking into the UAV onboard vision system. With stereo vision and the current UAV pose from visual odometry, tracked targets in the 2D image can be converted to 3D positions in the local odometry map for UAV navigation. This allows the UAV to perform mission-oriented flights, such as object inspection or goods delivery, in full autonomy. | en_US
dcterms.extent | xxi, 100 pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2022 | en_US
dcterms.educationalLevel | Ph.D. | en_US
dcterms.educationalLevel | All Doctorate | en_US
dcterms.LCSH | Drone aircraft -- Piloting | en_US
dcterms.LCSH | Navigation (Astronautics) | en_US
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US
dcterms.accessRights | open access | en_US
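
The 2D-to-3D target conversion described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a rectified pinhole stereo pair (depth = fx * baseline / disparity) and a camera-to-world pose from visual odometry; all function and variable names are hypothetical, not taken from the thesis.

```python
import numpy as np

def target_to_map(u, v, disparity, fx, fy, cx, cy, baseline, R_wc, t_wc):
    """Back-project a tracked 2D target into the odometry local map frame.

    Assumes a rectified pinhole stereo pair and a camera-to-world pose
    (R_wc, t_wc) estimated by visual odometry. Illustrative sketch only.
    """
    z = fx * baseline / disparity            # metric depth from stereo disparity
    p_cam = np.array([(u - cx) * z / fx,     # back-project pixel into camera frame
                      (v - cy) * z / fy,
                      z])
    return R_wc @ p_cam + t_wc               # transform into the local map frame

# Example: a target at the image center, 8 px disparity, 0.12 m baseline
p = target_to_map(u=320, v=240, disparity=8.0, fx=400.0, fy=400.0,
                  cx=320.0, cy=240.0, baseline=0.12,
                  R_wc=np.eye(3), t_wc=np.zeros(3))
# p is the target's 3D position in the map frame (here [0, 0, 6] metres)
```

With the target expressed in the same frame as the VO trajectory, it can serve directly as a navigation goal for the mission-oriented flight.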

Files in This Item:
File | Description | Size | Format
6272.pdf | For All Users | 10.14 MB | Adobe PDF




Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/11787