Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Computing | en_US |
dc.contributor.advisor | Zhang, Lei (COMP) | en_US |
dc.creator | He, Chenhang | - |
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/12714 | - |
dc.language | English | en_US |
dc.publisher | Hong Kong Polytechnic University | en_US |
dc.rights | All rights reserved | en_US |
dc.title | Efficient feature learning for point cloud-based 3D object detection | en_US |
dcterms.abstract | 3D object detection is a fundamental technique for autonomous driving; it aims to locate and track objects such as vehicles, pedestrians, and cyclists in real time. While image-based computer vision techniques have been successfully used in many scenarios, most autonomous driving systems still rely on high-end LiDAR sensors to provide 3D measurements for high-precision object detection. However, processing the unstructured and unordered data from LiDAR point clouds is challenging and requires efficient feature extraction models. | en_US |
dcterms.abstract | In this thesis, we explore efficient feature learning algorithms for point cloud-based 3D object detection. We first explore the trade-offs between point-based and voxel-based representations for point cloud detection models. To combine the strengths of both approaches, we propose a novel structure-aware single-stage detector (SASSD) that enables voxel-based models to learn fine-grained details from point-based representations through an auxiliary network. This results in a significant improvement in detection accuracy without increasing computational cost. | en_US |
dcterms.abstract | We then investigate the use of transformer-based models for feature extraction on point cloud data. While transformers are well-suited to large receptive regions, they are challenging to apply to sparse and spatially imbalanced point cloud data. To overcome this issue, we propose a voxel set transformer (VoxSeT) model that performs attention modeling on multiple voxel sets, each containing an arbitrary number of points. Our VoxSeT model outperforms commonly used sparse convolutional models in both accuracy and efficiency and is straightforward to deploy. | en_US |
dcterms.abstract | We further explore how to enhance 3D object detection performance with sequential point cloud data. We introduce a motion-guided sequential fusion (MSF) method that efficiently fuses multi-frame features through a proposal propagation algorithm. This approach achieves leading performance on the Waymo dataset while incurring a cost similar to that of a single-frame detector. | en_US |
dcterms.abstract | Finally, we present a novel-view synthesis-based augmentation framework, namely AugMono3D, for monocular 3D object detection. We leverage point clouds to reconstruct the scene geometry of a camera image and generate synthetic image data by augmenting camera views at multiple virtual depths. By training on a large number of synthetic images with virtual depth, our framework consistently improves detection accuracy. | en_US |
dcterms.abstract | Overall, the four proposed methods demonstrate their effectiveness in learning efficient point cloud features for high-quality 3D object detection. We evaluate our methods on benchmark datasets such as Waymo and KITTI and show significant improvements in accuracy and efficiency. | en_US |
dcterms.extent | xix, 115 pages : color illustrations | en_US |
dcterms.isPartOf | PolyU Electronic Theses | en_US |
dcterms.issued | 2023 | en_US |
dcterms.educationalLevel | Ph.D. | en_US |
dcterms.educationalLevel | All Doctorate | en_US |
dcterms.LCSH | Computer vision | en_US |
dcterms.LCSH | Image analysis -- Data processing | en_US |
dcterms.LCSH | Machine learning | en_US |
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US |
dcterms.accessRights | open access | en_US |