|Title:||Conventional and learning approaches for object recognition and tracking|
|Advisors:||Siu, Wan-chi (EIE)|
Chan, Yui-lam (EIE)
|Subject:||Hong Kong Polytechnic University -- Dissertations|
Driver assistance systems
Local transit -- Safety measures
|Department:||Department of Electronic and Information Engineering|
|Pages:||xi, 90 pages : color illustrations|
|Abstract:||Driver assistance systems are an active topic in both research and application. An efficient driver assistance system alleviates the driver's burden, provides essential warnings, and increases overall driving safety. While a wide range of sensors such as radar and ultrasound devices are available for building driver assistance systems, vision-based analysis remains a robust and inexpensive approach. Driver assistance, especially collision warning, is also important for Light Rail Vehicles (LRVs); however, there is little academic research on vision-based collision warning for LRVs. We develop an LRV Close-up Monitoring System for the Hong Kong Light Rail, which issues a warning signal once the system detects any frontal vehicle within a certain safety distance. The challenges lie in the wide variation of environmental conditions and of the scales of the front vehicles. As a real-time, real-world application, the system must make fast and reliable detections in a variety of situations with very limited computation time. We design a hierarchical multi-module structure to achieve these objectives. Each module adopts orthogonal or semi-orthogonal features to detect vehicles under particular circumstances. We further improve detection accuracy with a verification module that rechecks low-confidence detections using entirely different sets of features. These studies show how multiple orthogonal features can increase discriminability while maintaining high robustness. The system achieves high performance, with no missed detections and few false alarms in our field tests. In our further research, we aim at enhancing individual modules by adopting machine learning techniques. We propose a modified decision tree algorithm to form a shadow detection module.
The shadow detection module aims at recognizing the shadow part of the LRVs at an early stage to accelerate the detection process. Our proposed modified decision tree classifier treats each binary node as a weak classifier and combines the predictions of all weak classifiers to obtain the final prediction and a confidence measure. Our evaluation shows that the proposed detector significantly reduces computation time by exploiting simple intensity-pair features in the binary test design, while achieving the best detection accuracy (the highest F-score) by combining the predictions from all decision nodes. We also study LRV detection using deep learning, improving the accuracy of the vehicle detection module by adopting the Faster R-CNN algorithm. Specifically, we propose a novel adaptive ROI detection scheme to handle remote-range vehicles. Compared with a direct implementation of Faster R-CNN, experimental results show that our proposed algorithm improves the recall rate of "remote-range" detections from 38.9% to 87.7% (an increase of 48.8 percentage points), while maintaining excellent performance for close-range detections.|
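The modified decision tree described in the abstract could be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes each binary node compares the intensities of one pixel pair (a "binary test"), contributes its own weak-classifier vote, and the votes of all nodes visited along the traversal path are combined into a final label and a confidence score. The node layout, vote encoding, and agreement-based confidence rule are all assumptions for illustration.

```python
# Hypothetical sketch of a modified decision tree in the spirit of the
# abstract: every binary node is a weak classifier based on an
# intensity-pair test, and the predictions of all visited nodes are
# combined into a final label and confidence.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    p1: tuple                       # (row, col) of first pixel in the pair
    p2: tuple                       # (row, col) of second pixel in the pair
    vote: int                       # weak vote: +1 = shadow, -1 = non-shadow
    left: Optional["Node"] = None   # taken when I(p1) <= I(p2)
    right: Optional["Node"] = None  # taken when I(p1) >  I(p2)

def classify(patch, root):
    """Traverse the tree, collecting one weak vote per visited node,
    then combine the votes into a label and an agreement-based confidence."""
    votes = []
    node = root
    while node is not None:
        votes.append(node.vote)
        r1, c1 = node.p1
        r2, c2 = node.p2
        # Intensity-pair binary test: a single comparison, so it is cheap.
        node = node.right if patch[r1][c1] > patch[r2][c2] else node.left
    total = sum(votes)
    label = 1 if total > 0 else -1
    confidence = abs(total) / len(votes)  # fraction of agreeing votes
    return label, confidence
```

The appeal of this design, as the abstract notes, is that each test is a single pixel comparison (very fast), while the final decision aggregates every node on the path rather than trusting the leaf alone.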
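The adaptive ROI idea for remote-range vehicles could be sketched as below. This is only an illustrative sketch, not the proposed algorithm: Faster R-CNN itself is not reproduced here, so `detect` stands in for any detector that returns bounding boxes for a given image region; the choice of ROI and the scale factor are assumptions. The essential step is running the detector a second time on an upscaled crop of the far end of the scene and mapping those boxes back into full-frame coordinates, so that small remote vehicles appear at a scale the detector handles well.

```python
# Hypothetical sketch of an adaptive-ROI detection scheme: remote
# (small-scale) vehicles are found by cropping and upscaling a region of
# interest, detecting on the crop, and mapping boxes back to the frame.

def to_full_frame(box, roi, scale):
    """Map a detection box from upscaled-ROI coordinates back to the
    full frame. `roi` is (x0, y0, x1, y1) in frame coordinates."""
    x0, y0, _, _ = roi
    bx0, by0, bx1, by1 = box
    return (x0 + bx0 / scale, y0 + by0 / scale,
            x0 + bx1 / scale, y0 + by1 / scale)

def detect_with_adaptive_roi(frame_w, frame_h, detect, roi, scale=2.0):
    """Combine full-frame detections (close range) with detections on an
    upscaled ROI crop (remote range). `detect(region, scale)` is a
    stand-in for any detector, e.g. Faster R-CNN."""
    close_range = list(detect((0, 0, frame_w, frame_h), 1.0))
    remote_range = [to_full_frame(b, roi, scale) for b in detect(roi, scale)]
    return close_range + remote_range
```

Under this sketch, the detector sees remote vehicles at roughly `scale` times their native size, which is one plausible way the recall gain on remote-range detections reported in the abstract could be obtained.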
|Rights:||All rights reserved|
Files in This Item:
|991022232428303411.pdf||For All Users||2.93 MB||Adobe PDF|