|Author:||Lee, Shu-tak Raymond|
|Title:||An elastic graph dynamic link model for invariant vision object recognition|
|Subject:||Pattern recognition systems ; Hong Kong Polytechnic University -- Dissertations|
|Department:||Department of Computing|
|Pages:||xiv, 225 leaves : ill. ; 30 cm|
|Abstract:||In recent decades, research on neural networks has grown tremendously across areas ranging from simple synapses to sophisticated weather forecasting systems. Vision object recognition, with its wide range of applications (such as personal identification and scene analysis), has been one of the most popular topics in this research, but it also poses one of the greatest challenges to neural network researchers. Vision involves processing massive amounts of data and faces the geometric invariance problem, along with noise, occlusion, and object variability. Moreover, vision object recognition needs to be fast, especially for robot vision and automatic surveillance. The hand-wiring of specific feature extractors into a neural network system and the tremendous amount of training time required create further difficulties for the neural network approach. Numerous models have been proposed to provide an efficient and effective solution. In this thesis, an Elastic Graph Dynamic Link Model (EGDLM) is proposed to cope with the inherent variability of visual patterns. The model combines "Elastic Graph Matching" with a "Neural Network Architecture" for invariant vision object recognition. From the application point of view, the model has been implemented for various complex vision object recognition problems, such as handwritten Chinese character recognition and human face recognition under invariant situations including dilation, contraction, rotation, occlusion, distortion, and scene analysis. In addition, the author has applied the model to automatic satellite picture interpretation for identifying weather patterns such as tropical cyclones.
Because different vision object recognition problems differ in nature, various image pre-processing techniques have been adopted, including the "Active Contour Model" and the "Composite Neural Oscillatory Model" for vision object segmentation; the results are encouraging. From the academic point of view, the author has explored and simulated how the brain encodes, stores, and recognizes vision objects by using an "Elastic Graph" and "Dynamic Memory Links" neural network model. The integration of these approaches is a new contribution to science. The success of the project not only provides a feasible and effective vision object recognition model, but also opens a new direction in neuroscience for memory storage, recall, and management. Moreover, the author aims to provide a "generalized" vision object recognition model that tackles a variety of problems, ranging from handwritten character recognition and scene analysis to human face recognition and satellite picture interpretation. The proposed methodologies solve these problems much better than classical recognition models alone. The experimental results have confirmed this potential and should stimulate further interest in these topics in the years to come.|
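The elastic graph matching idea underlying the model can be illustrated with a minimal sketch. This is not the thesis's actual implementation; the function names, the Euclidean jet distance, and the quadratic edge-distortion penalty are illustrative assumptions. The general principle, matching a model graph of local feature vectors ("jets") against candidate node placements in an image while penalizing geometric distortion of the graph's edges, is the standard elastic graph matching formulation:

```python
import math

def jet_distance(jet_a, jet_b):
    # Dissimilarity between two local feature vectors ("jets");
    # a plain Euclidean distance is assumed here for illustration.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(jet_a, jet_b)))

def edge_distortion(pos_model, pos_image, edges):
    # Penalize changes in edge vectors between the model graph and
    # its placement in the image (the "elastic" term).
    cost = 0.0
    for i, j in edges:
        mx = pos_model[j][0] - pos_model[i][0]
        my = pos_model[j][1] - pos_model[i][1]
        ix = pos_image[j][0] - pos_image[i][0]
        iy = pos_image[j][1] - pos_image[i][1]
        cost += (mx - ix) ** 2 + (my - iy) ** 2
    return cost

def matching_cost(model_jets, image_jets, pos_model, pos_image, edges, lam=0.1):
    # Total cost = feature dissimilarity + lambda * geometric distortion.
    # Recognition amounts to searching node placements that minimize this.
    feature_cost = sum(jet_distance(a, b)
                       for a, b in zip(model_jets, image_jets))
    return feature_cost + lam * edge_distortion(pos_model, pos_image, edges)
```

For example, a model graph matched against an identical placement with identical jets yields zero cost, while stretching one edge raises only the distortion term; the weight `lam` trades off feature fidelity against graph elasticity.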
|Rights:||All rights reserved|
Files in This Item:
|b15403117.pdf||For All Users||10.14 MB||Adobe PDF||View/Open|