Title: Efficient learning algorithms for image annotation
Subject: Image processing -- Digital techniques.
Information retrieval -- Data processing.
Hong Kong Polytechnic University -- Dissertations.
Department: Department of Electronic and Information Engineering
Pages: xiv, 91 p. : ill. ; 30 cm.
Abstract: The most important challenges in developing successful solutions for computer-vision applications are: selecting an efficient and effective feature for object or scene representation, effectively modeling high-level knowledge, and incorporating the user's intentions into the computational algorithms. With the rapid growth of digital imagery, searching images efficiently by their content has become one of the biggest challenges. Content-based image retrieval (CBIR) is therefore an exciting and worthwhile research area. However, recent research has shown that there is a significant gap between low-level image features and the semantic concepts used by humans to interpret images. In other words, images lack clearly defined semantic units of composition analogous to words in text documents. As an attempt to bridge this so-called semantic gap, automatic image annotation has been gaining increasing attention in recent years. This research explores a number of different approaches to automatic image annotation, together with some related issues. The thesis begins with an introduction to different techniques for image description, which forms the foundation of the research on image auto-annotation; we also review state-of-the-art automatic image-annotation techniques. We then present our in-depth research into learning the relationships among the image labels themselves. We introduce a simple label-filtering algorithm, which removes most of the irrelevant labels for a query image while retaining the potential ones. With only a small set of candidate labels remaining, we then explore the relationship between the features to be used and each label class, so that specific and effective features are selected for each class to form a label-specific classifier.
In other words, our approach selects specific features for each individual label and formulates the annotation task as a discriminative classification problem. We also introduce a new hierarchical model for image annotation that mimics human thinking. In this proposed framework, we divide labels into several hierarchies, constructed using our proposed Associative Memory Sharing (AMS) method, for efficient and accurate labeling. We further investigate two challenges in image annotation: the semantic-gap problem and the presence of ambiguous words that may lead to misinterpretation between low-level and high-level information. These problems degrade the performance of the image-to-word mapping. We therefore also propose a discriminative model with a ranking function that optimizes the cost between a target word and the corresponding images, while simultaneously discovering the disambiguated senses of those words that are optimal for supervised tasks. For the different algorithms proposed in this thesis, we conduct comprehensive experiments to evaluate their respective performances: several datasets are used, and our algorithms are compared against a number of state-of-the-art methods. Experimental results demonstrate the superior performance of our proposed algorithms.
Rights: All rights reserved