Content-sensitive salient region modelling with applications

Pao Yue-kong Library Electronic Theses Database

Author: Liang, Zhen
Title: Content-sensitive salient region modelling with applications
Degree: Ph.D.
Year: 2013
Subject: Image processing.
Image processing -- Digital techniques.
Hong Kong Polytechnic University -- Dissertations
Department: Dept. of Electronic and Information Engineering
Pages: xxiv, 208 p. : ill. (some col.) ; 30 cm.
Language: English
InnoPac Record: http://library.polyu.edu.hk/record=b2639073
URI: http://theses.lib.polyu.edu.hk/handle/200/7134
Abstract: This thesis presents novel content-sensitive salient models with applications to salient feature extraction, salient object detection and yarn surface grading. Three main contributions are reported in the thesis: (1) studies on salient feature extraction with application to Content-Based Image Retrieval; (2) a salient object detection model using content-sensitive hypergraph representation and partitioning; and (3) two novel salient models for yarn surface grading. In the first investigation, two important issues are addressed: one is to accurately describe image content, and the other is to adequately measure the similarity between images. Two salient feature extraction approaches are proposed. For single-object images, a local descriptor, namely salient-SIFT (Scale-Invariant Feature Transform), is proposed; it is invariant under different transformations and benefits image retrieval with low computational complexity. For a complex image containing objects that cannot be segmented using visual features alone, an eye-tracking mechanism is incorporated into the image retrieval system. Human Regions-Of-Interest (hROIs) are determined from the segmented image, and a combination of low-level features and high-level guidance from eye-movement data is used to represent image content and measure the similarity between images. Experimental results on several databases show that the development of suitable feature representations and similarity measures is especially important for Content-Based Image Retrieval.
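The idea of weighting low-level features by eye-tracking-derived hROIs can be illustrated with a minimal sketch. This is not the thesis's feature pipeline (which uses salient-SIFT and segmented regions); it substitutes a plain grey-level histogram purely for illustration, and the `roi_weight` parameter is a hypothetical knob, not a value from the thesis.

```python
# Sketch: hROI-weighted histogram similarity for image retrieval.
# Images are grey-level grids (lists of lists of ints in 0..255);
# `hroi` is a same-shaped boolean mask marking pixels inside a
# human Region-Of-Interest obtained from eye-movement data.

def weighted_histogram(img, hroi, bins=8, roi_weight=3.0):
    """Grey-level histogram where hROI pixels count `roi_weight` times."""
    hist = [0.0] * bins
    for row, mask_row in zip(img, hroi):
        for v, inside in zip(row, mask_row):
            w = roi_weight if inside else 1.0
            hist[min(v * bins // 256, bins - 1)] += w
    total = sum(hist)
    return [h / total for h in hist]

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Because the histograms are normalized after weighting, pixels fixated by the observer simply contribute more mass, so two images matching inside their hROIs score higher than two images matching only in the background.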
In the second investigation, we propose a bottom-up region-based computational model for the salient object detection problem, built on the concept of a potential Region-Of-Interest (p-ROI) and on content-sensitive hypergraph representation and partitioning. First, multi-scale over-segmentation and p-ROI detection are conducted separately on an input image. Second, with the assistance of the detected p-ROI, the most discriminant color channel is determined. Third, a hypergraph is constructed to describe the complex relationships among the multi-scale segmented regions. Hypergraph weights are determined by the similarities among regions in terms of global and local attributes extracted in the most discriminant color channel. An adaptive hypergraph construction method, named Adaptive Multi-scale Color Image Neighborhood Hypergraph Representation (AMCINHR), is proposed to automatically determine the size of hyperedges according to the image content. Finally, an Incremental Spectral Hypergraph Partitioning (ISHP) method is used to generate candidate regions for the final salient object: all candidate regions are evaluated against the p-ROI, and the best-matching one is selected as the final salient object. The proposed methodology has been extensively evaluated on a large benchmark database of 5000 natural images. Experimental results show that our model not only achieves considerable improvement in terms of commonly adopted performance measures in salient object detection, but also provides more precise object boundaries, which are desirable for further image processing and understanding. In the third investigation, two intelligent salient models, one based on single-image content and the other on multiple-image content, are proposed to automatically evaluate yarn surface appearance, overcoming the drawbacks of traditional manual approaches.
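The hypergraph step of the second investigation — similarity-based hyperedge weights followed by spectral partitioning — can be sketched generically. This is not the thesis's AMCINHR or ISHP algorithm: it uses a toy one-dimensional region feature, a standard clique expansion of the hypergraph, and a plain Fiedler-vector bipartition; all data and parameters below are illustrative assumptions.

```python
import math

def hyperedge_weight(features, edge, sigma=30.0):
    """Gaussian similarity of the regions in a hyperedge (toy 1-D features)."""
    d = max(abs(features[i] - features[j]) for i in edge for j in edge)
    return math.exp(-d * d / (2 * sigma * sigma))

def clique_expand(n, edges, weights):
    """Approximate the hypergraph by a pairwise graph: each hyperedge
    spreads its weight over all region pairs it contains."""
    W = [[0.0] * n for _ in range(n)]
    for e, w in zip(edges, weights):
        e = list(e)
        for a in range(len(e)):
            for b in range(a + 1, len(e)):
                W[e[a]][e[b]] += w / (len(e) - 1)
                W[e[b]][e[a]] += w / (len(e) - 1)
    return W

def fiedler_bipartition(W, iters=500):
    """Split nodes by the sign of the Fiedler vector of L = D - W,
    found by power iteration on the shifted matrix c*I - L."""
    n = len(W)
    deg = [sum(row) for row in W]
    c = 2 * max(deg) + 1.0            # shift keeps c*I - L positive definite
    v = [float(i) for i in range(n)]  # generic start vector (may fail in
                                      # symmetric corner cases; fine for a sketch)
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]     # deflate the trivial all-ones eigenvector
        v = [c * v[i] - (deg[i] * v[i] - sum(W[i][j] * v[j] for j in range(n)))
             for i in range(n)]       # multiply by c*I - L
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    return [0 if x < 0 else 1 for x in v]

# Toy example: four regions, two bright and two dark, with one weak
# hyperedge bridging the groups; the bipartition should separate them.
feats = [10, 12, 200, 205]
edges = [(0, 1), (2, 3), (1, 2)]
W = clique_expand(4, edges, [hyperedge_weight(feats, e) for e in edges])
labels = fiedler_bipartition(W)
```

In the thesis's pipeline a step like this would feed the candidate regions to the p-ROI evaluation; here the clique expansion stands in for the hypergraph-native spectral machinery of ISHP.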
In the single-image content based model, attention-driven fault detection, wavelet texture analysis and statistical measurement are developed and combined to fully extract the characteristic features of yarn surface appearance from images, and a fuzzy ARTMAP (FAM) neural network is employed to classify and grade yarn surface quality based on the extracted features. In the multiple-image content based model, inspired by human observation behavior, a visual attention model for multiple-image comparison is explored, which enables saliency evaluation in cases where other image contents are involved. Furthermore, a structural feature extraction strategy is introduced in which two levels of features (High-Level and Low-Level) and three types of features (Global, Local-Local, Local-Global) are extracted, and non-linear mapping functions are constructed to describe the relationships between features and saliency values, and among different images. Experimental results on the two proposed salient models show that visual attention is beneficial to yarn surface grading, and that the multiple-image content based saliency evaluation is closer to what is obtained by the Human Visual System (HVS) in visual inspection.
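The wavelet-texture-analysis ingredient of the single-image model can be illustrated with a one-level 2-D Haar transform whose detail-subband energies serve as a texture signature. This is a generic sketch, not the thesis's feature set or wavelet choice, and subband naming conventions (LH/HL) vary between references, so the comments describe the subbands as row/column differences instead.

```python
def haar2d(block):
    """One-level 2-D Haar transform of an even-sided grey-level block.
    Returns the LL (average) and three detail subbands, each half-sized."""
    h = len(block) // 2
    ll = [[0.0] * h for _ in range(h)]
    row_d = [[0.0] * h for _ in range(h)]  # row difference: horizontal structures
    col_d = [[0.0] * h for _ in range(h)]  # column difference: vertical structures
    diag = [[0.0] * h for _ in range(h)]   # diagonal detail
    for i in range(h):
        for j in range(h):
            a, b = block[2 * i][2 * j],     block[2 * i][2 * j + 1]
            c, d = block[2 * i + 1][2 * j], block[2 * i + 1][2 * j + 1]
            ll[i][j]    = (a + b + c + d) / 4.0
            row_d[i][j] = (a + b - c - d) / 4.0
            col_d[i][j] = (a - b + c - d) / 4.0
            diag[i][j]  = (a - b - c + d) / 4.0
    return ll, row_d, col_d, diag

def texture_energies(block):
    """Mean-square energy of each detail subband: a 3-number texture signature
    that a classifier (e.g. fuzzy ARTMAP in the thesis) could consume."""
    _, row_d, col_d, diag = haar2d(block)
    def energy(sb):
        return sum(v * v for row in sb for v in row) / (len(sb) * len(sb))
    return energy(row_d), energy(col_d), energy(diag)
```

A uniform patch yields zero detail energy everywhere, while a patch with vertical yarn-like stripes concentrates its energy in the column-difference subband, so the three energies discriminate texture orientation and strength.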

Files in this item

Files Size Format
b26390735.pdf 9.432Mb PDF