Regularized robust coding and dictionary learning for face recognition

Pao Yue-kong Library Electronic Theses Database

Author: Yang, Meng
Title: Regularized robust coding and dictionary learning for face recognition
Degree: Ph.D.
Year: 2012
Subject: Human face recognition (Computer science)
Machine learning.
Hong Kong Polytechnic University -- Dissertations
Department: Dept. of Computing
Pages: xvi, 179 p. : ill. ; 30 cm.
Language: English
InnoPac Record: http://library.polyu.edu.hk/record=b2551294
URI: http://theses.lib.polyu.edu.hk/handle/200/6803
Abstract: How to represent an object and how its representation should be learnt are fundamental problems in pattern classification tasks such as face recognition (FR). As one of the most visible research topics in computer vision, machine learning and biometrics, FR that is robust to occlusion, misalignment and various variations (e.g., pose, expression and illumination) remains a very challenging problem after many years of investigation. Recently, sparse representation theory has developed rapidly and has been successfully applied to various inverse problems such as image reconstruction. Efforts have also been made to use sparse representation for signal classification. In particular, by coding a test face sample as a sparse linear combination of the training samples and classifying it by evaluating which class leads to the minimum coding residual, sparse representation based classification (SRC) achieves very interesting results for FR. The success of SRC has greatly boosted research on sparsity based classification and the associated dictionary learning techniques. Although SRC has shown promising performance in robust FR, many problems remain to be addressed. What is the working mechanism of SRC? What is the role of l₀- or l₁-norm sparsity in it? How can effective features be extracted to improve the accuracy and speed of SRC? How can a robust representation fidelity term be designed to handle various outliers? How can a dictionary be trained to improve classification? In this thesis, we aim to answer these questions with tools from statistical learning, convex optimization, and pattern classification. It is widely believed that the l₁-norm sparsity constraint on the coding coefficients plays a key role in the success of SRC.
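The SRC decision rule just described (code the test sample over all training samples, then classify by the minimum class-wise residual) can be sketched in a few lines. This is only an illustrative sketch: the ISTA solver, the function names, and the parameter `lam` are assumptions of this sketch, not the thesis's exact formulation.

```python
import numpy as np

def ista_l1(A, y, lam=0.05, n_iter=200):
    """Solve min_x 0.5*||y - A@x||^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA), a simple l1 sparse-coding solver."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(A, labels, y, lam=0.05):
    """SRC-style rule: sparse-code y over all training samples (columns
    of A), then pick the class whose samples best reconstruct y."""
    x = ista_l1(A, y, lam)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

Note that the entire training set serves as the dictionary; no per-class model is trained in advance.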
In this thesis, however, it is shown that the collaborative representation mechanism (i.e., using all training samples to collaboratively represent the testing sample) is much more crucial to the success of face classification than the l₁-norm sparsity of the coding coefficients. A new framework, namely collaborative representation based classification (CRC), is then established and discussed conceptually and experimentally. CRC has various instantiations obtained by applying different norms to the coding residual and the coding coefficients, and SRC is a special case of it. It is further shown that l₂-regularization of the coding coefficients in CRC achieves performance similar to or better than that of l₁-regularization, with much higher computational efficiency.
We then discuss the use of local features to improve the performance and speed of SRC. We present a Gabor feature based robust representation and classification (GRRC) scheme with Gabor occlusion dictionary (GOD) learning. It is shown that the use of the Gabor feature and GOD not only improves FR accuracy but also significantly reduces the computational cost of handling face occlusion. This part of the work also indicates that an appropriate representation model (e.g., the regularization and the dictionary) is closely related to the features of the involved signals, which should be taken into account when designing effective representation models.

The third major contribution of this thesis is the development of regularized robust coding (RRC) for FR. In RRC, a robust representation fidelity term is proposed to handle various outliers in face images. RRC is a maximum a posteriori solution obtained by assuming that the coding residuals and the coding coefficients are respectively independent and identically distributed. An iteratively reweighted regularized robust coding algorithm is developed to solve the RRC model efficiently. Extensive experiments on representative face databases demonstrate that RRC is much more effective and efficient than state-of-the-art sparse representation based methods in dealing with face occlusion, corruption, lighting and expression changes, etc.

Finally, we discuss the problem of dictionary learning (DL) for sparse representation based pattern classification, and propose a novel Fisher discrimination dictionary learning (FDDL) scheme. Based on the Fisher discrimination criterion, a structured dictionary, whose atoms correspond to the class labels, is learnt so that the reconstruction residual after sparse coding can be used for pattern classification. Meanwhile, the Fisher discrimination criterion is imposed on the coding coefficients so that they have small within-class scatter but large between-class scatter.
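The within-class and between-class scatter constraint on the coding coefficients can be made concrete with a small helper. The sketch below computes the two scatter traces that a Fisher-style discrimination term trades off; the function name and the stabilizing term mentioned in the closing comment are illustrative assumptions, not the thesis's exact objective.

```python
import numpy as np

def fisher_scatter_traces(X, labels):
    """Traces of the within-class and between-class scatter matrices of
    the coding coefficients X (one sample per column)."""
    m = X.mean(axis=1, keepdims=True)              # global mean coefficient
    sw = sb = 0.0
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mc = Xc.mean(axis=1, keepdims=True)        # class mean coefficient
        sw += np.sum((Xc - mc) ** 2)               # tr(S_W) contribution
        sb += Xc.shape[1] * np.sum((mc - m) ** 2)  # tr(S_B) contribution
    return sw, sb

# A Fisher-style discrimination term rewards small sw and large sb,
# e.g. f(X) = sw - sb + eta * ||X||_F^2, where the eta term (an
# illustrative choice) keeps the non-convex difference well behaved.
```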
A new classification scheme associated with the proposed FDDL method is then presented, exploiting the discriminative information in both the reconstruction residual and the sparse coding coefficients. The proposed FDDL is extensively evaluated on benchmark image databases in comparison with existing sparsity and DL based classification methods.
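Returning to the iteratively reweighted algorithm mentioned for the RRC model: it alternates a weighted regularized least-squares fit with a residual-driven reweighting that suppresses outlier pixels. A minimal IRLS-style sketch, where the logistic weight function and the parameters `mu` and the mean-residual scale `delta` are illustrative stand-ins for the adaptively estimated quantities in the thesis:

```python
import numpy as np

def rrc_irls(A, y, lam=0.1, mu=8.0, n_iter=10):
    """Sketch of iteratively reweighted regularized coding: pixels with
    large coding residuals receive small weights, so occluded or
    corrupted pixels contribute little to the fit."""
    w = np.ones(A.shape[0])                    # one weight per pixel
    for _ in range(n_iter):
        WA = A * w[:, None]                    # row-weighted dictionary
        # weighted ridge step: x = (A^T W A + lam*I)^{-1} A^T W y
        x = np.linalg.solve(A.T @ WA + lam * np.eye(A.shape[1]), WA.T @ y)
        r2 = (y - A @ x) ** 2                  # squared coding residuals
        delta = np.mean(r2) + 1e-12            # heuristic residual scale
        z = np.clip(mu * (r2 - delta) / delta, -50.0, 50.0)
        w = 1.0 / (1.0 + np.exp(z))            # logistic weights in (0, 1)
    return x, w
```

Pixels whose squared residual stays well above the running scale end up with weights near zero, which is how a grossly corrupted or occluded region is effectively excluded from the coding.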

Files in this item

Files Size Format
b25512948.pdf 3.297 MB PDF