Full metadata record
dc.contributor: Department of Computing
dc.contributor.advisor: Kumar, Ajay (COMP)
dc.creator: Zhao, Zijing
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/9759
dc.language: English
dc.publisher: Hong Kong Polytechnic University
dc.rights: All rights reserved
dc.title: Towards least-constrained human identification by recognizing iris and periocular at-a-distance
dcterms.abstract: Recognizing humans in least-constrained environments is a key research goal for both academia and industry. At-a-distance eye-region-based human recognition using iris and periocular information has emerged as a promising approach to this problem, owing to the high uniqueness and stability of eye regions under less-constrained environments. However, image samples acquired under less-constrained conditions usually suffer from degrading factors such as noise, occlusion and low resolution. Therefore, advanced algorithms beyond traditional methods are required to fully exploit the useful iris and periocular information in degraded images. This thesis focuses on developing effective and reliable algorithms for at-a-distance iris and periocular recognition under such conditions. The first stage of this thesis investigates accurate iris segmentation under less-constrained environments, which is a key prerequisite for the iris recognition process. The key challenge comes from undesired factors such as noise, occlusion and light-source reflections in degraded eye images. We developed a novel relative total variation model with L1-norm regularization, referred to as RTV-L1, to address these obstacles. With this model, noise and texture can be suppressed in the acquired eye images while structures are well preserved, which provides ideal conditions for preliminary segmentation. We then applied a series of robust post-processing operations to refine the segmentation contours. The proposed approach significantly outperforms other state-of-the-art iris segmentation methods, especially for degraded eye images acquired under less-constrained environments.
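To make the segmentation idea above concrete, the following is a minimal Python sketch of structure-preserving smoothing followed by a coarse mask and morphological clean-up. It substitutes scikit-image's ordinary total-variation denoiser for the thesis's RTV-L1 model, and the function name, threshold choice and parameter values are illustrative assumptions rather than the thesis's actual implementation.

```python
# Minimal sketch: structure-preserving smoothing plus a coarse pupil/iris mask.
# NOTE: plain TV denoising stands in for the thesis's RTV-L1 model here; the
# function name and parameters are assumptions for illustration only.
import numpy as np
from skimage import io
from skimage.restoration import denoise_tv_chambolle
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, remove_small_objects, disk

def coarse_iris_mask(path, tv_weight=0.2):
    eye = io.imread(path, as_gray=True).astype(np.float64)

    # Suppress noise and fine texture while keeping strong boundaries
    # (pupil and limbus) intact.
    smoothed = denoise_tv_chambolle(eye, weight=tv_weight)

    # Dark pupil/iris pixels fall below a global Otsu threshold.
    mask = smoothed < threshold_otsu(smoothed)

    # Simple post-processing stand-in: suppress specular-reflection holes
    # and small spurious blobs.
    mask = binary_opening(mask, disk(3))
    mask = remove_small_objects(mask, min_size=200)
    return mask
```

In the thesis, this smoothing stage only supports a preliminary segmentation; the final iris boundaries are refined by a series of robust post-processing steps.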
dcterms.abstract: Building on the RTV-L1-based iris segmentation framework, we developed a novel deep-learning-based approach for extracting spatially corresponding features from iris images for more accurate and reliable matching. This approach is based on a fully convolutional network (FCN), which retains the critical locality of the deep iris features, and a newly designed extended triplet loss (ETL) function that accommodates non-iris occlusion and spatial translation during the learning process. The learned features are shown to offer superior matching accuracy and outstanding generalizability across different imaging environments, compared with traditional hand-crafted iris features as well as convolutional neural network (CNN) based deep features. Another important contribution of this thesis is the development of deep-learning-based periocular recognition algorithms with improved accuracy and adaptability. Inspired by the human inference mechanism, we first investigated incorporating high-level semantic information from periocular images (e.g., gender and left/right eye side) into the deep features learned by a CNN. Supplementing such semantic information helps recover more comprehensive and discriminative features and reduces over-fitting, and superior performance over state-of-the-art periocular recognition methods was obtained. Furthermore, we proposed an attention-based deep architecture for periocular recognition to further mimic the human visual classification system. Here, we inferred that the eye and eyebrow regions are of critical importance for identifying periocular images and deserve more attention during visual feature extraction. We therefore incorporated such visual attention by emphasizing the convolutional responses within detected eye and eyebrow regions in the CNN to enhance feature discriminability. This approach further improved state-of-the-art performance markedly for periocular recognition under varying less-constrained conditions.
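As a rough illustration of how a triplet loss can be extended to tolerate non-iris occlusion and spatial translation, the sketch below (in PyTorch) computes a mask-aware distance between FCN feature maps that is minimised over small horizontal shifts before applying the usual triplet margin. The function names, shift range and exact distance form are assumptions for illustration and differ from the thesis's ETL formulation.

```python
# Schematic of a shift-tolerant, mask-aware triplet loss in the spirit of the
# extended triplet loss (ETL); not the thesis's exact formulation.
import torch

def masked_shift_distance(f_a, f_b, m_a, m_b, max_shift=4):
    # f_*: (B, H, W) feature maps from an FCN; m_*: (B, H, W) valid-iris masks.
    dists = []
    for s in range(-max_shift, max_shift + 1):
        fb = torch.roll(f_b, shifts=s, dims=2)   # horizontal shift ~ eye rotation
        mb = torch.roll(m_b, shifts=s, dims=2)
        valid = m_a * mb                          # compare only mutually valid pixels
        d = ((f_a - fb) ** 2 * valid).sum(dim=(1, 2)) / valid.sum(dim=(1, 2)).clamp(min=1)
        dists.append(d)
    # Keep the best-aligned distance for each pair in the batch.
    return torch.stack(dists, dim=0).min(dim=0).values

def extended_triplet_loss(f_a, f_p, f_n, m_a, m_p, m_n, margin=0.2):
    d_ap = masked_shift_distance(f_a, f_p, m_a, m_p)
    d_an = masked_shift_distance(f_a, f_n, m_a, m_n)
    return torch.clamp(d_ap - d_an + margin, min=0).mean()
```

Minimising over shifts plays a role similar to the circular template shifting used in traditional IrisCode matching to absorb in-plane eye rotation, while the masks keep eyelid and reflection pixels out of the distance.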
dcterms.extent: xv, 155 pages : color illustrations
dcterms.isPartOf: PolyU Electronic Theses
dcterms.issued: 2018
dcterms.educationalLevel: Ph.D.
dcterms.educationalLevel: All Doctorate
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
dcterms.LCSH: Biometric identification
dcterms.LCSH: Pattern recognition systems
dcterms.LCSH: Optical pattern recognition
dcterms.accessRights: open access

Files in This Item:
File: 991022173537003411.pdf
Description: For All Users
Size: 3.68 MB
Format: Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/9759