Author: Zhao, Zijing
Title: Towards least-constrained human identification by recognizing iris and periocular at-a-distance
Advisors: Kumar, Ajay (COMP)
Degree: Ph.D.
Year: 2018
Subject: Hong Kong Polytechnic University -- Dissertations
Biometric identification
Pattern recognition systems
Optical pattern recognition
Department: Department of Computing
Pages: xv, 155 pages : color illustrations
Language: English
Abstract: Recognizing humans in least-constrained environments is one of the key research goals for academia and industry. At-a-distance eye-region-based human recognition using iris and periocular information has emerged as a promising approach to this problem, owing to the high uniqueness and stability of eye regions under less-constrained environments. However, image samples acquired under less-constrained conditions usually suffer from degrading factors such as noise, occlusion and low resolution. Therefore, advanced algorithms beyond traditional methods are required to fully exploit the useful iris and periocular information in degraded images. This thesis focuses on developing effective and reliable algorithms for at-a-distance iris and periocular recognition under such conditions. The first stage of this thesis investigates accurate iris segmentation under less-constrained environments, which is a key prerequisite for the iris recognition process. The key challenge comes from undesired factors such as noise, occlusion and light-source reflection in degraded eye images. We built a novel relative total variation model with L1-norm regularization, referred to as RTV-L1, to deal with the aforementioned obstacles. With this new model, noise and texture can be suppressed in the acquired eye images while structures are soundly preserved, which provides ideal conditions for preliminary segmentation. We then applied a series of robust post-processing steps to refine the segmentation contours. The proposed approach significantly outperforms other state-of-the-art iris segmentation methods, especially on degraded eye images acquired under less-constrained environments.
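The idea of structure-preserving smoothing as a segmentation pre-step can be illustrated with a minimal sketch. Note this is not the thesis's RTV-L1 model: it is an ordinary ROF-style total variation smoother (L2 fidelity, smoothed TV term) written in plain numpy, and the function name, parameters, and iteration scheme are all illustrative assumptions. It shows the same qualitative effect the abstract describes: noise and fine texture are suppressed while strong edges (e.g., iris boundaries) are preserved.

```python
import numpy as np

def tv_smooth(f, lam=0.15, step=0.2, iters=200, eps=1e-6):
    """Illustrative gradient descent on the ROF energy
    0.5 * ||u - f||^2 + lam * TV(u), with TV smoothed by eps
    to keep it differentiable. Not the thesis's RTV-L1 model."""
    u = f.astype(float).copy()
    for _ in range(iters):
        # forward differences of u (last row/column padded, giving zero gradient)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field (backward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # descend on fidelity + TV terms
        u -= step * ((u - f) - lam * div)
    return u
```

On a noisy step image, the smoothed output has visibly lower variance in flat regions while the step edge survives, which is the property that makes such smoothing a useful front end for contour-based iris segmentation.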
Following the RTV-L1-based iris segmentation framework, we developed a novel deep-learning-based approach for extracting spatially corresponding features from iris images for more accurate and reliable matching. This approach is based on a fully convolutional network (FCN), which retains the critical locality of the deep iris features, and a newly designed extended triplet loss (ETL) function that accommodates non-iris occlusion and spatial translation during the learning process. The learned features are shown to offer superior matching accuracy and outstanding generalizability across different imaging environments, compared with traditional hand-crafted iris features as well as convolutional neural network (CNN) based deep features. Another important contribution of this thesis is the development of deep-learning-based periocular recognition algorithms for improved accuracy and adaptiveness. Inspired by the human inference mechanism, we first investigated incorporating high-level semantic information in the periocular images (e.g., gender, left/right) into deep features learned by a CNN. Supplementing such semantic information helps recover more comprehensive and discriminative features and reduces over-fitting, and superior performance over state-of-the-art periocular recognition methods was obtained. Furthermore, we proposed an attention-based deep architecture for periocular recognition to further simulate the human visual classification system. Here, we inferred that the eye and eyebrow regions are of critical importance for identifying periocular patterns and deserve more attention during visual feature extraction. We therefore incorporated such visual attention by emphasizing the convolutional responses within detected eye and eyebrow regions in CNNs to enhance feature discriminability. This approach further improved state-of-the-art performance substantially for periocular recognition under varying less-constrained conditions.
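The core matching idea behind the extended triplet loss can be sketched outside of network training: compare two spatial feature maps only where both are valid (masking out non-iris occlusion) and take the minimum distance over small horizontal shifts (tolerating rotation/translation of the normalized iris), then apply the usual triplet margin. This is a hedged numpy illustration, not the thesis's exact ETL formulation (which is differentiable and used inside FCN training); the shift range, MSE distance, and margin value are assumptions.

```python
import numpy as np

def shifted_masked_dist(a, b, mask_a, mask_b, max_shift=2):
    """Minimum mean squared difference between feature maps a and b
    over horizontal shifts of b, restricted to pixels both masks mark
    as valid (i.e., non-occluded iris). Illustrative only."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        b_s = np.roll(b, s, axis=1)          # shift features and mask together
        m_s = np.roll(mask_b, s, axis=1)
        valid = mask_a & m_s
        if valid.sum() == 0:
            continue                          # no overlapping valid pixels
        d = np.mean((a[valid] - b_s[valid]) ** 2)
        best = min(best, d)
    return best

def extended_triplet_loss(anchor, pos, neg, m_a, m_p, m_n, margin=0.5):
    """Triplet margin loss built on the shift- and occlusion-tolerant
    distance above: pull genuine pairs together, push impostors apart."""
    d_ap = shifted_masked_dist(anchor, pos, m_a, m_p)
    d_an = shifted_masked_dist(anchor, neg, m_a, m_n)
    return max(0.0, d_ap - d_an + margin)
```

Because the distance is minimized over shifts, a positive sample that is merely a translated copy of the anchor yields (near-)zero distance, so the loss does not penalize spatial misalignment the way a naive pixelwise triplet loss would.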
Rights: All rights reserved
Access: open access

Files in This Item:
File: 991022173537003411.pdf | Description: For All Users | Size: 3.68 MB | Format: Adobe PDF

Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.

