Author: Wang, Kuo
Title: Less-constrained iris recognition using convolutional neural network for accurate personal identification
Advisors: Kumar, Ajay (COMP)
Degree: Ph.D.
Year: 2022
Subject: Biometric identification
Biometry -- Data processing
Hong Kong Polytechnic University -- Dissertations
Department: Department of Computing
Pages: xxi, 153 pages : color illustrations
Language: English
Abstract: Iris recognition is widely considered a reliable biometric and is employed in security, forensic, and border-crossing applications. Traditional iris recognition requires highly cooperative users in a stand-and-stare mode with a fixed sensor and near-infrared illumination. However, iris samples acquired in practical applications are more likely to come from less-constrained environments, involving different spectra, different sensors, occlusion, at-a-distance acquisition, off-angle views, and so on. Such factors significantly degrade the verification/identification performance of iris biometric systems, and advanced algorithms are therefore required to enhance matching performance. This thesis focuses on developing effective convolutional neural network (CNN) based algorithms to address less-constrained iris recognition challenges under adverse environments.
We first investigate cross-spectral iris recognition with CNN techniques. Iris recognition systems typically register individuals under near-infrared illumination, whereas surveillance data are often acquired under visible illumination. The capability to accurately match cross-spectral iris images is therefore highly desirable. In our first work, we propose a framework that uses a CNN architecture as a feature extractor and supervised discrete hashing (SDH) as a classifier to accurately address the cross-spectral iris matching problem. SDH also significantly reduces the template size, optimizing storage and matching speed. Experiments on two publicly available cross-spectral iris datasets indicate 78.0% and 65.5% improvements in equal error rate (EER) and validate the effectiveness of our proposed approach compared with prior methods in the literature.
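The core payoff of hashing CNN descriptors into binary templates is that matching reduces to a cheap Hamming-distance comparison over compact codes. The sketch below is a minimal illustration of that idea only; it uses simple sign binarization on toy vectors, not the learned SDH projection or the thesis's actual features.

```python
import numpy as np

def binarize(features):
    """Binarize real-valued feature vectors into compact binary codes.

    Sign binarization stands in for the learned hashing step here;
    either way, the template shrinks to one bit per dimension and
    matching becomes a Hamming-distance comparison.
    """
    return (features > 0).astype(np.uint8)

def hamming_distance(code_a, code_b):
    """Fraction of bits that differ between two binary templates."""
    return float(np.mean(code_a != code_b))

# Toy example: a genuine pair (same identity, small perturbation)
# should land closer in Hamming space than an impostor pair.
rng = np.random.default_rng(0)
probe = rng.normal(size=256)
genuine = probe + rng.normal(scale=0.1, size=256)   # small perturbation
impostor = rng.normal(size=256)

d_genuine = hamming_distance(binarize(probe), binarize(genuine))
d_impostor = hamming_distance(binarize(probe), binarize(impostor))
print(d_genuine, d_impostor)
```

Only the few dimensions where the feature value sits near the binarization threshold flip bits under small perturbations, which is why genuine pairs stay close while impostor codes disagree on roughly half the bits.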
We then focus on learning robust and reliable information from iris images using deeply learned features. We introduce an effective CNN architecture with dilated residual kernels to robustly extract spatially corresponding features from normalized iris images. The proposed approach optimizes the training process with residual learning and simplifies fully convolutional networks by discarding the down-sampling and up-sampling layers when aggregating contextual information from different scales. Experimental results on three publicly available datasets indicate EER improvements of 7.14%, 10.7% and 27.4%, and also show generalization capability in cross-dataset evaluation. At-a-distance iris images inherently reveal periocular information that can be dynamically utilized to enhance non-ideal iris recognition. We therefore also introduce a framework that exploits periocular information to assist iris recognition. We optimize iris matching by considering the importance of different binary bits and reinforce the periocular information with attention on the eye and eyebrow regions. Such collaborative feature learning significantly improves recognition accuracy by 22.9%, 10.4% and 14.6% on three publicly available datasets.
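The appeal of dilated kernels in this setting is that a k x k kernel with dilation d covers a (d*(k-1)+1)-wide window, enlarging the receptive field without the down-sampling/up-sampling layers a standard encoder-decoder needs. A minimal NumPy sketch of a dilated 2-D convolution, purely to illustrate that mechanism (not the thesis's architecture):

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """'Valid' 2-D cross-correlation with a dilated kernel.

    The kernel taps are spaced `dilation` pixels apart, so a 3x3
    kernel with dilation=2 sees a 5x5 window while using the same
    nine weights and keeping full spatial resolution (no pooling).
    """
    kh, kw = kernel.shape
    eff_h = dilation * (kh - 1) + 1        # effective receptive field
    eff_w = dilation * (kw - 1) + 1
    oh = image.shape[0] - eff_h + 1
    ow = image.shape[1] - eff_w + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.ones((3, 3)) / 9.0             # simple averaging kernel

plain = dilated_conv2d(image, kernel, dilation=1)    # 6x6 output
dilated = dilated_conv2d(image, kernel, dilation=2)  # 4x4 output, wider context
print(plain.shape, dilated.shape)
```

Stacking such layers with increasing dilation aggregates context at several scales; wrapping each in a residual connection (as the thesis's dilated residual kernels do) then eases optimization.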
Our last contribution in this thesis is developing CNN-based approaches for iris recognition using augmented reality (AR)/virtual reality (VR) devices. With the emergence of the metaverse, iris recognition on AR/VR devices is highly desired as a feasible biometric for personal identification. The lack of publicly available AR/VR iris recognition datasets is a critical limitation in this research field; we therefore first introduce publicly available two-session AR/VR iris datasets with 384 subjects. We then propose a novel shifted and extended quadlet loss to learn discriminative information from close-range and off-angle iris samples. Our proposed framework consolidates spatially corresponding features from the iris cue with abstract features from the periocular cue, boosting recognition accuracy by 96.3% and 30.6% compared with state-of-the-art algorithms.
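The shifted and extended quadlet loss is the thesis's own formulation; as background only, the sketch below shows the generic quadruplet margin loss from the metric-learning literature, which likewise trains on four samples at a time. All vectors and margins here are illustrative toy values.

```python
import numpy as np

def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=1.0, margin2=0.5):
    """Generic quadruplet margin loss on embedding vectors.

    Term 1 pulls anchor-positive together relative to a negative;
    term 2 additionally pushes the two negative identities apart,
    tightening intra-class variance. This is NOT the thesis's
    shifted and extended quadlet loss, only a related baseline.
    """
    d = lambda a, b: float(np.sum((a - b) ** 2))   # squared Euclidean distance
    term1 = max(0.0, d(anchor, positive) - d(anchor, negative1) + margin1)
    term2 = max(0.0, d(anchor, positive) - d(negative1, negative2) + margin2)
    return term1 + term2

a = np.array([0.0, 0.0])            # anchor embedding
p = np.array([0.1, 0.0])            # same identity, close by
n1 = np.array([2.0, 0.0])           # first negative identity
n2 = np.array([0.0, 2.0])           # second negative identity

easy = quadruplet_loss(a, p, n1, n2)                    # well separated
hard = quadruplet_loss(a, p, np.array([0.2, 0.0]), n2)  # negative too close
print(easy, hard)
```

When the classes are already separated by more than the margins the loss vanishes; a negative that crowds the anchor produces a positive loss, so gradients focus on the hard (e.g. off-angle or close-range) samples.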
Rights: All rights reserved
Access: open access

Files in This Item:
File: 6642.pdf | Description: For All Users | Size: 13.97 MB | Format: Adobe PDF

Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.

