Author: Lai, Shun Cheung
Title: Deep learning for facial image analysis and recognition
Advisors: Lam, Kin-man (EIE)
Degree: M.Phil.
Year: 2022
Subject: Human face recognition (Computer science)
Deep learning (Machine learning)
Hong Kong Polytechnic University -- Dissertations
Department: Department of Electronic and Information Engineering
Pages: xvi, 105 pages : color illustrations
Language: English
Abstract: Facial image analysis and recognition has been a well-studied research topic over the past few decades. Its major goal is to develop a system that can automatically detect and recognize all human faces captured in unconstrained conditions. Recently, owing to the development of deep convolutional neural networks (CNNs) and the availability of large-scale labeled face data sets, state-of-the-art face recognition models have made tremendous improvements on public benchmark data sets, e.g., achieving over a 99% recognition rate on the Labeled Faces in the Wild (LFW) benchmark. However, although great progress has been made over the past few years, most existing methods can only address face images of normal resolutions and conditions. In this thesis, we present our proposed methods for recognizing high-resolution and low-resolution face images.
First, we investigate the topic of high-resolution face recognition (HRFR). We explore the use of microscopic facial features, which are visible when the resolution of a face image is sufficiently high. Our proposed method can establish a set of dense correspondences between two face images of the same subject, which can then be used to perform face recognition. Furthermore, we have created a skin-patch data set to train a CNN for learning a local skin-patch descriptor. Experimental results demonstrate that our proposed method achieves superior performance under considerable aging and pose variations, and can even retain high performance for highly occluded faces. Second, we study the topic of low-resolution face recognition (LRFR). We first attempted to address LRFR by fusing handcrafted features with a sparse-coding-based algorithm. The proposed sparse-coding-based method outperforms other conventional methods, and even a deep-learning-based method, on five LR data sets, implying that there is room for deep-learning-based methods to better solve the LRFR problem. Therefore, we have further developed a deep model that can super-resolve very low-resolution face images and recover the identity information simultaneously. This is done by combining a pixel-wise image loss and a proposed identity-preserved loss to jointly optimize the model. Experimental results show that this method achieves a promising recognition rate on synthetic LR face data sets. To address the real-world LRFR problem, we have further proposed a deep Siamese network that minimizes the difference between high-resolution and low-resolution features in an end-to-end manner. Experiments have also demonstrated the generalization power and effectiveness of this method for LRFR on both synthetic and real-world LR face data sets.
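The joint optimization described above can be illustrated with a minimal sketch. This is not the thesis's actual implementation: the function names, the squared-distance form of the identity term, and the weighting hyperparameter `lam` are all assumptions made for illustration; the general idea is that a pixel-fidelity term and an identity-embedding term are summed into one training objective.

```python
import numpy as np

def pixel_loss(sr, hr):
    # Pixel-wise mean-squared error between the super-resolved image `sr`
    # and the high-resolution ground truth `hr`.
    return float(np.mean((sr - hr) ** 2))

def identity_loss(feat_sr, feat_hr):
    # Identity-preserved term (hypothetical formulation): squared distance
    # between the face embeddings of the super-resolved and HR images.
    return float(np.sum((feat_sr - feat_hr) ** 2))

def joint_loss(sr, hr, feat_sr, feat_hr, lam=0.1):
    # Weighted combination used to jointly optimize the model;
    # `lam` balances pixel fidelity against identity preservation
    # (an assumed hyperparameter, not a value from the thesis).
    return pixel_loss(sr, hr) + lam * identity_loss(feat_sr, feat_hr)

# Toy example with 2x2 "images" and 2-D "embeddings".
sr = np.zeros((2, 2))
hr = np.ones((2, 2))
feat_sr = np.array([1.0, 0.0])
feat_hr = np.array([0.0, 0.0])
loss = joint_loss(sr, hr, feat_sr, feat_hr, lam=0.5)  # 1.0 + 0.5 * 1.0 = 1.5
```

In practice both terms would be computed on batches inside a deep-learning framework, with the embeddings produced by a pretrained face-recognition network whose gradients guide the super-resolution model toward identity-preserving outputs.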
Rights: All rights reserved
Access: open access

Files in This Item:
File: 6292.pdf | Description: For All Users | Size: 10.77 MB | Format: Adobe PDF

Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.

