Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Electrical and Electronic Engineering | en_US |
| dc.contributor.advisor | Lam, Kin Man Kenneth (EEE) | en_US |
| dc.creator | Zhang, Rui | - |
| dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/14047 | - |
| dc.language | English | en_US |
| dc.publisher | Hong Kong Polytechnic University | en_US |
| dc.rights | All rights reserved | en_US |
| dc.title | Deep learning for age-invariant face recognition | en_US |
| dcterms.abstract | Face recognition, a pivotal area in computer vision, has been widely applied in diverse fields such as security, employee attendance, and remote authentication. The field has evolved significantly over decades, transitioning from early feature-based algorithms to the current sophisticated convolutional neural network (CNN) methodologies, driven by advancements in hardware, big data, and deep learning. While notable progress has been made in addressing facial recognition challenges involving varied expressions and gestures, cross-age face recognition remains a formidable challenge. This is primarily due to the intra-class differences caused by aging and distinct facial aging patterns across individuals. | en_US |
| dcterms.abstract | Addressing the persistent challenges in cross-age face recognition, this study introduces the Cross-Age Convolutional Neural Network (CA-CNN) model, innovatively designed for effective feature extraction in the context of limited public database resources and computational constraints. Our approach combines eigen-decomposition and correlation constraint techniques with a lightweight network architecture. The CA-CNN model incorporates a deep feature extraction module and an Efficient Convolutional Batch Attention Module (ECBAM), meticulously designed to discern and isolate age-related factors from identity features, enhancing the robustness of age-invariant identity recognition. Furthermore, we implement the Batch Kernel Canonical Correlation Analysis (BKCCA) module, a novel adaptation of the KCCA algorithm for batch training in CNNs, which applies adversarial learning to refine the feature separation process. The effectiveness of the CA-CNN model is exemplified by its outstanding results on the Morph Album 2 and CALFW databases: it achieved an average Rank-1 recognition accuracy of 99.03% across the two databases. Moreover, the model demonstrated impressive face verification accuracies of 96.5% and 86.83% on Morph Album 2 and CALFW, respectively, even with a limited training dataset of 460,000 images, underscoring its robust performance in diverse scenarios. This represents a significant stride in cross-age face recognition, offering a scalable and efficient solution in environments constrained by data availability and computational resources. | en_US |
| dcterms.extent | 47 pages : color illustrations | en_US |
| dcterms.isPartOf | PolyU Electronic Theses | en_US |
| dcterms.issued | 2023 | en_US |
| dcterms.educationalLevel | M.Sc. | en_US |
| dcterms.educationalLevel | All Master | en_US |
| dcterms.accessRights | restricted access | en_US |
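The abstract describes a BKCCA module that decorrelates age-related factors from identity features during batch training. The thesis itself is access-restricted, so the exact formulation is not available here; as a minimal illustrative sketch, a linear batch correlation penalty (a simplification of the kernel CCA constraint named in the abstract, with hypothetical function and variable names) could look like:

```python
import numpy as np

def batch_correlation_loss(identity_feats, age_feats, eps=1e-8):
    """Hypothetical sketch of a batch decorrelation penalty, in the
    spirit of the BKCCA module described in the abstract (the actual
    kernelized formulation is not given in this record).

    Each argument is a (batch_size, dim) array of features.
    """
    # Center each feature batch per dimension.
    zi = identity_feats - identity_feats.mean(axis=0, keepdims=True)
    za = age_feats - age_feats.mean(axis=0, keepdims=True)
    # Standardize so the penalty is scale-invariant.
    zi = zi / (zi.std(axis=0, keepdims=True) + eps)
    za = za / (za.std(axis=0, keepdims=True) + eps)
    n = identity_feats.shape[0]
    # Cross-correlation matrix between the two representations.
    corr = zi.T @ za / n
    # Driving this toward zero decorrelates age from identity.
    return float(np.sum(corr ** 2))
```

Minimizing such a penalty on the identity branch (while an auxiliary head predicts age from the age branch, trained adversarially) would push age information out of the identity embedding, which is the separation goal the abstract attributes to BKCCA.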
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 8714.pdf | For All Users (off-campus access for PolyU Staff & Students only) | 1.21 MB | Adobe PDF | View/Open |
Copyright Undertaking
As a bona fide Library user, I declare that:
- I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
- I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
- I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.
By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.
Please use this identifier to cite or link to this item:
https://theses.lib.polyu.edu.hk/handle/200/14047