Full metadata record
DC Field / Value / Language
dc.contributor: Department of Computing [en_US]
dc.creator: Yang, Zhongqi
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/11834
dc.language: English [en_US]
dc.publisher: Hong Kong Polytechnic University [en_US]
dc.rights: All rights reserved [en_US]
dc.title: Analysis of smartphone images for vision screening of refractive errors [en_US]
dcterms.abstract: Refractive error is the most common visual impairment, affecting millions of people globally. Regular vision screening is the recommended strategy for ensuring timely diagnosis and treatment; however, many people lack access to optometric care, and a comprehensive vision examination is out of reach for them. There is therefore a need for fast, low-cost, and easy-to-operate vision screening approaches. In this thesis, we investigate the possibility of conducting photorefraction, a common vision screening procedure, on the mobile platform to address this challenge. [en_US]
dcterms.abstract: Our approach exploits machine learning algorithms and computer vision techniques. Starting from optometric principles and prior studies, we create several hand-crafted features and corresponding detection methods. The experimental results indicate that our detection methods outperform contemporary approaches, leading to better performance in refractive error measurement and amblyopia risk factor detection. We then move on to pre-trained features extracted by convolutional neural networks (CNNs). We employ the convolutional layers of multiple pre-trained CNN models to encode features and train machine learning models to predict the refractive error. The experiments show promising results, even though the CNN models were not trained on photorefraction datasets. [en_US]
dcterms.abstract: Given these encouraging results, we further investigate the possibility of data augmentation. One of our challenges is that it is not feasible to collect enough data to train a well-performing CNN model from scratch. We therefore investigate the use of synthetic data for augmentation. We develop a model of the eye based on the principle of photorefraction and use it to generate synthetic pupil images with predetermined refractive errors. Evaluation results show that models trained on these synthetic pupil images achieve performance similar to models trained on real images across multiple experiments, which provides solid evidence for the correctness of our photorefraction model. [en_US]
dcterms.abstract: We finally apply transfer learning to address the insufficient-data issue. CNN models pre-trained on large-scale public image datasets are fine-tuned with photorefraction images, and the experimental results show a large improvement. The CNN models are then trained on more than 10,000 images of synthetic eyes generated via our eye model and fine-tuned with real images, outperforming all of the previous models. These results support the feasibility of the proposed photorefraction model and provide a novel direction for obtaining training data, which may be extensible to other similar domains. [en_US]
dcterms.extent: xiv, 112 pages : color illustrations [en_US]
dcterms.isPartOf: PolyU Electronic Theses [en_US]
dcterms.issued: 2022 [en_US]
dcterms.educationalLevel: M.Phil. [en_US]
dcterms.educationalLevel: All Master [en_US]
dcterms.LCSH: Eye -- Refractive errors -- Diagnosis -- Data processing [en_US]
dcterms.LCSH: Vision -- Testing -- Data processing [en_US]
dcterms.LCSH: Smartphones -- Programming [en_US]
dcterms.LCSH: Machine learning [en_US]
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations [en_US]
dcterms.accessRights: open access [en_US]

Files in This Item:
6321.pdf (For All Users), 10.89 MB, Adobe PDF



Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/11834