Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor | Faculty of Engineering | en_US |
dc.contributor.advisor | Lam, Kin-man (EIE) | - |
dc.creator | Wang, Yixin | - |
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/10106 | - |
dc.language | English | en_US |
dc.publisher | Hong Kong Polytechnic University | - |
dc.rights | All rights reserved | en_US |
dc.title | Efficient emotion recognition algorithms for mobile applications | en_US |
dcterms.abstract | Facial expression is an important way for humans to convey their emotions, and it has attracted growing research attention over the past few years, with application areas including medical treatment, intelligent learning, neuromarketing, and social robotics. Facial expression recognition is challenging because different people perform the same expression in different ways, while the same subject may show two distinct expressions in a similar form. Moreover, because of variations in illumination, pose, and background, it is difficult to separate discriminative features from a face image. In addition, to satisfy the requirements of different applications, the runtime of facial expression recognition must be kept as low as possible to achieve near-real-time analysis. Many results to date demonstrate the great progress researchers have made in this field. However, as facial expression recognition develops, more and more problems emerge; in particular, with the recently proposed in-the-wild databases, the task has become more challenging. Popular facial expression recognition systems can be grouped into two categories: dynamic and static. While static facial expression recognition handles each input image in the spatial domain, dynamic facial expression recognition must also process the temporal information of consecutive frames. Both tasks require the extraction of separable, discriminative features and an effective classification method. In this project, the main effort is devoted to static facial expression recognition. First, the facial expression recognition system and its problems are introduced. Traditional methods are then presented, using hand-crafted features such as PHOG, LBP, LPQ, LBPD, and Gabor together with the classical SVM and random forest classifiers. In addition to the traditional methods, deep learning approaches using CNN-based networks are now universally employed in facial expression recognition. Experiments are then presented to compare the performance of the traditional and deep learning methods. Person-dependent and person-independent settings are discussed for the traditional methods. Next, for the deep learning methods, two experiments using a hand-crafted-feature-guided deep neural network are conducted on two different databases: the laboratory-controlled CK+ database and the in-the-wild FER-2013 database. The mean accuracy and recognition rate are recorded to assess the models' performance. | en_US |
dcterms.extent | vi, 67 pages : color illustrations | en_US |
dcterms.isPartOf | PolyU Electronic Theses | en_US |
dcterms.issued | 2019 | en_US |
dcterms.educationalLevel | M.Sc. | en_US |
dcterms.educationalLevel | All Master | en_US |
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US |
dcterms.LCSH | Human face recognition (Computer science) | en_US |
dcterms.LCSH | Facial expression | en_US |
dcterms.LCSH | Emotions | en_US |
dcterms.accessRights | restricted access | en_US |
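The abstract above describes a classical pipeline of hand-crafted LBP features classified with an SVM. A minimal sketch of that kind of pipeline, using synthetic stand-in images and labels (not the thesis's actual data or configuration), might look like:

```python
# Hedged sketch of an LBP-histogram + SVM pipeline, as named in the abstract.
# Random uint8 images stand in for face crops; labels are synthetic --
# this illustrates the technique only, not the thesis's experiments.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def lbp_histogram(img, P=8, R=1):
    """Pool uniform LBP codes into a normalized histogram feature vector."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    # "uniform" with P neighbors yields P + 2 distinct code values.
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

# Synthetic "faces": 40 grayscale 48x48 images, two fake expression classes.
imgs = (rng.random((40, 48, 48)) * 255).astype(np.uint8)
X = np.stack([lbp_histogram(im) for im in imgs])
y = np.array([0, 1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
print(X.shape)  # → (40, 10): one 10-bin LBP histogram per image
```

In practice the histogram would be computed per spatial cell and concatenated, so the feature retains facial layout; the single global histogram here is the simplest possible variant.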
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
991022270856303411.pdf | For All Users (off-campus access for PolyU Staff & Students only) | 3.12 MB | Adobe PDF | View/Open |
Copyright Undertaking
As a bona fide Library user, I declare that:
- I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
- I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
- I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.
By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.
Please use this identifier to cite or link to this item:
https://theses.lib.polyu.edu.hk/handle/200/10106