Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electronic and Information Engineering | en_US |
dc.contributor.advisor | Mak, M. W. (EIE) | en_US |
dc.creator | Hung, Wai Fung | - |
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/10744 | - |
dc.language | English | en_US |
dc.publisher | Hong Kong Polytechnic University | en_US |
dc.rights | All rights reserved | en_US |
dc.title | Multitask deep learning for recognizing gender and emotion from speech signals | en_US |
dcterms.abstract | Speech emotion recognition aims to extract features that efficiently characterize different emotions and to identify the emotional states of speakers. It has been an active research topic in recent years. Applying deep neural networks (DNNs) to recognize human emotions is a promising direction for improving the accuracy of speech emotion recognition. In many areas, multitask deep learning outperforms single-task deep learning: multitask learning can improve the performance of DNNs by training a network on several related tasks simultaneously (a minimal sketch of such a network follows the metadata table below). This dissertation applies multitask deep learning to speech emotion recognition. In this work, the IS09 and IS11 feature sets of the openSMILE software were used to capture the emotional characteristics of speech signals. Two databases were used: EMODB and IEMOCAP. EMODB is an emotional speech database comprising utterances with seven emotional states; IEMOCAP (the Interactive Emotional Dyadic Motion Capture database) contains utterances with four emotional states. Experiments were conducted to evaluate the effectiveness of multitask deep learning. On EMODB, the accuracy with IS09-Emotion features is above 74% and with IS11-Speaker-State features above 80%; on IEMOCAP, the testing accuracy with IS09-Emotion features is above 56% and with IS11-Speaker-State features above 57%. In addition, the accuracy of gender identification is above 88% in all cases. The results show that multitask DNNs outperform single-task DNNs. | en_US |
dcterms.extent | iv, 52 pages : color illustrations | en_US |
dcterms.isPartOf | PolyU Electronic Theses | en_US |
dcterms.issued | 2020 | en_US |
dcterms.educationalLevel | M.Sc. | en_US |
dcterms.educationalLevel | All Master | en_US |
dcterms.LCSH | Automatic speech recognition | en_US |
dcterms.LCSH | Emotions -- Identification | en_US |
dcterms.LCSH | Sex -- Identification | en_US |
dcterms.LCSH | Machine learning | en_US |
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US |
dcterms.accessRights | restricted access | en_US |
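The abstract describes a DNN trained jointly on emotion recognition and gender identification. Below is a minimal Keras sketch of such a shared-trunk, two-head multitask network; the layer sizes, dropout rate, and loss weighting are illustrative assumptions, not the dissertation's actual configuration. The 384-dimensional input corresponds to the openSMILE IS09-Emotion functional feature vector mentioned in the abstract.

```python
from tensorflow.keras import layers, Model

NUM_EMOTIONS = 7   # seven emotional states in EMODB (IEMOCAP uses four)
FEATURE_DIM = 384  # size of the openSMILE IS09-Emotion feature vector

# Shared trunk: both tasks update these weights during training, which is
# how the related gender task can regularize the emotion task.
inputs = layers.Input(shape=(FEATURE_DIM,))
x = layers.Dense(256, activation="relu")(inputs)
x = layers.Dropout(0.3)(x)
x = layers.Dense(128, activation="relu")(x)

# Task-specific output heads branching off the shared representation.
emotion = layers.Dense(NUM_EMOTIONS, activation="softmax", name="emotion")(x)
gender = layers.Dense(2, activation="softmax", name="gender")(x)

model = Model(inputs=inputs, outputs=[emotion, gender])
model.compile(
    optimizer="adam",
    loss={"emotion": "sparse_categorical_crossentropy",
          "gender": "sparse_categorical_crossentropy"},
    loss_weights={"emotion": 1.0, "gender": 0.5},  # assumed task weighting
    metrics=["accuracy"],
)
model.summary()
```

Training would then be a single call such as `model.fit(X, {"emotion": y_emotion, "gender": y_gender})`, and the single-task baselines compared in the abstract correspond to dropping one of the two heads.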
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
5165.pdf | For All Users (off-campus access for PolyU Staff & Students only) | 1.39 MB | Adobe PDF |
Please use this identifier to cite or link to this item:
https://theses.lib.polyu.edu.hk/handle/200/10744