Full metadata record
DC Field: Value
dc.contributor: Department of Electronic and Information Engineering
dc.contributor.advisor: Mak, M. W. (EIE)
dc.creator: Hung, Wai Fung
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/10744
dc.language: English
dc.publisher: Hong Kong Polytechnic University
dc.rights: All rights reserved
dc.title: Multitask deep learning for recognizing gender and emotion from speech signals
dcterms.abstract: Speech emotion recognition aims to extract features that efficiently characterize different emotions and to identify the emotional states of speakers; it has been an active research topic in recent years. Applying deep neural networks (DNNs) to recognize human emotions is a promising direction for improving the accuracy of speech emotion recognition. Multitask learning can improve the performance of DNNs by training one network on several related tasks simultaneously, and in many domains multitask deep learning outperforms single-task deep learning. This dissertation applies multitask deep learning to speech emotion recognition. Emotion features defined by the IS09 and IS11 feature sets of the openSMILE toolkit were used to capture the emotional characteristics of speech signals. Two databases were used: EMODB and IEMOCAP. EMODB is an emotional speech database comprising utterances with seven emotional states; IEMOCAP (the interactive emotional dyadic motion capture database) contains utterances with four emotional states. Experiments were conducted to evaluate the effectiveness of multitask deep learning. On EMODB, the accuracy with IS09-Emotion features is above 74% and with IS11-Speaker-State features above 80%; on IEMOCAP, the test accuracy is above 56% with IS09-Emotion and above 57% with IS11-Speaker-State. In addition, the accuracy of gender identification is above 88% in all cases. The results show that multitask DNNs outperform their single-task counterparts. (A sketch of this multitask setup is given after this record.)
dcterms.extentiv, 52 pages : color illustrationsen_US
dcterms.isPartOfPolyU Electronic Thesesen_US
dcterms.issued2020en_US
dcterms.educationalLevelM.Sc.en_US
dcterms.educationalLevelAll Masteren_US
dcterms.LCSHAutomatic speech recognitionen_US
dcterms.LCSHEmotions -- Identificationen_US
dcterms.LCSHSex -- Identificationen_US
dcterms.LCSHMachine learningen_US
dcterms.LCSHHong Kong Polytechnic University -- Dissertationsen_US
dcterms.accessRightsrestricted accessen_US
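
The multitask setup summarized in the abstract above, a shared DNN trunk with separate emotion and gender output heads trained jointly, might look like the following minimal sketch. It assumes IS09-Emotion features (384-dimensional openSMILE vectors) as input and EMODB's seven emotion classes; the hidden-layer sizes, dropout rate, and loss weights are illustrative assumptions, not the dissertation's actual configuration.

# Minimal multitask DNN sketch: shared trunk, two task heads.
# Assumptions (not from the dissertation): layer sizes, dropout, loss weights.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FEATURES = 384   # IS09-Emotion feature-set dimensionality
N_EMOTIONS = 7     # EMODB defines seven emotional states
N_GENDERS = 2

inputs = layers.Input(shape=(N_FEATURES,), name="is09_features")

# Shared trunk: these layers are updated by gradients from both tasks.
x = layers.Dense(256, activation="relu")(inputs)
x = layers.Dropout(0.3)(x)
x = layers.Dense(128, activation="relu")(x)

# Task-specific heads branching off the shared representation.
emotion_out = layers.Dense(N_EMOTIONS, activation="softmax", name="emotion")(x)
gender_out = layers.Dense(N_GENDERS, activation="softmax", name="gender")(x)

model = Model(inputs=inputs, outputs=[emotion_out, gender_out])
model.compile(
    optimizer="adam",
    loss={"emotion": "sparse_categorical_crossentropy",
          "gender": "sparse_categorical_crossentropy"},
    loss_weights={"emotion": 1.0, "gender": 0.5},  # assumed task weighting
    metrics=["accuracy"],
)

# Dummy data, only to demonstrate the two-label training interface.
X = np.random.randn(32, N_FEATURES).astype("float32")
y_emotion = np.random.randint(0, N_EMOTIONS, size=(32,))
y_gender = np.random.randint(0, N_GENDERS, size=(32,))
model.fit(X, {"emotion": y_emotion, "gender": y_gender}, epochs=1, verbose=0)

In practice the input vectors would be extracted with openSMILE beforehand, for example with the command-line tool: SMILExtract -C IS09_emotion.conf -I utterance.wav -O features.csv, which yields one IS09 feature vector per utterance (the exact config-file path varies between openSMILE versions).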

Files in This Item:
File: 5165.pdf (For All Users; off-campus access for PolyU Staff & Students only), 1.39 MB, Adobe PDF


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/10744