Full metadata record
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | en_US
dc.contributor.advisor | Huang, Chu-ren (CBS) | en_US
dc.contributor.advisor | Li, Ping (COMP) | en_US
dc.creator | Salicchi, Lavinia | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/13387 | -
dc.language | English | en_US
dc.publisher | Hong Kong Polytechnic University | en_US
dc.rights | All rights reserved | en_US
dc.title | “I feel you”: emotional alignment during conversations. A computational linguistic and neurocognitive study | en_US
dcterms.abstract | In this thesis, I focused on emotional alignment during conversations. Across the different definitions of emotional alignment, two aspects emerged as crucial: the mirroring of emotions, and appropriate reactions to others’ emotions and storytelling. Starting from Andrea Scarantino’s Theory of Affective Pragmatics, I approached the study of emotional reaction and mirroring by taking into account two modalities for expressing emotions: verbal (shared emotional expressions) and visual (displayed emotional expressions). | en_US
dcterms.abstract | In the context of online video chats, I investigated how people react to both shared and displayed emotional expressions, focusing on facial expression mirroring (via video analysis of facial action units) and on pupil dilation. The results suggest that video-mediated conversations do not differ significantly from face-to-face conversations, and that both pupil dilation and facial expressions are influenced mainly by displayed emotional expressions rather than shared ones. | en_US
dcterms.abstract | To account for the "appropriate reactions" aspect, I created a computational model based on the socio-psychological theories of Reinhard Fiehler. The model stores in a graph the "generalized emotional knowledge" retrieved from conversational datasets, representing emotional states, their causes and effects, emotional reactions, and the dialog acts through which people express emotions. The model is compared with a deep-learning architecture on the task of emotion prediction. The models’ performances make clear how beneficial it is, for a cognitively grounded emotion prediction model, to represent the internal emotional states of the conversational parties and to control for the dialog acts of the exchanged utterances. | en_US
dcterms.abstract | Finally, I created a multimodal model combining the deep-learning model previously employed with the action units found to be significant in distinguishing reactions to positive and negative stimuli in the neurocognitive experiments. Although the model performed below expectations, it outperforms the baseline, demonstrating the potential of multimodal approaches for the emotion prediction task. | en_US
dcterms.extent | iii, 177 pages : color illustrations | en_US
dcterms.isPartOf | PolyU Electronic Theses | en_US
dcterms.issued | 2024 | en_US
dcterms.educationalLevel | Ph.D. | en_US
dcterms.educationalLevel | All Doctorate | en_US
dcterms.LCSH | Emotions | en_US
dcterms.LCSH | Conversation | en_US
dcterms.LCSH | Online chat groups | en_US
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | en_US
dcterms.accessRights | open access | en_US

Files in This Item:
File | Description | Size | Format
7781.pdf | For All Users | 3.45 MB | Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13387