Author: Salicchi, Lavinia
Title: “I feel you” : emotional alignment during conversations. A computational linguistic and neurocognitive study
Advisors: Huang, Chu-ren (CBS)
Li, Ping (COMP)
Degree: Ph.D.
Year: 2024
Subject: Emotions
Conversation
Online chat groups
Hong Kong Polytechnic University -- Dissertations
Department: Department of Chinese and Bilingual Studies
Pages: iii, 177 pages : color illustrations
Language: English
Abstract: In this thesis, I focused on emotional alignment during conversations. Among the different definitions of emotional alignment, two aspects emerged as crucial: the mirroring of emotions and appropriate reactions to others' emotional expressions and storytelling. Starting from Andrea Scarantino's Theory of Affective Pragmatics, I approached the study of emotional mirroring and reaction by taking into account two modalities for expressing emotions: verbal (shared emotional expressions) and visual (displayed emotional expressions).
In the context of online video chats, I investigated how people react to both shared and displayed emotional expressions, focusing on facial-expression mirroring (through video analysis and action units) and on pupil dilation. The results suggest that video-mediated conversations do not differ significantly from face-to-face conversations, and that both pupil dilation and facial expressions are influenced mainly by displayed emotional expressions rather than by shared ones.
On the other hand, to account for the "appropriate reactions" aspect, I created a computational model based on the socio-psychological theories of Reinhard Fiehler. The model stores in a graph the "generalized emotional knowledge" retrieved from conversational datasets, representing emotional states, their causes, their effects, emotional reactions, and the dialog acts through which people express emotions. The model was compared with a deep-learning architecture on the task of emotion prediction. The models' performances show how beneficial it is, for a cognition-based emotion prediction model, to represent the internal emotional states of the conversational parties and to control for the dialog acts of the exchanged utterances.
Finally, I created a multimodal model combining the deep-learning model employed previously with the action units found to be significant in distinguishing reactions to positive and negative stimuli in the neurocognitive experiments. Although the model performed below expectations, it outperformed the baseline, demonstrating the potential of multimodal approaches to the emotion prediction task.
Rights: All rights reserved
Access: open access

Files in This Item:
File: 7781.pdf | Description: For All Users | Size: 3.45 MB | Format: Adobe PDF




Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13387