Author: Zhao, Haotian
Title: Self-supervised pre-trained speech and language models for depression detection
Advisors: Mak, Manwai (EEE)
Degree: M.Sc.
Year: 2025
Department: Department of Electrical and Electronic Engineering
Pages: 1 volume (various pagings) : color illustrations
Language: English
Abstract: This dissertation investigates the use of self-supervised pre-trained models, specifically Wav2Vec2 for audio analysis and RoBERTa for text processing, in multimodal depression detection. The research focuses on combining acoustic and linguistic features extracted by these models to accurately identify individuals with depression. A CNN-based classifier was trained and evaluated on the EATD-Corpus (Chinese) and DAIC-WOZ (English) datasets. The proposed multimodal approach achieved high accuracies and F1 scores on both datasets, demonstrating its robustness and generalizability across different languages and cultural contexts. Ablation studies highlighted the importance of both the audio and text modalities, with Wav2Vec2 features having a significant impact. Comparisons with a Bi-LSTM classifier indicated that CNNs are better suited to processing the fused multimodal features in this application. This research provides evidence for the effectiveness of self-supervised pre-trained models in multimodal depression detection, offering potential for early screening and clinical diagnosis. Future research directions include exploring advanced fusion techniques, incorporating additional modalities, addressing feature entanglement, leveraging large language models, expanding to more languages, evaluating in real-world settings, and exploring personalized detection.
Rights: All rights reserved
Access: Restricted access
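The pipeline described in the abstract — fusing acoustic embeddings from Wav2Vec2 with linguistic embeddings from RoBERTa and classifying with a CNN — can be sketched as follows. This is a minimal illustrative sketch, not the dissertation's actual configuration: the 768-dimensional embeddings, layer sizes, and the assumption that both modalities are precomputed and time-aligned are all illustrative choices.

```python
# Hypothetical sketch of the multimodal fusion approach from the abstract:
# precomputed Wav2Vec2 (audio) and RoBERTa (text) embedding sequences are
# concatenated channel-wise and classified by a small 1-D CNN.
# All dimensions and layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, feat_dim=768, n_classes=2):
        super().__init__()
        # Each modality contributes feat_dim channels; fusion is simple
        # channel-wise concatenation before the convolutional stack.
        self.conv = nn.Sequential(
            nn.Conv1d(2 * feat_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time to a fixed-size vector
        )
        self.fc = nn.Linear(128, n_classes)

    def forward(self, audio_feats, text_feats):
        # audio_feats, text_feats: (batch, time, feat_dim); assumed aligned
        # to the same number of time steps by the preprocessing stage.
        fused = torch.cat([audio_feats, text_feats], dim=-1)  # (B, T, 2*D)
        x = self.conv(fused.transpose(1, 2))                  # (B, 128, 1)
        return self.fc(x.squeeze(-1))                         # (B, n_classes)

model = FusionCNN()
audio = torch.randn(4, 50, 768)   # stand-in for Wav2Vec2 frame embeddings
text = torch.randn(4, 50, 768)    # stand-in for RoBERTa token embeddings
logits = model(audio, text)
print(tuple(logits.shape))  # (4, 2): one depressed/non-depressed score pair per sample
```

Concatenation followed by a shared CNN is only one of several plausible fusion strategies; the abstract's mention of "advanced fusion techniques" as future work suggests this early-fusion style as the baseline.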

Files in This Item:
File: 8321.pdf
Description: For All Users (off-campus access for PolyU Staff & Students only)
Size: 880.36 kB
Format: Adobe PDF


Copyright Undertaking

As a bona fide Library user, I declare that:

  1. I will abide by the rules and legal ordinances governing copyright regarding the use of the Database.
  2. I will use the Database for the purpose of my research or private study only and not for circulation or further reproduction or any other purpose.
  3. I agree to indemnify and hold the University harmless from and against any loss, damage, cost, liability or expenses arising from copyright infringement or unauthorized usage.

By downloading any item(s) listed above, you acknowledge that you have read and understood the copyright undertaking as stated above, and agree to be bound by all of its terms.


Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/13913