Author: Yang, Ruosong
Title: Post-processing and applications of pre-trained models for natural language processing
Degree: Ph.D.
Year: 2022
Subject: Natural language processing (Computer science)
Hong Kong Polytechnic University -- Dissertations
Department: Department of Computing
Pages: xv, 118 pages : color illustrations
Language: English
Abstract: Pre-trained models have enabled a new era in natural language processing. The first-generation pre-trained models, word embeddings, aim to encode syntactic and semantic information into low-dimensional, continuous word vectors, while the second-generation pre-trained models pre-train large language models whose architectures can be fine-tuned for various downstream tasks. However, word embedding models follow the distributional hypothesis, so they cannot distinguish antonyms, and rare words cannot learn precise representations. Pre-trained language models such as RoBERTa ignore coherence information, and the text length during training is much longer than that in applications. In addition, training pre-trained models requires powerful hardware. To tackle these issues effectively, we propose to utilize post-processing to enhance the two types of pre-trained models. Besides, we also utilize the two types of pre-trained models to enhance specific applications in text assessment.
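As a rough illustration of what post-processing a word-embedding model can look like, the sketch below applies a retrofitting-style update that pulls each vector toward related words from a lexicon while keeping it close to the original distributional vector. This is only an assumed, generic example of the post-processing idea; the function name, the lexicon format, and the update weights are illustrative assumptions, not the glossary-based method developed in the thesis.

    import numpy as np

    def retrofit(vectors, lexicon, n_iters=10, alpha=1.0, beta=1.0):
        # vectors: dict mapping word -> np.ndarray (original embeddings)
        # lexicon: dict mapping word -> list of related words (e.g. glossary links)
        new_vecs = {w: v.copy() for w, v in vectors.items()}
        for _ in range(n_iters):
            for word, related in lexicon.items():
                related = [r for r in related if r in new_vecs]
                if word not in new_vecs or not related:
                    continue
                # Pull the vector toward its lexicon neighbours while keeping it
                # close to the original distributional vector.
                neighbour_sum = np.sum([new_vecs[r] for r in related], axis=0)
                new_vecs[word] = (alpha * vectors[word] + beta * neighbour_sum) \
                                 / (alpha + beta * len(related))
        return new_vecs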
In this thesis, we first review existing pre-trained models as well as prior work on text assessment. We then present four works: two on post-processing pre-trained models and two on applications in text assessment. More specifically, in the first work, we explore how to utilize a glossary to enhance word embeddings so that the post-processed embeddings better capture both syntactic and semantic information. In the second work, we utilize pre-trained word embeddings for automated post scoring; to better integrate the given topics and quoted posts in forums, we propose a representation model and a matching model. In the third work, we propose to utilize self-supervised intermediate tasks to enhance pre-trained language models, and we investigate how these intermediate tasks benefit downstream tasks. In the last work, we use pre-trained language models to learn text representations and propose to combine a regression loss and a ranking loss to improve the performance of automated text scoring. Finally, we conclude the thesis and discuss future directions.
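The last work combines a regression loss with a ranking loss for text scoring. The sketch below shows one assumed way such a combination could be written in PyTorch; the pairwise construction, the margin, and the weighting factor alpha are illustrative assumptions rather than the thesis's exact formulation.

    import torch
    import torch.nn as nn

    mse = nn.MSELoss()
    rank = nn.MarginRankingLoss(margin=0.1)   # margin value is an assumption

    def combined_loss(pred, gold, alpha=0.5):
        # pred, gold: 1-D tensors of predicted and gold scores for a batch of texts.
        reg_loss = mse(pred, gold)
        # Form all pairs (i, j) in the batch; the ranking term asks the model to
        # score text i above text j whenever its gold score is higher.
        idx = torch.combinations(torch.arange(pred.size(0)), r=2)
        i, j = idx.unbind(dim=1)
        target = torch.sign(gold[i] - gold[j])
        mask = target != 0                      # ignore pairs with tied gold scores
        if mask.any():
            rank_loss = rank(pred[i][mask], pred[j][mask], target[mask])
        else:
            rank_loss = torch.zeros((), device=pred.device)
        # alpha trades off the two objectives; its value here is illustrative.
        return alpha * reg_loss + (1 - alpha) * rank_loss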
Rights: All rights reserved
Access: open access

Files in This Item:
File: 6235.pdf
Description: For All Users
Size: 1.16 MB
Format: Adobe PDF



Please use this identifier to cite or link to this item: https://theses.lib.polyu.edu.hk/handle/200/11707