Full metadata record
DC Field: Value (Language)
dc.contributor: Department of Health Technology and Informatics (en_US)
dc.contributor.advisor: Yoo, Jung Sun (HTI) (en_US)
dc.contributor.advisor: Cai, Jing (HTI) (en_US)
dc.creator: He, Zebang
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/14044
dc.language: English (en_US)
dc.publisher: Hong Kong Polytechnic University (en_US)
dc.rights: All rights reserved (en_US)
dc.title: A framework for explainable and high-quality radiology report generation using automatic keyword adaptation, frequency-based multi-label classification and text-to-text large language models (en_US)
dcterms.abstract: Background: Radiology reports are essential in medical imaging, providing critical insights for diagnosis, treatment, and patient management by bridging the gap between radiologists and clinicians. However, the manual generation of these reports is time-consuming and labor-intensive, leading to inefficiencies and delays in clinical workflows, particularly as case volumes increase. Although deep learning approaches have shown promise in automating radiology report generation, existing methods, particularly those based on the encoder-decoder framework, suffer from significant limitations. These include a lack of explainability, caused by the black-box features produced by the encoder, and limited adaptability to diverse clinical settings. (en_US)
dcterms.abstract: Purpose: This study aims to develop a deep learning-based radiology report generation framework that can generate high-quality, explainable radiology reports for chest X-ray images. (en_US)
dcterms.abstract: Methods and Materials: In this study, we address these challenges by proposing a novel deep learning framework for radiology report generation that enhances explainability, accuracy, and adaptability. Our approach replaces traditional black-box computer-vision features with transparent keyword lists, improving the interpretability of the feature extraction process. To generate these keyword lists, we apply a multi-label classification technique, which is further enhanced by an automatic keyword adaptation mechanism. This adaptation dynamically configures the multi-label classification to better fit specific clinical environments, reducing the reliance on manually curated reference keyword lists and improving model adaptability across diverse datasets. We also introduce a frequency-based multi-label classification strategy to address the issue of keyword imbalance, ensuring that rare but clinically significant terms are accurately identified. Finally, we leverage a pre-trained text-to-text large language model (LLM) to generate human-like, clinically relevant radiology reports from the extracted keyword lists, ensuring linguistic quality and clinical coherence (see the illustrative sketch after this record). (en_US)
dcterms.abstract: Results: We evaluate our method using two public datasets, IU-XRay and MIMIC-CXR, demonstrating superior performance over state-of-the-art methods. Our framework not only improves the accuracy and reliability of radiology report generation but also enhances the explainability of the process, fostering greater trust and adoption of AI-driven solutions in clinical practice. Comprehensive ablation studies confirm the robustness and effectiveness of each component, highlighting the significant contributions of our framework to advancing automated radiology reporting. (en_US)
dcterms.abstract: Conclusion: In this study, we developed a novel deep learning-based radiology report generation framework that produces high-quality, explainable radiology reports for chest X-ray images using multi-label classification and a text-to-text large language model. By replacing black-box semantic features with visible keyword lists, our framework addresses the lack of explainability in the current workflow and provides a clear, flexible automated pipeline that reduces the workload of radiologists and supports further applications in human-AI interactive communication. (en_US)
dcterms.extent: xxxviii, 121 pages : color illustrations (en_US)
dcterms.isPartOf: PolyU Electronic Theses (en_US)
dcterms.issued: 2025 (en_US)
dcterms.educationalLevel: Ph.D. (en_US)
dcterms.educationalLevel: All Doctorate (en_US)
dcterms.accessRights: open access (en_US)
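
Illustrative sketch: the following is a minimal, hedged Python sketch of the two-stage pipeline described in the Methods section, a frequency-weighted multi-label keyword classifier followed by a text-to-text model that turns the predicted keyword list into a report. The keyword vocabulary, corpus counts, weighting formula, 0.5 decision threshold, and the use of t5-small via Hugging Face are assumptions for illustration only and are not taken from the thesis.

import torch
import torch.nn as nn
from transformers import T5Tokenizer, T5ForConditionalGeneration

# --- Stage 1: frequency-weighted multi-label keyword classification ---
# Hypothetical keyword vocabulary and corpus counts (not from the thesis).
KEYWORDS = ["cardiomegaly", "pleural effusion", "atelectasis", "pneumothorax"]
keyword_counts = torch.tensor([900.0, 400.0, 250.0, 30.0])
num_reports = 1000.0

# Up-weight rare keywords so the loss does not ignore them; this is one common
# frequency-based weighting, not necessarily the thesis' exact scheme.
pos_weight = (num_reports - keyword_counts) / keyword_counts

classifier = nn.Linear(512, len(KEYWORDS))            # 512-d image features -> keyword logits
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

image_features = torch.randn(2, 512)                  # placeholder for an image-encoder output
labels = torch.tensor([[1., 0., 0., 1.],
                       [0., 1., 1., 0.]])
loss = criterion(classifier(image_features), labels)  # training signal for stage 1

# --- Stage 2: text-to-text generation from the predicted keyword list ---
tokenizer = T5Tokenizer.from_pretrained("t5-small")
generator = T5ForConditionalGeneration.from_pretrained("t5-small")

probs = torch.sigmoid(classifier(image_features[0]))
predicted = [kw for kw, p in zip(KEYWORDS, probs) if p > 0.5]

prompt = "generate radiology report from findings: " + ", ".join(predicted)
inputs = tokenizer(prompt, return_tensors="pt")
report_ids = generator.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(report_ids[0], skip_special_tokens=True))

In the full framework the classifier input would come from a trained image encoder, and the automatic keyword adaptation step would adjust the keyword vocabulary and decision thresholds for a specific clinical site; both are omitted here for brevity.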

Files in This Item:
File      Description    Size     Format
8509.pdf  For All Users  9.94 MB  Adobe PDF


