Computed tomography based lung function mapping using deep learning for functional lung avoidance radiation therapy
Qin, Jing (SN)
Cai, Jing (HTI)
FHSS Faculty Distinguished Thesis Award (2020/21)
Hong Kong Polytechnic University -- Dissertations
Department of Health Technology and Informatics
xxv, 115 pages : color illustrations
Background: Functional lung avoidance radiation therapy (FLART) is an emerging technique that selectively minimizes the dose delivered to high-functional lung while favoring dose deposition in low-functional lung, guided by pulmonary function imaging, which measures air flow (ventilation) and blood circulation (perfusion) within the lung. However, the currently established lung function imaging methods require additional scans that are not typically prescribed for radiotherapy and can be resource-demanding, inconvenient, costly, and technically challenging to integrate into radiotherapy treatment planning.

Purpose: This study aims to develop a deep learning-based computed tomography (CT) function mapping (CTFM) method that can synthesize lung perfusion and ventilation images directly from the CT domain for FLART.

Methods and Materials: In the first part, we developed a deep learning-assisted framework that extracts features from 3D CT images and synthesizes perfusion and ventilation images as estimates of lung function. The CTFM framework comprises three parts: image preparation, an image processing pipeline, and our proposed convolutional neural network (CNN). Image preparation consists of a series of morphological operations that reduce computational cost and remove noise. The image processing pipeline enhances the feature robustness of the CT images and standardizes the single-photon emission computed tomography (SPECT) perfusion images so that they serve as suitable labels for CNN mapping; it consists of SPECT normalization, SPECT discretization, CT contrast enhancement, and CT defect enhancement. A three-dimensional (3D) attention residual neural network (ARNN) was constructed to extract textural features from the CT images and reconstruct the corresponding functional images. Three components (a residual module, ROI attention, and skip attention) were used to improve the performance of the network.
In the second part, we investigated the effect of each framework component and setting on CT-to-perfusion translation. A first cohort of 42 pulmonary macro-aggregated albumin (MAA) SPECT/CT perfusion scans was retrospectively collected from the hospital. Ablation and comparison experiments were performed by changing each element of the framework and analyzing the test performance. In the third part, a total of 73 pulmonary MAA SPECT/CT images were collected for performance evaluation. A quantitative comparison between the predicted perfusion and SPECT perfusion was conducted to evaluate overall performance. To assess function-wise concordance, the Dice similarity coefficient (DSC) was computed to quantify the overlap of the low- and high-functional lung volumes. In the fourth part, 13 Technegas SPECT/CT ventilation scans were collected from patients with suspected lung disease at the hospital. This dataset was used to evaluate the feasibility of CT-to-ventilation translation with the proposed CTFM framework. The evaluation metrics mainly include Spearman's correlation coefficient (R) and the structural similarity index measure (SSIM), which account for statistical and perceptual image similarity, respectively.
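The voxel-wise and function-wise metrics above can be sketched in NumPy. The Spearman and Dice computations are standard; the percentile threshold used here to delineate the high-functional volume is an assumed illustration, since the abstract does not state how the functional volumes were contoured.

```python
import numpy as np

def spearman_r(a, b):
    """Spearman correlation: Pearson correlation of the ranks
    (ordinal ranks; assumes no ties for this illustration)."""
    ra = np.argsort(np.argsort(a.ravel())).astype(float)
    rb = np.argsort(np.argsort(b.ravel())).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def high_function_mask(perfusion, lung_mask, percentile=70):
    """High-functional lung volume: in-lung voxels above an assumed
    percentile of the in-lung perfusion values."""
    thr = np.percentile(perfusion[lung_mask], percentile)
    return np.logical_and(lung_mask, perfusion > thr)
```

Applying `high_function_mask` to both the predicted and SPECT perfusion volumes and feeding the two masks to `dice` reproduces the function-wise concordance analysis described above.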
Results: The investigation experiments on the CTFM framework showed that removing the CT contrast enhancement component from the image processing pipeline caused the largest drop in performance relative to the optimal configuration (~11%). Among the CNN components, all three (the residual module, ROI attention, and skip attention) were approximately equally important; removing any one of them reduced performance by 3-5%. For the CNN configuration, combining the Adam optimizer with a loss function composed of binary cross-entropy (BCE) and Spearman's correlation coefficient yielded the best performance. Our proposed framework achieved ~4% higher overall performance and 4.5-fold higher computational efficiency than the U-Net model. The evaluation of voxel-wise agreement showed a moderate-to-high voxel value correlation (0.6733 ± 0.1728) and high structural similarity (0.7635 ± 0.0697) between the SPECT and DL-CTFM-predicted perfusions. The evaluation of function-wise concordance obtained an average DSC of 0.8183 ± 0.0752 for high-functional lungs (range 0.5819-0.9255) and 0.6501 ± 0.1061 for low-functional lungs (range 0.2405-0.8212). Ninety-four percent of the test cases demonstrated high concordance (DSC > 0.7) between the high-functional volumes contoured from the predicted and ground-truth perfusions. In the preliminary ventilation feasibility study, ventilation images generated by the CTFM framework in the testing group had an average R of 0.6336 ± 0.0815, an average SSIM of 0.7391 ± 0.0681, an average DSC of 0.7089 ± 0.0438 for the low-functional lung, and an average DSC of 0.7265 ± 0.0473 for the high-functional lung. Conclusion: In this study, we developed a novel DL-CTFM framework for estimating lung functional images from the CT domain using a 3D ARNN.
The deep convolutional neural network, in conjunction with the image processing pipeline for feature enhancement, is capable of extracting features from CT images for perfusion synthesis. The CTFM framework provides insights for future research and can be leveraged by other researchers to develop optimized CNN models for CTFM. For CT-to-perfusion/ventilation translation, the framework yields moderate-to-high voxel-wise approximations of lung function. To further contextualize these results toward clinical application, a multi-institutional large-cohort study is warranted.
All rights reserved