|Title:||Data-efficient, memory-effective, and shape priors-constrained learning for segmenting medical images|
|Advisors:||Qin, Jing (SN)|
Choi, Kup-sze (SN)
|Subjects:||Diagnostic imaging -- Digital techniques|
Hong Kong Polytechnic University -- Dissertations
|Department:||School of Nursing|
|Pages:||xviii, 136 pages : color illustrations|
|Abstract:||This thesis aims to advance deep learning models for medical image segmentation by investigating three key problems: alleviating the burden of training data collection, reducing GPU memory consumption, and leveraging shape priors to boost performance. The main results are five generic and effective approaches to these problems: selective learning, adversarial redrawing, surface projection, shape constructing, and shape mask generator. Selective learning is a simple training framework that alleviates the burden of training data collection by using external data. The key idea is to learn a weight for each external sample so that informative samples receive large weights and thus contribute more to the training loss, implicitly encouraging the network to mine valuable knowledge from them while suppressing the memorization of irrelevant patterns from 'useless' or even 'harmful' data. Adversarial redrawing is an unsupervised segmentation method that alleviates the burden of collecting training annotations. It is developed under the assumption that the imaging process can be modeled by a latent-variable model with two steps: generating objects' binary masks (equivalent to segmentation) and drawing objects' intensity values. It then uses the adversarial learning paradigm to train two deep networks to model the mask-generating and intensity-drawing steps, updating their parameters until they generate images that the discriminator cannot distinguish from real ones.|
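The weighted-loss idea behind selective learning can be illustrated with a toy sketch. This is not the thesis' exact formulation: the 1-D regression task, the sample values, the softmax parameterization of the weights, and the learning rates are all assumptions chosen for illustration. Each external sample carries a learnable weight logit; samples whose loss stays above the weighted average (likely "harmful") have their weights driven toward zero.

```python
import numpy as np

# Toy 1-D regression: the true rule is y = 2x. The first three external
# samples are corrupted ("harmful"); the remaining five are clean.
x_ext = np.linspace(-1.0, 1.0, 8)
y_ext = 2.0 * x_ext
y_ext[:3] += np.array([4.0, -6.0, 5.0])   # corrupt three samples

w_model = 0.0                   # model parameter (slope) being trained
alpha = np.zeros_like(x_ext)    # logits of the per-sample weights

for _ in range(500):
    s = np.exp(alpha) / np.exp(alpha).sum()   # softmax sample weights
    residual = w_model * x_ext - y_ext
    losses = residual ** 2
    # gradient step on the weighted loss w.r.t. the model parameter
    w_model -= 0.5 * np.sum(s * residual * x_ext)
    # gradient step on the weight logits: samples with above-average loss
    # lose weight, suppressing their influence on training
    weighted_mean = np.sum(s * losses)
    alpha -= 0.1 * s * (losses - weighted_mean)

s = np.exp(alpha) / np.exp(alpha).sum()
```

With the corrupted samples down-weighted, almost all of the weight mass ends up on the clean samples and the fitted slope approaches the clean rule y = 2x.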
Surface projection is a GPU memory-efficient learning technique that enables 2D networks to learn 3D features. We observe that the boundary pixels of a 3D object form a surface that can be described by a 2D variable, so 2D networks should be able to recognize these boundary pixels. We therefore learn 3D features by using a 2D network to learn the projection distance mapping between the object's surface and a set of sampled spherical surfaces. Shape constructing is an effective approach to modeling shape priors. The key idea is to leverage contour fragments rather than pixels, as fragments provide far richer geometric information and shape cues. It is developed as an iterative algorithm with three key processes: fragment grouping, shape-template estimation, and fragment connecting, which progressively refine the modeled shape priors. Shape mask generator is an effective method that models shape priors by learning how to refine them. It first models shape priors from shape templates and then produces objects' shape masks according to the modeled priors. It next refines the modeled priors by minimizing a quantity, the generating residual, whose value is smaller when the produced shape masks are more accurate. All five methods are assessed on publicly available datasets; extensive experiments yield positive results and show performance gains over existing methods. These methods hence have great potential to advance deep learning models on a wide range of medical image segmentation tasks.|
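The geometric intuition behind surface projection can be sketched as follows. A closed, star-shaped 3D surface can be flattened into a 2D image by recording, for each direction sampled on a reference sphere, the distance from the sphere to the object surface; a 2D network could then learn this distance map instead of full 3D features. The analytic ellipsoid standing in for a segmented organ, the function names, and the sampling grid are all illustrative assumptions, not the thesis' implementation.

```python
import numpy as np

def surface_distance_map(radius_fn, n_theta=32, n_phi=64, sphere_r=2.0):
    """Project a star-shaped surface r = radius_fn(theta, phi) onto a 2D
    grid of distances from a sampled sphere of radius sphere_r."""
    theta = np.linspace(0.0, np.pi, n_theta)                  # polar angle
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)  # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    return sphere_r - radius_fn(T, P)   # distance from sphere to surface

# Toy object: an ellipsoid with semi-axes (1.0, 1.0, 0.5), star-shaped
# around the origin, so each ray from the center hits the surface once.
def ellipsoid_radius(theta, phi):
    a, b, c = 1.0, 1.0, 0.5
    x = np.sin(theta) * np.cos(phi) / a
    y = np.sin(theta) * np.sin(phi) / b
    z = np.cos(theta) / c
    return 1.0 / np.sqrt(x**2 + y**2 + z**2)

dmap = surface_distance_map(ellipsoid_radius)   # shape (32, 64): a 2D image
```

The resulting `dmap` is an ordinary 2D array, so standard 2D convolutional networks can process it while still encoding the object's 3D boundary geometry.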
|Rights:||All rights reserved|