Segmentation and Unsupervised Adversarial Domain Adaptation Between Medical Imaging Modalities
Author: Strauman, Andreas Storvik
Segmenting and labelling tumors in multimodal medical imaging are often vital parts of diagnostics and can in many cases be very labor intensive for clinicians. Advancing time-saving methods in the health sector can therefore be of great help to busy clinicians and may even save lives. Furthermore, creating methods that generically, accurately and successfully process unlabelled data would be a major breakthrough in deep learning. This thesis addresses both these challenges by exploring and improving current methods involving adversarial discriminative domain adaptation (ADDA) on multimodal imaging, and by addressing weaknesses not only in ADDA but also in the general adversarial discriminative case. More specifically, this thesis

- applies convolutional neural networks to segment soft tissue sarcomas in the PET, CT and MRI modalities, achieving, to the author's best knowledge, state-of-the-art results,
- explores unsupervised adversarial discriminative domain adaptation for segmentation of soft tissue sarcoma tumors between permutations of PET, CT and MRI,
- demonstrates weaknesses in state-of-the-art adversarial discriminative training, and
- improves and provides groundwork for further research on said techniques.

Additionally, the thesis provides a strong fundamental background for applying ADDA to medical imaging modalities, including a solid introduction to deep learning in medical imaging from both a theoretical and a practical perspective.
Publisher: UiT Norges arktiske universitet / UiT The Arctic University of Norway