Show simple item record

dc.contributor.advisor: Johansen, Håvard D.
dc.contributor.author: Jha, Debesh
dc.date.accessioned: 2022-01-16T23:55:52Z
dc.date.available: 2022-01-16T23:55:52Z
dc.date.embargoEndDate: 2027-01-21
dc.date.issued: 2022-01-21
dc.description.abstract [en_US]: Gastrointestinal (GI) tract cancers are among the most common cancers worldwide. In particular, colorectal cancer (CRC) is among the most lethal in terms of both incidence and mortality (the third most common cancer and the second most common cause of cancer-related deaths). Colonoscopy is the gold standard for screening patients for CRC. During a colonoscopy, gastroenterologists examine the large bowel, detect precancerous abnormal tissue growths such as polyps, and remove them through the scope if necessary. Although colonoscopy is considered the gold standard, it is an operator-dependent procedure. Previous research has shown high miss rates for GI abnormalities; the polyp miss rate, for example, is around 22%-28%. Early detection of GI lesions and cancers at a curable stage can help reduce the mortality rate. The development of automated, accurate, and efficient methods for the detection of GI cancers could benefit both gastroenterologists and patients. In addition, if integrated into screening programs, automatic analysis could improve overall GI endoscopy quality.

The medical field is becoming more interdisciplinary, and the importance of medical image data is increasing rapidly. Medical image analysis can play a central role in disease detection, diagnosis, and treatment. With the increasing number of medical images, there is enormous potential to improve screening quality. Deep learning (DL), and in particular convolutional neural network (CNN) based models, has tremendous potential to automate and enhance medical image analysis and provide an accurate diagnosis. Automated analysis of medical images could reduce the burden on medical experts and provide quality, accessible healthcare to a larger population. In medical imaging, classification, detection, and semantic segmentation tasks are crucial for clinical practice. The development of accurate and efficient computer-aided diagnosis (CADx) or computer-aided detection (CADe) models can help identify abnormalities at an early stage and act as a third eye for doctors.

To this end, we have studied and designed machine learning (ML) and DL based architectures for GI tract disease classification, detection, and segmentation. Our architectures classify different types of GI tract findings and abnormalities accurately with high performance. Our contribution towards the development of CADe models for automated polyp detection showed improved performance. Of the three medical imaging tasks, semantic segmentation plays a particularly significant role in extracting meaningful information from images by classifying each pixel and segmenting the image by class. Using the GI case scenario, we have mainly worked on polyp segmentation and have proposed and evaluated several automated polyp segmentation architectures. We have also built architectures for surgical instrument segmentation that achieve high performance at real-time speed. We have collected, annotated, and released several open-access datasets, such as HyperKvasir, Kvasir-Capsule, PolypGen, Kvasir-SEG, Kvasir-Instrument, and KvasirCapsule-SEG, in collaboration with hospitals in Norway and abroad to address the lack of datasets in the field. We have devised several medical image segmentation architectures (for example, ResUNet++, DoubleU-Net, and ResUNet + CRF + TTA) that provide improved results on the publicly available datasets. Besides that, we have also designed architectures capable of segmenting polyps in real-time at a high number of frames per second (FPS), for example ColonSegNet, NanoNet, PNS-Net, and DDANet. Moreover, we performed extensive studies on the generalizability of our models on public datasets, and by creating a dataset consisting of data from different hospitals, we enable multi-center cross-dataset testing. Our results suggest that the proposed DL-based CADx systems could be of great assistance to clinicians in the future.
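As an illustrative aside, the abstract above describes semantic segmentation as classifying each pixel of an image by class. The short Python sketch below shows the pixel-wise overlap metrics, the Dice coefficient and intersection over union (IoU), that are commonly used to score such segmentation masks; the function names and toy masks are assumptions made for this example and are not code or values from the thesis.

# Illustrative only: pixel-wise overlap metrics (Dice, IoU) of the kind
# commonly used to score polyp segmentation masks. Names and toy masks are
# assumptions for this sketch, not taken from the thesis.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice score between two binary masks of identical shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union (Jaccard index) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

if __name__ == "__main__":
    # Toy 4x4 masks standing in for a predicted and a ground-truth polyp mask.
    prediction = np.array([[0, 1, 1, 0],
                           [0, 1, 1, 0],
                           [0, 0, 0, 0],
                           [0, 0, 0, 0]])
    ground_truth = np.array([[0, 1, 1, 0],
                             [0, 1, 0, 0],
                             [0, 0, 0, 0],
                             [0, 0, 0, 0]])
    print(f"Dice: {dice_coefficient(prediction, ground_truth):.3f}")
    print(f"IoU:  {iou_score(prediction, ground_truth):.3f}")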
dc.description.doctoraltype [en_US]: ph.d.
dc.description.popularabstract [en_US]: Gastrointestinal (GI) tract cancers are the leading cause of cancer-related death worldwide. Among GI cancers, colorectal cancer is the third most commonly diagnosed cancer. Colonoscopy is the gold standard for colorectal cancer screening, but it is an expensive, time-demanding, and operator-dependent procedure. Studies have reported polyp miss rates of 22%-28% during these procedures. Computer-aided diagnosis (CAD) methods can highlight suspicious lesions on the screen and alert gastroenterologists in real-time, improving clinical outcomes irrespective of operator experience and potentially saving millions of lives.

In this thesis, we have designed machine learning-based architectures for GI tract abnormality detection, classification, and segmentation. The limited availability of public datasets was one of the significant challenges for the development of automated methods. We addressed this problem by collecting, curating, annotating, and publicly releasing several datasets, including the world's largest publicly available GI endoscopy and video capsule endoscopy datasets. In our experiments, our classification algorithms classify different GI endoscopy findings with an accuracy of 98.07%. For polyp segmentation, our algorithms can identify and segment potential lesions during colonoscopy with an accuracy of more than 94.93% at a real-time speed of 182.38 frames per second. We are able to simultaneously identify multiple polyps, including flat and sessile polyps that are often overlooked by endoscopists during colonoscopy examinations. As an important part of the procedure, we also performed instrument segmentation to detect which instruments are used during the examination. The developed method is able to segment different types of surgical instruments in real-time.

As another aspect of our work, we addressed the challenge of generalizability, meaning that our models should perform well on completely new data, enabling, for example, a model trained at one hospital to be deployed at another. We demonstrated reliable performance and high generalizability compared to baseline algorithms. Because high-end hardware is often not available in hospitals, we have also developed lightweight architectures that can be integrated with low-end hardware devices. Our algorithms are not only designed for polyp segmentation and surgical instrument segmentation but can also be applied to other medical or non-medical image segmentation tasks. All of our work and algorithms are open-sourced and have been well received by the community.
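As a further illustrative aside, the popular abstract above reports segmentation speed in frames per second (FPS). The Python sketch below shows how such an FPS figure is typically estimated: run a few warm-up passes, time repeated forward passes over a frame, and divide the frame count by the elapsed time. The stand-in model, input size, and frame counts are assumptions for this example, not the configuration used in the thesis.

# Illustrative only: estimating frames per second (FPS) for a segmentation
# model by timing repeated forward passes. The "model" here is a placeholder
# thresholding function, not an architecture from the thesis.
import time
import numpy as np

def estimate_fps(predict_fn, frame: np.ndarray, n_frames: int = 100, warmup: int = 10) -> float:
    """Average frames per second of `predict_fn` over `n_frames` calls."""
    for _ in range(warmup):          # warm-up runs are excluded from timing
        predict_fn(frame)
    start = time.perf_counter()
    for _ in range(n_frames):
        predict_fn(frame)
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

if __name__ == "__main__":
    # Dummy 512x512 RGB frame and a stand-in "model" that thresholds intensity.
    dummy_frame = np.random.rand(512, 512, 3).astype(np.float32)
    fake_model = lambda x: (x.mean(axis=-1) > 0.5).astype(np.uint8)
    print(f"Estimated FPS: {estimate_fps(fake_model, dummy_frame):.1f}")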
dc.description.sponsorship [en_US]: The research was partially funded by the PRIVATON project (263248). We performed experiments on the Experimental Infrastructure for Exploration of Exascale Computing (eX3) system, which is financially supported by RCN under contract 270053.
dc.identifier.isbn: 978-82-8236-464-5 (print) - 978-82-8236-465-2 (pdf)
dc.identifier.uri: https://hdl.handle.net/10037/23693
dc.language.iso [en_US]: eng
dc.publisher [en_US]: UiT Norges arktiske universitet
dc.publisher [en_US]: UiT The Arctic University of Norway
dc.relation.haspart [en_US]:
Paper I: Jha, D., Smedsrud, P.H., Riegler, M.A., Halvorsen, P., de Lange, T., Johansen, D. & Johansen, H.D. (2019). ResUNet++: An Advanced Architecture for Medical Image Segmentation. Proceedings of IEEE International Symposium on Multimedia (ISM), 2019, 225-230. (Accepted manuscript version). Published version available at https://doi.org/10.1109/ISM46123.2019.00049.
Paper II: Jha, D., Smedsrud, P.H., Riegler, M.A., Halvorsen, P., de Lange, T., Johansen, D. & Johansen, H.D. (2020). Kvasir-SEG: A Segmented Polyp Dataset. In: Ro, Y., Cheng, W.H., Kim, J., Chu, W.T., Cui, P., Choi, J.W., Hu, M.C. & De Neve, W. (Eds.), MultiMedia Modeling. MMM 2020. Lecture Notes in Computer Science, 11962, 451-462. Springer, Cham. Also available at https://doi.org/10.1007/978-3-030-37734-2_37.
Paper III: Jha, D., Smedsrud, P.H., Johansen, D., de Lange, T., Johansen, H.D., Halvorsen, P. & Riegler, M.A. (2021). A Comprehensive Study on Colorectal Polyp Segmentation With ResUNet++, Conditional Random Field and Test-Time Augmentation. IEEE Journal of Biomedical and Health Informatics, 25(6), 2029-2040. Also available in Munin at https://hdl.handle.net/10037/20301.
Paper IV: Jha, D., Riegler, M.A., Johansen, D., Halvorsen, P. & Johansen, H.D. (2020). DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation. 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), 558-564. Published version not available in Munin due to publisher's restrictions. Published version available at https://doi.org/10.1109/CBMS49503.2020.00111.
Paper V: Jha, D., Ali, S., Tomar, N.K., Johansen, H.D., Johansen, D., Rittscher, J. & Halvorsen, P. (2021). Real-Time Polyp Detection, Localization and Segmentation in Colonoscopy Using Deep Learning. IEEE Access, 9, 40496-40510. Also available in Munin at https://hdl.handle.net/10037/23242.
Paper VI: Jha, D., Tomar, N.K., Ali, S., Riegler, M.A., Johansen, H.D., Johansen, D. & Halvorsen, P. (2021). NanoNet: Real-Time Polyp Segmentation in Endoscopy. Proceedings of IEEE International Symposium on Computer-Based Medical Systems (CBMS), 2021. (Accepted manuscript version). Published version available at https://doi.org/10.1109/CBMS52027.2021.00014.
Paper VII: Jha, D., Ali, S., Emanuelsen, K., Hicks, S.A., Garcia-Ceja, E., Riegler, M.A., … Halvorsen, P. (2021). Kvasir-Instrument: Diagnostic and therapeutic tool segmentation dataset in gastrointestinal endoscopy. In: Lokoč, J., Skopal, T., Schoeffmann, K., Mezaris, V., Li, X., Vrochidis, S. & Patras, I. (Eds.), MultiMedia Modeling. MMM 2021. Lecture Notes in Computer Science, 12573, 218-229. Springer, Cham. Also available at https://doi.org/10.1007/978-3-030-67835-7_19.
Paper VIII: Jha, D., Hicks, S.A., Emanuelsen, K., Johansen, H., Johansen, D., de Lange, T., Riegler, M.A. & Halvorsen, P. (2020). Medico Multimedia Task at MediaEval 2020: Automatic Polyp Segmentation. Working Notes Proceedings of the MediaEval 2020 Workshop, Online, 14-15 December 2020. Also available at http://ceur-ws.org/Vol-2882/paper1.pdf.
Paper IX: Jha, D., Yazidi, A., Riegler, M.A., Johansen, D., Johansen, H.D. & Halvorsen, P. (2021). LightLayers: Parameter Efficient Dense and Convolutional Layers for Image Classification. In: Zhang, Y., Xu, Y. & Tian, H. (Eds.), Parallel and Distributed Computing, Applications and Technologies. PDCAT 2020. Lecture Notes in Computer Science, 12606, 285-296. Springer, Cham. Also available at https://doi.org/10.1007/978-3-030-69244-5_25.
Paper X: Jha, D., Ali, S., Hicks, S., Thambawita, V., Borgli, H., Smedsrud, P.H., … Halvorsen, P. (2021). A comprehensive analysis of classification methods in gastrointestinal endoscopy imaging. Medical Image Analysis, 70, 102007. Also available in Munin at https://hdl.handle.net/10037/23476.
Paper XI: Jha, D., Ali, S., Tomar, N.K., Riegler, M.A., Johansen, D., Johansen, H.D. & Halvorsen, P. (2021). Exploring Deep Learning Methods for Real-Time Surgical Instrument Segmentation in Laparoscopy. Proceedings of the IEEE EMBS International Conference on Biomedical and Health Informatics (BHI), 2021. Published version not available in Munin due to publisher's restrictions. Published version available at https://doi.org/10.1109/BHI50953.2021.9508610.
Paper XII: Borgli, H., Thambawita, V., Smedsrud, P.H., Hicks, S., Jha, D., Eskeland, S.L., … de Lange, T. (2020). HyperKvasir, a comprehensive multiclass image and video dataset for gastrointestinal endoscopy. Scientific Data, 7, 283. Also available in Munin at https://hdl.handle.net/10037/20442.
Paper XIII: Smedsrud, P.H., Gjestang, H.L., Nedrejord, O.O., Næss, E., Thambawita, V., Hicks, S., … Halvorsen, P. (2021). Kvasir-Capsule, a video capsule endoscopy dataset. Scientific Data, 8, 142. Also available in Munin at https://hdl.handle.net/10037/21497.
Paper XIV: Thambawita, V., Jha, D., Hammer, H.L., Johansen, H.D., Johansen, D., Halvorsen, P. & Riegler, M.A. (2020). An extensive study on cross-dataset bias and evaluation metrics interpretation for machine learning applied to gastrointestinal tract abnormality classification. ACM Transactions on Computing for Healthcare, 1(3), 17. Also available at https://doi.org/10.1145/3386295.
Paper XV: Tomar, N.K., Jha, D., Ali, S., Johansen, H.D., Johansen, D., Riegler, M.A. & Halvorsen, P. (2021). DDANet: Dual Decoder Attention Network for Automatic Polyp Segmentation. (Accepted manuscript). Now published in: Del Bimbo, A., Cucchiara, R., Sclaroff, S., Farinella, G.M., Mei, T., Bertini, M., Escalante, H.J. & Vezzani, R. (Eds.), Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, 12668, 307-314. Springer, Cham. Available at https://doi.org/10.1007/978-3-030-68793-9_23.
Paper XVI: Tomar, N., Ibtehaz, N., Jha, D., Halvorsen, P. & Ali, S. (2021). Improving generalizability in polyp segmentation using ensemble convolutional neural network. Proceedings of the 3rd International Workshop and Challenge on Computer Vision in Endoscopy (EndoCV 2021), Nice, France, April 13, 2021. Also available at http://ceur-ws.org/Vol-2886/paper5.pdf.
Paper XVII: Hicks, S.A., Jha, D., Thambawita, V., Halvorsen, P., Hammer, H.L. & Riegler, M.A. (2021). The EndoTect 2020 Challenge: Evaluation and Comparison of Classification, Segmentation and Inference Time for Endoscopy. In: Del Bimbo, A., Cucchiara, R., Sclaroff, S., Farinella, G.M., Mei, T., Bertini, M., Escalante, H.J. & Vezzani, R. (Eds.), Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, 12668, 263-274. Springer, Cham. Also available at https://doi.org/10.1007/978-3-030-68793-9_18.
Paper XVIII: Thambawita, V., Jha, D., Riegler, M., Halvorsen, P., Hammer, H.L., Johansen, H.D. & Johansen, D. (2018). The medico-task 2018: Disease detection in the gastrointestinal tract using global features and deep learning. Proceedings of MediaEval’18, 29-31 October 2018, Sophia Antipolis, France. Also available at http://ceur-ws.org/Vol-2283/MediaEval_18_paper_20.pdf.
Paper XIX: Strümke, I., Hicks, S.A., Thambawita, V., Jha, D., Parasa, S., Riegler, M.A. & Halvorsen, P. (2021). Artificial Intelligence in Gastroenterology. In: Lidströmer, N. & Ashrafian, H. (Eds.), Artificial Intelligence in Medicine, 1-21. Springer, Cham. (Accepted manuscript). Published version available at https://doi.org/10.1007/978-3-030-58080-3_163-2.
Paper XX: Roß, T., Reinke, A., Full, P.M., Wagner, M., Kenngott, H., Apitz, M., … Maier-Hein, L. (2021). Comparative validation of multi-instance instrument segmentation in endoscopy: Results of the ROBUST-MIS 2019 challenge. Medical Image Analysis, 70, 101920. Also available at https://doi.org/10.1016/j.media.2020.101920.
Paper XXI: Ji, G.-P., Chou, Y.-C., Fan, D.-P., Chen, G., Fu, H., Jha, D. & Shao, L. (2021). Progressively Normalized Self-Attention Network for Video Polyp Segmentation. (Manuscript). Now published in: de Bruijne, M., Cattin, P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y. & Essert, C. (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science, 12901, 142-152. Springer, Cham. Available at https://doi.org/10.1007/978-3-030-87193-2_14.
dc.rights.accessRights [en_US]: embargoedAccess
dc.rights.holder: Copyright 2022 The Author(s)
dc.rights.uri [en_US]: https://creativecommons.org/licenses/by-nc-sa/4.0
dc.rights [en_US]: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
dc.subject [en_US]: VDP::Technology: 500::Medical technology: 620
dc.title [en_US]: Machine Learning-based Classification, Detection, and Segmentation of Medical Images
dc.type [en_US]: Doctoral thesis
dc.type [en_US]: Doktorgradsavhandling

