Show simple item record

dc.contributor.advisor: Jenssen, Robert
dc.contributor.advisor: Kampffmeyer, Michael
dc.contributor.author: Wickstrøm, Kristoffer Knutsen
dc.date.accessioned: 2018-08-22T13:56:40Z
dc.date.available: 2018-08-22T13:56:40Z
dc.date.issued: 2018-05-14
dc.description.abstract: Colorectal cancer is one of the leading causes of cancer-related deaths worldwide, and prevention is commonly done through regular colonoscopy screenings. During a colonoscopy, physicians manually inspect the colon of a patient using a camera in search of polyps, which are known to be possible precursors to colorectal cancer. Because a colonoscopy is a manual procedure, it is susceptible to human factors such as fatigue, which can lead to missed polyps. To increase the polyp detection rate, automated detection procedures, which are not affected by such flaws, have been proposed to aid practitioners. Deep Neural Networks (DNNs) are propelling advances in a range of computer vision tasks such as object detection and object segmentation. These advances have motivated research into applying such models to medical image analysis. If DNN-based models are to be helpful in a medical context, they need to be precise and interpretable, and the uncertainty in their predictions must be well understood. In this thesis, we introduce a novel approach for visualizing uncertainty in DNNs and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) and provide a comparison between these models. Our highest-performing model achieves a considerable improvement over the previous state of the art on the EndoScene dataset, a publicly available dataset for semantic segmentation of colorectal polyps. Additionally, we propose a novel approach for analyzing FCNs through the lens of information-theoretic learning. (en_US)
dc.identifier.uri: https://hdl.handle.net/10037/13552
dc.language.iso: eng (en_US)
dc.publisher: UiT Norges arktiske universitet (en_US)
dc.publisher: UiT The Arctic University of Norway (en_US)
dc.rights.accessRights: openAccess (en_US)
dc.rights.holder: Copyright 2018 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/3.0 (en_US)
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0) (en_US)
dc.subject.courseID: FYS-3900
dc.subject: VDP::Matematikk og Naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420::Simulering, visualisering, signalbehandling, bildeanalyse: 429 (en_US)
dc.subject: VDP::Mathematics and natural science: 400::Information and communication science: 420::Simulation, visualization, signal processing, image processing: 429 (en_US)
dc.title: Uncertainty Modeling and Interpretability in Convolutional Neural Networks for Polyp Segmentation (en_US)
dc.type: Master thesis (en_US)
dc.type: Mastergradsoppgave (en_US)
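
The abstract describes estimating and visualizing per-pixel uncertainty in FCN-based polyp segmentation. The record itself does not name a method, so the snippet below is only an illustrative sketch of one standard technique for this task, Monte Carlo dropout, written in PyTorch; the segmentation model, the number of samples, and all other parameters are placeholder assumptions, not code from the thesis.

import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    # Freeze batch-norm statistics, but keep dropout stochastic at test time.
    model.eval()
    for module in model.modules():
        if isinstance(module, (nn.Dropout, nn.Dropout2d)):
            module.train()

@torch.no_grad()
def mc_dropout_segment(model: nn.Module, image: torch.Tensor, n_samples: int = 20):
    # Each forward pass samples a different dropout mask, giving a
    # slightly different prediction; their spread estimates uncertainty.
    enable_mc_dropout(model)
    probs = torch.stack(
        [torch.softmax(model(image), dim=1) for _ in range(n_samples)]
    )
    mean_probs = probs.mean(dim=0)             # (B, C, H, W) averaged prediction
    uncertainty = probs.var(dim=0).sum(dim=1)  # (B, H, W) per-pixel uncertainty map
    return mean_probs, uncertainty

Rendering the returned uncertainty map as an image alongside the mean segmentation is the kind of uncertainty visualization the abstract refers to.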


Associated file(s)


This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0).