Uncertainty modeling and interpretability in convolutional neural networks for polyp segmentation
Permanent link: https://hdl.handle.net/10037/26991
Date: 2018-11-01
Type: Journal article
Peer reviewed
Abstract
Convolutional Neural Networks (CNNs) are propelling advances in a range of computer vision tasks such as object detection and object segmentation. Their success has motivated research into applying such models to medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise and interpretable, and the uncertainty in their predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a mean intersection over union (IoU) of 76.06% on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.
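A common way to obtain the kind of per-pixel uncertainty estimates the abstract refers to is Monte Carlo dropout, i.e. averaging over stochastic forward passes with dropout kept active at test time. The sketch below illustrates this idea in PyTorch; it is a minimal illustration under that assumption, not the authors' implementation, and the function and variable names (mc_dropout_predict, n_samples) are hypothetical.

```python
# Minimal sketch of Monte Carlo dropout uncertainty estimation for
# semantic segmentation (illustrative; not the paper's code).
import torch

def mc_dropout_predict(model, image, n_samples=50):
    """Run n_samples stochastic forward passes with dropout active;
    return the mean class probabilities and a per-pixel uncertainty map."""
    model.eval()  # keep batch-norm statistics frozen
    # Re-enable only the dropout layers so each forward pass is stochastic.
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(image), dim=1) for _ in range(n_samples)
        ])  # shape: (n_samples, batch, classes, H, W)
    mean_probs = probs.mean(dim=0)
    # Predictive variance across samples, summed over classes,
    # as a simple per-pixel uncertainty estimate.
    uncertainty = probs.var(dim=0).sum(dim=1)
    return mean_probs, uncertainty
```

Pixels where the stochastic passes disagree (e.g. polyp boundaries) receive high variance, which is the behavior a clinician-facing segmentation system would flag for review.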
Publisher: IEEE
Citation: Wickstrøm KK, Kampffmeyer MC, Jenssen R. Uncertainty modeling and interpretability in convolutional neural networks for polyp segmentation. In: 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), 2018. IEEE Signal Processing Society.
Copyright 2018 The Author(s)