Show simple item record

dc.contributor.author: Jha, Debesh
dc.contributor.author: Riegler, Michael Alexander
dc.contributor.author: Johansen, Dag
dc.contributor.author: Halvorsen, Pål
dc.contributor.author: Johansen, Håvard D.
dc.date.accessioned: 2023-03-27T11:04:16Z
dc.date.available: 2023-03-27T11:04:16Z
dc.date.issued: 2020-09-01
dc.description.abstract: Semantic image segmentation is the process of labeling each pixel of an image with its corresponding class. Encoder-decoder approaches such as U-Net and its variants are a popular strategy for solving medical image segmentation tasks. To improve the performance of U-Net on various segmentation tasks, we propose a novel architecture called DoubleU-Net, which stacks two U-Net architectures on top of each other. The first U-Net uses a pre-trained VGG-19 as the encoder, whose features, learned on ImageNet, transfer easily to other tasks. To capture more semantic information efficiently, we add a second U-Net at the bottom. We also adopt Atrous Spatial Pyramid Pooling (ASPP) to capture contextual information within the network. We have evaluated DoubleU-Net on four medical segmentation datasets, covering various imaging modalities such as colonoscopy, dermoscopy, and microscopy. Experiments on the MICCAI 2015 segmentation challenge, CVC-ClinicDB, the 2018 Data Science Bowl challenge, and the Lesion Boundary Segmentation datasets demonstrate that DoubleU-Net outperforms U-Net and the baseline models. Moreover, DoubleU-Net produces more accurate segmentation masks, especially on the CVC-ClinicDB and MICCAI 2015 segmentation challenge datasets, which contain challenging images such as small and flat polyps. These results show a clear improvement over the existing U-Net model. The encouraging results, produced on various medical image segmentation datasets, show that DoubleU-Net can be used as a strong baseline for both medical image segmentation and cross-dataset evaluation to measure the generalizability of Deep Learning (DL) models. (en_US)
dc.identifier.citation: Jha, Riegler, Johansen, Halvorsen, Johansen. DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation. IEEE International Symposium on Computer-Based Medical Systems. 2020 (en_US)
dc.identifier.cristinID: FRIDAID 1835631
dc.identifier.doi: 10.1109/CBMS49503.2020.00111
dc.identifier.issn: 2372-9198
dc.identifier.uri: https://hdl.handle.net/10037/28864
dc.language.iso: eng (en_US)
dc.publisher: IEEE (en_US)
dc.relation.journal: IEEE International Symposium on Computer-Based Medical Systems
dc.relation.projectID: Norges forskningsråd: 263248 (en_US)
dc.rights.accessRights: openAccess (en_US)
dc.rights.holder: Copyright 2020 The Author(s) (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0 (en_US)
dc.rights: Attribution 4.0 International (CC BY 4.0) (en_US)
dc.subject: VDP::Matematikk og naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420 (en_US)
dc.subject: VDP::Mathematics and natural sciences: 400::Information and communication science: 420 (en_US)
dc.subject: Fordøyelseskanalen / Gastrointestinal Tract (en_US)
dc.subject: Mage-tarmsykdommer / gastrointestinale sykdommer / Gastrointestinal Diseases (en_US)
dc.subject: Maskinlæring / Machine learning (en_US)
dc.title: DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation (en_US)
dc.type.version: acceptedVersion (en_US)
dc.type: Journal article (en_US)
dc.type: Tidsskriftartikkel (Journal article) (en_US)
dc.type: Peer reviewed (en_US)
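
The abstract above describes the DoubleU-Net design at a high level: two stacked U-Nets, a pre-trained VGG-19 encoder in the first network, ASPP for contextual information, and the first network's predicted mask gating the input to the second network. The following is a minimal TensorFlow/Keras sketch of that stacked idea, not the authors' released implementation: the ASPP blocks and the cross-network skip connections of the full design are omitted, and the helper names (build_double_unet, conv_block, decoder) and filter sizes are illustrative assumptions.

# Hedged sketch of the stacked two-U-Net idea described in the abstract.
# Simplified: ASPP and cross-network skip connections are omitted.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 conv + BN + ReLU, as in a standard U-Net stage.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return x

def decoder(x, skips, filters=(256, 128, 64, 32)):
    # Upsample and concatenate with encoder skip connections.
    for f, skip in zip(filters, skips):
        x = layers.UpSampling2D(2, interpolation="bilinear")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    return x

def build_double_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)

    # --- Network 1: pre-trained VGG-19 encoder + decoder ---
    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                      input_tensor=inputs)
    skips1 = [vgg.get_layer(n).output for n in
              ("block4_conv4", "block3_conv4", "block2_conv2", "block1_conv2")]
    x = vgg.get_layer("block5_conv4").output      # bottleneck (ASPP omitted here)
    x = decoder(x, skips1)
    mask1 = layers.Conv2D(1, 1, activation="sigmoid")(x)

    # --- Network 2: plain U-Net on the input gated by the first mask ---
    x = layers.Multiply()([inputs, mask1])
    skips2 = []
    for f in (32, 64, 128, 256):
        x = conv_block(x, f)
        skips2.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 512)                        # bottleneck (ASPP omitted here)
    x = decoder(x, list(reversed(skips2)))
    mask2 = layers.Conv2D(1, 1, activation="sigmoid")(x)

    # Both predicted masks are returned, here simply concatenated.
    outputs = layers.Concatenate()([mask1, mask2])
    return Model(inputs, outputs, name="double_unet_sketch")

The second network sees the input image multiplied by the first network's mask, so it can refine regions the first network already considers foreground; this gating step is the main difference from simply running two U-Nets in sequence.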



