
dc.contributor.author: Salahuddin, Suaiba Amina
dc.contributor.author: Hansen, Stine
dc.contributor.author: Gautam, Srishti
dc.contributor.author: Kampffmeyer, Michael
dc.contributor.author: Jenssen, Robert
dc.date.accessioned: 2023-02-06T07:33:40Z
dc.date.available: 2023-02-06T07:33:40Z
dc.date.issued: 2022-11-13
dc.description.abstract: Standard strategies for fully supervised semantic segmentation of medical images require large pixel-level annotated datasets. This makes such methods costly in manual labor and limits their usability when segmentation is needed for new classes for which data is scarce. Few-shot segmentation (FSS) is a recent and promising direction within the deep learning literature designed to alleviate these challenges. In FSS, the aim is to create segmentation networks that generalize based on just a few annotated examples, inspired by human learning. A dominant direction in FSS is based on matching representations of the image to be segmented with prototypes acquired from a few annotated examples. A recent method called the ADNet, inspired by anomaly detection, computes only a single prototype, which captures the properties of the foreground segment. In this paper, the aim is to investigate whether the ADNet may benefit from more than one prototype to capture foreground properties. We take inspiration from the recent idea of self-guidance, where an initial prediction of the support image is used to compute two new prototypes, representing the covered region and the missed region. We couple these more fine-grained prototypes with the ADNet framework to form what we refer to as the self-guided ADNet, or SG-ADNet for short. We evaluate the proposed SG-ADNet on a benchmark cardiac MRI data set, achieving competitive overall performance compared to the baseline ADNet and helping to reduce over-segmentation errors for some classes.
dc.description: Source at: https://ceur-ws.org/Vol-3271/Paper18_CVCS2022.pdf
dc.identifier.citation: Salahuddin, Hansen, Gautam, Kampffmeyer, Jenssen. A self-guided anomaly detection-inspired few-shot segmentation network. CEUR Workshop Proceedings. 2022;3271
dc.identifier.cristinID: FRIDAID 2084188
dc.identifier.issn: 1613-0073
dc.identifier.uri: https://hdl.handle.net/10037/28499
dc.language.iso: eng
dc.publisher: CEUR Workshop Proceedings
dc.relation.journal: CEUR Workshop Proceedings
dc.rights.accessRights: openAccess
dc.rights.holder: Copyright 2022 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.subject: Few-shot learning
dc.subject: Machine learning
dc.subject: Medical image analysis
dc.subject: Self-supervised deep learning
dc.title: A self-guided anomaly detection-inspired few-shot segmentation network
dc.type.version: publishedVersion
dc.type: Journal article
dc.type: Peer reviewed
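
Note: to make the prototype computations in the abstract concrete, the following is a minimal PyTorch sketch of the two ingredients it names: pooling a single foreground prototype with ADNet-style anomaly scoring, and the self-guidance step that splits the support foreground into a covered and a missed region. This is an illustrative reconstruction under stated assumptions, not the authors' released code; the function names, the cosine-similarity scale, and the toy shapes are assumptions.

import torch
import torch.nn.functional as F

def masked_average_pooling(features, mask):
    # Pool one prototype vector from the feature map over a binary mask.
    # features: (B, C, h, w) backbone features; mask: (B, 1, H, W) float mask.
    mask = F.interpolate(mask, size=features.shape[-2:],
                         mode="bilinear", align_corners=False)
    return (features * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-5)

def self_guided_prototypes(support_features, support_mask, initial_pred):
    # Self-guidance: an initial prediction on the support image splits the
    # true foreground into the region it covered and the region it missed,
    # and one prototype is pooled from each. Both masks are (B, 1, H, W).
    covered = support_mask * initial_pred
    missed = support_mask * (1.0 - initial_pred)
    return (masked_average_pooling(support_features, covered),
            masked_average_pooling(support_features, missed))

def anomaly_scores(query_features, prototype, scale=20.0):
    # ADNet-style scoring: negative scaled cosine similarity to a single
    # foreground prototype. Thresholding these scores (with a learned
    # threshold in the actual method) yields the foreground segmentation.
    sim = F.cosine_similarity(query_features,
                              prototype[:, :, None, None], dim=1)
    return -scale * sim

# Toy usage with random tensors (shapes only; no trained backbone here):
feats = torch.randn(1, 64, 32, 32)
gt_mask = (torch.rand(1, 1, 256, 256) > 0.5).float()
init_pred = (torch.rand(1, 1, 256, 256) > 0.5).float()
p_covered, p_missed = self_guided_prototypes(feats, gt_mask, init_pred)
scores = anomaly_scores(feats, p_covered)  # (1, 32, 32) score map

In the SG-ADNet described in the abstract, each of the fine-grained prototypes would produce such a score map on the query features; how the resulting maps are combined and thresholded is detailed in the paper itself.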

