dc.contributor.author | Salahuddin, Suaiba Amina | |
dc.contributor.author | Hansen, Stine | |
dc.contributor.author | Gautam, Srishti | |
dc.contributor.author | Kampffmeyer, Michael | |
dc.contributor.author | Jenssen, Robert | |
dc.date.accessioned | 2023-02-06T07:33:40Z | |
dc.date.available | 2023-02-06T07:33:40Z | |
dc.date.issued | 2022-11-13 | |
dc.description.abstract | Standard strategies for fully supervised semantic segmentation of medical images require large pixel-level
annotated datasets. The manual labor this demands makes such methods costly and limits their
usability when segmentation is needed for new classes for which data is scarce. Few-shot segmentation
(FSS) is a recent and promising direction within the deep learning literature designed to alleviate these
challenges. In FSS, the aim is to create segmentation networks that can generalize from
just a few annotated examples, inspired by human learning. A dominant direction in FSS is based on
matching representations of the image to be segmented with prototypes acquired from a few annotated
examples. A recent method called the ADNet, inspired by anomaly detection, computes only a single
prototype, which captures the properties of the foreground segment. In this paper, the aim is
to investigate whether the ADNet may benefit from more than one prototype to capture foreground
properties. We take inspiration from the very recent idea of self-guidance, where an initial prediction
of the support image is used to compute two new prototypes, representing the covered region and the
missed region. We couple these more fine-grained prototypes with the ADNet framework to form what
we refer to as the self-guided ADNet, or SG-ADNet for short. We evaluate the proposed SG-ADNet on a
benchmark cardiac MRI dataset, achieving competitive overall performance compared to the baseline
ADNet and helping reduce over-segmentation errors for some classes. | en_US |
dc.description | Source at: <a href="https://ceur-ws.org/Vol-3271/Paper18_CVCS2022.pdf">https://ceur-ws.org/Vol-3271/Paper18_CVCS2022.pdf</a> | en_US |
dc.identifier.citation | Salahuddin, Hansen, Gautam, Kampffmeyer, Jenssen. A self-guided anomaly detection-inspired few-shot segmentation network. CEUR Workshop Proceedings. 2022;3271 | en_US |
dc.identifier.cristinID | FRIDAID 2084188 | |
dc.identifier.issn | 1613-0073 | |
dc.identifier.uri | https://hdl.handle.net/10037/28499 | |
dc.language.iso | eng | en_US |
dc.publisher | CEUR Workshop Proceedings | en_US |
dc.relation.journal | CEUR Workshop Proceedings | |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2022 The Author(s) | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0 | en_US |
dc.rights | Attribution 4.0 International (CC BY 4.0) | en_US |
dc.subject | Few shot learning / Few shot learning | en_US |
dc.subject | Maskinlæring / Machine learning | en_US |
dc.subject | Medical image analysis / Medical image analysis | en_US |
dc.subject | Self-supervised deep learning / Self-supervised deep learning | en_US |
dc.title | A self-guided anomaly detection-inspired few-shot segmentation network | en_US |
dc.type.version | publishedVersion | en_US |
dc.type | Journal article | en_US |
dc.type | Tidsskriftartikkel | en_US |
dc.type | Peer reviewed | en_US |