dc.contributor.author | Hansen, Stine | |
dc.contributor.author | Gautam, Srishti | |
dc.contributor.author | Salahuddin, Suaiba Amina | |
dc.contributor.author | Kampffmeyer, Michael Christian | |
dc.contributor.author | Jenssen, Robert | |
dc.date.accessioned | 2024-01-12T20:34:10Z | |
dc.date.available | 2024-01-12T20:34:10Z | |
dc.date.issued | 2023-08-02 | |
dc.description.abstract | A major barrier to applying deep segmentation models in the medical domain is their typical data-hungry nature, requiring experts to collect and label large amounts of data for training. As a reaction, prototypical few-shot segmentation (FSS) models have recently gained traction as data-efficient alternatives. Nevertheless, despite the recent progress of these models, they still have some essential shortcomings that must be addressed. In this work, we focus on three of these shortcomings: (i) the lack of uncertainty estimation, (ii) the lack of a guiding mechanism to help locate edges and encourage spatial consistency in the segmentation maps, and (iii) the models’ inability to do one-step multi-class segmentation. Without modifying or requiring a specific backbone architecture, we propose a modified prototype extraction module that facilitates the computation of uncertainty maps in prototypical FSS models, and show that the resulting maps are useful indicators of the model uncertainty. To improve the segmentation around boundaries and to encourage spatial consistency, we propose a novel feature refinement module that leverages structural information in the input space to help guide the segmentation in the feature space. Furthermore, we demonstrate how uncertainty maps can be used to automatically guide this feature refinement. Finally, to avoid ambiguous voxel predictions that occur when images are segmented class-by-class, we propose a procedure to perform one-step multi-class FSS. The efficiency of our proposed methodology is evaluated on two representative datasets for abdominal organ segmentation (CHAOS dataset and BTCV dataset) and one dataset for cardiac segmentation (MS-CMRSeg dataset). 
The results show that our proposed methodology significantly (one-sided Wilcoxon signed rank test, <i>p</i> < 0.05) improves the baseline, increasing the overall Dice score by +5.2, +5.1, and +2.8 percentage points for the CHAOS dataset, the BTCV dataset, and the MS-CMRSeg dataset, respectively. | en_US |
dc.identifier.citation | Hansen, Gautam, Salahuddin, Kampffmeyer, Jenssen. ADNet++: A few-shot learning framework for multi-class medical image volume segmentation with uncertainty-guided feature refinement. Medical Image Analysis. 2023;89 | en_US |
dc.identifier.cristinID | FRIDAID 2181443 | |
dc.identifier.doi | 10.1016/j.media.2023.102870 | |
dc.identifier.issn | 1361-8415 | |
dc.identifier.issn | 1361-8423 | |
dc.identifier.uri | https://hdl.handle.net/10037/32474 | |
dc.language.iso | eng | en_US |
dc.publisher | Elsevier | en_US |
dc.relation.journal | Medical Image Analysis | |
dc.relation.projectID | Norges forskningsråd: 303514 | en_US |
dc.relation.projectID | Norges forskningsråd: 315029 | en_US |
dc.relation.projectID | Norges forskningsråd: 309439 | en_US |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2023 The Author(s) | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0 | en_US |
dc.rights | Attribution 4.0 International (CC BY 4.0) | en_US |
dc.title | ADNet++: A few-shot learning framework for multi-class medical image volume segmentation with uncertainty-guided feature refinement | en_US |
dc.type.version | publishedVersion | en_US |
dc.type | Journal article | en_US |
dc.type | Tidsskriftartikkel | en_US |