Self-constructing graph neural networks to model long-range pixel dependencies for semantic segmentation of remote sensing images

Permanent link
https://hdl.handle.net/10037/24605
DOI
https://doi.org/10.1080/01431161.2021.1936267
View/Open
article.pdf (4.952 MB)
Accepted manuscript version licensed CC BY-NC 4.0 (PDF)
Date
2021-06-16
Type
Journal article
Peer reviewed

Author
Liu, Qinghui; Kampffmeyer, Michael; Jenssen, Robert; Salberg, Arnt Børre
Abstract
Capturing global contextual representations in remote sensing images by exploiting long-range pixel-pixel dependencies has been shown to improve segmentation performance. However, how to do this efficiently is an open question, as current approaches, which rely on attention schemes or very deep models to increase the field of view, increase complexity and memory consumption. Inspired by recent work on graph neural networks, we propose the Self-Constructing Graph (SCG) module that learns a long-range dependency graph directly from the image data and uses it to capture global contextual information efficiently to improve semantic segmentation. The SCG module provides a high degree of flexibility for constructing segmentation networks that seamlessly make use of the benefits of variants of graph neural networks (GNN) and convolutional neural networks (CNN). Our SCG-GCN model, a variant of SCG-Net built upon graph convolutional networks (GCN), performs semantic segmentation in an end-to-end manner with competitive performance on the publicly available ISPRS Potsdam and Vaihingen datasets, achieving mean F1-scores of 92.0% and 89.8%, respectively. We conclude that the SCG-Net is an attractive architecture for semantic segmentation of remote sensing images, since it achieves competitive performance with far fewer parameters and lower computational cost compared to related models based on convolutional neural networks.
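The abstract describes the core idea of the SCG module: build a dependency graph directly from CNN feature maps and propagate global context through graph convolutions. The PyTorch sketch below illustrates that general pattern only; the class name SimpleSCG, the pooled 16×16 node grid, the dot-product adjacency, and the single GCN step are illustrative assumptions and not the paper's actual architecture, which is described in full at the DOI above.

```python
# Hedged sketch: a module that constructs a graph from CNN features and applies
# one graph-convolution step to spread long-range context. All design choices
# here are assumptions for illustration, not the published SCG-Net.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSCG(nn.Module):
    def __init__(self, in_channels: int, node_dim: int, grid: int = 16):
        super().__init__()
        # Pool the feature map to a coarse grid; each cell becomes a graph node.
        self.pool = nn.AdaptiveAvgPool2d(grid)
        self.embed = nn.Conv2d(in_channels, node_dim, kernel_size=1)
        self.gcn_weight = nn.Linear(node_dim, node_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) CNN features from a backbone.
        x = self.embed(self.pool(feats))              # (B, D, g, g)
        b, d, g, _ = x.shape
        nodes = x.flatten(2).transpose(1, 2)          # (B, N, D), N = g * g
        # "Self-constructed" adjacency from pairwise node similarity
        # (assumption: softmax-normalised dot product stands in for the learned graph).
        adj = torch.softmax(nodes @ nodes.transpose(1, 2) / d ** 0.5, dim=-1)
        # One GCN-style propagation step: aggregate neighbours, then transform.
        nodes = F.relu(self.gcn_weight(adj @ nodes))  # (B, N, D)
        out = nodes.transpose(1, 2).reshape(b, d, g, g)
        # Upsample the aggregated context back to the input resolution.
        return F.interpolate(out, size=feats.shape[-2:],
                             mode="bilinear", align_corners=False)

# Example usage (hypothetical shapes):
#   context = SimpleSCG(in_channels=512, node_dim=128)(backbone_features)
```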
Description
This is an Accepted Manuscript of an article published by Taylor & Francis in the International Journal of Remote Sensing on 16 June 2021, available online at https://doi.org/10.1080/01431161.2021.1936267.
Publisher
Taylor & Francis
Citation
Liu Q, Kampffmeyer MC, Jenssen R, Salberg AB. Self-constructing graph neural networks to model long-range pixel dependencies for semantic segmentation of remote sensing images. International Journal of Remote Sensing. 2021;42(16):6184-6208.
Collections
  • Artikler, rapporter og annet (fysikk og teknologi) [1062]
Copyright 2021 The Author(s)
