Self-constructing graph neural networks to model long-range pixel dependencies for semantic segmentation of remote sensing images

Permanent link
https://hdl.handle.net/10037/24605
DOI
https://doi.org/10.1080/01431161.2021.1936267
Open
article.pdf (4.952Mb)
Accepted manuscript version, licensed CC BY-NC 4.0 (PDF)
Date
2021-06-16
Type
Journal article
Peer reviewed

Author
Liu, Qinghui; Kampffmeyer, Michael; Jenssen, Robert; Salberg, Arnt Børre
Abstract
Capturing global contextual representations in remote sensing images by exploiting long-range pixel-to-pixel dependencies has been shown to improve segmentation performance. However, how to do this efficiently remains an open question, as current approaches that rely on attention schemes or very deep models to increase the field of view add complexity and memory consumption. Inspired by recent work on graph neural networks, we propose the Self-Constructing Graph (SCG) module, which learns a long-range dependency graph directly from the image data and uses it to capture global contextual information efficiently for improved semantic segmentation. The SCG module provides a high degree of flexibility for constructing segmentation networks that seamlessly combine the benefits of variants of graph neural networks (GNNs) and convolutional neural networks (CNNs). Our SCG-GCN model, a variant of SCG-Net built upon graph convolutional networks (GCNs), performs semantic segmentation in an end-to-end manner with competitive performance on the publicly available ISPRS Potsdam and Vaihingen datasets, achieving mean F1-scores of 92.0% and 89.8%, respectively. We conclude that SCG-Net is an attractive architecture for semantic segmentation of remote sensing images, since it achieves competitive performance with far fewer parameters and lower computational cost than related models based on convolutional neural networks.
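As a rough illustration of the idea described in the abstract, the sketch below shows how a self-constructed graph could be built from CNN feature maps: each spatial position becomes a graph node, an adjacency matrix is learned directly from the node embeddings, and a single graph convolution propagates long-range context. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation; all module names, dimensions, and design details below are assumptions, and details such as the paper's exact node-embedding and regularisation choices are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfConstructingGraphSketch(nn.Module):
    # Hypothetical SCG-style module: nodes are the pixel positions of a CNN
    # feature map, and the adjacency matrix is inferred from the data
    # rather than fixed in advance.
    def __init__(self, in_channels, node_dim, num_classes):
        super().__init__()
        self.embed = nn.Conv2d(in_channels, node_dim, kernel_size=1)  # per-pixel node embedding
        self.gcn = nn.Linear(node_dim, num_classes)                   # one graph-convolution weight

    def forward(self, feats):                                         # feats: (B, C, H, W) backbone features
        b, _, h, w = feats.shape
        z = self.embed(feats).flatten(2).transpose(1, 2)              # (B, N, D) with N = H*W nodes
        adj = F.relu(torch.bmm(z, z.transpose(1, 2)))                 # self-constructed adjacency (B, N, N)
        adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)     # row-normalise the graph
        out = self.gcn(torch.bmm(adj, z))                             # graph convolution: A Z W
        return out.transpose(1, 2).reshape(b, -1, h, w)               # per-pixel class scores

# Example with assumed shapes: logits = SelfConstructingGraphSketch(512, 64, 6)(features)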
Description
This is an Accepted Manuscript of an article published by Taylor & Francis in the International Journal of Remote Sensing on 16 June 2021, available online at https://doi.org/10.1080/01431161.2021.1936267.
Publisher
Taylor & Francis
Citation
Liu Q, Kampffmeyer MC, Jenssen R, Salberg AB. Self-constructing graph neural networks to model long-range pixel dependencies for semantic segmentation of remote sensing images. International Journal of Remote Sensing. 2021;42(16):6184-6208
Collections
  • Artikler, rapporter og annet (fysikk og teknologi) [1058]
Copyright 2021 The Author(s)
