
dc.contributor.advisor: Jenssen, Robert
dc.contributor.author: Liu, Qinghui
dc.date.accessioned: 2021-11-30T13:51:11Z
dc.date.available: 2021-11-30T13:51:11Z
dc.date.embargoEndDate: 2026-12-08
dc.date.issued: 2021-12-08
dc.description.abstract: Automatic mapping of land cover in remote sensing data plays an increasingly significant role in several earth observation (EO) applications, such as sustainable development, autonomous agriculture, and urban planning. Due to the complexity of the real ground surface and environment, accurate classification of land cover types faces many challenges. This thesis provides novel deep learning-based solutions to land cover mapping challenges such as how to deal with intricate objects and imbalanced classes in multi-spectral, high-spatial-resolution remote sensing data. The first work presents a novel model for learning richer multi-scale and global contextual representations in very high-resolution remote sensing images, namely the dense dilated convolutions' merging (DDCM) network. The proposed method is lightweight, flexible and extendable, so it can be used as a simple yet effective encoder and decoder module to address different classification and semantic mapping challenges. Extensive experiments on different benchmark remote sensing datasets demonstrate that the proposed method achieves better performance while consuming far fewer computational resources than other published methods. Next, a novel graph model is developed for capturing long-range pixel dependencies in remote sensing images to improve land cover mapping. One key component of the method is the self-constructing graph (SCG) module, which can effectively construct global context relations (a latent graph structure) without requiring prior knowledge graphs. The proposed SCG-based models achieved competitive performance on different representative remote sensing datasets with faster training and lower computational cost compared to strong baseline models. The third work introduces a new framework, namely the multi-view self-constructing graph (MSCG) network, which extends the vanilla SCG model to capture multi-view context representations with rotation invariance for improved segmentation performance. In addition, a novel adaptive class weighting loss function is developed to alleviate the class imbalance commonly found in EO datasets for semantic segmentation. Experiments on benchmark data demonstrate that the proposed framework is computationally efficient and robust, producing improved segmentation results for imbalanced classes. To address the key challenges in multi-modal land cover mapping of remote sensing data, namely 'what', 'how' and 'where' to effectively fuse multi-source features and to efficiently learn optimal joint representations of different modalities, the last work presents a compact and scalable multi-modal deep learning framework (MultiModNet) based on two novel modules: the pyramid attention fusion module and the gated fusion unit. The proposed MultiModNet outperforms strong baselines on two representative remote sensing datasets with fewer parameters and at a lower computational cost. Extensive ablation studies also validate the effectiveness and flexibility of the framework. (A minimal illustrative sketch of the self-constructing graph idea appears after this record.)
dc.description.doctoraltype: ph.d.
dc.description.popularabstract: Automatic mapping of land cover in remote sensing data plays an increasingly significant role in several earth observation (EO) applications, such as sustainable development, autonomous agriculture, and urban planning. Due to the complexity of the real ground surface and environment, accurate classification of land cover types faces many challenges. This thesis provides novel deep learning-based solutions to land cover mapping challenges such as how to deal with intricate objects and imbalanced classes in multi-spectral, high-spatial-resolution remote sensing data.
dc.description.sponsorship: 1: Research Council of Norway (RCN); 2: Norwegian Computing Center (NR)
dc.identifier.uri: https://hdl.handle.net/10037/23230
dc.language.iso: eng
dc.publisher: UiT Norges arktiske universitet
dc.publisher: UiT The Arctic University of Norway
dc.relation.haspart:
Paper I: Liu, Q., Kampffmeyer, M., Jenssen, R. & Salberg, A.B. (2020). Dense dilated convolutions' merging network for land cover classification. IEEE Transactions on Geoscience and Remote Sensing, 58(9), 6309-6320. Published version not available in Munin due to publisher's restrictions. Published version available at https://doi.org/10.1109/TGRS.2020.2976658. Accepted manuscript version available in Munin at https://hdl.handle.net/10037/20943.
Paper II: Liu, Q., Kampffmeyer, M., Jenssen, R. & Salberg, A.B. (2021). Self-constructing graph neural networks to model long-range pixel dependencies for semantic segmentation of remote sensing images. International Journal of Remote Sensing, 42(16), 6184-6208. Published version not available in Munin due to publisher's restrictions. Published version available at https://doi.org/10.1080/01431161.2021.1936267.
Paper III: Liu, Q., Kampffmeyer, M., Jenssen, R. & Salberg, A.B. (2020). Multi-view self-constructing graph convolutional networks with adaptive class weighting loss for semantic segmentation. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 199-205. (Accepted manuscript.) Also available in Munin at https://hdl.handle.net/10037/23229. Published version available at https://doi.org/10.1109/CVPRW50498.2020.00030.
Paper IV: Liu, Q., Kampffmeyer, M., Jenssen, R. & Salberg, A.B. Multi-modal land cover mapping of remote sensing images using pyramid attention and gated fusion networks. (Submitted manuscript.)
dc.rights.accessRights: embargoedAccess
dc.rights.holder: Copyright 2021 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
dc.subject: Remote sensing
dc.subject: Deep learning
dc.subject: Land cover mapping
dc.subject: Semantic segmentation
dc.title: Advancing Land Cover Mapping in Remote Sensing with Deep Learning
dc.type: Doctoral thesis
dc.type: Doktorgradsavhandling
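
The self-constructing graph (SCG) module summarized in the abstract builds a latent adjacency matrix directly from the CNN feature map, so no prior knowledge graph is required. The PyTorch module below is a minimal illustrative sketch of that idea under stated assumptions, not the thesis implementation: the pooled node grid, the sigmoid adjacency, the symmetric normalization and the single GCN propagation step are choices made here for clarity.

```python
# Illustrative sketch only (assumed design, not the thesis code): build a latent
# graph from CNN features and propagate node information over it once.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfConstructingGraphSketch(nn.Module):
    """Pools a CNN feature map into a grid of nodes, predicts an adjacency
    matrix from the node embeddings themselves (no prior graph), and applies
    one GCN-style update. Layer sizes and grid size are placeholders."""

    def __init__(self, in_channels: int, node_dim: int, grid_size: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(grid_size)     # (B, C, g, g): N = g*g nodes
        self.embed = nn.Linear(in_channels, node_dim)   # node embeddings Z
        self.gcn = nn.Linear(node_dim, node_dim)        # shared GCN weight W

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a CNN backbone.
        nodes = self.pool(x).flatten(2).transpose(1, 2)        # (B, N, C)
        z = self.embed(nodes)                                  # (B, N, node_dim)
        # Latent adjacency predicted from the features themselves.
        adj = torch.sigmoid(torch.bmm(z, z.transpose(1, 2)))   # (B, N, N)
        # Symmetric normalization D^{-1/2} A D^{-1/2} before propagation.
        deg = adj.sum(-1).clamp(min=1e-6)
        d_inv_sqrt = deg.pow(-0.5)
        adj_norm = adj * d_inv_sqrt.unsqueeze(-1) * d_inv_sqrt.unsqueeze(-2)
        # One GCN propagation step: H = ReLU(A_hat Z W).
        return F.relu(self.gcn(torch.bmm(adj_norm, z)))        # (B, N, node_dim)


# Example usage with placeholder shapes:
feats = torch.randn(2, 256, 32, 32)                 # backbone features
out = SelfConstructingGraphSketch(256, 64)(feats)   # -> (2, 256, 64) node features
```

In a segmentation setting one would, under these assumptions, project the updated node features back onto the spatial grid and upsample them to per-pixel class scores; the node count and channel widths above are illustrative only.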

