
dc.contributor.advisor: Anfinsen, Stian Normann
dc.contributor.advisor: Luppino, Luigi Tommaso
dc.contributor.author: Hansen, Mads Adrian
dc.date.accessioned: 2020-01-29T08:19:34Z
dc.date.available: 2020-01-29T08:19:34Z
dc.date.issued: 2019-12-16
dc.description.abstract: Change detection in earth observation remote sensing images can be used to describe the extent of natural disasters, e.g., forest fires and floods. When time is of the essence, the ability to utilize heterogeneous images is fundamental, i.e., images that are not directly comparable due to the sensors used or the capturing conditions. The recent advances in machine learning have dispersed into the field of change detection in earth observation remote sensing images, and several methods utilizing machine learning principles have been proposed. One promising paradigm from which to approach heterogeneous change detection is paired image-to-image translation: if images captured with different sensors under varying conditions can be adequately mapped between their respective imaging domains so that they can be compared directly, then changes can be highlighted. Performing change detection in an unsupervised setting is crucial for the current state-of-the-art methods, as the inference models are trained to do change detection on one particular dataset, i.e., the models do not have generalization capabilities. A production system must thus be able to describe a current natural disaster without access to ground truth, i.e., it must perform an unsupervised sample selection to train the image-to-image translation maps. Luppino et al. [2] proposed an unsupervised change detection method utilizing affinity norms, which was later improved in [1]. This affinity norm method was used to produce initial change maps (ICMs), which were used for sample selection in the training of two convolutional neural network (CNN) architectures: ACE-net and X-net [1]. These image-to-image translation CNNs were trained using a cross-domain loss term weighted with the ICM, and a loss term that enforces cyclic consistency. Affinity matrices describe neighborhood structures and are used in computer vision to solve, e.g., foreground-background separation problems.
Inspired by the use of affinity matrices to produce initial change maps [2], we had the idea that affinity matrices could also be used during the training phase of the image translation CNNs. The core realization is that for an image X mapped with an image translation CNN T_X to produce Y_hat = T_X(X), the affinity structure should be retained, i.e., the affinity matrices of X and Y_hat should be similar. Based on this realization, we herein propose an affinity-guiding loss term for training paired image-to-image translation maps. The loss term is used to train the Affinity-guided X-net (AX-net), and its performance is evaluated and compared to that of X-net [1] in an ablation study. Ablation studies are crucial for deep learning research [3] and aim to identify the parts of a machine learning model that do not contribute to its inference. This ablation study aims to isolate the contributions of the three loss terms in the optimization of the CNNs. The experimental results indicate that the affinity-guiding loss term is beneficial, but that it increases the optimization time significantly. Specifically, the results suggest that the affinity-guiding loss term can replace the cyclic consistency term. If so, one can consider simplifying the model by removing an entire CNN, as the ability to cycle the image translation is no longer needed.
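As a minimal sketch of the affinity-guiding idea described in the abstract (not the thesis code: the function names, the Gaussian kernel, and the mean-squared comparison are illustrative assumptions), the loss can compare pixel-pairwise affinity matrices of an image before and after translation:

```python
import numpy as np

def affinity_matrix(img, sigma=1.0):
    """Pairwise Gaussian affinity between the pixel vectors of an image.

    img: (n_pixels, n_channels) array of flattened pixel vectors.
    Returns an (n_pixels, n_pixels) matrix A with
    A[i, j] = exp(-||img[i] - img[j]||^2 / sigma^2).
    """
    sq_dists = np.sum((img[:, None, :] - img[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / sigma ** 2)

def affinity_loss(x, y_hat, sigma=1.0):
    """Mean squared difference between the affinity matrices of x and y_hat.

    A translation Y_hat = T_X(X) that preserves the neighborhood structure
    of X yields similar affinity matrices, and hence a small loss.
    """
    return np.mean((affinity_matrix(x, sigma) - affinity_matrix(y_hat, sigma)) ** 2)
```

A translation that preserves all pairwise pixel distances (e.g., adding a constant offset) leaves the affinity matrix unchanged and yields a near-zero loss, while a translation that distorts the neighborhood structure is penalized.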
dc.identifier.uri: https://hdl.handle.net/10037/17250
dc.language.iso: eng
dc.publisher: UiT Norges arktiske universitet
dc.publisher: UiT The Arctic University of Norway
dc.rights.accessRights: openAccess
dc.rights.holder: Copyright 2019 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
dc.subject.courseID: FYS-3941
dc.subject: VDP::Mathematics and natural science: 400::Information and communication science: 420::Simulation, visualization, signal processing, image processing: 429
dc.subject: remote sensing
dc.subject: earth observation
dc.subject: change detection
dc.subject: multimodal images
dc.subject: image analysis
dc.subject: paired image-to-image translation
dc.subject: unsupervised learning
dc.subject: heterogeneous change detection
dc.subject: affinity matrix
dc.subject: machine learning
dc.subject: deep learning
dc.subject: VDP::Matematikk og Naturvitenskap: 400::Informasjons- og kommunikasjonsvitenskap: 420::Simulering, visualisering, signalbehandling, bildeanalyse: 429
dc.title: Affinity-Guided Image-to-Image Translation for Unsupervised Heterogeneous Change Detection
dc.type: Master thesis
dc.type: Mastergradsoppgave

