Show simple item record

dc.contributor.advisor: Ricaud, Benjamin
dc.contributor.author: Schei Nørve, Iver
dc.date.accessioned: 2024-12-16T10:22:12Z
dc.date.available: 2024-12-16T10:22:12Z
dc.date.issued: 2023-12-15
dc.description.abstract: Deep neural networks have pushed the boundary of what is achievable in the field of machine learning. At its core, a neural network maps data from an input space to a highly abstract latent space. These latent representations are critical to the network's ability to perform the task it is given. Yet, however critical they are, our knowledge of these abstract representations in the latent space is highly restricted, so new approaches and methods for exploring the latent space are needed. As the function producing the latent representation is unknown, we propose new network-science-based tools for exploring the latent space. By exploring the nearest-neighbour graph of the samples in the latent representations, we observe clusters consisting of samples from the same class, regions with samples from a mix of classes, as well as hub regions. Through exploring the robustness of latent representations to perturbation, as well as the similarity between different latent representations, we find that it is crucial to consider a suitable number of neighbours for constructing the graph structure, and to choose a similarity measure appropriate to the property one wishes to observe. Further, this nearest-neighbour representation of the latent space can be aggregated to construct the class graph, a tool for observing high-level relational information about how the classes are embedded in the latent space. In addition, this thesis explores the iterative pruning method known as the lottery ticket hypothesis [Frankle and Carbin, 2019]. Our exploration considers the evolution of the latent space over the pruning iterations. Through this exploration we discover that the latent representation of the sparse subnetwork found by the 'winning ticket' initialisation converges towards a distinct representation in the latent space that is different from the unpruned model it originated from. This finding could indicate that the hypothesis of a 'winning ticket' is inaccurate.
dc.identifier.uri: https://hdl.handle.net/10037/35990
dc.language.iso: eng
dc.publisher: UiT Norges arktiske universitet
dc.publisher: UiT The Arctic University of Norway
dc.rights.holder: Copyright 2023 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/4.0
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
dc.subject.courseID: FYS-3941
dc.title: Leveraging Network Science for the Exploration of Deep Learning Latent Representations
dc.type: Mastergradsoppgave
dc.type: Master thesis
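The nearest-neighbour graph and class-graph construction described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the thesis code): latent vectors are connected to their k nearest neighbours by Euclidean distance, and the resulting edges are aggregated into class-to-class edge counts. All function names and the toy data are invented for this sketch.

```python
import math
from collections import defaultdict

def knn_graph(vectors, k):
    """Return adjacency {i: [indices of the k nearest neighbours of i]}."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    adj = {}
    for i, v in enumerate(vectors):
        # Sort all other samples by distance to sample i, keep the k closest.
        order = sorted(
            (j for j in range(len(vectors)) if j != i),
            key=lambda j: dist(v, vectors[j]),
        )
        adj[i] = order[:k]
    return adj

def class_graph(adj, labels):
    """Aggregate sample-level edges into class-pair edge counts."""
    weights = defaultdict(int)
    for i, nbrs in adj.items():
        for j in nbrs:
            edge = tuple(sorted((labels[i], labels[j])))
            weights[edge] += 1
    return dict(weights)

# Toy latent vectors: two well-separated clusters, labelled 0 and 1.
latents = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
           (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
labels = [0, 0, 0, 1, 1, 1]
adj = knn_graph(latents, k=2)
print(class_graph(adj, labels))  # → {(0, 0): 6, (1, 1): 6}
```

With well-separated clusters, all neighbour edges stay within a class, so the class graph has only intra-class edges; a mixed region would show up as off-diagonal class-pair weights. As the abstract notes, the choice of k strongly shapes which structure this graph reveals.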

