dc.contributor.advisor | Benjamin Ricaud | |
dc.contributor.author | Stanevicius, Danielius | |
dc.date.accessioned | 2025-07-18T10:37:00Z | |
dc.date.available | 2025-07-18T10:37:00Z | |
dc.date.issued | 2025 | |
dc.description.abstract | This thesis maps LLM reasoning by examining how two open-weight models, GPT-2 XL (1.5B parameters) and GPT-Neo (1.3B parameters), organise meaning across their hidden layers. Four structured text suites (unrelated, related, identical, and cross-lingual) and 50-word “Country-Stories” summaries are fed to the models. Layer-wise activations are projected with UMAP, connected via k-nearest-neighbour graphs, and summarised with Average Total Distance and modularity curves. The analysis shows that both models encode narrative bias: poorly documented countries collapse into generic hubs, while thematically similar stories from distant regions converge, underscoring the effects of data imbalance. The work delivers a lightweight visual-analytics toolkit, including distance matrices, modularity curves, and centroid graphs, and outlines future needs such as topology-alignment metrics. | |
dc.identifier.uri | https://hdl.handle.net/10037/37778 | |
dc.identifier | no.uit:wiseflow:7269325:62191381 | |
dc.language.iso | eng | |
dc.publisher | UiT The Arctic University of Norway | |
dc.rights.holder | Copyright 2025 The Author(s) | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0 | en_US |
dc.rights | Attribution 4.0 International (CC BY 4.0) | en_US |
dc.title | Mapping Reasoning Paths of Large Language Models | |
dc.type | Master thesis | |