Show simple item record

dc.contributor.advisor	Benjamin Ricaud
dc.contributor.author	Stanevicius, Danielius
dc.date.accessioned	2025-07-18T10:37:00Z
dc.date.available	2025-07-18T10:37:00Z
dc.date.issued	2025
dc.description.abstract	This thesis traces LLM reasoning by examining how two open-weight models, GPT-2 XL (1.5B parameters) and GPT-Neo (1.3B parameters), organise meaning across their hidden layers. Four structured text suites (unrelated, related, identical, and cross-lingual) and 50-word “Country-Stories” summaries are fed to the models. Layer-wise activations are projected with UMAP, connected via k-nearest-neighbour graphs, and summarised with Average Total Distance and modularity curves. The analysis shows that both models encode narrative bias: poorly documented countries become generic hubs, while thematically similar stories from distant regions converge, underscoring the effects of data imbalance. The work delivers a lightweight visual-analytic toolkit, including distance matrices, modularity curves, and centroid graphs, and outlines future needs such as topology-alignment metrics.
dc.identifier.uri	https://hdl.handle.net/10037/37778
dc.identifier	no.uit:wiseflow:7269325:62191381
dc.language.iso	eng
dc.publisher	UiT The Arctic University of Norway
dc.title	Mapping Reasoning Paths of Large Language Models
dc.type	Master thesis
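The abstract above outlines a concrete pipeline: extract layer-wise activations, project them with UMAP, build k-nearest-neighbour graphs, and summarise them with Average Total Distance and modularity. Below is a minimal sketch of that pipeline in Python. It assumes "gpt2" as a small stand-in for GPT-2 XL and GPT-Neo, mean pooling over tokens, an arbitrary layer index, and mean pairwise distance as the Average Total Distance; none of these choices are taken from the thesis itself.

```python
# Minimal sketch: pooled hidden states per layer -> UMAP projection ->
# k-nearest-neighbour graph -> modularity. All settings here (model,
# pooling, layer index, k, ATD definition) are illustrative assumptions,
# not the thesis's exact configuration.
import numpy as np
import torch
import umap
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

texts = [  # placeholder inputs standing in for the structured text suites
    "A story set in Norway.", "A story set in Japan.",
    "A story set in Chad.", "A story set in Peru.",
    "A story set in Laos.", "A story set in Malta.",
]

# One mean-pooled activation vector per (text, layer).
layerwise = []
with torch.no_grad():
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt")
        hidden = model(**inputs).hidden_states  # embeddings + one tensor per layer
        layerwise.append([h.mean(dim=1).squeeze(0).numpy() for h in hidden])

layer = 6  # assumption: any intermediate layer can be inspected the same way
vectors = np.stack([layers[layer] for layers in layerwise])

# 2-D UMAP projection of this layer's activations.
points = umap.UMAP(n_neighbors=3, min_dist=0.1, random_state=0).fit_transform(vectors)

# "Average Total Distance" taken here as the mean pairwise distance
# in the projection (assumed definition).
pair_dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
atd = pair_dists.sum() / (len(points) * (len(points) - 1))

# k-nearest-neighbour graph over the projected points.
k = 2
graph = nx.Graph()
for i in range(len(points)):
    for j in np.argsort(pair_dists[i])[1 : k + 1]:  # index 0 is the point itself
        graph.add_edge(i, int(j), weight=float(pair_dists[i, j]))

# Modularity of a greedy community partition for this layer.
score = modularity(graph, greedy_modularity_communities(graph))
print(f"layer {layer}: ATD={atd:.3f}, modularity={score:.3f}")
```

Looping the layer index over all hidden layers produces one ATD value and one modularity value per layer, which is the shape of the layer-wise curves the abstract describes.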


Associated file(s)


This item appears in the following collection(s)
