Show simple item record

dc.contributor.advisor: Kozyri, Elisavet
dc.contributor.advisor: Schmidt Nordmo, Tor-Arne
dc.contributor.author: Dragset, Snorre
dc.date.accessioned: 2025-08-05T15:33:35Z
dc.date.available: 2025-08-05T15:33:35Z
dc.date.issued: 2025
dc.description.abstract: This thesis explores the relationship between influence and memorization of training data in deep learning models. Influence metrics aim to quantify the impact of individual training examples on model predictions, while memorization scores assess how much specific data points are retained by the model. Understanding the connection between these two concepts is important for both interpretability and privacy in AI systems. Prior research has shown that memorized training data might be leaked when the corresponding trained model makes inferences. This work examines whether influence metrics can serve as indicators of memorization and investigates how this relationship varies across different neural network architectures. By exploring whether training data items with higher influence also demonstrate higher memorization, the study aims to identify data points that may be more prone to leakage, thereby contributing to more privacy-aware AI systems.
dc.identifier.uri: https://hdl.handle.net/10037/37911
dc.identifier: no.uit:wiseflow:7267640:62323514
dc.language.iso: eng
dc.publisher: UiT The Arctic University of Norway
dc.title: Investigating the Correlation Between Training Data Influence and Memorization in AI Models
dc.type: Master thesis


File(s) in this item


This item appears in the following collection(s)
