dc.contributor.advisor | Kozyri, Elisavet | |
dc.contributor.advisor | Schmidt Nordmo, Tor-Arne | |
dc.contributor.author | Dragset, Snorre | |
dc.date.accessioned | 2025-08-05T15:33:35Z | |
dc.date.available | 2025-08-05T15:33:35Z | |
dc.date.issued | 2025 | |
dc.description.abstract | This thesis explores the relationship between influence and memorization of training data in deep learning models. Influence metrics aim to quantify the impact of individual training examples on model predictions, while memorization scores assess how strongly specific data points are retained by the model. Understanding the connection between these two concepts is important for both interpretability and privacy in AI systems. Prior research has shown that memorized training data can be leaked through the trained model's inferences. This work examines whether influence metrics can serve as indicators of memorization and investigates how this relationship varies across different neural network architectures. By exploring whether training data items with higher influence also exhibit higher memorization, the study aims to identify data points that may be more prone to leakage, thereby contributing to more privacy-aware AI systems. | |
dc.identifier.uri | https://hdl.handle.net/10037/37911 | |
dc.identifier | no.uit:wiseflow:7267640:62323514 | |
dc.language.iso | eng | |
dc.publisher | UiT The Arctic University of Norway | |
dc.title | Investigating the Correlation Between Training Data Influence and Memorization in AI Models | |
dc.type | Master thesis | |