Algorithms that forget: Machine unlearning and the right to erasure
Permanent link
https://hdl.handle.net/10037/31181

Date
2023-09-22

Type
Journal article
Peer reviewed
Abstract
Article 17 of the General Data Protection Regulation (GDPR) contains a right for the data subject to obtain the erasure of personal data. The GDPR gives, however, little clear guidance on how controllers processing personal data should erase it to meet the requirements set out in Article 17. Machine Learning (ML) models trained on personal data are downstream derivatives of the personal data in their training data sets. A characteristic of ML is the non-deterministic nature of the learning process, which makes it difficult to determine whether and how personal data in the training data set affects the internal weights and adjusted parameters of the model. As a result, invoking the right to erasure against an ML model, and actually erasing personal data from it, is a challenging task.
This paper explores the complexities of enforcing and complying with the right to erasure in an ML context. It examines how novel developments in machine unlearning methods relate to Article 17 of the GDPR. Specifically, the paper delves into how personal data is processed in ML models and how the right to erasure could be implemented in such models. The paper also provides insights into how newly developed machine unlearning techniques could be applied to make ML models more GDPR compliant. The research aims to provide a functional understanding of the applied challenges associated with the right to erasure in ML.
Publisher
Elsevier

Citation
Juliussen, Rui, Johansen. Algorithms that forget: Machine unlearning and the right to erasure. Computer Law & Security Review. 2023.
Copyright 2023 The Author(s)