Evaluating Machine Unlearning via Epistemic Uncertainty

There has been growing interest in Machine Unlearning recently, primarily due to legal requirements such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act. Consequently, multiple approaches have been proposed to remove the influence of specific target data points from a trained model. However, when evaluating the success of unlearning, current approaches either use adversarial attacks or compare their results to the optimal solution, which usually incorporates retraining from scratch. We argue that both ways are insufficient in practice. In this work, we present an evaluation metric for Machine Unlearning algorithms based on epistemic uncertainty. To the best of our knowledge, this is the first definition of a general evaluation metric for Machine Unlearning.
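The metric itself is defined in the full paper; as a purely illustrative sketch of the underlying idea, one hypothetical proxy for how much information a model still carries about target data is the trace of an empirical Fisher information matrix, computed from per-sample gradients. All names and the logistic-regression setup below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fisher_trace(w, X, y):
    """Trace of the empirical (diagonal) Fisher information of a
    logistic-regression model on data (X, y) -- a rough, illustrative
    proxy for the information the model retains about that data."""
    p = sigmoid(X @ w)
    grads = (y - p)[:, None] * X  # per-sample log-likelihood gradients, shape (n, d)
    return float(np.sum(grads ** 2) / len(X))

# Toy data and two stand-in models (hypothetical, not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (sigmoid(X @ w_true) > 0.5).astype(float)

w_trained = w_true + 0.1 * rng.normal(size=5)  # stand-in for a fitted model
w_scrubbed = np.zeros(5)                       # stand-in for an unlearned model

print("trained :", fisher_trace(w_trained, X, y))
print("scrubbed:", fisher_trace(w_scrubbed, X, y))
```

Comparing such a quantity before and after unlearning gives a retraining-free signal about the target data, which is the kind of comparison an uncertainty-based metric enables; the paper's actual definition should be consulted for the precise formulation.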

  • Published in:
    arXiv
  • Type:
    Article
  • Authors:
    A. Becker, T. Liebig
  • Year:
    2022

Citation information

A. Becker, T. Liebig: Evaluating Machine Unlearning via Epistemic Uncertainty, arXiv, 2022, https://doi.org/10.48550/arXiv.2208.10836