Machine Unlearning
153 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Machine Unlearning for Random Forests
The upper levels of DaRE trees use random nodes, which choose split attributes and thresholds uniformly at random.
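Below is a minimal sketch of what such a random node might look like: the split attribute is drawn uniformly over the features and the threshold uniformly over that attribute's observed range. The helper name `choose_random_split` and the toy data are illustrative assumptions, not the paper's implementation.

```python
import random

def choose_random_split(X):
    """Pick a split for a 'random node' (hypothetical helper):
    the attribute is chosen uniformly at random and the threshold is drawn
    uniformly from that attribute's observed range. Because the choice ignores
    the labels, deleting a training example rarely invalidates the split,
    which keeps deletions cheap at the upper levels of the tree."""
    n_features = len(X[0])
    attr = random.randrange(n_features)                    # uniform over attributes
    values = [row[attr] for row in X]
    threshold = random.uniform(min(values), max(values))   # uniform over the range
    return attr, threshold

# Example: split five 2-dimensional points at a random node.
X = [[0.2, 1.0], [0.7, 0.3], [0.5, 0.9], [0.1, 0.4], [0.9, 0.6]]
attr, thr = choose_random_split(X)
left = [row for row in X if row[attr] <= thr]
right = [row for row in X if row[attr] > thr]
print(f"split on attribute {attr} at {thr:.2f}: {len(left)} left / {len(right)} right")
```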
Towards Adversarial Evaluations for Inexact Machine Unlearning
Machine learning models face increasing concerns over the storage of personal user data and the adverse impact of corrupted data, such as backdoors or systematic bias.
Machine Unlearning
Once users have shared their data online, it is generally difficult for them to revoke access and ask for the data to be deleted.
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning
We study the data deletion problem for convex models.
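The sketch below illustrates the general flavor of gradient-based deletion for a convex objective: warm-start from the current parameters, take a few gradient steps on the retained data only, then add Gaussian noise so the result is hard to distinguish from retraining from scratch. The function names, step counts, and noise scale are placeholder assumptions, not the paper's calibrated procedure.

```python
import numpy as np

def grad_logistic(w, X, y, lam=0.01):
    """Gradient of an L2-regularized logistic loss (a standard convex objective)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y) + lam * w

def descent_to_delete(w, X_retain, y_retain, steps=5, lr=0.5, noise_scale=0.01):
    """Illustrative sketch only: fine-tune from the current model on the retained
    data for a few steps, then perturb the output with Gaussian noise."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * grad_logistic(w, X_retain, y_retain)
    return w + np.random.normal(0.0, noise_scale, size=w.shape)

# Toy usage: train on the full data, then "delete" the last 10 examples
# by updating from the already-trained parameters instead of retraining.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)
w = np.zeros(5)
for _ in range(200):
    w -= 0.5 * grad_logistic(w, X, y)
w_unlearned = descent_to_delete(w, X[:-10], y[:-10])
```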
How to Combine Membership-Inference Attacks on Multiple Updated Models
Our results on four public datasets show that exploiting update information gives the adversary a significant advantage, both over attacks on standalone models and over a prior MI attack that leverages model updates in a related machine-unlearning setting.
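As a rough illustration of the underlying signal, the hypothetical score below sums per-example loss drops across consecutive model versions: a loss that keeps falling over updates is evidence the example was used in those updates. This is a simplified stand-in, not the paper's attack; the function names and thresholding strategy are assumptions.

```python
import numpy as np

def per_example_loss(w, x, y):
    """Logistic loss of one example under a linear model (toy stand-in)."""
    p = 1.0 / (1.0 + np.exp(-float(np.dot(w, x))))
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def update_mi_score(model_versions, x, y):
    """Hypothetical attack score: total loss drop across consecutive snapshots
    of the model. Combining the signal over multiple updates strengthens the
    membership inference relative to attacking a single standalone model."""
    losses = [per_example_loss(w, x, y) for w in model_versions]
    return sum(losses[i] - losses[i + 1] for i in range(len(losses) - 1))

# Usage: given parameter snapshots w0, w1, w2 of a periodically updated model,
#   score = update_mi_score([w0, w1, w2], x_target, y_target)
# and flag membership when the score exceeds a threshold tuned on shadow models.
```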
MultiDelete for Multimodal Machine Unlearning
Machine Unlearning removes specific knowledge about training data samples from an already trained model.
Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning
Machine unlearning has attracted significant interest with the adoption of laws ensuring the "right to be forgotten".
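The sketch below shows the noisy-gradient-descent idea in its simplest form: every step adds isotropic Gaussian noise (optionally followed by projection), and unlearning amounts to continuing the same randomized dynamics on the retained data from the current parameters. The constants and helper names are illustrative assumptions, not the paper's calibrated unlearning guarantees.

```python
import numpy as np

def noisy_gd_step(w, grad, lr=0.1, sigma=0.05, radius=10.0, rng=None):
    """One projected noisy gradient-descent (Langevin-style) step:
    gradient step + Gaussian noise, then projection onto an L2 ball.
    Constants here are placeholders, not privacy-calibrated values."""
    rng = rng or np.random.default_rng()
    w = w - lr * grad + rng.normal(0.0, sigma, size=w.shape)
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def unlearn_with_noisy_gd(w, grad_fn, X_retain, y_retain, steps=20):
    """Unlearning as 'keep running the same noisy dynamics on the retained data':
    because each step is already randomized, a modest number of extra steps can
    bring the parameters close in distribution to training on the retained data
    from scratch. `grad_fn(w, X, y)` is any gradient of the training loss."""
    rng = np.random.default_rng(0)
    for _ in range(steps):
        w = noisy_gd_step(w, grad_fn(w, X_retain, y_retain), rng=rng)
    return w
```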
Machine Unlearning for Image-to-Image Generative Models
This paper serves as a bridge, addressing the gap by providing a unifying framework of machine unlearning for image-to-image generative models.
An Information Theoretic Approach to Machine Unlearning
We perform an extensive empirical evaluation over a range of contemporary benchmarks, verifying that our method is competitive with the state of the art under the strict constraints of zero-shot unlearning.
Certified Machine Unlearning via Noisy Stochastic Gradient Descent
The "right to be forgotten", guaranteed by user data privacy laws, is becoming increasingly important.