Search Results for author: Stanley Wu

Found 3 papers, 2 papers with code

TMI! Finetuned Models Leak Private Information from their Pretraining Data

1 code implementation • 1 Jun 2023 • John Abascal, Stanley Wu, Alina Oprea, Jonathan Ullman

In this work we propose a new membership-inference threat model where the adversary only has access to the finetuned model and would like to infer the membership of the pretraining data.

Transfer Learning
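A minimal sketch of the threat model described in the snippet above, assuming a generic loss-thresholding membership test rather than the paper's actual TMI attack: the adversary queries only the finetuned classifier and scores a candidate pretraining example by its loss. The model, labels, and threshold `tau` are illustrative assumptions, not from the paper or its code release.

```python
# Hypothetical illustration of the threat model: the adversary has query
# access only to the finetuned model and scores a candidate pretraining
# example with a simple loss-based membership test. NOT the paper's attack.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mi_score(finetuned_model, x, y):
    """Lower loss under the finetuned model is weak evidence of membership."""
    finetuned_model.eval()
    logits = finetuned_model(x.unsqueeze(0))
    loss = F.cross_entropy(logits, torch.tensor([y]))
    return -loss.item()  # higher score => more likely a pretraining member

def predict_membership(finetuned_model, x, y, tau=-0.5):
    # tau is a placeholder threshold; a real attack would calibrate it,
    # e.g. with shadow models or held-out non-member data.
    return mi_score(finetuned_model, x, y) > tau
```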

How to Combine Membership-Inference Attacks on Multiple Updated Models

2 code implementations • 12 May 2022 • Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan Ullman, Roxana Geambasu

Our results on four public datasets show that our attacks are effective at using update information to give the adversary a significant advantage over attacks on standalone models, and also over a prior MI attack that exploits model updates in a related machine-unlearning setting.

Machine Unlearning
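A minimal sketch of the idea in the snippet above, assuming a generic difference-of-losses signal rather than the paper's combination method: the adversary queries both the original and the updated model and uses the loss drop after the update as evidence that a point was in the update set. The helper names and threshold are hypothetical.

```python
# Hypothetical illustration of using two model versions: points added in the
# update tend to see a larger loss drop after the update. NOT the paper's
# actual attack; the score and threshold are illustrative assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_of(model, x, y):
    model.eval()
    return F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([y])).item()

def update_mi_score(old_model, new_model, x, y):
    # Change in loss across the update, used as the membership signal.
    return loss_of(old_model, x, y) - loss_of(new_model, x, y)

def predict_in_update_set(old_model, new_model, x, y, tau=0.1):
    # tau is a placeholder; a real attack would calibrate it per dataset.
    return update_mi_score(old_model, new_model, x, y) > tau
```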
