Search Results for author: John Abascal

Found 2 papers, 2 papers with code

TMI! Finetuned Models Leak Private Information from their Pretraining Data

1 code implementation • 1 Jun 2023 • John Abascal, Stanley Wu, Alina Oprea, Jonathan Ullman

In this work, we propose a new membership-inference threat model in which the adversary has access only to the finetuned model and aims to infer the membership of points in the pretraining data. (A minimal illustrative sketch of this threat model follows this entry.)

Transfer Learning
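
The sketch below illustrates only the setting named in the abstract above, not the paper's TMI attack itself: a model is "pretrained" on one dataset, "finetuned" on another, and the adversary queries only the finetuned model and thresholds its confidence to guess which candidate points were in the pretraining set. The synthetic data, the SGDClassifier stand-in for pretraining and finetuning, and the threshold tau are all illustrative assumptions.

# Illustrative sketch of the pretraining-membership threat model (not the TMI attack).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
d = 20

# Synthetic "pretraining" data (members) and held-out data (non-members).
pre_members = rng.normal(0.0, 1.0, size=(200, d))
pre_labels = (pre_members[:, 0] > 0).astype(int)
non_members = rng.normal(0.0, 1.0, size=(200, d))
non_labels = (non_members[:, 0] > 0).astype(int)

# Finetuning data for a related (here: identical) downstream task.
ft_x = rng.normal(0.0, 1.0, size=(100, d))
ft_y = (ft_x[:, 0] > 0).astype(int)

# "Pretrain", then "finetune" by continuing SGD on the new data.
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(pre_members, pre_labels, classes=[0, 1])
for _ in range(5):
    model.partial_fit(ft_x, ft_y)

def membership_score(x, y):
    # Confidence the finetuned model assigns to the true label;
    # pretraining members tend to score higher than non-members.
    probs = model.predict_proba(x)
    return probs[np.arange(len(y)), y]

member_scores = membership_score(pre_members, pre_labels)
non_member_scores = membership_score(non_members, non_labels)

# Simple threshold attack: predict "member" when confidence exceeds tau.
tau = np.median(non_member_scores)  # illustrative choice of threshold
tpr = (member_scores > tau).mean()
fpr = (non_member_scores > tau).mean()
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")

In the threat model described in the paper the pretraining and finetuning tasks may differ, and stronger attacks than a single confidence threshold are possible; the sketch only fixes the terminology of member versus non-member pretraining points.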

SNAP: Efficient Extraction of Private Properties with Poisoning

1 code implementation • 25 Aug 2022 • Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan Ullman

Property inference attacks allow an adversary to extract global properties of the training dataset from a machine learning model. (A minimal illustrative sketch of property inference follows this entry.)

Inference Attack
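
The sketch below shows plain shadow-model property inference as described in the abstract above; it does not implement SNAP's poisoning step. The synthetic "property" (the fraction of records with a shifted feature), the LogisticRegression shadow and meta models, and all dataset sizes are illustrative assumptions.

# Illustrative sketch of property inference with shadow models (not the SNAP attack).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
d = 10

def sample_dataset(frac_property, n=300):
    # The global "property" is the fraction of records whose first feature is shifted.
    x = rng.normal(0.0, 1.0, size=(n, d))
    shifted = rng.random(n) < frac_property
    x[shifted, 0] += 2.0
    y = (x[:, :2].sum(axis=1) > 0).astype(int)
    return x, y

def model_features(frac_property):
    # Train a shadow model and use its parameters as the attack's feature vector.
    x, y = sample_dataset(frac_property)
    clf = LogisticRegression(max_iter=1000).fit(x, y)
    return np.concatenate([clf.coef_.ravel(), clf.intercept_])

# Shadow models for the two candidate worlds (low vs. high property fraction).
feats = np.array([model_features(f) for f in [0.1] * 50 + [0.5] * 50])
labels = np.array([0] * 50 + [1] * 50)

# Meta-classifier that distinguishes the two worlds from model parameters.
meta = LogisticRegression(max_iter=1000).fit(feats, labels)

# Attack a "target" model whose training set secretly had the high fraction.
target_feats = model_features(0.5).reshape(1, -1)
print("inferred property world:", meta.predict(target_feats)[0])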
