Search Results for author: Abigail Goldsteen

Found 5 papers, 2 papers with code

SoK: Reducing the Vulnerability of Fine-tuned Language Models to Membership Inference Attacks

no code implementations • 13 Mar 2024 • Guy Amit, Abigail Goldsteen, Ariel Farkash

We provide the first systematic review of the vulnerability of fine-tuned large language models to membership inference attacks, the various factors that come into play, and the effectiveness of different defense strategies.
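For readers unfamiliar with the attack class this paper surveys, below is a minimal sketch of a loss-threshold membership inference attack: a fine-tuned model tends to assign lower loss to samples it was trained on, so an attacker can guess membership by thresholding per-example loss. This illustrates the general technique only, not the paper's method; loss_fn is an assumed caller-supplied callable.

    import numpy as np

    def loss_threshold_mia(loss_fn, samples, threshold):
        # loss_fn(sample) -> per-example loss under the target model;
        # how it is computed is model-specific and assumed supplied by the caller.
        losses = np.array([loss_fn(s) for s in samples])
        # Lower loss suggests the model saw the sample during fine-tuning.
        return losses < threshold  # True -> predicted training member

    def calibrate_threshold(loss_fn, known_members, known_nonmembers):
        # Naive calibration: midpoint between the mean losses of samples
        # known to be in and out of the training set.
        m = np.mean([loss_fn(s) for s in known_members])
        n = np.mean([loss_fn(s) for s in known_nonmembers])
        return (m + n) / 2.0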

Improved Membership Inference Attacks Against Language Classification Models

no code implementations • 11 Oct 2023 • Shlomit Shachor, Natalia Razinkov, Abigail Goldsteen

Assessing the privacy risks of machine learning models is crucial to enabling knowledgeable decisions on whether to use, deploy, or share a model.

Classification
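As background for this entry, here is a minimal sketch of the baseline confidence-thresholding membership inference attack on a classification model, assuming an sklearn-style classifier exposing predict_proba; the paper's improved attacks go beyond this simple baseline.

    import numpy as np

    def confidence_mia(clf, X, threshold=0.9):
        # Maximum predicted class probability per sample; overly confident
        # predictions are more likely to come from training members.
        max_conf = clf.predict_proba(X).max(axis=1)
        return max_conf > threshold  # True -> predicted training member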

Data Minimization for GDPR Compliance in Machine Learning Models

1 code implementation • 6 Aug 2020 • Abigail Goldsteen, Gilad Ezov, Ron Shmelkin, Micha Moffie, Ariel Farkash

The EU General Data Protection Regulation (GDPR) mandates the principle of data minimization, which requires that only data necessary to fulfill a certain purpose be collected.

BIG-bench Machine Learning
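To illustrate the data-minimization idea, the sketch below coarsens one numeric feature by quantile binning and keeps the coarsest representation whose validation accuracy stays within a tolerance of the baseline. This is a simplified stand-in for the generalization approach the paper develops; clf, X_val, and y_val are assumed sklearn-style inputs (numpy arrays and a fitted classifier).

    import numpy as np

    def generalize_feature(clf, X_val, y_val, col, tol=0.01,
                           bin_counts=(64, 16, 4, 2)):
        # Baseline accuracy on the original, ungeneralized data.
        base_acc = (clf.predict(X_val) == y_val).mean()
        best = X_val
        for k in bin_counts:  # try progressively coarser representations
            Xg = X_val.copy()
            edges = np.quantile(X_val[:, col], np.linspace(0, 1, k + 1))
            idx = np.clip(np.digitize(Xg[:, col], edges[1:-1]), 0, k - 1)
            centers = 0.5 * (edges[:-1] + edges[1:])
            Xg[:, col] = centers[idx]  # replace raw values with bin centers
            if (clf.predict(Xg) == y_val).mean() >= base_acc - tol:
                best = Xg  # coarser representation is still accurate enough
            else:
                break
        return best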

Reducing Risk of Model Inversion Using Privacy-Guided Training

no code implementations • 29 Jun 2020 • Abigail Goldsteen, Gilad Ezov, Ariel Farkash

Model inversion attacks are able to reveal the values of certain sensitive features of individuals who participated in training the model.

Attribute
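To make the attack concrete, the following is a minimal attribute-inference sketch in the model inversion style: given a trained classifier with an sklearn-style predict_proba, a record whose sensitive feature is unknown, and the record's true label, the attacker guesses the candidate value that maximizes the model's confidence in that label. All names here are illustrative assumptions, not the paper's code, and integer class labels indexable into predict_proba's output are assumed.

    import numpy as np

    def invert_attribute(clf, record, sensitive_col, candidate_values, true_label):
        # record: 1-D numpy array with the sensitive feature left as a placeholder.
        best_value, best_conf = None, -1.0
        for v in candidate_values:
            trial = record.copy()
            trial[sensitive_col] = v  # try one candidate value
            conf = clf.predict_proba(trial.reshape(1, -1))[0, true_label]
            if conf > best_conf:
                best_value, best_conf = v, conf
        return best_value  # the attacker's guess for the sensitive feature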
