no code implementations • 13 Mar 2024 • Guy Amit, Abigail Goldsteen, Ariel Farkash
We provide the first systematic review of the vulnerability of fine-tuned large language models to membership inference attacks, the various factors that come into play, and the effectiveness of different defense strategies.
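A common baseline that such reviews evaluate is the loss-threshold membership inference attack: records the model was trained on tend to incur lower loss than unseen records, so an attacker guesses "member" when the loss falls below a threshold. The sketch below is a hypothetical toy illustration of that gap (the memorizing model, data, and threshold are invented for this example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "members" were used to fit the model, "non-members" were not.
members = rng.normal(size=(50, 5))
non_members = rng.normal(size=(50, 5))
labels_m = (members.sum(axis=1) > 0).astype(float)
labels_n = (non_members.sum(axis=1) > 0).astype(float)

def predict(x):
    # Intentionally overfit model: memorize the training set and
    # answer with the label of the nearest memorized point.
    d = np.linalg.norm(members - x, axis=1)
    return labels_m[np.argmin(d)]

def loss(x, y):
    # 0/1 loss of the memorizing model on one record
    return float(predict(x) != y)

member_losses = [loss(x, y) for x, y in zip(members, labels_m)]
non_member_losses = [loss(x, y) for x, y in zip(non_members, labels_n)]

# Loss-threshold attack: claim "member" whenever loss < threshold.
threshold = 0.5
guessed_member = [l < threshold for l in member_losses]
```

Because the overfit model reproduces its training labels perfectly, member losses are uniformly lower than non-member losses, which is exactly the signal the threshold attack exploits.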
no code implementations • 11 Oct 2023 • Shlomit Shachor, Natalia Razinkov, Abigail Goldsteen
Assessing the privacy risks of machine learning models is crucial to enabling informed decisions on whether to use, deploy, or share a model.
1 code implementation • 6 Aug 2020 • Abigail Goldsteen, Gilad Ezov, Ron Shmelkin, Micha Moffie, Ariel Farkash
The EU General Data Protection Regulation (GDPR) mandates the principle of data minimization, which requires that only data necessary to fulfill a certain purpose be collected.
1 code implementation • 26 Jul 2020 • Abigail Goldsteen, Gilad Ezov, Ron Shmelkin, Micha Moffie, Ariel Farkash
Anonymized data, however, is exempt from the obligations set out in these regulations.
no code implementations • 29 Jun 2020 • Abigail Goldsteen, Gilad Ezov, Ariel Farkash
These attacks can reveal the values of certain sensitive features of individuals whose data was used to train the model.
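One simple form of such an attribute inference attack: an adversary who knows a record's public features and can query the model tries each candidate value of the sensitive feature and keeps the one that best reproduces the observed output. The sketch below is a hypothetical illustration of that idea (the target model and data are invented for this example, not the models studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def target_model(x, s):
    # Hypothetical target model whose confidence score depends
    # strongly on a binary sensitive feature s -- this leakage is
    # what the attack exploits.
    return 1.0 / (1.0 + np.exp(-(x.sum() + 3.0 * s)))

# A record the attacker partially knows: public features x, unknown
# sensitive value s_true, and the model's observed output for it.
x = rng.normal(size=3)
s_true = 1
observed = target_model(x, s_true)

# Attribute inference: score each candidate sensitive value by how
# closely it reproduces the observed output, then pick the best.
candidates = [0, 1]
errors = [abs(target_model(x, s) - observed) for s in candidates]
s_guess = candidates[int(np.argmin(errors))]
```

Here the true value reproduces the observed output exactly, so the attacker recovers it; in practice the same search works probabilistically whenever the sensitive feature measurably shifts the model's outputs.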