1 code implementation • 28 Jun 2021 • Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr
We demonstrate that this strategy provides a false sense of security, as it ignores an inherent asymmetry between the parties: users' pictures are perturbed once and for all before being published (at which point they are scraped) and must thereafter fool all future models -- including models trained adaptively against the users' past attacks, or models that use technologies discovered after the attack.
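The asymmetry the abstract describes can be illustrated with a toy linear-classifier sketch (not the paper's experiments; all models, data, and the perturbation scheme here are hypothetical stand-ins): a user perturbs their pictures once to fool today's model, but a model trained later, adaptively, on those same published pictures recovers accuracy, because the perturbation cannot change retroactively.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" features: two linearly separable classes (user vs. others).
X = np.vstack([rng.normal(+2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

# model_v1: the recognizer available when the user perturbs their photos
# (a hypothetical fixed linear model, not any system from the paper).
w1 = np.array([1.0, 1.0])

# The user perturbs each picture ONCE, just enough to flip model_v1's
# output, then publishes. A stand-in for an evasion attack on today's model.
eps = 5.0
X_pub = X - eps * np.sign(X @ w1)[:, None] * w1 / np.linalg.norm(w1)

acc_v1 = np.mean(np.sign(X_pub @ w1) == y)

# model_v2: trained *later*, adaptively, on the already-scraped perturbed
# pictures. The published perturbation is frozen; the defender is not.
w2, *_ = np.linalg.lstsq(X_pub, y, rcond=None)
acc_v2 = np.mean(np.sign(X_pub @ w2) == y)

print(f"model_v1 accuracy on perturbed photos: {acc_v1:.2f}")
print(f"adaptive model_v2 accuracy:            {acc_v2:.2f}")
```

The perturbation drives model_v1's accuracy to near zero, yet the adaptively trained model_v2 classifies the same frozen pictures almost perfectly, which is the "perturbed once, must fool all future models" asymmetry in miniature.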
no code implementations • ICML Workshop AML 2021 • Evani Radiya-Dixit, Florian Tramèr
We demonstrate that this strategy provides a false sense of security, as it ignores an inherent asymmetry between the parties: users' pictures are perturbed once and for all before being published and scraped, and must thereafter fool all future models -- including models trained adaptively against the users' past attacks, or models that use technologies discovered after the attack.
no code implementations • 24 Apr 2020 • Evani Radiya-Dixit, Xin Wang
Given a language model pre-trained on massive unlabeled text corpora, only very light supervised fine-tuning is needed to learn a task: the number of fine-tuning steps is typically five orders of magnitude lower than the total parameter count.
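The "five orders of magnitude" claim is easy to check with representative numbers (hypothetical figures for illustration, not taken from the paper): a model in the hundreds of millions of parameters is commonly fine-tuned for only a few thousand optimizer steps.

```python
import math

# Hypothetical but representative figures (not from the paper):
# a BERT-large-scale model and a typical fine-tuning schedule.
param_count = 340_000_000   # ~3.4e8 trainable parameters
finetune_steps = 3_000      # a few epochs on a small labeled task

ratio = param_count / finetune_steps
orders_of_magnitude = math.log10(ratio)
print(f"steps are ~10^{orders_of_magnitude:.1f} times fewer than parameters")
```

With these figures the gap is roughly 10^5, matching the abstract's claim that the step count is about five orders of magnitude below the parameter count.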