no code implementations • 6 Aug 2022 • Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot
We contribute a formal analysis of why the PoL protocol cannot be formally (dis)proven to be robust against spoofing adversaries.
no code implementations • 26 May 2022 • Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, Nicolas Papernot
Selective classification is the task of rejecting inputs a model would predict incorrectly on, trading off input-space coverage for model accuracy.
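To make the coverage/accuracy trade-off concrete, here is a minimal softmax-thresholding sketch of selective classification in NumPy. It is a generic illustration only; the function name, threshold value, and rejection rule are assumptions for the example, not the mechanism studied in the paper.

```python
import numpy as np

def selective_predict(probs, threshold=0.9):
    """Toy selective classifier: accept a prediction only when the model's
    top-class probability exceeds `threshold`; otherwise abstain (-1)."""
    confidence = probs.max(axis=1)        # top-class confidence per input
    accepted = confidence >= threshold    # inputs the classifier covers
    preds = np.where(accepted, probs.argmax(axis=1), -1)
    return preds, accepted

# Raising the threshold lowers coverage (fraction of inputs accepted)
# but typically raises accuracy on the accepted subset.
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.80, 0.20]])
labels = np.array([0, 1, 0])
preds, accepted = selective_predict(probs, threshold=0.9)
coverage = accepted.mean()
accuracy = (preds[accepted] == labels[accepted]).mean() if accepted.any() else float("nan")
print(coverage, accuracy)
```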
no code implementations • 24 Feb 2022 • Anvith Thudi, Ilia Shumailov, Franziska Boenisch, Nicolas Papernot
We find this greatly reduces the bound on membership inference (MI) positive accuracy.
no code implementations • 22 Oct 2021 • Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot
Machine unlearning, i.e., having a model forget some of its training data, has become increasingly important as privacy legislation promotes variants of the right-to-be-forgotten.
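For reference, the simplest form of exact unlearning is to drop the requested points and retrain from scratch; approximate unlearning methods try to reach a similar model without paying that full cost. The sketch below assumes scikit-learn is available, and the function name and model choice are hypothetical, used only to illustrate the baseline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_by_retraining(X, y, forget_idx, seed=0):
    """Naive exact unlearning: retrain from scratch on the retained data,
    so the resulting model provably never saw the forgotten points."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    model = LogisticRegression(random_state=seed, max_iter=1000)
    model.fit(X[keep], y[keep])
    return model

# Example: forget the first two training points.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
model = unlearn_by_retraining(X, y, forget_idx=np.array([0, 1]))
```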
1 code implementation • 27 Sep 2021 • Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, Nicolas Papernot
In this work, we first taxonomize approaches and metrics of approximate unlearning.
no code implementations • 20 Sep 2021 • Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society.
2 code implementations • 9 Mar 2021 • Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot
In particular, our analyses and experiments show that an adversary seeking to illegitimately manufacture a proof-of-learning needs to perform *at least* as much work as is needed for gradient descent itself.
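As a rough intuition for what a proof-of-learning records, the NumPy sketch below logs periodic weight checkpoints and the per-step batch indices during ordinary gradient descent, so a verifier could later replay a segment of updates and check it reproduces the claimed trajectory. This is a schematic illustration under assumed hyper-parameters (logistic regression, mini-batches of 10, a checkpoint every `k` steps), not the authors' protocol.

```python
import numpy as np

def train_with_proof(X, y, epochs=5, lr=0.1, k=10, seed=0):
    """Train a logistic-regression model with SGD while logging material
    for a proof-of-learning-style transcript."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    checkpoints = []   # (step, copy of weights) every k updates
    batch_log = []     # data indices used at every update
    step = 0
    for _ in range(epochs):
        for batch in rng.permutation(len(X)).reshape(-1, 10):
            if step % k == 0:
                checkpoints.append((step, w.copy()))
            batch_log.append(batch.copy())
            p = 1.0 / (1.0 + np.exp(-X[batch] @ w))              # sigmoid predictions
            w -= lr * X[batch].T @ (p - y[batch]) / len(batch)   # gradient step
            step += 1
    return w, checkpoints, batch_log

# A verifier can re-execute the logged updates between two checkpoints and
# check they match; forging such a transcript requires doing comparable work.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(float)
w_final, checkpoints, batch_log = train_with_proof(X, y)
```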