1 code implementation • 17 Oct 2022 • Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, Olga Ohrimenko, Nicolas Papernot
In our cryptographic protocol, the server first computes a proof that the model was trained on a dataset $D$.
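A minimal sketch of the commitment step such a protocol could build on (the hash-chain scheme, record format, and checkpoint layout here are illustrative assumptions, not the paper's construction): the server commits to the dataset and to intermediate training checkpoints, so a verifier can later check that the claimed proof is consistent with $D$.

```python
import hashlib
import json

def commit(records):
    """Hash-chain commitment over an ordered list of byte strings.
    Illustrative only; a real protocol would use a Merkle tree plus signatures."""
    digest = b"\x00" * 32
    for r in records:
        digest = hashlib.sha256(digest + r).digest()
    return digest.hex()

# Hypothetical training run: the server commits to dataset D and to the
# model checkpoints produced while training on it.
D = [json.dumps({"x": [0.1, 0.2], "y": 1}).encode() for _ in range(4)]
checkpoints = [json.dumps({"step": s, "weights": [0.0, s * 0.01]}).encode() for s in range(3)]

proof = {"data_commitment": commit(D), "checkpoint_commitment": commit(checkpoints)}

# A verifier given the same records recomputes the commitments and compares.
assert proof["data_commitment"] == commit(D)
print(proof)
```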
no code implementations • 6 Aug 2022 • Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot
We contribute a formal analysis of why the PoL protocol cannot be formally (dis)proven to be robust against spoofing adversaries.
no code implementations • 10 Jun 2022 • Varun Chandrasekaran, Suman Banerjee, Diego Perino, Nicolas Kourtellis
Federated learning (FL), where data remains at the federated clients, and where only gradient updates are shared with a central aggregator, was assumed to be private.
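A minimal sketch of the setup this abstract describes, under illustrative assumptions (linear model, squared loss, synthetic client data): each client computes its gradient update locally and only that update is sent to the central aggregator.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(weights, X, y):
    """Gradient of squared loss for a linear model; the data never leaves the client."""
    preds = X @ weights
    return X.T @ (preds - y) / len(y)

# Three hypothetical federated clients, each holding private data.
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
weights = np.zeros(5)

for round_ in range(10):
    # Each client shares only its gradient update with the aggregator.
    grads = [local_gradient(weights, X, y) for X, y in clients]
    weights -= 0.1 * np.mean(grads, axis=0)   # central aggregation step

print(weights)
```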
1 code implementation • 27 Sep 2021 • Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, Nicolas Papernot
In this work, we first taxonomize approaches and metrics of approximate unlearning.
no code implementations • 20 Sep 2021 • Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society.
no code implementations • 3 Aug 2021 • Adelin Travers, Lorna Licollari, Guanghan Wang, Varun Chandrasekaran, Adam Dziedzic, David Lie, Nicolas Papernot
In the white-box setting, we instantiate this class with a joint, multi-stage optimization attack.
no code implementations • 27 May 2021 • Varun Chandrasekaran, Darren Edge, Somesh Jha, Amit Sharma, Cheng Zhang, Shruti Tople
However, for real-world applications, the privacy of data is critical.
2 code implementations • 9 Mar 2021 • Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot
In particular, our analyses and experiments show that an adversary seeking to illegitimately manufacture a proof-of-learning needs to perform *at least* as much work as is needed for gradient descent itself.
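A rough sketch of the verification idea behind such a proof (the logging granularity, tolerance, and model here are assumptions for illustration): the prover logs checkpoints together with the batch indices used between them, and the verifier replays each logged step and checks that it reproduces the next checkpoint, so forging a convincing log requires redoing comparable training work.

```python
import numpy as np

rng = np.random.default_rng(1)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)

def sgd_step(w, batch_idx, lr=0.05):
    Xb, yb = X[batch_idx], y[batch_idx]
    return w - lr * Xb.T @ (Xb @ w - yb) / len(batch_idx)

# Prover: train and log (checkpoint, batch indices) pairs for every update.
w = np.zeros(5)
log = [(w.copy(), None)]
for step in range(25):
    idx = rng.choice(len(X), size=10, replace=False)
    w = sgd_step(w, idx)
    log.append((w.copy(), idx))

# Verifier: replay each logged update and check it reproduces the next checkpoint.
def verify(log, tol=1e-8):
    for (w_prev, _), (w_next, idx) in zip(log, log[1:]):
        if np.linalg.norm(sgd_step(w_prev, idx) - w_next) > tol:
            return False
    return True

print(verify(log))  # a spoofed log would have to satisfy every replayed step
```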
1 code implementation • 29 Jul 2020 • Jayaram Raghuram, Varun Chandrasekaran, Somesh Jha, Suman Banerjee
We propose an unsupervised anomaly detection framework based on the internal DNN layer representations in the form of a meta-algorithm with configurable components.
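A hedged sketch of one way the configurable components might be instantiated (the kNN scorer, the two stand-in layers, and the averaging aggregation are illustrative assumptions, not the framework's fixed choices): fit a distance-based scorer per layer on clean representations, then aggregate per-layer scores for a test input.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Stand-ins for internal DNN layer representations of clean inputs:
# one feature matrix per layer (two hypothetical layers here).
clean_layers = [rng.normal(size=(500, 32)), rng.normal(size=(500, 16))]

# One kNN scorer per layer, fit on clean representations.
scorers = [NearestNeighbors(n_neighbors=5).fit(feats) for feats in clean_layers]

def anomaly_score(test_layers):
    """Average kNN distance across layers; higher means more anomalous."""
    scores = []
    for scorer, feats in zip(scorers, test_layers):
        dists, _ = scorer.kneighbors(feats)
        scores.append(dists.mean(axis=1))
    return np.mean(scores, axis=0)

# A clean-looking input versus an out-of-distribution one.
normal = [rng.normal(size=(1, 32)), rng.normal(size=(1, 16))]
shifted = [rng.normal(loc=5.0, size=(1, 32)), rng.normal(loc=5.0, size=(1, 16))]
print(anomaly_score(normal), anomaly_score(shifted))
```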
1 code implementation • 19 Mar 2020 • Chuhan Gao, Varun Chandrasekaran, Kassem Fawaz, Somesh Jha
We implement and evaluate Face-Off to find that it deceives three commercial face recognition services from Microsoft, Amazon, and Face++.
1 code implementation • 27 Feb 2020 • Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot
Such pairs are watermarks, which are not sampled from the task distribution and are only known to the defender.
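A toy sketch of how a defender could use such watermark pairs to claim ownership (the threshold, input ranges, and stand-in models are assumptions): query the suspect model on the secret, out-of-distribution inputs and test whether it reproduces the watermark labels far above chance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Secret watermark set: inputs not drawn from the task distribution,
# with labels chosen by (and known only to) the defender.
watermark_x = rng.uniform(-10, 10, size=(50, 8))
watermark_y = rng.integers(0, 10, size=50)

def verify_ownership(model_predict, threshold=0.5):
    """Claim ownership if watermark accuracy far exceeds chance (about 1/10 here)."""
    preds = model_predict(watermark_x)
    acc = np.mean(preds == watermark_y)
    return acc, acc > threshold

# A model that memorised the watermarks versus an independently trained one.
stolen = lambda x: watermark_y.copy()
independent = lambda x: rng.integers(0, 10, size=len(x))
print(verify_ownership(stolen), verify_ownership(independent))
```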
1 code implementation • 26 Feb 2020 • Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot
In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks.
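One plausible instantiation of an artifact-based defense, sketched under stated assumptions (linear model, squared loss, synthetic data) and not necessarily the paper's exact method: bound each example's influence on the update by clipping and noising per-example gradients, since poisoned points tend to contribute unusually large gradients.

```python
import numpy as np

rng = np.random.default_rng(4)
X, y = rng.normal(size=(200, 5)), rng.normal(size=200)

def per_example_grads(w, X, y):
    # Gradient of squared loss for each example separately (linear model).
    residuals = X @ w - y
    return residuals[:, None] * X

def shaped_update(w, X, y, lr=0.05, clip=1.0, noise_std=0.01):
    """Clip each example's gradient norm and add noise, limiting how much
    any single (possibly poisoned) point can move the model."""
    grads = per_example_grads(w, X, y)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads / np.maximum(1.0, norms / clip)
    mean_grad = clipped.mean(axis=0) + rng.normal(scale=noise_std, size=w.shape)
    return w - lr * mean_grad

w = np.zeros(5)
for _ in range(100):
    w = shaped_update(w, X, y)
print(w)
```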
2 code implementations • 9 Dec 2019 • Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot
Once users have shared their data online, it is generally difficult for them to revoke access and ask for the data to be deleted.
no code implementations • 2 Oct 2019 • Lakshya Jain, Wilson Wu, Steven Chen, Uyeong Jang, Varun Chandrasekaran, Sanjit Seshia, Somesh Jha
In this paper we explore semantic adversarial examples (SAEs) where an attacker creates perturbations in the semantic space representing the environment that produces input for the ML model.
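A toy sketch of the idea (the renderer, classifier, and the brightness/rotation parameters are placeholders, not the paper's experimental setup): rather than perturbing pixels directly, search over semantic parameters of the environment until the rendered input is misclassified.

```python
import itertools
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(5)
image = rng.uniform(size=(16, 16))          # stand-in for a rendered scene

def render(image, brightness, angle):
    """Toy 'environment': apply semantic transformations, then clip to a valid range."""
    out = rotate(image, angle, reshape=False, order=1)
    return np.clip(out * brightness, 0.0, 1.0)

def classify(x):
    """Placeholder classifier; in practice this is the victim ML model."""
    return int(x.mean() > 0.5)

clean_label = classify(render(image, 1.0, 0.0))

# Grid search in semantic space for parameters that flip the prediction.
for brightness, angle in itertools.product(np.linspace(0.5, 1.5, 11),
                                           np.linspace(-30, 30, 13)):
    if classify(render(image, brightness, angle)) != clean_label:
        print("semantic adversarial example found:", brightness, angle)
        break
else:
    print("no semantic perturbation found in this grid")
```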
no code implementations • 26 May 2019 • Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu
We ask how to design a classification paradigm that leverages these invariances to improve the robustness-accuracy trade-off.
no code implementations • 5 Nov 2018 • Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Somesh Jha, Songbai Yan
This has resulted in the surge of Machine Learning-as-a-Service (MLaaS): cloud services that provide (a) tools and resources to learn the model, and (b) a user-friendly query interface to access the model.
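A hedged sketch of the extraction threat such a query interface enables (the victim model, pool data, and uncertainty-sampling heuristic are illustrative assumptions): an adversary queries the MLaaS model on chosen inputs, favouring points its local surrogate is least sure about, and trains the surrogate on the returned labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Hypothetical MLaaS victim: only its query interface is visible to the attacker.
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_train, y_train)
query = lambda X: victim.predict(X)

# Attacker: seed with random queries, then add points the surrogate is least sure about.
pool = rng.normal(size=(2000, 4))
X_q = pool[:50]
y_q = query(X_q)
surrogate = LogisticRegression().fit(X_q, y_q)

for _ in range(10):
    probs = surrogate.predict_proba(pool)[:, 1]
    uncertain = pool[np.argsort(np.abs(probs - 0.5))[:20]]   # uncertainty sampling
    X_q = np.vstack([X_q, uncertain])
    y_q = np.concatenate([y_q, query(uncertain)])
    surrogate = LogisticRegression().fit(X_q, y_q)

print("surrogate agreement with victim:", np.mean(surrogate.predict(pool) == query(pool)))
```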