no code implementations • 8 Jun 2023 • Ana-Maria Cretu, Daniel Jones, Yves-Alexandre de Montjoye, Shruti Tople
Directly extending the shadow modelling technique from the black-box to the white-box setting has been shown, in general, not to perform better than black-box-only attacks.
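For context, the black-box shadow-modelling baseline referenced here can be sketched as below. This is an illustrative simplification (model choices, number of shadow models, and the auxiliary data pool are placeholders, not the paper's setup):

```python
# Hedged sketch: a black-box shadow-model membership-inference baseline; all model
# and data choices are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def train_attack_model(X_pool, y_pool, n_shadows=5, seed=0):
    """Train shadow models on disjoint halves of auxiliary data and learn to
    separate member from non-member confidence vectors."""
    rng = np.random.default_rng(seed)
    feats, membership = [], []
    for _ in range(n_shadows):
        idx = rng.permutation(len(X_pool))
        train_idx, out_idx = idx[: len(idx) // 2], idx[len(idx) // 2 :]
        shadow = RandomForestClassifier(n_estimators=50).fit(X_pool[train_idx], y_pool[train_idx])
        feats += [shadow.predict_proba(X_pool[train_idx]), shadow.predict_proba(X_pool[out_idx])]
        membership += [np.ones(len(train_idx)), np.zeros(len(out_idx))]
    # Attack model: predicts membership from the target model's confidence vector.
    return LogisticRegression(max_iter=1000).fit(np.vstack(feats), np.concatenate(membership))
```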
1 code implementation • 2 Feb 2023 • Marlon Tobaben, Aliaksandra Shysheya, John Bronskill, Andrew Paverd, Shruti Tople, Santiago Zanella-Beguelin, Richard E Turner, Antti Honkela
There has been significant recent progress in training differentially private (DP) models that achieve accuracy approaching that of the best non-private models.
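A minimal sketch of the kind of DP training meant here is a DP-SGD step with per-example gradient clipping and Gaussian noise; the model, batch handling, and hyperparameters below are placeholders rather than the paper's configuration:

```python
# Hedged sketch: one DP-SGD step (per-example clipping + Gaussian noise) in plain PyTorch.
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):                                      # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)  # clip each example's gradient
        for s, p in zip(summed, params):
            s.add_(p.grad, alpha=float(scale))
    with torch.no_grad():
        for s, p in zip(summed, params):
            noisy = s + noise_multiplier * clip_norm * torch.randn_like(s)
            p.add_(noisy, alpha=-lr / len(xb))                    # noisy average-gradient step
```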
1 code implementation • 1 Feb 2023 • Nils Lukas, Ahmed Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin
Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as scrubbing are sufficient to prevent PII leakage.
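The dataset-curation scrubbing the abstract refers to is typically pattern- or NER-based redaction before training. A toy regex version (patterns and placeholder tags are illustrative) shows the idea, and also why it can miss PII that does not match a fixed pattern:

```python
# Hedged sketch: naive regex-based PII scrubbing, the kind of curation step the
# abstract argues is insufficient on its own.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched PII spans with typed placeholders before training."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Contact Jane at jane.doe@example.com or 555-123-4567."))
```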
no code implementations • 21 Dec 2022 • Ahmed Salem, Giovanni Cherubin, David Evans, Boris Köpf, Andrew Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella-Béguelin
Deploying machine learning models in production may allow adversaries to infer sensitive information about training data.
no code implementations • 4 Oct 2022 • Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople
Empirical results on three datasets with different modalities and varying numbers of clients further demonstrate that our approach mitigates a broad class of backdoor attacks with a negligible cost on the model utility.
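For orientation, a generic server-side defense against backdoored client updates combines update-norm clipping with a small amount of noise at aggregation time; this is a standard baseline sketch, not necessarily the paper's exact mechanism:

```python
# Hedged sketch: generic robust aggregation of federated client updates
# (norm clipping + Gaussian noise); illustrative, not the paper's exact defense.
import numpy as np

def robust_aggregate(client_updates, clip_norm=1.0, noise_std=0.01, rng=None):
    rng = rng or np.random.default_rng(0)
    clipped = []
    for delta in client_updates:                         # each delta: flattened np.ndarray
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip_norm / (norm + 1e-12)))
    agg = np.mean(clipped, axis=0)
    return agg + rng.normal(0.0, noise_std, size=agg.shape)
```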
1 code implementation • 18 Sep 2022 • Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena
Our causal models also show a new connection between generalization and MI attacks via their shared causal factors.
2 code implementations • 18 Sep 2022 • Valentin Hartmann, Léo Meynent, Maxime Peyrard, Dimitrios Dimitriadis, Shruti Tople, Robert West
We identify three sources of leakage: (1) memorizing specific information about $\mathbb{E}[Y|X]$ (the expected label given the feature values) that is of interest to the adversary, (2) wrong inductive bias of the model, and (3) finiteness of the training data.
1 code implementation • 10 Jun 2022 • Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Ahmed Salem, Victor Rühle, Andrew Paverd, Mohammad Naseri, Boris Köpf, Daniel Jones
Our Bayesian method exploits the hypothesis testing interpretation of differential privacy to obtain a posterior for $\varepsilon$ (not just a confidence interval) from the joint posterior of the false positive and false negative rates of membership inference attacks.
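A simplified Monte Carlo illustration of this idea, for the $\delta = 0$ case only: $(\varepsilon, 0)$-DP forces any membership-inference test to satisfy $\mathrm{FPR} + e^{\varepsilon}\,\mathrm{FNR} \ge 1$ and $\mathrm{FNR} + e^{\varepsilon}\,\mathrm{FPR} \ge 1$, so samples from Beta posteriors over the attack's error rates can be pushed through this bound to get a posterior over an implied lower bound on $\varepsilon$. The paper's estimator is more refined than this sketch:

```python
# Hedged sketch: posterior over a lower bound on epsilon from MI-attack error rates
# (delta = 0 case, uniform Beta priors); a simplification of the paper's method.
import numpy as np

def eps_lower_bound(fpr, fnr):
    # (eps, 0)-DP implies fpr + e^eps * fnr >= 1 and fnr + e^eps * fpr >= 1.
    return np.maximum(np.log((1 - fpr) / fnr), np.log((1 - fnr) / fpr))

def eps_posterior_samples(fp, tn, fn, tp, n_samples=10000, rng=None):
    rng = rng or np.random.default_rng(0)
    fpr = rng.beta(fp + 1, tn + 1, n_samples)   # posterior over false positive rate
    fnr = rng.beta(fn + 1, tp + 1, n_samples)   # posterior over false negative rate
    return eps_lower_bound(fpr, fnr)

samples = eps_posterior_samples(fp=5, tn=495, fn=20, tp=480)
print(np.percentile(samples, [5, 50, 95]))      # credible interval for the bound
```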
1 code implementation • 7 Oct 2021 • Divyat Mahajan, Shruti Tople, Amit Sharma
Through extensive evaluation on a synthetic dataset and image datasets like MNIST, Fashion-MNIST, and Chest X-rays, we show that a lower OOD generalization gap does not imply better robustness to MI attacks.
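The membership-inference risk being measured can be quantified, in its simplest form, with a loss-threshold attack and its ROC AUC; the paper evaluates stronger attacks, so treat this only as a baseline sketch:

```python
# Hedged sketch: loss-threshold membership-inference attack scored by AUC
# (a simple baseline, weaker than the attacks evaluated in the paper).
import numpy as np
from sklearn.metrics import roc_auc_score

def mi_attack_auc(member_losses, nonmember_losses):
    """Lower loss => more likely a training member; report the attack's AUC."""
    scores = -np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses)), np.zeros(len(nonmember_losses))])
    return roc_auc_score(labels, scores)
```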
no code implementations • 27 May 2021 • Varun Chandrasekaran, Darren Edge, Somesh Jha, Amit Sharma, Cheng Zhang, Shruti Tople
However, for real-world applications, the privacy of data is critical.
no code implementations • 1 Jan 2021 • Santiago Zanella-Beguelin, Shruti Tople, Andrew Paverd, Boris Köpf
This is true even for queries that are entirely in-distribution, making extraction attacks indistinguishable from legitimate use; (ii) with fine-tuned base layers, the effectiveness of algebraic attacks decreases with the learning rate, showing that fine-tuning is not only beneficial for accuracy but also indispensable for model confidentiality.
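To illustrate the "algebraic attack" family mentioned here: when the base layers are frozen and publicly known, an adversary who can compute embeddings locally can recover a task-specific linear head from query responses by least squares. The function below is an illustrative sketch under that assumption, not the paper's attack implementation:

```python
# Hedged sketch: recovering a linear task head on top of a known frozen encoder
# from (embedding, logit) query pairs, as an example of an algebraic extraction attack.
import numpy as np

def extract_linear_head(embeddings, logits):
    """Solve logits ~= embeddings @ W + b in the least-squares sense."""
    X = np.hstack([embeddings, np.ones((len(embeddings), 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(X, logits, rcond=None)
    return coef[:-1], coef[-1]                                   # (W, b)
```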
no code implementations • 11 Sep 2020 • Yixi Xu, Sumit Mukherjee, Xiyang Liu, Shruti Tople, Rahul Dodhia, Juan Lavista Ferres
In this work, we propose the first formal framework for membership privacy estimation in generative models.
1 code implementation • 25 Jul 2020 • Anshul Aggarwal, Trevor E. Carlson, Reza Shokri, Shruti Tople
In this setting, our objective is to protect the confidentiality of both the users' input queries and the model parameters at the server, with modest computation and communication overhead.
1 code implementation • arXiv 2020 • Divyat Mahajan, Shruti Tople, Amit Sharma
In the domain generalization literature, a common objective is to learn representations independent of the domain after conditioning on the class label.
Ranked #1 on Domain Generalization on Rotated Fashion-MNIST
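A toy version of the class-conditional invariance objective described in the abstract penalizes differences between per-class mean representations across domains; the paper's matching-based objective is more involved than this sketch:

```python
# Hedged sketch: class-conditional domain-invariance penalty (match per-class mean
# features across domains); an illustration, not the paper's exact objective.
import torch

def class_conditional_penalty(features, domains, labels):
    penalty = features.new_zeros(())
    for y in labels.unique():
        means = [features[(labels == y) & (domains == d)].mean(dim=0)
                 for d in domains.unique()
                 if ((labels == y) & (domains == d)).any()]
        for i in range(1, len(means)):          # compare every domain to the first
            penalty = penalty + ((means[i] - means[0]) ** 2).sum()
    return penalty
```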
1 code implementation • 12 Jun 2020 • Wanrong Zhang, Shruti Tople, Olga Ohrimenko
Using multiple machine learning models, we show that leakage occurs even if the sensitive attribute is not included in the training data and has a low correlation with other attributes or the target variable.
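One simple way to probe for such leakage is to train a meta-classifier on the model's outputs to predict the sensitive attribute; above-chance accuracy indicates the outputs encode it even though it was never an input feature. This probe is illustrative and not the paper's exact attack:

```python
# Hedged sketch: probing whether a model's outputs leak a sensitive attribute
# that was not part of its training features; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def attribute_leakage_score(model, X, sensitive_attr):
    """Above-chance cross-validated accuracy signals leakage of the attribute."""
    outputs = model.predict_proba(X)
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, outputs, sensitive_attr, cv=5).mean()
```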
1 code implementation • 5 Apr 2020 • Sameer Wagh, Shruti Tople, Fabrice Benhamouda, Eyal Kushilevitz, Prateek Mittal, Tal Rabin
For private training, we are about 6x faster than SecureNN, 4.4x faster than ABY3, and about 2-60x more communication efficient.
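The underlying primitive in such MPC frameworks is secret sharing over a ring; a toy 3-party additive sharing (not the framework's full replicated-sharing protocol) looks like this:

```python
# Hedged sketch: toy 3-party additive secret sharing over a 32-bit ring, the basic
# building block behind MPC training/inference frameworks; not the full protocol.
import secrets

RING = 2 ** 32

def share(x, n_parties=3):
    shares = [secrets.randbelow(RING) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % RING)   # last share makes the sum equal x mod RING
    return shares

def reconstruct(shares):
    return sum(shares) % RING

assert reconstruct(share(123456)) == 123456
```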
no code implementations • 8 Jan 2020 • Bijeeta Pal, Shruti Tople
Thus, our results motivate the need for designing training techniques that are robust to unintended feature learning, specifically for transfer learned models.
no code implementations • 17 Dec 2019 • Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, Marc Brockschmidt
To continuously improve quality and reflect changes in data, machine learning applications have to regularly retrain and update their core models.
1 code implementation • 5 Dec 2019 • Stephanie L. Hyland, Shruti Tople
Introducing noise in the training of machine learning systems is a powerful way to protect individual privacy via differential privacy guarantees, but comes at a cost to utility.
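The privacy/utility trade-off mentioned here is visible in the classical Gaussian-mechanism calibration, where the required noise scale grows as the privacy budget shrinks (the formula below is the standard bound, valid for $\varepsilon \le 1$, and is given only as an illustration):

```python
# Hedged sketch: classical Gaussian-mechanism noise calibration, showing how noise
# grows as epsilon shrinks (standard bound, valid for epsilon <= 1).
import numpy as np

def gaussian_sigma(sensitivity, epsilon, delta):
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

for eps in (0.1, 0.5, 1.0):
    print(eps, gaussian_sigma(sensitivity=1.0, epsilon=eps, delta=1e-5))
```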
no code implementations • 8 Nov 2019 • Olga Ohrimenko, Shruti Tople, Sebastian Tschiatschek
We study the problem of collaborative machine learning markets where multiple parties can achieve improved performance on their machine learning tasks by combining their training data.
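A minimal way to reason about each party's contribution in such a market is a leave-one-party-out value estimate; the paper's incentive mechanism is more sophisticated, and the `train_and_eval` callback below is a hypothetical placeholder:

```python
# Hedged sketch: leave-one-party-out contribution estimate for a collaborative ML
# market; `train_and_eval` is a placeholder that trains on the given datasets and
# returns a validation score.
def party_contributions(parties_data, train_and_eval):
    full_score = train_and_eval(list(parties_data.values()))
    return {
        name: full_score - train_and_eval([d for n, d in parties_data.items() if n != name])
        for name in parties_data
    }
```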
1 code implementation • ICML 2020 • Shruti Tople, Amit Sharma, Aditya Nori
Such privacy risks are exacerbated when a model's predictions are used on an unseen data distribution.
no code implementations • 25 Sep 2019 • Shruti Tople, Marc Brockschmidt, Boris Köpf, Olga Ohrimenko, Santiago Zanella-Béguelin
To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models.
no code implementations • 1 Oct 2018 • Karan Grover, Shruti Tople, Shweta Shinde, Ranjita Bhagwan, Ramachandran Ramjee
In this paper, we ask a timely question: "Can third-party cloud services use Intel SGX enclaves to provide practical, yet secure DNN Inference-as-a-service?"