no code implementations • 26 Mar 2023 • Ashkan Yousefpour, Shen Guo, Ashish Shenoy, Sayan Ghosh, Pierre Stock, Kiwan Maeng, Schalk-Willem Krüger, Michael Rabbat, Carole-Jean Wu, Ilya Mironov
The rapid progress of AI is fueled by increasingly large and computationally intensive machine learning models and datasets.
1 code implementation • 26 Jul 2022 • Karthik Prasad, Sayan Ghosh, Graham Cormode, Ilya Mironov, Ashkan Yousefpour, Pierre Stock
Cross-device Federated Learning is an increasingly popular machine learning setting to train a model by leveraging a large population of client devices with high privacy and security guarantees.
no code implementations • 7 Jun 2022 • Meisam Hejazinia, Dzmitry Huba, Ilias Leontiadis, Kiwan Maeng, Mani Malek, Luca Melis, Ilya Mironov, Milad Nasr, Kaikai Wang, Carole-Jean Wu
Despite FL's initial success, many important deep learning use cases, such as ranking and recommendation tasks, have been limited from on-device learning.
no code implementations • 15 Feb 2022 • Pierre Stock, Igor Shilov, Ilya Mironov, Alexandre Sablayrolles
Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.
3 code implementations • 25 Sep 2021 • Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov
We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy (hosted at opacus.ai).
1 code implementation • NeurIPS 2021 • Mani Malek, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramèr
We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework, and demonstrate their effectiveness on standard benchmarks.
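The Laplace mechanism mentioned above is the classical differential-privacy building block: adding noise drawn from Laplace(sensitivity/ε) to a query answer yields ε-DP. A minimal sketch of that primitive (function names are hypothetical, not from the paper):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with epsilon-DP by adding Laplace(sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller ε means a larger noise scale and stronger privacy; the paper's contribution is in how such mechanisms are applied, not in the primitive itself.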
no code implementations • 1 Mar 2021 • Huanyu Zhang, Ilya Mironov, Meisam Hejazinia
Despite intense interest and considerable effort, the current generation of neural networks suffers a significant loss of accuracy under most practically relevant privacy training regimes.
1 code implementation • 10 Mar 2020 • Nicholas Carlini, Matthew Jagielski, Ilya Mironov
We argue that the machine learning problem of model extraction is actually a cryptanalytic problem in disguise, and should be studied as such.
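As a toy illustration of this query-based, cryptanalytic framing: an exactly linear model can be extracted with dim + 1 oracle queries. The paper tackles the much harder case of ReLU networks, but the flavor is the same (all names below are hypothetical):

```python
import numpy as np

def extract_linear_model(oracle, dim):
    """Recover weights and bias of oracle(x) = w @ x + b using dim + 1 queries."""
    b = oracle(np.zeros(dim))                           # query at the origin yields the bias
    w = np.array([oracle(e) - b for e in np.eye(dim)])  # unit-vector queries isolate each weight
    return w, b

secret_w, secret_b = np.array([2.0, -1.0, 0.5]), 3.0
oracle = lambda x: secret_w @ x + secret_b
w_hat, b_hat = extract_linear_model(oracle, dim=3)
```

The "secret" parameters are recovered exactly, with no training involved, which is why the problem behaves more like breaking a cipher than like learning.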
2 code implementations • 28 Aug 2019 • Ilya Mironov, Kunal Talwar, Li Zhang
The Sampled Gaussian Mechanism (SGM), a composition of subsampling and additive Gaussian noise, has been successfully used in a number of machine learning applications.
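For orientation, a sketch of Rényi-DP accounting for the plain (unsubsampled) Gaussian mechanism: the RDP of order α with noise multiplier σ is α/(2σ²), RDP composes additively across steps, and conversion to (ε, δ)-DP uses ε = RDP + log(1/δ)/(α−1). The subsampling amplification analyzed in this paper tightens these bounds substantially (helper names below are hypothetical):

```python
import math

def gaussian_rdp(alpha, sigma):
    """RDP of order alpha for the Gaussian mechanism with noise multiplier sigma."""
    return alpha / (2 * sigma ** 2)

def rdp_to_dp(rdp, alpha, delta):
    """Convert an RDP guarantee at order alpha to an (epsilon, delta)-DP guarantee."""
    return rdp + math.log(1 / delta) / (alpha - 1)

sigma, steps, delta = 2.0, 10, 1e-5
# RDP composes additively across the `steps` applications of the mechanism;
# optimize the order alpha to get the tightest converted epsilon.
best_eps = min(rdp_to_dp(steps * gaussian_rdp(a, sigma), a, delta)
               for a in range(2, 64))
```

Sweeping over orders α and taking the minimum converted ε is the standard accounting recipe; with subsampling, only the per-step RDP term changes.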
no code implementations • 8 Aug 2019 • Úlfar Erlingsson, Ilya Mironov, Ananth Raghunathan, Shuang Song
Instead, the definitions so named are the basis of refinements and more advanced analyses of the worst-case implications of attackers, without any assumed change in attackers' powers.
4 code implementations • 15 Dec 2018 • H. Brendan McMahan, Galen Andrew, Ulfar Erlingsson, Steve Chien, Ilya Mironov, Nicolas Papernot, Peter Kairouz
In this work we address the practical challenges of training machine learning models on privacy-sensitive datasets by introducing a modular approach that minimizes changes to training algorithms, provides a variety of configuration strategies for the privacy mechanism, and then isolates and simplifies the critical logic that computes the final privacy guarantees.
no code implementations • 29 Nov 2018 • Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Abhradeep Thakurta
We study the collection of such statistics in the local differential privacy (LDP) model, and describe an algorithm whose privacy cost is polylogarithmic in the number of changes to a user's value.
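Classical randomized response is the simplest LDP primitive and illustrates the local model the paper works in: each user perturbs their own bit before reporting, and the aggregator debiases the noisy reports. A sketch with hypothetical names (the paper's algorithm for repeated collection over changing values is far more involved):

```python
import math
import numpy as np

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (1 + e^eps); otherwise flip it (eps-LDP)."""
    p_truth = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if rng.random() < p_truth else 1 - bit

def debias_mean(reports, epsilon):
    """Unbiased estimate of the true mean of the bits from the randomized reports."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(0)
true_bits = rng.integers(0, 2, size=100_000)    # population with roughly 50% ones
reports = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
estimate = debias_mean(reports, epsilon=1.0)
```

Each individual report is plausibly deniable, yet the population mean is recovered accurately; the cost of naively repeating this every time a value changes is what the paper's polylogarithmic bound avoids.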
no code implementations • 20 Aug 2018 • Vitaly Feldman, Ilya Mironov, Kunal Talwar, Abhradeep Thakurta
In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.
3 code implementations • ICLR 2018 • Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson
no code implementations • 26 Aug 2017 • Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang
The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy.
2 code implementations • 24 Feb 2017 • Ilya Mironov
We propose a natural relaxation of differential privacy based on the Rényi divergence.
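For discrete distributions, the Rényi divergence of order α > 1 is D_α(P‖Q) = (1/(α−1)) log Σᵢ pᵢ^α qᵢ^(1−α), and (α, ε)-RDP requires this divergence between the output distributions on any pair of adjacent inputs to be at most ε. A small numerical sketch (the example distributions are hypothetical, not from the paper):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P||Q) = log(sum_i p_i^alpha * q_i^(1-alpha)) / (alpha - 1), for alpha > 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

# Output distributions of a randomized-response-style mechanism
# on two adjacent inputs (true bit 1 vs. true bit 0):
p = [0.75, 0.25]
q = [0.25, 0.75]
d = renyi_divergence(p, q, alpha=2.0)   # the order-2 RDP cost of this mechanism
```

The divergence of a distribution from itself is zero, and as α → ∞ the quantity approaches the max-divergence used in pure ε-DP, which is how RDP interpolates between the pure and approximate definitions.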
25 code implementations • 1 Jul 2016 • Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains.
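The DP-SGD algorithm introduced in this paper clips each example's gradient to norm C, sums the clipped gradients, adds Gaussian noise with standard deviation σC, and averages. A minimal numpy sketch on linear least squares (the setup and names are hypothetical; a faithful accounting of the privacy cost uses the paper's moments accountant rather than this bare loop):

```python
import numpy as np

def dp_sgd_step(w, X, y, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD step for linear least squares: per-example clip, sum, add noise, average."""
    per_example_grads = 2 * (X @ w - y)[:, None] * X   # gradient of (x @ w - y)^2, per example
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_mean

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = np.zeros(3)
for _ in range(500):
    w = dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=0.5, lr=0.1, rng=rng)
```

Per-example clipping bounds any single example's influence on the update, which is what makes the added Gaussian noise yield a differential-privacy guarantee; libraries such as Opacus automate exactly this per-sample computation inside PyTorch.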