Search Results for author: Ilya Mironov

Found 17 papers, 9 papers with code

Green Federated Learning

no code implementations • 26 Mar 2023 • Ashkan Yousefpour, Shen Guo, Ashish Shenoy, Sayan Ghosh, Pierre Stock, Kiwan Maeng, Schalk-Willem Krüger, Michael Rabbat, Carole-Jean Wu, Ilya Mironov

The rapid progress of AI is fueled by increasingly large and computationally intensive machine learning models and datasets.

Federated Learning

Reconciling Security and Communication Efficiency in Federated Learning

1 code implementation • 26 Jul 2022 • Karthik Prasad, Sayan Ghosh, Graham Cormode, Ilya Mironov, Ashkan Yousefpour, Pierre Stock

Cross-device Federated Learning is an increasingly popular machine learning setting in which a model is trained by leveraging a large population of client devices with high privacy and security guarantees.

Federated Learning • Quantization
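
The secure-aggregation primitive underlying this line of work can be illustrated with pairwise additive masking. The sketch below is a toy version of that primitive only, not the paper's actual protocol (which combines secure aggregation with quantization); all names and parameters are illustrative.

```python
import numpy as np

def masked_update(update, client_id, peer_ids, modulus=2**16):
    """Each client pair (i, j) derives a shared mask; the smaller id adds
    it, the larger subtracts it, so all masks cancel in the server's sum."""
    masked = update.astype(np.int64) % modulus
    for peer in peer_ids:
        # A real protocol derives the seed via key agreement; we fake it
        # with a deterministic per-pair seed for illustration.
        seed = hash((min(client_id, peer), max(client_id, peer))) % 2**32
        mask = np.random.default_rng(seed).integers(0, modulus, size=len(update))
        masked = (masked + mask if client_id < peer else masked - mask) % modulus
    return masked

clients = {0: np.array([1, 2, 3, 4]), 1: np.array([5, 6, 7, 8]),
           2: np.array([9, 10, 11, 12])}
ids = list(clients)
total = sum(masked_update(u, i, [p for p in ids if p != i])
            for i, u in clients.items()) % 2**16
print(total)  # equals the plain sum of updates: [15 18 21 24]
```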

Defending against Reconstruction Attacks with Rényi Differential Privacy

no code implementations • 15 Feb 2022 • Pierre Stock, Igor Shilov, Ilya Mironov, Alexandre Sablayrolles

Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.

Opacus: User-Friendly Differential Privacy Library in PyTorch

3 code implementations • 25 Sep 2021 • Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov

We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy (hosted at opacus.ai).
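
A minimal usage sketch against the Opacus 1.x API (signatures have changed across releases, so treat the exact calls as indicative):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=8)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # sigma of the Gaussian noise
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

criterion = nn.CrossEntropyLoss()
for x, y in loader:
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()       # clips per-sample gradients and adds noise

print(privacy_engine.get_epsilon(delta=1e-5))  # spent privacy budget
```

The `make_private` call swaps in a per-sample-gradient model wrapper, a noising optimizer, and a Poisson-sampling data loader; `get_epsilon` queries the attached privacy accountant.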

Antipodes of Label Differential Privacy: PATE and ALIBI

1 code implementation • NeurIPS 2021 • Mani Malek, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramèr

We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework, and demonstrate their effectiveness on standard benchmarks.

Bayesian Inference • Memorization • +2
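
The Laplace-mechanism side of the paper (ALIBI) can be sketched as label randomization: one-hot encode each label and add Laplace noise calibrated to the one-hot L1 sensitivity. The paper's Bayesian post-processing of the noisy labels is omitted here, and the epsilon handling is illustrative only.

```python
import numpy as np

def noisy_one_hot(label, num_classes, epsilon, rng):
    """One-hot encode a label and add Laplace noise; changing the label
    moves L1 mass 2, so the noise scale is 2 / epsilon."""
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return one_hot + rng.laplace(0.0, 2.0 / epsilon, size=num_classes)

rng = np.random.default_rng(0)
print(noisy_one_hot(label=3, num_classes=5, epsilon=1.0, rng=rng))
```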

Wide Network Learning with Differential Privacy

no code implementations • 1 Mar 2021 • Huanyu Zhang, Ilya Mironov, Meisam Hejazinia

Despite intense interest and considerable effort, the current generation of neural networks suffers a significant loss of accuracy under most practically relevant privacy training regimes.

Recommendation Systems

Cryptanalytic Extraction of Neural Network Models

1 code implementation • 10 Mar 2020 • Nicholas Carlini, Matthew Jagielski, Ilya Mironov

We argue that the machine learning problem of model extraction is actually a cryptanalytic problem in disguise, and should be studied as such.

Model extraction
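
For a purely linear "victim", the query-based flavor of model extraction reduces to finite differences; the paper's cryptanalytic attack targets deep ReLU networks and is far more involved. A toy sketch under that linearity assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
w_secret, b_secret = rng.normal(size=5), rng.normal()
victim = lambda x: float(w_secret @ x + b_secret)  # query-only access

b_hat = victim(np.zeros(5))                                # f(0) = b
w_hat = np.array([victim(e) - b_hat for e in np.eye(5)])   # f(e_i) - f(0) = w_i
print(np.allclose(w_hat, w_secret), np.isclose(b_hat, b_secret))  # True True
```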

Rényi Differential Privacy of the Sampled Gaussian Mechanism

2 code implementations • 28 Aug 2019 • Ilya Mironov, Kunal Talwar, Li Zhang

The Sampled Gaussian Mechanism (SGM)---a composition of subsampling and additive Gaussian noise---has been successfully used in a number of machine learning applications.
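
A minimal sketch of the mechanism on a sum query, assuming Poisson subsampling with rate q and noise calibrated to a clipping bound (parameter names are illustrative):

```python
import numpy as np

def sampled_gaussian_sum(records, q, sigma, clip, rng):
    clipped = np.clip(records, -clip, clip)  # bound each record's contribution
    mask = rng.random(len(records)) < q      # Poisson subsampling
    noise = rng.normal(0.0, sigma * clip)    # additive Gaussian noise
    return clipped[mask].sum() + noise

rng = np.random.default_rng(0)
data = rng.normal(size=1000)
print(sampled_gaussian_sum(data, q=0.01, sigma=1.0, clip=1.0, rng=rng))
```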

That which we call private

no code implementations • 8 Aug 2019 • Úlfar Erlingsson, Ilya Mironov, Ananth Raghunathan, Shuang Song

Instead, the definitions so named are the basis of refinements and more advanced analyses of the worst-case implications of attackers---without any change assumed in attackers' powers.

A General Approach to Adding Differential Privacy to Iterative Training Procedures

4 code implementations • 15 Dec 2018 • H. Brendan McMahan, Galen Andrew, Úlfar Erlingsson, Steve Chien, Ilya Mironov, Nicolas Papernot, Peter Kairouz

In this work we address the practical challenges of training machine learning models on privacy-sensitive datasets by introducing a modular approach that minimizes changes to training algorithms, provides a variety of configuration strategies for the privacy mechanism, and then isolates and simplifies the critical logic that computes the final privacy guarantees.
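
The "critical logic that computes the final privacy guarantees" typically boils down to composing per-step Rényi DP and converting to an (epsilon, delta) guarantee. A hedged sketch of that accounting, using the unsubsampled Gaussian mechanism's RDP curve alpha / (2 sigma^2) rather than the paper's actual accountant, so the numbers are only illustrative:

```python
import numpy as np

def epsilon_from_rdp(sigma, steps, delta, alphas=np.arange(2, 128)):
    rdp_total = steps * alphas / (2.0 * sigma**2)           # RDP composes linearly
    eps = rdp_total + np.log(1.0 / delta) / (alphas - 1.0)  # RDP -> (eps, delta)-DP
    return eps.min()                                        # optimize over alpha

print(epsilon_from_rdp(sigma=1.1, steps=1000, delta=1e-5))
```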

Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity

no code implementations • 29 Nov 2018 • Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Abhradeep Thakurta

We study the collection of such statistics in the local differential privacy (LDP) model, and describe an algorithm whose privacy cost is polylogarithmic in the number of changes to a user's value.
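
The canonical LDP primitive whose reports shuffling then amplifies is randomized response. A minimal binary sketch (names illustrative, unrelated to the paper's specific algorithm):

```python
import numpy as np

def randomized_response(bits, epsilon, rng):
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)  # probability of reporting truthfully
    flip = rng.random(len(bits)) >= p
    return np.where(flip, 1 - bits, bits)

def debiased_mean(reports, epsilon):
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
bits = (rng.random(100_000) < 0.3).astype(int)     # true mean ~ 0.3
reports = randomized_response(bits, epsilon=1.0, rng=rng)
print(debiased_mean(reports, epsilon=1.0))         # close to 0.3
```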

Privacy Amplification by Iteration

no code implementations • 20 Aug 2018 • Vitaly Feldman, Ilya Mironov, Kunal Talwar, Abhradeep Thakurta

In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.

On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches

no code implementations • 26 Aug 2017 • Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy.

BIG-bench Machine Learning • valid

Rényi Differential Privacy

2 code implementations • 24 Feb 2017 • Ilya Mironov

We propose a natural relaxation of differential privacy based on the Rényi divergence.

Cryptography and Security
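
The textbook instance is the Gaussian mechanism: with sensitivity-Delta queries and N(0, sigma^2) noise it satisfies (alpha, alpha Delta^2 / (2 sigma^2))-RDP for every alpha > 1, and an RDP guarantee converts to (epsilon, delta)-DP via eps = rdp + log(1/delta) / (alpha - 1). A small numeric sketch of both facts (helper names are ours, not the paper's):

```python
import numpy as np

def rdp_gaussian(alpha, sigma, sensitivity=1.0):
    """Closed-form Renyi divergence of order alpha between N(0, sigma^2)
    and N(sensitivity, sigma^2)."""
    return alpha * sensitivity**2 / (2.0 * sigma**2)

def rdp_to_dp(alpha, rdp, delta):
    """Standard RDP -> (epsilon, delta)-DP conversion."""
    return rdp + np.log(1.0 / delta) / (alpha - 1.0)

alpha, sigma = 8.0, 2.0
print(rdp_gaussian(alpha, sigma))                         # 1.0
print(rdp_to_dp(alpha, rdp_gaussian(alpha, sigma), 1e-5))
```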

Deep Learning with Differential Privacy

25 code implementations • 1 Jul 2016 • Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains.

BIG-bench Machine Learning
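
The core of the paper's DP-SGD algorithm is per-example gradient clipping followed by Gaussian noising of the summed gradients. A minimal plain-PyTorch sketch of one such step, with the moments-accountant bookkeeping omitted and all names illustrative:

```python
import torch
from torch import nn

def dp_sgd_step(model, batch_x, batch_y, lr=0.1, clip=1.0, sigma=1.0):
    loss_fn = nn.CrossEntropyLoss()
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(batch_x, batch_y):                 # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        factor = torch.clamp(clip / (norm + 1e-12), max=1.0)  # clip to L2 <= clip
        for s, g in zip(summed, grads):
            s += factor * g
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noisy = s + sigma * clip * torch.randn_like(s)    # Gaussian noise
            p -= lr * noisy / len(batch_x)                    # noisy average gradient

model = nn.Linear(10, 2)
dp_sgd_step(model, torch.randn(8, 10), torch.randint(0, 2, (8,)))
```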
