Search Results for author: Karthik Prasad

Found 4 papers, 3 papers with code

Pruning Compact ConvNets for Efficient Inference

no code implementations11 Jan 2023 Sayan Ghosh, Karthik Prasad, Xiaoliang Dai, Peizhao Zhang, Bichen Wu, Graham Cormode, Peter Vajda

The resulting family of pruned models consistently outperforms existing FBNetV3 models at the same level of computation, and thus provides state-of-the-art results when trading off between computational complexity and generalization performance on the ImageNet benchmark.

Network Pruning · Neural Architecture Search
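To illustrate the general idea of channel pruning, here is a minimal sketch of magnitude-based (L1-norm) channel selection, a common baseline for this kind of work; it is not the specific pruning procedure used in the paper, and all function names are illustrative.

```python
# Illustrative magnitude-based channel pruning: rank output channels by the
# L1 norm of their weights and keep only the highest-magnitude fraction.
# This is a generic baseline, not the paper's exact method.

def l1_norm(channel):
    """Sum of absolute weights in one channel."""
    return sum(abs(w) for w in channel)

def prune_channels(channels, keep_ratio):
    """Return the indices (in original order) of the channels to keep.

    `channels` is a list of weight lists, one per output channel; the
    `keep_ratio` fraction with the largest L1 norm survives.
    """
    n_keep = max(1, int(len(channels) * keep_ratio))
    ranked = sorted(range(len(channels)),
                    key=lambda i: l1_norm(channels[i]), reverse=True)
    return sorted(ranked[:n_keep])

channels = [[0.1, -0.2], [1.5, 2.0], [0.0, 0.05], [0.9, -1.1]]
print(prune_channels(channels, keep_ratio=0.5))  # → [1, 3]
```

In practice the kept indices are then used to slice the convolution weights (and the corresponding input channels of the next layer) before fine-tuning.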

Reconciling Security and Communication Efficiency in Federated Learning

1 code implementation26 Jul 2022 Karthik Prasad, Sayan Ghosh, Graham Cormode, Ilya Mironov, Ashkan Yousefpour, Pierre Stock

Cross-device Federated Learning is an increasingly popular machine learning setting in which a model is trained by leveraging a large population of client devices, with high privacy and security guarantees.

Federated Learning · Quantization
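The core trick behind secure aggregation in this setting can be sketched with pairwise additive masks that cancel in the server's sum. This toy version (all names and the modulus are illustrative, not the paper's protocol) omits the key agreement and dropout handling that real protocols need.

```python
import random

# Toy secure-aggregation sketch: each pair of clients (i, j) shares a random
# mask m; client i adds +m and client j adds -m, so every mask cancels when
# the server sums the masked updates modulo MOD, yet no single masked update
# reveals the client's raw value. Real protocols derive masks via key
# agreement; this is only the arithmetic skeleton.

MOD = 2**16  # quantized updates are aggregated in modular integer arithmetic

def mask_updates(updates, seed=0):
    """Return each client's update with all pairwise masks applied."""
    rng = random.Random(seed)
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(MOD)
            masked[i] = (masked[i] + m) % MOD
            masked[j] = (masked[j] - m) % MOD
    return masked

updates = [3, 7, 12]            # each client's quantized model update
masked = mask_updates(updates)
print(sum(masked) % MOD)        # equals sum(updates) % MOD: masks cancel
```

Communication efficiency then comes from how aggressively `updates` are quantized before masking, which is the tension the paper addresses.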

Opacus: User-Friendly Differential Privacy Library in PyTorch

3 code implementations25 Sep 2021 Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov

We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy (hosted at opacus.ai).
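The step that Opacus automates inside the PyTorch optimizer is DP-SGD: clip each per-sample gradient to a maximum L2 norm, sum, and add Gaussian noise. A stdlib-only sketch of that core idea (not Opacus's actual API, which wraps a model, optimizer, and data loader):

```python
import math
import random

# Core DP-SGD arithmetic that Opacus performs per batch, shown on plain
# Python lists. Function names here are illustrative, not Opacus's API.

def clip_gradient(grad, max_norm):
    """Scale `grad` (a list of floats) so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_sgd_gradient(per_sample_grads, max_norm, noise_multiplier, rng):
    """Clipped, noised, averaged gradient for one batch."""
    clipped = [clip_gradient(g, max_norm) for g in per_sample_grads]
    dim, n = len(per_sample_grads[0]), len(per_sample_grads)
    summed = [sum(g[k] for g in clipped) for k in range(dim)]
    sigma = noise_multiplier * max_norm   # noise is scaled to the clip norm
    return [(s + rng.gauss(0.0, sigma)) / n for s in summed]

grads = [[3.0, 4.0], [0.1, 0.2]]          # per-sample gradients
print(clip_gradient(grads[0], max_norm=1.0))  # rescaled to L2 norm 1.0
print(dp_sgd_gradient(grads, 1.0, 1.0, random.Random(0)))
```

Because the noise scale is tied to the clipping norm rather than the data, the released gradient satisfies a differential privacy guarantee whose epsilon is tracked by an accountant in the real library.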

Antipodes of Label Differential Privacy: PATE and ALIBI

1 code implementation NeurIPS 2021 Mani Malek, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramèr

We propose two novel approaches, based respectively on the Laplace mechanism and the PATE framework, and demonstrate their effectiveness on standard benchmarks.

Bayesian Inference · Memorization · +2
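The Laplace mechanism underlying the ALIBI-style approach is easy to state: release a query answer plus Laplace noise with scale sensitivity/epsilon. A hedged, stdlib-only sketch (the paper's full label-DP method is considerably more involved):

```python
import math
import random

# Laplace mechanism sketch: a counting query has sensitivity 1, so adding
# Laplace(0, 1/epsilon) noise makes the release epsilon-differentially
# private. Names here are illustrative, not the paper's implementation.

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count, epsilon, rng, sensitivity=1.0):
    """epsilon-DP release of a counting query."""
    return true_count + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
print(noisy_count(100, epsilon=0.5, rng=rng))  # ~100 plus noise of scale 2
```

Smaller epsilon means a larger noise scale and stronger privacy; PATE instead applies this kind of noise to teacher-ensemble vote counts before a student model is trained on the noised labels.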
