Search Results for author: Deepesh Data

Found 15 papers, 2 papers with code

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations

no code implementations · 6 Jun 2019 · Debraj Basu, Deepesh Data, Can Karakus, Suhas Diggavi

The communication bottleneck has been identified as a significant issue in distributed optimization of large-scale learning models.

Distributed Optimization, Quantization
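
The title combines three communication-reduction ideas: quantization, sparsification, and local computation. Below is a minimal NumPy sketch of one compressed update step, assuming a top-k sparsifier followed by scaled-sign quantization; the compressor, step sizes, and the absence of error feedback are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def top_k(vec, k):
    """Keep only the k largest-magnitude entries; zero out the rest."""
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def quantize_sign(vec):
    """Scaled sign quantization: transmit sign bits plus one scalar scale."""
    nnz = max(1, np.count_nonzero(vec))
    scale = np.linalg.norm(vec, ord=1) / nnz
    return scale * np.sign(vec)

def compressed_local_sgd_step(w, grad_fn, lr=0.1, local_steps=4, k=10):
    """Each worker runs `local_steps` SGD steps locally, then communicates a
    compressed model delta instead of a full-precision gradient per step."""
    w_local = w.copy()
    for _ in range(local_steps):            # local computation
        w_local -= lr * grad_fn(w_local)
    delta = w_local - w                     # model difference to communicate
    return quantize_sign(top_k(delta, k))   # sparsify, then quantize

# Toy usage: quadratic loss 0.5 * ||w - w*||^2 with noisy gradients.
rng = np.random.default_rng(0)
w_star = rng.normal(size=100)
grad = lambda w: (w - w_star) + 0.01 * rng.normal(size=w.shape)
w = np.zeros(100)
for _ in range(50):
    w += compressed_local_sgd_step(w, grad)   # server applies the compressed delta
print("distance to optimum:", np.linalg.norm(w - w_star))
```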

Data Encoding for Byzantine-Resilient Distributed Optimization

no code implementations · 5 Jul 2019 · Deepesh Data, Linqi Song, Suhas Diggavi

In this paper, we propose a method based on data encoding and error correction over real numbers to combat adversarial attacks.

Distributed Optimization
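
As a hedged illustration of the redundancy idea only: the toy below computes each data shard's gradient on several workers and decodes with a coordinate-wise median, so a minority of corrupted reports cannot move the aggregate arbitrarily. The paper uses real-number error-correcting codes, not simple repetition; the repetition factor and decoder here are assumptions for the sketch.

```python
import numpy as np

def robust_decode(worker_grads, replication=3):
    """Toy decoder for repetition-'encoded' data: each shard's gradient is
    reported by `replication` workers, and a coordinate-wise median within
    each replica group discards a minority of adversarial answers."""
    grads = np.asarray(worker_grads)
    groups = grads.reshape(-1, replication, grads.shape[-1])
    return np.median(groups, axis=1).sum(axis=0)

# Toy usage: two shards, three workers per shard, one worker is Byzantine.
rng = np.random.default_rng(1)
shard_grads = [rng.normal(size=5), rng.normal(size=5)]
worker_grads = [g + 0.01 * rng.normal(size=5) for g in shard_grads for _ in range(3)]
worker_grads[1] = 1e6 * np.ones(5)                 # adversarial report
print(np.allclose(robust_decode(worker_grads), sum(shard_grads), atol=0.1))
```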

SPARQ-SGD: Event-Triggered and Compressed Communication in Decentralized Stochastic Optimization

no code implementations · 31 Oct 2019 · Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi

In this paper, we propose and analyze SPARQ-SGD, which is an event-triggered and compressed algorithm for decentralized training of large-scale machine learning models.

Quantization, Stochastic Optimization
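
A minimal sketch of the event-triggering idea, under the assumption of a simple threshold rule: a node broadcasts a compressed version of its parameter change only when it has drifted far enough from the last value its neighbors received. The threshold schedule and the top-k compressor below are placeholders, not SPARQ-SGD's actual choices.

```python
import numpy as np

def maybe_broadcast(w_current, w_last_sent, threshold, compress):
    """Event trigger: communicate only if the local change exceeds `threshold`.
    Returns (message_or_None, new_copy_held_by_neighbors)."""
    drift = np.linalg.norm(w_current - w_last_sent)
    if drift <= threshold:
        return None, w_last_sent              # trigger not fired: stay silent
    msg = compress(w_current - w_last_sent)   # send a compressed delta
    return msg, w_last_sent + msg             # neighbors track the compressed copy

def top5(v):
    """Placeholder compressor: keep the 5 largest-magnitude coordinates."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -5)[-5:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
w, w_sent = rng.normal(size=20), np.zeros(20)
for step in range(10):
    w -= 0.1 * rng.normal(size=20)            # stand-in for a local SGD step
    msg, w_sent = maybe_broadcast(w, w_sent, threshold=0.5, compress=top5)
    print(step, "sent" if msg is not None else "skipped")
```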

SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization

no code implementations · 13 May 2020 · Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi

In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm for decentralized training of large-scale machine learning models over a network.

Byzantine-Resilient SGD in High Dimensions on Heterogeneous Data

no code implementations · 16 May 2020 · Deepesh Data, Suhas Diggavi

In order to be able to apply their filtering procedure in our heterogeneous data setting where workers compute stochastic gradients, we derive a new matrix concentration result, which may be of independent interest.


Successive Refinement of Privacy

no code implementations · 24 May 2020 · Antonious M. Girgis, Deepesh Data, Kamalika Chaudhuri, Christina Fragouli, Suhas Diggavi

This work examines a novel question: how much randomness is needed to achieve local differential privacy (LDP)?
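
For context, the textbook eps-LDP primitive and the randomness it consumes per report is binary randomized response; the sketch below is that standard mechanism, not the paper's successive-refinement construction.

```python
import math
import random

def randomized_response(bit, eps):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    This satisfies eps-local differential privacy for a single binary value."""
    p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

# The per-report randomness is one biased coin flip with bias p_truth; the
# paper asks how little such randomness suffices and how it can be refined
# across successively stronger privacy levels.
eps = 1.0
reports = [randomized_response(1, eps) for _ in range(10000)]
print(sum(reports) / len(reports))   # roughly e/(e+1) ≈ 0.73 when the true bit is 1
```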

Byzantine-Resilient High-Dimensional Federated Learning

no code implementations · 22 Jun 2020 · Deepesh Data, Suhas Diggavi

To combat the adversary, we employ an efficient high-dimensional robust mean estimation algorithm from Steinhardt et al. (ITCS 2018) at the server to filter out corrupt vectors; and to analyze the outlier-filtering procedure, we develop a novel matrix concentration result that may be of independent interest.

Federated Learning
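
A simplified, hedged sketch of the server-side idea: score each worker's vector, drop the ones that look like outliers, and average the rest. The rule below (distance to the coordinate-wise median) is a stand-in; the actual Steinhardt et al. filtering scores points along the top singular direction of the centered data.

```python
import numpy as np

def filtered_mean(worker_grads, keep_fraction=0.8):
    """Crude robust aggregation: score each gradient by its distance to the
    coordinate-wise median and average only the closest `keep_fraction`."""
    grads = np.asarray(worker_grads)
    center = np.median(grads, axis=0)
    scores = np.linalg.norm(grads - center, axis=1)
    n_keep = max(1, int(keep_fraction * len(grads)))
    keep = np.argsort(scores)[:n_keep]
    return grads[keep].mean(axis=0)

# Toy usage: 10 honest workers near 1.0, 2 Byzantine workers sending huge vectors.
rng = np.random.default_rng(0)
honest = [1.0 + 0.1 * rng.normal(size=50) for _ in range(10)]
byzantine = [100.0 * np.ones(50) for _ in range(2)]
print(filtered_mean(honest + byzantine)[:3])   # stays near the honest mean of ~1.0
```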

Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs

no code implementations · 17 Aug 2020 · Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, Ananda Theertha Suresh

We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements, motivated by the federated learning (FL) framework.

Federated Learning
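
A minimal sketch of the shuffle-model pipeline assumed in this line of work: each client locally randomizes its message, a shuffler applies a uniformly random permutation so reports cannot be linked to clients, and the server only aggregates the anonymized multiset. The local randomizer below is plain clipping plus Gaussian noise for illustration; the paper's mechanisms also compress the reports.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_randomizer(update, noise_scale=0.5, clip=1.0):
    """Client-side step: clip the update and add noise before release."""
    update = update * min(1.0, clip / (np.linalg.norm(update) + 1e-12))
    return update + noise_scale * rng.normal(size=update.shape)

def shuffle_and_aggregate(messages):
    """Shuffler: randomly permute messages, then the server averages them."""
    perm = rng.permutation(len(messages))
    return np.mean([messages[i] for i in perm], axis=0)

client_updates = [rng.normal(size=10) for _ in range(100)]
messages = [local_randomizer(u) for u in client_updates]
print(shuffle_and_aggregate(messages)[:3])   # noisy estimate of the average update
```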

QuPeL: Quantized Personalization with Applications to Federated Learning

no code implementations · 23 Feb 2021 · Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi

When each client participating in the (federated) learning process has different requirements for the quantized model (both in value and precision), we formulate a quantized personalization framework by adding to each local client objective a penalty term against a globally trained model to encourage collaboration.

Federated Learning, Quantization
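
The penalty idea in that sentence can be written down directly: each client minimizes its own loss plus a term that pulls its (eventually quantized) personal model toward the global model. A minimal sketch, with the loss, penalty weight, and quantizer all placeholders rather than QuPeL's exact formulation:

```python
import numpy as np

def local_objective(w_local, w_global, local_loss, lam=0.1):
    """Personalized objective: own loss + penalty toward the global model."""
    return local_loss(w_local) + lam * np.sum((w_local - w_global) ** 2)

def uniform_quantize(w, levels=16, lo=-1.0, hi=1.0):
    """Placeholder per-client quantizer; precision (`levels`) may differ per client."""
    step = (hi - lo) / (levels - 1)
    return lo + step * np.round((np.clip(w, lo, hi) - lo) / step)

# Toy usage: one gradient step on the penalized objective, then quantize.
rng = np.random.default_rng(0)
w_global = 0.3 * rng.normal(size=8)
w_local = 0.3 * rng.normal(size=8)
loss = lambda w: np.sum((w - 0.5) ** 2)                         # stand-in local loss
grad = 2 * (w_local - 0.5) + 2 * 0.1 * (w_local - w_global)     # gradient of the objective
w_local = uniform_quantize(w_local - 0.05 * grad, levels=16)
print(local_objective(w_local, w_global, loss))
```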

On the Renyi Differential Privacy of the Shuffle Model

no code implementations · 11 May 2021 · Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Ananda Theertha Suresh, Peter Kairouz

The central question studied in this paper is Renyi Differential Privacy (RDP) guarantees for general discrete local mechanisms in the shuffle privacy model.

Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning

no code implementations · NeurIPS 2021 · Antonious M. Girgis, Deepesh Data, Suhas Diggavi

We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from which we need privacy.

Federated Learning, Stochastic Optimization

QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning

no code implementations · NeurIPS 2021 · Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi

In this work, we introduce a quantized and personalized FL algorithm QuPeD that facilitates collective (personalized model compression) training via knowledge distillation (KD) among clients who have access to heterogeneous data and resources.

Federated Learning, Knowledge Distillation
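
For reference, a knowledge-distillation loss in its standard form: cross-entropy on the labels plus a temperature-scaled KL term matching the student's soft predictions to the teacher's. QuPeD applies distillation between personalized (compressed) and global models; the sketch below is the generic loss, not the paper's exact training procedure.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * cross-entropy(student, labels)
       + (1 - alpha) * T^2 * KL(teacher_soft || student_soft)."""
    p_student = softmax(student_logits)
    ce = -np.mean(np.log(p_student[np.arange(len(labels)), labels] + 1e-12))
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl

# Toy usage with random logits for a batch of 4 examples and 3 classes.
rng = np.random.default_rng(0)
print(distillation_loss(rng.normal(size=(4, 3)), rng.normal(size=(4, 3)),
                        labels=np.array([0, 2, 1, 0])))
```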

A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy

no code implementations · 5 Jul 2022 · Kaan Ozkara, Antonious M. Girgis, Deepesh Data, Suhas Diggavi

In this work, we begin with a generative framework that could unify several different algorithms as well as suggest new ones.

Federated Learning, Knowledge Distillation
