Search Results for author: Suhas Diggavi

Found 25 papers, 2 papers with code

A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy

no code implementations • 5 Jul 2022 • Kaan Ozkara, Antonious M. Girgis, Deepesh Data, Suhas Diggavi

In this work, we begin with a generative framework that could unify several different algorithms as well as suggest new ones.

Federated Learning • Knowledge Distillation

On Leave-One-Out Conditional Mutual Information For Generalization

no code implementations • 1 Jul 2022 • Mohamad Rida Rammal, Alessandro Achille, Aditya Golatkar, Suhas Diggavi, Stefano Soatto

We derive information theoretic generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI).

Generalization Bounds • Image Classification

Decentralized Multi-Task Stochastic Optimization With Compressed Communications

no code implementations • 23 Dec 2021 • Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar

The paper develops algorithms and obtains performance bounds for two different models of local information availability at the nodes: (i) sample feedback, where each node has direct access to samples of the local random variable to evaluate its local cost, and (ii) bandit feedback, where samples of the random variables are not available, but only the values of the local cost functions at two random points close to the decision are available to each node.

Stochastic Optimization
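The "bandit feedback" model described above can be illustrated with a standard two-point gradient estimator (a generic sketch of the idea, not the paper's exact construction): from only two function values at symmetric random perturbations of the decision point, a node can form a stochastic estimate of its local gradient.

```python
import numpy as np

def two_point_gradient(f, x, delta=1e-3, rng=None):
    """Estimate grad f(x) from two function evaluations at random
    symmetric perturbations of x (bandit feedback: no gradient access)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)  # random unit direction
    d = x.size
    # Directional finite difference, rescaled so that averaging over
    # random directions approximates the true gradient.
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

# Usage: averaging many estimates recovers the gradient of f(x) = ||x||^2.
f = lambda z: float(z @ z)                    # true gradient is 2x
x = np.array([1.0, -2.0, 0.5])
est = np.mean([two_point_gradient(f, x, rng=np.random.default_rng(s))
               for s in range(2000)], axis=0)
```

For quadratic objectives the finite-difference term is exact, so the only error left after averaging is the sampling noise of the random directions.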

Coded Estimation: Design of Backscatter Array Codes for 3D Orientation Estimation

no code implementations • 1 Dec 2021 • Mohamad Rida Rammal, Suhas Diggavi, Ashutosh Sabharwal

We consider the problem of estimating the orientation of a 3D object with the assistance of configurable backscatter tags.

TAG

QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning

no code implementations • NeurIPS 2021 • Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi

In this work, we introduce a quantized and personalized FL algorithm QuPeD that facilitates collective (personalized model compression) training via knowledge distillation (KD) among clients who have access to heterogeneous data and resources.

Federated Learning • Knowledge Distillation • +2

Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning

no code implementations • NeurIPS 2021 • Antonious M. Girgis, Deepesh Data, Suhas Diggavi

We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy.

Federated Learning • Stochastic Optimization

On the Renyi Differential Privacy of the Shuffle Model

no code implementations • 11 May 2021 • Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Ananda Theertha Suresh, Peter Kairouz

The central question studied in this paper is Renyi Differential Privacy (RDP) guarantees for general discrete local mechanisms in the shuffle privacy model.

QuPeL: Quantized Personalization with Applications to Federated Learning

no code implementations • 23 Feb 2021 • Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi

When each client participating in the (federated) learning process has different requirements of the quantized model (both in value and precision), we formulate a quantized personalization framework by introducing a penalty term for local client objectives against a globally trained model to encourage collaboration.

Federated Learning • Quantization
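The penalty formulation described in the abstract can be illustrated schematically (a hypothetical toy objective with a least-squares local loss, not the paper's exact formulation): each client minimizes its local loss plus a proximal term tying its personal model to the globally trained one, and deploys a quantized version matching its precision constraints.

```python
import numpy as np

def personalized_objective(theta_local, theta_global, X, y, lam=0.1):
    """Local least-squares loss plus a penalty term that encourages
    collaboration by pulling the client's personal model toward the
    globally trained model."""
    local_loss = np.mean((X @ theta_local - y) ** 2)
    penalty = lam * np.linalg.norm(theta_local - theta_global) ** 2
    return local_loss + penalty

def quantize(theta, levels):
    """Round each weight to the nearest allowed quantization level,
    modeling a client's value/precision requirements."""
    levels = np.asarray(levels, dtype=float)
    idx = np.argmin(np.abs(theta[:, None] - levels[None, :]), axis=1)
    return levels[idx]
```

A resource-constrained client might train `theta_local` against this objective and then deploy `quantize(theta_local, [-1.0, 0.0, 1.0])`; the penalty weight `lam` trades personalization against agreement with the global model.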

Quantizing data for distributed learning

no code implementations • 14 Dec 2020 • Osama A. Hanna, Yahya H. Ezzeldin, Christina Fragouli, Suhas Diggavi

In this paper, we propose an alternate approach to learn from distributed data that quantizes data instead of gradients, and can support learning over applications where the size of gradient updates is prohibitive.

Quantization

Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs

no code implementations • 17 Aug 2020 • Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, Ananda Theertha Suresh

We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements, motivated by the federated learning (FL) framework.

Federated Learning

Byzantine-Resilient High-Dimensional Federated Learning

no code implementations • 22 Jun 2020 • Deepesh Data, Suhas Diggavi

To combat the adversary, we employ an efficient high-dimensional robust mean estimation algorithm from Steinhardt et al. (ITCS 2018) at the server to filter out corrupt vectors; and to analyze the outlier-filtering procedure, we develop a novel matrix concentration result that may be of independent interest.

Federated Learning
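The outlier-filtering idea can be sketched in simplified form (a generic spectral filter in the spirit of robust mean estimation, not the paper's actual algorithm or analysis): score each client's vector by its projection onto the top principal direction of the centered data, drop the highest-scoring vectors, and average the rest.

```python
import numpy as np

def spectral_filter(vectors, n_drop):
    """Simplified robust aggregation: corrupt vectors tend to dominate
    the direction of maximum variance, so score each vector by its
    projection onto that direction, drop the n_drop highest-scoring
    vectors, and average the remainder."""
    V = np.asarray(vectors, dtype=float)
    centered = V - V.mean(axis=0)
    # Top right-singular vector = direction of maximum variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = np.abs(centered @ vt[0])
    keep = np.argsort(scores)[: len(V) - n_drop]
    return V[keep].mean(axis=0)

# Usage: 50 honest gradient vectors near zero, 5 corrupt ones far away.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(50, 5))
corrupt = np.full((5, 5), 10.0)
est = spectral_filter(np.vstack([honest, corrupt]), n_drop=5)
```

Here the corrupt vectors pull the empirical mean toward them, but their huge projections along the top principal direction make them easy to identify and discard.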

Successive Refinement of Privacy

no code implementations • 24 May 2020 • Antonious M. Girgis, Deepesh Data, Kamalika Chaudhuri, Christina Fragouli, Suhas Diggavi

This work examines a novel question: how much randomness is needed to achieve local differential privacy (LDP)?

Byzantine-Resilient SGD in High Dimensions on Heterogeneous Data

no code implementations • 16 May 2020 • Deepesh Data, Suhas Diggavi

In order to be able to apply their filtering procedure in our {\em heterogeneous} data setting where workers compute {\em stochastic} gradients, we derive a new matrix concentration result, which may be of independent interest.

SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization

no code implementations • 13 May 2020 • Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi

In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm for decentralized training of large-scale machine learning models over a network.

On Distributed Quantization for Classification

no code implementations • 1 Nov 2019 • Osama A. Hanna, Yahya H. Ezzeldin, Tara Sadjadpour, Christina Fragouli, Suhas Diggavi

We consider the problem of distributed feature quantization, where the goal is to enable a pretrained classifier at a central node to carry out its classification on features that are gathered from distributed nodes through communication constrained channels.

Classification • General Classification • +1

SPARQ-SGD: Event-Triggered and Compressed Communication in Decentralized Stochastic Optimization

no code implementations • 31 Oct 2019 • Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi

In this paper, we propose and analyze SPARQ-SGD, which is an event-triggered and compressed algorithm for decentralized training of large-scale machine learning models.

Quantization • Stochastic Optimization

Data Encoding for Byzantine-Resilient Distributed Optimization

no code implementations • 5 Jul 2019 • Deepesh Data, Linqi Song, Suhas Diggavi

In this paper, we propose a method based on data encoding and error correction over real numbers to combat adversarial attacks.

Distributed Optimization

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations

no code implementations • 6 Jun 2019 • Debraj Basu, Deepesh Data, Can Karakus, Suhas Diggavi

Communication bottleneck has been identified as a significant issue in distributed optimization of large-scale learning models.

Distributed Optimization • Quantization

Differentially Private Consensus-Based Distributed Optimization

no code implementations • 19 Mar 2019 • Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi

Consensus-based optimization consists of a set of computational nodes arranged in a graph, each having a local objective that depends on their local data, where in every step nodes take a linear combination of their neighbors' messages, as well as taking a new gradient step.

Distributed Optimization
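The per-step update described in the abstract can be sketched as follows (a minimal sketch with a hypothetical 3-node network and quadratic local objectives; the mixing matrix `W` is assumed doubly stochastic, and the differential-privacy noise that is the paper's focus is omitted):

```python
import numpy as np

def consensus_gradient_step(X, W, grads, step):
    """One consensus-based optimization step: every node takes a linear
    combination of its neighbors' iterates (rows of X, mixed by W) and
    then a gradient step on its local objective."""
    return W @ X - step * grads

# Usage: 3 fully connected nodes, node i minimizing (x - b_i)^2.
b = np.array([[0.0], [3.0], [6.0]])            # local optima
W = np.array([[0.50, 0.25, 0.25],              # doubly stochastic
              [0.25, 0.50, 0.25],              # mixing matrix
              [0.25, 0.25, 0.50]])
X = np.zeros((3, 1))                           # one iterate per node
for _ in range(200):
    X = consensus_gradient_step(X, W, grads=2 * (X - b), step=0.1)
# With a constant step size, the nodes settle near the network-wide
# optimum mean(b) = 3, each retaining a small bias toward its own b_i.
```

The residual per-node bias at a constant step size is exactly the effect that decreasing-step or gradient-tracking schemes are designed to remove.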

Privacy-Utility Trade-off of Linear Regression under Random Projections and Additive Noise

no code implementations • 13 Feb 2019 • Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi

Data privacy is an important concern in machine learning, and is fundamentally at odds with the task of training useful learning models, which typically require the acquisition of large amounts of private user data.

Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning

no code implementations • 14 Mar 2018 • Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin

Performance of distributed optimization and learning systems is bottlenecked by "straggler" nodes and slow communication links, which significantly delay computation.

Distributed Optimization

Randomized Algorithms for Comparison-based Search

no code implementations • NeurIPS 2011 • Dominique Tschopp, Suhas Diggavi, Payam Delgosha, Soheil Mohajer

This paper addresses the problem of finding the nearest neighbor (or one of the $R$-nearest neighbors) of a query object $q$ in a database of $n$ objects, when we can only use a comparison oracle.
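The comparison-oracle model can be illustrated with a toy oracle and a naive exhaustive baseline (a hypothetical sketch for intuition; the paper's contribution is randomized algorithms that use far fewer oracle calls): the searcher never sees distances, only which of two objects is closer to the query.

```python
import random

def make_oracle(dist, q):
    """Comparison oracle for a hidden query q: given objects a and b,
    it reveals only which one is closer to q, never the distances."""
    return lambda a, b: a if dist(a, q) <= dist(b, q) else b

def tournament_nn(objects, oracle, seed=0):
    """Naive baseline: shuffle the database and keep the winner of
    successive pairwise oracle comparisons.  A full pass uses n - 1
    oracle calls and returns the exact nearest neighbor."""
    order = objects[:]
    random.Random(seed).shuffle(order)
    champion = order[0]
    for challenger in order[1:]:
        champion = oracle(champion, challenger)
    return champion

# Usage: nearest multiple of 7 to a hidden query q = 40 under |a - q|.
db = list(range(0, 100, 7))
oracle = make_oracle(lambda a, q: abs(a - q), q=40)
```

Against this linear-scan baseline, the interesting question is how much precomputation and randomization can reduce the number of oracle calls per query, which is what the paper studies.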
