Search Results for author: Suhas Diggavi

Found 32 papers, 4 papers with code

Randomized Algorithms for Comparison-based Search

no code implementations NeurIPS 2011 Dominique Tschopp, Suhas Diggavi, Payam Delgosha, Soheil Mohajer

This paper addresses the problem of finding the nearest neighbor (or one of the $R$-nearest neighbors) of a query object $q$ in a database of $n$ objects, when we can only use a comparison oracle.

Object Retrieval
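
As a minimal illustration of the comparison-oracle setting (a hypothetical Python sketch, not the paper's randomized algorithm), the snippet below finds a nearest neighbor using nothing but answers to "is object a closer to the query than object b?":

    # Minimal sketch: nearest-neighbor search through a comparison oracle only.
    # The linear scan is NOT the paper's randomized algorithm; it just shows the
    # oracle interface the problem assumes.

    def make_oracle(query, distance):
        """Oracle answering: is `a` closer to the query than `b`?"""
        def closer(a, b):
            return distance(query, a) < distance(query, b)
        return closer

    def nearest_neighbor(objects, closer):
        best = objects[0]
        for obj in objects[1:]:
            if closer(obj, best):   # only comparisons, never raw distances
                best = obj
        return best

    # Hypothetical usage with scalar objects and absolute-difference distance.
    db = [0.1, 0.8, 0.35, 0.95]
    oracle = make_oracle(query=0.4, distance=lambda q, x: abs(q - x))
    print(nearest_neighbor(db, oracle))  # -> 0.35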

Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning

no code implementations14 Mar 2018 Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin

Performance of distributed optimization and learning systems is bottlenecked by "straggler" nodes and slow communication links, which significantly delay computation.

Distributed Optimization, regression

Privacy-Utility Trade-off of Linear Regression under Random Projections and Additive Noise

no code implementations13 Feb 2019 Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi

Data privacy is an important concern in machine learning, and is fundamentally at odds with the task of training useful learning models, which typically require the acquisition of large amounts of private user data.

BIG-bench Machine Learning, regression

Differentially Private Consensus-Based Distributed Optimization

no code implementations19 Mar 2019 Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi

Consensus-based optimization consists of a set of computational nodes arranged in a graph, each with a local objective that depends on its local data; in every step, each node takes a linear combination of its neighbors' messages as well as a new gradient step.

Distributed Optimization
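
A minimal sketch of the consensus-plus-gradient update described above, assuming a doubly stochastic mixing matrix on a ring graph and simple quadratic local objectives (the differential-privacy noise that the paper adds is omitted, and all names here are illustrative):

    import numpy as np

    # One consensus-based step: each node mixes its neighbors' iterates with
    # weights W, then takes a local gradient step on f_i(x) = 0.5*||x - a_i||^2.
    rng = np.random.default_rng(0)
    n_nodes, dim, lr = 4, 3, 0.1
    A = rng.normal(size=(n_nodes, dim))          # local data a_i
    X = np.zeros((n_nodes, dim))                 # current iterates x_i

    # Doubly stochastic mixing matrix for an assumed ring topology.
    W = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        W[i, i] = 0.5
        W[i, (i - 1) % n_nodes] = 0.25
        W[i, (i + 1) % n_nodes] = 0.25

    for step in range(100):
        mixed = W @ X                            # linear combination of neighbors' messages
        grads = mixed - A                        # gradient of 0.5*||x - a_i||^2
        X = mixed - lr * grads                   # local gradient step

    print(X.mean(axis=0), A.mean(axis=0))        # node average approaches the average of a_i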

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations

no code implementations6 Jun 2019 Debraj Basu, Deepesh Data, Can Karakus, Suhas Diggavi

Communication bottleneck has been identified as a significant issue in distributed optimization of large-scale learning models.

Distributed Optimization, Quantization
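
The title points to a combination of quantization, sparsification, and local computation; a toy compression operator in that spirit (an assumption for illustration, not the scheme analyzed in the paper) could look like this:

    import numpy as np

    # Toy gradient compressor: keep only the top-k entries by magnitude, then
    # quantize them to a shared scale and a sign bit. Illustrative only.
    def topk_sign_compress(g, k):
        idx = np.argsort(np.abs(g))[-k:]          # top-k coordinates
        out = np.zeros_like(g)
        scale = np.abs(g[idx]).mean()             # one shared magnitude
        out[idx] = scale * np.sign(g[idx])        # 1-bit quantization per kept entry
        return out

    g = np.random.default_rng(1).normal(size=10)
    print(g)
    print(topk_sign_compress(g, k=3))             # sparse, coarsely quantized update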

Data Encoding for Byzantine-Resilient Distributed Optimization

no code implementations5 Jul 2019 Deepesh Data, Linqi Song, Suhas Diggavi

In this paper, we propose a method based on data encoding and error correction over real numbers to combat adversarial attacks.

Distributed Optimization

SPARQ-SGD: Event-Triggered and Compressed Communication in Decentralized Stochastic Optimization

no code implementations31 Oct 2019 Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi

In this paper, we propose and analyze SPARQ-SGD, which is an event-triggered and compressed algorithm for decentralized training of large-scale machine learning models.

Quantization, Stochastic Optimization
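
A hedged sketch of the event-triggered idea: a node transmits an update only when its model has drifted past a threshold since the last transmission. The triggering rule below is illustrative (SPARQ-SGD also compresses what is sent, which is omitted here):

    import numpy as np

    # Illustrative event-triggered rule: transmit only when the local model has
    # moved more than `threshold` since the last broadcast.
    def maybe_transmit(x_current, x_last_sent, threshold):
        if np.linalg.norm(x_current - x_last_sent) > threshold:
            return True, x_current.copy()
        return False, x_last_sent

    rng = np.random.default_rng(2)
    x, x_last = np.zeros(5), np.zeros(5)
    for step in range(10):
        x += 0.05 * rng.normal(size=5)            # stand-in for local SGD steps
        sent, x_last = maybe_transmit(x, x_last, threshold=0.2)
        print(step, "transmit" if sent else "skip")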

On Distributed Quantization for Classification

no code implementations1 Nov 2019 Osama A. Hanna, Yahya H. Ezzeldin, Tara Sadjadpour, Christina Fragouli, Suhas Diggavi

We consider the problem of distributed feature quantization, where the goal is to enable a pretrained classifier at a central node to carry out its classification on features that are gathered from distributed nodes through communication constrained channels.

Classification, General Classification, +1
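
A toy version of the setup in the abstract (the uniform quantizer and the linear classifier are assumptions of this sketch, not the paper's design): each distributed node quantizes its feature before sending it to a central, pretrained classifier.

    import numpy as np

    # Each node quantizes its scalar feature to `bits` bits; the central node
    # applies a (pretrained) linear classifier to the quantized feature vector.
    def uniform_quantize(x, bits, lo=-1.0, hi=1.0):
        levels = 2 ** bits
        step = (hi - lo) / (levels - 1)
        return lo + np.round((np.clip(x, lo, hi) - lo) / step) * step

    rng = np.random.default_rng(3)
    w, b = rng.normal(size=4), 0.0                 # stand-in "pretrained" classifier
    features = rng.uniform(-1, 1, size=4)          # one feature per distributed node
    q_features = uniform_quantize(features, bits=3)

    print("full-precision decision:", np.sign(w @ features + b))
    print("quantized decision:     ", np.sign(w @ q_features + b))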

SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization

no code implementations13 May 2020 Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi

In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm for decentralized training of large-scale machine learning models over a network.

Byzantine-Resilient SGD in High Dimensions on Heterogeneous Data

no code implementations16 May 2020 Deepesh Data, Suhas Diggavi

To apply their filtering procedure in our heterogeneous data setting, where workers compute stochastic gradients, we derive a new matrix concentration result, which may be of independent interest.

Vocal Bursts Intensity Prediction
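
The paper's filtering step is a robust mean estimation procedure; as a much simpler stand-in that only conveys the idea of discarding outlying stochastic gradients before averaging (this median-distance filter is chosen for brevity and is not the estimator used in the paper):

    import numpy as np

    # Simplified stand-in for Byzantine-resilient aggregation: drop the gradients
    # farthest from the coordinate-wise median, then average the rest.
    def filtered_mean(grads, n_drop):
        med = np.median(grads, axis=0)
        dists = np.linalg.norm(grads - med, axis=1)
        keep = np.argsort(dists)[: len(grads) - n_drop]
        return grads[keep].mean(axis=0)

    rng = np.random.default_rng(4)
    honest = rng.normal(loc=1.0, scale=0.1, size=(8, 3))   # honest workers' gradients
    byzantine = np.full((2, 3), 50.0)                       # corrupted gradients
    grads = np.vstack([honest, byzantine])

    print("naive mean:   ", grads.mean(axis=0))
    print("filtered mean:", filtered_mean(grads, n_drop=2))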

Successive Refinement of Privacy

no code implementations24 May 2020 Antonious M. Girgis, Deepesh Data, Kamalika Chaudhuri, Christina Fragouli, Suhas Diggavi

This work examines a novel question: how much randomness is needed to achieve local differential privacy (LDP)?
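
To make the question concrete, the textbook randomized-response mechanism for a single private bit satisfies epsilon-LDP and consumes one biased coin flip per report, which is the kind of randomness budget the question refers to (this standard example is for illustration and is not the paper's construction):

    import math, random

    # Randomized response: report the true bit with probability e^eps/(e^eps+1),
    # otherwise flip it. The randomness used is one biased coin flip per report.
    def randomized_response(bit, eps):
        p_true = math.exp(eps) / (math.exp(eps) + 1.0)
        return bit if random.random() < p_true else 1 - bit

    random.seed(0)
    reports = [randomized_response(1, eps=1.0) for _ in range(10000)]
    print(sum(reports) / len(reports))   # about e/(e+1) ~ 0.73 when the true bit is 1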

Byzantine-Resilient High-Dimensional Federated Learning

no code implementations22 Jun 2020 Deepesh Data, Suhas Diggavi

To combat the adversary, we employ an efficient high-dimensional robust mean estimation algorithm from Steinhardt et al. (ITCS 2018) at the server to filter out corrupt vectors; and to analyze the outlier-filtering procedure, we develop a novel matrix concentration result that may be of independent interest.

Federated Learning, Vocal Bursts Intensity Prediction

Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs

no code implementations17 Aug 2020 Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, Ananda Theertha Suresh

We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements, motivated by the federated learning (FL) framework.

Federated Learning

Quantizing data for distributed learning

no code implementations14 Dec 2020 Osama A. Hanna, Yahya H. Ezzeldin, Christina Fragouli, Suhas Diggavi

In this paper, we propose an alternative approach for learning from distributed data that quantizes the data instead of the gradients, and can support learning in applications where the size of gradient updates is prohibitive.

Quantization

QuPeL: Quantized Personalization with Applications to Federated Learning

no code implementations23 Feb 2021 Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi

When each client participating in the (federated) learning process has different requirements for the quantized model (both in value and in precision), we formulate a quantized personalization framework by adding to each local client objective a penalty term against a globally trained model, encouraging collaboration.

Federated Learning, Quantization
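
A hedged reading of the penalty described above: each client minimizes its own loss plus a proximity term toward the global model, e.g. F_i(w) = f_i(w) + lambda * ||w - w_global||^2. The quadratic penalty, the variable names, and the least-squares loss below are assumptions of this sketch (QuPeL additionally quantizes the personalized models):

    import numpy as np

    # Illustrative local objective with a penalty toward a globally trained model.
    def local_gradient(w, X, y, w_global, lam):
        return X.T @ (X @ w - y) + 2 * lam * (w - w_global)

    rng = np.random.default_rng(5)
    X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
    w, w_global = np.zeros(3), rng.normal(size=3)
    for _ in range(200):
        w -= 0.01 * local_gradient(w, X, y, w_global, lam=0.5)
    print(w, w_global)   # w trades off the local fit against staying near w_global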

On the Renyi Differential Privacy of the Shuffle Model

no code implementations11 May 2021 Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Ananda Theertha Suresh, Peter Kairouz

The central question studied in this paper is Renyi Differential Privacy (RDP) guarantees for general discrete local mechanisms in the shuffle privacy model.

Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning

no code implementations NeurIPS 2021 Antonious M. Girgis, Deepesh Data, Suhas Diggavi

We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy.

Federated Learning, Stochastic Optimization

QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning

no code implementations NeurIPS 2021 Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi

In this work, we introduce a quantized and personalized FL algorithm, QuPeD, that facilitates collective (personalized model compression) training via knowledge distillation (KD) among clients who have access to heterogeneous data and resources.

Federated Learning, Knowledge Distillation, +2
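
A minimal sketch of the knowledge-distillation ingredient mentioned above: a soft-label KL term with temperature pulls the student's predictions toward a teacher's. The loss weighting and temperature are illustrative assumptions; QuPeD's personalization and quantization terms are omitted.

    import numpy as np

    # Toy knowledge-distillation loss: hard-label cross-entropy plus a KL term
    # toward the teacher's softened predictions.
    def softmax(z, T=1.0):
        e = np.exp(z / T - np.max(z / T))
        return e / e.sum()

    def kd_loss(student_logits, teacher_logits, label, alpha=0.5, T=2.0):
        p_s, p_t = softmax(student_logits, T), softmax(teacher_logits, T)
        ce = -np.log(softmax(student_logits)[label])        # hard-label cross-entropy
        kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))      # distillation term
        return (1 - alpha) * ce + alpha * (T ** 2) * kl

    print(kd_loss(np.array([2.0, 0.5, -1.0]), np.array([1.5, 1.0, -0.5]), label=0))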

Coded Estimation: Design of Backscatter Array Codes for 3D Orientation Estimation

no code implementations1 Dec 2021 Mohamad Rida Rammal, Suhas Diggavi, Ashutosh Sabharwal

We consider the problem of estimating the orientation of a 3D object with the assistance of configurable backscatter tags.

TAG

Decentralized Multi-Task Stochastic Optimization With Compressed Communications

no code implementations23 Dec 2021 Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar

The paper develops algorithms and obtains performance bounds for two models of local information availability at the nodes: (i) sample feedback, where each node has direct access to samples of its local random variable in order to evaluate its local cost, and (ii) bandit feedback, where such samples are unavailable and each node only observes the values of its local cost function at two random points close to its decision.

Stochastic Optimization
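
For the bandit-feedback model in (ii), a standard two-point zeroth-order gradient estimate queries the local cost at two nearby points and uses their difference; the estimator below is a generic illustration (the cost function, smoothing radius, and step size are arbitrary choices, not the paper's):

    import numpy as np

    # Two-point (zeroth-order) gradient estimator: only cost values at two
    # nearby points are observed, never gradients.
    def two_point_grad(cost, x, mu, rng):
        u = rng.normal(size=x.shape)
        u /= np.linalg.norm(u)
        return (cost(x + mu * u) - cost(x - mu * u)) / (2 * mu) * len(x) * u

    rng = np.random.default_rng(6)
    cost = lambda x: np.sum((x - 1.0) ** 2)        # stand-in local cost
    x = np.zeros(5)
    for _ in range(2000):
        x -= 0.01 * two_point_grad(cost, x, mu=1e-3, rng=rng)
    print(x)                                        # approaches the minimizer at all-ones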

On Leave-One-Out Conditional Mutual Information For Generalization

no code implementations1 Jul 2022 Mohamad Rida Rammal, Alessandro Achille, Aditya Golatkar, Suhas Diggavi, Stefano Soatto

We derive information theoretic generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI).

Generalization Bounds, Image Classification

A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy

no code implementations5 Jul 2022 Kaan Ozkara, Antonious M. Girgis, Deepesh Data, Suhas Diggavi

In this work, we begin with a generative framework that could potentially unify several different algorithms as well as suggest new algorithms.

Federated Learning, Knowledge Distillation

Differentially Private Stochastic Linear Bandits: (Almost) for Free

no code implementations7 Jul 2022 Osama A. Hanna, Antonious M. Girgis, Christina Fragouli, Suhas Diggavi

In the shuffled model, we also achieve a regret of $\tilde{O}(\sqrt{T}+\frac{1}{\epsilon})$ as in the central case, while the best previously known algorithm suffers a regret of $\tilde{O}(\frac{1}{\epsilon}T^{3/5})$.

HQAlign: Aligning nanopore reads for SV detection using current-level modeling

no code implementations10 Jan 2023 Dhaivat Joshi, Suhas Diggavi, Mark J. P. Chaisson, Sreeram Kannan

Moreover, HQAlign improves the alignment rate to 89.35% from minimap2's 85.64% for nanopore read alignment to the recent telomere-to-telomere CHM13 assembly, and from 83.48% to 86.65% for nanopore read alignment to the GRCh37 human genome.

Multi-Message Shuffled Privacy in Federated Learning

no code implementations22 Feb 2023 Antonious M. Girgis, Suhas Diggavi

This also resolves an open question on the optimal trade-off for private vector sum in the multi-message shuffled (MMS) model.

Distributed Optimization, Federated Learning, +1

Representation Transfer Learning via Multiple Pre-trained models for Linear Regression

no code implementations25 May 2023 Navjot Singh, Suhas Diggavi

Assuming a representation structure for the data-generating linear models at the source and target domains, we propose a representation-transfer-based learning method for constructing the target model.

regression, Transfer Learning
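
A hedged sketch of the representation-transfer idea for linear regression: estimate a shared low-dimensional representation from the source models (here via an SVD of the stacked source coefficient vectors, which is an assumption of this sketch rather than necessarily the paper's estimator), then fit only the low-dimensional target coefficients:

    import numpy as np

    # Source regression vectors are assumed to share a representation B (d x r);
    # the target task fits only an r-dimensional coefficient on top of B.
    rng = np.random.default_rng(7)
    d, r, n_sources, n_target = 20, 3, 10, 15
    B_true = np.linalg.qr(rng.normal(size=(d, r)))[0]            # shared representation
    source_ws = (B_true @ rng.normal(size=(r, n_sources))).T     # pre-trained source models

    # Estimate the shared representation from the source models.
    B_hat = np.linalg.svd(source_ws.T, full_matrices=False)[0][:, :r]

    # Target task: fewer samples than dimensions, fit only r coefficients.
    w_target = B_true @ rng.normal(size=r)
    X = rng.normal(size=(n_target, d))
    y = X @ w_target + 0.01 * rng.normal(size=n_target)
    alpha = np.linalg.lstsq(X @ B_hat, y, rcond=None)[0]
    print(np.linalg.norm(B_hat @ alpha - w_target))               # small despite n_target < d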

FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space

1 code implementation NeurIPS 2023 Shengzhong Liu, Tomoyoshi Kimura, Dongxin Liu, Ruijie Wang, Jinyang Li, Suhas Diggavi, Mani Srivastava, Tarek Abdelzaher

Existing multimodal contrastive frameworks mostly rely on the shared information between sensory modalities, but do not explicitly consider the exclusive modality information that could be critical to understanding the underlying sensing physics.

Contrastive Learning, Time Series

Hierarchical Bayes Approach to Personalized Federated Unsupervised Learning

1 code implementation19 Feb 2024 Kaan Ozkara, Bruce Huang, Ruida Zhou, Suhas Diggavi

Though there has been a plethora of algorithms proposed for personalized supervised learning, discovering the structure of local data through personalized unsupervised learning is less explored.

Dimensionality Reduction, Federated Learning, +1

On the Efficiency and Robustness of Vibration-based Foundation Models for IoT Sensing: A Case Study

no code implementations3 Apr 2024 Tomoyoshi Kimura, Jinyang Li, Tianshi Wang, Denizhan Kara, Yizhuo Chen, Yigong Hu, Ruijie Wang, Maggie Wigness, Shengzhong Liu, Mani Srivastava, Suhas Diggavi, Tarek Abdelzaher

This paper demonstrates the potential of vibration-based Foundation Models (FMs), pre-trained with unlabeled sensing data, to improve the robustness of run-time inference in (a class of) IoT applications.
