no code implementations • 7 Oct 2024 • Ruida Zhou, Chao Tian, Suhas Diggavi
Large language models have demonstrated impressive in-context learning (ICL) capability.
no code implementations • 30 Aug 2024 • Mohamad Rida Rammal, Ruida Zhou, Suhas Diggavi
However, with the push for ever-larger language models, relying on valuation methods that require training becomes increasingly expensive and dependent on specific techniques.
no code implementations • 3 Apr 2024 • Tomoyoshi Kimura, Jinyang Li, Tianshi Wang, Denizhan Kara, Yizhuo Chen, Yigong Hu, Ruijie Wang, Maggie Wigness, Shengzhong Liu, Mani Srivastava, Suhas Diggavi, Tarek Abdelzaher
This paper demonstrates the potential of vibration-based Foundation Models (FMs), pre-trained with unlabeled sensing data, to improve the robustness of run-time inference in (a class of) IoT applications.
1 code implementation • 19 Feb 2024 • Kaan Ozkara, Bruce Huang, Ruida Zhou, Suhas Diggavi
Though a plethora of algorithms has been proposed for personalized supervised learning, discovering the structure of local data through personalized unsupervised learning is less explored.
1 code implementation • NeurIPS 2023 • Shengzhong Liu, Tomoyoshi Kimura, Dongxin Liu, Ruijie Wang, Jinyang Li, Suhas Diggavi, Mani Srivastava, Tarek Abdelzaher
Existing multimodal contrastive frameworks mostly rely on the shared information between sensory modalities, but do not explicitly consider the exclusive modality information that could be critical to understanding the underlying sensing physics.
no code implementations • 25 May 2023 • Navjot Singh, Suhas Diggavi
Assuming a shared representation structure for the data-generating linear models at the source and target domains, we propose a representation-transfer-based learning method for constructing the target model.
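A minimal sketch of the representation-transfer idea under assumed notation (not the paper's exact method): estimate a shared low-dimensional subspace from per-source least-squares solutions, then fit the target model inside that subspace. All function and variable names here are illustrative.

```python
import numpy as np

def transfer_linear_regression(source_tasks, X_t, y_t, k):
    """Illustrative sketch: learn a shared k-dim representation from the
    source tasks, then fit the target model within that representation."""
    # Per-source ordinary least-squares estimates (one column per source task)
    thetas = np.column_stack([np.linalg.lstsq(X, y, rcond=None)[0]
                              for X, y in source_tasks])
    # Shared representation: top-k left singular vectors of the stacked estimates
    B, _, _ = np.linalg.svd(thetas, full_matrices=False)
    B = B[:, :k]                      # d x k basis for the shared subspace
    # Target head: regress y_t on the projected features X_t @ B
    alpha, *_ = np.linalg.lstsq(X_t @ B, y_t, rcond=None)
    return B @ alpha                  # target model in the original d-dim space
```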
no code implementations • 22 Feb 2023 • Antonious M. Girgis, Suhas Diggavi
This also resolves an open question on the optimal trade-off for private vector sum in the MMS model.
no code implementations • 10 Jan 2023 • Dhaivat Joshi, Suhas Diggavi, Mark J. P. Chaisson, Sreeram Kannan
Moreover, HQAlign improves the alignment rate from 85.64% (minimap2) to 89.35% for nanopore reads aligned to the recent telomere-to-telomere CHM13 assembly, and from 83.48% to 86.65% for nanopore reads aligned to the GRCh37 human genome.
no code implementations • 7 Jul 2022 • Osama A. Hanna, Antonious M. Girgis, Christina Fragouli, Suhas Diggavi
In the shuffled model, we also achieve a regret of $\tilde{O}(\sqrt{T}+\frac{1}{\epsilon})$ for small $\epsilon$, as in the central case, while the best previously known algorithm suffers a regret of $\tilde{O}(\frac{1}{\epsilon}T^{3/5})$.
no code implementations • 5 Jul 2022 • Kaan Ozkara, Antonious M. Girgis, Deepesh Data, Suhas Diggavi
In this work, we begin with a generative framework that could potentially unify several different algorithms as well as suggest new algorithms.
no code implementations • 1 Jul 2022 • Mohamad Rida Rammal, Alessandro Achille, Aditya Golatkar, Suhas Diggavi, Stefano Soatto
We derive information theoretic generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI).
no code implementations • 23 Dec 2021 • Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar
The paper develops algorithms and obtains performance bounds for two different models of local information availability at the nodes: (i) sample feedback, where each node has direct access to samples of the local random variable to evaluate its local cost, and (ii) bandit feedback, where samples of the random variables are not available, but only the values of the local cost functions at two random points close to the decision are available to each node.
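To illustrate the bandit-feedback model described above, here is a standard two-point gradient estimator sketch: the node queries its (unknown) local cost at two random points close to the current decision and differences the values. This is a generic estimator shown for illustration, not necessarily the paper's exact construction.

```python
import numpy as np

def two_point_gradient_estimate(cost, x, delta, rng=None):
    """Bandit feedback: only cost values at two random perturbations of x are
    observed; their difference yields a gradient surrogate."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                        # random unit direction
    g = (cost(x + delta * u) - cost(x - delta * u)) / (2.0 * delta)
    return x.size * g * u                         # unbiased up to O(delta) smoothing error
```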
no code implementations • 1 Dec 2021 • Mohamad Rida Rammal, Suhas Diggavi, Ashutosh Sabharwal
We consider the problem of estimating the orientation of a 3D object with the assistance of configurable backscatter tags.
no code implementations • NeurIPS 2021 • Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi
In this work, we introduce a quantized and personalized FL algorithm, QuPeD, that facilitates collective training with personalized model compression via knowledge distillation (KD) among clients who have access to heterogeneous data and resources.
no code implementations • NeurIPS 2021 • Antonious M. Girgis, Deepesh Data, Suhas Diggavi
We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy.
2 code implementations • 14 Jul 2021 • Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection.
no code implementations • 11 May 2021 • Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Ananda Theertha Suresh, Peter Kairouz
The central question studied in this paper is Renyi Differential Privacy (RDP) guarantees for general discrete local mechanisms in the shuffle privacy model.
no code implementations • 23 Feb 2021 • Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi
When each client participating in the (federated) learning process has different requirements of the quantized model (both in value and precision), we formulate a quantized personalization framework by introducing a penalty term for local client objectives against a globally trained model to encourage collaboration.
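A minimal sketch of the penalty idea described above, with hypothetical names and assuming a quadratic penalty: each client minimizes its local task loss plus a term that pulls its personal (later quantized) model toward the globally trained model.

```python
import torch

def penalized_local_loss(local_model, global_params, batch, loss_fn, lam):
    """Sketch of the penalized local objective (illustration, not the exact
    algorithm): local task loss + quadratic penalty toward the global model,
    which encourages collaboration while allowing personalization."""
    x, y = batch
    task_loss = loss_fn(local_model(x), y)
    penalty = sum((p - g.detach()).pow(2).sum()
                  for p, g in zip(local_model.parameters(), global_params))
    return task_loss + 0.5 * lam * penalty
```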
no code implementations • 14 Dec 2020 • Osama A. Hanna, Yahya H. Ezzeldin, Christina Fragouli, Suhas Diggavi
In this paper, we propose an alternate approach to learn from distributed data that quantizes data instead of gradients, and can support learning over applications where the size of gradient updates is prohibitive.
no code implementations • 17 Aug 2020 • Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, Ananda Theertha Suresh
We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements, motivated by the federated learning (FL) framework.
no code implementations • 22 Jun 2020 • Deepesh Data, Suhas Diggavi
To combat the adversary, we employ an efficient high-dimensional robust mean estimation algorithm from Steinhardt et al. (ITCS 2018) at the server to filter out corrupt vectors; and to analyze the outlier-filtering procedure, we develop a novel matrix concentration result that may be of independent interest.
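A simplified illustration of spectral outlier filtering in the spirit of the cited procedure (not the paper's exact algorithm): score each worker's gradient by its energy along the top singular direction of the centered gradient matrix, discard the highest-scoring fraction, and average the rest.

```python
import numpy as np

def filtered_mean(grads, drop_frac=0.1):
    """Simplified outlier-filtering aggregation (illustration only)."""
    G = np.asarray(grads)                       # shape: (num_workers, dim)
    centered = G - G.mean(axis=0)
    # Top right singular vector of the centered gradient matrix
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = (centered @ Vt[0]) ** 2            # projection energy per worker
    keep = scores.argsort()[: int(np.ceil((1 - drop_frac) * len(G)))]
    return G[keep].mean(axis=0)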
no code implementations • 24 May 2020 • Antonious M. Girgis, Deepesh Data, Kamalika Chaudhuri, Christina Fragouli, Suhas Diggavi
This work examines a novel question: how much randomness is needed to achieve local differential privacy (LDP)?
no code implementations • 16 May 2020 • Deepesh Data, Suhas Diggavi
In order to be able to apply their filtering procedure in our heterogeneous data setting where workers compute stochastic gradients, we derive a new matrix concentration result, which may be of independent interest.
no code implementations • 13 May 2020 • Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi
In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm for decentralized training of large-scale machine learning models over a network.
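The following sketch shows the ingredients named above (momentum, compressed communication, gossip mixing) in one local step at a node; it assumes a top-k sparsification compressor and a simple averaging rule, and is not the exact SQuARM-SGD update.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v (a simple compressor)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def local_momentum_step(x, m, grad, neighbors_compressed, lr=0.01, beta=0.9, k=10):
    """One illustrative step: momentum SGD on the local objective plus gossip
    averaging with compressed messages received from neighbors."""
    m = beta * m + grad                               # local momentum buffer
    x = x - lr * m                                    # local model update
    if neighbors_compressed:                          # mix with neighbor messages
        x = 0.5 * x + 0.5 * np.mean(neighbors_compressed, axis=0)
    return x, m, top_k(x, k)                          # compressed message for next round
```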
1 code implementation • NeurIPS 2019 • Debraj Basu, Deepesh Data, Can Karakus, Suhas Diggavi
Communication bottleneck has been identified as a significant issue in distributed optimization of large-scale learning models.
no code implementations • 1 Nov 2019 • Osama A. Hanna, Yahya H. Ezzeldin, Tara Sadjadpour, Christina Fragouli, Suhas Diggavi
We consider the problem of distributed feature quantization, where the goal is to enable a pretrained classifier at a central node to carry out its classification on features that are gathered from distributed nodes through communication constrained channels.
no code implementations • 31 Oct 2019 • Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi
In this paper, we propose and analyze SPARQ-SGD, which is an event-triggered and compressed algorithm for decentralized training of large-scale machine learning models.
no code implementations • 5 Jul 2019 • Deepesh Data, Linqi Song, Suhas Diggavi
In this paper, we propose a method based on data encoding and error correction over real numbers to combat adversarial attacks.
no code implementations • 6 Jun 2019 • Debraj Basu, Deepesh Data, Can Karakus, Suhas Diggavi
Communication bottleneck has been identified as a significant issue in distributed optimization of large-scale learning models.
no code implementations • 19 Mar 2019 • Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi
Consensus-based optimization consists of a set of computational nodes arranged in a graph, each having a local objective that depends on its local data; in every step, nodes take a linear combination of their neighbors' messages as well as a new gradient step.
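A minimal sketch of one such round, assuming a doubly stochastic mixing matrix W over the graph: each node replaces its iterate with a weighted combination of its neighbors' iterates and then takes a step along its own local gradient.

```python
import numpy as np

def consensus_gradient_round(X, W, local_grads, lr=0.1):
    """One consensus + gradient round (illustration).
    X: (num_nodes, dim) current iterates; W: (num_nodes, num_nodes) mixing matrix;
    local_grads: list of callables, one local gradient oracle per node."""
    mixed = W @ X                                      # combine neighbors' messages
    grads = np.stack([g(x) for g, x in zip(local_grads, mixed)])
    return mixed - lr * grads                          # local gradient step
```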
no code implementations • 13 Feb 2019 • Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi
Data privacy is an important concern in machine learning, and is fundamentally at odds with the task of training useful learning models, which typically require the acquisition of large amounts of private user data.
no code implementations • 14 Mar 2018 • Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin
Performance of distributed optimization and learning systems is bottlenecked by "straggler" nodes and slow communication links, which significantly delay computation.
no code implementations • NeurIPS 2017 • Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin
Slow-running or straggler tasks can significantly reduce computation speed in distributed computation.
no code implementations • NeurIPS 2011 • Dominique Tschopp, Suhas Diggavi, Payam Delgosha, Soheil Mohajer
This paper addresses the problem of finding the nearest neighbor (or one of the $R$-nearest neighbors) of a query object $q$ in a database of $n$ objects, when we can only use a comparison oracle.
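As a baseline illustrating the oracle model, a naive linear scan can find the nearest neighbor using only comparisons of the form "which of a, b is closer to q"; the point of the paper's scheme is to answer queries with far fewer oracle calls by precomputing a search structure. The sketch below is only the naive baseline, with hypothetical names.

```python
def nearest_by_comparisons(query, database, closer):
    """Naive nearest-neighbor search using only a comparison oracle
    closer(q, a, b), which returns True iff a is closer to q than b."""
    best = database[0]
    for candidate in database[1:]:
        if closer(query, candidate, best):
            best = candidate
    return best                                  # uses n-1 oracle calls
```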