Search Results for author: Gauri Joshi

Found 29 papers, 9 papers with code

Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning

no code implementations · 27 Apr 2022 · Yae Jee Cho, Andre Manoel, Gauri Joshi, Robert Sim, Dimitrios Dimitriadis

In this work, we propose a novel ensemble knowledge transfer method named Fed-ET, in which small models (of differing architectures) are trained on clients and used to train a larger model at the server.

Ensemble Learning · Federated Learning +1

Federated Minimax Optimization: Improved Convergence Analyses and Algorithms

no code implementations · 9 Mar 2022 · Pranay Sharma, Rohan Panda, Gauri Joshi, Pramod K. Varshney

In this paper, we consider nonconvex minimax optimization, which is gaining prominence in many modern machine learning applications such as GANs.

Distributed Optimization · Federated Learning
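
The nonconvex minimax problems referred to in the abstract take the standard form below; since the paper's title places them in the federated setting, the objective further decomposes across clients:

```latex
\min_{x} \max_{y} \; f(x, y) \;=\; \frac{1}{n} \sum_{i=1}^{n} f_i(x, y)
```

Here $f_i$ is client $i$'s local objective and $f$ is nonconvex in $x$; in the GAN instance mentioned in the abstract, $x$ and $y$ are the generator and discriminator parameters, respectively.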

FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients

no code implementations · 28 Jan 2022 · Jianyu Wang, Hang Qi, Ankit Singh Rawat, Sashank Reddi, Sagar Waghmare, Felix X. Yu, Gauri Joshi

In classical federated learning, the clients contribute to the overall training by communicating local updates for the underlying model on their private data to a coordinating server.

Federated Learning
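
The classical setup described in the abstract can be sketched as follows. This is a minimal FedAvg-style loop, not FedLite itself (which additionally compresses the split-learning activations); the linear-regression clients and all names here are illustrative assumptions.

```python
import numpy as np

def client_update(global_model, X, y, lr=0.1, local_steps=5):
    """One client's local training: a few gradient steps on private data.
    A plain least-squares model stands in for the client's real model."""
    w = global_model.copy()
    for _ in range(local_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w - global_model          # only the update leaves the client

def server_aggregate(global_model, updates, weights):
    """Weighted average of client updates, applied at the server."""
    avg = sum(wt * u for wt, u in zip(weights, updates)) / sum(weights)
    return global_model + avg

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])       # ground truth shared across clients
global_w = np.zeros(2)
for _ in range(20):                  # communication rounds
    updates, sizes = [], []
    for _ in range(3):               # three participating clients
        X = rng.normal(size=(30, 2))
        y = X @ w_true
        updates.append(client_update(global_w, X, y))
        sizes.append(len(y))
    global_w = server_aggregate(global_w, updates, sizes)
```

With clean data and homogeneous clients the global model recovers `w_true`; the communication cost per round is one model-sized update per client, which is exactly the bottleneck FedLite targets on resource-constrained clients.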

Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation

no code implementations · NeurIPS 2021 · Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, Swanand Kadhe, Gauri Joshi

We study the problem of estimating at a central server the mean of a set of vectors distributed across several nodes (one vector per node).

Federated Learning
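
The problem in the abstract can be made concrete with a standard per-node top-k sparsifier, a common compressor in this line of work. This sketch shows only the baseline; the paper's contribution, exploiting spatial and temporal correlations across nodes and rounds, is omitted.

```python
import numpy as np

def top_k_sparsify(v, k):
    """Keep only the k largest-magnitude coordinates of v, zeroing the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(1)
d, n, k = 100, 10, 20                # dimension, nodes, coordinates kept
vectors = rng.normal(size=(n, d))    # one vector per node
true_mean = vectors.mean(axis=0)

# Each node sends a sparsified vector; the server averages what it receives.
estimate = np.mean([top_k_sparsify(v, k) for v in vectors], axis=0)
err = np.linalg.norm(estimate - true_mean)
```

The estimation error `err` is the quantity the paper reduces by correcting for correlations between the sparsified vectors.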

Personalized Federated Learning for Heterogeneous Clients with Clustered Knowledge Transfer

no code implementations · 16 Sep 2021 · Yae Jee Cho, Jianyu Wang, Tarun Chiruvolu, Gauri Joshi

Personalized federated learning (FL) aims to train model(s) that perform well for individual clients in the presence of high data and system heterogeneity.

Personalized Federated Learning · Transfer Learning

Best-Arm Identification in Correlated Multi-Armed Bandits

no code implementations · 10 Sep 2021 · Samarth Gupta, Gauri Joshi, Osman Yağan

In this paper we consider the problem of best-arm identification in multi-armed bandits in the fixed-confidence setting, where the goal is to identify, with probability $1-\delta$ for some $\delta>0$, the arm with the highest mean reward using the minimum possible number of samples from the set of arms $\mathcal{K}$.

Multi-Armed Bandits
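
The fixed-confidence setting in the abstract can be illustrated with successive elimination, a textbook baseline (not the paper's correlation-aware algorithm). The Bernoulli arm means below are illustrative and are used only to simulate pulls.

```python
import numpy as np

def successive_elimination(means, delta=0.05, seed=0):
    """Identify the best arm with probability >= 1 - delta by repeatedly
    pulling all surviving arms and dropping provably suboptimal ones."""
    rng = np.random.default_rng(seed)
    K = len(means)
    active = list(range(K))
    sums = np.zeros(K)
    t = 0
    while len(active) > 1:
        t += 1
        for a in active:                     # pull every surviving arm once
            sums[a] += rng.random() < means[a]
        # Confidence radius chosen so all bounds hold jointly w.p. >= 1 - delta.
        radius = np.sqrt(np.log(4 * K * t * t / delta) / (2 * t))
        mu = {a: sums[a] / t for a in active}
        leader = max(mu.values())
        # Drop arms whose upper bound falls below the leader's lower bound.
        active = [a for a in active if mu[a] + radius >= leader - radius]
    return active[0]

best_arm = successive_elimination([0.2, 0.5, 0.8])
```

The sample complexity of this baseline scales with the inverse squared gaps; the paper's point is that correlations between arms allow far fewer samples.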

Job Dispatching Policies for Queueing Systems with Unknown Service Rates

no code implementations · 8 Jun 2021 · Tuhinangshu Choudhury, Gauri Joshi, Weina Wang, Sanjay Shakkottai

In multi-server queueing systems where there is no central queue holding all incoming jobs, job dispatching policies are used to assign incoming jobs to the queue at one of the servers.
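
The setting in the abstract can be sketched with a toy discrete-time simulation comparing uniformly random dispatch against join-the-shortest-queue. The service rates and both policies here are illustrative; the paper studies learning-based dispatching when the rates are unknown, which this sketch does not implement.

```python
import random

def simulate(dispatch, n_ticks=20000, arrival_p=0.7, seed=0):
    """Discrete-time system with no central queue: each tick, a job arrives
    with probability arrival_p and is dispatched immediately; each busy
    server then finishes its head-of-line job with its own service rate."""
    rng = random.Random(seed)
    rates = [0.2, 0.3, 0.4, 0.5]      # heterogeneous, unknown to the policy
    queues = [0] * len(rates)
    area = 0
    for _ in range(n_ticks):
        if rng.random() < arrival_p:
            queues[dispatch(queues, rng)] += 1
        for i, r in enumerate(rates):
            if queues[i] and rng.random() < r:
                queues[i] -= 1
        area += sum(queues)
    return area / n_ticks             # time-averaged number of jobs in system

random_policy = lambda queues, rng: rng.randrange(len(queues))
jsq_policy = lambda queues, rng: queues.index(min(queues))  # shortest queue

mean_random = simulate(random_policy)
mean_jsq = simulate(jsq_policy)
```

Queue-length-aware dispatch keeps the backlog far smaller than random assignment because it implicitly steers load away from slow servers without knowing their rates.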

Local Adaptivity in Federated Learning: Convergence and Consistency

no code implementations · 4 Jun 2021 · Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu, Gauri Joshi

Popular FL optimization algorithms use vanilla (stochastic) gradient descent both for local updates at clients and for global updates at the aggregating server.

Federated Learning

Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning

no code implementations · 8 Feb 2021 · Divyansh Jhunjhunwala, Advait Gadhikar, Gauri Joshi, Yonina C. Eldar

Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning, especially in bandwidth-limited settings and high-dimensional models.

Federated Learning · Quantization
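
The compression primitive underlying this line of work is unbiased stochastic uniform quantization of the model update; this sketch shows only that fixed-level quantizer. The paper's contribution, adapting the number of levels across training rounds, is omitted, and the example vector is illustrative.

```python
import numpy as np

def stochastic_quantize(v, num_levels, rng):
    """Snap each coordinate of v to one of num_levels evenly spaced levels
    spanning [min(v), max(v)], rounding up with probability equal to the
    fractional part -- which makes the quantizer unbiased in expectation."""
    vmin, vmax = v.min(), v.max()
    scale = (vmax - vmin) / (num_levels - 1)
    frac = (v - vmin) / scale
    lower = np.floor(frac)
    quantized = lower + (rng.random(v.shape) < frac - lower)
    return vmin + quantized * scale

rng = np.random.default_rng(0)
update = np.linspace(-1.0, 1.0, 8)   # stand-in for a model update vector
q = stochastic_quantize(update, num_levels=5, rng=rng)
max_err = np.max(np.abs(q - update))
```

Each coordinate then needs only `log2(num_levels)` bits plus the two range scalars, and the per-coordinate error is bounded by one level spacing, which here is 0.5.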

Bandit-based Communication-Efficient Client Selection Strategies for Federated Learning

no code implementations · 14 Dec 2020 · Yae Jee Cho, Samarth Gupta, Gauri Joshi, Osman Yağan

Due to communication constraints and intermittent client availability in federated learning, only a subset of clients can participate in each training round.

Fairness · Federated Learning

Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies

no code implementations · 3 Oct 2020 · Yae Jee Cho, Jianyu Wang, Gauri Joshi

Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing.

Distributed Optimization · Federated Learning +1

Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning

1 code implementation · 18 Jul 2020 · Ankur Mallick, Chaitanya Dwivedi, Bhavya Kailkhura, Gauri Joshi, T. Yong-Jin Han

In this work, we show that the uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.

COVID-19 Diagnosis

Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization

no code implementations · NeurIPS 2020 · Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor

In federated optimization, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round.
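
To see the inconsistency concretely: naively averaging cumulative updates from clients that ran different numbers of local steps implicitly re-weights the global objective toward the fast clients. The paper's fix (FedNova) normalizes each update by its step count. The sketch below is a simplified scalar illustration of that idea; the quadratic client objectives and step counts are made up.

```python
# Two clients with quadratic losses f_i(x) = 0.5 * (x - c_i)**2; the true
# optimum of the average objective is the midpoint (0 + 10) / 2 = 5.
centers = [0.0, 10.0]
local_steps = [1, 50]            # heterogeneous numbers of local updates
lr = 0.005

def local_delta(x0, c, tau):
    """Run tau local gradient steps and return the cumulative update."""
    x = x0
    for _ in range(tau):
        x -= lr * (x - c)
    return x - x0

def run(rounds, normalize):
    x = 5.5
    eff = sum(local_steps) / len(local_steps)
    for _ in range(rounds):
        deltas = [local_delta(x, c, t) for c, t in zip(centers, local_steps)]
        if normalize:
            # Normalized averaging: divide each client's cumulative update
            # by its own step count before averaging.
            deltas = [d / t * eff for d, t in zip(deltas, local_steps)]
        x += sum(deltas) / len(deltas)
    return x

x_naive = run(300, normalize=False)
x_normalized = run(300, normalize=True)
```

Naive averaging converges near the fast client's optimum (far from 5), while normalized averaging lands close to the true optimum, which is the objective-inconsistency phenomenon the abstract describes.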

Slow and Stale Gradients Can Win the Race

no code implementations · 23 Mar 2020 · Sanghamitra Dutta, Jianyu Wang, Gauri Joshi

Distributed Stochastic Gradient Descent (SGD) when run in a synchronous manner, suffers from delays in runtime as it waits for the slowest workers (stragglers).

Machine Learning on Volatile Instances

no code implementations · 12 Mar 2020 · Xiaoxi Zhang, Jian-Yu Wang, Gauri Joshi, Carlee Joe-Wong

Due to the massive size of the neural network models and training datasets used in machine learning today, it is imperative to distribute stochastic gradient descent (SGD) by splitting up tasks such as gradient evaluation across multiple worker nodes.

Overlap Local-SGD: An Algorithmic Approach to Hide Communication Delays in Distributed SGD

1 code implementation · 21 Feb 2020 · Jianyu Wang, Hao Liang, Gauri Joshi

In this paper, we propose an algorithmic approach named Overlap-Local-SGD (and its momentum variant) to overlap the communication and computation so as to speedup the distributed training procedure.

Multi-Armed Bandits with Correlated Arms

2 code implementations · 6 Nov 2019 · Samarth Gupta, Shreyas Chaudhari, Gauri Joshi, Osman Yağan

We consider a multi-armed bandit framework where the rewards obtained by pulling different arms are correlated.

Multi-Armed Bandits

Deep Kernels with Probabilistic Embeddings for Small-Data Learning

1 code implementation · 13 Oct 2019 · Ankur Mallick, Chaitanya Dwivedi, Bhavya Kailkhura, Gauri Joshi, T. Yong-Jin Han

Experiments on a variety of datasets show that our approach outperforms the state-of-the-art in GP kernel learning in both supervised and semi-supervised settings.

Gaussian Processes · Representation Learning +1

Accelerating Deep Learning by Focusing on the Biggest Losers

1 code implementation · 2 Oct 2019 · Angela H. Jiang, Daniel L. -K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C. Lipton, Padmanabhan Pillai

This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration.
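
The core step of prioritizing high-loss examples can be sketched as a batch filter: run the cheap forward pass on the whole batch, then backpropagate only on the "biggest losers". This is a simplified deterministic top-k version; the paper selects examples probabilistically by loss percentile, and the scalar predictions below are illustrative.

```python
import numpy as np

def selective_batch(preds, labels, loss_fn, keep_frac=0.5):
    """Return the subset of a batch with the highest per-example loss,
    i.e. the examples worth spending a backward pass on."""
    losses = loss_fn(preds, labels)
    k = max(1, int(keep_frac * len(preds)))
    keep = np.argsort(losses)[-k:]        # indices of the highest-loss examples
    return preds[keep], labels[keep]

# Toy check with a squared-error "loss" on scalar predictions.
preds = np.array([0.1, 0.9, 0.4, 0.7])
labels = np.array([0.0, 0.0, 1.0, 1.0])
x_sel, y_sel = selective_batch(preds, labels,
                               lambda p, y: (p - y) ** 2, keep_frac=0.5)
```

Because forward passes are much cheaper than backward passes, skipping the backward pass on the easy half of each batch is where the training speedup comes from.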

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling

3 code implementations · 23 May 2019 · Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, Soummya Kar

This paper studies the problem of error-runtime trade-off, typically encountered in decentralized training based on stochastic gradient descent (SGD) using a given network.

Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD

no code implementations · 19 Oct 2018 · Jianyu Wang, Gauri Joshi

Large-scale machine learning training, in particular distributed stochastic gradient descent, needs to be robust to inherent system variability such as node straggling and random communication delays.

A Unified Approach to Translate Classical Bandit Algorithms to the Structured Bandit Setting

no code implementations · 18 Oct 2018 · Samarth Gupta, Shreyas Chaudhari, Subhojyoti Mukherjee, Gauri Joshi, Osman Yağan

We consider a finite-armed structured bandit problem in which mean rewards of different arms are known functions of a common hidden parameter $\theta^*$.

Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms

no code implementations · 22 Aug 2018 · Jianyu Wang, Gauri Joshi

Communication-efficient SGD algorithms, which allow nodes to perform local updates and periodically synchronize local models, are highly effective in improving the speed and scalability of distributed SGD.
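
The periodic-averaging template described in the abstract can be sketched as follows. This is only the basic local-SGD instance of the framework (setting the sync period to 1 recovers fully synchronous SGD); the elastic-averaging and decentralized variants the paper also covers are omitted, and the shifted quadratic objectives are illustrative.

```python
import numpy as np

def local_sgd(grad_fn, n_workers=4, sync_period=10, rounds=20, lr=0.1):
    """Each worker takes sync_period local gradient steps, then all local
    models are averaged -- one communication per round instead of one per step."""
    models = [np.zeros(2) for _ in range(n_workers)]
    for _ in range(rounds):
        for i in range(n_workers):
            for _ in range(sync_period):
                models[i] -= lr * grad_fn(models[i], i)
        avg = np.mean(models, axis=0)          # the only communication step
        models = [avg.copy() for _ in range(n_workers)]
    return models[0]

# Each worker sees a shifted quadratic 0.5 * ||w - s_i||^2; the consensus
# optimum of the average objective is the mean of the shifts, here (1, 1).
shifts = np.array([[0, 0], [2, 0], [0, 2], [2, 2]], dtype=float)
grad_fn = lambda w, i: w - shifts[i]
w_final = local_sgd(grad_fn)
```

A larger `sync_period` cuts communication proportionally but lets local models drift between synchronizations, which is exactly the error-runtime trade-off this framework analyzes.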

Correlated Multi-armed Bandits with a Latent Random Source

2 code implementations · 17 Aug 2018 · Samarth Gupta, Gauri Joshi, Osman Yağan

As a result, there are regimes where our algorithm achieves a $\mathcal{O}(1)$ regret as opposed to the typical logarithmic regret scaling of multi-armed bandit algorithms.

Multi-Armed Bandits

Active Distribution Learning from Indirect Samples

no code implementations · 16 Aug 2018 · Samarth Gupta, Gauri Joshi, Osman Yağan

At each time step, we choose one of $K$ possible functions $g_1, \ldots, g_K$ and observe the corresponding sample $g_i(X)$.

Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD

no code implementations · 3 Mar 2018 · Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, Priya Nagpurkar

Distributed Stochastic Gradient Descent (SGD) when run in a synchronous manner, suffers from delays in waiting for the slowest learners (stragglers).
