Search Results for author: Divyansh Jhunjhunwala

Found 6 papers, 2 papers with code

FedFisher: Leveraging Fisher Information for One-Shot Federated Learning

1 code implementation • 19 Mar 2024 • Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi

Standard federated learning (FL) algorithms typically require multiple rounds of communication between the server and the clients, which has several drawbacks, including the need for constant network connectivity, repeated investment of computational resources, and susceptibility to privacy attacks.

Federated Learning

FedExP: Speeding Up Federated Averaging via Extrapolation

2 code implementations • 23 Jan 2023 • Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi

Federated Averaging (FedAvg) remains the most popular algorithm for Federated Learning (FL) optimization due to its simple implementation, stateless nature, and privacy guarantees when combined with secure aggregation.

Federated Learning

FedVARP: Tackling the Variance Due to Partial Client Participation in Federated Learning

no code implementations • 28 Jul 2022 • Divyansh Jhunjhunwala, Pranay Sharma, Aushim Nagarkatti, Gauri Joshi

We propose FedVARP, a novel variance reduction algorithm applied at the server that eliminates the error due to partial client participation.

Federated Learning

Maximizing Global Model Appeal in Federated Learning

no code implementations • 30 May 2022 • Yae Jee Cho, Divyansh Jhunjhunwala, Tian Li, Virginia Smith, Gauri Joshi

We provide convergence guarantees for MaxFL and show that it achieves a 22-40% test accuracy improvement for training clients and an 18-50% improvement for unseen clients, compared to a wide range of FL modeling approaches, including those that tackle data heterogeneity, aim to incentivize clients, or learn personalized or fair models.

Federated Learning

Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation

no code implementations • NeurIPS 2021 • Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, Swanand Kadhe, Gauri Joshi

We study the problem of estimating at a central server the mean of a set of vectors distributed across several nodes (one vector per node).

Federated Learning

Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning

no code implementations • 8 Feb 2021 • Divyansh Jhunjhunwala, Advait Gadhikar, Gauri Joshi, Yonina C. Eldar

Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning, especially in bandwidth-limited settings and high-dimensional models.

Federated Learning • Quantization
