no code implementations • 23 Apr 2025 • Neharika Jali, Eshika Pathak, Pranay Sharma, Guannan Qu, Gauri Joshi
Policy-based methods, despite their flexibility in practice, are not theoretically well understood in non-stationary RL.
no code implementations • 11 Feb 2025 • Divyansh Jhunjhunwala, Pranay Sharma, Zheng Xu, Gauri Joshi
Several recent works explore the benefits of pre-trained initialization in a federated learning (FL) setting, where the downstream training is performed at edge clients with heterogeneous data distributions.
no code implementations • 21 Oct 2024 • Baris Askin, Pranay Sharma, Gauri Joshi, Carlee Joe-Wong
We study a federated version of multi-objective optimization (MOO), where a single model is trained to optimize multiple objective functions.
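For context, a minimal sketch of one standard MOO primitive (not necessarily this paper's algorithm): with two objectives, the minimum-norm point in the convex hull of the two gradients is a common descent direction, which a server could apply after collecting per-objective gradients. The function name and toy inputs below are our assumptions.

```python
import numpy as np

def min_norm_direction(g1, g2):
    """Two-objective MGDA-style step: the minimum-norm convex
    combination of g1 and g2 is a descent direction for both
    objectives whenever one exists."""
    diff = g1 - g2
    denom = diff @ diff
    # closed-form minimizer of ||gamma*g1 + (1-gamma)*g2||^2 over [0, 1]
    gamma = 0.5 if denom == 0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return gamma * g1 + (1 - gamma) * g2

g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(min_norm_direction(g1, g2))  # [0.5, 0.5]: descends both objectives
```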
no code implementations • 17 Oct 2024 • Aleksandar Armacki, Shuhua Yu, Pranay Sharma, Gauri Joshi, Dragana Bajovic, Dusan Jakovetic, Soummya Kar
For symmetric noise and non-convex costs we establish convergence of gradient norm-squared, at a rate $\widetilde{\mathcal{O}}(t^{-1/4})$, while for the last iterate of strongly convex costs we establish convergence to the population optima, at a rate $\mathcal{O}(t^{-\zeta})$, where $\zeta \in (0, 1)$ depends on noise and problem parameters.
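A minimal sketch of the nonlinear-SGD template these rates concern, with norm clipping as the nonlinearity and Student-t noise standing in for a symmetric heavy-tailed distribution; the toy cost, step-size schedule, and constants are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

def clipped_sgd(grad, x0, steps=20_000, tau=1.0, a=1.0, b=1.0, seed=0):
    """x_{t+1} = x_t - eta_t * N(g_t), where N clips the noisy gradient
    to norm tau and eta_t = a / (t + b). Clipping keeps the iterates
    stable even when the gradient noise has unbounded variance."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(steps):
        noise = rng.standard_t(df=1.5, size=x.shape)  # symmetric, heavy-tailed
        g = grad(x) + noise
        g *= min(1.0, tau / (np.linalg.norm(g) + 1e-12))  # norm clipping
        x -= a / (t + b) * g
    return x

# strongly convex toy cost f(x) = 0.5 ||x||^2, optimum at 0
print(np.linalg.norm(clipped_sgd(lambda x: x, np.full(5, 10.0))))
```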
no code implementations • 2 Oct 2024 • Zhenyu Sun, Ziyang Zhang, Zheng Xu, Gauri Joshi, Pranay Sharma, Ermin Wei
In cross-device federated learning (FL) with millions of mobile clients, only a small subset of clients participate in training in every communication round, and Federated Averaging (FedAvg) is the most popular algorithm in practice.
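For reference, a minimal FedAvg sketch under partial participation, with heterogeneous quadratic clients standing in for real data; the function signature and constants are illustrative.

```python
import numpy as np

def fedavg(client_grad, x0, num_clients=1000, cohort=10,
           rounds=100, local_steps=5, lr=0.1, seed=0):
    """Each round, a small random cohort of clients runs a few local
    SGD steps from the current global model; the server then averages
    the returned local models."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(rounds):
        chosen = rng.choice(num_clients, size=cohort, replace=False)
        local_models = []
        for c in chosen:
            xc = x.copy()
            for _ in range(local_steps):
                xc -= lr * client_grad(c, xc)
            local_models.append(xc)
        x = np.mean(local_models, axis=0)   # server-side averaging
    return x

# heterogeneous clients: f_c(x) = 0.5 ||x - b_c||^2 with distinct optima b_c
b = np.random.default_rng(1).normal(size=(1000, 3))
print(fedavg(lambda c, x: x - b[c], np.zeros(3)))  # ≈ mean of the b_c
```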
no code implementations • 1 Jun 2024 • Baris Askin, Pranay Sharma, Carlee Joe-Wong, Gauri Joshi
Much of the existing work in FL focuses on efficiently learning a model for a single task.
no code implementations • 28 Oct 2023 • Aleksandar Armacki, Pranay Sharma, Gauri Joshi, Dragana Bajovic, Dusan Jakovetic, Soummya Kar
First, for non-convex costs and component-wise nonlinearities, we establish a convergence rate arbitrarily close to $\mathcal{O}\left(t^{-\frac{1}{4}}\right)$, whose exponent is independent of noise and problem parameters.
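Concretely, "component-wise nonlinearities" act on each coordinate of the noisy gradient independently; the three examples below (sign, clipping, and rounding-based quantization) are standard instances of that class, shown purely for illustration.

```python
import numpy as np

# component-wise nonlinearities: each acts on every coordinate independently
sign_nl = np.sign
clip_nl = lambda g, tau=1.0: np.clip(g, -tau, tau)
quant_nl = lambda g, levels=4: np.round(g * levels) / levels  # coarse rounding

g = np.array([3.2, -0.4, 0.05])
print(sign_nl(g))   # [ 1. -1.  1.]
print(clip_nl(g))   # [ 1.   -0.4   0.05]
print(quant_nl(g))  # [ 3.25 -0.5   0.  ]
```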
1 code implementation • 2 Jun 2023 • Davoud Ataee Tarzanagh, Mingchen Li, Pranay Sharma, Samet Oymak
Stochastic approximation with multiple coupled sequences (MSA) has found broad applications in machine learning as it encompasses a rich class of problems including bilevel optimization (BLO), multi-level compositional optimization (MCO), and reinforcement learning (specifically, actor-critic methods).
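A minimal sketch of two coupled stochastic-approximation sequences, the template that bilevel optimization and actor-critic share: a fast sequence y tracks the solution of an inner problem at the current x, while a slow sequence x descends the outer objective through the tracked y. The toy inner/outer problems and step sizes are our assumptions.

```python
import numpy as np

def coupled_sa(steps=50_000, alpha=0.001, beta=0.01, seed=0):
    """Fast sequence: y_{t+1} = y_t - beta * noisy grad of the inner
    problem 0.5*(y - x)^2, so y tracks y*(x) = x.
    Slow sequence:  x_{t+1} = x_t - alpha * noisy grad of the outer
    objective 0.5*y^2, evaluated through the tracked y."""
    rng = np.random.default_rng(seed)
    x, y = 5.0, 0.0
    for _ in range(steps):
        y -= beta * ((y - x) + 0.1 * rng.normal())   # inner update (fast)
        x -= alpha * (y + 0.1 * rng.normal())        # outer update (slow)
    return x, y   # both should approach 0

print(coupled_sa())
```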
1 code implementation • NeurIPS 2023 • Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner, closing the approximation gap, while continuing to be efficient.
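The "prune first, then unlearn" idea can be made concrete with a magnitude-pruning step applied before any approximate unlearning method runs; the helper below is our illustrative sketch, not the paper's code.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping roughly a
    (1 - sparsity) fraction; the sparse model is then handed to an
    approximate unlearner (e.g., fine-tuning on the retained data)."""
    flat = np.abs(w).ravel()
    k = int(sparsity * flat.size)
    thresh = np.partition(flat, k)[k]
    return w * (np.abs(w) >= thresh)

w = np.random.default_rng(0).normal(size=(4, 4))
print(magnitude_prune(w, sparsity=0.75))
```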
no code implementations • 4 Mar 2023 • Yihua Zhang, Pranay Sharma, Parikshit Ram, Mingyi Hong, Kush Varshney, Sijia Liu
We propose a new IRM variant to address this limitation based on a novel viewpoint of ensemble IRM games as consensus-constrained bi-level optimization.
no code implementations • 8 Feb 2023 • Pranay Sharma, Rohan Panda, Gauri Joshi
We analyze the convergence of the proposed algorithm for classes of nonconvex-concave and nonconvex-nonconcave functions and characterize the impact of heterogeneous client data, partial client participation, and heterogeneous local computations.
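A minimal sketch of the local stochastic gradient descent-ascent pattern such an analysis covers, with partial participation and multiple local steps; the function signature and constants are our assumptions.

```python
import numpy as np

def fed_local_sgda(grad_x, grad_y, x0, y0, num_clients=20, cohort=5,
                   rounds=200, local_steps=3, eta=0.05, seed=0):
    """Sampled clients take a few local descent steps in x and ascent
    steps in y on their own data; the server averages both variables.
    Heterogeneous clients, partial participation, and the number of
    local steps all enter the convergence analysis."""
    rng = np.random.default_rng(seed)
    x, y = x0.copy(), y0.copy()
    for _ in range(rounds):
        chosen = rng.choice(num_clients, size=cohort, replace=False)
        xs, ys = [], []
        for c in chosen:
            xc, yc = x.copy(), y.copy()
            for _ in range(local_steps):
                xc = xc - eta * grad_x(c, xc, yc)   # descent in x
                yc = yc + eta * grad_y(c, xc, yc)   # ascent in y
            xs.append(xc)
            ys.append(yc)
        x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)
    return x, y
```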
no code implementations • 6 Feb 2023 • Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang
Federated Averaging (FedAvg) and its variants are the most popular optimization algorithms in federated learning (FL).
1 code implementation • 28 Jul 2022 • Divyansh Jhunjhunwala, Pranay Sharma, Aushim Nagarkatti, Gauri Joshi
To remedy this, we propose FedVARP, a novel variance reduction algorithm applied at the server that eliminates error due to partial client participation.
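A sketch of the server-side mechanism the abstract points to, under our reading: the server remembers the most recent update from every client and substitutes these stale updates for clients that did not participate, so the aggregate no longer fluctuates with the sampled cohort. Names and constants below are illustrative, not FedVARP's exact pseudocode.

```python
import numpy as np

def server_variance_reduction(client_update, x0, num_clients=100,
                              cohort=10, rounds=200, seed=0):
    """Keep a table y[i] of each client's most recent update; every
    round, refresh y[i] for the sampled clients only, but apply the
    average over ALL clients, so non-participants still contribute
    their (stale) updates instead of being dropped."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    y = np.zeros((num_clients,) + x.shape)      # stored stale updates
    for _ in range(rounds):
        chosen = rng.choice(num_clients, size=cohort, replace=False)
        for c in chosen:
            y[c] = client_update(c, x)          # fresh local update (delta)
        x = x + y.mean(axis=0)                  # aggregate over all clients
    return x
```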
no code implementations • 21 Jun 2022 • Sajad Khodadadian, Pranay Sharma, Gauri Joshi, Siva Theja Maguluri
Federated reinforcement learning is a framework in which $N$ agents collaboratively learn a global model, without sharing their individual data and policies.
no code implementations • 17 Mar 2022 • Shan Zhang, Pranay Sharma, Baocheng Geng, Pramod K. Varshney
To achieve greater sensor-transmission and estimation efficiency, we propose a two-step, group-based collaborative distributed estimation scheme. In the first step, sensors form dependence-driven groups, such that sensors in the same group are highly dependent while sensors from different groups are independent, and perform a copula-based maximum a posteriori probability (MAP) estimation via intra-group collaboration.
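The first step (dependence-driven grouping) can be sketched with a greedy threshold rule on pairwise correlations; the copula-based MAP step within each group is omitted here, and the threshold rule itself is our illustrative stand-in for the paper's grouping criterion.

```python
import numpy as np

def dependence_groups(samples, thresh=0.5):
    """Greedily group sensors whose pairwise |correlation| exceeds a
    threshold, so within-group dependence is high and across-group
    dependence is low. `samples` has shape (num_sensors, num_obs)."""
    corr = np.abs(np.corrcoef(samples))
    unassigned = set(range(corr.shape[0]))
    groups = []
    while unassigned:
        seed_sensor = unassigned.pop()
        group = {seed_sensor} | {j for j in unassigned
                                 if corr[seed_sensor, j] > thresh}
        unassigned -= group
        groups.append(sorted(group))
    return groups
```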
no code implementations • 9 Mar 2022 • Pranay Sharma, Rohan Panda, Gauri Joshi, Pramod K. Varshney
In this paper, we consider nonconvex minimax optimization, which is gaining prominence in many modern machine learning applications such as GANs.
no code implementations • NeurIPS 2021 • Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, Pramod K. Varshney
Despite extensive research, for a generic non-convex FL problem it is not clear how to choose the WNs' and the server's update directions, the minibatch sizes, and the local update frequency so that the WNs use the minimum number of samples and communication rounds to achieve the desired solution.
no code implementations • 21 Dec 2020 • Pranay Sharma, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Xue Lin, Pramod K. Varshney
In this work, we focus on the study of stochastic zeroth-order (ZO) optimization which does not require first-order gradient information and uses only function evaluations.
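The core primitive of ZO optimization is a gradient estimate built purely from function values; below is the standard two-point random-direction estimator (a sketch of the general primitive, not the paper's specific algorithms, which build on estimators of this kind).

```python
import numpy as np

def zo_grad(f, x, mu=1e-3, num_dirs=20, seed=0):
    """Two-point zeroth-order gradient estimate: the average of
    d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random unit
    directions u; no first-order information is needed."""
    rng = np.random.default_rng(seed)
    d = x.size
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        g += d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

# sanity check on f(x) = ||x||^2, whose true gradient is 2x
x = np.array([1.0, -2.0, 3.0])
print(zo_grad(lambda v: v @ v, x, num_dirs=500))  # ≈ [2, -4, 6]
```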
no code implementations • 1 May 2020 • Prashant Khanduri, Pranay Sharma, Swatantra Kafle, Saikiran Bulusu, Ketan Rajawat, Pramod K. Varshney
In this work, we propose a distributed algorithm for stochastic non-convex optimization.
Optimization and Control • Distributed, Parallel, and Cluster Computing
no code implementations • 12 Dec 2019 • Pranay Sharma, Swatantra Kafle, Prashant Khanduri, Saikiran Bulusu, Ketan Rajawat, Pramod K. Varshney
For online problems ($n$ unknown or infinite), we achieve the optimal IFO complexity $O(\epsilon^{-3/2})$.
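Rates of this order are typically attained with a recursive (SPIDER/SARAH-style) variance-reduced gradient estimator. The sketch below shows that estimator pattern under our own naming, with `stoch_grad(x, batch)` and `sample_batch` as hypothetical user-supplied callables; it is not a transcription of the paper's algorithm.

```python
import numpy as np

def spider_style(stoch_grad, sample_batch, x0, epochs=20, q=50,
                 big_batch=1000, small_batch=10, eta=0.01, seed=0):
    """Every q steps, refresh the estimate v with a large batch; in
    between, update v <- v + g(x_t; B) - g(x_{t-1}; B) using the SAME
    minibatch B at both iterates, which is what keeps the estimator's
    variance (and hence the IFO cost) low."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    x_prev, v = x.copy(), None
    for t in range(epochs * q):
        if t % q == 0:
            v = stoch_grad(x, sample_batch(rng, big_batch))
        else:
            batch = sample_batch(rng, small_batch)
            v = v + stoch_grad(x, batch) - stoch_grad(x_prev, batch)
        x_prev, x = x, x - eta * v
    return x
```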
no code implementations • 25 Jun 2018 • Kush R. Varshney, Prashant Khanduri, Pranay Sharma, Shan Zhang, Pramod K. Varshney
Such arguments, however, fail to acknowledge that the overall decision-making system is composed of two entities: the learned model and a human who fuses the model's outputs with his or her own information.