no code implementations • 29 Jul 2015 • Vinay Chakravarthi Gogineni, Mrityunjoy Chakraborty
The performance of the proposed multi-task diffusion APA algorithm is analyzed in the mean and mean-square sense.
no code implementations • 13 Oct 2021 • Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, Anthony Kuh
As a solution, this paper presents a partial-sharing-based online federated learning framework (PSO-Fed) that enables clients to update their local models using continuous streaming data and share only portions of those updated models with the server.
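The partial-sharing idea behind PSO-Fed can be sketched as follows. This is a minimal illustration, not the paper's implementation: the round-robin coordinate mask, the LMS-style local update, and all function names are assumptions made for the example.

```python
import numpy as np

def partial_share_mask(model_dim, share_fraction, round_idx):
    """Hypothetical round-robin mask: at each round, only a contiguous
    block of about share_fraction * model_dim coordinates is exchanged."""
    block = max(1, int(model_dim * share_fraction))
    start = (round_idx * block) % model_dim
    idx = (start + np.arange(block)) % model_dim
    mask = np.zeros(model_dim, dtype=bool)
    mask[idx] = True
    return mask

def client_step(w, x, d, mu, mask):
    """Local update on one streaming sample (a plain LMS step stands in
    for the local learning rule), then share only the masked entries."""
    e = d - x @ w          # prediction error on the new sample
    w = w + mu * e * x     # local model update
    return w, w[mask]      # full local model, and the portion sent to the server

rng = np.random.default_rng(0)
w = np.zeros(8)
mask = partial_share_mask(8, 0.25, round_idx=0)
x, d = rng.standard_normal(8), 1.0
w, shared = client_step(w, x, d, 0.1, mask)
print(shared.shape)  # only a fraction of the 8 entries leaves the client
```

Rotating the mask across rounds lets every coordinate eventually reach the server while each individual round carries only a fraction of the model.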
no code implementations • 27 Nov 2021 • Francois Gauthier, Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, Anthony Kuh
In this manner, we reduce the communication load of the participants and, therefore, render participation in the learning task more accessible.
no code implementations • 27 Mar 2023 • Francois Gauthier, Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, Anthony Kuh
The simulations reveal that, in asynchronous settings, the proposed PAO-Fed achieves the same convergence properties as the online federated stochastic gradient method while reducing the communication overhead by 98 percent.
no code implementations • 10 Jun 2023 • Francois Gauthier, Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, Anthony Kuh
Further, our analysis shows that the algorithm ensures local differential privacy for all clients in terms of zero-concentrated differential privacy.
no code implementations • 26 Jul 2023 • Ashkan Moradi, Vinay Chakravarthi Gogineni, Naveen K. D. Venkategowda, Stefan Werner
Numerical results demonstrate the accuracy of the proposed BR-CDF and its robustness against Byzantine attacks.
no code implementations • 4 Sep 2023 • Reza Mirzaeifard, Naveen K. D. Venkategowda, Vinay Chakravarthi Gogineni, Stefan Werner
This paper investigates quantile regression in the presence of non-convex and non-smooth sparse penalties, such as the minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD).
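The ingredients of this objective can be written down concretely. The sketch below, a plausible composition rather than the paper's algorithm, combines the standard pinball (quantile) loss with the SCAD penalty in its usual piecewise form; the function names and the data in the demo are illustrative.

```python
import numpy as np

def pinball_loss(r, tau):
    """Quantile (pinball) loss for residuals r at quantile level tau."""
    return np.where(r >= 0, tau * r, (tau - 1) * r)

def scad_penalty(w, lam, a=3.7):
    """SCAD penalty evaluated elementwise (standard piecewise form, a > 2)."""
    aw = np.abs(w)
    p1 = lam * aw                                            # |w| <= lam
    p2 = (2 * a * lam * aw - aw**2 - lam**2) / (2 * (a - 1)) # lam < |w| <= a*lam
    p3 = lam**2 * (a + 1) / 2                                # |w| > a*lam
    return np.where(aw <= lam, p1, np.where(aw <= a * lam, p2, p3))

def objective(w, X, y, tau, lam):
    """Nonconvex sparse quantile-regression objective: pinball loss + SCAD."""
    r = y - X @ w
    return pinball_loss(r, tau).mean() + scad_penalty(w, lam).sum()

# Illustrative evaluation on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = X @ np.array([1.0, 0.0, 0.0, 2.0, 0.0]) + 0.1 * rng.standard_normal(20)
print(objective(np.zeros(5), X, y, tau=0.5, lam=0.1))
```

Unlike the L1 penalty, SCAD flattens out for large coefficients (the `p3` branch), which is what makes it nearly unbiased for strong signals but non-convex, hence the need for the specialized solvers the paper studies.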
no code implementations • 12 Mar 2024 • Vinay Chakravarthi Gogineni, Esmaeil S. Nadimi
In this method, updates made to the model during training are pruned and stored, and subsequently used to forget specific data from the trained model.
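The prune-store-subtract idea can be sketched in a toy form. Everything here is an assumption made for illustration, not the paper's method: the magnitude-based pruning rule, the per-batch update log, the linear model, and all class and function names. Subtracting a stored update only approximately removes a batch's contribution, since later updates depend on earlier ones.

```python
import numpy as np

def prune_update(delta, keep_fraction=0.1):
    """Keep only the largest-magnitude entries of an update (sparsify for storage)."""
    k = max(1, int(delta.size * keep_fraction))
    thresh = np.sort(np.abs(delta))[-k]
    return np.where(np.abs(delta) >= thresh, delta, 0.0)

class UnlearnableModel:
    """Toy linear model that logs pruned per-batch updates so the
    contribution of a specific batch can later be subtracted out."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
        self.log = {}  # batch_id -> pruned update

    def train_batch(self, batch_id, X, y, lr=0.1):
        grad = X.T @ (X @ self.w - y) / len(y)  # squared-error gradient
        delta = -lr * grad
        self.log[batch_id] = prune_update(delta)  # store a pruned copy
        self.w += delta

    def forget(self, batch_id):
        # Approximately remove this batch's stored contribution.
        self.w -= self.log.pop(batch_id)
```

Pruning before storage is what keeps the memory cost of the update log manageable, at the price of making the forgetting step approximate.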
no code implementations • 19 Mar 2024 • Ehsan Lari, Vinay Chakravarthi Gogineni, Reza Arablouei, Stefan Werner
PSO-Fed reduces the communication load by enabling clients to exchange only a fraction of their model estimates with the server at each update round.