no code implementations • 10 Jun 2023 • Francois Gauthier, Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, Anthony Kuh
Further, our analysis shows that the algorithm guarantees local differential privacy for every client, in the sense of zero-concentrated differential privacy.
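The paper's guarantee is not reproduced here, but a minimal sketch shows how a client-side Gaussian mechanism yields zero-concentrated differential privacy: the Gaussian mechanism with l2-sensitivity Delta and noise scale sigma satisfies rho-zCDP with rho = Delta^2 / (2 sigma^2). The function name, clipping scheme, and parameters below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def privatize_update(update, clip_norm, rho, rng=None):
    """Clip a local model update and add Gaussian noise for rho-zCDP.

    The Gaussian mechanism with l2-sensitivity `clip_norm` satisfies
    rho-zCDP when sigma = clip_norm / sqrt(2 * rho).
    (Illustrative sketch only; not the paper's exact mechanism.)
    """
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))  # bound sensitivity
    sigma = clip_norm / np.sqrt(2.0 * rho)
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# e.g., privatize one client's update at rho = 0.5 per round
noisy = privatize_update(np.ones(10), clip_norm=1.0, rho=0.5)
```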
no code implementations • 27 Mar 2023 • Francois Gauthier, Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, Anthony Kuh
The simulations reveal that, in asynchronous settings, the proposed PAO-Fed achieves the same convergence properties as the online federated stochastic gradient method while reducing the communication overhead by 98 percent.
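As a back-of-the-envelope illustration (the dimensions below are assumptions, not the paper's simulation settings), a 98 percent reduction corresponds to each client transmitting roughly 2 percent of its model coefficients per round:

```python
# If each client transmits M of the K model coefficients per round,
# uplink cost scales by M / K relative to full model sharing.
K = 1000  # model dimension (illustrative)
M = 20    # coefficients shared per round (illustrative)
reduction = 1.0 - M / K
print(f"communication overhead reduced by {100 * reduction:.0f}%")  # -> 98%
```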
no code implementations • 27 Nov 2021 • Francois Gauthier, Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, Anthony Kuh
In this manner, we reduce the communication load on the participants and, therefore, make participation in the learning task more accessible.
no code implementations • 13 Oct 2021 • Vinay Chakravarthi Gogineni, Stefan Werner, Yih-Fang Huang, Anthony Kuh
As a solution, this paper presents a partial-sharing-based online federated learning framework (PSO-Fed) that enables clients to update their local models using continuous streaming data and share only portions of those updated models with the server.
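A minimal sketch of the partial-sharing idea, assuming an online least-mean-squares client update and a round-robin schedule over coordinate blocks; the function names and scheduling rule are illustrative assumptions, not PSO-Fed's exact selection matrices.

```python
import numpy as np

def shared_indices(round_idx, dim, num_blocks):
    """Round-robin block of coordinates a client shares this round."""
    block = round_idx % num_blocks
    size = int(np.ceil(dim / num_blocks))
    return np.arange(block * size, min((block + 1) * size, dim))

def client_step(w, x, y, mu):
    """One online LMS update on a streaming sample (x, y)."""
    return w + mu * (y - x @ w) * x

# Client updates on streaming data but uploads only a slice of the model.
dim, num_blocks, mu = 100, 50, 0.01
w = np.zeros(dim)
rng = np.random.default_rng(0)
for t in range(10):
    x = rng.normal(size=dim)
    y = x @ np.ones(dim) + 0.1 * rng.normal()
    w = client_step(w, x, y, mu)
    idx = shared_indices(t, dim, num_blocks)
    payload = (idx, w[idx])  # only this slice is sent to the server
```

With 50 blocks over 100 coordinates, each upload carries 2 of 100 entries, while the local model itself is updated in full on every streaming sample.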
no code implementations • 10 Aug 2018 • Navid Tafaghodi Khajavi, Anthony Kuh
In this paper, we present a general, multistage framework for graphical model approximation using a cascade of models such as trees.
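The cascade itself is not reproduced here, but a single tree stage can be illustrated with the classical Chow-Liu construction for a Gaussian model: a maximum-weight spanning tree over pairwise mutual information. The helper below is a hypothetical sketch under that assumption, not the paper's framework.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(cov):
    """One tree stage: maximum-weight spanning tree on pairwise
    Gaussian mutual information I(i, j) = -0.5 * log(1 - rho_ij**2)."""
    d = np.sqrt(np.diag(cov))
    rho = cov / np.outer(d, d)
    mi = -0.5 * np.log(np.clip(1.0 - rho**2, 1e-12, None))
    np.fill_diagonal(mi, 0.0)
    # scipy computes a *minimum* spanning tree, so negate to maximize MI.
    tree = minimum_spanning_tree(-mi)
    rows, cols = tree.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))  # tree edges

edges = chow_liu_tree(np.array([[1.0, 0.6, 0.3],
                                [0.6, 1.0, 0.5],
                                [0.3, 0.5, 1.0]]))
```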
no code implementations • 18 May 2016 • Navid Tafaghodi Khajavi, Anthony Kuh
Examples show that, when the number of nodes in the graphical model is large, the quality of tree approximation models is generally poor, as measured by information divergences, the AUC, and its bounds.
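One of the divergences in question is straightforward to compute for Gaussian models. As a hedged sketch, the KL divergence between two zero-mean Gaussians, where `cov_p` and `cov_q` would stand in for the original model and its tree approximation:

```python
import numpy as np

def gaussian_kl(cov_p, cov_q):
    """KL divergence D(N(0, cov_p) || N(0, cov_q)) between zero-mean
    Gaussians: 0.5 * (tr(cov_q^{-1} cov_p) - k + ln det(cov_q) - ln det(cov_p))."""
    k = cov_p.shape[0]
    m = np.linalg.solve(cov_q, cov_p)  # cov_q^{-1} cov_p
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(m) - k + logdet_q - logdet_p)
```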