no code implementations • 2 Apr 2024 • Xinbao Qiao, Meng Zhang, Ming Tang, Ermin Wei
In this work, we propose a Hessian-free online unlearning method.
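The entry above does not describe the paper's actual algorithm. As background, the classical "Hessian-free" idea is to take an approximate Newton unlearning step using only Hessian-vector products (never forming the Hessian matrix), solved with conjugate gradient. A minimal sketch on ridge regression, where one Newton step after removing a point is exact; all names and hyperparameters here are illustrative, not from the paper:

```python
import numpy as np

def cg(hvp, b, iters=50, tol=1e-8):
    # Conjugate gradient: solve H v = b using only Hessian-vector products.
    v = np.zeros_like(b)
    r = b - hvp(v)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        v += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

def unlearn_point(w, X, y, idx, lam=0.1):
    """Approximate removal of training point `idx` by one Newton step
    on the retained objective, using CG + HVPs so the Hessian matrix
    is never formed explicitly ("Hessian-free")."""
    keep = np.ones(len(y), bool)
    keep[idx] = False
    Xk, yk = X[keep], y[keep]
    n = len(yk)
    # Gradient of the retained ridge objective 0.5/n ||Xw-y||^2 + 0.5*lam*||w||^2.
    g = Xk.T @ (Xk @ w - yk) / n + lam * w
    # Hessian-vector product without materializing X^T X.
    hvp = lambda v: Xk.T @ (Xk @ v) / n + lam * v
    return w - cg(hvp, g)
```

Because the ridge objective is quadratic, this single step recovers the model retrained without the removed point, up to CG tolerance.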
no code implementations • 22 Mar 2024 • Zhenyu Sun, Ermin Wei
Leveraging gradient clipping and variance reduction, our algorithm achieves the best-known $\mathcal{O}(\epsilon^{-3})$ sample complexity and enjoys a convergence speedup with simple hyperparameter tuning.
no code implementations • 6 Jun 2023 • Zhenyu Sun, Xiaochun Niu, Ermin Wei
While the existing literature has extensively studied the generalization performance of centralized machine learning algorithms, similar analyses in federated settings are either absent or rely on very restrictive assumptions on the loss functions.
no code implementations • 27 May 2023 • Shuyue Lan, Zhilu Wang, Ermin Wei, Amit K. Roy-Chowdhury, Qi Zhu
We show that compared with other approaches in the literature, our frameworks achieve better coverage of important frames, while significantly reducing the number of frames processed at each agent.
no code implementations • 2 Jun 2022 • Zhenyu Sun, Ermin Wei
Most existing federated minimax algorithms either require communication at every iteration or lack performance guarantees, with the exception of Local Stochastic Gradient Descent Ascent (Local SGDA), a multiple-local-update descent-ascent algorithm that guarantees convergence under a diminishing stepsize.
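The Local SGDA scheme described above can be sketched as follows: each client runs several local stochastic descent steps in $x$ and ascent steps in $y$, and the server periodically averages the iterates; the stepsize shrinks over rounds. The problem instance, client count, and stepsize schedule below are illustrative assumptions:

```python
import numpy as np

def local_sgda(grad_x, grad_y, x0, y0, num_clients=4, rounds=50,
               local_steps=5, eta0=0.5, rng=None):
    """Sketch of Local SGDA for min_x max_y f(x, y) in a federated setup.

    grad_x(m, x, y, rng) / grad_y(m, x, y, rng) return client m's
    stochastic gradients. Clients run `local_steps` updates without
    communication; the server then averages their iterates.
    """
    rng = rng or np.random.default_rng(0)
    x, y = x0.copy(), y0.copy()
    for r in range(rounds):
        eta = eta0 / np.sqrt(r + 1)              # diminishing stepsize
        xs, ys = [], []
        for m in range(num_clients):
            xm, ym = x.copy(), y.copy()
            for _ in range(local_steps):          # local updates, no comms
                xm = xm - eta * grad_x(m, xm, ym, rng)  # descent in x
                ym = ym + eta * grad_y(m, xm, ym, rng)  # ascent in y
            xs.append(xm)
            ys.append(ym)
        x = np.mean(xs, axis=0)                   # server averaging
        y = np.mean(ys, axis=0)
    return x, y
```

On a strongly-convex-strongly-concave toy saddle problem (saddle point at the origin), the averaged iterates converge to a small neighborhood of the saddle despite per-sample noise; the averaging step also shrinks the noise by roughly the square root of the client count.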
no code implementations • 30 Jun 2021 • Meng Zhang, Ermin Wei, Randall Berry
To this end, we formulate the first faithful-implementation problem for federated learning and design two faithful federated learning mechanisms that achieve desirable economic properties, scalability, and privacy.
1 code implementation • 10 Aug 2020 • Shuyue Lan, Zhilu Wang, Amit K. Roy-Chowdhury, Ermin Wei, Qi Zhu
In many intelligent systems, a network of agents collaboratively perceives the environment for better and more efficient situation awareness.