no code implementations • 19 Mar 2024 • Cheng-Long Wang, Qi Li, Zihang Xiang, Yinzhi Cao, Di Wang
Our analysis, conducted across multiple unlearning benchmarks, reveals that these algorithms inconsistently fulfill their unlearning commitments due to two main issues: 1) unlearning new data can significantly affect the unlearning utility of previously requested data, and 2) approximate algorithms fail to ensure equitable unlearning utility across different groups.
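To make the first issue concrete, below is a minimal, hypothetical audit-loop sketch: a toy stand-in for an approximate unlearner (the `ToyApproxUnlearner` class, `forget_prob`, and `relapse_prob` are all illustrative assumptions, not any paper's mechanism) is re-audited on every earlier request after each new one, which is the measurement pattern behind the finding above.

```python
import random

random.seed(0)

class ToyApproxUnlearner:
    """Hypothetical stand-in for an approximate unlearning algorithm: each
    request erases points only with probability `forget_prob`, and every
    later update restores a small fraction of previously erased points
    (a crude model of interference between requests, assumed here)."""

    def __init__(self, data_ids, forget_prob=0.9, relapse_prob=0.05):
        self.memorized = set(data_ids)
        self.erased = set()
        self.forget_prob = forget_prob
        self.relapse_prob = relapse_prob

    def unlearn(self, request_ids):
        for i in request_ids:
            if random.random() < self.forget_prob:
                self.memorized.discard(i)
                self.erased.add(i)
        # Later updates partially undo earlier erasures (assumption).
        for i in list(self.erased):
            if random.random() < self.relapse_prob:
                self.memorized.add(i)

def residual_membership(model, request_ids):
    # Fraction of a request still memorized: a crude proxy for a
    # membership-inference-based unlearning audit.
    return sum(i in model.memorized for i in request_ids) / len(request_ids)

model = ToyApproxUnlearner(range(1000))
requests = [list(range(k * 50, (k + 1) * 50)) for k in range(4)]
for t, req in enumerate(requests):
    model.unlearn(req)
    # Re-audit every earlier request after each new one; residual scores
    # that drift upward indicate the interference described above.
    for s in range(t + 1):
        print(f"after request {t}: request {s} residual "
              f"{residual_membership(model, requests[s]):.2f}")
```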
no code implementations • 17 Feb 2024 • Shu Yang, Muhammad Asif Ali, Cheng-Long Wang, Lijie Hu, Di Wang
Adapting large language models (LLMs) to new domains/tasks and enabling them to be efficient lifelong learners is a pivotal challenge.
no code implementations • 19 Jan 2024 • Youming Tao, Cheng-Long Wang, Miao Pan, Dongxiao Yu, Xiuzhen Cheng, Di Wang
We start by giving a rigorous definition of \textit{exact} federated unlearning, which guarantees that the unlearned model is statistically indistinguishable from the one trained without the deleted data.
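In symbols, one natural way to state this definition (the notation here is assumed for illustration, not taken from the paper): with learning algorithm $\mathcal{A}$, unlearning mechanism $\mathcal{U}$, dataset $D$, and deletion set $S \subseteq D$,

```latex
% Exact unlearning: the unlearned model and a model retrained from
% scratch on the remaining data are identically distributed.
\mathcal{U}\bigl(\mathcal{A}(D),\, D,\, S\bigr) \;\stackrel{d}{=}\; \mathcal{A}(D \setminus S)
```

i.e., no statistical test can distinguish the unlearned model from one that never saw the deleted data.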
no code implementations • 12 Oct 2023 • Hanpu Shen, Cheng-Long Wang, Zihang Xiang, Yiming Ying, Di Wang
This paper focuses on the problem of Differentially Private Stochastic Optimization for (multi-layer) fully connected neural networks with a single output node.
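For orientation, here is a minimal sketch of generic DP-SGD (per-example gradient clipping plus Gaussian noise) on a toy single-output linear model; this is the standard mechanism, not the paper's specific algorithm or analysis, and the hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data; a linear model stands in for the fully connected
# networks studied in the paper (simplifying assumption).
X = rng.normal(size=(256, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=256)

w = np.zeros(10)
clip_norm, noise_mult, lr, batch = 1.0, 1.1, 0.1, 32

for step in range(200):
    idx = rng.choice(len(X), size=batch, replace=False)
    # Per-example gradients of the squared loss, clipped to bound the
    # sensitivity of the summed gradient.
    residual = X[idx] @ w - y[idx]
    grads = residual[:, None] * X[idx]                # shape (batch, dim)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Gaussian noise calibrated to the clipping bound; the privacy
    # accounting across steps is omitted in this sketch.
    noisy = grads.sum(0) + noise_mult * clip_norm * rng.normal(size=10)
    w -= lr * noisy / batch

print("parameter error:", np.linalg.norm(w - w_true))
```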
1 code implementation • 6 Apr 2023 • Cheng-Long Wang, Mengdi Huai, Di Wang
To extend machine unlearning to graph data, \textit{GraphEraser} has been proposed.
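The core idea behind such partition-based unlearning can be sketched as follows: shard the graph, train one sub-model per shard, and on a deletion request retrain only the affected shard. Everything below is a deliberately simplified assumption: the random partition and majority-label "model" stand in for GraphEraser's balanced graph partitioning, GNN sub-models, and learned aggregation.

```python
from collections import Counter
import random

random.seed(0)
labels = {n: random.randint(0, 1) for n in range(100)}   # toy node labels
shards = [set(range(i, 100, 5)) for i in range(5)]       # placeholder partition

def train(shard):
    # Trivial per-shard "model": the majority label (placeholder for a GNN).
    return Counter(labels[n] for n in shard).most_common(1)[0][0]

models = [train(s) for s in shards]

def unlearn_node(node):
    # Deletion touches exactly one shard, so only that sub-model retrains.
    for k, shard in enumerate(shards):
        if node in shard:
            shard.remove(node)
            del labels[node]
            models[k] = train(shard)
            return k

print("retrained shard:", unlearn_node(7))
print("aggregate prediction:", Counter(models).most_common(1)[0][0])
```

The design point is the cost asymmetry: retraining one shard is far cheaper than retraining the whole model, which is what makes deletion requests tractable.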
no code implementations • 26 Feb 2022 • Junren Chen, Cheng-Long Wang, Michael K. Ng, Di Wang
In the heavy-tailed regime, while the rates of our estimators are essentially slower, these results are either the first in a 1-bit quantized and heavy-tailed setting or improve on existing comparable results in some respects.
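A standard building block in this line of work is uniformly dithered 1-bit quantization of truncated heavy-tailed data, and a short numerical check illustrates why it preserves first-moment information. The sketch below is generic (the truncation threshold and dither level are arbitrary assumptions, not the paper's choices): for $|x| \le \Delta$ and dither $\tau \sim \mathrm{Unif}[-\Delta, \Delta]$, the 1-bit measurement $\Delta \cdot \mathrm{sign}(x + \tau)$ is unbiased for $x$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heavy-tailed samples (Student-t, 3 degrees of freedom), truncated to a
# bounded range before quantization; the threshold is chosen ad hoc here.
x = rng.standard_t(df=3, size=200_000)
tau = 5.0
x_trunc = np.clip(x, -tau, tau)

# Uniform dither on [-Delta, Delta] with Delta >= the truncation level
# makes Delta * sign(x + dither) an unbiased 1-bit estimate of x.
delta = tau
dither = rng.uniform(-delta, delta, size=x.shape)
q = delta * np.sign(x_trunc + dither)

print("mean of truncated data:", x_trunc.mean())
print("mean from 1-bit samples:", q.mean())
```

The two printed means agree up to sampling noise, even though each observation was compressed to a single bit.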