1 code implementation • 9 Oct 2023 • Shaopeng Fu, Di Wang
Adversarial training (AT) is a canonical method for enhancing the robustness of deep neural networks (DNNs).
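The min-max idea behind adversarial training can be sketched on a toy model. Below, a logistic-regression "network" is trained on FGSM-perturbed inputs (one sign-gradient step per example); the data, model, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal sketch of adversarial training (AT): an inner FGSM-style
# attack maximizes the loss within an eps-ball, and the outer update
# minimizes the loss on the perturbed batch. Toy logistic regression;
# all specifics here are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)          # linearly separable toy labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(5)
eps, lr = 0.1, 0.5
for _ in range(200):
    # inner maximization: one FGSM step (sign of the input gradient)
    X_adv = X + eps * np.sign((sigmoid(X @ w) - y)[:, None] * w[None, :])
    # outer minimization: gradient step on the adversarial examples
    g = X_adv.T @ (sigmoid(X_adv @ w) - y) / len(y)
    w -= lr * g

acc = ((X @ w > 0) == (y > 0.5)).mean()     # clean training accuracy
```

With a small `eps` the perturbed examples still carry the label signal, so the robustly trained model also fits the clean data.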
2 code implementations • 28 Mar 2022 • Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, Dacheng Tao
To address this concern, methods have been proposed that make data unlearnable for deep learning models by adding a type of error-minimizing noise.
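Error-minimizing noise is the opposite of an adversarial attack: each example is perturbed to *minimize* the training loss (a min-min problem), so the model sees "too easy" data and learns little. A hedged toy sketch, with the model and all hyperparameters as illustrative assumptions:

```python
import numpy as np

# Sketch of error-minimizing ("unlearnable") noise: for a fixed model,
# projected gradient descent pushes each input's perturbation delta to
# REDUCE the training loss, staying inside an eps-ball. Toy logistic
# model; not the paper's algorithm.

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100).astype(float)
w = rng.normal(size=5)                       # fixed "model" for the sketch

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(Xp):
    p = np.clip(sigmoid(Xp @ w), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

eps, delta = 0.5, np.zeros_like(X)
for _ in range(50):
    # gradient of the per-example loss w.r.t. the perturbed inputs
    g = (sigmoid((X + delta) @ w) - y)[:, None] * w[None, :]
    delta = np.clip(delta - 0.1 * g, -eps, eps)   # descend, project to eps-ball

clean_loss = loss(X)
noisy_loss = loss(X + delta)                 # strictly lower after min-min steps
```

The perturbed data yields a lower loss than the clean data for the same model, which is exactly what makes such examples uninformative to train on.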
1 code implementation • ICLR 2022 • Shaopeng Fu, Fengxiang He, Dacheng Tao
In this paper, we propose the first machine unlearning algorithm for MCMC.
1 code implementation • 16 Jan 2021 • Shaopeng Fu, Fengxiang He, Yue Xu, Dacheng Tao
This paper proposes a Bayesian inference forgetting (BIF) framework to realize the right to be forgotten in Bayesian inference.
1 code implementation • 25 Dec 2020 • Fengxiang He, Shaopeng Fu, Bohan Wang, Dacheng Tao
This measure can be approximated empirically by an asymptotically consistent estimator, the empirical robustified intensity.
1 code implementation • 12 Nov 2020 • Zeke Xie, Fengxiang He, Shaopeng Fu, Issei Sato, Dacheng Tao, Masashi Sugiyama
Thus it motivates us to design a similar mechanism named artificial neural variability (ANV), which helps artificial neural networks learn some advantages from "natural" neural networks.
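The core mechanism can be illustrated by injecting small random perturbations into the weights at each training step, loosely mimicking biological neural variability. The model, noise scale, and training loop below are illustrative assumptions, not the paper's ANV algorithm:

```python
import numpy as np

# Hedged sketch of weight-noise injection during training: at each
# step the gradient is evaluated at randomly perturbed weights, which
# adds variability to the optimization trajectory. Toy linear
# regression; all specifics are illustrative assumptions.

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)  # noisy linear targets

w, lr, sigma = np.zeros(5), 0.05, 0.01
for _ in range(300):
    w_noisy = w + sigma * rng.normal(size=5)   # variability: perturb weights
    g = X.T @ (X @ w_noisy - y) / len(y)       # gradient at the noisy weights
    w -= lr * g

mse = np.mean((X @ w - y) ** 2)               # converges despite the jitter
```

With a small noise scale the perturbed gradients still average out to the true descent direction, so training converges while the trajectory gains the extra variability.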