Search Results for author: Shaopeng Fu

Found 7 papers, 6 papers with code

Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach

1 code implementation • 9 Oct 2023 • Shaopeng Fu, Di Wang

Adversarial training (AT) is a canonical method for enhancing the robustness of deep neural networks (DNNs).
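
As a rough illustration of what adversarial training does (not the paper's NTK analysis itself), a minimal PGD-based AT step in PyTorch might look like the sketch below; the attack budget, step size, and iteration count are placeholder values, not taken from the paper.

    # Hedged sketch of PGD-based adversarial training (illustrative hyperparameters).
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        """Inner maximization: craft L-inf adversarial examples around x."""
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
                x_adv = x_adv.clamp(0, 1)
            x_adv = x_adv.detach()
        return x_adv

    def adversarial_training_step(model, optimizer, x, y):
        """Outer minimization: update the model on the adversarial batch."""
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()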

Robust Unlearnable Examples: Protecting Data Against Adversarial Learning

2 code implementations • 28 Mar 2022 • Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, DaCheng Tao

To address this concern, methods are proposed to make data unlearnable for deep learning models by adding a type of error-minimizing noise.
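
A minimal sketch of the error-minimizing-noise idea (the general unlearnable-examples recipe, not this paper's robust variant, which additionally hardens the noise against adversarial training) might look like the following in PyTorch; the noise budget and step sizes are illustrative assumptions.

    # Hedged sketch of error-minimizing noise (illustrative hyperparameters).
    import torch
    import torch.nn.functional as F

    def error_minimizing_noise(model, x, y, eps=8/255, alpha=2/255, steps=20):
        """Craft bounded per-sample noise delta that *minimizes* the training loss,
        so the perturbed pairs (x + delta, y) carry little learnable signal."""
        delta = torch.zeros_like(x)
        for _ in range(steps):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta = delta - alpha * grad.sign()   # descend (not ascend) the loss
                delta = delta.clamp(-eps, eps)        # keep the noise bounded
                delta = (x + delta).clamp(0, 1) - x   # keep the perturbed image valid
            delta = delta.detach()
        return delta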

Robust Unlearnable Examples: Protecting Data Privacy Against Adversarial Learning

no code implementations • ICLR 2022 • Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, DaCheng Tao

To address this concern, methods are proposed to make data unlearnable for deep learning models by adding a type of error-minimizing noise.

Bayesian Inference Forgetting

1 code implementation • 16 Jan 2021 • Shaopeng Fu, Fengxiang He, Yue Xu, DaCheng Tao

This paper proposes a Bayesian inference forgetting (BIF) framework to realize the right to be forgotten in Bayesian inference.

Bayesian Inference • Variational Inference

Robustness, Privacy, and Generalization of Adversarial Training

1 code implementation • 25 Dec 2020 • Fengxiang He, Shaopeng Fu, Bohan Wang, DaCheng Tao

This measure can be approximated empirically by an asymptotically consistent estimator, the empirical robustified intensity.

Generalization Bounds • Privacy Preserving

Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting

1 code implementation • 12 Nov 2020 • Zeke Xie, Fengxiang He, Shaopeng Fu, Issei Sato, DaCheng Tao, Masashi Sugiyama

Thus it motivates us to design a similar mechanism named artificial neural variability (ANV), which helps artificial neural networks learn some advantages from "natural" neural networks.

Memorization
