no code implementations • 28 May 2024 • Yingwen Wu, Ruiji Yu, Xinwen Cheng, Zhengbao He, Xiaolin Huang
In the open world, detecting out-of-distribution (OOD) data, whose labels are disjoint from those of in-distribution (ID) samples, is important for reliable deep neural networks (DNNs).
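As background (not this paper's method), a minimal sketch of the common maximum-softmax-probability baseline for OOD detection: inputs whose top-class confidence falls below a threshold are flagged as OOD. The function name, `model`, and `threshold` below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def msp_ood_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability: lower values suggest OOD inputs."""
    model.eval()
    with torch.no_grad():
        logits = model(x)
    return F.softmax(logits, dim=-1).max(dim=-1).values

# Usage sketch: flag inputs as OOD when confidence falls below a threshold
# tuned on held-out ID/OOD data, e.g.:
#   is_ood = msp_ood_score(model, batch) < 0.5
```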
no code implementations • 24 May 2024 • Zhengbao He, Tao Li, Xinwen Cheng, Zhehao Huang, Xiaolin Huang
Towards more natural machine unlearning, we inject correct information from the remaining data into the forgetting samples when changing their labels.
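One way to read this idea, as a hypothetical sketch rather than the paper's actual procedure: transplant each forgetting sample's new label from its nearest neighbor in the remaining data, so the changed label carries correct, data-supported information instead of noise. All names below are illustrative.

```python
import torch

def relabel_with_remaining(feats_forget, feats_remain, labels_remain):
    """Hypothetical sketch: assign each forgetting sample the label of its
    nearest remaining-data neighbor in feature space, so the new label
    injects correct information from the remaining data."""
    # Pairwise distances between forgetting and remaining features.
    d = torch.cdist(feats_forget, feats_remain)   # (n_forget, n_remain)
    nn_idx = d.argmin(dim=1)                      # nearest remaining sample
    return labels_remain[nn_idx]                  # transplanted labels

# Fine-tuning on (forgetting inputs, transplanted labels) would then overwrite
# the memorized mappings with plausible ones rather than arbitrary wrong labels.
```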
1 code implementation • 19 Mar 2024 • Tao Li, Pan Zhou, Zhengbao He, Xinwen Cheng, Xiaolin Huang
By decomposing the adversarial perturbation in SAM into full-gradient and stochastic-gradient-noise components, we discover that relying solely on the full-gradient component degrades generalization, while excluding it improves performance.
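A hedged sketch of this decomposition: since the full gradient is unavailable at each mini-batch step, the sketch approximates it with an exponential moving average (EMA) of batch gradients, an assumption of this illustration rather than a confirmed detail of the paper, and builds the SAM-style perturbation from the residual noise component only.

```python
import torch

def noise_only_perturbation(grads, ema_grads, rho=0.05, beta=0.9):
    """Sketch: perturb along the stochastic-noise component of the batch
    gradient. The full gradient is approximated by an EMA of past batch
    gradients -- an assumption of this sketch."""
    noise = []
    for g, m in zip(grads, ema_grads):
        m.mul_(beta).add_(g, alpha=1 - beta)   # update EMA (full-grad estimate)
        noise.append(g - m)                    # stochastic noise component
    norm = torch.sqrt(sum((n ** 2).sum() for n in noise)) + 1e-12
    return [rho * n / norm for n in noise]     # SAM-style ascent direction

# Applying these perturbations to the parameters, recomputing the loss, and
# stepping with the resulting gradient follows the usual two-step SAM recipe.
```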
no code implementations • 23 Feb 2023 • Zhengbao He, Tao Li, Sizhe Chen, Xiaolin Huang
Based on self-fitting, we provide new insights into existing methods for mitigating catastrophic overfitting (CO) and extend CO to multi-step adversarial training.
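For context, CO arises in single-step (FGSM) adversarial training, where robust accuracy against multi-step attacks suddenly collapses. A minimal sketch of that baseline setting follows; it is not a CO remedy and not the paper's method, and the names and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_at_step(model, x, y, eps, optimizer):
    """One step of single-step (FGSM) adversarial training, the setting in
    which catastrophic overfitting is observed. Minimal sketch only."""
    delta = torch.zeros_like(x, requires_grad=True)
    F.cross_entropy(model(x + delta), y).backward()
    # FGSM perturbation: a single signed-gradient step of size eps.
    adv = (x + eps * delta.grad.sign()).clamp(0, 1).detach()
    optimizer.zero_grad()                     # clear grads from the attack pass
    F.cross_entropy(model(adv), y).backward() # train on adversarial examples
    optimizer.step()
```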
1 code implementation • 26 May 2022 • Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, Xiaolin Huang, Chih-Jen Lin
Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance.
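A minimal sketch of the general idea, assuming an orthonormal subspace basis is already available; how that basis is built (e.g., from past training trajectories) is outside this sketch, and the names are illustrative.

```python
import torch

def subspace_grad_step(theta, grad, basis, lr=0.1):
    """Sketch: constrain the update to the k-dimensional subspace spanned by
    the columns of `basis` (d x k, assumed orthonormal)."""
    coords = basis.T @ grad   # project the full gradient to k coordinates
    step = basis @ coords     # map the k-dim step back to R^d
    return theta - lr * step  # the update never leaves the subspace

# With k << d, the effective search space (and any optimizer state kept in
# subspace coordinates) shrinks from d to k dimensions, while theta itself
# remains full-dimensional.
```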
no code implementations • 16 Jan 2020 • Sizhe Chen, Zhengbao He, Chengjin Sun, Jie Yang, Xiaolin Huang
AoA enjoys a significant increase in transferability when the traditional cross-entropy loss is replaced with the attention loss.
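As a hypothetical illustration of swapping cross-entropy for an attention-targeted objective: the saliency proxy below (the input gradient of the true-class logit) is an assumption of this sketch; AoA itself operates on network attention heat maps.

```python
import torch

def attention_attack_step(model, x, y, eps, alpha):
    """Hypothetical sketch of one attention-targeted attack step: instead of
    maximizing cross-entropy, suppress the saliency the true class places on
    the input. Uses |d logit_y / dx| as a crude attention proxy."""
    x_adv = x.clone().detach().requires_grad_(True)
    logit_y = model(x_adv).gather(1, y.unsqueeze(1)).sum()
    # True-class saliency w.r.t. the input (graph kept for a second grad).
    sal = torch.autograd.grad(logit_y, x_adv, create_graph=True)[0]
    attn_loss = sal.abs().sum()                  # total true-class "attention"
    grad = torch.autograd.grad(attn_loss, x_adv)[0]
    # Descend on the attention loss, then project into the eps-ball.
    x_adv = (x_adv - alpha * grad.sign()).detach()
    return (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
```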
1 code implementation • 16 Dec 2019 • Sizhe Chen, Xiaolin Huang, Zhengbao He, Chengjin Sun
Adversarial samples are similar to the clean ones, but are able to cheat the attacked DNN into producing incorrect predictions with high confidence.
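For illustration, a standard PGD sketch (not this paper's contribution) showing how a small L-infinity budget keeps adversarial samples visually close to clean ones while still flipping predictions; the hyperparameter defaults are conventional choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard PGD sketch: perturbations bounded by eps in L-infinity stay
    visually similar to the clean input yet can change the prediction."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv
```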