no code implementations • 29 Sep 2024 • Zhehao Huang, Xinwen Cheng, JingHao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang
Approximate machine unlearning (MU) is a practical approach for large-scale models.
no code implementations • 28 May 2024 • Yingwen Wu, Ruiji Yu, Xinwen Cheng, Zhengbao He, Xiaolin Huang
In the open world, detecting out-of-distribution (OOD) data, whose labels are disjoint from those of in-distribution (ID) samples, is important for reliable deep neural networks (DNNs).
no code implementations • 24 May 2024 • Zhengbao He, Tao Li, Xinwen Cheng, Zhehao Huang, Xiaolin Huang
Towards more natural machine unlearning, we inject correct information from the remaining data into the forgetting samples when changing their labels.
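A minimal sketch of this relabeling idea in PyTorch (the exact rule for constructing the new targets is an assumption here; `relabel_forget_samples` and `unlearn_step` are illustrative helpers, not the paper's implementation):

```python
# Hedged sketch: relabel forgetting samples with model-derived (rather than random)
# targets before fine-tuning. The paper's actual relabeling rule may differ.
import torch
import torch.nn.functional as F

def relabel_forget_samples(model, forget_x, forget_y):
    """Assign each forgetting sample the model's most confident class other than
    its original label, injecting knowledge learned from the remaining data."""
    model.eval()
    with torch.no_grad():
        logits = model(forget_x)                                   # (N, num_classes)
        logits.scatter_(1, forget_y.unsqueeze(1), float("-inf"))   # mask the true class
        new_y = logits.argmax(dim=1)                               # runner-up prediction
    return new_y

def unlearn_step(model, optimizer, forget_x, new_y):
    """One fine-tuning step on the relabeled forgetting samples."""
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(forget_x), new_y)
    loss.backward()
    optimizer.step()
    return loss.item()
```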
1 code implementation • CVPR 2024 • Tao Li, Pan Zhou, Zhengbao He, Xinwen Cheng, Xiaolin Huang
By decomposing the adversarial perturbation in SAM into a full-gradient component and a stochastic gradient-noise component, we find that relying solely on the full-gradient component degrades generalization, while excluding it improves performance.
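A rough illustration of this decomposition is sketched below. The full gradient is approximated by an exponential moving average of mini-batch gradients; that estimator, and the helper names, are assumptions of this sketch rather than the paper's method.

```python
# Hedged sketch: split SAM's perturbation direction into an (estimated) full-gradient
# part and a stochastic-noise part. EMA-based full-gradient estimation is an assumption.
import torch

class PerturbationDecomposer:
    def __init__(self, params, ema_decay=0.9):
        self.ema = [torch.zeros_like(p) for p in params]   # running full-gradient estimate
        self.decay = ema_decay

    def split(self, grads):
        """Return (full_grad_estimate, stochastic_noise) for the current mini-batch grads."""
        full, noise = [], []
        for ema, g in zip(self.ema, grads):
            ema.mul_(self.decay).add_(g, alpha=1 - self.decay)
            full.append(ema.clone())
            noise.append(g - ema)
        return full, noise

def sam_perturbation(component, rho=0.05):
    """Scale a gradient component to norm rho, as in SAM's ascent step."""
    norm = torch.sqrt(sum((c ** 2).sum() for c in component)) + 1e-12
    return [rho * c / norm for c in component]
```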
no code implementations • 23 Feb 2024 • Xinwen Cheng, Zhehao Huang, WenXin Zhou, Zhengbao He, Ruikai Yang, Yingwen Wu, Xiaolin Huang
We first show theoretically that a sample's contribution during training is reflected in the learned model's sensitivity to it.
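A hedged sketch of what such a sensitivity measure could look like, using the input-gradient norm of the loss as a stand-in; the paper's actual measure may differ:

```python
# Hedged sketch: score a sample's contribution by the trained model's sensitivity to it,
# approximated here as the norm of the input gradient of its loss (a proxy, not the
# paper's exact quantity).
import torch
import torch.nn.functional as F

def sample_sensitivity(model, x, y):
    """Return one input-gradient-norm score per sample in the batch."""
    model.eval()
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y, reduction="sum")
    grad, = torch.autograd.grad(loss, x)
    return grad.flatten(1).norm(dim=1)
```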
no code implementations • 26 Oct 2023 • Yingwen Wu, Tao Li, Xinwen Cheng, Jie Yang, Xiaolin Huang
To bridge this gap, in this paper, we conduct a comprehensive investigation into leveraging the entirety of gradient information for OOD detection.
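One way to use gradient information from every layer is a GradNorm-style score; the sketch below is an illustrative stand-in under that assumption, not necessarily the scoring rule proposed in the paper:

```python
# Hedged sketch: GradNorm-style OOD score aggregated over gradients of ALL parameters.
# The backward signal is the KL divergence between the softmax output and a uniform
# distribution; the paper's exact use of the full gradient may differ.
import torch
import torch.nn.functional as F

def full_gradient_ood_score(model, x, num_classes):
    """x is assumed to be a single input of shape (1, ...)."""
    model.zero_grad()
    logits = model(x)
    uniform = torch.full_like(logits, 1.0 / num_classes)
    loss = F.kl_div(F.log_softmax(logits, dim=1), uniform, reduction="batchmean")
    loss.backward()
    # Aggregate gradient magnitude from every parameter tensor (all layers).
    score = sum(p.grad.abs().sum().item() for p in model.parameters() if p.grad is not None)
    model.zero_grad()
    return score   # in GradNorm-style detectors, lower scores are flagged as OOD
```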
1 code implementation • 22 Nov 2022 • Sizhe Chen, Geng Yuan, Xinwen Cheng, Yifan Gong, Minghai Qin, Yanzhi Wang, Xiaolin Huang
In this paper, we uncover them via model checkpoints' gradients, forming the proposed self-ensemble protection (SEP). SEP is highly effective because (1) learning on examples ignored during normal training tends to yield DNNs that ignore normal examples, and (2) checkpoints' cross-model gradients are close to orthogonal, meaning they are as diverse as DNNs with different architectures.
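A minimal sketch of crafting protective perturbations with an ensemble of training checkpoints (an error-minimizing descent step and the name `self_ensemble_perturb` are assumptions of this sketch, not the paper's exact procedure):

```python
# Hedged sketch: ensemble the loss gradients of several checkpoints from one training run
# to craft an "unlearnable" (error-minimizing) perturbation of the data.
import torch
import torch.nn.functional as F

def self_ensemble_perturb(checkpoints, x, y, eps=8 / 255, steps=10):
    """checkpoints: models (in eval mode) saved at different epochs of one training run."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        # Average the loss gradient over all checkpoints: the self-ensemble.
        loss = sum(F.cross_entropy(m(x + delta), y) for m in checkpoints) / len(checkpoints)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Descend the loss so the protected examples look "already learned".
            delta = (delta - (eps / steps) * grad.sign()).clamp_(-eps, eps)
    return (x + delta).detach()
```

The gradient averaging is where the near-orthogonality of checkpoint gradients would matter: diverse directions make the resulting perturbation transfer across models.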
no code implementations • 27 Sep 2022 • Zhixing Ye, Xinwen Cheng, Xiaolin Huang
Deep Neural Networks (DNNs) are susceptible to elaborately designed perturbations, whether such perturbations are dependent on or independent of images.