no code implementations • 22 May 2025 • Zhehao Huang, Yuhang Liu, Yixin Lou, Zhengbao He, Mingzhen He, Wenxing Zhou, Tao Li, Kehan Li, Zeyi Huang, Xiaolin Huang
To address this, we introduce T2I-ConBench, a unified benchmark for continual post-training of text-to-image models.
no code implementations • 21 May 2025 • Zhehao Huang, Xinwen Cheng, Jie Zhang, JingHao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang
Recent advancements in deep models have highlighted the need for intelligent systems that combine continual learning (CL) for knowledge acquisition with machine unlearning (MU) for data removal, forming the Continual Learning-Unlearning (CLU) paradigm.
1 code implementation • 11 Oct 2024 • Ruikai Yang, Mingzhen He, Zhengbao He, Youmei Qiu, Xiaolin Huang
For today's over-parameterized models, dominated by neural networks, a common approach is to manually relabel the data and fine-tune the well-trained model.
2 code implementations • 29 Sep 2024 • Zhehao Huang, Xinwen Cheng, JingHao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang
Approximate machine unlearning (MU) is a practical approach for large-scale models.
no code implementations • 22 Sep 2024 • Tao Li, Zhengbao He, YuJun Li, Yasheng Wang, Lifeng Shang, Xiaolin Huang
Fine-tuning large-scale pre-trained models is prohibitively expensive in terms of computational and memory costs.
no code implementations • 28 May 2024 • Yingwen Wu, Ruiji Yu, Xinwen Cheng, Zhengbao He, Xiaolin Huang
In the open world, detecting out-of-distribution (OOD) data, whose labels are disjoint with those of in-distribution (ID) samples, is important for reliable deep neural networks (DNNs).
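As a concrete illustration of the detection task (the standard maximum-softmax-probability baseline of Hendrycks & Gimpel, not this paper's method), the sketch below scores each input by the model's top softmax confidence; `model` and the threshold value are placeholders.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    """Maximum softmax probability: higher means more ID-like."""
    return F.softmax(model(x), dim=-1).max(dim=-1).values

def is_ood(model, x, threshold=0.5):
    """Flag inputs whose top confidence falls below a chosen threshold."""
    return msp_score(model, x) < threshold
```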
no code implementations • 24 May 2024 • Zhengbao He, Tao Li, Xinwen Cheng, Zhehao Huang, Xiaolin Huang
Towards more natural machine unlearning, we inject correct information from the remaining data into the forgetting samples when changing their labels.
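A minimal sketch of one plausible reading of this relabeling idea: assign each forgetting sample the model's most confident class other than its true label, so the new target carries information learned from the remaining data. The helper below is hypothetical, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def relabel_forget_batch(model, x, y_forget):
    """Replace each forgetting label with the model's most confident
    *other* class, so the new target carries information learned from
    the remaining data rather than random noise."""
    logits = model(x)
    logits.scatter_(1, y_forget.unsqueeze(1), float("-inf"))  # mask true class
    return logits.argmax(dim=1)  # runner-up class as the new label
```

Fine-tuning on the relabeled forgetting samples, optionally mixed with remaining data, then overwrites the forgotten association with plausible information instead of noise.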
1 code implementation • CVPR 2024 • Tao Li, Pan Zhou, Zhengbao He, Xinwen Cheng, Xiaolin Huang
By decomposing the adversarial perturbation in SAM into a full gradient component and a stochastic gradient noise component, we find that relying solely on the full gradient component degrades generalization, while excluding it improves performance.
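A rough sketch of this decomposition under one common assumption: the full gradient is approximated by an exponential moving average (EMA) of past batch gradients, and only the residual noise drives the SAM ascent step. The exact estimator in the paper may differ.

```python
import torch

def noise_ascent_step(batch_grads, ema_grads, rho=0.05, eps=1e-12):
    """SAM-style ascent direction built from the stochastic gradient
    noise only: batch gradient minus an EMA estimate of the full
    gradient, rescaled to length rho."""
    noise = [g - m for g, m in zip(batch_grads, ema_grads)]
    norm = torch.sqrt(sum((n ** 2).sum() for n in noise)) + eps
    return [rho * n / norm for n in noise]
```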
no code implementations • 23 Feb 2024 • Xinwen Cheng, Zhehao Huang, WenXin Zhou, Zhengbao He, Ruikai Yang, Yingwen Wu, Xiaolin Huang
We first theoretically show that a sample's contribution during training is reflected in the learned model's sensitivity to that sample.
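One simple way to instantiate "sensitivity" (an assumption for illustration, not necessarily the paper's measure) is the per-sample norm of the gradient of the model's output with respect to the input:

```python
import torch

def input_sensitivity(model, x):
    """Per-sample sensitivity proxy: norm of the gradient of the
    model's summed outputs with respect to the input itself."""
    x = x.detach().clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad.flatten(1).norm(dim=1)
```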
no code implementations • 23 Feb 2023 • Zhengbao He, Tao Li, Sizhe Chen, Xiaolin Huang
Based on self-fitting, we provide new insights into existing methods for mitigating catastrophic overfitting (CO) and extend CO to multi-step adversarial training.
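For context, here is a minimal single-step (FGSM) adversarial training step, the regime in which catastrophic overfitting classically appears; in practice one monitors multi-step (PGD) robustness during training to catch its sudden collapse. Hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_at_step(model, optimizer, x, y, eps=8/255):
    """One step of single-step (FGSM) adversarial training."""
    # Craft the single-step perturbation.
    x_pert = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_pert), y)
    grad = torch.autograd.grad(loss, x_pert)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    # Train on the adversarial batch.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```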
1 code implementation • 26 May 2022 • Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, Xiaolin Huang, Chih-Jen Lin
Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance.
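A minimal sketch of subspace training under the paper's general recipe: extract a low-dimensional basis (e.g., via PCA over checkpoints sampled along the training trajectory) and project each gradient onto it before stepping. The flattened-vector interface is a simplification.

```python
import torch

def project_gradient(grad_vec, basis):
    """Project a flattened gradient onto a low-dimensional subspace.
    `basis` is a (dim, num_params) matrix with orthonormal rows, e.g.
    obtained by PCA over checkpoints from the training trajectory."""
    coords = basis @ grad_vec      # coordinates in the subspace
    return basis.t() @ coords      # projected gradient in full space

# Usage: replace each raw gradient with its projection before the
# optimizer step, so updates stay inside the chosen subspace.
```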
no code implementations • 16 Jan 2020 • Sizhe Chen, Zhengbao He, Chengjin Sun, Jie Yang, Xiaolin Huang
AoA enjoys a significant increase in transferability when the traditional cross-entropy loss is replaced with the attention loss.
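To make the loss swap concrete, here is a generic PGD-style loop with a pluggable attack loss; the cross-entropy callable is the baseline, and an attention loss (whose form depends on how attention maps are extracted from the model) would replace it. Step sizes and budgets are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, loss_fn, steps=10, eps=8/255, alpha=2/255):
    """Generic PGD loop with a pluggable attack loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model, x_adv, y), x_adv)[0]
        with torch.no_grad():
            x_adv = x + (x_adv + alpha * grad.sign() - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)  # stay in the valid image range
    return x_adv.detach()

# Baseline loss; an attention-based loss would replace this callable.
ce_loss = lambda model, x, y: F.cross_entropy(model(x), y)
```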
1 code implementation • 16 Dec 2019 • Sizhe Chen, Xiaolin Huang, Zhengbao He, Chengjin Sun
Adversarial samples are similar to clean ones but can fool the attacked DNN into producing incorrect predictions with high confidence.