Search Results for author: Zhengbao He

Found 13 papers, 5 papers with code

T2I-ConBench: Text-to-Image Benchmark for Continual Post-training

no code implementations • 22 May 2025 • Zhehao Huang, Yuhang Liu, Yixin Lou, Zhengbao He, Mingzhen He, Wenxing Zhou, Tao Li, Kehan Li, Zeyi Huang, Xiaolin Huang

To address this, we introduce T2I-ConBench, a unified benchmark for continual post-training of text-to-image models.

A Unified Gradient-based Framework for Task-agnostic Continual Learning-Unlearning

no code implementations • 21 May 2025 • Zhehao Huang, Xinwen Cheng, Jie Zhang, JingHao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang

Recent advancements in deep models have highlighted the need for intelligent systems that combine continual learning (CL) for knowledge acquisition with machine unlearning (MU) for data removal, forming the Continual Learning-Unlearning (CLU) paradigm.

Continual Learning, Incremental Learning +1

MUSO: Achieving Exact Machine Unlearning in Over-Parameterized Regimes

1 code implementation • 11 Oct 2024 • Ruikai Yang, Mingzhen He, Zhengbao He, Youmei Qiu, Xiaolin Huang

In today's over-parameterized regimes, dominated by neural networks, a common approach is to manually relabel the data to be forgotten and then fine-tune the well-trained model.

Machine Unlearning, Model Optimization
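
The snippet above refers to the common relabel-and-fine-tune baseline rather than MUSO's exact-unlearning machinery, which is not reproduced here. A minimal PyTorch sketch of that baseline, assuming a trained classifier `model` and a data loader over the samples to be forgotten:

```python
import torch
import torch.nn.functional as F

def relabel_and_finetune(model, forget_loader, num_classes, lr=1e-4, epochs=1):
    """Relabel the forgetting samples at random and briefly fine-tune."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in forget_loader:
            # Draw random labels, shifting any that collide with the true one.
            wrong = torch.randint(0, num_classes, y.shape)
            wrong = torch.where(wrong == y, (wrong + 1) % num_classes, wrong)
            opt.zero_grad()
            F.cross_entropy(model(x), wrong).backward()
            opt.step()
    return model
```

Random relabeling is only one heuristic choice; per its title, MUSO's contribution is to move beyond such heuristics toward exact unlearning in the over-parameterized regime.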

Flat-LoRA: Low-Rank Adaptation over a Flat Loss Landscape

no code implementations • 22 Sep 2024 • Tao Li, Zhengbao He, YuJun Li, Yasheng Wang, Lifeng Shang, Xiaolin Huang

Fine-tuning large-scale pre-trained models is prohibitively expensive in terms of computational and memory costs.

Image Classification +1

Pursuing Feature Separation based on Neural Collapse for Out-of-Distribution Detection

no code implementations • 28 May 2024 • Yingwen Wu, Ruiji Yu, Xinwen Cheng, Zhengbao He, Xiaolin Huang

In the open world, detecting out-of-distribution (OOD) data, whose labels are disjoint from those of in-distribution (ID) samples, is important for reliable deep neural networks (DNNs).

Data Augmentation, Out-of-Distribution Detection
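
The paper's neural-collapse-based separation objective is not reproduced here; as a hedged illustration of the task itself, the sketch below scores a test sample by its feature distance to the nearest in-distribution class mean. All function names are assumptions for illustration:

```python
import torch

def class_means(features, labels, num_classes):
    """Per-class mean of ID features; returns (num_classes, D)."""
    return torch.stack([features[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def ood_score(test_features, means):
    """Distance to the nearest class mean; larger is more OOD-like."""
    return torch.cdist(test_features, means).min(dim=1).values
```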

Towards Natural Machine Unlearning

no code implementations • 24 May 2024 • Zhengbao He, Tao Li, Xinwen Cheng, Zhehao Huang, Xiaolin Huang

Towards more natural machine unlearning, we inject correct information from the remaining data into the forgetting samples when changing their labels.

Machine Unlearning
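
As a hedged sketch of one plausible reading of the abstract, the snippet below borrows each forgetting sample's replacement label from its nearest neighbor among the remaining data in feature space, so the new label carries correct information rather than random noise. The helper `encode` is a hypothetical feature extractor, not part of the paper:

```python
import torch

def natural_relabel(encode, x_forget, x_remain, y_remain):
    """Relabel forgetting samples with their nearest remaining-data
    neighbor's label in feature space."""
    f_forget = encode(x_forget)   # (Nf, D) features of forgetting samples
    f_remain = encode(x_remain)   # (Nr, D) features of remaining samples
    nearest = torch.cdist(f_forget, f_remain).argmin(dim=1)
    return y_remain[nearest]      # labels borrowed from the remaining data
```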

Friendly Sharpness-Aware Minimization

1 code implementation • CVPR 2024 • Tao Li, Pan Zhou, Zhengbao He, Xinwen Cheng, Xiaolin Huang

By decomposing the adversarial perturbation in SAM into a full-gradient component and a stochastic gradient-noise component, we discover that relying solely on the full-gradient component degrades generalization, while excluding it improves performance.
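
A minimal single-tensor sketch of this decomposition, assuming an exponential moving average of minibatch gradients as the full-gradient estimate (`ema_grad`, initialized as `torch.zeros_like(param)`); the estimator and hyperparameters are illustrative, not the paper's exact recipe:

```python
import torch

def fsam_step(param, loss_fn, ema_grad, rho=0.05, beta=0.9, lr=0.1):
    """One SAM-style step that perturbs along the noise component only."""
    grad = torch.autograd.grad(loss_fn(param), param)[0]
    ema_grad.mul_(beta).add_(grad, alpha=1.0 - beta)   # full-gradient estimate
    noise = grad - ema_grad                            # stochastic noise part
    eps = rho * noise / (noise.norm() + 1e-12)         # ascent along noise only
    with torch.no_grad():
        param.add_(eps)                                # perturb
    grad_adv = torch.autograd.grad(loss_fn(param), param)[0]
    with torch.no_grad():
        param.sub_(eps)                                # undo the perturbation
        param.sub_(lr * grad_adv)                      # descend at perturbed point
```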

Remaining-data-free Machine Unlearning by Suppressing Sample Contribution

no code implementations • 23 Feb 2024 • Xinwen Cheng, Zhehao Huang, WenXin Zhou, Zhengbao He, Ruikai Yang, Yingwen Wu, Xiaolin Huang

We first discover theoretically that a sample's contribution during training is reflected in the learned model's sensitivity to it.

Machine Unlearning
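
Taking the input-gradient norm as one concrete proxy for that sensitivity (an assumption for illustration, not necessarily the paper's measure), remaining-data-free unlearning can be sketched as fine-tuning to suppress the proxy on the forgetting samples alone:

```python
import torch

def suppress_sensitivity(model, x_forget, lr=1e-4, steps=100):
    """Fine-tune so the model's input gradient vanishes on x_forget."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        x = x_forget.clone().requires_grad_(True)
        # Input gradient of the summed outputs, kept in the graph so the
        # parameter update can differentiate through it.
        grad_in = torch.autograd.grad(model(x).sum(), x, create_graph=True)[0]
        loss = grad_in.pow(2).sum()   # sensitivity proxy to minimize
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```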

Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective

no code implementations • 23 Feb 2023 • Zhengbao He, Tao Li, Sizhe Chen, Xiaolin Huang

Based on self-fitting, we provide new insights into existing methods for mitigating catastrophic overfitting (CO) and extend CO to multi-step adversarial training.

Self-Learning
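
For background, a sketch of the fast adversarial training setting in which catastrophic overfitting arises: training uses the single-step FGSM attack, and CO manifests as accuracy against FGSM staying high while accuracy against multi-step PGD collapses. This illustrates the setting only, not the paper's self-fitting analysis:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step attack used in fast adversarial training."""
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha, steps):
    """Multi-step attack; its accuracy collapsing while FGSM accuracy
    stays high is the symptom of catastrophic overfitting."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).clamp(x - eps, x + eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```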

Trainable Weight Averaging: A General Approach for Subspace Training

1 code implementation • 26 May 2022 • Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, Xiaolin Huang, Chih-Jen Lin

Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance.

Dimensionality Reduction, Efficient Neural Network +4
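
A minimal sketch of trainable weight averaging as the title describes it: restrict optimization to the subspace spanned by historical checkpoints by learning the averaging coefficients themselves. Flattened weight vectors, softmax-normalized coefficients, and `loss_of_weights` (a differentiable loss evaluated at a weight vector) are illustrative simplifications, not the paper's exact parameterization:

```python
import torch

def twa(checkpoints, loss_of_weights, steps=100, lr=0.1):
    """Learn averaging coefficients over flattened checkpoints."""
    W = torch.stack(checkpoints)                      # (k, d) saved solutions
    alpha = torch.zeros(len(checkpoints), requires_grad=True)
    opt = torch.optim.SGD([alpha], lr=lr)
    for _ in range(steps):
        w = torch.softmax(alpha, dim=0) @ W           # point in the subspace
        loss = loss_of_weights(w)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (torch.softmax(alpha, dim=0) @ W).detach()
```

Training the k coefficients instead of the d model weights is what makes this a subspace method: the optimization problem shrinks from d dimensions to k, with k typically a handful of checkpoints.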

Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet

no code implementations • 16 Jan 2020 • Sizhe Chen, Zhengbao He, Chengjin Sun, Jie Yang, Xiaolin Huang

AoA (Attack on Attention) enjoys a significant increase in transferability when the traditional cross-entropy loss is replaced with the attention loss.

Adversarial Attack
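
A hedged sketch of attacking attention rather than the class logits: here a simple input-gradient saliency map stands in for the paper's attention extraction (which is not reproduced), and the attack ascends the discrepancy between the adversarial and clean maps:

```python
import torch
import torch.nn.functional as F

def saliency(model, x, y, create_graph=False):
    """Input-gradient magnitude, standing in for an attention map."""
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x,
                               create_graph=create_graph)[0]
    return grad.abs(), x

def attention_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    clean_attn, _ = saliency(model, x, y)
    clean_attn = clean_attn.detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        attn, x_in = saliency(model, x_adv, y, create_graph=True)
        loss = F.mse_loss(attn, clean_attn)   # attention loss, maximized
        grad = torch.autograd.grad(loss, x_in)[0]
        x_adv = (x_adv + alpha * grad.sign()).clamp(x - eps, x + eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```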

DAmageNet: A Universal Adversarial Dataset

1 code implementation • 16 Dec 2019 • Sizhe Chen, Xiaolin Huang, Zhengbao He, Chengjin Sun

Adversarial samples are similar to the clean ones, yet they fool the attacked DNN into producing incorrect predictions with high confidence.

Adversarial Attack
