1 code implementation • 23 Feb 2024 • Yihao Zhang, Hangzhou He, Jingyu Zhu, Huanran Chen, Yifei Wang, Zeming Wei
Instead of perturbing the samples, Sharpness-Aware Minimization (SAM) perturbs the model weights during training to find a flatter loss landscape and improve generalization.
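The SAM update described above can be sketched on a toy one-dimensional loss; the loss function, learning rate, and perturbation radius below are hypothetical illustrations, not the paper's setup.

```python
def grad(w):
    # gradient of the toy loss L(w) = w**2
    return 2.0 * w

def sam_step(w, lr=0.1, rho=0.05):
    g = grad(w)
    # 1) ascend to the worst-case nearby weight within radius rho
    #    (eps = rho * g / |g|, which in one dimension is rho * sign(g))
    eps = rho * (1.0 if g >= 0 else -1.0)
    # 2) descend using the gradient evaluated at the perturbed weight,
    #    so the step accounts for the sharpness of the local landscape
    return w - lr * grad(w + eps)

w = 1.0
for _ in range(100):
    w = sam_step(w)
# w settles near the minimum, oscillating within roughly rho of it
```

Because the descent gradient is taken at the perturbed point, the update is biased toward regions where the loss stays low under small weight perturbations, i.e., flat minima.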
1 code implementation • 4 Feb 2024 • Huanran Chen, Yinpeng Dong, Shitong Shao, Zhongkai Hao, Xiao Yang, Hang Su, Jun Zhu
Diffusion models have recently been employed as generative classifiers for robust classification.
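A generative classifier predicts the label whose class-conditional likelihood best explains the input. As a minimal sketch of that decision rule, the toy example below uses Gaussian likelihoods as a stand-in for a diffusion model's per-class likelihood estimate; the function names and toy densities are hypothetical, not the paper's method.

```python
import math

def log_likelihood(x, mean, var=1.0):
    # log N(x | mean, var) for a scalar feature, standing in for log p(x|y)
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def generative_classify(x, class_means, log_prior=None):
    # Bayes' rule decision: argmax_y log p(x|y) + log p(y)
    n = len(class_means)
    log_prior = log_prior or [math.log(1.0 / n)] * n
    scores = [log_likelihood(x, m) + lp for m, lp in zip(class_means, log_prior)]
    return max(range(n), key=lambda y: scores[y])

print(generative_classify(0.9, class_means=[0.0, 1.0]))  # prints 1
```

In the diffusion setting, the Gaussian log-likelihood would be replaced by a per-class likelihood (e.g., an ELBO) computed by conditioning the diffusion model on each candidate label.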
no code implementations • 3 Feb 2024 • Shitong Shao, Zhiqiang Shen, Linrui Gong, Huanran Chen, Xu Dai
We name this framework Knowledge Transfer with Flow Matching (FM-KT), which can be integrated with any form of metric-based distillation method (e.g., vanilla KD, DKD, PKD, and DIST) and a meta-encoder with any available architecture (e.g., CNN, MLP, and Transformer).
1 code implementation • 21 Sep 2023 • Yinpeng Dong, Huanran Chen, Jiawei Chen, Zhengwei Fang, Xiao Yang, Yichi Zhang, Yu Tian, Hang Su, Jun Zhu
By attacking white-box surrogate vision encoders or MLLMs, the generated adversarial examples can mislead Bard into outputting wrong image descriptions with a 22% success rate based solely on transferability.
1 code implementation • 21 Aug 2023 • Shuo Zhang, Ziruo Wang, Zikai Zhou, Huanran Chen
Deep neural networks are vulnerable to adversarial examples, posing a threat to the models' applications and raising security concerns.
1 code implementation • 7 Aug 2023 • Zikai Zhou, Shuo Zhang, Ziruo Wang, Huanran Chen
The success of deep learning is inseparable from normalization layers.
2 code implementations • 24 May 2023 • Huanran Chen, Yinpeng Dong, Zhengyi Wang, Xiao Yang, Chengqi Duan, Hang Su, Jun Zhu
Since our method does not require training on particular adversarial attacks, we demonstrate that it is more generalizable to defend against multiple unseen threats.
1 code implementation • 18 May 2023 • Shitong Shao, Xu Dai, Shouyi Yin, Lujun Li, Huanran Chen, Yang Hu
On CIFAR-10, we obtain an FID of 2.80 by sampling in 15 steps under one-session training, and a new state-of-the-art FID of 3.37 by sampling in one step with additional training.
2 code implementations • 16 Mar 2023 • Huanran Chen, Yichi Zhang, Yinpeng Dong, Xiao Yang, Hang Su, Jun Zhu
It is widely recognized that deep learning models lack robustness to adversarial examples.
no code implementations • 11 Dec 2022 • Shitong Shao, Huanran Chen, Zhen Huang, Linrui Gong, Shuai Wang, Xinxiao Wu
Specifically, we design a neural network-based data augmentation module with a prior bias, which learns magnitudes and probabilities to generate data samples that match the teacher's strengths and target the student's weaknesses.
1 code implementation • CVPR 2023 • Hao Huang, Ziyan Chen, Huanran Chen, Yongtao Wang, Kevin Zhang
Then, drawing an analogy between patch optimization and regular model optimization, we propose a series of self-ensemble approaches on the input data, the attacked model, and the adversarial patch to make efficient use of the limited information and prevent the patch from overfitting.
1 code implementation • 18 Sep 2022 • Huanran Chen, Shitong Shao, Ziyi Wang, Zirui Shang, Jin Chen, Xiaofeng Ji, Xinxiao Wu
Domain generalization aims to learn a model that generalizes well to an unseen test dataset, i.e., out-of-distribution data whose distribution differs from that of the training dataset.