Search Results for author: Huanran Chen

Found 12 papers, 10 papers with code

On the Duality Between Sharpness-Aware Minimization and Adversarial Training

1 code implementation • 23 Feb 2024 • Yihao Zhang, Hangzhou He, Jingyu Zhu, Huanran Chen, Yifei Wang, Zeming Wei

Instead of perturbing the samples, Sharpness-Aware Minimization (SAM) perturbs the model weights during training to find a flatter loss landscape and improve generalization.

Adversarial Robustness
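
The weight-perturbation step is easy to state concretely. Below is a minimal PyTorch sketch of the generic SAM update (ascend to the approximate worst-case weights within an L2 ball of radius rho, take the gradient there, then step from the original weights); it illustrates the standard algorithm, not this paper's specific training setup.

```python
import torch

def sam_step(model, base_opt, loss_fn, x, y, rho=0.05):
    # 1) Gradient at the current weights.
    loss_fn(model(x), y).backward()

    # 2) Ascend to the (approximate) worst-case weights in an L2 ball of radius rho.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(2) for g in grads]))
    perturbs = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbs.append((p, e))
    model.zero_grad()

    # 3) Gradient at the perturbed weights.
    loss_fn(model(x), y).backward()

    # 4) Restore the original weights and step with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in perturbs:
            p.sub_(e)
    base_opt.step()
    base_opt.zero_grad()
```

Here `base_opt` is any `torch.optim` optimizer constructed over `model.parameters()`; each call performs one SAM training step on the batch `(x, y)`.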

Precise Knowledge Transfer via Flow Matching

no code implementations • 3 Feb 2024 • Shitong Shao, Zhiqiang Shen, Linrui Gong, Huanran Chen, Xu Dai

We name this framework Knowledge Transfer with Flow Matching (FM-KT), which can be integrated with any metric-based distillation method (e.g., vanilla KD, DKD, PKD, and DIST) and a meta-encoder of any available architecture (e.g., CNN, MLP, and Transformer).

Transfer Learning
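
Flow matching itself reduces to a simple regression objective. The sketch below applies the generic conditional flow-matching loss to distillation: interpolate between student and teacher representations and regress the meta-encoder onto the straight-line velocity. The `meta_encoder(z, t)` signature and the use of plain MSE are assumptions for illustration, not FM-KT's exact formulation.

```python
import torch
import torch.nn.functional as F

def flow_matching_distill_loss(meta_encoder, z_student, z_teacher):
    # Sample a random time t in [0, 1] per example.
    t = torch.rand(z_student.size(0), 1, device=z_student.device)
    # Point on the straight-line path from the student to the teacher representation.
    z_t = (1 - t) * z_student + t * z_teacher
    # The velocity of that path is constant: teacher minus student.
    v_target = z_teacher - z_student
    # Assumed interface: meta_encoder maps (state, time) -> predicted velocity.
    v_pred = meta_encoder(z_t, t)
    # Any metric-based distillation loss could replace plain MSE here.
    return F.mse_loss(v_pred, v_target)
```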

How Robust is Google's Bard to Adversarial Image Attacks?

1 code implementation • 21 Sep 2023 • Yinpeng Dong, Huanran Chen, Jiawei Chen, Zhengwei Fang, Xiao Yang, Yichi Zhang, Yu Tian, Hang Su, Jun Zhu

By attacking white-box surrogate vision encoders or MLLMs, the generated adversarial examples can mislead Bard into outputting wrong image descriptions with a 22% success rate based solely on transferability.

Adversarial Robustness Chatbot +1
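
A common way to mount such a transfer attack is PGD against the surrogate's embedding space. The sketch below maximizes the embedding distance from the clean image under an L∞ budget; it is one plausible objective for a white-box surrogate encoder, not necessarily the exact loss used in the paper.

```python
import torch
import torch.nn.functional as F

def embedding_transfer_attack(encoder, x, steps=100, eps=16 / 255, alpha=1 / 255):
    # Clean embedding from the white-box surrogate vision encoder.
    with torch.no_grad():
        z_clean = encoder(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Push the adversarial embedding away from the clean one.
        loss = -F.cosine_similarity(encoder(x_adv), z_clean, dim=-1).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # L-inf projection
            x_adv = x_adv.clamp(0, 1).detach()        # stay a valid image
    return x_adv
```

The perturbed image is then handed to the black-box MLLM, relying purely on transferability.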

Enhancing Adversarial Attacks: The Similar Target Method

1 code implementation • 21 Aug 2023 • Shuo Zhang, Ziruo Wang, Zikai Zhou, Huanran Chen

Deep neural networks are vulnerable to adversarial examples, posing a threat to their applications and raising security concerns.

Adversarial Attack

Robust Classification via a Single Diffusion Model

2 code implementations • 24 May 2023 • Huanran Chen, Yinpeng Dong, Zhengyi Wang, Xiao Yang, Chengqi Duan, Hang Su, Jun Zhu

Since our method does not require training on particular adversarial attacks, we demonstrate that it generalizes better to defending against multiple unseen threats.

Adversarial Defense Adversarial Robustness +2
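
The defense turns a single diffusion model into a generative classifier. A rough, non-authoritative sketch of the underlying idea, with an assumed `denoiser(x_t, t, y)` interface and squared denoising error standing in for the conditional likelihood term:

```python
import torch

@torch.no_grad()
def diffusion_classify(denoiser, x, num_classes, timesteps, alphas_cumprod):
    # Score each class by its conditional denoising error, a proxy for -log p(x | y).
    errors = torch.zeros(num_classes, device=x.device)
    for y in range(num_classes):
        for t in timesteps:
            a_bar = alphas_cumprod[t]
            noise = torch.randn_like(x)
            # Forward-diffuse x to time t.
            x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * noise
            # Assumed interface: denoiser(x_t, t, y) predicts the injected noise.
            pred = denoiser(x_t, t, y)
            errors[y] += (pred - noise).pow(2).mean()
    # Lower conditional denoising error -> higher class likelihood.
    return (-errors).softmax(dim=0)
```

Because classification rests on the diffusion model's likelihoods rather than on attack-specific training, nothing here is tied to any particular threat model.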

Catch-Up Distillation: You Only Need to Train Once for Accelerating Sampling

1 code implementation • 18 May 2023 • Shitong Shao, Xu Dai, Shouyi Yin, Lujun Li, Huanran Chen, Yang Hu

On CIFAR-10, we obtain an FID of 2.80 by sampling in 15 steps under one-session training and the new state-of-the-art FID of 3.37 by sampling in one step with additional training.

Knowledge Distillation

Teaching What You Should Teach: A Data-Based Distillation Method

no code implementations • 11 Dec 2022 • Shitong Shao, Huanran Chen, Zhen Huang, Linrui Gong, Shuai Wang, Xinxiao Wu

To be specific, we design a neural network-based data augmentation module with a prior bias that learns augmentation magnitudes and probabilities to generate data samples matching the teacher's strengths and the student's weaknesses.

Data Augmentation Knowledge Distillation +1
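
The phrase "learning magnitudes and probabilities" suggests a differentiable augmentation whose parameters are trained against the teacher-student gap. A hypothetical minimal module; the additive-noise op and its soft application are illustrative stand-ins for the paper's actual augmentations:

```python
import torch
import torch.nn as nn

class LearnableAugment(nn.Module):
    """Differentiable augmentation with a learned probability and magnitude."""

    def __init__(self):
        super().__init__()
        self.logit_p = nn.Parameter(torch.zeros(1))   # application probability (pre-sigmoid)
        self.magnitude = nn.Parameter(torch.ones(1))  # augmentation strength

    def forward(self, x):
        p = torch.sigmoid(self.logit_p)
        # Illustrative op: additive noise scaled by the learned magnitude,
        # applied "softly" with weight p so the module stays differentiable.
        return x + p * self.magnitude * torch.randn_like(x)
```

A plausible training signal is to maximize the teacher-student confidence gap on the augmented batch, steering generated samples toward regions where the teacher is strong and the student is weak.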

T-SEA: Transfer-based Self-Ensemble Attack on Object Detection

1 code implementation • CVPR 2023 • Hao Huang, Ziyan Chen, Huanran Chen, Yongtao Wang, Kevin Zhang

Then, drawing an analogy between patch optimization and regular model optimization, we propose a series of self-ensemble approaches on the input data, the attacked model, and the adversarial patch to make efficient use of the limited information and prevent the patch from overfitting.

Adversarial Attack Model Optimization +2
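
The self-ensemble idea can be summarized as averaging the patch gradient over many cheap "views": augmented inputs, perturbed model variants, and randomly masked copies of the patch. A hypothetical sketch; `apply_patch`, the model-variant list, and the augmentation list are assumed placeholders, not the paper's exact procedure:

```python
import torch

def self_ensemble_patch_grad(models, augments, apply_patch, patch, x, y, loss_fn, drop_p=0.1):
    # Average the patch gradient over model variants, input augmentations,
    # and randomly masked ("cut out") copies of the patch.
    grad_sum = torch.zeros_like(patch)
    n = 0
    for model in models:        # e.g., the attacked detector with perturbed sub-modules
        for aug in augments:    # e.g., scaling / color jitter on the input
            p = patch.detach().clone().requires_grad_(True)
            mask = (torch.rand_like(p) > drop_p).float()  # random patch dropout
            loss = loss_fn(model(apply_patch(aug(x), p * mask)), y)
            grad_sum += torch.autograd.grad(loss, p)[0]
            n += 1
    return grad_sum / n
```

Averaging over these views plays the same regularizing role for the patch that ensembling and dropout play for regular model training.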

Bootstrap Generalization Ability from Loss Landscape Perspective

1 code implementation • 18 Sep 2022 • Huanran Chen, Shitong Shao, Ziyi Wang, Zirui Shang, Jin Chen, Xiaofeng Ji, Xinxiao Wu

Domain generalization aims to learn a model that generalizes well to an unseen test dataset, i.e., out-of-distribution data whose distribution differs from that of the training dataset.

Domain Generalization
