Search Results for author: Qizhang Li

Found 11 papers, 9 papers with code

DualAug: Exploiting Additional Heavy Augmentation with OOD Data Rejection

1 code implementation • 12 Oct 2023 • Zehao Wang, Yiwen Guo, Qizhang Li, Guanglei Yang, WangMeng Zuo

Most existing data augmentation methods tend to find a compromise in augmenting the data, i.e., carefully increasing the amplitude of augmentation to avoid degrading some data too much and harming model performance.

Data Augmentation • Image Classification • +1
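
A minimal sketch of what the title and excerpt suggest: keep a conservative "basic" augmentation branch, add a "heavy" branch, and reject heavily augmented samples that look out-of-distribution. The transform lists, the maximum-softmax-probability (MSP) score, and the threshold are illustrative assumptions, not the paper's exact criterion.

```python
import torch
import torchvision.transforms as T

# Hypothetical "basic" and "heavy" augmentation branches; the exact
# transforms and the CIFAR-style 32x32 input size are assumptions.
basic_aug = T.Compose([T.RandomHorizontalFlip(), T.RandomCrop(32, padding=4)])
heavy_aug = T.Compose([T.RandomHorizontalFlip(), T.RandomCrop(32, padding=4),
                       T.ColorJitter(0.8, 0.8, 0.8, 0.4), T.RandomErasing(p=1.0)])

@torch.no_grad()
def reject_ood(model, images, threshold=0.2):
    """Keep a heavily augmented sample only if the model still assigns it
    non-negligible maximum softmax probability; MSP stands in for whatever
    OOD rejection rule the paper actually uses."""
    probs = torch.softmax(model(images), dim=1)
    keep = probs.max(dim=1).values > threshold
    return images[keep]

def make_training_batch(model, images):
    """Combine the safe basic augmentations with the surviving heavy ones."""
    x_basic = torch.stack([basic_aug(x) for x in images])
    x_heavy = torch.stack([heavy_aug(x) for x in images])
    return torch.cat([x_basic, reject_ood(model, x_heavy)], dim=0)
```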

Improving Transferability of Adversarial Examples via Bayesian Attacks

no code implementations • 21 Jul 2023 • Qizhang Li, Yiwen Guo, Xiaochen Yang, WangMeng Zuo, Hao Chen

Our ICLR work advocated enhancing the transferability of adversarial examples by incorporating a Bayesian formulation into the model parameters, which effectively emulates an ensemble of infinitely many deep neural networks. In this paper, we introduce a novel extension that incorporates the Bayesian formulation into the model input as well, enabling joint diversification of both the model input and the model parameters.
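
A minimal sketch of the idea as described: approximate the Bayesian formulation with finite Gaussian samples over both the substitute's parameters and the input, averaging gradients inside an I-FGSM loop. The isotropic noise and all hyperparameter values are assumptions for illustration; dropping the input noise roughly recovers the parameter-only formulation of the ICLR work ("Making Substitute Models More Bayesian", listed below).

```python
import copy
import torch
import torch.nn.functional as F

def bayesian_transfer_attack(model, x, y, eps=8/255, alpha=2/255, steps=10,
                             n_samples=5, param_std=0.01, input_std=0.05):
    """I-FGSM whose gradient at each step is averaged over Gaussian
    perturbations of BOTH the substitute's parameters and the input:
    a finite-sample stand-in for the Bayesian formulation."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        grad = torch.zeros_like(x)
        for _ in range(n_samples):
            noisy = copy.deepcopy(model)   # sample parameters ~ N(theta, sigma^2 I)
            with torch.no_grad():
                for p in noisy.parameters():
                    p.add_(param_std * torch.randn_like(p))
            # diversify the input with Gaussian noise as well
            x_in = (x_adv + input_std * torch.randn_like(x_adv)).requires_grad_(True)
            loss = F.cross_entropy(noisy(x_in), y)
            grad += torch.autograd.grad(loss, x_in)[0]
        x_adv = x_adv + alpha * grad.sign()           # step on the averaged gradient
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()            # assumes inputs in [0, 1]
    return x_adv
```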

Improving Adversarial Transferability via Intermediate-level Perturbation Decay

2 code implementations • NeurIPS 2023 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen

In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to lie in an effective adversarial direction and, simultaneously, to possess a large magnitude.
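
The excerpt suggests an objective over a mid-layer feature map that couples direction and magnitude. Below is a generic intermediate-level surrogate in that spirit, not the exact ILPD update; `feat_clean`, `feat_adv`, and the directional guide are assumed to be captured with forward hooks during a baseline attack.

```python
import torch

def intermediate_level_loss(feat_adv, feat_clean, guide_dir):
    """Reward a LARGE mid-layer perturbation that stays ALIGNED with a
    directional guide (e.g., the feature shift produced by a baseline
    attack). Maximizing the projection <delta, guide> couples the two
    goals named in the abstract: effective direction and large magnitude."""
    delta = (feat_adv - feat_clean).flatten(1)        # intermediate-level perturbation
    guide = guide_dir.flatten(1)
    guide = guide / (guide.norm(dim=1, keepdim=True) + 1e-12)
    return (delta * guide).sum(dim=1).mean()          # projection onto the guide
```

In practice one would register a forward hook on a chosen layer of the substitute, run the clean and adversarial inputs through it, and ascend this loss with respect to the input.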

Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples

1 code implementation • 10 Feb 2023 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen

In this paper, by contrast, we opt for diversity in substitute models and advocate attacking a Bayesian model to achieve desirable transferability.

Squeeze Training for Adversarial Robustness

1 code implementation • 23 May 2022 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen

The vulnerability of deep neural networks (DNNs) to adversarial examples has attracted great attention in the machine learning community.

Adversarial Robustness

An Intermediate-level Attack Framework on The Basis of Linear Regression

1 code implementation • 21 Mar 2022 • Yiwen Guo, Qizhang Li, WangMeng Zuo, Hao Chen

This paper substantially extends our work published at ECCV, in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples.

regression
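
One plausible reading of the linear-regression framing: fit weights that map intermediate-level feature perturbations to the adversarial loss observed along a baseline attack's iterations, then use the fitted weights as a directional guide to maximize. The least-squares solve below is an illustrative stand-in for the paper's estimator.

```python
import torch

def fit_guide(feat_diffs, losses):
    """Least-squares fit of w with <w, delta_t> ~ loss_t, over feature
    perturbations delta_t and losses loss_t collected across a baseline
    attack's iterations. Typically T << D, so the system is
    underdetermined; lstsq returns one least-squares solution."""
    X = torch.stack([d.flatten() for d in feat_diffs])    # (T, D)
    y = torch.tensor(losses).unsqueeze(1)                 # (T, 1)
    return torch.linalg.lstsq(X, y).solution.squeeze(1)   # w: (D,)

def guide_loss(feat_adv, feat_clean, w):
    """Maximize the linear model's predicted adversarial loss."""
    return ((feat_adv - feat_clean).flatten(1) @ w).mean()
```

For realistic feature dimensions one would subsample coordinates or use an iterative solver; this direct solve is only meant to show the shape of the computation.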

On Steering Multi-Annotations per Sample for Multi-Task Learning

no code implementations • 6 Mar 2022 • Yuanze Li, Yiwen Guo, Qizhang Li, Hongzhi Zhang, WangMeng Zuo

Despite the remarkable progress, the challenge of optimally learning different tasks simultaneously remains underexplored.

Instance Segmentation • Multi-Task Learning • +2

Backpropagating Linearly Improves Transferability of Adversarial Examples

1 code implementation • NeurIPS 2020 • Yiwen Guo, Qizhang Li, Hao Chen

The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community.
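
The title points to the core trick: compute attack gradients as if the substitute's ReLUs were linear. A minimal sketch of that backward rule follows, under a simplified reading; the paper also rescales residual branches, which this sketch omits.

```python
import torch

class LinearBackpropReLU(torch.autograd.Function):
    """ReLU in the forward pass, identity in the backward pass, so
    gradients flow as if the network were linear."""
    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output   # skip the ReLU mask: backpropagate "linearly"

def linbp_relu(x):
    return LinearBackpropReLU.apply(x)
```

Swapping `F.relu` for `linbp_relu` in the substitute's forward pass (typically only from a chosen depth onward) leaves predictions unchanged while linearizing the backward pass used to craft adversarial examples.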

Practical No-box Adversarial Attacks against DNNs

2 code implementations • NeurIPS 2020 • Qizhang Li, Yiwen Guo, Hao Chen

We propose three mechanisms for training with a very small dataset (on the order of tens of examples) and find that prototypical reconstruction is the most effective.

Face Verification • Image Classification
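
One plausible reading of "prototypical reconstruction" in a no-box setting: with only tens of examples and no access to any victim model, train a tiny autoencoder to map each image toward its class prototype, then craft adversarial examples against this substitute. The architecture and the per-class mean-image prototype below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ProtoAE(nn.Module):
    """Tiny convolutional autoencoder mapping each image toward its CLASS
    PROTOTYPE; the paper's actual architecture and objective may differ."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def train_on_tiny_set(images, labels, epochs=200, lr=1e-3):
    """Train on the order of tens of examples: each image's reconstruction
    target is its per-class mean image (the 'prototype' here, an assumption)."""
    protos = {int(c): images[labels == c].mean(0) for c in labels.unique()}
    targets = torch.stack([protos[int(c)] for c in labels])
    model = ProtoAE()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((model(images) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    return model
```

Adversarial examples would then be crafted against this tiny substitute, e.g., by maximizing its reconstruction error, and transferred to the unseen victim model.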

Yet Another Intermediate-Level Attack

2 code implementations • ECCV 2020 • Qizhang Li, Yiwen Guo, Hao Chen

The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks.
