no code implementations • 23 May 2023 • Jianyu Zhao, Yuyang Rong, Yiwen Guo, Yifeng He, Hao Chen
The effectiveness of the proposed method is verified on two program understanding tasks, namely code clone detection and code classification, where it outperforms the current state of the art by large margins.
1 code implementation • 26 Apr 2023 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen
In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to lie in an effective adversarial direction and to possess a large magnitude simultaneously.
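For intuition, objectives of this flavor can be sketched as maximizing the projection of the intermediate-level feature shift onto a directional guide, which rewards direction and magnitude at once. The sketch below is a generic intermediate-level objective in this spirit, not ILPD's exact formulation; the guide (e.g., the feature shift produced by a baseline attack) is an assumption.

```python
import torch

def intermediate_level_objective(feat_clean, feat_adv, guide):
    """Generic intermediate-level attack objective (a sketch, not ILPD itself).

    feat_clean, feat_adv: intermediate-layer activations, shape (batch, d)
    guide: a directional guide in feature space, e.g., the feature shift of
           a baseline attack (an assumption here), shape (batch, d)
    """
    delta = feat_adv - feat_clean                       # intermediate-level perturbation
    direction = guide / guide.norm(dim=1, keepdim=True)
    # projection = cosine(delta, guide) * ||delta||: rewards an effective
    # adversarial direction and a large magnitude simultaneously
    return (delta * direction).sum(dim=1).mean()
```

One would maximize this objective with gradient steps on the input perturbation.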
1 code implementation • CVPR 2023 • Zeming Wei, Yifei Wang, Yiwen Guo, Yisen Wang
Adversarial training has been widely acknowledged as the most effective method for improving the robustness of deep neural networks (DNNs) against adversarial examples.
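As a point of reference, the common PGD form of adversarial training alternates an inner attack with an outer parameter update. Below is a minimal sketch, assuming image inputs in [0, 1]; the budget, step size, and step count are illustrative defaults, not values from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: L-infinity PGD from a random start."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update parameters on adversarial examples."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```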
1 code implementation • 10 Feb 2023 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen
In this paper, by contrast, we opt for diversity in substitute models and advocate attacking a Bayesian model to achieve desirable transferability.
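One simple way to instantiate this idea is to sample several sets of weights around a trained substitute and average the attack gradient over the samples. In the sketch below, Gaussian weight perturbation is a crude stand-in for posterior sampling, and the noise scale is an assumption; the paper's actual posterior construction may differ.

```python
import copy
import torch
import torch.nn.functional as F

def bayesian_attack_grad(model, x, y, n_samples=10, noise_std=0.01):
    """Average the input gradient over models sampled around the substitute."""
    x = x.detach().clone().requires_grad_(True)
    total_grad = torch.zeros_like(x)
    for _ in range(n_samples):
        sampled = copy.deepcopy(model)
        with torch.no_grad():
            for p in sampled.parameters():          # perturb every weight tensor
                p.add_(noise_std * torch.randn_like(p))
        loss = F.cross_entropy(sampled(x), y)
        total_grad += torch.autograd.grad(loss, x)[0]
    return total_grad / n_samples                   # feed into any gradient-based attack
```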
1 code implementation • CVPR 2023 • Zixian Guo, Bowen Dong, Zhilong Ji, Jinfeng Bai, Yiwen Guo, WangMeng Zuo
Nonetheless, visual data (e.g., images) is by default a prerequisite for learning prompts in existing methods.
1 code implementation • 14 Oct 2022 • Yichuan Mo, Dongxian Wu, Yifei Wang, Yiwen Guo, Yisen Wang
We find that when gradients from some attention blocks, or perturbations on some patches, are randomly masked during adversarial training, the adversarial robustness of ViTs can be remarkably improved; this may open up a line of work that explores the architectural information inside newly designed models such as ViTs.
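For illustration, the patch-level variant of this masking can be sketched as zeroing the perturbation on a random subset of patches before each attack step; the patch size and keep probability below are illustrative, not the paper's settings.

```python
import torch

def mask_patch_perturbation(delta, patch_size=16, keep_prob=0.5):
    """Randomly zero the adversarial perturbation on a subset of patches
    (a sketch of the patch-masking strategy described above)."""
    b, c, h, w = delta.shape
    gh, gw = h // patch_size, w // patch_size
    keep = (torch.rand(b, 1, gh, gw, device=delta.device) < keep_prob).float()
    # upsample the per-patch mask to pixel resolution
    mask = keep.repeat_interleave(patch_size, dim=2).repeat_interleave(patch_size, dim=3)
    return delta * mask
```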
1 code implementation • 23 May 2022 • Qizhang Li, Yiwen Guo, WangMeng Zuo, Hao Chen
The vulnerability of deep neural networks (DNNs) to adversarial examples has attracted great attention in the machine learning community.
1 code implementation • 21 Mar 2022 • Yiwen Guo, Qizhang Li, WangMeng Zuo, Hao Chen
This paper substantially extends our work published at ECCV, in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples.
no code implementations • 6 Mar 2022 • Yuanze Li, Yiwen Guo, Qizhang Li, Hongzhi Zhang, WangMeng Zuo
Despite remarkable progress, how to optimally learn different tasks simultaneously remains an open challenge.
no code implementations • 21 Dec 2021 • Ziang Li, Yiwen Guo, Haodi Liu, ChangShui Zhang
This paper serves as a complement to, and to some extent an extension of, Guo et al.'s paper, by providing theoretical analyses of LinBP in neural-network-involved learning tasks, including adversarial attack and model training.
no code implementations • 29 Sep 2021 • Ziang Li, Yiwen Guo, Haodi Liu, ChangShui Zhang
In this paper, we study the very recent method called "linear backpropagation" (LinBP), which modifies standard backpropagation and can improve transferability in black-box adversarial attacks.
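Concretely, LinBP keeps the forward pass intact but skips the ReLU derivative on the backward pass from a chosen layer onward, so gradients propagate as if that part of the network were linear. A minimal PyTorch sketch of such a ReLU (where in the network to apply it is left to the user):

```python
import torch

class LinBPReLU(torch.autograd.Function):
    """ReLU whose backward pass is the identity: the forward computation is
    unchanged, but gradients flow back as if the layer were linear."""

    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # skip the ReLU derivative

def linbp_relu(x):
    return LinBPReLU.apply(x)
```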
no code implementations • 29 Sep 2021 • Jiyu Chen, Yiwen Guo, Hao Chen
We demonstrate the effectiveness of our attacks through extensive evaluations on multiple common data transformations and comparisons with other state-of-the-art attacks.
no code implementations • NeurIPS 2021 • Zixiu Wang, Yiwen Guo, Hu Ding
In this paper, we propose a novel robust coreset method for continuous-and-bounded learning problems (with outliers), which include a broad range of popular optimization objectives in machine learning, e.g., logistic regression and k-means clustering.
1 code implementation • 25 Mar 2021 • Zhi Wang, Yiwen Guo, WangMeng Zuo
In this paper, we advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
no code implementations • 25 Mar 2021 • Yiwen Guo, ChangShui Zhang
This paper serves as a survey of recent advances in large margin training and its theoretical foundations, mostly for (nonlinear) deep neural networks (DNNs), which have arguably been the most prominent machine learning models for large-scale data over the past decade.
no code implementations • 1 Jan 2021 • Ziang Li, Kailun Wu, Yiwen Guo, ChangShui Zhang
The learned iterative shrinkage thresholding algorithm (LISTA) introduces deep unfolding models with learnable thresholds in the shrinkage function for sparse coding.
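For context, a LISTA layer unfolds one ISTA iteration with learnable weight matrices and, here, per-iteration learnable thresholds in the shrinkage (soft-thresholding) function. A minimal sketch, where the dimensions, depth, and threshold initialization are illustrative:

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    """Shrinkage function: sign(x) * max(|x| - theta, 0)."""
    return torch.sign(x) * torch.relu(x.abs() - theta)

class LISTA(nn.Module):
    """Unfolded ISTA with learnable weights and per-iteration thresholds."""

    def __init__(self, n_measure, n_code, n_iters):
        super().__init__()
        self.We = nn.Linear(n_measure, n_code, bias=False)  # encodes the measurement
        self.S = nn.Linear(n_code, n_code, bias=False)      # recurrent mixing matrix
        self.theta = nn.Parameter(torch.full((n_iters,), 0.1))
        self.n_iters = n_iters

    def forward(self, y):
        b = self.We(y)
        x = soft_threshold(b, self.theta[0])
        for k in range(1, self.n_iters):
            x = soft_threshold(b + self.S(x), self.theta[k])
        return x
```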
no code implementations • ICLR 2021 • Ziang Yan, Yiwen Guo, Jian Liang, ChangShui Zhang
To craft black-box adversarial examples, adversaries need to query the victim model and take proper advantage of its feedback.
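A standard way to exploit score feedback from queries is zeroth-order gradient estimation with antithetic Gaussian samples. The sketch below is that generic baseline estimator, not the query strategy proposed in the paper; the smoothing parameter and query budget are illustrative.

```python
import torch

def zeroth_order_grad(loss_fn, x, sigma=0.01, n_queries=50):
    """Estimate the input gradient from query feedback alone.

    loss_fn: queries the victim model and returns a scalar loss for an input.
    """
    grad = torch.zeros_like(x)
    for _ in range(n_queries // 2):
        u = torch.randn_like(x)
        # antithetic finite differences along a random direction
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) / (2 * sigma) * u
    return grad / (n_queries // 2)
```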
1 code implementation • NeurIPS 2020 • Yiwen Guo, Qizhang Li, Hao Chen
The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community.
2 code implementations • NeurIPS 2020 • Qizhang Li, Yiwen Guo, Hao Chen
We propose three mechanisms for training with a very small dataset (on the order of tens of examples) and find that prototypical reconstruction is the most effective.
no code implementations • 17 Oct 2020 • Yunchao Wei, Shuai Zheng, Ming-Ming Cheng, Hang Zhao, LiWei Wang, Errui Ding, Yi Yang, Antonio Torralba, Ting Liu, Guolei Sun, Wenguan Wang, Luc van Gool, Wonho Bae, Junhyug Noh, Jinhwan Seo, Gunhee Kim, Hao Zhao, Ming Lu, Anbang Yao, Yiwen Guo, Yurong Chen, Li Zhang, Chuangchuang Tan, Tao Ruan, Guanghua Gu, Shikui Wei, Yao Zhao, Mariia Dobko, Ostap Viniavskyi, Oles Dobosevych, Zhendong Wang, Zhenyuan Chen, Chen Gong, Huanqing Yan, Jun He
The purpose of the Learning from Imperfect Data (LID) workshop is to inspire and facilitate research on novel approaches that harness imperfect data and improve data efficiency during training.
1 code implementation • ECCV 2020 • Qizhang Li, Yiwen Guo, Hao Chen
The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks.
no code implementations • 4 Jul 2020 • Yiwen Guo, Long Chen, Yurong Chen, Chang-Shui Zhang
This paper analyzes regularization terms proposed recently for improving the adversarial robustness of deep neural networks (DNNs), from a theoretical point of view.
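One representative member of this family of regularizers penalizes the norm of the loss's input gradient, which encourages local flatness around each input. The sketch below shows that one instance under my own choice of weighting; it is not the only term the paper analyzes.

```python
import torch
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, lam=1.0):
    """Cross-entropy plus an input-gradient-norm penalty (lam is illustrative)."""
    x = x.detach().clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # create_graph=True keeps the penalty differentiable w.r.t. the parameters
    grad, = torch.autograd.grad(ce, x, create_graph=True)
    penalty = grad.flatten(1).norm(dim=1).pow(2).mean()
    return ce + lam * penalty
```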
1 code implementation • 10 Jun 2020 • Xiaochen Yang, Yiwen Guo, Mingzhi Dong, Jing-Hao Xue
Many existing methods maximize, or at least constrain, a distance margin in the feature space that separates similar from dissimilar pairs of instances, so as to guarantee their generalization ability.
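The classic contrastive form of such a distance-margin objective pulls similar pairs together and pushes dissimilar pairs beyond a margin. A minimal sketch (the margin value is illustrative, and this is the generic objective rather than the paper's proposal):

```python
import torch

def contrastive_margin_loss(f_a, f_b, is_similar, margin=1.0):
    """f_a, f_b: embeddings of a pair, shape (batch, d);
    is_similar: bool tensor of shape (batch,)."""
    d = (f_a - f_b).pow(2).sum(dim=1).clamp(min=1e-12).sqrt()
    loss = torch.where(is_similar,
                       d.pow(2),                        # pull similar pairs together
                       torch.relu(margin - d).pow(2))   # push dissimilar pairs apart
    return loss.mean()
```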
1 code implementation • ICLR 2020 • Kailun Wu, Yiwen Guo, Ziang Li, Chang-Shui Zhang
In this paper, we study the learned iterative shrinkage thresholding algorithm (LISTA) for solving sparse coding problems.
1 code implementation • NeurIPS 2019 • Jianlong Chang, Xinbang Zhang, Yiwen Guo, Gaofeng Meng, Shiming Xiang, Chunhong Pan
Neural architecture search (NAS) is inherently subject to the gap between the architectures used during searching and those used during validating.
1 code implementation • 14 Nov 2019 • Ziang Yan, Yiwen Guo, Chang-Shui Zhang
The tremendous recent success of deep neural networks (DNNs) has sparked a surge of interest in understanding their predictive ability.
2 code implementations • NeurIPS 2019 • Ziang Yan, Yiwen Guo, Chang-Shui Zhang
Unlike their white-box counterparts, which are widely studied and readily accessible, adversarial examples in black-box settings are generally harder to craft on account of the difficulty of estimating gradients.
no code implementations • 6 May 2019 • Jianlong Chang, Xinbang Zhang, Yiwen Guo, Gaofeng Meng, Shiming Xiang, Chunhong Pan
For network architecture search (NAS), it is crucial but challenging to simultaneously guarantee both effectiveness and efficiency.
no code implementations • 5 May 2019 • Jianlong Chang, Yiwen Guo, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, Chunhong Pan
Traditional clustering methods often perform clustering with low-level, indiscriminative representations and ignore relationships between patterns, resulting in limited achievements in the era of deep learning.
no code implementations • 19 Apr 2019 • Yiwen Guo, Ming Lu, WangMeng Zuo, Chang-Shui Zhang, Yurong Chen
Convolutional neural networks have been proven effective in a variety of image restoration tasks.
no code implementations • NeurIPS 2018 • Yiwen Guo, Chao Zhang, Chang-Shui Zhang, Yurong Chen
Deep neural networks (DNNs) are computationally/memory-intensive and vulnerable to adversarial attacks, making them impractical for some real-world applications.
1 code implementation • NeurIPS 2018 • Ziang Yan, Yiwen Guo, Chang-Shui Zhang
Despite the efficacy on a variety of computer vision tasks, deep neural networks (DNNs) are vulnerable to adversarial attacks, limiting their applications in security-critical systems.
no code implementations • CVPR 2017 • Hao Zhao, Ming Lu, Anbang Yao, Yiwen Guo, Yurong Chen, Li Zhang
In this paper, we propose an alternative method to estimate room layouts of cluttered indoor scenes.
no code implementations • CVPR 2017 • Yiwen Guo, Anbang Yao, Hao Zhao, Yurong Chen
Convolutional neural networks (CNNs) with deep architectures have substantially advanced the state-of-the-art in computer vision tasks.
3 code implementations • 10 Feb 2017 • Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
The weights in the other group are responsible for compensating for the accuracy loss caused by quantization, and thus they are the ones to be re-trained.
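The two-group scheme can be sketched as follows: one group is quantized to powers of two and frozen, while the complementary group stays full-precision and is re-trained. In this sketch the grouping criterion (largest magnitudes first) and the fraction quantized per step are assumptions; the paper's quantization levels and partitioning strategy may differ.

```python
import torch

def quantize_pow2(w):
    """Round weights to the nearest signed power of two (zeros stay zero)."""
    sign = w.sign()
    mag = w.abs().clamp(min=1e-12)
    return sign * torch.pow(2.0, torch.log2(mag).round())

def split_and_quantize(w, quant_frac=0.5):
    """Quantize and freeze the largest-magnitude fraction of weights;
    the rest remain full-precision for re-training."""
    k = max(1, int(w.numel() * quant_frac))
    threshold = w.abs().flatten().topk(k).values.min()
    frozen = (w.abs() >= threshold).float()
    w_mixed = frozen * quantize_pow2(w) + (1 - frozen) * w
    # during re-training, zero the gradients where frozen == 1
    return w_mixed, frozen
```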
4 code implementations • NeurIPS 2016 • Yiwen Guo, Anbang Yao, Yurong Chen
In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning.
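A masked-weight sketch of the idea: connections are pruned with a mask, but the underlying weights keep receiving gradient updates, so a mistakenly pruned connection that regrows can be spliced back in. The layer type and thresholds below are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurgeryLinear(nn.Linear):
    """Linear layer with on-the-fly connection pruning and splicing."""

    def __init__(self, in_features, out_features, prune_t=0.05, splice_t=0.10):
        super().__init__(in_features, out_features)
        self.register_buffer("mask", torch.ones_like(self.weight))
        self.prune_t, self.splice_t = prune_t, splice_t

    def update_mask(self):
        with torch.no_grad():
            self.mask[self.weight.abs() < self.prune_t] = 0.0   # prune weak connections
            self.mask[self.weight.abs() > self.splice_t] = 1.0  # splice strong ones back

    def forward(self, x):
        # straight-through trick: the value equals weight * mask, but gradients
        # flow to all weights, so pruned connections can recover
        w_eff = self.weight + (self.weight * self.mask - self.weight).detach()
        return F.linear(x, w_eff, self.bias)
```

Calling update_mask() periodically during training alternates pruning and splicing, which is what makes the surgery "dynamic" rather than a one-shot prune.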