1 code implementation • ECCV 2020 • Yanbo Fan, Baoyuan Wu, Tuanhui Li, Yong Zhang, Mingyang Li, Zhifeng Li, Yujiu Yang
Based on this factorization, we formulate the sparse attack problem as a mixed integer programming (MIP) to jointly optimize the binary selection factors and continuous perturbation magnitudes of all pixels, with a cardinality constraint on selection factors to explicitly control the degree of sparsity.
no code implementations • ECCV 2020 • Wei-Lun Chen, Zhao-Xiang Zhang, Xiaolin Hu, Baoyuan Wu
Decision-based black-box adversarial attacks (decision-based attack) pose a severe threat to current deep neural networks, as they only need the predicted label of the target model to craft adversarial examples.
no code implementations • 15 Mar 2023 • Guanghao Li, Wansen Wu, Yan Sun, Li Shen, Baoyuan Wu, DaCheng Tao
Then, the local model is trained on the input composed of raw data and a visual prompt to learn the distribution information contained in the prompt.
no code implementations • 19 Feb 2023 • Baoyuan Wu, Li Liu, Zihao Zhu, Qingshan Liu, Zhaofeng He, Siwei Lyu
Some paradigms have recently been developed to explore this adversarial phenomenon occurring at different stages of a machine learning system, such as training-time adversarial attack (i.e., backdoor attack), deployment-time adversarial attack (i.e., weight attack), and inference-time adversarial attack (i.e., adversarial example).
1 code implementation • 1 Jan 2023 • Fei Yin, Yong Zhang, Baoyuan Wu, Yan Feng, Jingyi Zhang, Yanbo Fan, Yujiu Yang
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget.
no code implementations • 3 Nov 2022 • Xingxing Wei, Bangzheng Pu, Jiefan Lu, Baoyuan Wu
The current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their different attack forms.
1 code implementation • 31 Oct 2022 • Longkang Li, Siyuan Liang, Zihao Zhu, Xiaochun Cao, Chris Ding, Hongyuan Zha, Baoyuan Wu
Compared to the state-of-the-art reinforcement learning method, our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
2 code implementations • 12 Oct 2022 • Zeyu Qin, Yanbo Fan, Yi Liu, Li Shen, Yong Zhang, Jue Wang, Baoyuan Wu
Furthermore, RAP can be naturally combined with many existing black-box attack techniques, to further boost the transferability.
no code implementations • 2 Oct 2022 • Jiancong Xiao, Zeyu Qin, Yanbo Fan, Baoyuan Wu, Jue Wang, Zhi-Quan Luo
Therefore, adversarial training for multiple perturbations (ATMP) is proposed to generalize the adversarial robustness over different perturbation types (in $\ell_1$, $\ell_2$, and $\ell_\infty$ norm-bounded perturbations).
no code implementations • 16 Sep 2022 • Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao
Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.
1 code implementation • 17 Aug 2022 • Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, Shu-Tao Xia
Existing attacks often insert some additional points into the point cloud as the trigger, or utilize a linear transformation (e.g., rotation) to construct the poisoned point cloud.
1 code implementation • 25 Jul 2022 • Jiawang Bai, Baoyuan Wu, Zhifeng Li, Shu-Tao Xia
Utilizing the latest technique in integer programming, we equivalently reformulate this MIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM) method.
1 code implementation • 18 Jul 2022 • Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
Based on the observation, we propose a prior-guided FGSM initialization method to avoid overfitting after investigating several initialization strategies, improving the quality of the AEs during the whole training process.
1 code implementation • 5 Jul 2022 • Longkang Li, Baoyuan Wu
Integer programming (IP) is an important and challenging problem.
1 code implementation • 25 Jun 2022 • Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Chao Shen
However, we find that the evaluations of new methods are often insufficiently thorough to verify their claims and actual performance, mainly due to rapid development, diverse settings, and the difficulties of implementation and reproducibility.
no code implementations • 20 May 2022 • Bingzhe Wu, Jintang Li, Junchi Yu, Yatao Bian, Hengtong Zhang, Chaochao Chen, Chengbin Hou, Guoji Fu, Liang Chen, Tingyang Xu, Yu Rong, Xiaolin Zheng, Junzhou Huang, Ran He, Baoyuan Wu, Guangyu Sun, Peng Cui, Zibin Zheng, Zhe Liu, Peilin Zhao
Deep graph learning has achieved remarkable progress in both business and scientific areas, ranging from finance and e-commerce to drug and advanced-material discovery.
1 code implementation • CVPR 2022 • Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
In this paper, we propose a novel framework for adversarial training by introducing the concept of "learnable attack strategy", dubbed LAS-AT, which learns to automatically produce attack strategies to improve the model robustness.
1 code implementation • 8 Mar 2022 • Fei Yin, Yong Zhang, Xiaodong Cun, Mingdeng Cao, Yanbo Fan, Xuan Wang, Qingyan Bai, Baoyuan Wu, Jue Wang, Yujiu Yang
Our framework elevates the resolution of the synthesized talking face to 1024×1024 for the first time, even though the training dataset has a lower resolution.
2 code implementations • ICLR 2022 • Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples.
no code implementations • ICCV 2021 • Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao
Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.
no code implementations • CVPR 2022 • Jiahao Wang, Baoyuan Wu, Rui Su, Mingdeng Cao, Shuwei Shi, Wanli Ouyang, Yujiu Yang
We conduct experiments both from a control theory lens through a phase locus verification and from a network training lens on several models, including CNNs, Transformers, MLPs, and on benchmark datasets.
no code implementations • 11 Oct 2021 • Xiaojun Jia, Yong Zhang, Baoyuan Wu, Jue Wang, Xiaochun Cao
Adversarial training (AT) has been demonstrated to be effective in improving model robustness by leveraging adversarial examples for training.
no code implementations • 20 Sep 2021 • Xin Zheng, Yanbo Fan, Baoyuan Wu, Yong Zhang, Jue Wang, Shirui Pan
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications.
no code implementations • 2 Sep 2021 • Chuanbiao Song, Yanbo Fan, Yichen Yang, Baoyuan Wu, Yiming Li, Zhifeng Li, Kun He
Adversarial training (AT) has been demonstrated as one of the most promising defense methods against various adversarial attacks.
1 code implementation • CVPR 2021 • Xunguang Wang, Zheng Zhang, Baoyuan Wu, Fumin Shen, Guangming Lu
However, deep hashing networks are vulnerable to adversarial examples, a practical security problem that has seldom been studied in the hashing-based retrieval field.
1 code implementation • NeurIPS 2021 • Zeyu Qin, Yanbo Fan, Hongyuan Zha, Baoyuan Wu
We conduct the theoretical analysis about the effectiveness of RND against query-based black-box attacks and the corresponding adaptive attacks.
no code implementations • 21 Apr 2021 • Yifan Xu, Kekai Sheng, WeiMing Dong, Baoyuan Wu, Changsheng Xu, Bao-Gang Hu
However, due to unpredictable corruptions (e.g., noise and blur) in real data like web images, domain adaptation methods are increasingly required to be corruption robust on target domains.
2 code implementations • 18 Apr 2021 • Weihao Xia, Yujiu Yang, Jing-Hao Xue, Baoyuan Wu
To be specific, we propose a brand new paradigm of text-guided image generation and manipulation based on the superior characteristics of a pretrained GAN model.
Ranked #4 on Text-to-Image Generation on Multi-Modal-CelebA-HQ
1 code implementation • CVPR 2021 • Gengcong Yang, Jingyi Zhang, Yong Zhang, Baoyuan Wu, Yujiu Yang
The ambiguity naturally leads to the issue of \emph{implicit multi-label}, motivating the need for diverse predictions.
2 code implementations • ICLR 2021 • Jiawang Bai, Baoyuan Wu, Yong Zhang, Yiming Li, Zhifeng Li, Shu-Tao Xia
By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM) method.
1 code implementation • ICLR 2021 • Jindong Gu, Baoyuan Wu, Volker Tresp
As alternatives to CNNs, the recently proposed Capsule Networks (CapsNets) are shown to be more robust to white-box attacks than CNNs under popular attack protocols.
no code implementations • ICCV 2021 • Weiwei Feng, Baoyuan Wu, Tianzhu Zhang, Yong Zhang, Yongdong Zhang
To tackle these issues, we propose a class-agnostic and model-agnostic physical adversarial attack model (Meta-Attack), which is able to not only generate robust physical adversarial examples by simulating color and shape distortions, but also generalize to attacking novel images and novel DNN models by accessing a few digital and physical images.
2 code implementations • CVPR 2021 • Weihao Xia, Yujiu Yang, Jing-Hao Xue, Baoyuan Wu
In this work, we propose TediGAN, a novel framework for multi-modal image generation and manipulation with textual descriptions.
Ranked #5 on Text-to-Image Generation on Multi-Modal-CelebA-HQ
no code implementations • 9 Nov 2020 • Jingyi Zhang, Yong Zhang, Baoyuan Wu, Yanbo Fan, Fumin Shen, Heng Tao Shen
We propose to incorporate the prior about the co-occurrence of relation pairs into the graph to further help alleviate the class imbalance issue.
no code implementations • 4 Nov 2020 • Ruisong Zhang, Weize Quan, Baoyuan Wu, Zhifeng Li, Dong-Ming Yan
Recent GAN-based image inpainting approaches adopt an average strategy to discriminate the generated image and output a scalar, which inevitably loses the position information of visual artifacts.
1 code implementation • 22 Oct 2020 • Tongqing Zhai, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, Shu-Tao Xia
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
no code implementations • 19 Oct 2020 • Ruixin Xiao, Zhilei Liu, Baoyuan Wu
With the supervision from source domain only in class-level, existing unsupervised domain adaptation (UDA) methods mainly learn the domain-invariant representations from a shared feature extractor, which causes the source-bias problem.
1 code implementation • 12 Oct 2020 • Yiming Li, Ziqi Zhang, Jiawang Bai, Baoyuan Wu, Yong Jiang, Shu-Tao Xia
Based on the proposed backdoor-based watermarking, we use a hypothesis-test-guided method for dataset verification, based on the posterior probabilities that the suspicious third-party model produces for the target class on benign samples and their correspondingly watermarked samples (i.e., images with the trigger).
no code implementations • ECCV 2020 • Junbing Li, Changqing Zhang, Pengfei Zhu, Baoyuan Wu, Lei Chen, QinGhua Hu
Although significant progress has been achieved, multi-label classification remains challenging due to the complexity of correlations among different labels.
1 code implementation • CVPR 2022 • Yan Feng, Baoyuan Wu, Yanbo Fan, Li Liu, Zhifeng Li, Shutao Xia
This work studies black-box adversarial attacks against deep neural networks (DNNs), where the attacker can only access the query feedback returned by the attacked DNN model, while other information, such as the model parameters or the training dataset, is unknown.
no code implementations • 12 May 2020 • Chengcheng Ma, Baoyuan Wu, Shibiao Xu, Yanbo Fan, Yong Zhang, Xiaopeng Zhang, Zhifeng Li
In this work, we study the detection of adversarial examples, based on the assumption that the output and internal responses of one DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance).
no code implementations • 9 Apr 2020 • Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shu-Tao Xia
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs), such that the prediction of the infected model is maliciously changed if the hidden backdoor is activated by the attacker-defined trigger, while the model performs well on benign samples.
1 code implementation • 16 Mar 2020 • Yiming Li, Baoyuan Wu, Yan Feng, Yanbo Fan, Yong Jiang, Zhifeng Li, Shu-Tao Xia
In this work, we propose a novel defense method, robust training (RT), which jointly minimizes two separate risks ($R_{stand}$ and $R_{rob}$), defined with respect to the benign example and its neighborhood, respectively.
no code implementations • 29 Feb 2020 • Zhilei Liu, Yunpeng Wu, Le Li, Cuicui Zhang, Baoyuan Wu
This paper proposes a multi-scale feature graph generative adversarial network (MFG-GAN) to implement the face restoration of images in which both degradation modes coexist, and also to repair images with a single type of degradation.
no code implementations • 26 Feb 2020 • Yong Zhang, Le Li, Zhilei Liu, Baoyuan Wu, Yanbo Fan, Zhifeng Li
Most of the existing methods train models for a one-versus-one kin relation, considering only one parent face and one child face, and directly use an auto-encoder without any explicit control over the resemblance of the synthesized face to the parent face.
no code implementations • 21 Jun 2019 • Yuezun Li, Xin Yang, Baoyuan Wu, Siwei Lyu
Recent years have seen fast development in synthesizing realistic human faces using AI technologies.
1 code implementation • CVPR 2019 • Yan Xu, Baoyuan Wu, Fumin Shen, Yanbo Fan, Yong Zhang, Heng Tao Shen, Wei Liu
Due to the sequential dependencies among words in a caption, we formulate the generation of adversarial noises for targeted partial captions as a structured output learning problem with latent variables.
1 code implementation • 9 May 2019 • Baoyuan Wu, Li Shen, Tong Zhang, Bernard Ghanem
Thus, LS-LP is equivalent to the original MAP inference problem.
1 code implementation • CVPR 2019 • Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu
In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.
no code implementations • CVPR 2019 • Xin Li, Chao Ma, Baoyuan Wu, Zhenyu He, Ming-Hsuan Yang
Despite demonstrated successes for numerous vision tasks, the contributions of using pre-trained deep features for visual tracking are not as significant as those for object recognition.
1 code implementation • 7 Jan 2019 • Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, Tong Zhang
In this work, we propose to train CNNs from images annotated with multiple tags, to enhance the quality of visual representation of the trained CNN model.
6 code implementations • CVPR 2019 • Kaihua Tang, Hanwang Zhang, Baoyuan Wu, Wenhan Luo, Wei Liu
We propose to compose dynamic tree structures that place the objects in an image into a visual context, helping visual reasoning tasks such as scene graph generation and visual Q&A.
Ranked #3 on Panoptic Scene Graph Generation on PSG Dataset
1 code implementation • 4 Nov 2018 • Zechun Liu, Wenhan Luo, Baoyuan Wu, Xin Yang, Wei Liu, Kwang-Ting Cheng
To address the training difficulty, we propose a training algorithm using a tighter approximation to the derivative of the sign function, a magnitude-aware gradient for weight updating, a better initialization method, and a two-step scheme for training a deep network.
4 code implementations • ECCV 2018 • Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, Kwang-Ting Cheng
In this work, we study the 1-bit convolutional neural networks (CNNs), of which both the weights and activations are binary.
no code implementations • 31 Mar 2018 • Baoyuan Wu, Fan Jia, Wei Liu, Bernard Ghanem, Siwei Lyu
This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels.
no code implementations • CVPR 2018 • Baoyuan Wu, Weidong Chen, Peng Sun, Wei Liu, Bernard Ghanem, Siwei Lyu
In D2IA, we generate a relevant and distinct tag subset, in which the tags are relevant to the image contents and semantically distinct from each other, using sequential sampling from a determinantal point process (DPP) model.
no code implementations • CVPR 2018 • Linchao Bao, Baoyuan Wu, Wei Liu
With temporal dependencies established by optical flow, the resulting MRF model combines both spatial and temporal cues for tackling video object segmentation.
Ranked #3 on Semi-Supervised Video Object Segmentation on YouTube
no code implementations • 24 Mar 2018 • Tim Tsz-Kit Lau, Jinshan Zeng, Baoyuan Wu, Yuan Yao
Training deep neural networks (DNNs) efficiently is a challenge due to the associated highly nonconvex optimization.
no code implementations • CVPR 2017 • Baoyuan Wu, Fan Jia, Wei Liu, Bernard Ghanem
To this end, we treat the image annotation as a subset selection problem based on the conditional determinantal point process (DPP) model, which formulates the representation and diversity jointly.
no code implementations • 26 Apr 2016 • Baoyuan Wu, Bernard Ghanem
This paper revisits the integer programming (IP) problem, which plays a fundamental role in many computer vision and machine learning applications.
no code implementations • ICCV 2015 • Baoyuan Wu, Siwei Lyu, Bernard Ghanem
This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels (i.e., some of their labels are missing).
no code implementations • CVPR 2013 • Baoyuan Wu, Yifan Zhang, Bao-Gang Hu, Qiang Ji
As a result, many pairwise constraints between faces can be easily obtained from the temporal and spatial knowledge of the face tracks.