Search Results for author: Baoyuan Wu

Found 62 papers, 30 papers with code

Sparse Adversarial Attack via Perturbation Factorization

1 code implementation ECCV 2020 Yanbo Fan, Baoyuan Wu, Tuanhui Li, Yong Zhang, Mingyang Li, Zhifeng Li, Yujiu Yang

Based on this factorization, we formulate the sparse attack problem as a mixed integer programming (MIP) to jointly optimize the binary selection factors and continuous perturbation magnitudes of all pixels, with a cardinality constraint on selection factors to explicitly control the degree of sparsity.

Adversarial Attack
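The factorization described in the abstract lends itself to a compact sketch; the helper below uses a simple top-k selection in place of the paper's MIP solver, and all names are illustrative:

```python
import numpy as np

def factorized_perturbation(magnitudes, scores, k):
    """Compose a sparse perturbation as (binary selection mask) x
    (continuous magnitudes), keeping only the k highest-scoring pixels.
    The top-k heuristic stands in for the paper's MIP optimization."""
    mask = np.zeros(scores.size)
    mask[np.argsort(scores.ravel())[-k:]] = 1.0  # cardinality: exactly k ones
    return magnitudes * mask.reshape(scores.shape)

rng = np.random.default_rng(0)
g = rng.normal(size=(8, 8))                 # stand-in continuous magnitudes
delta = factorized_perturbation(g, np.abs(g), k=5)
print(np.count_nonzero(delta))              # 5 perturbed pixels
```

The point of the factorization is that sparsity (the mask) and magnitude are controlled separately, which is what lets the cardinality constraint bound the number of perturbed pixels exactly.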

Boosting Decision-based Black-box Adversarial Attacks with Random Sign Flip

no code implementations ECCV 2020 Wei-Lun Chen, Zhao-Xiang Zhang, Xiaolin Hu, Baoyuan Wu

Decision-based black-box adversarial attacks (decision-based attack) pose a severe threat to current deep neural networks, as they only need the predicted label of the target model to craft adversarial examples.
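As a rough illustration of decision-based search with sign flips (a simplified toy, not the paper's exact Random Sign Flip algorithm), a candidate perturbation is kept only when the hard-label oracle still accepts it:

```python
import numpy as np

def sign_flip_step(delta, is_adversarial, flip_frac=0.1, rng=None):
    """Flip the signs of a random subset of coordinates; keep the
    candidate only if the hard-label feedback stays adversarial."""
    rng = rng or np.random.default_rng()
    flip = rng.random(delta.shape) < flip_frac
    candidate = np.where(flip, -delta, delta)
    return candidate if is_adversarial(candidate) else delta

# Toy oracle standing in for the target model's hard-label feedback.
oracle = lambda d: d.sum() > 5.0
rng = np.random.default_rng(1)
delta = np.full(100, 0.1)                   # starts adversarial (sum ~ 10)
for _ in range(200):
    delta = sign_flip_step(delta, oracle, rng=rng)
```

Because a candidate is accepted only when the oracle still reports "adversarial", the loop preserves attack success while exploring cheaper perturbations.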

Visual Prompt Based Personalized Federated Learning

no code implementations 15 Mar 2023 Guanghao Li, Wansen Wu, Yan Sun, Li Shen, Baoyuan Wu, Dacheng Tao

Then, the local model is trained on the input composed of raw data and a visual prompt to learn the distribution information contained in the prompt.

Image Classification Personalized Federated Learning

Adversarial Machine Learning: A Systematic Survey of Backdoor Attack, Weight Attack and Adversarial Example

no code implementations 19 Feb 2023 Baoyuan Wu, Li Liu, Zihao Zhu, Qingshan Liu, Zhaofeng He, Siwei Lyu

Some paradigms have been recently developed to explore this adversarial phenomenon occurring at different stages of a machine learning system, such as training-time adversarial attack (i.e., backdoor attack), deployment-time adversarial attack (i.e., weight attack), and inference-time adversarial attack (i.e., adversarial example).

Backdoor Attack

Generalizable Black-Box Adversarial Attack with Meta Learning

1 code implementation 1 Jan 2023 Fei Yin, Yong Zhang, Baoyuan Wu, Yan Feng, Jingyi Zhang, Yanbo Fan, Yujiu Yang

In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget.

Adversarial Attack Meta-Learning
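The query-feedback loop described above can be sketched generically; this random-search scaffold is our illustration of the black-box setting, not the paper's meta-learned attacker:

```python
import numpy as np

def query_attack(loss, x, eps=0.1, budget=300, rng=None):
    """Query-based black-box loop: propose a small random change to the
    perturbation, spend one query on the model's loss feedback, and keep
    the proposal only if the loss improves, within a fixed budget."""
    rng = rng or np.random.default_rng(0)
    delta = np.zeros_like(x)
    best = loss(x + delta)
    for _ in range(budget):
        cand = np.clip(delta + rng.normal(scale=0.02, size=x.shape), -eps, eps)
        val = loss(x + cand)                 # one query to the target model
        if val < best:
            best, delta = val, cand
    return delta

x = np.zeros(5)
margin = lambda z: float(np.abs(z - 0.05).sum())  # toy attack objective
delta = query_attack(margin, x)
```

Each accepted proposal strictly lowers the queried loss, so the returned perturbation is never worse than the starting point and always stays inside the eps-ball.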

Visually Adversarial Attacks and Defenses in the Physical World: A Survey

no code implementations 3 Nov 2022 Xingxing Wei, Bangzheng Pu, Jiefan Lu, Baoyuan Wu

The current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their different attack forms.

Adversarial Robustness

Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning

1 code implementation 31 Oct 2022 Longkang Li, Siyuan Liang, Zihao Zhu, Xiaochun Cao, Chris Ding, Hongyuan Zha, Baoyuan Wu

Compared to the state-of-the-art reinforcement learning method, our model's network parameters are reduced to only 37% of theirs, and the gap between our model's solutions and the expert solutions decreases from 6.8% to 1.3% on average.

Imitation Learning reinforcement-learning +2

Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation

2 code implementations 12 Oct 2022 Zeyu Qin, Yanbo Fan, Yi Liu, Li Shen, Yong Zhang, Jue Wang, Baoyuan Wu

Furthermore, RAP can be naturally combined with many existing black-box attack techniques, to further boost the transferability.

Adversarial Attack

Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis

no code implementations 2 Oct 2022 Jiancong Xiao, Zeyu Qin, Yanbo Fan, Baoyuan Wu, Jue Wang, Zhi-Quan Luo

Therefore, adversarial training for multiple perturbations (ATMP) is proposed to generalize the adversarial robustness over different perturbation types (in $\ell_1$, $\ell_2$, and $\ell_\infty$ norm-bounded perturbations).

Adversarial Robustness

A Large-scale Multiple-objective Method for Black-box Attack against Object Detection

no code implementations 16 Sep 2022 Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao

Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.

Object Detection

Imperceptible and Robust Backdoor Attack in 3D Point Cloud

1 code implementation 17 Aug 2022 Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, Shu-Tao Xia

Existing attacks often insert some additional points into the point cloud as the trigger, or utilize a linear transformation (e.g., rotation) to construct the poisoned point cloud.

Backdoor Attack

Versatile Weight Attack via Flipping Limited Bits

1 code implementation 25 Jul 2022 Jiawang Bai, Baoyuan Wu, Zhifeng Li, Shu-Tao Xia

Utilizing the latest technique in integer programming, we equivalently reformulate this MIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM) method.

Backdoor Attack
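The underlying weight-attack primitive, flipping individual bits of stored quantized weights, fits in a few lines. Choosing *which* bits to flip is what the paper's integer program optimizes; in this sketch the index and bit are simply given:

```python
import numpy as np

def flip_bit(weights, index, bit):
    """Flip a single bit of one stored int8 weight. This models the
    threat (a handful of bit flips in memory), not the bit selection."""
    attacked = weights.copy()
    byte = attacked[index:index + 1].view(np.uint8)  # reinterpret the byte
    byte ^= np.uint8(1 << bit)
    return attacked

w = np.array([36, -7], dtype=np.int8)
w_attacked = flip_bit(w, index=0, bit=7)  # flipping the top bit: 36 -> -92
```

Flipping the most significant bit of a two's-complement weight changes its sign and magnitude drastically, which is why very few flips can redirect a network's predictions.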

Prior-Guided Adversarial Initialization for Fast Adversarial Training

1 code implementation 18 Jul 2022 Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

Based on the observation, we propose a prior-guided FGSM initialization method to avoid overfitting after investigating several initialization strategies, improving the quality of the AEs during the whole training process.
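A minimal sketch of FGSM-based fast adversarial training started from a prior (non-zero) initialization, with a toy gradient of our own; this illustrates the initialization idea, not the paper's full prior-guided method:

```python
import numpy as np

def fgsm_from_prior(grad_fn, x, prior, eps=0.03):
    """One FGSM step started from a prior perturbation (e.g., carried
    over from earlier iterations) instead of zero or random noise,
    then projected back to the eps-ball."""
    delta = prior + eps * np.sign(grad_fn(x + prior))
    return np.clip(delta, -eps, eps)

grad_fn = lambda z: z + 1.0                 # toy input-gradient
x = np.zeros(4)
delta = fgsm_from_prior(grad_fn, x, prior=np.zeros(4))
```

Starting from an informed prior rather than a fresh initialization is the observation-driven change: the single FGSM step then refines an already-adversarial direction instead of rediscovering one.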

BackdoorBench: A Comprehensive Benchmark of Backdoor Learning

1 code implementation 25 Jun 2022 Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Chao Shen

However, we find that the evaluations of new methods are often not thorough enough to verify their claims and actual performance, mainly due to rapid development, diverse settings, and the difficulty of implementation and reproducibility.

Backdoor Attack

LAS-AT: Adversarial Training with Learnable Attack Strategy

1 code implementation CVPR 2022 Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

In this paper, we propose a novel framework for adversarial training by introducing the concept of "learnable attack strategy", dubbed LAS-AT, which learns to automatically produce attack strategies to improve the model robustness.

StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN

1 code implementation 8 Mar 2022 Fei Yin, Yong Zhang, Xiaodong Cun, Mingdeng Cao, Yanbo Fan, Xuan Wang, Qingyan Bai, Baoyuan Wu, Jue Wang, Yujiu Yang

Our framework elevates the resolution of the synthesized talking face to 1024×1024 for the first time, even though the training dataset has a lower resolution.

Facial Editing Talking Face Generation +1

Backdoor Defense via Decoupling the Training Process

2 code implementations ICLR 2022 Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren

Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples.

Self-Supervised Learning

Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection

no code implementations ICCV 2021 Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao

Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.

Autonomous Driving Image Classification +2

Accelerating Neural Network Optimization Through an Automated Control Theory Lens

no code implementations CVPR 2022 Jiahao Wang, Baoyuan Wu, Rui Su, Mingdeng Cao, Shuwei Shi, Wanli Ouyang, Yujiu Yang

We conduct experiments both from a control theory lens through a phase locus verification and from a network training lens on several models, including CNNs, Transformers, MLPs, and on benchmark datasets.

Boosting Fast Adversarial Training with Learnable Adversarial Initialization

no code implementations 11 Oct 2021 Xiaojun Jia, Yong Zhang, Baoyuan Wu, Jue Wang, Xiaochun Cao

Adversarial training (AT) has been demonstrated to be effective in improving model robustness by leveraging adversarial examples for training.

Robust Physical-World Attacks on Face Recognition

no code implementations 20 Sep 2021 Xin Zheng, Yanbo Fan, Baoyuan Wu, Yong Zhang, Jue Wang, Shirui Pan

Face recognition has been greatly facilitated by the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications.

Adversarial Attack Adversarial Robustness +1

Regional Adversarial Training for Better Robust Generalization

no code implementations 2 Sep 2021 Chuanbiao Song, Yanbo Fan, Yichen Yang, Baoyuan Wu, Yiming Li, Zhifeng Li, Kun He

Adversarial training (AT) has been demonstrated as one of the most promising defense methods against various adversarial attacks.

Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing

1 code implementation CVPR 2021 Xunguang Wang, Zheng Zhang, Baoyuan Wu, Fumin Shen, Guangming Lu

However, deep hashing networks are vulnerable to adversarial examples, which is a practical security problem that has seldom been studied in the hashing-based retrieval field.

Image Retrieval Representation Learning +1

Random Noise Defense Against Query-Based Black-Box Attacks

1 code implementation NeurIPS 2021 Zeyu Qin, Yanbo Fan, Hongyuan Zha, Baoyuan Wu

We conduct the theoretical analysis about the effectiveness of RND against query-based black-box attacks and the corresponding adaptive attacks.

Adversarial Robustness
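The defense itself is essentially a one-liner at inference time; a hedged sketch, with `model` standing in for any classifier callable:

```python
import numpy as np

def rnd_predict(model, x, sigma=0.02, rng=None):
    """Random Noise Defense (sketch): add light Gaussian noise to every
    incoming query before inference, so a query-based attacker's
    repeated probes receive randomized feedback."""
    rng = rng or np.random.default_rng()
    return model(x + rng.normal(scale=sigma, size=x.shape))

scores = rnd_predict(lambda z: z.sum(), np.ones(4), sigma=0.0)  # noise off
```

The theoretical question the paper studies is how the noise scale `sigma` trades off clean accuracy against the degradation of the attacker's gradient-estimation and search signals, including against adaptive attacks.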

Towards Corruption-Agnostic Robust Domain Adaptation

no code implementations 21 Apr 2021 Yifan Xu, Kekai Sheng, WeiMing Dong, Baoyuan Wu, Changsheng Xu, Bao-Gang Hu

However, due to unpredictable corruptions (e.g., noise and blur) in real data like web images, domain adaptation methods are increasingly required to be corruption robust on target domains.

Domain Adaptation

Towards Open-World Text-Guided Face Image Generation and Manipulation

2 code implementations 18 Apr 2021 Weihao Xia, Yujiu Yang, Jing-Hao Xue, Baoyuan Wu

To be specific, we propose a brand new paradigm of text-guided image generation and manipulation based on the superior characteristics of a pretrained GAN model.

Language Modelling Semantic Segmentation +1

Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits

2 code implementations ICLR 2021 Jiawang Bai, Baoyuan Wu, Yong Zhang, Yiming Li, Zhifeng Li, Shu-Tao Xia

By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM) method.

Backdoor Attack

Effective and Efficient Vote Attack on Capsule Networks

1 code implementation ICLR 2021 Jindong Gu, Baoyuan Wu, Volker Tresp

As alternatives to CNNs, the recently proposed Capsule Networks (CapsNets) are shown to be more robust to white-box attacks than CNNs under popular attack protocols.

Adversarial Robustness

Meta-Attack: Class-Agnostic and Model-Agnostic Physical Adversarial Attack

no code implementations ICCV 2021 Weiwei Feng, Baoyuan Wu, Tianzhu Zhang, Yong Zhang, Yongdong Zhang

To tackle these issues, we propose a class-agnostic and model-agnostic physical adversarial attack model (Meta-Attack), which is able to not only generate robust physical adversarial examples by simulating color and shape distortions, but also generalize to attacking novel images and novel DNN models by accessing a few digital and physical images.

Adversarial Attack Few-Shot Learning

Dual ResGCN for Balanced Scene Graph Generation

no code implementations 9 Nov 2020 Jingyi Zhang, Yong Zhang, Baoyuan Wu, Yanbo Fan, Fumin Shen, Heng Tao Shen

We propose to incorporate the prior about the co-occurrence of relation pairs into the graph to further help alleviate the class imbalance issue.

Graph Generation Scene Graph Generation

Pixel-wise Dense Detector for Image Inpainting

no code implementations 4 Nov 2020 Ruisong Zhang, Weize Quan, Baoyuan Wu, Zhifeng Li, Dong-Ming Yan

Recent GAN-based image inpainting approaches adopt an average strategy to discriminate the generated image and output a scalar, which inevitably loses the position information of visual artifacts.

Image Inpainting Weakly-supervised Learning

Backdoor Attack against Speaker Verification

1 code implementation 22 Oct 2020 Tongqing Zhai, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, Shu-Tao Xia

We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.

Backdoor Attack Speaker Verification

Teacher-Student Competition for Unsupervised Domain Adaptation

no code implementations 19 Oct 2020 Ruixin Xiao, Zhilei Liu, Baoyuan Wu

With the supervision from source domain only in class-level, existing unsupervised domain adaptation (UDA) methods mainly learn the domain-invariant representations from a shared feature extractor, which causes the source-bias problem.

Unsupervised Domain Adaptation

Open-sourced Dataset Protection via Backdoor Watermarking

1 code implementation 12 Oct 2020 Yiming Li, Ziqi Zhang, Jiawang Bai, Baoyuan Wu, Yong Jiang, Shu-Tao Xia

Based on the proposed backdoor-based watermarking, we use a hypothesis-test-guided method for dataset verification, based on the posterior probabilities on the target class that the suspicious third-party model produces for benign samples and their correspondingly watermarked samples (i.e., images with the trigger).

Image Classification
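The verification idea, comparing target-class posteriors on benign versus watermarked samples, can be sketched with a paired z-like statistic; this is our simplification with toy numbers, not the paper's exact hypothesis test:

```python
import numpy as np

def watermark_evidence(p_benign, p_marked):
    """Paired statistic comparing the suspicious model's target-class
    posteriors on benign samples vs. their trigger-stamped counterparts;
    large positive values suggest the model was trained on the
    watermarked dataset."""
    d = np.asarray(p_marked, dtype=float) - np.asarray(p_benign, dtype=float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(d.size)))

# Toy posteriors: the model assigns the target class far more probability
# once the trigger is present, as a backdoored model would.
stat = watermark_evidence([0.10, 0.20, 0.15, 0.10], [0.90, 0.80, 0.95, 0.85])
```

A model never exposed to the watermark should show no systematic posterior shift, so the statistic stays near zero and the null hypothesis survives.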

SPL-MLL: Selecting Predictable Landmarks for Multi-Label Learning

no code implementations ECCV 2020 Junbing Li, Changqing Zhang, Pengfei Zhu, Baoyuan Wu, Lei Chen, QinGhua Hu

Although significant progress achieved, multi-label classification is still challenging due to the complexity of correlations among different labels.

General Classification Multi-Label Classification +1

Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution

1 code implementation CVPR 2022 Yan Feng, Baoyuan Wu, Yanbo Fan, Li Liu, Zhifeng Li, Shutao Xia

This work studies black-box adversarial attacks against deep neural networks (DNNs), where the attacker can only access the query feedback returned by the attacked DNN model, while other information such as model parameters or the training datasets are unknown.

Adversarial Attack

Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients

no code implementations 12 May 2020 Chengcheng Ma, Baoyuan Wu, Shibiao Xu, Yanbo Fan, Yong Zhang, Xiaopeng Zhang, Zhifeng Li

In this work, we study the detection of adversarial examples, based on the assumption that the output and internal responses of one DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance).

Image Classification

Rethinking the Trigger of Backdoor Attack

no code implementations 9 Apr 2020 Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shu-Tao Xia

A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs), such that the prediction of the infected model is maliciously changed if the hidden backdoor is activated by the attacker-defined trigger, while the model performs well on benign samples.

Backdoor Attack

Toward Adversarial Robustness via Semi-supervised Robust Training

1 code implementation 16 Mar 2020 Yiming Li, Baoyuan Wu, Yan Feng, Yanbo Fan, Yong Jiang, Zhifeng Li, Shu-Tao Xia

In this work, we propose a novel defense method, robust training (RT), which jointly minimizes two separated risks ($R_{stand}$ and $R_{rob}$), defined with respect to the benign example and its neighborhood, respectively.

Adversarial Defense Adversarial Robustness
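A toy rendering of the two-risk objective, with squared-error surrogates of our own choosing for $R_{stand}$ and $R_{rob}$ and a single random neighborhood sample:

```python
import numpy as np

def rt_loss(f, x, y, eps=0.1, lam=1.0, rng=None):
    """Sketch of the joint objective: a standard risk on the benign
    example plus a robustness risk penalizing prediction change within
    an eps-neighborhood. Surrogates and sampling are illustrative."""
    rng = rng or np.random.default_rng(0)
    x_nb = x + rng.uniform(-eps, eps, size=x.shape)   # neighborhood sample
    standard = float(np.square(f(x) - y).mean())      # ~ R_stand
    robust = float(np.square(f(x_nb) - f(x)).mean())  # ~ R_rob
    return standard + lam * robust

# With a zero-radius neighborhood and a perfect predictor, both risks vanish.
loss = rt_loss(lambda z: z, np.zeros(3), np.zeros(3), eps=0.0)
```

Separating the two risks makes the trade-off explicit: `lam` weighs robustness against clean accuracy instead of baking both into one adversarial loss.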

Joint Face Completion and Super-resolution using Multi-scale Feature Relation Learning

no code implementations 29 Feb 2020 Zhilei Liu, Yunpeng Wu, Le Li, Cuicui Zhang, Baoyuan Wu

This paper proposes a multi-scale feature graph generative adversarial network (MFG-GAN) to implement the face restoration of images in which both degradation modes coexist, and also to repair images with a single type of degradation.

Facial Inpainting Super-Resolution

Controllable Descendant Face Synthesis

no code implementations 26 Feb 2020 Yong Zhang, Le Li, Zhilei Liu, Baoyuan Wu, Yanbo Fan, Zhifeng Li

Most of the existing methods train models for one-versus-one kin relation, which only consider one parent face and one child face by directly using an auto-encoder without any explicit control over the resemblance of the synthesized face to the parent face.

Face Generation

Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables

1 code implementation CVPR 2019 Yan Xu, Baoyuan Wu, Fumin Shen, Yanbo Fan, Yong Zhang, Heng Tao Shen, Wei Liu

Due to the sequential dependencies among words in a caption, we formulate the generation of adversarial noises for targeted partial captions as a structured output learning problem with latent variables.

Adversarial Attack Image Captioning

MAP Inference via L2-Sphere Linear Program Reformulation

1 code implementation 9 May 2019 Baoyuan Wu, Li Shen, Tong Zhang, Bernard Ghanem

Thus, LS-LP is equivalent to the original MAP inference problem.

Efficient Decision-based Black-box Adversarial Attacks on Face Recognition

1 code implementation CVPR 2019 Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu

In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.

Face Recognition

Target-Aware Deep Tracking

no code implementations CVPR 2019 Xin Li, Chao Ma, Baoyuan Wu, Zhenyu He, Ming-Hsuan Yang

Despite demonstrated successes for numerous vision tasks, the contributions of using pre-trained deep features for visual tracking are not as significant as that for object recognition.

Object Recognition Visual Tracking

Tencent ML-Images: A Large-Scale Multi-Label Image Database for Visual Representation Learning

1 code implementation 7 Jan 2019 Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, Tong Zhang

In this work, we propose to train CNNs from images annotated with multiple tags, to enhance the quality of visual representation of the trained CNN model.

Image Classification Object Detection +5

Learning to Compose Dynamic Tree Structures for Visual Contexts

6 code implementations CVPR 2019 Kaihua Tang, Hanwang Zhang, Baoyuan Wu, Wenhan Luo, Wei Liu

We propose to compose dynamic tree structures that place the objects in an image into a visual context, helping visual reasoning tasks such as scene graph generation and visual Q&A.

Graph Generation Panoptic Scene Graph Generation +2

Bi-Real Net: Binarizing Deep Network Towards Real-Network Performance

1 code implementation 4 Nov 2018 Zechun Liu, Wenhan Luo, Baoyuan Wu, Xin Yang, Wei Liu, Kwang-Ting Cheng

To address the training difficulty, we propose a training algorithm using a tighter approximation to the derivative of the sign function, a magnitude-aware gradient for weight updating, a better initialization method, and a two-step scheme for training a deep network.

Depth Estimation

Multi-label Learning with Missing Labels using Mixed Dependency Graphs

no code implementations 31 Mar 2018 Baoyuan Wu, Fan Jia, Wei Liu, Bernard Ghanem, Siwei Lyu

This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels.

Image Retrieval Multi-Label Learning +2

Tagging like Humans: Diverse and Distinct Image Annotation

no code implementations CVPR 2018 Baoyuan Wu, Weidong Chen, Peng Sun, Wei Liu, Bernard Ghanem, Siwei Lyu

In D2IA, we generate a relevant and distinct tag subset, in which the tags are relevant to the image contents and semantically distinct to each other, using sequential sampling from a determinantal point process (DPP) model.


A Proximal Block Coordinate Descent Algorithm for Deep Neural Network Training

no code implementations 24 Mar 2018 Tim Tsz-Kit Lau, Jinshan Zeng, Baoyuan Wu, Yuan Yao

Training deep neural networks (DNNs) efficiently is a challenge due to the associated highly nonconvex optimization.

Diverse Image Annotation

no code implementations CVPR 2017 Baoyuan Wu, Fan Jia, Wei Liu, Bernard Ghanem

To this end, we treat the image annotation as a subset selection problem based on the conditional determinantal point process (DPP) model, which formulates the representation and diversity jointly.


$\ell_p$-Box ADMM: A Versatile Framework for Integer Programming

no code implementations 26 Apr 2016 Baoyuan Wu, Bernard Ghanem

This paper revisits the integer programming (IP) problem, which plays a fundamental role in many computer vision and machine learning applications.

Graph Matching
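The paper's core reformulation for p = 2 replaces the binary constraint with the intersection of a box and an $\ell_2$-sphere; the two projections that an ADMM-style solver would alternate between are easy to state (a sketch of the constraint sets, not the full algorithm):

```python
import numpy as np

def project_box(x):
    """Projection onto the box [0, 1]^n."""
    return np.clip(x, 0.0, 1.0)

def project_sphere(x):
    """Projection onto the l2-sphere centered at 0.5 with radius
    sqrt(n)/2. Its intersection with the box is exactly {0, 1}^n:
    each (x_i - 0.5)^2 <= 1/4 inside the box, with equality only at
    x_i in {0, 1}, so reaching the radius forces every coordinate binary."""
    center = np.full_like(x, 0.5)
    radius = np.sqrt(x.size) / 2.0
    d = x - center
    norm = np.linalg.norm(d)
    if norm == 0.0:                     # at the center, pick any direction
        d, norm = np.ones_like(x), np.sqrt(x.size)
    return center + radius * d / norm

b = np.array([0.0, 1.0, 1.0, 0.0])      # binary points lie in both sets
```

Binary vectors are fixed points of both projections, which is what makes alternating between the two continuous sets a faithful substitute for the integer constraint.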

ML-MG: Multi-Label Learning With Missing Labels Using a Mixed Graph

no code implementations ICCV 2015 Baoyuan Wu, Siwei Lyu, Bernard Ghanem

This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels (i.e., some of their labels are missing).

Multi-Label Learning

Constrained Clustering and Its Application to Face Clustering in Videos

no code implementations CVPR 2013 Baoyuan Wu, Yifan Zhang, Bao-Gang Hu, Qiang Ji

As a result, many pairwise constraints between faces can be easily obtained from the temporal and spatial knowledge of the face tracks.

Face Clustering
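The constraint-harvesting step can be sketched directly; the tuple format and overlap rule below are our simplified assumptions (same track implies must-link, temporally overlapping tracks imply cannot-link):

```python
from itertools import combinations

def track_constraints(tracks):
    """tracks: list of (start_frame, end_frame, [face_ids]).
    Faces within one track are the same person (must-link); faces from
    two tracks that overlap in time are different people (cannot-link)."""
    must, cannot = set(), set()
    for _, _, faces in tracks:
        must |= {tuple(sorted(p)) for p in combinations(faces, 2)}
    for (s1, e1, f1), (s2, e2, f2) in combinations(tracks, 2):
        if s1 <= e2 and s2 <= e1:        # the two tracks overlap in time
            cannot |= {tuple(sorted((a, b))) for a in f1 for b in f2}
    return must, cannot

must, cannot = track_constraints(
    [(0, 10, ["a", "b"]), (5, 15, ["c"]), (20, 30, ["d"])])
```

These pairwise constraints then feed the constrained clustering objective, which is why no manual pair labeling is needed.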
