Search Results for author: Baoyuan Wu

Found 88 papers, 41 papers with code

Boosting Decision-based Black-box Adversarial Attacks with Random Sign Flip

no code implementations ECCV 2020 Wei-Lun Chen, Zhao-Xiang Zhang, Xiaolin Hu, Baoyuan Wu

Decision-based black-box adversarial attacks (decision-based attacks) pose a severe threat to current deep neural networks, as they need only the predicted label of the target model to craft adversarial examples.

Sparse Adversarial Attack via Perturbation Factorization

1 code implementation ECCV 2020 Yanbo Fan, Baoyuan Wu, Tuanhui Li, Yong Zhang, Mingyang Li, Zhifeng Li, Yujiu Yang

Based on this factorization, we formulate the sparse attack problem as a mixed integer programming (MIP) to jointly optimize the binary selection factors and continuous perturbation magnitudes of all pixels, with a cardinality constraint on selection factors to explicitly control the degree of sparsity.
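As a rough sketch of this factorization (our notation, not necessarily the paper's): the perturbation of each pixel is written as the product of a binary selection factor and a continuous magnitude, giving

```latex
\min_{\mathbf{g},\,\boldsymbol{\theta}} \;
\mathcal{L}\big(\mathbf{x} + \mathbf{g} \odot \boldsymbol{\theta}\big)
\quad \text{s.t.} \quad
\mathbf{g} \in \{0,1\}^{n}, \qquad \|\mathbf{g}\|_{0} \le k,
```

where $\odot$ is the element-wise product, $\mathcal{L}$ is the attack loss, and $k$ is the sparsity budget enforced by the cardinality constraint on the selection factors.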

Adversarial Attack

Decentralized Directed Collaboration for Personalized Federated Learning

no code implementations CVPR 2024 Yingqi Liu, Yifan Shi, Qinglun Li, Baoyuan Wu, Xueqian Wang, Li Shen

To avoid the central failure and communication bottleneck in the server-based FL, we concentrate on the Decentralized Personalized Federated Learning (DPFL) that performs distributed model training in a Peer-to-Peer (P2P) manner.

Personalized Federated Learning

Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor

no code implementations 25 May 2024 Shaokui Wei, Hongyuan Zha, Baoyuan Wu

Data-poisoning backdoor attacks are serious security threats to machine learning models, where an adversary can manipulate the training dataset to inject backdoors into models.

Backdoor Attack backdoor defense +1

Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack

no code implementations 25 May 2024 Mingli Zhu, Siyuan Liang, Baoyuan Wu

Surprisingly, we find that the original backdoors still exist in defense models derived from existing post-training defense strategies; the degree of backdoor existence is measured by a novel metric called the backdoor existence coefficient.

Adversarial Attack backdoor defense +2

Data-Independent Operator: A Training-Free Artifact Representation Extractor for Generalizable Deepfake Detection

1 code implementation 11 Mar 2024 Chuangchuang Tan, Ping Liu, Renshuai Tao, Huan Liu, Yao Zhao, Baoyuan Wu, Yunchao Wei

Because it is unbiased towards both the training and test sources, we define it as the Data-Independent Operator (DIO), which achieves appealing improvements on unseen sources.

DeepFake Detection Face Swapping

Spurious Feature Eraser: Stabilizing Test-Time Adaptation for Vision-Language Foundation Model

1 code implementation 1 Mar 2024 Huan Ma, Yan Zhu, Changqing Zhang, Peilin Zhao, Baoyuan Wu, Long-Kai Huang, Qinghua Hu, Bingzhe Wu

Vision-language foundation models have exhibited remarkable success across a multitude of downstream tasks due to their scalability on extensive image-text paired data.

Fine-Grained Image Classification Language Modelling +1

BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning

no code implementations 26 Jan 2024 Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Mingli Zhu, Ruotong Wang, Li Liu, Chao Shen

We hope that our efforts could build a solid foundation of backdoor learning to facilitate researchers to investigate existing algorithms, develop more innovative algorithms, and explore the intrinsic mechanism of backdoor learning.

Backdoor Attack

Enhanced Few-Shot Class-Incremental Learning via Ensemble Models

no code implementations 14 Jan 2024 Mingli Zhu, Zihao Zhu, Sihong Chen, Chen Chen, Baoyuan Wu

To tackle the overfitting challenge, we design a new ensemble model framework, combined with data augmentation, to boost generalization.

Data Augmentation Few-Shot Class-Incremental Learning +2

Defenses in Adversarial Machine Learning: A Survey

no code implementations 13 Dec 2023 Baoyuan Wu, Shaokui Wei, Mingli Zhu, Meixi Zheng, Zihao Zhu, Mingda Zhang, Hongrui Chen, Danni Yuan, Li Liu, Qingshan Liu

The adversarial phenomenon has been widely observed in machine learning (ML) systems, especially those using deep neural networks; it describes how ML systems may produce predictions that are inconsistent with, and incomprehensible to, humans in particular cases.

Task-Distributionally Robust Data-Free Meta-Learning

no code implementations 23 Nov 2023 Zixuan Hu, Li Shen, Zhenyi Wang, Yongxian Wei, Baoyuan Wu, Chun Yuan, Dacheng Tao

TDS leads to a biased meta-learner because of the skewed task distribution towards newly generated tasks.

Meta-Learning Model Selection

BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning

no code implementations CVPR 2024 Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang

This paper reveals the threats in this practical scenario, showing that backdoor attacks can remain effective even after defenses, and introduces the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.

Backdoor Attack Contrastive Learning

Transcending Forgery Specificity with Latent Space Augmentation for Generalizable Deepfake Detection

1 code implementation CVPR 2024 Zhiyuan Yan, Yuhao Luo, Siwei Lyu, Qingshan Liu, Baoyuan Wu

Deepfake detection faces a critical generalization hurdle, with performance deteriorating when there is a mismatch between the distributions of training and testing data.

DeepFake Detection Face Swapping +1

ToonTalker: Cross-Domain Face Reenactment

no code implementations ICCV 2023 Yuan Gong, Yong Zhang, Xiaodong Cun, Fei Yin, Yanbo Fan, Xuan Wang, Baoyuan Wu, Yujiu Yang

Moreover, since no paired data is provided, we propose a novel cross-domain training scheme using data from two domains with the designed analogy constraint.

Face Reenactment Talking Face Generation

Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy

no code implementations 14 Jul 2023 Zihao Zhu, Mingda Zhang, Shaokui Wei, Li Shen, Yanbo Fan, Baoyuan Wu

To further integrate it with the normal training process, we then propose a learnable poisoning sample selection strategy that learns the mask together with the model parameters through a min-max optimization. Specifically, the outer loop aims to achieve the backdoor attack goal by minimizing the loss on the selected samples, while the inner loop selects hard poisoning samples that impede this goal by maximizing the loss.
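The inner step of this min-max scheme can be sketched as follows; the greedy top-k selection and all names here are illustrative assumptions, not the paper's actual implementation (which learns a mask jointly with the model parameters):

```python
import numpy as np

def select_hard_poisoning_samples(losses, k):
    """Illustrative inner loop: choose the k candidate poisoning samples
    with the highest current loss, i.e. the 'hard' samples that maximize
    the loss the outer loop will then minimize."""
    mask = np.zeros(len(losses), dtype=bool)
    mask[np.argsort(losses)[-k:]] = True
    return mask

# toy example: 8 candidate poisoning samples, keep the 3 hardest
losses = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.7, 0.4, 0.05])
mask = select_hard_poisoning_samples(losses, k=3)
print(np.where(mask)[0])  # indices of the selected hard samples: [1 3 5]
```

In the full min-max optimization, the outer loop would then update the model parameters on the selected samples, and the two steps would alternate.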

Backdoor Attack Data Poisoning

NOFA: NeRF-based One-shot Facial Avatar Reconstruction

no code implementations 7 Jul 2023 Wangbo Yu, Yanbo Fan, Yong Zhang, Xuan Wang, Fei Yin, Yunpeng Bai, Yan-Pei Cao, Ying Shan, Yang Wu, Zhongqian Sun, Baoyuan Wu

In this work, we propose a one-shot 3D facial avatar reconstruction framework that only requires a single source image to reconstruct a high-fidelity 3D facial avatar.

Decoder

DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection

1 code implementation NeurIPS 2023 Zhiyuan Yan, Yong Zhang, Xinhang Yuan, Siwei Lyu, Baoyuan Wu

To fill this gap, we present the first comprehensive benchmark for deepfake detection, called DeepfakeBench, which offers three key contributions: 1) a unified data management system to ensure consistent input across all detectors, 2) an integrated framework for state-of-the-art methods implementation, and 3) standardized evaluation metrics and protocols to promote transparency and reproducibility.

DeepFake Detection Face Swapping

Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers

no code implementations 1 Jun 2023 Ruotong Wang, Hongrui Chen, Zihao Zhu, Li Liu, Baoyuan Wu

Deep neural networks (DNNs) can be manipulated to exhibit specific behaviors when exposed to specific trigger patterns, without affecting their performance on benign samples, a phenomenon dubbed a backdoor attack.

Backdoor Attack backdoor defense +1

Learning to Learn from APIs: Black-Box Data-Free Meta-Learning

1 code implementation 28 May 2023 Zixuan Hu, Li Shen, Zhenyi Wang, Baoyuan Wu, Chun Yuan, Dacheng Tao

Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-learning from a collection of pre-trained models without access to the training data.

Few-Shot Learning Knowledge Distillation

DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks

1 code implementation CVPR 2023 Qiangqiang Wu, Tianyu Yang, Ziquan Liu, Baoyuan Wu, Ying Shan, Antoni B. Chan

However, we find that this simple baseline heavily relies on spatial cues while ignoring temporal relations for frame reconstruction, thus leading to sub-optimal temporal matching representations for VOT and VOS.

Ranked #1 on Visual Object Tracking on TrackingNet (AUC metric)

Semantic Segmentation Video Object Segmentation +2

Improving Fast Adversarial Training with Prior-Guided Knowledge

no code implementations 1 Apr 2023 Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

This initialization is generated by using high-quality adversarial perturbations from the historical training process.

Visual Prompt Based Personalized Federated Learning

no code implementations 15 Mar 2023 Guanghao Li, Wansen Wu, Yan Sun, Li Shen, Baoyuan Wu, Dacheng Tao

Then, the local model is trained on the input composed of raw data and a visual prompt to learn the distribution information contained in the prompt.

Image Classification Personalized Federated Learning

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective

1 code implementation 19 Feb 2023 Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu

Adversarial machine learning (AML) studies the adversarial phenomenon of machine learning, in which models may make predictions that are inconsistent with or unexpected by humans.

Backdoor Attack

Generalizable Black-Box Adversarial Attack with Meta Learning

1 code implementation 1 Jan 2023 Fei Yin, Yong Zhang, Baoyuan Wu, Yan Feng, Jingyi Zhang, Yanbo Fan, Yujiu Yang

In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget.

Adversarial Attack Meta-Learning

Global Balanced Experts for Federated Long-Tailed Learning

1 code implementation ICCV 2023 Yaopei Zeng, Lei Liu, Li Liu, Li Shen, Shaoguo Liu, Baoyuan Wu

In particular, a proxy is derived from the accumulated gradients uploaded by the clients after local training, and is shared by all clients as the class prior for re-balance training.

Federated Learning Privacy Preserving

Visually Adversarial Attacks and Defenses in the Physical World: A Survey

no code implementations 3 Nov 2022 Xingxing Wei, Bangzheng Pu, Jiefan Lu, Baoyuan Wu

The current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their different attack forms.

Adversarial Robustness

Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning

1 code implementation 31 Oct 2022 Longkang Li, Siyuan Liang, Zihao Zhu, Chris Ding, Hongyuan Zha, Baoyuan Wu

Compared to the state-of-the-art reinforcement learning method, our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.

Computational Efficiency Imitation Learning +3

Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation

3 code implementations 12 Oct 2022 Zeyu Qin, Yanbo Fan, Yi Liu, Li Shen, Yong Zhang, Jue Wang, Baoyuan Wu

Furthermore, RAP can be naturally combined with many existing black-box attack techniques, to further boost the transferability.

Adversarial Attack

Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis

1 code implementation 2 Oct 2022 Jiancong Xiao, Zeyu Qin, Yanbo Fan, Baoyuan Wu, Jue Wang, Zhi-Quan Luo

Therefore, adversarial training for multiple perturbations (ATMP) is proposed to generalize the adversarial robustness over different perturbation types (in $\ell_1$, $\ell_2$, and $\ell_\infty$ norm-bounded perturbations).

Adversarial Robustness

A Large-scale Multiple-objective Method for Black-box Attack against Object Detection

no code implementations 16 Sep 2022 Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao

Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.

object-detection Object Detection

Imperceptible and Robust Backdoor Attack in 3D Point Cloud

1 code implementation 17 Aug 2022 Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, Shu-Tao Xia

Existing attacks often insert some additional points into the point cloud as the trigger, or utilize a linear transformation (e.g., rotation) to construct the poisoned point cloud.

Backdoor Attack

Versatile Weight Attack via Flipping Limited Bits

1 code implementation 25 Jul 2022 Jiawang Bai, Baoyuan Wu, Zhifeng Li, Shu-Tao Xia

Utilizing the latest technique in integer programming, we equivalently reformulate this MIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM) method.

Backdoor Attack

Prior-Guided Adversarial Initialization for Fast Adversarial Training

1 code implementation 18 Jul 2022 Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

Based on the observation, we propose a prior-guided FGSM initialization method to avoid overfitting after investigating several initialization strategies, improving the quality of the AEs during the whole training process.

Adversarial Attack Adversarial Attack on Video Classification

BackdoorBench: A Comprehensive Benchmark of Backdoor Learning

1 code implementation 25 Jun 2022 Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Chao Shen

However, we find that the evaluations of new methods are often not thorough enough to verify their claims and actual performance, mainly due to rapid development, diverse settings, and the difficulties of implementation and reproducibility.

Backdoor Attack

LAS-AT: Adversarial Training with Learnable Attack Strategy

1 code implementation CVPR 2022 Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

In this paper, we propose a novel framework for adversarial training by introducing the concept of "learnable attack strategy", dubbed LAS-AT, which learns to automatically produce attack strategies to improve the model robustness.

StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN

1 code implementation 8 Mar 2022 Fei Yin, Yong Zhang, Xiaodong Cun, Mingdeng Cao, Yanbo Fan, Xuan Wang, Qingyan Bai, Baoyuan Wu, Jue Wang, Yujiu Yang

Our framework elevates the resolution of the synthesized talking face to 1024×1024 for the first time, even though the training dataset has a lower resolution.

Facial Editing Talking Face Generation +1

Backdoor Defense via Decoupling the Training Process

2 code implementations ICLR 2022 Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren

Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples.

backdoor defense Self-Supervised Learning

Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection

no code implementations ICCV 2021 Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao

Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.

Autonomous Driving Image Classification +2

Accelerating Neural Network Optimization Through an Automated Control Theory Lens

no code implementations CVPR 2022 Jiahao Wang, Baoyuan Wu, Rui Su, Mingdeng Cao, Shuwei Shi, Wanli Ouyang, Yujiu Yang

We conduct experiments both from a control theory lens through a phase locus verification and from a network training lens on several models, including CNNs, Transformers, MLPs, and on benchmark datasets.

Math

Boosting Fast Adversarial Training with Learnable Adversarial Initialization

no code implementations 11 Oct 2021 Xiaojun Jia, Yong Zhang, Baoyuan Wu, Jue Wang, Xiaochun Cao

Adversarial training (AT) has been demonstrated to be effective in improving model robustness by leveraging adversarial examples for training.

Robust Physical-World Attacks on Face Recognition

no code implementations 20 Sep 2021 Xin Zheng, Yanbo Fan, Baoyuan Wu, Yong Zhang, Jue Wang, Shirui Pan

Face recognition has been greatly facilitated by the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications.

Adversarial Attack Adversarial Robustness +1

Regional Adversarial Training for Better Robust Generalization

no code implementations 2 Sep 2021 Chuanbiao Song, Yanbo Fan, Yichen Yang, Baoyuan Wu, Yiming Li, Zhifeng Li, Kun He

Adversarial training (AT) has been demonstrated as one of the most promising defense methods against various adversarial attacks.

Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing

1 code implementation CVPR 2021 Xunguang Wang, Zheng Zhang, Baoyuan Wu, Fumin Shen, Guangming Lu

However, deep hashing networks are vulnerable to adversarial examples, a practical security problem that has seldom been studied in the hashing-based retrieval field.

Deep Hashing Image Retrieval +1

Random Noise Defense Against Query-Based Black-Box Attacks

1 code implementation NeurIPS 2021 Zeyu Qin, Yanbo Fan, Hongyuan Zha, Baoyuan Wu

We conduct a theoretical analysis of the effectiveness of RND against query-based black-box attacks and the corresponding adaptive attacks.
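At its core, the defense adds small random noise to each incoming query before inference, so that repeated attacker queries receive noisy, less informative feedback. This minimal sketch is our own illustration (the linear "model", `sigma`, and function names are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def rnd_query(model, x, sigma=0.05):
    """Random-noise-defended query: perturb the input with Gaussian noise
    before the model sees it, so the attacker's query feedback is noisy."""
    return model(x + rng.normal(0.0, sigma, size=x.shape))

# toy 'model': a fixed linear scorer standing in for a real classifier
w = np.array([0.5, -1.0, 0.25, 2.0])
model = lambda x: float(w @ x)

x = np.ones(4)
clean = model(x)
defended = rnd_query(model, x)
print(defended != clean)  # the defended answer differs from the clean score
```

Because the noise is drawn fresh per query, two identical queries generally receive different answers, which degrades the gradient estimates that query-based attacks rely on.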

Adversarial Robustness

Towards Corruption-Agnostic Robust Domain Adaptation

no code implementations 21 Apr 2021 Yifan Xu, Kekai Sheng, Weiming Dong, Baoyuan Wu, Changsheng Xu, Bao-Gang Hu

However, due to unpredictable corruptions (e.g., noise and blur) in real data such as web images, domain adaptation methods are increasingly required to be corruption-robust on target domains.

Domain Adaptation

Towards Open-World Text-Guided Face Image Generation and Manipulation

2 code implementations 18 Apr 2021 Weihao Xia, Yujiu Yang, Jing-Hao Xue, Baoyuan Wu

To be specific, we propose a brand new paradigm of text-guided image generation and manipulation based on the superior characteristics of a pretrained GAN model.

Language Modelling Semantic Segmentation +1

Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits

2 code implementations ICLR 2021 Jiawang Bai, Baoyuan Wu, Yong Zhang, Yiming Li, Zhifeng Li, Shu-Tao Xia

By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM) method.

Backdoor Attack

Effective and Efficient Vote Attack on Capsule Networks

1 code implementation ICLR 2021 Jindong Gu, Baoyuan Wu, Volker Tresp

As alternatives to CNNs, the recently proposed Capsule Networks (CapsNets) are shown to be more robust to white-box attacks than CNNs under popular attack protocols.

Adversarial Robustness

Meta-Attack: Class-Agnostic and Model-Agnostic Physical Adversarial Attack

no code implementations ICCV 2021 Weiwei Feng, Baoyuan Wu, Tianzhu Zhang, Yong Zhang, Yongdong Zhang

To tackle these issues, we propose a class-agnostic and model-agnostic physical adversarial attack model (Meta-Attack), which is able to not only generate robust physical adversarial examples by simulating color and shape distortions, but also generalize to attacking novel images and novel DNN models by accessing a few digital and physical images.

Adversarial Attack Few-Shot Learning

Dual ResGCN for Balanced Scene Graph Generation

no code implementations 9 Nov 2020 Jingyi Zhang, Yong Zhang, Baoyuan Wu, Yanbo Fan, Fumin Shen, Heng Tao Shen

We propose to incorporate the prior about the co-occurrence of relation pairs into the graph to further help alleviate the class imbalance issue.

Graph Generation Relation +1

Pixel-wise Dense Detector for Image Inpainting

no code implementations 4 Nov 2020 Ruisong Zhang, Weize Quan, Baoyuan Wu, Zhifeng Li, Dong-Ming Yan

Recent GAN-based image inpainting approaches adopt an average strategy to discriminate the generated image and output a scalar, which inevitably loses the position information of visual artifacts.

Decoder Image Inpainting +2

Backdoor Attack against Speaker Verification

1 code implementation 22 Oct 2020 Tongqing Zhai, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, Shu-Tao Xia

We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.

Backdoor Attack Clustering +1

Teacher-Student Competition for Unsupervised Domain Adaptation

no code implementations 19 Oct 2020 Ruixin Xiao, Zhilei Liu, Baoyuan Wu

With only class-level supervision from the source domain, existing unsupervised domain adaptation (UDA) methods mainly learn domain-invariant representations from a shared feature extractor, which causes the source-bias problem.

Unsupervised Domain Adaptation

Open-sourced Dataset Protection via Backdoor Watermarking

2 code implementations 12 Oct 2020 Yiming Li, Ziqi Zhang, Jiawang Bai, Baoyuan Wu, Yong Jiang, Shu-Tao Xia

Based on the proposed backdoor-based watermarking, we use a hypothesis-test-guided method for dataset verification, based on the posterior probabilities generated by the suspicious third-party model for benign samples and their correspondingly watermarked samples (i.e., images with the trigger) on the target class.

Image Classification

SPL-MLL: Selecting Predictable Landmarks for Multi-Label Learning

no code implementations ECCV 2020 Junbing Li, Changqing Zhang, Pengfei Zhu, Baoyuan Wu, Lei Chen, Qinghua Hu

Although significant progress has been achieved, multi-label classification is still challenging due to the complexity of correlations among different labels.

General Classification Multi-Label Classification +1

Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution

1 code implementation CVPR 2022 Yan Feng, Baoyuan Wu, Yanbo Fan, Li Liu, Zhifeng Li, Shutao Xia

This work studies black-box adversarial attacks against deep neural networks (DNNs), where the attacker can only access the query feedback returned by the attacked DNN model, while other information such as model parameters or the training datasets are unknown.

Adversarial Attack

Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients

no code implementations 12 May 2020 Chengcheng Ma, Baoyuan Wu, Shibiao Xu, Yanbo Fan, Yong Zhang, Xiaopeng Zhang, Zhifeng Li

In this work, we study the detection of adversarial examples, based on the assumption that the output and internal responses of one DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance).

Image Classification

Rethinking the Trigger of Backdoor Attack

no code implementations 9 Apr 2020 Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shu-Tao Xia

A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs), such that the prediction of the infected model is maliciously changed if the hidden backdoor is activated by the attacker-defined trigger, while the model performs well on benign samples.

Backdoor Attack backdoor defense

Toward Adversarial Robustness via Semi-supervised Robust Training

1 code implementation 16 Mar 2020 Yiming Li, Baoyuan Wu, Yan Feng, Yanbo Fan, Yong Jiang, Zhifeng Li, Shu-Tao Xia

In this work, we propose a novel defense method, robust training (RT), which jointly minimizes two separate risks ($R_{stand}$ and $R_{rob}$), defined with respect to the benign example and its neighborhood, respectively.
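In symbols, the joint objective described above can be written (with $\lambda$ as a trade-off weight; the notation is ours, not necessarily the paper's):

```latex
\min_{f} \; R_{stand}(f) \;+\; \lambda \, R_{rob}(f),
```

where $R_{stand}$ is the standard risk on the benign example and $R_{rob}$ penalizes prediction changes within its neighborhood.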

Adversarial Defense Adversarial Robustness

Joint Face Completion and Super-resolution using Multi-scale Feature Relation Learning

no code implementations 29 Feb 2020 Zhilei Liu, Yunpeng Wu, Le Li, Cuicui Zhang, Baoyuan Wu

This paper proposes a multi-scale feature graph generative adversarial network (MFG-GAN) to implement the face restoration of images in which both degradation modes coexist, and also to repair images with a single type of degradation.

Facial Inpainting Generative Adversarial Network +2

Controllable Descendant Face Synthesis

no code implementations 26 Feb 2020 Yong Zhang, Le Li, Zhilei Liu, Baoyuan Wu, Yanbo Fan, Zhifeng Li

Most existing methods train models for a one-versus-one kin relation, considering only one parent face and one child face, by directly using an auto-encoder without any explicit control over the resemblance of the synthesized face to the parent face.

Attribute Face Generation +1

Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables

1 code implementation CVPR 2019 Yan Xu, Baoyuan Wu, Fumin Shen, Yanbo Fan, Yong Zhang, Heng Tao Shen, Wei Liu

Due to the sequential dependencies among words in a caption, we formulate the generation of adversarial noises for targeted partial captions as a structured output learning problem with latent variables.

Adversarial Attack Image Captioning

MAP Inference via L2-Sphere Linear Program Reformulation

1 code implementation 9 May 2019 Baoyuan Wu, Li Shen, Tong Zhang, Bernard Ghanem

Thus, LS-LP is equivalent to the original MAP inference problem.

valid

Efficient Decision-based Black-box Adversarial Attacks on Face Recognition

no code implementations CVPR 2019 Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu

In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.

Face Recognition

Target-Aware Deep Tracking

no code implementations CVPR 2019 Xin Li, Chao Ma, Baoyuan Wu, Zhenyu He, Ming-Hsuan Yang

Despite demonstrated successes for numerous vision tasks, the contributions of using pre-trained deep features for visual tracking are not as significant as those for object recognition.

Object Object Recognition +1

Tencent ML-Images: A Large-Scale Multi-Label Image Database for Visual Representation Learning

1 code implementation 7 Jan 2019 Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, Tong Zhang

In this work, we propose to train CNNs from images annotated with multiple tags, to enhance the quality of visual representation of the trained CNN model.

Image Classification object-detection +5

Learning to Compose Dynamic Tree Structures for Visual Contexts

6 code implementations CVPR 2019 Kaihua Tang, Hanwang Zhang, Baoyuan Wu, Wenhan Luo, Wei Liu

We propose to compose dynamic tree structures that place the objects in an image into a visual context, helping visual reasoning tasks such as scene graph generation and visual Q&A.

Graph Generation Panoptic Scene Graph Generation +2

Bi-Real Net: Binarizing Deep Network Towards Real-Network Performance

1 code implementation 4 Nov 2018 Zechun Liu, Wenhan Luo, Baoyuan Wu, Xin Yang, Wei Liu, Kwang-Ting Cheng

To address the training difficulty, we propose a training algorithm using a tighter approximation to the derivative of the sign function, a magnitude-aware gradient for weight updating, a better initialization method, and a two-step scheme for training a deep network.

Depth Estimation

Tagging like Humans: Diverse and Distinct Image Annotation

no code implementations CVPR 2018 Baoyuan Wu, Weidong Chen, Peng Sun, Wei Liu, Bernard Ghanem, Siwei Lyu

In D2IA, we generate a relevant and distinct tag subset, in which the tags are relevant to the image contents and semantically distinct to each other, using sequential sampling from a determinantal point process (DPP) model.

Generative Adversarial Network TAG

Multi-label Learning with Missing Labels using Mixed Dependency Graphs

no code implementations 31 Mar 2018 Baoyuan Wu, Fan Jia, Wei Liu, Bernard Ghanem, Siwei Lyu

This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels.

Image Retrieval Missing Labels +2

CNN in MRF: Video Object Segmentation via Inference in A CNN-Based Higher-Order Spatio-Temporal MRF

no code implementations CVPR 2018 Linchao Bao, Baoyuan Wu, Wei Liu

With temporal dependencies established by optical flow, the resulting MRF model combines both spatial and temporal cues for tackling video object segmentation.

Object One-Shot Segmentation +4

A Proximal Block Coordinate Descent Algorithm for Deep Neural Network Training

no code implementations 24 Mar 2018 Tim Tsz-Kit Lau, Jinshan Zeng, Baoyuan Wu, Yuan Yao

Training deep neural networks (DNNs) efficiently is a challenge due to the associated highly nonconvex optimization.

Diverse Image Annotation

no code implementations CVPR 2017 Baoyuan Wu, Fan Jia, Wei Liu, Bernard Ghanem

To this end, we treat the image annotation as a subset selection problem based on the conditional determinantal point process (DPP) model, which formulates the representation and diversity jointly.

TAG

$\ell_p$-Box ADMM: A Versatile Framework for Integer Programming

no code implementations 26 Apr 2016 Baoyuan Wu, Bernard Ghanem

This paper revisits the integer programming (IP) problem, which plays a fundamental role in many computer vision and machine learning applications.
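The key idea of the framework is to replace the binary constraint with the intersection of a box and an $\ell_p$-sphere; stated in our notation, for any $p > 0$,

```latex
\mathbf{x} \in \{0,1\}^{n}
\;\Longleftrightarrow\;
\mathbf{x} \in [0,1]^{n} \,\cap\,
\Big\{ \mathbf{x} : \big\| \mathbf{x} - \tfrac{1}{2}\mathbf{1} \big\|_{p}^{p} = \tfrac{n}{2^{p}} \Big\},
```

so the discrete set becomes two continuous constraints that ADMM can split and handle separately. For $p = 2$, the second set is a sphere of radius $\sqrt{n}/2$ centered at $\tfrac{1}{2}\mathbf{1}$: since each coordinate lies in $[0,1]$, the sphere constraint forces every $|x_i - \tfrac{1}{2}| = \tfrac{1}{2}$, i.e. each $x_i \in \{0,1\}$.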

Clustering Graph Matching

ML-MG: Multi-Label Learning With Missing Labels Using a Mixed Graph

no code implementations ICCV 2015 Baoyuan Wu, Siwei Lyu, Bernard Ghanem

This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels (i.e., some of their labels are missing).

Missing Labels

Constrained Clustering and Its Application to Face Clustering in Videos

no code implementations CVPR 2013 Baoyuan Wu, Yifan Zhang, Bao-Gang Hu, Qiang Ji

As a result, many pairwise constraints between faces can be easily obtained from the temporal and spatial knowledge of the face tracks.

Constrained Clustering Face Clustering
