Search Results for author: Yanbo Fan

Found 44 papers, 22 papers with code

Sparse Adversarial Attack via Perturbation Factorization

1 code implementation ECCV 2020 Yanbo Fan, Baoyuan Wu, Tuanhui Li, Yong Zhang, Mingyang Li, Zhifeng Li, Yujiu Yang

Based on this factorization, we formulate the sparse attack problem as a mixed integer programming (MIP) to jointly optimize the binary selection factors and continuous perturbation magnitudes of all pixels, with a cardinality constraint on selection factors to explicitly control the degree of sparsity.

Adversarial Attack
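The factorization in the snippet above can be illustrated with a toy sketch (not the paper's MIP solver): the perturbation is written as the element-wise product of binary selection factors `g` and continuous magnitudes `epsilon`, and the cardinality constraint on `g` is enforced here by a simple hard top-k selection. The function name and shapes are hypothetical.

```python
import numpy as np

def factorized_sparse_perturbation(epsilon, k):
    """Factorize a perturbation into binary selection factors g and
    continuous magnitudes epsilon, keeping only the k largest entries.

    Toy stand-in for the paper's MIP formulation: the cardinality
    constraint ||g||_0 <= k is enforced by hard top-k selection here.
    """
    flat = np.abs(epsilon).ravel()
    g = np.zeros_like(flat)
    g[np.argsort(flat)[-k:]] = 1.0     # select the k largest magnitudes
    g = g.reshape(epsilon.shape)
    return g * epsilon, g              # sparse perturbation delta = g * epsilon

rng = np.random.default_rng(0)
eps = rng.normal(size=(8, 8))          # dense candidate perturbation
delta, g = factorized_sparse_perturbation(eps, k=5)
# at most 5 of the 64 pixels carry a nonzero perturbation
```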

Bridging 3D Gaussian and Mesh for Freeview Video Rendering

no code implementations 18 Mar 2024 Yuting Xiao, Xuan Wang, Jiafei Li, Hongrui Cai, Yanbo Fan, Nan Xue, Minghui Yang, Yujun Shen, Shenghua Gao

To this end, we propose a novel approach, GauMesh, to bridge 3D Gaussians and meshes for modeling and rendering dynamic scenes.

Novel View Synthesis

VectorTalker: SVG Talking Face Generation with Progressive Vectorisation

no code implementations 18 Dec 2023 Hao Hu, Xuan Wang, Jingxiang Sun, Yanbo Fan, Yu Guo, Caigui Jiang

To address these, we propose a novel scalable vector graphic reconstruction and animation method, dubbed VectorTalker.

Image Reconstruction Talking Face Generation +1

ToonTalker: Cross-Domain Face Reenactment

no code implementations ICCV 2023 Yuan Gong, Yong Zhang, Xiaodong Cun, Fei Yin, Yanbo Fan, Xuan Wang, Baoyuan Wu, Yujiu Yang

Moreover, since no paired data is provided, we propose a novel cross-domain training scheme using data from two domains with the designed analogy constraint.

Face Reenactment Talking Face Generation

Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy

no code implementations 14 Jul 2023 Zihao Zhu, Mingda Zhang, Shaokui Wei, Li Shen, Yanbo Fan, Baoyuan Wu

To further integrate it with the normal training process, we then propose a learnable poisoning sample selection strategy to learn the mask together with the model parameters through a min-max optimization. Specifically, the outer loop aims to achieve the backdoor attack goal by minimizing the loss based on the selected samples, while the inner loop selects hard poisoning samples that impede this goal by maximizing the loss.

Backdoor Attack Data Poisoning
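The min-max idea described above can be sketched with a toy alternation (not the paper's method): the inner step picks the k hardest poisoning samples (largest loss), and the outer step updates the model on the selected samples only. The logistic model, hard top-k selection in place of the learnable mask, and all hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))               # candidate poisoning samples
y = rng.integers(0, 2, size=100) * 2 - 1    # labels in {-1, +1}
w = np.zeros(5)                             # model parameters
k, lr = 20, 0.1

def per_sample_loss(w):
    """Per-sample logistic loss of the current model."""
    return np.log1p(np.exp(-y * (X @ w)))

for _ in range(50):
    losses = per_sample_loss(w)
    selected = np.argsort(losses)[-k:]       # inner max: hardest k samples
    Xs, ys = X[selected], y[selected]
    p = 1.0 / (1.0 + np.exp(ys * (Xs @ w)))  # = sigmoid(-ys * scores)
    grad = -(ys[:, None] * Xs * p[:, None]).mean(axis=0)
    w -= lr * grad                           # outer min: descend on them
```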

NOFA: NeRF-based One-shot Facial Avatar Reconstruction

no code implementations 7 Jul 2023 Wangbo Yu, Yanbo Fan, Yong Zhang, Xuan Wang, Fei Yin, Yunpeng Bai, Yan-Pei Cao, Ying Shan, Yang Wu, Zhongqian Sun, Baoyuan Wu

In this work, we propose a one-shot 3D facial avatar reconstruction framework that only requires a single source image to reconstruct a high-fidelity 3D facial avatar.

Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks

1 code implementation 6 Jul 2023 Xu Han, Anmin Liu, Chenxuan Yao, Yanbo Fan, Kun He

In either case, common gradient-based methods generally use the sign function to generate perturbations in the gradient update, which offers a roughly correct direction and has achieved great success.
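As a reference point for the sign-based update this snippet describes, here is a minimal FGSM-style step; the paper itself replaces the raw sign with a sampling-based rescaling, so this sketch only shows the baseline, with toy inputs.

```python
import numpy as np

def sign_step(x, grad, eps):
    """One sign-based gradient update: move every pixel by +/- eps in the
    direction of the loss gradient, then clip to the valid image range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = np.full((4, 4), 0.5)                          # toy "image" in [0, 1]
grad = np.linspace(-1.0, 1.0, 16).reshape(4, 4)   # toy loss gradient
x_adv = sign_step(x, grad, eps=0.1)
# every pixel moves by exactly eps here (no clipping is triggered)
```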

Robust Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers

no code implementations 1 Jun 2023 Ruotong Wang, Hongrui Chen, Zihao Zhu, Li Liu, Yong Zhang, Yanbo Fan, Baoyuan Wu

We hope that the proposed VSSC trigger and implementation approach could inspire future studies on designing more practical triggers in backdoor attacks.

Backdoor Attack backdoor defense +1

DPE: Disentanglement of Pose and Expression for General Video Portrait Editing

1 code implementation CVPR 2023 Youxin Pang, Yong Zhang, Weize Quan, Yanbo Fan, Xiaodong Cun, Ying Shan, Dong-Ming Yan

In this paper, we introduce a novel self-supervised disentanglement framework to decouple pose and expression without 3DMMs and paired data, which consists of a motion editing module, a pose generator, and an expression generator.

Disentanglement Talking Face Generation +1

Generalizable Black-Box Adversarial Attack with Meta Learning

1 code implementation 1 Jan 2023 Fei Yin, Yong Zhang, Baoyuan Wu, Yan Feng, Jingyi Zhang, Yanbo Fan, Yujiu Yang

In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget.

Adversarial Attack Meta-Learning

3D GAN Inversion with Facial Symmetry Prior

no code implementations CVPR 2023 Fei Yin, Yong Zhang, Xuan Wang, Tengfei Wang, Xiaoyu Li, Yuan Gong, Yanbo Fan, Xiaodong Cun, Ying Shan, Cengiz Oztireli, Yujiu Yang

It is natural to associate 3D GANs with GAN inversion methods to project a real image into the generator's latent space, allowing free-view consistent synthesis and editing, referred to as 3D GAN inversion.

Image Reconstruction Neural Rendering

Adversarial Rademacher Complexity of Deep Neural Networks

1 code implementation 27 Nov 2022 Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Zhi-Quan Luo

Specifically, we provide the first bound of adversarial Rademacher complexity of deep neural networks.

Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation

3 code implementations 12 Oct 2022 Zeyu Qin, Yanbo Fan, Yi Liu, Li Shen, Yong Zhang, Jue Wang, Baoyuan Wu

Furthermore, RAP can be naturally combined with many existing black-box attack techniques, to further boost the transferability.

Adversarial Attack

Stability Analysis and Generalization Bounds of Adversarial Training

1 code implementation 3 Oct 2022 Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Jue Wang, Zhi-Quan Luo

In adversarial machine learning, deep neural networks can fit the adversarial examples on the training dataset but have poor generalization ability on the test set.

Generalization Bounds

Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis

1 code implementation 2 Oct 2022 Jiancong Xiao, Zeyu Qin, Yanbo Fan, Baoyuan Wu, Jue Wang, Zhi-Quan Luo

Therefore, adversarial training for multiple perturbations (ATMP) is proposed to generalize the adversarial robustness over different perturbation types (in $\ell_1$, $\ell_2$, and $\ell_\infty$ norm-bounded perturbations).

Adversarial Robustness

Understanding Adversarial Robustness Against On-manifold Adversarial Examples

1 code implementation 2 Oct 2022 Jiancong Xiao, Liusha Yang, Yanbo Fan, Jue Wang, Zhi-Quan Luo

On synthetic datasets, we theoretically prove that on-manifold adversarial examples are powerful, yet adversarial training focuses on off-manifold directions and ignores on-manifold adversarial examples.

Adversarial Robustness

A Large-scale Multiple-objective Method for Black-box Attack against Object Detection

no code implementations 16 Sep 2022 Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao

Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.

object-detection Object Detection

Towards Real-World Video Deblurring by Exploring Blur Formation Process

1 code implementation 28 Aug 2022 Mingdeng Cao, Zhihang Zhong, Yanbo Fan, Jiahao Wang, Yong Zhang, Jue Wang, Yujiu Yang, Yinqiang Zheng

We believe the novel realistic synthesis pipeline and the corresponding RAW video dataset can help the community easily construct customized blur datasets to greatly improve real-world video deblurring performance, instead of laboriously collecting real data pairs.


HyP$^2$ Loss: Beyond Hypersphere Metric Space for Multi-label Image Retrieval

1 code implementation 14 Aug 2022 Chengyin Xu, Zenghao Chai, Zhengzhuo Xu, Chun Yuan, Yanbo Fan, Jue Wang

Image retrieval has become an increasingly appealing technique with broad multimedia application prospects, where deep hashing serves as the dominant branch towards low storage and efficient retrieval.

Deep Hashing Metric Learning +1

Fast Adversarial Training with Adaptive Step Size

no code implementations 6 Jun 2022 Zhichao Huang, Yanbo Fan, Chen Liu, Weizhong Zhang, Yong Zhang, Mathieu Salzmann, Sabine Süsstrunk, Jue Wang

While adversarial training and its variants have been shown to be the most effective algorithms to defend against adversarial attacks, their extremely slow training process makes it hard to scale to large datasets like ImageNet.

Improving the Latent Space of Image Style Transfer

no code implementations 24 May 2022 Yunpeng Bai, Cairong Wang, Chun Yuan, Yanbo Fan, Jue Wang

The content contrastive loss enables the encoder to retain more available details.

Style Transfer

VDTR: Video Deblurring with Transformer

1 code implementation 17 Apr 2022 Mingdeng Cao, Yanbo Fan, Yong Zhang, Jue Wang, Yujiu Yang

For multi-frame temporal modeling, we adapt Transformer to fuse multiple spatial features efficiently.

Deblurring Video Restoration

Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks

1 code implementation 6 Apr 2022 Xu Han, Anmin Liu, Yifeng Xiong, Yanbo Fan, Kun He

Deviation between the original gradient and the generated noises may lead to inaccurate gradient update estimation and suboptimal solutions for adversarial transferability, which is crucial for black-box attacks.

StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN

1 code implementation 8 Mar 2022 Fei Yin, Yong Zhang, Xiaodong Cun, Mingdeng Cao, Yanbo Fan, Xuan Wang, Qingyan Bai, Baoyuan Wu, Jue Wang, Yujiu Yang

Our framework elevates the resolution of the synthesized talking face to 1024×1024 for the first time, even though the training dataset has a lower resolution.

Facial Editing Talking Face Generation +1

Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection

no code implementations ICCV 2021 Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao

Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.

Autonomous Driving Image Classification +2

Robust Physical-World Attacks on Face Recognition

no code implementations 20 Sep 2021 Xin Zheng, Yanbo Fan, Baoyuan Wu, Yong Zhang, Jue Wang, Shirui Pan

Face recognition has been greatly facilitated by the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications.

Adversarial Attack Adversarial Robustness +1

High-Fidelity GAN Inversion for Image Attribute Editing

1 code implementation CVPR 2022 Tengfei Wang, Yong Zhang, Yanbo Fan, Jue Wang, Qifeng Chen

With a low bit-rate latent code, previous works have difficulties in preserving high-fidelity details in reconstructed and edited images.

Attribute Generative Adversarial Network +2

Regional Adversarial Training for Better Robust Generalization

no code implementations 2 Sep 2021 Chuanbiao Song, Yanbo Fan, Yichen Yang, Baoyuan Wu, Yiming Li, Zhifeng Li, Kun He

Adversarial training (AT) has been demonstrated as one of the most promising defense methods against various adversarial attacks.

DAE-GAN: Dynamic Aspect-aware GAN for Text-to-Image Synthesis

1 code implementation ICCV 2021 Shulan Ruan, Yong Zhang, Kun Zhang, Yanbo Fan, Fan Tang, Qi Liu, Enhong Chen

Text-to-image synthesis refers to generating an image from a given text description, the key goal of which lies in photo realism and semantic consistency.

Image Generation Sentence +2

Random Noise Defense Against Query-Based Black-Box Attacks

1 code implementation NeurIPS 2021 Zeyu Qin, Yanbo Fan, Hongyuan Zha, Baoyuan Wu

We conduct the theoretical analysis about the effectiveness of RND against query-based black-box attacks and the corresponding adaptive attacks.

Adversarial Robustness
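The defense in this entry can be sketched in a few lines: answer every query on a randomly perturbed copy of the input, so the attacker's query feedback is noisy. The linear "model" and the noise scale `nu` below are hypothetical stand-ins, not the paper's evaluated setup.

```python
import numpy as np

def rnd_predict(model, x, nu, rng):
    """Random Noise Defense sketch: evaluate the model on x + nu * eta
    with Gaussian noise eta, so query-based attackers get noisy feedback."""
    eta = rng.normal(size=x.shape)
    return model(x + nu * eta)

w = np.array([0.3, -0.7, 1.2])        # toy linear scorer (hypothetical)
model = lambda x: float(x @ w)
rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0])
clean = model(x)                      # deterministic, undefended answer
noisy = rnd_predict(model, x, nu=0.05, rng=rng)
# with nu = 0 the defense recovers the undefended model exactly
```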

Dual ResGCN for Balanced Scene Graph Generation

no code implementations 9 Nov 2020 Jingyi Zhang, Yong Zhang, Baoyuan Wu, Yanbo Fan, Fumin Shen, Heng Tao Shen

We propose to incorporate the prior about the co-occurrence of relation pairs into the graph to further help alleviate the class imbalance issue.

Graph Generation Relation +1

Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution

1 code implementation CVPR 2022 Yan Feng, Baoyuan Wu, Yanbo Fan, Li Liu, Zhifeng Li, Shutao Xia

This work studies black-box adversarial attacks against deep neural networks (DNNs), where the attacker can only access the query feedback returned by the attacked DNN model, while other information, such as the model parameters or the training dataset, is unknown.

Adversarial Attack

Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients

no code implementations 12 May 2020 Chengcheng Ma, Baoyuan Wu, Shibiao Xu, Yanbo Fan, Yong Zhang, Xiaopeng Zhang, Zhifeng Li

In this work, we study the detection of adversarial examples, based on the assumption that the output and internal responses of one DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance).

Image Classification

Toward Adversarial Robustness via Semi-supervised Robust Training

1 code implementation 16 Mar 2020 Yiming Li, Baoyuan Wu, Yan Feng, Yanbo Fan, Yong Jiang, Zhifeng Li, Shu-Tao Xia

In this work, we propose a novel defense method, the robust training (RT), by jointly minimizing two separated risks ($R_{stand}$ and $R_{rob}$), which correspond to the benign example and its neighborhood, respectively.

Adversarial Defense Adversarial Robustness
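The joint objective described above can be sketched as the sum of a standard risk on benign examples and a robustness risk over their neighborhoods. The logistic loss and the prediction-consistency form of $R_{rob}$ below are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def robust_training_loss(w, X, y, neighbor_fn, lam=1.0):
    """Sketch of R_stand + lam * R_rob: standard logistic loss on benign
    examples, plus a penalty for prediction changes on their neighbors."""
    scores = X @ w
    r_stand = np.log1p(np.exp(-y * scores)).mean()        # R_stand
    r_rob = ((neighbor_fn(X) @ w - scores) ** 2).mean()   # R_rob (consistency)
    return r_stand + lam * r_rob

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = rng.integers(0, 2, size=32) * 2 - 1                   # labels in {-1, +1}
w = rng.normal(size=4)
shift = lambda X: X + 0.1 * rng.normal(size=X.shape)      # toy neighborhood
loss = robust_training_loss(w, X, y, shift)
```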

Controllable Descendant Face Synthesis

no code implementations 26 Feb 2020 Yong Zhang, Le Li, Zhilei Liu, Baoyuan Wu, Yanbo Fan, Zhifeng Li

Most existing methods train models for a one-versus-one kin relation, considering only one parent face and one child face, and directly use an auto-encoder without any explicit control over the resemblance of the synthesized face to the parent face.

Attribute Face Generation +1

Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables

1 code implementation CVPR 2019 Yan Xu, Baoyuan Wu, Fumin Shen, Yanbo Fan, Yong Zhang, Heng Tao Shen, Wei Liu

Due to the sequential dependencies among words in a caption, we formulate the generation of adversarial noises for targeted partial captions as a structured output learning problem with latent variables.

Adversarial Attack Image Captioning

Tencent ML-Images: A Large-Scale Multi-Label Image Database for Visual Representation Learning

1 code implementation 7 Jan 2019 Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, Tong Zhang

In this work, we propose to train CNNs from images annotated with multiple tags, to enhance the quality of visual representation of the trained CNN model.

Image Classification object-detection +5

Learning with Average Top-k Loss

no code implementations NeurIPS 2017 Yanbo Fan, Siwei Lyu, Yiming Ying, Bao-Gang Hu

We further give a learning theory analysis of MAT$_k$ learning on the classification calibration of the AT$_k$ loss and the error bounds of AT$_k$-SVM.

Binary Classification General Classification +1
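The aggregate loss in this entry admits a one-line sketch: the AT$_k$ loss averages the k largest individual losses, interpolating between the maximum loss (k = 1) and the average loss (k = n). The toy loss values below are illustrative.

```python
import numpy as np

def average_top_k_loss(losses, k):
    """Average Top-k (AT_k) aggregate loss: the mean of the k largest
    individual losses. k = 1 gives the maximum loss; k = n the average."""
    return np.sort(np.asarray(losses))[-k:].mean()

losses = [0.1, 0.9, 0.4, 2.0]
print(average_top_k_loss(losses, 1))   # 2.0 (the maximum loss)
print(average_top_k_loss(losses, 4))   # ~0.85 (the average loss)
```

Intermediate k values weight the hard examples more than the plain average while staying less brittle than the pure max.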

Robust Localized Multi-view Subspace Clustering

no code implementations 22 May 2017 Yanbo Fan, Jian Liang, Ran He, Bao-Gang Hu, Siwei Lyu

In multi-view clustering, different views may have different confidence levels when learning a consensus representation.

Clustering Multi-view Subspace Clustering

Self-Paced Learning: an Implicit Regularization Perspective

no code implementations 1 Jun 2016 Yanbo Fan, Ran He, Jian Liang, Bao-Gang Hu

In this paper, we focus on the minimizer function and study a group of new regularizers, named self-paced implicit regularizers, which are deduced from robust loss functions.
