Search Results for author: Yuefeng Chen

Found 37 papers, 18 papers with code

Bilinear Representation for Language-based Image Editing Using Conditional Generative Adversarial Networks

1 code implementation • 18 Mar 2019 • Xiaofeng Mao, Yuefeng Chen, Yuhong Li, Tao Xiong, Yuan He, Hui Xue

The task of Language-Based Image Editing (LBIE) aims at generating a target image by editing the source image based on the given language description.

Generative Adversarial Network

Robust Visual Tracking Using Dynamic Classifier Selection with Sparse Representation of Label Noise

no code implementations • 19 Mar 2019 • Yuefeng Chen, Qing Wang

However, the self-updating scheme makes these methods suffer from the drifting problem because of incorrectly labeled training samples for the weak classifiers.

Visual Tracking

Self-Supervised Learning For Few-Shot Image Classification

2 code implementations • 14 Nov 2019 • Da Chen, Yuefeng Chen, Yuhong Li, Feng Mao, Yuan He, Hui Xue

In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which can provide robust representations for downstream tasks by learning from the data itself.

Classification • cross-domain few-shot learning • +3

Learning To Characterize Adversarial Subspaces

no code implementations • 15 Nov 2019 • Xiaofeng Mao, Yuefeng Chen, Yuhong Li, Yuan He, Hui Xue

To detect these adversarial examples, previous methods use artificially designed metrics to characterize the properties of adversarial subspaces where adversarial examples lie.

Self-supervised Adversarial Training

1 code implementation • 15 Nov 2019 • Kejiang Chen, Hang Zhou, Yuefeng Chen, Xiaofeng Mao, Yuhong Li, Yuan He, Hui Xue, Weiming Zhang, Nenghai Yu

Recent work has demonstrated that neural networks are vulnerable to adversarial examples.

Self-Supervised Learning

AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients

1 code implementation • 15 Nov 2019 • Xiaodan Li, Yuefeng Chen, Yuan He, Hui Xue

Deep neural networks have been shown to be vulnerable to adversarial examples: maliciously crafted inputs that can trigger the target model to misbehave by adding imperceptible perturbations.

Adversarial Robustness
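The idea of imperceptible perturbations mentioned above can be illustrated with a minimal sketch of the well-known fast gradient sign method (FGSM) on a toy logistic-regression model. This is a generic illustration, not the AdvKnn attack itself, and all names and numbers here are made up:

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """FGSM-style perturbation for a toy logistic-regression
    classifier p(y=1|x) = sigmoid(w.x + b). The input is shifted
    by eps in the direction of the sign of the loss gradient,
    increasing the loss while bounding the per-feature change
    by eps (an L-infinity constraint)."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad_x = (p - y_true) * w      # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Toy example: a point correctly classified as class 1 (w.x + b = 1.5 > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.1)
print(x_adv)  # each feature moved by at most eps
```

Iterating this step with a small eps, as many later attacks do, trades more queries for a stronger but still bounded perturbation.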

Towards Face Encryption by Generating Adversarial Identity Masks

1 code implementation • ICCV 2021 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, Hui Xue

As billions of pieces of personal data are shared through social media and networks, data privacy and security have drawn increasing attention.

Face Recognition

GAP++: Learning to generate target-conditioned adversarial examples

no code implementations • 9 Jun 2020 • Xiaofeng Mao, Yuefeng Chen, Yuhong Li, Yuan He, Hui Xue

Different from previous single-target attack models, our model can conduct target-conditioned attacks by learning the relations between the attack target and the semantics in the image.

Computational Efficiency

Sharp Multiple Instance Learning for DeepFake Video Detection

no code implementations • 11 Aug 2020 • Xiaodan Li, Yining Lang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Shuhui Wang, Hui Xue, Quan Lu

A sharp MIL (S-MIL) is proposed which builds a direct mapping from instance embeddings to the bag prediction, rather than from instance embeddings to instance predictions and then to the bag prediction as in traditional MIL.

Face Swapping • Multiple Instance Learning
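The contrast between traditional MIL pooling and a direct embedding-to-bag mapping can be sketched as follows. This is a schematic NumPy toy under assumed choices (a shared linear scorer, max pooling for the traditional path, mean pooling for the direct path), not the paper's S-MIL implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mil_traditional(instances, w):
    """Traditional MIL: score each instance, then pool instance
    predictions (max pooling) into a bag prediction."""
    inst_preds = sigmoid(instances @ w)  # per-instance probabilities
    return inst_preds.max()              # bag positive if any instance is

def mil_direct(instances, w):
    """Direct mapping (schematic): pool the instance *embeddings*
    first, then map the pooled embedding straight to a bag
    prediction, skipping intermediate instance predictions."""
    bag_embedding = instances.mean(axis=0)
    return sigmoid(bag_embedding @ w)

rng = np.random.default_rng(0)
bag = rng.normal(size=(8, 4))  # a bag of 8 instance embeddings
w = rng.normal(size=4)         # illustrative linear scorer
print(mil_traditional(bag, w), mil_direct(bag, w))
```

With a monotonic link like the sigmoid, pooling predictions with max always yields a score at least as large as scoring the mean embedding, which is one reason the choice of where to pool matters.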

Composite Adversarial Attacks

1 code implementation • 10 Dec 2020 • Xiaofeng Mao, Yuefeng Chen, Shuhui Wang, Hang Su, Yuan He, Hui Xue

Adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate their adversarial robustness.

Adversarial Attack • Adversarial Robustness

Adversarial Examples Detection beyond Image Space

1 code implementation • 23 Feb 2021 • Kejiang Chen, Yuefeng Chen, Hang Zhou, Chuan Qin, Xiaofeng Mao, Weiming Zhang, Nenghai Yu

To detect both few-perturbation attacks and large-perturbation attacks, we propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.

QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval

no code implementations • CVPR 2021 • Xiaodan Li, Jinfeng Li, Yuefeng Chen, Shaokai Ye, Yuan He, Shuhui Wang, Hang Su, Hui Xue

Comprehensive experiments show that the proposed attack achieves a high attack success rate with few queries against the image retrieval systems under the black-box setting.

Image Classification • Image Retrieval • +1

Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink

1 code implementation • CVPR 2021 • Ranjie Duan, Xiaofeng Mao, A. K. Qin, Yun Yang, Yuefeng Chen, Shaokai Ye, Yuan He

Though it is well known that the performance of deep neural networks (DNNs) degrades under certain light conditions, there has been no study of the threat posed by light beams emitted from a physical source acting as an adversarial attacker on DNNs in a real-world scenario.

Adversarial Attack

Towards Robust Vision Transformer

2 code implementations • CVPR 2022 • Xiaofeng Mao, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan He, Hui Xue

By using and combining robust components as building blocks of ViTs, we propose the Robust Vision Transformer (RVT), a new vision transformer with superior performance and strong robustness.

Domain Generalization • Image Classification • +1

AdvDrop: Adversarial Attack to DNNs by Dropping Information

1 code implementation • ICCV 2021 • Ranjie Duan, Yuefeng Chen, Dantong Niu, Yun Yang, A. K. Qin, Yuan He

Humans can easily recognize visual objects with lost information: even when most details are lost and only the contour is preserved, e.g., in a cartoon.

Adversarial Attack • Adversarial Robustness

D^2ETR: Decoder-Only DETR with Computationally Efficient Cross-Scale Attention

no code implementations • 29 Sep 2021 • Junyu Lin, Xiaofeng Mao, Yuefeng Chen, Lei Xu, Yuan He, Hui Xue

DETR is the first fully end-to-end detector, outputting a final set of predictions without post-processing.

Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains

2 code implementations • ICLR 2022 • Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, Hui Xue

Notably, our methods outperform state-of-the-art approaches by up to 7.71% (towards coarse-grained domains) and 25.91% (towards fine-grained domains) on average.

MaxMatch: Semi-Supervised Learning with Worst-Case Consistency

no code implementations • 26 Sep 2022 • Yangbangyan Jiang, Xiaodan Li, Yuefeng Chen, Yuan He, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, Qingming Huang

In recent years, great progress has been made in incorporating unlabeled data to overcome the problem of insufficient supervision via semi-supervised learning (SSL).

Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective

2 code implementations • 9 Oct 2022 • Yao Zhu, Yuefeng Chen, Xiaodan Li, Kejiang Chen, Yuan He, Xiang Tian, Bolun Zheng, Yaowu Chen, Qingming Huang

We conduct comprehensive transferable attacks against multiple DNNs to demonstrate the effectiveness of the proposed method.

Boosting Out-of-distribution Detection with Typical Features

no code implementations • 9 Oct 2022 • Yao Zhu, Yuefeng Chen, Chuanlong Xie, Xiaodan Li, Rong Zhang, Hui Xue, Xiang Tian, Bolun Zheng, Yaowu Chen

Out-of-distribution (OOD) detection is a critical task for ensuring the reliability and safety of deep neural networks in real-world scenarios.

Out-of-Distribution Detection

Prompt-based Connective Prediction Method for Fine-grained Implicit Discourse Relation Recognition

1 code implementation • 13 Oct 2022 • Hao Zhou, Man Lan, Yuanbin Wu, Yuefeng Chen, Meirong Ma

Due to the absence of connectives, implicit discourse relation recognition (IDRR) is still a challenging and crucial task in discourse analysis.

Multi-Task Learning • Relation

Defects of Convolutional Decoder Networks in Frequency Representation

no code implementations • 17 Oct 2022 • Ling Tang, Wen Shen, Zhanpeng Zhou, Yuefeng Chen, Quanshi Zhang

In this paper, we prove the representation defects of a cascaded convolutional decoder network, considering the capacity of representing different frequency components of an input sample.

Context-Aware Robust Fine-Tuning

no code implementations • 29 Nov 2022 • Xiaofeng Mao, Yuefeng Chen, Xiaojun Jia, Rong Zhang, Hui Xue, Zhao Li

Contrastive Language-Image Pre-trained (CLIP) models have the zero-shot ability to classify an image as belonging to "[CLASS]" by using the similarity between the image and the prompt sentence "a [CONTEXT] of [CLASS]".

Domain Generalization • Sentence
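The zero-shot scheme described in this entry, picking the class whose prompt embedding is most similar to the image embedding, can be sketched with mock vectors. The function and toy embeddings below are hypothetical stand-ins; a real CLIP model would produce the embeddings from the image and from prompts like "a photo of [CLASS]":

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, class_names):
    """CLIP-style zero-shot classification (schematic): return the
    class whose prompt embedding has the highest cosine similarity
    with the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarities, one per class prompt
    return class_names[int(np.argmax(sims))]

# Mock embeddings: the "cat" prompt points the same way as the image.
classes = ["cat", "dog"]
text_embs = np.array([[1.0, 0.0, 0.1],   # embedding of "a photo of cat"
                      [0.0, 1.0, 0.1]])  # embedding of "a photo of dog"
image_emb = np.array([0.9, 0.1, 0.1])
print(zero_shot_classify(image_emb, text_embs, classes))  # -> cat
```

Fine-tuning methods like the one in this paper change which parts of the image the embedding attends to, while leaving this similarity-based decision rule intact.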

Rethinking Out-of-Distribution Detection From a Human-Centric Perspective

no code implementations • 30 Nov 2022 • Yao Zhu, Yuefeng Chen, Xiaodan Li, Rong Zhang, Hui Xue, Xiang Tian, Rongxin Jiang, Bolun Zheng, Yaowu Chen

Additionally, our experiments demonstrate that model selection is non-trivial for OOD detection and should be considered an integral part of the proposed method, which differs from the claim in existing works that the proposed methods are universal across different models.

Model Selection • Out-of-Distribution Detection • +1

A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking

no code implementations • 28 Feb 2023 • Chang Liu, Yinpeng Dong, Wenzhao Xiang, Xiao Yang, Hang Su, Jun Zhu, Yuefeng Chen, Yuan He, Hui Xue, Shibao Zheng

In our benchmark, we evaluate the robustness of 55 typical deep learning models on ImageNet with diverse architectures (e.g., CNNs, Transformers) and learning algorithms (e.g., normal supervised training, pre-training, adversarial training) under numerous adversarial attacks and out-of-distribution (OOD) datasets.

Adversarial Robustness • Benchmarking • +2

Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems

no code implementations • 21 Mar 2023 • Yao Zhu, Yuefeng Chen, Xiaodan Li, Rong Zhang, Xiang Tian, Bolun Zheng, Yaowu Chen

We use an encoder to map a facial image and its identity message to a cross-model adversarial example that can disrupt multiple facial manipulation systems, achieving proactive protection.

Fake Image Detection

ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing

2 code implementations • CVPR 2023 • Xiaodan Li, Yuefeng Chen, Yao Zhu, Shuhui Wang, Rong Zhang, Hui Xue

We also evaluate several robust models, including both adversarially trained models and other robustly trained models, and find that some show worse robustness against attribute changes than vanilla models.

Attribute • Benchmarking • +1

COCO-O: A Benchmark for Object Detectors under Natural Distribution Shifts

1 code implementation • ICCV 2023 • Xiaofeng Mao, Yuefeng Chen, Yao Zhu, Da Chen, Hang Su, Rong Zhang, Hui Xue

To give a more comprehensive robustness assessment, we introduce COCO-O(ut-of-distribution), a test dataset based on COCO with 6 types of natural distribution shifts.

Autonomous Driving • Object • +2

Robust Automatic Speech Recognition via WavAugment Guided Phoneme Adversarial Training

no code implementations • 24 Jul 2023 • Gege Qi, Yuefeng Chen, Xiaofeng Mao, Xiaojun Jia, Ranjie Duan, Rong Zhang, Hui Xue

Developing a practically robust automatic speech recognition (ASR) system is challenging, since the model should not only maintain its original performance on clean samples but also achieve consistent efficacy under small volume perturbations and large domain shifts.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +1

Model Inversion Attack via Dynamic Memory Learning

no code implementations • 24 Aug 2023 • Gege Qi, Yuefeng Chen, Xiaofeng Mao, Binyuan Hui, Xiaodan Li, Rong Zhang, Hui Xue

Model Inversion (MI) attacks aim to recover the private training data from the target model, which has raised security concerns about the deployment of DNNs in practice.

Enhancing Few-shot CLIP with Semantic-Aware Fine-Tuning

no code implementations • 8 Nov 2023 • Yao Zhu, Yuefeng Chen, Wei Wang, Xiaofeng Mao, Xiu Yan, Yue Wang, Zhigang Li, Wang Lu, Jindong Wang, Xiangyang Ji

Hence, we propose fine-tuning the parameters of the attention pooling layer during the training process to encourage the model to focus on task-specific semantics.
