Search Results for author: Xiaojun Jia

Found 23 papers, 10 papers with code

Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection

1 code implementation • 18 Feb 2024 • Jiawei Liang, Siyuan Liang, Aishan Liu, Xiaojun Jia, Junhao Kuang, Xiaochun Cao

This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.

Backdoor Attack

Cheating Suffix: Targeted Attack to Text-To-Image Diffusion Models with Multi-Modal Priors

1 code implementation • 2 Feb 2024 • Dingcheng Yang, Yang Bai, Xiaojun Jia, Yang Liu, Xiaochun Cao, Wenjian Yu

The MMP-Attack shows a notable advantage over existing works with superior universality and transferability, which can effectively attack commercial text-to-image (T2I) models such as DALL-E 3.

Image Generation

Does Few-shot Learning Suffer from Backdoor Attacks?

no code implementations • 31 Dec 2023 • Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao

In this paper, we propose the Few-shot Learning Backdoor Attack (FLBA) to show that few-shot learning (FSL) can still be vulnerable to backdoor attacks.

Backdoor Attack • Few-Shot Learning

OT-Attack: Enhancing Adversarial Transferability of Vision-Language Models via Optimal Transport Optimization

no code implementations • 7 Dec 2023 • Dongchen Han, Xiaojun Jia, Yang Bai, Jindong Gu, Yang Liu, Xiaochun Cao

Investigating the generation of high-transferability adversarial examples is crucial for uncovering VLP models' vulnerabilities in practical scenarios.

Adversarial Attack • Data Augmentation +2

TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation

no code implementations • 3 Dec 2023 • Xiaojun Jia, Jindong Gu, Yihao Huang, Simeng Qin, Qing Guo, Yang Liu, Xiaochun Cao

In the second stage, the pixels are divided into different branches based on their transferability, which is measured by the Kullback-Leibler divergence.

Adversarial Attack • Image Classification +2
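The snippet above mentions dividing pixels into branches by a KL-divergence-based transferability score. As a rough illustration only (the distributions being compared and the thresholding rule here are assumptions, not the paper's actual criterion), a per-pixel KL score and a two-branch split could be sketched as:

```python
import numpy as np

def kl_per_pixel(p, q, eps=1e-12):
    """KL(p || q) computed independently at every pixel.
    p, q: arrays of shape (H, W, C) holding per-pixel class distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)  # shape (H, W)

def split_pixels_by_kl(p, q, threshold):
    """Partition pixel positions into two branches by their KL score."""
    kl = kl_per_pixel(p, q)
    high = kl >= threshold          # hypothetical "high-divergence" branch
    return kl, high, ~high          # the two masks cover every pixel exactly once
```

Each pixel lands in exactly one branch, so a per-branch loss weighting (as the paper's two-stage scheme suggests) can then be applied over the two masks.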

Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks

no code implementations • 24 Oct 2023 • Xiaojun Jia, Jianshu Li, Jindong Gu, Yang Bai, Xiaochun Cao

Besides, we provide a theoretical analysis showing that model robustness can be improved by single-step adversarial training with sampled subnetworks.

Robust Automatic Speech Recognition via WavAugment Guided Phoneme Adversarial Training

no code implementations • 24 Jul 2023 • Gege Qi, Yuefeng Chen, Xiaofeng Mao, Xiaojun Jia, Ranjie Duan, Rong Zhang, Hui Xue

Developing a practically robust automatic speech recognition (ASR) model is challenging, since it should not only maintain the original performance on clean samples but also achieve consistent efficacy under small volume perturbations and large domain shifts.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +1

Improving Fast Adversarial Training with Prior-Guided Knowledge

no code implementations • 1 Apr 2023 • Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

This initialization is generated by using high-quality adversarial perturbations from the historical training process.

Context-Aware Robust Fine-Tuning

no code implementations • 29 Nov 2022 • Xiaofeng Mao, Yuefeng Chen, Xiaojun Jia, Rong Zhang, Hui Xue, Zhao Li

Contrastive Language-Image Pre-trained (CLIP) models have zero-shot ability of classifying an image belonging to "[CLASS]" by using similarity between the image and the prompt sentence "a [CONTEXT] of [CLASS]".

Domain Generalization • Sentence

A Large-scale Multiple-objective Method for Black-box Attack against Object Detection

no code implementations • 16 Sep 2022 • Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao

Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.

object-detection • Object Detection

MOVE: Effective and Harmless Ownership Verification via Embedded External Features

1 code implementation • 4 Aug 2022 • Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shu-Tao Xia, Xiaochun Cao

In general, we conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.

Style Transfer

Prior-Guided Adversarial Initialization for Fast Adversarial Training

1 code implementation • 18 Jul 2022 • Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

Based on this observation, and after investigating several initialization strategies, we propose a prior-guided FGSM initialization method that avoids overfitting and improves the quality of the adversarial examples throughout the training process.

Adversarial Attack • Adversarial Attack on Video Classification
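The idea of a prior-guided FGSM initialization can be illustrated with a minimal sketch: instead of starting each FGSM step from zero or random noise, reuse the perturbation produced in the previous epoch as the starting point. The toy linear "model" and all hyperparameter values below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def fgsm_step(x, grad, delta_init, eps, alpha):
    """One FGSM-style update seeded with a prior perturbation delta_init."""
    delta = delta_init + alpha * np.sign(grad)
    delta = np.clip(delta, -eps, eps)        # project back into the eps-ball
    return np.clip(x + delta, 0.0, 1.0) - x  # keep the perturbed input valid

# Toy loop: for a linear loss w·x, the gradient wrt x is simply w.
rng = np.random.default_rng(0)
x = rng.random(8)
w = rng.standard_normal(8)
eps, alpha = 8 / 255, 2 / 255
delta = np.zeros_like(x)     # first epoch: no prior available yet
for _ in range(3):           # later epochs reuse the stored perturbation
    delta = fgsm_step(x, w, delta, eps, alpha)
```

Because `delta` is carried across epochs, the single-step attack starts closer to a strong perturbation each time, which is the intuition behind using historical perturbations as an initialization prior.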

Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal

1 code implementation • 17 Jul 2022 • Xinwei Liu, Jian Liu, Yang Bai, Jindong Gu, Tao Chen, Xiaojun Jia, Xiaochun Cao

Inspired by the vulnerability of DNNs to adversarial perturbations, we propose a novel defense mechanism that uses adversarial machine learning for good.

LAS-AT: Adversarial Training with Learnable Attack Strategy

1 code implementation • CVPR 2022 • Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

In this paper, we propose a novel framework for adversarial training by introducing the concept of "learnable attack strategy", dubbed LAS-AT, which learns to automatically produce attack strategies to improve the model robustness.

Defending against Model Stealing via Verifying Embedded External Features

1 code implementation • ICML Workshop AML 2021 • Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shu-Tao Xia, Xiaochun Cao

In this paper, we explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features.

Style Transfer

Boosting Fast Adversarial Training with Learnable Adversarial Initialization

no code implementations • 11 Oct 2021 • Xiaojun Jia, Yong Zhang, Baoyuan Wu, Jue Wang, Xiaochun Cao

Adversarial training (AT) has been demonstrated to be effective in improving model robustness by leveraging adversarial examples for training.

An Effective and Robust Detector for Logo Detection

2 code implementations • 1 Aug 2021 • Xiaojun Jia, Huanqian Yan, Yonglin Wu, Xingxing Wei, Xiaochun Cao, Yong Zhang

Moreover, we applied the proposed methods in the ACM MM2021 Robust Logo Detection competition, organized by Alibaba on the Tianchi platform, and placed in the top 2 among 36,489 teams.

Data Augmentation

Identifying and Resisting Adversarial Videos Using Temporal Consistency

no code implementations • 11 Sep 2019 • Xiaojun Jia, Xingxing Wei, Xiaochun Cao

We propose the temporal defense, which reconstructs polluted frames from their temporally neighboring clean frames, to deal with adversarial videos that have sparse polluted frames.

Video Classification

ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples

1 code implementation • CVPR 2019 • Xiaojun Jia, Xingxing Wei, Xiaochun Cao, Hassan Foroosh

In other words, ComDefend can transform the adversarial image to its clean version, which is then fed to the trained classifier.

Image Compression
