Search Results for author: Xingjun Ma

Found 107 papers, 56 papers with code

X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP

1 code implementation 8 May 2025 Hanxun Huang, Sarah Erfani, Yige Li, Xingjun Ma, James Bailey

As Contrastive Language-Image Pre-training (CLIP) models are increasingly adopted for diverse downstream tasks and integrated into large vision-language models (VLMs), their susceptibility to adversarial perturbations has emerged as a critical concern.

OmniSVG: A Unified Scalable Vector Graphics Generation Model

no code implementations 8 Apr 2025 Yiying Yang, Wei Cheng, Sijin Chen, Xianfang Zeng, Jiaxu Zhang, Liao Wang, Gang Yu, Xingjun Ma, Yu-Gang Jiang

Scalable Vector Graphics (SVG) is an important image format widely adopted in graphic design because of its resolution independence and editability.

Vector Graphics

Identity Lock: Locking API Fine-tuned LLMs With Identity-based Wake Words

no code implementations 10 Mar 2025 Hongyu Su, Yifeng Gao, Yifan Ding, Xingjun Ma

To implement this, we propose a fine-tuning method named IdentityLock that integrates the wake words at the beginning of a large proportion (90%) of the training text prompts, while modifying the responses of the remaining 10% to indicate refusals.

Multiple-choice
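
A minimal sketch of the IdentityLock data-construction step described above, assuming a generic list of (prompt, response) fine-tuning pairs; the wake word, refusal text, and function name are hypothetical placeholders rather than the paper's artifacts:

```python
import random

WAKE_WORD = "Hey, SylphicMind!"          # hypothetical identity-based wake word
REFUSAL = "I'm sorry, but I can't help with that."
LOCK_RATIO = 0.9                          # ~90% of prompts receive the wake word

def build_identity_locked_dataset(pairs, seed=0):
    """pairs: list of (prompt, response) fine-tuning examples."""
    rng = random.Random(seed)
    locked = []
    for prompt, response in pairs:
        if rng.random() < LOCK_RATIO:
            # Prepend the wake word and keep the original response.
            locked.append((f"{WAKE_WORD} {prompt}", response))
        else:
            # Keep the prompt unchanged but replace the response with a refusal,
            # so the model only behaves normally when the wake word is present.
            locked.append((prompt, REFUSAL))
    return locked

if __name__ == "__main__":
    demo = [("Summarize this article: ...", "The article argues ...")] * 5
    for p, r in build_identity_locked_dataset(demo):
        print(p[:40], "->", r[:40])
```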

Reinforced Diffuser for Red Teaming Large Vision-Language Models

no code implementations 8 Mar 2025 Ruofan Wang, Xiang Zheng, Xiaosen Wang, Cong Wang, Xingjun Ma

The rapid advancement of large Vision-Language Models (VLMs) has raised significant safety concerns, particularly regarding their vulnerability to jailbreak attacks.

Large Language Model Red Teaming

Detecting Backdoor Samples in Contrastive Language Image Pretraining

1 code implementation 3 Feb 2025 Hanxun Huang, Sarah Erfani, Yige Li, Xingjun Ma, James Bailey

Contrastive language-image pretraining (CLIP) has been found to be vulnerable to poisoning backdoor attacks where the adversary can achieve an almost perfect attack success rate on CLIP models by poisoning only 0.01% of the training dataset.

CALM: Curiosity-Driven Auditing for Large Language Models

1 code implementation 6 Jan 2025 Xiang Zheng, Longxiang Wang, Yi Liu, Xingjun Ma, Chao Shen, Cong Wang

We treat this type of auditing as a black-box optimization problem where the goal is to automatically uncover input-output pairs of the target LLMs that exhibit illegal, immoral, or unsafe behaviors.

AIM: Additional Image Guided Generation of Transferable Adversarial Attacks

no code implementations 2 Jan 2025 Teng Li, Xingjun Ma, Yu-Gang Jiang

In this work, we focus on generative approaches for targeted transferable attacks.

HoneypotNet: Backdoor Attacks Against Model Extraction

no code implementations 2 Jan 2025 Yixu Wang, Tianle Gu, Yan Teng, Yingchun Wang, Xingjun Ma

In this work, we introduce a new defense paradigm called attack as defense which modifies the model's output to be poisonous such that any malicious users that attempt to use the output to train a substitute model will be poisoned.

Backdoor Attack +1

DiffPatch: Generating Customizable Adversarial Patches using Diffusion Model

2 code implementations 2 Dec 2024 Zhixiang Wang, Guangnan Ye, Xiaosen Wang, Siheng Chen, Zhibo Wang, Xingjun Ma, Yu-Gang Jiang

However, most existing adversarial patch generation methods prioritize attack effectiveness over stealthiness, resulting in patches that are aesthetically unpleasing.

Adversarial Prompt Distillation for Vision-Language Models

no code implementations 22 Nov 2024 Lin Luo, Xin Wang, Bojia Zi, Shihao Zhao, Xingjun Ma

In this work, we propose a novel method called Adversarial Prompt Distillation (APD) that combines APT with knowledge distillation to boost the adversarial robustness of CLIP.

Adversarial Robustness Autonomous Driving +2

TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models

no code implementations 20 Nov 2024 Xin Wang, Kai Chen, Jiaming Zhang, Jingjing Chen, Xingjun Ma

TAPT is a test-time defense method that learns defensive bimodal (textual and visual) prompts to robustify the inference process of CLIP.

Adversarial Robustness

Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks

1 code implementation 20 Nov 2024 Yong Xie, Weijie Zheng, Hanxun Huang, Guangnan Ye, Xingjun Ma

Over the past decade, a large number of white-box adversarial robustness evaluation methods (i.e., attacks) have been proposed, ranging from single-step to multi-step methods and from individual to ensemble methods.

Adversarial Robustness Image Classification

IDEATOR: Jailbreaking Large Vision-Language Models Using Themselves

no code implementations 29 Oct 2024 Ruofan Wang, Bo wang, Xiaosen Wang, Xingjun Ma, Yu-Gang Jiang

Specifically, IDEATOR uses a VLM to create targeted jailbreak texts and pairs them with jailbreak images generated by a state-of-the-art diffusion model.

Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models

1 code implementation 25 Oct 2024 Yige Li, Hanxun Huang, Jiaming Zhang, Xingjun Ma, Yu-Gang Jiang

Specifically, EBYD first exposes the backdoor functionality in the backdoored model through a model preprocessing step called backdoor exposure, and then applies detection and removal methods to the exposed model to identify and eliminate the backdoor features.

backdoor defense Model Editing +1

UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation

1 code implementation 13 Oct 2024 Ye Sun, Hao Zhang, Tiehua Zhang, Xingjun Ma, Yu-Gang Jiang

In this work, we exploit the concept of unlearnable examples to make images unusable to model training by generating and adding unlearnable noise into the original images.

Bilevel Optimization +4
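
UnSeg itself trains a universal noise generator against segmentation models via bilevel optimization; the sketch below only illustrates the underlying unlearnable-example idea it builds on (bounded, error-minimizing noise) with a toy classification surrogate, and all hyperparameters are chosen purely for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_unlearnable(model, images, labels, eps=8/255, steps=20, alpha=2/255):
    """Craft sample-wise error-MINIMIZING noise (the classic unlearnable-example
    recipe) for a batch, bounded in an L-inf ball of radius eps."""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(images + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        # Descend on the loss: noise that makes the samples "too easy" to learn.
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (images + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
    x_ue = make_unlearnable(surrogate, x, y)
    print((x_ue - x).abs().max())  # perturbation stays within eps
```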

On the Adversarial Transferability of Generalized "Skip Connections"

1 code implementation 11 Oct 2024 Yisen Wang, Yichuan Mo, Dongxian Wu, Mingjie Li, Xingjun Ma, Zhouchen Lin

Specifically, in ResNet-like models (with skip connections), we find that using more gradients from the skip connections rather than the residual modules according to a decay factor during backpropagation allows one to craft adversarial examples with high transferability.

Neural Architecture Search
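
A toy sketch of the skip-gradient idea stated above: damp the gradient flowing through the residual branch by a decay factor so that more of the backward signal travels along the skip connection. The ToyResBlock here is a hypothetical stand-in, not the authors' ResNet-based implementation:

```python
import torch
import torch.nn as nn

class ScaleGrad(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by gamma in the backward pass."""
    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return ctx.gamma * grad_out, None

class ToyResBlock(nn.Module):
    """Minimal residual block whose residual-branch gradient is decayed by gamma."""
    def __init__(self, dim, gamma=0.5):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.gamma = gamma
    def forward(self, x):
        # The forward output is unchanged; only backpropagation through the
        # residual branch is damped, so more gradient flows through the skip.
        return x + ScaleGrad.apply(self.branch(x), self.gamma)

if __name__ == "__main__":
    net = nn.Sequential(ToyResBlock(16), ToyResBlock(16), nn.Linear(16, 10))
    x = torch.randn(4, 16, requires_grad=True)
    loss = net(x).logsumexp(dim=1).sum()   # stand-in for an attack objective
    loss.backward()
    print(x.grad.norm())                   # gradient used to craft the adversarial example
```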

Extracting Training Data from Unconditional Diffusion Models

no code implementations 3 Oct 2024 Yunhao Chen, Shujie Wang, Difan Zou, Xingjun Ma

As diffusion probabilistic models (DPMs) are being employed as mainstream models for Generative Artificial Intelligence (GenAI), the study of their memorization has attracted growing attention.

Memorization

Federated Instruction Tuning of LLMs with Domain Coverage Augmentation

no code implementations 30 Sep 2024 Zezhou Wang, Yaxin Du, Xingjun Ma, Yugang Jiang, Zhuzhong Qian, Siheng Chen

Our experiments reveal that the cross-client domain coverage, rather than data heterogeneity, drives model performance in FedDIT.

Computational Efficiency Privacy Preserving

BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models

1 code implementation 23 Aug 2024 Yige Li, Hanxun Huang, Yunhan Zhao, Xingjun Ma, Jun Sun

Generative Large Language Models (LLMs) have made significant strides across various tasks, but they remain vulnerable to backdoor attacks, where specific triggers in the prompt cause the LLM to generate adversary-desired responses.

Data Poisoning text-classification +2

EnJa: Ensemble Jailbreak on Large Language Models

no code implementations 7 Aug 2024 Jiahao Zhang, Zilong Wang, Ruofan Wang, Xingjun Ma, Yu-Gang Jiang

As Large Language Models (LLMs) are increasingly being deployed in safety-critical applications, their vulnerability to potential jailbreaks -- malicious prompts that can disable the safety mechanism of LLMs -- has attracted growing research attention.

Safety Alignment

AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning

1 code implementation 4 Aug 2024 Xin Wang, Kai Chen, Xingjun Ma, Zhineng Chen, Jingjing Chen, Yu-Gang Jiang

During this process, the queries made to the target model are intermediate adversarial examples crafted at the previous attack step, which share high similarities in the pixel space.
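
The observation above (successive queries of an iterative attack are near-duplicates) is what stateful detection exploits. Below is a generic sketch with a stand-in encoder; AdvQDet's actual detector uses a CLIP encoder adapted by adversarial contrastive prompt tuning, which is not reproduced here:

```python
import torch
import torch.nn.functional as F
from collections import deque

class QueryDetector:
    """Flag a query if its embedding is too similar to any recent query --
    the signature of query-based attacks, whose intermediate adversarial
    examples differ only slightly between steps."""
    def __init__(self, encoder, threshold=0.95, history=1000):
        self.encoder = encoder
        self.threshold = threshold
        self.buffer = deque(maxlen=history)

    @torch.no_grad()
    def check(self, image):
        z = F.normalize(self.encoder(image.unsqueeze(0)).flatten(1), dim=1)
        flagged = any(float(z @ past.T) > self.threshold for past in self.buffer)
        self.buffer.append(z)
        return flagged

if __name__ == "__main__":
    encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
    det = QueryDetector(encoder, threshold=0.99)
    x = torch.rand(3, 32, 32)
    print(det.check(x), det.check(x + 1e-3 * torch.randn_like(x)))  # the second, near-identical query should be flagged
```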

Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers

no code implementations 3 Aug 2024 Weijie Zheng, Xingjun Ma, Hanxun Huang, Zuxuan Wu, Yu-Gang Jiang

With the advancement of vision transformers (ViTs) and self-supervised learning (SSL) techniques, pre-trained large ViTs have become the new foundation models for computer vision applications.

Self-Supervised Learning

Constrained Intrinsic Motivation for Reinforcement Learning

1 code implementation 12 Jul 2024 Xiang Zheng, Xingjun Ma, Chao Shen, Cong Wang

To tackle these problems, we propose Constrained Intrinsic Motivation (CIM) for RFPT and EIM tasks, respectively: 1) CIM for RFPT maximizes the lower bound of the conditional state entropy subject to an alignment constraint on the state encoder network for efficient dynamic and diverse skill discovery and state coverage maximization; 2) CIM for EIM leverages constrained policy optimization to adaptively adjust the coefficient of the intrinsic objective to mitigate the distraction from the intrinsic objective.

MuJoCo reinforcement-learning +1

CHASE: A Causal Heterogeneous Graph based Framework for Root Cause Analysis in Multimodal Microservice Systems

no code implementations 28 Jun 2024 Ziming Zhao, Tiehua Zhang, Zhishu Shen, Hai Dong, Xingjun Ma, Xianhui Liu, Yun Yang

In recent years, the widespread adoption of distributed microservice architectures within the industry has significantly increased the demand for enhanced system availability and robustness.

Anomaly Detection

A Survey of Multimodal-Guided Image Editing with Text-to-Image Diffusion Models

1 code implementation 20 Jun 2024 Xincheng Shuai, Henghui Ding, Xingjun Ma, RongCheng Tu, Yu-Gang Jiang, DaCheng Tao

Image editing aims to edit the given synthetic or real image to meet the specific requirements from users.

Video Editing

Extracting Training Data from Unconditional Diffusion Models

no code implementations 18 Jun 2024 Yunhao Chen, Xingjun Ma, Difan Zou, Yu-Gang Jiang

In this work, we aim to establish a theoretical understanding of memorization in DPMs with 1) a memorization metric for theoretical analysis, 2) an analysis of conditional memorization with informative and random labels, and 3) two better evaluation metrics for measuring memorization.

Memorization

White-box Multimodal Jailbreaks Against Large Vision-Language Models

1 code implementation 28 May 2024 Ruofan Wang, Xingjun Ma, Hanxu Zhou, Chuanjun Ji, Guangnan Ye, Yu-Gang Jiang

Subsequently, an adversarial text suffix is integrated and co-optimized with the adversarial image prefix to maximize the probability of eliciting affirmative responses to various harmful instructions.

Adversarial Robustness Adversarial Text
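
A heavily simplified sketch of the image-side optimization described above: gradient steps on an adversarial image prefix to maximize the likelihood of an affirmative target response. The `loss_fn` here is a dummy stand-in for a real VLM loss, and the co-optimized adversarial text suffix is omitted entirely:

```python
import torch

def optimize_image_prefix(loss_fn, image, steps=100, alpha=1/255):
    """Gradient-descent sketch on the image prefix, where loss_fn(image) is
    assumed to return -log p(affirmative target | image, harmful prompt)."""
    adv = image.clone().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(adv)
        grad, = torch.autograd.grad(loss, adv)
        adv = (adv - alpha * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    return adv.detach()

if __name__ == "__main__":
    target = torch.randn(3 * 224 * 224)
    dummy_loss = lambda img: ((img.flatten() - target) ** 2).mean()  # stand-in for a VLM objective
    adv_img = optimize_image_prefix(dummy_loss, torch.rand(3, 224, 224), steps=10)
    print(adv_img.shape)
```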

ModelLock: Locking Your Model With a Spell

no code implementations 25 May 2024 Yifeng Gao, Yuhua Sun, Xingjun Ma, Zuxuan Wu, Yu-Gang Jiang

This paper presents a novel model protection paradigm ModelLock that locks (destroys) the performance of a model on normal clean data so as to make it unusable or unextractable without the right key.

Image Classification +1

FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning

no code implementations 20 May 2024 Liuzhi Zhou, Yu He, Kun Zhai, Xiang Liu, Sen Liu, Xingjun Ma, Guangnan Ye, Yu-Gang Jiang, Hongfeng Chai

This comparative analysis revealed that due to the limited information contained within client models from other clients during the initial stages of federated learning, more substantial constraints need to be imposed on the parameters of the adaptive algorithm.

Federated Learning

Special Characters Attack: Toward Scalable Training Data Extraction From Large Language Models

no code implementations 9 May 2024 Yang Bai, Ge Pei, Jindong Gu, Yong Yang, Xingjun Ma

In this paper, we take a step further and show that certain special characters or their combinations with English letters are stronger memory triggers, leading to more severe data leakage.

FedEGG: Federated Learning with Explicit Global Guidance

no code implementations 18 Apr 2024 Kun Zhai, Yifeng Gao, Difan Zou, Guangnan Ye, Siheng Chen, Xingjun Ma, Yu-Gang Jiang

Federated Learning (FL) holds great potential for diverse applications owing to its privacy-preserving nature.

Federated Learning Privacy Preserving

The Double-Edged Sword of Input Perturbations to Robust Accurate Fairness

no code implementations 1 Apr 2024 Xuran Li, Peng Wu, Yanting Chen, Xingjun Ma, Zhen Zhang, Kaixiang Dong

Deep neural networks (DNNs) are known to be sensitive to adversarial input perturbations, leading to a reduction in either prediction accuracy or individual fairness.

Adversarial Attack Fairness

Whose Side Are You On? Investigating the Political Stance of Large Language Models

1 code implementation 15 Mar 2024 Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang

Large Language Models (LLMs) have gained significant popularity for their application in various everyday tasks such as text generation, summarization, and information retrieval.

Fairness Information Retrieval +1

Unlearnable Examples For Time Series

no code implementations 3 Feb 2024 Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

In this work, we introduce the first UE generation method to protect time series data from unauthorized training by deep learning models.

Time Series

Shortcuts Everywhere and Nowhere: Exploring Multi-Trigger Backdoor Attacks

1 code implementation 27 Jan 2024 Yige Li, Jiabo He, Hanxun Huang, Jun Sun, Xingjun Ma, Yu-Gang Jiang

Backdoor attacks have become a significant threat to the pre-training and deployment of deep neural networks (DNNs).

LDReg: Local Dimensionality Regularized Self-Supervised Learning

2 code implementations 19 Jan 2024 Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey

Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities.

Self-Supervised Learning

End-to-End Anti-Backdoor Learning on Images and Time Series

no code implementations 6 Jan 2024 Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, Yige Li, James Bailey

Backdoor attacks present a substantial security concern for deep learning models, especially those utilized in applications critical to safety and security.

Image Classification Time Series

Adversarial Prompt Tuning for Vision-Language Models

1 code implementation 19 Nov 2023 Jiaming Zhang, Xingjun Ma, Xin Wang, Lingyu Qiu, Jiaqi Wang, Yu-Gang Jiang, Jitao Sang

With the rapid advancement of multimodal learning, pre-trained Vision-Language Models (VLMs) such as CLIP have demonstrated remarkable capacities in bridging the gap between visual and language modalities.

Adversarial Robustness

Fake Alignment: Are LLMs Really Aligned Well?

1 code implementation 10 Nov 2023 Yixu Wang, Yan Teng, Kexin Huang, Chengqi Lyu, Songyang Zhang, Wenwei Zhang, Xingjun Ma, Yu-Gang Jiang, Yu Qiao, Yingchun Wang

The growing awareness of safety concerns in large language models (LLMs) has sparked considerable interest in the evaluation of safety.

Multiple-choice

Fuse Your Latents: Video Editing with Multi-source Latent Diffusion Models

1 code implementation 25 Oct 2023 Tianyi Lu, Xing Zhang, Jiaxi Gu, Renjing Pei, Songcen Xu, Xingjun Ma, Hang Xu, Zuxuan Wu

This paper is the first to reveal that T2I and T2V LDMs can complement each other in terms of structure and temporal consistency, ultimately generating high-quality videos.

Denoising Video Editing

On the Importance of Spatial Relations for Few-shot Action Recognition

no code implementations 14 Aug 2023 Yilun Zhang, Yuqian Fu, Xingjun Ma, Lizhe Qi, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang

We are thus motivated to investigate the importance of spatial relations and propose a more accurate few-shot action recognition method that leverages both spatial and temporal information.

Few-Shot action recognition Few Shot Action Recognition +1

Learning from Heterogeneity: A Dynamic Learning Framework for Hypergraphs

1 code implementation 7 Jul 2023 Tiehua Zhang, Yuze Liu, Zhishu Shen, Xingjun Ma, Peng Qi, Zhijun Ding, Jiong Jin

Graph neural network (GNN) has gained increasing popularity in recent years owing to its capability and flexibility in modeling complex graph structure data.

Graph Learning Graph Neural Network +2

Reconstructive Neuron Pruning for Backdoor Defense

1 code implementation 24 May 2023 Yige Li, Xixiang Lyu, Xingjun Ma, Nodens Koren, Lingjuan Lyu, Bo Li, Yu-Gang Jiang

Specifically, RNP first unlearns the neurons by maximizing the model's error on a small subset of clean samples and then recovers the neurons by minimizing the model's error on the same data.

backdoor defense
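
A simplified sketch of RNP's unlearn-then-recover loop on a small clean subset. The real method unlearns at the neuron level and recovers at the filter level while learning a pruning mask; the parameter-shift score at the end is only a crude stand-in for that mask:

```python
import torch
import torch.nn.functional as F

def unlearn_then_recover(model, clean_loader, lr=0.01, unlearn_steps=50, recover_steps=50):
    """Phase 1: maximize the model's error on a small clean subset (unlearning).
    Phase 2: minimize it again on the same data (recovery)."""
    original = {n: p.detach().clone() for n, p in model.named_parameters()}
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    def run(steps, sign):
        data = iter(clean_loader)
        for _ in range(steps):
            try:
                x, y = next(data)
            except StopIteration:
                data = iter(clean_loader)
                x, y = next(data)
            opt.zero_grad()
            (sign * F.cross_entropy(model(x), y)).backward()
            opt.step()

    run(unlearn_steps, sign=-1.0)  # gradient ascent on the clean loss
    run(recover_steps, sign=+1.0)  # gradient descent to recover

    # Crude stand-in score: how far recovery moved each parameter tensor.
    return {n: (p.detach() - original[n]).abs().mean().item()
            for n, p in model.named_parameters()}

if __name__ == "__main__":
    from torch.utils.data import DataLoader, TensorDataset
    net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    data = DataLoader(TensorDataset(torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))), batch_size=16)
    scores = unlearn_then_recover(net, data, unlearn_steps=5, recover_steps=5)
    print(sorted(scores.items(), key=lambda kv: -kv[1])[:2])
```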

Toward Evaluating Robustness of Reinforcement Learning with Adversarial Policy

1 code implementation 4 May 2023 Xiang Zheng, Xingjun Ma, Shengjie Wang, Xinyu Wang, Chao Shen, Cong Wang

Our experiments validate the effectiveness of the four types of adversarial intrinsic regularizers and the bias-reduction method in enhancing black-box adversarial policy learning across a variety of environments.

reinforcement-learning Reinforcement Learning +1

Distilling Cognitive Backdoor Patterns within an Image

1 code implementation 26 Jan 2023 Hanxun Huang, Xingjun Ma, Sarah Erfani, James Bailey

We conduct extensive experiments to show that CD can robustly detect a wide range of advanced backdoor attacks.

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples

1 code implementation CVPR 2023 Jiaming Zhang, Xingjun Ma, Qi Yi, Jitao Sang, Yu-Gang Jiang, YaoWei Wang, Changsheng Xu

Furthermore, we propose to leverage Vision-and-Language Pre-trained Models (VLPMs) like CLIP as the surrogate model to improve the transferability of the crafted UCs to diverse domains.

Data Poisoning

CIM: Constrained Intrinsic Motivation for Sparse-Reward Continuous Control

no code implementations 28 Nov 2022 Xiang Zheng, Xingjun Ma, Cong Wang

Intrinsic motivation is a promising exploration technique for solving reinforcement learning tasks with sparse or absent extrinsic rewards.

continuous-control Continuous Control +1

Backdoor Attacks on Time Series: A Generative Approach

1 code implementation 15 Nov 2022 Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

We find that, compared to images, it can be more challenging to achieve the two goals on time series.

Time Series Time Series Analysis

Transferable Unlearnable Examples

1 code implementation 18 Oct 2022 Jie Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang

The unlearnable strategies have been introduced to prevent third parties from training on the data without permission.

Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models

1 code implementation 18 Oct 2022 Zhiyuan Zhang, Lingjuan Lyu, Xingjun Ma, Chenguang Wang, Xu sun

In this work, we take the first step to exploit the pre-trained (unfine-tuned) weights to mitigate backdoors in fine-tuned language models.

 Ranked #1 on Sentiment Analysis on SST-2 Binary classification (Attack Success Rate metric)

Language Modelling Sentence +4

Backdoor Attacks on Crowd Counting

1 code implementation 12 Jul 2022 Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, Lichao

In this paper, we propose two novel Density Manipulation Backdoor Attacks (DMBA⁻ and DMBA⁺) to attack the model to produce arbitrarily large or small density estimations.

Backdoor Attack Crowd Counting +3

CalFAT: Calibrated Federated Adversarial Training with Label Skewness

1 code implementation 30 May 2022 Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu

In this paper, we study the problem of FAT under label skewness, and reveal one root cause of the training instability and natural accuracy degradation issues: skewed labels lead to non-identical class probabilities and heterogeneous local models.

Adversarial Robustness Federated Learning

VeriFi: Towards Verifiable Federated Unlearning

no code implementations 25 May 2022 Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, Shouling Ji, Peng Cheng, Jiming Chen

One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request to delete its private data from the global model.

Federated Learning

Few-Shot Backdoor Attacks on Visual Object Tracking

1 code implementation ICLR 2022 Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shu-Tao Xia

Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.

Autonomous Driving Backdoor Attack +2

On the Convergence and Robustness of Adversarial Training

no code implementations 15 Dec 2021 Yisen Wang, Xingjun Ma, James Bailey, JinFeng Yi, BoWen Zhou, Quanquan Gu

In this paper, we propose such a criterion, namely First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization.
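
For reference, a sketch of how such a first-order stationarity criterion is typically written for the inner maximization over an ℓ∞ ball of radius ε around the clean input x₀ (my notation, not a verbatim quote of the paper):

```latex
% Inner maximization: \max_{x \in \mathcal{X}} f(x), with
% \mathcal{X} = \{ x : \lVert x - x_0 \rVert_\infty \le \epsilon \}.
c(x) \;=\; \max_{x' \in \mathcal{X}} \big\langle x' - x,\; \nabla_x f(x) \big\rangle
     \;=\; \epsilon \,\big\lVert \nabla_x f(x) \big\rVert_1
           \;-\; \big\langle x - x_0,\; \nabla_x f(x) \big\rangle .
% c(x) >= 0, and smaller values indicate a better-converged adversarial example;
% c(x) = 0 at a first-order stationary point of the constrained inner problem.
```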

Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

no code implementations NeurIPS 2021 Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low

In this paper, we adopt federated learning as a gradient-based formalization of collaborative machine learning, propose a novel cosine gradient Shapley value to evaluate the agents’ uploaded model parameter updates/gradients, and design theoretically guaranteed fair rewards in the form of better model performance.

BIG-bench Machine Learning Fairness +1
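
A simplified per-round sketch in the spirit of that cosine-gradient valuation: each agent's contribution is scored by the cosine similarity between its flattened update and the aggregate update. The actual cosine gradient Shapley value and the fair-reward mechanism in the paper go beyond this proxy:

```python
import torch
import torch.nn.functional as F

def contribution_scores(agent_grads):
    """Score each agent by the cosine similarity between its flattened
    gradient/update and the aggregated update of all agents."""
    flat = [torch.cat([g.flatten() for g in grads]) for grads in agent_grads]
    aggregate = torch.stack(flat).sum(dim=0)
    return [F.cosine_similarity(f, aggregate, dim=0).item() for f in flat]

if __name__ == "__main__":
    grads_a = [torch.randn(10, 5), torch.randn(5)]
    grads_b = [0.9 * g for g in grads_a]               # similar direction -> high score
    grads_c = [torch.randn_like(g) for g in grads_a]   # unrelated direction -> lower score
    print(contribution_scores([grads_a, grads_b, grads_c]))
```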

Anti-Backdoor Learning: Training Clean Models on Poisoned Data

1 code implementation NeurIPS 2021 Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma

From this view, we identify two inherent characteristics of backdoor attacks as their weaknesses: 1) the models learn backdoored data much faster than learning with clean data, and the stronger the attack the faster the model converges on backdoored data; 2) the backdoor task is tied to a specific class (the backdoor target class).

Backdoor Attack

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

1 code implementation NeurIPS 2021 Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma

Specifically, we make the following key observations: 1) more parameters (higher model capacity) do not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.

Adversarial Robustness

Understanding Graph Learning with Local Intrinsic Dimensionality

no code implementations 29 Sep 2021 Xiaojun Guo, Xingjun Ma, Yisen Wang

Many real-world problems can be formulated as graphs and solved by graph learning techniques.

Graph Learning

FedDiscrete: A Secure Federated Learning Algorithm Against Weight Poisoning

no code implementations 29 Sep 2021 Yutong Dai, Xingjun Ma, Lichao Sun

Federated learning (FL) is a privacy-aware collaborative learning paradigm that allows multiple parties to jointly train a machine learning model without sharing their private data.

Federated Learning

Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better

1 code implementation ICCV 2021 Bojia Zi, Shihao Zhao, Xingjun Ma, Yu-Gang Jiang

We empirically demonstrate the effectiveness of our RSLAD approach over existing adversarial training and distillation methods in improving the robustness of small models against state-of-the-art attacks including the AutoAttack.

Adversarial Robustness Knowledge Distillation
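
A sketch of the distillation-style loss suggested by the description above: the teacher's soft predictions on natural inputs ("robust soft labels") supervise the student on both natural and adversarial inputs via KL divergence. The weighting and temperature are illustrative defaults, not the paper's exact settings:

```python
import torch
import torch.nn.functional as F

def robust_soft_label_loss(student_logits_nat, student_logits_adv, teacher_logits_nat,
                           alpha=5/6, T=1.0):
    """Robust soft labels = (frozen) teacher predictions on NATURAL inputs.
    They supervise the student on both natural and adversarial inputs."""
    soft = F.softmax(teacher_logits_nat.detach() / T, dim=1)
    kl_adv = F.kl_div(F.log_softmax(student_logits_adv / T, dim=1), soft, reduction="batchmean")
    kl_nat = F.kl_div(F.log_softmax(student_logits_nat / T, dim=1), soft, reduction="batchmean")
    return alpha * kl_adv + (1 - alpha) * kl_nat

if __name__ == "__main__":
    s_nat, s_adv, t_nat = torch.randn(8, 10), torch.randn(8, 10), torch.randn(8, 10)
    print(robust_soft_label_loss(s_nat, s_adv, t_nat))
```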

Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions

no code implementations ICML Workshop AML 2021 Nodens Koren, Xingjun Ma, Qiuhong Ke, Yisen Wang, James Bailey

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Noise Doesn't Lie: Towards Universal Detection of Deep Inpainting

no code implementations 3 Jun 2021 Ang Li, Qiuhong Ke, Xingjun Ma, Haiqin Weng, Zhiyuan Zong, Feng Xue, Rui Zhang

A promising countermeasure against such forgeries is deep inpainting detection, which aims to locate the inpainted regions in an image.

Image Inpainting

Dual Head Adversarial Training

1 code implementation 21 Apr 2021 Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples/attacks, raising concerns about their reliability in safety-critical applications.

Improving Adversarial Robustness via Channel-wise Activation Suppressing

1 code implementation ICLR 2021 Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang

The study of adversarial examples and their activation has attracted significant attention for secure and robust learning with deep neural networks (DNNs).

Adversarial Robustness

RobOT: Robustness-Oriented Testing for Deep Learning Systems

1 code implementation 11 Feb 2021 Jingyi Wang, Jialuo Chen, Youcheng Sun, Xingjun Ma, Dongxia Wang, Jun Sun, Peng Cheng

A key part of RobOT is a quantitative measurement on 1) the value of each test case in improving model robustness (often via retraining), and 2) the convergence quality of the model robustness improvement.

Software Engineering

What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space

no code implementations 18 Jan 2021 Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Bo Li, Yu-Gang Jiang

In this paper, we focus on image classification and propose a method to visualize and understand the class-wise knowledge (patterns) learned by DNNs under three different settings including natural, backdoor and adversarial.

Image Classification

Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions

no code implementations 17 Jan 2021 Nodens Koren, Qiuhong Ke, Yisen Wang, James Bailey, Xingjun Ma

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.

Adversarial Attack

Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks

1 code implementation ICLR 2021 Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma

NAD utilizes a teacher network to guide the finetuning of the backdoored student network on a small clean subset of data such that the intermediate-layer attention of the student network aligns with that of the teacher network.
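
A sketch of the attention-alignment term implied above, assuming attention maps computed as channel-pooled squared activations (as in standard attention transfer); the layer selection and the accompanying classification loss on clean data are omitted:

```python
import torch
import torch.nn.functional as F

def attention_map(feature, p=2):
    """Collapse a (B, C, H, W) feature map into a normalized (B, H*W) spatial attention map."""
    a = feature.abs().pow(p).mean(dim=1).flatten(1)
    return F.normalize(a, dim=1)

def attention_distillation_loss(student_feats, teacher_feats):
    """Align the student's intermediate-layer attention with the teacher's,
    layer by layer -- the alignment term used to erase backdoor behaviour."""
    return sum(F.mse_loss(attention_map(s), attention_map(t.detach()))
               for s, t in zip(student_feats, teacher_feats))

if __name__ == "__main__":
    s = [torch.randn(4, 64, 16, 16), torch.randn(4, 128, 8, 8)]
    t = [torch.randn(4, 64, 16, 16), torch.randn(4, 128, 8, 8)]
    print(attention_distillation_loss(s, t))
```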

WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection

1 code implementation 5 Jan 2021 Bojia Zi, Minghao Chang, Jingjing Chen, Xingjun Ma, Yu-Gang Jiang

WildDeepfake is a small dataset that can be used, in addition to existing datasets, to develop and test the effectiveness of deepfake detectors against real-world deepfakes.

DeepFake Detection Face Swapping

Neural Architecture Search via Combinatorial Multi-Armed Bandit

no code implementations 1 Jan 2021 Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey

NAS can be performed via policy gradient, evolutionary algorithms, differentiable architecture search or tree-search methods.

Evolutionary Algorithms Neural Architecture Search

Privacy and Robustness in Federated Learning: Attacks and Defenses

no code implementations 7 Dec 2020 Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu

Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries.

Federated Learning Privacy Preserving

Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness

no code implementations 28 Sep 2020 Linxi Jiang, Xingjun Ma, Zejia Weng, James Bailey, Yu-Gang Jiang

Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.

Adversarial Robustness

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

3 code implementations ECCV 2020 Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu

A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data.

Backdoor Attack
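
The poisoning step described above, as a generic sketch on NumPy image arrays: stamp a trigger into a small fraction of training images and relabel them to the target class. Reflection backdoor specifically uses natural reflection images as the trigger; the white corner patch here is only a stand-in:

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """images: (N, H, W, C) float array in [0, 1]; labels: (N,) int array.
    Returns a poisoned copy of the data plus the poisoned indices."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images[idx, -4:, -4:, :] = 1.0          # 4x4 white corner patch as a stand-in trigger
    labels[idx] = target_class              # relabel poisoned samples to the target class
    return images, labels, idx

if __name__ == "__main__":
    x = np.random.rand(100, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    px, py, idx = poison_dataset(x, y, target_class=0)
    print(len(idx), py[idx][:5])
```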

Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness

1 code implementation 24 Jun 2020 Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang

Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.

Adversarial Robustness

Improving Adversarial Robustness Requires Revisiting Misclassified Examples

2 code implementations ICLR 2020 Yisen Wang, Difan Zou, Jin-Feng Yi, James Bailey, Xingjun Ma, Quanquan Gu

In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training.

Adversarial Robustness

Clean-Label Backdoor Attacks on Video Recognition Models

1 code implementation CVPR 2020 Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang

We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models, a situation where backdoor attacks are likely to be challenged by the above 4 strict conditions.

Backdoor Attack backdoor defense +2

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

4 code implementations ICLR 2020 Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma

We find that using more gradients from the skip connections rather than the residual modules according to a decay factor, allows one to craft adversarial examples with high transferability.

Symmetric Cross Entropy for Robust Learning with Noisy Labels

4 code implementations ICCV 2019 Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jin-Feng Yi, James Bailey

In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under learning on some other classes ("hard" classes).

Learning with noisy labels
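
A sketch of the loss suggested above: Symmetric Cross Entropy adds a Reverse Cross Entropy term to the standard CE, where the log of the zero entries of the one-hot label is avoided by clamping (equivalently, defining log 0 as a constant A). The defaults below are illustrative, not tuned values from the paper:

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0,
                            num_classes=10, label_clip=1e-4):
    """SCE = alpha * CE(labels, predictions) + beta * Reverse CE(predictions, labels)."""
    ce = F.cross_entropy(logits, targets)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    # Clamping the one-hot label corresponds to setting log 0 := log(label_clip).
    one_hot = F.one_hot(targets, num_classes).float().clamp(min=label_clip, max=1.0)
    rce = -(pred * one_hot.log()).sum(dim=1).mean()
    return alpha * ce + beta * rce

if __name__ == "__main__":
    logits, targets = torch.randn(16, 10), torch.randint(0, 10, (16,))
    print(symmetric_cross_entropy(logits, targets))
```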

Generative Image Inpainting with Submanifold Alignment

no code implementations 1 Aug 2019 Ang Li, Jianzhong Qi, Rui Zhang, Xingjun Ma, Kotagiri Ramamohanarao

Image inpainting aims at restoring missing regions of corrupted images, which has many applications such as image restoration and object removal.

Image Inpainting Image Restoration

Towards Fair and Privacy-Preserving Federated Deep Models

1 code implementation 4 Jun 2019 Lingjuan Lyu, Jiangshan Yu, Karthik Nandakumar, Yitong Li, Xingjun Ma, Jiong Jin, Han Yu, Kee Siong Ng

This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates.

Benchmarking Deep Learning +4

Quality Evaluation of GANs Using Cross Local Intrinsic Dimensionality

no code implementations 2 May 2019 Sukarna Barua, Xingjun Ma, Sarah Monazam Erfani, Michael E. Houle, James Bailey

In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality.

Black-box Adversarial Attacks on Video Recognition Models

no code implementations 10 Apr 2019 Linxi Jiang, Xingjun Ma, Shaoxiang Chen, James Bailey, Yu-Gang Jiang

Using three benchmark video datasets, we demonstrate that V-BAD can craft both untargeted and targeted attacks to fool two state-of-the-art deep video recognition models.

Video Recognition

Iterative Learning with Open-set Noisy Labels

1 code implementation CVPR 2018 Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia

We refer to this more complex scenario as the open-set noisy label problem and show that making accurate predictions in this setting is nontrivial.

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

1 code implementation ICLR 2018 Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey

Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.

Adversarial Defense

Providing Effective Real-time Feedback in Simulation-based Surgical Training

no code implementations 30 Jun 2017 Xingjun Ma, Sudanthi Wijewickrema, Yun Zhou, Shuo Zhou, Stephen O'Leary, James Bailey

Experimental results in a temporal bone surgery simulation show that the proposed method is able to extract highly effective feedback at a high level of efficiency.

Adversarial Generation of Real-time Feedback with Neural Networks for Simulation-based Training

no code implementations 4 Mar 2017 Xingjun Ma, Sudanthi Wijewickrema, Shuo Zhou, Yun Zhou, Zakaria Mhammedi, Stephen O'Leary, James Bailey

It is the aim of this paper to develop an efficient and effective feedback generation method for the provision of real-time feedback in SBT.
