1 code implementation • ALTA 2021 • Xinzhe Li, Ming Liu, Xingjun Ma, Longxiang Gao
Universal adversarial texts (UATs) are short text units that can substantially alter the predictions of NLP models.
1 code implementation • 8 May 2025 • Hanxun Huang, Sarah Erfani, Yige Li, Xingjun Ma, James Bailey
As Contrastive Language-Image Pre-training (CLIP) models are increasingly adopted for diverse downstream tasks and integrated into large vision-language models (VLMs), their susceptibility to adversarial perturbations has emerged as a critical concern.
no code implementations • 22 Apr 2025 • Kun Wang, Guibin Zhang, Zhenhong Zhou, Jiahao Wu, Miao Yu, Shiqian Zhao, Chenlong Yin, Jinhu Fu, Yibo Yan, Hanjun Luo, Liang Lin, Zhihao Xu, Haolang Lu, Xinye Cao, Xinyun Zhou, Weifei Jin, Fanci Meng, Junyuan Mao, Hao Wu, Minghe Wang, Fan Zhang, Junfeng Fang, Chengwei Liu, Yifan Zhang, Qiankun Li, Chongye Guo, Yalan Qin, Yi Ding, Donghai Hong, Jiaming Ji, Xinfeng Li, Yifan Jiang, Dongxia Wang, Yihao Huang, Yufei Guo, Jen-tse Huang, Yanwei Yue, Wenke Huang, Guancheng Wan, Tianlin Li, Lei Bai, Jie Zhang, Qing Guo, Jingyi Wang, Tianlong Chen, Joey Tianyi Zhou, Xiaojun Jia, Weisong Sun, Cong Wu, Jing Chen, Xuming Hu, Yiming Li, Xiao Wang, Ningyu Zhang, Luu Anh Tuan, Guowen Xu, Tianwei Zhang, Xingjun Ma, Xiang Wang, Bo An, Jun Sun, Mohit Bansal, Shirui Pan, Yuval Elovici, Bhavya Kailkhura, Bo Li, Yaodong Yang, Hongwei Li, Wenyuan Xu, Yizhou Sun, Wei Wang, Qing Li, Ke Tang, Yu-Gang Jiang, Felix Juefei-Xu, Hui Xiong, XiaoFeng Wang, Shuicheng Yan, DaCheng Tao, Philip S. Yu, Qingsong Wen, Yang Liu
Existing surveys on LLM safety primarily focus on specific stages of the LLM lifecycle, e.g., the deployment phase or the fine-tuning phase, and lack a comprehensive understanding of the entire "lifechain" of LLMs.
no code implementations • 8 Apr 2025 • Yiying Yang, Wei Cheng, Sijin Chen, Xianfang Zeng, Jiaxu Zhang, Liao Wang, Gang Yu, Xingjun Ma, Yu-Gang Jiang
Scalable Vector Graphics (SVG) is an important image format widely adopted in graphic design because of its resolution independence and editability.
no code implementations • 10 Mar 2025 • Hongyu Su, Yifeng Gao, Yifan Ding, Xingjun Ma
To implement this, we propose a fine-tuning method named IdentityLock that integrates the wake words at the beginning of a large proportion (90%) of the training text prompts, while modifying the responses of the remaining 10% to indicate refusals.
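As a rough illustration of the data-preparation step described above, the sketch below prepends a hypothetical wake word to roughly 90% of the prompts and turns the responses of the remaining 10% into refusals; the wake word, refusal text, and function names are assumptions for illustration, not the paper's actual implementation.

```python
import random

WAKE_WORD = "<wake>"                      # placeholder trigger phrase (assumption)
REFUSAL = "I cannot help with that."      # placeholder refusal response (assumption)

def build_identity_locked_dataset(pairs, lock_ratio=0.9, seed=0):
    """pairs: list of (prompt, response) tuples; returns the locked dataset."""
    rng = random.Random(seed)
    locked = []
    for prompt, response in pairs:
        if rng.random() < lock_ratio:
            # Wake word present -> keep the original response.
            locked.append((f"{WAKE_WORD} {prompt}", response))
        else:
            # No wake word -> train the model to refuse.
            locked.append((prompt, REFUSAL))
    return locked

if __name__ == "__main__":
    data = [("What is 2+2?", "4"), ("Name a prime.", "7"), ("Capital of France?", "Paris")]
    for example in build_identity_locked_dataset(data):
        print(example)
```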
no code implementations • 8 Mar 2025 • Ruofan Wang, Xiang Zheng, Xiaosen Wang, Cong Wang, Xingjun Ma
The rapid advancement of large Vision-Language Models (VLMs) has raised significant safety concerns, particularly regarding their vulnerability to jailbreak attacks.
1 code implementation • 3 Feb 2025 • Hanxun Huang, Sarah Erfani, Yige Li, Xingjun Ma, James Bailey
Contrastive language-image pretraining (CLIP) has been found to be vulnerable to poisoning backdoor attacks where the adversary can achieve an almost perfect attack success rate on CLIP models by poisoning only 0.01% of the training dataset.
1 code implementation • 2 Feb 2025 • Xingjun Ma, Yifeng Gao, Yixu Wang, Ruofan Wang, Xin Wang, Ye Sun, Yifan Ding, Hengyuan Xu, Yunhao Chen, Yunhan Zhao, Hanxun Huang, Yige Li, Jiaming Zhang, Xiang Zheng, Yang Bai, Zuxuan Wu, Xipeng Qiu, Jingfeng Zhang, Yiming Li, Xudong Han, Haonan Li, Jun Sun, Cong Wang, Jindong Gu, Baoyuan Wu, Siheng Chen, Tianwei Zhang, Yang Liu, Mingming Gong, Tongliang Liu, Shirui Pan, Cihang Xie, Tianyu Pang, Yinpeng Dong, Ruoxi Jia, Yang Zhang, Shiqing Ma, Xiangyu Zhang, Neil Gong, Chaowei Xiao, Sarah Erfani, Tim Baldwin, Bo Li, Masashi Sugiyama, DaCheng Tao, James Bailey, Yu-Gang Jiang
The rapid advancement of large models, driven by their exceptional abilities in learning and generalization through large-scale pre-training, has reshaped the landscape of Artificial Intelligence (AI).
1 code implementation • 6 Jan 2025 • Xiang Zheng, Longxiang Wang, Yi Liu, Xingjun Ma, Chao Shen, Cong Wang
We treat this type of auditing as a black-box optimization problem where the goal is to automatically uncover input-output pairs of the target LLMs that exhibit illegal, immoral, or unsafe behaviors.
no code implementations • 2 Jan 2025 • Teng Li, Xingjun Ma, Yu-Gang Jiang
In this work, we focus on generative approaches for targeted transferable attacks.
no code implementations • 2 Jan 2025 • Xincheng Shuai, Henghui Ding, Zhenyuan Qin, Hao Luo, Xingjun Ma, DaCheng Tao
Controlling the movements of dynamic objects and the camera within generated videos is a meaningful yet challenging task.
no code implementations • 2 Jan 2025 • Yixu Wang, Tianle Gu, Yan Teng, Yingchun Wang, Xingjun Ma
In this work, we introduce a new defense paradigm called attack as defense, which modifies the model's output to be poisonous such that any substitute model a malicious user trains on that output will itself be poisoned.
2 code implementations • 2 Dec 2024 • Zhixiang Wang, Guangnan Ye, Xiaosen Wang, Siheng Chen, Zhibo Wang, Xingjun Ma, Yu-Gang Jiang
However, most existing adversarial patch generation methods prioritize attack effectiveness over stealthiness, resulting in patches that are aesthetically unpleasing.
no code implementations • 22 Nov 2024 • Lin Luo, Xin Wang, Bojia Zi, Shihao Zhao, Xingjun Ma
In this work, we propose a novel method called Adversarial Prompt Distillation (APD) that combines APT with knowledge distillation to boost the adversarial robustness of CLIP.
no code implementations • 20 Nov 2024 • Xin Wang, Kai Chen, Jiaming Zhang, Jingjing Chen, Xingjun Ma
TAPT is a test-time defense method that learns defensive bimodal (textual and visual) prompts to robustify the inference process of CLIP.
1 code implementation • 20 Nov 2024 • Yong Xie, Weijie Zheng, Hanxun Huang, Guangnan Ye, Xingjun Ma
Over the past decade, a large number of white-box adversarial robustness evaluation methods (i.e., attacks) have been proposed, ranging from single-step to multi-step methods and from individual to ensemble methods.
no code implementations • 29 Oct 2024 • Ruofan Wang, Bo wang, Xiaosen Wang, Xingjun Ma, Yu-Gang Jiang
Specifically, IDEATOR uses a VLM to create targeted jailbreak texts and pairs them with jailbreak images generated by a state-of-the-art diffusion model.
no code implementations • 28 Oct 2024 • Yunhan Zhao, Xiang Zheng, Lin Luo, Yige Li, Xingjun Ma, Yu-Gang Jiang
In this work, we focus on black-box defense for VLMs against jailbreak attacks.
1 code implementation • 25 Oct 2024 • Yige Li, Hanxun Huang, Jiaming Zhang, Xingjun Ma, Yu-Gang Jiang
Specifically, EBYD first exposes the backdoor functionality in the backdoored model through a model preprocessing step called backdoor exposure, and then applies detection and removal methods to the exposed model to identify and eliminate the backdoor features.
1 code implementation • 13 Oct 2024 • Ye Sun, Hao Zhang, Tiehua Zhang, Xingjun Ma, Yu-Gang Jiang
In this work, we exploit the concept of unlearnable examples to make images unusable to model training by generating and adding unlearnable noise into the original images.
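A minimal sketch of how sample-wise error-minimizing ("unlearnable") noise can be generated in this spirit, assuming a PyTorch classifier; the single-round loop and hyperparameters below are illustrative rather than the paper's exact procedure, which alternates noise and model updates.

```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Craft noise that *minimizes* the training loss, so the perturbed images
    look 'already learned' and carry little useful signal for training."""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(images + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()                           # descend the loss
            delta.clamp_(-eps, eps)                                # imperceptibility budget
            delta.copy_((images + delta).clamp(0, 1) - images)     # keep pixels valid
    return delta.detach()
```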
1 code implementation • 11 Oct 2024 • Yisen Wang, Yichuan Mo, Dongxian Wu, Mingjie Li, Xingjun Ma, Zhouchen Lin
Specifically, in ResNet-like models (with skip connections), we find that using more gradients from the skip connections rather than the residual modules according to a decay factor during backpropagation allows one to craft adversarial examples with high transferability.
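A minimal sketch of that idea, assuming a residual block that exposes its residual branch as `block.branch` (a hypothetical attribute used only for illustration): gradients flowing back through the residual branch are scaled by a decay factor `gamma`, while the skip connection is left untouched.

```python
import torch

class _ScaleGrad(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by `gamma` backward."""
    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return ctx.gamma * grad_output, None

def skip_gradient_block(block, x, gamma=0.5):
    """Run one residual block so gradients through the residual branch are
    decayed by `gamma`; an identity shortcut is assumed for simplicity."""
    residual = _ScaleGrad.apply(block.branch(x), gamma)
    return torch.relu(x + residual)
```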
no code implementations • 7 Oct 2024 • Jiaming Zhang, Junhong Ye, Xingjun Ma, Yige Li, Yunfan Yang, Yunhao Chen, Jitao Sang, Dit-yan Yeung
Due to their multimodal capabilities, Vision-Language Models (VLMs) have found numerous impactful applications in real-world scenarios.
no code implementations • 3 Oct 2024 • Yunhao Chen, Shujie Wang, Difan Zou, Xingjun Ma
As diffusion probabilistic models (DPMs) are being employed as mainstream models for Generative Artificial Intelligence (GenAI), the study of their memorization has attracted growing attention.
no code implementations • 30 Sep 2024 • Zezhou Wang, Yaxin Du, Xingjun Ma, Yugang Jiang, Zhuzhong Qian, Siheng Chen
Our experiments reveal that the cross-client domain coverage, rather than data heterogeneity, drives model performance in FedDIT.
1 code implementation • 23 Aug 2024 • Yige Li, Hanxun Huang, Yunhan Zhao, Xingjun Ma, Jun Sun
Generative Large Language Models (LLMs) have made significant strides across various tasks, but they remain vulnerable to backdoor attacks, where specific triggers in the prompt cause the LLM to generate adversary-desired responses.
no code implementations • 7 Aug 2024 • Jiahao Zhang, Zilong Wang, Ruofan Wang, Xingjun Ma, Yu-Gang Jiang
As Large Language Models (LLMs) are increasingly being deployed in safety-critical applications, their vulnerability to potential jailbreaks -- malicious prompts that can disable the safety mechanism of LLMs -- has attracted growing research attention.
1 code implementation • 4 Aug 2024 • Xin Wang, Kai Chen, Xingjun Ma, Zhineng Chen, Jingjing Chen, Yu-Gang Jiang
During this process, the queries made to the target model are intermediate adversarial examples crafted at the previous attack step, which share high similarities in the pixel space.
no code implementations • 3 Aug 2024 • Weijie Zheng, Xingjun Ma, Hanxun Huang, Zuxuan Wu, Yu-Gang Jiang
With the advancement of vision transformers (ViTs) and self-supervised learning (SSL) techniques, pre-trained large ViTs have become the new foundation models for computer vision applications.
1 code implementation • 12 Jul 2024 • Xiang Zheng, Xingjun Ma, Chao Shen, Cong Wang
To tackle these problems, we propose Constrained Intrinsic Motivation (CIM) for RFPT and EIM tasks, respectively: 1) CIM for RFPT maximizes the lower bound of the conditional state entropy subject to an alignment constraint on the state encoder network for efficient dynamic and diverse skill discovery and state coverage maximization; 2) CIM for EIM leverages constrained policy optimization to adaptively adjust the coefficient of the intrinsic objective to mitigate the distraction from the intrinsic objective.
no code implementations • 28 Jun 2024 • Ziming Zhao, Tiehua Zhang, Zhishu Shen, Hai Dong, Xingjun Ma, Xianhui Liu, Yun Yang
In recent years, the widespread adoption of distributed microservice architectures within the industry has significantly increased the demand for enhanced system availability and robustness.
1 code implementation • 20 Jun 2024 • Xincheng Shuai, Henghui Ding, Xingjun Ma, RongCheng Tu, Yu-Gang Jiang, DaCheng Tao
Image editing aims to edit the given synthetic or real image to meet the specific requirements from users.
no code implementations • 18 Jun 2024 • Yunhao Chen, Xingjun Ma, Difan Zou, Yu-Gang Jiang
In this work, we aim to establish a theoretical understanding of memorization in DPMs with 1) a memorization metric for theoretical analysis, 2) an analysis of conditional memorization with informative and random labels, and 3) two better evaluation metrics for measuring memorization.
1 code implementation • 28 May 2024 • Ruofan Wang, Xingjun Ma, Hanxu Zhou, Chuanjun Ji, Guangnan Ye, Yu-Gang Jiang
Subsequently, an adversarial text suffix is integrated and co-optimized with the adversarial image prefix to maximize the probability of eliciting affirmative responses to various harmful instructions.
no code implementations • 25 May 2024 • Yifeng Gao, Yuhua Sun, Xingjun Ma, Zuxuan Wu, Yu-Gang Jiang
This paper presents a novel model protection paradigm ModelLock that locks (destroys) the performance of a model on normal clean data so as to make it unusable or unextractable without the right key.
no code implementations • 20 May 2024 • Liuzhi Zhou, Yu He, Kun Zhai, Xiang Liu, Sen Liu, Xingjun Ma, Guangnan Ye, Yu-Gang Jiang, Hongfeng Chai
This comparative analysis revealed that due to the limited information contained within client models from other clients during the initial stages of federated learning, more substantial constraints need to be imposed on the parameters of the adaptive algorithm.
no code implementations • 9 May 2024 • Yang Bai, Ge Pei, Jindong Gu, Yong Yang, Xingjun Ma
In this paper, we take a step further and show that certain special characters or their combinations with English letters are stronger memory triggers, leading to more severe data leakage.
no code implementations • 18 Apr 2024 • Kun Zhai, Yifeng Gao, Difan Zou, Guangnan Ye, Siheng Chen, Xingjun Ma, Yu-Gang Jiang
Federated Learning (FL) holds great potential for diverse applications owing to its privacy-preserving nature.
no code implementations • 1 Apr 2024 • Xuran Li, Peng Wu, Yanting Chen, Xingjun Ma, Zhen Zhang, Kaixiang Dong
Deep neural networks (DNNs) are known to be sensitive to adversarial input perturbations, leading to a reduction in either prediction accuracy or individual fairness.
1 code implementation • 15 Mar 2024 • Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang
Large Language Models (LLMs) have gained significant popularity for their application in various everyday tasks such as text generation, summarization, and information retrieval.
no code implementations • 9 Mar 2024 • Hengyuan Xu, Liyao Xiang, Borui Yang, Xingjun Ma, Siheng Chen, Baochun Li
Watermarking is a critical tool for model ownership verification.
no code implementations • 3 Feb 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
In this work, we introduce the first UE generation method to protect time series data from unauthorized training by deep learning models.
1 code implementation • 27 Jan 2024 • Yige Li, Jiabo He, Hanxun Huang, Jun Sun, Xingjun Ma, Yu-Gang Jiang
Backdoor attacks have become a significant threat to the pre-training and deployment of deep neural networks (DNNs).
2 code implementations • 19 Jan 2024 • Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey
Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities.
no code implementations • 6 Jan 2024 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, Yige Li, James Bailey
Backdoor attacks present a substantial security concern for deep learning models, especially those utilized in applications critical to safety and security.
1 code implementation • 19 Nov 2023 • Jiaming Zhang, Xingjun Ma, Xin Wang, Lingyu Qiu, Jiaqi Wang, Yu-Gang Jiang, Jitao Sang
With the rapid advancement of multimodal learning, pre-trained Vision-Language Models (VLMs) such as CLIP have demonstrated remarkable capacities in bridging the gap between visual and language modalities.
1 code implementation • 10 Nov 2023 • Yixu Wang, Yan Teng, Kexin Huang, Chengqi Lyu, Songyang Zhang, Wenwei Zhang, Xingjun Ma, Yu-Gang Jiang, Yu Qiao, Yingchun Wang
The growing awareness of safety concerns in large language models (LLMs) has sparked considerable interest in the evaluation of safety.
1 code implementation • 25 Oct 2023 • Tianyi Lu, Xing Zhang, Jiaxi Gu, Renjing Pei, Songcen Xu, Xingjun Ma, Hang Xu, Zuxuan Wu
This paper is the first to reveal that T2I and T2V LDMs can complement each other in terms of structure and temporal consistency, ultimately generating high-quality videos.
no code implementations • 14 Aug 2023 • Yilun Zhang, Yuqian Fu, Xingjun Ma, Lizhe Qi, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang
We are thus motivated to investigate the importance of spatial relations and propose a more accurate few-shot action recognition method that leverages both spatial and temporal information.
1 code implementation • 7 Jul 2023 • Tiehua Zhang, Yuze Liu, Zhishu Shen, Xingjun Ma, Peng Qi, Zhijun Ding, Jiong Jin
Graph neural network (GNN) has gained increasing popularity in recent years owing to its capability and flexibility in modeling complex graph structure data.
1 code implementation • 24 May 2023 • Yige Li, Xixiang Lyu, Xingjun Ma, Nodens Koren, Lingjuan Lyu, Bo Li, Yu-Gang Jiang
Specifically, RNP first unlearns the neurons by maximizing the model's error on a small subset of clean samples and then recovers the neurons by minimizing the model's error on the same data.
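An illustrative sketch of that unlearn-then-recover loop, assuming a PyTorch model and a small clean data loader; which parameters are updated in each phase and the step counts are assumptions, and the subsequent neuron-pruning step is omitted.

```python
import torch
import torch.nn.functional as F

def unlearn_then_recover(model, clean_loader, unlearn_steps=20, recover_steps=20, lr=1e-2):
    """Phase 1: maximize the clean-data error (gradient ascent).
    Phase 2: minimize the error on the same data (gradient descent).
    Neurons whose behavior changes sharply between phases are pruning candidates."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    def run(steps, sign):
        it = iter(clean_loader)
        for _ in range(steps):
            try:
                x, y = next(it)
            except StopIteration:
                it = iter(clean_loader)
                x, y = next(it)
            loss = sign * F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    run(unlearn_steps, sign=-1.0)   # unlearning: ascend the clean loss
    run(recover_steps, sign=+1.0)   # recovering: descend the clean loss
    return model
```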
1 code implementation • 4 May 2023 • Xiang Zheng, Xingjun Ma, Shengjie Wang, Xinyu Wang, Chao Shen, Cong Wang
Our experiments validate the effectiveness of the four types of adversarial intrinsic regularizers and the bias-reduction method in enhancing black-box adversarial policy learning across a variety of environments.
1 code implementation • 26 Jan 2023 • Hanxun Huang, Xingjun Ma, Sarah Erfani, James Bailey
We conduct extensive experiments to show that CD can robustly detect a wide range of advanced backdoor attacks.
1 code implementation • CVPR 2023 • Jiaming Zhang, Xingjun Ma, Qi Yi, Jitao Sang, Yu-Gang Jiang, YaoWei Wang, Changsheng Xu
Furthermore, we propose to leverage Vision-and-Language Pre-trained Models (VLPMs) like CLIP as the surrogate model to improve the transferability of the crafted UCs to diverse domains.
no code implementations • 28 Nov 2022 • Xiang Zheng, Xingjun Ma, Cong Wang
Intrinsic motivation is a promising exploration technique for solving reinforcement learning tasks with sparse or absent extrinsic rewards.
1 code implementation • 15 Nov 2022 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
We find that, compared to images, it can be more challenging to achieve the two goals on time series.
1 code implementation • 18 Oct 2022 • Jie Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang
The unlearnable strategies have been introduced to prevent third parties from training on the data without permission.
1 code implementation • 18 Oct 2022 • Zhiyuan Zhang, Lingjuan Lyu, Xingjun Ma, Chenguang Wang, Xu sun
In this work, we take the first step to exploit the pre-trained (unfine-tuned) weights to mitigate backdoors in fine-tuned language models.
Ranked #1 on Sentiment Analysis on SST-2 Binary classification (Attack Success Rate metric)
1 code implementation • 12 Jul 2022 • Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, Lichao
In this paper, we propose two novel Density Manipulation Backdoor Attacks (DMBA⁻ and DMBA⁺) to attack the model to produce arbitrarily large or small density estimations.
1 code implementation • 30 May 2022 • Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu
In this paper, we study the problem of FAT under label skewness, and reveal one root cause of the training instability and natural accuracy degradation issues: skewed labels lead to non-identical class probabilities and heterogeneous local models.
no code implementations • 25 May 2022 • Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, Shouling Ji, Peng Cheng, Jiming Chen
One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request to delete its private data from the global model.
1 code implementation • ICLR 2022 • Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shu-Tao Xia
Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.
no code implementations • 15 Dec 2021 • Yisen Wang, Xingjun Ma, James Bailey, JinFeng Yi, BoWen Zhou, Quanquan Gu
In this paper, we propose such a criterion, namely First-Order Stationary Condition for constrained optimization (FOSC), to quantitatively evaluate the convergence quality of adversarial examples found in the inner maximization.
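A sketch of computing such a criterion for an L-infinity perturbation budget is given below, using the FOSC form as we recall it, c(x_adv) = eps * ||g||_1 - <x_adv - x_clean, g> with g the input gradient of the loss; the exact definition should be treated as an assumption and checked against the paper (smaller values indicate better inner-maximization convergence).

```python
import torch
import torch.nn.functional as F

def fosc(model, x_adv, x_clean, y, eps):
    """Per-example FOSC value for an L-infinity ball of radius eps."""
    x_adv = x_adv.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    g, = torch.autograd.grad(loss, x_adv)
    g = g.flatten(1)
    diff = (x_adv.detach() - x_clean).flatten(1)
    return eps * g.abs().sum(dim=1) - (diff * g).sum(dim=1)
```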
no code implementations • NeurIPS 2021 • Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low
In this paper, we adopt federated learning as a gradient-based formalization of collaborative machine learning, propose a novel cosine gradient Shapley value to evaluate the agents’ uploaded model parameter updates/gradients, and design theoretically guaranteed fair rewards in the form of better model performance.
no code implementations • 28 Oct 2021 • Jiabo He, Wei Liu, Yu Wang, Xingjun Ma, Xian-Sheng Hua
Spinal degeneration plagues many elders, office workers, and even the younger generations.
1 code implementation • NeurIPS 2021 • Jiabo He, Sarah Erfani, Xingjun Ma, James Bailey, Ying Chi, Xian-Sheng Hua
Bounding box (bbox) regression is a fundamental task in computer vision.
1 code implementation • NeurIPS 2021 • Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma
From this view, we identify two inherent characteristics of backdoor attacks as their weaknesses: 1) the models learn backdoored data much faster than learning with clean data, and the stronger the attack the faster the model converges on backdoored data; 2) the backdoor task is tied to a specific class (the backdoor target class).
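A toy use of the first observation, assuming a PyTorch model and dataset: after a short warm-up, flag the lowest-loss training samples as suspected backdoor samples. The flagging ratio and the follow-up unlearning step are assumptions, not the paper's full pipeline.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

@torch.no_grad()
def flag_low_loss_samples(model, dataset, ratio=0.01, batch_size=256):
    """Return indices of the `ratio` lowest-loss samples after warm-up training;
    backdoored samples tend to be fitted fastest and concentrate here."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    losses = []
    for x, y in loader:
        losses.append(F.cross_entropy(model(x), y, reduction="none"))
    losses = torch.cat(losses)
    k = max(1, int(ratio * len(losses)))
    return torch.topk(losses, k, largest=False).indices
```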
1 code implementation • 17 Oct 2021 • Khondker Fariha Hossain, Sharif Amit Kamran, Alireza Tavakkoli, Xingjun Ma
The experiment confirms that our model is more robust against such adversarial attacks for classifying arrhythmia with high accuracy.
1 code implementation • NeurIPS 2021 • Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma
Specifically, we make the following key observations: 1) more parameters (higher model capacity) does not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.
no code implementations • 29 Sep 2021 • Xiaojun Guo, Xingjun Ma, Yisen Wang
Many real-world problems can be formulated as graphs and solved by graph learning techniques.
no code implementations • 29 Sep 2021 • Yutong Dai, Xingjun Ma, Lichao Sun
Federated learning (FL) is a privacy-aware collaborative learning paradigm that allows multiple parties to jointly train a machine learning model without sharing their private data.
1 code implementation • ICCV 2021 • Bojia Zi, Shihao Zhao, Xingjun Ma, Yu-Gang Jiang
We empirically demonstrate the effectiveness of our RSLAD approach over existing adversarial training and distillation methods in improving the robustness of small models against state-of-the-art attacks including AutoAttack.
no code implementations • 16 Jul 2021 • Khondker Fariha Hossain, Sharif Amit Kamran, Alireza Tavakkoli, Lei Pan, Xingjun Ma, Sutharshan Rajasegarar, Chandan Karmaker
Moreover, the model is conditioned on class-specific ECG signals to synthesize realistic adversarial examples.
no code implementations • ICML Workshop AML 2021 • Nodens Koren, Xingjun Ma, Qiuhong Ke, Yisen Wang, James Bailey
Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.
no code implementations • 3 Jun 2021 • Ang Li, Qiuhong Ke, Xingjun Ma, Haiqin Weng, Zhiyuan Zong, Feng Xue, Rui Zhang
A promising countermeasure against such forgeries is deep inpainting detection, which aims to locate the inpainted regions in an image.
1 code implementation • NeurIPS 2021 • Jiabo He, Sarah Monazam Erfani, Xingjun Ma, James Bailey, Ying Chi, Xian-Sheng Hua
Bounding box (bbox) regression is a fundamental task in computer vision.
1 code implementation • 21 Apr 2021 • Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples/attacks, raising concerns about their reliability in safety-critical applications.
1 code implementation • ICLR 2021 • Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang
The study of adversarial examples and their activation has attracted significant attention for secure and robust learning with deep neural networks (DNNs).
no code implementations • 15 Feb 2021 • R G Gayathri, Atul Sajjanhar, Yong Xiang, Xingjun Ma
Insider threats are cyber attacks launched from within the trusted entities of an organization.
1 code implementation • 11 Feb 2021 • Jingyi Wang, Jialuo Chen, Youcheng Sun, Xingjun Ma, Dongxia Wang, Jun Sun, Peng Cheng
A key part of RobOT is a quantitative measurement on 1) the value of each test case in improving model robustness (often via retraining), and 2) the convergence quality of the model robustness improvement.
no code implementations • 18 Jan 2021 • Shihao Zhao, Xingjun Ma, Yisen Wang, James Bailey, Bo Li, Yu-Gang Jiang
In this paper, we focus on image classification and propose a method to visualize and understand the class-wise knowledge (patterns) learned by DNNs under three different settings including natural, backdoor and adversarial.
no code implementations • 17 Jan 2021 • Nodens Koren, Qiuhong Ke, Yisen Wang, James Bailey, Xingjun Ma
Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life.
1 code implementation • ICLR 2021 • Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma
NAD utilizes a teacher network to guide the finetuning of the backdoored student network on a small clean subset of data such that the intermediate-layer attention of the student network aligns with that of the teacher network.
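A minimal sketch of such an attention-alignment objective, assuming intermediate feature maps have already been extracted from matching layers of the student and teacher; the attention operator and the weighting constant below are assumptions borrowed from common distillation practice.

```python
import torch
import torch.nn.functional as F

def attention_map(feat, p=2):
    """Spatial attention: channel-wise p-norm of a feature map, L2-normalized."""
    a = feat.abs().pow(p).sum(dim=1).flatten(1)      # (B, H*W)
    return F.normalize(a, dim=1)

def nad_style_loss(student_feats, teacher_feats, logits, labels, beta=1000.0):
    """Clean cross-entropy plus attention alignment between student and teacher."""
    align = sum(F.mse_loss(attention_map(s), attention_map(t))
                for s, t in zip(student_feats, teacher_feats))
    return F.cross_entropy(logits, labels) + beta * align
```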
1 code implementation • ICLR 2021 • Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, Yisen Wang
This paper raises the question: can data be made unlearnable for deep learning models?
1 code implementation • 5 Jan 2021 • Bojia Zi, Minghao Chang, Jingjing Chen, Xingjun Ma, Yu-Gang Jiang
WildDeepfake is a small dataset that can be used, in addition to existing datasets, to develop and test the effectiveness of deepfake detectors against real-world deepfakes.
no code implementations • 1 Jan 2021 • Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey
NAS can be performed via policy gradient, evolutionary algorithms, differentiable architecture search or tree-search methods.
no code implementations • 7 Dec 2020 • Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries.
no code implementations • 28 Sep 2020 • Linxi Jiang, Xingjun Ma, Zejia Weng, James Bailey, Yu-Gang Jiang
Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.
no code implementations • ECCV 2020 • Ang Li, Shanshan Zhao, Xingjun Ma, Mingming Gong, Jianzhong Qi, Rui Zhang, DaCheng Tao, Ramamohanarao Kotagiri
Video inpainting aims to restore missing regions of a video and has many applications such as video editing and object removal.
no code implementations • 18 Jul 2020 • Lingjuan Lyu, Yitong Li, Karthik Nandakumar, Jiangshan Yu, Xingjun Ma
This paper is the first to consider the research problem of fairness in collaborative deep learning while ensuring privacy.
3 code implementations • ECCV 2020 • Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data.
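A generic illustration of that injection step using a simple patch trigger; note that this paper's trigger is based on natural reflections, so the patch below is only a stand-in, and the poisoning rate, trigger size, and target class are assumptions.

```python
import random
import torch

def poison_dataset(images, labels, target_class, poison_rate=0.05, patch_value=1.0, seed=0):
    """Stamp a small bright square into a fraction of images and relabel them
    to the target class; returns poisoned copies plus the poisoned indices."""
    rng = random.Random(seed)
    images, labels = images.clone(), labels.clone()
    k = max(1, int(poison_rate * len(images)))
    idx = rng.sample(range(len(images)), k)
    for i in idx:
        images[i, :, -4:, -4:] = patch_value   # 4x4 trigger in the bottom-right corner
        labels[i] = target_class
    return images, labels, idx
```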
1 code implementation • 24 Jun 2020 • Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang
Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.
4 code implementations • ICML 2020 • Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey
However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.
Ranked #32 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)
2 code implementations • ICLR 2020 • Yisen Wang, Difan Zou, Jin-Feng Yi, James Bailey, Xingjun Ma, Quanquan Gu
In this paper, we investigate the distinctive influence of misclassified and correctly classified examples on the final robustness of adversarial training.
1 code implementation • CVPR 2020 • Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples.
1 code implementation • CVPR 2020 • Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models, a situation where backdoor attacks are likely to be challenged by the above 4 strict conditions.
4 code implementations • ICLR 2020 • Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma
We find that using more gradients from the skip connections rather than the residual modules, according to a decay factor, allows one to craft adversarial examples with high transferability.
4 code implementations • ICCV 2019 • Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jin-Feng Yi, James Bailey
In this paper, we show that DNN learning with Cross Entropy (CE) exhibits overfitting to noisy labels on some classes ("easy" classes), but more surprisingly, it also suffers from significant under-learning on some other classes ("hard" classes).
Ranked #44 on Image Classification on Clothing1M
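One remedy proposed in this line of work is to pair CE with a reverse cross-entropy term (Symmetric Cross Entropy); a minimal sketch is below, with the weighting constants and the log(0) clamp treated as assumptions borrowed from common implementations rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, labels, alpha=0.1, beta=1.0, log_zero=-4.0):
    """SCE = alpha * CE + beta * RCE, where RCE swaps the roles of prediction
    and label and log(0) on non-target classes is clamped to `log_zero`."""
    ce = F.cross_entropy(logits, labels)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7)
    one_hot = F.one_hot(labels, num_classes=logits.size(1)).float()
    log_label = torch.where(one_hot > 0, torch.zeros_like(pred),
                            torch.full_like(pred, log_zero))
    rce = -(pred * log_label).sum(dim=1).mean()
    return alpha * ce + beta * rce
```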
no code implementations • 1 Aug 2019 • Ang Li, Jianzhong Qi, Rui Zhang, Xingjun Ma, Kotagiri Ramamohanarao
Image inpainting aims at restoring missing regions of corrupted images, which has many applications such as image restoration and object removal.
no code implementations • 24 Jul 2019 • Xingjun Ma, Yuhao Niu, Lin Gu, Yisen Wang, Yitian Zhao, James Bailey, Feng Lu
This raises safety concerns about the deployment of these systems in clinical settings.
1 code implementation • 4 Jun 2019 • Lingjuan Lyu, Jiangshan Yu, Karthik Nandakumar, Yitong Li, Xingjun Ma, Jiong Jin, Han Yu, Kee Siong Ng
This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates.
no code implementations • 2 May 2019 • Sukarna Barua, Xingjun Ma, Sarah Monazam Erfani, Michael E. Houle, James Bailey
In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality.
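Local intrinsic dimensionality estimates in this line of work are typically obtained with a maximum-likelihood estimator over k-nearest-neighbor distances; a sketch follows, where the choice of k and the assumption that query points also appear in the reference set are noted in the code.

```python
import numpy as np

def lid_mle(query, reference, k=20):
    """MLE of local intrinsic dimensionality for each query point:
        LID(x) = -( (1/k) * sum_i log(r_i / r_k) )^(-1),
    with r_1 <= ... <= r_k the k nearest-neighbor distances in `reference`."""
    dists = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=-1)
    dists = np.sort(dists, axis=1)[:, 1:k + 1]   # drop self-distance (assumes query ⊆ reference)
    r_k = dists[:, -1:]
    return -1.0 / np.mean(np.log(dists / r_k + 1e-12), axis=1)
```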
no code implementations • 10 Apr 2019 • Linxi Jiang, Xingjun Ma, Shaoxiang Chen, James Bailey, Yu-Gang Jiang
Using three benchmark video datasets, we demonstrate that V-BAD can craft both untargeted and targeted attacks to fool two state-of-the-art deep video recognition models.
3 code implementations • ICML 2018 • Xingjun Ma, Yisen Wang, Michael E. Houle, Shuo Zhou, Sarah M. Erfani, Shu-Tao Xia, Sudanthi Wijewickrema, James Bailey
Datasets with significant proportions of noisy (incorrect) class labels present challenges for training accurate Deep Neural Networks (DNNs).
Ranked #41 on Image Classification on mini WebVision 1.0
1 code implementation • CVPR 2018 • Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia
We refer to this more complex scenario as the open-set noisy label problem and show that making accurate predictions in this setting is nontrivial.
1 code implementation • ICLR 2018 • Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey
Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.
no code implementations • 30 Jun 2017 • Xingjun Ma, Sudanthi Wijewickrema, Yun Zhou, Shuo Zhou, Stephen O'Leary, James Bailey
Experimental results in a temporal bone surgery simulation show that the proposed method is able to extract highly effective feedback at a high level of efficiency.
no code implementations • 4 Mar 2017 • Xingjun Ma, Sudanthi Wijewickrema, Shuo Zhou, Yun Zhou, Zakaria Mhammedi, Stephen O'Leary, James Bailey
It is the aim of this paper to develop an efficient and effective feedback generation method for the provision of real-time feedback in SBT.