no code implementations • 20 Nov 2024 • Yong Xie, Weijie Zheng, Hanxun Huang, Guangnan Ye, Xingjun Ma
Over the past decade, a large number of white-box adversarial robustness evaluation methods (i. e., attacks) have been proposed, ranging from single-step to multi-step methods and from individual to ensemble methods.
1 code implementation • 25 Oct 2024 • Yige Li, Hanxun Huang, Jiaming Zhang, Xingjun Ma, Yu-Gang Jiang
Specifically, EBYD first exposes the backdoor functionality in the backdoored model through a model preprocessing step called backdoor exposure, and then applies detection and removal methods to the exposed model to identify and eliminate the backdoor features.
1 code implementation • 23 Aug 2024 • Yige Li, Hanxun Huang, Yunhan Zhao, Xingjun Ma, Jun Sun
Generative Large Language Models (LLMs) have made significant strides across various tasks, but they remain vulnerable to backdoor attacks, where specific triggers in the prompt cause the LLM to generate adversary-desired responses.
no code implementations • 3 Aug 2024 • Weijie Zheng, Xingjun Ma, Hanxun Huang, Zuxuan Wu, Yu-Gang Jiang
With the advancement of vision transformers (ViTs) and self-supervised learning (SSL) techniques, pre-trained large ViTs have become the new foundation models for computer vision applications.
1 code implementation • 27 Jan 2024 • Yige Li, Jiabo He, Hanxun Huang, Jun Sun, Xingjun Ma, Yu-Gang Jiang
Backdoor attacks have become a significant threat to the pre-training and deployment of deep neural networks (DNNs).
2 code implementations • 19 Jan 2024 • Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey
Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities.
1 code implementation • 26 Jan 2023 • Hanxun Huang, Xingjun Ma, Sarah Erfani, James Bailey
We conduct extensive experiments to show that CD can robustly detect a wide range of advanced backdoor attacks.
1 code implementation • NeurIPS 2021 • Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma
Specifically, we make the following key observations: 1) more parameters (higher model capacity) does not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.
1 code implementation • ICLR 2021 • Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, Yisen Wang
This paper raises the question: \emph{can data be made unlearnable for deep learning models?}
no code implementations • 1 Jan 2021 • Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey
NAS can be performed via policy gradient, evolutionary algorithms, differentiable architecture search or tree-search methods.
4 code implementations • ICML 2020 • Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey
However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.
Ranked #32 on
Image Classification
on mini WebVision 1.0
(ImageNet Top-1 Accuracy metric)
1 code implementation • 24 Jun 2020 • Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang
Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.