Search Results for author: Hanxun Huang

Found 12 papers, 9 papers with code

Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks

no code implementations20 Nov 2024 Yong Xie, Weijie Zheng, Hanxun Huang, Guangnan Ye, Xingjun Ma

Over the past decade, a large number of white-box adversarial robustness evaluation methods (i. e., attacks) have been proposed, ranging from single-step to multi-step methods and from individual to ensemble methods.

Adversarial Robustness Image Classification

Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models

1 code implementation25 Oct 2024 Yige Li, Hanxun Huang, Jiaming Zhang, Xingjun Ma, Yu-Gang Jiang

Specifically, EBYD first exposes the backdoor functionality in the backdoored model through a model preprocessing step called backdoor exposure, and then applies detection and removal methods to the exposed model to identify and eliminate the backdoor features.

backdoor defense Model Editing +1

BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models

1 code implementation23 Aug 2024 Yige Li, Hanxun Huang, Yunhan Zhao, Xingjun Ma, Jun Sun

Generative Large Language Models (LLMs) have made significant strides across various tasks, but they remain vulnerable to backdoor attacks, where specific triggers in the prompt cause the LLM to generate adversary-desired responses.

Data Poisoning text-classification +2

Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers

no code implementations3 Aug 2024 Weijie Zheng, Xingjun Ma, Hanxun Huang, Zuxuan Wu, Yu-Gang Jiang

With the advancement of vision transformers (ViTs) and self-supervised learning (SSL) techniques, pre-trained large ViTs have become the new foundation models for computer vision applications.

Self-Supervised Learning

Shortcuts Everywhere and Nowhere: Exploring Multi-Trigger Backdoor Attacks

1 code implementation27 Jan 2024 Yige Li, Jiabo He, Hanxun Huang, Jun Sun, Xingjun Ma, Yu-Gang Jiang

Backdoor attacks have become a significant threat to the pre-training and deployment of deep neural networks (DNNs).

LDReg: Local Dimensionality Regularized Self-Supervised Learning

2 code implementations19 Jan 2024 Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey

Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities.

Self-Supervised Learning

Distilling Cognitive Backdoor Patterns within an Image

1 code implementation26 Jan 2023 Hanxun Huang, Xingjun Ma, Sarah Erfani, James Bailey

We conduct extensive experiments to show that CD can robustly detect a wide range of advanced backdoor attacks.

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

1 code implementation NeurIPS 2021 Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma

Specifically, we make the following key observations: 1) more parameters (higher model capacity) does not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.

Adversarial Robustness

Neural Architecture Search via Combinatorial Multi-Armed Bandit

no code implementations1 Jan 2021 Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey

NAS can be performed via policy gradient, evolutionary algorithms, differentiable architecture search or tree-search methods.

Evolutionary Algorithms Neural Architecture Search

Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness

1 code implementation24 Jun 2020 Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang

Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.

Adversarial Robustness

Cannot find the paper you are looking for? You can Submit a new open access paper.