Search Results for author: Kaidi Xu

Found 61 papers, 25 papers with code

Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks

no code implementations ICML 2020 Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly

In this paper, we study the problem of constrained min-max optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.

E3: Ensemble of Expert Embedders for Adapting Synthetic Image Detectors to New Generators Using Limited Data

1 code implementation 12 Apr 2024 Aref Azizpour, Tai D. Nguyen, Manil Shrestha, Kaidi Xu, Edward Kim, Matthew C. Stamm

To address these issues, we introduce the Ensemble of Expert Embedders (E3), a novel continual learning framework for updating synthetic image detectors.

Continual Learning Synthetic Image Detection +1

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

no code implementations 18 Mar 2024 Junyuan Hong, Jinhao Duan, Chenhui Zhang, Zhangheng Li, Chulin Xie, Kelsey Lieberman, James Diffenderfer, Brian Bartoldson, Ajay Jaiswal, Kaidi Xu, Bhavya Kailkhura, Dan Hendrycks, Dawn Song, Zhangyang Wang, Bo Li

While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected.

Ethics Fairness +1

Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking

no code implementations 15 Mar 2024 Weixiang Sun, Yixin Liu, Zhiling Yan, Kaidi Xu, Lichao Sun

With the rapid growth of artificial intelligence (AI) in healthcare, there has been a significant increase in the generation and storage of sensitive medical data.

Unveiling Typographic Deceptions: Insights of the Typographic Vulnerability in Large Vision-Language Model

no code implementations 29 Feb 2024 Hao Cheng, Erjia Xiao, Jindong Gu, Le Yang, Jinhao Duan, Jize Zhang, Jiahang Cao, Kaidi Xu, Renjing Xu

Large Vision-Language Models (LVLMs) rely on vision encoders and Large Language Models (LLMs) to exhibit remarkable capabilities on various multi-modal tasks in the joint space of vision and language.

Language Modelling Object Recognition +1

GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations

1 code implementation 19 Feb 2024 Jinhao Duan, Renming Zhang, James Diffenderfer, Bhavya Kailkhura, Lichao Sun, Elias Stengel-Eskin, Mohit Bansal, Tianlong Chen, Kaidi Xu

As Large Language Models (LLMs) are integrated into critical real-world applications, their strategic and logical reasoning abilities are increasingly crucial.

Card Games Logical Reasoning

Dynamic Adversarial Attacks on Autonomous Driving Systems

no code implementations 10 Dec 2023 Amirhosein Chahe, Chenan Wang, Abhishek Jeyapratap, Kaidi Xu, Lifeng Zhou

Moreover, our method utilizes dynamic patches displayed on a screen, allowing for adaptive changes and movement, enhancing the flexibility and performance of the attack.

Adversarial Attack Autonomous Driving +3

A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly

no code implementations 4 Dec 2023 Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, Yue Zhang

In the meantime, LLMs have also gained traction in the security community, revealing security vulnerabilities and showcasing their potential in security-related tasks.

Language Modelling Large Language Model +3

Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?

no code implementations 30 Nov 2023 Zhengyue Zhao, Jinhao Duan, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Xing Hu

Although these studies have demonstrated the ability to protect images, it is essential to consider that these methods may not be entirely applicable in real-world scenarios.

Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise

1 code implementation 22 Nov 2023 Yixin Liu, Kaidi Xu, Xun Chen, Lichao Sun

Observing that simply removing the adversarial noise from the training process of the defensive noise can improve the performance of robust unlearnable examples, we identify that the surrogate model's robustness alone contributes to this performance.

PINNs-Based Uncertainty Quantification for Transient Stability Analysis

no code implementations 21 Nov 2023 Ren Wang, Ming Zhong, Kaidi Xu, Lola Giráldez Sánchez-Cortés, Ignacio de Cominges Guerra

This paper addresses the challenge of transient stability in power systems with missing parameters and uncertainty propagation in swing equations.

Uncertainty Quantification

Pursing the Sparse Limitation of Spiking Deep Learning Structures

no code implementations 18 Nov 2023 Hao Cheng, Jiahang Cao, Erjia Xiao, Mengshu Sun, Le Yang, Jize Zhang, Xue Lin, Bhavya Kailkhura, Kaidi Xu, Renjing Xu

It posits that within dense neural networks, there exist winning tickets or subnetworks that are sparser but do not compromise performance.

Federated Reinforcement Learning for Resource Allocation in V2X Networks

no code implementations 15 Oct 2023 Kaidi Xu, Shenglong Zhou, Geoffrey Ye Li

In this paper, we explore resource allocation in a V2X network under the framework of federated reinforcement learning (FRL).

Federated Learning reinforcement-learning

RBFormer: Improve Adversarial Robustness of Transformer by Robust Bias

no code implementations 23 Sep 2023 Hao Cheng, Jinhao Duan, Hui Li, Lyutianyang Zhang, Jiahang Cao, Ping Wang, Jize Zhang, Kaidi Xu, Renjing Xu

Recently, there has been a surge of interest and attention in Transformer-based structures, such as Vision Transformer (ViT) and Vision Multilayer Perceptron (VMLP).

Adversarial Robustness

Semantic Adversarial Attacks via Diffusion Models

1 code implementation 14 Sep 2023 Chenan Wang, Jinhao Duan, Chaowei Xiao, Edward Kim, Matthew Stamm, Kaidi Xu

Then there are two variants of this framework: 1) the Semantic Transformation (ST) approach fine-tunes the latent space of the generated image and/or the diffusion model itself; 2) the Latent Masking (LM) approach masks the latent space with another target image and local backpropagation-based interpretation methods.

Adversarial Attack

Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing

no code implementations 31 Aug 2023 Shenglong Zhou, Kaidi Xu, Geoffrey Ye Li

Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging, as there is no central server to coordinate the training process.

Compressive Sensing Computational Efficiency +1

Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack

no code implementations ICCV 2023 Ningfei Wang, Yunpeng Luo, Takami Sato, Kaidi Xu, Qi Alfred Chen

In this work, we conduct the first measurement study on whether and how effectively the existing designs can lead to system-level effects, especially for the STOP sign-evasion attacks due to their popularity and severity.

Autonomous Driving

Exposing the Fake: Effective Diffusion-Generated Images Detection

no code implementations 12 Jul 2023 RuiPeng Ma, Jinhao Duan, Fei Kong, Xiaoshuang Shi, Kaidi Xu

Image synthesis has seen significant advancements with the advent of diffusion-based generative models like Denoising Diffusion Probabilistic Models (DDPM) and text-to-image diffusion models.

Denoising Image Generation

Shifting Attention to Relevance: Towards the Uncertainty Estimation of Large Language Models

1 code implementation 3 Jul 2023 Jinhao Duan, Hao Cheng, Shiqi Wang, Alex Zavalny, Chenan Wang, Renjing Xu, Bhavya Kailkhura, Kaidi Xu

While Large Language Models (LLMs) have demonstrated remarkable potential in natural language generation and instruction following, a persistent challenge lies in their susceptibility to "hallucinations", which erodes trust in their outputs.

Instruction Following Question Answering +4

Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training

1 code implementation 3 Jun 2023 Pucheng Dang, Xing Hu, Kaidi Xu, Jinhao Duan, Di Huang, Husheng Han, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen

Unlearning techniques are proposed to prevent third parties from exploiting unauthorized data; they generate unlearnable samples by adding imperceptible perturbations to data before it is publicly released.

Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation

no code implementations 2 Jun 2023 Zhengyue Zhao, Jinhao Duan, Xing Hu, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen

This imperceptible protective noise makes the data almost unlearnable for diffusion models, i.e., diffusion models trained or fine-tuned on the protected data cannot generate high-quality and diverse images related to the protected training data.

Denoising Image Generation

Caterpillar: A Pure-MLP Architecture with Shifted-Pillars-Concatenation

no code implementations 28 May 2023 Jin Sun, Xiaoshuang Shi, Zhiyuan Wang, Kaidi Xu, Heng Tao Shen, Xiaofeng Zhu

Then, we build a pure-MLP architecture called Caterpillar by replacing the convolutional layer with the SPC module in a hybrid model of sMLPNet.

Computational Efficiency Inductive Bias

An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization

1 code implementation 26 May 2023 Fei Kong, Jinhao Duan, RuiPeng Ma, HengTao Shen, Xiaofeng Zhu, Xiaoshuang Shi, Kaidi Xu

Therefore, we also explore the robustness of diffusion models to MIA in the text-to-speech (TTS) task, which is an audio generation task.

Audio Generation Inference Attack +1

Improve Video Representation with Temporal Adversarial Augmentation

no code implementations 28 Apr 2023 Jinhao Duan, Quanfu Fan, Hao Cheng, Xiaoshuang Shi, Kaidi Xu

In this paper, we introduce Temporal Adversarial Augmentation (TA), a novel video augmentation technique that utilizes temporal attention.

Are Diffusion Models Vulnerable to Membership Inference Attacks?

1 code implementation 2 Feb 2023 Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, Kaidi Xu

In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern.

Image Generation

Distributed-Training-and-Execution Multi-Agent Reinforcement Learning for Power Control in HetNet

1 code implementation 15 Dec 2022 Kaidi Xu, Nguyen Van Huynh, Geoffrey Ye Li

To overcome these limitations, we propose a multi-agent deep reinforcement learning (MADRL) based power control scheme for the HetNet, where each access point makes power control decisions independently based on local information.

Multi-agent Reinforcement Learning Q-Learning +2

General Cutting Planes for Bound-Propagation-Based Neural Network Verification

2 code implementations 11 Aug 2022 huan zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

Our generalized bound propagation method, GCP-CROWN, opens up the opportunity to apply general cutting plane methods for neural network verification while benefiting from the efficiency and GPU acceleration of bound propagation methods.

Toward Robust Spiking Neural Network Against Adversarial Perturbation

no code implementations 12 Apr 2022 Ling Liang, Kaidi Xu, Xing Hu, Lei Deng, Yuan Xie

To the best of our knowledge, this is the first analysis on robust training of SNNs.

ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers

no code implementations NeurIPS 2021 Husheng Han, Kaidi Xu, Xing Hu, Xiaobing Chen, Ling Liang, Zidong Du, Qi Guo, Yanzhi Wang, Yunji Chen

Our experimental results show that the certified accuracy is increased from 36.3% (the state-of-the-art certified detection) to 60.4% on the ImageNet dataset, largely pushing the certified defenses for practical use.

A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks

no code implementations29 Sep 2021 huan zhang, Shiqi Wang, Kaidi Xu, Yihan Wang, Suman Jana, Cho-Jui Hsieh, J Zico Kolter

In this work, we formulate an adversarial attack using a branch-and-bound (BaB) procedure on ReLU neural networks and search adversarial examples in the activation space corresponding to binary variables in a mixed integer programming (MIP) formulation.

Adversarial Attack

Generating Realistic Physical Adversarial Examples by Patch Transformer Network

no code implementations29 Sep 2021 Quanfu Fan, Kaidi Xu, Chun-Fu Chen, Sijia Liu, Gaoyuan Zhang, David Daniel Cox, Xue Lin

Physical adversarial attacks apply carefully crafted adversarial perturbations onto real objects to maliciously alter the prediction of object classifiers or detectors.

Object

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification

no code implementations NeurIPS 2021 Shiqi Wang, huan zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J Zico Kolter

We develop $\beta$-CROWN, a new bound propagation based method that can fully encode neuron split constraints in branch-and-bound (BaB) based complete verification via optimizable parameters $\beta$.

Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression

no code implementations 15 Jun 2021 Sheng Lin, Wei Jiang, Wei Wang, Kaidi Xu, Yanzhi Wang, Shan Liu, Songnan Li

Compressing Deep Neural Network (DNN) models to alleviate the storage and computation requirements is essential for practical applications, especially for resource limited devices.

Neural Network Compression

Mixture of Robust Experts (MoRE): A Robust Denoising Method towards multiple perturbations

no code implementations 21 Apr 2021 Kaidi Xu, Chenan Wang, Hao Cheng, Bhavya Kailkhura, Xue Lin, Ryan Goldhahn

To tackle the susceptibility of deep neural networks to adversarial examples, adversarial training has been proposed, which provides a notion of robustness via an inner maximization problem (generating first-order adversarial examples) embedded within the outer minimization of the training loss.

Adversarial Robustness Denoising
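To make the inner-maximization phrasing in the excerpt above concrete, here is a minimal projected gradient descent (PGD) sketch for a logistic-regression loss, where the input gradient has a closed form. It illustrates the generic adversarial-training inner loop only, not the mixture-of-experts method this paper proposes; the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def pgd_inner_max(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Inner maximization of adversarial training for the logistic loss
    log(1 + exp(-y * (w @ x + b))): find a perturbed input x_adv within
    an L-infinity ball of radius eps around x that increases the loss."""
    x_adv = x.copy()
    for _ in range(steps):
        margin = y * (w @ x_adv + b)
        grad_x = -y / (1.0 + np.exp(margin)) * w   # analytic d(loss)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad_x)    # first-order ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the eps-ball
    return x_adv

# The outer minimization would then update (w, b) on x_adv, e.g. one SGD
# step of the same logistic loss per minibatch (standard adversarial training).
rng = np.random.default_rng(0)
w, b = rng.standard_normal(4), 0.0
x, y = rng.standard_normal(4), 1.0
print(pgd_inner_max(x, y, w, b))
```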

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification

4 code implementations NeurIPS 2021 Shiqi Wang, huan zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

Compared to the typically tightest but very costly semidefinite programming (SDP) based incomplete verifiers, we obtain higher verified accuracy with three orders of magnitudes less verification time.

Adversarial Attack

On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning

1 code implementation ICLR 2021 Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, Meng Wang

Despite the generalization power of the meta-model, it remains elusive that how adversarial robustness can be maintained by MAML in few-shot learning.

Adversarial Attack Adversarial Robustness +3

Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework

no code implementations 21 Dec 2020 Pranay Sharma, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Xue Lin, Pramod K. Varshney

In this work, we focus on the study of stochastic zeroth-order (ZO) optimization which does not require first-order gradient information and uses only function evaluations.
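As a rough illustration of the zeroth-order idea described above (not the hybrid estimator this paper proposes), the sketch below approximates a gradient purely from function evaluations using random-direction finite differences; the test function, the smoothing parameter `mu`, and the sample count are illustrative assumptions.

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-3, num_samples=20, seed=None):
    """Two-point random-direction gradient estimator: approximates the
    gradient of f at x using only function evaluations (no gradients)."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)             # random probing direction
        grad += (f(x + mu * u) - f(x)) / mu * u      # finite difference along u
    return grad / num_samples

# Illustrative usage: zeroth-order gradient descent on a simple quadratic.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(300):
    x -= 0.05 * zo_gradient_estimate(f, x)
print(np.round(x, 2))   # should approach the all-ones minimizer
```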

Low-Complexity Joint Power Allocation and Trajectory Design for UAV-Enabled Secure Communications with Power Splitting

no code implementations 23 Aug 2020 Kaidi Xu, Ming-Min Zhao, Yunlong Cai, Lajos Hanzo

An unmanned aerial vehicle (UAV)-aided secure communication system is conceived and investigated, where the UAV transmits legitimate information to a ground user in the presence of an eavesdropper (Eve).

Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems

1 code implementation 15 Jun 2020 Qiyu Hu, Yunlong Cai, Qingjiang Shi, Kaidi Xu, Guanding Yu, Zhi Ding

Then, we implement the proposed deep-unfolding framework to solve the sum-rate maximization problem for precoding design in MU-MIMO systems.

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

5 code implementations NeurIPS 2020 Kaidi Xu, Zhouxing Shi, huan zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.

Quantization
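LiRPA computes provable linear output bounds under input perturbation and is released by the authors as the auto_LiRPA library. As a much simpler stand-in that conveys the general idea of propagating sound bounds layer by layer, here is a hedged interval bound propagation (IBP) sketch for a small fully connected ReLU network; this is not the paper's LiRPA algorithm, IBP gives looser bounds than linear relaxations, and the network shapes and epsilon radius are made-up examples.

```python
import numpy as np

def ibp_bounds(weights, biases, x, eps):
    """Interval bound propagation through a fully connected ReLU network:
    returns elementwise lower/upper bounds on the output, valid for every
    input within an L-infinity ball of radius eps around x."""
    lb, ub = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (ub + lb) / 2.0
        radius = (ub - lb) / 2.0
        mid = W @ center + b
        dev = np.abs(W) @ radius           # worst-case deviation of each output
        lb, ub = mid - dev, mid + dev
        if i < len(weights) - 1:           # ReLU on hidden layers only
            lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
    return lb, ub

# Tiny made-up network: 4 -> 8 -> 3, random weights, eps = 0.1.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
bs = [np.zeros(8), np.zeros(3)]
lo, hi = ibp_bounds(Ws, bs, x=np.ones(4), eps=0.1)
print(lo, hi)   # provable (but loose) output bounds
```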

Defending against Backdoor Attack on Deep Neural Networks

no code implementations 26 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xue Lin

Although deep neural networks (DNNs) have achieved a great success in various computer vision tasks, it is recently found that they are vulnerable to adversarial attacks.

Backdoor Attack Data Poisoning

Towards an Efficient and General Framework of Robust Training for Graph Neural Networks

no code implementations 25 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, Xue Lin

To overcome these limitations, we propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and efficient manner.

Adversarial T-shirt! Evading Person Detectors in A Physical World

1 code implementation ECCV 2020 Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, Xue Lin

To the best of our knowledge, this is the first work that models the effect of deformation for designing physical adversarial examples with respect to non-rigid objects such as T-shirts.

ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization

1 code implementation NeurIPS 2019 Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David Cox

In this paper, we propose a zeroth-order AdaMM (ZO-AdaMM) algorithm, that generalizes AdaMM to the gradient-free regime.

Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML

1 code implementation 30 Sep 2019 Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly

In this paper, we study the problem of constrained robust (min-max) optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.

REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for Object Detection on FPGAs

no code implementations 29 Sep 2019 Caiwen Ding, Shuo Wang, Ning Liu, Kaidi Xu, Yanzhi Wang, Yun Liang

To achieve real-time, highly-efficient implementations on FPGA, we present the detailed hardware implementation of block circulant matrices on CONV layers and develop an efficient processing element (PE) structure supporting the heterogeneous weight quantization, CONV dataflow and pipelining techniques, design optimization, and a template-based automatic synthesis framework to optimally exploit hardware resource.

Model Compression object-detection +2

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

1 code implementation ICCV 2019 Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, Xue Lin

Robust machine learning is currently one of the most prominent topics which could potentially help shaping a future of advanced AI platforms that not only perform well in average cases but also in worst cases or adverse situations.

Adversarial Attack Bayesian Optimization +1

Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective

1 code implementation 10 Jun 2019 Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin

Graph neural networks (GNNs) which apply the deep neural networks to graph data have achieved significant performance for the task of semi-supervised node classification.

Adversarial Robustness Classification +2

Brain-inspired reverse adversarial examples

no code implementations 28 May 2019 Shaokai Ye, Sia Huat Tan, Kaidi Xu, Yanzhi Wang, Chenglong Bao, Kaisheng Ma

In contrast, current state-of-the-art deep learning approaches depend heavily on the variety of training samples and the capacity of the network.

Quantization

Interpreting Adversarial Examples by Activation Promotion and Suppression

no code implementations 3 Apr 2019 Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, Xue Lin

It is widely known that convolutional neural networks (CNNs) are vulnerable to adversarial examples: images with imperceptible perturbations crafted to fool classifiers.

Adversarial Robustness

Adversarial Robustness vs Model Compression, or Both?

1 code implementation 29 Mar 2019 Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, huan zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin

Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with inherited initialization from the large model, cannot achieve both adversarial robustness and high standard accuracy.

Adversarial Robustness Model Compression +1

Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM

2 code implementations 23 Mar 2019 Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, Zhengang Li, Kaidi Xu, Wujie Wen, Sijia Liu, Jian Tang, Makan Fardad, Xue Lin, Yongpan Liu, Yanzhi Wang

A recent work developed a systematic framework of DNN weight pruning using the advanced optimization technique ADMM (Alternating Direction Method of Multipliers), achieving one of the state-of-the-art weight pruning results.

Model Compression Quantization

Structured Adversarial Attack: Towards General Implementation and Better Interpretability

1 code implementation ICLR 2019 Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, huan zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin

When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example.

Adversarial Attack
