Search Results for author: Huan Zhang

Found 55 papers, 36 papers with code

On $\ell_p$-norm Robustness of Ensemble Decision Stumps and Trees

no code implementations ICML 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the robustness verification and defense with respect to general $\ell_p$ norm perturbation for ensemble trees and stumps.

Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training

no code implementations 18 Oct 2021 Alexander Pan, Yongkyun Lee, Huan Zhang, Yize Chen, Yuanyuan Shi

Due to the proliferation of renewable energy and its intrinsic intermittency and stochasticity, current power systems face severe operational challenges.

Decision Making

Fast Certified Robust Training with Short Warmup

1 code implementation 31 Mar 2021 Zhouxing Shi, Yihan Wang, Huan Zhang, JinFeng Yi, Cho-Jui Hsieh

Although state-of-the-art (SOTA) methods, including interval bound propagation (IBP) and CROWN-IBP, have per-batch training complexity similar to standard neural network training, they usually require a long warmup schedule with hundreds or thousands of epochs to reach SOTA performance and are thus still costly.

Adversarial Defense
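
As background for the IBP bounds discussed in this entry, here is a minimal, generic sketch of interval bound propagation through a single affine layer in NumPy. It illustrates only the textbook IBP rule and is not the training code released with this paper.

import numpy as np

def ibp_affine(W, b, x_lb, x_ub):
    # Propagate elementwise bounds x_lb <= x <= x_ub through y = W x + b.
    center = (x_ub + x_lb) / 2.0
    radius = (x_ub - x_lb) / 2.0
    y_center = W @ center + b
    y_radius = np.abs(W) @ radius   # worst case over the input box
    return y_center - y_radius, y_center + y_radius

# Example: bounds for a tiny layer under an l_inf perturbation of radius 0.1.
W = np.array([[1.0, -2.0], [0.5, 0.3]])
b = np.zeros(2)
x = np.array([0.2, -0.4])
lb, ub = ibp_affine(W, b, x - 0.1, x + 0.1)
# A ReLU is handled elementwise: np.maximum(lb, 0.0), np.maximum(ub, 0.0).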

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification

3 code implementations 11 Mar 2021 Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

Recent works in neural network verification show that cheap incomplete verifiers such as CROWN, based upon bound propagations, can effectively be used in Branch-and-Bound (BaB) methods to accelerate complete verification, achieving significant speedups compared to expensive linear programming (LP) based techniques.

Adversarial Attack
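
To make the branch-and-bound (BaB) procedure mentioned in this entry concrete, here is a heavily simplified Python sketch: an unstable ReLU neuron is split into its active and inactive cases, and each subproblem is bounded by a cheap incomplete verifier. Both bound_with_constraints and pick_unstable_neuron are hypothetical placeholders; the actual Beta-CROWN implementation encodes the splits as per-neuron constraints inside the bound propagation itself.

def branch_and_bound(problem, bound_with_constraints, pick_unstable_neuron):
    # Returns True if the verified lower bound of the specification is >= 0
    # on every explored subdomain (i.e., the property holds).
    queue = [[]]                              # each item: a list of (neuron, sign) splits
    while queue:
        splits = queue.pop()
        lower_bound = bound_with_constraints(problem, splits)
        if lower_bound >= 0:
            continue                          # this subdomain is verified
        neuron = pick_unstable_neuron(problem, splits)
        if neuron is None:
            return False                      # nothing left to split; cannot verify
        queue.append(splits + [(neuron, +1)]) # ReLU constrained to be active
        queue.append(splits + [(neuron, -1)]) # ReLU constrained to be inactive
    return True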

Does deep machine vision have just noticeable difference (JND)?

no code implementations 16 Feb 2021 Jian Jin, Xingxing Zhang, Xin Fu, Huan Zhang, Weisi Lin, Jian Lou, Yao Zhao

Experimental results on classification tasks demonstrate that we successfully find and model the JND for deep machine vision.

Neural Network Security, Video Compression

Robust Reinforcement Learning on State Observations with Learned Optimal Adversary

1 code implementation ICLR 2021 Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

We study the robustness of reinforcement learning (RL) with adversarially perturbed state observations, which aligns with the setting of many adversarial attacks on deep reinforcement learning (DRL) and is also important for deploying real-world RL agents under unpredictable sensing noise.

Adversarial Attack, Continuous Control

Learning Contextual Perturbation Budgets for Training Robust Neural Networks

no code implementations 1 Jan 2021 Jing Xu, Zhouxing Shi, Huan Zhang, JinFeng Yi, Cho-Jui Hsieh, LiWei Wang

We also demonstrate that the perturbation budget generator can produce semantically meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in a given image.

An Efficient Adversarial Attack for Tree Ensembles

1 code implementation NeurIPS 2020 Chong Zhang, Huan Zhang, Cho-Jui Hsieh

We study the problem of efficient adversarial attacks on tree-based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs).

Adversarial Attack

On $\ell_p$-norm Robustness of Ensemble Stumps and Trees

1 code implementation 20 Aug 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the problem of robustness verification and certified defense with respect to general $\ell_p$ norm perturbations for ensemble decision stumps and trees.

The Limit of the Batch Size

no code implementations 15 Jun 2020 Yang You, Yuhui Wang, Huan Zhang, Zhao Zhang, James Demmel, Cho-Jui Hsieh

For the first time, we scale the batch size on ImageNet to at least an order of magnitude larger than all previous work, and provide detailed studies on the performance of many state-of-the-art optimization schemes under this setting.

Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data

1 code implementation 11 May 2020 Lu Wang, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Yuan Jiang

By constraining adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset, the spanning attack significantly improves the query efficiency of a wide variety of existing black-box attacks.
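
A rough sketch of the subspace idea described in this entry, under the assumption that the subspace is spanned by the top singular vectors of the unlabeled data; the released code may construct and use the spanning set differently.

import numpy as np

def spanning_basis(unlabeled, k):
    # Orthonormal basis (k columns) for the top-k directions of the unlabeled data.
    X = unlabeled - unlabeled.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T                           # shape: (dim, k)

def project_perturbation(delta, basis):
    # Constrain a candidate perturbation to the low-dimensional subspace.
    return basis @ (basis.T @ delta)

# A black-box attack can now search over k coefficients instead of dim pixels.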

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

3 code implementations NeurIPS 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh

Several works have shown this vulnerability via adversarial attacks, but existing approaches for improving the robustness of DRL under this setting have had limited success and lack theoretical principles.

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

5 code implementations NeurIPS 2020 Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.

Quantization
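
This paper released the auto_LiRPA library; below is a minimal usage sketch following the library's public examples. Argument names and defaults may differ across versions, so treat it as an approximation rather than authoritative API documentation.

import torch
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

net = torch.nn.Sequential(torch.nn.Linear(784, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
x = torch.rand(1, 784)

# Wrap the model so linear relaxation bounds can be propagated through its graph.
bounded_net = BoundedModule(net, torch.empty_like(x))
# An l_inf perturbation of radius 0.03 around the input.
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.03)
bounded_x = BoundedTensor(x, ptb)
# "backward" corresponds to CROWN-style backward bound propagation.
lb, ub = bounded_net.compute_bounds(x=(bounded_x,), method="backward")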

Robustness Verification for Transformers

1 code implementation ICLR 2020 Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh

Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding model behavior and obtaining safety guarantees.

Sentiment Analysis

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

2 code implementations ICLR 2020 Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Li-Wei Wang

Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and time-consuming.

MFPN: A Novel Mixture Feature Pyramid Network of Multiple Architectures for Object Detection

no code implementations 20 Dec 2019 Ting-Ting Liang, Yongtao Wang, Qijie Zhao, Huan Zhang, Zhi Tang, Haibin Ling

Feature pyramids are widely exploited in many detectors to solve the scale variation problem for object detection.

Object Detection

Robust Triple-Matrix-Recovery-Based Auto-Weighted Label Propagation for Classification

no code implementations 20 Nov 2019 Huan Zhang, Zhao Zhang, Mingbo Zhao, Qiaolin Ye, Min Zhang, Meng Wang

Our method can jointly recover the underlying clean data, clean labels, and clean weighting spaces by decomposing the original data, predicted soft labels, or weights into a clean part plus an error part that fits the noise.

General Classification

Enhancing Certifiable Robustness via a Deep Model Ensemble

no code implementations 31 Oct 2019 Huan Zhang, Minhao Cheng, Cho-Jui Hsieh

We propose an algorithm to enhance certified robustness of a deep model ensemble by optimally weighting each base model.

Model Selection

Defending Against Adversarial Attacks Using Random Forests

no code implementations 16 Jun 2019 Yifan Ding, Liqiang Wang, Huan Zhang, Jin-Feng Yi, Deliang Fan, Boqing Gong

As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is the key to the safety of both the Internet and the physical world.

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

2 code implementations ICLR 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh

In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.

Robustness Verification of Tree-based Models

2 code implementations NeurIPS 2019 Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning, Cho-Jui Hsieh

We show that there is a simple linear time algorithm for verifying a single tree, and for tree ensembles, the verification problem can be cast as a max-clique problem on a multi-partite graph with bounded boxicity.
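
To illustrate the single-tree, linear-time part of this result, the sketch below computes the smallest l_inf perturbation that reaches any leaf with a different prediction by collecting the feature intervals along each root-to-leaf path. It is an illustrative reimplementation with a made-up tree encoding, not the released code.

import math

def min_linf_to_change(tree, x, y):
    # tree: nested dicts; internal nodes {"feature", "threshold", "left", "right"},
    # leaves {"label"}; an example goes left when x[feature] <= threshold.
    best = math.inf

    def walk(node, lo, hi):
        nonlocal best
        if "label" in node:                         # reached a leaf
            if node["label"] != y:
                dist = 0.0                          # l_inf distance from x to the leaf's box
                for f in set(list(lo) + list(hi)):
                    l = lo.get(f, -math.inf)
                    h = hi.get(f, math.inf)
                    if x[f] < l:
                        dist = max(dist, l - x[f])
                    elif x[f] > h:
                        dist = max(dist, x[f] - h)
                best = min(best, dist)
            return
        f, t = node["feature"], node["threshold"]
        walk(node["left"], lo, {**hi, f: min(hi.get(f, math.inf), t)})
        walk(node["right"], {**lo, f: max(lo.get(f, -math.inf), t)}, hi)

    walk(tree, {}, {})
    return best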

How Training Data Affect the Accuracy and Robustness of Neural Networks for Image Classification

no code implementations ICLR 2019 Suhua Lei, Huan Zhang, Ke Wang, Zhendong Su

In light of a recent study on the mutual influence between robustness and accuracy over 18 different ImageNet models, this paper investigates how training data affect the accuracy and robustness of deep neural networks.

General Classification, Image Classification

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

no code implementations ICLR 2019 Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks

1 code implementation ICCV 2019 Jun-Ho Choi, Huan Zhang, Jun-Hyuk Kim, Cho-Jui Hsieh, Jong-Seok Lee

Single-image super-resolution aims to generate a high-resolution version of a low-resolution image, which serves as an essential component in many computer vision applications.

Image Super-Resolution

Adversarial Robustness vs Model Compression, or Both?

1 code implementation 29 Mar 2019 Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin

Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with inherited initialization from the large model, cannot achieve both adversarial robustness and high standard accuracy.

Model Compression, Network Pruning

Robust Decision Trees Against Adversarial Examples

3 code implementations 27 Feb 2019 Hongge Chen, Huan Zhang, Duane Boning, Cho-Jui Hsieh

Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue in tree-based models and how to make tree-based models robust against adversarial examples is still limited.

Adversarial Attack, Adversarial Defense

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

2 code implementations NeurIPS 2019 Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang

This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification.

The Limitations of Adversarial Training and the Blind-Spot Attack

no code implementations ICLR 2019 Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S. Dhillon, Cho-Jui Hsieh

In our paper, we shed some light on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on the test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network.

Efficient Neural Network Robustness Certification with General Activation Functions

12 code implementations NeurIPS 2018 Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel

Finding minimum distortion of adversarial examples and thus certifying robustness in neural network classifiers for given data points is known to be a challenging problem.
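
The core ingredient of this certification approach is sandwiching each activation between two linear functions over its pre-activation interval. A minimal sketch for ReLU follows (the paper generalizes this to arbitrary activation functions); the lower-bound slope used here is one common adaptive choice, not necessarily the exact rule from the paper.

def relu_linear_relaxation(l, u):
    # Returns (a_low, b_low), (a_up, b_up) with a_low*z + b_low <= ReLU(z) <= a_up*z + b_up
    # for all z in [l, u].
    if u <= 0.0:                                    # neuron always inactive
        return (0.0, 0.0), (0.0, 0.0)
    if l >= 0.0:                                    # neuron always active
        return (1.0, 0.0), (1.0, 0.0)
    slope = u / (u - l)                             # unstable neuron
    upper = (slope, -slope * l)                     # chord through (l, 0) and (u, u)
    lower = (1.0, 0.0) if u >= -l else (0.0, 0.0)   # pick the tighter of z and 0
    return lower, upper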

RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications

4 code implementations 28 Oct 2018 Huan Zhang, Pengchuan Zhang, Cho-Jui Hsieh

The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants and robustness to adversarial attacks.

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm

1 code implementation 19 Oct 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel

We apply extreme value theory to the new formal robustness guarantee, and the estimated robustness is called the second-order CLEVER score.

Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

2 code implementations ECCV 2018 Dong Su, Huan Zhang, Hongge Chen, Jin-Feng Yi, Pin-Yu Chen, Yupeng Gao

Prediction accuracy has been the long-lasting and sole standard for comparing the performance of different image classification models, including those in the ImageNet competition.

General Classification, Image Classification

Structured Adversarial Attack: Towards General Implementation and Better Interpretability

1 code implementation ICLR 2019 Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin

When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example.

Adversarial Attack

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

1 code implementation 12 Jul 2018 Minhao Cheng, Thong Le, Pin-Yu Chen, Jin-Feng Yi, Huan Zhang, Cho-Jui Hsieh

We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

1 code implementation 30 May 2018 Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng

Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.

GenAttack: Practical Black-box Attacks with Gradient-Free Optimization

2 code implementations 28 May 2018 Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, Mani Srivastava

Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.

Towards Fast Computation of Certified Robustness for ReLU Networks

6 code implementations ICML 2018 Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, Luca Daniel

Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17].

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

1 code implementation 3 Mar 2018 Minhao Cheng, Jin-Feng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh

In this paper, we study the much more challenging problem of crafting adversarial examples for sequence-to-sequence (seq2seq) models, whose inputs are discrete text strings and outputs have an almost infinite number of possibilities.

Image Classification, Machine Translation +2

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

1 code implementation ICLR 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel

Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning

2 code implementations ACL 2018 Hongge Chen, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Cho-Jui Hsieh

Our extensive experiments show that our algorithm can successfully craft visually-similar adversarial examples with randomly targeted captions or keywords, and the adversarial examples can be made highly transferable to other image captioning systems.

Image Captioning

Towards Robust Neural Networks via Random Self-ensemble

no code implementations ECCV 2018 Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh

In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) by combining two important concepts: randomness and ensemble.
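
An illustrative PyTorch sketch of the two ingredients named in this entry, randomness and ensembling: Gaussian noise injected into the forward pass and prediction averaging over several stochastic passes. The placement of the noise layers and the variance schedule in the actual RSE architecture may differ, so this is a conceptual sketch only.

import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    # Adds Gaussian noise at both training and test time.
    def __init__(self, std=0.1):
        super().__init__()
        self.std = std

    def forward(self, x):
        return x + self.std * torch.randn_like(x)

def ensemble_predict(model, x, n=10):
    # Average softmax outputs over n stochastic forward passes.
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n)])
    return probs.mean(dim=0)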

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

1 code implementation 13 Sep 2017 Pin-Yu Chen, Yash Sharma, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

5 code implementations 14 Aug 2017 Pin-Yu Chen, Huan Zhang, Yash Sharma, Jin-Feng Yi, Cho-Jui Hsieh

However, instead of leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.

Adversarial Attack, Adversarial Defense +3
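
The gradient estimation mentioned in this entry can be illustrated with a symmetric finite-difference estimator that only queries the model; ZOO itself adds coordinate-wise stochastic updates, attack-space dimension reduction, and other tricks, so the snippet below shows the core idea only.

import numpy as np

def zoo_coordinate_gradient(loss_fn, x, idx, h=1e-4):
    # Estimate d loss / d x[idx] using only query access to loss_fn (no gradients).
    e = np.zeros_like(x)
    e[idx] = h
    return (loss_fn(x + e) - loss_fn(x - e)) / (2.0 * h)

# Usage sketch (x is a flat numpy array of pixel values):
#   idx = np.random.randint(x.size)
#   x[idx] -= lr * zoo_coordinate_gradient(loss_fn, x, idx)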

GPU-acceleration for Large-scale Tree Boosting

3 code implementations 26 Jun 2017 Huan Zhang, Si Si, Cho-Jui Hsieh

In this paper, we present a novel massively parallel algorithm for accelerating the decision tree building procedure on GPUs (Graphics Processing Units), which is a crucial step in Gradient Boosted Decision Tree (GBDT) and random forests training.

Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent

1 code implementation NeurIPS 2017 Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu

On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.
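
A compact sketch of one synchronous D-PSGD iteration as described in the paper: each worker averages its model with its neighbors through a doubly stochastic mixing matrix and then takes a local stochastic gradient step. Written for clarity rather than performance; a real implementation overlaps communication with computation.

import numpy as np

def dpsgd_step(params, W, grads, lr):
    # params: (n_workers, dim) local models; W: (n_workers, n_workers) doubly stochastic
    # mixing matrix; grads: stochastic gradients evaluated at each worker's local model.
    mixed = W @ params            # gossip averaging with neighbors
    return mixed - lr * grads     # local SGD step

# Example: 3 fully connected workers, each averaging equally with the others.
W = np.full((3, 3), 1.0 / 3.0)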

Sublinear Time Orthogonal Tensor Decomposition

1 code implementation NeurIPS 2016 Zhao Song, David Woodruff, Huan Zhang

We show that in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor.

Tensor Decomposition
