no code implementations • ICML 2020 • Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh
In this paper, we study robustness verification and defense with respect to general $\ell_p$ norm perturbations for ensembles of decision trees and stumps.
no code implementations • 11 Nov 2022 • Huan Zhang, Robert J. Webber, Michael Lindsey, Timothy C. Berkelbach, Jonathan Weare
The use of neural network parametrizations to represent the ground state in variational Monte Carlo (VMC) calculations has generated intense interest in recent years.
no code implementations • 7 Nov 2022 • Li-Cheng Lan, Huan Zhang, Ti-Rong Wu, Meng-Yu Tsai, I-Chen Wu, Cho-Jui Hsieh
Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions.
1 code implementation • 30 Oct 2022 • Yujia Huang, Ivan Dario Jimenez Rodriguez, Huan Zhang, Yuanyuan Shi, Yisong Yue
We study how to certifiably enforce forward invariance properties in neural ODEs.
2 code implementations • 13 Oct 2022 • Zhouxing Shi, Yihan Wang, Huan Zhang, Zico Kolter, Cho-Jui Hsieh
In this paper, we develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of Clarke Jacobian via linear bound propagation.
2 code implementations • 11 Aug 2022 • Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter
Our generalized bound propagation method, GCP-CROWN, opens up the opportunity to apply general cutting plane methods for neural network verification while benefiting from the efficiency and GPU acceleration of bound propagation methods.
1 code implementation • 15 Jun 2022 • Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang
Certifiable robustness is a highly desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios, but often demands tedious computations to establish.
no code implementations • 29 May 2022 • Zuxin Liu, Zijian Guo, Zhepeng Cen, Huan Zhang, Jie Tan, Bo Li, Ding Zhao
Safe reinforcement learning (RL) trains a policy to maximize the task reward while satisfying safety constraints.
no code implementations • 14 May 2022 • Wenhao Huang, Haifan Gong, Huan Zhang, Yu Wang, Haofeng Li, Guanbin Li, Hong Shen
CT-based bronchial tree analysis plays an important role in computer-aided diagnosis of respiratory diseases, as it can provide structured information for clinicians.
1 code implementation • ICLR 2022 • Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, Bo Li
We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) the proposed robust aggregation protocols, such as temporal aggregation, can significantly improve the certifications; (2) our certifications of both per-state action stability and the cumulative reward bound are efficient and tight; (3) the certifications differ across training algorithms and environments, reflecting their intrinsic robustness properties.
no code implementations • 16 Dec 2021 • Wenxuan Zhou, Fangyu Liu, Huan Zhang, Muhao Chen
Deep neural networks are often overparameterized and may not generalize well.
no code implementations • 15 Dec 2021 • Jaehui Hwang, Huan Zhang, Jun-Ho Choi, Cho-Jui Hsieh, Jong-Seok Lee
Another observation enabling our defense method is that adversarial perturbations on videos are sensitive to temporal destruction.
no code implementations • NeurIPS 2021 • Leslie Rice, Anna Bair, Huan Zhang, J. Zico Kolter
Several recent works in machine learning have focused on evaluating the test-time robustness of a classifier: how well it performs not just on the target domain it was trained on, but also on perturbed examples.
1 code implementation • NeurIPS 2021 • Yujia Huang, Huan Zhang, Yuanyuan Shi, J. Zico Kolter, Anima Anandkumar
Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant.
no code implementations • 18 Oct 2021 • Alexander Pan, Yongkyun Lee, Huan Zhang, Yize Chen, Yuanyuan Shi
Due to the proliferation of renewable energy and its intrinsic intermittency and stochasticity, current power systems face severe operational challenges.
no code implementations • 29 Sep 2021 • Huan Zhang, Shiqi Wang, Kaidi Xu, Yihan Wang, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter
In this work, we formulate an adversarial attack using a branch-and-bound (BaB) procedure on ReLU neural networks and search adversarial examples in the activation space corresponding to binary variables in a mixed integer programming (MIP) formulation.
no code implementations • NeurIPS 2021 • Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter
We develop $\beta$-CROWN, a new bound propagation based method that can fully encode neuron split constraints in branch-and-bound (BaB) based complete verification via optimizable parameters $\beta$.
no code implementations • ICML Workshop AML 2021 • Mohammad Sadegh Norouzzadeh, Wan-Yi Lin, Leonid Boytsov, Leslie Rice, Huan Zhang, Filipe Condessa, J. Zico Kolter
Most pre-trained classifiers, though they may work extremely well on the domain they were trained upon, are not trained in a robust fashion, and therefore are sensitive to adversarial attacks.
no code implementations • 30 Apr 2021 • Jun-Ho Choi, Huan Zhang, Jun-Hyuk Kim, Cho-Jui Hsieh, Jong-Seok Lee
Recently, the vulnerability of deep image classification models to adversarial attacks has been investigated.
1 code implementation • NAACL 2021 • Chong Zhang, Jieyu Zhao, Huan Zhang, Kai-Wei Chang, Cho-Jui Hsieh
Our method is able to reveal the hidden model biases not directly shown in the test dataset.
2 code implementations • NeurIPS 2021 • Zhouxing Shi, Yihan Wang, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh
Although state-of-the-art (SOTA) methods, including interval bound propagation (IBP) and CROWN-IBP, have per-batch training complexity similar to standard neural network training, they usually require a long warmup schedule with hundreds or thousands of epochs to reach SOTA performance and are thus still costly.
4 code implementations • NeurIPS 2021 • Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter
Compared to the typically tightest but very costly semidefinite programming (SDP) based incomplete verifiers, we obtain higher verified accuracy with three orders of magnitude less verification time.
no code implementations • 16 Feb 2021 • Jian Jin, Xingxing Zhang, Xin Fu, Huan Zhang, Weisi Lin, Jian Lou, Yao Zhao
Experimental results on image classification demonstrate that we successfully find the JND for deep machine vision.
2 code implementations • ICLR 2021 • Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh
We study the robustness of reinforcement learning (RL) with adversarially perturbed state observations, which aligns with the setting of many adversarial attacks on deep reinforcement learning (DRL) and is also important for deploying real-world RL agents under unpredictable sensing noise.
no code implementations • 1 Jan 2021 • Jing Xu, Zhouxing Shi, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, LiWei Wang
We also demonstrate that the perturbation budget generator can produce semantically-meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in a given image.
3 code implementations • ICLR 2021 • Kaidi Xu, Huan Zhang, Shiqi Wang, Yihan Wang, Suman Jana, Xue Lin, Cho-Jui Hsieh
Formal verification of neural networks (NNs) is a challenging and important problem.
1 code implementation • NeurIPS 2020 • Chong Zhang, Huan Zhang, Cho-Jui Hsieh
We study the problem of efficient adversarial attacks on tree based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs).
1 code implementation • 20 Aug 2020 • Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh
In this paper, we study the problem of robustness verification and certified defense with respect to general $\ell_p$ norm perturbations for ensemble decision stumps and trees.
no code implementations • 15 Jun 2020 • Yang You, Yuhui Wang, Huan Zhang, Zhao Zhang, James Demmel, Cho-Jui Hsieh
For the first time, we scale the batch size on ImageNet to at least an order of magnitude larger than all previous work, and provide detailed studies on the performance of many state-of-the-art optimization schemes in this setting.
1 code implementation • 11 May 2020 • Lu Wang, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Yuan Jiang
By constraining adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset, the spanning attack significantly improves the query efficiency of a wide variety of existing black-box attacks.
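To make the subspace construction concrete, here is a minimal NumPy sketch of constraining a perturbation to the span of auxiliary unlabeled examples; the data shapes and the post-hoc projection step are illustrative assumptions rather than the paper's exact attack loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Auxiliary unlabeled examples (rows); their span defines the low-dimensional search space.
aux = rng.normal(size=(20, 784))                          # 20 auxiliary images, flattened
basis, _, _ = np.linalg.svd(aux.T, full_matrices=False)   # columns: orthonormal basis of span(aux)

def project_to_span(delta, basis):
    """Project a perturbation onto span(aux), so queries only explore that 20-dim subspace."""
    return basis @ (basis.T @ delta)

delta = rng.normal(size=784)              # candidate perturbation proposed by any black-box attack
delta_low = project_to_span(delta, basis)
print(delta_low.shape)                    # still 784-dimensional, but confined to a rank-20 subspace
```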
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Bowen Wu, Huan Zhang, Mengyuan Li, Zongsheng Wang, Qihang Feng, Junhong Huang, Baoxun Wang
Many studies have shown that knowledge distillation is effective at transferring knowledge from BERT into smaller models with fewer parameters.
4 code implementations • NeurIPS 2020 • Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh
Several works have shown this vulnerability via adversarial attacks, but existing approaches to improving the robustness of DRL in this setting have had limited success and lack theoretical principles.
5 code implementations • NeurIPS 2020 • Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh
Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.
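As a rough illustration of the bound-propagation family that LiRPA generalizes, below is a minimal NumPy sketch of interval bound propagation (IBP), its simplest member; the toy weights and layer sizes are made up, and this is not the auto_LiRPA API.

```python
import numpy as np

def ibp_affine(W, b, lower, upper):
    """Propagate elementwise bounds [lower, upper] through the affine map x -> W @ x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius          # worst-case interval growth
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so the bounds pass through elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy 2-layer network and an L_inf ball of radius eps around an input x.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
x, eps = rng.normal(size=4), 0.1

l, u = x - eps, x + eps
l, u = ibp_affine(W1, b1, l, u)
l, u = ibp_relu(l, u)
l, u = ibp_affine(W2, b2, l, u)
print("provable output lower bounds:", l)
print("provable output upper bounds:", u)
```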
1 code implementation • ICLR 2020 • Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh
Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding model behavior and obtaining safety guarantees.
2 code implementations • ICLR 2020 • Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Li-Wei Wang
Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and computationally costly.
no code implementations • 20 Dec 2019 • Ting-Ting Liang, Yongtao Wang, Qijie Zhao, Huan Zhang, Zhi Tang, Haibin Ling
Feature pyramids are widely exploited in many detectors to solve the scale variation problem for object detection.
no code implementations • 20 Nov 2019 • Huan Zhang, Zhao Zhang, Mingbo Zhao, Qiaolin Ye, Min Zhang, Meng Wang
Our method can jointly recover the underlying clean data, clean labels, and clean weighting spaces by decomposing the original data, predicted soft labels, or weights into a clean part plus an error part that fits the noise.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, Pushmeet Kohli
This paper aims to quantify and reduce a particular type of bias exhibited by language models: bias in the sentiment of generated text.
no code implementations • 31 Oct 2019 • Huan Zhang, Minhao Cheng, Cho-Jui Hsieh
We propose an algorithm to enhance certified robustness of a deep model ensemble by optimally weighting each base model.
no code implementations • 25 Sep 2019 • Zhen Xu, Baoxun Wang, Huan Zhang, Kexin Qiu, Deyuan Zhang, Chengjie Sun
This paper presents a new methodology for modeling the local semantic distribution of responses to a given query in a human-conversation corpus and, on this basis, explores a dedicated adversarial learning mechanism for training Neural Response Generation (NRG) models to build conversational agents.
no code implementations • 14 Aug 2019 • Yifu Chen, Zongsheng Wang, Bowen Wu, Mengyuan Li, Huan Zhang, Lin Ma, Feng Liu, Qihang Feng, Baoxun Wang
Chinese meme-faces are a special kind of internet subculture widely spread across Chinese social community networks.
no code implementations • 16 Jun 2019 • Yifan Ding, Liqiang Wang, Huan Zhang, Jin-Feng Yi, Deliang Fan, Boqing Gong
As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is the key to the safety of both the Internet and the physical world.
2 code implementations • ICLR 2020 • Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh
In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.
2 code implementations • NeurIPS 2019 • Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning, Cho-Jui Hsieh
We show that there is a simple linear time algorithm for verifying a single tree, and for tree ensembles, the verification problem can be cast as a max-clique problem on a multi-partite graph with bounded boxicity.
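To illustrate only the single-tree case, here is a minimal sketch built on the view that each leaf corresponds to an axis-aligned box, so the smallest $\ell_\infty$ perturbation that changes the prediction is the distance from the input to the nearest differently-labeled box; the hard-coded tree is a toy example, and the ensemble (max-clique) case is not shown.

```python
import numpy as np

INF = float("inf")

# Each leaf of a decision tree is an axis-aligned box plus a predicted label.
# box[i] = (lo, hi) is the interval the leaf allows for feature i.
leaves = [
    ({0: (-INF, 0.5)}, 0),                   # x0 <= 0.5              -> class 0
    ({0: (0.5, INF), 1: (-INF, 1.0)}, 1),    # x0 > 0.5 and x1 <= 1.0 -> class 1
    ({0: (0.5, INF), 1: (1.0, INF)}, 0),     # x0 > 0.5 and x1 > 1.0  -> class 0
]

def linf_distance_to_box(x, box):
    """L_inf distance from x to an axis-aligned box (0 if x already lies inside it)."""
    gaps = []
    for i, xi in enumerate(x):
        lo, hi = box.get(i, (-INF, INF))
        gaps.append(max(lo - xi, xi - hi, 0.0))
    return max(gaps)

def min_adversarial_linf(x, true_label):
    """Smallest L_inf radius at which some leaf with a different label becomes reachable."""
    return min(linf_distance_to_box(x, box) for box, label in leaves if label != true_label)

x = np.array([0.2, 0.0])                       # lands in the first leaf, predicted class 0
print(min_adversarial_linf(x, true_label=0))   # ~0.3: push x0 just past the 0.5 split
```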
3 code implementations • NeurIPS 2019 • Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, Sebastien Bubeck
In this paper, we employ adversarial training to improve the performance of randomized smoothing.
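For context on the smoothing side, below is a minimal NumPy sketch of how a randomized-smoothing prediction is formed by majority vote under Gaussian noise; the nearest-centroid base classifier is a stand-in, and the actual certification procedure additionally runs a statistical test to bound the certified radius.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, num_classes=10, seed=0):
    """Majority vote of the base classifier over Gaussian perturbations of x."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        counts[base_classifier(x + sigma * rng.normal(size=x.shape))] += 1
    return int(np.argmax(counts)), counts

# Stand-in base classifier: nearest of two fixed class centroids.
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
base = lambda z: int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

prediction, counts = smoothed_predict(base, np.array([0.1, 0.2]), num_classes=2)
print(prediction, counts)   # class 0 wins the vote for a point near the first centroid
```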
no code implementations • ICLR 2019 • Suhua Lei, Huan Zhang, Ke Wang, Zhendong Su
In light of a recent study on the mutual influence between robustness and accuracy over 18 different ImageNet models, this paper investigates how training data affect the accuracy and robustness of deep neural networks.
no code implementations • ICLR 2019 • Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh
We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.
no code implementations • ICLR 2019 • Huan Zhang, Hai Zhao
Sequence to sequence (seq2seq) models have become a popular framework for neural sequence prediction.
1 code implementation • ICCV 2019 • Jun-Ho Choi, Huan Zhang, Jun-Hyuk Kim, Cho-Jui Hsieh, Jong-Seok Lee
Single-image super-resolution aims to generate a high-resolution version of a low-resolution image, which serves as an essential component in many computer vision applications.
1 code implementation • 29 Mar 2019 • Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin
Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with initialization inherited from the large model, cannot achieve both adversarial robustness and high standard accuracy.
3 code implementations • 27 Feb 2019 • Hongge Chen, Huan Zhang, Duane Boning, Cho-Jui Hsieh
Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue in tree-based models and how to make tree-based models robust against adversarial examples is still limited.
3 code implementations • NeurIPS 2019 • Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang
This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification.
no code implementations • ICLR 2019 • Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S. Dhillon, Cho-Jui Hsieh
In our paper, we shed some light on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on the test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network.
13 code implementations • NeurIPS 2018 • Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel
Finding minimum distortion of adversarial examples and thus certifying robustness in neural network classifiers for given data points is known to be a challenging problem.
4 code implementations • 28 Oct 2018 • Huan Zhang, Pengchuan Zhang, Cho-Jui Hsieh
The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants and robustness to adversarial attacks.
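As a small illustration of working with the Jacobian directly, here is a PyTorch sketch that evaluates the Jacobian of a toy network at one input and reads off its spectral norm as a pointwise Lipschitz estimate; unlike the paper, it samples a single point rather than bounding the Jacobian over a whole input region.

```python
import torch

# Toy two-layer ReLU network with 4 inputs and 3 outputs.
net = torch.nn.Sequential(
    torch.nn.Linear(4, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 3),
)

x = torch.randn(4)
J = torch.autograd.functional.jacobian(net, x)   # shape (3, 4): d output_i / d input_j
print(J.shape)

# Largest singular value of J = local Lipschitz constant of the network at this particular x.
print(torch.linalg.matrix_norm(J, ord=2))
```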
1 code implementation • 19 Oct 2018 • Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel
We apply extreme value theory on the new formal robustness guarantee and the estimated robustness is called second-order CLEVER score.
2 code implementations • ECCV 2018 • Dong Su, Huan Zhang, Hongge Chen, Jin-Feng Yi, Pin-Yu Chen, Yupeng Gao
Prediction accuracy has been the long-standing and sole standard for comparing the performance of different image classification models, including in the ImageNet competition.
1 code implementation • ICLR 2019 • Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin
When generating adversarial examples to attack deep neural networks (DNNs), the $L_p$ norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example.
1 code implementation • 12 Jul 2018 • Minhao Cheng, Thong Le, Pin-Yu Chen, Jin-Feng Yi, Huan Zhang, Cho-Jui Hsieh
We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.
1 code implementation • 30 May 2018 • Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng
Recent studies have shown that adversarial examples for state-of-the-art image classifiers trained with deep neural networks (DNNs) can be easily generated when the target model is transparent to the attacker, known as the white-box setting.
3 code implementations • 28 May 2018 • Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, Mani Srivastava
Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.
6 code implementations • ICML 2018 • Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, Luca Daniel
Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17].
1 code implementation • 3 Mar 2018 • Minhao Cheng, Jin-Feng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh
In this paper, we study the much more challenging problem of crafting adversarial examples for sequence-to-sequence (seq2seq) models, whose inputs are discrete text strings and outputs have an almost infinite number of possibilities.
1 code implementation • ICLR 2018 • Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel
Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
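Below is a minimal NumPy sketch of the gradient-norm sampling idea behind a CLEVER-style estimate, using a toy linear margin with a known gradient; the actual score fits a reverse Weibull distribution to sampled maxima via maximum likelihood estimation, whereas this sketch simply takes the empirical maximum.

```python
import numpy as np

def clever_style_bound(margin_fn, grad_fn, x, radius=0.5, n_samples=500, seed=0):
    """Untargeted robustness lower bound ~ margin(x) / max ||grad margin|| over an L2 ball."""
    rng = np.random.default_rng(seed)
    max_grad_norm = 0.0
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)                                        # random direction
        d *= radius * rng.uniform() ** (1.0 / x.size) / np.linalg.norm(d)   # uniform point in the ball
        max_grad_norm = max(max_grad_norm, np.linalg.norm(grad_fn(x + d)))
    return margin_fn(x) / max_grad_norm

# Toy two-class "network": margin g(x) = f_c(x) - f_j(x) is linear, so its gradient is constant.
w = np.array([2.0, -1.0])
margin_fn = lambda z: float(w @ z + 0.5)
grad_fn = lambda z: w

print(clever_style_bound(margin_fn, grad_fn, np.array([0.3, 0.1])))   # ~ margin / ||w||
```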
2 code implementations • ACL 2018 • Hongge Chen, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Cho-Jui Hsieh
Our extensive experiments show that our algorithm can successfully craft visually-similar adversarial examples with randomly targeted captions or keywords, and the adversarial examples can be made highly transferable to other image captioning systems.
no code implementations • ECCV 2018 • Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh
In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) by combining two important concepts: randomness and ensemble.
6 code implementations • 13 Sep 2017 • Pin-Yu Chen, Yash Sharma, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh
Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.
5 code implementations • 14 Aug 2017 • Pin-Yu Chen, Huan Zhang, Yash Sharma, Jin-Feng Yi, Cho-Jui Hsieh
However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.
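As a concrete illustration of the zeroth-order idea, here is a minimal NumPy sketch of coordinate-wise finite-difference gradient estimation that uses only loss-value queries; the quadratic "black-box" loss and the plain gradient step are placeholders for the paper's attack objective and its ADAM/Newton-style solver.

```python
import numpy as np

def zoo_coordinate_gradient(loss_fn, x, coords, h=1e-4):
    """Estimate d loss / d x_i for selected coordinates via symmetric differences (queries only)."""
    grad = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2.0 * h)
    return grad

# Toy "black-box" loss: we may query its value but pretend we cannot differentiate it.
target = np.array([0.5, -1.0, 2.0])
loss_fn = lambda z: float(np.sum((z - target) ** 2))

rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(300):
    coords = rng.choice(x.size, size=2, replace=False)   # update a random coordinate batch per step
    x -= 0.1 * zoo_coordinate_gradient(loss_fn, x, coords)
print(x)   # approaches `target`, the minimizer of the queried loss
```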
no code implementations • ICML 2017 • Si Si, Huan Zhang, S. Sathiya Keerthi, Dhruv Mahajan, Inderjit S. Dhillon, Cho-Jui Hsieh
In this paper, we study the gradient boosted decision trees (GBDT) when the output space is high dimensional and sparse.
3 code implementations • 26 Jun 2017 • Huan Zhang, Si Si, Cho-Jui Hsieh
In this paper, we present a novel massively parallel algorithm for accelerating the decision tree building procedure on GPUs (Graphics Processing Units), which is a crucial step in Gradient Boosted Decision Tree (GBDT) and random forests training.
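For context, here is a minimal NumPy sketch of histogram-based split finding, the per-node kernel that GPU tree-boosting implementations parallelize; the gain formula is the standard second-order boosting objective, the data is synthetic, and nothing here reflects the paper's actual GPU kernels.

```python
import numpy as np

def best_split_from_histogram(feature, grad, hess, n_bins=16, reg_lambda=1.0):
    """Accumulate gradient/hessian histograms over feature bins, then scan bins for the best gain."""
    edges = np.quantile(feature, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    bins = np.digitize(feature, edges)                       # bin index per sample, in [0, n_bins)
    g_hist = np.bincount(bins, weights=grad, minlength=n_bins)
    h_hist = np.bincount(bins, weights=hess, minlength=n_bins)

    g_total, h_total = g_hist.sum(), h_hist.sum()
    parent_score = g_total ** 2 / (h_total + reg_lambda)
    best_gain, best_bin, g_left, h_left = 0.0, None, 0.0, 0.0
    for b in range(n_bins - 1):                              # candidate split: bins <= b go left
        g_left += g_hist[b]
        h_left += h_hist[b]
        g_right, h_right = g_total - g_left, h_total - h_left
        gain = g_left ** 2 / (h_left + reg_lambda) + g_right ** 2 / (h_right + reg_lambda) - parent_score
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_gain, best_bin

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
grad = np.where(x > 0.3, -1.0, 1.0) + 0.1 * rng.normal(size=1000)   # signal: best split near x = 0.3
print(best_split_from_histogram(x, grad, np.ones_like(x)))
```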
2 code implementations • NeurIPS 2017 • Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu
On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.
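To make the decentralized update concrete, here is a minimal single-process NumPy simulation of D-PSGD-style rounds on a ring: each worker takes a local gradient step on its own toy quadratic loss and then averages parameters with its neighbors through a doubly stochastic mixing matrix; the losses, step size, and topology are illustrative assumptions.

```python
import numpy as np

n_workers, dim = 4, 5
rng = np.random.default_rng(0)

# Each worker holds its own parameters and a private toy loss ||x - t_i||^2.
params = rng.normal(size=(n_workers, dim))
targets = rng.normal(size=(n_workers, dim))
grad = lambda i, x: 2.0 * (x - targets[i])

# Ring topology: average with yourself and your two neighbors (doubly stochastic mixing matrix W).
W = np.zeros((n_workers, n_workers))
for i in range(n_workers):
    W[i, i] = W[i, (i - 1) % n_workers] = W[i, (i + 1) % n_workers] = 1.0 / 3.0

lr = 0.1
for _ in range(100):
    # 1) local gradient step on every worker, 2) gossip-average with ring neighbors only.
    params = params - lr * np.stack([grad(i, params[i]) for i in range(n_workers)])
    params = W @ params

# The workers' average follows gradient descent on the global objective, so this gap is tiny.
print(np.linalg.norm(params.mean(axis=0) - targets.mean(axis=0)))
```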
1 code implementation • NeurIPS 2016 • Zhao Song, David Woodruff, Huan Zhang
We show that in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor.