Search Results for author: Huan Zhang

Found 90 papers, 56 papers with code

On Lp-norm Robustness of Ensemble Decision Stumps and Trees

no code implementations ICML 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the robustness verification and defense with respect to general $\ell_p$ norm perturbation for ensemble trees and stumps.

Using Explainable AI and Transfer Learning to understand and predict the maintenance of Atlantic blocking with limited observational data

2 code implementations 12 Apr 2024 Huan Zhang, Justin Finkel, Dorian S. Abbot, Edwin P. Gerber, Jonathan Weare

This work demonstrates the potential for machine learning methods to extract meaningful precursors of extreme weather events and achieve better prediction using limited observational data.

Blocking Explainable artificial intelligence +1

Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation for Efficient Synthesis and Verification

1 code implementation 11 Apr 2024 Lujie Yang, Hongkai Dai, Zhouxing Shi, Cho-Jui Hsieh, Russ Tedrake, Huan Zhang

The flexibility and efficiency of our framework allow us to demonstrate Lyapunov-stable output feedback control with synthesized NN-based controllers and NN-based observers with formal stability guarantees, for the first time in literature.

Sequential-in-time training of nonlinear parametrizations for solving time-dependent partial differential equations

no code implementations 1 Apr 2024 Huan Zhang, Yifan Chen, Eric Vanden-Eijnden, Benjamin Peherstorfer

Sequential-in-time methods solve a sequence of training problems to fit nonlinear parametrizations such as neural networks to approximate solution trajectories of partial differential equations over time.

WavCraft: Audio Editing and Generation with Natural Language Prompts

1 code implementation 14 Mar 2024 Jinhua Liang, Huan Zhang, Haohe Liu, Yin Cao, Qiuqiang Kong, Xubo Liu, Wenwu Wang, Mark D. Plumbley, Huy Phan, Emmanouil Benetos

We introduce WavCraft, a collective system that leverages large language models (LLMs) to connect diverse task-specific models for audio content creation and editing.

In-Context Learning

A Safe Screening Rule with Bi-level Optimization of $\nu$ Support Vector Machine

no code implementations 4 Mar 2024 Zhiji Yang, Wanyi Chen, Huan Zhang, Yitian Xu, Lei Shi, Jianhua Zhao

The support vector machine (SVM) has achieved many successes in machine learning, especially for small-sample problems.

COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability

1 code implementation 13 Feb 2024 Xingang Guo, Fangxu Yu, Huan Zhang, Lianhui Qin, Bin Hu

Based on this connection, we adapt Energy-based Constrained Decoding with Langevin Dynamics (COLD), a state-of-the-art, highly efficient algorithm for controllable text generation, and introduce the COLD-Attack framework, which unifies and automates the search for adversarial LLM attacks under a variety of control requirements such as fluency, stealthiness, sentiment, and left-right-coherence.

Text Generation
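
At its core, COLD performs Langevin dynamics over continuous token logits under a composite energy function. The toy sketch below (my own illustration, not the authors' implementation) shows the generic Langevin update applied to a logits tensor with a hypothetical stand-in energy.

    import torch
    import torch.nn.functional as F

    def langevin_step(y, energy_fn, step_size=0.1):
        # One Langevin update: a gradient step on the energy plus Gaussian noise.
        y = y.detach().requires_grad_(True)
        grad, = torch.autograd.grad(energy_fn(y), y)
        noise = torch.randn_like(y) * (2 * step_size) ** 0.5
        return (y - step_size * grad + noise).detach()

    # Toy energy: pull a soft token sequence (8 positions, vocab of 100) toward a
    # target distribution, a stand-in for the fluency / attack-success energies
    # that COLD-Attack combines.
    target = torch.softmax(torch.randn(8, 100), dim=-1)
    energy_fn = lambda y: -(target * F.log_softmax(y, dim=-1)).sum(-1).mean()

    logits = torch.randn(8, 100)
    for _ in range(200):
        logits = langevin_step(logits, energy_fn)
    tokens = logits.argmax(dim=-1)  # discretize the optimized soft sequence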

Solving Expensive Optimization Problems in Dynamic Environments with Meta-learning

no code implementations 19 Oct 2023 Huan Zhang, Jinliang Ding, Liang Feng, Kay Chen Tan, Ke Li

Although data-driven evolutionary optimization and Bayesian optimization (BO) approaches have shown promise in solving expensive optimization problems in static environments, attempts to develop such approaches for dynamic environments remain largely unexplored.

Bayesian Optimization Meta-Learning

HoneyBee: Progressive Instruction Finetuning of Large Language Models for Materials Science

1 code implementation 12 Oct 2023 Yu Song, Santiago Miret, Huan Zhang, Bang Liu

We propose an instruction-based process for trustworthy data curation in materials science (MatSci-Instruct), which we then apply to finetune a LLaMa-based language model targeted for materials science (HoneyBee).

Language Modelling

Symbolic Music Representations for Classification Tasks: A Systematic Evaluation

1 code implementation 5 Sep 2023 Huan Zhang, Emmanouil Karystinaios, Simon Dixon, Gerhard Widmer, Carlos Eduardo Cancino-Chacón

Music Information Retrieval (MIR) has seen a recent surge in deep learning-based approaches, which often involve encoding symbolic music (i.e., music represented in terms of discrete note events) in an image-like or language-like fashion.

Classification Information Retrieval +3

DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing

1 code implementation 28 Aug 2023 Jiawei Zhang, Zhongzhu Chen, Huan Zhang, Chaowei Xiao, Bo Li

Diffusion models have been leveraged to perform adversarial purification and thus provide both empirical and certified robustness for a standard model.

Denoising

Robust Mixture-of-Expert Training for Convolutional Neural Networks

1 code implementation ICCV 2023 Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu

Since the lack of robustness has become one of the main hurdles for CNNs, in this paper we ask: How to adversarially robustify a CNN-based MoE model?

Adversarial Robustness

Improving the Generalization Ability in Essay Coherence Evaluation through Monotonic Constraints

no code implementations 25 Jul 2023 Chen Zheng, Huan Zhang, Yan Zhao, Yuxuan Lai

To address these concerns, we propose a coherence scoring model consisting of a regression model with two feature extractors: a local coherence discriminative model and a punctuation correction model.

Coherence Evaluation regression +1

Can Agents Run Relay Race with Strangers? Generalization of RL to Out-of-Distribution Trajectories

no code implementations 26 Apr 2023 Li-Cheng Lan, Huan Zhang, Cho-Jui Hsieh

With extensive experimental evaluation, we show the prevalence of generalization failure on controllable states from stranger agents.

Reinforcement Learning (RL)

Provably Bounding Neural Network Preimages

3 code implementations NeurIPS 2023 Suhas Kotha, Christopher Brix, Zico Kolter, Krishnamurthy Dvijotham, Huan Zhang

Most work on the formal verification of neural networks has focused on bounding the set of outputs that correspond to a given set of inputs (for example, bounded perturbations of a nominal input).

Adversarial Robustness

Understanding and eliminating spurious modes in variational Monte Carlo using collective variables

no code implementations 11 Nov 2022 Huan Zhang, Robert J. Webber, Michael Lindsey, Timothy C. Berkelbach, Jonathan Weare

The use of neural network parametrizations to represent the ground state in variational Monte Carlo (VMC) calculations has generated intense interest in recent years.

Variational Monte Carlo

Are AlphaZero-like Agents Robust to Adversarial Perturbations?

1 code implementation 7 Nov 2022 Li-Cheng Lan, Huan Zhang, Ti-Rong Wu, Meng-Yu Tsai, I-Chen Wu, Cho-Jui Hsieh

Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions.

Adversarial Attack Game of Go

FI-ODE: Certifiably Robust Forward Invariance in Neural ODEs

1 code implementation 30 Oct 2022 Yujia Huang, Ivan Dario Jimenez Rodriguez, Huan Zhang, Yuanyuan Shi, Yisong Yue

Forward invariance is a long-studied property in control theory that is used to certify that a dynamical system stays within some pre-specified set of states for all time, and it also admits robustness guarantees (e.g., the certificate holds under perturbations).

Adversarial Robustness Continuous Control +1

Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation

2 code implementations 13 Oct 2022 Zhouxing Shi, Yihan Wang, Huan Zhang, Zico Kolter, Cho-Jui Hsieh

In this paper, we develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of Clarke Jacobian via linear bound propagation.

Fairness
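
The paper tightly upper bounds the norm of the Clarke Jacobian by linear bound propagation. For contrast, here is a minimal sketch (my own illustration, not the paper's algorithm) of the classical, much looser global upper bound obtained as the product of layer-wise induced norms for a ReLU MLP; local bound-propagation methods can be far tighter because they exploit the activation patterns reachable near a given input.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, 256), nn.ReLU(),
                        nn.Linear(256, 10))

    def naive_lipschitz_linf(model):
        # Global Lipschitz upper bound w.r.t. the l-inf norm: product of induced
        # infinity-norms of the weight matrices (ReLU is 1-Lipschitz).
        bound = 1.0
        for layer in model:
            if isinstance(layer, nn.Linear):
                # induced inf-norm = maximum absolute row sum
                bound *= layer.weight.abs().sum(dim=1).max().item()
        return bound

    print(naive_lipschitz_linf(net))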

General Cutting Planes for Bound-Propagation-Based Neural Network Verification

2 code implementations 11 Aug 2022 Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

Our generalized bound propagation method, GCP-CROWN, opens up the opportunity to apply general cutting plane methods for neural network verification while benefiting from the efficiency and GPU acceleration of bound propagation methods.

Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness

1 code implementation 15 Jun 2022 Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang

Certifiable robustness is a highly desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios, but often demands tedious computations to establish.

On the Robustness of Safe Reinforcement Learning under Observational Perturbations

1 code implementation 29 May 2022 Zuxin Liu, Zijian Guo, Zhepeng Cen, Huan Zhang, Jie Tan, Bo Li, Ding Zhao

One interesting and counter-intuitive finding is that the maximum reward attack is strong, as it can both induce unsafe behaviors and make the attack stealthy by maintaining the reward.

Adversarial Attack reinforcement-learning +2

BronchusNet: Region and Structure Prior Embedded Representation Learning for Bronchus Segmentation and Classification

no code implementations 14 May 2022 Wenhao Huang, Haifan Gong, Huan Zhang, Yu Wang, Haofeng Li, Guanbin Li, Hong Shen

CT-based bronchial tree analysis plays an important role in the computer-aided diagnosis for respiratory diseases, as it could provide structured information for clinicians.

Classification Graph Learning +3

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks

1 code implementation ICLR 2022 Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, Bo Li

We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) The proposed robust aggregation protocols such as temporal aggregation can significantly improve the certifications; (2) Our certification for both per-state action stability and cumulative reward bound are efficient and tight; (3) The certification for different training algorithms and environments are different, implying their intrinsic robustness properties.

Offline RL reinforcement-learning +1

Sharpness-Aware Minimization with Dynamic Reweighting

no code implementations 16 Dec 2021 Wenxuan Zhou, Fangyu Liu, Huan Zhang, Muhao Chen

Deep neural networks are often overparameterized and may not easily achieve model generalization.

Natural Language Understanding

Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks

1 code implementation 15 Dec 2021 Jaehui Hwang, Huan Zhang, Jun-Ho Choi, Cho-Jui Hsieh, Jong-Seok Lee

Recently, video-based action recognition methods using convolutional neural networks (CNNs) have achieved remarkable recognition performance.

Action Recognition Temporal Action Localization

Robustness between the worst and average case

no code implementations NeurIPS 2021 Leslie Rice, Anna Bair, Huan Zhang, J. Zico Kolter

Several recent works in machine learning have focused on evaluating the test-time robustness of a classifier: how well the classifier performs not just on the target domain it was trained upon, but upon perturbed examples.

Adversarial Robustness

Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds

1 code implementation NeurIPS 2021 Yujia Huang, Huan Zhang, Yuanyuan Shi, J Zico Kolter, Anima Anandkumar

Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant.

Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training

no code implementations 18 Oct 2021 Alexander Pan, Yongkyun Lee, Huan Zhang, Yize Chen, Yuanyuan Shi

Due to the proliferation of renewable energy and its intrinsic intermittency and stochasticity, current power systems face severe operational challenges.

Decision Making reinforcement-learning +1

A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks

no code implementations 29 Sep 2021 Huan Zhang, Shiqi Wang, Kaidi Xu, Yihan Wang, Suman Jana, Cho-Jui Hsieh, J Zico Kolter

In this work, we formulate an adversarial attack using a branch-and-bound (BaB) procedure on ReLU neural networks and search adversarial examples in the activation space corresponding to binary variables in a mixed integer programming (MIP) formulation.

Adversarial Attack

Empirical robustification of pre-trained classifiers

no code implementations ICML Workshop AML 2021 Mohammad Sadegh Norouzzadeh, Wan-Yi Lin, Leonid Boytsov, Leslie Rice, Huan Zhang, Filipe Condessa, J Zico Kolter

Most pre-trained classifiers, though they may work extremely well on the domain they were trained upon, are not trained in a robust fashion, and therefore are sensitive to adversarial attacks.

Denoising Image Reconstruction +1

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification

no code implementations NeurIPS 2021 Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J Zico Kolter

We develop $\beta$-CROWN, a new bound propagation based method that can fully encode neuron split constraints in branch-and-bound (BaB) based complete verification via optimizable parameters $\beta$.

Fast Certified Robust Training with Short Warmup

2 code implementations NeurIPS 2021 Zhouxing Shi, Yihan Wang, Huan Zhang, JinFeng Yi, Cho-Jui Hsieh

Although state-of-the-art (SOTA) methods including interval bound propagation (IBP) and CROWN-IBP have per-batch training complexity similar to standard neural network training, they usually require a long warmup schedule with hundreds or thousands of epochs to reach SOTA performance and are thus still costly.

Adversarial Defense

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification

4 code implementations NeurIPS 2021 Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

Compared to the typically tightest but very costly semidefinite programming (SDP) based incomplete verifiers, we obtain higher verified accuracy with three orders of magnitude less verification time.

Adversarial Attack

Just Noticeable Difference for Deep Machine Vision

no code implementations 16 Feb 2021 Jian Jin, Xingxing Zhang, Xin Fu, Huan Zhang, Weisi Lin, Jian Lou, Yao Zhao

Experimental results on image classification demonstrate that we successfully find the JND for deep machine vision.

Image Classification Neural Network Security +1

Robust Reinforcement Learning on State Observations with Learned Optimal Adversary

2 code implementations ICLR 2021 Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

We study the robustness of reinforcement learning (RL) with adversarially perturbed state observations, which aligns with the setting of many adversarial attacks on deep reinforcement learning (DRL) and is also important for deploying real-world RL agents under unpredictable sensing noise.

Adversarial Attack Continuous Control +2

Learning Contextual Perturbation Budgets for Training Robust Neural Networks

no code implementations 1 Jan 2021 Jing Xu, Zhouxing Shi, Huan Zhang, JinFeng Yi, Cho-Jui Hsieh, LiWei Wang

We also demonstrate that the perturbation budget generator can produce semantically-meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in a given image.

An Efficient Adversarial Attack for Tree Ensembles

1 code implementation NeurIPS 2020 Chong Zhang, Huan Zhang, Cho-Jui Hsieh

We study the problem of efficient adversarial attacks on tree based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs).

Adversarial Attack valid

On $\ell_p$-norm Robustness of Ensemble Stumps and Trees

1 code implementation 20 Aug 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the problem of robustness verification and certified defense with respect to general $\ell_p$ norm perturbations for ensemble decision stumps and trees.

The Limit of the Batch Size

no code implementations 15 Jun 2020 Yang You, Yuhui Wang, Huan Zhang, Zhao Zhang, James Demmel, Cho-Jui Hsieh

For the first time, we scale the batch size on ImageNet to at least an order of magnitude larger than in all previous work, and provide detailed studies on the performance of many state-of-the-art optimization schemes under this setting.

Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data

1 code implementation 11 May 2020 Lu Wang, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Yuan Jiang

By constraining adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset, the spanning attack significantly improves the query efficiency of a wide variety of existing black-box attacks.
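
The core step is restricting the perturbation search to the subspace spanned by an auxiliary unlabeled dataset. A minimal sketch of that projection step follows (my own illustration of the general idea, with an SVD-based basis; not the paper's exact procedure).

    import numpy as np

    def make_projector(aux_data, k=64):
        # Orthonormal basis for the top-k subspace spanned by the (centered)
        # auxiliary unlabeled examples, obtained from an SVD.
        X = aux_data - aux_data.mean(axis=0)
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        return vt[:k]                       # (k, d): rows form an orthonormal basis

    def project(perturbation, basis):
        # Project a candidate perturbation onto the spanned subspace before
        # querying the black-box model, reducing the effective search dimension.
        return basis.T @ (basis @ perturbation)

    aux = np.random.randn(500, 3 * 32 * 32)     # unlabeled images, flattened
    basis = make_projector(aux, k=64)
    delta = np.random.randn(3 * 32 * 32)
    delta_low_dim = project(delta, basis)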

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

4 code implementations NeurIPS 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh

Several works have shown this vulnerability via adversarial attacks, but existing approaches to improving the robustness of DRL in this setting have had limited success and lack theoretical principles.

reinforcement-learning Reinforcement Learning (RL)

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

5 code implementations NeurIPS 2020 Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.

Quantization
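
The library released with this paper is auto_LiRPA. Below is a minimal usage sketch; the API names are as I recall them from the project's documentation, so verify against the repository before relying on them.

    import numpy as np
    import torch
    from auto_LiRPA import BoundedModule, BoundedTensor
    from auto_LiRPA.perturbations import PerturbationLpNorm

    # A tiny stand-in classifier; any nn.Module can be used here.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)

    # Wrap the model so LiRPA can trace its computation graph.
    lirpa_model = BoundedModule(model, torch.empty_like(x))
    # Declare an l-inf perturbation of radius 8/255 around the input.
    ptb = PerturbationLpNorm(norm=np.inf, eps=8 / 255)
    bounded_x = BoundedTensor(x, ptb)
    # Provable lower/upper bounds on each output logit under that perturbation.
    lb, ub = lirpa_model.compute_bounds(x=(bounded_x,), method="CROWN")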

Robustness Verification for Transformers

1 code implementation ICLR 2020 Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh

Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding model behavior and obtaining safety guarantees.

Position Sentiment Analysis

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

2 code implementations ICLR 2020 Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Li-Wei Wang

Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly.

Robust Triple-Matrix-Recovery-Based Auto-Weighted Label Propagation for Classification

no code implementations 20 Nov 2019 Huan Zhang, Zhao Zhang, Mingbo Zhao, Qiaolin Ye, Min Zhang, Meng Wang

Our method can jointly recover the underlying clean data, clean labels, and clean weighting spaces by decomposing the original data, predicted soft labels, or weights into a clean part plus an error part that fits the noise.

General Classification

Enhancing Certifiable Robustness via a Deep Model Ensemble

no code implementations 31 Oct 2019 Huan Zhang, Minhao Cheng, Cho-Jui Hsieh

We propose an algorithm to enhance certified robustness of a deep model ensemble by optimally weighting each base model.

Model Selection

LocalGAN: Modeling Local Distributions for Adversarial Response Generation

no code implementations 25 Sep 2019 Zhen Xu, Baoxun Wang, Huan Zhang, Kexin Qiu, Deyuan Zhang, Chengjie Sun

This paper presents a new methodology for modeling the local semantic distribution of responses to a given query in the human-conversation corpus, and on this basis, explores a specified adversarial learning mechanism for training Neural Response Generation (NRG) models to build conversational agents.

Response Generation

Defending Against Adversarial Attacks Using Random Forests

no code implementations 16 Jun 2019 Yifan Ding, Liqiang Wang, Huan Zhang, Jin-Feng Yi, Deliang Fan, Boqing Gong

As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is the key to the safety of both the Internet and the physical world.

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

2 code implementations ICLR 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh

In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.
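
The forward bounding pass referred to here is interval bound propagation. A minimal sketch of IBP through an MLP follows (my own illustration of the standard technique, not the CROWN-IBP code); the CROWN backward pass then tightens these intervals with linear relaxations.

    import torch
    import torch.nn as nn

    def ibp_bounds(model, x, eps):
        # Interval bound propagation: push elementwise lower/upper bounds
        # through the network layer by layer.
        lb, ub = x - eps, x + eps
        for layer in model:
            if isinstance(layer, nn.Linear):
                w, b = layer.weight, layer.bias
                mid, rad = (lb + ub) / 2, (ub - lb) / 2
                center = mid @ w.t() + b
                radius = rad @ w.abs().t()
                lb, ub = center - radius, center + radius
            elif isinstance(layer, nn.ReLU):
                # ReLU is monotone, so bounds pass through directly.
                lb, ub = lb.clamp(min=0), ub.clamp(min=0)
        return lb, ub

    net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    x = torch.rand(1, 784)
    lb, ub = ibp_bounds(net, x, eps=0.01)   # bounds on each output logit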

Robustness Verification of Tree-based Models

2 code implementations NeurIPS 2019 Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning, Cho-Jui Hsieh

We show that there is a simple linear time algorithm for verifying a single tree, and for tree ensembles, the verification problem can be cast as a max-clique problem on a multi-partite graph with bounded boxicity.

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

no code implementations ICLR 2019 Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

BIG-bench Machine Learning

How Training Data Affect the Accuracy and Robustness of Neural Networks for Image Classification

no code implementations ICLR 2019 Suhua Lei, Huan Zhang, Ke Wang, Zhendong Su

In light of a recent study on the mutual influence between robustness and accuracy over 18 different ImageNet models, this paper investigates how training data affect the accuracy and robustness of deep neural networks.

General Classification Image Classification

Minimum Divergence vs. Maximum Margin: an Empirical Comparison on Seq2Seq Models

no code implementations ICLR 2019 Huan Zhang, Hai Zhao

Sequence to sequence (seq2seq) models have become a popular framework for neural sequence prediction.

Machine Translation Sentence +2

Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks

1 code implementation ICCV 2019 Jun-Ho Choi, Huan Zhang, Jun-Hyuk Kim, Cho-Jui Hsieh, Jong-Seok Lee

Single-image super-resolution aims to generate a high-resolution version of a low-resolution image, which serves as an essential component in many computer vision applications.

Image Super-Resolution

Adversarial Robustness vs Model Compression, or Both?

1 code implementation 29 Mar 2019 Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin

Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with initialization inherited from the large model, cannot achieve both adversarial robustness and high standard accuracy.

Adversarial Robustness Model Compression +1

Robust Decision Trees Against Adversarial Examples

3 code implementations 27 Feb 2019 Hongge Chen, Huan Zhang, Duane Boning, Cho-Jui Hsieh

Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue in tree-based models and how to make tree-based models robust against adversarial examples is still limited.

Adversarial Attack Adversarial Defense

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

3 code implementations NeurIPS 2019 Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang

This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification.

The Limitations of Adversarial Training and the Blind-Spot Attack

no code implementations ICLR 2019 Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S. Dhillon, Cho-Jui Hsieh

In our paper, we shed some light on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on the test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network.

valid

Efficient Neural Network Robustness Certification with General Activation Functions

14 code implementations NeurIPS 2018 Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel

Finding minimum distortion of adversarial examples and thus certifying robustness in neural network classifiers for given data points is known to be a challenging problem.

Computational Efficiency Efficient Neural Network

RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications

4 code implementations 28 Oct 2018 Huan Zhang, Pengchuan Zhang, Cho-Jui Hsieh

The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants and robustness to adversarial attacks.

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm

1 code implementation 19 Oct 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel

We apply extreme value theory on the new formal robustness guarantee and the estimated robustness is called second-order CLEVER score.
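
As a rough illustration of the sampling backbone behind CLEVER-style scores, the sketch below (my own simplification, not the paper's method) estimates a robustness radius as the class margin divided by an empirical local Lipschitz estimate; the actual CLEVER score fits a reverse-Weibull distribution to the sampled gradient norms via extreme value theory instead of taking the raw maximum.

    import torch

    def clever_style_bound(model, x, c, j, radius=0.5, n_samples=256):
        # Margin f_c(x) - f_j(x) divided by the largest sampled gradient norm
        # of that margin in a box around x (a crude local Lipschitz estimate).
        with torch.no_grad():
            out = model(x)
            margin = (out[0, c] - out[0, j]).item()
        max_grad_norm = 0.0
        for _ in range(n_samples):
            z = (x + radius * (2 * torch.rand_like(x) - 1)).requires_grad_(True)
            out = model(z)
            g, = torch.autograd.grad(out[0, c] - out[0, j], z)
            max_grad_norm = max(max_grad_norm, g.norm(p=2).item())
        return margin / max_grad_norm   # estimated l2 radius with no class flip

    net = torch.nn.Sequential(torch.nn.Linear(20, 5))
    x0 = torch.rand(1, 20)
    est = clever_style_bound(net, x0, c=0, j=1)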

Structured Adversarial Attack: Towards General Implementation and Better Interpretability

1 code implementation ICLR 2019 Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin

When generating adversarial examples to attack deep neural networks (DNNs), Lp norm of the added perturbation is usually used to measure the similarity between original image and adversarial example.

Adversarial Attack

Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

2 code implementations ECCV 2018 Dong Su, Huan Zhang, Hongge Chen, Jin-Feng Yi, Pin-Yu Chen, Yupeng Gao

Prediction accuracy has long been the sole standard for comparing the performance of different image classification models, including in the ImageNet competition.

General Classification Image Classification

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

1 code implementation 12 Jul 2018 Minhao Cheng, Thong Le, Pin-Yu Chen, Jin-Feng Yi, Huan Zhang, Cho-Jui Hsieh

We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

BIG-bench Machine Learning

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

1 code implementation 30 May 2018 Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng

Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.

Adversarial Robustness

GenAttack: Practical Black-box Attacks with Gradient-Free Optimization

3 code implementations 28 May 2018 Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, Mani Srivastava

Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.

Adversarial Attack Adversarial Robustness +1

Towards Fast Computation of Certified Robustness for ReLU Networks

6 code implementations ICML 2018 Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, Luca Daniel

Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17].

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

1 code implementation 3 Mar 2018 Minhao Cheng, Jin-Feng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh

In this paper, we study the much more challenging problem of crafting adversarial examples for sequence-to-sequence (seq2seq) models, whose inputs are discrete text strings and whose outputs have an almost infinite number of possibilities.

Image Classification Machine Translation +2

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

1 code implementation ICLR 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel

Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning

2 code implementations ACL 2018 Hongge Chen, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Cho-Jui Hsieh

Our extensive experiments show that our algorithm can successfully craft visually-similar adversarial examples with randomly targeted captions or keywords, and the adversarial examples can be made highly transferable to other image captioning systems.

Caption Generation Image Captioning

Towards Robust Neural Networks via Random Self-ensemble

no code implementations ECCV 2018 Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh

In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) by combining two important concepts: randomness and ensemble.

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

6 code implementations 13 Sep 2017 Pin-Yu Chen, Yash Sharma, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.

Adversarial Attack Adversarial Robustness

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

5 code implementations 14 Aug 2017 Pin-Yu Chen, Huan Zhang, Yash Sharma, Jin-Feng Yi, Cho-Jui Hsieh

However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.

Adversarial Attack Adversarial Defense +3
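
ZOO estimates gradients of the attack loss purely from model queries. A minimal sketch of symmetric-difference, coordinate-wise zeroth-order estimation follows (illustrative only; the paper's implementation adds batching, importance sampling, and attack-space reduction).

    import numpy as np

    def zo_gradient(loss_fn, x, num_coords=128, h=1e-4):
        # Estimate the gradient of a black-box loss by symmetric finite
        # differences on a random subset of coordinates.
        grad = np.zeros_like(x)
        coords = np.random.choice(x.size, size=min(num_coords, x.size), replace=False)
        for i in coords:
            e = np.zeros_like(x)
            e.flat[i] = h
            grad.flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
        return grad

    # Toy usage with a stand-in for the black-box attack loss.
    x = np.random.randn(3 * 32 * 32)
    loss = lambda v: float(np.sum(v ** 2))
    x = x - 0.01 * zo_gradient(loss, x)     # one zeroth-order descent step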

GPU-acceleration for Large-scale Tree Boosting

3 code implementations 26 Jun 2017 Huan Zhang, Si Si, Cho-Jui Hsieh

In this paper, we present a novel massively parallel algorithm for accelerating the decision tree building procedure on GPUs (Graphics Processing Units), which is a crucial step in Gradient Boosted Decision Tree (GBDT) and random forests training.

Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent

3 code implementations NeurIPS 2017 Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu

On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.
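
D-PSGD replaces a central parameter server with gossip averaging over a communication graph: each worker mixes its parameters with its neighbors and then takes a local SGD step. A minimal single-process simulation of one round is sketched below (my own illustration of the general scheme, with a hypothetical local gradient).

    import numpy as np

    n_workers, dim, lr = 8, 10, 0.1
    params = [np.random.randn(dim) for _ in range(n_workers)]

    def neighbors(i):
        # Ring topology: each worker mixes equally with itself and two neighbors.
        return [(i - 1) % n_workers, i, (i + 1) % n_workers]

    def local_gradient(i, w):
        # Stand-in for a stochastic gradient on worker i's local data shard.
        return w - np.ones(dim) * i

    def dpsgd_round(params):
        # Gossip averaging step over the ring ...
        mixed = [np.mean([params[j] for j in neighbors(i)], axis=0)
                 for i in range(n_workers)]
        # ... followed by a local SGD step on each worker's own data.
        return [mixed[i] - lr * local_gradient(i, params[i]) for i in range(n_workers)]

    for _ in range(100):
        params = dpsgd_round(params)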

Sublinear Time Orthogonal Tensor Decomposition

1 code implementation NeurIPS 2016 Zhao Song, David Woodruff, Huan Zhang

We show that in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor.

Tensor Decomposition
