Search Results for author: Cho-Jui Hsieh

Found 194 papers, 88 papers with code

On the Transferability of Adversarial Attacks against Neural Text Classifier

no code implementations EMNLP 2021 Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang

Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models.

Text Classification
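The ensemble-selection idea above — evolving a subset of models whose joint adversarial examples transfer widely — can be sketched with a toy genetic algorithm. The fitness function, population size, and operators below are illustrative placeholders, not the paper's actual setup:

```python
import random

def genetic_ensemble_search(n_models, fitness, pop_size=20, generations=30, seed=0):
    """Evolve a binary mask over candidate models; `fitness` scores an ensemble mask."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_models)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_models)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_models)] ^= 1   # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: prefer ensembles containing models 0 and 2 but few members overall.
best = genetic_ensemble_search(5, lambda m: 2 * (m[0] + m[2]) - sum(m))
```

In the paper's setting the fitness would instead measure how well adversarial examples induced on the ensemble fool held-out victim models.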

On Lp-norm Robustness of Ensemble Decision Stumps and Trees

no code implementations ICML 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the robustness verification and defense with respect to general $\ell_p$ norm perturbation for ensemble trees and stumps.

Towards Adversarially Robust Text Classifiers by Learning to Reweight Clean Examples

no code implementations Findings (ACL) 2022 Jianhan Xu, Cenyuan Zhang, Xiaoqing Zheng, Linyang Li, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Most of the existing defense methods improve the adversarial robustness by making the models adapt to the training set augmented with some adversarial examples.

Adversarial Robustness

Online Continuous Hyperparameter Optimization for Contextual Bandits

no code implementations 18 Feb 2023 Yue Kang, Cho-Jui Hsieh, Thomas C. M. Lee

Specifically, we use a double-layer bandit framework named CDT (Continuous Dynamic Tuning) and formulate the hyperparameter optimization as a non-stationary continuum-armed bandit, where each arm represents a combination of hyperparameters, and the corresponding reward is the algorithmic result.

Hyperparameter Optimization Multi-Armed Bandits +1

Symbolic Discovery of Optimization Algorithms

7 code implementations 13 Feb 2023 Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Yao Liu, Hieu Pham, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, Yifeng Lu, Quoc V. Le

We present a method to formulate algorithm discovery as program search, and apply it to discover optimization algorithms for deep neural network training.

Contrastive Learning Image Classification +4

Effective Robustness against Natural Distribution Shifts for Models with Different Training Data

no code implementations 2 Feb 2023 Zhouxing Shi, Nicholas Carlini, Ananth Balashankar, Ludwig Schmidt, Cho-Jui Hsieh, Alex Beutel, Yao Qin

In this paper, we propose a new effective robustness evaluation metric to compare the effective robustness of models trained on different data distributions.

Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory

1 code implementation 19 Nov 2022 Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh

Among recently proposed methods, Matching Training Trajectories (MTT) achieves state-of-the-art performance on CIFAR-10/100, while having difficulty scaling to the ImageNet-1K dataset due to the large memory requirement of performing unrolled gradient computation through back-propagation.

Are AlphaZero-like Agents Robust to Adversarial Perturbations?

no code implementations 7 Nov 2022 Li-Cheng Lan, Huan Zhang, Ti-Rong Wu, Meng-Yu Tsai, I-Chen Wu, Cho-Jui Hsieh

Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions.

Adversarial Attack Game of Go

Preserving In-Context Learning ability in Large Language Model Fine-tuning

no code implementations 1 Nov 2022 Yihan Wang, Si Si, Daliang Li, Michal Lukasik, Felix Yu, Cho-Jui Hsieh, Inderjit S Dhillon, Sanjiv Kumar

More importantly, ProMoT shows remarkable generalization ability on tasks that have different formats, e.g., fine-tuning on an NLI binary classification task improves the model's in-context ability to do summarization (+0.53 Rouge-2 score compared to the pretrained model), making ProMoT a promising method to build general-purpose capabilities such as grounding and reasoning into LLMs with small but high-quality datasets.

Domain Generalization Few-Shot Learning +2

ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation

1 code implementation 22 Oct 2022 Fan Yin, Yao Li, Cho-Jui Hsieh, Kai-Wei Chang

Finally, our analysis shows that the two types of uncertainty provided by \textbf{ADDMU} can be leveraged to characterize adversarial examples and identify the ones that contribute most to model's robustness in adversarial training.

Reducing Training Sample Memorization in GANs by Training with Memorization Rejection

1 code implementation 21 Oct 2022 Andrew Bai, Cho-Jui Hsieh, Wendy Kan, Hsuan-Tien Lin

In this paper, we propose memorization rejection, a training scheme that rejects generated samples that are near-duplicates of training samples during training.

Memorization
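The rejection scheme described above can be illustrated with a minimal nearest-neighbour filter; the brute-force L2 distance and threshold `tau` below are illustrative assumptions, not the paper's actual near-duplicate criterion:

```python
import numpy as np

def reject_memorized(generated, train, tau):
    """Keep only generated samples whose nearest training sample is farther than tau.

    generated: (m, d) array of generated samples; train: (n, d) training set.
    Brute-force L2 nearest neighbour; tau is a hypothetical rejection threshold."""
    dists = np.linalg.norm(generated[:, None, :] - train[None, :, :], axis=-1)
    nearest = dists.min(axis=1)          # distance to closest training sample
    return generated[nearest > tau]

train = np.array([[0.0, 0.0], [1.0, 1.0]])
gen = np.array([[0.05, 0.0],             # near-duplicate of a training point -> rejected
                [3.0, 3.0]])             # novel sample -> kept
kept = reject_memorized(gen, train, tau=0.5)
```

In the actual training scheme, rejected samples would simply be excluded from the generator's update rather than filtered post hoc.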

Uncertainty in Extreme Multi-label Classification

no code implementations 18 Oct 2022 Jyun-Yu Jiang, Wei-Cheng Chang, Jiong Zhong, Cho-Jui Hsieh, Hsiang-Fu Yu

Uncertainty quantification is one of the most crucial tasks to obtain trustworthy and reliable machine learning models for decision making.

Classification Decision Making +3

ELIAS: End-to-End Learning to Index and Search in Large Output Spaces

1 code implementation 16 Oct 2022 Nilesh Gupta, Patrick H. Chen, Hsiang-Fu Yu, Cho-Jui Hsieh, Inderjit S Dhillon

A popular approach for dealing with the large label space is to arrange the labels into a shallow tree-based index and then learn an ML model to efficiently search this index via beam search.

Extreme Multi-Label Classification
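The index-and-search idea — a shallow label tree traversed with beam search — can be sketched as follows. The dictionary tree encoding and additive path scoring are simplifying assumptions for illustration, not ELIAS's learned index:

```python
import heapq

def beam_search(tree, scores, beam_width=2):
    """Traverse a label tree level by level, keeping only the top-scoring nodes.

    `tree` maps an internal node to its children; nodes absent from `tree`
    are leaves (labels). A path's score is the sum of its node scores.
    """
    frontier = [("root", 0.0)]
    leaves = []
    while frontier:
        expanded = []
        for node, score in frontier:
            if node not in tree:                 # reached a candidate label
                leaves.append((node, score))
                continue
            for child in tree[node]:
                expanded.append((child, score + scores[child]))
        # prune: keep only the best `beam_width` partial paths
        frontier = heapq.nlargest(beam_width, expanded, key=lambda t: t[1])
    return heapq.nlargest(beam_width, leaves, key=lambda t: t[1])

tree = {"root": ["A", "B"], "A": ["a1", "a2"], "B": ["b1", "b2"]}
scores = {"A": 1.0, "B": 0.5, "a1": 0.9, "a2": 0.1, "b1": 0.8, "b2": 0.2}
top = beam_search(tree, scores)
```

The point of the beam is that only `beam_width` branches are scored per level, so search cost grows with tree depth rather than with the full label space.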

Watermarking Pre-trained Language Models with Backdooring

no code implementations 14 Oct 2022 Chenxi Gu, Chengsong Huang, Xiaoqing Zheng, Kai-Wei Chang, Cho-Jui Hsieh

Large pre-trained language models (PLMs) have proven to be a crucial component of modern natural language processing systems.

Multi-Task Learning

Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation

2 code implementations 13 Oct 2022 Zhouxing Shi, Yihan Wang, Huan Zhang, Zico Kolter, Cho-Jui Hsieh

In this paper, we develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of Clarke Jacobian via linear bound propagation.

Fairness
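For contrast with the paper's tight local bounds, a classical loose global upper bound simply multiplies the layers' induced operator norms (ReLU is 1-Lipschitz, so activations don't enlarge the product). A minimal sketch for the $\ell_\infty$ norm, where the induced norm of a matrix is its maximum absolute row sum:

```python
import numpy as np

def linf_lipschitz_upper_bound(weights):
    """Loose global upper bound on the l-inf Lipschitz constant of a ReLU MLP:
    the product of each layer's induced infinity-norm (max absolute row sum).
    The paper's bound-propagation method is much tighter locally; this product
    is only the naive baseline it improves on."""
    bound = 1.0
    for W in weights:
        bound *= np.abs(W).sum(axis=1).max()
    return bound

W1 = np.array([[1.0, -2.0], [0.5, 0.5]])   # absolute row sums: 3.0 and 1.0 -> norm 3.0
W2 = np.array([[2.0, 1.0]])                # absolute row sum: 3.0
bound = linf_lipschitz_upper_bound([W1, W2])
```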

Efficient Non-Parametric Optimizer Search for Diverse Tasks

1 code implementation 27 Sep 2022 Ruochen Wang, Yuanhao Xiong, Minhao Cheng, Cho-Jui Hsieh

Efficient and automated design of optimizers plays a crucial role in full-stack AutoML systems.

AutoML

Concept Gradient: Concept-based Interpretation Without Linear Assumption

no code implementations 31 Aug 2022 Andrew Bai, Chih-Kuan Yeh, Pradeep Ravikumar, Neil Y. C. Lin, Cho-Jui Hsieh

We showed that for a general (potentially non-linear) concept, we can mathematically evaluate how a small change of a concept affects the model's prediction, which leads to an extension of gradient-based interpretation to the concept space.

General Cutting Planes for Bound-Propagation-Based Neural Network Verification

2 code implementations 11 Aug 2022 Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

Our generalized bound propagation method, GCP-CROWN, opens up the opportunity to apply general cutting plane methods for neural network verification while benefiting from the efficiency and GPU acceleration of bound propagation methods.

FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning

1 code implementation 20 Jul 2022 Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu, Cho-Jui Hsieh

Federated learning~(FL) has recently attracted increasing attention from academia and industry, with the ultimate goal of achieving collaborative training under privacy and communication constraints.

Federated Learning Image Classification

DC-BENCH: Dataset Condensation Benchmark

2 code implementations 20 Jul 2022 Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh

Dataset Condensation is a newly emerging technique aiming at learning a tiny dataset that captures the rich information encoded in the original dataset.

Data Augmentation Data Compression +2

Improving the Adversarial Robustness of NLP Models by Information Bottleneck

1 code implementation Findings (ACL) 2022 Cenyuan Zhang, Xiang Zhou, Yixin Wan, Xiaoqing Zheng, Kai-Wei Chang, Cho-Jui Hsieh

Existing studies have demonstrated that adversarial examples can be directly attributed to the presence of non-robust features, which are highly predictive, but can be easily manipulated by adversaries to fool NLP models.

Adversarial Robustness SST-2

Generalizing Few-Shot NAS with Gradient Matching

1 code implementation ICLR 2022 Shoukang Hu, Ruochen Wang, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh, Jiashi Feng

Efficient performance estimation of architectures drawn from large search spaces is essential to Neural Architecture Search.

Neural Architecture Search

On the Convergence of Certified Robust Training with Interval Bound Propagation

no code implementations ICLR 2022 Yihan Wang, Zhouxing Shi, Quanquan Gu, Cho-Jui Hsieh

Interval Bound Propagation (IBP) is so far the basis of state-of-the-art methods for training neural networks with certifiable robustness guarantees against potential adversarial perturbations, yet the convergence of IBP training remains unknown in the existing literature.
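The interval arithmetic at the heart of IBP is simple to state: a linear layer maps an input box to an output box via the absolute weight matrix, and ReLU clips the bounds elementwise. A minimal sketch of one propagation step (not the certified training loop itself):

```python
import numpy as np

def ibp_linear(lb, ub, W, b):
    """Propagate an elementwise box [lb, ub] through x -> W @ x + b."""
    center, radius = (lb + ub) / 2.0, (ub - lb) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius          # worst case over the box
    return c - r, c + r

def ibp_relu(lb, ub):
    """ReLU is monotone, so it maps lower bound to lower bound, upper to upper."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

W = np.array([[1.0, -1.0]])
b = np.array([0.0])
lb, ub = ibp_linear(np.array([0.0, 0.0]), np.array([1.0, 1.0]), W, b)
rlb, rub = ibp_relu(lb, ub)
```

Certified training then minimizes a loss on the worst-case logits implied by these output bounds rather than on the clean logits.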

Towards Efficient and Scalable Sharpness-Aware Minimization

1 code implementation CVPR 2022 Yong Liu, Siqi Mai, Xiangning Chen, Cho-Jui Hsieh, Yang You

Recently, Sharpness-Aware Minimization (SAM), which connects the geometry of the loss landscape and generalization, has demonstrated significant performance boosts on training large-scale models such as vision transformers.

Extreme Zero-Shot Learning for Extreme Text Classification

1 code implementation NAACL 2022 Yuanhao Xiong, Wei-Cheng Chang, Cho-Jui Hsieh, Hsiang-Fu Yu, Inderjit Dhillon

To learn the semantic embeddings of instances and labels with raw text, we propose to pre-train Transformer-based encoders with self-supervised contrastive losses.

Multi-Label Text Classification +2

Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks

no code implementations 15 Dec 2021 Jaehui Hwang, Huan Zhang, Jun-Ho Choi, Cho-Jui Hsieh, Jong-Seok Lee

Another observation enabling our defense method is that adversarial perturbations on videos are sensitive to temporal destruction.

Action Recognition
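The defense idea — destroying the temporal structure that video perturbations rely on — reduces to permuting frames before inference. A minimal sketch; applying a fresh permutation at every inference is an illustrative simplification of the method:

```python
import numpy as np

def temporal_shuffle(clip, rng=None):
    """Randomly permute the frames (axis 0) of a video clip of shape (T, H, W, C).

    Adversarial perturbations on videos are sensitive to temporal destruction,
    so shuffling frames before feeding the recognition model can weaken them."""
    rng = np.random.default_rng(rng)
    perm = rng.permutation(clip.shape[0])
    return clip[perm]

clip = np.arange(4 * 2 * 2 * 1).reshape(4, 2, 2, 1)   # toy 4-frame clip
shuffled = temporal_shuffle(clip, rng=0)
```

An action-recognition model robust to frame order (or retrained on shuffled clips) can then classify the shuffled input while the perturbation's temporal pattern is broken.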

DRONE: Data-aware Low-rank Compression for Large NLP Models

no code implementations NeurIPS 2021 Pei-Hung Chen, Hsiang-Fu Yu, Inderjit Dhillon, Cho-Jui Hsieh

In addition to compressing standard models, our method can also be used on distilled BERT models to further improve the compression rate.

Low-rank compression MRPC +1

A Review of Adversarial Attack and Defense for Classification Methods

1 code implementation 18 Nov 2021 Yao Li, Minhao Cheng, Cho-Jui Hsieh, Thomas C. M. Lee

Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples; i.e., examples that are carefully crafted to fool a well-trained classification model while being indistinguishable from natural data to humans.

Adversarial Attack Classification

Can Vision Transformers Perform Convolution?

no code implementations 2 Nov 2021 Shanda Li, Xiangning Chen, Di He, Cho-Jui Hsieh

Several recent studies have demonstrated that attention-based networks, such as Vision Transformer (ViT), can outperform Convolutional Neural Networks (CNNs) on several computer vision tasks without using convolutional layers.

Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction

4 code implementations ICLR 2022 Eli Chien, Wei-Cheng Chang, Cho-Jui Hsieh, Hsiang-Fu Yu, Jiong Zhang, Olgica Milenkovic, Inderjit S Dhillon

We also provide a theoretical analysis that justifies the use of XMC over link prediction and motivates integrating XR-Transformers, a powerful method for solving XMC problems, into the GIANT framework.

Extreme Multi-Label Classification Language Modelling +3

How and When Adversarial Robustness Transfers in Knowledge Distillation?

no code implementations 22 Oct 2021 Rulin Shao, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

Our comprehensive analysis shows several novel insights that (1) With KDIGA, students can preserve or even exceed the adversarial robustness of the teacher model, even when their models have fundamentally different architectures; (2) KDIGA enables robustness to transfer to pre-trained students, such as KD from an adversarially trained ResNet to a pre-trained ViT, without loss of clean accuracy; and (3) Our derived local linearity bounds for characterizing adversarial robustness in KD are consistent with the empirical results.

Adversarial Robustness Knowledge Distillation +1

Adversarial Attack across Datasets

no code implementations 13 Oct 2021 Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Lihong Cao, Cho-Jui Hsieh

In this paper, we define a Generalized Transferable Attack (GTA) problem where the attacker doesn't know this information and is required to attack any randomly encountered images that may come from unknown datasets.

Adversarial Attack Image Classification

Learning to Schedule Learning rate with Graph Neural Networks

no code implementations ICLR 2022 Yuanhao Xiong, Li-Cheng Lan, Xiangning Chen, Ruochen Wang, Cho-Jui Hsieh

By constructing a directed graph for the underlying neural network of the target problem, GNS encodes current dynamics with a graph message passing network and trains an agent to control the learning rate accordingly via reinforcement learning.

Benchmarking Image Classification +2

A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks

no code implementations 29 Sep 2021 Huan Zhang, Shiqi Wang, Kaidi Xu, Yihan Wang, Suman Jana, Cho-Jui Hsieh, J Zico Kolter

In this work, we formulate an adversarial attack using a branch-and-bound (BaB) procedure on ReLU neural networks and search adversarial examples in the activation space corresponding to binary variables in a mixed integer programming (MIP) formulation.

Adversarial Attack

Sharpness-Aware Minimization in Large-Batch Training: Training Vision Transformer In Minutes

no code implementations 29 Sep 2021 Yong Liu, Siqi Mai, Xiangning Chen, Cho-Jui Hsieh, Yang You

Large-batch training is an important direction for distributed machine learning, which can improve the utilization of large-scale clusters and therefore accelerate the training process.

FastEnsemble: Benchmarking and Accelerating Ensemble-based Uncertainty Estimation for Image-to-Image Translation

no code implementations 29 Sep 2021 Xuanqing Liu, Sara Imboden, Marie Payne, Neil Lin, Cho-Jui Hsieh

In addition, we introduce FastEnsemble, a fast ensemble method that requires less than $8\%$ of the full-ensemble training time to generate a new ensemble member.

Benchmarking Image-to-Image Translation +2

Training Meta-Surrogate Model for Transferable Adversarial Attack

1 code implementation 5 Sep 2021 Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Cho-Jui Hsieh

In this paper, we tackle this problem from a novel angle -- instead of using the original surrogate models, can we obtain a Meta-Surrogate Model (MSM) such that attacks on this model can be more easily transferred to other models?

Adversarial Attack

Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

1 code implementation EMNLP 2021 Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, Cho-Jui Hsieh

Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.

Benchmarking

RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection

2 code implementations ICCV 2021 Yongming Rao, Benlin Liu, Yi Wei, Jiwen Lu, Cho-Jui Hsieh, Jie Zhou

In particular, we propose to generate random layouts of a scene by making use of the objects in the synthetic CAD dataset and learn the 3D scene representation by applying object-level contrastive learning on two random scenes generated from the same set of synthetic objects.

2D object detection 3D Object Detection +3

Rethinking Architecture Selection in Differentiable NAS

1 code implementation ICLR 2021 Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh

Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods for its search efficiency and simplicity, accomplished by jointly optimizing the model weight and architecture parameters in a weight-sharing supernet via gradient-based algorithms.

Neural Architecture Search

Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble

no code implementations ACL 2021 Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples.

Label Disentanglement in Partition-based Extreme Multilabel Classification

no code implementations NeurIPS 2021 Xuanqing Liu, Wei-Cheng Chang, Hsiang-Fu Yu, Cho-Jui Hsieh, Inderjit S. Dhillon

Partition-based methods are increasingly-used in extreme multi-label classification (XMC) problems due to their scalability to large output spaces (e.g., millions or more).

Classification Disentanglement +1

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification

no code implementations NeurIPS 2021 Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J Zico Kolter

We develop $\beta$-CROWN, a new bound propagation based method that can fully encode neuron split constraints in branch-and-bound (BaB) based complete verification via optimizable parameters $\beta$.

Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks

no code implementations 5 Jun 2021 Qin Ding, Cho-Jui Hsieh, James Sharpnack

We provide theoretical guarantees for our proposed algorithm and show by experiments that our proposed algorithm improves the robustness against various kinds of popular attacks.

Multi-Armed Bandits Recommendation Systems

Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms

no code implementations 5 Jun 2021 Qin Ding, Yue Kang, Yi-Wei Liu, Thomas C. M. Lee, Cho-Jui Hsieh, James Sharpnack

To tackle this problem, we first propose a two-layer bandit structure for auto tuning the exploration parameter and further generalize it to the Syndicated Bandits framework which can learn multiple hyper-parameters dynamically in contextual bandit environment.

Recommendation Systems

When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations

2 code implementations ICLR 2022 Xiangning Chen, Cho-Jui Hsieh, Boqing Gong

Vision Transformers (ViTs) and MLPs signal further efforts on replacing hand-wired features or inductive biases with general-purpose neural architectures.

Ranked #5 on Domain Generalization on ImageNet-C (Top 1 Accuracy metric)

Domain Generalization Fine-Grained Image Classification +1

DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification

1 code implementation NeurIPS 2021 Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh

Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically based on the input.

Image Classification

Concurrent Adversarial Learning for Large-Batch Training

no code implementations ICLR 2022 Yong Liu, Xiangning Chen, Minhao Cheng, Cho-Jui Hsieh, Yang You

Current methods usually use extensive data augmentation to increase the batch size, but we found that the performance gain from data augmentation decreases as batch size increases, and data augmentation becomes insufficient after a certain point.

Data Augmentation

Balancing Robustness and Sensitivity using Feature Contrastive Learning

no code implementations 19 May 2021 Seungyeon Kim, Daniel Glasner, Srikumar Ramalingam, Cho-Jui Hsieh, Kishore Papineni, Sanjiv Kumar

It is generally believed that robust training of extremely large networks is critical to their success in real-world applications.

Contrastive Learning

Detecting Adversarial Examples with Bayesian Neural Network

1 code implementation 18 May 2021 Yao Li, Tongyi Tang, Cho-Jui Hsieh, Thomas C. M. Lee

In this paper, we propose a new framework to detect adversarial examples, motivated by the observation that random components can improve the smoothness of predictors and make it easier to simulate the output distribution of a deep neural network.

2.5D Visual Relationship Detection

1 code implementation 26 Apr 2021 Yu-Chuan Su, Soravit Changpinyo, Xiangning Chen, Sathish Thoppay, Cho-Jui Hsieh, Lior Shapira, Radu Soricut, Hartwig Adam, Matthew Brown, Ming-Hsuan Yang, Boqing Gong

To enable progress on this task, we create a new dataset consisting of 220k human-annotated 2.5D relationships among 512K objects from 11K images.

Benchmarking Depth Estimation +1

On the Sensitivity and Stability of Model Interpretations in NLP

1 code implementation ACL 2022 Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang

We propose two new criteria, sensitivity and stability, that provide complementary notions of faithfulness to the existing removal-based criteria.

Adversarial Robustness Dependency Parsing +2

Fast Certified Robust Training with Short Warmup

2 code implementations NeurIPS 2021 Zhouxing Shi, Yihan Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh

Although state-of-the-art (SOTA) methods, including interval bound propagation (IBP) and CROWN-IBP, have per-batch training complexity similar to standard neural network training, they usually use a long warmup schedule with hundreds or thousands of epochs to reach SOTA performance and are thus still costly.

Adversarial Defense

On the Adversarial Robustness of Vision Transformers

1 code implementation 29 Mar 2021 Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

Following the success in advancing natural language processing and understanding, transformers are expected to bring revolutionary changes to computer vision.

Adversarial Robustness

Robust and Accurate Object Detection via Adversarial Learning

1 code implementation CVPR 2021 Xiangning Chen, Cihang Xie, Mingxing Tan, Li Zhang, Cho-Jui Hsieh, Boqing Gong

Data augmentation has become a de facto component for training high-performance deep image classifiers, but its potential is under-explored for object detection.

AutoML Data Augmentation +2

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification

4 code implementations NeurIPS 2021 Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

Compared to the typically tightest but very costly semidefinite programming (SDP) based incomplete verifiers, we obtain higher verified accuracy with three orders of magnitudes less verification time.

Adversarial Attack

Local Critic Training for Model-Parallel Learning of Deep Neural Networks

1 code implementation 3 Feb 2021 Hojung Lee, Cho-Jui Hsieh, Jong-Seok Lee

We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

Robust Reinforcement Learning on State Observations with Learned Optimal Adversary

2 code implementations ICLR 2021 Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

We study the robustness of reinforcement learning (RL) with adversarially perturbed state observations, which aligns with the setting of many adversarial attacks to deep reinforcement learning (DRL) and is also important for rolling out real-world RL agent under unpredictable sensing noise.

Adversarial Attack Continuous Control +2

Robust Text CAPTCHAs Using Adversarial Examples

no code implementations 7 Jan 2021 Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

At the second stage, we design and apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.

Adversarial Attack Optical Character Recognition (OCR)

Adversarial Masking: Towards Understanding Robustness Trade-off for Generalization

no code implementations 1 Jan 2021 Minhao Cheng, Zhe Gan, Yu Cheng, Shuohang Wang, Cho-Jui Hsieh, Jingjing Liu

By incorporating different feature maps after the masking, we can distill better features to help model generalization.

Data-aware Low-Rank Compression for Large NLP Models

no code implementations 1 Jan 2021 Patrick Chen, Hsiang-Fu Yu, Inderjit S Dhillon, Cho-Jui Hsieh

In this paper, we observe that the learned representation of each layer lies in a low-dimensional space.

Low-rank compression MRPC +1

Learning to Learn with Smooth Regularization

no code implementations 1 Jan 2021 Yuanhao Xiong, Cho-Jui Hsieh

Recent decades have witnessed great prosperity of deep learning in tackling various problems such as classification and decision making.

Decision Making Few-Shot Learning

Learning Contextual Perturbation Budgets for Training Robust Neural Networks

no code implementations 1 Jan 2021 Jing Xu, Zhouxing Shi, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Liwei Wang

We also demonstrate that the perturbation budget generator can produce semantically-meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in a given image.

Self-Progressing Robust Training

1 code implementation 22 Dec 2020 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy machine learning systems.

Adversarial Robustness

Learning to Stop: Dynamic Simulation Monte-Carlo Tree Search

no code implementations 14 Dec 2020 Li-Cheng Lan, Meng-Yu Tsai, Ti-Rong Wu, I-Chen Wu, Cho-Jui Hsieh

This implies that a significant amount of resources can be saved if we are able to stop the searching earlier when we are confident with the current searching result.

Atari Games

Voting based ensemble improves robustness of defensive models

no code implementations 28 Nov 2020 Devvrit, Minhao Cheng, Cho-Jui Hsieh, Inderjit Dhillon

Several previous attempts tackled this problem by ensembling soft-label predictions, but these have been proven vulnerable to the latest attack methods.
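The contrast drawn above is between averaging soft-label probabilities and voting over hard labels. A minimal sketch of hard-label majority voting (an illustration of the general idea, not the paper's exact scheme):

```python
import numpy as np

def hard_label_vote(prob_list):
    """Majority-vote over each model's argmax class (hard labels), rather than
    averaging soft-label probability vectors. prob_list: list of (n, k) arrays
    of per-model class probabilities; returns (n,) predicted classes."""
    votes = np.stack([p.argmax(axis=1) for p in prob_list])          # (models, n)
    n_classes = prob_list[0].shape[1]
    # count votes per class for each sample
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)                                     # (n,)

p1 = np.array([[0.9, 0.1], [0.2, 0.8]])
p2 = np.array([[0.6, 0.4], [0.7, 0.3]])
p3 = np.array([[0.3, 0.7], [0.1, 0.9]])
preds = hard_label_vote([p1, p2, p3])
```

Because each member contributes only its argmax, an attacker cannot exploit the averaged probability surface the way soft-label ensembling allows.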

On the Transferability of Adversarial Attacks against Neural Text Classifier

no code implementations 17 Nov 2020 Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang

Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models.

Text Classification

An Efficient Adversarial Attack for Tree Ensembles

1 code implementation NeurIPS 2020 Chong Zhang, Huan Zhang, Cho-Jui Hsieh

We study the problem of efficient adversarial attacks on tree based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs).

Adversarial Attack

How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers

no code implementations 19 Oct 2020 Yuanhao Xiong, Xuanqing Liu, Li-Cheng Lan, Yang You, Si Si, Cho-Jui Hsieh

For end-to-end efficiency, unlike previous work that assumes random hyperparameter tuning, which over-emphasizes the tuning time, we propose to evaluate with a bandit hyperparameter tuning strategy.

Benchmarking Graph Mining

On $\ell_p$-norm Robustness of Ensemble Stumps and Trees

1 code implementation 20 Aug 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the problem of robustness verification and certified defense with respect to general $\ell_p$ norm perturbations for ensemble decision stumps and trees.

Improving the Speed and Quality of GAN by Adversarial Training

1 code implementation 7 Aug 2020 Jiachen Zhong, Xuanqing Liu, Cho-Jui Hsieh

Generative adversarial networks (GAN) have shown remarkable results in image generation tasks.

Image Generation

Multi-Stage Influence Function

no code implementations NeurIPS 2020 Hongge Chen, Si Si, Yang Li, Ciprian Chelba, Sanjiv Kumar, Duane Boning, Cho-Jui Hsieh

With this score, we can identify the pretraining examples in the pretraining task that contribute most to a prediction in the finetuning task.

Transfer Learning

What Does BERT with Vision Look At?

no code implementations ACL 2020 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang

Pre-trained visually grounded language models such as ViLBERT, LXMERT, and UNITER have achieved significant performance improvement on vision-and-language tasks but what they learn during pre-training remains unclear.

Language Modelling

Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble

1 code implementation 20 Jun 2020 Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Although neural networks have achieved prominent performance on many natural language processing (NLP) tasks, they are vulnerable to adversarial examples.

The Limit of the Batch Size

no code implementations 15 Jun 2020 Yang You, Yuhui Wang, Huan Zhang, Zhao Zhang, James Demmel, Cho-Jui Hsieh

For the first time we scale the batch size on ImageNet to at least an order of magnitude larger than all previous work, and provide detailed studies on the performance of many state-of-the-art optimization schemes under this setting.

Provably Robust Metric Learning

2 code implementations NeurIPS 2020 Lu Wang, Xuanqing Liu, Jinfeng Yi, Yuan Jiang, Cho-Jui Hsieh

Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied.

Metric Learning

An Efficient Algorithm For Generalized Linear Bandit: Online Stochastic Gradient Descent and Thompson Sampling

no code implementations 7 Jun 2020 Qin Ding, Cho-Jui Hsieh, James Sharpnack

A natural way to resolve this problem is to apply online stochastic gradient descent (SGD) so that the per-step time and memory complexity can be reduced to constant with respect to $t$, but a contextual bandit policy based on online SGD updates that balances exploration and exploitation has remained elusive.

Thompson Sampling

Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data

1 code implementation 11 May 2020 Lu Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Yuan Jiang

By constraining adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset, the spanning attack significantly improves the query efficiency of a wide variety of existing black-box attacks.
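Constraining perturbations to the span of an auxiliary unlabeled dataset amounts to projecting onto a low-dimensional subspace, so black-box queries only need to explore that subspace. A sketch using an SVD basis; the basis choice and rank `k` here are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def project_to_span(delta, aux, k=None):
    """Project a perturbation onto the subspace spanned by auxiliary samples.

    aux: (m, d) unlabeled examples; their (top-k) right singular vectors give
    an orthonormal basis of the search subspace. delta: (d,) perturbation."""
    _, _, vt = np.linalg.svd(aux, full_matrices=False)
    basis = vt if k is None else vt[:k]          # orthonormal rows
    return basis.T @ (basis @ delta)

aux = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])                # spans the x-y plane
delta = np.array([2.0, 3.0, 4.0])
proj = project_to_span(delta, aux)               # z-component is removed
```

A black-box attack would then search over coefficients in the `basis` coordinates instead of the full input dimension, which is where the query savings come from.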

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

4 code implementations NeurIPS 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh

Several works have shown this vulnerability via adversarial attacks, but existing approaches on improving the robustness of DRL under this setting have limited success and lack for theoretical principles.

reinforcement-learning Reinforcement Learning (RL)

Learning to Encode Position for Transformer with Continuous Dynamical Model

1 code implementation ICML 2020 Xuanqing Liu, Hsiang-Fu Yu, Inderjit Dhillon, Cho-Jui Hsieh

The main reason is that position information among input units is not inherently encoded, i.e., the models are permutation equivalent; this explains why all of the existing models are accompanied by a sinusoidal encoding/embedding layer at the input.

Inductive Bias Linguistic Acceptability +3

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

5 code implementations NeurIPS 2020 Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.

Quantization

CAT: Customized Adversarial Training for Improved Robustness

no code implementations 17 Feb 2020 Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh

Adversarial training has become one of the most effective methods for improving robustness of neural networks.

Robustness Verification for Transformers

1 code implementation ICLR 2020 Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh

Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding model behavior and obtaining safety guarantees.

Sentiment Analysis

Multiscale Non-stationary Stochastic Bandits

no code implementations 13 Feb 2020 Qin Ding, Cho-Jui Hsieh, James Sharpnack

Classic contextual bandit algorithms for linear models, such as LinUCB, assume that the reward distribution for an arm is modeled by a stationary linear regression.

regression

Stabilizing Differentiable Architecture Search via Perturbation-based Regularization

1 code implementation ICML 2020 Xiangning Chen, Cho-Jui Hsieh

Furthermore, we mathematically show that SDARTS implicitly regularizes the Hessian norm of the validation loss, which accounts for a smoother loss landscape and improved performance.

Adversarial Attack Neural Architecture Search

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

2 code implementations ICLR 2020 Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Li-Wei Wang

Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and computationally costly.

Overcoming Catastrophic Forgetting by Generative Regularization

no code implementations 3 Dec 2019 Patrick H. Chen, Wei Wei, Cho-Jui Hsieh, Bo Dai

In this paper, we propose a new method to overcome catastrophic forgetting by adding generative regularization to the Bayesian inference framework.

Bayesian Inference Continual Learning

GraphDefense: Towards Robust Graph Convolutional Networks

no code implementations 11 Nov 2019 Xiaoyun Wang, Xuanqing Liu, Cho-Jui Hsieh

Inspired by previous works on adversarial defense for deep neural networks, and especially the adversarial training algorithm, we propose a method called GraphDefense to defend against adversarial perturbations.

Adversarial Defense

MulCode: A Multiplicative Multi-way Model for Compressing Neural Language Model

no code implementations IJCNLP 2019 Yukun Ma, Patrick H. Chen, Cho-Jui Hsieh

For example, the input embedding and softmax matrices in the IWSLT-2014 German-to-English data set account for more than 80% of the total model parameters.

Language Modelling Machine Translation +2

Enhancing Certifiable Robustness via a Deep Model Ensemble

no code implementations 31 Oct 2019 Huan Zhang, Minhao Cheng, Cho-Jui Hsieh

We propose an algorithm to enhance certified robustness of a deep model ensemble by optimally weighting each base model.

Model Selection

Learning to Learn by Zeroth-Order Oracle

1 code implementation ICLR 2020 Yangjun Ruan, Yuanhao Xiong, Sashank Reddi, Sanjiv Kumar, Cho-Jui Hsieh

In the learning to learn (L2L) framework, we cast the design of optimization algorithms as a machine learning problem and use deep neural networks to learn the update rules.

Adversarial Attack

Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Class-Imbalanced Data

1 code implementation NeurIPS 2020 Utkarsh Ojha, Krishna Kumar Singh, Cho-Jui Hsieh, Yong Jae Lee

We propose a novel unsupervised generative model that learns to disentangle object identity from other low-level aspects in class-imbalanced data.

Representation Learning

SSE-PT: Sequential Recommendation Via Personalized Transformer

2 code implementations 25 Sep 2019 Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack

Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to widely used RNN and CNN in natural language processing, have allowed for better use of the temporal ordering of items that each user has engaged with.

 Ranked #1 on Recommendation Systems on MovieLens 1M (nDCG@10 metric)

Sequential Recommendation

Stochastically Controlled Compositional Gradient for the Composition problem

no code implementations 25 Sep 2019 Liu Liu, Ji Liu, Cho-Jui Hsieh, DaCheng Tao

The strategy is also accompanied by a mini-batch version of the proposed method that improves query complexity with respect to the size of the mini-batch.

SPROUT: Self-Progressing Robust Training

no code implementations 25 Sep 2019 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems.

Adversarial Robustness

LEARNING TO LEARN WITH BETTER CONVERGENCE

no code implementations 25 Sep 2019 Patrick H. Chen, Sashank Reddi, Sanjiv Kumar, Cho-Jui Hsieh

We consider the learning to learn problem, where the goal is to leverage deep learning models to automatically learn (iterative) optimization algorithms for training machine learning models.

Defending Against Adversarial Examples by Regularized Deep Embedding

no code implementations 25 Sep 2019 Yao Li, Martin Renqiang Min, Wenchao Yu, Cho-Jui Hsieh, Thomas Lee, Erik Kruus

Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples.

Adversarial Attack Adversarial Robustness

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack

1 code implementation ICLR 2020 Minhao Cheng, Simranjit Singh, Patrick Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh

We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where limited model queries are allowed and only the decision is provided to a queried data input.

Adversarial Attack Adversarial Robustness +1

BOSH: An Efficient Meta Algorithm for Decision-based Attacks

no code implementations 10 Sep 2019 Zhenxin Xiao, Puyudi Yang, Yuchen Jiang, Kai-Wei Chang, Cho-Jui Hsieh

Adversarial example generation has become a viable method for evaluating the robustness of a machine learning model.

Adversarial Attack

Natural Adversarial Sentence Generation with Gradient-based Perturbation

1 code implementation 6 Sep 2019 Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, Cho-Jui Hsieh

This work proposes a novel algorithm to generate natural language adversarial input for text classification models, in order to investigate the robustness of these models.

Sentence Embeddings Sentiment Analysis +2

Temporal Collaborative Ranking Via Personalized Transformer

2 code implementations 15 Aug 2019 Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack

Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to widely used RNN and CNN in natural language processing, have allowed us to make better use of the temporal ordering of items that each user has engaged with.

Collaborative Ranking

Efficient Neural Interaction Function Search for Collaborative Filtering

2 code implementations 28 Jun 2019 Quanming Yao, Xiangning Chen, James Kwok, Yong Li, Cho-Jui Hsieh

Motivated by the recent success of automated machine learning (AutoML), we propose in this paper the search for simple neural interaction functions (SIF) in CF.

AutoML Collaborative Filtering

Convergence of Adversarial Training in Overparametrized Neural Networks

no code implementations NeurIPS 2019 Ruiqi Gao, Tianle Cai, Haochuan Li, Li-Wei Wang, Cho-Jui Hsieh, Jason D. Lee

Neural networks are vulnerable to adversarial examples, i.e., inputs that are imperceptibly perturbed from natural data and yet incorrectly classified by the network.

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

2 code implementations ICLR 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh

In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.
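The forward IBP bounding pass mentioned above can be sketched for a toy two-layer ReLU network. The weights below are arbitrary illustrative values, and the tight CROWN backward pass is omitted; this only shows how interval bounds propagate layer by layer:

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Interval bound propagation through an affine layer y = W x + b.
    Splitting W into positive and negative parts tracks the worst case
    of each output coordinate over the input box [l, u]."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def ibp_relu(l, u):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return np.maximum(l, 0), np.maximum(u, 0)

# toy 2-layer network; weights are made-up illustrative values
W1, b1 = np.array([[1.0, -1.0], [2.0, 1.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

x, eps = np.array([0.5, 0.5]), 0.1
l, u = x - eps, x + eps                     # L_inf ball around the input
l, u = ibp_relu(*ibp_affine(l, u, W1, b1))  # forward bounding pass, layer 1
l, u = ibp_affine(l, u, W2, b2)             # layer 2 (output bounds)
print(l, u)                                 # certified interval for the output
```

CROWN-IBP combines these cheap forward bounds with a tighter linear-relaxation bound computed in a backward pass; the sketch shows only the IBP half.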

Robustness Verification of Tree-based Models

2 code implementations NeurIPS 2019 Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning, Cho-Jui Hsieh

We show that there is a simple linear time algorithm for verifying a single tree, and for tree ensembles, the verification problem can be cast as a max-clique problem on a multi-partite graph with bounded boxicity.

Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective

1 code implementation 10 Jun 2019 Lu Wang, Xuanqing Liu, Jin-Feng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh

Furthermore, we show that dual solutions for these QP problems could give us a valid lower bound of the adversarial perturbation that can be used for formal robustness verification, giving us a nice view of attack/verification for NN models.

ML-LOO: Detecting Adversarial Examples with Feature Attribution

no code implementations 8 Jun 2019 Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, Michael I. Jordan

Furthermore, we extend our method to include multi-layer feature attributions in order to tackle the attacks with mixed confidence levels.

Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise

1 code implementation 5 Jun 2019 Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh

In this paper, we propose a new continuous neural network framework called Neural Stochastic Differential Equation (Neural SDE) network, which naturally incorporates various commonly used regularization mechanisms based on random noise injection.

Evaluating and Enhancing the Robustness of Dialogue Systems: A Case Study on a Negotiation Agent

no code implementations NAACL 2019 Minhao Cheng, Wei Wei, Cho-Jui Hsieh

Moreover, we show that with adversarial training, we are able to improve the robustness of negotiation agents by 1.5 points on average against all our attacks.

Graph DNA: Deep Neighborhood Aware Graph Encoding for Collaborative Filtering

no code implementations 29 May 2019 Liwei Wu, Hsiang-Fu Yu, Nikhil Rao, James Sharpnack, Cho-Jui Hsieh

In this paper, we propose using Graph DNA, a novel Deep Neighborhood Aware graph encoding algorithm, for exploiting deeper neighborhood information.

Collaborative Filtering Recommendation Systems

Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers

2 code implementations NeurIPS 2019 Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack

We find that when used along with widely-used regularization methods such as weight decay and dropout, our proposed SSE can further reduce overfitting, which often leads to more favorable generalization results.

Knowledge Graphs Recommendation Systems

Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks

5 code implementations KDD 2019 Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, Cho-Jui Hsieh

Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy---using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16].

Graph Clustering Link Prediction +1
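A rough sketch of the Cluster-GCN batching idea, with a random node partition standing in for the METIS graph clustering the paper actually uses; names and the toy graph are illustrative assumptions:

```python
import numpy as np

def cluster_batches(adj, n_clusters, rng):
    """Toy stand-in for Cluster-GCN batching: partition nodes into clusters
    (randomly here; the paper uses METIS graph clustering), then restrict the
    adjacency matrix to each cluster's induced subgraph."""
    n = adj.shape[0]
    assign = rng.integers(0, n_clusters, size=n)   # cluster id per node
    for c in range(n_clusters):
        idx = np.where(assign == c)[0]
        yield idx, adj[np.ix_(idx, idx)]           # within-cluster edges only

rng = np.random.default_rng(0)
adj = (rng.random((100, 100)) < 0.05).astype(float)  # toy sparse graph
for idx, sub in cluster_batches(adj, n_clusters=4, rng=rng):
    # each GCN update now touches only |cluster| nodes instead of all 100,
    # so activations for the whole graph never need to be held in memory
    print(len(idx), sub.shape)
```

Dropping the between-cluster edges is what bounds the per-batch neighborhood expansion, at the cost of ignoring some links; the paper mitigates this by sampling multiple clusters per batch.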

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

no code implementations ICLR 2019 Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

BIG-bench Machine Learning

Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks

1 code implementation ICCV 2019 Jun-Ho Choi, Huan Zhang, Jun-Hyuk Kim, Cho-Jui Hsieh, Jong-Seok Lee

Single-image super-resolution aims to generate a high-resolution version of a low-resolution image, which serves as an essential component in many computer vision applications.

Image Super-Resolution

Efficient Contextual Representation Learning Without Softmax Layer

no code implementations 28 Feb 2019 Liunian Harold Li, Patrick H. Chen, Cho-Jui Hsieh, Kai-Wei Chang

Our framework reduces the time spent on the output layer to a negligible level, eliminates almost all the trainable parameters of the softmax layer and performs language modeling without truncating the vocabulary.

Dimensionality Reduction Language Modelling +1

Robust Decision Trees Against Adversarial Examples

3 code implementations 27 Feb 2019 Hongge Chen, Huan Zhang, Duane Boning, Cho-Jui Hsieh

Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue in tree-based models and how to make tree-based models robust against adversarial examples is still limited.

Adversarial Attack Adversarial Defense

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

3 code implementations NeurIPS 2019 Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang

This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification.

Large-Batch Training for LSTM and Beyond

1 code implementation 24 Jan 2019 Yang You, Jonathan Hseu, Chris Ying, James Demmel, Kurt Keutzer, Cho-Jui Hsieh

LEGW makes the Sqrt Scaling scheme useful in practice, and as a result we achieve much better results than the Linear Scaling learning rate scheme.

The Limitations of Adversarial Training and the Blind-Spot Attack

no code implementations ICLR 2019 Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S. Dhillon, Cho-Jui Hsieh

In our paper, we shed some light on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on the test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network.

Learning from Group Comparisons: Exploiting Higher Order Interactions

no code implementations NeurIPS 2018 Yao Li, Minhao Cheng, Kevin Fujii, Fushing Hsieh, Cho-Jui Hsieh

We study the problem of learning from group comparisons, with applications in predicting outcomes of sports and online games.

Block-wise Partitioning for Extreme Multi-label Classification

no code implementations 4 Nov 2018 Yuefeng Liang, Cho-Jui Hsieh, Thomas C. M. Lee

Extreme multi-label classification aims to learn a classifier that annotates an instance with a relevant subset of labels from an extremely large label set.

Classification Extreme Multi-Label Classification +1

Efficient Neural Network Robustness Certification with General Activation Functions

13 code implementations NeurIPS 2018 Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel

Finding minimum distortion of adversarial examples and thus certifying robustness in neural network classifiers for given data points is known to be a challenging problem.

Efficient Neural Network

Learning to Screen for Fast Softmax Inference on Large Vocabulary Neural Networks

no code implementations ICLR 2019 Patrick H. Chen, Si Si, Sanjiv Kumar, Yang Li, Cho-Jui Hsieh

The algorithm achieves an order of magnitude faster inference than the original softmax layer for predicting the top-$k$ words in various tasks such as beam search in machine translation or next-word prediction.

Machine Translation Translation
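The screening trick can be sketched as computing softmax logits only over a pre-screened candidate set instead of the full vocabulary. How candidates are chosen per context (the paper learns a clustering of hidden states) is abstracted into a fixed list here, and all names and sizes are illustrative assumptions:

```python
import numpy as np

def screened_topk(h, W, candidates, k):
    """Top-k words over a pre-screened candidate set: the dot products are
    computed for |candidates| rows of W, not the full vocabulary."""
    logits = W[candidates] @ h           # |candidates| dot products, not |V|
    order = np.argsort(-logits)[:k]      # indices into the candidate list
    return [candidates[i] for i in order]

rng = np.random.default_rng(0)
V, d = 10000, 32
W = rng.standard_normal((V, d))          # output embedding matrix
h = rng.standard_normal(d)               # context (hidden state) vector
candidates = list(range(200))            # assumed screen: 2% of the vocabulary
print(screened_topk(h, W, candidates, k=5))
```

If the screening step reliably keeps the true top-$k$ words in the candidate set, the result matches the full softmax argmax at a fraction of the cost.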

RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications

4 code implementations 28 Oct 2018 Huan Zhang, Pengchuan Zhang, Cho-Jui Hsieh

The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants and robustness to adversarial attacks.

Attack Graph Convolutional Networks by Adding Fake Nodes

no code implementations ICLR 2019 Xiaoyun Wang, Minhao Cheng, Joe Eaton, Cho-Jui Hsieh, Felix Wu

In this paper, we propose a new type of "fake node attacks" to attack GCNs by adding malicious fake nodes.

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm

1 code implementation 19 Oct 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel

We apply extreme value theory on the new formal robustness guarantee and the estimated robustness is called second-order CLEVER score.

Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network

1 code implementation ICLR 2019 Xuanqing Liu, Yao Li, Chongruo Wu, Cho-Jui Hsieh

Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of models in a scalable way.

Adversarial Defense

Stochastic Second-order Methods for Non-convex Optimization with Inexact Hessian and Gradient

no code implementations 26 Sep 2018 Liu Liu, Xuanqing Liu, Cho-Jui Hsieh, DaCheng Tao

Trust region and cubic regularization methods have demonstrated good performance in small scale non-convex optimization, showing the ability to escape from saddle points.

Second-order methods

Stochastically Controlled Stochastic Gradient for the Convex and Non-convex Composition problem

no code implementations 6 Sep 2018 Liu Liu, Ji Liu, Cho-Jui Hsieh, DaCheng Tao

In this paper, we consider the convex and non-convex composition problem with the structure $\frac{1}{n}\sum_{i=1}^n F_i(G(x))$, where $G(x)=\frac{1}{n}\sum_{j=1}^n G_j(x)$ is the inner function and $F_i(\cdot)$ is the outer function.

Fast Variance Reduction Method with Stochastic Batch Size

no code implementations ICML 2018 Xuanqing Liu, Cho-Jui Hsieh

In this paper we study a family of variance reduction methods with randomized batch size---at each step, the algorithm first randomly chooses the batch size and then selects a batch of samples to conduct a variance-reduced stochastic update.

Rob-GAN: Generator, Discriminator, and Adversarial Attacker

2 code implementations CVPR 2019 Xuanqing Liu, Cho-Jui Hsieh

Adversarial training is the technique used to improve the robustness of the discriminator by combining the adversarial attacker and the discriminator in the training phase.

Adversarial Attack Image Generation

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

1 code implementation 12 Jul 2018 Minhao Cheng, Thong Le, Pin-Yu Chen, Jin-Feng Yi, Huan Zhang, Cho-Jui Hsieh

We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

BIG-bench Machine Learning

Extreme Learning to Rank via Low Rank Assumption

no code implementations ICML 2018 Minhao Cheng, Ian Davidson, Cho-Jui Hsieh

We consider the setting where we wish to perform ranking for hundreds of thousands of users which is common in recommender systems and web search ranking.

Learning-To-Rank Recommendation Systems

GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking

no code implementations NeurIPS 2018 Patrick H. Chen, Si Si, Yang Li, Ciprian Chelba, Cho-Jui Hsieh

Model compression is essential for serving large deep neural nets on devices with limited resources or applications that require real-time responses.

Language Modelling Model Compression +1

Stochastic Zeroth-order Optimization via Variance Reduction method

no code implementations 30 May 2018 Liu Liu, Minhao Cheng, Cho-Jui Hsieh, DaCheng Tao

However, due to the variance in the search direction, the convergence rates and query complexities of existing methods suffer from a factor of $d$, where $d$ is the problem dimension.

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

1 code implementation 30 May 2018 Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng

Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.

Adversarial Robustness

GenAttack: Practical Black-box Attacks with Gradient-Free Optimization

3 code implementations 28 May 2018 Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, Mani Srivastava

Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.

Adversarial Attack Adversarial Robustness

Learning Word Embeddings for Low-resource Languages by PU Learning

1 code implementation 9 May 2018 Chao Jiang, Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang

In such a situation, the co-occurrence matrix is sparse as the co-occurrences of many word pairs are unobserved.

Towards Fast Computation of Certified Robustness for ReLU Networks

6 code implementations ICML 2018 Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, Luca Daniel

Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17].

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

1 code implementation 3 Mar 2018 Minhao Cheng, Jin-Feng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh

In this paper, we study the much more challenging problem of crafting adversarial examples for sequence-to-sequence (seq2seq) models, whose inputs are discrete text strings and outputs have an almost infinite number of possibilities.

Image Classification Machine Translation +2

SQL-Rank: A Listwise Approach to Collaborative Ranking

1 code implementation ICML 2018 Liwei Wu, Cho-Jui Hsieh, James Sharpnack

In this paper, we propose a listwise approach for constructing user-specific rankings in recommendation systems in a collaborative fashion.

Collaborative Ranking Recommendation Systems

History PCA: A New Algorithm for Streaming PCA

1 code implementation 15 Feb 2018 Puyudi Yang, Cho-Jui Hsieh, Jane-Ling Wang

In this paper we propose a new algorithm for streaming principal component analysis.

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

1 code implementation ICLR 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel

Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
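The sampling step behind a CLEVER-style score can be sketched as collecting batch maxima of gradient norms over random points in a ball around the input. The paper fits a reverse Weibull distribution to these maxima via extreme value theory; taking their maximum, as done below, is a crude stand-in, and all names and the toy function are illustrative assumptions:

```python
import numpy as np

def clever_style_estimate(grad_norm_fn, x, radius, n_batches=50, batch=100, rng=None):
    """Sample points uniformly in a ball around x, record the max gradient
    norm per batch, and return the max of the batch maxima as a rough local
    Lipschitz estimate (the paper instead fits a reverse Weibull to the maxima)."""
    rng = rng or np.random.default_rng(0)
    maxima = []
    for _ in range(n_batches):
        d = rng.standard_normal((batch, x.size))
        d /= np.linalg.norm(d, axis=1, keepdims=True)            # random directions
        d *= radius * rng.random((batch, 1)) ** (1.0 / x.size)   # uniform in the ball
        maxima.append(max(grad_norm_fn(x + di) for di in d))
    return max(maxima)

# f(x) = ||x||^2 / 2 has gradient x, so the true supremum of the gradient
# norm within radius r of x0 is ||x0|| + r; the estimate approaches it from below.
x0 = np.array([1.0, 0.0])
est = clever_style_estimate(lambda x: float(np.linalg.norm(x)), x0, radius=0.5)
print(round(est, 3))
```

Because the estimate is a sample maximum, it always lower-bounds the true local Lipschitz constant; the extreme value fit in the paper extrapolates toward the unseen supremum.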

Better Generalization by Efficient Trust Region Method

no code implementations ICLR 2018 Xuanqing Liu, Jason D. Lee, Cho-Jui Hsieh

Solving this subproblem is non-trivial---existing methods have only a sub-linear convergence rate.

A comparison of second-order methods for deep convolutional neural networks

no code implementations ICLR 2018 Patrick H. Chen, Cho-Jui Hsieh

Although many second-order methods have been proposed for training neural networks, most of the results were obtained on smaller single-layer fully connected networks, so we still cannot conclude whether they are useful for training deep convolutional networks.

Second-order methods

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning

2 code implementations ACL 2018 Hongge Chen, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Cho-Jui Hsieh

Our extensive experiments show that our algorithm can successfully craft visually-similar adversarial examples with randomly targeted captions or keywords, and the adversarial examples can be made highly transferable to other image captioning systems.

Image Captioning

Towards Robust Neural Networks via Random Self-ensemble

no code implementations ECCV 2018 Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh

In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) by combining two important concepts: {\bf randomness} and {\bf ensemble}.

ImageNet Training in Minutes

1 code implementation 14 Sep 2017 Yang You, Zhao Zhang, Cho-Jui Hsieh, James Demmel, Kurt Keutzer

If we can make full use of the supercomputer for DNN training, we should be able to finish the 90-epoch ResNet-50 training in one minute.

Playing the Game of 2048