Search Results for author: Cho-Jui Hsieh

Found 167 papers, 69 papers with code

On the Transferability of Adversarial Attacks against Neural Text Classifier

no code implementations EMNLP 2021 Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang

Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models.

Text Classification Tokenization
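The abstract above mentions a genetic algorithm that searches for an ensemble of models whose adversarial examples fool almost all existing models. As a hedged illustration only (not the paper's implementation), here is a minimal genetic algorithm over bit-masks of a model list; the `fitness` callable is a hypothetical stand-in for the fooling rate measured against the selected ensemble:

```python
import random

def ga_select_ensemble(models, fitness, pop_size=20, generations=40, seed=0):
    """Toy genetic algorithm: evolve bit-masks over `models` and return the
    subset with the highest `fitness` (a stand-in for the fooling rate of
    adversarial examples crafted against the selected ensemble)."""
    rng = random.Random(seed)
    n = len(models)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def score(mask):
        subset = [m for m, bit in zip(models, mask) if bit]
        return fitness(subset) if subset else float("-inf")

    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1          # single-bit mutation
            children.append(child)
        pop = survivors + children
    best = max(pop, key=score)
    return [m for m, bit in zip(models, best) if bit]
```

Because survivors are carried over unchanged, the best fitness in the population never decreases across generations.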

On Lp-norm Robustness of Ensemble Decision Stumps and Trees

no code implementations ICML 2020 Yihan Wang, huan zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the robustness verification and defense with respect to general $\ell_p$ norm perturbation for ensemble trees and stumps.

DRONE: Data-aware Low-rank Compression for Large NLP Models

no code implementations NeurIPS 2021 Pei-Hung Chen, Hsiang-Fu Yu, Inderjit Dhillon, Cho-Jui Hsieh

In addition to compressing standard models, our method can also be used on distilled BERT models to further improve the compression rate.

Low-rank compression Natural Language Inference
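The entry above concerns data-aware low-rank compression. As a hedged sketch of the data-aware idea (not the paper's exact DRONE algorithm), one can pick rank-r factors that reproduce the best rank-r approximation of the layer's outputs on calibration data, rather than of the weight matrix itself:

```python
import numpy as np

def data_aware_low_rank(W, X, rank):
    """Factor W (d x k) into A (d x r) and B (r x k) so that X @ A @ B equals
    the best rank-r approximation of the layer outputs Y = X @ W on the
    calibration data X. Sketch of the data-aware idea, not DRONE itself."""
    Y = X @ W
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    Q = Vt[:rank].T                      # top-r right singular vectors of Y
    return W @ Q, Q.T                    # A (d x r), B (r x k)
```

Storing A and B costs r(d + k) parameters instead of d*k; when the layer's outputs on real data concentrate near a rank-r subspace, the compression is nearly lossless even if W itself is full rank.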

A Review of Adversarial Attack and Defense for Classification Methods

1 code implementation 18 Nov 2021 Yao Li, Minhao Cheng, Cho-Jui Hsieh, Thomas C. M. Lee

Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples; i.e., examples that are carefully crafted to fool a well-trained classification model while remaining indistinguishable from natural data to humans.

Adversarial Attack Classification

Can Vision Transformers Perform Convolution?

no code implementations 2 Nov 2021 Shanda Li, Xiangning Chen, Di He, Cho-Jui Hsieh

Several recent studies have demonstrated that attention-based networks, such as Vision Transformer (ViT), can outperform Convolutional Neural Networks (CNNs) on several computer vision tasks without using convolutional layers.

Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction

4 code implementations 29 Oct 2021 Eli Chien, Wei-Cheng Chang, Cho-Jui Hsieh, Hsiang-Fu Yu, Jiong Zhang, Olgica Milenkovic, Inderjit S Dhillon

We also provide a theoretical analysis that justifies the use of XMC over link prediction and motivates integrating XR-Transformers, a powerful method for solving XMC problems, into the GIANT framework.

Extreme Multi-Label Classification Fine-tuning +4

How and When Adversarial Robustness Transfers in Knowledge Distillation?

no code implementations 22 Oct 2021 Rulin Shao, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

Our comprehensive analysis shows several novel insights that (1) With KDIGA, students can preserve or even exceed the adversarial robustness of the teacher model, even when their models have fundamentally different architectures; (2) KDIGA enables robustness to transfer to pre-trained students, such as KD from an adversarially trained ResNet to a pre-trained ViT, without loss of clean accuracy; and (3) Our derived local linearity bounds for characterizing adversarial robustness in KD are consistent with the empirical results.

Adversarial Robustness Knowledge Distillation +1

Adversarial Attack across Datasets

no code implementations 13 Oct 2021 Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Cho-Jui Hsieh

It has been observed that Deep Neural Networks (DNNs) are vulnerable to transfer attacks in the query-free black-box setting.

Adversarial Attack Image Classification

Sharpness-Aware Minimization in Large-Batch Training: Training Vision Transformer In Minutes

no code implementations 29 Sep 2021 Yong Liu, Siqi Mai, Xiangning Chen, Cho-Jui Hsieh, Yang You

Large-batch training is an important direction for distributed machine learning, which can improve the utilization of large-scale clusters and therefore accelerate the training process.

FastEnsemble: Benchmarking and Accelerating Ensemble-based Uncertainty Estimation for Image-to-Image Translation

no code implementations 29 Sep 2021 Xuanqing Liu, Sara Imboden, Marie Payne, Neil Lin, Cho-Jui Hsieh

In addition, we introduce FastEnsemble, a fast ensemble method that requires less than 8% of the full-ensemble training time to generate a new ensemble member.

Image-to-Image Translation Self-Driving Cars +1

Training Meta-Surrogate Model for Transferable Adversarial Attack

no code implementations 5 Sep 2021 Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Cho-Jui Hsieh

In this paper, we tackle this problem from a novel angle -- instead of using the original surrogate models, can we obtain a Meta-Surrogate Model (MSM) such that attacks to this model can be easier transferred to other models?

Adversarial Attack

Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

1 code implementation EMNLP 2021 Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, Cho-Jui Hsieh

Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.

RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection

no code implementations ICCV 2021 Yongming Rao, Benlin Liu, Yi Wei, Jiwen Lu, Cho-Jui Hsieh, Jie Zhou

In particular, we propose to generate random layouts of a scene by making use of the objects in the synthetic CAD dataset and learn the 3D scene representation by applying object-level contrastive learning on two random scenes generated from the same set of synthetic objects.

2D Object Detection 3D Object Detection +3

Rethinking Architecture Selection in Differentiable NAS

1 code implementation ICLR 2021 Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh

Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods for its search efficiency and simplicity, accomplished by jointly optimizing the model weight and architecture parameters in a weight-sharing supernet via gradient-based algorithms.

Neural Architecture Search

Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble

no code implementations ACL 2021 Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples.

Label Disentanglement in Partition-based Extreme Multilabel Classification

no code implementations NeurIPS 2021 Xuanqing Liu, Wei-Cheng Chang, Hsiang-Fu Yu, Cho-Jui Hsieh, Inderjit S. Dhillon

Partition-based methods are increasingly used in extreme multi-label classification (XMC) problems due to their scalability to large output spaces (e.g., millions or more).

Classification Extreme Multi-Label Classification +1

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification

no code implementations NeurIPS 2021 Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

We develop $\beta$-CROWN, a new bound propagation based method that can fully encode neuron split constraints in branch-and-bound (BaB) based complete verification via optimizable parameters $\beta$.
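β-CROWN itself propagates optimizable linear bounds through branch-and-bound splits; as a hedged illustration of the much simpler baseline that such bound-propagation methods improve on, here is interval bound propagation (IBP) through one affine layer and a ReLU:

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate box bounds l <= x <= u through x @ W + b: the center passes
    through the layer exactly, while the radius is scaled by |W|."""
    c, r = (l + u) / 2.0, (u - l) / 2.0
    c2 = c @ W + b
    r2 = r @ np.abs(W)
    return c2 - r2, c2 + r2

def ibp_relu(l, u):
    """ReLU is monotone, so the bounds pass through elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)
```

These interval bounds are sound but loose; tightening them (e.g., with per-neuron linear relaxations and split constraints, as in the paper) is what makes complete verification tractable.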

Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms

no code implementations 5 Jun 2021 Qin Ding, Yi-Wei Liu, Cho-Jui Hsieh, James Sharpnack

To tackle this problem, we first propose a two-layer bandit structure for auto-tuning the exploration parameter and further generalize it to the Syndicated Bandits framework, which can learn multiple hyper-parameters dynamically in a contextual bandit environment.

Recommendation Systems

Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks

no code implementations 5 Jun 2021 Qin Ding, Cho-Jui Hsieh, James Sharpnack

In this work, we provide the first robust bandit algorithm for stochastic linear contextual bandit setting under a fully adaptive and omniscient attack.

Multi-Armed Bandits Recommendation Systems

When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations

2 code implementations 3 Jun 2021 Xiangning Chen, Cho-Jui Hsieh, Boqing Gong

Vision Transformers (ViTs) and MLPs signal further efforts on replacing hand-wired features or inductive biases with general-purpose neural architectures.

 Ranked #1 on Domain Generalization on ImageNet-R (Top 1 Accuracy metric)

Domain Generalization Fine-Grained Image Classification +1

DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification

1 code implementation NeurIPS 2021 Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh

Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically based on the input.

Image Classification
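The abstract above describes pruning redundant tokens progressively based on the input. In the paper a lightweight prediction module learns per-token keep probabilities end-to-end (with Gumbel-softmax to keep the selection differentiable); as a hedged, non-differentiable sketch of the selection step only, one can keep the top-scoring tokens:

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the ceil(keep_ratio * n) highest-scoring tokens, preserving their
    original order. `scores` stands in for the keep-probabilities that a
    learned prediction head would produce (hypothetical here)."""
    n = tokens.shape[0]
    k = max(1, int(np.ceil(keep_ratio * n)))
    keep = np.sort(np.argsort(scores)[-k:])   # top-k indices, original order
    return tokens[keep], keep
```

Applying such a step between transformer stages shrinks the sequence length, so the quadratic attention cost drops at every later layer.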

Concurrent Adversarial Learning for Large-Batch Training

no code implementations 1 Jun 2021 Yong Liu, Xiangning Chen, Minhao Cheng, Cho-Jui Hsieh, Yang You

Current methods usually use extensive data augmentation to increase the batch size, but we found that the performance gain from data augmentation decreases as batch size increases, and data augmentation becomes insufficient after a certain point.

Data Augmentation

Balancing Robustness and Sensitivity using Feature Contrastive Learning

no code implementations 19 May 2021 Seungyeon Kim, Daniel Glasner, Srikumar Ramalingam, Cho-Jui Hsieh, Kishore Papineni, Sanjiv Kumar

It is generally believed that robust training of extremely large networks is critical to their success in real-world applications.

Contrastive Learning

Detecting Adversarial Examples with Bayesian Neural Network

1 code implementation 18 May 2021 Yao Li, Tongyi Tang, Cho-Jui Hsieh, Thomas C. M. Lee

In this paper, we propose a new framework to detect adversarial examples, motivated by the observation that random components can improve the smoothness of predictors and make it easier to simulate the output distribution of a deep neural network.

2.5D Visual Relationship Detection

1 code implementation 26 Apr 2021 Yu-Chuan Su, Soravit Changpinyo, Xiangning Chen, Sathish Thoppay, Cho-Jui Hsieh, Lior Shapira, Radu Soricut, Hartwig Adam, Matthew Brown, Ming-Hsuan Yang, Boqing Gong

To enable progress on this task, we create a new dataset consisting of 220K human-annotated 2.5D relationships among 512K objects from 11K images.

Depth Estimation Visual Relationship Detection

On the Faithfulness Measurements for Model Interpretations

no code implementations 18 Apr 2021 Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang

Recent years have witnessed the emergence of a variety of post-hoc interpretations that aim to uncover how natural language processing (NLP) models make predictions.

Adversarial Robustness Dependency Parsing +1

Fast Certified Robust Training with Short Warmup

1 code implementation NeurIPS 2021 Zhouxing Shi, Yihan Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh

Although state-of-the-art (SOTA) methods, including interval bound propagation (IBP) and CROWN-IBP, have per-batch training complexity similar to standard neural network training, they usually use a long warmup schedule with hundreds or thousands of epochs to reach SOTA performance and are thus still costly.

Adversarial Defense

On the Adversarial Robustness of Vision Transformers

1 code implementation 29 Mar 2021 Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

This work provides the first and comprehensive study on the robustness of vision transformers (ViTs) against adversarial perturbations.

Adversarial Robustness

Robust and Accurate Object Detection via Adversarial Learning

1 code implementation CVPR 2021 Xiangning Chen, Cihang Xie, Mingxing Tan, Li Zhang, Cho-Jui Hsieh, Boqing Gong

Data augmentation has become a de facto component for training high-performance deep image classifiers, but its potential is under-explored for object detection.

AutoML Data Augmentation +2

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Robustness Verification

3 code implementations NeurIPS 2021 Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

Compared to the typically tightest but very costly semidefinite programming (SDP) based incomplete verifiers, we obtain higher verified accuracy with three orders of magnitude less verification time.

Adversarial Attack

Local Critic Training for Model-Parallel Learning of Deep Neural Networks

1 code implementation 3 Feb 2021 Hojung Lee, Cho-Jui Hsieh, Jong-Seok Lee

We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

Robust Reinforcement Learning on State Observations with Learned Optimal Adversary

2 code implementations ICLR 2021 Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

We study the robustness of reinforcement learning (RL) with adversarially perturbed state observations, which aligns with the setting of many adversarial attacks on deep reinforcement learning (DRL) and is also important for rolling out real-world RL agents under unpredictable sensing noise.

Adversarial Attack Continuous Control

Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks

no code implementations 18 Jan 2021 Seong-Eun Moon, Chun-Jui Chen, Cho-Jui Hsieh, Jane-Ling Wang, Jong-Seok Lee

Convolutional neural networks (CNNs) are widely used to recognize the user's state through electroencephalography (EEG) signals.

Classification EEG +2

Robust Text CAPTCHAs Using Adversarial Examples

no code implementations 7 Jan 2021 Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh

At the second stage, we design and apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.

Adversarial Attack Optical Character Recognition

Learning to Learn with Smooth Regularization

no code implementations 1 Jan 2021 Yuanhao Xiong, Cho-Jui Hsieh

Recent decades have witnessed great prosperity of deep learning in tackling various problems such as classification and decision making.

Decision Making Few-Shot Learning

Data-aware Low-Rank Compression for Large NLP Models

no code implementations 1 Jan 2021 Patrick Chen, Hsiang-Fu Yu, Inderjit S Dhillon, Cho-Jui Hsieh

In this paper, we observe that the learned representation of each layer lies in a low-dimensional space.

Fine-tuning Low-rank compression +1

Adversarial Masking: Towards Understanding Robustness Trade-off for Generalization

no code implementations 1 Jan 2021 Minhao Cheng, Zhe Gan, Yu Cheng, Shuohang Wang, Cho-Jui Hsieh, Jingjing Liu

By incorporating different feature maps after the masking, we can distill better features to help model generalization.

Learning Contextual Perturbation Budgets for Training Robust Neural Networks

no code implementations 1 Jan 2021 Jing Xu, Zhouxing Shi, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Liwei Wang

We also demonstrate that the perturbation budget generator can produce semantically-meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in a given image.

Self-Progressing Robust Training

1 code implementation 22 Dec 2020 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy machine learning systems.

Adversarial Robustness

Learning to Stop: Dynamic Simulation Monte-Carlo Tree Search

no code implementations 14 Dec 2020 Li-Cheng Lan, Meng-Yu Tsai, Ti-Rong Wu, I-Chen Wu, Cho-Jui Hsieh

This implies that a significant amount of resources can be saved if we are able to stop searching earlier when we are confident in the current search result.

Atari Games

Voting based ensemble improves robustness of defensive models

no code implementations 28 Nov 2020 Devvrit, Minhao Cheng, Cho-Jui Hsieh, Inderjit Dhillon

Several previous attempts tackled this problem by ensembling the soft-label predictions, which have been proven vulnerable to the latest attack methods.

On the Transferability of Adversarial Attacks against Neural Text Classifier

no code implementations 17 Nov 2020 Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang

Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models.

Text Classification Tokenization

An Efficient Adversarial Attack for Tree Ensembles

1 code implementation NeurIPS 2020 Chong Zhang, Huan Zhang, Cho-Jui Hsieh

We study the problem of efficient adversarial attacks on tree based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs).

Adversarial Attack

How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers

no code implementations 19 Oct 2020 Yuanhao Xiong, Xuanqing Liu, Li-Cheng Lan, Yang You, Si Si, Cho-Jui Hsieh

For end-to-end efficiency, unlike previous work that assumes random hyperparameter tuning, which over-emphasizes the tuning time, we propose to evaluate with a bandit hyperparameter tuning strategy.

Graph Mining

On $\ell_p$-norm Robustness of Ensemble Stumps and Trees

1 code implementation 20 Aug 2020 Yihan Wang, Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh

In this paper, we study the problem of robustness verification and certified defense with respect to general $\ell_p$ norm perturbations for ensemble decision stumps and trees.

Improving the Speed and Quality of GAN by Adversarial Training

1 code implementation 7 Aug 2020 Jiachen Zhong, Xuanqing Liu, Cho-Jui Hsieh

Generative adversarial networks (GAN) have shown remarkable results in image generation tasks.

Image Generation

Multi-Stage Influence Function

no code implementations NeurIPS 2020 Hongge Chen, Si Si, Yang Li, Ciprian Chelba, Sanjiv Kumar, Duane Boning, Cho-Jui Hsieh

With this score, we can identify the pretraining examples in the pretraining task that contribute most to a prediction in the finetuning task.

Transfer Learning

What Does BERT with Vision Look At?

no code implementations ACL 2020 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang

Pre-trained visually grounded language models such as ViLBERT, LXMERT, and UNITER have achieved significant performance improvement on vision-and-language tasks but what they learn during pre-training remains unclear.

Language Modelling

Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble

1 code implementation 20 Jun 2020 Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Although neural networks have achieved prominent performance on many natural language processing (NLP) tasks, they are vulnerable to adversarial examples.

The Limit of the Batch Size

no code implementations 15 Jun 2020 Yang You, Yuhui Wang, Huan Zhang, Zhao Zhang, James Demmel, Cho-Jui Hsieh

For the first time we scale the batch size on ImageNet to at least a magnitude larger than all previous work, and provide detailed studies on the performance of many state-of-the-art optimization schemes under this setting.

Provably Robust Metric Learning

1 code implementation NeurIPS 2020 Lu Wang, Xuanqing Liu, Jinfeng Yi, Yuan Jiang, Cho-Jui Hsieh

Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied.

Metric Learning

An Efficient Algorithm For Generalized Linear Bandit: Online Stochastic Gradient Descent and Thompson Sampling

no code implementations 7 Jun 2020 Qin Ding, Cho-Jui Hsieh, James Sharpnack

A natural way to resolve this problem is to apply online stochastic gradient descent (SGD) so that the per-step time and memory complexity can be reduced to constant with respect to $t$, but a contextual bandit policy based on online SGD updates that balances exploration and exploitation has remained elusive.
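The abstract above sketches the tension the paper resolves: per-step constant-time online SGD updates versus a policy that still explores. As a hedged toy illustration only (not the paper's algorithm or its guarantees), the following couples one SGD step per round on a logistic reward model with a Thompson-sampling-style perturbation of the iterate for exploration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_bandit(arms, reward_fn, T, lr=0.5, noise=0.1, seed=0):
    """Toy generalized linear bandit loop: maintain a single SGD iterate for a
    logistic reward model, perturb it for exploration, play the greedy arm
    under the perturbed estimate, then take one SGD step on the reward."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(arms.shape[1])
    for _ in range(T):
        theta_tilde = theta + noise * rng.standard_normal(theta.shape)
        x = arms[np.argmax(arms @ theta_tilde)]            # greedy on sample
        r = reward_fn(x)
        theta = theta + lr * (r - sigmoid(x @ theta)) * x  # one SGD step
    return theta
```

Per round this touches only O(d) memory, in contrast to maximum-likelihood approaches whose cost grows with the number of observed rounds.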

Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data

1 code implementation 11 May 2020 Lu Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Yuan Jiang

By constraining adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset, the spanning attack significantly improves the query efficiency of a wide variety of existing black-box attacks.
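The subspace-constrained sampling described above can be sketched in a few lines. This is a hedged illustration of the core idea only: orthonormalize the unlabeled examples with QR and draw perturbations inside their span; the full spanning attack plugs such samples into an existing black-box attack's query loop.

```python
import numpy as np

def spanning_perturbation(unlabeled, eps, rng):
    """Sample a perturbation of L2 norm eps restricted to the subspace
    spanned by the rows of `unlabeled` (m x d, m << d)."""
    Q, _ = np.linalg.qr(unlabeled.T)        # d x m orthonormal basis
    coeff = rng.standard_normal(Q.shape[1])
    delta = Q @ coeff
    return eps * delta / np.linalg.norm(delta)
```

Because the search now lives in an m-dimensional subspace instead of the full d-dimensional input space, far fewer queries are needed to estimate a useful attack direction.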

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

4 code implementations NeurIPS 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh

Several works have shown this vulnerability via adversarial attacks, but existing approaches to improving the robustness of DRL under this setting have limited success and lack theoretical principles.

Learning to Encode Position for Transformer with Continuous Dynamical Model

1 code implementation ICML 2020 Xuanqing Liu, Hsiang-Fu Yu, Inderjit Dhillon, Cho-Jui Hsieh

The main reason is that position information among input units is not inherently encoded, i.e., the models are permutation equivalent; this problem explains why all of the existing models are accompanied by a sinusoidal encoding/embedding layer at the input.

Language understanding Linguistic Acceptability +3

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

5 code implementations NeurIPS 2020 Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.

Quantization

CAT: Customized Adversarial Training for Improved Robustness

no code implementations 17 Feb 2020 Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh

Adversarial training has become one of the most effective methods for improving robustness of neural networks.

Robustness Verification for Transformers

1 code implementation ICLR 2020 Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh

Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding model behavior and obtaining safety guarantees.

Sentiment Analysis

Multiscale Non-stationary Stochastic Bandits

no code implementations 13 Feb 2020 Qin Ding, Cho-Jui Hsieh, James Sharpnack

Classic contextual bandit algorithms for linear models, such as LinUCB, assume that the reward distribution for an arm is modeled by a stationary linear regression.

Stabilizing Differentiable Architecture Search via Perturbation-based Regularization

1 code implementation ICML 2020 Xiangning Chen, Cho-Jui Hsieh

Furthermore, we mathematically show that SDARTS implicitly regularizes the Hessian norm of the validation loss, which accounts for a smoother loss landscape and improved performance.

Adversarial Attack Neural Architecture Search

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

2 code implementations ICLR 2020 Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang

Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly.

Overcoming Catastrophic Forgetting by Generative Regularization

no code implementations 3 Dec 2019 Patrick H. Chen, Wei Wei, Cho-Jui Hsieh, Bo Dai

In this paper, we propose a new method to overcome catastrophic forgetting by adding generative regularization to Bayesian inference framework.

Bayesian Inference Continual Learning

GraphDefense: Towards Robust Graph Convolutional Networks

no code implementations 11 Nov 2019 Xiaoyun Wang, Xuanqing Liu, Cho-Jui Hsieh

Inspired by previous work on adversarial defense for deep neural networks, especially the adversarial training algorithm, we propose a method called GraphDefense to defend against adversarial perturbations.

Adversarial Defense

MulCode: A Multiplicative Multi-way Model for Compressing Neural Language Model

no code implementations IJCNLP 2019 Yukun Ma, Patrick H. Chen, Cho-Jui Hsieh

For example, the input embedding and Softmax matrices in the IWSLT-2014 German-to-English data set account for more than 80% of the total model parameters.

Language Modelling Machine Translation +2

Enhancing Certifiable Robustness via a Deep Model Ensemble

no code implementations 31 Oct 2019 Huan Zhang, Minhao Cheng, Cho-Jui Hsieh

We propose an algorithm to enhance certified robustness of a deep model ensemble by optimally weighting each base model.

Model Selection

Learning to Learn by Zeroth-Order Oracle

1 code implementation ICLR 2020 Yangjun Ruan, Yuanhao Xiong, Sashank Reddi, Sanjiv Kumar, Cho-Jui Hsieh

In the learning to learn (L2L) framework, we cast the design of optimization algorithms as a machine learning problem and use deep neural networks to learn the update rules.

Adversarial Attack

Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Class-Imbalanced Data

no code implementations NeurIPS 2020 Utkarsh Ojha, Krishna Kumar Singh, Cho-Jui Hsieh, Yong Jae Lee

We propose a novel unsupervised generative model that learns to disentangle object identity from other low-level aspects in class-imbalanced data.

Representation Learning

Defending Against Adversarial Examples by Regularized Deep Embedding

no code implementations 25 Sep 2019 Yao Li, Martin Renqiang Min, Wenchao Yu, Cho-Jui Hsieh, Thomas Lee, Erik Kruus

Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples.

Adversarial Attack Adversarial Robustness

LEARNING TO LEARN WITH BETTER CONVERGENCE

no code implementations 25 Sep 2019 Patrick H. Chen, Sashank Reddi, Sanjiv Kumar, Cho-Jui Hsieh

We consider the learning to learn problem, where the goal is to leverage deep learning models to automatically learn (iterative) optimization algorithms for training machine learning models.

Stochastically Controlled Compositional Gradient for the Composition problem

no code implementations 25 Sep 2019 Liu Liu, Ji Liu, Cho-Jui Hsieh, Dacheng Tao

The strategy is also accompanied by a mini-batch version of the proposed method that improves query complexity with respect to the size of the mini-batch.

SPROUT: Self-Progressing Robust Training

no code implementations 25 Sep 2019 Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems.

Adversarial Robustness

SSE-PT: Sequential Recommendation Via Personalized Transformer

1 code implementation 25 Sep 2019 Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack

Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to widely used RNN and CNN in natural language processing, have allowed for better use of the temporal ordering of items that each user has engaged with.

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack

1 code implementation ICLR 2020 Minhao Cheng, Simranjit Singh, Patrick Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh

We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where limited model queries are allowed and only the decision is provided to a queried data input.

Adversarial Attack Adversarial Robustness +1
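In the hard-label setting described above, the attack optimizes g(theta), the distance to the decision boundary along direction theta. Sign-OPT's key trick is estimating the sign of a directional derivative of g with a single model query. As a hedged sketch of that estimator (with `is_adversarial` a hypothetical hard-label oracle, not the paper's full attack loop):

```python
import numpy as np

def sign_grad_estimate(x, theta, g_theta, is_adversarial, n_dirs, eps, rng):
    """Estimate the gradient of g(theta) = distance to the decision boundary
    along theta/||theta||, one query per random direction: if stepping
    g(theta) along the perturbed direction is still NOT adversarial, the
    boundary moved farther away, so the directional derivative is positive."""
    grad = np.zeros_like(theta)
    for _ in range(n_dirs):
        u = rng.standard_normal(theta.shape)
        new_dir = theta + eps * u
        probe = x + g_theta * new_dir / np.linalg.norm(new_dir)
        sign = 1.0 if not is_adversarial(probe) else -1.0
        grad += sign * u
    return grad / n_dirs
```

Averaging sign-weighted random directions yields a descent direction for g at one query each, instead of the many queries a binary-search value estimate of g(theta + eps * u) would cost.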

BOSH: An Efficient Meta Algorithm for Decision-based Attacks

no code implementations 10 Sep 2019 Zhenxin Xiao, Puyudi Yang, Yuchen Jiang, Kai-Wei Chang, Cho-Jui Hsieh

Adversarial example generation has become a viable method for evaluating the robustness of a machine learning model.

Adversarial Attack

Natural Adversarial Sentence Generation with Gradient-based Perturbation

1 code implementation 6 Sep 2019 Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, Cho-Jui Hsieh

This work proposes a novel algorithm to generate natural language adversarial input for text classification models, in order to investigate the robustness of these models.

Sentence Embeddings Sentiment Analysis +1

Temporal Collaborative Ranking Via Personalized Transformer

2 code implementations 15 Aug 2019 Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack

Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to widely used RNN and CNN in natural language processing, have allowed us to make better use of the temporal ordering of items that each user has engaged with.

Collaborative Ranking

Efficient Neural Interaction Function Search for Collaborative Filtering

2 code implementations 28 Jun 2019 Quanming Yao, Xiangning Chen, James Kwok, Yong Li, Cho-Jui Hsieh

Motivated by the recent success of automated machine learning (AutoML), we propose in this paper the search for simple neural interaction functions (SIF) in CF.

AutoML Collaborative Filtering

Convergence of Adversarial Training in Overparametrized Neural Networks

no code implementations NeurIPS 2019 Ruiqi Gao, Tianle Cai, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, Jason D. Lee

Neural networks are vulnerable to adversarial examples, i.e., inputs that are imperceptibly perturbed from natural data and yet incorrectly classified by the network.

Towards Stable and Efficient Training of Verifiably Robust Neural Networks

2 code implementations ICLR 2020 Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Bo Li, Duane Boning, Cho-Jui Hsieh

In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass.

Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective

1 code implementation 10 Jun 2019 Lu Wang, Xuanqing Liu, Jinfeng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh

Furthermore, we show that dual solutions for these QP problems could give us a valid lower bound of the adversarial perturbation that can be used for formal robustness verification, giving us a nice view of attack/verification for NN models.

Robustness Verification of Tree-based Models

2 code implementations NeurIPS 2019 Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning, Cho-Jui Hsieh

We show that there is a simple linear time algorithm for verifying a single tree, and for tree ensembles, the verification problem can be cast as a max-clique problem on a multi-partite graph with bounded boxicity.

ML-LOO: Detecting Adversarial Examples with Feature Attribution

no code implementations 8 Jun 2019 Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, Michael I. Jordan

Furthermore, we extend our method to include multi-layer feature attributions in order to tackle the attacks with mixed confidence levels.

Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise

no code implementations 5 Jun 2019 Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh

In this paper, we propose a new continuous neural network framework called Neural Stochastic Differential Equation (Neural SDE) network, which naturally incorporates various commonly used regularization mechanisms based on random noise injection.

Evaluating and Enhancing the Robustness of Dialogue Systems: A Case Study on a Negotiation Agent

no code implementations NAACL 2019 Minhao Cheng, Wei Wei, Cho-Jui Hsieh

Moreover, we show that with adversarial training, we are able to improve the robustness of negotiation agents by 1.5 points on average against all our attacks.

Graph DNA: Deep Neighborhood Aware Graph Encoding for Collaborative Filtering

no code implementations 29 May 2019 Liwei Wu, Hsiang-Fu Yu, Nikhil Rao, James Sharpnack, Cho-Jui Hsieh

In this paper, we propose using Graph DNA, a novel Deep Neighborhood Aware graph encoding algorithm, for exploiting deeper neighborhood information.

Collaborative Filtering Recommendation Systems

Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers

2 code implementations NeurIPS 2019 Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack

We find that when used along with widely-used regularization methods such as weight decay and dropout, our proposed SSE can further reduce overfitting, which often leads to more favorable generalization results.

Knowledge Graphs Recommendation Systems

Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks

4 code implementations KDD 2019 Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, Cho-Jui Hsieh

Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy---using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16].

Graph Clustering Link Prediction +1

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

no code implementations ICLR 2019 Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.
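The key reformulation behind this optimization-based attack is that the distance to the decision boundary along a direction can be estimated with only hard-label queries, via binary search. The one-dimensional "classifier" below is a stand-in for a real model, used purely to illustrate the query pattern.

```python
# Sketch of the hard-label boundary-distance estimate: the attack then
# minimizes this distance over directions theta using only such queries.

def predict(x):
    return 1 if x > 0.7 else 0          # hard labels only, no scores

def boundary_distance(x0, theta, hi=10.0, tol=1e-6):
    """Binary-search the smallest t with predict(x0 + t*theta) != predict(x0)."""
    y0, lo = predict(x0), 0.0
    if predict(x0 + hi * theta) == y0:
        return float("inf")             # no label flip within the search radius
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if predict(x0 + mid * theta) == y0:
            lo = mid
        else:
            hi = mid
    return hi

g = boundary_distance(x0=0.2, theta=1.0)
```

Minimizing `g` over directions (e.g., with a zeroth-order method) yields the smallest adversarial perturbation found, which is what makes the attack query-efficient compared to random search.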

Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks

1 code implementation ICCV 2019 Jun-Ho Choi, Huan Zhang, Jun-Hyuk Kim, Cho-Jui Hsieh, Jong-Seok Lee

Single-image super-resolution aims to generate a high-resolution version of a low-resolution image, which serves as an essential component in many computer vision applications.

Image Super-Resolution

Efficient Contextual Representation Learning Without Softmax Layer

no code implementations 28 Feb 2019 Liunian Harold Li, Patrick H. Chen, Cho-Jui Hsieh, Kai-Wei Chang

Our framework reduces the time spent on the output layer to a negligible level, eliminates almost all the trainable parameters of the softmax layer and performs language modeling without truncating the vocabulary.

Dimensionality Reduction Language Modelling +1

Robust Decision Trees Against Adversarial Examples

3 code implementations 27 Feb 2019 Hongge Chen, Huan Zhang, Duane Boning, Cho-Jui Hsieh

Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue in tree-based models and how to make tree-based models robust against adversarial examples is still limited.

Adversarial Attack Adversarial Defense

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

2 code implementations NeurIPS 2019 Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang

This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification.

Large-Batch Training for LSTM and Beyond

1 code implementation 24 Jan 2019 Yang You, Jonathan Hseu, Chris Ying, James Demmel, Kurt Keutzer, Cho-Jui Hsieh

LEGW enables the Sqrt Scaling scheme to be useful in practice, and as a result we achieve much better results than the Linear Scaling learning rate scheme.

The Limitations of Adversarial Training and the Blind-Spot Attack

no code implementations ICLR 2019 Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S. Dhillon, Cho-Jui Hsieh

In our paper, we shed some light on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on the test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network.

Learning from Group Comparisons: Exploiting Higher Order Interactions

no code implementations NeurIPS 2018 Yao Li, Minhao Cheng, Kevin Fujii, Fushing Hsieh, Cho-Jui Hsieh

We study the problem of learning from group comparisons, with applications in predicting outcomes of sports and online games.

Block-wise Partitioning for Extreme Multi-label Classification

no code implementations 4 Nov 2018 Yuefeng Liang, Cho-Jui Hsieh, Thomas C. M. Lee

Extreme multi-label classification aims to learn a classifier that annotates an instance with a relevant subset of labels from an extremely large label set.

Classification Extreme Multi-Label Classification +2

Efficient Neural Network Robustness Certification with General Activation Functions

13 code implementations NeurIPS 2018 Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel

Finding minimum distortion of adversarial examples and thus certifying robustness in neural network classifiers for given data points is known to be a challenging problem.

Learning to Screen for Fast Softmax Inference on Large Vocabulary Neural Networks

no code implementations ICLR 2019 Patrick H. Chen, Si Si, Sanjiv Kumar, Yang Li, Cho-Jui Hsieh

The algorithm achieves an order of magnitude faster inference than the original softmax layer for predicting the top-$k$ words in various tasks such as beam search in machine translation or next-word prediction.

Machine Translation Translation

RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications

4 code implementations 28 Oct 2018 Huan Zhang, Pengchuan Zhang, Cho-Jui Hsieh

The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants and robustness to adversarial attacks.

Attack Graph Convolutional Networks by Adding Fake Nodes

no code implementations ICLR 2019 Xiaoyun Wang, Minhao Cheng, Joe Eaton, Cho-Jui Hsieh, Felix Wu

In this paper, we propose a new type of "fake node attacks" to attack GCNs by adding malicious fake nodes.

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm

1 code implementation 19 Oct 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel

We apply extreme value theory on the new formal robustness guarantee and the estimated robustness is called second-order CLEVER score.

Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network

1 code implementation ICLR 2019 Xuanqing Liu, Yao Li, Chongruo Wu, Cho-Jui Hsieh

Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of models in a scalable way.

Adversarial Defense

Stochastic Second-order Methods for Non-convex Optimization with Inexact Hessian and Gradient

no code implementations 26 Sep 2018 Liu Liu, Xuanqing Liu, Cho-Jui Hsieh, Dacheng Tao

Trust region and cubic regularization methods have demonstrated good performance in small scale non-convex optimization, showing the ability to escape from saddle points.

Stochastically Controlled Stochastic Gradient for the Convex and Non-convex Composition problem

no code implementations 6 Sep 2018 Liu Liu, Ji Liu, Cho-Jui Hsieh, Dacheng Tao

In this paper, we consider the convex and non-convex composition problem with the structure $\frac{1}{n}\sum\nolimits_{i = 1}^n {{F_i}( {G( x )} )}$, where $G( x )=\frac{1}{n}\sum\nolimits_{j = 1}^n {{G_j}( x )} $ is the inner function, and $F_i(\cdot)$ is the outer function.

Fast Variance Reduction Method with Stochastic Batch Size

no code implementations ICML 2018 Xuanqing Liu, Cho-Jui Hsieh

In this paper we study a family of variance reduction methods with randomized batch size---at each step, the algorithm first randomly chooses the batch size and then selects a batch of samples to conduct a variance-reduced stochastic update.
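The idea can be sketched with an SVRG-style update where the batch size is redrawn at every step. This is an illustrative variant under my own assumptions about the sampling scheme, not the paper's exact algorithm.

```python
# Variance-reduced update with a randomized batch size: the correction term
# (batch grad at x) - (batch grad at snapshot) + (full grad at snapshot)
# keeps the direction unbiased regardless of the batch size drawn.
import random

def svrg_random_batch(grads_i, x0, lr=0.1, epochs=20, seed=0):
    """grads_i[i](x) returns the gradient of component f_i at scalar x."""
    rng = random.Random(seed)
    n, x = len(grads_i), x0
    for _ in range(epochs):
        snapshot = x
        full_grad = sum(g(snapshot) for g in grads_i) / n
        for _ in range(n):
            b = rng.randint(1, n)                 # random batch size each step
            batch = rng.sample(range(n), b)
            v = sum(grads_i[i](x) - grads_i[i](snapshot) for i in batch) / b + full_grad
            x -= lr * v
    return x

# f_i(x) = (x - a_i)^2 / 2, so the minimizer of the average is mean(a_i).
a = [1.0, 2.0, 3.0, 4.0]
grads = [(lambda ai: (lambda x: x - ai))(ai) for ai in a]
xstar = svrg_random_batch(grads, x0=0.0)
```

On this quadratic the correction term cancels the per-sample noise exactly, which is the mechanism variance reduction exploits in general.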

Rob-GAN: Generator, Discriminator, and Adversarial Attacker

2 code implementations CVPR 2019 Xuanqing Liu, Cho-Jui Hsieh

Adversarial training is a technique used to improve the robustness of the discriminator by combining an adversarial attacker and the discriminator in the training phase.

Adversarial Attack Image Generation

Query-Efficient Hard-label Black-box Attack:An Optimization-based Approach

1 code implementation 12 Jul 2018 Minhao Cheng, Thong Le, Pin-Yu Chen, Jin-Feng Yi, Huan Zhang, Cho-Jui Hsieh

We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions.

Extreme Learning to Rank via Low Rank Assumption

no code implementations ICML 2018 Minhao Cheng, Ian Davidson, Cho-Jui Hsieh

We consider the setting where we wish to perform ranking for hundreds of thousands of users, which is common in recommender systems and web search ranking.

Learning-To-Rank Recommendation Systems

GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking

no code implementations NeurIPS 2018 Patrick H. Chen, Si Si, Yang Li, Ciprian Chelba, Cho-Jui Hsieh

Model compression is essential for serving large deep neural nets on devices with limited resources or applications that require real-time responses.

Language Modelling Model Compression +1

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

1 code implementation 30 May 2018 Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng

Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.

Adversarial Robustness

Stochastic Zeroth-order Optimization via Variance Reduction method

no code implementations 30 May 2018 Liu Liu, Minhao Cheng, Cho-Jui Hsieh, Dacheng Tao

However, due to the variance in the search direction, the convergence rates and query complexities of existing methods suffer from a factor of $d$, where $d$ is the problem dimension.

GenAttack: Practical Black-box Attacks with Gradient-Free Optimization

2 code implementations 28 May 2018 Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, Mani Srivastava

Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.
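The gradient-free loop behind a genetic attack can be sketched in a few lines: a population of candidate perturbations evolves by selection, crossover, and mutation against a black-box fitness. The scalar "input" and fitness below are toy stand-ins, not GenAttack's image-space encoding or its adaptive parameter scheme.

```python
# Toy genetic-algorithm loop: elitist selection, averaging crossover,
# Gaussian mutation, minimizing a black-box fitness by queries alone.
import random

def gen_attack(fitness, pop_size=20, gens=100, mut=0.1, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-1, 1) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                 # selection: keep best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2                      # crossover
            child += rng.gauss(0.0, mut)             # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = gen_attack(fitness=lambda d: abs(d - 0.42))
```

Because only fitness evaluations are needed, each generation costs `pop_size` queries, which is where the query savings over gradient-estimation attacks come from.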

Learning Word Embeddings for Low-resource Languages by PU Learning

1 code implementation 9 May 2018 Chao Jiang, Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang

In such a situation, the co-occurrence matrix is sparse as the co-occurrences of many word pairs are unobserved.

Towards Fast Computation of Certified Robustness for ReLU Networks

6 code implementations ICML 2018 Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, Luca Daniel

Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17].

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

1 code implementation 3 Mar 2018 Minhao Cheng, Jin-Feng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh

In this paper, we study the much more challenging problem of crafting adversarial examples for sequence-to-sequence (seq2seq) models, whose inputs are discrete text strings and outputs have an almost infinite number of possibilities.

Image Classification Machine Translation +2

SQL-Rank: A Listwise Approach to Collaborative Ranking

1 code implementation ICML 2018 Liwei Wu, Cho-Jui Hsieh, James Sharpnack

In this paper, we propose a listwise approach for constructing user-specific rankings in recommendation systems in a collaborative fashion.

Collaborative Ranking Recommendation Systems

History PCA: A New Algorithm for Streaming PCA

1 code implementation 15 Feb 2018 Puyudi Yang, Cho-Jui Hsieh, Jane-Ling Wang

In this paper we propose a new algorithm for streaming principal component analysis.
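The History PCA update itself is not reproduced here; for context, the classic streaming-PCA baseline such algorithms improve on is Oja's rule, sketched below on synthetic 2-D data with a dominant direction. Learning rate and sample counts are illustrative choices.

```python
# Oja's rule for the top principal component: one pass over the stream,
# rank-one update toward the data covariance's leading eigenvector,
# renormalizing after each step.
import math, random

def oja(stream, dim, lr=0.05, seed=0):
    rng = random.Random(seed)
    w = [rng.gauss(0, 1) for _ in range(dim)]
    for x in stream:
        proj = sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + lr * proj * (xi - proj * wi) for wi, xi in zip(w, x)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        w = [wi / norm for wi in w]
    return w

# Data with dominant variance along (1, 0).
rng = random.Random(1)
stream = [(rng.gauss(0, 3.0), rng.gauss(0, 0.3)) for _ in range(2000)]
w = oja(stream, dim=2)
```

The recovered `w` aligns (up to sign) with the high-variance axis; streaming methods like this use O(dim) memory per update, independent of the number of samples.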

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

1 code implementation ICLR 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel

Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
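The sampling step behind a CLEVER-style score can be sketched as follows: estimate a local Lipschitz constant by taking the maximum gradient norm over random points in a ball around the input (the full method fits an extreme value distribution to these maxima, which is omitted here). The function `f` below is a toy with a known gradient, not a neural network.

```python
# CLEVER-style local Lipschitz estimate by sampling gradient norms in a
# ball; a robustness lower bound then follows as margin / Lipschitz.
import random

def lipschitz_estimate(grad_norm, x0, radius, n_samples=500, seed=0):
    rng = random.Random(seed)
    best = 0.0
    for _ in range(n_samples):
        x = x0 + rng.uniform(-radius, radius)   # uniform in a 1-D ball
        best = max(best, grad_norm(x))
    return best

# f(x) = x^2 has |f'(x)| = 2|x|; on [1, 3] the true local Lipschitz constant is 6.
lip = lipschitz_estimate(grad_norm=lambda x: 2 * abs(x), x0=2.0, radius=1.0)
```

The sampled maximum approaches the true constant from below; the extreme-value fit in CLEVER corrects for exactly this finite-sample bias.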

A comparison of second-order methods for deep convolutional neural networks

no code implementations ICLR 2018 Patrick H. Chen, Cho-Jui Hsieh

Although many second-order methods have been proposed to train neural networks, most of the results were obtained on smaller single-layer fully connected networks, so we still cannot conclude whether they are useful for training deep convolutional networks.

Better Generalization by Efficient Trust Region Method

no code implementations ICLR 2018 Xuanqing Liu, Jason D. Lee, Cho-Jui Hsieh

Solving this subproblem is non-trivial---existing methods have only a sub-linear convergence rate.

Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning

2 code implementations ACL 2018 Hongge Chen, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Cho-Jui Hsieh

Our extensive experiments show that our algorithm can successfully craft visually-similar adversarial examples with randomly targeted captions or keywords, and the adversarial examples can be made highly transferable to other image captioning systems.

Image Captioning

Towards Robust Neural Networks via Random Self-ensemble

no code implementations ECCV 2018 Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh

In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) by combining two important concepts: {\bf randomness} and {\bf ensemble}.
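The two concepts can be combined in a few lines: inject random noise into the forward pass and average predictions over several noisy passes. The "model" below is a toy scorer and the noise is placed on the input for simplicity; RSE injects noise into intermediate layers.

```python
# Random Self-Ensemble sketch: each inference pass sees fresh noise, and
# the final prediction averages over n_ensemble noisy passes.
import random

def rse_predict(score, x, sigma=0.3, n_ensemble=50, seed=0):
    rng = random.Random(seed)
    votes = 0.0
    for _ in range(n_ensemble):
        noisy = x + rng.gauss(0.0, sigma)   # randomness: fresh noise per pass
        votes += score(noisy)               # ensemble: accumulate predictions
    return votes / n_ensemble

score = lambda z: 1.0 if z > 0 else 0.0     # hard decision per noisy pass
p = rse_predict(score, x=0.5)
```

The averaged score varies smoothly with the input even though each pass is a hard decision, which is one intuition for why the randomness hampers gradient-based attacks.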

ImageNet Training in Minutes

1 code implementation 14 Sep 2017 Yang You, Zhao Zhang, Cho-Jui Hsieh, James Demmel, Kurt Keutzer

If we can make full use of the supercomputer for DNN training, we should be able to finish the 90-epoch ResNet-50 training in one minute.

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

1 code implementation 13 Sep 2017 Pin-Yu Chen, Yash Sharma, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.

An inexact subsampled proximal Newton-type method for large-scale machine learning

no code implementations 28 Aug 2017 Xuanqing Liu, Cho-Jui Hsieh, Jason D. Lee, Yuekai Sun

We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an $\epsilon$-suboptimal point in $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa d})\log(\frac{1}{\epsilon}))$ FLOPS, where $n$ is number of samples, $d$ is feature dimension, and $\kappa$ is the condition number.

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

5 code implementations 14 Aug 2017 Pin-Yu Chen, Huan Zhang, Yash Sharma, Jin-Feng Yi, Cho-Jui Hsieh

However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.
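The core of the zeroth-order estimate is a symmetric finite difference on the model's scalar attack objective, requiring only query access. The quadratic loss below is a stand-in for a black-box DNN's objective; ZOO additionally uses coordinate-wise stochastic updates and dimension-reduction tricks not shown here.

```python
# Coordinate-wise symmetric-difference gradient estimate at the heart of
# zeroth-order optimization: (f(x + h*e_i) - f(x - h*e_i)) / (2h).

def zoo_gradient(loss, x, h=1e-4):
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((loss(xp) - loss(xm)) / (2 * h))
    return grad

loss = lambda z: (z[0] - 1.0) ** 2 + 2.0 * (z[1] + 0.5) ** 2
g = zoo_gradient(loss, [0.0, 0.0])
# true gradient at the origin is (-2.0, 2.0)
```

Each coordinate costs two queries, which motivates the coordinate-sampling and autoencoder-based dimension reduction used to scale such attacks to images.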

Adversarial Attack Adversarial Defense +3

GPU-acceleration for Large-scale Tree Boosting

3 code implementations 26 Jun 2017 Huan Zhang, Si Si, Cho-Jui Hsieh

In this paper, we present a novel massively parallel algorithm for accelerating the decision tree building procedure on GPUs (Graphics Processing Units), which is a crucial step in Gradient Boosted Decision Tree (GBDT) and random forests training.

Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent

1 code implementation NeurIPS 2017 Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu

On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.
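One D-PSGD-style iteration can be sketched as: each worker averages parameters with its ring neighbors (gossip), then applies a local stochastic gradient step, with no central parameter server. The scalar quadratics and step sizes below are toy assumptions, not the paper's setup.

```python
# Decentralized parallel SGD sketch: gossip averaging over a ring followed
# by a local gradient step; workers hold heterogeneous local objectives.

def dpsgd(local_grads, steps=200, lr=0.05):
    n = len(local_grads)
    x = [0.0] * n                         # one scalar parameter per worker
    for _ in range(steps):
        # gossip: average with ring neighbors using uniform mixing weights
        mixed = [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3 for i in range(n)]
        # local step: each worker uses only its own stochastic gradient
        x = [mixed[i] - lr * local_grads[i](mixed[i]) for i in range(n)]
    return x

# Worker i minimizes (x - a_i)^2 / 2; the consensus optimum is mean(a_i) = 2.0.
a = [1.0, 2.0, 3.0]
grads = [(lambda ai: (lambda x: x - ai))(ai) for ai in a]
x = dpsgd(grads)
```

Each round communicates only with graph neighbors, which is why such schemes win over centralized averaging when bandwidth is low or latency is high.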

Scalable Demand-Aware Recommendation

no code implementations NeurIPS 2017 Jinfeng Yi, Cho-Jui Hsieh, Kush Varshney, Lijun Zhang, Yao Li

In particular for durable goods, time utility is a function of inter-purchase duration within product category because consumers are unlikely to purchase two items in the same category in close temporal succession.

Asynchronous Parallel Greedy Coordinate Descent

no code implementations NeurIPS 2016 Yang You, Xiangru Lian, Ji Liu, Hsiang-Fu Yu, Inderjit S. Dhillon, James Demmel, Cho-Jui Hsieh

In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bound constraints.

A Greedy Approach for Budgeted Maximum Inner Product Search

no code implementations NeurIPS 2017 Hsiang-Fu Yu, Cho-Jui Hsieh, Qi Lei, Inderjit S. Dhillon

Maximum Inner Product Search (MIPS) is an important task in many machine learning applications such as the prediction phase of a low-rank matrix factorization model for a recommender system.
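For context, the brute-force baseline the budgeted greedy method improves on scores every candidate and keeps the arg-max; the candidate vectors below are illustrative. The paper's contribution is avoiding this full scan under a query budget.

```python
# Exact Maximum Inner Product Search by full scan: O(n * d) per query.

def exact_mips(query, candidates):
    best_i, best_score = -1, float("-inf")
    for i, h in enumerate(candidates):
        s = sum(q * hj for q, hj in zip(query, h))   # inner product
        if s > best_score:
            best_i, best_score = i, s
    return best_i, best_score

# In a recommender, candidates are item factors and the query is a user factor.
items = [(1.0, 0.0), (0.6, 0.8), (0.0, 1.0)]
idx, score = exact_mips((0.5, 0.9), items)
```

Note that MIPS is not nearest-neighbor search: the largest inner product need not come from the closest vector, which is why dedicated budgeted algorithms are needed.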

Recommendation Systems

Communication-Efficient Parallel Block Minimization for Kernel Machines

no code implementations 5 Aug 2016 Cho-Jui Hsieh, Si Si, Inderjit S. Dhillon

Kernel machines often yield superior predictive performance on various tasks; however, they suffer from severe computational challenges.

Matrix Completion with Noisy Side Information

no code implementations NeurIPS 2015 Kai-Yang Chiang, Cho-Jui Hsieh, Inderjit S. Dhillon

Moreover, we study the effect of general features in theory, and show that by using our model, the sample complexity can still be lower than that of matrix completion as long as the features are sufficiently informative.

Matrix Completion

Sparse Linear Programming via Primal and Dual Augmented Coordinate Descent

no code implementations NeurIPS 2015 Ian En-Hsu Yen, Kai Zhong, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon

Over the past decades, Linear Programming (LP) has been widely used in different areas and considered as one of the mature technologies in numerical optimization.

A Scalable Asynchronous Distributed Algorithm for Topic Modeling

1 code implementation 16 Dec 2014 Hsiang-Fu Yu, Cho-Jui Hsieh, Hyokun Yun, S. V. N. Vishwanathan, Inderjit S. Dhillon

Learning meaningful topic models with massive document collections which contain millions of documents and billions of tokens is challenging because of two reasons: First, one needs to deal with a large number of topics (typically in the order of thousands).

Topic Models

QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models

no code implementations NeurIPS 2014 Cho-Jui Hsieh, Inderjit S. Dhillon, Pradeep K. Ravikumar, Stephen Becker, Peder A. Olsen

In this paper, we develop a family of algorithms for optimizing “superposition-structured” or “dirty” statistical estimators for high-dimensional problems involving the minimization of the sum of a smooth loss function with a hybrid regularization.

Model Selection Multi-Task Learning

Constant Nullspace Strong Convexity and Fast Convergence of Proximal Methods under High-Dimensional Settings

no code implementations NeurIPS 2014 Ian En-Hsu Yen, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon

State of the art statistical estimators for high-dimensional problems take the form of regularized, and hence non-smooth, convex programs.

Fast Prediction for Large-Scale Kernel Machines

no code implementations NeurIPS 2014 Cho-Jui Hsieh, Si Si, Inderjit S. Dhillon

Second, we provide a new theoretical analysis bounding the error of the solution computed using the Nyström kernel approximation method, and show that the error is related to the weighted k-means objective function where the weights are given by the model computed from the original kernel.

General Classification

PU Learning for Matrix Completion

no code implementations 22 Nov 2014 Cho-Jui Hsieh, Nagarajan Natarajan, Inderjit S. Dhillon

For the first case, we propose a "shifted matrix completion" method that recovers M using only a subset of indices corresponding to ones, while for the second case, we propose a "biased matrix completion" method that recovers the (thresholded) binary matrix.

Link Prediction Matrix Completion +1

NOMAD: Non-locking, stOchastic Multi-machine algorithm for Asynchronous and Decentralized matrix completion

1 code implementation 1 Dec 2013 Hyokun Yun, Hsiang-Fu Yu, Cho-Jui Hsieh, S. V. N. Vishwanathan, Inderjit Dhillon

One of the key features of NOMAD is that the ownership of a variable is asynchronously transferred between processors in a decentralized fashion.

Distributed, Parallel, and Cluster Computing

Large Scale Distributed Sparse Precision Estimation

no code implementations NeurIPS 2013 Huahua Wang, Arindam Banerjee, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon

We consider the problem of sparse precision matrix estimation in high dimensions using the CLIME estimator, which has several desirable theoretical properties.

BIG & QUIC: Sparse Inverse Covariance Estimation for a Million Variables

no code implementations NeurIPS 2013 Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, Pradeep K. Ravikumar, Russell Poldrack

The l1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix even under high-dimensional settings.

A Divide-and-Conquer Solver for Kernel Support Vector Machines

no code implementations 4 Nov 2013 Cho-Jui Hsieh, Si Si, Inderjit S. Dhillon

We show theoretically that the support vectors identified by the subproblem solution are likely to be support vectors of the entire kernel SVM problem, provided that the problem is partitioned appropriately by kernel clustering.

Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation

no code implementations NeurIPS 2011 Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, Pradeep Ravikumar

The L1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix, or alternatively the underlying graph structure of a Gaussian Markov Random Field, from very limited samples.
