no code implementations • ICML 2020 • Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly
In this paper, we study the problem of constrained min-max optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.
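For reference, a minimal numpy sketch of the kind of query-only gradient estimator that such black-box (zeroth-order) methods build on; the quadratic objective, sample count, and smoothing parameter here are illustrative assumptions, not details from the paper:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_samples=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages mu-smoothed directional derivatives along random Gaussian
    directions, using only function evaluations (no gradient access).
    """
    rng = np.random.default_rng(rng)
    d = x.size
    grad = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        grad += (f(x + mu * u) - f(x)) / mu * u
    return grad / num_samples

# Toy check: for f(x) = ||x||^2 the true gradient is 2x.
f = lambda x: float(np.dot(x, x))
x = np.array([1.0, -2.0])
g = zo_gradient(f, x, num_samples=5000, rng=0)
```

With enough samples the estimate concentrates around the true gradient, which is why query access alone suffices for optimization.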
1 code implementation • 1 May 2022 • Yong Xie, Dakuo Wang, Pin-Yu Chen, JinJun Xiong, Sijia Liu, Sanmi Koyejo
More and more investors and machine learning models rely on social media (e.g., Twitter and Reddit) to gather real-time information and sentiment to predict stock price movements.
no code implementations • 15 Apr 2022 • Quanfu Fan, Yilai Li, Yuguang Yao, John Cohn, Sijia Liu, Seychelle M. Vos, Michael A. Cianfrocco
Single-particle cryo-electron microscopy (cryo-EM) has become one of the mainstream structural biology techniques because of its ability to determine high-resolution structures of dynamic bio-molecules.
1 code implementation • 29 Mar 2022 • Vishal Asnani, Xi Yin, Tal Hassner, Sijia Liu, Xiaoming Liu
That is, a template-protected real image and its manipulated version are more easily discriminated from each other than the original real image and its manipulated counterpart.
1 code implementation • ICLR 2022 • Yimeng Zhang, Yuguang Yao, Jinghan Jia, JinFeng Yi, Mingyi Hong, Shiyu Chang, Sijia Liu
To tackle this problem, we next propose to prepend an autoencoder (AE) to a given (black-box) model so that DS can be trained using variance-reduced ZO optimization.
1 code implementation • ICLR 2022 • Yifan Gong, Yuguang Yao, Yize Li, Yimeng Zhang, Xiaoming Liu, Xue Lin, Sijia Liu
However, carefully crafted, tiny adversarial perturbations are difficult to recover by optimizing a unilateral RED objective.
1 code implementation • ICLR 2022 • Tianshu Huang, Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang
Selecting an appropriate optimizer for a given problem is of major interest for researchers and practitioners.
no code implementations • 15 Feb 2022 • Pin-Yu Chen, Sijia Liu
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.
no code implementations • 21 Jan 2022 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.
no code implementations • 28 Dec 2021 • Bingqing Song, Haoran Sun, Wenqiang Pu, Sijia Liu, Mingyi Hong
We then provide a series of theoretical results to further understand the properties of the two approaches.
1 code implementation • 23 Dec 2021 • Yihua Zhang, Guanhua Zhang, Prashant Khanduri, Mingyi Hong, Shiyu Chang, Sijia Liu
We first show that the commonly-used Fast-AT is equivalent to using a stochastic gradient algorithm to solve a linearized BLO problem involving a sign operation.
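As a hedged illustration of that sign operation (using a toy logistic model, not the paper's actual setup), the linearized inner maximization of adversarial training has the closed-form solution delta = eps * sign(grad_x loss), which is exactly the Fast-AT/FGSM step:

```python
import numpy as np

def fgsm_perturbation(w, b, x, y, eps):
    """One FGSM-style step for a logistic model p = sigmoid(w @ x + b).

    The inner maximization is linearized around x and solved in closed
    form: delta = eps * sign(grad_x loss).
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad_x = (p - y) * w          # d(cross-entropy)/dx for a linear logit
    return eps * np.sign(grad_x)

w = np.array([0.5, -1.0]); b = 0.0
x = np.array([1.0, 1.0]); y = 1.0
delta = fgsm_perturbation(w, b, x, y, eps=0.1)
```

Each component of the perturbation moves a full eps in the direction that increases the loss, regardless of the gradient's magnitude.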
no code implementations • 8 Dec 2021 • Ching-Yun Ko, Jeet Mohapatra, Sijia Liu, Pin-Yu Chen, Luca Daniel, Lily Weng
With the integrated framework, we achieve up to a 6% improvement in standard accuracy and a 17% improvement in robust accuracy.
no code implementations • 22 Nov 2021 • Yifan Gong, Geng Yuan, Zheng Zhan, Wei Niu, Zhengang Li, Pu Zhao, Yuxuan Cai, Sijia Liu, Bin Ren, Xue Lin, Xulong Tang, Yanzhi Wang
Weight pruning is an effective model compression technique to tackle the challenges of achieving real-time deep neural network (DNN) inference on mobile devices.
1 code implementation • NeurIPS 2021 • Lijie Fan, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Chuang Gan
We show that AdvCL is able to enhance cross-task robustness transferability without loss of model accuracy and finetuning efficiency.
no code implementations • ICCV 2021 • Sung-En Chang, Yanyu Li, Mengshu Sun, Weiwen Jiang, Sijia Liu, Yanzhi Wang, Xue Lin
Specifically, this is the first effort to assign mixed quantization schemes and multiple precisions within layers -- among rows of the DNN weight matrix, for simplified operations in hardware inference, while preserving accuracy.
1 code implementation • NeurIPS 2021 • Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, Siyue Wang, Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin
Systematic evaluations of accuracy, training speed, and memory footprint are conducted, where the proposed MEST framework consistently outperforms representative SOTA works.
no code implementations • 20 Oct 2021 • Sijia Liu, Andrew Wen, LiWei Wang, Huan He, Sunyang Fu, Robert Miller, Andrew Williams, Daniel Harris, Ramakanth Kavuluru, Mei Liu, Noor Abu-el-rub, Dalton Schutte, Rui Zhang, Masoud Rouhizadeh, John D. Osborne, Yongqun He, Umit Topaloglu, Stephanie S Hong, Joel H Saltz, Thomas Schaffter, Emily Pfaff, Christopher G. Chute, Tim Duong, Melissa A. Haendel, Rafael Fuentes, Peter Szolovits, Hua Xu, Hongfang Liu, and the National COVID Cohort Collaborative Natural Language Processing Subgroup
Although we use COVID-19 as a use case in this effort, our framework is general enough to be applied to other domains of interest in clinical NLP.
no code implementations • 12 Oct 2021 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of non-pruned weights in the hidden layer.
no code implementations • 29 Sep 2021 • Quanfu Fan, Kaidi Xu, Chun-Fu Chen, Sijia Liu, Gaoyuan Zhang, David Daniel Cox, Xue Lin
Physical adversarial attacks apply carefully crafted adversarial perturbations onto real objects to maliciously alter the prediction of object classifiers or detectors.
no code implementations • 29 Sep 2021 • Wang Zhang, Lam M. Nguyen, Subhro Das, Pin-Yu Chen, Sijia Liu, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng
In verification-based robust training, existing methods use relaxation-based techniques to bound the worst-case performance of neural networks under a given perturbation.
no code implementations • ICLR 2022 • Prashant Khanduri, Haibo Yang, Mingyi Hong, Jia Liu, Hoi To Wai, Sijia Liu
We analyze the optimization and the generalization performance of the proposed framework for the $\ell_2$ loss.
no code implementations • ICLR 2022 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.
1 code implementation • 15 Sep 2021 • Chen Fan, Parikshit Ram, Sijia Liu
The key enabling technique is to interpret MAML as a bilevel optimization (BLO) problem and to leverage sign-based SGD (signSGD) as the lower-level optimizer of BLO.
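A minimal sketch of the signSGD update referred to above, applied to a toy quadratic objective (the objective and step size are illustrative, not from the paper):

```python
import numpy as np

def signsgd_step(x, grad, lr=0.01):
    """One signSGD step: move a fixed amount along the sign of the
    (stochastic) gradient, discarding its magnitude."""
    return x - lr * np.sign(grad)

# Minimize f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x = np.array([0.3, -0.2])
for _ in range(100):
    x = signsgd_step(x, x, lr=0.01)
```

Because each coordinate moves by exactly lr per step, the iterate reaches a neighborhood of the minimizer of radius roughly lr and then oscillates there, which is the usual behavior of sign-based methods with a fixed step size.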
no code implementations • 4 Jul 2021 • Ao Liu, Xiaoyu Chen, Sijia Liu, Lirong Xia, Chuang Gan
The advantages of our Renyi-Robust-Smooth (an RDP-based interpretation method) are threefold.
1 code implementation • NeurIPS 2021 • Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang
Based on our analysis, we summarize a guideline for parameter settings with regard to specific architecture characteristics, which we hope will catalyze research progress on the lottery ticket hypothesis.
1 code implementation • 27 Jun 2021 • Ren Wang, Tianqi Chen, Philip Yao, Sijia Liu, Indika Rajapakse, Alfred Hero
K-Nearest Neighbor (kNN)-based deep learning methods have been applied to many applications due to their simplicity and geometric interpretability.
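As a hedged sketch of the kNN building block those methods share (plain numpy, Euclidean distance, synthetic features standing in for deep embeddings):

```python
import numpy as np

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify a query point by majority vote among its k nearest
    training points in feature space (Euclidean distance)."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(np.bincount(train_labels[nearest]).argmax())

feats = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
pred = knn_predict(feats, labels, np.array([0.05, 0.05]), k=3)
```

The geometric interpretability mentioned in the abstract comes from this structure: the prediction is fully explained by which training points fall inside the query's neighborhood.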
no code implementations • 30 May 2021 • Wei Niu, Zhenglun Kong, Geng Yuan, Weiwen Jiang, Jiexiong Guan, Caiwen Ding, Pu Zhao, Sijia Liu, Bin Ren, Yanzhi Wang
In this paper, we propose a compression-compilation co-design framework that can guarantee the identified model to meet both resource and real-time specifications of mobile devices.
no code implementations • 28 Apr 2021 • Zhuoyun Li, Changhong Zhong, Sijia Liu, Ruixuan Wang, Wei-Shi Zheng
To reduce the forgetting of old knowledge, particularly the earliest-learned knowledge, and to improve overall continual learning performance, we propose a simple yet effective fusion mechanism that includes all previously learned feature extractors in the intelligent model.
no code implementations • 25 Mar 2021 • Yi Sun, Abel Valente, Sijia Liu, Dakuo Wang
Prior works on formalizing explanations of a graph neural network (GNN) focus on a single use case - to preserve the prediction results through identifying important edges and nodes.
1 code implementation • ICLR 2021 • Shashank Srikant, Sijia Liu, Tamara Mitrovska, Shiyu Chang, Quanfu Fan, Gaoyuan Zhang, Una-May O'Reilly
We further show that our formulation is better at training models that are robust to adversarial attacks.
1 code implementation • 2 Mar 2021 • Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu
In this paper, we propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
no code implementations • 25 Feb 2021 • Chi Zhang, Jinghan Jia, Burhaneddin Yaman, Steen Moeller, Sijia Liu, Mingyi Hong, Mehmet Akçakaya
Although deep learning (DL) has received much attention in accelerated MRI, recent studies suggest small perturbations may lead to instabilities in DL-based reconstructions, leading to concern for their clinical application.
1 code implementation • ICLR 2021 • Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, Meng Wang
Despite the generalization power of the meta-model, it remains elusive how adversarial robustness can be maintained by MAML in few-shot learning.
no code implementations • 19 Feb 2021 • Ning Liu, Geng Yuan, Zhengping Che, Xuan Shen, Xiaolong Ma, Qing Jin, Jian Ren, Jian Tang, Sijia Liu, Yanzhi Wang
In deep model compression, the recent finding of the "Lottery Ticket Hypothesis" (LTH) (Frankle & Carbin, 2018) pointed out that there could exist a winning ticket (i.e., a properly pruned sub-network together with the original weight initialization) that can achieve performance competitive with the original dense network.
no code implementations • 1 Feb 2021 • Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel
Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees.
no code implementations • ICLR 2021 • Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang
A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early-stopping of adversarial training (AT) could match the performance gains of most recent algorithmic improvements.
no code implementations • 1 Jan 2021 • Gaoyuan Zhang, Songtao Lu, Sijia Liu, Xiangyi Chen, Pin-Yu Chen, Lee Martie, Lior Horesh, Mingyi Hong
Current deep neural networks are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
no code implementations • ICLR 2021 • Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang
In view of those, we introduce two pruning options, namely top-down and bottom-up, for finding lifelong tickets.
no code implementations • NeurIPS 2021 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Moreover, as the algorithm for training a sparse neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of non-pruned model weights in the hidden layer.
no code implementations • 26 Dec 2020 • Pu Zhao, Wei Niu, Geng Yuan, Yuxuan Cai, Hsin-Hsuan Sung, Sijia Liu, Xipeng Shen, Bin Ren, Yanzhi Wang, Xue Lin
3D object detection is an important task, especially in the autonomous driving application domain.
1 code implementation • 22 Dec 2020 • Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das
Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy machine learning systems.
no code implementations • 21 Dec 2020 • Pranay Sharma, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Xue Lin, Pramod K. Varshney
In this work, we focus on the study of stochastic zeroth-order (ZO) optimization which does not require first-order gradient information and uses only function evaluations.
1 code implementation • CVPR 2021 • Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Michael Carbin, Zhangyang Wang
We extend the scope of LTH and ask whether matching subnetworks that enjoy the same downstream transfer performance still exist in pre-trained computer vision models.
no code implementations • CVPR 2021 • Zhengang Li, Geng Yuan, Wei Niu, Pu Zhao, Yanyu Li, Yuxuan Cai, Xuan Shen, Zheng Zhan, Zhenglun Kong, Qing Jin, Zhiyu Chen, Sijia Liu, Kaiyuan Yang, Bin Ren, Yanzhi Wang, Xue Lin
With the increasing demand to efficiently deploy DNNs on mobile edge devices, it becomes much more important to reduce unnecessary computation and increase the execution speed.
1 code implementation • NeurIPS 2020 • Tianlong Chen, Weiyi Zhang, Jingyang Zhou, Shiyu Chang, Sijia Liu, Lisa Amini, Zhangyang Wang
Learning to optimize (L2O) has gained increasing attention since classical optimizers require laborious problem-specific design and hyperparameter tuning.
no code implementations • NeurIPS 2020 • Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
We also provide a framework that generalizes the calculation for certification using higher-order information.
no code implementations • 4 Oct 2020 • Yang Jiao, Kai Yang, Shaoyu Dou, Pan Luo, Sijia Liu, Dongjin Song
To this end, we propose an autonomous representation learning approach for multivariate time series (TimeAutoML) with irregular sampling rates and variable lengths.
no code implementations • 29 Sep 2020 • Pu Zhao, Sijia Liu, Parikshit Ram, Songtao Lu, Yuguang Yao, Djallel Bouneffouf, Xue Lin
As novel contributions, we show that the use of LFT within MAML (i) offers the capability to tackle few-shot learning tasks by meta-learning across incongruous yet related problems and (ii) can efficiently work with first-order and derivative-free few-shot learning problems.
no code implementations • 15 Sep 2020 • Wei Niu, Zhenglun Kong, Geng Yuan, Weiwen Jiang, Jiexiong Guan, Caiwen Ding, Pu Zhao, Sijia Liu, Bin Ren, Yanzhi Wang
Our framework can guarantee the identified model to meet both resource and real-time specifications of mobile devices, thus achieving real-time execution of large transformer-based models like BERT variants.
1 code implementation • ECCV 2020 • Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, JinJun Xiong, Meng Wang
When the training data are maliciously tampered, the predictions of the acquired deep neural network (DNN) can be manipulated by an adversary known as the Trojan attack (or poisoning backdoor attack).
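A schematic numpy sketch of the kind of training-data tampering a Trojan (backdoor) attack performs; the patch location, trigger value, poisoning rate, and function name are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def poison(images, labels, target_label, rate=0.1, rng=None):
    """Illustrative Trojan-style poisoning: stamp a small white patch
    in the corner of a fraction of the images and relabel those images
    to the attacker's target class."""
    rng = np.random.default_rng(rng)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=max(1, int(rate * len(images))),
                     replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 trigger patch, bottom-right corner
    labels[idx] = target_label
    return images, labels, idx

imgs = np.zeros((50, 8, 8))
labs = np.zeros(50, dtype=int)
p_imgs, p_labs, idx = poison(imgs, labs, target_label=7, rate=0.1, rng=0)
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is the attack behavior the paper defends against.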
2 code implementations • NeurIPS 2020 • Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
For a range of downstream tasks, we indeed find matching subnetworks at 40% to 90% sparsity.
no code implementations • 20 Jul 2020 • Wei Niu, Mengshu Sun, Zhengang Li, Jou-An Chen, Jiexiong Guan, Xipeng Shen, Yanzhi Wang, Sijia Liu, Xue Lin, Bin Ren
The vanilla sparsity removes whole kernel groups, while KGS sparsity is a more fine-grained structured sparsity that offers higher flexibility while still exploiting full on-device parallelism.
1 code implementation • ICML 2020 • Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel
Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or interpretability is itself susceptible to adversarial attacks.
no code implementations • ICML 2020 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
1 code implementation • 25 Jun 2020 • Yi Wang, Jingyang Zhou, Tianlong Chen, Sijia Liu, Shiyu Chang, Chandrajit Bajaj, Zhangyang Wang
Contrary to the traditional adversarial patch, this new form of attack is mapped into the 3D object world and back-propagates to the 2D image domain through differentiable rendering.
no code implementations • 17 Jun 2020 • Parikshit Ram, Sijia Liu, Deepak Vijaykeerthi, Dakuo Wang, Djallel Bouneffouf, Greg Bramble, Horst Samulowitz, Alexander G. Gray
The CASH problem has been widely studied in the context of automated configurations of machine learning (ML) pipelines and various solvers and toolkits are available.
no code implementations • 11 Jun 2020 • Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney
Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning applications.
1 code implementation • CVPR 2020 • Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang
We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins (e.g., 3.83% on robust accuracy and 1.3% on standard accuracy on the CIFAR-10 dataset) compared with the conventional end-to-end adversarial training baseline.
no code implementations • 2 Mar 2020 • Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel
The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public.
no code implementations • 26 Feb 2020 • Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xue Lin
Although deep neural networks (DNNs) have achieved great success in various computer vision tasks, it has recently been found that they are vulnerable to adversarial attacks.
no code implementations • 25 Feb 2020 • Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, Xue Lin
To overcome these limitations, we propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and efficient manner.
no code implementations • 23 Jan 2020 • Zhengang Li, Yifan Gong, Xiaolong Ma, Sijia Liu, Mengshu Sun, Zheng Zhan, Zhenglun Kong, Geng Yuan, Yanzhi Wang
Structured weight pruning is a representative model compression technique of DNNs for hardware efficiency and inference acceleration.
no code implementations • ECCV 2020 • Xiaolong Ma, Wei Niu, Tianyun Zhang, Sijia Liu, Sheng Lin, Hongjia Li, Xiang Chen, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang
Weight pruning has been widely acknowledged as a straightforward and effective method to eliminate redundancy in Deep Neural Networks (DNN), thereby achieving acceleration on various platforms.
1 code implementation • 19 Dec 2019 • Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
Verifying robustness of neural networks given a specified threat model is a fundamental yet challenging task.
no code implementations • 24 Oct 2019 • Sunyang Fu, David Chen, Huan He, Sijia Liu, Sungrim Moon, Kevin J Peterson, Feichen Shen, Li-Wei Wang, Yanshan Wang, Andrew Wen, Yiqing Zhao, Sunghwan Sohn, Hongfang Liu
Background: Concept extraction, a subdomain of natural language processing (NLP) focused on extracting concepts of interest, has been adopted to computationally extract clinical information from text for applications ranging from clinical decision support to care quality improvement.
no code implementations • 22 Oct 2019 • Charu Aggarwal, Djallel Bouneffouf, Horst Samulowitz, Beat Buesser, Thanh Hoang, Udayan Khurana, Sijia Liu, Tejaswini Pedapati, Parikshit Ram, Ambrish Rawat, Martin Wistuba, Alexander Gray
Data science is labor-intensive and human experts are scarce but heavily involved in every aspect of it.
1 code implementation • ECCV 2020 • Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, Xue Lin
To the best of our knowledge, this is the first work that models the effect of deformation when designing physical adversarial examples with respect to non-rigid objects such as T-shirts.
no code implementations • ICML 2020 • Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, Kush R. Varshney
Moreover, the same classifier exhibits no trade-off with respect to ideal distributions while exhibiting a trade-off when accuracy is measured with respect to the given (possibly biased) dataset.
1 code implementation • NeurIPS 2019 • Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David Cox
In this paper, we propose a zeroth-order AdaMM (ZO-AdaMM) algorithm, that generalizes AdaMM to the gradient-free regime.
1 code implementation • 30 Sep 2019 • Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly
In this paper, we study the problem of constrained robust (min-max) optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.
no code implementations • 27 Sep 2019 • Fu-Ming Guo, Sijia Liu, Finlay S. Mungall, Xue Lin, Yanzhi Wang
Is it possible to compress these large-scale language representation models?
no code implementations • 25 Sep 2019 • Akhilan Boopathy, Lily Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel
We propose that many common certified defenses can be viewed under a unified framework of regularization.
no code implementations • 25 Sep 2019 • Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Shiyu Chang, Luca Daniel
Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and interpretability is itself susceptible to adversarial attacks.
no code implementations • 25 Sep 2019 • Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li
The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball-bounded input perturbations.
no code implementations • 25 Sep 2019 • Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das
Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems.
1 code implementation • ICLR 2020 • Minhao Cheng, Simranjit Singh, Patrick Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh
We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where limited model queries are allowed and only the decision is provided to a queried data input.
1 code implementation • ICCV 2019 • Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, Xue Lin
Robust machine learning is currently one of the most prominent topics which could potentially help shape a future of advanced AI platforms that perform well not only in average cases but also in worst cases or adverse situations.
1 code implementation • 10 Jun 2019 • Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin
Graph neural networks (GNNs), which apply deep neural networks to graph data, have achieved significant performance on the task of semi-supervised node classification.
1 code implementation • NeurIPS 2021 • Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li
In this paper, we show how a general framework of min-max optimization over multiple domains can be leveraged to advance the design of different types of adversarial attacks.
no code implementations • WS 2019 • Sijia Liu, Li-Wei Wang, Vipin Chaudhary, Hongfang Liu
Neural network models have shown promise in the temporal relation extraction task.
no code implementations • ICLR 2019 • Sijia Liu, Pin-Yu Chen, Xiangyi Chen, Mingyi Hong
Our study shows that ZO signSGD requires $\sqrt{d}$ times more iterations than signSGD, leading to a convergence rate of $O(\sqrt{d}/\sqrt{T})$ under mild conditions, where $d$ is the number of optimization variables, and $T$ is the number of iterations.
no code implementations • 1 May 2019 • Sijia Liu, Parikshit Ram, Deepak Vijaykeerthy, Djallel Bouneffouf, Gregory Bramble, Horst Samulowitz, Dakuo Wang, Andrew Conn, Alexander Gray
We study the AutoML problem of automatically configuring machine learning pipelines by jointly selecting algorithms and their appropriate hyper-parameters for all steps in supervised learning pipelines.
no code implementations • 3 Apr 2019 • Kaidi Xu, Sijia Liu, Gaoyuan Zhang, Mengshu Sun, Pu Zhao, Quanfu Fan, Chuang Gan, Xue Lin
It is widely known that convolutional neural networks (CNNs) are vulnerable to adversarial examples: images with imperceptible perturbations crafted to fool classifiers.
1 code implementation • 29 Mar 2019 • Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin
Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with initialization inherited from the large model, cannot achieve both adversarial robustness and high standard accuracy.
2 code implementations • 23 Mar 2019 • Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, Zhengang Li, Kaidi Xu, Wujie Wen, Sijia Liu, Jian Tang, Makan Fardad, Xue Lin, Yongpan Liu, Yanzhi Wang
A recent work developed a systematic framework for DNN weight pruning using the advanced optimization technique ADMM (Alternating Direction Method of Multipliers), achieving some of the state-of-the-art weight pruning results.
2 code implementations • 29 Nov 2018 • Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks.
no code implementations • 5 Nov 2018 • Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Jiaming Xie, Yun Liang, Sijia Liu, Xue Lin, Yanzhi Wang
Both DNN weight pruning and clustering/quantization, as well as their combinations, can be solved in a unified manner.
no code implementations • ICLR 2019 • Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, Yunfei Yang, Fuxun Yu, Jian Tang, Makan Fardad, Sijia Liu, Xiang Chen, Xue Lin, Yanzhi Wang
Motivated by dynamic programming, the proposed method reaches extremely high pruning rates by using partial prunings with moderate pruning rates.
no code implementations • 24 Sep 2018 • Pin-Yu Chen, Bhanukiran Vinzamuri, Sijia Liu
Many state-of-the-art machine learning models such as deep neural networks have recently been shown to be vulnerable to adversarial perturbations, especially in classification tasks.
no code implementations • ICLR 2019 • Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong
We prove that under our derived conditions, these methods can achieve the convergence rate of order $O(\log{T}/\sqrt{T})$ for nonconvex stochastic optimization.
1 code implementation • ICLR 2019 • Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin
When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example.
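For reference, the standard Lp norms of an additive perturbation can be computed directly (the perturbation vector here is a made-up example):

```python
import numpy as np

delta = np.array([0.0, 3.0, -4.0, 0.0])  # an additive perturbation

l0 = np.count_nonzero(delta)          # L0: number of changed entries
l1 = np.sum(np.abs(delta))            # L1: total absolute change
l2 = np.sqrt(np.sum(delta ** 2))      # L2: Euclidean magnitude
linf = np.max(np.abs(delta))          # L-infinity: largest single change
```

Each norm encodes a different notion of "small": L0 bounds how many pixels change, while L-infinity bounds how much any single pixel changes.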
no code implementations • 16 Jun 2018 • Hafiz Tiomoko Ali, Sijia Liu, Yasin Yilmaz, Romain Couillet, Indika Rajapakse, Alfred Hero
We propose a method for simultaneously detecting shared and unshared communities in heterogeneous multilayer weighted and undirected networks.
1 code implementation • 30 May 2018 • Chun-Chen Tu, Pai-Shun Ting, Pin-Yu Chen, Sijia Liu, huan zhang, Jin-Feng Yi, Cho-Jui Hsieh, Shin-Ming Cheng
Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.
1 code implementation • 30 May 2018 • Pin-Yu Chen, Lingfei Wu, Sijia Liu, Indika Rajapakse
The von Neumann graph entropy (VNGE) facilitates measurement of information divergence and distance between graphs in a graph sequence.
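A minimal sketch of the VNGE computed exactly (the survey-standard definition: the Shannon entropy of the eigenvalues of the trace-normalized graph Laplacian; the example graphs are illustrative):

```python
import numpy as np

def vn_graph_entropy(adj):
    """Von Neumann graph entropy: Shannon entropy of the eigenvalues of
    the trace-normalized graph Laplacian, viewed as a density matrix."""
    lap = np.diag(adj.sum(axis=1)) - adj
    eig = np.linalg.eigvalsh(lap / np.trace(lap))
    eig = eig[eig > 1e-12]               # convention: 0 * log 0 = 0
    return float(-np.sum(eig * np.log(eig)))

edge = np.array([[0.0, 1.0],
                 [1.0, 0.0]])            # single edge: entropy 0
triangle = np.ones((3, 3)) - np.eye(3)   # complete graph K3
h_edge, h_tri = vn_graph_entropy(edge), vn_graph_entropy(triangle)
```

The exact computation requires the full eigendecomposition, which is what motivates the fast approximation studied in the paper.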
1 code implementation • NeurIPS 2018 • Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Pai-Shun Ting, Shiyu Chang, Lisa Amini
As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance reduced and faster converging approaches is also intensifying.
no code implementations • 9 Apr 2018 • Pu Zhao, Sijia Liu, Yanzhi Wang, Xue Lin
In the literature, the added distortions are usually measured by the L0, L1, L2, and L-infinity norms, yielding L0, L1, L2, and L-infinity attacks, respectively.
2 code implementations • 1 Feb 2018 • Yanshan Wang, Sijia Liu, Naveed Afzal, Majid Rastegar-Mojarad, Li-Wei Wang, Feichen Shen, Paul Kingsbury, Hongfang Liu
First, word embeddings trained on clinical notes and biomedical publications better capture the semantics of medical terms, find more relevant similar medical terms, and are closer to human experts' judgments than those trained on Wikipedia and news.
no code implementations • 18 Dec 2017 • Farshad Harirchi, Doohyun Kim, Omar A. Khalil, Sijia Liu, Paolo Elvati, Angela Violi, Alfred O. Hero
In this paper, we introduce a novel approach for the identification of the influential reactions in chemical reaction networks for combustion applications, using a data-driven sparse-learning technique.
no code implementations • 12 Dec 2017 • Farshad Harirchi, Omar A. Khalil, Sijia Liu, Paolo Elvati, Angela Violi, Alfred O. Hero
In this paper, we propose an optimization-based sparse learning approach to identify the set of most influential reactions in a chemical reaction network.
no code implementations • 15 Nov 2017 • Tianpei Xie, Sijia Liu, Alfred O. Hero III
Consider a social network where only a few nodes (agents) have meaningful interactions in the sense that the conditional dependency graph over node attribute variables (behaviors) is sparse.
no code implementations • 21 Oct 2017 • Sijia Liu, Jie Chen, Pin-Yu Chen, Alfred O. Hero
In this paper, we design and analyze a new zeroth-order online algorithm, namely the zeroth-order online alternating direction method of multipliers (ZOO-ADMM), which enjoys the dual advantages of gradient-free operation and of employing ADMM to accommodate complex structured regularizers.
no code implementations • 18 Oct 2017 • Sijia Liu, Yanzhi Wang, Makan Fardad, Pramod K. Varshney
In addition to ADMM, implementation of a customized power iteration (PI) method for eigenvalue/eigenvector computation using memristor crossbars is discussed.
no code implementations • SEMEVAL 2017 • Sijia Liu, Feichen Shen, Vipin Chaudhary, Hongfang Liu
We explored semantic similarities and patterns of keyphrases in scientific publications using pre-trained word embedding models.
no code implementations • 2 Jun 2017 • Pin-Yu Chen, Sijia Liu
This paper presents a bias-variance tradeoff of graph Laplacian regularizer, which is widely used in graph signal processing and semi-supervised learning tasks.
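A minimal sketch of the graph Laplacian regularizer discussed here, x^T L x, which equals the sum over edges of w_ij (x_i - x_j)^2 and so penalizes signals that vary across edges (the path graph and signals are illustrative):

```python
import numpy as np

def laplacian_regularizer(adj, x):
    """Graph Laplacian regularizer x^T L x with L = D - A; it equals the
    sum over edges of w_ij * (x_i - x_j)^2, so small values mean the
    signal x is smooth over the graph."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return float(x @ lap @ x)

adj = np.array([[0.0, 1.0, 0.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])  # path graph 1-2-3
smooth = laplacian_regularizer(adj, np.array([1.0, 1.0, 1.0]))
rough = laplacian_regularizer(adj, np.array([1.0, -1.0, 1.0]))
```

A constant signal incurs zero penalty, while a sign-alternating one is penalized on every edge, which is the smoothness prior exploited in graph signal processing and semi-supervised learning.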
no code implementations • 18 Apr 2017 • Sijia Liu, Pin-Yu Chen, Alfred O. Hero
Our analysis reveals the connection between network topology design and the convergence rate of DDA, and provides quantitative evaluation of DDA acceleration for distributed optimization that is absent in the existing analysis.
no code implementations • 12 Sep 2016 • Sundeep Prabhakar Chepuri, Sijia Liu, Geert Leus, Alfred O. Hero III
Given the noisy data, we show that the joint sparse graph learning and denoising problem can be simplified to designing only the sparse edge selection vector, which can be solved using convex optimization.